hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
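The schema rows above list one pipe-delimited `column dtype` pair per field. A minimal sketch of parsing such a header into `(name, dtype)` pairs — the `header` string below is a short excerpt of the schema, not the full row:

```python
# Parse a pipe-delimited "name dtype" header into (name, dtype) pairs.
header = "hexsha string | size int64 | ext string | lang string | content string"

columns = []
for field in header.split("|"):
    field = field.strip()
    if not field:
        continue  # tolerate an empty trailing field from a terminal "|"
    name, dtype = field.rsplit(" ", 1)
    columns.append((name, dtype))

print(columns[0])  # ('hexsha', 'string')
```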
752bc94bdb813f33b60a4fa6280c58bed1b2d03d | 1,803 | py | Python | src/translation.py | Wlodarski/DR-Altimeter | b831a2ce4f05d89fcadba5bab623a39d6527264e | [
"MIT"
] | null | null | null | src/translation.py | Wlodarski/DR-Altimeter | b831a2ce4f05d89fcadba5bab623a39d6527264e | [
"MIT"
] | 6 | 2020-03-28T18:56:47.000Z | 2020-04-06T13:09:55.000Z | src/translation.py | Wlodarski/DR-Altimeter | b831a2ce4f05d89fcadba5bab623a39d6527264e | [
"MIT"
] | null | null | null | #! python3
"""
MIT License
Copyright (c) 2020 Walter Wlodarski
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
import gettext
from locale import getdefaultlocale
class Translation:
    __slots__ = ["gettext"]

    @staticmethod
    def temporary_(message):
        return message

    def __init__(self):
        self.gettext = self.temporary_

    def __call__(self, text):
        return self.gettext(text)

    def set_lang(self, clp):
        current_locale, encoding = getdefaultlocale()
        lang = clp.args.lang if clp.args.lang in clp.all_lang else ""
        chosen_lang = gettext.translation(
            "DR-Altimeter", localedir=clp.localedir, languages=[lang, current_locale], fallback=True,
        )
        chosen_lang.install()
        self.gettext = chosen_lang.gettext
        return self.gettext
| 34.673077 | 101 | 0.742651 | 250 | 1,803 | 5.272 | 0.52 | 0.066768 | 0.019727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003458 | 0.198003 | 1,803 | 51 | 102 | 35.352941 | 0.908022 | 0.600111 | 0 | 0 | 0 | 0 | 0.026648 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.1 | 0.1 | 0.55 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
752d3e4ee7475225bb08bd56270853593c9c7737 | 676 | py | Python | config/urls.py | stkrizh/otus-django-hasker | 9692b8060a789b0b66b4cf3591a78e32c8a10380 | [
"MIT"
] | null | null | null | config/urls.py | stkrizh/otus-django-hasker | 9692b8060a789b0b66b4cf3591a78e32c8a10380 | [
"MIT"
] | 10 | 2020-06-05T22:56:30.000Z | 2022-02-10T08:54:18.000Z | config/urls.py | stkrizh/otus-django-hasker | 9692b8060a789b0b66b4cf3591a78e32c8a10380 | [
"MIT"
] | null | null | null | from django.conf import settings
from django.conf.urls.static import static
from django.contrib import admin
from django.urls import include, path
from users.views import LogIn, LogOut, Settings, SignUp
urlpatterns = [
    path(f"{settings.ADMIN_URL}/", admin.site.urls),
    path("", include("questions.urls")),
    path("api/v1/", include("api.urls")),
    path("login", LogIn.as_view(), name="login"),
    path("logout", LogOut.as_view(), name="logout"),
    path("settings", Settings.as_view(), name="settings"),
    path("signup", SignUp.as_view(), name="signup"),
]

if settings.DEBUG:
    urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
| 32.190476 | 80 | 0.704142 | 91 | 676 | 5.142857 | 0.340659 | 0.08547 | 0.08547 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001701 | 0.130178 | 676 | 20 | 81 | 33.8 | 0.794218 | 0 | 0 | 0 | 0 | 0 | 0.147929 | 0.031065 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.3125 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
7539ce8921d3225bf80008bbe5aba7640173e66f | 554 | py | Python | research/datasets/tests/test_datasets.py | joaopfonseca/research | 02659512218d077d9ef28d481178e62172ef18cd | [
"MIT"
] | 1 | 2021-01-25T00:09:32.000Z | 2021-01-25T00:09:32.000Z | research/datasets/tests/test_datasets.py | joaopfonseca/research | 02659512218d077d9ef28d481178e62172ef18cd | [
"MIT"
] | null | null | null | research/datasets/tests/test_datasets.py | joaopfonseca/research | 02659512218d077d9ef28d481178e62172ef18cd | [
"MIT"
] | null | null | null | from urllib.request import urlopen
import multiprocessing.dummy as mp
from multiprocessing import cpu_count
import ssl
from .._base import FETCH_URLS
ssl._create_default_https_context = ssl._create_unverified_context
def test_urls():
    """Test whether URLs are working."""
    urls = [
        url
        for sublist in [[url] for url in list(FETCH_URLS.values()) if type(url) == str]
        for url in sublist
    ]
    p = mp.Pool(cpu_count())
    url_status = p.map(lambda url: (urlopen(url).status == 200), urls)
    assert all(url_status)
| 24.086957 | 87 | 0.693141 | 80 | 554 | 4.6125 | 0.525 | 0.073171 | 0.04336 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006849 | 0.209386 | 554 | 22 | 88 | 25.181818 | 0.835616 | 0.054152 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 1 | 0.066667 | false | 0 | 0.333333 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
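`test_urls` above fans the URL checks out over a thread pool — `multiprocessing.dummy` exposes the `Pool` API backed by threads, which suits I/O-bound work like HTTP requests. The same pattern without network access, using a stand-in predicate:

```python
import multiprocessing.dummy as mp
from multiprocessing import cpu_count

def check(n):
    # stand-in for `urlopen(url).status == 200`
    return n % 2 == 0

p = mp.Pool(cpu_count())
status = p.map(check, [2, 4, 6])
p.close()
p.join()
print(all(status))  # True
```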
753e238d11dcef3e0069b4e553a40395bf220dd3 | 421 | py | Python | python3/contains_duplicate_3.py | joshiaj7/CodingChallenges | f95dd79132f07c296e074d675819031912f6a943 | [
"MIT"
] | 1 | 2020-10-08T09:17:40.000Z | 2020-10-08T09:17:40.000Z | python3/contains_duplicate_3.py | joshiaj7/CodingChallenges | f95dd79132f07c296e074d675819031912f6a943 | [
"MIT"
] | null | null | null | python3/contains_duplicate_3.py | joshiaj7/CodingChallenges | f95dd79132f07c296e074d675819031912f6a943 | [
"MIT"
] | null | null | null | """
Space : O(1)
Time : O(n**2)
"""
from typing import List


class Solution:
    def containsNearbyAlmostDuplicate(self, nums: List[int], k: int, t: int) -> bool:
        if t == 0 and len(nums) == len(set(nums)):
            return False
        for i, cur_val in enumerate(nums):
            for j in range(i+1, min(i+k+1, len(nums))):
                if abs(cur_val - nums[j]) <= t:
                    return True
        return False
| 26.3125 | 85 | 0.503563 | 61 | 421 | 3.442623 | 0.57377 | 0.066667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018182 | 0.346793 | 421 | 15 | 86 | 28.066667 | 0.745455 | 0.07601 | 0 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
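The brute-force solution above scans each index's k-sized window for a value within t. The same logic as a plain function — class wrapper and type hints dropped for the sketch:

```python
def contains_nearby_almost_duplicate(nums, k, t):
    # Early exit: with t == 0 a qualifying pair requires an exact duplicate.
    if t == 0 and len(nums) == len(set(nums)):
        return False
    for i, cur_val in enumerate(nums):
        for j in range(i + 1, min(i + k + 1, len(nums))):
            if abs(cur_val - nums[j]) <= t:
                return True
    return False

print(contains_nearby_almost_duplicate([1, 2, 3, 1], 3, 0))        # True
print(contains_nearby_almost_duplicate([1, 5, 9, 1, 5, 9], 2, 3))  # False
```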
753f4b0a72fb6a598355d9cd10551930431e1d41 | 5,024 | py | Python | LIBRAY_MANAGEMENT/f_passwd.py | ShriyasnhAgarwl/Hacktoberfest | 5e8adf77a833f7b99dbddff92716e05641dac857 | [
"MIT"
] | null | null | null | LIBRAY_MANAGEMENT/f_passwd.py | ShriyasnhAgarwl/Hacktoberfest | 5e8adf77a833f7b99dbddff92716e05641dac857 | [
"MIT"
] | null | null | null | LIBRAY_MANAGEMENT/f_passwd.py | ShriyasnhAgarwl/Hacktoberfest | 5e8adf77a833f7b99dbddff92716e05641dac857 | [
"MIT"
] | null | null | null | from tkinter import *
from tkinter import ttk
from tkinter import messagebox
import re
import sqlite3
from sqlite3 import Error
#creating window
class Fp(Tk):
    def __init__(self):
        super().__init__()
        self.iconbitmap(r'libico.ico')
        self.maxsize(480, 320)
        self.title("Forget Password")
        self.canvas = Canvas(width=500, height=500, bg='black')
        self.canvas.pack()
        self.photo = PhotoImage(file='forgot.png')
        self.canvas.create_image(-20, -20, image=self.photo, anchor=NW)

        #creating variables
        a = StringVar()
        b = StringVar()
        c = StringVar()
        d = StringVar()
        e = StringVar()

        #verifying input
        def ins():
            if len(d.get()) < 8 or len(e.get()) < 8:
                flag = -1
            else:
                while True:
                    if not re.search("[a-z]", d.get()):
                        flag = -1
                        break
                    elif not re.search("[A-Z]", d.get()):
                        flag = -1
                        break
                    elif not re.search("[0-9]", d.get()):
                        flag = -1
                        break
                    elif not re.search("[_@$]", d.get()):
                        flag = -1
                        break
                    elif re.search(r"\s", d.get()):
                        flag = -1
                        break
                    else:
                        flag = 0
                        break
            if len(d.get()) == 0:
                messagebox.showinfo("Error", "Please Enter Your Password")
            elif flag == -1:
                messagebox.showinfo("Error", "Minimum 8 characters.\nThe alphabets must be between [a-z]\nAt least one alphabet should be of Upper Case [A-Z]\nAt least 1 number or digit between [0-9].\nAt least 1 character from [ _ or @ or $ ].")
            elif d.get() != e.get():
                messagebox.showinfo("Error", "New and retype password are not the same")
            else:
                try:
                    self.conn = sqlite3.connect('library_administration.db')
                    self.myCursor = self.conn.cursor()
                    self.myCursor.execute("Update admin set password = ? where id = ?", [e.get(), a.get()])
                    self.conn.commit()
                    self.myCursor.close()
                    self.conn.close()
                    messagebox.showinfo("Confirm", "Password Updated Successfully")
                    self.destroy()
                except Error:
                    messagebox.showerror("Error", "Something Goes Wrong")

        def check():
            if len(a.get()) < 5:
                messagebox.showinfo("Error", "Please Enter User Id")
            elif len(b.get()) == 0:
                messagebox.showinfo("Error", "Please Choose a question")
            elif len(c.get()) == 0:
                messagebox.showinfo("Error", "Please Enter an answer")
            else:
                try:
                    self.conn = sqlite3.connect('library_administration.db')
                    self.myCursor = self.conn.cursor()
                    self.myCursor.execute("Select id,secQuestion,secAnswer from admin where id = ?", [a.get()])
                    pc = self.myCursor.fetchone()
                    if not pc:
                        messagebox.showinfo("Error", "Something Wrong in the Details")
                    elif str(pc[0]) == a.get() or str(pc[1]) == b.get() or str(pc[2]) == c.get():
                        Label(self, text="New Password", font=('arial', 15, 'bold')).place(x=40, y=220)
                        Entry(self, show="*", textvariable=d, width=40).place(x=230, y=224)
                        Label(self, text="Retype Password", font=('arial', 15, 'bold')).place(x=40, y=270)
                        Entry(self, show="*", textvariable=e, width=40).place(x=230, y=274)
                        Button(self, text="Submit", width=15, command=ins).place(x=230, y=324)
                except Error:
                    messagebox.showerror("Error", "Something Goes Wrong")

        #label and input box
        Label(self, text="Enter User Id", bg='black', fg='white', font=('arial', 15, 'bold')).place(x=40, y=20)
        Label(self, text="Security Question", bg='black', fg='white', font=('arial', 15, 'bold')).place(x=40, y=70)
        Label(self, text="Security Answer", bg='black', fg='white', font=('arial', 15, 'bold')).place(x=40, y=120)
        Entry(self, textvariable=a, width=40).place(x=230, y=24)
        ttk.Combobox(self, textvariable=b, values=["What is your school name?", "What is your home name?", "What is your Father name?", "What is your pet name?"], width=37, state="readonly").place(x=230, y=74)
        Entry(self, show="*", textvariable=c, width=40).place(x=230, y=124)
        Button(self, text='Verify', width=15, command=check).place(x=275, y=170)


Fp().mainloop()
| 50.24 | 250 | 0.488256 | 577 | 5,024 | 4.22877 | 0.32409 | 0.029508 | 0.065984 | 0.02459 | 0.337295 | 0.317623 | 0.268852 | 0.237705 | 0.194262 | 0.156148 | 0 | 0.041074 | 0.370024 | 5,024 | 99 | 251 | 50.747475 | 0.729858 | 0.013336 | 0 | 0.266667 | 0 | 0.011111 | 0.195509 | 0.015245 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0.077778 | 0.055556 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
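`ins()` above enforces its password rules with a chain of `re.search` checks. The same rules as a standalone helper — the function name and the rule list are taken from the error message in `ins()`, not from the original file:

```python
import re

def is_strong(pw):
    if len(pw) < 8:
        return False
    required = ["[a-z]", "[A-Z]", "[0-9]", "[_@$]"]
    if any(re.search(p, pw) is None for p in required):
        return False
    return re.search(r"\s", pw) is None  # no whitespace allowed

print(is_strong("Secret_1"))  # True
print(is_strong("secret_1"))  # False: no upper-case letter
```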
7548e64006c12b84e95b327958f7a42ffbcf5097 | 2,658 | py | Python | code/syn.py | ron-rivest/audit-lab | b8f1478035db04afa2c5e5349b9666fd8df5976e | [
"MIT"
] | 3 | 2018-09-12T03:06:28.000Z | 2019-05-04T06:45:54.000Z | code/syn.py | YAXINLEI/audit-lab | b8f1478035db04afa2c5e5349b9666fd8df5976e | [
"MIT"
] | 11 | 2017-09-19T18:23:02.000Z | 2018-07-06T19:08:49.000Z | code/syn.py | YAXINLEI/audit-lab | b8f1478035db04afa2c5e5349b9666fd8df5976e | [
"MIT"
] | 23 | 2017-09-05T17:09:59.000Z | 2019-12-15T19:55:53.000Z | # syn.py
# Ronald L. Rivest (with Karim Husayn Karimi)
# August 3, 2017
# python3
"""
Routines to generate a synthetic test election dataset for OpenAuditTool.py.
Calls data generation routines in syn1.py for elections "of type 1",
and calls routines in syn2.py for elections "of type 2".
"""
import numpy as np
import cli_syn
import OpenAuditTool
class Syn_Params(object):
    """ An object we can hang synthesis generation parameters off of. """

    pass


##############################################################################
## random choices

def geospace(start, stop, num=7):
    """
    Return a list of up to num distinct integer values,
    from start, start+1, ..., stop, inclusive, geometrically spread out.

    A bit like numpy.linspace, but geometrically spread
    out rather than linearly spread out, and only integers returned.

    >>> geospace(1, 64)
    [1, 2, 4, 8, 16, 32, 64]

    >>> geospace(0,1)
    [0, 1]

    >>> geospace(0,10)
    [0, 1, 2, 3, 5, 7, 10]

    >>> geospace(20, 10000)
    [20, 56, 159, 447, 1260, 3550, 10000]

    Twelve-tone equal temperament scale
    >>> geospace(1000, 2000, num=13)
    [1000, 1059, 1122, 1189, 1260, 1335, 1414, 1498, 1587, 1682, 1782, 1888, 2000]

    This should presumably be replaced by numpy.logspace !
    (although duplicates need to be removed...)
    """

    answer = {start, stop}
    start = max(start, 1)
    for i in range(1, num-1):
        answer.add(int(np.rint(start*(stop/start)**(i/(num-1)))))
    return sorted(answer)


def geospace_choice(e, syn, start, stop, num=7):
    """
    Return a random element from geospace(start, stop, num),
    based on syn.RandomState.
    """

    elts = geospace(start, stop, num)
    return syn.RandomState.choice(elts)


def generate_segments(e, syn, low, high):
    """
    Return list of random segments (r, s) where low <= r < s <= high.
    Number of segments returned is (high-low).

    Since r<s, does not return segments of the form (k, k).

    Intent is that cids are integers in range low <= cid <= high,
    and each segment yields a contest group covering cids r..s (inclusive).

    The segments "nest" -- given any two segments, either they
    are disjoint, or they are equal, or one contains the other.
    """

    assert low <= high
    L = []
    if low != high:
        L.append((low, high))
        mid = syn.RandomState.choice(range(low, high))
        L.extend(generate_segments(e, syn, low, mid))
        L.extend(generate_segments(e, syn, mid+1, high))
    return L


if __name__ == "__main__":
    e = OpenAuditTool.Election()
    args = cli_syn.parse_args()
    cli_syn.dispatch(e, args)
| 26.316832 | 82 | 0.623401 | 382 | 2,658 | 4.293194 | 0.484293 | 0.032927 | 0.029268 | 0.036585 | 0.097561 | 0.057317 | 0 | 0 | 0 | 0 | 0 | 0.067317 | 0.228743 | 2,658 | 100 | 83 | 26.58 | 0.732683 | 0.57374 | 0 | 0 | 1 | 0 | 0.008929 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 1 | 0.111111 | false | 0.037037 | 0.111111 | 0 | 0.37037 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
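`geospace` above depends only on `numpy.rint`; a stdlib-only sketch of the same idea (using `round`, which agrees with `rint` on the doctest cases checked below) behaves identically:

```python
def geospace_stdlib(start, stop, num=7):
    # Geometrically spaced integers from start to stop, duplicates removed.
    answer = {start, stop}
    lo = max(start, 1)
    for i in range(1, num - 1):
        answer.add(int(round(lo * (stop / lo) ** (i / (num - 1)))))
    return sorted(answer)

print(geospace_stdlib(1, 64))  # [1, 2, 4, 8, 16, 32, 64]
print(geospace_stdlib(0, 1))   # [0, 1]
```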
754b6ed304a3a91854108f6ce34b4157cf24597b | 6,519 | py | Python | pycalc/MAVProxy/modules/lib/ANUGA/redfearn.py | joakimzhang/python-electron | 79bc174a14c5286ca739bb7d8ce6522fdc6e9e80 | [
"CC0-1.0"
] | null | null | null | pycalc/MAVProxy/modules/lib/ANUGA/redfearn.py | joakimzhang/python-electron | 79bc174a14c5286ca739bb7d8ce6522fdc6e9e80 | [
"CC0-1.0"
] | 8 | 2021-01-28T19:26:22.000Z | 2022-03-24T18:07:24.000Z | pycalc/MAVProxy/modules/lib/ANUGA/redfearn.py | joakimzhang/python-electron | 79bc174a14c5286ca739bb7d8ce6522fdc6e9e80 | [
"CC0-1.0"
] | null | null | null | """
This module adapted ANUGA
https://anuga.anu.edu.au/
------------
Implementation of Redfearn's formula to compute UTM projections from latitude and longitude
Based in part on spreadsheet
www.icsm.gov.au/gda/gdatm/redfearn.xls
downloaded from INTERGOVERNMENTAL COMMITTEE ON SURVEYING & MAPPING (ICSM)
http://www.icsm.gov.au/icsm/
"""
from geo_reference import Geo_reference, DEFAULT_ZONE
def degminsec2decimal_degrees(dd, mm, ss):
    assert abs(mm) == mm
    assert abs(ss) == ss

    if dd < 0:
        sign = -1
    else:
        sign = 1

    return sign * (abs(dd) + mm/60. + ss/3600.)


def decimal_degrees2degminsec(dec):
    if dec < 0:
        sign = -1
    else:
        sign = 1

    dec = abs(dec)
    dd = int(dec)
    f = dec-dd

    mm = int(f*60)
    ss = (f*60-mm)*60

    return sign*dd, mm, ss
def redfearn(lat, lon, false_easting=None, false_northing=None,
             zone=None, central_meridian=None, scale_factor=None):
    """Compute UTM projection using Redfearn's formula

    lat, lon is latitude and longitude in decimal degrees

    If false easting and northing are specified they will override
    the standard

    If zone is specified reproject lat and long to specified zone instead of
    standard zone

    If meridian is specified, reproject lat and lon to that instead of zone. In this case
    zone will be set to -1 to indicate non-UTM projection

    Note that zone and meridian cannot both be specified
    """

    from math import pi, sqrt, sin, cos, tan

    #GDA Specifications
    a = 6378137.0                       #Semi major axis
    inverse_flattening = 298.257222101  #1/f
    if scale_factor is None:
        K0 = 0.9996                     #Central scale factor
    else:
        K0 = scale_factor
    #print 'scale', K0
    zone_width = 6                      #Degrees

    longitude_of_central_meridian_zone0 = -183
    longitude_of_western_edge_zone0 = -186

    if false_easting is None:
        false_easting = 500000

    if false_northing is None:
        if lat < 0:
            false_northing = 10000000   #Southern hemisphere
        else:
            false_northing = 0          #Northern hemisphere

    #Derived constants
    f = 1.0/inverse_flattening
    b = a*(1-f)      #Semi minor axis
    e2 = 2*f - f*f   # = f*(2-f) = (a^2-b^2)/a^2   #Eccentricity
    e = sqrt(e2)
    e2_ = e2/(1-e2)  # = (a^2-b^2)/b^2             #Second eccentricity
    e_ = sqrt(e2_)
    e4 = e2*e2
    e6 = e2*e4

    #Foot point latitude
    n = (a-b)/(a+b)  #Same as e2 - why ?
    n2 = n*n
    n3 = n*n2
    n4 = n2*n2

    G = a*(1-n)*(1-n2)*(1+9*n2/4+225*n4/64)*pi/180

    phi = lat*pi/180 #Convert latitude to radians

    sinphi = sin(phi)
    sin2phi = sin(2*phi)
    sin4phi = sin(4*phi)
    sin6phi = sin(6*phi)

    cosphi = cos(phi)
    cosphi2 = cosphi*cosphi
    cosphi3 = cosphi*cosphi2
    cosphi4 = cosphi2*cosphi2
    cosphi5 = cosphi*cosphi4
    cosphi6 = cosphi2*cosphi4
    cosphi7 = cosphi*cosphi6
    cosphi8 = cosphi4*cosphi4

    t = tan(phi)
    t2 = t*t
    t4 = t2*t2
    t6 = t2*t4

    #Radius of Curvature
    rho = a*(1-e2)/(1-e2*sinphi*sinphi)**1.5
    nu = a/(1-e2*sinphi*sinphi)**0.5
    psi = nu/rho
    psi2 = psi*psi
    psi3 = psi*psi2
    psi4 = psi2*psi2

    #Meridian distance
    A0 = 1 - e2/4 - 3*e4/64 - 5*e6/256
    A2 = 3.0/8*(e2+e4/4+15*e6/128)
    A4 = 15.0/256*(e4+3*e6/4)
    A6 = 35*e6/3072

    term1 = a*A0*phi
    term2 = -a*A2*sin2phi
    term3 = a*A4*sin4phi
    term4 = -a*A6*sin6phi

    m = term1 + term2 + term3 + term4 #OK

    if zone is not None and central_meridian is not None:
        msg = 'You specified both zone and central_meridian. Provide only one of them'
        raise Exception(msg)

    # Zone
    if zone is None:
        zone = int((lon - longitude_of_western_edge_zone0)/zone_width)

    # Central meridian
    if central_meridian is None:
        central_meridian = zone*zone_width+longitude_of_central_meridian_zone0
    else:
        zone = -1

    omega = (lon-central_meridian)*pi/180 #Relative longitude (radians)
    omega2 = omega*omega
    omega3 = omega*omega2
    omega4 = omega2*omega2
    omega5 = omega*omega4
    omega6 = omega3*omega3
    omega7 = omega*omega6
    omega8 = omega4*omega4

    #Northing
    term1 = nu*sinphi*cosphi*omega2/2
    term2 = nu*sinphi*cosphi3*(4*psi2+psi-t2)*omega4/24
    term3 = nu*sinphi*cosphi5*\
            (8*psi4*(11-24*t2)-28*psi3*(1-6*t2)+\
             psi2*(1-32*t2)-psi*2*t2+t4-t2)*omega6/720
    term4 = nu*sinphi*cosphi7*(1385-3111*t2+543*t4-t6)*omega8/40320
    northing = false_northing + K0*(m + term1 + term2 + term3 + term4)

    #Easting
    term1 = nu*omega*cosphi
    term2 = nu*cosphi3*(psi-t2)*omega3/6
    term3 = nu*cosphi5*(4*psi3*(1-6*t2)+psi2*(1+8*t2)-2*psi*t2+t4)*omega5/120
    term4 = nu*cosphi7*(61-479*t2+179*t4-t6)*omega7/5040
    easting = false_easting + K0*(term1 + term2 + term3 + term4)

    return zone, easting, northing
def convert_from_latlon_to_utm(points=None,
                               latitudes=None,
                               longitudes=None,
                               false_easting=None,
                               false_northing=None):
    """Convert latitude and longitude data to UTM as a list of coordinates.

    Input

    points: list of points given in decimal degrees (latitude, longitude) or
    latitudes: list of latitudes and
    longitudes: list of longitudes
    false_easting (optional)
    false_northing (optional)

    Output

    points: List of converted points
    zone:   Common UTM zone for converted points

    Notes

    Assume the false_easting and false_northing are the same for each list.
    If points end up in different UTM zones, an ANUGAerror is thrown.
    """
    old_geo = Geo_reference()
    utm_points = []
    if points is None:
        assert len(latitudes) == len(longitudes)
        points = list(zip(latitudes, longitudes))

    for point in points:
        zone, easting, northing = redfearn(float(point[0]),
                                           float(point[1]),
                                           false_easting=false_easting,
                                           false_northing=false_northing)
        new_geo = Geo_reference(zone)
        old_geo.reconcile_zones(new_geo)
        utm_points.append([easting, northing])

    return utm_points, old_geo.get_zone()
| 26.717213 | 91 | 0.595644 | 907 | 6,519 | 4.200662 | 0.292172 | 0.031496 | 0.015748 | 0.015748 | 0.087139 | 0.032021 | 0 | 0 | 0 | 0 | 0 | 0.080855 | 0.303728 | 6,519 | 243 | 92 | 26.82716 | 0.758537 | 0.061513 | 0 | 0.067669 | 0 | 0 | 0.014941 | 0 | 0 | 0 | 0 | 0 | 0.022556 | 0 | null | null | 0 | 0.015038 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
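The zone arithmetic inside `redfearn` above is self-contained and easy to check in isolation. A sketch using the same zone-0 constants — the sample longitude is roughly Canberra's:

```python
ZONE_WIDTH = 6
LON_CENTRAL_MERIDIAN_ZONE0 = -183
LON_WESTERN_EDGE_ZONE0 = -186

def utm_zone(lon):
    # Same formula as redfearn(): zone from longitude in decimal degrees.
    return int((lon - LON_WESTERN_EDGE_ZONE0) / ZONE_WIDTH)

def central_meridian(zone):
    return zone * ZONE_WIDTH + LON_CENTRAL_MERIDIAN_ZONE0

print(utm_zone(149.1287))    # 55 (Canberra falls in UTM zone 55)
print(central_meridian(55))  # 147
```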
7551dd7a08f30ce3108b03fcbbf5b83d7533a27d | 908 | py | Python | core/migrations/0002_auto_20220209_1251.py | mazdakdev/video-compressor | 17e9f9f3f70f41b953a4c84ec6b1370f9faa97e3 | [
"MIT"
] | 3 | 2022-02-11T12:09:29.000Z | 2022-02-12T19:13:17.000Z | core/migrations/0002_auto_20220209_1251.py | mazdakdev/Video-compressor | 17e9f9f3f70f41b953a4c84ec6b1370f9faa97e3 | [
"MIT"
] | null | null | null | core/migrations/0002_auto_20220209_1251.py | mazdakdev/Video-compressor | 17e9f9f3f70f41b953a4c84ec6b1370f9faa97e3 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.3 on 2022-02-09 12:51
import django.core.validators
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('core', '0001_initial'),
    ]

    operations = [
        migrations.AlterField(
            model_name='video',
            name='percent',
            field=models.PositiveIntegerField(default=1, validators=[django.core.validators.MaxValueValidator(100), django.core.validators.MinValueValidator(1)]),
        ),
        migrations.AlterField(
            model_name='video',
            name='video_240',
            field=models.FileField(blank=True, null=True, upload_to='Videos/240p/%Y/%m/%d'),
        ),
        migrations.AlterField(
            model_name='video',
            name='video_360',
            field=models.FileField(blank=True, null=True, upload_to='Videos/360p/%Y/%m/%d'),
        ),
    ]
| 30.266667 | 162 | 0.60793 | 98 | 908 | 5.55102 | 0.510204 | 0.082721 | 0.110294 | 0.159926 | 0.415441 | 0.415441 | 0.345588 | 0.1875 | 0.1875 | 0.1875 | 0 | 0.053333 | 0.256608 | 908 | 29 | 163 | 31.310345 | 0.752593 | 0.049559 | 0 | 0.391304 | 1 | 0 | 0.111498 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.086957 | 0 | 0.217391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
75522ca8fbe7cd45f4d322e2008582dbf31ff13b | 533 | py | Python | setup_guide/translation.py | uktrade/invest | 15b84c511839b46e81608fca9762d2df3f6df16c | [
"MIT"
] | 1 | 2019-01-18T03:50:46.000Z | 2019-01-18T03:50:46.000Z | setup_guide/translation.py | uktrade/invest | 15b84c511839b46e81608fca9762d2df3f6df16c | [
"MIT"
] | 50 | 2018-01-24T18:04:08.000Z | 2019-01-03T03:30:30.000Z | setup_guide/translation.py | uktrade/invest | 15b84c511839b46e81608fca9762d2df3f6df16c | [
"MIT"
] | 2 | 2018-02-12T15:20:52.000Z | 2019-01-18T03:51:52.000Z | from .models import SetupGuidePage, SetupGuideLandingPage
from modeltranslation.translator import TranslationOptions
from modeltranslation.decorators import register
@register(SetupGuidePage)
class SetupGuidePageTranslation(TranslationOptions):
    fields = (
        'description',
        'heading',
        'sub_heading',
        'subsections',
    )


@register(SetupGuideLandingPage)
class SetupGuideLandingPageTranslation(TranslationOptions):
    fields = (
        'heading',
        'sub_heading',
        'lead_in',
    )
| 23.173913 | 59 | 0.712946 | 37 | 533 | 10.189189 | 0.540541 | 0.106101 | 0.090186 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206379 | 533 | 22 | 60 | 24.227273 | 0.891253 | 0 | 0 | 0.333333 | 0 | 0 | 0.121951 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.388889 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7558d464af2c0903cb1ca1c124d0a67287c52fa3 | 13,195 | py | Python | stacker/deploy.py | elliottgorrell/stacker | 6a8f2518dd04606255107937820bf3c249efd839 | [
"MIT"
] | null | null | null | stacker/deploy.py | elliottgorrell/stacker | 6a8f2518dd04606255107937820bf3c249efd839 | [
"MIT"
] | null | null | null | stacker/deploy.py | elliottgorrell/stacker | 6a8f2518dd04606255107937820bf3c249efd839 | [
"MIT"
] | null | null | null | __author__ = "Steve Mactaggart && Elliott Gorrell"
import base64
import json
import re
import sys
import traceback
from datetime import datetime
import boto3
import botocore
import yaml
from cf_helper import secure_print
from cf_helper.utils import DeployException, CloudFormationUtil, STSUtil
class DeployExecutor(object):
REGEX_YAML = re.compile('.+\.yaml|.+.yml')
REGEX_JSON = re.compile('.+\.json')
cf_client = None
ec2_client = None
kms_client = None
role = None
def execute(self, stack_name, template_name, config_filename=None,
role=None, add_parameters=None, version=None, ami_id=None, ami_tag_value=None,
scope=None, create=False, delete=False, dry_run=False,
debug=False):
self.role = role
try:
config_params = dict()
if config_filename is not None:
if debug:
print "Resolving config file {} using scope {}".format(config_filename, scope)
config_params = self.load_parameters(config_filename, scope)
# First override any of the defaults with those supplied at the command line
if add_parameters is None or len(add_parameters) == 0:
adds = {}
else:
adds = dict(item.split("=") for item in add_parameters)
config_params.update(adds)
self.create_boto_clients()
if version:
config_params["VersionParam"] = version
else:
version = datetime.now().isoformat('-').replace(":", "-")
if ami_id:
config_params["AMIParam"] = ami_id
elif ami_tag_value:
config_params["AMIParam"] = self.get_ami_id_by_tag(ami_tag_value)
secrets = []
for key in config_params:
# Check that config file doesn't have scopes (Parameters for more than one Cloudformation file)
if type(config_params[key]) is dict:
raise DeployException("Objects were found with nested values, you will need to specify which set of parameters to use with \"--scope <object_name>\"".format(key))
# Check if the value contains KMS encrypted value (KMSEncrypted /KMSEncrypted tag pair)
# if true, decrypt and replace the value
encryption_check = re.search('KMSEncrypted(.*)/KMSEncrypted', str(config_params[key]))
if encryption_check:
decrypted_value = self.kms_client.decrypt(CiphertextBlob=base64.b64decode(encryption_check.group(1)))["Plaintext"]
config_params[key] = decrypted_value
secrets += [decrypted_value]
cloudformation = self.load_cloudformation(template_name)
raw_cloudformation = str(cloudformation)
# Go through parameters needed and fill them in from the parameters provided in the config file
# They need to be re-formated from the python dictionary into boto3 useable format
parameters = self.import_params_from_config(cloudformation, config_params, create, secrets)
print "Using stack parameters"
# This hides any encrypted values that were decrypted with KMS
print secure_print(json.dumps(parameters, indent=2), secrets)
if create:
change_set_name = "Create-{}".format(version.replace(".", "-"))
changeset = self.get_change_set(stack_name, raw_cloudformation, parameters, change_set_name, create)
elif delete:
if not dry_run:
result = self.cf_client.delete_stack(StackName=stack_name)
print result
else:
print "[Dry-Run] Not deleting stack."
else:
change_set_name = "Update-{}".format(version.replace(".", "-"))
changeset = self.get_change_set(stack_name, raw_cloudformation, parameters, change_set_name)
if changeset is not None:
self.cf_client.wait_for_change_set_to_complete(change_set_name=change_set_name,
stack_name=stack_name,
debug=False)
change_set_details = self.cf_client.describe_change_set(ChangeSetName=change_set_name,
StackName=stack_name)
self.print_change_set(change_set_details)
if dry_run:
response = self.cf_client.delete_change_set(ChangeSetName=change_set_name,
StackName=stack_name)
else:
response = self.cf_client.execute_change_set(ChangeSetName=change_set_name,
StackName=stack_name)
self.cf_client.wait_for_deploy_to_complete(stack_name=stack_name)
except botocore.exceptions.ClientError as e:
if str(e) == "An error occurred (ValidationError) when calling the UpdateStack operation: No updates are to be performed.":
print "No stack update required - CONTINUING"
else:
print "Unexpected error: %s" % e
sys.exit(1)
except DeployException as error:
print "ERROR: {0}".format(error)
sys.exit(1)
except Exception as error:
traceback.print_exc(file=sys.stdout)
traceback.print_stack(file=sys.stdout)
print "ERROR: {0}".format(error)
sys.exit(1)
def create_boto_clients(self):
if self.ec2_client is None:
self.ec2_client = self._boto_connect('ec2')
if self.cf_client is None:
cf_client = self._boto_connect('cloudformation')
self.cf_client = CloudFormationUtil(cf_client)
if self.kms_client is None:
self.kms_client = self._boto_connect('kms')
def _boto_connect(self, client_type):
if self.role:
sts = STSUtil(sts_arn=self.role, debug=True)
credentials = sts.authenticate_role()['Credentials']
client = boto3.client(client_type,
aws_access_key_id=credentials['AccessKeyId'],
aws_secret_access_key=credentials['SecretAccessKey'],
aws_session_token=credentials['SessionToken'])
# If no role is specified the current environments will be used
else:
client = boto3.client(client_type)
return client
def load_parameters(self, config_filename, scope=None):
try:
with open(config_filename) as config_file:
if re.match(self.REGEX_YAML, config_filename):
config_data = yaml.safe_load(config_file)
elif re.match(self.REGEX_JSON, config_filename):
config_data = json.load(config_file)
else:
raise DeployException("Config must be a YAML or JSON file")
if scope is not None:
if scope not in config_data:
raise DeployException("Cannot find scope '{}' within the '{}' configuration file."
.format(scope, config_filename))
parameters = config_data[scope]
else:
parameters = config_data
return parameters
except Exception as error:
raise DeployException("Unable to open config file '{}'\n{}".format(config_filename, error))
def get_ami_id_by_tag(self, ami_tag_value):
images = self.ec2_client.describe_images(Filters=[
{'Name': 'tag:ArtifactID',
'Values': [ami_tag_value]}
])['Images']
if len(images) == 0:
raise DeployException("No images found for search '{}'".format(ami_tag_value))
elif len(images) > 1:
print images
raise DeployException("More than 1 image found for search '{}'".format(ami_tag_value))
else:
ami_id = images[0]["ImageId"]
print "Located AMI {} - {} created {}".format(ami_id, images[0]['Name'], images[0]['CreationDate'])
return ami_id
def import_params_from_config(self, cloudformation, config_params, create, secrets = []):
parameters = []
if 'Parameters' in cloudformation:
for param, values in cloudformation['Parameters'].items():
if param in config_params:
parameters += [{
"ParameterKey": param,
"ParameterValue": config_params[param]
}]
else:
# If this is first deployment of stack we try and use a default provided in the CF
if create:
defaultValue = values.get('Default')
if defaultValue is not None:
parameters += [{
"ParameterKey": param,
"ParameterValue": defaultValue
}]
# There is no default and there is no previous value to use so throw an error
else:
raise DeployException("Cannot CREATE new stack with missing parameter {}".format(param))
# This is an update of existing stack so use current value
else:
print "Using current stack value for parameter {}".format(param)
parameters += [{
"ParameterKey": param,
"UsePreviousValue": True
}]
return parameters
else:
print "Specified template has no stack parameters"
def get_change_set(self, stack_name, cloudformation, parameters, change_set_name, create = False):
if create:
changeset = self.cf_client.create_change_set(
StackName=stack_name,
TemplateBody=cloudformation,
Parameters=parameters,
Capabilities=[
'CAPABILITY_IAM',
],
ChangeSetName=change_set_name,
ChangeSetType="CREATE"
)
else:
changeset = self.cf_client.create_change_set(
StackName=stack_name,
TemplateBody=cloudformation,
Parameters=parameters,
Capabilities=[
'CAPABILITY_IAM',
],
ChangeSetName=change_set_name,
)
return changeset
def print_change_set(self, change_set_details):
if len(change_set_details['Changes']) > 0:
print "-------------------------------"
print "CloudFormation changes to apply"
print "-------------------------------"
for x in change_set_details['Changes']:
change = x["ResourceChange"]
if change["Action"] == "Add":
replace_mode = "New resource"
elif change["Action"] == "Modify":
replace_mode = change["Replacement"]
if replace_mode == "False":
replace_mode = "Update in place"
elif replace_mode == "True":
replace_mode = "Full replacement"
elif replace_mode == "Conditional":
replace_mode = "Conditionally replace"
else:
replace_mode = "Delete resource"
change_mode = "[{} - {}]".format(change["Action"], replace_mode)
print "{} {}/{} ({})".format(change_mode.ljust(34), change["LogicalResourceId"],
change.get("PhysicalResourceId", ""), change["ResourceType"])
print ""
else:
print "No CloudFormation changes detected"
def load_cloudformation(self, template_name):
try:
with open(template_name, "r") as myfile:
if re.match(self.REGEX_YAML, template_name):
cloudformation = yaml.safe_load(myfile)
elif re.match(self.REGEX_JSON, template_name):
cloudformation = json.load(myfile)
else:
raise DeployException("Cloudformation template must be a JSON or YAML file")
except Exception as error:
raise DeployException("Unable to open CloudFormation template '{}'\n{}".format(template_name, error))
if cloudformation is None:
raise DeployException("It looks like the CloudFormation template file is empty")
return cloudformation
| 42.702265 | 182 | 0.551648 | 1,317 | 13,195 | 5.33713 | 0.215642 | 0.035851 | 0.022194 | 0.018495 | 0.18822 | 0.147674 | 0.127614 | 0.108835 | 0.108835 | 0.08593 | 0 | 0.003935 | 0.36438 | 13,195 | 308 | 183 | 42.840909 | 0.834148 | 0.061008 | 0 | 0.272727 | 0 | 0 | 0.134432 | 0.007352 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.053719 | null | null | 0.103306 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f3336022271959da6f3feefe377eea05c0033d68 | 2,981 | py | Python | src/pages/mainPage.py | mvoitko/Habrahabr-tests | 0b909178ba09b31dbd02c73f8b34f191746a27c2 | [
"MIT"
] | null | null | null | src/pages/mainPage.py | mvoitko/Habrahabr-tests | 0b909178ba09b31dbd02c73f8b34f191746a27c2 | [
"MIT"
] | null | null | null | src/pages/mainPage.py | mvoitko/Habrahabr-tests | 0b909178ba09b31dbd02c73f8b34f191746a27c2 | [
"MIT"
] | null | null | null | """
Created on Oct 28, 2016
@author: mvoitko
"""
import re
import time
import locale
from datetime import datetime
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from src import config
from src.utils import helper
from src.pages.basePage import BasePage
from src.locators.mainLocators import MainLocators
class MainPage(BasePage):
"""
Main Habrahabr page representation.
Class for UI actions related to this page
"""
url = config.base_url + 'interesting'
locators_dictionary = MainLocators.locators_dictionary
def search(self, query):
"""
Search given query.
:param query: str - text to search
:return: MainPage: selenium.webdriver.*
"""
self.click_on('search button')
self.fill('search field', query)
self.find('search field').send_keys(Keys.ENTER)
return MainPage(self.driver)
def get_search_results(self):
"""
Get search results.
:return: results: list of selenium.webdriver.remote.webelement.WebElement
"""
return self.find_elems('post')
def sort_by(self, sorting_param):
"""
Sort search results page by given sorting parameter.
:param sorting_param: str - sort by parameter
:return: MainPage: selenium.webdriver.*
"""
# old_post = self.driver.find_element(*MainLocators.locators_dictionary['POST TITLE'])
sorting_param = "sort by " + sorting_param
self.click_on(sorting_param)
# WebDriverWait(self.driver, self.timeout).until(EC.staleness_of(old_post))
return MainPage(self.driver)
def get_posts_timestamps(self):
"""
Get posts timestamps.
:return: timestamps: list of datetime objects of posts.
"""
time.sleep(1)
timestamps = []
timestamp_elements = self.find_elems('post timestamp')
for timestamp in timestamp_elements:
if re.match(helper.pattern_today, timestamp.text, re.IGNORECASE):
date_object = helper.parse_today(timestamp.text)
elif re.match(helper.pattern_yesterday, timestamp.text, re.IGNORECASE):
date_object = helper.parse_yesterday(timestamp.text)
elif re.match(helper.pattern_current_year, timestamp.text, re.IGNORECASE):
date_object = helper.parse_current_year(timestamp.text)
elif re.match(helper.pattern_full, timestamp.text):
date_object = helper.parse_full(timestamp.text)
else:
raise NoSuchElementException(
"Cannot find POST TIMESTAMP locator on the {} page".format(type(self).__name__))
timestamps.append(date_object)
return timestamps | 35.488095 | 94 | 0.667226 | 345 | 2,981 | 5.646377 | 0.327536 | 0.053388 | 0.026694 | 0.041068 | 0.191478 | 0.191478 | 0.160678 | 0.070842 | 0 | 0 | 0 | 0.003557 | 0.245555 | 2,981 | 84 | 95 | 35.488095 | 0.862606 | 0.242536 | 0 | 0.044444 | 0 | 0 | 0.05953 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088889 | false | 0 | 0.288889 | 0 | 0.533333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
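The chain of `re.match` checks in `get_posts_timestamps` dispatches each timestamp string to a format-specific parser. The same dispatch idea can be sketched standalone (the patterns below are illustrative stand-ins, not Habrahabr's actual formats or the real `helper` functions):

```python
import re
from datetime import datetime

def parse_timestamp(text, today=datetime(2016, 10, 28)):
    """Try each known pattern in turn; parse with the first one that matches."""
    m = re.match(r"today at (\d{1,2}):(\d{2})$", text, re.IGNORECASE)
    if m:
        hour, minute = int(m.group(1)), int(m.group(2))
        return today.replace(hour=hour, minute=minute)
    m = re.match(r"(\d{4})-(\d{2})-(\d{2})$", text)
    if m:
        return datetime(*(int(g) for g in m.groups()))
    raise ValueError("no pattern matched %r" % text)
```

Falling through every pattern raises, just as the page object raises `NoSuchElementException` when no timestamp format matches.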
f335630c98f006cde2b540a21572bf9a60f94dc4 | 2,595 | py | Python | mainsite/migrations/0001_initial.py | MuratovER/IMO | 90efd087917159dc5b8aab3c8946003496e54418 | [
"MIT"
] | null | null | null | mainsite/migrations/0001_initial.py | MuratovER/IMO | 90efd087917159dc5b8aab3c8946003496e54418 | [
"MIT"
] | null | null | null | mainsite/migrations/0001_initial.py | MuratovER/IMO | 90efd087917159dc5b8aab3c8946003496e54418 | [
"MIT"
] | 1 | 2022-03-31T21:19:14.000Z | 2022-03-31T21:19:14.000Z | # Generated by Django 4.0.3 on 2022-05-07 11:10
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Speciality',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(max_length=100)),
('description', models.TextField()),
('key', models.CharField(max_length=10)),
('price', models.IntegerField(blank=True, null=True)),
('score', models.IntegerField(blank=True, null=True)),
],
),
migrations.CreateModel(
name='Profile',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=100)),
('gender', models.CharField(blank=True, choices=[('Male', 'M'), ('Female', 'F'), ('None of this', 'N')], max_length=12, null=True)),
('birthdate', models.DateTimeField(blank=True, null=True)),
('country', models.CharField(max_length=100)),
('city', models.CharField(max_length=100)),
('citizenship', models.CharField(blank=True, max_length=100, null=True)),
('email', models.EmailField(max_length=150)),
('phone', models.CharField(max_length=100)),
('user', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='Post',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(max_length=100)),
('short_description', models.CharField(blank=True, max_length=200, null=True)),
('text', models.TextField()),
('created_date', models.DateTimeField(default=django.utils.timezone.now)),
('published_date', models.DateTimeField(blank=True, null=True)),
('author', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
]
| 45.526316 | 148 | 0.589595 | 266 | 2,595 | 5.631579 | 0.349624 | 0.066088 | 0.084112 | 0.11215 | 0.497997 | 0.425901 | 0.287049 | 0.287049 | 0.287049 | 0.287049 | 0 | 0.024059 | 0.263198 | 2,595 | 56 | 149 | 46.339286 | 0.759414 | 0.017341 | 0 | 0.346939 | 1 | 0 | 0.078493 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.081633 | 0 | 0.163265 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f339a807e3723d74dbde7cf96ca0b74587cac911 | 285 | py | Python | day01/ex04/test.py | d-r-e/Machine-Learning-Bootcamp | 618cad97c04d15fec6e8a371c526ad8e08cae35a | [
"MIT"
] | null | null | null | day01/ex04/test.py | d-r-e/Machine-Learning-Bootcamp | 618cad97c04d15fec6e8a371c526ad8e08cae35a | [
"MIT"
] | 6 | 2021-05-25T08:51:39.000Z | 2021-05-25T08:51:40.000Z | day01/ex04/test.py | d-r-e/Python-Bootcamp-42AI | 618cad97c04d15fec6e8a371c526ad8e08cae35a | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
from eval import Evaluator
def main():
words = ["Le", "Lorem", "Ipsum", "est", "simple"]
coefs = [1, 2, 1, 4, 0.5]
print(Evaluator.zip_evaluate(coefs, words))
print(Evaluator.enumerate_evaluate(coefs, words))
if __name__ == '__main__':
main()
| 23.75 | 51 | 0.642105 | 39 | 285 | 4.435897 | 0.717949 | 0.16185 | 0.208092 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029787 | 0.175439 | 285 | 12 | 52 | 23.75 | 0.706383 | 0.073684 | 0 | 0 | 0 | 0 | 0.114625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.25 | 0.25 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f340cb596921e5050e8fd921f0453c58f3a6bf7a | 607 | py | Python | config/config_example1_singleton.py | orest-d/design-patterns-finance | 5878912dfa5b34925b00c38da978e7b9e4735a14 | [
"CC0-1.0"
] | null | null | null | config/config_example1_singleton.py | orest-d/design-patterns-finance | 5878912dfa5b34925b00c38da978e7b9e4735a14 | [
"CC0-1.0"
] | null | null | null | config/config_example1_singleton.py | orest-d/design-patterns-finance | 5878912dfa5b34925b00c38da978e7b9e4735a14 | [
"CC0-1.0"
] | null | null | null | class Config:
__instance = None
@staticmethod
def get_instance():
""" Static access method. """
if Config.__instance is None:
Config()
return Config.__instance
def __init__(self):
if Config.__instance is not None:
raise Exception("This class can't be instantiated directly, use Config.get_instance() instead")
else:
Config.__instance = self
self.db_driver = "sqlite"
self.sqlite_file = "database.sqlite"
s = Config()
print (s)
s = Config.get_instance()
print (s, id(s))
s = Config.get_instance()
print (s, id(s)) | 22.481481 | 92 | 0.594728 | 71 | 607 | 4.816901 | 0.464789 | 0.204678 | 0.157895 | 0.116959 | 0.160819 | 0.160819 | 0.160819 | 0.160819 | 0.160819 | 0 | 0 | 0 | 0.286656 | 607 | 27 | 93 | 22.481481 | 0.789838 | 0.034596 | 0 | 0.2 | 0 | 0 | 0.141623 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0 | 0 | 0.25 | 0.15 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f34de76ae9039725d601fb7aab02966a635e24da | 2,071 | py | Python | Control/AFGRoperations.py | RNolioSC/TrabalhoFormais | 1668fe2cd21eb2a5546e5d04381e5927637aab4c | [
"MIT"
] | null | null | null | Control/AFGRoperations.py | RNolioSC/TrabalhoFormais | 1668fe2cd21eb2a5546e5d04381e5927637aab4c | [
"MIT"
] | null | null | null | Control/AFGRoperations.py | RNolioSC/TrabalhoFormais | 1668fe2cd21eb2a5546e5d04381e5927637aab4c | [
"MIT"
] | null | null | null | class AFGR:
@staticmethod
def af_to_gr(dict_af):
dict_swap = {}
for keys in dict_af:
dict_swap[keys] = []
for list in dict_af[keys]:
dict_swap[keys].insert(len(dict_swap[keys]), list[0])
dict_swap[keys].insert(len(dict_swap[keys]), list[1])
return dict_swap
@staticmethod
def gr_to_af(dict_gr, estados_aceitacao):
dict_swap = {}
# Add the final state
dict_swap['F'] = []
estados_aceitacao.insert(len(estados_aceitacao), 'F')
for keys in dict_gr:
dict_swap[keys] = []
qtd_elementos = len(dict_gr[keys])
contador = 0
while contador < qtd_elementos:
if contador+1 < len(dict_gr[keys]):
if dict_gr[keys][contador+1].istitle():
dict_swap[keys].insert(len(dict_swap[keys]), [dict_gr[keys][contador], dict_gr[keys][contador+1]])
contador += 2
else:
if AFGR.verifica_estado_final(dict_swap, dict_gr, keys, contador):
dict_swap[keys].insert(len(dict_swap[keys]), [dict_gr[keys][contador], 'F'])
contador += 1
else:
if AFGR.verifica_estado_final(dict_swap, dict_gr, keys, contador):
dict_swap[keys].insert(len(dict_swap[keys]), [dict_gr[keys][contador], 'F'])
contador += 1
# If the last element is a non-terminal (NOT FINISHED: the S is repeated)
AFGR.verifica_estado_final(dict_swap, dict_gr, keys, contador-2)
return dict_swap
# Checks the final states; a better approach is needed in the future
@staticmethod
def verifica_estado_final(dict_swap, dict_gr, keys, contador):
for estados in dict_swap[keys]:
if estados[0] == dict_gr[keys][contador] and estados[1] != 'F':
estados[1] = 'F'
return False
return True
| 38.351852 | 122 | 0.551424 | 254 | 2,071 | 4.275591 | 0.228346 | 0.162063 | 0.143646 | 0.18232 | 0.450276 | 0.415285 | 0.415285 | 0.415285 | 0.415285 | 0.305709 | 0 | 0.009552 | 0.34283 | 2,071 | 53 | 123 | 39.075472 | 0.788391 | 0.08112 | 0 | 0.414634 | 0 | 0 | 0.00316 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073171 | false | 0 | 0 | 0 | 0.195122 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f350b79d3a3986015ca3232bfcbeaee0d5fcd69f | 6,501 | py | Python | syft_proto/execution/v1/protocol_pb2.py | karlhigley/syft-proto | f7c926ce26f7551b7ab430fcab9c204819845396 | [
"Apache-2.0"
] | null | null | null | syft_proto/execution/v1/protocol_pb2.py | karlhigley/syft-proto | f7c926ce26f7551b7ab430fcab9c204819845396 | [
"Apache-2.0"
] | null | null | null | syft_proto/execution/v1/protocol_pb2.py | karlhigley/syft-proto | f7c926ce26f7551b7ab430fcab9c204819845396 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: syft_proto/execution/v1/protocol.proto
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from syft_proto.types.syft.v1 import id_pb2 as syft__proto_dot_types_dot_syft_dot_v1_dot_id__pb2
from syft_proto.execution.v1 import role_pb2 as syft__proto_dot_execution_dot_v1_dot_role__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='syft_proto/execution/v1/protocol.proto',
package='syft_proto.execution.v1',
syntax='proto3',
serialized_options=b'\n$org.openmined.syftproto.execution.v1',
serialized_pb=b'\n&syft_proto/execution/v1/protocol.proto\x12\x17syft_proto.execution.v1\x1a!syft_proto/types/syft/v1/id.proto\x1a\"syft_proto/execution/v1/role.proto\"\x9f\x02\n\x08Protocol\x12,\n\x02id\x18\x01 \x01(\x0b\x32\x1c.syft_proto.types.syft.v1.IdR\x02id\x12\x12\n\x04name\x18\x02 \x01(\tR\x04name\x12\x42\n\x05roles\x18\x03 \x03(\x0b\x32,.syft_proto.execution.v1.Protocol.RolesEntryR\x05roles\x12\x12\n\x04tags\x18\x04 \x03(\tR\x04tags\x12 \n\x0b\x64\x65scription\x18\x05 \x01(\tR\x0b\x64\x65scription\x1aW\n\nRolesEntry\x12\x10\n\x03key\x18\x01 \x01(\tR\x03key\x12\x33\n\x05value\x18\x02 \x01(\x0b\x32\x1d.syft_proto.execution.v1.RoleR\x05value:\x02\x38\x01\x42&\n$org.openmined.syftproto.execution.v1b\x06proto3'
,
dependencies=[syft__proto_dot_types_dot_syft_dot_v1_dot_id__pb2.DESCRIPTOR,syft__proto_dot_execution_dot_v1_dot_role__pb2.DESCRIPTOR,])
_PROTOCOL_ROLESENTRY = _descriptor.Descriptor(
name='RolesEntry',
full_name='syft_proto.execution.v1.Protocol.RolesEntry',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='key', full_name='syft_proto.execution.v1.Protocol.RolesEntry.key', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, json_name='key', file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='value', full_name='syft_proto.execution.v1.Protocol.RolesEntry.value', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, json_name='value', file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=b'8\001',
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=339,
serialized_end=426,
)
_PROTOCOL = _descriptor.Descriptor(
name='Protocol',
full_name='syft_proto.execution.v1.Protocol',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='id', full_name='syft_proto.execution.v1.Protocol.id', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, json_name='id', file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='name', full_name='syft_proto.execution.v1.Protocol.name', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, json_name='name', file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='roles', full_name='syft_proto.execution.v1.Protocol.roles', index=2,
number=3, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, json_name='roles', file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='tags', full_name='syft_proto.execution.v1.Protocol.tags', index=3,
number=4, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, json_name='tags', file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='description', full_name='syft_proto.execution.v1.Protocol.description', index=4,
number=5, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, json_name='description', file=DESCRIPTOR),
],
extensions=[
],
nested_types=[_PROTOCOL_ROLESENTRY, ],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=139,
serialized_end=426,
)
_PROTOCOL_ROLESENTRY.fields_by_name['value'].message_type = syft__proto_dot_execution_dot_v1_dot_role__pb2._ROLE
_PROTOCOL_ROLESENTRY.containing_type = _PROTOCOL
_PROTOCOL.fields_by_name['id'].message_type = syft__proto_dot_types_dot_syft_dot_v1_dot_id__pb2._ID
_PROTOCOL.fields_by_name['roles'].message_type = _PROTOCOL_ROLESENTRY
DESCRIPTOR.message_types_by_name['Protocol'] = _PROTOCOL
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
Protocol = _reflection.GeneratedProtocolMessageType('Protocol', (_message.Message,), {
'RolesEntry' : _reflection.GeneratedProtocolMessageType('RolesEntry', (_message.Message,), {
'DESCRIPTOR' : _PROTOCOL_ROLESENTRY,
'__module__' : 'syft_proto.execution.v1.protocol_pb2'
# @@protoc_insertion_point(class_scope:syft_proto.execution.v1.Protocol.RolesEntry)
})
,
'DESCRIPTOR' : _PROTOCOL,
'__module__' : 'syft_proto.execution.v1.protocol_pb2'
# @@protoc_insertion_point(class_scope:syft_proto.execution.v1.Protocol)
})
_sym_db.RegisterMessage(Protocol)
_sym_db.RegisterMessage(Protocol.RolesEntry)
DESCRIPTOR._options = None
_PROTOCOL_ROLESENTRY._options = None
# @@protoc_insertion_point(module_scope)
| 43.05298 | 727 | 0.76219 | 886 | 6,501 | 5.27088 | 0.156885 | 0.057816 | 0.075375 | 0.089936 | 0.625054 | 0.501927 | 0.471734 | 0.425482 | 0.395931 | 0.388223 | 0 | 0.041732 | 0.111675 | 6,501 | 150 | 728 | 43.34 | 0.766926 | 0.053992 | 0 | 0.496063 | 1 | 0.007874 | 0.236731 | 0.197655 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.047244 | 0 | 0.047244 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f35ab5301440aa325869937b34d4d4a986a14ae0 | 1,367 | py | Python | 016 3Sum Closest.py | ChiFire/legend_LeetCode | 93fe97fef7e929fdbdc25fbb53955d44e14ecff8 | [
"MIT"
] | 872 | 2015-06-15T12:02:41.000Z | 2022-03-30T08:44:35.000Z | 016 3Sum Closest.py | ChiFire/legend_LeetCode | 93fe97fef7e929fdbdc25fbb53955d44e14ecff8 | [
"MIT"
] | 8 | 2015-06-21T15:11:59.000Z | 2022-02-01T11:22:34.000Z | 016 3Sum Closest.py | ChiFire/legend_LeetCode | 93fe97fef7e929fdbdc25fbb53955d44e14ecff8 | [
"MIT"
] | 328 | 2015-06-28T03:10:35.000Z | 2022-03-29T11:05:28.000Z | """
Given an array S of n integers, find three integers in S such that the sum is closest to a given number, target. Return
the sum of the three integers. You may assume that each input would have exactly one solution.
For example, given array S = {-1 2 1 -4}, and target = 1.
The sum that is closest to the target is 2. (-1 + 2 + 1 = 2).
"""
__author__ = 'Danyang'
class Solution:
def threeSumClosest(self, num, target):
"""
Three pointers scanning algorithm
Similar to 014 3Sum
:param num: array
:param target: target
:return: sum of the three digits
"""
min_distance = 1<<32
num.sort()
min_summation = 0
for i, val in enumerate(num):
j = i+1
k = len(num)-1
while j<k:
lst = [val, num[j], num[k]]
if min_distance>abs(target-sum(lst)):
min_summation = sum(lst)
if sum(lst)==target:
return min_summation
min_distance = abs(target-min_summation)
elif sum(lst)>target:
k -= 1
else:
j += 1
return min_summation
if __name__=="__main__":
print Solution().threeSumClosest([1, 1, 1, 1], 0)
| 31.790698 | 120 | 0.512802 | 174 | 1,367 | 3.913793 | 0.41954 | 0.088106 | 0.032305 | 0.038179 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032847 | 0.398683 | 1,367 | 42 | 121 | 32.547619 | 0.79562 | 0 | 0 | 0.086957 | 0 | 0 | 0.019011 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f35b1b6906f039410938832141f022b5a187af5f | 4,253 | py | Python | SENN/models.py | EdwardGuen/SENN-revisited | 41145a89214c6d978eb7c83e74c0f43007e0be4d | [
"MIT"
] | null | null | null | SENN/models.py | EdwardGuen/SENN-revisited | 41145a89214c6d978eb7c83e74c0f43007e0be4d | [
"MIT"
] | null | null | null | SENN/models.py | EdwardGuen/SENN-revisited | 41145a89214c6d978eb7c83e74c0f43007e0be4d | [
"MIT"
] | null | null | null | # torch
import torch.nn as nn
class Senn(nn.Module):
"""Self-Explaining Neural Network (SENN)
Args:
conceptizer: conceptizer architecture
parametrizer: parametrizer architecture
aggregator: aggregator architecture
Inputs:
x: image (b, n_channels, h, w)
Returns:
pred: vector of class probabilities (b, n_classes)
concepts: concept vector (b, n_concepts)
relevances: vector of concept relevances (b, n_concepts, n_classes)
x_reconstructed: reconstructed image (b, n_channels, h, w)
"""
def __init__(self, conceptizer, parametrizer, aggregator):
super(Senn, self).__init__()
self.conceptizer = conceptizer
self.parametrizer = parametrizer
self.aggregator = aggregator
def forward(self, x):
concepts, x_reconstructed = self.conceptizer(x)
relevances = self.parametrizer(x)
pred = self.aggregator(concepts, relevances)
return pred, (concepts, relevances), x_reconstructed
class VAESenn(nn.Module):
"""VAESenn
Args:
conceptizer: conceptizer architecture
parametrizer: parametrizer architecture
aggregator: aggregator architecture
Inputs:
x: image (b, n_channels, h, w)
Returns:
pred: vector of class probabilities (b, n_classes)
concepts: concept vector (b, n_concepts)
relevances: vector of concept relevances (b, n_concepts, n_classes)
x_reconstructed: reconstructed image (b, n_channels, h, w)
log_var: log variance of concepts posteriors
"""
def __init__(self, conceptizer, parametrizer, aggregator):
super(VAESenn, self).__init__()
self.conceptizer = conceptizer
self.parametrizer = parametrizer
self.aggregator = aggregator
def forward(self, x):
concepts, mean, log_var, x_recon = self.conceptizer(x)
relevances = self.parametrizer(x)
pred = self.aggregator(concepts, relevances)
return pred, (concepts, relevances), x_recon, log_var, mean
class GaussSiamSenn(nn.Module):
"""VSiamSENN
!! Naming not consistent with report
Args:
conceptizer: conceptizer architecture
parametrizer: parametrizer architecture
aggregator: aggregator architecture
Inputs:
x: image (b, n_channels, h, w)
Returns:
pred: vector of class probabilities (b, n_classes)
concepts: concept vector (b, n_concepts)
relevances: vector of concept relevances (b, n_concepts, n_classes)
"""
def __init__(self, conceptizer, parametrizer, aggregator):
super(GaussSiamSenn, self).__init__()
self.conceptizer = conceptizer
self.parametrizer = parametrizer
self.aggregator = aggregator
def forward(self, x, x_eq = None, x_diff = None):
if self.conceptizer.training:
concepts, (L1, L2, KL) = self.conceptizer.forward_training(x, x_eq, x_diff)
else:
concepts = self.conceptizer(x)
relevances = self.parametrizer(x)
pred = self.aggregator(concepts, relevances)
if self.conceptizer.training:
return pred, (concepts, relevances), (L1, L2, KL)
else:
return pred, (concepts, relevances)
class InvarSennM(nn.Module):
"""InvarSENN
Args:
m1: m1 architecture
m2: m2 architecture
Inputs:
x: image (b, n_channels, h, w)
Returns:
pred: vector of class probabilities (b, n_classes)
e1: concept vector (b, n_concepts)
relevances: vector of concept relevances (b, n_concepts, n_classes)
e2: noise vector (b, n_concepts)
x_reconstructed: reconstructed input image
e1_reconstructed: reconstructed concept vector
e2_reconstructed: reconstructed noise vector
"""
def __init__(self, m1, m2):
super(InvarSennM, self).__init__()
self.m1 = m1
self.m2 = m2
def forward(self, x):
pred, (e1, relevances), e2, x_reconstructed = self.m1(x)
e1_reconstructed, e2_reconstructed = self.m2(e1, e2)
return pred, (e1, relevances), e2, x_reconstructed, (e1_reconstructed, e2_reconstructed)
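All of the wrappers above finish with the same aggregation step: class scores are obtained by weighting each concept activation by its per-class relevance. A minimal stdlib-only sketch of that contraction for a single sample, assuming a plain linear-sum aggregator (the actual `aggregator` module is defined elsewhere and may differ):

```python
def aggregate(concepts, relevances):
    """Sum over concepts of the concept value times its per-class relevance.

    concepts:   list of length n_concepts (one sample)
    relevances: n_concepts x n_classes nested list (one sample)
    Returns a list of n_classes scores.
    """
    n_classes = len(relevances[0])
    scores = [0.0] * n_classes
    for h, row in zip(concepts, relevances):
        for k, theta in enumerate(row):
            scores[k] += h * theta
    return scores


print(aggregate([1.0, 2.0], [[0.5, -1.0], [0.25, 0.5]]))  # [1.0, 0.0]
```

In the batched PyTorch version this is a per-sample matrix-vector product between `relevances` of shape `(b, n_concepts, n_classes)` and `concepts` of shape `(b, n_concepts)`.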
# File: backend/benefit/applications/tests/factories.py (City-of-Helsinki/kesaseteli, MIT)
import decimal
import itertools
import random
from datetime import date, timedelta
import factory
from applications.enums import ApplicationStatus, ApplicationStep, BenefitType
from applications.models import (
    AhjoDecision,
    Application,
    APPLICATION_LANGUAGE_CHOICES,
    ApplicationBasis,
    ApplicationBatch,
    DeMinimisAid,
    Employee,
)
from calculator.models import Calculation
from companies.tests.factories import CompanyFactory
from users.tests.factories import HandlerFactory
class DeMinimisAidFactory(factory.django.DjangoModelFactory):
    granter = factory.Faker("sentence", nb_words=2)

    # delay evaluation of date_start and date_end so that any freeze_time takes effect
    granted_at = factory.Faker(
        "date_between_dates",
        date_start=factory.LazyAttribute(
            lambda _: date.today() - timedelta(days=365 * 2)
        ),
        date_end=factory.LazyAttribute(lambda _: date.today()),
    )
    amount = factory.Faker("pyint", min_value=1, max_value=100000)
    ordering = factory.Iterator(itertools.count(0))

    class Meta:
        model = DeMinimisAid


class ApplicationBasisFactory(factory.django.DjangoModelFactory):
    identifier = factory.Sequence(
        lambda id: f"basis_identifier_{id}"
    )  # ensure it is unique

    class Meta:
        model = ApplicationBasis


class ApplicationFactory(factory.django.DjangoModelFactory):
    company = factory.SubFactory(CompanyFactory)
    employee = factory.RelatedFactory(
        "applications.tests.factories.EmployeeFactory",
        factory_related_name="application",
    )
    company_name = factory.Faker("sentence", nb_words=2)
    company_form = factory.Faker("sentence", nb_words=1)
    company_department = factory.Faker("street_address")
    official_company_street_address = factory.Faker("street_address")
    official_company_city = factory.Faker("city")
    official_company_postcode = factory.Faker("postcode")
    use_alternative_address = factory.Faker("boolean")
    alternative_company_street_address = factory.Faker("street_address")
    alternative_company_city = factory.Faker("city")
    alternative_company_postcode = factory.Faker("postcode", locale="fi_FI")
    company_bank_account_number = factory.Faker("iban", locale="fi_FI")
    company_contact_person_phone_number = factory.Sequence(
        lambda n: f"050-10000{n}"
    )  # max. length in validation seems to be 10 digits
    company_contact_person_email = factory.Faker("email")
    company_contact_person_first_name = factory.Faker("first_name")
    company_contact_person_last_name = factory.Faker("last_name")
    association_has_business_activities = None
    applicant_language = factory.Faker(
        "random_element", elements=[v[0] for v in APPLICATION_LANGUAGE_CHOICES]
    )
    co_operation_negotiations = factory.Faker("boolean")
    co_operation_negotiations_description = factory.LazyAttribute(
        lambda o: factory.Faker("sentence") if o.co_operation_negotiations else ""
    )
    pay_subsidy_granted = False
    pay_subsidy_percent = None
    additional_pay_subsidy_percent = None
    apprenticeship_program = factory.Faker("boolean")
    archived = factory.Faker("boolean")
    application_step = ApplicationStep.STEP_1
    benefit_type = BenefitType.EMPLOYMENT_BENEFIT
    start_date = factory.Faker(
        "date_between_dates",
        date_start=date(date.today().year, 1, 1),
        date_end=date.today() + timedelta(days=100),
    )
    end_date = factory.LazyAttribute(
        lambda o: o.start_date + timedelta(days=random.randint(31, 364))
    )
    de_minimis_aid = True
    status = ApplicationStatus.DRAFT

    @factory.post_generation
    def bases(self, created, extracted, **kwargs):
        if basis_count := kwargs.pop("basis_count", random.randint(1, 5)):
            for bt in ApplicationBasisFactory.create_batch(basis_count, **kwargs):
                self.bases.add(bt)

    de_minimis_1 = factory.RelatedFactory(
        DeMinimisAidFactory,
        factory_related_name="application",
    )
    de_minimis_2 = factory.RelatedFactory(
        DeMinimisAidFactory,
        factory_related_name="application",
    )

    class Meta:
        model = Application


class ReceivedApplicationFactory(ApplicationFactory):
    status = ApplicationStatus.RECEIVED
    applicant_terms_approval = factory.RelatedFactory(
        "terms.tests.factories.ApplicantTermsApprovalFactory",
        factory_related_name="application",
    )
    calculation = factory.RelatedFactory(
        "calculator.tests.factories.CalculationFactory",
        factory_related_name="application",
    )

    @factory.post_generation
    def calculation(self, created, extracted, **kwargs):
        self.calculation = Calculation.objects.create_for_application(self)
        self.calculation.calculated_benefit_amount = decimal.Decimal("321.00")
        self.calculation.save()


class HandlingApplicationFactory(ReceivedApplicationFactory):
    status = ApplicationStatus.HANDLING

    @factory.post_generation
    def calculation(self, created, extracted, **kwargs):
        self.calculation = Calculation.objects.create_for_application(self)
        self.calculation.calculated_benefit_amount = decimal.Decimal("123.00")
        self.calculation.handler = HandlerFactory()
        self.calculation.save()


class DecidedApplicationFactory(HandlingApplicationFactory):
    status = ApplicationStatus.ACCEPTED


class EmployeeFactory(factory.django.DjangoModelFactory):
    # pass employee=None to prevent ApplicationFactory from creating another employee
    application = factory.SubFactory(ApplicationFactory, employee=None)
    first_name = factory.Faker("first_name")
    last_name = factory.Faker("last_name")
    social_security_number = factory.Faker("ssn", locale="fi_FI")
    phone_number = factory.Sequence(lambda n: f"050-10000{n}")
    email = factory.Faker("email")
    employee_language = factory.Faker(
        "random_element", elements=[v[0] for v in APPLICATION_LANGUAGE_CHOICES]
    )
    job_title = factory.Faker("job")
    monthly_pay = factory.Faker("random_int", max=5000)
    vacation_money = factory.Faker("random_int", max=5000)
    other_expenses = factory.Faker("random_int", max=5000)
    working_hours = factory.Faker("random_int", min=18, max=40)
    is_living_in_helsinki = factory.Faker("boolean")
    collective_bargaining_agreement = factory.Faker("words")

    class Meta:
        model = Employee


class ApplicationBatchFactory(factory.django.DjangoModelFactory):
    proposal_for_decision = AhjoDecision.DECIDED_ACCEPTED
    application_1 = factory.RelatedFactory(
        DecidedApplicationFactory,
        factory_related_name="batch",
        status=factory.SelfAttribute("batch.proposal_for_decision"),
    )
    application_2 = factory.RelatedFactory(
        DecidedApplicationFactory,
        factory_related_name="batch",
        status=factory.SelfAttribute("batch.proposal_for_decision"),
    )
    decision_maker_title = factory.Faker("sentence", nb_words=2)
    decision_maker_name = factory.Faker("name")
    section_of_the_law = factory.Faker("word")
    decision_date = factory.Faker(
        "date_between_dates",
        date_start=factory.LazyAttribute(lambda _: date.today() - timedelta(days=30)),
        date_end=factory.LazyAttribute(lambda _: date.today()),
    )
    expert_inspector_name = factory.Faker("name")
    expert_inspector_email = factory.Faker("email")

    class Meta:
        model = ApplicationBatch
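Several fields above rely on monotonically increasing counters (`factory.Sequence(lambda n: ...)`, `factory.Iterator(itertools.count(0))`) to keep generated values unique across instances. The mechanism can be sketched with the stdlib alone; the `make_phone` helper below is illustrative, not part of factory_boy:

```python
import itertools

counter = itertools.count(0)


def make_phone():
    # Mirrors factory.Sequence(lambda n: f"050-10000{n}"): each call draws
    # the next integer, so generated numbers never collide within a run.
    return f"050-10000{next(counter)}"


phones = [make_phone() for _ in range(3)]
print(phones)  # ['050-100000', '050-100001', '050-100002']
```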
# File: homework(december)/decemberAssigment1/random1.py (tkanicka/python_learning, Unlicense)
import random
class Play:
    def __init__(self, name="Player"):
        self.name = name

    def print_name(self):
        print("your name is ", self.name)

    def TossDie(self, x=1):
        for i in range(x):
            print(random.randint(1, 6))

    def RPC(self, x=1):
        for i in range(x):
            options = ["rock", "paper", "scissors"]
            print(random.choice(options))


player1 = Play("Andula")
player1.print_name()
player1.RPC(3)
player1.TossDie(2)
# File: app/requests.py (Mash14/personal-blog, MIT)
import urllib.request, json
from .models import Quote


def get_quotes():
    get_quotes_url = 'http://quotes.stormconsultancy.co.uk/random.json'
    with urllib.request.urlopen(get_quotes_url) as url:
        get_quotes_data = url.read()
        get_quotes_response = json.loads(get_quotes_data)
    print(get_quotes_data)
    return get_quotes_response
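`get_quotes` returns whatever `json.loads` yields for the endpoint's payload. The parsing step can be exercised without the network; the payload below is a hypothetical stand-in for `url.read()`, and its field names are assumptions rather than the documented stormconsultancy response schema:

```python
import json

# Hypothetical payload standing in for url.read(); field names are assumed.
get_quotes_data = b'{"quote": "Talk is cheap.", "author": "Linus Torvalds"}'
get_quotes_response = json.loads(get_quotes_data)
print(get_quotes_response["author"])  # Linus Torvalds
```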
# File: misc/phyler_classify.py (hurwitzlab/LSA-pipeline, MIT)
#!/usr/bin/env python
import sys, getopt
import glob, os


# sample the first 10**7 reads
def get_fasta(fp, fo):
    f = open(fp)
    g = open(fo, 'w')
    lastlinechar = ''
    writenext = False
    read_count = 0
    for line in f:
        if (line[0] == '@') and (lastlinechar != '+'):
            g.write('>' + line[1:])
            writenext = True
            read_count += 1
        elif writenext:
            g.write(line)
            writenext = False
        lastlinechar = line[0]
        if read_count >= 10**7:
            break
    f.close()
    g.close()
    return read_count


help_message = 'usage example: python read_phyler.py -r 1 -i /project/home/original_reads/ -o /project/home/phyler/'

if __name__ == "__main__":
    try:
        opts, args = getopt.getopt(sys.argv[1:], 'hr:i:o:', ["inputdir="])
    except:
        print help_message
        sys.exit(2)
    for opt, arg in opts:
        if opt in ('-h', '--help'):
            print help_message
            sys.exit()
        elif opt in ('-r', "--filerank"):
            fr = int(arg) - 1
        elif opt in ('-i', '--inputdir'):
            inputdir = arg
            if inputdir[-1] != '/':
                inputdir += '/'
        elif opt in ('-o', '--outputdir'):
            outputdir = arg
            if outputdir[-1] != '/':
                outputdir += '/'
    fr = str(fr) + '/'
    os.system('mkdir ' + outputdir + fr)
    FP = glob.glob(os.path.join(inputdir + fr, '*.fastq'))
    read_count = 0
    for fp in FP:
        fileprefix = fp[fp.rfind('/') + 1:fp.index('.fastq')]
        fasta_file = outputdir + fr + fileprefix + '.fasta'
        read_count += get_fasta(fp, fasta_file)
    os.system('cat %s*.fasta > %sall.fa' % (outputdir + fr, outputdir + fr))
    os.system('rm ' + outputdir + fr + '*.fasta')
    os.system('touch ' + outputdir + fr + 'all.count.' + str(read_count))
    os.system('blastall -p blastn -W15 -a1 -e0.01 -m8 -b1 -i %s -d /seq/msctmp/bcleary/src/MetaPhylerV1.25/markers/markers.dna > %s' % (outputdir + fr + 'all.fa', outputdir + fr + 'all.phyler.blastn'))
    os.system('rm ' + outputdir + fr + 'all.fa')
    os.system('/seq/msctmp/bcleary/src/MetaPhylerV1.25/metaphylerClassify /seq/msctmp/bcleary/src/MetaPhylerV1.25/markers/markers.blastn.classifier /seq/msctmp/bcleary/src/MetaPhylerV1.25/markers/markers.taxonomy %s > %s' % (outputdir + fr + 'all.phyler.blastn', outputdir + fr + 'all.phyler.blastn.classification'))
    os.system('rm ' + outputdir + fr + 'all.phyler.blastn')
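The `@`-header test in `get_fasta` (skipping `@` lines that directly follow a `+` separator, since FASTQ quality strings may themselves start with `@`) can be exercised in memory. A small sketch of the same state machine, with a hypothetical read:

```python
import io


def fastq_to_fasta(lines):
    out, lastchar, writenext = [], '', False
    for line in lines:
        if line[0] == '@' and lastchar != '+':
            out.append('>' + line[1:])  # header line: '@' becomes '>'
            writenext = True
        elif writenext:
            out.append(line)            # the sequence line right after a header
            writenext = False
        lastchar = line[0]
    return out


fastq = "@read1\nACGT\n+\n@;:!\n"      # quality line starts with '@' on purpose
print(fastq_to_fasta(io.StringIO(fastq).readlines()))
```

Note that only the header and sequence lines survive; the `+` separator and quality line are dropped, which is exactly what FASTA needs.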
# File: aws/sagemaker/kmeans/daal_kmeans_docker/container/kmeans/Kmeans.py (ravi9/csp, MIT)
#!/usr/bin/python
# -*- coding: utf-8 -*-
import daal.algorithms.kmeans.init
from daal.algorithms import kmeans
from daal.data_management import InputDataArchive, OutputDataArchive
from daal.data_management import Compressor_Zlib, Decompressor_Zlib, \
level9, DecompressionStream, CompressionStream, HomogenNumericTable
from daal.data_management import BlockDescriptor, HomogenNumericTable, BlockDescriptor_Float32, readOnly, readWrite
#from utils import printNumericTable
import numpy as np
from numpy import float32, float64, int32
import warnings
class Kmeans:
'''
....Constructor to set Kmeans compute parameters
....'''
def __init__(
self,
nClusters,
maxIterations=300,
initialCentroidMethod='defaultDense',
method='defaultDense',
oversamplingFactor=0.5,
nRounds=5,
accuracyThreshold=0.0001,
gamma=1.0,
distanceType='euclidean',
assignFlag=True,
dtype=float64,
):
"""\n\t\tnClusters: default: None\n\t\t\tnumber of centroids to compute\n\t\tmaxIterations: default: 300\n\t\t\tmaximum number of iterations \n\t\tinitialCentroidMethod: default: \xe2\x80\x99defaultDense' \n\t\t\tInitial centroid assignment method. Refer here for other available methods\n\t\t method: default: 'defaultDense'\n\t\t\tfinal centroid computation mode. Refer here for other available methods\t \n\t\toversamplingFactor: default: 0.5\n\t\t\tapplicable only if initialCentroidMethod is \xe2\x80\x98parallelPlusDense\xe2\x80\x99, \xe2\x80\x98parallelPlusCSR\xe2\x80\x99\n\t\t\tA fraction of nClusters in each of nRounds of parallel K-Means++.\n\t\t\tL=nClusters*oversamplingFactor points are sampled in a round\n\t\tnRounds: default: 5\n\t\t\tapplicable only if initialCentroidMethod is \xe2\x80\x98parallelPlusDense\xe2\x80\x99, \xe2\x80\x98parallelPlusCSR\xe2\x80\x99\n\t\t\tThe number of rounds for parallel K-Means++. (L*nRounds) must be greater than nClusters.\n\t\taccuracyThreshold: default: 0.0001\n\t\t\tThe threshold for termination of the algorithm.\n\t\tgamma: default:1.0\n\t\t\tThe weight to be used in distance calculation for binary categorical features.\n\t\tdistanceType: default: 'euclidean'\n\t\t\tThe measure of closeness between points being clustered.\n\t\tassignFlag: default: True\n\t\t\tFlag that enables cluster assignments for clustered data points.\n\t\t"""
self.nClusters = nClusters
self.initialCentroidMethod = initialCentroidMethod
self.oversamplingFactor = oversamplingFactor
self.nRounds = nRounds
self.method = method
self.maxIterations = maxIterations
self.accuracyThreshold = accuracyThreshold
self.gamma = gamma
self.distanceType = distanceType
self.assignFlag = assignFlag
self.dtype = dtype
def compute(self, data):
if self.method == 'lloydCSR':
self.method = kmeans.lloydCSR
elif self.method == 'defaultDense':
self.method = kmeans.lloydDense
if self.initialCentroidMethod == 'defaultDense':
initMethod = kmeans.init.deterministicDense
elif self.initialCentroidMethod == 'deterministicCSR':
initMethod = kmeans.init.deterministicCSR
elif self.initialCentroidMethod == 'randomDense':
initMethod = kmeans.init.randomDense
elif self.initialCentroidMethod == 'randomCSR':
initMethod = kmeans.init.randomCSR
elif self.initialCentroidMethod == 'plusPlusDense':
initMethod = kmeans.init.plusPlusDense
elif self.initialCentroidMethod == 'plusPlusCSR':
initMethod = kmeans.init.plusPlusCSR
elif self.initialCentroidMethod == 'parallelPlusDense':
initMethod = kmeans.init.parallelPlusDense
elif self.initialCentroidMethod == 'parallelPlusCSR ':
initMethod = kmeans.init.parallelPlusCSR
initAlg = kmeans.init.Batch(self.nClusters, method=initMethod,
oversamplingFactor=self.oversamplingFactor,
nRounds=self.nRounds,
dtype=self.dtype)
initAlg.input.set(kmeans.init.data, data)
res = initAlg.compute()
InitialCentroidsResult = res.get(kmeans.init.centroids)
algorithm = kmeans.Batch(
self.nClusters,
self.maxIterations,
method=self.method,
accuracyThreshold=self.accuracyThreshold,
gamma=self.gamma,
distanceType=self.distanceType,
assignFlag=self.assignFlag,
)
algorithm.input.set(kmeans.data, data)
algorithm.input.set(kmeans.inputCentroids,
InitialCentroidsResult)
res = algorithm.compute()
if self.assignFlag != False:
self.clusterAssignments = res.get(kmeans.assignments)
self.centroidResults = res.get(kmeans.centroids)
self.objectiveFunction = res.get(kmeans.objectiveFunction)
return self
def predict(self, centroidResults, data):
algorithm = kmeans.Batch(
self.nClusters,
0,
method=self.method,
accuracyThreshold=self.accuracyThreshold,
gamma=self.gamma,
distanceType=self.distanceType,
assignFlag=True,
)
algorithm.input.set(kmeans.data, data)
algorithm.input.set(kmeans.inputCentroids, centroidResults)
res = algorithm.compute()
return res.get(kmeans.assignments)
def compress(self, arrayData):
compressor = Compressor_Zlib()
compressor.parameter.gzHeader = True
compressor.parameter.level = level9
comprStream = CompressionStream(compressor)
comprStream.push_back(arrayData)
compressedData = np.empty(comprStream.getCompressedDataSize(),
dtype=np.uint8)
comprStream.copyCompressedArray(compressedData)
return compressedData
def decompress(self, arrayData):
decompressor = Decompressor_Zlib()
decompressor.parameter.gzHeader = True
# Create a stream for decompression
deComprStream = DecompressionStream(decompressor)
# Write the compressed data to the decompression stream and decompress it
deComprStream.push_back(arrayData)
# Allocate memory to store the decompressed data
bufferArray = np.empty(deComprStream.getDecompressedDataSize(),
dtype=np.uint8)
# Store the decompressed data
deComprStream.copyDecompressedArray(bufferArray)
return bufferArray
# -------------------
# ***Serialization***
# -------------------
def serialize(
self,
data,
fileName=None,
useCompression=False,
):
buffArrObjName = (str(type(data)).split()[1].split('>')[0]
+ '()').replace("'", '')
dataArch = InputDataArchive()
data.serialize(dataArch)
length = dataArch.getSizeOfArchive()
bufferArray = np.zeros(length, dtype=np.ubyte)
dataArch.copyArchiveToArray(bufferArray)
if useCompression == True:
if fileName != None:
if len(fileName.rsplit('.', 1)) == 2:
fileName = fileName.rsplit('.', 1)[0]
compressedData = Kmeans.compress(self, bufferArray)
np.save(fileName, compressedData)
else:
comBufferArray = Kmeans.compress(self, bufferArray)
serialObjectDict = {'Array Object': comBufferArray,
'Object Information': buffArrObjName}
return serialObjectDict
else:
if fileName != None:
if len(fileName.rsplit('.', 1)) == 2:
fileName = fileName.rsplit('.', 1)[0]
np.save(fileName, bufferArray)
else:
serialObjectDict = {'Array Object': bufferArray,
'Object Information': buffArrObjName}
return serialObjectDict
infoFile = open(fileName + '.txt', 'w')
infoFile.write(buffArrObjName)
infoFile.close()
# ---------------------
# ***Deserialization***
# ---------------------
def deserialize(
self,
serialObjectDict=None,
fileName=None,
useCompression=False,
):
import daal
if fileName != None and serialObjectDict == None:
bufferArray = np.load(fileName)
buffArrObjName = open(fileName.rsplit('.', 1)[0] + '.txt',
'r').read()
elif fileName == None and any(serialObjectDict):
bufferArray = serialObjectDict['Array Object']
buffArrObjName = serialObjectDict['Object Information']
else:
warnings.warn('Expecting "bufferArray" or "fileName" argument, NOT both'
)
raise SystemExit
if useCompression == True:
bufferArray = Kmeans.decompress(self, bufferArray)
dataArch = OutputDataArchive(bufferArray)
try:
deSerialObj = eval(buffArrObjName)
except AttributeError:
deSerialObj = HomogenNumericTable()
deSerialObj.deserialize(dataArch)
return deSerialObj
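The `compress`/`decompress` methods wrap DAAL's `Compressor_Zlib` streams: gzip-headered zlib at maximum compression level, round-tripping the serialized archive losslessly. The same behaviour can be sketched with the stdlib `zlib` module; here `level=9` stands in for DAAL's `level9` and `wbits=31` is the stdlib way to request a gzip header, mirroring `parameter.gzHeader = True` (a sketch of the analogous behaviour, not the DAAL API itself):

```python
import zlib


def compress_bytes(payload: bytes) -> bytes:
    # level 9 ~ DAAL's level9; wbits=31 adds a gzip header/trailer.
    co = zlib.compressobj(level=9, wbits=31)
    return co.compress(payload) + co.flush()


def decompress_bytes(blob: bytes) -> bytes:
    return zlib.decompress(blob, wbits=31)


data = b"kmeans serialized archive" * 100
blob = compress_bytes(data)
assert decompress_bytes(blob) == data  # lossless round trip
print(len(data), "->", len(blob))
```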
# File: packages/syft/src/syft/proto/lib/python/bytes_pb2.py (jackbandy/PySyft, Apache-2.0)
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: proto/lib/python/bytes.proto
"""Generated protocol buffer code."""
# third party
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(
    b'\n\x1cproto/lib/python/bytes.proto\x12\x0fsyft.lib.python"\x15\n\x05\x42ytes\x12\x0c\n\x04\x64\x61ta\x18\x01 \x01(\x0c\x62\x06proto3'
)
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(
    DESCRIPTOR, "proto.lib.python.bytes_pb2", globals()
)
if _descriptor._USE_C_DESCRIPTORS == False:
    DESCRIPTOR._options = None
    _BYTES._serialized_start = 49
    _BYTES._serialized_end = 70
# @@protoc_insertion_point(module_scope)
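The generated module registers a single message, `syft.lib.python.Bytes`, with one field, `bytes data = 1` (visible inside the serialized descriptor as `\n\x04data\x18\x01 \x01(\x0c`). On the wire that message is just tag `0x0A` (field number 1, wire type 2 = length-delimited) followed by a varint length and the raw bytes. A stdlib-only sketch of that encoding, independent of the protobuf runtime:

```python
def encode_varint(n: int) -> bytes:
    # Protobuf base-128 varint: 7 payload bits per byte, MSB set on all but the last.
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)


def encode_bytes_message(data: bytes) -> bytes:
    # field 1, wire type 2 (length-delimited): tag = (1 << 3) | 2 = 0x0A
    return b"\x0a" + encode_varint(len(data)) + data


print(encode_bytes_message(b"hi"))  # b'\n\x02hi'
```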
# File: sequenceur.py (lperezfr/turbot-toulouse-robot-race, Apache-2.0)
# encoding:utf-8
# Librairies tierces
import time
import os
# Mes classes
from voiture import Voiture
from asservissement import Asservissement
from arduino import Arduino
class Sequenceur:
# General
# CONST_NOMBRE_MESURES_DEPASSEMENT_DISTANCE = 1000 # Nombre de mesures consecutives du telemetre avant de considerer qu'un depassement de distance est effectif
DUREE_DEPASSEMENT_TELEMETRE = 0.1 # Temps en secondes pendant lequel le telemetre doit mesurer un depassement avant de considerer qu'un depassement est effectif
DISTANCE_DEPASSEMENT_TELEMETRE_IR = 1 # TODO: remettre ? Distance min mesuree par le telemetre IR pour confirmer depassement
# Premiere ligne droite
VITESSE_PREMIERE_LIGNE_DROITE = 50 # 45 pendant 4.8 fonctionne
DUREE_PREMIERE_LIGNE_DROITE = 4.15 # 4.5 lors des essais à 33s
DISTANCE_BORDURE_PREMIERE_LIGNE_DROITE = 30
# Ligne droite avant 180°
VITESSE_LIGNE_DROITE_AVANT_180 = 25
DISTANCE_DECLENCHEMENT_180 = 80
# Virage 180°
POSITION_ROUES_180_DEBUT = 70
POSITION_ROUES_180_FIN = 25 # Initialement 30 ou 35, mais ca passe trop pres
VITESSE_180_DEBUT = 30
VITESSE_180_FIN = 38
DUREE_LIGNE_DROITE_PENDANT_180 = 0.3
# Ligne droite apres premier virage 180°
VITESSE_LIGNE_DROITE_APRES_PREMiER_VIRAGE = 45
DISTANCE_BORDURE_APRES_PREMIER_VIRAGE = 30
DUREE_LIGNE_DROITE_SANS_SUIVI_BORDURE_APRES_PREMIER_VIRAGE = 1
DUREE_LIGNE_DROITE_APRES_PREMIER_VIRAGE = 2.5 # Auparavant 2.5
# Chicane
VITESSE_ENTREE_CHICANE = 25
DISTANCE_DECLENCHEMENT_CHICANE = 60
VITESSE_PREMIER_VIRAGE = 42
VITESSE_CHICANE = 40
DUREE_LIGNE_DIAGONALE_CHICANE_1 = 0.7 # 0.9 lors des essais du soir
DUREE_LIGNE_DIAGONALE_CHICANE_2 = 0.7 # 0.6 lors des essais du soir
DUREE_LIGNE_DIAGONALE_CHICANE_3 = 0.85 # 0.8 lors des essais du soir
DUREE_LIGNE_DIAGONALE_CHICANE_4 = 0.75 # 0.6 lors des essais du soir
#VITESSE_ENTREE_CHICANE = 25
#DISTANCE_DECLENCHEMENT_CHICANE = 20
#VITESSE_PREMIER_VIRAGE = 25
#VITESSE_CHICANE = 25
#VITESSE_CHICANE = 46
#DUREE_LIGNE_DIAGONALE_CHICANE_1 = 1.1
#DUREE_LIGNE_DIAGONALE_CHICANE_2 = 0.7
#DUREE_LIGNE_DIAGONALE_CHICANE_3 = 0.7
#DUREE_LIGNE_DIAGONALE_CHICANE_4 = 0.6
#DELTA_CAP_LIGNE_DIAGONALE = 27
#DUREE_LIGNE_DROITE_CHICANE_1 = 0.35
#DUREE_LIGNE_DROITE_CHICANE_2 = DUREE_LIGNE_DROITE_CHICANE_1
#DUREE_LIGNE_DROITE_CHICANE_3 = DUREE_LIGNE_DROITE_CHICANE_1
#DUREE_LIGNE_DROITE_CHICANE_4 = DUREE_LIGNE_DROITE_CHICANE_1
DELTA_CAP_LIGNE_DIAGONALE = 27
DUREE_LIGNE_DROITE_CHICANE_1 = 0.40
DUREE_LIGNE_DROITE_CHICANE_2 = DUREE_LIGNE_DROITE_CHICANE_1 - 0.05
DUREE_LIGNE_DROITE_CHICANE_3 = DUREE_LIGNE_DROITE_CHICANE_1 + 0.25
DUREE_LIGNE_DROITE_CHICANE_4 = DUREE_LIGNE_DROITE_CHICANE_1 - 0.05
# Ligne droite après chicane sans telemetre pour stabilisation
VITESSE_LIGNE_DROITE_SORTIE_CHICANE = 45
DUREE_LIGNE_DROITE_SORTIE_CHICANE = 1.0
# Ligne droite au telemetre apres chicane
VITESSE_LIGNE_DROITE_APRES_CHICANE = 50
DISTANCE_BORDURE_LIGNE_DROITE_APRES_CHICANE = 30
DUREE_LIGNE_DROITE_APRES_CHICANE = 2.7
# Derniere ligne droite suivi bordure
VITESSE_DERNIERE_LIGNE_DROITE = 55
DISTANCE_BORDURE_DERNIERE_LIGNE_DROITE = 40
DUREE_LIGNE_DROITE_SANS_SUIVI_BORDURE_APRES_DERNIER_VIRAGE = 1 # On commence par une ligne droite au cap
DUREE_DERNIERE_LIGNE_DROITE = 4.7 # On poursuit par un suivi bordure
# Acceleration finale
VITESSE_DERNIERE_LIGNE_DROITE_CAP = 60
DUREE_DERNIERE_LIGNE_DROITE_CAP = 1.7
# Ralentissement ligne droite finale suivi bordure
VITESSE_RALENTISSEMENT_FINAL = 40
DISTANCE_BORDURE_RALENTISSEMENT_FINAL = 30
DUREE_RALENTISSEMENT_FINAL = 1.0
# Suivi courbes au telemetre IR
VITESSE_SUIVI_COURBE_TELEMETRE_IR = 25
DISTANCE_SUIVI_COURBE_TELEMETRE_IR = 60
DUREE_SUIVI_COURBE_TELEMETRE_IR = 180
# Durees d'appui sur le bouton poussoir
DUREE_APPUI_COURT_REDEMARRAGE = 2 # Nombre de secondes d'appui sur le poussoir pour reinitialiser le programme
DUREE_APPUI_LONG_SHUTDOWN = 10 # Nombre de secondes d'appui sur le poussoir pour eteindre le raspberry
programme = [
###########################################################
# Attente stabilisation gyro - ETAPE 0
###########################################################
{
'instruction' : 'attendreGyroStable', # Attend stabilisation du gyro
'conditionFin' : 'attendreGyroStable'
},
{
'label' : 'attendBouton',
'instruction' : 'tourne', # Attend l'appui sur le bouton
'positionRoues' : 0,
'vitesse' : 0,
'conditionFin' : 'attendBouton'
},
{
'instruction' : 'setCap', # Cap asuivre = cap actuel
'conditionFin' : 'immediat'
},
###########################################################
# PREMIERE LIGNE DROITE ETAPE 1
###########################################################
{
'label' : 'debutLigneDroite', # Ligne droite avec suivi bordure
'instruction' : 'ligneDroiteTelemetre',
'vitesse' : VITESSE_PREMIERE_LIGNE_DROITE,
'distance' : DISTANCE_BORDURE_PREMIERE_LIGNE_DROITE,
'conditionFin' : 'duree',
'duree' : DUREE_PREMIERE_LIGNE_DROITE
},
{
'instruction' : 'tourne', # Freine
'positionRoues' : 0,
'vitesse' : -30,
'conditionFin' : 'duree',
'duree' : 0.5
},
############ TEST
#{
# 'instruction' : 'ligneDroiteTelemetre',
# 'vitesse' : 40, # Max 55 ? 45 plus raisonnable...
# 'recalageCap' : False,
# 'distance' : 40,
# 'antiProche' : False,
# 'conditionFin' : 'duree',
# 'duree' : 4, # Fin quand distance telemetre s'envole
# 'activationDistanceIntegrale' : False,
# 'nextLabel' : 'arret'
#},
###########################################################
# VIRAGE 180° ETAPE 2
###########################################################
{
'instruction' : 'ligneDroite', # Ligne droite au cap
'vitesse' : VITESSE_LIGNE_DROITE_AVANT_180,
'conditionFin' : 'telemetre',
'distSupA' : DISTANCE_DECLENCHEMENT_180 # Fin quand distance telemetre s'envole
},
{
'instruction' : 'tourne', # Commence le virage 180°
'positionRoues' : POSITION_ROUES_180_DEBUT,
'vitesse' : VITESSE_180_DEBUT,
'conditionFin' : 'cap',
'capFinalMini' : 60, # Relative to the initial heading; for the left: 180 300, for the right: 60 180
'capFinalMaxi' : 180, # Relative to the initial heading
},
{
'instruction' : 'ajouteCap',
'cap' : 90,
'conditionFin' : 'immediat',
},
{
'instruction' : 'ligneDroite', # Straight line on heading
'vitesse' : VITESSE_180_DEBUT,
'conditionFin' : 'duree',
'duree' : DUREE_LIGNE_DROITE_PENDANT_180
},
{
'instruction' : 'tourne', # Then finish the 180° turn
'positionRoues' : POSITION_ROUES_180_FIN,
'vitesse' : VITESSE_180_FIN,
'conditionFin' : 'cap',
'capFinalMini' : 60, # Relative to the initial heading
'capFinalMaxi' : 180 # Relative to the initial heading
},
{
'instruction' : 'ajouteCap',
'cap' : 90,
'conditionFin' : 'immediat',
},
###########################################################
# STRAIGHT LINE WITH KERB FOLLOWING - STEP 3
###########################################################
{
'instruction' : 'ligneDroite', # Straight line without kerb following to exit the turn cleanly
'vitesse' : VITESSE_LIGNE_DROITE_APRES_PREMiER_VIRAGE,
'conditionFin' : 'duree',
'duree' : DUREE_LIGNE_DROITE_SANS_SUIVI_BORDURE_APRES_PREMIER_VIRAGE
},
{
'label' : 'debutLigneDroiteSortieVirage', # Straight line with kerb following, without heading realignment
'instruction' : 'ligneDroiteTelemetre',
'recalageCap' : False,
'activationDistanceIntegrale' : True,
'vitesse' : VITESSE_LIGNE_DROITE_APRES_PREMiER_VIRAGE,
'distance' : DISTANCE_BORDURE_APRES_PREMIER_VIRAGE,
'conditionFin' : 'duree',
'duree' : DUREE_LIGNE_DROITE_APRES_PREMIER_VIRAGE
},
###########################################################
# CHICANES - STEP 4
###########################################################
{
'instruction' : 'tourne', # Brake
'positionRoues' : 0,
'vitesse' : -5,
'conditionFin' : 'duree',
'duree' : 0.3
},
{
'instruction' : 'ligneDroite', # Straight line
'vitesse' : VITESSE_ENTREE_CHICANE,
'conditionFin' : 'telemetre',
'distSupA' : DISTANCE_DECLENCHEMENT_CHICANE # End when the telemeter distance shoots up
},
# FIRST CHICANE
{
'instruction' : 'ajouteCap', # 1st diagonal to the right
'cap' : DELTA_CAP_LIGNE_DIAGONALE,
'conditionFin' : 'immediat',
},
{
'instruction' : 'ligneDroite',
'vitesse' : VITESSE_PREMIER_VIRAGE,
'conditionFin' : 'duree',
'duree' : DUREE_LIGNE_DIAGONALE_CHICANE_1
},
{
'instruction' : 'ajouteCap',
'cap' : -DELTA_CAP_LIGNE_DIAGONALE,
'conditionFin' : 'immediat',
},
{
'instruction' : 'ligneDroite', # Chicane straight line
'vitesse' : VITESSE_CHICANE,
'conditionFin' : 'duree',
'duree' : DUREE_LIGNE_DROITE_CHICANE_1
},
# SECOND CHICANE
{
'instruction' : 'ajouteCap', # 2nd diagonal to the left
'cap' : -DELTA_CAP_LIGNE_DIAGONALE,
'conditionFin' : 'immediat',
},
{
'instruction' : 'ligneDroite',
'vitesse' : VITESSE_CHICANE,
'conditionFin' : 'duree',
'duree' : DUREE_LIGNE_DIAGONALE_CHICANE_2
},
{
'instruction' : 'ajouteCap',
'cap' : +DELTA_CAP_LIGNE_DIAGONALE,
'conditionFin' : 'immediat',
},
{
'instruction' : 'ligneDroite', # Chicane straight line
'vitesse' : VITESSE_CHICANE,
'conditionFin' : 'duree',
'duree' : DUREE_LIGNE_DROITE_CHICANE_2
},
# THIRD CHICANE
{
'instruction' : 'ajouteCap', # 3rd diagonal to the right
'cap' : DELTA_CAP_LIGNE_DIAGONALE,
'conditionFin' : 'immediat',
},
{
'instruction' : 'ligneDroite',
'vitesse' : VITESSE_CHICANE,
'conditionFin' : 'duree',
'duree' : DUREE_LIGNE_DIAGONALE_CHICANE_3
},
{
'instruction' : 'ajouteCap',
'cap' : -DELTA_CAP_LIGNE_DIAGONALE,
'conditionFin' : 'immediat',
},
{
'instruction' : 'ligneDroite', # Chicane straight line
'vitesse' : VITESSE_CHICANE,
'conditionFin' : 'duree',
'duree' : DUREE_LIGNE_DROITE_CHICANE_3
},
# FOURTH CHICANE
{
'instruction' : 'ajouteCap', # 4th diagonal to the left
'cap' : -DELTA_CAP_LIGNE_DIAGONALE,
'conditionFin' : 'immediat',
},
{
'instruction' : 'ligneDroite',
'vitesse' : VITESSE_CHICANE,
'conditionFin' : 'duree',
'duree' : DUREE_LIGNE_DIAGONALE_CHICANE_4
},
{
'instruction' : 'ajouteCap',
'cap' : DELTA_CAP_LIGNE_DIAGONALE,
'conditionFin' : 'immediat',
},
{
'instruction' : 'ligneDroite', # Chicane straight line (TODO: check whether this is useful)
'vitesse' : VITESSE_CHICANE,
'conditionFin' : 'duree',
'duree' : DUREE_LIGNE_DROITE_CHICANE_4
},
###########################################################
# STRAIGHT LINE AFTER THE CHICANE - STEP 5
###########################################################
{
'label' : 'debutLigneDroite', # Straight line with kerb following
'instruction' : 'ligneDroiteTelemetre',
'vitesse' : VITESSE_LIGNE_DROITE_APRES_CHICANE,
'distance' : DISTANCE_BORDURE_LIGNE_DROITE_APRES_CHICANE,
'conditionFin' : 'duree',
'duree' : DUREE_LIGNE_DROITE_APRES_CHICANE
},
{
'instruction' : 'tourne', # Brake
'positionRoues' : 0,
'vitesse' : -30,
'distance' : DISTANCE_BORDURE_LIGNE_DROITE_APRES_CHICANE,
'conditionFin' : 'duree',
'duree' : 0.5
},
###########################################################
# SECOND 180° TURN - STEP 6
###########################################################
{
'instruction' : 'ligneDroite', # Straight line on heading
'vitesse' : VITESSE_LIGNE_DROITE_AVANT_180,
'conditionFin' : 'telemetre',
'distSupA' : DISTANCE_DECLENCHEMENT_180 # End when the telemeter distance shoots up
},
{
'instruction' : 'tourne', # Start the 180° turn
'positionRoues' : POSITION_ROUES_180_DEBUT,
'vitesse' : VITESSE_180_DEBUT,
'conditionFin' : 'cap',
'capFinalMini' : 60, # Relative to the initial heading; for the left: 180 300, for the right: 60 180
'capFinalMaxi' : 180, # Relative to the initial heading
},
{
'instruction' : 'ajouteCap',
'cap' : 90,
'conditionFin' : 'immediat',
},
{
'instruction' : 'ligneDroite', # Straight line on heading
'vitesse' : VITESSE_180_DEBUT,
'conditionFin' : 'duree',
'duree' : DUREE_LIGNE_DROITE_PENDANT_180 + 0.1 # Added because there is a camber
},
{
'instruction' : 'tourne', # Then finish the 180° turn
'positionRoues' : POSITION_ROUES_180_FIN,
'vitesse' : VITESSE_180_FIN,
'conditionFin' : 'cap',
'capFinalMini' : 60, # Relative to the initial heading
'capFinalMaxi' : 180 # Relative to the initial heading
},
{
'instruction' : 'ajouteCap',
'cap' : 90,
'conditionFin' : 'immediat',
},
###########################################################
# STRAIGHT LINE WITH KERB FOLLOWING - STEP 7
###########################################################
{
'instruction' : 'ligneDroite', # Straight line on heading to exit the turn cleanly
'vitesse' : VITESSE_DERNIERE_LIGNE_DROITE,
'conditionFin' : 'duree',
'duree' : DUREE_LIGNE_DROITE_SANS_SUIVI_BORDURE_APRES_DERNIER_VIRAGE
},
{
'instruction' : 'ligneDroiteTelemetre', # Straight line with kerb following
'vitesse' : VITESSE_DERNIERE_LIGNE_DROITE,
'distance' : DISTANCE_BORDURE_DERNIERE_LIGNE_DROITE,
'conditionFin' : 'duree',
'duree' : DUREE_DERNIERE_LIGNE_DROITE
},
{
'instruction' : 'ajouteCap', # Hack to correct a bias
'cap' : -2,
'conditionFin' : 'immediat',
},
{
'instruction' : 'ligneDroite', # Straight line on heading
'vitesse' : VITESSE_DERNIERE_LIGNE_DROITE_CAP,
'conditionFin' : 'duree',
'duree' : DUREE_DERNIERE_LIGNE_DROITE_CAP
},
{
'instruction' : 'tourne', # Brake
'positionRoues' : 0,
'vitesse' : -10,
'conditionFin' : 'duree',
'duree' : 0.3
},
{
'instruction' : 'ligneDroiteTelemetre', # Straight line with kerb following
'vitesse' : VITESSE_RALENTISSEMENT_FINAL,
'distance' : DISTANCE_BORDURE_RALENTISSEMENT_FINAL,
'conditionFin' : 'duree',
'duree' : DUREE_RALENTISSEMENT_FINAL
},
{
'instruction' : 'tourne', # Brake
'positionRoues' : 0,
'vitesse' : -30,
'distance' : DISTANCE_BORDURE_PREMIERE_LIGNE_DROITE,
'conditionFin' : 'duree',
'duree' : 0.5
},
{
'instruction' : 'ligneDroite', # Straight line on heading
'vitesse' : VITESSE_LIGNE_DROITE_AVANT_180,
'conditionFin' : 'telemetre',
'distSupA' : DISTANCE_DECLENCHEMENT_180 # End when the telemeter distance shoots up
},
###########################################################
# BRAKING THEN STOP
###########################################################
{
'label' : 'arret',
'instruction' : 'tourne', # Braking
'vitesse' : -30,
'positionRoues' : 0,
'conditionFin' : 'duree',
'duree' : 0.5
},
{
'instruction' : 'tourne', # Stop with wheels at 0
'vitesse' : 0,
'positionRoues' : 0,
'conditionFin' : 'duree',
'duree' : 1.5,
'nextLabel' : 'attendBouton' # Back to the start
}
]
sequence = 0
debut = True
timeDebut = 0
programmeCourant = {}
voiture = None
asservissement = None
last_mesure_depassement = False
time_debut_depassement = 0
last_mesure_telemetre1 = 0
timer_led = 0
vitesse_clignote_led = 10
led_clignote = True
last_led = 0
timer_bouton = 0
last_bouton = 1 # 1 = button released, 0 = button pressed
flag_appui_court = False # Set to True once a short press (3 seconds) has been detected
def __init__(self, voiture):
self.voiture = voiture
def execute(self):
# Blink the LED
if self.led_clignote:
if time.time() > self.timer_led + self.vitesse_clignote_led:
self.timer_led = time.time()
self.last_led = 0 if self.last_led else 1
self.voiture.setLed(self.last_led)
else:
self.voiture.setLed(1)
# Check for a short (3 s) or long (10 s) press on the button
if self.voiture.getBoutonPoussoir() == 0:
if self.last_bouton == 1:
self.timer_bouton = time.time()
else:
if time.time() > self.timer_bouton + self.DUREE_APPUI_COURT_REDEMARRAGE:
# Stop the car
self.voiture.avance(0)
self.voiture.tourne(0)
self.vitesse_clignote_led = 0.3
self.led_clignote = True
self.flag_appui_court = True
if time.time() > self.timer_bouton + self.DUREE_APPUI_LONG_SHUTDOWN:
# Long press: shut down the Raspberry Pi
os.system('sudo shutdown -h now')
self.last_bouton = 0
else:
self.last_bouton = 1
if self.flag_appui_court:
# If a short press was detected before the button was released
self.flag_appui_court = False
# Go back to the starting sequence
for i in range(len(self.programme)):
if 'label' in self.programme[i]:
if self.programme[i]['label'] == 'attendBouton':
# Found the next sequence
self.sequence = i
self.debut = True
if self.debut:
# First execution of the current instruction
self.programmeCourant = self.programme[self.sequence]
instruction = self.programmeCourant['instruction']
print "********** Nouvelle instruction *********** ", instruction
self.timeDebut = time.time()
self.debut = False
self.arduino.annuleRecalageCap()
self.asservissement.cumulErreurCap = 0
self.last_mesure_depassement = False
# Make the current heading the heading to follow
if instruction == 'setCap':
self.asservissement.setCapTarget()
# Set the car's speed
if instruction in ('ligneDroite', 'ligneDroiteTelemetre', 'tourne', 'suiviCourbeTelemetre'):
vitesse = self.programmeCourant['vitesse']
print "Vitesse : ", vitesse
self.voiture.avance(vitesse)
self.asservissement.setVitesse(vitesse)
# Position the wheels for the 'tourne' instruction
if instruction == 'tourne':
positionRoues = self.programmeCourant['positionRoues']
print "Position roues : ", positionRoues
self.voiture.tourne(positionRoues)
# Add a value to capTarget for the 'ajouteCap' instruction
if instruction == 'ajouteCap':
self.asservissement.ajouteCap(self.programmeCourant['cap'])
# Tell the control-loop class whether it must regulate, and with which algorithm
if instruction == 'ligneDroite':
self.asservissement.initLigneDroite()
elif instruction == 'ligneDroiteTelemetre':
recalageCap = False
if 'recalageCap' in self.programmeCourant:
recalageCap = self.programmeCourant['recalageCap']
activationDistanceIntegrale = False
if 'activationDistanceIntegrale' in self.programmeCourant:
activationDistanceIntegrale = self.programmeCourant['activationDistanceIntegrale']
antiProche = False
if 'antiProche' in self.programmeCourant:
antiProche = self.programmeCourant['antiProche']
# Above all, no integral correction together with the antiProche protection
activationDistanceIntegrale = False
self.asservissement.initLigneDroiteTelemetre(self.programmeCourant['distance'], recalageCap, activationDistanceIntegrale, antiProche)
elif instruction == 'suiviCourbeTelemetre':
self.asservissement.initCourbeTelemetre(self.programmeCourant['distance'])
else:
self.asservissement.annuleLigneDroite()
else:
# Part that runs in a loop as long as the end condition is not met
pass
# Check whether to move on to the next instruction
finSequence = False # Initialize finSequence
# Get the end condition
conditionFin = self.programmeCourant['conditionFin']
# Check whether the end condition has been reached
if conditionFin == 'attendreGyroStable':
if self.arduino.gyroX != 0.0:
# If the Arduino managed to acquire the gyro, signal it through the LED blink rate
self.vitesse_clignote_led = 1.5
finSequence = self.arduino.checkGyroStable()
elif conditionFin == 'cap':
capFinalMini = self.programmeCourant['capFinalMini']
capFinalMaxi = self.programmeCourant['capFinalMaxi']
if self.asservissement.checkDeltaCapAtteint(capFinalMini, capFinalMaxi):
finSequence = True
elif conditionFin == 'duree':
if (time.time() - self.timeDebut) > self.programmeCourant['duree']:
finSequence = True
elif conditionFin == 'immediat':
finSequence = True
elif conditionFin == 'telemetre':
if self.arduino.bestTelemetrePourDetectionVirage() > self.programmeCourant['distSupA']:
#if self.last_mesure_depassement:
# if self.last_mesure_telemetre1 != self.arduino.telemetre1:
# print "Telemetre1 : ", self.arduino.telemetre1, " Distance a depasser : ", self.programmeCourant['distSupA']
# self.last_mesure_telemetre1 = self.arduino.telemetre1
# # Check for prolonged telemetre1 overshoot + confirmation by the IR telemeter
# if (time.time() > self.time_debut_depassement + self.DUREE_DEPASSEMENT_TELEMETRE) and (self.arduino.telemetreIR > self.DISTANCE_DEPASSEMENT_TELEMETRE_IR):
finSequence = True
#else:
# self.time_debut_depassement = time.time()
#self.last_mesure_depassement = True
#else:
# self.last_mesure_depassement = False
elif conditionFin == 'attendBouton':
self.vitesse_clignote_led = 0.3
self.led_clignote = True
if self.voiture.getBoutonPoussoir() == 0:
self.led_clignote = False
finSequence = True
if finSequence:
# If the nextLabel field is defined, look up the next element by its label
if 'nextLabel' in self.programmeCourant:
nextLabel = self.programmeCourant['nextLabel']
for i in range(len(self.programme)):
if 'label' in self.programme[i]:
if self.programme[i]['label'] == nextLabel:
# Found the next sequence
self.sequence = i
else:
# If the nextLabel field is not defined, simply move on to the next element
self.sequence += 1
self.debut = True
| 39.535456 | 169 | 0.538856 | 2,420 | 27,319 | 5.867355 | 0.141322 | 0.070498 | 0.033805 | 0.029157 | 0.506162 | 0.444257 | 0.404395 | 0.358476 | 0.358476 | 0.300444 | 0 | 0.024341 | 0.344339 | 27,319 | 691 | 170 | 39.535456 | 0.767865 | 0.216553 | 0 | 0.396728 | 0 | 0 | 0.17275 | 0.005596 | 0 | 0 | 0 | 0.001447 | 0 | 0 | null | null | 0.014315 | 0.010225 | null | null | 0.006135 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f38371cb347fe1cc59a05a3c04244fc73c03cc52 | 820 | py | Python | app/request/migrations/0003_auto_20190924_2107.py | contestcrew/2019SeoulContest-Backend | 2e99cc6ec6a712911da3b79412ae84a9d35453e1 | [
"MIT"
] | null | null | null | app/request/migrations/0003_auto_20190924_2107.py | contestcrew/2019SeoulContest-Backend | 2e99cc6ec6a712911da3b79412ae84a9d35453e1 | [
"MIT"
] | 32 | 2019-08-30T13:09:28.000Z | 2021-06-10T19:07:56.000Z | app/request/migrations/0003_auto_20190924_2107.py | contestcrew/2019SeoulContest-Backend | 2e99cc6ec6a712911da3b79412ae84a9d35453e1 | [
"MIT"
] | 3 | 2019-09-19T10:12:50.000Z | 2019-09-30T15:59:13.000Z | # Generated by Django 2.2.5 on 2019-09-24 12:07
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('request', '0002_auto_20190924_1811'),
]
operations = [
migrations.CreateModel(
name='PoliceOffice',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=30, verbose_name='이름')),
],
),
migrations.AddField(
model_name='request',
name='police_office',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='request.PoliceOffice', verbose_name='경찰서'),
),
]
| 30.37037 | 140 | 0.610976 | 89 | 820 | 5.483146 | 0.617978 | 0.04918 | 0.057377 | 0.090164 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054366 | 0.259756 | 820 | 26 | 141 | 31.538462 | 0.749588 | 0.054878 | 0 | 0.1 | 1 | 0 | 0.122898 | 0.029754 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f388170b0b0845168e9528e0556eaa6b27d8bd42 | 6,952 | py | Python | vad.py | zhuligs/Pallas | c8d77d0963c080fa7331560f1659001488b0328f | [
"MIT"
] | null | null | null | vad.py | zhuligs/Pallas | c8d77d0963c080fa7331560f1659001488b0328f | [
"MIT"
] | null | null | null | vad.py | zhuligs/Pallas | c8d77d0963c080fa7331560f1659001488b0328f | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# import numpy as np
import itin
import sdata
import fppy
from copy import deepcopy as cp
from wrapdimer import get_rmode, get_0mode, get_mode
from zfunc import set_cell_from_vasp, write_cell_to_vasp
from vfunc import runvdim, goptv
# def con(reac, prod):
# mode = get_mode(reac, prod)
# sdd = runvdim(reac, mode)
# mode = -1.0 * get_mode(sdd, reac)
# newmin = goptv(sdd, mode)
# types = sdata.types()
# fp0 = reac.get_lfp()
def con(reac, prod):
rPool = []
pPool = []
for i in range(itin.ndimMax):
print "ZLOG: R DIM", i
mode = get_rmode()
tcc = runvdim(reac, mode)
rPool.append(tcc)
print "ZLOG: E, DIR", tcc.get_e(), sdata.ddir
print "ZLOG: P DIM", i
mode = get_rmode()
tcc = runvdim(prod, mode)
pPool.append(tcc)
print "ZLOG: E, DIR", tcc.get_e(), sdata.ddir
dcompt = []
for i in range(itin.ndimMax):
xreac = cp(rPool[i])
fpi = xreac.get_lfp()
for j in range(itin.ndimMax):
xprod = cp(pPool[j])
fpj = xprod.get_lfp()
(dist, m) = fppy.fp_dist(itin.ntyp, sdata.types, fpi, fpj)
print "ZLOG: I %d J %d dist %8.6E" % (i, j, dist)
dcompt.append([dist, [i, j]])
dcomp = sorted(dcompt, key=lambda x: x[0])
print "ZLOG: shortest dim D %8.6E" % (dcomp[0][0])
(ix, iy) = dcomp[0][1]
print "ZLOG: ix iy", ix, iy
xsp = cp(rPool[ix])
ysp = cp(pPool[iy])
fp1 = xsp.get_lfp()
fp2 = ysp.get_lfp()
(d1, m1) = fppy.fp_dist(itin.ntyp, sdata.types, fp1, fp2)
print "ZLOG: CONF D: %8.6E" % (d1)
# modex = -1*get_mode(xsp, reac)
# modey = -1*get_mode(ysp, prod)
modex = get_0mode()
xspl = goptv(xsp, modex)
print "ZLOG: XSPL, E, DIR", xspl.get_e(), sdata.gdir
yspl = goptv(ysp, modex)
print "ZLOG: YSPL, E, DIR", yspl.get_e(), sdata.gdir
rbe = xsp.get_e() - reac.get_e()
pbe = ysp.get_e() - reac.get_e()
fpxs = xspl.get_lfp()
fpys = yspl.get_lfp()
(d, m) = fppy.fp_dist(itin.ntyp, sdata.types, fpxs, fpys)
print "ZLOG: DD: ", d
return(d, rbe, pbe, xsp, ysp, xspl, yspl)
def con2(reac, prod):
rPool = []
rlool = []
pPool = []
plool = []
for i in range(itin.ndimMax):
print "ZLOG: R DIM", i
mode = get_rmode()
tcc = runvdim(reac, mode)
rPool.append(cp(tcc))
print "ZLOG: DIM E, DIR", tcc.get_e(), sdata.ddir
modex = -1*get_mode(tcc, reac)*0.1
tccc = goptv(tcc, modex)
print "ZLOG: OPT E, DIR", tccc.get_e(), sdata.gdir
rlool.append(cp(tccc))
print "ZLOG: P DIM", i
mode = get_rmode()
tcc = runvdim(prod, mode)
pPool.append(cp(tcc))
print "ZLOG: DIM E, DIR", tcc.get_e(), sdata.ddir
modey = -1*get_mode(tcc, prod)*0.1
tccc = goptv(tcc, modey)
print "ZLOG: OPT E, DIR", tccc.get_e(), sdata.gdir
plool.append(tccc)
dcompt = []
for i in range(itin.ndimMax):
xreac = cp(rlool[i])
fpi = xreac.get_lfp()
for j in range(itin.ndimMax):
xprod = cp(plool[j])
fpj = xprod.get_lfp()
(dist, m) = fppy.fp_dist(itin.ntyp, sdata.types, fpi, fpj)
print "ZLOG: I %d J %d dist %8.6E" % (i, j, dist)
dcompt.append([dist, [i, j]])
dcomp = sorted(dcompt, key=lambda x: x[0])
print "ZLOG: shortest OPT D %8.6E" % (dcomp[0][0])
(ix, iy) = dcomp[0][1]
print "ZLOG: ix iy", ix, iy
xsp = cp(rPool[ix])
ysp = cp(pPool[iy])
xspl = cp(rlool[ix])
yspl = cp(plool[iy])
d = dcomp[0][0]
# fp1 = xsp.get_lfp()
# fp2 = ysp.get_lfp()
# (d1, m1) = fppy.fp_dist(itin.ntyp, sdata.types, fp1, fp2)
# print "ZLOG: CONF D: %8.6E" % (d1)
# # modex = -1*get_mode(xsp, reac)
# # modey = -1*get_mode(ysp, prod)
# modex = get_0mode()
# xspl = goptv(xsp, modex)
# print "ZLOG: XSPL, E, DIR", xspl.get_e(), sdata.gdir
# yspl = goptv(ysp, modex)
# print "ZLOG: YSPL, E, DIR", yspl.get_e(), sdata.gdir
# rbe = xsp.get_e() - reac.get_e()
# pbe = ysp.get_e() - reac.get_e()
# fpxs = xspl.get_lfp()
# fpys = yspl.get_lfp()
# (d, m) = fppy.fp_dist(itin.ntyp, sdata.types, fpxs, fpys)
# print "ZLOG: DD: ", d
return(d, xsp, ysp, xspl, yspl)
def rcon(xreac, xprod):
dmax = itin.dist
dd = 1.0
rc = []
ist = 0
xfp0 = xreac.get_lfp()
yfp0 = xprod.get_lfp()
while dd > dmax:
ist += 1
if ist > 200:
break
(d, xsp, ysp, xspl, yspl) = con2(xreac, xprod)
xreac = cp(xspl)
xprod = cp(yspl)
rc.append([d, xsp, ysp, xspl, yspl])
dtt = []
for i in range(len(rc)):
xxl = rc[i][3]  # xspl; rc entries are [d, xsp, ysp, xspl, yspl]
fpxxl = xxl.get_lfp()
(dx, m) = fppy.fp_dist(itin.ntyp, sdata.types, fpxxl, yfp0)
dtt.append(dx)
print "ZLOG: I %d to PROD dist %8.6E" % (i, dx)
for j in range(len(rc)):
yyl = rc[j][4]  # yspl
fpyyl = yyl.get_lfp()
(dy, m) = fppy.fp_dist(itin.ntyp, sdata.types, fpyyl, xfp0)
dtt.append(dy)
print "ZLOG: J %d to PROD dist %8.6E" % (j, dy)
(dt, m) = fppy.fp_dist(itin.ntyp, sdata.types, fpxxl, fpyyl)
print "ZLOG: CONT I %2d J %2d dist %8.6E" % (i, j, dt)
dtt.append(dt)
dd = min(dtt)
print "ZLOG: IST:", ist
print "ZLOG: DRP:", d, dd, rbe, pbe
print "ZLOG: X-S-E, X-L-E, Y-S-E, Y-L-E:", \
xsp.get_e(), xspl.get_e(), ysp.get_e(), yspl.get_e()
write_cell_to_vasp(xsp, "ixsp_" + str(ist) + ".vasp")
write_cell_to_vasp(xspl, "ixspl_" + str(ist) + ".vasp")
write_cell_to_vasp(ysp, "iysp_" + str(ist) + ".vasp")
write_cell_to_vasp(yspl, "iyspl_" + str(ist) + ".vasp")
return rc
def initrun():
reac0 = set_cell_from_vasp('R.vasp')
prod0 = set_cell_from_vasp('P.vasp')
mode = get_0mode()
reac = goptv(reac0, mode)
prod = goptv(prod0, mode)
write_cell_to_vasp(reac, 'ROPT.vasp')
write_cell_to_vasp(prod, 'POPT.vasp')
sdata.types = reac.get_types()
fpr = reac.get_lfp()
fpp = prod.get_lfp()
(d, m) = fppy.fp_dist(itin.ntyp, sdata.types, fpr, fpp)
print 'ZLOG: INIT DIST', d
print 'ZLOG: REAC ENERGY', reac.get_e()
print 'ZLOG: PROD ENERGY', prod.get_e()
return (reac, prod)
def main():
(reac, prod) = initrun()
rc = rcon(reac, prod)
i = 0
for x in rc:
i += 1
print "ZZ# no, d, rbe, pbe", i, x[0], x[1], x[2]
write_cell_to_vasp(x[1], 'xsp' + str(i) + '.vasp')
write_cell_to_vasp(x[2], 'ysp' + str(i) + '.vasp')
write_cell_to_vasp(x[3], 'xspl' + str(i) + '.vasp')
write_cell_to_vasp(x[4], 'yspl' + str(i) + '.vasp')
if __name__ == '__main__':
main()
| 28.260163 | 76 | 0.534091 | 1,075 | 6,952 | 3.336744 | 0.139535 | 0.082799 | 0.033733 | 0.045999 | 0.618065 | 0.574854 | 0.567048 | 0.546139 | 0.517982 | 0.479509 | 0 | 0.018678 | 0.299194 | 6,952 | 245 | 77 | 28.37551 | 0.71757 | 0.121979 | 0 | 0.299401 | 0 | 0.005988 | 0.107255 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.041916 | null | null | 0.179641 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f3940f471c47cc139263585f43ad154e0740108a | 8,509 | py | Python | pygpsnmea/kml.py | tww-software/py_gps_nmea | 8295d146014d4e8636e7ca05f7e843f023c67e45 | [
"MIT"
] | null | null | null | pygpsnmea/kml.py | tww-software/py_gps_nmea | 8295d146014d4e8636e7ca05f7e843f023c67e45 | [
"MIT"
] | null | null | null | pygpsnmea/kml.py | tww-software/py_gps_nmea | 8295d146014d4e8636e7ca05f7e843f023c67e45 | [
"MIT"
] | null | null | null | """
a parser to generate Keyhole Markup Language (KML) for Google Earth
"""
import datetime
import os
import re
DATETIMEREGEX = re.compile(
r'\d{4}/(0[1-9]|1[0-2])/(0[1-9]|1[0-9]|2[0-9]|3[01]) '
r'(0[0-9]|1[0-9]|2[0-3]):([0-5][0-9]):([0-5][0-9])')
class KMLOutputParser():
"""
Class to parse KML into an output file.
Attributes:
kmldoc(list): list of strings to make up the doc.kml
kmlfilepath(str): path to output KML file
kmlheader(str): first part of a KML file
placemarktemplate(str): template for a KML placemark (pin on map)
lineplacemarktemplate(str): template for KML linestring (line on map)
"""
def __init__(self, kmlfilepath):
self.kmldoc = []
self.kmlfilepath = kmlfilepath
self.kmlheader = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
<Document>
<name>%s</name>
<open>1</open>"""
self.placemarktemplate = """
<Placemark>
<name>%s</name>
<description>%s</description>
<TimeStamp>
<when>%s</when>
</TimeStamp>
<LookAt>
<longitude>%s</longitude>
<latitude>%s</latitude>
<altitude>%s</altitude>
<heading>-0</heading>
<tilt>0</tilt>
<range>500</range>
</LookAt>
<Point>
<coordinates>%s</coordinates>
</Point>
</Placemark>"""
self.lineplacemarktemplate = """
<Placemark>
<name>%s</name>
<LineString>
<coordinates>%s</coordinates>
</LineString>
</Placemark>"""
@staticmethod
def format_kml_placemark_description(placemarkdict):
"""
format html tags for inside a kml placemark from a dictionary
Args:
placemarkdict(dict): dictionary of information for a placemark
Returns:
description(str): the dictionary items formatted as HTML string
suitable to be in a KML placemark description
"""
starttag = "<![CDATA["
newlinetag = "<br />\n"
endtag = "]]>"
descriptionlist = []
descriptionlist.append(starttag)
for item in placemarkdict:
if isinstance(placemarkdict[item], dict):
descriptionlist.append(newlinetag)
descriptionlist.append(item.upper())
descriptionlist.append(newlinetag)
for subitem in placemarkdict[item]:
descriptionlist.append(str(subitem).upper())
descriptionlist.append(' - ')
descriptionlist.append(str(placemarkdict[item][subitem]))
descriptionlist.append(newlinetag)
continue
descriptionlist.append(str(item).upper())
descriptionlist.append(' - ')
descriptionlist.append(str(placemarkdict[item]))
descriptionlist.append(newlinetag)
descriptionlist.append(endtag)
description = ''.join(descriptionlist)
return description
def create_kml_header(self, name):
"""
Write the first part of the KML output file.
This only needs to be called once at the start of the kml file.
Args:
name(str): name to use for this kml document
"""
self.kmldoc.append(self.kmlheader % (name))
def add_kml_placemark(self, placemarkname, description, lon, lat,
altitude='0', timestamp=''):
"""
Write a placemark to the KML file (a pin on the map!)
Args:
placemarkname(str): text that appears next to the pin on the map
description(str): text that will appear in the placemark
lon(str): longitude in decimal degrees
lat(str): latitude in decimal degrees
altitude(str): altitude in metres
timestamp(str): time stamp in XML format
"""
placemarkname = remove_invalid_chars(placemarkname)
coords = lon + ',' + lat + ',' + altitude
placemark = self.placemarktemplate % (
placemarkname, description, timestamp, lon, lat,
altitude, coords)
self.kmldoc.append(placemark)
def open_folder(self, foldername):
"""
open a folder to store placemarks
Args:
foldername(str): the name of the folder
"""
cleanfoldername = remove_invalid_chars(foldername)
openfolderstr = "<Folder>\n<name>{}</name>".format(cleanfoldername)
self.kmldoc.append(openfolderstr)
def close_folder(self):
"""
close the currently open folder
"""
closefolderstr = "</Folder>"
self.kmldoc.append(closefolderstr)
def add_kml_placemark_linestring(self, placemarkname, coords):
"""
Write a linestring to the KML file (a line on the map!)
Args:
placemarkname(str): name of the linestring
coords(list): list of dicts containing Lat/Lon
"""
placemarkname = remove_invalid_chars(placemarkname)
newcoordslist = []
for item in coords:
lon = str(item['longitude'])
lat = str(item['latitude'])
try:
alt = str(item['altitude (M)'])
except KeyError:
alt = '0'
coordsline = '{},{},{}'.format(lon, lat, alt)
newcoordslist.append(coordsline)
placemark = self.lineplacemarktemplate % (placemarkname,
'\n'.join(newcoordslist))
self.kmldoc.append(placemark)
def close_kml_file(self):
"""
Write the end of the KML file.
This needs to be called once at the end of the file
to ensure the tags are closed properly.
"""
endtags = "\n</Document></kml>"
self.kmldoc.append(endtags)
def write_kml_doc_file(self):
"""
write the tags to the kml doc.kml file
"""
with open(self.kmlfilepath, 'w') as kmlout:
for kmltags in self.kmldoc:
kmlout.write(kmltags)
kmlout.flush()
class LiveKMLMap(KMLOutputParser):
"""
live plot positions on a map
"""
kmlnetlink = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
<NetworkLink>
<name>Live GPS Positon</name>
<description>current GPS position</description>
<Link>
<href>{}</href>
<refreshVisibility>1</refreshVisibility>
<refreshMode>onInterval</refreshMode>
<refreshInterval>1</refreshInterval>
</Link>
</NetworkLink>
</kml>"""
def __init__(self, kmlfilepath):
super().__init__(kmlfilepath)
outputpath = os.path.dirname(kmlfilepath)
self.netlinkpath = os.path.join(outputpath, 'open_this.kml')
def create_netlink_file(self):
"""
write the netlink file
"""
with open(os.path.join(self.netlinkpath), 'w') as netlinkfile:
netlinkfile.write(self.kmlnetlink.format(self.kmlfilepath))
class InvalidDateTimeString(Exception):
"""
raise if timestamp is the wrong format
"""
def remove_invalid_chars(xmlstring):
"""
remove invalid chars from a string
Args:
xmlstring(str): input string to clean
Returns:
cleanstring(str): return string with invalid chars replaced or removed
"""
invalidchars = {'<': '<', '>': '>', '"': '"',
'\t': ' ', '\n': ''}
cleanstring = xmlstring.replace('&', '&')
for invalidchar in invalidchars:
cleanstring = cleanstring.replace(
invalidchar, invalidchars[invalidchar])
return cleanstring
def convert_timestamp_to_kmltimestamp(timestamp):
"""
convert the pygps timestamp string to one suitable for KML
Args:
timestamp(str): the timestamp string in the format '%Y/%m/%d %H:%M:%S'
Raises:
InvalidDateTimeString: when the timestamp is not correctly formatted
Returns:
xmltimestamp(str): the timestamp in the format '%Y-%m-%dT%H:%M:%SZ'
"""
if DATETIMEREGEX.match(timestamp):
if timestamp.endswith(' (estimated)'):
timestamp = timestamp[:-len(' (estimated)')]  # rstrip would strip a character set, not the suffix
try:
dtobj = datetime.datetime.strptime(timestamp, '%Y/%m/%d %H:%M:%S')
kmltimestamp = dtobj.strftime('%Y-%m-%dT%H:%M:%SZ')
except ValueError as err:
raise InvalidDateTimeString('timestamp must be %Y/%m/%d %H:%M:%S') from err
return kmltimestamp
raise InvalidDateTimeString('timestamp must be %Y/%m/%d %H:%M:%S')
| 31.868914 | 78 | 0.595135 | 929 | 8,509 | 5.404736 | 0.264801 | 0.054372 | 0.01912 | 0.00956 | 0.138419 | 0.078072 | 0.057757 | 0.048198 | 0.02151 | 0.02151 | 0 | 0.00917 | 0.282289 | 8,509 | 266 | 79 | 31.988722 | 0.813001 | 0.261253 | 0 | 0.170068 | 1 | 0.013605 | 0.226894 | 0.075921 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088435 | false | 0 | 0.020408 | 0 | 0.156463 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f39a1fae3017b69be241d72f355eabb23b20cb7e | 766 | py | Python | setup.py | btpka3/certbot-auto-dns-challenge | 6f0eee2c4a65380aaf51e439714b0c88820cd392 | [
"Apache-2.0"
] | null | null | null | setup.py | btpka3/certbot-auto-dns-challenge | 6f0eee2c4a65380aaf51e439714b0c88820cd392 | [
"Apache-2.0"
] | null | null | null | setup.py | btpka3/certbot-auto-dns-challenge | 6f0eee2c4a65380aaf51e439714b0c88820cd392 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf8 -*-
from setuptools import setup
setup(
    name='certbot_adc',
    version='0.1',
    description="perform certbot auto dns challenge with DNS provider's API",
    url='http://github.com/btpka3/certbot-auto-dns-challenge',
    author='btpka3',
    author_email='btpka3@163.com',
    license='Apache License v2.0',
    packages=['certbot_adc'],
    install_requires=[
        "aliyun-python-sdk-core>=2.0.7",
        "aliyun-python-sdk-alidns>=2.0.7",
        "PyYAML>=3.12",
        "validate_email>=1.3",
        "qcloudapi-sdk-python>=2.0.9"
    ],
    scripts=[
        'bin/certbot-adc-check-conf',
        'bin/certbot-adc-manual-auth-hook',
        'bin/certbot-adc-manual-cleanup-hook',
    ],
    zip_safe=False
)
| 26.413793 | 77 | 0.612272 | 102 | 766 | 4.539216 | 0.598039 | 0.107991 | 0.084233 | 0.099352 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041391 | 0.211488 | 766 | 28 | 78 | 27.357143 | 0.725166 | 0.053525 | 0 | 0.083333 | 0 | 0 | 0.53112 | 0.248963 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.041667 | 0 | 0.041667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f3ac47f0d2af46f81dd25d634efde4812cbc00a9 | 2,263 | py | Python | IATI2LOD/src/gather data scripts/DbpediaData.py | KasperBrandt/IATI2LOD | 3a4fbcbf59d324e948b14509f74c50633d36a497 | [
"MIT"
] | 1 | 2019-08-03T00:52:44.000Z | 2019-08-03T00:52:44.000Z | IATI2LOD/src/gather data scripts/DbpediaData.py | KasperBrandt/IATI2LOD | 3a4fbcbf59d324e948b14509f74c50633d36a497 | [
"MIT"
] | 1 | 2015-10-11T09:47:25.000Z | 2015-10-16T12:58:43.000Z | IATI2LOD/src/gather data scripts/DbpediaData.py | KasperBrandt/IATI2LOD | 3a4fbcbf59d324e948b14509f74c50633d36a497 | [
"MIT"
] | 1 | 2021-05-29T03:43:01.000Z | 2021-05-29T03:43:01.000Z | ## By Kasper Brandt
## Last updated on 26-05-2013
import os, sys, datetime, urllib2, AddProvenance
from rdflib import Namespace, Graph
# Settings
dbpedia_folder = "/media/Acer/School/IATI-data/dataset/DBPedia/"
dbpedia_files = ["/media/Acer/School/IATI-data/mappings/DBPedia/dbpedia-countries-via-factbook.ttl",
                 "/media/Acer/School/IATI-data/mappings/DBPedia/dbpedia-organisations.ttl"]

if not os.path.isdir(dbpedia_folder):
    os.makedirs(dbpedia_folder)

# Provenance settings
Iati = Namespace("http://purl.org/collections/iati/")
start_time = datetime.datetime.now()

source_ttls = []

for dbpedia_file in dbpedia_files:
    with open(dbpedia_file, 'r') as f:
        for line in f:
            if "owl:sameAs" in line:
                line_list = line.split()
                if "<" in line_list[2]:
                    dbpedia_item = line_list[2].replace("<http://dbpedia.org/resource/", "").replace(">", "")
                else:
                    dbpedia_item = line_list[2].replace("dbpedia:", "")

                dbpedia_url = "http://dbpedia.org/data/" + dbpedia_item + ".ttl"
                source_ttls.append(dbpedia_url)

                turtle_response = urllib2.urlopen(dbpedia_url)
                turtle_data = turtle_response.read()

                print "Retrieved data from " + dbpedia_url + ", writing to file..."

                with open(dbpedia_folder + dbpedia_item + ".ttl", 'w') as turtle_f:
                    turtle_f.write(turtle_data)

# Add provenance
print "Adding provenance..."

provenance = Graph()
provenance = AddProvenance.addProv(Iati,
                                   provenance,
                                   'DBPedia',
                                   start_time,
                                   source_ttls,
                                   ['DBPedia'],
                                   "gather%20data%20scripts/DbpediaData.py")

provenance_turtle = provenance.serialize(format='turtle')

with open(dbpedia_folder + 'provenance-dbpedia.ttl', 'w') as turtle_file_prov:
    turtle_file_prov.write(provenance_turtle)

print "Done!"
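A standalone sketch of the `owl:sameAs` line parsing done in the loop above. The helper name `dbpedia_item_from` is hypothetical; it extracts the resource name from either a full-URI or a prefixed object token:

```python
# Hypothetical helper: pull the DBpedia resource name out of the third
# whitespace-separated token of an owl:sameAs triple line.
def dbpedia_item_from(line):
    token = line.split()[2]
    if "<" in token:
        return token.replace("<http://dbpedia.org/resource/", "").replace(">", "")
    return token.replace("dbpedia:", "")

print(dbpedia_item_from("<s> owl:sameAs <http://dbpedia.org/resource/Belgium> ."))  # Belgium
print(dbpedia_item_from("<s> owl:sameAs dbpedia:Belgium ."))                        # Belgium
```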
| 35.359375 | 107 | 0.550155 | 230 | 2,263 | 5.247826 | 0.4 | 0.053853 | 0.037283 | 0.047225 | 0.13836 | 0.119304 | 0.074565 | 0.074565 | 0 | 0 | 0 | 0.011386 | 0.340256 | 2,263 | 64 | 108 | 35.359375 | 0.797053 | 0.038445 | 0 | 0 | 0 | 0.025641 | 0.211157 | 0.118027 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.051282 | null | null | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f3ad47eaa0299f9f0aeb207c1f9ec4e1799a981c | 1,337 | py | Python | test_api.py | cagcoach/aikapi | cbb796d736b7a37536672b0f1841014b9b7545bb | [
"MIT"
] | null | null | null | test_api.py | cagcoach/aikapi | cbb796d736b7a37536672b0f1841014b9b7545bb | [
"MIT"
] | null | null | null | test_api.py | cagcoach/aikapi | cbb796d736b7a37536672b0f1841014b9b7545bb | [
"MIT"
] | null | null | null | from PythonAPI.bam import BAM
import numpy as np
dataset_dir = '/home/beatriz/Documentos/Work/final_datasets' # For Bea
# dataset_dir = '/home/almartmen/Github/aikapi' # For Alberto
dataset_name = '181129'
bam = BAM(dataset_dir, dataset_name, image_format='png')
# bam.unroll_videos()
# print(bam.get_persons_in_frame(800))
# print(bam.get_poses_in_frame(801))
# person3d = bam.get_person_in_frame(800, 1)
# print(person3d)
# pose3d = bam.get_pose_in_frame(799, 1)
# print(pose3d)
# print(bam.get_activities_for_person(2))
# print(bam.get_images_in_frame(1141))
# bam.unroll_videos(force=True, video=1)
# camera = bam.get_camera(3,1)
# points3d = np.array([[0.498339264765202, 3.2171029078369897, 1.5828869056621102]])
# points2d = camera.project_points(points3d)
# print(points2d)
# points2d_pose = camera.project_points(pose3d)
# print(points2d_pose)
# bam.unroll_videos()
print(bam.get_total_cameras())
print(bam.get_total_frames())
print(bam.get_person_ids())
print(bam.get_static_object_ids())
# print(bam.get_activity_names())
# print(bam.get_annotations_for_person(19))
print(bam.get_persons_in_frame(1000))
# p = bam.get_annotations_for_person(1)
# print(p)
### OBJECTS
# print(bam.get_static_objects_in_frame(2))
# print(bam.get_static_object_in_frame(3, 21))
# print(bam.get_annotations_for_static_object(21))
| 26.74 | 84 | 0.76739 | 206 | 1,337 | 4.665049 | 0.359223 | 0.112383 | 0.16025 | 0.05307 | 0.227888 | 0.094693 | 0 | 0 | 0 | 0 | 0 | 0.083882 | 0.090501 | 1,337 | 49 | 85 | 27.285714 | 0.706414 | 0.681376 | 0 | 0 | 0 | 0 | 0.134518 | 0.111675 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
f3c07f9cec57f76e746c76c17235957335504e0a | 986 | py | Python | wunderkafka/config/generated/enums.py | severstal-digital/wunderkafka | 8c56fa4559a8576af7f005fd916bf97127576278 | [
"Apache-2.0"
] | null | null | null | wunderkafka/config/generated/enums.py | severstal-digital/wunderkafka | 8c56fa4559a8576af7f005fd916bf97127576278 | [
"Apache-2.0"
] | null | null | null | wunderkafka/config/generated/enums.py | severstal-digital/wunderkafka | 8c56fa4559a8576af7f005fd916bf97127576278 | [
"Apache-2.0"
] | null | null | null | from enum import Enum
class BrokerAddressFamily(str, Enum):
    any = 'any'
    v4 = 'v4'
    v6 = 'v6'


class SecurityProtocol(str, Enum):
    plaintext = 'plaintext'
    ssl = 'ssl'
    sasl_plaintext = 'sasl_plaintext'
    sasl_ssl = 'sasl_ssl'


class SslEndpointIdentificationAlgorithm(str, Enum):
    none = 'none'
    https = 'https'


class IsolationLevel(str, Enum):
    read_uncommitted = 'read_uncommitted'
    read_committed = 'read_committed'


class AutoOffsetReset(str, Enum):
    smallest = 'smallest'
    earliest = 'earliest'
    beginning = 'beginning'
    largest = 'largest'
    latest = 'latest'
    end = 'end'
    error = 'error'


class CompressionCodec(str, Enum):
    none = 'none'
    gzip = 'gzip'
    snappy = 'snappy'
    lz4 = 'lz4'
    zstd = 'zstd'


class CompressionType(str, Enum):
    none = 'none'
    gzip = 'gzip'
    snappy = 'snappy'
    lz4 = 'lz4'
    zstd = 'zstd'


class QueuingStrategy(str, Enum):
    fifo = 'fifo'
    lifo = 'lifo'
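A minimal demonstration of why these enums mix in `str` (one enum is re-declared so the snippet runs on its own): str-mixin Enum members compare equal to their raw string values, so they can be dropped straight into librdkafka-style string config dicts.

```python
from enum import Enum

# Re-declared here only to make the snippet self-contained.
class SecurityProtocol(str, Enum):
    plaintext = 'plaintext'
    ssl = 'ssl'
    sasl_plaintext = 'sasl_plaintext'
    sasl_ssl = 'sasl_ssl'

# A str-mixin member is equal to its plain string value...
assert SecurityProtocol.sasl_ssl == 'sasl_ssl'

# ...and .value gives the raw string for config dicts.
conf = {'security.protocol': SecurityProtocol.sasl_ssl.value}
print(conf['security.protocol'])  # sasl_ssl
```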
| 17.607143 | 52 | 0.618661 | 104 | 986 | 5.788462 | 0.355769 | 0.093023 | 0.054817 | 0.074751 | 0.179402 | 0.179402 | 0.179402 | 0.179402 | 0.179402 | 0.179402 | 0 | 0.010929 | 0.257606 | 986 | 55 | 53 | 17.927273 | 0.811475 | 0 | 0 | 0.282051 | 0 | 0 | 0.178499 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.025641 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
f3c21ba1dec0f31a9f84ecd704fb5126c483d973 | 1,593 | py | Python | examples/example_ME5ME6.py | bmachiel/python-substratestack | 5689cdbc8582dac7a6b567c03e871ebb3a89343a | [
"BSD-2-Clause"
] | 1 | 2020-03-10T14:46:47.000Z | 2020-03-10T14:46:47.000Z | examples/example_ME5ME6.py | bmachiel/python-substratestack | 5689cdbc8582dac7a6b567c03e871ebb3a89343a | [
"BSD-2-Clause"
] | null | null | null | examples/example_ME5ME6.py | bmachiel/python-substratestack | 5689cdbc8582dac7a6b567c03e871ebb3a89343a | [
"BSD-2-Clause"
] | 1 | 2020-03-10T14:48:49.000Z | 2020-03-10T14:48:49.000Z | #!/bin/env python
# import the technology's complete stack definition
from example import stack
# in order to decrease simulation times, some metal layers can be removed from
# the stack, allowing more oxide layers to be merged in the next step
stack.remove_metal_layer_by_name('PO1')
stack.remove_metal_layer_by_name('ME1')
stack.remove_metal_layer_by_name('ME2')
stack.remove_metal_layer_by_name('ME3')
stack.remove_metal_layer_by_name('ME4')
#stack.remove_metal_layer_by_name('ME5')
#stack.remove_metal_layer_by_name('ME6')
if __name__ == '__main__':
    # Print the standardized stack to example_ME5ME6_std.pdf
    stack.draw('example_ME5ME6_std', pages=3, single_page=True)

# Merge oxide layers to reduce the stack's complexity, decreasing simulation
# times
stack.simplify()

if __name__ == '__main__':
    # Print the simplified stack to example_ME5ME6.pdf
    stack.draw('example_ME5ME6', pages=3, single_page=True)

# Write out a Momentum substrate definition file of the simplified stack
# write_momentum_substrate arguments: filename (without extension),
# infinite ground plane
# NOTE: this might produce bad output when the stack has not been
# simplified before!
stack.write_momentum_substrate('example_ME5ME6', True)

# Write out a Sonnet project that includes the simplified substrate stack
# write_sonnet_technology argument: filename (without extension)
# NOTE: this might produce bad output when the stack has not been
# simplified before!
stack.write_sonnet_technology('example_ME5ME6')
| 38.853659 | 78 | 0.755179 | 225 | 1,593 | 5.071111 | 0.4 | 0.067485 | 0.09816 | 0.128834 | 0.411043 | 0.300614 | 0.134969 | 0.134969 | 0.134969 | 0.134969 | 0 | 0.015909 | 0.171375 | 1,593 | 40 | 79 | 39.825 | 0.848485 | 0.609542 | 0 | 0.153846 | 0 | 0 | 0.150912 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.076923 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
f3c32a1c29b23a9a39eedc92636dc8c119583df9 | 450 | py | Python | other/dingding/dingtalk/api/rest/OapiProjectPointAddRequest.py | hth945/pytest | 83e2aada82a2c6a0fdd1721320e5bf8b8fd59abc | [
"Apache-2.0"
] | null | null | null | other/dingding/dingtalk/api/rest/OapiProjectPointAddRequest.py | hth945/pytest | 83e2aada82a2c6a0fdd1721320e5bf8b8fd59abc | [
"Apache-2.0"
] | null | null | null | other/dingding/dingtalk/api/rest/OapiProjectPointAddRequest.py | hth945/pytest | 83e2aada82a2c6a0fdd1721320e5bf8b8fd59abc | [
"Apache-2.0"
] | null | null | null | '''
Created by auto_sdk on 2020.12.24
'''
from dingtalk.api.base import RestApi
class OapiProjectPointAddRequest(RestApi):

    def __init__(self, url=None):
        RestApi.__init__(self, url)
        self.action_time = None
        self.rule_code = None
        self.rule_name = None
        self.score = None
        self.tenant_id = None
        self.userid = None
        self.uuid = None

    def getHttpMethod(self):
        return 'POST'

    def getapiname(self):
        return 'dingtalk.oapi.project.point.add'
| 21.428571 | 42 | 0.733333 | 65 | 450 | 4.876923 | 0.6 | 0.15142 | 0.069401 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021053 | 0.155556 | 450 | 20 | 43 | 22.5 | 0.813158 | 0.073333 | 0 | 0 | 0 | 0 | 0.085575 | 0.075795 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.066667 | 0.133333 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
f3c6b52efc5243517d9844c7baac03ac3dc8305c | 645 | py | Python | django_site/parser_vacancies/migrations/0005_vacancies_count.py | StGrail/v.it | 765cb720a16b0fff66e013b8b66a80d99168c16b | [
"MIT"
] | null | null | null | django_site/parser_vacancies/migrations/0005_vacancies_count.py | StGrail/v.it | 765cb720a16b0fff66e013b8b66a80d99168c16b | [
"MIT"
] | 1 | 2020-12-23T19:08:20.000Z | 2020-12-23T19:08:20.000Z | django_site/parser_vacancies/migrations/0005_vacancies_count.py | StGrail/v.it | 765cb720a16b0fff66e013b8b66a80d99168c16b | [
"MIT"
] | null | null | null | # Generated by Django 3.1.4 on 2021-02-28 15:09
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('parser_vacancies', '0004_auto_20210207_0052'),
    ]

    operations = [
        migrations.CreateModel(
            name='Vacancies_count',
            fields=[
                ('id', models.IntegerField(primary_key=True, serialize=False)),
                ('data', models.DateField(null=True, unique=True)),
                ('added_today', models.IntegerField(null=True)),
                ('total_vacancies_count', models.IntegerField(null=True)),
            ],
        ),
    ]
| 28.043478 | 79 | 0.586047 | 64 | 645 | 5.765625 | 0.703125 | 0.146341 | 0.119241 | 0.140921 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067391 | 0.286822 | 645 | 22 | 80 | 29.318182 | 0.734783 | 0.069767 | 0 | 0 | 1 | 0 | 0.153846 | 0.073579 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
45edca108e30053b7caf54d078e854708b738e3e | 300 | py | Python | Scripts/TK_xls-to-peaks.py | colinwalshbrown/CWB_utils | 86675812f9398845d1994b57500830e2c3dc6cc0 | [
"MIT"
] | null | null | null | Scripts/TK_xls-to-peaks.py | colinwalshbrown/CWB_utils | 86675812f9398845d1994b57500830e2c3dc6cc0 | [
"MIT"
] | null | null | null | Scripts/TK_xls-to-peaks.py | colinwalshbrown/CWB_utils | 86675812f9398845d1994b57500830e2c3dc6cc0 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import sys
if len(sys.argv) < 2:
    print "usage: TK_xls-to-peaks.py <TK_xls>"
    sys.exit(0)

for line in open(sys.argv[1]):
    l = line[:-1].split()
    for (i, x) in enumerate(l[4][:-1].split(",")):
        print "\t".join((l[0], l[1], x, l[5][:-1].split(",")[i]))
| 21.428571 | 62 | 0.52 | 56 | 300 | 2.75 | 0.589286 | 0.116883 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042017 | 0.206667 | 300 | 13 | 63 | 23.076923 | 0.605042 | 0.066667 | 0 | 0 | 0 | 0 | 0.136201 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.125 | null | null | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
45f5aa52358da93645231378c0fbfaf7b12d6756 | 2,319 | py | Python | examples/custom_node.py | tomaszkurgan/sapling | 717681f04f9cb00a47322c5a16f052b8658d558b | [
"MIT"
] | null | null | null | examples/custom_node.py | tomaszkurgan/sapling | 717681f04f9cb00a47322c5a16f052b8658d558b | [
"MIT"
] | null | null | null | examples/custom_node.py | tomaszkurgan/sapling | 717681f04f9cb00a47322c5a16f052b8658d558b | [
"MIT"
] | null | null | null | import uuid
import sapling
# create your own node by inheritance from sapling.Node
class MyNode(sapling.Node):
    def __init__(self, name, data=None, id=None):
        self.id = id or uuid.uuid4()
        super(MyNode, self).__init__(name, data=data)

# There are two ways to force the sapling.Tree to use your node's class:
#   1. create a new tree class using sapling.Tree and your node class as bases.
#      Because sapling.Tree inherits from sapling.Node, dependency injection
#      by inheritance gives you the possibility to override much of
#      sapling.Tree's behaviour simultaneously with sapling.Node's, without
#      any additional effort of overriding functions directly in the tree
#      class. All nodes in the tree, including the tree-node itself, get all
#      features from your new node class.
#   2. pass your node class as a parameter while constructing the Tree.
#      Option 2 is a monkey patch which creates a new class inherited from
#      the Tree class and the new node class at runtime.
# Because of that, the first solution is the preferred one.
class MyTree(sapling.Tree, MyNode):
    pass

if __name__ == '__main__':
    t = MyTree('a')
    t.insert('a/b/c/d/e', force=True)
    t.insert('a/b/f/g', force=True)
    t.insert('a/b/h/i/j', force=True)
    t.insert('a/b/h/i/k', force=True)
    t.insert('a/b/h/i/l', force=True)
    t.insert('a/z', force=True)
    t.insert('a/zz/e/f/g', force=True)
    t.insert('a/zz/z/f/g', force=True)
    print t.printout()

    # all nodes in the tree are instances of MyNode
    for node in t:
        print node, type(node), node.id
    print '\n\n'

    # if you don't want to create a new Tree class, use the optional
    # parameter while creating the Tree instance.
    # But remember - it's monkey patching
    t2 = sapling.Tree('a', node_cls=MyNode)
    t2.insert('a/b/c/d/e', force=True)
    t2.insert('a/b/f/g', force=True)
    t2.insert('a/b/h/i/j', force=True)
    t2.insert('a/b/h/i/k', force=True)
    t2.insert('a/b/h/i/l', force=True)
    print t2.printout()

    for node in t2:
        print node, type(node), node.id
    print '\n\n'

    t3 = sapling.Tree('a', node_cls=MyNode)
    print type(t3)
    t4 = sapling.Tree('a', node_cls=MyNode)
    print type(t4)
    print type(t2) is type(t4)

    t5 = MyTree('a')
    print type(t5)
    print type(t) is type(t5)
| 31.337838 | 101 | 0.661492 | 395 | 2,319 | 3.835443 | 0.311392 | 0.060066 | 0.052805 | 0.073927 | 0.271287 | 0.266007 | 0.233663 | 0.207921 | 0.039604 | 0 | 0 | 0.011519 | 0.213885 | 2,319 | 73 | 102 | 31.767123 | 0.819528 | 0.432083 | 0 | 0.1 | 0 | 0 | 0.100077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.025 | 0.05 | null | null | 0.275 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3404a220cc2c7335fc1d99ac2799acd671b399e7 | 1,410 | py | Python | templates/pythonScripts/ExtraMeeting.py | cameronosmith/webreg-to-google-calendar | b245d17ce763082f253e98755d1e24a973b6d1ea | [
"MIT"
] | null | null | null | templates/pythonScripts/ExtraMeeting.py | cameronosmith/webreg-to-google-calendar | b245d17ce763082f253e98755d1e24a973b6d1ea | [
"MIT"
] | null | null | null | templates/pythonScripts/ExtraMeeting.py | cameronosmith/webreg-to-google-calendar | b245d17ce763082f253e98755d1e24a973b6d1ea | [
"MIT"
] | null | null | null | #class for meetings other than lecture
class ExtraMeeting:

    def __init__(self):
        self.type = "TBA"
        self.days = "TBA"
        self.time = "TBA"
        self.building = "TBA"
        self.room = "TBA"

    # self-explanatory setter methods
    def setType(self, type):
        if type == 'DI':
            self.type = 'Discussion'
        elif type == 'LA':
            self.type = 'LAB'
        elif type == 'FI':
            self.type = 'Final'
        else:
            self.type = type

    def setDays(self, days):
        self.days = days

    def setTime(self, time):
        self.time = time

    def setBuilding(self, building):
        self.building = building

    def setRoom(self, room):
        self.room = room

    # getters
    def getType(self):
        if self.type:
            return self.type
        else:
            return ""

    def getDays(self):
        if self.days:
            return self.days
        else:
            return ""

    def getTime(self):
        if self.time:
            return self.time
        else:
            return ""

    def getBuilding(self):
        if self.building:
            return self.building
        else:
            return ""

    def getRoom(self):
        if self.room:
            return self.room
        else:
            return ""

    def printStats(self):
        print self.getType() + self.getDays() + self.getTime() + self.getBuilding() + self.getRoom()
| 23.114754 | 92 | 0.51844 | 154 | 1,410 | 4.720779 | 0.272727 | 0.088033 | 0.068776 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.366667 | 1,410 | 60 | 93 | 23.5 | 0.81411 | 0.053191 | 0 | 0.215686 | 0 | 0 | 0.029345 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.039216 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3411f5389305082b1750729cc8d9ca2b0b1a5f11 | 529 | py | Python | home/migrations/0010_auto_20190604_1147.py | xni06/wagtail-CMS | defe0f46e8109e96d6d5e9fd4cf002790fbcd54b | [
"MIT"
] | 4 | 2019-06-04T07:18:44.000Z | 2020-06-15T22:27:36.000Z | home/migrations/0010_auto_20190604_1147.py | jaspotsangbam/wagtail-CMS | 2ec0dd05ba1f9339b705ce529588131049aa9bc7 | [
"MIT"
] | 38 | 2019-05-09T13:14:56.000Z | 2022-03-12T00:54:57.000Z | home/migrations/0010_auto_20190604_1147.py | jaspotsangbam/wagtail-CMS | 2ec0dd05ba1f9339b705ce529588131049aa9bc7 | [
"MIT"
] | 3 | 2019-09-26T14:32:36.000Z | 2021-05-06T15:48:01.000Z | # Generated by Django 2.1.8 on 2019-06-04 11:47
from django.db import migrations
import wagtail.core.blocks
import wagtail.core.fields
class Migration(migrations.Migration):

    dependencies = [
        ('home', '0009_homepage_header'),
    ]

    operations = [
        migrations.AlterField(
            model_name='homepage',
            name='links',
            field=wagtail.core.fields.StreamField([('link', wagtail.core.blocks.PageChooserBlock()), ('link_header', wagtail.core.blocks.TextBlock())]),
        ),
    ]
| 25.190476 | 152 | 0.644612 | 58 | 529 | 5.810345 | 0.637931 | 0.163205 | 0.151335 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046117 | 0.221172 | 529 | 20 | 153 | 26.45 | 0.771845 | 0.085066 | 0 | 0 | 1 | 0 | 0.107884 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.214286 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
341b8abe8d9d5f16043f17419ff87aa397d65daa | 2,681 | py | Python | src/crud/user.py | JuanFKurucz/proyecto-seguridad | feb805c785afc57de19244e7916f232d3798a768 | [
"MIT"
] | null | null | null | src/crud/user.py | JuanFKurucz/proyecto-seguridad | feb805c785afc57de19244e7916f232d3798a768 | [
"MIT"
] | null | null | null | src/crud/user.py | JuanFKurucz/proyecto-seguridad | feb805c785afc57de19244e7916f232d3798a768 | [
"MIT"
] | null | null | null | import os
from datetime import datetime, timedelta
from src.database.models.user import User # noqa
from src.database.models.file import File # noqa
from src.database.session import db_session # noqa
from src.utils.hash import hash_pass
from sqlalchemy.orm.exc import NoResultFound
from src.utils.cipher import encrypt_file, decrypt_file
from src.utils.hash import hash_md5, generate_token
from src.utils.mail_sender import send_mail_login
def create_user(username, email, password):
    user = User(username=username, email=email,
                hashed_password=hash_pass(username, password))
    try:
        db_session.add(user)
        db_session.commit()
        db_session.flush()
        return user
    except Exception:
        db_session.rollback()
        return None


def connect_user(username, password):
    user = db_session.query(User).filter(User.username == username).first()
    if user and user.check_password(password=password):
        user.login_token = str(generate_token())
        user.login_token_expiration = (datetime.now() + timedelta(minutes=5)).timestamp()
        db_session.add(user)
        db_session.commit()
        db_session.flush()
        send_mail_login(str(user.email), user.login_token)
        return user
    return None


def encrypt_user_file(user, path, key):
    if not key:
        print("Error: a key must be provided")
        return
    if not path:
        print("Error: a file path must be provided")
        return
    nonce, ciphertext, mac = encrypt_file(hash_md5(key).encode("utf8"), path)
    if not ciphertext:
        print("Error encrypting the file; the file you are trying to encrypt may be empty")
        return
    user.files.append(
        File(name=os.path.basename(path), encrypted_file=ciphertext, nonce=nonce, mac=mac)
    )
    db_session.commit()


def decrypt_user_file(user, file_id, path, key):
    if not path:
        print("No output file path was specified")
        return
    try:
        decrypt_file(
            hash_md5(key).encode("utf8"),
            db_session.query(File).filter(File.id == file_id).one(),
            path,
        )
    except NoResultFound:
        print("The file does not exist")
    except Exception:
        print("Unexpected error")


def check_token_user(user, token):
    if user.login_token and user.login_token_expiration:
        date = datetime.fromtimestamp(user.login_token_expiration)
        if (date - datetime.now()).total_seconds() > 0:
            if user.login_token == token:
                return True
            else:
                print("Invalid token")
        else:
            print("Token expired")
    return False
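A standalone sketch of the epoch-timestamp expiry mechanism used by `connect_user` and `check_token_user`: store a float timestamp five minutes in the future, then test whether it is still ahead of now.

```python
from datetime import datetime, timedelta

# Store the expiry as a POSIX float, as the login_token_expiration column does.
expiration = (datetime.now() + timedelta(minutes=5)).timestamp()

# Later: convert back to a datetime and check how much time remains.
remaining = (datetime.fromtimestamp(expiration) - datetime.now()).total_seconds()
print(remaining > 0)  # True
```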
| 31.174419 | 100 | 0.664677 | 350 | 2,681 | 4.945714 | 0.305714 | 0.057192 | 0.056615 | 0.041594 | 0.140959 | 0.140959 | 0.051993 | 0.051993 | 0.051993 | 0.051993 | 0 | 0.003443 | 0.241701 | 2,681 | 85 | 101 | 31.541176 | 0.848008 | 0.005222 | 0 | 0.295775 | 0 | 0 | 0.104394 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.070423 | false | 0.070423 | 0.140845 | 0 | 0.352113 | 0.112676 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
342de811bc99973f044dafb400a3ec144fec6901 | 2,092 | py | Python | 1441. Build an Array With Stack Operations.py | alijon30/Leetcode | 73e8171945e1fcbc59e76f79667c9ea130db27e9 | [
"Unlicense"
] | null | null | null | 1441. Build an Array With Stack Operations.py | alijon30/Leetcode | 73e8171945e1fcbc59e76f79667c9ea130db27e9 | [
"Unlicense"
] | null | null | null | 1441. Build an Array With Stack Operations.py | alijon30/Leetcode | 73e8171945e1fcbc59e76f79667c9ea130db27e9 | [
"Unlicense"
] | null | null | null | """1441. Build an Array With Stack Operations

You are given an array target and an integer n.

In each iteration, you will read a number from list = [1, 2, 3, ..., n].

Build the target array using the following operations:
"Push": reads a new element from the beginning of the list and pushes it onto the array.
"Pop": deletes the last element of the array.
If the target array is already built, stop reading more elements.

Return a list of the operations needed to build target. The test cases are
generated so that the answer is unique.

Example 1:
Input: target = [1,3], n = 3
Output: ["Push","Push","Pop","Push"]
Explanation:
Read number 1 and automatically push it onto the array -> [1]
Read number 2, automatically push it, then Pop it -> [1]
Read number 3 and automatically push it onto the array -> [1,3]

Example 2:
Input: target = [1,2,3], n = 3
Output: ["Push","Push","Push"]

Example 3:
Input: target = [1,2], n = 4
Output: ["Push","Push"]
Explanation: You only need to read the first 2 numbers and stop.
"""
from typing import List


class Solution:
    def buildArray(self, target: List[int], n: int) -> List[str]:
        stack = Stack()
        final = []
        for i in range(1, target[-1] + 1):
            if i in target:
                stack.push(i)
                final.append("Push")
            else:
                stack.push(i)
                stack.pop()
                final.append("Push")
                final.append("Pop")
        return final


class Stack:
    def __init__(self):
        self.list = []

    def __str__(self):
        values = [str(value) for value in self.list]
        return ''.join(values)

    def push(self, value):
        self.list.append(value)

    def isEmpty(self):
        if self.list == []:
            return True
        else:
            return False

    def pop(self):
        if self.isEmpty():
            return "The stack is empty"
        else:
            return self.list.pop()

    def peek(self):
        if self.isEmpty():
            return "The stack is empty"
        else:
            return self.list[-1]
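An equivalent sketch of the same algorithm without the helper Stack (the function name `build_array` is not from the original file): walk 1..target[-1], emit "Push" for wanted numbers and "Push" then "Pop" for skipped ones.

```python
from typing import List

def build_array(target: List[int], n: int) -> List[str]:
    ops, wanted = [], set(target)
    # Only numbers up to the last target element ever need to be read.
    for i in range(1, target[-1] + 1):
        ops.append("Push")
        if i not in wanted:
            ops.append("Pop")
    return ops

print(build_array([1, 3], 3))  # ['Push', 'Push', 'Pop', 'Push']
```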
| 26.15 | 114 | 0.565966 | 290 | 2,092 | 4.055172 | 0.306897 | 0.040816 | 0.034014 | 0.056122 | 0.202381 | 0.202381 | 0.147959 | 0.095238 | 0.095238 | 0.095238 | 0 | 0.019901 | 0.327438 | 2,092 | 79 | 115 | 26.481013 | 0.81592 | 0 | 0 | 0.206897 | 0 | 0 | 0.042543 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
342f8555d424c554a10a704d33b1cac436a6cb38 | 221 | py | Python | introduction-to-data-visualization-in-python/3. Statistical plots with Seaborn/script_11.py | nhutnamhcmus/datacamp-playground | 25457e813b1145e1d335562286715eeddd1c1a7b | [
"MIT"
] | 1 | 2021-05-08T11:09:27.000Z | 2021-05-08T11:09:27.000Z | introduction-to-data-visualization-in-python/3. Statistical plots with Seaborn/script_11.py | nhutnamhcmus/datacamp-playground | 25457e813b1145e1d335562286715eeddd1c1a7b | [
"MIT"
] | 1 | 2022-03-12T15:42:14.000Z | 2022-03-12T15:42:14.000Z | introduction-to-data-visualization-in-python/3. Statistical plots with Seaborn/script_11.py | nhutnamhcmus/datacamp-playground | 25457e813b1145e1d335562286715eeddd1c1a7b | [
"MIT"
] | 1 | 2021-04-30T18:24:19.000Z | 2021-04-30T18:24:19.000Z | # Plotting distributions pairwise (1)
# Import plotting modules (assumption: in the original exercise these and
# the `auto` DataFrame are preloaded)
import matplotlib.pyplot as plt
import seaborn as sns

# Print the first 5 rows of the DataFrame
print(auto.head())
# Plot the pairwise joint distributions from the DataFrame
sns.pairplot(auto)
# Display the plot
plt.show()
| 20.090909 | 60 | 0.723982 | 31 | 221 | 5.16129 | 0.677419 | 0.15 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011299 | 0.199095 | 221 | 10 | 61 | 22.1 | 0.892655 | 0.678733 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
343b36003417f4c54f3846c29ad26c071c003af4 | 1,146 | py | Python | src/neuro_comma/logger.py | Andhs/neuro-comma | 9e0203f46a08dced3bcc7d7f55065c8ff8317ef7 | [
"MIT"
] | 32 | 2021-06-15T09:41:10.000Z | 2022-02-13T09:55:37.000Z | src/neuro_comma/logger.py | vkirilenko/neuro-comma | d76c97b39d0a60d7e07eb37f6d68ed491241c051 | [
"MIT"
] | 2 | 2021-06-24T16:25:34.000Z | 2021-07-09T09:24:20.000Z | src/neuro_comma/logger.py | vkirilenko/neuro-comma | d76c97b39d0a60d7e07eb37f6d68ed491241c051 | [
"MIT"
] | 5 | 2021-07-30T07:33:39.000Z | 2021-12-30T13:06:15.000Z | def log_text(file_path, log):
    if not log.endswith('\n'):
        log += '\n'
    print(log)
    with open(file_path, 'a') as f:
        f.write(log)


def log_args(file_path, args):
    log = f"Args: {args}\n"
    log_text(file_path, log)


def log_train_epoch(file_path, epoch, train_loss, train_accuracy):
    log = f"epoch: {epoch}, Train loss: {train_loss}, Train accuracy: {train_accuracy}\n"
    log_text(file_path, log)


def log_val_epoch(file_path, epoch, val_loss, val_acc):
    log = f"epoch: {epoch}, Val loss: {val_loss}, Val accuracy: {val_acc}\n"
    log_text(file_path, log)


def log_test_metrics(file_path, precision, recall, f1, accuracy, cm):
    log = (f"Precision: {precision}\n"
           f"Recall: {recall}\n"
           f"F1 score: {f1}\n"
           f"Accuracy: {accuracy}\n"
           f"Confusion Matrix:\n{cm}\n")
    log_text(file_path, log)


def log_target_test_metrics(file_path, target, precision, recall, f1):
    log = (f"{target}:\n"
           f"\tPrecision: {round(precision, 4)}\n"
           f"\tRecall: {round(recall, 4)}\n"
           f"\tF1 score: {round(f1, 4)}\n")
    log_text(file_path, log)
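A usage sketch of the append-style logging above, with a throwaway log file. `log_text` is re-declared (minus the console print) so the snippet runs on its own:

```python
import os
import tempfile

# Minimal re-sketch of log_text: ensure a trailing newline, append to file.
def log_text(file_path, log):
    if not log.endswith('\n'):
        log += '\n'
    with open(file_path, 'a') as f:
        f.write(log)

log_path = os.path.join(tempfile.mkdtemp(), 'train.log')
log_text(log_path, 'epoch: 1, Train loss: 0.52')
log_text(log_path, 'epoch: 2, Train loss: 0.41')

with open(log_path) as f:
    contents = f.read()
print(contents.count('\n'))  # 2
```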
| 28.65 | 89 | 0.615183 | 178 | 1,146 | 3.758427 | 0.213483 | 0.143498 | 0.098655 | 0.134529 | 0.204783 | 0.177877 | 0.149477 | 0.149477 | 0 | 0 | 0 | 0.010135 | 0.225131 | 1,146 | 39 | 90 | 29.384615 | 0.743243 | 0 | 0 | 0.178571 | 0 | 0 | 0.321117 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0 | 0 | 0.214286 | 0.035714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
344753395c32fdc8e1cad3a8b60e006debbc3651 | 354 | py | Python | social_blog/blog_posts/forms.py | higorspinto/Social-Blog | bec89351bf76778059f112c0e2a66de9348dda54 | [
"MIT"
] | null | null | null | social_blog/blog_posts/forms.py | higorspinto/Social-Blog | bec89351bf76778059f112c0e2a66de9348dda54 | [
"MIT"
] | 4 | 2021-03-19T03:43:40.000Z | 2022-01-13T01:39:30.000Z | social_blog/blog_posts/forms.py | higorspinto/Social-Blog | bec89351bf76778059f112c0e2a66de9348dda54 | [
"MIT"
] | null | null | null | # blogs_posts/forms.py
from flask_wtf import FlaskForm
from wtforms import StringField, TextAreaField, SubmitField
from wtforms.validators import DataRequired
class BlogPostForm(FlaskForm):
    title = StringField("Title", validators=[DataRequired()])
    text = TextAreaField("Text", validators=[DataRequired()])
    submit = SubmitField("Post")
| 29.5 | 61 | 0.762712 | 36 | 354 | 7.444444 | 0.583333 | 0.08209 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135593 | 354 | 12 | 62 | 29.5 | 0.875817 | 0.056497 | 0 | 0 | 0 | 0 | 0.039039 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
3447c0a2e4e87d813f8b9dc78d7b0714f0ad5791 | 1,568 | py | Python | blog/migrations/0001_initial.py | ht-90/django-blog | b7dd80a4066ecf3c288461e0c5785f953a1d2e5f | [
"MIT"
] | null | null | null | blog/migrations/0001_initial.py | ht-90/django-blog | b7dd80a4066ecf3c288461e0c5785f953a1d2e5f | [
"MIT"
] | 1 | 2021-02-23T11:25:37.000Z | 2021-02-23T11:25:37.000Z | blog/migrations/0001_initial.py | ht-90/django-blog | b7dd80a4066ecf3c288461e0c5785f953a1d2e5f | [
"MIT"
] | null | null | null | # Generated by Django 3.1.7 on 2021-02-26 08:07
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Category',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=255, null=True, unique=True)),
],
),
migrations.CreateModel(
name='Tag',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=255, null=True, unique=True)),
],
),
migrations.CreateModel(
name='Post',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True, null=True)),
('updated', models.DateTimeField(auto_now_add=True, null=True)),
('title', models.CharField(max_length=255, null=True)),
('body', models.TextField(blank=True, null=True)),
('category', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='blog.category')),
('tag', models.ManyToManyField(blank=True, to='blog.Tag')),
],
),
]
| 37.333333 | 124 | 0.574617 | 164 | 1,568 | 5.390244 | 0.359756 | 0.063348 | 0.084842 | 0.078054 | 0.567873 | 0.567873 | 0.567873 | 0.528281 | 0.43552 | 0.43552 | 0 | 0.021333 | 0.282526 | 1,568 | 41 | 125 | 38.243902 | 0.764444 | 0.028699 | 0 | 0.5 | 1 | 0 | 0.059172 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3449704981ebbc8bc32237e12d8a30eef51a79de | 1,566 | py | Python | evaluate/previous_works/svsyn/supervision/photometric.py | Syniez/Joint_360depth | 4f28c3b5b7f648173480052e205e898c6c7a5151 | [
"MIT"
] | 92 | 2019-09-08T09:55:05.000Z | 2022-02-21T21:29:40.000Z | supervision/photometric.py | zjsprit/SphericalViewSynthesis | fcdec95bf3ad109767d27396434b51cf3aad2b4b | [
"BSD-2-Clause"
] | 4 | 2020-05-12T02:29:36.000Z | 2021-11-26T07:49:43.000Z | supervision/photometric.py | zjsprit/SphericalViewSynthesis | fcdec95bf3ad109767d27396434b51cf3aad2b4b | [
"BSD-2-Clause"
] | 26 | 2019-09-16T02:26:33.000Z | 2021-10-21T03:55:02.000Z | import torch
from .ssim import *
class PhotometricLossParameters(object):
def __init__(self, alpha=0.85, l1_estimator='none',\
ssim_estimator='none', window=7, std=1.5, ssim_mode='gaussian'):
super(PhotometricLossParameters, self).__init__()
self.alpha = alpha
self.l1_estimator = l1_estimator
self.ssim_estimator = ssim_estimator
self.window = window
self.std = std
self.ssim_mode = ssim_mode
def get_alpha(self):
return self.alpha
def get_l1_estimator(self):
return self.l1_estimator
def get_ssim_estimator(self):
return self.ssim_estimator
def get_window(self):
return self.window
def get_std(self):
return self.std
def get_ssim_mode(self):
return self.ssim_mode
def calculate_loss(pred, gt, params, mask, weights):
valid_mask = mask.type(gt.dtype)
masked_gt = gt * valid_mask
masked_pred = pred * valid_mask
l1 = torch.abs(masked_gt - masked_pred)
d_ssim = torch.clamp(
(
1 - ssim_loss(masked_pred, masked_gt, kernel_size=params.get_window(),
std=params.get_std(), mode=params.get_ssim_mode())
) / 2, 0, 1)
loss = (
d_ssim * params.get_alpha()
+ l1 * (1 - params.get_alpha())
)
loss *= valid_mask
loss *= weights
count = torch.sum(mask, dim=[1, 2, 3], keepdim=True).float()
return torch.mean(torch.sum(loss, dim=[1, 2, 3], keepdim=True) / count)
| 30.115385 | 83 | 0.604725 | 203 | 1,566 | 4.418719 | 0.251232 | 0.053512 | 0.093645 | 0.051282 | 0.037904 | 0.037904 | 0 | 0 | 0 | 0 | 0 | 0.021505 | 0.287356 | 1,566 | 51 | 84 | 30.705882 | 0.782258 | 0 | 0 | 0 | 0 | 0 | 0.010561 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.190476 | false | 0 | 0.047619 | 0.142857 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
3449bef65b5b8a27951b1370c04e2abfac01c124 | 1,917 | py | Python | Python/waytoolong.py | pretam591/All_Program_helper | 83f52bce53bdcd0b115753ecda610d21aa0ddd2a | [
"MIT"
] | 16 | 2021-10-03T11:15:49.000Z | 2021-10-31T04:40:24.000Z | Python/waytoolong.py | pretam591/All_Program_helper | 83f52bce53bdcd0b115753ecda610d21aa0ddd2a | [
"MIT"
] | 232 | 2021-10-02T14:51:43.000Z | 2021-11-14T08:23:27.000Z | Python/waytoolong.py | pretam591/All_Program_helper | 83f52bce53bdcd0b115753ecda610d21aa0ddd2a | [
"MIT"
] | 166 | 2021-10-02T13:56:34.000Z | 2021-10-31T17:56:34.000Z | # A. Way Too Long Words
# -------------------------------
# time limit per test 1 second
# memory limit per test 256 megabytes
# input :standard input
# output :standard output
# Sometimes some words like "localization" or "internationalization" are so long that writing
# them many times in one text is quite tiresome.
# Let's consider a word too long, if its length is strictly more than 10 characters.
# All too long words should be replaced with a special abbreviation.
# This abbreviation is made like this: we write down the first and the last letter of a word and between
# them we write the number of letters between the first and the last letters.
# That number is in decimal system and doesn't contain any leading zeroes.
# Thus, "localization" will be spelt as "l10n", and "internationalization" will be spelt as "i18n".
# You are suggested to automatize the process of changing the words with abbreviations.
# At that all too long words should be replaced by the abbreviation and the words that are not
# too long should not undergo any changes.
# Input
# The first line contains an integer n (1 ≤ n ≤ 100). Each of the following n lines contains one word.
# All the words consist of lowercase Latin letters and possess the lengths of from 1 to 100 characters.
# Output
# Print n lines. The i-th line should contain the result of replacing of the i-th word from the input data.
# Examples
# input
# 4
# word
# localization
# internationalization
# pneumonoultramicroscopicsilicovolcanoconiosis
# output
# word
# l10n
# i18n
# p43s
#solution to above problem
t=int(input())
for i in range(t):
s=input()
if(len(s)<=10):
print(s)
else:
print(s[0]+str(len(s)-2)+s[-1])
| 36.865385 | 107 | 0.649974 | 275 | 1,917 | 4.541818 | 0.487273 | 0.028022 | 0.028823 | 0.024019 | 0.078463 | 0.04964 | 0.04964 | 0 | 0 | 0 | 0 | 0.021629 | 0.276474 | 1,917 | 51 | 108 | 37.588235 | 0.876712 | 0.869588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
344a8a5e768a6b664f7f2dcbefec0978989b66e3 | 584 | py | Python | stake-pool/py/stake/constants.py | wowswap-io/solana-program-library | 3af176d6c4d872afe029396a597736484d4a8212 | [
"Apache-2.0"
] | null | null | null | stake-pool/py/stake/constants.py | wowswap-io/solana-program-library | 3af176d6c4d872afe029396a597736484d4a8212 | [
"Apache-2.0"
] | 21 | 2022-03-18T20:20:29.000Z | 2022-03-29T08:38:49.000Z | stake-pool/py/stake/constants.py | wowswap-io/solana-program-library | 3af176d6c4d872afe029396a597736484d4a8212 | [
"Apache-2.0"
] | null | null | null | """Stake Program Constants."""
from solana.publickey import PublicKey
STAKE_PROGRAM_ID: PublicKey = PublicKey("Stake11111111111111111111111111111111111111")
"""Public key that identifies the Stake program."""
SYSVAR_STAKE_CONFIG_ID: PublicKey = PublicKey("StakeConfig11111111111111111111111111111111")
"""Public key that identifies the Stake config sysvar."""
STAKE_LEN: int = 200
"""Size of stake account."""
LAMPORTS_PER_SOL: int = 1_000_000_000
"""Number of lamports per SOL"""
MINIMUM_DELEGATION: int = LAMPORTS_PER_SOL
"""Minimum delegation allowed by the stake program"""
| 30.736842 | 92 | 0.789384 | 70 | 584 | 6.385714 | 0.471429 | 0.107383 | 0.09396 | 0.102908 | 0.277405 | 0.138702 | 0 | 0 | 0 | 0 | 0 | 0.159615 | 0.109589 | 584 | 18 | 93 | 32.444444 | 0.7 | 0.041096 | 0 | 0 | 0 | 0 | 0.258258 | 0.258258 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
344bdfd8bb0a323b85e099ee0a1e40416c891086 | 1,774 | py | Python | tests/chem/test_mol.py | ShantamShorewala/aizynthfinder | 6b15d5846558b14c4ce3c353727d9d676af7f6fb | [
"MIT"
] | 219 | 2020-06-15T08:04:53.000Z | 2022-03-31T09:02:47.000Z | tests/chem/test_mol.py | ShantamShorewala/aizynthfinder | 6b15d5846558b14c4ce3c353727d9d676af7f6fb | [
"MIT"
] | 56 | 2020-08-14T14:50:42.000Z | 2022-03-22T12:49:06.000Z | tests/chem/test_mol.py | ShantamShorewala/aizynthfinder | 6b15d5846558b14c4ce3c353727d9d676af7f6fb | [
"MIT"
] | 58 | 2020-06-15T13:36:42.000Z | 2022-03-21T06:18:02.000Z | import pytest
from rdkit import Chem
from aizynthfinder.chem import MoleculeException, Molecule
def test_no_input():
with pytest.raises(MoleculeException):
Molecule()
def test_create_with_mol():
rd_mol = Chem.MolFromSmiles("O")
mol = Molecule(rd_mol=rd_mol)
assert mol.smiles == "O"
def test_create_with_smiles():
mol = Molecule(smiles="O")
assert Chem.MolToSmiles(mol.rd_mol) == "O"
def test_inchi():
mol = Molecule(smiles="O")
assert mol.inchi == "InChI=1S/H2O/h1H2"
def test_inchi_key():
mol = Molecule(smiles="O")
assert mol.inchi_key == "XLYOFNOQVPJJNP-UHFFFAOYSA-N"
def test_fingerprint():
mol = Molecule(smiles="O")
assert sum(mol.fingerprint(2)) == 1
assert sum(mol.fingerprint(2, 10)) == 1
def test_sanitize():
mol = Molecule(smiles="O", sanitize=True)
assert Chem.MolToSmiles(mol.rd_mol) == "O"
mol = Molecule(smiles="c1ccccc1(C)(C)")
with pytest.raises(MoleculeException):
mol.sanitize()
mol.sanitize(raise_exception=False)
assert mol.smiles == "CC1(C)CCCCC1"
def test_equality():
mol1 = Molecule(smiles="CCCCO")
mol2 = Molecule(smiles="OCCCC")
assert mol1 == mol2
def test_basic_equality():
mol1 = Molecule(smiles="CC[C@@H](C)O") # R-2-butanol
mol2 = Molecule(smiles="CC[C@H](C)O") # S-2-butanol
assert mol1 != mol2
assert mol1.basic_compare(mol2)
def test_has_atom_mapping():
mol1 = Molecule(smiles="CCCCO")
mol2 = Molecule(smiles="C[C:5]CCO")
assert not mol1.has_atom_mapping()
assert mol2.has_atom_mapping()
def test_remove_atom_mapping():
mol = Molecule(smiles="C[C:5]CCO")
assert mol.has_atom_mapping()
mol.remove_atom_mapping()
assert not mol.has_atom_mapping()
| 19.494505 | 58 | 0.669109 | 247 | 1,774 | 4.643725 | 0.255061 | 0.158675 | 0.103749 | 0.078466 | 0.320837 | 0.247602 | 0.247602 | 0 | 0 | 0 | 0 | 0.022284 | 0.19053 | 1,774 | 90 | 59 | 19.711111 | 0.776462 | 0.012965 | 0 | 0.2 | 0 | 0 | 0.077231 | 0.015446 | 0 | 0 | 0 | 0 | 0.3 | 1 | 0.22 | false | 0 | 0.06 | 0 | 0.28 | 0.06 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
344cd574f56ada084ecacd041bbefa24f527dd55 | 715 | py | Python | agents/agent_loader.py | JCKing97/Agents4Asteroids | c25eb106b9963db97a6fd426f2a8b2f7b8dd073f | [
"MIT"
] | 1 | 2020-06-24T16:07:38.000Z | 2020-06-24T16:07:38.000Z | agents/agent_loader.py | JCKing97/Agents4Asteroids | c25eb106b9963db97a6fd426f2a8b2f7b8dd073f | [
"MIT"
] | 9 | 2019-08-14T19:44:39.000Z | 2020-05-03T13:01:57.000Z | agents/agent_loader.py | JCKing97/Agents4Asteroids | c25eb106b9963db97a6fd426f2a8b2f7b8dd073f | [
"MIT"
] | null | null | null | from typing import List, Type
from game.agent import Agent
import os
from importlib import import_module
import inspect
def load_agents() -> List[Type[Agent]]:
"""
:return: all available agent types currently in the system.
"""
agents: List[Type[Agent]] = []
agent_dir = os.path.dirname(os.path.abspath(__file__))
for filename in os.listdir(agent_dir):
filename = str(filename)
if filename.endswith(".py"):
module = import_module("agents." + filename.split('.')[0])
for name, obj in inspect.getmembers(module, inspect.isclass):
if issubclass(obj, Agent) and obj is not Agent:
agents.append(obj)
return agents
| 32.5 | 73 | 0.641958 | 91 | 715 | 4.945055 | 0.494505 | 0.053333 | 0.062222 | 0.084444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001855 | 0.246154 | 715 | 21 | 74 | 34.047619 | 0.833024 | 0.082517 | 0 | 0 | 0 | 0 | 0.017188 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.375 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
345cff6b5dc8f245274d42d83f7cdc4417c7b6b5 | 1,961 | py | Python | proper_forms/fields/email.py | jpsca/pforms | 77c9da93e5224e79bb147aa873f28951e972bb21 | [
"MIT"
] | 2 | 2020-09-30T22:41:00.000Z | 2020-12-04T16:47:17.000Z | proper_forms/fields/email.py | jpsca/hyperform | d5c450ad8684a853fed26f8c2606877151125a9e | [
"MIT"
] | 2 | 2021-11-18T18:01:28.000Z | 2021-11-18T18:03:29.000Z | proper_forms/fields/email.py | jpsca/hyperform | d5c450ad8684a853fed26f8c2606877151125a9e | [
"MIT"
] | null | null | null | from .text import Text
from ..ftypes import type_email
__all__ = ("Email", )
class Email(Text):
"""Validates and normalize an email address using the
JoshData/python-email-validator library.
Even if the format is valid, it cannot guarantee that the email is real, so the
purpose of this function is to alert the user of a typing mistake.
The normalizations include lowercasing the domain part of the email address
(domain names are case-insensitive), unicode "NFC" normalization of the whole
address (which turns characters plus combining characters into precomposed
characters where possible and replaces certain unicode characters (such as
angstrom and ohm) with other equivalent code points (a-with-ring and omega,
respectively)), replacement of fullwidth and halfwidth characters in the domain
part, and possibly other UTS46 mappings on the domain part.
Options:
check_dns (bool):
Check if the domain name in the email address resolves.
There is nothing to be gained by trying to actually contact an SMTP server,
so that's not done.
allow_smtputf8 (bool):
Accept non-ASCII characters in the local part of the address
(before the @-sign). These email addresses require that your mail
submission library and the mail servers along the route to the destination,
including your own outbound mail server, all support the
[SMTPUTF8 (RFC 6531)](https://tools.ietf.org/html/rfc6531) extension.
By default this is set to `False`.
"""
input_type = "email"
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
        self.error_messages.setdefault("type", "Doesn’t look like a valid e-mail.")
def type(self, value, check_dns=False, allow_smtputf8=False):
return type_email(value, check_dns=check_dns, allow_smtputf8=allow_smtputf8)
| 40.854167 | 87 | 0.701683 | 272 | 1,961 | 4.970588 | 0.566176 | 0.026627 | 0.028846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009993 | 0.234574 | 1,961 | 47 | 88 | 41.723404 | 0.89074 | 0.719021 | 0 | 0 | 0 | 0 | 0.104213 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.1 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
3461f75437c6e8764c0d89e33b3659778cc6f407 | 368 | py | Python | MRTasks/parsingTasks/listS3Files.py | ArulselvanMadhavan/Artist_Recognition_from_Audio_Features | feeca8487773b2f1bac7f408fd11adcc3820b294 | [
"Apache-2.0"
] | 1 | 2016-04-19T14:17:12.000Z | 2016-04-19T14:17:12.000Z | MRTasks/parsingTasks/listS3Files.py | ArulselvanMadhavan/Artist_Recognition_from_Audio_Features | feeca8487773b2f1bac7f408fd11adcc3820b294 | [
"Apache-2.0"
] | null | null | null | MRTasks/parsingTasks/listS3Files.py | ArulselvanMadhavan/Artist_Recognition_from_Audio_Features | feeca8487773b2f1bac7f408fd11adcc3820b294 | [
"Apache-2.0"
] | 1 | 2016-09-16T15:08:00.000Z | 2016-09-16T15:08:00.000Z | import sys
__author__ = 'arul'
from boto.s3.connection import S3Connection
if __name__ == '__main__':
access_key = sys.argv[1]
access_secret = sys.argv[2]
conn = S3Connection(access_key,access_secret)
bucket = conn.get_bucket('cs6240_msd')
for key in bucket.list(prefix='cs6240_msd/'):
print key
# print key.name.encode('utf-8') | 24.533333 | 49 | 0.682065 | 51 | 368 | 4.54902 | 0.607843 | 0.077586 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047458 | 0.19837 | 368 | 15 | 50 | 24.533333 | 0.738983 | 0.081522 | 0 | 0 | 0 | 0 | 0.097923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.2 | null | null | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
34662329b3c89b9de5cd7e3249e18c3a5937c451 | 3,812 | py | Python | Analytic/plots.py | benvchurch/project-eva | 38e3e61ec1913fb2d94fbb8a5db53b90ec32ed66 | [
"Unlicense"
] | 1 | 2017-07-27T19:02:16.000Z | 2017-07-27T19:02:16.000Z | Analytic/plots.py | benvchurch/project-eva | 38e3e61ec1913fb2d94fbb8a5db53b90ec32ed66 | [
"Unlicense"
] | null | null | null | Analytic/plots.py | benvchurch/project-eva | 38e3e61ec1913fb2d94fbb8a5db53b90ec32ed66 | [
"Unlicense"
] | null | null | null | from __future__ import division
import numpy as np
from scipy import special
from numpy import log, exp, sin ,cos, pi, log10, sqrt
from scipy.integrate import quad, dblquad, cumtrapz
from matplotlib import pyplot as plt
import time
import CDM_SubHalo_Potential
import FDM_SubHalo_Potential
#integral precision
p = 2
#num plot points
num = 50
#fluc, normedfluc, fourierfluc, sqfourierfluc, tidalvar, flucwalk, veldispersion
var = "veldispersion"
#calc, test
mode = "calc"
#radius for test
Rtest = 10**3
params = {
'axes.labelsize': 24,
'axes.titlesize': 22,
'legend.fontsize': 20,
'xtick.labelsize': 24,
'ytick.labelsize': 24,
'text.usetex': True,
'figure.figsize': [10,8], # instead of 4.5, 4.5
'lines.linewidth': 2,
'xtick.major.pad': 15,
'ytick.major.pad': 15,
'figure.subplot.bottom': 0.12,
'figure.subplot.top': 0.95,
'figure.subplot.left': 0.225,
#'font.size': 22
}
plt.rcParams.update(params)
ep = np.logspace(1, p, num)
D = np.logspace(0, 5, num)
if(var == "fluc"):
CDMfunc = CDM_SubHalo_Potential.Fluc
elif(var == "normedfluc"):
CDMfunc = CDM_SubHalo_Potential.NormalizedFluc
elif(var == "fourierfluc"):
CDMfunc = CDM_SubHalo_Potential.NormedFourierMagInt
elif(var == "sqfourierfluc"):
CDMfunc = CDM_SubHalo_Potential.IntegSpectralPower
elif(var == "tidalvar"):
CDMfunc = CDM_SubHalo_Potential.TidalVariance
elif(var == "flucwalk"):
CDMfunc = CDM_SubHalo_Potential.FlucWalk
elif(var == "veldispersion"):
CDMfunc = CDM_SubHalo_Potential.VelocityDispersion
def CDM_Calculate():
if(mode == "calc"):
return map(lambda x: CDMfunc(x, int(10**p)), D)
elif(mode == "test"):
return map(lambda x: CDMfunc(Rtest, int(x)), ep)
if(var == "fluc"):
FDMfunc = FDM_SubHalo_Potential.Fluc
elif(var == "normedfluc"):
FDMfunc = FDM_SubHalo_Potential.NormalizedFluc
elif(var == "fourierfluc"):
FDMfunc = FDM_SubHalo_Potential.NormedFourierMagInt
elif(var == "sqfourierfluc"):
FDMfunc = FDM_SubHalo_Potential.IntegSpectralPower
elif(var == "tidalvar"):
FDMfunc = FDM_SubHalo_Potential.TidalVariance
elif(var == "flucwalk"):
FDMfunc = FDM_SubHalo_Potential.FlucWalk
elif(var == "veldispersion"):
FDMfunc = FDM_SubHalo_Potential.VelocityDispersion
def FDM_Calculate(set_m22):
FDM_SubHalo_Potential.m22 = set_m22
if(mode == "calc"):
return map(lambda x: FDMfunc(x, int(10**p)), D)
elif(mode == "test"):
return map(lambda x: FDMfunc(Rtest, int(x)), ep)
def main():
t = time.time()
plt.loglog(D, CDM_Calculate(), label = '$CDM$', linestyle = '--')
print "done CDM in " + str(time.time() - t)
log_axion_masses = [6,4,2,1,0,-1]
for logm in log_axion_masses:
t = time.time()
plt.loglog(D, FDM_Calculate(10**logm), label = r'$m_{a} = 10^{' + str(logm-22) +'} eV$')
print "done FDM log(m22) = " + str(logm) + " in " + str(time.time() - t)
plt.xlabel(r'$r(pc)$')
if(var == "fluc"):
plt.ylabel(r'$ \sqrt{\left < \left (\frac{\partial \phi}{\partial t} \right )^2 \right >} \quad ((km/s)^{2} Myr^{-1})$')
elif(var == "normedfluc"):
plt.ylabel(r'$ \sqrt{\left < \left (\frac{\partial \phi}{\partial t} \right )^2 \right > } \left (\frac{\Omega}{\phi} \right )$')
elif(var == "fourierfluc"):
plt.ylabel(r'\[ \frac{1}{\phi} \int_{\Omega}^{\infty} | \tilde{\phi}(\omega) | d\omega \]')
elif(var == "sqfourierfluc"):
plt.ylabel(r'\[ \sqrt{\frac{\Omega}{\phi^2} \int_{\Omega}^{\infty} | \tilde{\phi}(\omega) |^2 d\omega} \]')
elif(var == "tidalvar"):
plt.ylabel(r'$\sigma_{T}^2$')
elif(var == "flucwalk"):
plt.ylabel(r'$ \sqrt{\left < \left (\frac{\partial \phi}{\partial t} \right )^2 \right >^{\frac{1}{2}} \cdot T_{age}} \quad (km/s)$')
elif(var == "veldispersion"):
plt.ylabel(r'$ \Delta v \: (km/s)$')
plt.legend(loc='lower right')
plt.show()
if __name__ == "__main__":
main()
| 29.550388 | 135 | 0.662382 | 533 | 3,812 | 4.622889 | 0.283302 | 0.051136 | 0.069399 | 0.073864 | 0.40138 | 0.378653 | 0.117289 | 0.096185 | 0.096185 | 0.096185 | 0 | 0.024646 | 0.148478 | 3,812 | 128 | 136 | 29.78125 | 0.734442 | 0.045383 | 0 | 0.272727 | 0 | 0.050505 | 0.292481 | 0.025337 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.090909 | null | null | 0.020202 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
346b05d408ff7992b28238f3b72a34e2e2a82c5a | 645 | py | Python | swigwin-3.0.12/Examples/test-suite/python/swigobject_runme.py | bostich83/atomic_swig | 5438f676d690ffddb09d88bbc1b51e3b38fa8c6a | [
"MIT"
] | null | null | null | swigwin-3.0.12/Examples/test-suite/python/swigobject_runme.py | bostich83/atomic_swig | 5438f676d690ffddb09d88bbc1b51e3b38fa8c6a | [
"MIT"
] | 2 | 2020-03-24T18:19:22.000Z | 2020-03-31T11:22:32.000Z | swigwin-3.0.12/Examples/test-suite/python/swigobject_runme.py | bostich83/atomic_swig | 5438f676d690ffddb09d88bbc1b51e3b38fa8c6a | [
"MIT"
] | 2 | 2019-11-01T01:28:09.000Z | 2020-05-11T05:48:26.000Z |
from swigobject import *
a = A()
a1 = a_ptr(a)
a2 = a_ptr(a)
if a1.this != a2.this:
raise RuntimeError
lthis = long(a.this)
# match pointer value, but deal with leading zeros on 8/16 bit systems and
# different C++ compilers interpretation of %p
xstr1 = "%016X" % (lthis,)
xstr1 = str.lstrip(xstr1, '0')
xstr2 = pointer_str(a)
xstr2 = str.replace(xstr2, "0x", "")
xstr2 = str.replace(xstr2, "0X", "")
xstr2 = str.lstrip(xstr2, '0')
xstr2 = str.upper(xstr2)
if xstr1 != xstr2:
print xstr1, xstr2
raise RuntimeError
s = str(a.this)
r = repr(a.this)
v1 = v_ptr(a)
v2 = v_ptr(a)
if long(v1) != long(v2):
raise RuntimeError
| 17.916667 | 74 | 0.651163 | 106 | 645 | 3.915094 | 0.462264 | 0.038554 | 0.024096 | 0.096386 | 0.125301 | 0.125301 | 0.125301 | 0 | 0 | 0 | 0 | 0.065385 | 0.193798 | 645 | 35 | 75 | 18.428571 | 0.732692 | 0.181395 | 0 | 0.130435 | 0 | 0 | 0.020992 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.043478 | null | null | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
346d66976904f2a2e890a78b4c0d64f9e6578329 | 347 | py | Python | contrib/mypy/examples/src/python/mypy_plugin/settings.py | anthonyjpratti/pants | d98e53af6ddd877861231bce8343f8204da0a9d1 | [
"Apache-2.0"
] | 1 | 2020-08-26T03:30:31.000Z | 2020-08-26T03:30:31.000Z | contrib/mypy/examples/src/python/mypy_plugin/settings.py | anthonyjpratti/pants | d98e53af6ddd877861231bce8343f8204da0a9d1 | [
"Apache-2.0"
] | 1 | 2020-01-21T16:34:02.000Z | 2020-01-21T16:34:02.000Z | contrib/mypy/examples/src/python/mypy_plugin/settings.py | anthonyjpratti/pants | d98e53af6ddd877861231bce8343f8204da0a9d1 | [
"Apache-2.0"
] | null | null | null | # Copyright 2019 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
from django.urls import URLPattern
DEBUG: bool = True
DEFAULT_FROM_EMAIL: str = 'webmaster@example.com'
SECRET_KEY: str = 'not so secret'
MY_SETTING: URLPattern = URLPattern(pattern='foo', callback=lambda: None)
| 28.916667 | 73 | 0.769452 | 48 | 347 | 5.479167 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019934 | 0.132565 | 347 | 11 | 74 | 31.545455 | 0.853821 | 0.363112 | 0 | 0 | 0 | 0 | 0.169725 | 0.09633 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
346fa28c58b006ee5c9a022eea52649d810fd3de | 902 | py | Python | trainer/migrations/0023_userpretest.py | tthelen/interpunct | 83e2cbd67dcc94c131a3a2b155eefd710636a912 | [
"MIT"
] | 2 | 2016-10-21T21:52:08.000Z | 2021-10-19T02:19:43.000Z | trainer/migrations/0023_userpretest.py | tthelen/interpunct | 83e2cbd67dcc94c131a3a2b155eefd710636a912 | [
"MIT"
] | null | null | null | trainer/migrations/0023_userpretest.py | tthelen/interpunct | 83e2cbd67dcc94c131a3a2b155eefd710636a912 | [
"MIT"
] | 4 | 2016-10-24T19:17:48.000Z | 2018-05-11T11:53:12.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2018-05-18 08:55
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('trainer', '0022_auto_20180517_1429'),
]
operations = [
migrations.CreateModel(
name='UserPretest',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('result', models.BooleanField(default=False)),
('rule', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='link_to_pretests', to='trainer.User')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='link_to_pretest', to='trainer.User')),
],
),
]
| 34.692308 | 141 | 0.640798 | 101 | 902 | 5.534653 | 0.574257 | 0.057245 | 0.075134 | 0.118068 | 0.250447 | 0.250447 | 0.250447 | 0.250447 | 0.250447 | 0.250447 | 0 | 0.045649 | 0.222838 | 902 | 25 | 142 | 36.08 | 0.751783 | 0.073171 | 0 | 0 | 1 | 0 | 0.136855 | 0.027611 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
347a6f99a4de31c46f4ac69d858282e197897fe5 | 1,651 | py | Python | chris_backend/users/tests/test_serializers.py | PintoGideon/ChRIS_ultron_backEnd | 3c094c90f45c64e279c6d78d9accc357679fb37b | [
"MIT"
] | null | null | null | chris_backend/users/tests/test_serializers.py | PintoGideon/ChRIS_ultron_backEnd | 3c094c90f45c64e279c6d78d9accc357679fb37b | [
"MIT"
] | null | null | null | chris_backend/users/tests/test_serializers.py | PintoGideon/ChRIS_ultron_backEnd | 3c094c90f45c64e279c6d78d9accc357679fb37b | [
"MIT"
] | null | null | null |
import logging
from django.test import TestCase
from rest_framework import serializers
from users.serializers import UserSerializer
class UserSerializerTests(TestCase):
"""
Generic user view tests' setup and tearDown
"""
def setUp(self):
# avoid cluttered console output (for instance logging all the http requests)
logging.disable(logging.CRITICAL)
self.username = 'cube'
self.password = 'cubepass'
self.email = 'dev@babymri.org'
def tearDown(self):
# re-enable logging
logging.disable(logging.DEBUG)
def test_create(self):
"""
        Test whether the overridden create method takes care of the password hashing.
"""
user_serializer = UserSerializer()
validated_data = {'username': self.username, 'password': self.password,
'email': self.email}
user = user_serializer.create(validated_data)
self.assertEqual(user.username, self.username)
self.assertEqual(user.email, self.email)
self.assertNotEqual(user.password, self.password)
self.assertTrue(user.check_password(self.password))
def test_validate_username(self):
"""
        Test whether the overridden validate_username method raises a
        serializers.ValidationError when the username contains forward slashes.
"""
user_serializer = UserSerializer()
with self.assertRaises(serializers.ValidationError):
user_serializer.validate_username('user/')
username = user_serializer.validate_username(self.username)
self.assertEqual(username, self.username)
| 32.372549 | 85 | 0.675348 | 172 | 1,651 | 6.395349 | 0.406977 | 0.076364 | 0.072727 | 0.043636 | 0.063636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.24046 | 1,651 | 50 | 86 | 33.02 | 0.877193 | 0.205936 | 0 | 0.074074 | 0 | 0 | 0.042776 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 1 | 0.148148 | false | 0.148148 | 0.148148 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
347d4a00184ea7c2c1c24dd55efd4162c15dc9d8 | 7,727 | py | Python | day_4/day_4_improvements/tests/test_user_edit.py | dmchu/Pytest_REST_API_with_Allure | a41fbbcc04304a89a99ece7b06148de668a7b696 | [
"Apache-2.0"
] | null | null | null | day_4/day_4_improvements/tests/test_user_edit.py | dmchu/Pytest_REST_API_with_Allure | a41fbbcc04304a89a99ece7b06148de668a7b696 | [
"Apache-2.0"
] | null | null | null | day_4/day_4_improvements/tests/test_user_edit.py | dmchu/Pytest_REST_API_with_Allure | a41fbbcc04304a89a99ece7b06148de668a7b696 | [
"Apache-2.0"
] | null | null | null | import allure
from day_4.day_4_improvements.lib.base_case import BaseCase
from day_4.day_4_improvements.lib.assersions import Assertions as AS
from day_4.day_4_improvements.lib.my_requests import MyRequests as MR
from day_4.day_4_improvements.lib.helpers import Helpers as HP
@allure.epic("User Profile Edit cases")
class TestUserEdit(BaseCase):
BASE_URI: str = "/user/"
URI_LOGIN: str = BASE_URI + "login"
@allure.feature("User Profile Edit")
@allure.story("positive - Edit profile of just created user")
@allure.description("Verifiying that user profile of just created user can be edited")
def test_edit_just_created_user(self):
registered_user: dict = HP.register_user(HP)
user_email = registered_user.get("user_email")
user_password = registered_user.get("user_password")
user_id = registered_user.get("user_id")
login_user_response = HP.authorize_user(HP, user_email, user_password)
URI_USER = self.BASE_URI + str(user_id)
new_name = "Changed Name"
headers = {
'x-csrf-token': login_user_response.get("token")
}
cookies = {
'auth_sid': login_user_response.get("auth_sid")
}
edit_data = {
'firstName': new_name
}
# Edit user data
response3 = MR.put(URI_USER, headers=headers, cookies=cookies, data=edit_data)
AS.assert_code_status(response3, 200)
# Get updated user data
response4 = MR.get(URI_USER, headers=headers, cookies=cookies)
AS.assert_json_value_by_name(response4, "firstName", new_name, "Wrong name of user after update")
@allure.feature("User Profile Edit")
@allure.story("negative - Edit profile of user without authorization")
@allure.description("Verifiying that user profile can not be edited without authorization")
def test_edit_existing_user_without_authorization(self):
registered_user: dict = HP.register_user(HP)
user_id = registered_user.get("user_id")
URI_USER = self.BASE_URI + str(user_id)
new_name = "Changed Name2"
headers = {
'x-csrf-token': ""
}
cookies = {
'auth_sid': ""
}
edit_data = {
'firstName': new_name
}
# Edit user data
response = MR.put(URI_USER, headers=headers, cookies=cookies, data=edit_data)
AS.assert_code_status(response, 400)
AS.assert_response_text(response, "Auth token not supplied")
@allure.feature("User Profile Edit")
@allure.story("negative - Edit profile of user with another user authorization")
@allure.description("Verifiying that user profile can not be edited with another user authorization")
def test_edit_existing_user_with_authorization_by_another_user(self):
registered_user: dict = HP.register_user(HP)
correct_user_email = registered_user.get("user_email")
correct_user_password = registered_user.get("user_password")
user_id = registered_user.get("user_id")
user_email = "vinkotov@example.com"
user_password = "1234"
# Authorization with another user
login_data = {
'email': user_email,
'password': user_password
}
response = MR.post(self.URI_LOGIN, data=login_data)
auth_sid = self.get_cookie(response, "auth_sid")
token = self.get_header(response, "x-csrf-token")
URI_USER = self.BASE_URI + str(user_id)
new_name = "Changed Name3"
headers = {
'x-csrf-token': token
}
cookies = {
'auth_sid': auth_sid
}
edit_data = {
'firstName': new_name
}
# Try to edit user data
response2 = MR.put(URI_USER, headers=headers, cookies=cookies, data=edit_data)
AS.assert_code_status(response2, 400)
# Authorization with correct user
login_user_response = HP.authorize_user(HP, correct_user_email, correct_user_password)
headers_2 = {
'x-csrf-token': login_user_response.get("token")
}
cookies_2 = {
'auth_sid': login_user_response.get("auth_sid")
}
# Get user data and verify that changes were not made
response4 = MR.get(URI_USER, headers=headers_2, cookies=cookies_2)
response_data = response4.json()
user_first_name = response_data.get("firstName")
assert user_first_name != new_name, \
"First name should not be changed by a request authenticated as another user, but it was"
@allure.feature("User Profile Edit")
@allure.story("negative - Edit user 'email' with wrong email format")
@allure.description("Verifying that user 'email' cannot be changed to a wrongly formatted address")
def test_edit_user_email_with_wrong_format(self):
registered_user: dict = HP.register_user(HP)
user_email = registered_user.get("user_email")
user_password = registered_user.get("user_password")
user_id = registered_user.get("user_id")
login_user_response = HP.authorize_user(HP, user_email, user_password)
URI_USER = self.BASE_URI + str(user_id)
new_email = user_email.replace("@", ".")
headers = {
'x-csrf-token': login_user_response.get("token")
}
cookies = {
'auth_sid': login_user_response.get("auth_sid")
}
edit_data = {
'email': new_email
}
# Edit user data
response3 = MR.put(URI_USER, headers=headers, cookies=cookies, data=edit_data)
AS.assert_code_status(response3, 400)
AS.assert_response_text(response3, "Invalid email format")
# Get updated user data
response4 = MR.get(URI_USER, headers=headers, cookies=cookies)
response_data = response4.json()
user_email = response_data.get("email")
assert user_email != new_email, \
"Email should not be changed to an address with a wrong format, but it was"
@allure.feature("User Profile Edit")
@allure.story("negative - Edit user 'first name' with one character")
@allure.description("Verifying that user 'first name' cannot be set to a single character")
def test_edit_user_first_name_with_one_character(self):
registered_user: dict = HP.register_user(HP)
user_email = registered_user.get("user_email")
user_password = registered_user.get("user_password")
user_id = registered_user.get("user_id")
login_user_response = HP.authorize_user(HP, user_email, user_password)
URI_USER = self.BASE_URI + str(user_id)
headers = {
'x-csrf-token': login_user_response.get("token")
}
cookies = {
'auth_sid': login_user_response.get("auth_sid")
}
edit_data = {
'firstName': "V"
}
# Edit user data
response3 = MR.put(URI_USER, headers=headers, cookies=cookies, data=edit_data)
AS.assert_code_status(response3, 400)
AS.assert_json_value_by_name(response3, "error",
"Too short value for field firstName", "The error message is not as expected")
# Get updated user data
response4 = MR.get(URI_USER, headers=headers, cookies=cookies)
response_data = response4.json()
user_first_name = response_data.get("firstName")
assert user_first_name != "V", \
"First name should not be changed to a one-character value, but it was"
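Every test above rebuilds the same `x-csrf-token` header and `auth_sid` cookie dicts from a login response. A hypothetical helper (not part of the original suite; the `token`/`auth_sid` keys are taken from the tests themselves) could factor that repetition out:

```python
# Sketch only: factors out the repeated headers/cookies construction.
def build_auth(login_user_response):
    """Return (headers, cookies) dicts for an authorized request."""
    headers = {'x-csrf-token': login_user_response.get("token")}
    cookies = {'auth_sid': login_user_response.get("auth_sid")}
    return headers, cookies

headers, cookies = build_auth({"token": "abc", "auth_sid": "xyz"})
```

Each test could then call `MR.put(URI_USER, headers=headers, cookies=cookies, ...)` with the returned pair.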
# File: usecase/usecase-cordova-android-tests/samples/SharedModeLibraryDownload4.x/res/test.py (repo JianfengXu/crosswalk-test-suite, BSD-3-Clause)
import os
import commands
import sys
import json
from optparse import OptionParser
global CROSSWALK_VERSION
with open("../../tools/VERSION", "rt") as pkg_version_file:
pkg_version_raw = pkg_version_file.read()
pkg_version_file.close()
pkg_version_json = json.loads(pkg_version_raw)
CROSSWALK_VERSION = pkg_version_json["main-version"]
try:
usage = "Usage: ./test.py -u [http://host/XWalkRuntimeLib.apk]"
opts_parser = OptionParser(usage=usage)
opts_parser.add_option(
"-u",
"--url",
dest="url",
help="specify the url, e.g. http://host/XWalkRuntimeLib.apk")
global BUILD_PARAMETERS
(BUILD_PARAMETERS, args) = opts_parser.parse_args()
except Exception as e:
print "Got wrong options: %s, exit ..." % e
sys.exit(1)
if not BUILD_PARAMETERS.url:
print "Please add the -u parameter for the url of XWalkRuntimeLib.apk"
sys.exit(1)
version_parts = CROSSWALK_VERSION.split('.')
if len(version_parts) < 4:
print "The crosswalk version is not configured exactly!"
sys.exit(1)
versionType = version_parts[3]
if versionType == '0':
username = commands.getoutput("echo $USER")
repository_aar_path = "/home/%s/.m2/repository/org/xwalk/xwalk_shared_library/%s/" \
"xwalk_shared_library-%s.aar" % \
(username, CROSSWALK_VERSION, CROSSWALK_VERSION)
repository_pom_path = "/home/%s/.m2/repository/org/xwalk/xwalk_shared_library/%s/" \
"xwalk_shared_library-%s.pom" % \
(username, CROSSWALK_VERSION, CROSSWALK_VERSION)
if not os.path.exists(repository_aar_path) or not os.path.exists(repository_pom_path):
wget_cmd = "wget https://download.01.org/crosswalk/releases/crosswalk/" \
"android/canary/%s/crosswalk-shared-%s.aar" % \
(CROSSWALK_VERSION, CROSSWALK_VERSION)
install_cmd = "mvn install:install-file -DgroupId=org.xwalk " \
"-DartifactId=xwalk_shared_library -Dversion=%s -Dpackaging=aar " \
"-Dfile=crosswalk-shared-%s.aar -DgeneratePom=true" % \
(CROSSWALK_VERSION, CROSSWALK_VERSION)
os.system(wget_cmd)
os.system(install_cmd)
library_url = BUILD_PARAMETERS.url
library_url = library_url.replace("/", "\\/")
if os.path.exists("SharedModeLibraryDownload"):
os.system("rm -rf SharedModeLibraryDownload")
os.system("cordova create SharedModeLibraryDownload com.example.sharedModeLibraryDownload SharedModeLibraryDownload")
os.chdir("./SharedModeLibraryDownload")
os.system('sed -i "s/<widget/<widget android-activityName=\\"SharedModeLibraryDownload\\"/g" config.xml')
os.system('sed -i "s/<\/widget>/ <allow-navigation href=\\"*\\" \/>\\n<\/widget>/g" config.xml')
os.system("cordova platform add android")
add_plugin_cmd = "cordova plugin add ../../../tools/cordova-plugin-crosswalk-webview" \
" --variable XWALK_VERSION=\"%s\" --variable XWALK_MODE=\"shared\"" % CROSSWALK_VERSION
print add_plugin_cmd
os.system(add_plugin_cmd)
os.system('sed -i "s/android:supportsRtl=\\"true\\">/android:supportsRtl=\\"true\\">\\n <meta-data android:name=\\"xwalk_apk_url\\" android:value=\\"' + library_url + '\\" \\/>/g" platforms/android/AndroidManifest.xml')
os.system("cordova build android")
os.system("cordova run")
lsstatus = commands.getstatusoutput("ls ./platforms/android/build/outputs/apk/*.apk")
if lsstatus[0] == 0:
print "Build Package Successfully"
else:
print "Build Package Error"
pmstatus = commands.getstatusoutput("adb shell pm list packages |grep com.example.sharedModeLibraryDownload")
if pmstatus[0] == 0:
print "Package Name Consistent"
else:
print "Package Name Inconsistent"
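The script above is Python 2 and relies on the `commands` module, which was removed in Python 3. The equivalent call there is `subprocess.getstatusoutput` (a sketch, not part of the original script; the echoed string is just a stand-in for the build command):

```python
# Python 3 replacement for commands.getstatusoutput: same (status, output) tuple.
import subprocess

status, output = subprocess.getstatusoutput("echo Build Package Successfully")
print(status, output)
```

As in the original, a zero status means the shell command succeeded.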
# File: ois_api_client/v3_0/deserialization/deserialize_user_header.py (repo peterkulik/ois_api_client, MIT)
from typing import Optional
import xml.etree.ElementTree as ET
from ...xml.XmlReader import XmlReader as XR
from ..namespaces import COMMON
from ..dto.UserHeader import UserHeader
from .deserialize_crypto import deserialize_crypto
def deserialize_user_header(element: ET.Element) -> Optional[UserHeader]:
if element is None:
return None
result = UserHeader(
login=XR.get_child_text(element, 'login', COMMON),
password_hash=deserialize_crypto(
XR.find_child(element, 'passwordHash', COMMON)
),
tax_number=XR.get_child_text(element, 'taxNumber', COMMON),
request_signature=deserialize_crypto(
XR.find_child(element, 'requestSignature', COMMON)
),
)
return result
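The `XR.get_child_text(element, 'login', COMMON)` calls above resolve a namespace-qualified child and return its text. A minimal, self-contained sketch of that pattern with plain `xml.etree` (the namespace URI and the helper body here are assumptions, not the library's actual internals):

```python
import xml.etree.ElementTree as ET

NS = 'http://example.com/common'  # placeholder namespace, not the real one

def get_child_text(element, tag, ns):
    # ElementTree addresses namespaced children as '{uri}tag'.
    child = element.find('{%s}%s' % (ns, tag))
    return child.text if child is not None else None

root = ET.fromstring('<user xmlns="%s"><login>alice</login></user>' % NS)
print(get_child_text(root, 'login', NS))  # alice
```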
# File: annotations/migrations/0015_organization_description.py (repo alexliyihao/auto-annotation-web, MIT)
# Generated by Django 3.2.8 on 2021-11-11 02:06
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('annotations', '0014_auto_20211110_2007'),
]
operations = [
migrations.AddField(
model_name='organization',
name='description',
field=models.CharField(default='test description', max_length=400),
preserve_default=False,
),
]
# -*- coding: utf-8 -*-
# File: flask_resources/parsers/__init__.py (repo inveniosoftware/flask-resources, MIT)
#
# Copyright (C) 2020-2021 CERN.
# Copyright (C) 2020-2021 Northwestern University.
#
# Flask-Resources is free software; you can redistribute it and/or modify it
# under the terms of the MIT License; see LICENSE file for more details.
"""Request parser for the body, headers, query string and view args."""
from .base import RequestParser
from .body import RequestBodyParser
from .decorators import request_body_parser, request_parser
from .schema import MultiDictSchema
__all__ = (
"MultiDictSchema",
"request_body_parser",
"request_parser",
"RequestBodyParser",
"RequestParser",
)
#!/usr/bin/python
# File: analyze_tagged_corpus.py (repo kevincobain2000/nltk-trainer, Apache-2.0)
import argparse
import nltk.corpus
from nltk.corpus.util import LazyCorpusLoader
from nltk.probability import FreqDist
from nltk.tag.simplify import simplify_wsj_tag
from nltk_trainer import load_corpus_reader
########################################
## command options & argument parsing ##
########################################
parser = argparse.ArgumentParser(description='Analyze a part-of-speech tagged corpus',
formatter_class=argparse.RawTextHelpFormatter)
parser.add_argument('corpus',
help='''The name of a tagged corpus included with NLTK, such as treebank,
brown, cess_esp, floresta, or the root path to a corpus directory,
which can be either an absolute path or relative to a nltk_data directory.''')
parser.add_argument('--trace', default=1, type=int,
help='How much trace output you want, defaults to %(default)d. 0 is no trace output.')
corpus_group = parser.add_argument_group('Corpus Reader Options')
corpus_group.add_argument('--reader', default=None,
help='''Full module path to a corpus reader class, such as
nltk.corpus.reader.tagged.TaggedCorpusReader''')
corpus_group.add_argument('--fileids', default=None,
help='Specify fileids to load from corpus')
corpus_group.add_argument('--simplify_tags', action='store_true', default=False,
help='Use simplified tags')
sort_group = parser.add_argument_group('Tag Count Sorting Options')
sort_group.add_argument('--sort', default='tag', choices=['tag', 'count'],
help='Sort key, defaults to %(default)s')
sort_group.add_argument('--reverse', action='store_true', default=False,
help='Sort in revere order')
args = parser.parse_args()
###################
## corpus reader ##
###################
tagged_corpus = load_corpus_reader(args.corpus, reader=args.reader, fileids=args.fileids)
if not tagged_corpus:
raise ValueError('%s is an unknown corpus' % args.corpus)
if args.trace:
print 'loading %s' % args.corpus
##############
## counting ##
##############
wc = 0
tag_counts = FreqDist()
taglen = 7
word_set = set()
if args.simplify_tags and args.corpus not in ['conll2000', 'switchboard']:
kwargs = {'simplify_tags': True}
else:
kwargs = {}
for word, tag in tagged_corpus.tagged_words(fileids=args.fileids, **kwargs):
if len(tag) > taglen:
taglen = len(tag)
if args.corpus in ['conll2000', 'switchboard'] and args.simplify_tags:
tag = simplify_wsj_tag(tag)
wc += 1
# loading corpora/treebank/tagged with ChunkedCorpusReader produces None tags
if not isinstance(tag, basestring): tag = str(tag)
tag_counts.inc(tag)
word_set.add(word)
############
## output ##
############
print '%d total words\n%d unique words\n%d tags\n' % (wc, len(word_set), len(tag_counts))
if args.sort == 'tag':
sort_key = lambda (t, c): t
elif args.sort == 'count':
sort_key = lambda (t, c): c
else:
raise ValueError('%s is not a valid sort option' % args.sort)
countlen = max(len(str(tag_counts[tag_counts.max()])) + 2, 9)
# simple reSt table format
print ' '.join(['Tag'.center(taglen), 'Count'.center(countlen)])
print ' '.join(['='*taglen, '='*(countlen)])
for tag, count in sorted(tag_counts.items(), key=sort_key, reverse=args.reverse):
print ' '.join([tag.ljust(taglen), str(count).rjust(countlen)])
print ' '.join(['='*taglen, '='*(countlen)])
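The NLTK `FreqDist` tag counting above can be approximated with the standard library's `collections.Counter` (a Python 3 sketch with made-up tags, not real NLTK output):

```python
from collections import Counter

tags = ['NN', 'VB', 'NN', 'DT', 'NN']
tag_counts = Counter(tags)
# Same two-column layout as the script's reSt table, sorted by count.
for tag, count in sorted(tag_counts.items(), key=lambda tc: tc[1], reverse=True):
    print(tag.ljust(7), str(count).rjust(9))
```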
# File: 02_Variables/Variable types/tests.py (repo dannymeijer/level-up-with-python, MIT)
from lessons.test_helper import run_common_tests, get_answer_placeholders, passed, failed
def test_type_used():
window = get_answer_placeholders()[0]
if "type" in window and "float_number" in window:
passed()
else:
failed("Use the type() function")
if __name__ == '__main__':
run_common_tests()
test_type_used()
# File: main.py (repo hamolicious/Python-Word-Search-Generator, Apache-2.0)
from random import choice, randint
import os
def generate_grid(w, h):
grid = []
for i in range(h):
row = []
for j in range(w):
row.append(' ')
grid.append(row)
return grid
def populate_grid(words, grid):
for word in words:
done = False
tries = 10
while not done:
try:
start_x = randint(0, len(grid[0]) - len(word))
start_y = randint(0, len(grid) - len(word))
vel = choice([(1, 0), (0, 1), (1, 1)])
except ValueError:
done = True
break
valid_spot = True
x, y = start_x, start_y
for i in range(len(word)):
if grid[y][x] == ' ' or grid[y][x] == word[i]:
pass
else:
valid_spot = False
tries -= 1
break
x += vel[0]
y += vel[1]
if tries <= 0:
done = True
if valid_spot:
x, y = start_x, start_y
for i in range(len(word)):
grid[y][x] = word[i].upper()
x += vel[0]
y += vel[1]
done = True
alphabet = 'qwertyuiopasdfghjklzxcvbnm'.upper()
for i in range(len(grid)):
for j in range(len(grid[0])):
if grid[i][j] == ' ':
grid[i][j] = choice(alphabet)
return grid
def draw_grid(grid):
screen = ''
for row in grid:
screen += '\n'
do_once = True
for tile in row:
if do_once:
screen += '|'
do_once = False
screen += tile + '|'
print(screen)
def use_random():
path = 'words.txt'
if os.path.exists(path):
with open(path, 'r') as file:
temp = file.readlines()
class wrd():
def __init__(self, word):
self.word = word
self.index = randint(0, 1000)
def __lt__(self, other):
return self.index < other.index
words = []
word_count = 5
for word in temp:
w = word.replace('\n', '')
words.append(wrd(w))
words.sort()
temp = []
for i in range(word_count):
temp.append(words[i].word)
return temp
def get_details():
# width
while True:
w = input('\nWhat is the width of the grid in characters?\n[>> ')
if w.isdigit():
w = int(w)
break
# height
while True:
h = input('\nWhat is the height of the grid in characters?\n[>> ')
if h.isdigit():
h = int(h)
break
# words
words = []
while True:
wrd = input('Please add the words to the bank, you can press ENTER to use random words and press "q" when you\'re finished.\n[>> ').strip().lower()
if wrd == '':
words = use_random()
break
if wrd == 'q':
break
for let in wrd:
if let not in 'qwertyuiopasdfghjklzxcvbnm':
print('Invalid word, words can only contain the following letters:', ''.join(i for i in sorted('qwertyuiopasdfghjklzxcvbnm')))
break
else:
words.append(wrd)
return w, h, words
def clear():
print('\n' * 50)
width, height, words = get_details()
while True:
grid = generate_grid(width, height)
grid = populate_grid(words, grid)
clear()
draw_grid(grid)
input('>')
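The placement check inside `populate_grid` walks `(x, y)` along a direction vector and accepts a cell only if it is empty or already holds the matching letter. A condensed, self-contained sketch of that check (with an explicit bounds test that the original handles via its `randint` ranges):

```python
def fits(grid, word, start, vel):
    x, y = start
    for letter in word:
        if not (0 <= y < len(grid) and 0 <= x < len(grid[0])):
            return False
        # Cell must be empty or already contain the same (uppercase) letter.
        if grid[y][x] not in (' ', letter.upper()):
            return False
        x += vel[0]
        y += vel[1]
    return True

grid = [[' '] * 5 for _ in range(5)]
print(fits(grid, 'cat', (0, 0), (1, 1)))  # True
```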
# File: sklearn_pipeline_enhancements/shared/transformers.py (repo Kgoetsch/sklearn_pipeline_enhancements, MIT)
import numpy as np
import pandas as pd
from patsy.highlevel import dmatrix
from sklearn.base import TransformerMixin, BaseEstimator
from sklearn.pipeline import _name_estimators, Pipeline
__author__ = 'kgoetsch'
def make_dataframeunion(steps):
return DataFrameUnion(_name_estimators(steps))
class FactorExtractor(TransformerMixin, BaseEstimator):
"""
In: pd.DataFrame
Column in that Frame
Out: pd.Series
"""
def __init__(self, factor):
self.factor = factor
def transform(self, data):
return data[self.factor]
def fit(self, *_):
return self
class RenameField(TransformerMixin, BaseEstimator):
"""
In: pd.DataFrame
Column in that Frame
Out: pd.Series
"""
def __init__(self, new_name):
self.new_name = new_name
def transform(self, data):
data.name = self.new_name
return data
def fit(self, *_):
return self
class FillNA(TransformerMixin, BaseEstimator):
"""
In: pd.Series
Out: pd.Series
"""
def __init__(self, na_replacement=None):
if na_replacement is not None:
self.NA_replacement = na_replacement
else:
self.NA_replacement = 'missing'
def transform(self, data):
return data.fillna(self.NA_replacement)
def fit(self, *_):
return self
class DataFrameUnion(TransformerMixin, BaseEstimator):
"""
In: list of (string, transformer) tuples :
Out: pd.DataFrame
"""
def __init__(self, transformer_list):
self.feature_names = None
self.transformer_list = transformer_list # (string, Transformer)-tuple list
def __getitem__(self, attrib):
return self.__dict__[attrib]
def transform(self, X):
"""Transform X separately by each transformer, concatenate results.
Parameters
----------
X : array-like or sparse matrix, shape (n_samples, n_features)
Input data to be transformed.
Returns
-------
X_t : array-like or sparse matrix, shape (n_samples, sum_n_components)
hstack of results of transformers. sum_n_components is the
sum of n_components (output dimension) over transformers.
"""
Xs = (self._transform_one(trans, X)
for name, trans in self.transformer_list)
df_merged_result = self._merge_results(Xs)
return df_merged_result
def fit(self, X, y=None):
"""Fit all transformers using X.
Parameters
----------
:param X: pd.DataFrame
Input data, used to fit transformers.
:param y:
"""
transformers = (self._fit_one_transformer(trans, X, y)
for name, trans in self.transformer_list)
self._update_transformer_list(transformers)
return self
def _merge_results(self, transformed_result_generator):
df_merged_result = ''
for transformed in transformed_result_generator:
if isinstance(transformed, pd.Series):
transformed = pd.DataFrame(data=transformed)
if not isinstance(df_merged_result, pd.DataFrame):
df_merged_result = transformed
else:
df_merged_result = pd.concat([df_merged_result, transformed], axis=1)
if self.feature_names is None:
self.feature_names = df_merged_result.columns
elif (len(self.feature_names) != len(df_merged_result.columns)) or \
((self.feature_names != df_merged_result.columns).any()):
custom_dataframe = pd.DataFrame(data=0, columns=self.feature_names, index=df_merged_result.index)
custom_dataframe.update(df_merged_result)
df_merged_result = custom_dataframe
return df_merged_result
def _update_transformer_list(self, transformers):
self.transformer_list[:] = [
(name, new)
for ((name, old), new) in zip(self.transformer_list, transformers)
]
def _fit_one_transformer(self, transformer, X, y):
return transformer.fit(X, y)
def _transform_one(self, transformer, X):
return transformer.transform(X)
def extract_and_denull(var, na=0):
return Pipeline([
('extract', FactorExtractor(var)),
('fill_na', FillNA(na))
])
class ConvertToArray(TransformerMixin, BaseEstimator):
"""
In: pd.Dataframe
Out: np.array
"""
def transform(self, data):
return np.ascontiguousarray(data.values)
def fit(self, *_):
return self
class CategoricalDummifier(TransformerMixin, BaseEstimator):
"""
In: pd.Series
Out: pd.DataFrame
"""
def transform(self, data):
return dmatrix(formula_like=str(data.name), data=pd.DataFrame(data.apply(str)), return_type='dataframe',
NA_action='raise').drop('Intercept', axis=1)
def fit(self, *_):
return self
class WeekdayExtraction(TransformerMixin, BaseEstimator):
"""
In: pd.DataFrame
Out: pd.Series
"""
def transform(self, data):
return_data = pd.Series(data.index.weekday, index=data.index, name='weekday')
return return_data
def fit(self, *_):
return self
class LengthofField(TransformerMixin, BaseEstimator):
"""
In: pd.Series
Out: pd.Series
"""
def transform(self, data):
return_value = data.apply(len)
return return_value
def fit(self, *_):
return self
class InsertIntercept(TransformerMixin, BaseEstimator):
"""
In: pd.DataFrame
Out: pd.Series = 1 of the same length
"""
def transform(self, data):
return pd.DataFrame(data=1, index=data.index, columns=['Intercept'])
def fit(self, *_):
return self
if __name__ == '__main__':
target = make_dataframeunion([extract_and_denull('years'), extract_and_denull('kitten')])
for step in target.transformer_list:
print step
| 25.943966 | 112 | 0.632663 | 683 | 6,019 | 5.352855 | 0.218155 | 0.030635 | 0.053611 | 0.07221 | 0.310449 | 0.271061 | 0.204869 | 0.15454 | 0.073304 | 0.044311 | 0 | 0.001361 | 0.267486 | 6,019 | 231 | 113 | 26.056277 | 0.827852 | 0.005317 | 0 | 0.276786 | 0 | 0 | 0.018316 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.044643 | null | null | 0.008929 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
# File: experimental/plotlyDelaunay3D.py (repo FYP-DES5/deepscan-core, MIT)
import plotly.plotly as py
from plotly.graph_objs import *
import numpy as np
import matplotlib.cm as cm
from scipy.spatial import Delaunay
u=np.linspace(0,2*np.pi, 24)
v=np.linspace(-1,1, 8)
u,v=np.meshgrid(u,v)
u=u.flatten()
v=v.flatten()
#evaluate the parameterization at the flattened u and v
tp=1+0.5*v*np.cos(u/2.)
x=tp*np.cos(u)
y=tp*np.sin(u)
z=0.5*v*np.sin(u/2.)
#define 2D points, as input data for the Delaunay triangulation of U
points2D=np.vstack([u,v]).T
tri = Delaunay(points2D)#triangulate the rectangle U
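A quick shape check on the parameter grid built above: the 24 x 8 meshgrid flattens to 192 `(u, v)` pairs, which is exactly the 2D point set handed to the Delaunay triangulation (numpy only, no plotting):

```python
import numpy as np

u = np.linspace(0, 2 * np.pi, 24)
v = np.linspace(-1, 1, 8)
u, v = np.meshgrid(u, v)
u, v = u.flatten(), v.flatten()
points2D = np.vstack([u, v]).T
print(points2D.shape)  # (192, 2)
```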
# File: after/trans.py (repo Windsooon/Comments, MIT)
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from comments.base import DATA_DIR, cut_words, sigmoid
class Trans:
def __init__(self):
self.trans_matrix = pd.read_excel(os.path.join(DATA_DIR, 'trans.xls'), index_col=0)
# add one to every value
self.trans_matrix += 1
def calculate_pro(self, sentence):
'''
TODO
'''
res = cut_words(sentence, postag=True)
# Get all the POS
pos = [r[1] for r in res]
possibility = 0
pre = 'begin'
for i in range(len(pos)-1):
try:
possibility += np.log(self.trans_matrix.loc[pre, : ][pos[i]] * 10 / sum(self.trans_matrix.loc[pre, : ]))
except KeyError:
pass
else:
pre = pos[i]
return possibility
def pro(self, csv_file):
data = pd.read_csv(os.path.join(DATA_DIR, csv_file), skipinitialspace=True)
data['pro'] = data['comments'].apply(self.calculate_pro)
data['pro'].to_csv('output_' + csv_file)
def show(self, csv_file):
axes = plt.axes()
axes.set_xlim([-200, 200])
d = pd.read_csv(csv_file, skipinitialspace=True, names=['pro'])
plt.hist(d['pro'], color='blue', edgecolor='black', bins=300)
plt.title('Log Pro')
plt.xlabel('log_pro')
plt.ylabel('number')
plt.tight_layout()
plt.show()
t = Trans()
# t.pro('fin_useless.csv')
t.show('output_fin_useless.csv')
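The scoring logic in `calculate_pro` can be exercised standalone with a toy transition matrix; the POS tags and counts below are made up purely for illustration, not taken from `trans.xls`:

```python
import numpy as np
import pandas as pd

# Toy POS-transition count matrix (rows: previous tag, columns: next tag)
counts = pd.DataFrame(
    [[4, 2], [1, 3]],
    index=["begin", "n"],
    columns=["n", "v"],
)
counts += 1  # same add-one smoothing as Trans.__init__

def score(tags):
    possibility = 0.0
    pre = "begin"
    for tag in tags:
        try:
            row = counts.loc[pre, :]
            possibility += np.log(row[tag] * 10 / row.sum())
        except KeyError:
            pass
        else:
            pre = tag
    return possibility

print(round(score(["n", "v"]), 3))  # -> 3.73
```

Unknown tags are skipped via the `KeyError` branch, mirroring how the class tolerates tags that are missing from the matrix.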
| 29.09434 | 120 | 0.576524 | 212 | 1,542 | 4.042453 | 0.45283 | 0.04084 | 0.070012 | 0.032672 | 0.088681 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014585 | 0.288586 | 1,542 | 52 | 121 | 29.653846 | 0.766636 | 0.044747 | 0 | 0 | 0 | 0 | 0.06358 | 0.015204 | 0 | 0 | 0 | 0.019231 | 0 | 1 | 0.102564 | false | 0.025641 | 0.153846 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cacadf76d959296c2be37d92baf26148aaa927bb | 4,417 | py | Python | pysnmp/Unisphere-Products-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 11 | 2021-02-02T16:27:16.000Z | 2021-08-31T06:22:49.000Z | pysnmp/Unisphere-Products-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 75 | 2021-02-24T17:30:31.000Z | 2021-12-08T00:01:18.000Z | pysnmp/Unisphere-Products-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module Unisphere-Products-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/Unisphere-Products-MIB
# Produced by pysmi-0.3.4 at Mon Apr 29 21:26:09 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
ObjectIdentifier, OctetString, Integer = mibBuilder.importSymbols("ASN1", "ObjectIdentifier", "OctetString", "Integer")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
SingleValueConstraint, ValueSizeConstraint, ValueRangeConstraint, ConstraintsIntersection, ConstraintsUnion = mibBuilder.importSymbols("ASN1-REFINEMENT", "SingleValueConstraint", "ValueSizeConstraint", "ValueRangeConstraint", "ConstraintsIntersection", "ConstraintsUnion")
NotificationGroup, ModuleCompliance = mibBuilder.importSymbols("SNMPv2-CONF", "NotificationGroup", "ModuleCompliance")
iso, NotificationType, Counter64, Gauge32, ModuleIdentity, Counter32, IpAddress, Integer32, Unsigned32, TimeTicks, ObjectIdentity, MibScalar, MibTable, MibTableRow, MibTableColumn, MibIdentifier, Bits = mibBuilder.importSymbols("SNMPv2-SMI", "iso", "NotificationType", "Counter64", "Gauge32", "ModuleIdentity", "Counter32", "IpAddress", "Integer32", "Unsigned32", "TimeTicks", "ObjectIdentity", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "MibIdentifier", "Bits")
DisplayString, TextualConvention = mibBuilder.importSymbols("SNMPv2-TC", "DisplayString", "TextualConvention")
unisphere, = mibBuilder.importSymbols("Unisphere-SMI", "unisphere")
usProducts = ModuleIdentity((1, 3, 6, 1, 4, 1, 4874, 1))
usProducts.setRevisions(('2001-12-07 15:36', '2001-10-15 18:29', '2001-03-01 15:27', '2000-05-24 00:00', '1999-12-13 19:36', '1999-11-16 00:00', '1999-09-28 00:00',))
if mibBuilder.loadTexts: usProducts.setLastUpdated('200112071536Z')
if mibBuilder.loadTexts: usProducts.setOrganization('Unisphere Networks, Inc.')
productFamilies = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1))
unisphereProductFamilies = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 1))
usErx = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 1, 1))
usEdgeRoutingSwitch1400 = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 1, 1, 1))
usEdgeRoutingSwitch700 = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 1, 1, 2))
usEdgeRoutingSwitch1440 = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 1, 1, 3))
usEdgeRoutingSwitch705 = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 1, 1, 4))
usMrx = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 1, 2))
usMrxRoutingSwitch16000 = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 1, 2, 1))
usMrxRoutingSwitch32000 = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 1, 2, 2))
usSmx = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 1, 3))
usServiceMediationSwitch2100 = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 1, 3, 1))
usSrx = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 1, 4))
usServiceReadySwitch3000 = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 1, 4, 1))
usUmc = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 1, 5))
usUmcSystemManagement = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 1, 5, 1))
oemProductFamilies = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 2))
marconiProductFamilies = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 2, 1))
usSsx = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 2, 1, 1))
usSsx1400 = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 2, 1, 1, 1))
usSsx700 = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 2, 1, 1, 2))
usSsx1440 = MibIdentifier((1, 3, 6, 1, 4, 1, 4874, 1, 1, 2, 1, 1, 3))
mibBuilder.exportSymbols("Unisphere-Products-MIB", usSsx1400=usSsx1400, usErx=usErx, usServiceMediationSwitch2100=usServiceMediationSwitch2100, oemProductFamilies=oemProductFamilies, usServiceReadySwitch3000=usServiceReadySwitch3000, usUmc=usUmc, usSmx=usSmx, usSsx1440=usSsx1440, unisphereProductFamilies=unisphereProductFamilies, usEdgeRoutingSwitch705=usEdgeRoutingSwitch705, usMrxRoutingSwitch16000=usMrxRoutingSwitch16000, usSsx=usSsx, usProducts=usProducts, usEdgeRoutingSwitch700=usEdgeRoutingSwitch700, usSsx700=usSsx700, usUmcSystemManagement=usUmcSystemManagement, marconiProductFamilies=marconiProductFamilies, productFamilies=productFamilies, usEdgeRoutingSwitch1440=usEdgeRoutingSwitch1440, usMrx=usMrx, usMrxRoutingSwitch32000=usMrxRoutingSwitch32000, usEdgeRoutingSwitch1400=usEdgeRoutingSwitch1400, PYSNMP_MODULE_ID=usProducts, usSrx=usSrx)
| 105.166667 | 856 | 0.737152 | 568 | 4,417 | 5.728873 | 0.246479 | 0.029502 | 0.022127 | 0.028273 | 0.350031 | 0.289183 | 0.289183 | 0.289183 | 0.285802 | 0.285802 | 0 | 0.158549 | 0.107539 | 4,417 | 41 | 857 | 107.731707 | 0.66692 | 0.076523 | 0 | 0 | 0 | 0 | 0.155528 | 0.016216 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.205882 | 0 | 0.205882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
caceb51d93c36ec68a44bf085fefaa6d893e959c | 2,609 | py | Python | ooobuild/dyn/awt/field_unit.py | Amourspirit/ooo_uno_tmpl | 64e0c86fd68f24794acc22d63d8d32ae05dd12b8 | [
"Apache-2.0"
] | null | null | null | ooobuild/dyn/awt/field_unit.py | Amourspirit/ooo_uno_tmpl | 64e0c86fd68f24794acc22d63d8d32ae05dd12b8 | [
"Apache-2.0"
] | null | null | null | ooobuild/dyn/awt/field_unit.py | Amourspirit/ooo_uno_tmpl | 64e0c86fd68f24794acc22d63d8d32ae05dd12b8 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
#
# Copyright 2022 :Barry-Thomas-Paul: Moss
#
# Licensed under the Apache License, Version 2.0 (the "License")
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http: // www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Const Class
# This is an auto-generated file created by Cheetah
# Libre Office Version: 7.3
# Namespace: com.sun.star.awt
from enum import IntEnum
from typing import TYPE_CHECKING
from ooo.oenv.env_const import UNO_ENVIRONMENT, UNO_RUNTIME
_DYNAMIC = False
if (not TYPE_CHECKING) and UNO_RUNTIME and UNO_ENVIRONMENT:
_DYNAMIC = True
if not TYPE_CHECKING and _DYNAMIC:
from com.sun.star.awt import FieldUnit as FieldUnit
if hasattr(FieldUnit, '_constants') and isinstance(FieldUnit._constants, dict):
FieldUnit._constants['__ooo_ns__'] = 'com.sun.star.awt'
FieldUnit._constants['__ooo_full_ns__'] = 'com.sun.star.awt.FieldUnit'
FieldUnit._constants['__ooo_type_name__'] = 'const'
def build_enum():
global FieldUnitEnum
ls = [f for f in dir(FieldUnit) if not callable(getattr(FieldUnit, f)) and not f.startswith('__')]
_dict = {}
for name in ls:
_dict[name] = getattr(FieldUnit, name)
FieldUnitEnum = IntEnum('FieldUnitEnum', _dict)
build_enum()
else:
from ...lo.awt.field_unit import FieldUnit as FieldUnit
class FieldUnitEnum(IntEnum):
"""
Enum of Const Class FieldUnit
specifies attributes for the MetricField map units.
IMPORTANT: These constants have to be disjunct with constants in util/MeasureUnit.
"""
FUNIT_NONE = FieldUnit.FUNIT_NONE
FUNIT_MM = FieldUnit.FUNIT_MM
FUNIT_CM = FieldUnit.FUNIT_CM
FUNIT_M = FieldUnit.FUNIT_M
FUNIT_KM = FieldUnit.FUNIT_KM
FUNIT_TWIP = FieldUnit.FUNIT_TWIP
FUNIT_POINT = FieldUnit.FUNIT_POINT
FUNIT_PICA = FieldUnit.FUNIT_PICA
FUNIT_INCH = FieldUnit.FUNIT_INCH
FUNIT_FOOT = FieldUnit.FUNIT_FOOT
FUNIT_MILE = FieldUnit.FUNIT_MILE
FUNIT_CUSTOM = FieldUnit.FUNIT_CUSTOM
FUNIT_PERCENT = FieldUnit.FUNIT_PERCENT
FUNIT_100TH_MM = FieldUnit.FUNIT_100TH_MM
__all__ = ['FieldUnit', 'FieldUnitEnum']
| 37.811594 | 106 | 0.708317 | 347 | 2,609 | 5.10951 | 0.429395 | 0.110547 | 0.022561 | 0.029329 | 0.049633 | 0.027073 | 0 | 0 | 0 | 0 | 0 | 0.008305 | 0.215408 | 2,609 | 68 | 107 | 38.367647 | 0.857841 | 0.330012 | 0 | 0 | 0 | 0 | 0.080904 | 0.015467 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026316 | false | 0 | 0.131579 | 0 | 0.552632 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
cad6d4d9039f032ee0647fb24456c499c4ecede6 | 1,108 | py | Python | tests/test_kfold.py | pal-16/preprocessy | 6577143d3397a47bd69c125c8de46ebc5ad2ddae | [
"MIT"
] | null | null | null | tests/test_kfold.py | pal-16/preprocessy | 6577143d3397a47bd69c125c8de46ebc5ad2ddae | [
"MIT"
] | null | null | null | tests/test_kfold.py | pal-16/preprocessy | 6577143d3397a47bd69c125c8de46ebc5ad2ddae | [
"MIT"
] | null | null | null | import numpy as np
import pytest
from preprocessy.resampling import KFold
class TestKFold:
def test_kfold(self):
with pytest.raises(ValueError):
KFold(n_splits=2.3)
with pytest.raises(ValueError):
KFold(n_splits=0)
with pytest.raises(ValueError):
KFold(shuffle=1)
with pytest.raises(ValueError):
KFold(shuffle=True, random_state=4.5)
with pytest.raises(ValueError):
KFold(shuffle=False, random_state=69)
def test_split(self):
with pytest.raises(ValueError):
arr = np.arange(10)
kFold = KFold(n_splits=20)
for train_indices, test_indices in kFold.split(arr):
print(
f"Train indices: {train_indices}\nTest indices:"
f" {test_indices}"
)
arr = np.arange(12)
kFold = KFold(n_splits=3, shuffle=True, random_state=69)
for train_indices, test_indices in kFold.split(arr):
assert len(train_indices) == 8
assert len(test_indices) == 4
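For reference, the splitting behaviour the last test asserts (8 train / 4 test indices over 12 samples with 3 folds) can be sketched with numpy alone; this is an assumption about what preprocessy's `KFold.split` does, not its actual implementation:

```python
import numpy as np

def kfold_indices(n_samples, n_splits, shuffle=False, random_state=None):
    # Yield (train, test) index arrays, one pair per fold
    indices = np.arange(n_samples)
    if shuffle:
        rng = np.random.RandomState(random_state)
        rng.shuffle(indices)
    fold_sizes = np.full(n_splits, n_samples // n_splits, dtype=int)
    fold_sizes[: n_samples % n_splits] += 1  # spread the remainder
    current = 0
    for size in fold_sizes:
        test = indices[current : current + size]
        train = np.concatenate([indices[:current], indices[current + size :]])
        yield train, test
        current += size

splits = list(kfold_indices(12, 3, shuffle=True, random_state=69))
for train, test in splits:
    assert len(train) == 8 and len(test) == 4
```

Each fold's train and test arrays partition the shuffled indices, so every sample appears in exactly one test fold.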
| 27.7 | 68 | 0.583935 | 132 | 1,108 | 4.772727 | 0.356061 | 0.095238 | 0.152381 | 0.247619 | 0.485714 | 0.431746 | 0.250794 | 0.130159 | 0.130159 | 0 | 0 | 0.0253 | 0.322202 | 1,108 | 39 | 69 | 28.410256 | 0.813582 | 0 | 0 | 0.275862 | 0 | 0 | 0.054152 | 0.018953 | 0 | 0 | 0 | 0 | 0.068966 | 1 | 0.068966 | false | 0 | 0.103448 | 0 | 0.206897 | 0.034483 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cadb7857fc43538c2164847a75f460688bce44f4 | 879 | py | Python | tests/max/test_rmc.py | realead/rmc | 6dafa2a4b5ab7199e86e86a4c10388bc8e472bb6 | [
"MIT"
] | null | null | null | tests/max/test_rmc.py | realead/rmc | 6dafa2a4b5ab7199e86e86a4c10388bc8e472bb6 | [
"MIT"
] | null | null | null | tests/max/test_rmc.py | realead/rmc | 6dafa2a4b5ab7199e86e86a4c10388bc8e472bb6 | [
"MIT"
] | null | null | null | import os
import exetest as ex
import exetest.decorator as dec
import RMCTester
@dec.to_unit_tests
class Tester(RMCTester.RMCTester):
#setting up the test case
my_path = os.path.dirname(__file__)
program_name = "max"
exe = os.path.join(my_path, RMCTester.RMCTester.EXE_NAME)
default_parameters = {ex.EXIT_CODE: 0,
ex.STDERR: "",
ex.INPUT: ""}
casedata_both_nulls = {ex.OPTIONS: ["2", "0", "0"],
ex.STDOUT: "0\n"}
casedata_both_ones = {ex.OPTIONS: ["2", "1", "1"],
ex.STDOUT: "1\n"}
casedata_max_first= {ex.OPTIONS: ["2", "5", "1"],
ex.STDOUT: "5\n"}
casedata_max_second = {ex.OPTIONS: ["2", "5", "6"],
ex.STDOUT: "6\n"}
| 25.114286 | 61 | 0.486917 | 103 | 879 | 3.961165 | 0.456311 | 0.088235 | 0.098039 | 0.053922 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030853 | 0.373151 | 879 | 34 | 62 | 25.852941 | 0.709619 | 0.027304 | 0 | 0 | 0 | 0 | 0.031653 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.65 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
cadf0bfc3589bccc805fdee98c4cd79597ee146b | 2,309 | py | Python | waitercaller.py | salaikumar/waitercaller | 83ad2f8a477fc196c665db1ce1159a042d69fe8e | [
"Apache-2.0"
] | null | null | null | waitercaller.py | salaikumar/waitercaller | 83ad2f8a477fc196c665db1ce1159a042d69fe8e | [
"Apache-2.0"
] | null | null | null | waitercaller.py | salaikumar/waitercaller | 83ad2f8a477fc196c665db1ce1159a042d69fe8e | [
"Apache-2.0"
] | null | null | null | from flask import Flask
from flask import request
from flask import render_template
# Login Extension
from flask_login import LoginManager
from flask_login import login_required
from flask_login import login_user
from flask_login import logout_user
from mockdbhelper import MockDBHelper as DBHelper
from user import User
from flask import redirect
from flask import url_for
# Password Helper imports
from passwordhelper import PasswordHelper
# DB Helper instance
DB = DBHelper()
# Password Helper instance
PH = PasswordHelper()
# Creating a Flask App instance
app = Flask(__name__)
# Set a secret key for your application
app.secret_key = 'tPXJY3X37Qybz4QykV+hOyUxVQeEXf1Ao2C8upz+fGQXKsM'
login_manager = LoginManager(app)
@app.route("/")
def home():
return render_template("home.html")
@app.route("/account")
@login_required
def account():
return "You're logged in"
@login_manager.user_loader
def load_user(user_id):
user_password = DB.get_user(user_id)
if user_password:
return User(user_id)
@app.route("/login", methods=["POST"])
def login():
email = request.form.get("email")
password = request.form.get("password")
stored_user = DB.get_user(email)
if stored_user and PH.validate_password(password, stored_user['salt'], stored_user['hashed']):
user = User(email)
login_user(user, remember=True)
return redirect(url_for('account'))
# user_password = DB.get_user(email)
# if user_password and user_password == password:
# user = User(email)
# login_user(user)
# return redirect(url_for('account'))
# return account()
return home()
# Register function
@app.route("/register" , methods=["POST"])
def register():
email = request.form.get("email")
password = request.form.get("password")
confirmpass = request.form.get("password2")
if not password == confirmpass:
return redirect(url_for('home'))
if DB.get_user(email):
return redirect(url_for('home'))
salt = PH.get_salt()
hashed = PH.get_hash(password + salt)
DB.add_user(email, salt, hashed)
return redirect(url_for('home'))
@app.route("/logout")
def logout():
logout_user()
return redirect(url_for("home"))
if __name__ == '__main__':
app.run(port=5000, debug=True)
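The `PasswordHelper` above comes from a project module that is not shown here; a minimal sketch of the salt-and-hash interface it exposes (`get_salt`, `get_hash`, `validate_password`) might look like the following — the names match the calls above, but the implementation is an assumption:

```python
import binascii
import hashlib
import os

def get_salt():
    # Random per-user salt, hex-encoded so it can be stored as text
    return binascii.hexlify(os.urandom(20)).decode("ascii")

def get_hash(plain):
    return hashlib.sha512(plain.encode("utf-8")).hexdigest()

def validate_password(password, salt, hashed):
    # Recompute the salted hash and compare against the stored value
    return get_hash(password + salt) == hashed

salt = get_salt()
hashed = get_hash("secret" + salt)
assert validate_password("secret", salt, hashed)
assert not validate_password("wrong", salt, hashed)
```

Storing `(salt, hashed)` per user, as `register()` does with `DB.add_user(email, salt, hashed)`, keeps the plaintext password out of the database entirely.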
| 27.164706 | 98 | 0.708099 | 306 | 2,309 | 5.156863 | 0.245098 | 0.051331 | 0.064639 | 0.076046 | 0.274398 | 0.134347 | 0.068441 | 0.068441 | 0.068441 | 0.068441 | 0 | 0.006332 | 0.179298 | 2,309 | 84 | 99 | 27.488095 | 0.826385 | 0.154612 | 0 | 0.12069 | 0 | 0 | 0.096491 | 0.024252 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103448 | false | 0.172414 | 0.206897 | 0.034483 | 0.465517 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
cae4acfde1f01b9cb0259fd0db2f62854a8254cc | 4,544 | py | Python | fabfile.py | yanikou19/pymatgen | 8ee0d9ff35a9c2fa4f00da5d423e536ed8914e31 | [
"MIT"
] | null | null | null | fabfile.py | yanikou19/pymatgen | 8ee0d9ff35a9c2fa4f00da5d423e536ed8914e31 | [
"MIT"
] | null | null | null | fabfile.py | yanikou19/pymatgen | 8ee0d9ff35a9c2fa4f00da5d423e536ed8914e31 | [
"MIT"
] | null | null | null | """
Deployment file to facilitate releases of pymatgen.
Note that this file is meant to be run from the root directory of the pymatgen
repo.
"""
__author__ = "Shyue Ping Ong"
__email__ = "ongsp@ucsd.edu"
__date__ = "Sep 1, 2014"
import glob
import os
import json
import webbrowser
import requests
import re
import subprocess
from fabric.api import local, lcd
from pymatgen import __version__ as ver
def make_doc():
with open("CHANGES.rst") as f:
contents = f.read()
toks = re.split("\-{3,}", contents)
n = len(toks[0].split()[-1])
changes = [toks[0]]
changes.append("\n" + "\n".join(toks[1].strip().split("\n")[0:-1]))
changes = ("-" * n).join(changes)
with open("docs/latest_changes.rst", "w") as f:
f.write(changes)
with lcd("examples"):
local("ipython nbconvert --to html *.ipynb")
local("mv *.html ../docs/_static")
with lcd("docs"):
local("cp ../CHANGES.rst change_log.rst")
local("sphinx-apidoc -d 6 -o . -f ../pymatgen")
local("rm pymatgen.*.tests.rst")
for f in glob.glob("docs/*.rst"):
if f.startswith('docs/pymatgen') and f.endswith('rst'):
newoutput = []
suboutput = []
subpackage = False
with open(f, 'r') as fid:
for line in fid:
clean = line.strip()
if clean == "Subpackages":
subpackage = True
if not subpackage and not clean.endswith("tests"):
newoutput.append(line)
else:
if not clean.endswith("tests"):
suboutput.append(line)
if clean.startswith("pymatgen") and not clean.endswith("tests"):
newoutput.extend(suboutput)
subpackage = False
suboutput = []
with open(f, 'w') as fid:
fid.write("".join(newoutput))
local("make html")
local("cp _static/* _build/html/_static")
# This makes sure pymatgen.org redirects to the GitHub page
local("echo \"pymatgen.org\" > _build/html/CNAME")
# Avoid the use of jekyll so that _dir works as intended.
local("touch _build/html/.nojekyll")
def publish():
local("python setup.py release")
def setver():
local("sed s/version=.*,/version=\\\"{}\\\",/ setup.py > newsetup"
.format(ver))
local("mv newsetup setup.py")
def update_doc():
make_doc()
with lcd("docs/_build/html/"):
local("git add .")
local("git commit -a -m \"Update dev docs\"")
local("git push origin gh-pages")
def merge_stable():
local("git commit -a -m \"v%s release\"" % ver)
local("git push")
local("git checkout stable")
local("git pull")
local("git merge master")
local("git push")
local("git checkout master")
def release_github():
with open("CHANGES.rst") as f:
contents = f.read()
toks = re.split("\-+", contents)
desc = toks[1].strip()
payload = {
"tag_name": "v" + ver,
"target_commitish": "master",
"name": "v" + ver,
"body": desc,
"draft": False,
"prerelease": False
}
response = requests.post(
"https://api.github.com/repos/materialsproject/pymatgen/releases",
data=json.dumps(payload),
headers={"Authorization": "token " + os.environ["GITHUB_RELEASES_TOKEN"]})
print(response.text)
def update_changelog():
output = subprocess.check_output(["git", "log", "--pretty=format:%s",
"v%s..HEAD" % ver])
lines = ["* " + l for l in output.strip().split("\n")]
with open("CHANGES.rst") as f:
contents = f.read()
toks = contents.split("==========")
toks.insert(-1, "\n\n" + "\n".join(lines))
with open("CHANGES.rst", "w") as f:
f.write("==========".join(toks))
def log_ver():
filepath = os.path.join(os.environ["HOME"], "Dropbox", "Public",
"pymatgen", ver)
with open(filepath, "w") as f:
f.write("Release")
def release(skip_test=False):
setver()
if not skip_test:
local("nosetests")
publish()
log_ver()
update_doc()
merge_stable()
release_github()
def open_doc():
pth = os.path.abspath("docs/_build/html/index.html")
webbrowser.open("file://" + pth)
| 29.316129 | 92 | 0.541813 | 543 | 4,544 | 4.449355 | 0.357274 | 0.033113 | 0.024834 | 0.029801 | 0.137417 | 0.120033 | 0.069536 | 0.05298 | 0.05298 | 0.05298 | 0 | 0.00473 | 0.302157 | 4,544 | 154 | 93 | 29.506494 | 0.757174 | 0.026188 | 0 | 0.10084 | 0 | 0 | 0.233699 | 0.027343 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.07563 | null | null | 0.008403 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cae85af56f28cb944803582d2acd6f034c6fb314 | 2,281 | py | Python | bin/contentctl_project/contentctl_core/domain/entities/story.py | arjunkhunti-crest/security_content | 41e354485e5917d3366ef735a9c5b25a20d3b8cc | [
"Apache-2.0"
] | null | null | null | bin/contentctl_project/contentctl_core/domain/entities/story.py | arjunkhunti-crest/security_content | 41e354485e5917d3366ef735a9c5b25a20d3b8cc | [
"Apache-2.0"
] | null | null | null | bin/contentctl_project/contentctl_core/domain/entities/story.py | arjunkhunti-crest/security_content | 41e354485e5917d3366ef735a9c5b25a20d3b8cc | [
"Apache-2.0"
] | null | null | null | import string
import uuid
import requests
from pydantic import BaseModel, validator, ValidationError
from datetime import datetime
from bin.contentctl_project.contentctl_core.domain.entities.security_content_object import SecurityContentObject
from bin.contentctl_project.contentctl_core.domain.entities.story_tags import StoryTags
class Story(BaseModel, SecurityContentObject):
# story spec
name: str
id: str
version: int
date: str
author: str
description: str
narrative: str
references: list
tags: StoryTags
# enrichments
detection_names: list = None
investigation_names: list = None
baseline_names: list = None
author_company: str = None
author_name: str = None
detections: list = None
investigations: list = None
@validator('name')
def name_invalid_chars(cls, v):
invalidChars = set(string.punctuation.replace("-", ""))
if any(char in invalidChars for char in v):
raise ValueError('invalid chars used in name: ' + v)
return v
@validator('id')
def id_check(cls, v, values):
try:
uuid.UUID(str(v))
except:
raise ValueError('uuid is not valid: ' + values["name"])
return v
@validator('date')
def date_valid(cls, v, values):
try:
datetime.strptime(v, "%Y-%m-%d")
except:
raise ValueError('date is not in format YYYY-MM-DD: ' + values["name"])
return v
@validator('description', 'narrative')
def encode_error(cls, v, values, field):
try:
v.encode('ascii')
except UnicodeEncodeError:
raise ValueError('encoding error in ' + field.name + ': ' + values["name"])
return v
# @validator('references')
# def references_check(cls, v, values):
# for reference in v:
# try:
# get = requests.get(reference)
# if not get.status_code == 200:
# raise ValueError('Reference ' + reference + ' is not reachable: ' + values["name"])
# except requests.exceptions.RequestException as e:
# raise ValueError('Reference ' + reference + ' is not reachable: ' + values["name"])
# return v | 30.824324 | 112 | 0.615081 | 254 | 2,281 | 5.448819 | 0.370079 | 0.065029 | 0.046243 | 0.049133 | 0.213873 | 0.157514 | 0.157514 | 0.157514 | 0.08237 | 0 | 0 | 0.001836 | 0.283648 | 2,281 | 74 | 113 | 30.824324 | 0.845165 | 0.207804 | 0 | 0.176471 | 0 | 0 | 0.087465 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078431 | false | 0 | 0.137255 | 0 | 0.627451 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
cae9f9112bf10e3c5039ce94fd6acb072c0cb24e | 2,793 | py | Python | backend/api/migrations/0001_initial.py | INSRapperswil/nornir-web | 458e6b24bc373197044b4b7b5da74f16f93a9459 | [
"MIT"
] | 2 | 2021-06-01T08:33:04.000Z | 2021-08-20T04:22:39.000Z | backend/api/migrations/0001_initial.py | INSRapperswil/nornir-web | 458e6b24bc373197044b4b7b5da74f16f93a9459 | [
"MIT"
] | null | null | null | backend/api/migrations/0001_initial.py | INSRapperswil/nornir-web | 458e6b24bc373197044b4b7b5da74f16f93a9459 | [
"MIT"
] | null | null | null | # Generated by Django 3.1.2 on 2020-10-05 10:33
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Inventory',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=200)),
('hosts_file', models.TextField()),
('groups_file', models.TextField()),
],
),
migrations.CreateModel(
name='Task',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=200)),
('status', models.IntegerField(choices=[(1, 'Scheduled'), (2, 'Running'), (3, 'Finished'), (4, 'Failed')], default=1)),
('date_scheduled', models.DateTimeField(verbose_name='Date Scheduled')),
('date_started', models.DateTimeField(null=True, verbose_name='Date Started')),
('date_finished', models.DateTimeField(null=True, verbose_name='Date Finished')),
('variables', models.TextField()),
('input', models.TextField()),
('result', models.TextField()),
('created_by', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='JobTemplate',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=200)),
('description', models.TextField()),
('file_path', models.TextField()),
('created_by', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='InventoryFilter',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('filter', models.TextField()),
('inventory', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='api.inventory')),
('task', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='api.task')),
],
),
]
| 45.786885 | 137 | 0.56928 | 270 | 2,793 | 5.744444 | 0.292593 | 0.077369 | 0.045132 | 0.070922 | 0.564152 | 0.564152 | 0.564152 | 0.509994 | 0.509994 | 0.509994 | 0 | 0.014551 | 0.28643 | 2,793 | 60 | 138 | 46.55 | 0.763673 | 0.016112 | 0 | 0.471698 | 1 | 0 | 0.112435 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.056604 | 0 | 0.132075 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
caec913d6621cf7f754967667188e3d0cafa8eeb | 3,982 | py | Python | tests/primitives/high_precision_condition_slowtest.py | gift-surg/puma | 58beae3459a0c8d96adfe9af323e26868428df4d | [
"Apache-2.0"
] | null | null | null | tests/primitives/high_precision_condition_slowtest.py | gift-surg/puma | 58beae3459a0c8d96adfe9af323e26868428df4d | [
"Apache-2.0"
] | 13 | 2020-05-04T14:14:58.000Z | 2020-07-29T16:37:03.000Z | tests/primitives/high_precision_condition_slowtest.py | gift-surg/puma | 58beae3459a0c8d96adfe9af323e26868428df4d | [
"Apache-2.0"
] | null | null | null | import time
from threading import Thread
from unittest import TestCase
from puma.primitives import HighPrecisionCondition
from tests.github_issue_11_quickfix import skip_on_windows_until_github_issue_11_is_resolved
TIMEOUTS = [0.0, 0.000001, 0.014, 0.015, 0.016, 0.017, 0.018, 0.3]
PRECISION = 0.005
TIMEOUT = 0.3
class HighPrecisionConditionTest(TestCase):
# HighPrecisionCondition is derived from threading.Condition, and only overrides the wait() method. We only test the modified behaviour, not the base class's behaviour.
@skip_on_windows_until_github_issue_11_is_resolved
def test_if_times_out(self) -> None:
self._test_wait_when_time_out(0.1)
def test_if_notified_with_timeout(self) -> None:
condition = HighPrecisionCondition()
thread = Thread(target=self._sleep_then_notify, args=[TIMEOUT, condition])
thread.start()
try:
with condition:
t1 = time.perf_counter()
ret = condition.wait(TIMEOUT * 3)  # the timeout used here should not expire: the condition should be notified after TIMEOUT
t2 = time.perf_counter()
self.assertTrue(ret)
self.assertGreaterEqual(t2 - t1, TIMEOUT) # Check the wait did not return until the condition was raised
self.assertLess(t2 - t1, TIMEOUT * 2) # Check that the wait did return early (when the condition was raised)
finally:
thread.join()
def test_if_notified_timeout_is_none(self) -> None:
condition = HighPrecisionCondition()
thread = Thread(target=self._sleep_then_notify, args=[TIMEOUT, condition])
thread.start()
try:
with condition:
t1 = time.perf_counter()
ret = condition.wait(None)
t2 = time.perf_counter()
self.assertTrue(ret)
self.assertGreaterEqual(t2 - t1, TIMEOUT) # Check the wait did not return until the condition was raised
self.assertLess(t2 - t1, TIMEOUT * 2) # Crude sanity check in case some default timeout was (wrongly) implemented
finally:
thread.join()
@skip_on_windows_until_github_issue_11_is_resolved
def test_wait_timeout_precision(self) -> None:
for timeout in TIMEOUTS:
time.sleep(0.25) # Try to reduce load on CPU so that test is less likely to fail on heavily loaded machine
self._test_wait_when_time_out(timeout)
@skip_on_windows_until_github_issue_11_is_resolved
def test_wait_for_timeout_precision(self) -> None:
for timeout in TIMEOUTS:
time.sleep(0.25) # Try to reduce load on CPU so that test is less likely to fail on heavily loaded machine
self._test_wait_for_when_time_out(timeout)
def _test_wait_when_time_out(self, timeout: float) -> None:
condition = HighPrecisionCondition()
with condition:
t1 = time.perf_counter()
ret = condition.wait(timeout)
t2 = time.perf_counter()
self.assertFalse(ret)
self.assertGreaterEqual(t2 - t1, timeout, f"Took {t2 - t1} when timeout was {timeout}")
self.assertLess(t2 - t1, timeout + PRECISION, f"Took {t2 - t1} when timeout was {timeout}")
def _test_wait_for_when_time_out(self, timeout: float) -> None:
condition = HighPrecisionCondition()
with condition:
t1 = time.perf_counter()
ret = condition.wait_for(lambda: False, timeout)
t2 = time.perf_counter()
self.assertFalse(ret)
self.assertGreaterEqual(t2 - t1, timeout, f"Took {t2 - t1} when timeout was {timeout}")
self.assertLess(t2 - t1, timeout + PRECISION, f"Took {t2 - t1} when timeout was {timeout}")
@staticmethod
def _sleep_then_notify(delay: float, condition: HighPrecisionCondition) -> None:
time.sleep(delay)
with condition:
condition.notify()
| 45.770115 | 172 | 0.659216 | 505 | 3,982 | 5.00198 | 0.239604 | 0.019002 | 0.047506 | 0.028504 | 0.673793 | 0.670625 | 0.644497 | 0.644497 | 0.644497 | 0.628266 | 0 | 0.030591 | 0.261175 | 3,982 | 86 | 173 | 46.302326 | 0.828008 | 0.171773 | 0 | 0.630137 | 0 | 0 | 0.049863 | 0 | 0 | 0 | 0 | 0 | 0.164384 | 1 | 0.109589 | false | 0 | 0.068493 | 0 | 0.191781 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
caf0139fdfe557f953aad0f3d0c2c6c1ddb373a5 | 1,091 | py | Python | catch.py | AlexLito666/my_first_project | 94898038bb6b0558328ee80791340d1ad3b92260 | [
"CC0-1.0"
] | null | null | null | catch.py | AlexLito666/my_first_project | 94898038bb6b0558328ee80791340d1ad3b92260 | [
"CC0-1.0"
] | null | null | null | catch.py | AlexLito666/my_first_project | 94898038bb6b0558328ee80791340d1ad3b92260 | [
"CC0-1.0"
] | null | null | null | from pygame import *
# create the game window
window = display.set_mode((700, 500))
display.set_caption("Догонялки")
# set the scene background
background = transform.scale(image.load("background.png"), (700, 500))
sprite1 = transform.scale(image.load('sprite1.png'), (100, 100))
sprite2 = transform.scale(image.load('sprite2.png'), (100, 100))
x1 = 100
y1 = 200
x2 = 200
y2 = 200
clock = time.Clock()
FPS = 60
game = True
while game:
window.blit(background,(0, 0))
window.blit(sprite1, (x1, y1))
window.blit(sprite2, (x2, y2))
key_pressed = key.get_pressed()
if key_pressed[K_UP]:
y1-=10
if key_pressed[K_DOWN]:
y1+=10
if key_pressed[K_LEFT]:
x1-=10
if key_pressed[K_RIGHT]:
x1+=10
if key_pressed[K_w]:
y2-=10
if key_pressed[K_s]:
y2+=10
if key_pressed[K_a]:
x2-=10
if key_pressed[K_d]:
x2+=10
#обработай событие «клик по кнопке "Закрыть окно"»
for e in event.get():
if e.type == QUIT:
game = False
display.update()
clock.tick(FPS)
| 18.491525 | 73 | 0.600367 | 164 | 1,091 | 3.884146 | 0.432927 | 0.141287 | 0.150706 | 0.163265 | 0.183673 | 0.160126 | 0 | 0 | 0 | 0 | 0 | 0.096535 | 0.259395 | 1,091 | 58 | 74 | 18.810345 | 0.689356 | 0.073327 | 0 | 0 | 0 | 0 | 0.044643 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.025641 | 0 | 0.025641 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cafc588d566696ccb6cf77935803ead728bb260c | 2,674 | py | Python | Image Classification/CGIAR Crop Yield Prediction Challenge/omarhzm/fields (1).py | ZindiAfrica/Computer-Vision | bf4c00a0633506270dc6d07df938a100a10ee799 | [
"MIT"
] | null | null | null | Image Classification/CGIAR Crop Yield Prediction Challenge/omarhzm/fields (1).py | ZindiAfrica/Computer-Vision | bf4c00a0633506270dc6d07df938a100a10ee799 | [
"MIT"
] | null | null | null | Image Classification/CGIAR Crop Yield Prediction Challenge/omarhzm/fields (1).py | ZindiAfrica/Computer-Vision | bf4c00a0633506270dc6d07df938a100a10ee799 | [
"MIT"
] | null | null | null | import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
import glob, random
import sklearn
from sklearn.decomposition import PCA
from xgboost.sklearn import XGBRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import GradientBoostingRegressor,BaggingRegressor, RandomForestRegressor,VotingRegressor
from sklearn.linear_model import LinearRegression
from lightgbm import LGBMRegressor
import catboost
from catboost import CatBoostRegressor
from tqdm import tqdm
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
#import warnings
#warnings.filterwarnings('ignore')
folder = os.path.dirname(os.path.abspath(__file__))
train_new = pd.read_csv(folder+'/Train.csv')
bands_of_interest = ['S2_B5', 'S2_B4', 'S2_B3', 'S2_B2', 'CLIM_pr', 'CLIM_soil']
band_names = [l.strip() for l in open(folder + '/band_names.txt', 'r').readlines()]
def process_train(fid, folder=folder + '/imtrain'):
    fn = f'{folder}/{fid}.npy'
    arr = np.load(fn)
    values = {}
    for month in range(12):
        bns = [str(month) + '_' + b for b in bands_of_interest]  # Bands of interest for this month
        idxs = np.where(np.isin(band_names, bns))  # Index of these bands
        vs = arr[idxs, 20, 20]  # Sample the im at the center point
        for bn, v in zip(bns, vs[0]):
            values[bn] = v
    return values


def process_test(fid, folder=folder + '/imtest'):
    fn = f'{folder}/{fid}.npy'
    arr = np.load(fn)
    values = {}
    for month in range(12):
        bns = [str(month) + '_' + b for b in bands_of_interest]  # Bands of interest for this month
        idxs = np.where(np.isin(band_names, bns))  # Index of these bands
        vs = arr[idxs, 20, 20]  # Sample the im at the center point
        for bn, v in zip(bns, vs[0]):
            values[bn] = v
    return values
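The centre-pixel sampling used by `process_train`/`process_test` can be illustrated on a synthetic array. The band names and array shape below are made up for the demonstration; the real code loads a `(bands, height, width)` array from a `.npy` file:

```python
import numpy as np

# Hypothetical band names mirroring the "<month>_<band>" naming convention.
band_names_demo = ['0_S2_B2', '0_S2_B3', '1_S2_B2']
arr = np.arange(3 * 41 * 41).reshape(3, 41, 41)

# Select the bands of interest by name, then sample the centre pixel (20, 20).
idxs = np.where(np.isin(band_names_demo, ['0_S2_B2', '0_S2_B3']))[0]
center = arr[idxs, 20, 20]
print(center.tolist())  # [840, 2521]
```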
# Make a new DF with the sampled values from each field
train_sampled = pd.DataFrame([process_train(fid) for fid in train_new['Field_ID'].values])
#MODEL
X = train_sampled.copy()
y = train_new['Yield'].values
print(X.head)
print(y)
X_train, X_test, y_train, y_test = train_test_split(X, y)
model=BaggingRegressor(CatBoostRegressor(silent=True),n_estimators=55)
model.fit(X_train, y_train)
print('Score:', mean_squared_error(y_test, model.predict(X_test), squared=False))
#SUBMITTING
ss = pd.read_csv(folder+'/SampleSubmission.csv')
test_sampled = pd.DataFrame([process_test(fid) for fid in ss['Field_ID'].values])
preds = model.predict(test_sampled)
ss['Yield'] = preds
ss.to_csv(folder+'/Sub.csv', index=False)
| 34.727273 | 111 | 0.715782 | 404 | 2,674 | 4.596535 | 0.351485 | 0.041465 | 0.040388 | 0.016155 | 0.250942 | 0.250942 | 0.250942 | 0.250942 | 0.250942 | 0.250942 | 0 | 0.010787 | 0.167913 | 2,674 | 76 | 112 | 35.184211 | 0.82382 | 0.110696 | 0 | 0.338983 | 0 | 0 | 0.076889 | 0.009174 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033898 | false | 0 | 0.305085 | 0 | 0.372881 | 0.050847 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
1b02966f6bd91456827cf2718b30f4933e7e5684 | 480 | py | Python | example_app/blog/models.py | yswtrue/taggit-selectize | 7c417e5179629414d2ef8ed000b1d14d5980da5b | [
"BSD-3-Clause"
] | 77 | 2015-02-15T23:57:39.000Z | 2021-06-04T06:32:41.000Z | example_app/blog/models.py | yswtrue/taggit-selectize | 7c417e5179629414d2ef8ed000b1d14d5980da5b | [
"BSD-3-Clause"
] | 38 | 2015-02-16T08:11:27.000Z | 2021-11-11T15:08:19.000Z | example_app/blog/models.py | yswtrue/taggit-selectize | 7c417e5179629414d2ef8ed000b1d14d5980da5b | [
"BSD-3-Clause"
] | 29 | 2016-01-25T21:55:44.000Z | 2021-11-09T00:19:45.000Z | from django.db import models
from django.utils.encoding import python_2_unicode_compatible
from taggit_selectize.managers import TaggableManager
@python_2_unicode_compatible
class Blog(models.Model):
    title = models.CharField(max_length=255)
    slug = models.SlugField(editable=True, max_length=255, unique=True)
    body = models.TextField()
    date = models.DateTimeField(auto_now_add=True)

    tags = TaggableManager()

    def __str__(self):
        return self.title
| 28.235294 | 71 | 0.76875 | 62 | 480 | 5.709677 | 0.645161 | 0.056497 | 0.079096 | 0.135593 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019608 | 0.15 | 480 | 16 | 72 | 30 | 0.848039 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.25 | 0.083333 | 0.916667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
1b084957ce54dfc38011de897df30991e0e6f6c8 | 901 | py | Python | checkov/terraform/checks/resource/gcp/GoogleComputeBootDiskEncryption.py | shimont/checkov | 470e4998f3a0287cdb80b75a898927027c42e16b | [
"Apache-2.0"
] | null | null | null | checkov/terraform/checks/resource/gcp/GoogleComputeBootDiskEncryption.py | shimont/checkov | 470e4998f3a0287cdb80b75a898927027c42e16b | [
"Apache-2.0"
] | null | null | null | checkov/terraform/checks/resource/gcp/GoogleComputeBootDiskEncryption.py | shimont/checkov | 470e4998f3a0287cdb80b75a898927027c42e16b | [
"Apache-2.0"
] | null | null | null | from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
from checkov.common.models.enums import CheckResult, CheckCategories
class GoogleComputeBootDiskEncryption(BaseResourceCheck):
    def __init__(self):
        name = "Ensure VM disks for critical VMs are encrypted with CustomerSupplied Encryption Keys (CSEK)"
        id = "CKV_GCP_38"
        supported_resources = ['google_compute_instance']
        categories = [CheckCategories.ENCRYPTION]
        super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)

    def scan_resource_conf(self, conf):
        if 'boot_disk' in conf.keys():
            if 'disk_encryption_key_raw' in conf['boot_disk'][0] or 'kms_key_self_link' in conf['boot_disk'][0]:
                return CheckResult.PASSED
        return CheckResult.FAILED
check = GoogleComputeBootDiskEncryption()
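As a standalone illustration of the scan logic (deliberately not importing checkov's classes), the following sketch applies the same boot-disk test to hypothetical parsed Terraform conf dicts:

```python
# Standalone sketch of the scan logic; the conf dicts are made-up examples of
# the parsed-HCL shape checkov hands to scan_resource_conf.
def boot_disk_is_encrypted(conf):
    if 'boot_disk' in conf:
        boot_disk = conf['boot_disk'][0]
        return 'disk_encryption_key_raw' in boot_disk or 'kms_key_self_link' in boot_disk
    return False

passing = {'boot_disk': [{'kms_key_self_link': ['projects/p/locations/l/keyRings/r/cryptoKeys/k']}]}
failing = {'boot_disk': [{'initialize_params': [{}]}]}
print(boot_disk_is_encrypted(passing), boot_disk_is_encrypted(failing))  # True False
```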
| 40.954545 | 112 | 0.738069 | 102 | 901 | 6.245098 | 0.578431 | 0.084772 | 0.031397 | 0.043956 | 0.047096 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005405 | 0.17869 | 901 | 21 | 113 | 42.904762 | 0.855405 | 0 | 0 | 0 | 0 | 0 | 0.211987 | 0.051054 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0.066667 | 0.133333 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
1b0b5385379c866640e35c8a01868562a900ce1f | 8,606 | py | Python | asset/google/cloud/asset_v1/proto/asset_service_pb2_grpc.py | conwaychriscosmo/google-cloud-python | 8e7b7f8a5f4bb04d13f4d88ec3848f017faf834a | [
"Apache-2.0"
] | 1 | 2019-03-26T21:44:51.000Z | 2019-03-26T21:44:51.000Z | asset/google/cloud/asset_v1/proto/asset_service_pb2_grpc.py | conwaychriscosmo/google-cloud-python | 8e7b7f8a5f4bb04d13f4d88ec3848f017faf834a | [
"Apache-2.0"
] | 40 | 2019-07-16T10:04:48.000Z | 2020-01-20T09:04:59.000Z | asset/google/cloud/asset_v1/proto/asset_service_pb2_grpc.py | conwaychriscosmo/google-cloud-python | 8e7b7f8a5f4bb04d13f4d88ec3848f017faf834a | [
"Apache-2.0"
] | 2 | 2019-07-18T00:05:31.000Z | 2019-11-27T14:17:22.000Z | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc
from google.cloud.asset_v1.proto import (
    asset_service_pb2 as google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2,
)
from google.longrunning import (
    operations_pb2 as google_dot_longrunning_dot_operations__pb2,
)
from google.protobuf import empty_pb2 as google_dot_protobuf_dot_empty__pb2
class AssetServiceStub(object):
    """Asset service definition.
    """

    def __init__(self, channel):
        """Constructor.

        Args:
            channel: A grpc.Channel.
        """
        self.ExportAssets = channel.unary_unary(
            "/google.cloud.asset.v1.AssetService/ExportAssets",
            request_serializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.ExportAssetsRequest.SerializeToString,
            response_deserializer=google_dot_longrunning_dot_operations__pb2.Operation.FromString,
        )
        self.BatchGetAssetsHistory = channel.unary_unary(
            "/google.cloud.asset.v1.AssetService/BatchGetAssetsHistory",
            request_serializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.BatchGetAssetsHistoryRequest.SerializeToString,
            response_deserializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.BatchGetAssetsHistoryResponse.FromString,
        )
        self.CreateFeed = channel.unary_unary(
            "/google.cloud.asset.v1.AssetService/CreateFeed",
            request_serializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.CreateFeedRequest.SerializeToString,
            response_deserializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.Feed.FromString,
        )
        self.GetFeed = channel.unary_unary(
            "/google.cloud.asset.v1.AssetService/GetFeed",
            request_serializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.GetFeedRequest.SerializeToString,
            response_deserializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.Feed.FromString,
        )
        self.ListFeeds = channel.unary_unary(
            "/google.cloud.asset.v1.AssetService/ListFeeds",
            request_serializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.ListFeedsRequest.SerializeToString,
            response_deserializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.ListFeedsResponse.FromString,
        )
        self.UpdateFeed = channel.unary_unary(
            "/google.cloud.asset.v1.AssetService/UpdateFeed",
            request_serializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.UpdateFeedRequest.SerializeToString,
            response_deserializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.Feed.FromString,
        )
        self.DeleteFeed = channel.unary_unary(
            "/google.cloud.asset.v1.AssetService/DeleteFeed",
            request_serializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.DeleteFeedRequest.SerializeToString,
            response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString,
        )
class AssetServiceServicer(object):
    """Asset service definition.
    """

    def ExportAssets(self, request, context):
        """Exports assets with time and resource types to a given Cloud Storage
        location. The output format is newline-delimited JSON.
        This API implements the [google.longrunning.Operation][google.longrunning.Operation] API allowing you
        to keep track of the export.
        """
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details("Method not implemented!")
        raise NotImplementedError("Method not implemented!")

    def BatchGetAssetsHistory(self, request, context):
        """Batch gets the update history of assets that overlap a time window.
        For RESOURCE content, this API outputs history with asset in both
        non-delete or deleted status.
        For IAM_POLICY content, this API outputs history when the asset and its
        attached IAM POLICY both exist. This can create gaps in the output history.
        If a specified asset does not exist, this API returns an INVALID_ARGUMENT
        error.
        """
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details("Method not implemented!")
        raise NotImplementedError("Method not implemented!")

    def CreateFeed(self, request, context):
        """Creates a feed in a parent project/folder/organization to listen to its
        asset updates.
        """
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details("Method not implemented!")
        raise NotImplementedError("Method not implemented!")

    def GetFeed(self, request, context):
        """Gets details about an asset feed.
        """
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details("Method not implemented!")
        raise NotImplementedError("Method not implemented!")

    def ListFeeds(self, request, context):
        """Lists all asset feeds in a parent project/folder/organization.
        """
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details("Method not implemented!")
        raise NotImplementedError("Method not implemented!")

    def UpdateFeed(self, request, context):
        """Updates an asset feed configuration.
        """
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details("Method not implemented!")
        raise NotImplementedError("Method not implemented!")

    def DeleteFeed(self, request, context):
        """Deletes an asset feed.
        """
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details("Method not implemented!")
        raise NotImplementedError("Method not implemented!")
def add_AssetServiceServicer_to_server(servicer, server):
    rpc_method_handlers = {
        "ExportAssets": grpc.unary_unary_rpc_method_handler(
            servicer.ExportAssets,
            request_deserializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.ExportAssetsRequest.FromString,
            response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString,
        ),
        "BatchGetAssetsHistory": grpc.unary_unary_rpc_method_handler(
            servicer.BatchGetAssetsHistory,
            request_deserializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.BatchGetAssetsHistoryRequest.FromString,
            response_serializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.BatchGetAssetsHistoryResponse.SerializeToString,
        ),
        "CreateFeed": grpc.unary_unary_rpc_method_handler(
            servicer.CreateFeed,
            request_deserializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.CreateFeedRequest.FromString,
            response_serializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.Feed.SerializeToString,
        ),
        "GetFeed": grpc.unary_unary_rpc_method_handler(
            servicer.GetFeed,
            request_deserializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.GetFeedRequest.FromString,
            response_serializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.Feed.SerializeToString,
        ),
        "ListFeeds": grpc.unary_unary_rpc_method_handler(
            servicer.ListFeeds,
            request_deserializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.ListFeedsRequest.FromString,
            response_serializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.ListFeedsResponse.SerializeToString,
        ),
        "UpdateFeed": grpc.unary_unary_rpc_method_handler(
            servicer.UpdateFeed,
            request_deserializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.UpdateFeedRequest.FromString,
            response_serializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.Feed.SerializeToString,
        ),
        "DeleteFeed": grpc.unary_unary_rpc_method_handler(
            servicer.DeleteFeed,
            request_deserializer=google_dot_cloud_dot_asset__v1_dot_proto_dot_asset__service__pb2.DeleteFeedRequest.FromString,
            response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString,
        ),
    }
    generic_handler = grpc.method_handlers_generic_handler(
        "google.cloud.asset.v1.AssetService", rpc_method_handlers
    )
    server.add_generic_rpc_handlers((generic_handler,))
| 51.843373 | 145 | 0.757146 | 981 | 8,606 | 6.143731 | 0.156983 | 0.066368 | 0.064709 | 0.070516 | 0.721752 | 0.675626 | 0.653725 | 0.583707 | 0.529119 | 0.529119 | 0 | 0.009603 | 0.177202 | 8,606 | 165 | 146 | 52.157576 | 0.841548 | 0.126075 | 0 | 0.293103 | 1 | 0 | 0.103486 | 0.052148 | 0 | 0 | 0 | 0 | 0 | 1 | 0.077586 | false | 0 | 0.034483 | 0 | 0.12931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1b0be51bf909125818059e14554ee8ddc0df6805 | 357 | py | Python | src/main/python/cobra/mit/_loader.py | tm0nk/cobra-j | daeb85250e63cb21e9c02ecc2acfdf8ba1e44d6b | [
"Apache-2.0"
] | null | null | null | src/main/python/cobra/mit/_loader.py | tm0nk/cobra-j | daeb85250e63cb21e9c02ecc2acfdf8ba1e44d6b | [
"Apache-2.0"
] | null | null | null | src/main/python/cobra/mit/_loader.py | tm0nk/cobra-j | daeb85250e63cb21e9c02ecc2acfdf8ba1e44d6b | [
"Apache-2.0"
] | null | null | null | # Copyright (c) '2015' Cisco Systems, Inc. All Rights Reserved
import importlib
class ClassLoader(object):

    @classmethod
    def loadClass(cls, fqClassName):
        fqClassName = str(fqClassName)
        moduleName, className = fqClassName.rsplit('.', 1)
        module = importlib.import_module(moduleName)
        return getattr(module, className)
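As a hedged usage sketch, the same dynamic-loading pattern as `ClassLoader.loadClass` can be reproduced standalone and exercised against a standard-library class:

```python
import importlib
from collections import OrderedDict

# Standalone copy of the loadClass pattern: resolve a fully-qualified class
# name like "collections.OrderedDict" into the class object at runtime.
def load_class(fq_class_name):
    module_name, class_name = str(fq_class_name).rsplit('.', 1)
    return getattr(importlib.import_module(module_name), class_name)

print(load_class("collections.OrderedDict") is OrderedDict)  # True
```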
| 27.461538 | 62 | 0.694678 | 36 | 357 | 6.861111 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017731 | 0.210084 | 357 | 12 | 63 | 29.75 | 0.858156 | 0.168067 | 0 | 0 | 0 | 0 | 0.00339 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.25 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
1b0d7458eb7f29bf8431f38d4ac67830ed31e880 | 4,283 | py | Python | auth/oauth.py | sensi277/rebble-auth-py | 298b7b15f0835d8379e0ef132a129940061a9ca1 | [
"MIT"
] | null | null | null | auth/oauth.py | sensi277/rebble-auth-py | 298b7b15f0835d8379e0ef132a129940061a9ca1 | [
"MIT"
] | null | null | null | auth/oauth.py | sensi277/rebble-auth-py | 298b7b15f0835d8379e0ef132a129940061a9ca1 | [
"MIT"
] | null | null | null | from datetime import datetime, timedelta
import logging
from flask import Blueprint, abort, render_template, request, redirect
from flask_oauthlib.provider import OAuth2Provider
from oauthlib.common import generate_token as generate_random_token
from flask_login import current_user, login_required
from auth.login.base import demand_pebble
from .models import db, IssuedToken, AuthClient, User
from .redis import client as redis
import json
oauth_bp = Blueprint('oauth_bp', __name__)
oauth = OAuth2Provider()
class Grant:
    def __init__(self, client_id, code, user_id, scopes, redirect_uri):
        self.client_id = client_id
        self.code = code
        self.user_id = user_id
        self.scopes = scopes
        self.redirect_uri = redirect_uri

    @property
    def user(self):
        return User.query.filter_by(id=self.user_id).one()

    def delete(self):
        redis.delete(self.key)

    @property
    def key(self):
        return self.redis_key(self.client_id, self.code)

    def serialise(self):
        return json.dumps([self.client_id, self.code, self.user_id, self.scopes, self.redirect_uri]).encode('utf-8')

    @classmethod
    def deserialise(cls, serialised):
        return cls(*json.loads(serialised.decode('utf-8')))

    @classmethod
    def redis_key(cls, client_id, code):
        return f'grant-{client_id}-{code}'
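The serialise/deserialise pair can be sanity-checked in isolation; `GrantData` below is a hypothetical stripped-down copy of `Grant` with the redis and `User` dependencies removed:

```python
import json

# Hypothetical stripped-down copy of Grant, keeping only the (de)serialisation.
class GrantData:
    def __init__(self, client_id, code, user_id, scopes, redirect_uri):
        self.client_id = client_id
        self.code = code
        self.user_id = user_id
        self.scopes = scopes
        self.redirect_uri = redirect_uri

    def serialise(self):
        return json.dumps([self.client_id, self.code, self.user_id,
                           self.scopes, self.redirect_uri]).encode('utf-8')

    @classmethod
    def deserialise(cls, serialised):
        return cls(*json.loads(serialised.decode('utf-8')))

grant = GrantData("client-1", "abc123", 42, ["profile"], "https://example.com/cb")
restored = GrantData.deserialise(grant.serialise())
print(restored.code, restored.scopes)  # abc123 ['profile']
```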
@oauth.grantgetter
def load_grant(client_id, code):
    return Grant.deserialise(redis.get(Grant.redis_key(client_id, code)))


@oauth.grantsetter
def set_grant(client_id, code, request, *args, **kwargs):
    if not current_user.is_authenticated:
        logging.error("Tried to set a grant for a user who is not logged in!?")
        return None
    grant = Grant(client_id, code['code'], current_user.id, request.scopes, request.redirect_uri)
    redis.setex(grant.key, 100, grant.serialise())
    return grant
@oauth.tokengetter
def get_token(access_token=None, refresh_token=None):
    if access_token:
        # There are two valid 'tokens': ones we've issued, and the Pebble token.
        # Because we don't actually store the pebble token as an issued token, we have to
        # check for it here and invent a token if it's the one we tried to use.
        token = IssuedToken.query.filter_by(access_token=access_token).one_or_none()
        if token:
            return token
        user = User.query.filter_by(pebble_token=access_token).one_or_none()
        if user:
            return IssuedToken(access_token=access_token, refresh_token=None, expires=None, client_id=None, user=user,
                               scopes=['pebble', 'pebble_token', 'profile'])
    elif refresh_token:
        return IssuedToken.query.filter_by(refresh_token=refresh_token).one_or_none()


@oauth.tokensetter
def set_token(token, request, *args, **kwargs):
    expires_in = token.get('expires_in')
    expires = datetime.utcnow() + timedelta(seconds=expires_in)
    scopes = token['scope'].split(' ')
    token = IssuedToken(access_token=token['access_token'], refresh_token=token['refresh_token'], expires=expires,
                        client_id=request.client.client_id, user_id=request.user.id,
                        scopes=scopes)
    db.session.add(token)
    db.session.commit()
    return token


@oauth.clientgetter
def get_client(client_id):
    return AuthClient.query.filter_by(client_id=client_id).one()
@oauth_bp.route('/authorise', methods=['GET', 'POST'])
@login_required
@oauth.authorize_handler
def authorise(*args, **kwargs):
    return True


@oauth_bp.route('/token', methods=['GET', 'POST'])
@oauth.token_handler
def access_token():
    return None


@oauth_bp.route('/error')
def oauth_error():
    return render_template('oauth-error.html',
                           error=request.args.get('error', 'unknown'),
                           error_description=request.args.get('error_description', '')), 400


def generate_token(request, refresh_token=False):
    return generate_random_token()


def init_app(app):
    app.config['OAUTH2_PROVIDER_TOKEN_EXPIRES_IN'] = 315576000  # 10 years
    app.config['OAUTH2_PROVIDER_ERROR_ENDPOINT'] = 'oauth_bp.oauth_error'
    oauth.init_app(app)
    app.register_blueprint(oauth_bp, url_prefix='/oauth')
    app.extensions['csrf'].exempt(oauth_bp)
| 32.694656 | 118 | 0.698342 | 581 | 4,283 | 4.94148 | 0.26506 | 0.04737 | 0.029258 | 0.023685 | 0.052247 | 0.018809 | 0.018809 | 0 | 0 | 0 | 0 | 0.006638 | 0.190988 | 4,283 | 130 | 119 | 32.946154 | 0.821934 | 0.053467 | 0 | 0.085106 | 0 | 0 | 0.083724 | 0.02124 | 0 | 0 | 0 | 0 | 0 | 1 | 0.180851 | false | 0 | 0.106383 | 0.117021 | 0.478723 | 0.031915 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
1b1163b5c1c6d08f8b1ea7b7d3e6b064f710268e | 1,283 | py | Python | 52_最长不含重复字符的子字符串.py | wode1/-offer-python- | 22cb55b7a89f48c7d3686f0a7a822c16ff3976f1 | [
"Apache-2.0"
] | null | null | null | 52_最长不含重复字符的子字符串.py | wode1/-offer-python- | 22cb55b7a89f48c7d3686f0a7a822c16ff3976f1 | [
"Apache-2.0"
] | null | null | null | 52_最长不含重复字符的子字符串.py | wode1/-offer-python- | 22cb55b7a89f48c7d3686f0a7a822c16ff3976f1 | [
"Apache-2.0"
] | null | null | null | '''
Find the longest substring of a string that contains no repeated characters and
compute its length. Assume the string contains only the characters 'a'-'z'.
For example, in the string "arabcacfr" the longest substring with no repeated
characters is "acfr", whose length is 4.
'''
class Solution:
    def lengthOfLongestSubstring(self, s):
        if len(s) <= 1:
            return len(s)
        head, tail = 0, 0
        maxLen = 1
        while tail + 1 < len(s):
            tail += 1  # grow the window by one element
            if s[tail] not in s[head: tail]:  # no repeated character in the window: update the max length
                maxLen = max(maxLen, tail - head + 1)
            else:  # the window contains a repeat: move head until it no longer does
                while s[tail] in s[head: tail]:
                    head += 1
        return maxLen

    def lengthOfLongestSubdtring1(self, s):
        if len(s) <= 1:
            return len(s)
        head, tail = 0, 0
        maxLen = 1
        while tail + 1 < len(s):
            tail += 1  # grow the window by one element
            if s[tail] not in s[head:tail]:  # no repeated character in the window: update the max length
                maxLen = max(maxLen, tail - head + 1)
            else:  # the window contains a repeat: move head until it no longer does
                while s[tail] in s[head:tail]:
                    head += 1  # keep advancing the head pointer while the repeat remains
        return maxLen

    def lengthOfLongestSubdtring2(self, s):
        if len(s) <= 1:
            return len(s)
        head, tail = 0, 0  # two pointers
        maxLen = 1
        while tail + 1 < len(s):
            tail += 1
            if s[tail] not in s[head:tail]:
                maxLen = max(maxLen, tail - head + 1)
            else:
                while s[tail] in s[head:tail]:
                    head += 1
        return maxLen


s = Solution()
print(s.lengthOfLongestSubdtring2("yyabcdabjcabceg"))  # 5 ("cdabj")
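For reference, the quadratic window rescan used by the methods above can be avoided by remembering each character's last index. This O(n) variant is an alternative sketch, not part of the original solutions:

```python
def length_of_longest_substring(s):
    last_seen = {}  # character -> index of its most recent occurrence
    head = 0        # left edge of the current window
    max_len = 0
    for tail, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= head:
            head = last_seen[ch] + 1  # jump past the previous occurrence
        last_seen[ch] = tail
        max_len = max(max_len, tail - head + 1)
    return max_len

print(length_of_longest_substring("arabcacfr"))        # 4 ("acfr")
print(length_of_longest_substring("yyabcdabjcabceg"))  # 5 ("cdabj")
```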
| 12.578431 | 62 | 0.631333 | 184 | 1,283 | 4.402174 | 0.255435 | 0.044444 | 0.1 | 0.081481 | 0.588889 | 0.588889 | 0.588889 | 0.554321 | 0.476543 | 0.444444 | 0 | 0.028455 | 0.233048 | 1,283 | 101 | 63 | 12.70297 | 0.794715 | 0.197974 | 0 | 0.857143 | 0 | 0 | 0.015593 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0 | 0 | 0.238095 | 0.02381 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1b1717b44e9148ae97616955dad12e1104067a0e | 1,147 | py | Python | sh/scripts/view_faucet_account_keys.py | goral09/stests | 4de26485535cadf1b708188a7133a976536ccba3 | [
"Apache-2.0"
] | 4 | 2020-03-10T15:28:17.000Z | 2021-10-02T11:41:17.000Z | sh/scripts/view_faucet_account_keys.py | goral09/stests | 4de26485535cadf1b708188a7133a976536ccba3 | [
"Apache-2.0"
] | 1 | 2020-03-25T11:31:44.000Z | 2020-03-25T11:31:44.000Z | sh/scripts/view_faucet_account_keys.py | goral09/stests | 4de26485535cadf1b708188a7133a976536ccba3 | [
"Apache-2.0"
] | 9 | 2020-02-25T18:43:42.000Z | 2021-08-10T17:08:42.000Z | import argparse
from stests.core import cache
from stests.core import factory
from stests.core.utils import args_validator
from stests.core.utils import cli as utils
from stests.core.utils import env
from arg_utils import get_network
# CLI argument parser.
ARGS = argparse.ArgumentParser("Displays a keys asssociated with a network's faucet account.")
# CLI argument: network name.
ARGS.add_argument(
    "--net",
    default=env.get_network_name(),
    dest="network",
    help="Network name {type}{id}, e.g. nctl1.",
    type=args_validator.validate_network,
)
def main(args):
    """Entry point.

    :param args: Parsed CLI arguments.

    """
    network = get_network(args)

    utils.log(f"NETWORK: {network.name} -> faucet account-key = {network.faucet.account_key}")
    utils.log(f"NETWORK: {network.name} -> faucet account-hash = {network.faucet.account_hash}")
    utils.log(f"NETWORK: {network.name} -> faucet private-key = {network.faucet.private_key}")
    utils.log(f"NETWORK: {network.name} -> faucet public-key = {network.faucet.public_key}")


# Entry point.
if __name__ == '__main__':
    main(ARGS.parse_args())
| 27.97561 | 96 | 0.710549 | 158 | 1,147 | 5.018987 | 0.341772 | 0.0971 | 0.088272 | 0.080706 | 0.286255 | 0.191677 | 0.191677 | 0.150063 | 0 | 0 | 0 | 0.001037 | 0.159547 | 1,147 | 40 | 97 | 28.675 | 0.821577 | 0.096774 | 0 | 0 | 0 | 0 | 0.414201 | 0.110454 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.304348 | 0 | 0.347826 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
1b206ae3f5c5fdb2c9e5d1ecca0c6fbb09923902 | 5,152 | py | Python | test/acceptance/tools/fabric_utils.py | FIWARE/cloud.PaaS | 3ddec91c2b0ef3baca1dd2e596373cf0d4d341e3 | [
"Apache-2.0"
] | null | null | null | test/acceptance/tools/fabric_utils.py | FIWARE/cloud.PaaS | 3ddec91c2b0ef3baca1dd2e596373cf0d4d341e3 | [
"Apache-2.0"
] | null | null | null | test/acceptance/tools/fabric_utils.py | FIWARE/cloud.PaaS | 3ddec91c2b0ef3baca1dd2e596373cf0d4d341e3 | [
"Apache-2.0"
] | 2 | 2016-08-22T16:03:25.000Z | 2018-03-05T23:28:55.000Z | # -*- coding: utf-8 -*-
# Copyright 2014 Telefonica Investigación y Desarrollo, S.A.U
#
# This file is part of FI-WARE project.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
#
# You may obtain a copy of the License at:
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
#
# For those usages not covered by the Apache version 2.0 License please
# contact with opensource@tid.es
__author__ = "Javier Fernández"
__email__ = "jfernandez@tcpsi.es"
__copyright__ = "Copyright 2015"
__license__ = " Apache License, Version 2.0"
__version__ = "1.0.0"
import logging
from fabric.api import env, hide, run, get
from fabric.tasks import execute
from fabric.contrib import files
from StringIO import StringIO
__logger__ = logging.getLogger("qautils")
FABRIC_ASSERT_RESULT = u'<local-only>'
class FabricAssertions():

    @staticmethod
    def assert_file_exist(path):
        """
        Fabric assertion: Check if file exists on the current remote host.
        :param path (string): Absolute path to file
        :return (bool): True if given file exists on the current remote host (dir: PROVISION_ROOT_PATH).
        """
        return files.exists(path)

    @staticmethod
    def assert_content_in_file(path, expected_content):
        """
        Fabric assertion: Check if some text is in the given {dir_path}/{file}
        :param path (string): Absolute path to file
        :param expected_content (string): String to look for.
        :return (bool): True if given content is in file.
        """
        fd = StringIO()
        get(path, fd)
        file_content = fd.getvalue()
        return expected_content in file_content
class FabricUtils():

    def __init__(self, host_name, host_username, host_password=None, host_ssh_key=None):
        """
        Init Fabric client.
        :param host_name (string): Hostname
        :param host_username (string): Username
        :param host_password (string): Password
        :param host_ssh_key (string): SSH private key file
        :return: None
        """
        __logger__.info("Init Fabric to execute remote commands in '%s'. Credentials: '%s/%s'; SSH Key file: '%s'",
                        host_name, host_username, host_password, host_ssh_key)
        env.host_string = host_name
        env.user = host_username
        env.password = host_password
        env.key_filename = host_ssh_key

        self.fabric_assertions = FabricAssertions()

    @staticmethod
    def execute_command(command):
        """
        Execute a shell command on the current remote host.
        :param command (string): Command to be executed
        :return (string): Result of the remote execution, or None if some problem happens
        """
        __logger__.debug("Executing remote command: '%s'", command)
        try:
            with hide('running', 'stdout'):
                result = run(command)
            __logger__.debug("Result of execution: \n%s", result)
            return result
        except:
            __logger__.error("Some problem occurred executing command: '%s'", command)
            return None

    def file_exist(self, dir_path, file_name):
        """
        Fabric executor: Run method with assertion 'assert_file_exist' on the remote host
        :param dir_path (string): Path of the directory where file is located.
        :param file_name (string): File name
        :return (bool): True if the file exists (dir: PROVISION_ROOT_PATH)
        """
        path = "{}/{}".format(dir_path, file_name)
        __logger__.debug("Checking if remote file exists: '%s'", path)
        with hide('running', 'stdout'):
            success = execute(self.fabric_assertions.assert_file_exist, path=path)
        return success[FABRIC_ASSERT_RESULT]

    def content_in_file(self, dir_path, file_name, expected_content):
        """
        Fabric executor: Run method with assertion 'assert_content_in_file' on the remote host
        :param dir_path (string): Path of the directory where file is located.
        :param file_name (string): File name
        :param expected_content (string): String to be found in file
        :return (bool): True if file contains that content (dir: PROVISION_ROOT_PATH)
        """
        path = "{}/{}".format(dir_path, file_name)
        __logger__.debug("Checking if the content '%s' is in remote file: '%s'", expected_content, path)
        try:
            with hide('running', 'stdout'):
                success = execute(self.fabric_assertions.assert_content_in_file,
                                  path=path, expected_content=expected_content)
        except:
            __logger__.error("Problem when trying to access the remote file")
            return False
        return success[FABRIC_ASSERT_RESULT]

# File: hdri_encoding/metadata_encoding_utils.py (repo: microsoft/ConfigNet, MIT)
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
"""Common functions used by the metadata encoding scripts"""
import json


def load_metadata_dicts(metadata_files):
    metadata_dicts = []
    for metadata_file in metadata_files:
        with open(metadata_file, "r") as fp:
            metadata_dicts.append(json.load(fp))
    return metadata_dicts


def save_metadata_dicts(metadata_dicts, metadata_files):
    assert len(metadata_dicts) == len(metadata_files)
    for i in range(len(metadata_dicts)):
        with open(metadata_files[i], "w") as fp:
            json.dump(metadata_dicts[i], fp, indent=4)
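A quick round-trip sanity check for the two helpers above; the file name and metadata dict here are made up for illustration, and the helper bodies are repeated so the snippet runs on its own:

```python
import json
import os
import tempfile


# Minimal copies of the two helpers above so this sketch runs standalone.
def load_metadata_dicts(metadata_files):
    metadata_dicts = []
    for metadata_file in metadata_files:
        with open(metadata_file, "r") as fp:
            metadata_dicts.append(json.load(fp))
    return metadata_dicts


def save_metadata_dicts(metadata_dicts, metadata_files):
    assert len(metadata_dicts) == len(metadata_files)
    for i in range(len(metadata_dicts)):
        with open(metadata_files[i], "w") as fp:
            json.dump(metadata_dicts[i], fp, indent=4)


with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "frame_0001.json")  # hypothetical file name
    save_metadata_dicts([{"exposure": 1.5}], [path])
    roundtrip = load_metadata_dicts([path])
    assert roundtrip == [{"exposure": 1.5}]
```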

# File: code/segmentation/MELC_ImageProcessing.py (repo: perlfloccri/DeepFLEX, MIT)
from MELC.utils.myDatasets import generate_workingRaw_from_raw, MELCStructureDataset
import numpy as np
import tifffile as tiff
from MELC.utils.registration_daria import register
import matplotlib.pyplot as plt
import cv2
from MELC.utils.Files import create_folder
from skimage import img_as_float, img_as_uint
from MELC.utils.f_transformations import filterLowFrequencies, visualize_frequencies
import glob
from os.path import join
from config import *
import argparse

SEPARATOR = '/'


def parse_args():
    """Parse input arguments"""
    parser = argparse.ArgumentParser(description='Run Training of Mask R-CNN')
    parser.add_argument(
        '--path', dest='path', required=True,
        help='Config file for training (and optionally testing)')
    return parser.parse_args()
class MELCImageProcessing:

    def __init__(self, path: str, melc_structure_generated: bool = True):
        self._path = path
        self._path_registered_fluor = ''
        self._path_registered_bleach = ''
        self._path_registered_phase = ''
        self._path_registered_vis_fluor = ''
        self._path_registered_vis_bleach = ''
        self._path_registered_vis_phase = ''
        self._path_bg_corr = ''
        self._path_bg_corr_f = ''
        self._path_bg_corr_v_f = ''
        self._path_normalized_f = ''
        self._path_normalized_v_f = ''

        '''
        Extract MELC data and calibration data
        '''
        w_raw = self._path + SEPARATOR + 'w_raw'
        if not melc_structure_generated:
            generate_workingRaw_from_raw(self._path, w_raw)
        melc_dataset = MELCStructureDataset(w_raw)

        '''
        Sort by creation date
        '''
        self._melc_fluor = melc_dataset.fluor_pd.sort_values('order_index', ascending=True)
        self._melc_phase = melc_dataset.phase_pd.sort_values('order_index', ascending=True)
        self._melc_bleach = melc_dataset.bleach_pd.sort_values('order_index', ascending=True)
        self._melc_phasebleach = melc_dataset.phasebleach_pd.sort_values('order_index', ascending=True)

        self.create_folders()
        self._corrected_bf_im = self.generate_bg_correction_img()
        self.process_images()
    def create_folders(self):
        '''
        Create folders for registered images
        '''
        path_processed = join(self._path, 'processed')
        path_registered = join(path_processed, 'registered')
        self._path_registered_fluor = join(path_registered, 'fluor')
        self._path_registered_bleach = join(path_registered, 'bleach')
        self._path_registered_phase = join(path_registered, 'phase')
        self._path_registered_vis_fluor = join(path_registered, 'vis_fluor')
        self._path_registered_vis_bleach = join(path_registered, 'vis_bleach')
        self._path_registered_vis_phase = join(path_registered, 'vis_phase')
        create_folder(path_processed)
        create_folder(path_registered)
        create_folder(self._path_registered_fluor)
        create_folder(self._path_registered_bleach)
        create_folder(self._path_registered_phase)
        create_folder(self._path_registered_vis_fluor)
        create_folder(self._path_registered_vis_bleach)
        create_folder(self._path_registered_vis_phase)

        '''
        Create folders for background corrected images
        '''
        self._path_bg_corr = self._path + SEPARATOR + 'processed' + SEPARATOR + 'background_corr' + SEPARATOR
        self._path_bg_corr_f = self._path_bg_corr + 'fluor' + SEPARATOR
        self._path_bg_corr_v_f = self._path_bg_corr + 'vis_fluor' + SEPARATOR
        self._path_bg_corr_p = self._path_bg_corr + 'phase' + SEPARATOR
        self._path_bg_corr_v_p = self._path_bg_corr + 'vis_phase' + SEPARATOR
        create_folder(self._path_bg_corr)
        create_folder(self._path_bg_corr_f)
        create_folder(self._path_bg_corr_v_f)
        create_folder(self._path_bg_corr_p)
        create_folder(self._path_bg_corr_v_p)

        '''
        Create folders for normalized images
        '''
        path_normalized = self._path + SEPARATOR + 'processed' + SEPARATOR + 'normalized'
        self._path_normalized_f = path_normalized + SEPARATOR + 'fluor' + SEPARATOR
        self._path_normalized_v_f = path_normalized + SEPARATOR + 'vis_fluor' + SEPARATOR
        self._path_normalized_p = path_normalized + SEPARATOR + 'phase' + SEPARATOR
        self._path_normalized_v_p = path_normalized + SEPARATOR + 'vis_phase' + SEPARATOR
        create_folder(path_normalized)
        create_folder(self._path_normalized_f)
        create_folder(self._path_normalized_v_f)
        create_folder(self._path_normalized_p)
        create_folder(self._path_normalized_v_p)
    def generate_bg_correction_img(self):
        '''
        Create correction image for fluorescence and bleaching images
        '''
        brightfield_im = []
        darkframe_im = []
        filter_names = ['XF116-2', 'XF111-2']
        calibration_path = self._path + SEPARATOR + 'w_raw' + SEPARATOR + 'calibration' + SEPARATOR
        brightfield_im.append(np.int16(tiff.imread(glob.glob(calibration_path + '*_cal_b001_5000_XF116-2_000.tif'))))
        brightfield_im.append(np.int16(tiff.imread(glob.glob(calibration_path + '*_cal_b001_5000_XF111-2_000.tif'))))
        darkframe_im.append(np.int16(tiff.imread(glob.glob(calibration_path + '*_cal_d001_5000_XF116-2_000.tif'))))
        darkframe_im.append(np.int16(tiff.imread(glob.glob(calibration_path + '*_cal_d001_5000_XF111-2_000.tif'))))
        corrected_brightfield_im = [(brightfield_im[i] - darkframe_im[i]) for i in range(len(filter_names))]
        corrected_brightfield_im[0][corrected_brightfield_im[0] <= 0] = 0
        corrected_brightfield_im[1][corrected_brightfield_im[1] <= 0] = 0
        return corrected_brightfield_im
    def process_images(self):
        '''
        Registration, background correction and normalization of images
        '''

        '''
        Registration
        '''
        ref_image = tiff.imread(glob.glob(self._path + SEPARATOR + 'w_raw' + SEPARATOR + 'phase' + SEPARATOR + '*_Propidium iodide_200_XF116*.tif'))
        for i in range(0, (len(self._melc_fluor) - 1)):
            pb_idx = np.where(self._melc_phasebleach['order_index'] == self._melc_bleach.iloc[i]['order_index'])[0][0]
            phasebleach_image = tiff.imread(self._melc_phasebleach.iloc[pb_idx]['path'])
            bleach_image = tiff.imread(self._melc_bleach.iloc[i]['path'])
            registered_bleach_image = register(ref_image, phasebleach_image, bleach_image)
            filename_bleach = SEPARATOR + str(int(self._melc_bleach.iloc[i]['order_index'])) + '_' + '_'.join(
                self._melc_bleach.iloc[i]['fid'].split('_')[:-1]) + '.tif'
            tiff.imsave(self._path_registered_bleach + filename_bleach, registered_bleach_image)
            save_vis_img(registered_bleach_image, self._path_registered_vis_bleach, filename_bleach)

            p_idx = np.where(self._melc_phase['order_index'] == self._melc_fluor.iloc[i + 1]['order_index'])[0][0]
            phase_image = tiff.imread(self._melc_phase.iloc[p_idx]['path'])
            fluorescence_image = tiff.imread(self._melc_fluor.iloc[i + 1]['path'])
            registered_phase_image = register(ref_image, phase_image, phase_image)
            registered_fluor_image = register(ref_image, phase_image, fluorescence_image)
            filename_fluor = SEPARATOR + str(int(self._melc_fluor.iloc[i + 1]['order_index'])) + '_' + '_'.join(
                self._melc_fluor.iloc[i + 1]['fid'].split('_')[:-1]) + '.tif'
            tiff.imsave(self._path_registered_fluor + filename_fluor, registered_fluor_image)
            # NOTE: fixed to save the registered phase image here (the original saved the fluorescence image to the phase folder)
            tiff.imsave(self._path_registered_phase + filename_fluor, registered_phase_image)
            save_vis_img(registered_fluor_image, self._path_registered_vis_fluor, filename_fluor)
            save_vis_img(registered_phase_image, self._path_registered_vis_phase, filename_fluor)

            '''
            Background Correction
            '''
            bleach = np.int16(registered_bleach_image)
            fluor = np.int16(registered_fluor_image)
            phase = np.int16(registered_phase_image)
            if self._melc_fluor.iloc[i + 1]['filter'] == 'XF111-2':
                fluor -= self._corrected_bf_im[1]
                phase -= self._corrected_bf_im[1]
            else:
                fluor -= self._corrected_bf_im[0]
                phase -= self._corrected_bf_im[0]
            if self._melc_bleach.iloc[i]['filter'] == 'XF111-2':
                bleach -= self._corrected_bf_im[1]
            else:
                bleach -= self._corrected_bf_im[0]
            phase[phase < 0] = 0
            # Subtraction of bleaching image
            fluor_wo_bg = fluor - bleach
            fluor_wo_bg[fluor_wo_bg < 0] = 0
            tiff.imsave(self._path_bg_corr_f + filename_fluor, fluor_wo_bg)
            save_vis_img(fluor_wo_bg, self._path_bg_corr_v_f, filename_fluor)
            tiff.imsave(self._path_bg_corr_p + filename_fluor, phase)
            save_vis_img(phase, self._path_bg_corr_v_p, filename_fluor)

            '''
            Normalization
            '''
            fluor_wo_bg_normalized = melc_normalization(fluor_wo_bg)
            phase_bc_normalized = melc_normalization(phase)
            tiff.imsave(self._path_normalized_f + filename_fluor, fluor_wo_bg_normalized)
            save_vis_img(fluor_wo_bg_normalized, self._path_normalized_v_f, filename_fluor)
            tiff.imsave(self._path_normalized_p + filename_fluor, phase_bc_normalized)
            save_vis_img(phase_bc_normalized, self._path_normalized_v_p, filename_fluor)
def save_vis_img(img: np.ndarray, path: str, filename: str):
    img_float = img_as_float(img.astype(int))
    img_float = img_float - np.percentile(img_float[20:-20, 20:-20], 0.135)  # subtract background
    if not np.percentile(img_float[20:-20, 20:-20], 100 - 0.135) == 0.0:
        img_float /= np.percentile(img_float[20:-20, 20:-20], 100 - 0.135)  # normalize to 99.865% of max value
    img_float[img_float < 0] = 0
    img_float[img_float > 1] = 1  # cut-off high intensities
    tiff.imsave(path + filename, img_as_uint(img_float))


def melc_normalization(img: np.ndarray):
    sorted_img = np.sort(np.ravel(img))[::-1]
    img[img > sorted_img[3]] = sorted_img[3]  # cut off high intensities
    return img[15:-15, 15:-15]


'''
For visualization and inspection of images

***Using normalization
registered_u8 = cv2.convertScaleAbs(registered_image, alpha=(255.0/65535.0))
kernel = np.ones((2, 2), np.float32)/4
mean_filtered_img = cv2.filter2D(registered_float, -1, kernel)
normalized_img = cv2.normalize(mean_filtered_img, None, 0, 255, cv2.NORM_MINMAX)

***Using FFT - cut 0.00001 percent of highest frequencies
images = []
images.append(registered_float)
visualize_frequencies(images)
pixels = registered_float.size
high_intensity_pixels = 3
percentage_non_artificial = 100-high_intensity_pixels/pixels
filtered_img = filterLowFrequencies(registered_float, percentage_non_artificial)
images.append(filtered_img)
visualize_frequencies(images)

***Plot histogram
hist = cv2.calcHist([registered_image], [0], None, [65535], [0, 65535])
plt.plot(hist)
plt.xticks(np.arange(0, 65535, step=2000))
plt.grid(True)
plt.yscale('log')  # plt.xlim([0, 65535])
plt.show()
'''

if __name__ == '__main__':
    args = parse_args()
    MELCImageProcessing(args.path, melc_structure_generated=False)
    # raw_1 = r'G:\FORSCHUNG\LAB4\VISIOMICS\MELC\2019\3rdFinalPanel_18-6056\201912201349_1'
    # melc_processed_data = MELCImageProcessing(raw_1, melc_structure_generated=False)
    x = 0

# File: ryu/tests/server-for-consistency-test.py (repo: uiuc-srg/ryu, Apache-2.0)
#!/usr/bin/env python
"""
An echo server that uses select to handle multiple clients at a time.
Command-line parameter: port=listening port
"""
import select
import socket
import sys

if len(sys.argv) != 2:
    print("You need to specify a listening port!")
    sys.exit()

host = ''
port = int(sys.argv[1])
backlog = 5  # maximum number of queued connections
size = 1024  # buffer size
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((host, port))
server.listen(backlog)
input = [server, sys.stdin]
running = 1
print("Press any key to stop the server...")
while running:
    # The Python select module allows an application to wait for input from multiple sockets at a time.
    inputready, outputready, exceptready = select.select(input, [], [])
    for s in inputready:
        if s == server:
            # handle the server socket
            client, address = server.accept()
            print("New client at " + address[0] + ":" + str(address[1]))
            input.append(client)
        elif s == sys.stdin:
            # handle standard input
            junk = sys.stdin.readline()
            running = 0
        else:
            # handle all other sockets
            data = "[from h" + sys.argv[1][0] + "]: "
            # data = data + s.recv(size)
            if data:
                try:
                    # s.send(data)
                    print("recv: %d bytes" % len(s.recv(size)))
                except socket.error as e:  # Python 3 syntax (was `except socket.error, e:`)
                    s.close()
                    input.remove(s)
                    break
            else:
                s.close()
                input.remove(s)
server.close()
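The readiness loop above can be exercised without a real network peer; this is a standalone sketch using a local socket pair (an illustration only, not part of the test script):

```python
import select
import socket

# A standalone illustration of the select-based readiness loop used above,
# driven by a local socket pair instead of real network clients.
a, b = socket.socketpair()
b.send(b"ping")
# select blocks until at least one socket in the first list is readable
readable, _, _ = select.select([a], [], [], 1.0)
assert a in readable
data_received = a.recv(1024)
a.close()
b.close()
print(data_received)  # b'ping'
```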

# File: 2015/python/d07.py (repo: eduellery/adventofcode, MIT)
import re

lines = open('d07.in').read().split('\n')
calc = dict()
for line in lines:
    instruction = re.findall(r'(.*)\s->\s(\w*)', line)
    for ops, register in instruction:
        calc[register] = ops.strip().split(' ')


def calculate(register):
    try:
        return int(register)
    except ValueError:
        pass
    if register not in results:
        ops = calc[register]
        if len(ops) == 1:
            res = calculate(ops[0])
        else:
            op = ops[-2]
            if op == 'AND':
                res = calculate(ops[0]) & calculate(ops[2])
            elif op == 'OR':
                res = calculate(ops[0]) | calculate(ops[2])
            elif op == 'NOT':
                res = ~calculate(ops[1]) & 0xffff
            elif op == 'RSHIFT':
                res = calculate(ops[0]) >> calculate(ops[2])
            elif op == 'LSHIFT':
                res = calculate(ops[0]) << calculate(ops[2])
        results[register] = res
    return results[register]


results = dict()
original = calculate('a')
print('P1:', original)
results = dict()
results['b'] = original
modified = calculate('a')
print('P2:', modified)
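The `NOT` branch masks with `0xffff` because the puzzle's wires carry 16-bit unsigned values; a quick standalone illustration of that masked complement (not tied to the `d07.in` input file):

```python
def not16(x):
    # Bitwise complement restricted to a 16-bit unsigned wire value,
    # mirroring the `~calculate(ops[1]) & 0xffff` branch above.
    return ~x & 0xffff


assert not16(0) == 65535
assert not16(123) == 65412  # 65535 - 123
```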

# File: lib/durak/game/ai.py (repo: maximpertsov/durak-sockets, MIT)
from itertools import chain
from random import choice

from lib.durak.exceptions import IllegalAction


class AI:
    class CannotPerform(IllegalAction):
        pass

    def __init__(self, *, game):
        self._game = game

    def perform_action(self, *, player):
        """
        Have the user perform a random action.

        Do nothing if yielded.
        """
        _player = self._player(player)
        if self._player(_player) in self._game._yielded.get():
            raise self.CannotPerform("Already yielded")
        action_type, selected_action = choice(self._potential_actions(player=_player))
        selected_action(player=_player)
        return action_type

    def _potential_actions(self, *, player):
        # TODO: report action on event?
        defending = player == self._game.defender
        not_defending = not defending
        return list(
            chain(
                [("attacked", self._attack)] * 3 * not_defending,
                # self._pass_card,
                [("defended", self._defend)] * 9 * defending,
                [("gave_up", self._give_up)] * 1 * defending,
                [("yielded_attack", self._yield_attack)] * 7 * not_defending,
            )
        )

    def _attack(self, *, player):
        """
        Throw a random, legal attack card
        """
        try:
            potential_cards = list(
                set(self._game._legal_attacks._cards)
                & set(self._player(player).cards())
            )
            card = choice(potential_cards)
            self._game.legally_attack(player=player, cards=[card])
        except (IllegalAction, IndexError):
            raise self.CannotPerform

    def _yield_attack(self, *, player):
        try:
            self._game.yield_attack(player=player)
        except IllegalAction:
            raise self.CannotPerform

    def _defend(self, *, player):
        """
        Defend randomly
        """
        try:
            base_card, potential_cards = choice(
                list(self._game.legal_defenses._legal_defenses.items())
            )
            if not potential_cards:
                self._game.give_up(player=player)
                return
            card = choice(list(potential_cards))
            self._game.legally_defend(player=player, base_card=base_card, card=card)
        except (IllegalAction, IndexError):
            raise self.CannotPerform

    def _give_up(self, *, player):
        try:
            self._game.give_up(player=player)
        except IllegalAction:
            raise self.CannotPerform

    def serialize(self):
        return [player.serialize() for player in self.ordered()]

    def _player(self, player_or_id):
        return self._game.player(player_or_id)
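`_potential_actions` weights the random choice by repeating each `(name, handler)` pair a fixed number of times before sampling; a standalone sketch of that weighting trick with dummy handler names (illustration only, not the game's real handlers):

```python
from random import choice

# The AI above weights actions by repeating (name, handler) pairs in a list,
# then drawing one uniformly; repetition count acts as the weight.
actions = (
    [("attacked", "attack-handler")] * 3
    + [("yielded_attack", "yield-handler")] * 7
)
assert len(actions) == 10  # 3 attack entries + 7 yield entries
name, handler = choice(actions)
assert name in {"attacked", "yielded_attack"}
```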

# File: string/tinyUrl.py (repo: mengyangbai/leetcode, MIT)
import random

class Codec:
    base62 = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
    baseurl = 'http://tinyurl.com/'
    tinyTolong = {'1': '1'}
    longTotiny = {}

    def encode(self, longUrl):
        """Encodes a URL to a shortened URL.

        :type longUrl: str
        :rtype: str
        """
        if longUrl in Codec.longTotiny:
            return Codec.longTotiny[longUrl]
        shortUrl = '1'
        while shortUrl in Codec.tinyTolong:
            shortUrl = ''.join(random.choice(Codec.base62) for i in range(6))
        shortUrl = Codec.baseurl + shortUrl
        Codec.tinyTolong[shortUrl] = longUrl
        Codec.longTotiny[longUrl] = shortUrl
        return shortUrl

    def decode(self, shortUrl):
        """Decodes a shortened URL to its original URL.

        :type shortUrl: str
        :rtype: str
        """
        longUrl = Codec.tinyTolong[shortUrl]
        return longUrl
return longUrl | 28.727273 | 77 | 0.597046 | 94 | 948 | 6.021277 | 0.425532 | 0.079505 | 0.121908 | 0.106007 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027439 | 0.308017 | 948 | 33 | 78 | 28.727273 | 0.835366 | 0.150844 | 0 | 0 | 0 | 0 | 0.115226 | 0.085048 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.052632 | 0 | 0.578947 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
1b3057afe6cc042a2d06d8c6901dc21dbf505ca6 | 1,328 | py | Python | Lib/objc/MetalKit.py | kanishpatel/Pyto | feec7a1a54f635a6375fa7ede074ff35afbfbb95 | [
"MIT"
] | null | null | null | Lib/objc/MetalKit.py | kanishpatel/Pyto | feec7a1a54f635a6375fa7ede074ff35afbfbb95 | [
"MIT"
] | null | null | null | Lib/objc/MetalKit.py | kanishpatel/Pyto | feec7a1a54f635a6375fa7ede074ff35afbfbb95 | [
"MIT"
] | null | null | null | '''
Classes from the 'MetalKit' framework.
'''
try:
    from rubicon.objc import ObjCClass
except ValueError:

    def ObjCClass(name):
        return None


def _Class(name):
    try:
        return ObjCClass(name)
    except NameError:
        return None


MTKViewDisplayLinkTarget = _Class('MTKViewDisplayLinkTarget')
MTKMeshBuffer = _Class('MTKMeshBuffer')
MTKTextureLoaderASTCHelper = _Class('MTKTextureLoaderASTCHelper')
MTKMeshBufferZone = _Class('MTKMeshBufferZone')
MTKMeshBufferHolder = _Class('MTKMeshBufferHolder')
MTKMesh = _Class('MTKMesh')
MTKSubmesh = _Class('MTKSubmesh')
MTKMeshBufferAllocator = _Class('MTKMeshBufferAllocator')
MTKOffscreenDrawable = _Class('MTKOffscreenDrawable')
MTKTextureLoader = _Class('MTKTextureLoader')
MTKTextureIOBufferAllocator = _Class('MTKTextureIOBufferAllocator')
MTKTextureIOBuffer = _Class('MTKTextureIOBuffer')
MTKTextureIOBufferMap = _Class('MTKTextureIOBufferMap')
MTKTextureUploader = _Class('MTKTextureUploader')
MTKTextureLoaderData = _Class('MTKTextureLoaderData')
MTKTextureLoaderPVR = _Class('MTKTextureLoaderPVR')
MTKTextureLoaderKTX = _Class('MTKTextureLoaderKTX')
MTKTextureLoaderImageIO = _Class('MTKTextureLoaderImageIO')
MTKTextureLoaderMDL = _Class('MTKTextureLoaderMDL')
MTKTextureLoaderPVR3 = _Class('MTKTextureLoaderPVR3')
MTKView = _Class('MTKView')

# File: tests/test_datasets.py (repo: gau-nernst/CenterNet, MIT)
import random
import pytest
import numpy as np
import torch
from torch.utils.data import DataLoader

from centernet_lightning.datasets import COCODataset, VOCDataset, CrowdHumanDataset, MOTTrackingSequence, MOTTrackingDataset, KITTITrackingSequence, KITTITrackingDataset
from centernet_lightning.datasets.utils import get_default_detection_transforms, get_default_tracking_transforms, CollateDetection, CollateTracking
from centernet_lightning.datasets.builder import build_dataset, build_dataloader


def generate_detection_dataset_configs():
    pass


def generate_tracking_dataset_configs():
    pass


class TestDetectionDataset:
    dataset_configs = generate_detection_dataset_configs()

    def test_attributes(self, constructor, data_dir, split, name_to_label):
        dataset = constructor(data_dir, split, name_to_label)
        assert isinstance(len(dataset), int)

    def test_get_item(self, constructor, data_dir, split, name_to_label):
        dataset = constructor(data_dir, split, name_to_label)
        for item in random.sample(dataset, 10):
            assert isinstance(item["image"], np.ndarray)
            assert item["image"].shape[-1] == 3

            assert isinstance(item["bboxes"], list)
            for box in item["bboxes"]:
                assert len(box) == 4
                for x in box:
                    assert 0 <= x <= 1

            assert isinstance(item["labels"], list)
            assert len(item["labels"]) == len(item["bboxes"])

        transforms = get_default_detection_transforms()
        dataset = constructor(data_dir, split, name_to_label, transforms=transforms)
        for item in random.sample(dataset, 10):
            assert isinstance(item["image"], torch.Tensor)
            assert item["image"].shape[0] == 3

            assert isinstance(item["bboxes"], tuple)
            assert isinstance(item["labels"], tuple)
            assert len(item["bboxes"]) == len(item["labels"])

    def test_dataloader(self, constructor, data_dir, split, name_to_label):
        batch_size = 4
        transforms = get_default_detection_transforms()
        collate_fn = CollateDetection()
        dataset = constructor(data_dir, split, name_to_label, transforms=transforms)
        dataloader = DataLoader(dataset, batch_size=batch_size, collate_fn=collate_fn)
        batch = next(iter(dataloader))

        img = batch["image"]
        assert isinstance(img, torch.Tensor)
        assert img.shape[0] == batch_size

        bboxes = batch["bboxes"]
        assert isinstance(bboxes, torch.Tensor)
        assert bboxes.shape[0] == batch_size
        assert bboxes.max() <= 1
        assert bboxes.min() >= 0

        labels = batch["labels"]
        assert isinstance(labels, torch.Tensor)
        assert labels.shape[0] == batch_size

        mask = batch["mask"]
        assert isinstance(mask, torch.Tensor)
        assert mask.shape[0] == batch_size
        for x in mask.view(-1):
            assert x == 0 or x == 1

        assert bboxes.shape[1] == labels.shape[1] == mask.shape[1]

    def test_builder(self):
        pass


class TestTrackingDataset:
    dataset_configs = generate_tracking_dataset_configs()

    def test_attributes(self):
        pass

    def test_get_item(self):
        pass

    def test_dataloader(self):
        pass

# File: docs/run.py (repo: imrehg/dash-bootstrap-components, Apache-2.0)
from werkzeug.middleware.dispatcher import DispatcherMiddleware
from components_page import register_apps as register_component_apps
from examples import register_apps as register_example_apps
from markdown_to_html import convert_all_markdown_files
from server import create_server
convert_all_markdown_files()
server = create_server()
component_routes = register_component_apps()
example_routes = register_example_apps()
routes = {**component_routes, **example_routes}
application = DispatcherMiddleware(
    server, {slug: app.server for slug, app in routes.items()}
)

if __name__ == "__main__":
    import os

    from werkzeug.serving import run_simple

    os.environ["DBC_DOCS_MODE"] = "dev"
    run_simple("localhost", 8888, application, use_reloader=True)

# File: fledgling/app/entity/task.py (repo: PracticalCleanArchitecture/fledgling, MIT)
# -*- coding: utf8 -*-
from abc import ABC, abstractmethod
from datetime import datetime
from enum import Enum
from typing import List, Optional, Tuple, Union


class TaskRepositoryError(Exception):
    pass


class TaskStatus(Enum):
    CREATED = 1
    FINISHED = 2


class Task:
    def __init__(self):
        self.brief = None
        self.id = None
        self.keywords = []
        self.status = None

    @classmethod
    def new(cls, *, brief, id_=None, keywords: List[str] = None,
            status: TaskStatus = None) -> 'Task':
        instance = Task()
        instance.brief = brief
        instance.id = id_
        instance.keywords = keywords or []
        instance.status = status
        return instance

    def is_finished(self) -> bool:
        return self.status == TaskStatus.FINISHED


class ITaskRepository(ABC):
    @abstractmethod
    def add(self, task: Task) -> Task:
        pass

    @abstractmethod
    def get_by_id(self, id_) -> Union[None, Task]:
        """
        Look up the specified task.
        """
        pass

    @abstractmethod
    def list(self, *, keyword: Optional[str] = None, page, per_page,
             plan_trigger_time: Optional[Tuple[datetime, datetime]] = None,
             status: Optional[int] = None,
             task_ids: Union[None, List[int]] = None):
        """
        List tasks.
        """
        pass

    @abstractmethod
    def remove(self, *, task_id: int):
        pass
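A standalone sketch of how the `Task.new` factory and `is_finished` are used; the entity is repeated in condensed form so the snippet runs on its own, and the brief/keyword values are made up:

```python
from enum import Enum
from typing import List


class TaskStatus(Enum):
    CREATED = 1
    FINISHED = 2


class Task:
    # Condensed copy of the entity above so this sketch runs standalone.
    def __init__(self):
        self.brief = None
        self.id = None
        self.keywords = []
        self.status = None

    @classmethod
    def new(cls, *, brief, id_=None, keywords: List[str] = None,
            status: TaskStatus = None) -> 'Task':
        instance = Task()
        instance.brief = brief
        instance.id = id_
        instance.keywords = keywords or []
        instance.status = status
        return instance

    def is_finished(self) -> bool:
        return self.status == TaskStatus.FINISHED


task = Task.new(brief="write report", keywords=["work"], status=TaskStatus.CREATED)
assert not task.is_finished()
done = Task.new(brief="ship release", status=TaskStatus.FINISHED)
assert done.is_finished()
```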

# File: ElectroWeakAnalysis/ZMuMu/python/ZMuMuCategoriesPlots_cff.py (repo: SWuchterl/cmssw, Apache-2.0)
import FWCore.ParameterSet.Config as cms
from ElectroWeakAnalysis.ZMuMu.ZMuMuCategoriesSequences_cff import *
import copy
zPlots = cms.PSet(
histograms = cms.VPSet(
cms.PSet(
min = cms.untracked.double(0.0),
max = cms.untracked.double(200.0),
nbins = cms.untracked.int32(200),
name = cms.untracked.string("zMass"),
description = cms.untracked.string("Z mass [GeV/c^{2}]"),
plotquantity = cms.untracked.string("mass")
),
cms.PSet(
min = cms.untracked.double(0.0),
max = cms.untracked.double(200.0),
nbins = cms.untracked.int32(200),
name = cms.untracked.string("mu1Pt"),
description = cms.untracked.string("Highest muon p_{t} [GeV/c]"),
plotquantity = cms.untracked.string("max(daughter(0).pt,daughter(1).pt)")
),
cms.PSet(
min = cms.untracked.double(0.0),
max = cms.untracked.double(200.0),
nbins = cms.untracked.int32(200),
name = cms.untracked.string("mu2Pt"),
description = cms.untracked.string("Lowest muon p_{t} [GeV/c]"),
plotquantity = cms.untracked.string("min(daughter(0).pt,daughter(1).pt)")
)
)
)
# ZMuMu at least 1 HLT + 2 track-iso (Shape)
goodZToMuMuPlotsLoose = cms.EDAnalyzer(
"CandViewHistoAnalyzer",
zPlots,
src = cms.InputTag("goodZToMuMuAtLeast1HLTLoose")
)
goodZToMuMuPlots = cms.EDAnalyzer(
"CandViewHistoAnalyzer",
zPlots,
src = cms.InputTag("goodZToMuMuAtLeast1HLT")
)
## #### plot for loose cuts
## goodZToMuMuSequence.__iadd__(goodZToMuMuPlots)
## goodZToMuMuSequence.setLabel("goodZToMuMuAtLeast1HLT")
## #ZMuMu 2 HLT + 2 track-iso
## goodZToMuMu2HLTPlots = copy.deepcopy(goodZToMuMuPlots)
## goodZToMuMu2HLTPlots.src = cms.InputTag("goodZToMuMu2HLT")
## goodZToMuMu2HLTSequence.__iadd__(goodZToMuMu2HLTPlots)
## goodZToMuMu2HLTSequence.setLabel("goodZToMuMu2HLT")
## #ZMuMu 1 HLT + 2 track-iso
## goodZToMuMu1HLTPlots = copy.deepcopy(goodZToMuMuPlots)
## goodZToMuMu1HLTPlots.src = cms.InputTag("goodZToMuMu1HLT")
## goodZToMuMu1HLTSequence.__iadd__(goodZToMuMu1HLTPlots)
## #ZMuMu at least 1 HLT + at least 1 NON track-iso
## nonIsolatedZToMuMuPlots = copy.deepcopy(goodZToMuMuPlots)
## nonIsolatedZToMuMuPlots.src = cms.InputTag("nonIsolatedZToMuMuAtLeast1HLT")
## nonIsolatedZToMuMuSequence.__iadd__(nonIsolatedZToMuMuPlots)
## #ZMuMu at least 1 HLT + 1 NON track-iso
## oneNonIsolatedZToMuMuPlots = copy.deepcopy(goodZToMuMuPlots)
## oneNonIsolatedZToMuMuPlots.src = cms.InputTag("oneNonIsolatedZToMuMuAtLeast1HLT")
## oneNonIsolatedZToMuMuSequence.__iadd__(oneNonIsolatedZToMuMuPlots)
## #ZMuMu at least 1 HLT + 2 NON track-iso
## twoNonIsolatedZToMuMuPlots = copy.deepcopy(goodZToMuMuPlots)
## twoNonIsolatedZToMuMuPlots.src = cms.InputTag("twoNonIsolatedZToMuMuAtLeast1HLT")
## twoNonIsolatedZToMuMuSequence.__iadd__(twoNonIsolatedZToMuMuPlots)
## #ZMuSta First HLT + 2 track-iso
## goodZToMuMuOneStandAloneMuonPlots = copy.deepcopy(goodZToMuMuPlots)
## goodZToMuMuOneStandAloneMuonPlots.src = cms.InputTag("goodZToMuMuOneStandAloneMuonFirstHLT")
## goodZToMuMuOneStandAloneMuonSequence.__iadd__(goodZToMuMuOneStandAloneMuonPlots)
## #ZMuTk First HLT + 2 track-iso
## goodZToMuMuOneTrackPlots = copy.deepcopy(goodZToMuMuPlots)
## goodZToMuMuOneTrackPlots.src = cms.InputTag("goodZToMuMuOneTrackFirstHLT")
## goodZToMuMuOneTrackSequence.__iadd__(goodZToMuMuOneTrackPlots)
## #ZMuMu same charge
## goodZToMuMuSameChargeAtLeast1HLTPlots = copy.deepcopy(goodZToMuMuPlots)
## goodZToMuMuSameChargeAtLeast1HLTPlots.src = cms.InputTag("goodZToMuMuSameChargeAtLeast1HLT")
## goodZToMuMuSameChargeSequence.__iadd__(goodZToMuMuSameChargeAtLeast1HLTPlots)
## goodZToMuMuSameCharge2HLTPlots = copy.deepcopy(goodZToMuMuPlots)
## goodZToMuMuSameCharge2HLTPlots.src = cms.InputTag("goodZToMuMuSameCharge2HLT")
## goodZToMuMuSameCharge2HLTSequence.__iadd__(goodZToMuMuSameCharge2HLTPlots)
## goodZToMuMuSameCharge1HLTPlots = copy.deepcopy(goodZToMuMuPlots)
## goodZToMuMuSameCharge1HLTPlots.src = cms.InputTag("goodZToMuMuSameCharge1HLT")
## goodZToMuMuSameCharge1HLTSequence.__iadd__(goodZToMuMuSameCharge1HLTPlots)
#### plot for tight cuts
goodZToMuMuPath.__iadd__(goodZToMuMuPlots)
goodZToMuMuPath.setLabel("goodZToMuMuAtLeast1HLT")
#ZMuMu 2 HLT + 2 track-iso
goodZToMuMu2HLTPlots = copy.deepcopy(goodZToMuMuPlots)
goodZToMuMu2HLTPlots.src = cms.InputTag("goodZToMuMu2HLT")
goodZToMuMu2HLTPath.__iadd__(goodZToMuMu2HLTPlots)
goodZToMuMu2HLTPath.setLabel("goodZToMuMu2HLT")
#ZMuMu 1 HLT + 2 track-iso
goodZToMuMu1HLTPlots = copy.deepcopy(goodZToMuMuPlots)
goodZToMuMu1HLTPlots.src = cms.InputTag("goodZToMuMu1HLT")
goodZToMuMu1HLTPath.__iadd__(goodZToMuMu1HLTPlots)
##### plot for AB and BB region
goodZToMuMuAB1HLTPlots = copy.deepcopy(goodZToMuMuPlots)
goodZToMuMuAB1HLTPlots.src = cms.InputTag("goodZToMuMuAB1HLT")
goodZToMuMuAB1HLTPath.__iadd__(goodZToMuMuAB1HLTPlots)
goodZToMuMuBB2HLTPlots = copy.deepcopy(goodZToMuMuPlots)
goodZToMuMuBB2HLTPlots.src = cms.InputTag("goodZToMuMuBB2HLT")
goodZToMuMuBB2HLTPath.__iadd__(goodZToMuMuBB2HLTPlots)
#ZMuMu at least 1 HLT + at least 1 NON track-iso
nonIsolatedZToMuMuPlots = copy.deepcopy(goodZToMuMuPlots)
nonIsolatedZToMuMuPlots.src = cms.InputTag("nonIsolatedZToMuMuAtLeast1HLT")
nonIsolatedZToMuMuPath.__iadd__(nonIsolatedZToMuMuPlots)
#ZMuMu at least 1 HLT + 1 NON track-iso
oneNonIsolatedZToMuMuPlots = copy.deepcopy(goodZToMuMuPlots)
oneNonIsolatedZToMuMuPlots.src = cms.InputTag("oneNonIsolatedZToMuMuAtLeast1HLT")
oneNonIsolatedZToMuMuPath.__iadd__(oneNonIsolatedZToMuMuPlots)
#ZMuMu at least 1 HLT + 2 NON track-iso
twoNonIsolatedZToMuMuPlots = copy.deepcopy(goodZToMuMuPlots)
twoNonIsolatedZToMuMuPlots.src = cms.InputTag("twoNonIsolatedZToMuMuAtLeast1HLT")
twoNonIsolatedZToMuMuPath.__iadd__(twoNonIsolatedZToMuMuPlots)
#ZMuSta global HLT + 2 track-iso
goodZToMuMuOneStandAloneMuonPlots = copy.deepcopy(goodZToMuMuPlots)
goodZToMuMuOneStandAloneMuonPlots.src = cms.InputTag("goodZToMuMuOneStandAloneMuonFirstHLT")
goodZToMuMuOneStandAloneMuonPath.__iadd__(goodZToMuMuOneStandAloneMuonPlots)
#ZMuTk First HLT + 2 track-iso
goodZToMuMuOneTrackPlots = copy.deepcopy(goodZToMuMuPlots)
goodZToMuMuOneTrackPlots.src = cms.InputTag("goodZToMuMuOneTrackFirstHLT")
goodZToMuMuOneTrackPath.__iadd__(goodZToMuMuOneTrackPlots)
#ZMuTkMu global HLT + 2 track-iso
goodZToMuMuOneTrackerMuonPlots = copy.deepcopy(goodZToMuMuPlots)
goodZToMuMuOneTrackerMuonPlots.src = cms.InputTag("goodZToMuMuOneTrackerMuonFirstHLT")
goodZToMuMuOneTrackerMuonPath.__iadd__(goodZToMuMuOneTrackerMuonPlots)
#ZMuMu same charge
goodZToMuMuSameChargeAtLeast1HLTPlots = copy.deepcopy(goodZToMuMuPlots)
goodZToMuMuSameChargeAtLeast1HLTPlots.src = cms.InputTag("goodZToMuMuSameChargeAtLeast1HLT")
goodZToMuMuSameChargePath.__iadd__(goodZToMuMuSameChargeAtLeast1HLTPlots)
goodZToMuMuSameCharge2HLTPlots = copy.deepcopy(goodZToMuMuPlots)
goodZToMuMuSameCharge2HLTPlots.src = cms.InputTag("goodZToMuMuSameCharge2HLT")
goodZToMuMuSameCharge2HLTPath.__iadd__(goodZToMuMuSameCharge2HLTPlots)
goodZToMuMuSameCharge1HLTPlots = copy.deepcopy(goodZToMuMuPlots)
goodZToMuMuSameCharge1HLTPlots.src = cms.InputTag("goodZToMuMuSameCharge1HLT")
goodZToMuMuSameCharge1HLTPath.__iadd__(goodZToMuMuSameCharge1HLTPlots)
| 34.056338 | 95 | 0.809347 | 601 | 7,254 | 9.597338 | 0.214642 | 0.026006 | 0.06068 | 0.020804 | 0.665742 | 0.659327 | 0.648752 | 0.630028 | 0.630028 | 0.616158 | 0 | 0.021115 | 0.092501 | 7,254 | 212 | 96 | 34.216981 | 0.855081 | 0.397849 | 0 | 0.216867 | 0 | 0 | 0.145579 | 0.106303 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.036145 | 0 | 0.036145 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
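Every category in the config above reuses one analyzer definition by deep-copying it and overriding `src`. Outside CMSSW the same clone-and-override pattern can be sketched with plain Python — the `Analyzer` class here is a stand-in for `cms.EDAnalyzer`, not CMSSW API:

```python
import copy


class Analyzer:
    """Stand-in for cms.EDAnalyzer: a config object carrying a type and an input label."""

    def __init__(self, type_, src):
        self.type_ = type_
        self.src = src


# Base histogram analyzer, mirroring goodZToMuMuPlots above.
base_plots = Analyzer("CandViewHistoAnalyzer", src="goodZToMuMuAtLeast1HLT")

# Clone-and-retag: deep-copy the shared configuration, then point the
# clone at a different input collection, leaving the original untouched.
plots_2hlt = copy.deepcopy(base_plots)
plots_2hlt.src = "goodZToMuMu2HLT"

print(base_plots.src, plots_2hlt.src)
```

`copy.deepcopy` matters here: a plain assignment would alias the same object, so retagging the clone would silently retag the original as well.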
1b588099d4b30e26fb0a274ed99ed4ea86b7a285 | 1,038 | py | Python | appcred.py | tamalsaha/keystone-demo | dd13b3d283565f72f38d696b235be4e2ad2c2845 | [
"Apache-2.0"
] | 1 | 2018-07-31T09:25:53.000Z | 2018-07-31T09:25:53.000Z | appcred.py | tamalsaha/keystone-demo | dd13b3d283565f72f38d696b235be4e2ad2c2845 | [
"Apache-2.0"
] | null | null | null | appcred.py | tamalsaha/keystone-demo | dd13b3d283565f72f38d696b235be4e2ad2c2845 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client
from pprint import pprint
import os
def main():
AUTH_URL = os.getenv('OS_AUTH_URL')
USERNAME = os.getenv('OS_USERNAME')
USER_DOMAIN_NAME = os.getenv('OS_USER_DOMAIN_NAME')
PASSWD = os.getenv('OS_PASSWORD')
PROJECT_ID = os.getenv('OS_PROJECT_ID')
PROJECT_NAME = os.getenv('OS_PROJECT_NAME')
pprint('AUTH_URL = ' + AUTH_URL)
pprint('USERNAME = ' + USERNAME)
pprint('USER_DOMAIN_NAME = ' + USER_DOMAIN_NAME)
pprint('PASSWD = ' + PASSWD)
pprint('PROJECT_ID = ' + PROJECT_ID)
pprint('PROJECT_NAME = ' + PROJECT_NAME)
auth = v3.Password(auth_url=AUTH_URL,
username=USERNAME,
user_domain_name=USER_DOMAIN_NAME,
password=PASSWD,
project_id=PROJECT_ID,
project_name=PROJECT_NAME)
sess = session.Session(auth=auth)
keystone = client.Client(session=sess)
app_cred = keystone.application_credentials.create(
name='kubernetes')
pprint(app_cred.to_dict())
if __name__ == "__main__":
main()
| 27.315789 | 52 | 0.754335 | 145 | 1,038 | 5.068966 | 0.255172 | 0.057143 | 0.081633 | 0.059864 | 0.07619 | 0.07619 | 0 | 0 | 0 | 0 | 0 | 0.005495 | 0.123314 | 1,038 | 37 | 53 | 28.054054 | 0.802198 | 0.015414 | 0 | 0 | 0 | 0 | 0.17238 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032258 | false | 0.129032 | 0.16129 | 0 | 0.193548 | 0.258065 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
1b5abd620bf6c3fd9ffce5839f82ac6ace822f1a | 664 | py | Python | data/admin.py | FSavoy/visuo-server | d9c93ec7ae9dd033f3f0290381ddbac413bb6f9a | [
"BSD-3-Clause"
] | 2 | 2017-11-16T08:32:46.000Z | 2018-04-02T13:36:42.000Z | data/admin.py | FSavoy/visuo-server | d9c93ec7ae9dd033f3f0290381ddbac413bb6f9a | [
"BSD-3-Clause"
] | null | null | null | data/admin.py | FSavoy/visuo-server | d9c93ec7ae9dd033f3f0290381ddbac413bb6f9a | [
"BSD-3-Clause"
] | 2 | 2017-11-16T08:33:52.000Z | 2021-05-12T06:31:54.000Z | from django.contrib.gis import admin
# Register your models here.
from models import SkyPicture, MeasuringDevice, WeatherMeasurement, RadiosondeMeasurement
from django.contrib.gis import forms
# Custom interface for selecting the location of devices
class MeasuringDeviceAdminForm(forms.ModelForm):
location = forms.PointField(widget=forms.OSMWidget(attrs={
'display_raw': True}))
class MeasuringDeviceAdmin(admin.GeoModelAdmin):
form = MeasuringDeviceAdminForm
admin.site.register(SkyPicture)
admin.site.register(WeatherMeasurement)
admin.site.register(MeasuringDevice, MeasuringDeviceAdmin)
admin.site.register(RadiosondeMeasurement) | 34.947368 | 89 | 0.813253 | 68 | 664 | 7.926471 | 0.544118 | 0.06679 | 0.12616 | 0.074212 | 0.096475 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111446 | 664 | 19 | 90 | 34.947368 | 0.913559 | 0.121988 | 0 | 0 | 0 | 0 | 0.018933 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |