# -*- coding: utf-8 -*-
try:
    # Python 2.7
    from collections import OrderedDict
except:
    # Python 2.6
    from gluon.contrib.simplejson.ordered_dict import OrderedDict
from gluon import current
from gluon.html import A, URL
from gluon.storage import Storage
from s3 import s3_fullname
T = current.T
settings = current.deployment_settings
"""
Template settings for NYC Prepared
"""
# Pre-Populate
settings.base.prepopulate = ("NYC",)
settings.base.system_name = T("NYC Prepared")
settings.base.system_name_short = T("NYC Prepared")
# Theme (folder to use for views/layout.html)
settings.base.theme = "NYC"
settings.ui.formstyle_row = "bootstrap"
settings.ui.formstyle = "bootstrap"
settings.ui.filter_formstyle = "table_inline"
settings.msg.parser = "NYC"
# Uncomment to Hide the language toolbar
settings.L10n.display_toolbar = False
# Default timezone for users
settings.L10n.utc_offset = "UTC -0500"
# Uncomment these to use US-style dates in English
settings.L10n.date_format = "%m-%d-%Y"
# Start week on Sunday
settings.L10n.firstDOW = 0
# Number formats (defaults to ISO 31-0)
# Decimal separator for numbers (defaults to ,)
settings.L10n.decimal_separator = "."
# Thousands separator for numbers (defaults to space)
settings.L10n.thousands_separator = ","
# Default Country Code for telephone numbers
settings.L10n.default_country_code = 1
# Enable this to change the label for 'Mobile Phone'
settings.ui.label_mobile_phone = "Cell Phone"
# Enable this to change the label for 'Postcode'
settings.ui.label_postcode = "ZIP Code"
# Uncomment to disable responsive behavior of datatables
# - Disabled until tested
settings.ui.datatables_responsive = False
# PDF to Letter
settings.base.paper_size = T("Letter")
# Restrict the Location Selector to just certain countries
# NB This can also be over-ridden for specific contexts later
# e.g. Activities filtered to those of parent Project
settings.gis.countries = ("US",)
settings.fin.currencies = {
    "USD" : T("United States Dollars"),
}
settings.L10n.languages = OrderedDict([
    ("en", "English"),
    ("es", "Español"),
])
# Authentication settings
# These settings should be changed _after_ the 1st (admin) user is
# registered in order to secure the deployment
# Should users be allowed to register themselves?
settings.security.self_registration = "index"
# Do new users need to verify their email address?
settings.auth.registration_requires_verification = True
# Do new users need to be approved by an administrator prior to being able to login?
settings.auth.registration_requires_approval = True
# Always notify the approver of a new (verified) user, even if the user is automatically approved
#settings.auth.always_notify_approver = False
# Uncomment this to request the Mobile Phone when a user registers
settings.auth.registration_requests_mobile_phone = True
# Uncomment this to request the Organisation when a user registers
settings.auth.registration_requests_organisation = True
# Uncomment this to request the Site when a user registers
#settings.auth.registration_requests_site = True
# Roles that newly-registered users get automatically
#settings.auth.registration_roles = { 0: ["comms_dispatch"]}
#settings.auth.registration_link_user_to = {"staff": T("Staff"),
#                                           #"volunteer": T("Volunteer")
#                                           }
settings.auth.registration_link_user_to_default = "staff"
settings.security.policy = 5 # Controller, Function & Table ACLs
# Enable this to have Open links in IFrames open a full page in a new tab
settings.ui.iframe_opens_full = True
settings.ui.label_attachments = "Media"
settings.ui.update_label = "Edit"
# Uncomment to disable checking that LatLons are within boundaries of their parent
#settings.gis.check_within_parent_boundaries = False
# GeoNames username
settings.gis.geonames_username = "eden_nyc"
# Uncomment to show created_by/modified_by using Names not Emails
settings.ui.auth_user_represent = "name"
# Record Approval
settings.auth.record_approval = True
settings.auth.record_approval_required_for = ("org_organisation",)
# -----------------------------------------------------------------------------
# Audit
def audit_write(method, tablename, form, record, representation):
    if not current.auth.user:
        # Don't include prepop
        return False
    if tablename in ("cms_post",
                     "org_facility",
                     "org_organisation",
                     "req_req",
                     ):
        # Perform normal Audit
        return True
    else:
        # Don't Audit non user-visible resources
        return False

settings.security.audit_write = audit_write
# -----------------------------------------------------------------------------
# CMS
# Uncomment to use Bookmarks in Newsfeed
settings.cms.bookmarks = True
# Uncomment to use have Filter form in Newsfeed be open by default
settings.cms.filter_open = True
# Uncomment to adjust filters in Newsfeed when clicking on locations instead of opening the profile page
settings.cms.location_click_filters = True
# Uncomment to use organisation_id instead of created_by in Newsfeed
settings.cms.organisation = "post_organisation.organisation_id"
# Uncomment to use org_group_id in Newsfeed
settings.cms.organisation_group = "post_organisation_group.group_id"
# Uncomment to use person_id instead of created_by in Newsfeed
settings.cms.person = "person_id"
# Uncomment to use Rich Text editor in Newsfeed
settings.cms.richtext = True
# Uncomment to show Links in Newsfeed
settings.cms.show_links = True
# Uncomment to show Tags in Newsfeed
settings.cms.show_tags = True
# Uncomment to show post Titles in Newsfeed
settings.cms.show_titles = True
# -----------------------------------------------------------------------------
# Inventory Management
# Uncomment to customise the label for Facilities in Inventory Management
settings.inv.facility_label = "Facility"
# Uncomment if you need a simpler (but less accountable) process for managing stock levels
#settings.inv.direct_stock_edits = True
# Uncomment to call Stock Adjustments, 'Stock Counts'
settings.inv.stock_count = True
# Uncomment to not track pack values
settings.inv.track_pack_values = False
settings.inv.send_show_org = False
# Types common to both Send and Receive
settings.inv.shipment_types = {
    1: T("Other Warehouse")
}
settings.inv.send_types = {
    #21: T("Distribution")
}
settings.inv.send_type_default = 1
settings.inv.item_status = {
    #0: current.messages["NONE"],
    #1: T("Dump"),
    #2: T("Sale"),
    #3: T("Reject"),
    #4: T("Surplus")
}
# -----------------------------------------------------------------------------
# Organisations
#
# Enable the use of Organisation Groups
settings.org.groups = "Network"
# Make Services Hierarchical
settings.org.services_hierarchical = True
# Set the label for Sites
settings.org.site_label = "Facility"
#settings.org.site_label = "Location"
# Uncomment to show the date when a Site (Facilities-only for now) was last contacted
settings.org.site_last_contacted = True
# Enable certain fields just for specific Organisations
# empty list => disabled for all (including Admin)
#settings.org.dependent_fields = { \
#    "pr_person_details.mother_name" : [],
#    "pr_person_details.father_name" : [],
#    "pr_person_details.company" : [],
#    "pr_person_details.affiliations" : [],
#    "vol_volunteer.active" : [],
#    "vol_volunteer_cluster.vol_cluster_type_id" : [],
#    "vol_volunteer_cluster.vol_cluster_id" : [],
#    "vol_volunteer_cluster.vol_cluster_position_id" : [],
#    }
# Uncomment to use an Autocomplete for Site lookup fields
settings.org.site_autocomplete = True
# Extra fields to search in Autocompletes & display in Representations
settings.org.site_autocomplete_fields = ("organisation_id$name",
                                         "location_id$addr_street",
                                         )
# Uncomment to hide inv & req tabs from Sites
#settings.org.site_inv_req_tabs = True
# -----------------------------------------------------------------------------
def facility_marker_fn(record):
    """
        Function to decide which Marker to use for Facilities Map
        @ToDo: Legend
    """

    db = current.db
    s3db = current.s3db
    table = db.org_facility_type
    ltable = db.org_site_facility_type
    query = (ltable.site_id == record.site_id) & \
            (ltable.facility_type_id == table.id)
    rows = db(query).select(table.name)
    types = [row.name for row in rows]

    # Use Marker in preferential order
    if "Hub" in types:
        marker = "warehouse"
    elif "Medical Clinic" in types:
        marker = "hospital"
    elif "Food" in types:
        marker = "food"
    elif "Relief Site" in types:
        marker = "asset"
    elif "Residential Building" in types:
        marker = "residence"
    #elif "Shelter" in types:
    #    marker = "shelter"
    else:
        # Unknown
        marker = "office"

    if settings.has_module("req"):
        # Colour code by open/priority requests
        reqs = record.reqs
        if reqs == 3:
            # High
            marker = "%s_red" % marker
        elif reqs == 2:
            # Medium
            marker = "%s_yellow" % marker
        elif reqs == 1:
            # Low
            marker = "%s_green" % marker

    mtable = db.gis_marker
    try:
        marker = db(mtable.name == marker).select(mtable.image,
                                                  mtable.height,
                                                  mtable.width,
                                                  cache=s3db.cache,
                                                  limitby=(0, 1)
                                                  ).first()
    except:
        marker = db(mtable.name == "office").select(mtable.image,
                                                    mtable.height,
                                                    mtable.width,
                                                    cache=s3db.cache,
                                                    limitby=(0, 1)
                                                    ).first()
    return marker
# -----------------------------------------------------------------------------
def org_facility_onvalidation(form):
    """
        Default the name to the Street Address
    """

    form_vars = form.vars
    name = form_vars.get("name", None)
    if name:
        return
    address = form_vars.get("address", None)
    if address:
        form_vars.name = address
    else:
        # We need a default
        form_vars.name = current.db.org_facility.location_id.represent(form_vars.location_id)
# -----------------------------------------------------------------------------
def customise_org_facility_controller(**attr):

    s3db = current.s3db
    s3 = current.response.s3

    # Tell the client to request per-feature markers
    s3db.configure("org_facility", marker_fn=facility_marker_fn)

    # Custom PreP
    standard_prep = s3.prep
    def custom_prep(r):
        # Call standard prep
        if callable(standard_prep):
            result = standard_prep(r)
            if not result:
                return False

        if r.method not in ("read", "update"):
            types = r.get_vars.get("site_facility_type.facility_type_id__belongs", None)
            if not types:
                # Hide Private Residences
                from s3 import FS
                s3.filter = FS("site_facility_type.facility_type_id$name") != "Private Residence"

        if r.interactive:
            tablename = "org_facility"
            table = s3db[tablename]

            if not r.component and r.method in (None, "create", "update"):
                from s3 import IS_LOCATION_SELECTOR2, S3LocationSelectorWidget2, S3MultiSelectWidget
                field = table.location_id
                if r.method in ("create", "update"):
                    field.label = "" # Gets replaced by widget
                levels = ("L2", "L3")
                field.requires = IS_LOCATION_SELECTOR2(levels=levels)
                field.widget = S3LocationSelectorWidget2(levels=levels,
                                                         hide_lx=False,
                                                         reverse_lx=True,
                                                         show_address=True,
                                                         show_postcode=True,
                                                         )
                table.organisation_id.widget = S3MultiSelectWidget(multiple=False)

            if r.get_vars.get("format", None) == "popup":
                # Coming from req/create form
                # Hide most Fields
                from s3 import S3SQLCustomForm, S3SQLInlineComponent
                # We default this onvalidation
                table.name.notnull = False
                table.name.requires = None
                crud_form = S3SQLCustomForm(S3SQLInlineComponent(
                                                "site_facility_type",
                                                label = T("Facility Type"),
                                                fields = [("", "facility_type_id")],
                                                multiple = False,
                                                required = True,
                                            ),
                                            "name",
                                            "location_id",
                                            )
                s3db.configure(tablename,
                               crud_form = crud_form,
                               onvalidation = org_facility_onvalidation,
                               )

        return True
    s3.prep = custom_prep

    return attr

settings.customise_org_facility_controller = customise_org_facility_controller
# -----------------------------------------------------------------------------
def customise_org_organisation_resource(r, tablename):

    from gluon.html import DIV, INPUT
    from s3 import S3MultiSelectWidget, S3SQLCustomForm, S3SQLInlineLink, S3SQLInlineComponent, S3SQLInlineComponentMultiSelectWidget

    s3db = current.s3db

    if r.tablename == "org_organisation":
        if r.id:
            # Update form
            ctable = s3db.pr_contact
            query = (ctable.pe_id == r.record.pe_id) & \
                    (ctable.contact_method == "RSS") & \
                    (ctable.deleted == False)
            rss = current.db(query).select(ctable.poll,
                                           limitby=(0, 1)
                                           ).first()
            if rss and not rss.poll:
                # Remember that we don't wish to import
                rss_import = "on"
            else:
                # Default
                rss_import = None
        else:
            # Create form: Default
            rss_import = None
    else:
        # Component
        if r.component_id:
            # Update form
            db = current.db
            otable = s3db.org_organisation
            org = db(otable.id == r.component_id).select(otable.pe_id,
                                                         limitby=(0, 1)
                                                         ).first()
            try:
                pe_id = org.pe_id
            except:
                current.log.error("Org %s not found: cannot set rss_import correctly" % r.component_id)
                # Default
                rss_import = None
            else:
                ctable = s3db.pr_contact
                query = (ctable.pe_id == pe_id) & \
                        (ctable.contact_method == "RSS") & \
                        (ctable.deleted == False)
                rss = db(query).select(ctable.poll,
                                       limitby=(0, 1)
                                       ).first()
                if rss and not rss.poll:
                    # Remember that we don't wish to import
                    rss_import = "on"
                else:
                    # Default
                    rss_import = None
        else:
            # Create form: Default
            rss_import = None

    mtable = s3db.org_group_membership
    mtable.group_id.widget = S3MultiSelectWidget(multiple=False)
    mtable.status_id.widget = S3MultiSelectWidget(multiple=False,
                                                  create=dict(c="org",
                                                              f="group_membership_status",
                                                              label=str(T("Add New Status")),
                                                              parent="group_membership",
                                                              child="status_id"
                                                              ))

    crud_form = S3SQLCustomForm(
        "name",
        "acronym",
        S3SQLInlineLink(
            "organisation_type",
            field = "organisation_type_id",
            label = T("Type"),
            multiple = False,
            #widget = "hierarchy",
        ),
        S3SQLInlineComponentMultiSelectWidget(
        # activate hierarchical org_service:
        #S3SQLInlineLink(
            "service",
            label = T("Services"),
            field = "service_id",
            # activate hierarchical org_service:
            #leafonly = False,
            #widget = "hierarchy",
        ),
        S3SQLInlineComponent(
            "group_membership",
            label = T("Network"),
            fields = [("", "group_id"),
                      ("", "status_id"),
                      ],
        ),
        S3SQLInlineComponent(
            "address",
            label = T("Address"),
            multiple = False,
            # This is just Text - put into the Comments box for now
            # Ultimately should go into location_id$addr_street
            fields = [("", "comments")],
        ),
        S3SQLInlineComponentMultiSelectWidget(
            "location",
            label = T("Neighborhoods Served"),
            field = "location_id",
            filterby = dict(field = "level",
                            options = "L4"
                            ),
            # @ToDo: GroupedCheckbox Widget or Hierarchical MultiSelectWidget
            #cols = 5,
        ),
        "phone",
        S3SQLInlineComponent(
            "contact",
            name = "phone2",
            label = T("Phone2"),
            multiple = False,
            fields = [("", "value")],
            filterby = dict(field = "contact_method",
                            options = "WORK_PHONE"
                            )
        ),
        S3SQLInlineComponent(
            "contact",
            name = "email",
            label = T("Email"),
            multiple = False,
            fields = [("", "value")],
            filterby = dict(field = "contact_method",
                            options = "EMAIL"
                            )
        ),
        "website",
        S3SQLInlineComponent(
            "contact",
            comment = DIV(INPUT(_type="checkbox",
                                _name="rss_no_import",
                                value = rss_import,
                                ),
                          T("Don't Import Feed")),
            name = "rss",
            label = T("RSS"),
            multiple = False,
            fields = [("", "value"),
                      #(T("Don't Import Feed"), "poll"),
                      ],
            filterby = dict(field = "contact_method",
                            options = "RSS"
                            )
        ),
        S3SQLInlineComponent(
            "document",
            name = "iCal",
            label = "iCAL",
            multiple = False,
            fields = [("", "url")],
            filterby = dict(field = "name",
                            options="iCal"
                            )
        ),
        S3SQLInlineComponent(
            "document",
            name = "data",
            label = T("Data"),
            multiple = False,
            fields = [("", "url")],
            filterby = dict(field = "name",
                            options="Data"
                            )
        ),
        S3SQLInlineComponent(
            "contact",
            name = "twitter",
            label = T("Twitter"),
            multiple = False,
            fields = [("", "value")],
            filterby = dict(field = "contact_method",
                            options = "TWITTER"
                            )
        ),
        S3SQLInlineComponent(
            "contact",
            name = "facebook",
            label = T("Facebook"),
            multiple = False,
            fields = [("", "value")],
            filterby = dict(field = "contact_method",
                            options = "FACEBOOK"
                            )
        ),
        "comments",
        postprocess = pr_contact_postprocess,
    )

    from s3 import S3LocationFilter, S3OptionsFilter, S3TextFilter
    # activate hierarchical org_service:
    #from s3 import S3LocationFilter, S3OptionsFilter, S3TextFilter, S3HierarchyFilter
    filter_widgets = [
        S3TextFilter(["name", "acronym"],
                     label = T("Name"),
                     _class = "filter-search",
                     ),
        S3OptionsFilter("group_membership.group_id",
                        label = T("Network"),
                        represent = "%(name)s",
                        #hidden = True,
                        ),
        S3LocationFilter("organisation_location.location_id",
                         label = T("Neighborhood"),
                         levels = ("L3", "L4"),
                         #hidden = True,
                         ),
        S3OptionsFilter("service_organisation.service_id",
                        #label = T("Service"),
                        #hidden = True,
                        ),
        # activate hierarchical org_service:
        #S3HierarchyFilter("service_organisation.service_id",
        #                  #label = T("Service"),
        #                  #hidden = True,
        #                  ),
        S3OptionsFilter("organisation_organisation_type.organisation_type_id",
                        label = T("Type"),
                        #hidden = True,
                        ),
        ]

    list_fields = ["name",
                   (T("Type"), "organisation_organisation_type.organisation_type_id"),
                   (T("Services"), "service.name"),
                   "phone",
                   (T("Email"), "email.value"),
                   "website",
                   #(T("Neighborhoods Served"), "location.name"),
                   ]

    s3db.configure("org_organisation",
                   crud_form = crud_form,
                   filter_widgets = filter_widgets,
                   list_fields = list_fields,
                   )

settings.customise_org_organisation_resource = customise_org_organisation_resource
# -----------------------------------------------------------------------------
def customise_org_organisation_controller(**attr):

    s3db = current.s3db
    s3 = current.response.s3

    # Custom prep
    standard_prep = s3.prep
    def custom_prep(r):
        # Call standard prep
        if callable(standard_prep):
            result = standard_prep(r)
        else:
            result = True

        if r.interactive:
            if r.component_name == "facility":
                if r.method in (None, "create", "update"):
                    from s3 import IS_LOCATION_SELECTOR2, S3LocationSelectorWidget2
                    table = s3db.org_facility
                    field = table.location_id
                    if r.method in ("create", "update"):
                        field.label = "" # Gets replaced by widget
                    levels = ("L2", "L3")
                    field.requires = IS_LOCATION_SELECTOR2(levels=levels)
                    field.widget = S3LocationSelectorWidget2(levels=levels,
                                                             hide_lx=False,
                                                             reverse_lx=True,
                                                             show_address=True,
                                                             show_postcode=True,
                                                             )
            elif r.component_name == "human_resource":
                # Don't assume that user is from same org/site as Contacts they create
                r.component.table.site_id.default = None

        return result
    s3.prep = custom_prep

    # Custom postp
    standard_postp = s3.postp
    def custom_postp(r, output):
        # Call standard postp
        if callable(standard_postp):
            output = standard_postp(r, output)

        if r.interactive and isinstance(output, dict):
            if "rheader" in output:
                # Custom Tabs
                tabs = [(T("Basic Details"), None),
                        (T("Contacts"), "human_resource"),
                        (T("Facilities"), "facility"),
                        (T("Projects"), "project"),
                        (T("Assets"), "asset"),
                        ]
                output["rheader"] = s3db.org_rheader(r, tabs=tabs)

        return output
    s3.postp = custom_postp

    return attr

settings.customise_org_organisation_controller = customise_org_organisation_controller
# -----------------------------------------------------------------------------
def customise_org_group_controller(**attr):

    s3db = current.s3db
    s3 = current.response.s3

    # Custom prep
    standard_prep = s3.prep
    def custom_prep(r):
        # Call standard prep
        if callable(standard_prep):
            result = standard_prep(r)
        else:
            result = True

        if not r.component:
            table = s3db.org_group
            list_fields = ["name",
                           "mission",
                           "website",
                           "meetings",
                           ]
            s3db.configure("org_group",
                           list_fields = list_fields,
                           )
            if r.interactive:
                from gluon.html import DIV, INPUT
                from s3 import S3SQLCustomForm, S3SQLInlineComponent
                if r.method != "read":
                    from gluon.validators import IS_EMPTY_OR
                    from s3 import IS_LOCATION_SELECTOR2, S3LocationSelectorWidget2
                    field = table.location_id
                    field.label = "" # Gets replaced by widget
                    #field.requires = IS_LOCATION_SELECTOR2(levels = ("L2",))
                    field.requires = IS_EMPTY_OR(
                                        IS_LOCATION_SELECTOR2(levels = ("L2",))
                                        )
                    field.widget = S3LocationSelectorWidget2(levels = ("L2",),
                                                             points = True,
                                                             polygons = True,
                                                             )
                    # Default location to Manhattan
                    db = current.db
                    gtable = db.gis_location
                    query = (gtable.name == "New York") & \
                            (gtable.level == "L2")
                    manhattan = db(query).select(gtable.id,
                                                 limitby=(0, 1)).first()
                    if manhattan:
                        field.default = manhattan.id

                table.mission.readable = table.mission.writable = True
                table.meetings.readable = table.meetings.writable = True

                if r.id:
                    # Update form
                    ctable = s3db.pr_contact
                    query = (ctable.pe_id == r.record.pe_id) & \
                            (ctable.contact_method == "RSS") & \
                            (ctable.deleted == False)
                    rss = current.db(query).select(ctable.poll,
                                                   limitby=(0, 1)
                                                   ).first()
                    if rss and not rss.poll:
                        # Remember that we don't wish to import
                        rss_import = "on"
                    else:
                        # Default
                        rss_import = None
                else:
                    # Create form: Default
                    rss_import = None

                crud_form = S3SQLCustomForm(
                    "name",
                    "location_id",
                    "mission",
                    S3SQLInlineComponent(
                        "contact",
                        name = "phone",
                        label = T("Phone"),
                        multiple = False,
                        fields = [("", "value")],
                        filterby = dict(field = "contact_method",
                                        options = "WORK_PHONE"
                                        )
                    ),
                    S3SQLInlineComponent(
                        "contact",
                        name = "email",
                        label = T("Email"),
                        multiple = False,
                        fields = [("", "value")],
                        filterby = dict(field = "contact_method",
                                        options = "EMAIL"
                                        )
                    ),
                    "website",
                    S3SQLInlineComponent(
                        "contact",
                        comment = DIV(INPUT(_type="checkbox",
                                            _name="rss_no_import",
                                            value = rss_import,
                                            ),
                                      T("Don't Import Feed")),
                        name = "rss",
                        label = T("RSS"),
                        multiple = False,
                        fields = [("", "value")],
                        filterby = dict(field = "contact_method",
                                        options = "RSS"
                                        )
                    ),
                    S3SQLInlineComponent(
                        "document",
                        name = "iCal",
                        label = "iCAL",
                        multiple = False,
                        fields = [("", "url")],
                        filterby = dict(field = "name",
                                        options="iCal"
                                        )
                    ),
                    S3SQLInlineComponent(
                        "document",
                        name = "data",
                        label = T("Data"),
                        multiple = False,
                        fields = [("", "url")],
                        filterby = dict(field = "name",
                                        options="Data"
                                        )
                    ),
                    S3SQLInlineComponent(
                        "contact",
                        name = "twitter",
                        label = T("Twitter"),
                        multiple = False,
                        fields = [("", "value")],
                        filterby = dict(field = "contact_method",
                                        options = "TWITTER"
                                        )
                    ),
                    S3SQLInlineComponent(
                        "contact",
                        name = "facebook",
                        label = T("Facebook"),
                        multiple = False,
                        fields = [("", "value")],
                        filterby = dict(field = "contact_method",
                                        options = "FACEBOOK"
                                        )
                    ),
                    "meetings",
                    "comments",
                    postprocess = pr_contact_postprocess,
                )
                s3db.configure("org_group",
                               crud_form = crud_form,
                               )

        elif r.component_name == "pr_group":
            list_fields = [#(T("Network"), "group_team.org_group_id"),
                           "name",
                           "description",
                           "meetings",
                           (T("Chairperson"), "chairperson"),
                           "comments",
                           ]
            s3db.configure("pr_group",
                           list_fields = list_fields,
                           )

        elif r.component_name == "organisation":
            # Add Network Status to List Fields
            list_fields = s3db.get_config("org_organisation", "list_fields")
            list_fields.insert(1, "group_membership.status_id")

        return result
    s3.prep = custom_prep

    if current.auth.s3_logged_in():
        # Allow components with components (such as org/group) to breakout from tabs
        attr["native"] = True

    return attr

settings.customise_org_group_controller = customise_org_group_controller
# -----------------------------------------------------------------------------
# Persons
# Uncomment to hide fields in S3AddPersonWidget
settings.pr.request_dob = False
settings.pr.request_gender = False
# Doesn't yet work (form fails to submit)
#settings.pr.select_existing = False
settings.pr.show_emergency_contacts = False
# -----------------------------------------------------------------------------
# Persons
def customise_pr_person_controller(**attr):

    s3 = current.response.s3

    # Custom prep
    standard_prep = s3.prep
    def custom_prep(r):
        # Call standard prep
        if callable(standard_prep):
            result = standard_prep(r)
        else:
            result = True

        s3db = current.s3db
        #if r.method == "validate":
        #    # Can't validate image without the file
        #    image_field = s3db.pr_image.image
        #    image_field.requires = None

        if r.interactive or r.representation == "aadata":
            if not r.component:
                hr_fields = ["organisation_id",
                             "job_title_id",
                             "site_id",
                             ]
                if r.method in ("create", "update"):
                    get_vars = r.get_vars
                    # Context from a Profile page?
                    organisation_id = get_vars.get("(organisation)", None)
                    if organisation_id:
                        field = s3db.hrm_human_resource.organisation_id
                        field.default = organisation_id
                        field.readable = field.writable = False
                        hr_fields.remove("organisation_id")
                    site_id = get_vars.get("(site)", None)
                    if site_id:
                        field = s3db.hrm_human_resource.site_id
                        field.default = site_id
                        field.readable = field.writable = False
                        hr_fields.remove("site_id")
                    else:
                        s3db.hrm_human_resource.site_id.default = None

                # ImageCrop widget doesn't currently work within an Inline Form
                #image_field = s3db.pr_image.image
                #from gluon.validators import IS_IMAGE
                #image_field.requires = IS_IMAGE()
                #image_field.widget = None

                from s3 import S3SQLCustomForm, S3SQLInlineComponent
                s3_sql_custom_fields = ["first_name",
                                        #"middle_name",
                                        "last_name",
                                        S3SQLInlineComponent(
                                            "human_resource",
                                            name = "human_resource",
                                            label = "",
                                            multiple = False,
                                            fields = hr_fields,
                                        ),
                                        #S3SQLInlineComponent(
                                        #    "image",
                                        #    name = "image",
                                        #    label = T("Photo"),
                                        #    multiple = False,
                                        #    fields = [("", "image")],
                                        #    filterby = dict(field = "profile",
                                        #                    options=[True]
                                        #                    )
                                        #    ),
                                        ]
                list_fields = [(current.messages.ORGANISATION, "human_resource.organisation_id"),
                               "first_name",
                               #"middle_name",
                               "last_name",
                               (T("Job Title"), "human_resource.job_title_id"),
                               (T("Office"), "human_resource.site_id"),
                               ]
                # Don't include Email/Phone for unauthenticated users
                if current.auth.is_logged_in():
                    MOBILE = settings.get_ui_label_mobile_phone()
                    EMAIL = T("Email")
                    list_fields += [(MOBILE, "phone.value"),
                                    (EMAIL, "email.value"),
                                    ]
                    s3_sql_custom_fields.insert(3,
                                                S3SQLInlineComponent(
                                                    "contact",
                                                    name = "phone",
                                                    label = MOBILE,
                                                    multiple = False,
                                                    fields = [("", "value")],
                                                    filterby = dict(field = "contact_method",
                                                                    options = "SMS")),
                                                )
                    s3_sql_custom_fields.insert(3,
                                                S3SQLInlineComponent(
                                                    "contact",
                                                    name = "email",
                                                    label = EMAIL,
                                                    multiple = False,
                                                    fields = [("", "value")],
                                                    filterby = dict(field = "contact_method",
                                                                    options = "EMAIL")),
                                                )
                crud_form = S3SQLCustomForm(*s3_sql_custom_fields)
                s3db.configure(r.tablename,
                               crud_form = crud_form,
                               list_fields = list_fields,
                               )
            elif r.component_name == "group_membership":
                s3db.pr_group_membership.group_head.label = T("Group Chairperson")

        return result
    s3.prep = custom_prep

    # Custom postp
    standard_postp = s3.postp
    def custom_postp(r, output):
        # Call standard postp
        if callable(standard_postp):
            output = standard_postp(r, output)

        if r.interactive and isinstance(output, dict):
            if "form" in output:
                output["form"].add_class("pr_person")
            elif "item" in output and hasattr(output["item"], "add_class"):
                output["item"].add_class("pr_person")

        return output
    s3.postp = custom_postp

    return attr

settings.customise_pr_person_controller = customise_pr_person_controller
# -----------------------------------------------------------------------------
# Groups
def chairperson(row):
    """
        Virtual Field to show the chairperson of a group
    """

    if hasattr(row, "pr_group"):
        row = row.pr_group
    try:
        group_id = row.id
    except:
        # not available
        return current.messages["NONE"]

    db = current.db
    mtable = current.s3db.pr_group_membership
    ptable = db.pr_person
    query = (mtable.group_id == group_id) & \
            (mtable.group_head == True) & \
            (mtable.person_id == ptable.id)
    chair = db(query).select(ptable.first_name,
                             ptable.middle_name,
                             ptable.last_name,
                             ptable.id,
                             limitby=(0, 1)).first()
    if chair:
        # Only used in list view so HTML is OK
        return A(s3_fullname(chair),
                 _href=URL(c="hrm", f="person", args=chair.id))
    else:
        return current.messages["NONE"]
# -----------------------------------------------------------------------------
def customise_pr_group_controller(**attr):
s3 = current.response.s3
# Custom prep
standard_prep = s3.prep
def custom_prep(r):
# Call standard prep
if callable(standard_prep):
result = standard_prep(r)
if not result:
return False
from s3 import S3Represent, S3TextFilter, S3OptionsFilter, S3SQLCustomForm, S3SQLInlineComponent
s3db = current.s3db
s3db.org_group_team.org_group_id.represent = S3Represent(lookup="org_group",
show_link=True)
crud_form = S3SQLCustomForm("name",
"description",
S3SQLInlineComponent("group_team",
label = T("Network"),
fields = [("", "org_group_id")],
# @ToDo: Make this optional?
multiple = False,
),
"meetings",
"comments",
)
filter_widgets = [
S3TextFilter(["name",
"description",
"comments",
"group_team.org_group_id$name",
],
label = T("Search"),
comment = T("You can search by group name, description or comments and by network name. You may use % as wildcard. Press 'Search' without input to list all."),
#_class = "filter-search",
),
S3OptionsFilter("group_team.org_group_id",
label = T("Network"),
#hidden = True,
),
]
# Need to re-do list_fields as they get overwritten by hrm_group_controller()
list_fields = [(T("Network"), "group_team.org_group_id"),
"name",
"description",
"meetings",
(T("Chairperson"), "chairperson"),
"comments",
]
s3db.configure("pr_group",
crud_form = crud_form,
filter_widgets = filter_widgets,
list_fields = list_fields,
)
s3db.pr_group_membership.group_head.label = T("Group Chairperson")
if r.component_name == "group_membership":
from s3layouts import S3AddResourceLink
s3db.pr_group_membership.person_id.comment = \
S3AddResourceLink(c="pr", f="person",
title=T("Create Person"),
tooltip=current.messages.AUTOCOMPLETE_HELP)
#else:
# # RHeader wants a simplified version, but we don't want it inconsistent across tabs
# s3db.pr_group_membership.group_head.label = T("Chairperson")
return True
s3.prep = custom_prep
return attr
settings.customise_pr_group_controller = customise_pr_group_controller
# -----------------------------------------------------------------------------
def customise_pr_group_resource(r, tablename):
"""
Customise pr_group resource (in group & org_group controllers)
- runs after controller customisation
- but runs before prep
"""
s3db = current.s3db
table = s3db.pr_group
field = table.group_type
field.default = 3 # Relief Team, to show up in hrm/group
field.readable = field.writable = False
table.name.label = T("Name")
table.description.label = T("Description")
table.meetings.readable = table.meetings.writable = True
# Increase size of widget
from s3 import s3_comments_widget
table.description.widget = s3_comments_widget
from gluon import Field
table.chairperson = Field.Method("chairperson", chairperson)
# Format for filter_widgets & imports
s3db.add_components("pr_group",
org_group_team = "group_id",
)
s3db.configure("pr_group",
# Redirect to member list when a new group has been created
create_next = URL(c="hrm", f="group",
args=["[id]", "group_membership"]),
)
settings.customise_pr_group_resource = customise_pr_group_resource
# -----------------------------------------------------------------------------
def pr_contact_postprocess(form):
"""
Import Organisation/Network RSS Feeds
"""
s3db = current.s3db
form_vars = form.vars
rss_url = form_vars.rsscontact_i_value_edit_0 or \
form_vars.rsscontact_i_value_edit_none
if not rss_url:
if form.record:
# Update form
old_rss = form.record.sub_rsscontact
import json
data = json.loads(old_rss)["data"]
if data:
# RSS feed is being deleted, so we should disable it
old_rss = data[0]["value"]["value"]
table = s3db.msg_rss_channel
old = current.db(table.url == old_rss).select(table.channel_id,
table.enabled,
limitby = (0, 1)
).first()
if old and old.enabled:
s3db.msg_channel_disable("msg_rss_channel", old.channel_id)
return
else:
# Nothing to do :)
return
# Check if we already have a channel for this Contact
db = current.db
name = form_vars.name
table = s3db.msg_rss_channel
name_exists = db(table.name == name).select(table.id,
table.channel_id,
table.enabled,
table.url,
limitby = (0, 1)
).first()
no_import = current.request.post_vars.get("rss_no_import", None)
if name_exists:
if name_exists.url == rss_url:
# No change to either Contact Name or URL
if no_import:
if name_exists.enabled:
# Disable channel (& associated parsers)
s3db.msg_channel_disable("msg_rss_channel",
name_exists.channel_id)
return
elif name_exists.enabled:
# Nothing to do :)
return
else:
# Enable channel (& associated parsers)
s3db.msg_channel_enable("msg_rss_channel",
name_exists.channel_id)
return
# Check if we already have a channel for this URL
url_exists = db(table.url == rss_url).select(table.id,
table.channel_id,
table.enabled,
limitby = (0, 1)
).first()
if url_exists:
# We have 2 feeds: 1 for the Contact & 1 for the URL
# Disable the old Contact one and link the URL one to this Contact
# and ensure active or not as appropriate
# Name field is unique so rename old one
name_exists.update_record(name="%s (Old)" % name)
if name_exists.enabled:
# Disable channel (& associated parsers)
s3db.msg_channel_disable("msg_rss_channel",
name_exists.channel_id)
url_exists.update_record(name=name)
if no_import:
if url_exists.enabled:
# Disable channel (& associated parsers)
s3db.msg_channel_disable("msg_rss_channel",
url_exists.channel_id)
return
elif url_exists.enabled:
# Nothing to do :)
return
else:
# Enable channel (& associated parsers)
s3db.msg_channel_enable("msg_rss_channel",
url_exists.channel_id)
return
else:
# Update the URL
name_exists.update_record(url=rss_url)
if no_import:
if name_exists.enabled:
# Disable channel (& associated parsers)
s3db.msg_channel_disable("msg_rss_channel",
name_exists.channel_id)
return
elif name_exists.enabled:
# Nothing to do :)
return
else:
# Enable channel (& associated parsers)
s3db.msg_channel_enable("msg_rss_channel",
name_exists.channel_id)
return
else:
# Check if we already have a channel for this URL
url_exists = db(table.url == rss_url).select(table.id,
table.channel_id,
table.enabled,
limitby = (0, 1)
).first()
if url_exists:
# Either Contact has changed Name or this feed is associated with
# another Contact
# - update Feed name
url_exists.update_record(name=name)
if no_import:
if url_exists.enabled:
# Disable channel (& associated parsers)
s3db.msg_channel_disable("msg_rss_channel",
url_exists.channel_id)
return
elif url_exists.enabled:
# Nothing to do :)
return
else:
# Enable channel (& associated parsers)
s3db.msg_channel_enable("msg_rss_channel",
url_exists.channel_id)
return
elif no_import:
# Nothing to do :)
return
#else:
# # Create a new Feed
# pass
# Add RSS Channel
_id = table.insert(name=name, enabled=True, url=rss_url)
record = dict(id=_id)
s3db.update_super(table, record)
# Enable
channel_id = record["channel_id"]
s3db.msg_channel_enable("msg_rss_channel", channel_id)
# Setup Parser
table = s3db.msg_parser
_id = table.insert(channel_id=channel_id,
function_name="parse_rss",
enabled=True)
s3db.msg_parser_enable(_id)
# Check Now
async = current.s3task.async
async("msg_poll", args=["msg_rss_channel", channel_id])
async("msg_parse", args=[channel_id, "parse_rss"])
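The branches of `pr_contact_postprocess` above repeat one decision pattern for an existing RSS channel: with `rss_no_import` set, an enabled channel is disabled; without it, a disabled channel is enabled and an enabled one is left alone. A minimal, hypothetical sketch of that logic as a pure function (the names `channel_enabled` and `no_import` are illustrative, not part of the Eden API):

```python
def channel_action(channel_enabled, no_import):
    """Return 'disable', 'enable' or None (no change) for an existing channel."""
    if no_import:
        # Import suppressed: an enabled channel must be switched off
        return "disable" if channel_enabled else None
    # Import wanted: only a disabled channel needs action
    return None if channel_enabled else "enable"
```

Factoring the branch this way makes the repeated if/elif/else blocks in the function above easy to check in isolation.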
# -----------------------------------------------------------------------------
# Human Resource Management
# Uncomment to change the label for 'Staff'
settings.hrm.staff_label = "Contacts"
# Uncomment to allow Staff & Volunteers to be registered without an email address
settings.hrm.email_required = False
# Uncomment to allow Staff & Volunteers to be registered without an Organisation
settings.hrm.org_required = False
# Uncomment to show the Organisation name in HR represents
settings.hrm.show_organisation = True
# Uncomment to disable Staff experience
settings.hrm.staff_experience = False
# Uncomment to disable the use of HR Certificates
settings.hrm.use_certificates = False
# Uncomment to disable the use of HR Credentials
settings.hrm.use_credentials = False
# Uncomment to disable the use of HR Education
settings.hrm.use_education = False
# Uncomment to disable the use of HR Skills
#settings.hrm.use_skills = False
# Uncomment to disable the use of HR Trainings
settings.hrm.use_trainings = False
# Uncomment to disable the use of HR Description
settings.hrm.use_description = False
# Change the label of "Teams" to "Groups"
settings.hrm.teams = "Groups"
# Custom label for Organisations in HR module
#settings.hrm.organisation_label = "National Society / Branch"
settings.hrm.organisation_label = "Organization"
# -----------------------------------------------------------------------------
def customise_hrm_human_resource_controller(**attr):
s3 = current.response.s3
# Custom prep
standard_prep = s3.prep
def custom_prep(r):
# Call standard prep
if callable(standard_prep):
result = standard_prep(r)
else:
result = True
if r.interactive or r.representation == "aadata":
if not r.component:
from s3 import S3TextFilter, S3OptionsFilter, S3LocationFilter
filter_widgets = [
S3TextFilter(["person_id$first_name",
"person_id$middle_name",
"person_id$last_name",
],
label = T("Name"),
),
S3OptionsFilter("organisation_id",
filter = True,
header = "",
hidden = True,
),
S3OptionsFilter("group_person.group_id",
label = T("Network"),
#filter = True,
#header = "",
hidden = True,
),
S3LocationFilter("location_id",
label = T("Location"),
levels = ("L1", "L2", "L3", "L4"),
hidden = True,
),
S3OptionsFilter("site_id",
hidden = True,
),
S3OptionsFilter("training.course_id",
label = T("Training"),
hidden = True,
),
S3OptionsFilter("group_membership.group_id",
label = T("Team"),
filter = True,
header = "",
hidden = True,
),
]
s3db = current.s3db
s3db.configure("hrm_human_resource",
filter_widgets = filter_widgets,
)
field = r.table.site_id
# Don't assume that user is from same org/site as Contacts they create
field.default = None
# Use a hierarchical dropdown instead of AC
field.widget = None
script = \
'''$.filterOptionsS3({
'trigger':'organisation_id',
'target':'site_id',
'lookupResource':'site',
'lookupURL':'/%s/org/sites_for_org/',
'optional':true
})''' % r.application
s3.jquery_ready.append(script)
return result
s3.prep = custom_prep
return attr
settings.customise_hrm_human_resource_controller = customise_hrm_human_resource_controller
# -----------------------------------------------------------------------------
def customise_hrm_human_resource_resource(r, tablename):
"""
Customise hrm_human_resource resource (in facility, human_resource, organisation & person controllers)
- runs after controller customisation
- but runs before prep
"""
s3db = current.s3db
from s3 import S3SQLCustomForm, S3SQLInlineComponent
crud_form = S3SQLCustomForm("person_id",
"organisation_id",
"site_id",
S3SQLInlineComponent(
"group_person",
label = T("Network"),
link = False,
fields = [("", "group_id")],
multiple = False,
),
"job_title_id",
"start_date",
)
list_fields = ["id",
"person_id",
"job_title_id",
"organisation_id",
(T("Network"), "group_person.group_id"),
(T("Groups"), "person_id$group_membership.group_id"),
"site_id",
#"site_contact",
(T("Email"), "email.value"),
(settings.get_ui_label_mobile_phone(), "phone.value"),
]
s3db.configure("hrm_human_resource",
crud_form = crud_form,
list_fields = list_fields,
)
settings.customise_hrm_human_resource_resource = customise_hrm_human_resource_resource
# -----------------------------------------------------------------------------
def customise_hrm_job_title_controller(**attr):
s3 = current.response.s3
# Custom prep
standard_prep = s3.prep
def custom_prep(r):
# Call standard prep
if callable(standard_prep):
result = standard_prep(r)
else:
result = True
if r.interactive or r.representation == "aadata":
table = current.s3db.hrm_job_title
table.organisation_id.readable = table.organisation_id.writable = False
table.type.readable = table.type.writable = False
return result
s3.prep = custom_prep
return attr
settings.customise_hrm_job_title_controller = customise_hrm_job_title_controller
# -----------------------------------------------------------------------------
# Projects
# Use codes for projects (called 'blurb' in NYC)
settings.project.codes = True
# Uncomment this to use settings suitable for detailed Task management
settings.project.mode_task = False
# Uncomment this to use Activities for projects
settings.project.activities = True
# Uncomment this to use Milestones in project/task.
settings.project.milestones = False
# Uncomment this to disable Sectors in projects
settings.project.sectors = False
# Multiple partner organizations
settings.project.multiple_organisations = True
def customise_project_project_controller(**attr):
s3 = current.response.s3
# Custom prep
standard_prep = s3.prep
def custom_prep(r):
# Call standard prep
if callable(standard_prep):
result = standard_prep(r)
else:
result = True
if not r.component and (r.interactive or r.representation == "aadata"):
from s3 import S3SQLCustomForm, S3SQLInlineComponent, S3SQLInlineComponentCheckbox
s3db = current.s3db
table = r.table
tablename = "project_project"
table.code.label = T("Project blurb (max. 100 characters)")
table.code.max_length = 100
table.comments.label = T("How people can help")
script = '''$('#project_project_code').attr('maxlength','100')'''
s3.jquery_ready.append(script)
crud_form = S3SQLCustomForm(
"organisation_id",
"name",
"code",
"description",
"status_id",
"start_date",
"end_date",
"calendar",
#"drr.hfa",
#"objectives",
"human_resource_id",
# Activities
S3SQLInlineComponent(
"location",
label = T("Location"),
fields = [("", "location_id")],
),
# Partner Orgs
S3SQLInlineComponent(
"organisation",
name = "partner",
label = T("Partner Organizations"),
fields = ["organisation_id",
"comments", # NB This is labelled 'Role' in DRRPP
],
filterby = dict(field = "role",
options = "2"
)
),
S3SQLInlineComponent(
"document",
name = "media",
label = T("URLs (media, fundraising, website, social media, etc.)"),
fields = ["document_id",
"name",
"url",
"comments",
],
filterby = dict(field = "name")
),
S3SQLInlineComponentCheckbox(
"activity_type",
label = T("Categories"),
field = "activity_type_id",
cols = 3,
# Filter Activity Type by Project
filter = {"linktable": "project_activity_type_project",
"lkey": "project_id",
"rkey": "activity_type_id",
},
),
#"budget",
#"currency",
"comments",
)
from s3 import S3TextFilter, S3OptionsFilter, S3LocationFilter, S3DateFilter
filter_widgets = [
S3TextFilter(["name",
"code",
"description",
"organisation.name",
"organisation.acronym",
],
label = T("Name"),
_class = "filter-search",
),
S3OptionsFilter("status_id",
label = T("Status"),
# Not translatable
#represent = "%(name)s",
cols = 3,
),
#S3OptionsFilter("theme_project.theme_id",
# label = T("Theme"),
# #hidden = True,
# ),
S3LocationFilter("location.location_id",
label = T("Location"),
levels = ("L1", "L2", "L3", "L4"),
#hidden = True,
),
# @ToDo: Widget to handle Start & End in 1!
S3DateFilter("start_date",
label = T("Start Date"),
hide_time = True,
#hidden = True,
),
S3DateFilter("end_date",
label = T("End Date"),
hide_time = True,
#hidden = True,
),
]
list_fields = ["id",
"name",
"code",
"organisation_id",
"start_date",
"end_date",
(T("Locations"), "location.location_id"),
]
s3db.configure(tablename,
crud_form = crud_form,
filter_widgets = filter_widgets,
list_fields = list_fields,
)
return result
s3.prep = custom_prep
return attr
settings.customise_project_project_controller = customise_project_project_controller
# -----------------------------------------------------------------------------
# Requests Management
settings.req.req_type = ["People", "Stock"]#, "Summary"]
settings.req.prompt_match = False
#settings.req.use_commit = False
settings.req.requester_optional = True
settings.req.date_writable = False
settings.req.item_quantities_writable = True
settings.req.skill_quantities_writable = True
settings.req.items_ask_purpose = False
#settings.req.use_req_number = False
# Label for Requester
settings.req.requester_label = "Site Contact"
# Filter Requester as being from the Site
settings.req.requester_from_site = True
# Label for Inventory Requests
settings.req.type_inv_label = "Supplies"
# Uncomment to enable Summary 'Site Needs' tab for Offices/Facilities
settings.req.summary = True
# -----------------------------------------------------------------------------
def req_req_postprocess(form):
"""
Runs after crud_form completes
- creates a cms_post in the newswire
- @ToDo: Send out Tweets
"""
req_id = form.vars.id
db = current.db
s3db = current.s3db
rtable = s3db.req_req
# Read the full record
row = db(rtable.id == req_id).select(rtable.type,
rtable.site_id,
rtable.requester_id,
rtable.priority,
rtable.date_required,
rtable.purpose,
rtable.comments,
limitby=(0, 1)
).first()
# Build Title & Body from the Request details
priority = rtable.priority.represent(row.priority)
date_required = row.date_required
if date_required:
date = rtable.date_required.represent(date_required)
title = "%(priority)s by %(date)s" % dict(priority=priority,
date=date)
else:
title = priority
body = row.comments
if row.type == 1:
# Items
ritable = s3db.req_req_item
items = db(ritable.req_id == req_id).select(ritable.item_id,
ritable.item_pack_id,
ritable.quantity)
item_represent = s3db.supply_item_represent
pack_represent = s3db.supply_item_pack_represent
for item in items:
item = "%s %s %s" % (item.quantity,
pack_represent(item.item_pack_id),
item_represent(item.item_id))
body = "%s\n%s" % (item, body)
else:
# Skills
body = "%s\n%s" % (row.purpose, body)
rstable = s3db.req_req_skill
skills = db(rstable.req_id == req_id).select(rstable.skill_id,
rstable.quantity)
skill_represent = s3db.hrm_multi_skill_represent
for skill in skills:
item = "%s %s" % (skill.quantity, skill_represent(skill.skill_id))
body = "%s\n%s" % (item, body)
# Lookup series_id
stable = s3db.cms_series
try:
series_id = db(stable.name == "Request").select(stable.id,
cache=s3db.cache,
limitby=(0, 1)
).first().id
except:
# Prepop hasn't been run
series_id = None
# Location is that of the site
otable = s3db.org_site
location_id = db(otable.site_id == row.site_id).select(otable.location_id,
limitby=(0, 1)
).first().location_id
# Create Post
ptable = s3db.cms_post
_id = ptable.insert(series_id=series_id,
title=title,
body=body,
location_id=location_id,
person_id=row.requester_id,
)
record = dict(id=_id)
s3db.update_super(ptable, record)
# Add source link
url = "%s%s" % (settings.get_base_public_url(),
URL(c="req", f="req", args=req_id))
s3db.doc_document.insert(doc_id=record["doc_id"],
url=url,
)
# -----------------------------------------------------------------------------
def customise_req_req_resource(r, tablename):
from s3layouts import S3AddResourceLink
current.s3db.req_req.site_id.comment = \
S3AddResourceLink(c="org", f="facility",
vars = dict(child="site_id"),
title=T("Create Facility"),
tooltip=current.messages.AUTOCOMPLETE_HELP)
current.response.s3.req_req_postprocess = req_req_postprocess
if not r.component and r.method in ("create", "update"):
script = \
'''$('#req_req_site_id').change(function(){
var url=$('#person_add').attr('href')
url=url.split('?')
var q=S3.queryString.parse(url[1])
q['(site)']=$(this).val()
url=url[0]+'?'+S3.queryString.stringify(q)
$('#person_add').attr('href',url)})'''
current.response.s3.jquery_ready.append(script)
settings.customise_req_req_resource = customise_req_req_resource
# -----------------------------------------------------------------------------
# Comment/uncomment modules here to disable/enable them
settings.modules = OrderedDict([
# Core modules which shouldn't be disabled
("default", Storage(
name_nice = T("Home"),
restricted = False, # Use ACLs to control access to this module
access = None, # All Users (inc Anonymous) can see this module in the default menu & access the controller
module_type = None # This item is not shown in the menu
)),
("admin", Storage(
name_nice = T("Admin"),
#description = "Site Administration",
restricted = True,
access = "|1|", # Only Administrators can see this module in the default menu & access the controller
module_type = None # This item is handled separately for the menu
)),
("appadmin", Storage(
name_nice = T("Administration"),
#description = "Site Administration",
restricted = True,
module_type = None # No Menu
)),
("errors", Storage(
name_nice = T("Ticket Viewer"),
#description = "Needed for Breadcrumbs",
restricted = False,
module_type = None # No Menu
)),
("sync", Storage(
name_nice = T("Synchronization"),
#description = "Synchronization",
restricted = True,
access = "|1|", # Only Administrators can see this module in the default menu & access the controller
module_type = None # This item is handled separately for the menu
)),
# Uncomment to enable internal support requests
#("support", Storage(
# name_nice = T("Support"),
# #description = "Support Requests",
# restricted = True,
# module_type = None # This item is handled separately for the menu
# )),
("gis", Storage(
name_nice = T("Map"),
#description = "Situation Awareness & Geospatial Analysis",
restricted = True,
module_type = 9, # 8th item in the menu
)),
("pr", Storage(
name_nice = T("Person Registry"),
#description = "Central point to record details on People",
restricted = True,
access = "|1|", # Only Administrators can see this module in the default menu (access to controller is possible to all still)
module_type = 10
)),
("org", Storage(
name_nice = T("Locations"),
#description = 'Lists "who is doing what & where". Allows relief agencies to coordinate their activities',
restricted = True,
module_type = 4
)),
# All modules below here should be possible to disable safely
("hrm", Storage(
name_nice = T("Contacts"),
#description = "Human Resources Management",
restricted = True,
module_type = 3,
)),
#("vol", Storage(
# name_nice = T("Volunteers"),
# #description = "Human Resources Management",
# restricted = True,
# module_type = 2,
# )),
("cms", Storage(
name_nice = T("Content Management"),
#description = "Content Management System",
restricted = True,
module_type = 10,
)),
("doc", Storage(
name_nice = T("Documents"),
#description = "A library of digital resources, such as photos, documents and reports",
restricted = True,
module_type = None,
)),
("msg", Storage(
name_nice = T("Messaging"),
#description = "Sends & Receives Alerts via Email & SMS",
restricted = True,
# The user-visible functionality of this module isn't normally required. Rather, its main purpose is to be accessed from other modules.
module_type = None,
)),
("supply", Storage(
name_nice = T("Supply Chain Management"),
#description = "Used within Inventory Management, Request Management and Asset Management",
restricted = True,
module_type = None, # Not displayed
)),
("inv", Storage(
name_nice = T("Inventory"),
#description = "Receiving and Sending Items",
restricted = True,
module_type = 10
)),
#("proc", Storage(
# name_nice = T("Procurement"),
# #description = "Ordering & Purchasing of Goods & Services",
# restricted = True,
# module_type = 10
# )),
("asset", Storage(
name_nice = T("Assets"),
#description = "Recording and Assigning Assets",
restricted = True,
module_type = 10,
)),
# Vehicle depends on Assets
#("vehicle", Storage(
# name_nice = T("Vehicles"),
# #description = "Manage Vehicles",
# restricted = True,
# module_type = 10,
# )),
("req", Storage(
name_nice = T("Requests"),
#description = "Manage requests for supplies, assets, staff or other resources. Matches against Inventories where supplies are requested.",
restricted = True,
module_type = 1,
)),
("project", Storage(
name_nice = T("Projects"),
#description = "Tracking of Projects, Activities and Tasks",
restricted = True,
module_type = 10
)),
("assess", Storage(
name_nice = T("Assessments"),
#description = "Rapid Assessments & Flexible Impact Assessments",
restricted = True,
module_type = 5,
)),
("event", Storage(
name_nice = T("Events"),
#description = "Activate Events (e.g. from Scenario templates) for allocation of appropriate Resources (Human, Assets & Facilities).",
restricted = True,
module_type = 10,
)),
("survey", Storage(
name_nice = T("Surveys"),
#description = "Create, enter, and manage surveys.",
restricted = True,
module_type = 5,
)),
#("cr", Storage(
# name_nice = T("Shelters"),
# #description = "Tracks the location, capacity and breakdown of victims in Shelters",
# restricted = True,
# module_type = 10
# )),
#("dvr", Storage(
# name_nice = T("Disaster Victim Registry"),
# #description = "Allow affected individuals & households to register to receive compensation and distributions",
# restricted = False,
# module_type = 10,
# )),
#("member", Storage(
# name_nice = T("Members"),
# #description = "Membership Management System",
# restricted = True,
# module_type = 10,
# )),
# @ToDo: Rewrite in a modern style
#("budget", Storage(
# name_nice = T("Budgeting Module"),
# #description = "Allows a Budget to be drawn up",
# restricted = True,
# module_type = 10
# )),
# @ToDo: Port these Assessments to the Survey module
#("building", Storage(
# name_nice = T("Building Assessments"),
# #description = "Building Safety Assessments",
# restricted = True,
# module_type = 10,
# )),
])
| 39.422772 | 187 | 0.470226 | 6,895 | 79,634 | 5.26454 | 0.130384 | 0.00843 | 0.011984 | 0.012783 | 0.393427 | 0.319761 | 0.283727 | 0.267941 | 0.245544 | 0.22907 | 0 | 0.009749 | 0.422922 | 79,634 | 2,019 | 188 | 39.442298 | 0.780133 | 0.206733 | 0 | 0.538405 | 0 | 0.000732 | 0.087764 | 0.012475 | 0 | 0 | 0 | 0.002972 | 0 | 0 | null | null | 0 | 0.037308 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
86d90c0ca6a5dbc266bca705498a4a9e3c8d3aac | 721 | py | Python | chroma-manager/tests/utils/__init__.py | GarimaVishvakarma/intel-chroma | fdf68ed00b13643c62eb7480754d3216d9295e0b | [
"MIT"
] | null | null | null | chroma-manager/tests/utils/__init__.py | GarimaVishvakarma/intel-chroma | fdf68ed00b13643c62eb7480754d3216d9295e0b | [
"MIT"
] | null | null | null | chroma-manager/tests/utils/__init__.py | GarimaVishvakarma/intel-chroma | fdf68ed00b13643c62eb7480754d3216d9295e0b | [
"MIT"
] | null | null | null | import time
import datetime
import contextlib
@contextlib.contextmanager
def patch(obj, **attrs):
"Monkey patch an object's attributes, restoring them after the block."
stored = {}
for name in attrs:
stored[name] = getattr(obj, name)
setattr(obj, name, attrs[name])
try:
yield
finally:
for name in stored:
setattr(obj, name, stored[name])
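The `patch` helper above monkey-patches attributes for the duration of a `with` block and restores them afterwards, even on error. A self-contained usage sketch (the helper is re-declared here, in dict-comprehension form, so the example runs on its own; `Config` is an illustrative stand-in for a patched object):

```python
import contextlib

@contextlib.contextmanager
def patch(obj, **attrs):
    "Monkey patch an object's attributes, restoring them after the block."
    stored = {name: getattr(obj, name) for name in attrs}
    for name, value in attrs.items():
        setattr(obj, name, value)
    try:
        yield
    finally:
        # Restoration runs even if the block raised
        for name, value in stored.items():
            setattr(obj, name, value)

class Config:
    retries = 3

with patch(Config, retries=10):
    assert Config.retries == 10  # override visible inside the block
assert Config.retries == 3       # original value restored outside
```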
@contextlib.contextmanager
def timed(msg='', threshold=0):
"Print elapsed time of a block, if over optional threshold."
start = time.time()
try:
yield
finally:
elapsed = time.time() - start
if elapsed >= threshold:
print datetime.timedelta(seconds=elapsed), msg
| 24.033333 | 74 | 0.629681 | 86 | 721 | 5.27907 | 0.488372 | 0.046256 | 0.118943 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001905 | 0.271845 | 721 | 29 | 75 | 24.862069 | 0.862857 | 0 | 0 | 0.32 | 0 | 0 | 0.174757 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.12 | null | null | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
86dbf2f275a336e7d12bde00e6cd729b126ef190 | 1,883 | py | Python | packages/facilities/diagnostics/py/custom_checkbox.py | Falcons-Robocup/code | 2281a8569e7f11cbd3238b7cc7341c09e2e16249 | [
"Apache-2.0"
] | 2 | 2021-01-15T13:27:19.000Z | 2021-08-04T08:40:52.000Z | packages/facilities/diagnostics/py/custom_checkbox.py | Falcons-Robocup/code | 2281a8569e7f11cbd3238b7cc7341c09e2e16249 | [
"Apache-2.0"
] | null | null | null | packages/facilities/diagnostics/py/custom_checkbox.py | Falcons-Robocup/code | 2281a8569e7f11cbd3238b7cc7341c09e2e16249 | [
"Apache-2.0"
] | 5 | 2018-05-01T10:39:31.000Z | 2022-03-25T03:02:35.000Z | # Copyright 2020 Jan Feitsma (Falcons)
# SPDX-License-Identifier: Apache-2.0
#!/usr/bin/python
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
class Checkbox():
def __init__(self, name, position, default=False, label=None, rsize=0.6, enabled=True):
self.name = name # unique ID associated with
# label to display next to the checkbox
if label == None:
self.label = name # reuse
else:
self.label = label
self.callback = None
self.enabled = enabled
self.ticked = default
self.ax = plt.axes(position) # position is a tuple (x,y,w,h)
self.ax.axis('off')
self.canvas = self.ax.figure.canvas
# draw text
if len(self.label):
self.text = self.ax.text(-0.15, 0.5, self.label, horizontalalignment='right', verticalalignment='center')
# draw a rectangle, add a bit of spacing
self.ax.add_patch(Rectangle((0,(1.0-rsize)/2), rsize, rsize, fill=True))
# setup event handling
self.canvas.mpl_connect('button_release_event', self._handle_event)
self.redraw()
def __repr__(self):
s = 'checkbox:' + self.name + '=' + str(self.ticked)
if not self.enabled:
s += ' (disabled)'
return s
def on_changed(self, cb):
self.callback = cb
def _handle_event(self, e):
if self.enabled and e.inaxes == self.ax: # TODO: exclude spacing margin for inaxes calculation
self.ticked = not self.ticked
self.redraw()
if self.callback != None:
self.callback(self.name, self.ticked)
def redraw(self):
col = 'grey'
if self.enabled:
col = ['lightgoldenrodyellow', 'blue'][self.ticked]
self.ax.patches[0].set_facecolor(col)
self.ax.figure.canvas.draw()
| 33.625 | 117 | 0.601699 | 243 | 1,883 | 4.588477 | 0.45679 | 0.043049 | 0.0287 | 0.035874 | 0.039462 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013284 | 0.280404 | 1,883 | 55 | 118 | 34.236364 | 0.809594 | 0.164631 | 0 | 0.051282 | 0 | 0 | 0.053205 | 0 | 0 | 0 | 0 | 0.018182 | 0 | 1 | 0.128205 | false | 0 | 0.051282 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
86dc7d357b174d6a4843f8edef2436d8cf30c367 | 742 | py | Python | generator.py | Axonny/HexagonalHitori | 582cb50b751796c30ed273f66c8ac9fa6f3dd089 | [
"MIT"
] | null | null | null | generator.py | Axonny/HexagonalHitori | 582cb50b751796c30ed273f66c8ac9fa6f3dd089 | [
"MIT"
] | null | null | null | generator.py | Axonny/HexagonalHitori | 582cb50b751796c30ed273f66c8ac9fa6f3dd089 | [
"MIT"
] | null | null | null | from hitori_generator import Generator
from argparse import ArgumentParser
def generate(n: int, output_file: str) -> None:
if n < 3 or n > 8:
print("It isn't valid size")
exit(4)
generator = Generator(n)
data = generator.generate()
lines = map(lambda x: ' '.join(map(str, x)), data)
with open(output_file, 'w', encoding='utf-8') as f:
f.write('\n'.join(lines))
def main():
p = ArgumentParser()
p.add_argument('filename', type=str, help='Path to output file')
p.add_argument('-s', "--size", type=int, default=3, help='Generate SxS field. size must be in [3, 8]. Default is 3')
args = p.parse_args()
generate(args.size, args.filename)
if __name__ == '__main__':
main()
| 27.481481 | 120 | 0.628032 | 111 | 742 | 4.072072 | 0.540541 | 0.066372 | 0.053097 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013769 | 0.216981 | 742 | 26 | 121 | 28.538462 | 0.7642 | 0 | 0 | 0 | 1 | 0 | 0.171159 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.105263 | 0 | 0.210526 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
86dd7f5030a8b0c0b8c5d1166bbac51638b7d539 | 25,946 | py | Python | opaflib/xmlast.py | feliam/opaf | f9908c26af1bf28cc29f3d647dcd9f55d631d732 | [
"MIT"
] | 2 | 2019-11-23T14:46:35.000Z | 2022-01-21T16:09:47.000Z | opaflib/xmlast.py | feliam/opaf | f9908c26af1bf28cc29f3d647dcd9f55d631d732 | [
"MIT"
] | null | null | null | opaflib/xmlast.py | feliam/opaf | f9908c26af1bf28cc29f3d647dcd9f55d631d732 | [
"MIT"
] | 1 | 2019-09-06T21:04:39.000Z | 2019-09-06T21:04:39.000Z | from lxml import etree
from opaflib.filters import defilterData
#Logging facility
import logging,code
logger = logging.getLogger("OPAFXML")

class PDFXML(etree.ElementBase):
    ''' Base pdf-xml class. Every pdf token xml representation will
        have a span which indicates where the original token lay in the file
    '''
    def _getspan(self):
        return tuple([int(i) for i in self.get('span').split('~')])

    def _setspan(self, value):
        self.set('span', "%d~%d" % value)

    def span_move(self, offset, recursive=True):
        begin, end = self.span
        self.span = (begin + offset, end + offset)
        if recursive:
            for child in self.getchildren():
                child.span_move(offset)

    def span_expand(self, span):
        begin, end = self.span
        self.span = (min(begin, span[0]), max(end, span[1]))

    def clear_span(self, recursive=True):
        del self.attrib['span']
        for child in self.getchildren():
            child.clear_span()
    span = property(_getspan, _setspan)

    def _to_xml(self):
        return etree.tostring(self)
    xml = property(_to_xml)

    def _from_python(self, value):
        self.from_python(value)

    def _to_python(self):
        return self.to_python()
    value = property(_to_python, _from_python)

    def __getattr__(self, name):
        tags = set([e.tag for e in self])
        if name in tags:
            return self.xpath('./%s' % name)
        return getattr(super(PDFXML, self), name)

    def get_numgen(self):
        ''' Search the object and generation number of any pdf element '''
        if self.tag.startswith('indirect'):
            return self.id
        else:
            return self.getparent().get_numgen()

#leaf
class PDFString(PDFXML):
    def from_python(self, value):
        self.text = value.encode('string_escape')

    def to_python(self):
        return self.text.decode('string_escape')

class PDFName(PDFString):
    pass

class PDFData(PDFString):
    pass

class PDFBool(PDFString):
    def from_python(self, value):
        assert type(value) == bool, 'Value must be a boolean'
        self.text = ['false', 'true'][int(value)]

    def to_python(self):
        return {'false': False, 'true': True}[self.text]

class PDFNull(PDFString):
    def from_python(self, value):
        assert value is None, 'Value must be None'
        self.text = 'null'

    def to_python(self):
        assert self.text == 'null', 'PDFNull xml not initialized'
        return None

class PDFR(PDFString):
    def from_python(self, (n, g)):
        assert type(n) == int and type(g) == int, 'R must be two numbers, n and g'
        assert n >= 0 and n < 65535, 'Invalid object number (%d)' % n
        assert g >= 0 and g < 65535, 'Invalid generation number (%d)' % g
        self.text = "%d %d" % (n, g)

    def to_python(self):
        return tuple([int(i) for i in self.text.split(' ')])

    def solve(self):
        ''' search the referenced indirect object in the containing pdf '''
        pdf = self.xpath('/*')[0]
        return pdf.getIndirectObject(self.value)

class PDFNumber(PDFXML):
    def from_python(self, value):
        assert type(value) in [int, float], 'Wrong type for a number'
        self.text = str(value)

    def to_python(self):
        x = self.text
        return float(int(float(x))) == float(x) and int(float(x)) or float(x)

class PDFStartxref(PDFString):
    def from_python(self, value):
        assert type(value) == int, 'Wrong type for startxref'
        self.text = str(value).encode('string_escape')

    def to_python(self):
        return int(self.text.decode('string_escape'))

class PDFHeader(PDFString):
    pass

#tree
class PDFEntry(PDFXML):
    def to_python(self):
        return tuple([e.value for e in self.getchildren()])

    def _getkey(self):
        return self[0]

    def _setkey(self, key):
        assert key.tag == 'name'
        self[0] = key
    key = property(_getkey, _setkey, None)

    def _getval(self):
        return self[1]

    def _setval(self, val):
        self[1] = val
    val = property(_getval, _setval, None)

class PDFDictionary(PDFXML):
    def to_python(self):
        return dict([e.value for e in self.getchildren()])

    def has_key(self, key):
        return len(self.xpath('./entry/name[position()=1 and text()="%s"]' % key)) > 0

    def __getitem__(self, i):
        if str == type(i):
            return self.xpath('./entry/name[position()=1 and text()="%s"]/../*[position()=2]' % i)[0]
        return super(PDFDictionary, self).__getitem__(i)

    def __delitem__(self, i):
        if str == type(i):
            return self.remove(self.xpath('./entry/name[position()=1 and text()="%s"]/..' % i)[0])
        return super(PDFDictionary, self).__delitem__(i)

    def __setitem__(self, key, val):
        if str == type(key):
            self.xpath('./entry/name[position()=1 and text()="%s"]/..' % key)[0].val = val
        else:
            super(PDFDictionary, self).__setitem__(key, val)

class PDFStream(PDFXML):
    def to_python(self):
        return {'dictionary': self[0].value, 'data': self[1].value}

    def _getdictionary(self):
        return self[0]

    def _setdictionary(self, d):
        assert d.tag == 'dictionary'
        self[0] = d
    dictionary = property(_getdictionary, _setdictionary, None)

    def _getdata(self):
        return self[1]

    def _setdata(self, data):
        assert data.tag == 'data'
        self[1] = data
    data = property(_getdata, _setdata, None)

    def isFiltered(self):
        ''' Check if stream is filtered '''
        return self.dictionary.has_key('Filter')

    def getFilters(self):
        val = self.dictionary.value
        filters = val.get('Filter', None)
        params = val.get('DecodeParams', None)
        assert any([type(filters) == list and (type(params) == list or params == None),
                    type(filters) != list and (type(params) == dict or params == None)]), 'Filter/DecodeParms wrong type'
        if type(filters) != list:
            filters = [filters]
            params = params and [params] or [{}]
        if params == None:
            params = [{}] * len(filters)
        assert all([type(x) == str for x in filters]), 'Filters shall be names'
        assert all([type(x) == dict for x in params]), 'Params should be a dictionary.. or null?'
        assert len(filters) == len(params), 'Number of DecodeParms should match Filters'
        return zip(filters, params)

    def popFilter(self):
        dictionary = self.dictionary
        assert dictionary.has_key('Filter'), 'Stream not Filtered!'
        selected_filter = None
        selected_params = None
        deletion_list = []
        if dictionary['Length'].value != len(self.data.value):
            logger.info("Length field of object %s does not match the actual data size (%d != %d)" % (str(self.get_numgen()), dictionary['Length'].value, len(self.data.value)))
        if type(dictionary['Filter']) == PDFArray:
            selected_filter = dictionary['Filter'][0]
            del dictionary['Filter'][0]
            if dictionary.has_key('DecodeParms'):
                assert type(dictionary['DecodeParms']) == PDFArray, 'Array of filters needs array of decoding params'
                selected_params = dictionary['DecodeParms'][0]
                deletion_list.append((dictionary['DecodeParms'], 0))
                #del dictionary['DecodeParms'][0]
        else:
            selected_filter = dictionary['Filter']
            del dictionary['Filter']
            if dictionary.has_key('DecodeParms'):
                selected_params = dictionary['DecodeParms']
                deletion_list.append((dictionary, 'DecodeParms'))
                #del dictionary['DecodeParms']
        if dictionary.has_key('Filter') and \
           type(dictionary['Filter']) == PDFArray and \
           len(dictionary['Filter']) == 0:
            deletion_list.append((dictionary, 'Filter'))
            #del dictionary['Filter']
        if dictionary.has_key('DecodeParms') and \
           type(dictionary['DecodeParms']) == PDFArray and \
           len(dictionary['DecodeParms']) == 0:
            deletion_list.append((dictionary, 'DecodeParms'))
            #del dictionary['DecodeParms']
        #FIX recode defilterData .. make it register/unregister able.
        #(think /Crypt 7.4.10 Crypt Filter )
        self.data.value = defilterData(selected_filter.value, self.data.value, selected_params and selected_params.value or selected_params)
        for v, i in deletion_list:
            del v[i]
        dictionary['Length'].value = len(self.data.value)

    def defilter(self):
        try:
            while self.isFiltered():
                self.popFilter()
        except Exception, e:
            logger.debug("Couldn't defilter <%s> stream (exception %s)." % (self.value, str(e)))
            logger.info("Couldn't defilter <%s> stream." % str(self.get_numgen()))

    def isObjStm(self):
        ''' Return true if this is an object stream (ObjStm) '''
        return self.dictionary.has_key('Type') and self.dictionary['Type'].value == 'ObjStm'

    def expandObjStm(self):
        '''
            This parses the ObjStm structure and replaces it with all the new
            indirect objects.
        '''
        from opaflib.parser import parse
        assert not self.isFiltered(), "ObjStm should not be compressed at this point"
        assert self.dictionary.has_key('N'), "N is mandatory in ObjStm dictionary"
        assert self.dictionary.has_key('First'), "First is mandatory in ObjStm dictionary"
        dictionary = self.dictionary
        data = self.data.value
        first = dictionary["First"].value
        pointers = [int(x) for x in data[:first].split()]
        assert len(pointers) % 2 == 0, "Wrong number of integers at the ObjStm beginning"
        pointers = dict([(pointers[i + 1] + first, pointers[i]) for i in range(0, len(pointers), 2)])
        positions = sorted(pointers.keys() + [len(data)])
        parsed_objects = []
        for p in range(0, len(positions) - 1):
            logger.info("Adding new object %s from object stream" % repr((pointers[positions[p]], 0)))
            io = PDF.indirect_object(parse('object', data[positions[p]:positions[p + 1]] + " "))
            io.id = (pointers[positions[p]], 0)
            parsed_objects.append(io)
        return parsed_objects

class PDFArray(PDFXML):
    def to_python(self):
        return [e.value for e in self]

class PDFIndirect(PDFXML):
    def to_python(self):
        assert len(self.getchildren()) == 1, "Wrong number of children in indirect object"
        return (self.id, self.object.value)

    def _getobject(self):
        return self[0]

    def _setobject(self, o):
        self[0] = o
    object = property(_getobject, _setobject, None)

    def _getid(self):
        return tuple([int(i) for i in self.get('id').split(' ')])

    def _setid(self, o):
        self.set('id', "%d %d" % o)
    id = property(_getid, _setid, None)

    def isStream(self):
        return len(self.xpath('./stream')) == 1

class PDFPdf(PDFXML):
    def to_python(self):
        return [e.value for e in self]

    def getStartxref(self):
        ''' Get the last startxref pointer (should be at least one) '''
        return self.pdf_update[-1].startxref[-1]

    #FIX move all this to pdf_update and do the wrapper here
    def getObjectAt(self, pos):
        ''' Get the object found at a certain byte position '''
        return self.xpath('//*[starts-with(@span,"%d~")]' % pos)[0]

    def getTrailer(self, startxref=None):
        ''' Get the Trailer dictionary (should be at least one) '''
        if startxref == None:
            startxref = self.getStartxref().value
        xref = self.getObjectAt(startxref)
        assert xref.tag in ['xref', 'stream'] and xref[0].tag == 'dictionary'
        return xref[0]

    def getID(self, startxref=None):
        ''' Get the pdf ID from the trailer dictionary '''
        trailer = self.getTrailer(startxref).value
        if trailer.has_key('ID'):
            return trailer['ID']
        else:
            return ['', '']

    def getIndirectObject(self, ref):
        ''' Search for an indirect object '''
        for u in self.pdf_update:
            if u.has_key(ref):
                return u[ref]

    def getRoot(self):
        ''' Get the pdf Root node. '''
        return self.getIndirectObject(self.getTrailer()['Root'].value).object

    def isEncrypted(self):
        ''' Return true if pdf is encrypted '''
        return self.getTrailer().has_key('Encrypt')

    def countObjStm(self):
        ''' Count number of 'compressed' object streams '''
        return len(self.xpath('//stream/dictionary/entry/name[position()=1 and text()="Type"]/../name[position()=2 and text()="ObjStm"]/../../..'))

    def countIObj(self):
        ''' Count number of indirect objects '''
        return len(self.xpath('//indirect_object'))
    def graph(self, dot='default.dot'):
        ''' Generate a .dot graph of the pdf '''
        dotdata = "digraph {\n"
        nodes_added = set()
        for io in self.pdf_update.indirect_object:
            references = io.xpath(".//R")
            orig = "%d %d" % io.id
            if len(references) == 0:
                dotdata += '\t"%s";\n' % orig
                nodes_added.add(orig)
            else:
                for r in references:
                    dest = "%d %d" % r.value
                    dotdata += '\t"%s" -> "%s";\n' % (orig, dest)
                    nodes_added.add(orig)
                    nodes_added.add(dest)
        try:
            root = "%d %d" % self.getRoot()
            dotdata += '\t"trailer" -> "%s";\n' % root
        except Exception, e:
            pass
        dotdata += '}\n'
        logger.info("Writing graph to %s (a dot file). Download graphviz or try http://rise4fun.com/Agl to render it." % dot)
        file(dot, "w").write(dotdata)

    def expandAllObjStm(self):
        ''' Find all object streams and expand them. Each ObjStm will be replaced
            by its children '''
        for u in self.pdf_update:
            for ref in u.findAllObjStm():
                u.expandObjStm(ref)

    def defilterAll(self):
        ''' Find all filtered streams and defilter them in place. '''
        for u in self.pdf_update:
            for io in u[:]:
                if type(io) == PDFIndirect and io.isStream() and io.object.isFiltered():
                    io.object.defilter()
    def decrypt(self):
        ''' This will try to decrypt V:4 null password encryption '''
        import hashlib, struct
        from Crypto.Cipher import AES
        from Crypto.Util import randpool
        import base64

        def rc4crypt(data, key):
            x = 0
            box = range(256)
            for i in range(256):
                x = (x + box[i] + ord(key[i % len(key)])) % 256
                box[i], box[x] = box[x], box[i]
            x = 0
            y = 0
            out = []
            for char in data:
                x = (x + 1) % 256
                y = (y + box[x]) % 256
                box[x], box[y] = box[y], box[x]
                out.append(chr(ord(char) ^ box[(box[x] + box[y]) % 256]))
            return ''.join(out)

        block_size = 16
        key_size = 32

        def encrypt(plain_text, key_bytes):
            assert len(key_bytes) == key_size
            mode = AES.MODE_CBC
            pad = block_size - len(plain_text) % block_size
            data = plain_text + pad * chr(pad)
            iv_bytes = randpool.RandomPool(512).get_bytes(block_size)
            encrypted_bytes = iv_bytes + AES.new(key_bytes, mode, iv_bytes).encrypt(data)
            return encrypted_bytes

        def decrypt(encrypted_bytes, key_bytes):
            #assert len(key_bytes) == key_size
            mode = AES.MODE_CBC
            iv_bytes = encrypted_bytes[:block_size]
            plain_text = AES.new(key_bytes, mode, iv_bytes).decrypt(encrypted_bytes[block_size:])
            pad = ord(plain_text[-1])
            return plain_text[:-pad]

        assert self.isEncrypted()
        #Get and print the encryption dictionary
        encrypt = self.getTrailer()['Encrypt'].solve().object
        print "It's ENCRYPTED!"
        encrypt_py = encrypt.value
        print encrypt_py
        #Ok, try to decrypt it ...
        assert encrypt_py['V'] == 4, "Sorry, only Version 4 supported"
        assert encrypt_py['R'] == 4, "Sorry, only Revision 4 supported"
        #key length in bytes
        n = encrypt_py['Length'] / 8
        print "N:", n
        #a) Pad or truncate the password string to exactly 32 bytes.
        user_password = ""
        pad = "28BF4E5E4E758A4164004E56FFFA01082E2E00B6D0683E802F0CA9FE6453697A".decode('hex')
        print "PASSWORD: ", user_password.encode('hex')
        print "PAD: ", pad.encode('hex')
        #b) Initialize the MD5 hash function and pass the result of step (a) as input to this function.
        m = hashlib.md5()
        m.update((user_password + pad)[:32])
        print "MD5 update 1", ((user_password + pad)[:32]).encode('hex')
        #c) Pass the value of the encryption dictionary's O entry to the MD5 hash function.
        m.update(encrypt_py['O'][:32])
        print "MD5 update 2", (encrypt_py['O'][:32]).encode('hex')
        #d) Convert the integer value of the P entry to a 32-bit unsigned binary number and pass these
        #   bytes to the MD5 hash function, low-order byte first.
        print "MD5 update 3", struct.pack("<L", 0xffffffff & encrypt_py['P']).encode('hex')
        m.update(struct.pack("<L", 0xffffffff & encrypt_py['P']))
        #e) Append the first element of the file ID.
        #TODO, get the ID from the trailer..
        ID = ''
        m.update(ID)
        print "MD5 update 4", ID.encode('hex')
        #f) If document metadata is not being encrypted, pass 4 bytes with the value 0xFFFFFFFF to the MD5 hash function.
        if encrypt_py.has_key('EncryptMetadata') and encrypt_py['EncryptMetadata'] == False:
            m.update('\xff' * 4)
            print "MD5 update 5", ('\xff' * 4).encode('hex')
        print "1st DIGEST:", m.digest().encode('hex')
        h = m.digest()[:n]
        for i in range(0, 50):
            h = hashlib.md5(h[:n]).digest()
            print "Encryption KEY(%d)" % i, h.encode('hex')
        key = h[:n]
        print "Encryption KEY", key.encode('hex')
        print "Try to authenticate"
        _buf = hashlib.md5(pad + ID).digest()
        print "MD5(padding+ID):", _buf.encode('hex')
        for i in range(0, 20):
            _key = ''.join([chr(ord(k) ^ i) for k in list(key)])
            _buf1 = rc4crypt(_buf, _key)
            print "RC4 iter(%d) Encrypt data <%s> with key <%s> and it gives data <%s>" % (i, _buf.encode('hex'), _key.encode('hex'), _buf1.encode('hex'))
            _buf = _buf1
        assert _buf == encrypt_py['U'][:16]
        print "Authenticated! (An actual pass is not needed. Using null pass '')"
        print "U", encrypt_py['U'].encode('hex')
        print "O", encrypt_py['O'].encode('hex')

        def decrypt_xml(xml_element):
            n, g = xml_element.get_numgen()
            m = hashlib.md5()
            m.update(key)
            m.update(chr(n & 0xff))
            m.update(chr((n >> 8) & 0xff))
            m.update(chr((n >> 16) & 0xff))
            m.update(chr(g & 0xff))
            m.update(chr((g >> 8) & 0xff))
            m.update("sAlT")
            real_key = m.digest()
            pld = xml_element.value
            if pld.endswith("\x0d\x0a"):
                pld = pld[:-2]
            pld = decrypt(pld, real_key)
            xml_element.value = pld

        #decrypt every string and stream in place...
        for e in self.xpath('//stream/data'):
            decrypt_xml(e)
        for e in self.xpath('//string'):
            decrypt_xml(e)

class PDFUpdate(PDFXML):
    def to_python(self):
        return dict([e.value for e in self.xpath('./indirect_object')])

    def has_key(self, key):
        key = "%d %d" % key
        return len(self.xpath('./indirect_object[@id="%s"]' % key)) > 0

    def __getitem__(self, key):
        if tuple == type(key):
            key = "%d %d" % key
            return self.xpath('./indirect_object[@id="%s"]' % key)[0]
        return super(PDFUpdate, self).__getitem__(key)

    def __delitem__(self, key):
        if tuple == type(key):
            key = "%d %d" % key
            return self.remove(self.xpath('./indirect_object[@id="%s"]' % key)[0])
        return super(PDFUpdate, self).__delitem__(key)

    def __setitem__(self, key, val):
        if str == type(key):
            self.xpath('./indirect_object[@obj="%s"]' % key)[0][:] = [val]  #mmm
        else:
            super(PDFUpdate, self).__setitem__(key, val)

    def getObjectAt(self, pos):
        ''' Get the object found at a certain byte position (only in this update!) '''
        return self.xpath('.//*[starts-with(@span,"%d~")]' % pos)[0]

    def getTrailer(self, startxref=None):
        ''' Get the Trailer dictionary (of this update!) '''
        if startxref == None:
            startxref = self.getStartxref().value
        xref = self.getObjectAt(startxref)
        return xref.dictionary

    def getRoot(self):
        ''' Get the pdf Root node of this update. '''
        return self[self.getTrailer()['Root'].value].object

    def countObjStm(self):
        ''' Count number of 'compressed' object streams '''
        return len(self.xpath('.//stream/dictionary/entry/name[position()=1 and text()="Type"]/../name[position()=2 and text()="ObjStm"]/../../..'))

    def expandObjStm(self, ref):
        io_objstm = self[ref]
        assert io_objstm.object.dictionary['Type'].value == 'ObjStm'
        #completely defilter the object stream
        while io_objstm.object.isFiltered():
            io_objstm.object.popFilter()
        #parse the indirect simple objects inside it
        expanded_iobjects = io_objstm.object.expandObjStm()
        #replace the object stream by its children
        for new_io in expanded_iobjects:
            io_objstm.addnext(new_io)
        self.remove(io_objstm)

    def findAllObjStm(self):
        ''' Search 'compressed' object stream ids/refs '''
        return [io.id for io in self.xpath('.//stream/dictionary/entry/name[position()=1 and text()="Type"]/../name[position()=2 and text()="ObjStm"]/../../../..')]

    def expandAllObjStm(self):
        for ref in self.findAllObjStm():
            self.expandObjStm(ref)

#Factory
class PDFXMLFactory():
    def __init__(self):
        self.parser = etree.XMLParser()
        fallback = etree.ElementDefaultClassLookup(PDFXML)
        lookup = etree.ElementNamespaceClassLookup(fallback)
        namespace = lookup.get_namespace(None)
        #leafs
        namespace['name'] = PDFName
        namespace['string'] = PDFString
        namespace['number'] = PDFNumber
        namespace['null'] = PDFNull
        namespace['bool'] = PDFBool
        namespace['R'] = PDFR
        namespace['header'] = PDFHeader
        namespace['startxref'] = PDFStartxref
        namespace['data'] = PDFData
        #trees
        namespace['entry'] = PDFEntry
        namespace['dictionary'] = PDFDictionary
        namespace['stream'] = PDFStream
        namespace['pdf'] = PDFPdf
        namespace['pdf_update'] = PDFUpdate
        namespace['indirect_object'] = PDFIndirect
        namespace['array'] = PDFArray
        self.parser.set_element_class_lookup(lookup)

    #leaf
    def create_leaf(self, tag, value, **attribs):
        assert tag in ['number', 'string', 'name', 'R', 'startxref', 'header', 'data', 'null', 'bool'], "Got wrong leaf tag: %s" % tag
        xml = self.parser.makeelement(tag)
        xml.value = value
        xml.span = attribs.setdefault('span', (0xffffffff, -1))
        del attribs['span']
        for attr_key, attr_val in attribs.items():
            xml.set(attr_key, str(attr_val))
        return xml

    #Tree
    def create_tree(self, tag, *childs, **attribs):
        assert tag in ['indirect_object', 'dictionary', 'entry', 'array', 'stream', 'xref', 'pdf', 'pdf_update'], "Got wrong tree tag: %s" % tag
        xml = self.parser.makeelement(tag)
        xml.span = attribs.setdefault('span', (0xffffffff, -1))
        del attribs['span']
        for attr_key, attr_val in attribs.items():
            xml.set(attr_key, str(attr_val))
        for child in childs:
            xml.append(child)
        return xml

    def __getattr__(self, tag, *args, **kwargs):
        if tag in ['number', 'string', 'name', 'R', 'startxref', 'header', 'data', 'null', 'bool']:
            return lambda payload, **my_kwargs: self.create_leaf(tag, payload, **my_kwargs)
        elif tag in ['indirect_object', 'dictionary', 'entry', 'array', 'stream', 'xref', 'pdf', 'pdf_update']:
            return lambda payload, **my_kwargs: self.create_tree(tag, *payload, **my_kwargs)
        return super(PDFXMLFactory, self).__getattr__(tag, *args, **kwargs)

PDF = PDFXMLFactory()

def create_leaf(tag, value, **kwargs):
    return PDF.create_leaf(tag, value, **kwargs)

def create_tree(tag, childs, **kwargs):
    return PDF.create_tree(tag, *childs, **kwargs)

if __name__ == "__main__":
    name = create_leaf('name', "Name")
    string = create_leaf('string', "Felipe")
    entry = create_tree('entry', [name, string])
    dictionary = create_tree('dictionary', [entry])
    stream_data = create_leaf('data', "A" * 100)
    stream = create_tree('stream', [dictionary, stream_data])
    indirect = create_tree('indirect_object', [stream], obj=(1, 0))
    array = create_tree('array', [create_leaf('number', i) for i in range(0, 10)])
    xml = indirect
    print etree.tostring(xml), xml.value
    import code
    code.interact(local=locals())
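The `rc4crypt` helper inside `PDFPdf.decrypt` is a textbook RC4 (key-scheduling plus pseudo-random generation). A Python 3 transliteration is easy to sanity-check against the well-known "Key"/"Plaintext" test vector; this sketch is an illustration, not part of opaflib:

```python
def rc4(data: bytes, key: bytes) -> bytes:
    # key-scheduling algorithm (KSA)
    box = list(range(256))
    x = 0
    for i in range(256):
        x = (x + box[i] + key[i % len(key)]) % 256
        box[i], box[x] = box[x], box[i]
    # pseudo-random generation algorithm (PRGA), XORed with the data
    x = y = 0
    out = bytearray()
    for byte in data:
        x = (x + 1) % 256
        y = (y + box[x]) % 256
        box[x], box[y] = box[y], box[x]
        out.append(byte ^ box[(box[x] + box[y]) % 256])
    return bytes(out)

print(rc4(b"Plaintext", b"Key").hex())  # bbf316e8d940af0ad3
```

RC4 is its own inverse, which is why `decrypt` in the PDF standard-security handler simply runs the same keystream again.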

86dfb9b0ac538e587eb0952c661e061a843edff2 | 1544 | py | Python | src/sol/handle_metaplex.py | terra-dashboard/staketaxcsv | 5793105488bf799c61aee64a45f44e9ae8fef397 | ["MIT"]

from common.make_tx import make_swap_tx
from sol.handle_simple import handle_unknown_detect_transfers

def handle_metaplex(exporter, txinfo):
    transfers_in, transfers_out, _ = txinfo.transfers_net

    if len(transfers_in) == 1 and len(transfers_out) == 1:
        sent_amount, sent_currency, _, _ = transfers_out[0]
        received_amount, received_currency, _, _ = transfers_in[0]

        row = make_swap_tx(txinfo, sent_amount, sent_currency, received_amount, received_currency)
        exporter.ingest_row(row)
    else:
        handle_unknown_detect_transfers(exporter, txinfo)


def is_nft_mint(txinfo):
    log_instructions = txinfo.log_instructions
    transfers_in, transfers_out, _ = txinfo.transfers_net

    if "MintTo" in log_instructions and len(transfers_out) == 1 and len(transfers_in) == 0:
        return True
    elif ("MintTo" in log_instructions
            and len(transfers_out) == 1
            and len(transfers_in) == 1
            and transfers_in[0][0] == 1):
        return True
    else:
        return False


def handle_nft_mint(exporter, txinfo):
    transfers_in, transfers_out, transfers_unknown = txinfo.transfers_net

    if len(transfers_in) == 1 and len(transfers_out) == 1:
        sent_amount, sent_currency, _, _ = transfers_out[0]
        received_amount, received_currency, _, _ = transfers_in[0]

        row = make_swap_tx(txinfo, sent_amount, sent_currency, received_amount, received_currency)
        exporter.ingest_row(row)
        return

    handle_unknown_detect_transfers(exporter, txinfo)
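The mint-detection rules above depend only on `log_instructions` and the `transfers_net` triple. A self-contained sketch with a stand-in `TxInfo` (hypothetical; the real object comes from staketaxcsv's transaction parser) shows the three cases:

```python
from collections import namedtuple

# minimal stand-in for the parsed-transaction object used above
TxInfo = namedtuple("TxInfo", ["log_instructions", "transfers_net"])

def is_nft_mint(txinfo):
    # same logic as the function above, inlined so the sketch is runnable
    log_instructions = txinfo.log_instructions
    transfers_in, transfers_out, _ = txinfo.transfers_net
    if "MintTo" in log_instructions and len(transfers_out) == 1 and len(transfers_in) == 0:
        return True
    elif ("MintTo" in log_instructions
            and len(transfers_out) == 1
            and len(transfers_in) == 1
            and transfers_in[0][0] == 1):
        return True
    return False

# one SOL payment out, nothing in, with a MintTo instruction -> mint
mint = TxInfo(["MintTo"], ([], [(1.5, "SOL", None, None)], []))
print(is_nft_mint(mint))  # True
```

The second branch covers mints that also deliver exactly one token with amount 1 (the freshly minted NFT) back to the wallet.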

86e150137bde5dca549d0321cdc857bd542bc500 | 3878 | py | Python | cinemasci/cis/__init__.py | cinemascience/cinemasc | 5b00a0c2e3c886f65cfbf1f59e914fc458d7068b | ["BSD-3-Clause"]

from . import imageview
from . import cisview
from . import renderer
from . import convert

class cis:
    """Composable Image Set Class

       The data structure to hold properties of a Composable Image Set.
    """

    def __init__(self, filename):
        """ The constructor. """
        self.fname = filename
        self.classname = "COMPOSABLE_IMAGE_SET"
        self.dims = [0, 0]
        self.flags = "CONSTANT_CHANNELS"
        self.version = "1.0"
        self.parameterlist = []
        self.parametertable = None
        self.variables = {}
        self.images = {}
        self.colormaps = {}

    def debug_print(self):
        """ Debug print statement for CIS properties. """
        print("printing cis")
        print("  fname:     {}".format(self.fname))
        print("  classname: {}".format(self.classname))
        print("  dims:      {}".format(self.dims))
        print("  flags:     {}".format(self.flags))
        print("  version:   {}".format(self.version))
        print("  colormaps: ")
        for m in self.colormaps:
            print(m)
        for i in self.get_images():
            print("  image: {}".format(self.get_image(i).name))
            for l in self.get_image(i).get_layers():
                print("    layer: {}".format(self.get_image(i).get_layer(l).name))
        print("\n")

    def get_images(self):
        """ Return all images. """
        for i in self.images:
            yield i

    def get_image_names(self):
        """ Return a list of image names. """
        return list(self.images.keys())

    def set_parameter_table(self, table):
        """ Set parameter table using a deep copy. """
        self.parametertable = table.copy(deep=True)

    def add_parameter(self, name, type):
        """ Add a parameter to the list of parameters for the CIS. """
        # check for duplicates
        self.parameterlist.append([name, type])

    def add_variable(self, name, type, min, max):
        """ Add a variable to the set of variables. """
        # check for duplicates
        self.variables[name] = {'type': type, 'min': min, 'max': max}

    def add_image(self, name):
        """ Add an image to the set of images in the CIS. """
        # check for duplicates
        # note: relies on a sibling `image` module providing image.image
        self.images[name] = image.image(name)
        return self.images[name]

    def get_variables(self):
        """ Return all variables. """
        for i in self.variables:
            yield i

    def get_variable(self, name):
        """ Return a variable. """
        variable = None
        if name in self.variables:
            variable = self.variables[name]
        return variable

    def get_image(self, name):
        """ Return an image. """
        image = None
        if name in self.images:
            image = self.images[name]
        return image

    def get_colormap(self, name):
        """ Return a colormap. """
        colormap = None
        if name in self.colormaps:
            colormap = self.colormaps[name]
        return colormap

    def add_colormap(self, name, path):
        """ Add a colormap to the set of colormaps. """
        # if colormap not in dict
        # note: relies on a sibling `colormap` module providing colormap.colormap
        if (name not in self.colormaps):
            self.colormaps[name] = colormap.colormap(path)

    def remove_colormap(self, name):
        """ Remove a colormap from the set of colormaps. """
        self.colormaps.pop(name)

    def get_colormaps(self):
        """ Return all colormaps. """
        for i in self.colormaps:
            yield i

    def set_dims(self, w, h):
        """ Set the dimensions of the CIS given a width and height. """
        self.dims = [w, h]

86e1817f75ca21dff7ecb06d87908e9887be1bfd | 2172 | py | Python | applications/spaghetti.py | fos/fos-legacy | db6047668781a0615abcebc7d55a7164f3105047 | ["BSD-3-Clause"]

import numpy as np
import nibabel as nib
import os.path as op
import pyglet
#pyglet.options['debug_gl'] = True
#pyglet.options['debug_x11'] = True
#pyglet.options['debug_gl_trace'] = True
#pyglet.options['debug_texture'] = True
#fos modules
from fos.actor.axes import Axes
from fos import World, Window, WindowManager
from labeler import TrackLabeler
from fos.actor.slicer import Slicer
#dipy modules
from dipy.segment.quickbundles import QuickBundles
from dipy.io.dpy import Dpy
from dipy.io.pickles import load_pickle,save_pickle
from dipy.viz.colormap import orient2rgb
import copy

if __name__ == '__main__':
    subject = 5
    seeds = 1
    qb_dist = 30

    #load T1 volume registered in MNI space
    img = nib.load('data/subj_' + ("%02d" % subject) + '/MPRAGE_32/T1_flirt_out.nii.gz')
    data = img.get_data()
    affine = img.get_affine()
    #load the tracks registered in MNI space
    fdpyw = 'data/subj_' + ("%02d" % subject) + '/101_32/DTI/tracks_gqi_' + str(seeds) + 'M_linear.dpy'
    dpr = Dpy(fdpyw, 'r')
    T = dpr.read_tracks()
    dpr.close()
    #load initial QuickBundles with threshold 30mm
    fpkl = 'data/subj_' + ("%02d" % subject) + '/101_32/DTI/qb_gqi_' + str(seeds) + 'M_linear_' + str(qb_dist) + '.pkl'
    #qb = QuickBundles(T, 30., 12)
    qb = load_pickle(fpkl)
    #create the interaction system for tracks
    tl = TrackLabeler(qb, qb.downsampled_tracks(), vol_shape=data.shape, tracks_alpha=1)
    #add an interactive slicing/masking tool
    sl = Slicer(affine, data)
    #add one-way communication between tl and sl
    tl.slicer = sl
    #OpenGL coordinate system axes
    ax = Axes(100)
    x, y, z = data.shape
    #add the actors to the world
    w = World()
    w.add(tl)
    w.add(sl)
    #w.add(ax)
    #create a window
    wi = Window(caption="Interactive Spaghetti using Diffusion Imaging in Python (dipy.org) and Free On Shades (fos.me)",
                bgcolor=(0.3, 0.3, 0.6, 1), width=1200, height=800)
    #attach the world to the window
    wi.attach(w)
    #create a manager which can handle multiple windows
    wm = WindowManager()
    wm.add(wi)
    wm.run()
    print('Everything is running ;-)')

86e6c529a13c62833d2d9d91e683f2c9cc85c2b8 | 16246 | py | Python | sdk/python/pulumi_azure_native/servicebus/v20210601preview/get_subscription.py | polivbr/pulumi-azure-native | 09571f3bf6bdc4f3621aabefd1ba6c0d4ecfb0e7 | ["Apache-2.0"]

# coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from ... import _utilities
from . import outputs
__all__ = [
'GetSubscriptionResult',
'AwaitableGetSubscriptionResult',
'get_subscription',
]
@pulumi.output_type
class GetSubscriptionResult:
"""
Description of subscription resource.
"""
def __init__(__self__, accessed_at=None, auto_delete_on_idle=None, client_affine_properties=None, count_details=None, created_at=None, dead_lettering_on_filter_evaluation_exceptions=None, dead_lettering_on_message_expiration=None, default_message_time_to_live=None, duplicate_detection_history_time_window=None, enable_batched_operations=None, forward_dead_lettered_messages_to=None, forward_to=None, id=None, is_client_affine=None, lock_duration=None, max_delivery_count=None, message_count=None, name=None, requires_session=None, status=None, system_data=None, type=None, updated_at=None):
if accessed_at and not isinstance(accessed_at, str):
raise TypeError("Expected argument 'accessed_at' to be a str")
pulumi.set(__self__, "accessed_at", accessed_at)
if auto_delete_on_idle and not isinstance(auto_delete_on_idle, str):
raise TypeError("Expected argument 'auto_delete_on_idle' to be a str")
pulumi.set(__self__, "auto_delete_on_idle", auto_delete_on_idle)
if client_affine_properties and not isinstance(client_affine_properties, dict):
raise TypeError("Expected argument 'client_affine_properties' to be a dict")
pulumi.set(__self__, "client_affine_properties", client_affine_properties)
if count_details and not isinstance(count_details, dict):
raise TypeError("Expected argument 'count_details' to be a dict")
pulumi.set(__self__, "count_details", count_details)
if created_at and not isinstance(created_at, str):
raise TypeError("Expected argument 'created_at' to be a str")
pulumi.set(__self__, "created_at", created_at)
if dead_lettering_on_filter_evaluation_exceptions and not isinstance(dead_lettering_on_filter_evaluation_exceptions, bool):
raise TypeError("Expected argument 'dead_lettering_on_filter_evaluation_exceptions' to be a bool")
pulumi.set(__self__, "dead_lettering_on_filter_evaluation_exceptions", dead_lettering_on_filter_evaluation_exceptions)
if dead_lettering_on_message_expiration and not isinstance(dead_lettering_on_message_expiration, bool):
raise TypeError("Expected argument 'dead_lettering_on_message_expiration' to be a bool")
pulumi.set(__self__, "dead_lettering_on_message_expiration", dead_lettering_on_message_expiration)
if default_message_time_to_live and not isinstance(default_message_time_to_live, str):
raise TypeError("Expected argument 'default_message_time_to_live' to be a str")
pulumi.set(__self__, "default_message_time_to_live", default_message_time_to_live)
if duplicate_detection_history_time_window and not isinstance(duplicate_detection_history_time_window, str):
raise TypeError("Expected argument 'duplicate_detection_history_time_window' to be a str")
pulumi.set(__self__, "duplicate_detection_history_time_window", duplicate_detection_history_time_window)
if enable_batched_operations and not isinstance(enable_batched_operations, bool):
raise TypeError("Expected argument 'enable_batched_operations' to be a bool")
pulumi.set(__self__, "enable_batched_operations", enable_batched_operations)
if forward_dead_lettered_messages_to and not isinstance(forward_dead_lettered_messages_to, str):
raise TypeError("Expected argument 'forward_dead_lettered_messages_to' to be a str")
pulumi.set(__self__, "forward_dead_lettered_messages_to", forward_dead_lettered_messages_to)
if forward_to and not isinstance(forward_to, str):
raise TypeError("Expected argument 'forward_to' to be a str")
pulumi.set(__self__, "forward_to", forward_to)
if id and not isinstance(id, str):
raise TypeError("Expected argument 'id' to be a str")
pulumi.set(__self__, "id", id)
if is_client_affine and not isinstance(is_client_affine, bool):
raise TypeError("Expected argument 'is_client_affine' to be a bool")
pulumi.set(__self__, "is_client_affine", is_client_affine)
if lock_duration and not isinstance(lock_duration, str):
raise TypeError("Expected argument 'lock_duration' to be a str")
pulumi.set(__self__, "lock_duration", lock_duration)
if max_delivery_count and not isinstance(max_delivery_count, int):
            raise TypeError("Expected argument 'max_delivery_count' to be an int")
pulumi.set(__self__, "max_delivery_count", max_delivery_count)
if message_count and not isinstance(message_count, float):
raise TypeError("Expected argument 'message_count' to be a float")
pulumi.set(__self__, "message_count", message_count)
if name and not isinstance(name, str):
raise TypeError("Expected argument 'name' to be a str")
pulumi.set(__self__, "name", name)
if requires_session and not isinstance(requires_session, bool):
raise TypeError("Expected argument 'requires_session' to be a bool")
pulumi.set(__self__, "requires_session", requires_session)
if status and not isinstance(status, str):
raise TypeError("Expected argument 'status' to be a str")
pulumi.set(__self__, "status", status)
if system_data and not isinstance(system_data, dict):
raise TypeError("Expected argument 'system_data' to be a dict")
pulumi.set(__self__, "system_data", system_data)
if type and not isinstance(type, str):
raise TypeError("Expected argument 'type' to be a str")
pulumi.set(__self__, "type", type)
if updated_at and not isinstance(updated_at, str):
raise TypeError("Expected argument 'updated_at' to be a str")
pulumi.set(__self__, "updated_at", updated_at)
@property
@pulumi.getter(name="accessedAt")
def accessed_at(self) -> str:
"""
Last time there was a receive request to this subscription.
"""
return pulumi.get(self, "accessed_at")
@property
@pulumi.getter(name="autoDeleteOnIdle")
def auto_delete_on_idle(self) -> Optional[str]:
"""
        ISO 8601 timespan idle interval after which the subscription is automatically deleted. The minimum duration is 5 minutes.
"""
return pulumi.get(self, "auto_delete_on_idle")
@property
@pulumi.getter(name="clientAffineProperties")
def client_affine_properties(self) -> Optional['outputs.SBClientAffinePropertiesResponse']:
"""
Properties specific to client affine subscriptions.
"""
return pulumi.get(self, "client_affine_properties")
@property
@pulumi.getter(name="countDetails")
def count_details(self) -> 'outputs.MessageCountDetailsResponse':
"""
Message count details
"""
return pulumi.get(self, "count_details")
@property
@pulumi.getter(name="createdAt")
def created_at(self) -> str:
"""
Exact time the message was created.
"""
return pulumi.get(self, "created_at")
@property
@pulumi.getter(name="deadLetteringOnFilterEvaluationExceptions")
def dead_lettering_on_filter_evaluation_exceptions(self) -> Optional[bool]:
"""
Value that indicates whether a subscription has dead letter support on filter evaluation exceptions.
"""
return pulumi.get(self, "dead_lettering_on_filter_evaluation_exceptions")
@property
@pulumi.getter(name="deadLetteringOnMessageExpiration")
def dead_lettering_on_message_expiration(self) -> Optional[bool]:
"""
Value that indicates whether a subscription has dead letter support when a message expires.
"""
return pulumi.get(self, "dead_lettering_on_message_expiration")
@property
@pulumi.getter(name="defaultMessageTimeToLive")
def default_message_time_to_live(self) -> Optional[str]:
"""
        ISO 8601 default message timespan to live value. This is the duration after which the message expires, starting from when the message is sent to Service Bus. This is the default value used when TimeToLive is not set on a message itself.
"""
return pulumi.get(self, "default_message_time_to_live")
@property
@pulumi.getter(name="duplicateDetectionHistoryTimeWindow")
def duplicate_detection_history_time_window(self) -> Optional[str]:
"""
ISO 8601 timeSpan structure that defines the duration of the duplicate detection history. The default value is 10 minutes.
"""
return pulumi.get(self, "duplicate_detection_history_time_window")
@property
@pulumi.getter(name="enableBatchedOperations")
def enable_batched_operations(self) -> Optional[bool]:
"""
Value that indicates whether server-side batched operations are enabled.
"""
return pulumi.get(self, "enable_batched_operations")
@property
@pulumi.getter(name="forwardDeadLetteredMessagesTo")
def forward_dead_lettered_messages_to(self) -> Optional[str]:
"""
Queue/Topic name to forward the Dead Letter message
"""
return pulumi.get(self, "forward_dead_lettered_messages_to")
@property
@pulumi.getter(name="forwardTo")
def forward_to(self) -> Optional[str]:
"""
Queue/Topic name to forward the messages
"""
return pulumi.get(self, "forward_to")
@property
@pulumi.getter
def id(self) -> str:
"""
Resource Id
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="isClientAffine")
def is_client_affine(self) -> Optional[bool]:
"""
Value that indicates whether the subscription has an affinity to the client id.
"""
return pulumi.get(self, "is_client_affine")
@property
@pulumi.getter(name="lockDuration")
def lock_duration(self) -> Optional[str]:
"""
        ISO 8601 lock duration timespan for the subscription. The default value is 1 minute.
"""
return pulumi.get(self, "lock_duration")
@property
@pulumi.getter(name="maxDeliveryCount")
def max_delivery_count(self) -> Optional[int]:
"""
Number of maximum deliveries.
"""
return pulumi.get(self, "max_delivery_count")
@property
@pulumi.getter(name="messageCount")
def message_count(self) -> float:
"""
Number of messages.
"""
return pulumi.get(self, "message_count")
@property
@pulumi.getter
def name(self) -> str:
"""
Resource name
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="requiresSession")
def requires_session(self) -> Optional[bool]:
"""
Value indicating if a subscription supports the concept of sessions.
"""
return pulumi.get(self, "requires_session")
@property
@pulumi.getter
def status(self) -> Optional[str]:
"""
Enumerates the possible values for the status of a messaging entity.
"""
return pulumi.get(self, "status")
@property
@pulumi.getter(name="systemData")
def system_data(self) -> 'outputs.SystemDataResponse':
"""
The system meta data relating to this resource.
"""
return pulumi.get(self, "system_data")
@property
@pulumi.getter
def type(self) -> str:
"""
Resource type
"""
return pulumi.get(self, "type")
@property
@pulumi.getter(name="updatedAt")
def updated_at(self) -> str:
"""
The exact time the message was updated.
"""
return pulumi.get(self, "updated_at")
class AwaitableGetSubscriptionResult(GetSubscriptionResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetSubscriptionResult(
accessed_at=self.accessed_at,
auto_delete_on_idle=self.auto_delete_on_idle,
client_affine_properties=self.client_affine_properties,
count_details=self.count_details,
created_at=self.created_at,
dead_lettering_on_filter_evaluation_exceptions=self.dead_lettering_on_filter_evaluation_exceptions,
dead_lettering_on_message_expiration=self.dead_lettering_on_message_expiration,
default_message_time_to_live=self.default_message_time_to_live,
duplicate_detection_history_time_window=self.duplicate_detection_history_time_window,
enable_batched_operations=self.enable_batched_operations,
forward_dead_lettered_messages_to=self.forward_dead_lettered_messages_to,
forward_to=self.forward_to,
id=self.id,
is_client_affine=self.is_client_affine,
lock_duration=self.lock_duration,
max_delivery_count=self.max_delivery_count,
message_count=self.message_count,
name=self.name,
requires_session=self.requires_session,
status=self.status,
system_data=self.system_data,
type=self.type,
updated_at=self.updated_at)
def get_subscription(namespace_name: Optional[str] = None,
resource_group_name: Optional[str] = None,
subscription_name: Optional[str] = None,
topic_name: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetSubscriptionResult:
"""
Description of subscription resource.
:param str namespace_name: The namespace name
:param str resource_group_name: Name of the Resource group within the Azure subscription.
:param str subscription_name: The subscription name.
:param str topic_name: The topic name.
"""
__args__ = dict()
__args__['namespaceName'] = namespace_name
__args__['resourceGroupName'] = resource_group_name
__args__['subscriptionName'] = subscription_name
__args__['topicName'] = topic_name
if opts is None:
opts = pulumi.InvokeOptions()
if opts.version is None:
opts.version = _utilities.get_version()
__ret__ = pulumi.runtime.invoke('azure-native:servicebus/v20210601preview:getSubscription', __args__, opts=opts, typ=GetSubscriptionResult).value
return AwaitableGetSubscriptionResult(
accessed_at=__ret__.accessed_at,
auto_delete_on_idle=__ret__.auto_delete_on_idle,
client_affine_properties=__ret__.client_affine_properties,
count_details=__ret__.count_details,
created_at=__ret__.created_at,
dead_lettering_on_filter_evaluation_exceptions=__ret__.dead_lettering_on_filter_evaluation_exceptions,
dead_lettering_on_message_expiration=__ret__.dead_lettering_on_message_expiration,
default_message_time_to_live=__ret__.default_message_time_to_live,
duplicate_detection_history_time_window=__ret__.duplicate_detection_history_time_window,
enable_batched_operations=__ret__.enable_batched_operations,
forward_dead_lettered_messages_to=__ret__.forward_dead_lettered_messages_to,
forward_to=__ret__.forward_to,
id=__ret__.id,
is_client_affine=__ret__.is_client_affine,
lock_duration=__ret__.lock_duration,
max_delivery_count=__ret__.max_delivery_count,
message_count=__ret__.message_count,
name=__ret__.name,
requires_session=__ret__.requires_session,
status=__ret__.status,
system_data=__ret__.system_data,
type=__ret__.type,
updated_at=__ret__.updated_at)
| 45.253482 | 595 | 0.70048 | 1,930 | 16,246 | 5.512435 | 0.118653 | 0.028198 | 0.033838 | 0.064856 | 0.41987 | 0.260551 | 0.193815 | 0.119936 | 0.074161 | 0.068521 | 0 | 0.002276 | 0.215622 | 16,246 | 358 | 596 | 45.379888 | 0.832614 | 0.12243 | 0 | 0.11157 | 1 | 0 | 0.189794 | 0.083492 | 0 | 0 | 0 | 0 | 0 | 1 | 0.107438 | false | 0 | 0.024793 | 0 | 0.243802 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
86e708b4a5fa05856a6c8d0dde3c26f2006621e1 | 4,340 | py | Python | py_cfeve/module/CFAF240400E0-030TN-A1.py | crystalfontz/CFA-EVE-Python-Library | c5aca10b9b6ee109d4df8a9a692dcef083dafc88 | [
"Unlicense"
] | 1 | 2021-12-08T00:12:02.000Z | 2021-12-08T00:12:02.000Z | py_cfeve/module/CFAF240400E0-030TN-A1.py | crystalfontz/CFA-EVE-Python-Library | c5aca10b9b6ee109d4df8a9a692dcef083dafc88 | [
"Unlicense"
] | null | null | null | py_cfeve/module/CFAF240400E0-030TN-A1.py | crystalfontz/CFA-EVE-Python-Library | c5aca10b9b6ee109d4df8a9a692dcef083dafc88 | [
"Unlicense"
] | null | null | null | #===========================================================================
#
# Crystalfontz Raspberry-Pi Python example library for FTDI / BridgeTek
# EVE graphic accelerators.
#
#---------------------------------------------------------------------------
#
# This file is part of the port/adaptation of existing C based EVE libraries
# to Python for Crystalfontz EVE based displays.
#
# 2021-10-20 Mark Williams / Crystalfontz America Inc.
# https://www.crystalfontz.com/products/eve-accelerated-tft-displays.php
#---------------------------------------------------------------------------
#
# This is free and unencumbered software released into the public domain.
# Anyone is free to copy, modify, publish, use, compile, sell, or
# distribute this software, either in source code form or as a compiled
# binary, for any purpose, commercial or non-commercial, and by any
# means.
# In jurisdictions that recognize copyright laws, the author or authors
# of this software dedicate any and all copyright interest in the
# software to the public domain. We make this dedication for the benefit
# of the public at large and to the detriment of our heirs and
# successors. We intend this dedication to be an overt act of
# relinquishment in perpetuity of all present and future rights to this
# software under copyright law.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
# ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.
# For more information, please refer to <http://unlicense.org/>
#
#============================================================================
#EVE Device Type
EVE_DEVICE = 811
# EVE Clock Speed
EVE_CLOCK_SPEED = 60000000
# Touch
TOUCH_RESISTIVE = False
TOUCH_CAPACITIVE = False
TOUCH_GOODIX_CAPACITIVE = False
# Define RGB output pins order, determined by PCB layout
LCD_SWIZZLE = 2
# Define active edge of PCLK. Observed by scope:
# 0: Data is put out coincident with falling edge of the clock.
# Rising edge of the clock is in the middle of the data.
# 1: Data is put out coincident with rising edge of the clock.
# Falling edge of the clock is in the middle of the data.
LCD_PCLKPOL = 0
# LCD drive strength: 0=5mA, 1=10mA
LCD_DRIVE_10MA = 0
# Spread Spectrum on RGB signals. Probably not a good idea at higher
# PCLK frequencies.
LCD_PCLK_CSPREAD = 0
#This is not a 24-bit display, so dither
LCD_DITHER = 0
# Pixel clock divisor
LCD_PCLK = 5
#----------------------------------------------------------------------------
# Frame_Rate = 60Hz / 16.7mS
#----------------------------------------------------------------------------
# Horizontal timing
# Target 60Hz frame rate, using the largest possible line time in order to
# maximize the time that the EVE has to process each line.
HPX = 240 # Horizontal Pixel Width
HSW = 10 # Horizontal Sync Width
HBP = 20 # Horizontal Back Porch
HFP = 10 # Horizontal Front Porch
HPP = 209 # Horizontal Pixel Padding
# FTDI needs at least 1 here
# Define the constants needed by the EVE based on the timing
# Active width of LCD display
LCD_WIDTH = HPX
# Start of horizontal sync pulse
LCD_HSYNC0 = HFP
# End of horizontal sync pulse
LCD_HSYNC1 = HFP+HSW
# Start of active line
LCD_HOFFSET = HFP+HSW+HBP
# Total number of clocks per line
LCD_HCYCLE = HPX+HFP+HSW+HBP+HPP
#----------------------------------------------------------------------------
# Vertical timing
VLH = 400 # Vertical Line Height
VS = 2 # Vertical Sync (in lines)
VBP = 2 # Vertical Back Porch
VFP = 4 # Vertical Front Porch
VLP = 1 # Vertical Line Padding
# FTDI needs at least 1 here
# Define the constants needed by the EVE based on the timing
# Active height of LCD display
LCD_HEIGHT = VLH
# Start of vertical sync pulse
LCD_VSYNC0 = VFP
# End of vertical sync pulse
LCD_VSYNC1 = VFP+VS
# Start of active screen
LCD_VOFFSET = VFP+VS+VBP
# Total number of lines per screen
LCD_VCYCLE = VLH+VFP+VS+VBP+VLP | 38.070175 | 78 | 0.645392 | 605 | 4,340 | 4.586777 | 0.436364 | 0.014414 | 0.012973 | 0.02018 | 0.156396 | 0.103784 | 0.085045 | 0.085045 | 0.085045 | 0.085045 | 0 | 0.019225 | 0.185023 | 4,340 | 114 | 79 | 38.070175 | 0.765338 | 0.807373 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
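The comments above target a 60 Hz frame rate; a quick standalone sanity check (illustrative only, reusing the same constant values) confirms the horizontal and vertical timing actually land there:

```python
# Sanity check: frame rate = pixel clock / (clocks per line * lines per frame).
EVE_CLOCK_SPEED = 60000000          # main clock, Hz
LCD_PCLK = 5                        # pixel clock divisor
HCYCLE = 240 + 10 + 20 + 10 + 209   # HPX + HSW + HBP + HFP + HPP
VCYCLE = 400 + 2 + 2 + 4 + 1        # VLH + VS + VBP + VFP + VLP

pixel_clock_hz = EVE_CLOCK_SPEED / LCD_PCLK
frame_rate_hz = pixel_clock_hz / (HCYCLE * VCYCLE)
print(round(frame_rate_hz, 2))      # → 60.0
```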
86ecf632226839fdabd506f9f83e9358864f79de | 4,447 | py | Python | annotation_gui_gcp/orthophoto_view.py | lioncorpo/sfm.lion-judge-corporation | 95fb11bff263c3faab62269cc907eec18b527e22 | [
"BSD-2-Clause"
] | 1 | 2019-05-31T13:50:41.000Z | 2019-05-31T13:50:41.000Z | annotation_gui_gcp/orthophoto_view.py | Pandinosaurus/OpenSfM | b892ba9fd5e7fd6c7a9e3c81edddca80f71c1cd5 | [
"BSD-2-Clause"
] | null | null | null | annotation_gui_gcp/orthophoto_view.py | Pandinosaurus/OpenSfM | b892ba9fd5e7fd6c7a9e3c81edddca80f71c1cd5 | [
"BSD-2-Clause"
] | 2 | 2017-03-31T16:54:34.000Z | 2018-07-10T11:32:22.000Z | from typing import Tuple
import numpy as np
import rasterio.warp
from opensfm import features
from .orthophoto_manager import OrthoPhotoManager
from .view import View
class OrthoPhotoView(View):
def __init__(
self,
main_ui,
path: str,
init_lat: float,
init_lon: float,
is_geo_reference: bool = False,
):
        """View over a folder of georeferenced orthophotos.

        Args:
            main_ui (GUI.Gui)
            path (str): path containing geotiffs
        """
self.image_manager = OrthoPhotoManager(path, 100.0)
self.images_in_list = self.image_manager.image_keys
self.zoom_window_size_px = 500
self.is_geo_reference = is_geo_reference
self.size = 50 # TODO add widget for zoom level
super(OrthoPhotoView, self).__init__(main_ui, False)
self.refocus(init_lat, init_lon)
self.populate_image_list()
if self.images_in_list:
self.bring_new_image(self.images_in_list[0])
self.set_title()
def get_image(self, new_image):
crop, image_window, geot = self.image_manager.read_image_around_latlon(
new_image, self.center_lat, self.center_lon, self.size
)
self.image_window = image_window
self.geot = geot
return crop
def get_candidate_images(self):
return self.image_manager.get_candidate_images(
self.center_lat, self.center_lon, self.size
)
def pixel_to_latlon(self, x: float, y: float):
"""
From pixels (in the viewing window) to latlon
"""
if not self.is_geo_reference:
return None
# Pixel to whatever crs the image is in
# pyre-fixme[16]: `OrthoPhotoView` has no attribute `geot`.
x, y = self.geot.xy(y, x)
# And then to WSG84 (lat/lon)
lons, lats = rasterio.warp.transform(self.geot.crs, "EPSG:4326", [x], [y])
return lats[0], lons[0]
def gcp_to_pixel_coordinates(self, x: float, y: float) -> Tuple[float, float]:
"""
Transforms from normalized coordinates (in the whole geotiff) to
pixels (in the viewing window)
"""
h, w = self.image_manager.get_image_size(self.current_image)
px = features.denormalized_image_coordinates(np.array([[x, y]]), w, h)[0]
# pyre-fixme[16]: `OrthoPhotoView` has no attribute `image_window`.
x = px[0] - self.image_window.col_off
y = px[1] - self.image_window.row_off
        return (x, y)
def pixel_to_gcp_coordinates(self, x: float, y: float) -> Tuple[float, float]:
"""
Transforms from pixels (in the viewing window) to normalized coordinates
(in the whole geotiff)
"""
# pyre-fixme[16]: `OrthoPhotoView` has no attribute `image_window`.
x += self.image_window.col_off
y += self.image_window.row_off
h, w = self.image_manager.get_image_size(self.current_image)
coords = features.normalized_image_coordinates(np.array([[x, y]]), w, h)[0]
return coords.tolist()
def refocus(self, lat, lon):
self.center_lat = lat
self.center_lon = lon
self.populate_image_list()
if self.images_in_list:
if self.current_image not in self.images_in_list:
self.bring_new_image(self.images_in_list[0])
else:
self.bring_new_image(self.current_image)
self.set_title()
def bring_new_image(self, new_image):
super(OrthoPhotoView, self).bring_new_image(new_image, force=True)
xlim = self.ax.get_xlim()
ylim = self.ax.get_ylim()
artists = self.ax.plot(np.mean(xlim), np.mean(ylim), "rx")
self.plt_artists.extend(artists)
self.canvas.draw_idle()
def set_title(self):
lat, lon = self.center_lat, self.center_lon
if self.images_in_list:
t = "Images covering lat:{:.4f}, lon:{:.4f}".format(lat, lon)
shot = self.current_image
seq_ix = self.images_in_list.index(shot)
title = f"{t} [{seq_ix+1}/{len(self.images_in_list)}]: {shot}"
else:
title = f"No orthophotos around {lat}, {lon}"
self.current_image = None
self.ax.clear()
self.ax.axis("off")
self.canvas.draw_idle()
self.window.title(title)
| 35.293651 | 83 | 0.613672 | 597 | 4,447 | 4.351759 | 0.247906 | 0.038106 | 0.04157 | 0.055427 | 0.389915 | 0.335643 | 0.266744 | 0.230177 | 0.204003 | 0.182448 | 0 | 0.010274 | 0.277715 | 4,447 | 125 | 84 | 35.576 | 0.798568 | 0.152687 | 0 | 0.174419 | 0 | 0 | 0.037658 | 0.010995 | 0 | 0 | 0 | 0.032 | 0 | 1 | 0.104651 | false | 0 | 0.069767 | 0.011628 | 0.255814 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
86ef4e909fe2cea39d77e8fe80f71f1e8cdcd676 | 1,844 | py | Python | main.py | Light-Lens/PassGen | 8f4f2ef08299d6243b939d0f08ac75bde3cabf5e | [
"MIT"
] | 3 | 2021-07-19T16:39:06.000Z | 2021-11-08T11:53:50.000Z | main.py | Light-Lens/PassGen | 8f4f2ef08299d6243b939d0f08ac75bde3cabf5e | [
"MIT"
] | null | null | null | main.py | Light-Lens/PassGen | 8f4f2ef08299d6243b939d0f08ac75bde3cabf5e | [
"MIT"
] | null | null | null | # PassGen
# These imports will be used for this project.
from colorama import Fore, Style
from colorama import init
import datetime
import string
import random
import sys
import os
# Initialize PassGen.
os.system('title PassGen')
init(autoreset = True)
# Create Log Functions.
class LOG:
def INFO_LOG(message):
CurrentTime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
print(f"{CurrentTime} - INFO: {message}")
def STATUS_LOG(message):
CurrentTime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
print(f"{CurrentTime} - STATUS: {message}")
def ERROR_LOG(message):
CurrentTime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
print(Fore.RED + Style.BRIGHT + f"{CurrentTime} - ERROR: {message}")
def WARN_LOG(message):
CurrentTime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
print(Fore.YELLOW + Style.BRIGHT + f"{CurrentTime} - WARNING: {message}")
# This will Generate a Strong Password for the User!
def Generate(PassLen):
JoinChars = [] # Create an Empty List.
    # Split these string constants into characters and add them to the JoinChars list.
JoinChars.extend(list(string.ascii_letters))
JoinChars.extend(list(string.digits))
JoinChars.extend(list(string.punctuation))
random.shuffle(JoinChars) # Shuffle the List.
    # Get the random password.
return "".join(JoinChars[0:PassLen])
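`random.shuffle` draws from a non-cryptographic PRNG; for real password generation the standard-library `secrets` module is the usual choice. A minimal alternative sketch (standalone, not part of this script — the function name is illustrative):

```python
import secrets
import string

def generate_secure(pass_len):
    # Same character pool as Generate(), but each character is drawn with a CSPRNG.
    pool = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(pool) for _ in range(pass_len))

pw = generate_secure(12)
print(len(pw))  # → 12
```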
# Code Logic here.
LOG.WARN_LOG("Initialized PassGen!")
LOG.STATUS_LOG("Generating a Random Password for You.")
Password = Generate(random.randint(5, 17))
LOG.INFO_LOG(f"Your Password is: {Password}")
with open("Password.log", "a") as File: File.write(f"{Password}\n")
if (len(sys.argv) == 1) or (len(sys.argv) > 1 and sys.argv[1].lower() != "-o"):
os.system("start Password.log")
sys.exit() # Exiting the program successfully.
| 32.350877 | 80 | 0.691432 | 262 | 1,844 | 4.835878 | 0.408397 | 0.031571 | 0.066298 | 0.091555 | 0.211523 | 0.211523 | 0.211523 | 0.211523 | 0.211523 | 0.211523 | 0 | 0.004487 | 0.154013 | 1,844 | 56 | 81 | 32.928571 | 0.807692 | 0.186551 | 0 | 0.108108 | 0 | 0 | 0.238128 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.135135 | false | 0.243243 | 0.189189 | 0 | 0.378378 | 0.108108 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
86f0c522be62919400b4d5f2f8a78d4b4a38dcb9 | 399 | py | Python | scripts/make_gene_table.py | lmdu/bioinfo | 4542b0718410d15f3956c6545d9824a16608e02b | [
"MIT"
] | null | null | null | scripts/make_gene_table.py | lmdu/bioinfo | 4542b0718410d15f3956c6545d9824a16608e02b | [
"MIT"
] | null | null | null | scripts/make_gene_table.py | lmdu/bioinfo | 4542b0718410d15f3956c6545d9824a16608e02b | [
"MIT"
] | null | null | null | #!/usr/bin/env python
descripts = {}
with open('macaca_genes.txt') as fh:
fh.readline()
for line in fh:
cols = line.strip('\n').split('\t')
if cols[1]:
descripts[cols[0]] = cols[1].split('[')[0].strip()
else:
descripts[cols[0]] = cols[1]
with open('gene_info.txt') as fh:
for line in fh:
cols = line.strip().split('\t')
cols.append(descripts[cols[1]])
		print("\t".join(cols))
| 19.95 | 53 | 0.611529 | 66 | 399 | 3.666667 | 0.454545 | 0.082645 | 0.057851 | 0.090909 | 0.355372 | 0.198347 | 0.198347 | 0 | 0 | 0 | 0 | 0.020958 | 0.162907 | 399 | 19 | 54 | 21 | 0.703593 | 0.050125 | 0 | 0.142857 | 0 | 0 | 0.100529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
86f16e97aa13775a35aa0ced03caeac309db0c51 | 4,627 | py | Python | {{cookiecutter.repo_name}}/src/mix_with_scaper.py | nussl/cookiecutter | 5df8512592778ea7155b05e3e4b54676227968b0 | [
"MIT"
] | null | null | null | {{cookiecutter.repo_name}}/src/mix_with_scaper.py | nussl/cookiecutter | 5df8512592778ea7155b05e3e4b54676227968b0 | [
"MIT"
] | null | null | null | {{cookiecutter.repo_name}}/src/mix_with_scaper.py | nussl/cookiecutter | 5df8512592778ea7155b05e3e4b54676227968b0 | [
"MIT"
] | null | null | null | import gin
from scaper import Scaper, generate_from_jams
import copy
import logging
import p_tqdm
import nussl
import os
import numpy as np
def _reset_event_spec(sc):
sc.reset_fg_event_spec()
sc.reset_bg_event_spec()
def check_mixture(path_to_mix):
mix_signal = nussl.AudioSignal(path_to_mix)
if mix_signal.rms() < .01:
return False
return True
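`check_mixture` rejects near-silent renders by thresholding the signal's RMS at 0.01; the underlying computation is just the square root of the mean squared sample value. A standalone NumPy illustration on hypothetical signals (independent of nussl):

```python
import numpy as np

def rms(x):
    # Root-mean-square amplitude of a 1-D signal.
    return np.sqrt(np.mean(x ** 2))

t = np.linspace(0, 1, 44100, endpoint=False)
quiet = 0.001 * np.sin(2 * np.pi * 440 * t)  # would fail the RMS >= 0.01 check
loud = 0.5 * np.sin(2 * np.pi * 440 * t)     # passes it
print(rms(quiet) < 0.01, rms(loud) >= 0.01)  # → True True
```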
def make_one_mixture(sc, path_to_file, num_sources,
event_parameters, allow_repeated_label):
"""
Creates a single mixture, incoherent. Instantiates according to
the event parameters for each source.
"""
check = False
while not check:
for j in range(num_sources):
sc.add_event(**event_parameters)
sc.generate(
path_to_file,
path_to_file.replace('.wav', '.jams'),
no_audio=False,
allow_repeated_label=allow_repeated_label,
save_isolated_events=True,
)
_reset_event_spec(sc)
check = check_mixture(path_to_file)
def instantiate_and_get_event_spec(sc, master_label, event_parameters):
_reset_event_spec(sc)
_event_parameters = copy.deepcopy(event_parameters)
_event_parameters['label'] = ('const', master_label)
sc.add_event(**_event_parameters)
event = sc._instantiate_event(sc.fg_spec[-1])
_reset_event_spec(sc)
return sc, event
def make_one_mixture_coherent(sc, path_to_file, labels, event_parameters,
allow_repeated_label):
check = False
while not check:
sc, event = instantiate_and_get_event_spec(
sc, labels[0], event_parameters)
for label in labels:
try:
sc.add_event(
label=('const', label),
source_file=('const', event.source_file.replace(labels[0], label)),
source_time=('const', event.source_time),
event_time=('const', 0),
event_duration=('const', sc.duration),
snr=event_parameters['snr'],
pitch_shift=('const', event.pitch_shift),
time_stretch=('const', event.time_stretch)
)
            except Exception:
                logging.exception(
                    f"Got an error for {label} @ {event.source_file}. Moving on...")
sc.generate(
path_to_file,
path_to_file.replace('.wav', '.jams'),
no_audio=False,
allow_repeated_label=allow_repeated_label,
save_isolated_events=True,
)
sc.fg_spec = []
check = check_mixture(path_to_file)
@gin.configurable
def make_scaper_datasets(scopes=['train', 'val']):
for scope in scopes:
with gin.config_scope(scope):
mix_with_scaper()
@gin.configurable
def mix_with_scaper(num_mixtures, foreground_path, background_path,
scene_duration, sample_rate, target_folder,
event_parameters, num_sources=None, labels=None,
coherent=False, allow_repeated_label=False,
ref_db=-40, bitdepth=16, seed=0, num_workers=1):
nussl.utils.seed(seed)
os.makedirs(target_folder, exist_ok=True)
scaper_seed = np.random.randint(100)
logging.info('Starting mixing.')
if num_sources is None and labels is None:
raise ValueError("One of labels or num_sources must be set!")
if coherent and labels is None:
raise ValueError("Coherent mixing requires explicit labels!")
generators = []
if background_path is None:
background_path = foreground_path
for i in range(num_mixtures):
sc = Scaper(
scene_duration,
fg_path=foreground_path,
bg_path=background_path,
random_state=scaper_seed,
)
sc.ref_db = ref_db
sc.sr = sample_rate
sc.bitdepth = bitdepth
generators.append(sc)
scaper_seed += 1
mix_func = make_one_mixture_coherent if coherent else make_one_mixture
def arg_tuple(i):
_args = (
generators[i],
os.path.join(target_folder, f'{i:08d}.wav'),
labels if coherent else num_sources,
event_parameters,
allow_repeated_label
)
return _args
args = [arg_tuple(i) for i in range(num_mixtures)]
# do one by itself for testing
mix_func(*args[0])
args = list(zip(*args[1:]))
args = [list(a) for a in args]
# now do the rest in parallel
p_tqdm.p_map(mix_func, *args, num_cpus=num_workers)
| 31.691781 | 87 | 0.612708 | 573 | 4,627 | 4.643979 | 0.277487 | 0.073281 | 0.030064 | 0.024051 | 0.2469 | 0.198422 | 0.118001 | 0.085682 | 0.085682 | 0.085682 | 0 | 0.006163 | 0.298682 | 4,627 | 145 | 88 | 31.910345 | 0.813867 | 0.034364 | 0 | 0.194915 | 1 | 0 | 0.053519 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067797 | false | 0 | 0.067797 | 0 | 0.169492 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
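The end of `mix_with_scaper` builds one argument tuple per mixture and then transposes the list with `zip(*args[1:])` because `p_tqdm.p_map(f, xs, ys, ...)` expects one iterable per positional parameter rather than a list of call tuples. A minimal, dependency-free sketch of that transposition pattern (names here are illustrative, not from the original code):

```python
# Sketch of the per-call -> per-parameter transposition done before p_map.
# p_map(f, xs, ys) calls f(x, y) pairwise, so the list of call tuples
# [(a1, b1), (a2, b2), ...] must become [[a1, a2, ...], [b1, b2, ...]].
def transpose_args(arg_tuples):
    """Turn call tuples into per-parameter columns."""
    return [list(column) for column in zip(*arg_tuples)]

calls = [(1, 'x'), (2, 'y'), (3, 'z')]
columns = transpose_args(calls)
```

This mirrors the `args = list(zip(*args[1:]))` / `args = [list(a) for a in args]` pair in the function above.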
86f1999802fa178d60effb8dd046d22fbb0dd814 | 4,643 | py | Python | adventure-cards/package/main.py | DaneRosa/adventure-cards | 0685feeec8b56627795e685ff4fffad187881e1c | [
"MIT"
] | null | null | null | adventure-cards/package/main.py | DaneRosa/adventure-cards | 0685feeec8b56627795e685ff4fffad187881e1c | [
"MIT"
] | null | null | null | adventure-cards/package/main.py | DaneRosa/adventure-cards | 0685feeec8b56627795e685ff4fffad187881e1c | [
"MIT"
] | null | null | null | import json
def hydrateCards(rawDeckDataPath):
pack = []
rawDeckData = json.load(open(rawDeckDataPath,))
for index, item in enumerate(rawDeckData):
deck = []
# print(index,item)
for i in rawDeckData[item]:
card ={
f'{index}':
{
"name": "",
"type": "",
"level": None,
"spell_name": "",
"creature_name": "",
"artifact_name": "",
"enchantment_name": "",
"spell_magnifier": "",
"spell_type": "",
"name_modifier": "",
"creature_modifier": "",
"mythic_creature_modifier": "",
"location": "",
"mythic_location": ""
}
}
nameSplit = i[0].split()
card[f'{index}']['name'] = i[0]
card[f'{index}']['type']= i[1]
card[f'{index}']['level']=i[2]
if i[1] == 'spell':
if len(nameSplit) == 1:
card[f'{index}']['spell_name']= i[0]
elif len(nameSplit) == 2:
card[f'{index}']['spell_type']= nameSplit[0]
card[f'{index}']['spell_name']= nameSplit[1]
elif len(nameSplit) == 3:
card[f'{index}']['spell_magnifier']=nameSplit[0]
card[f'{index}']['spell_type']=nameSplit[1]
card[f'{index}']['spell_name']=nameSplit[2]
elif i[1] == 'artifact':
if 'Divine Robe' in i[0] or 'Ghost Wand' in i[0]:
if 'Divine Robe' in i[0]:
i[0] = i[0].replace('Divine Robe', 'DivineRobe')
if 'Ghost Wand' in i[0]:
i[0] = i[0].replace('Ghost Wand', 'GhostWand')
nameSplit = i[0].split()
card[f'{index}']['name'] = i[0]
if len(nameSplit) == 1:
card[f'{index}']['artifact_name']= i[0]
elif len(nameSplit) == 2:
card[f'{index}']['artifact_name']= nameSplit[1]
card[f'{index}']['spell_type']= nameSplit[0]
elif len(nameSplit) == 3:
card[f'{index}']['artifact_name']= nameSplit[2]
card[f'{index}']['spell_magnifier']= nameSplit[0]
card[f'{index}']['spell_type']= nameSplit[1]
elif i[1] == 'enchantment':
if len(nameSplit) == 1:
card[f'{index}']['enchantment_name']= i[0]
if len(nameSplit) == 2:
card[f'{index}']['enchantment_name']= nameSplit[1]
card[f'{index}']['spell_type']= nameSplit[0]
if len(nameSplit) == 3:
card[f'{index}']['enchantment_name']=nameSplit[2]
card[f'{index}']['spell_type']=nameSplit[1]
card[f'{index}']['spell_magnifier']=nameSplit[0]
elif i[1] == 'monster':
card[f'{index}']['type']= 'creature'
if len(nameSplit) == 1:
card[f'{index}']['creature_name']= nameSplit[0]
if len(nameSplit) == 3:
card[f'{index}']['creature_name']= nameSplit[2]
card[f'{index}']['creature_modifier']= nameSplit[1]
card[f'{index}']['name_modifier']= nameSplit[0]
if len(nameSplit) > 3:
keyword = 'of'
before_keyword, keyword, after_keyword = i[0].partition(keyword)
if i[2] == 2:
card[f'{index}']['creature_name']= nameSplit[2]
card[f'{index}']['creature_modifier']= nameSplit[1]
card[f'{index}']['name_modifier']= nameSplit[0]
card[f'{index}']['location']= keyword + after_keyword
elif i[2] == 3:
card[f'{index}']['creature_name']= nameSplit[2]
card[f'{index}']['mythic_creature_modifier']= nameSplit[1]
card[f'{index}']['name_modifier']= nameSplit[0]
card[f'{index}']['mythic_location']= keyword + after_keyword
deck.append(card[f'{index}'])
index +=1
if len(deck) == 45:
break
pack.append(deck)
return(pack) | 48.873684 | 92 | 0.420633 | 443 | 4,643 | 4.306998 | 0.130926 | 0.09696 | 0.19392 | 0.069182 | 0.626834 | 0.604822 | 0.535115 | 0.43501 | 0.404612 | 0.378931 | 0 | 0.024345 | 0.41611 | 4,643 | 95 | 93 | 48.873684 | 0.679454 | 0.003661 | 0 | 0.329787 | 0 | 0 | 0.208432 | 0.010378 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010638 | false | 0 | 0.010638 | 0 | 0.021277 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
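`hydrateCards` dispatches on the token count of the card name: one token is the bare name, two tokens are type plus name, three tokens are magnifier plus type plus name. A standalone re-statement of that rule for the spell branch (the dict keys match the card template above; the function name is mine):

```python
# Illustrative restatement of the token-count rule applied to spell names:
# 1 token  -> spell_name
# 2 tokens -> spell_type, spell_name
# 3 tokens -> spell_magnifier, spell_type, spell_name
def split_spell(name):
    parts = name.split()
    card = {"spell_magnifier": "", "spell_type": "", "spell_name": ""}
    if len(parts) == 1:
        card["spell_name"] = parts[0]
    elif len(parts) == 2:
        card["spell_type"], card["spell_name"] = parts
    elif len(parts) == 3:
        (card["spell_magnifier"],
         card["spell_type"],
         card["spell_name"]) = parts
    return card
```

The artifact and enchantment branches follow the same positional scheme, which is why multi-word names like "Divine Robe" are first collapsed to a single token.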
86f3b70aa2f3c882bdd1f178d6aa80fab1793aab | 5,898 | py | Python | pmdarima/preprocessing/endog/boxcox.py | tuomijal/pmdarima | 5bf84a2a5c42b81b949bd252ad3d4c6c311343f8 | [
"MIT"
] | 736 | 2019-12-02T01:33:31.000Z | 2022-03-31T21:45:29.000Z | pmdarima/preprocessing/endog/boxcox.py | tuomijal/pmdarima | 5bf84a2a5c42b81b949bd252ad3d4c6c311343f8 | [
"MIT"
] | 186 | 2019-12-01T18:01:33.000Z | 2022-03-31T18:27:56.000Z | pmdarima/preprocessing/endog/boxcox.py | tuomijal/pmdarima | 5bf84a2a5c42b81b949bd252ad3d4c6c311343f8 | [
"MIT"
] | 126 | 2019-12-07T04:03:19.000Z | 2022-03-31T17:40:14.000Z | # -*- coding: utf-8 -*-
from scipy import stats
import numpy as np
import warnings
from ...compat import check_is_fitted, pmdarima as pm_compat
from .base import BaseEndogTransformer
__all__ = ['BoxCoxEndogTransformer']
class BoxCoxEndogTransformer(BaseEndogTransformer):
r"""Apply the Box-Cox transformation to an endogenous array
The Box-Cox transformation is applied to non-normal data to coerce it more
towards a normal distribution. It's specified as::
(((y + lam2) ** lam1) - 1) / lam1, if lmbda != 0, else
log(y + lam2)
Parameters
----------
lmbda : float or None, optional (default=None)
The lambda value for the Box-Cox transformation, if known. If not
specified, it will be estimated via MLE.
lmbda2 : float, optional (default=0.)
The value to add to ``y`` to make it non-negative. If, after adding
``lmbda2``, there are still negative values, a ValueError will be
raised.
neg_action : str, optional (default="raise")
How to respond if any values in ``y <= 0`` after adding ``lmbda2``.
One of ('raise', 'warn', 'ignore'). If anything other than 'raise',
values <= 0 will be truncated to the value of ``floor``.
floor : float, optional (default=1e-16)
A positive value that truncate values to if there are values in ``y``
that are zero or negative and ``neg_action`` is not 'raise'. Note that
if values are truncated, invertibility will not be preserved, and the
transformed array may not be perfectly inverse-transformed.
"""
def __init__(self, lmbda=None, lmbda2=0, neg_action="raise", floor=1e-16):
self.lmbda = lmbda
self.lmbda2 = lmbda2
self.neg_action = neg_action
self.floor = floor
def fit(self, y, X=None, **kwargs): # TODO: kwargs go away
"""Fit the transformer
Learns the value of ``lmbda``, if not specified in the constructor.
If defined in the constructor, is not re-learned.
Parameters
----------
y : array-like or None, shape=(n_samples,)
The endogenous (time-series) array.
X : array-like or None, shape=(n_samples, n_features), optional
The exogenous array of additional covariates. Not used for
endogenous transformers. Default is None, and non-None values will
serve as pass-through arrays.
"""
lam1 = self.lmbda
lam2 = self.lmbda2
# Temporary shim until we remove `exogenous` support completely
X, _ = pm_compat.get_X(X, **kwargs)
if lam2 < 0:
raise ValueError("lmbda2 must be a non-negative scalar value")
if lam1 is None:
y, _ = self._check_y_X(y, X)
_, lam1 = stats.boxcox(y + lam2, lmbda=None, alpha=None)
self.lam1_ = lam1
self.lam2_ = lam2
return self
def transform(self, y, X=None, **kwargs):
"""Transform the new array
Apply the Box-Cox transformation to the array after learning the
lambda parameter.
Parameters
----------
y : array-like or None, shape=(n_samples,)
The endogenous (time-series) array.
X : array-like or None, shape=(n_samples, n_features), optional
The exogenous array of additional covariates. Not used for
endogenous transformers. Default is None, and non-None values will
serve as pass-through arrays.
Returns
-------
y_transform : array-like or None
The Box-Cox transformed y array
X : array-like or None
The X array
"""
check_is_fitted(self, "lam1_")
# Temporary shim until we remove `exogenous` support completely
X, _ = pm_compat.get_X(X, **kwargs)
lam1 = self.lam1_
lam2 = self.lam2_
y, exog = self._check_y_X(y, X)
y += lam2
neg_mask = y <= 0.
if neg_mask.any():
action = self.neg_action
msg = "Negative or zero values present in y"
if action == "raise":
raise ValueError(msg)
elif action == "warn":
warnings.warn(msg, UserWarning)
y[neg_mask] = self.floor
if lam1 == 0:
return np.log(y), exog
return (y ** lam1 - 1) / lam1, exog
def inverse_transform(self, y, X=None, **kwargs): # TODO: kwargs go away
"""Inverse transform a transformed array
Inverse the Box-Cox transformation on the transformed array. Note that
if truncation happened in the ``transform`` method, invertibility will
not be preserved, and the transformed array may not be perfectly
inverse-transformed.
Parameters
----------
y : array-like or None, shape=(n_samples,)
The transformed endogenous (time-series) array.
X : array-like or None, shape=(n_samples, n_features), optional
The exogenous array of additional covariates. Not used for
endogenous transformers. Default is None, and non-None values will
serve as pass-through arrays.
Returns
-------
y : array-like or None
The inverse-transformed y array
X : array-like or None
The inverse-transformed X array
"""
check_is_fitted(self, "lam1_")
# Temporary shim until we remove `exogenous` support completely
X, _ = pm_compat.get_X(X, **kwargs)
lam1 = self.lam1_
lam2 = self.lam2_
y, exog = self._check_y_X(y, X)
if lam1 == 0:
return np.exp(y) - lam2, exog
numer = y * lam1 # remove denominator
numer += 1. # add 1 back to it
de_exp = numer ** (1. / lam1) # de-exponentiate
return de_exp - lam2, exog
| 33.511364 | 78 | 0.598847 | 764 | 5,898 | 4.537958 | 0.231675 | 0.019037 | 0.031728 | 0.043265 | 0.462359 | 0.448515 | 0.417652 | 0.40525 | 0.40525 | 0.366023 | 0 | 0.015207 | 0.308749 | 5,898 | 175 | 79 | 33.702857 | 0.835173 | 0.56375 | 0 | 0.232143 | 0 | 0 | 0.059846 | 0.010618 | 0 | 0 | 0 | 0.011429 | 0 | 1 | 0.071429 | false | 0 | 0.089286 | 0 | 0.267857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
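The transform pair in `BoxCoxEndogTransformer` reduces to a closed form once lambda is known. A dependency-free sketch of just that math (no MLE fitting, no truncation handling — those stay in the class above):

```python
# Minimal sketch of the Box-Cox forward/inverse pair for a known lambda:
#   (((y + lam2) ** lam1) - 1) / lam1   if lam1 != 0
#   log(y + lam2)                       otherwise
import math

def boxcox(y, lam1, lam2=0.0):
    shifted = y + lam2
    if lam1 == 0:
        return math.log(shifted)
    return (shifted ** lam1 - 1.0) / lam1

def inv_boxcox(z, lam1, lam2=0.0):
    # Undo the scaling, re-exponentiate, then remove the shift.
    if lam1 == 0:
        return math.exp(z) - lam2
    return (z * lam1 + 1.0) ** (1.0 / lam1) - lam2
```

Round-tripping a positive value through `boxcox` and `inv_boxcox` recovers it exactly, which is the invertibility property the class docstring warns is lost when values are floored.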
86f77106c10502d0d37cea800a21ab20ce83f638 | 1,218 | py | Python | meregistro/apps/registro/models/EstablecimientoDomicilio.py | MERegistro/meregistro | 6cde3cab2bd1a8e3084fa38147de377d229391e3 | [
"BSD-3-Clause"
] | null | null | null | meregistro/apps/registro/models/EstablecimientoDomicilio.py | MERegistro/meregistro | 6cde3cab2bd1a8e3084fa38147de377d229391e3 | [
"BSD-3-Clause"
] | null | null | null | meregistro/apps/registro/models/EstablecimientoDomicilio.py | MERegistro/meregistro | 6cde3cab2bd1a8e3084fa38147de377d229391e3 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
from django.db import models
from apps.registro.models.TipoDomicilio import TipoDomicilio
from apps.registro.models.Localidad import Localidad
from apps.registro.models.Establecimiento import Establecimiento
from django.core.exceptions import ValidationError
from apps.seguridad.audit import audit
@audit
class EstablecimientoDomicilio(models.Model):
TIPO_POSTAL = 'Postal'
TIPO_INSTITUCIONAL = 'Institucional'
establecimiento = models.ForeignKey(Establecimiento, related_name='domicilios')
tipo_domicilio = models.ForeignKey(TipoDomicilio)
localidad = models.ForeignKey(Localidad, related_name='domicilios_establecimientos')
calle = models.CharField(max_length=100)
altura = models.CharField(max_length=15)
referencia = models.CharField(max_length=255, null=True, blank=True)
cp = models.CharField(max_length=20)
class Meta:
app_label = 'registro'
db_table = 'registro_establecimiento_domicilio'
def __unicode__(self):
if self.cp:
cp = " (CP: " + self.cp + ")"
else:
cp = ""
return "%s %s - %s %s" % (self.calle, self.altura, self.localidad.nombre, cp)
def __init__(self, *args, **kwargs):
super(EstablecimientoDomicilio, self).__init__(*args, **kwargs)
| 32.918919 | 85 | 0.76601 | 146 | 1,218 | 6.212329 | 0.410959 | 0.035281 | 0.079383 | 0.105843 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010252 | 0.119048 | 1,218 | 36 | 86 | 33.833333 | 0.835042 | 0.017241 | 0 | 0 | 0 | 0 | 0.098745 | 0.051046 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.214286 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
86f7b299e6e411fb0020928642f34720d9448cf2 | 301 | py | Python | python_for_everybody/py2_p4i_old/6.5findslicestringextract.py | timothyyu/p4e-prac | f978b71ce147b6e9058372929f2666c2e67d0741 | [
"BSD-3-Clause"
] | null | null | null | python_for_everybody/py2_p4i_old/6.5findslicestringextract.py | timothyyu/p4e-prac | f978b71ce147b6e9058372929f2666c2e67d0741 | [
"BSD-3-Clause"
] | null | null | null | python_for_everybody/py2_p4i_old/6.5findslicestringextract.py | timothyyu/p4e-prac | f978b71ce147b6e9058372929f2666c2e67d0741 | [
"BSD-3-Clause"
] | 1 | 2020-04-18T16:09:04.000Z | 2020-04-18T16:09:04.000Z | # 6.5 Write code using find() and string slicing (see section 6.10) to extract
# the number at the end of the line below.
# Convert the extracted value to a floating point number and print it out.
text = "X-DSPAM-Confidence: 0.8475";
pos = text.find(':')
text = float(text[pos+1:])
print text | 27.363636 | 79 | 0.697674 | 53 | 301 | 3.962264 | 0.735849 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045267 | 0.192691 | 301 | 11 | 80 | 27.363636 | 0.81893 | 0.637874 | 0 | 0 | 0 | 0 | 0.283019 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
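The snippet above is Python 2 (`print text`). The same `find()`-and-slice extraction in Python 3 syntax, assuming the same "key: value" line layout where the number follows the first colon:

```python
# Python 3 version of the find()-and-slice extraction: locate the colon,
# slice everything after it, and convert to float (leading spaces are fine,
# since float() strips surrounding whitespace).
text = "X-DSPAM-Confidence: 0.8475"
pos = text.find(':')
value = float(text[pos + 1:])
```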
86f9d5c800a3592d64ffbc26d845ced72a00288c | 4,005 | py | Python | src/python/pants/backend/android/tasks/aapt_builder.py | hythloday/pants | 107e9b0957f6949ac4bd535fbef8d2d8cba05c5c | [
"Apache-2.0"
] | 11 | 2015-01-20T01:39:41.000Z | 2019-08-08T07:27:44.000Z | src/python/pants/backend/android/tasks/aapt_builder.py | hythloday/pants | 107e9b0957f6949ac4bd535fbef8d2d8cba05c5c | [
"Apache-2.0"
] | 1 | 2016-03-15T20:35:18.000Z | 2016-03-15T20:35:18.000Z | src/python/pants/backend/android/tasks/aapt_builder.py | fakeNetflix/square-repo-pants | 28a018c7f47900aec4f576c81a52e0e4b41d9fec | [
"Apache-2.0"
] | 5 | 2015-03-30T02:46:53.000Z | 2018-03-08T20:10:43.000Z | # coding=utf-8
# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
from __future__ import (nested_scopes, generators, division, absolute_import, with_statement,
print_function, unicode_literals)
import os
import subprocess
from twitter.common import log
from pants.backend.android.targets.android_binary import AndroidBinary
from pants.backend.android.targets.android_resources import AndroidResources
from pants.backend.android.tasks.aapt_task import AaptTask
from pants.base.build_environment import get_buildroot
from pants.base.exceptions import TaskError
from pants.base.workunit import WorkUnit
from pants.util.dirutil import safe_mkdir
class AaptBuilder(AaptTask):
"""Build an android bundle with compiled code and assets.
This class gathers compiled classes (an Android dex archive) and packages it with the
target's resource files. The output is an unsigned .apk, an Android application package file.
"""
@classmethod
def product_types(cls):
return ['apk']
@staticmethod
def is_app(target):
return isinstance(target, AndroidBinary)
def __init__(self, *args, **kwargs):
super(AaptBuilder, self).__init__(*args, **kwargs)
def prepare(self, round_manager):
round_manager.require_data('dex')
def render_args(self, target, resource_dir, inputs):
args = []
# Glossary of used aapt flags. Aapt handles a ton of action, this will continue to expand.
# : 'package' is the main aapt operation (see class docstring for more info).
# : '-M' is the AndroidManifest.xml of the project.
# : '-S' points to the resource_dir to "spider" down while collecting resources.
# : '-I' packages to add to base "include" set, here the android.jar of the target-sdk.
# : '--ignored-assets' patterns for the aapt to skip. This is the default w/ 'BUILD*' added.
# : '-F' The name and location of the .apk file to output
# : additional positional arguments are treated as input directories to gather files from.
args.extend([self.aapt_tool(target.build_tools_version)])
args.extend(['package', '-M', target.manifest])
args.extend(['-S'])
args.extend(resource_dir)
args.extend(['-I', self.android_jar_tool(target.target_sdk)])
args.extend(['--ignore-assets', self.ignored_assets])
args.extend(['-F', os.path.join(self.workdir, target.app_name + '-unsigned.apk')])
args.extend(inputs)
log.debug('Executing: {0}'.format(args))
return args
def execute(self):
safe_mkdir(self.workdir)
# TODO(mateor) map stderr and stdout to workunit streams (see CR 859)
with self.context.new_workunit(name='apk-bundle', labels=[WorkUnit.MULTITOOL]):
targets = self.context.targets(self.is_app)
with self.invalidated(targets) as invalidation_check:
invalid_targets = []
for vt in invalidation_check.invalid_vts:
invalid_targets.extend(vt.targets)
for target in invalid_targets:
# 'input_dirs' is the folder containing the Android dex file
input_dirs = []
# 'gen_out' holds resource folders (e.g. 'res')
gen_out = []
mapping = self.context.products.get('dex')
for basedir in mapping.get(target):
input_dirs.append(basedir)
def gather_resources(target):
"""Gather the 'resource_dir' of the target"""
if isinstance(target, AndroidResources):
gen_out.append(os.path.join(get_buildroot(), target.resource_dir))
target.walk(gather_resources)
process = subprocess.Popen(self.render_args(target, gen_out, input_dirs))
result = process.wait()
if result != 0:
raise TaskError('Android aapt tool exited non-zero ({code})'.format(code=result))
for target in targets:
self.context.products.get('apk').add(target, self.workdir).append(target.app_name + "-unsigned.apk")
| 41.71875 | 106 | 0.698127 | 530 | 4,005 | 5.154717 | 0.403774 | 0.029283 | 0.01757 | 0.025256 | 0.044656 | 0.027086 | 0 | 0 | 0 | 0 | 0 | 0.00373 | 0.196754 | 4,005 | 95 | 107 | 42.157895 | 0.845508 | 0.30437 | 0 | 0 | 0 | 0 | 0.048657 | 0 | 0 | 0 | 0 | 0.010526 | 0 | 1 | 0.118644 | false | 0 | 0.186441 | 0.033898 | 0.372881 | 0.016949 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
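`render_args` assembles a flat argv list for `subprocess.Popen` by repeatedly calling `args.extend`. A simplified sketch of that construction pattern (tool path, flags, and inputs here are placeholders, not the real task configuration; note the original passes all resource dirs after a single `-S`, whereas this sketch repeats the flag per directory):

```python
# Simplified sketch of flat-argv assembly for an external packaging tool.
def build_aapt_args(tool, manifest, res_dirs, out_apk, inputs):
    args = [tool, 'package', '-M', manifest]
    for res in res_dirs:
        args.extend(['-S', res])          # one resource dir per -S flag
    args.extend(['-F', out_apk])          # output .apk location
    args.extend(inputs)                   # positional input directories
    return args

argv = build_aapt_args('aapt', 'AndroidManifest.xml', ['res'],
                       'app-unsigned.apk', ['bin'])
```

Keeping every token as a separate list element (rather than one shell string) is what lets `subprocess.Popen` run the tool without shell quoting issues.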
8101825b7fae5806f4a1d2d670c101bc508918db | 5,681 | py | Python | modules/documents.py | rotsee/protokollen | a001a1db86df57adcf5c53c95c4c2fae426340f1 | [
"MIT",
"Apache-2.0",
"CC0-1.0",
"Unlicense"
] | 4 | 2015-03-22T20:23:36.000Z | 2015-12-09T14:31:34.000Z | modules/documents.py | rotsee/protokollen | a001a1db86df57adcf5c53c95c4c2fae426340f1 | [
"MIT",
"Apache-2.0",
"CC0-1.0",
"Unlicense"
] | 4 | 2015-03-24T10:42:00.000Z | 2016-06-21T08:44:01.000Z | modules/documents.py | rotsee/protokollen | a001a1db86df57adcf5c53c95c4c2fae426340f1 | [
"MIT",
"Apache-2.0",
"CC0-1.0",
"Unlicense"
] | null | null | null | # -*- coding: utf-8 -*-
"""This module contains classes for documents, and lists of documents.
Documents are defined by the document rules in settings.py
A file can contain one or more document. However, a document can
not be constructed from more than one file. This is a limitation,
obvious in cases like Gotlands kommun, where meeting minutes are
split up in a large number of files.
"""
import settings
from modules.utils import make_unicode, last_index
from modules.extractors.documentBase import ExtractionNotAllowed
document_headers = {
"Content-Type": "text/plain",
"Content-Disposition": "attachment",
"Cache-Control": "public"
}
class DocumentList(object):
"""Contains a list of documents, extracted from a file.
"""
def __init__(self, extractor):
"""Create a list of documents, using `extractor`
"""
self._documents = []
page_types_and_dates = []
"""Keep track of documents by type and date, to be able to merge
documents depending on `settings.document_type_settings`
"""
# Loop through pages, and add pages of the same type and date together
last_page_type = None
last_page_date = None
documents = []
try:
for page in extractor.get_next_page():
temp_doc = Document(page, extractor)
if (len(documents) > 0 and
temp_doc.type_ == last_page_type and
temp_doc.date == last_page_date):
documents[-1].merge_with(temp_doc)
else:
documents.append(temp_doc)
page_types_and_dates.append((temp_doc.type_, temp_doc.date))
last_page_type = temp_doc.type_
last_page_date = temp_doc.date
except ExtractionNotAllowed:
raise ExtractionNotAllowed
# merge documents, if disallow_infixes == True
doc_settings = settings.document_type_settings
disallow_infixes = [d for d in doc_settings
if doc_settings[d]["disallow_infixes"] is True]
"""Document types that disallow holes"""
num_docs = len(page_types_and_dates)
i = 0
while i < num_docs:
(type_, date) = page_types_and_dates[i]
last_match = last_index(page_types_and_dates, (type_, date))
if type_ in disallow_infixes and last_match > i:
num_docs_to_merge = last_match - i + 1
new_doc = documents.pop(0)
for j in range(i, last_match):
new_doc.merge_with(documents.pop(0))
self._documents.append(new_doc)
i += num_docs_to_merge
else:
doc_to_merge = documents.pop(0)
self._documents.append(doc_to_merge)
i += 1
def get_next_document(self):
for document in self._documents:
yield document
def __len__(self):
"""len is the number of documents"""
return len(self._documents)
class Document(object):
"""Represents a single document
"""
text = ""
header = ""
date = None
type_ = None
def __init__(self, page, extractor):
"""Create a document stub from a page. Use add_page
to keep extending this document.
"""
self.text = page.get_text()
self.header = page.get_header() or extractor.get_header()
self.date = page.get_date() or extractor.get_date()
self.type_ = self.get_document_type()
self.date = page.get_date() or extractor.get_date()
def append_page(self, page):
"""Append content from a page to this document.
"""
pass
def append_text(self, text):
"""Append content to this document.
"""
self.text += text
def merge_with(self, document):
"""Merge this document with another one"""
try:
self.text += document.text
except UnicodeDecodeError:
self.text = make_unicode(self.text) + make_unicode(document.text)
def __len__(self):
"""len is the length of the total plaintext"""
return len(self.text)
def get_document_type(self):
"""
Return the first matching document type, based on this
header text.
"""
for document_type in settings.document_rules:
if self.parse_rules(document_type[1], self.header):
return document_type[0]
return None
def parse_rules(self, tuple_, header):
"""Parse document rules. See settings.py for syntax"""
rule_key = tuple_[0].upper()
rule_val = tuple_[1]
header = header.upper()
# --------- Logical separators --------
if rule_key == "AND":
hit = True
for rule in rule_val:
hit = hit and self.parse_rules(rule, header)
return hit
elif rule_key == "OR":
hit = False
for rule in rule_val:
hit = hit or self.parse_rules(rule, header)
return hit
elif rule_key == "NOT":
hit = not self.parse_rules(rule_val, header)
return hit
# -------------- Rules ----------------
elif rule_key == "HEADER_CONTAINS":
try:
pos = make_unicode(header).find(rule_val.upper())
except UnicodeDecodeError:
pos = -1
return pos > -1
if __name__ == "__main__":
print "This module is only intended to be called from other scripts."
import sys
sys.exit()
| 33.417647 | 80 | 0.582996 | 686 | 5,681 | 4.603499 | 0.246356 | 0.019949 | 0.018999 | 0.026916 | 0.141862 | 0.096897 | 0.065231 | 0.051298 | 0.051298 | 0.027866 | 0 | 0.003908 | 0.324415 | 5,681 | 169 | 81 | 33.615385 | 0.818916 | 0.037141 | 0 | 0.149533 | 0 | 0 | 0.041942 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.009346 | 0.037383 | null | null | 0.009346 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
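`Document.parse_rules` evaluates a small recursive rule grammar — logical separators `AND`/`OR`/`NOT` over a `HEADER_CONTAINS` leaf rule, case-insensitive. A standalone re-implementation of that grammar, detached from the class and from `settings.py` (the example rule below is invented for illustration):

```python
# Standalone version of the AND/OR/NOT/HEADER_CONTAINS rule grammar used by
# Document.parse_rules. A rule is a (key, value) tuple; logical keys carry
# nested rules, HEADER_CONTAINS carries a substring to match.
def parse_rules(rule, header):
    key, val = rule[0].upper(), rule[1]
    header = header.upper()
    if key == "AND":
        return all(parse_rules(r, header) for r in val)
    if key == "OR":
        return any(parse_rules(r, header) for r in val)
    if key == "NOT":
        return not parse_rules(val, header)
    if key == "HEADER_CONTAINS":
        return header.find(val.upper()) > -1
    return False

# Match headers that mention "protokoll" but not "bilaga".
rule = ("AND", [("HEADER_CONTAINS", "protokoll"),
                ("NOT", ("HEADER_CONTAINS", "bilaga"))])
```

The uppercase normalization on both the rule value and the header is what makes the matching case-insensitive, mirroring the method above.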
8101888cafdd6d738a67d105df6945c67e4d48e2 | 773 | py | Python | tools/amp_segment/ina_speech_segmenter.py | saratkumar/galaxy | 35cd0987239c1b006d6eaf70b4a03a58fb857a12 | [
"CC-BY-3.0"
] | 1 | 2020-03-11T15:17:32.000Z | 2020-03-11T15:17:32.000Z | tools/amp_segment/ina_speech_segmenter.py | saratkumar/galaxy | 35cd0987239c1b006d6eaf70b4a03a58fb857a12 | [
"CC-BY-3.0"
] | 72 | 2019-06-06T18:52:41.000Z | 2022-02-17T02:53:18.000Z | tools/done/amp_segment/ina_speech_segmenter.py | AudiovisualMetadataPlatform/amp_mgms | 593d4f4d40b597a7753cd152cd233976e6b28c75 | [
"Apache-2.0"
] | 1 | 2022-03-01T08:07:54.000Z | 2022-03-01T08:07:54.000Z | #!/usr/bin/env python3
import os
import os.path
import shutil
import subprocess
import sys
import tempfile
import uuid
import mgm_utils
def main():
(root_dir, input_file, json_file) = sys.argv[1:4]
tmpName = str(uuid.uuid4())
tmpdir = "/tmp"
temp_input_file = f"{tmpdir}/{tmpName}.dat"
temp_output_file = f"{tmpdir}/{tmpName}.json"
shutil.copy(input_file, temp_input_file)
sif = mgm_utils.get_sif_dir(root_dir) + "/ina_segmentation.sif"
r = subprocess.run(["singularity", "run", sif, temp_input_file, temp_output_file])
shutil.copy(temp_output_file, json_file)
if os.path.exists(temp_input_file):
os.remove(temp_input_file)
if os.path.exists(temp_output_file):
os.remove(temp_output_file)
exit(r.returncode)
if __name__ == "__main__":
main()
| 21.472222 | 83 | 0.742561 | 122 | 773 | 4.385246 | 0.385246 | 0.117757 | 0.121495 | 0.06729 | 0.082243 | 0.082243 | 0 | 0 | 0 | 0 | 0 | 0.005908 | 0.124191 | 773 | 35 | 84 | 22.085714 | 0.784343 | 0.027167 | 0 | 0 | 0 | 0 | 0.122503 | 0.087883 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.32 | 0 | 0.36 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
81050df4590617cea7e0daedc54d45bd783c7cfa | 367 | py | Python | stickmanZ/__main__.py | MichaelMcFarland98/cse210-project | 9e5a45a75f465fe123e33712d3c19dd88e98246a | [
"MIT"
] | 1 | 2021-07-24T00:40:14.000Z | 2021-07-24T00:40:14.000Z | stickmanZ/__main__.py | MichaelMcFarland98/cse210-project | 9e5a45a75f465fe123e33712d3c19dd88e98246a | [
"MIT"
] | null | null | null | stickmanZ/__main__.py | MichaelMcFarland98/cse210-project | 9e5a45a75f465fe123e33712d3c19dd88e98246a | [
"MIT"
] | null | null | null |
from game.game_view import GameView
from game.menu_view import menu_view
from game import constants
import arcade
SCREEN_WIDTH = constants.SCREEN_WIDTH
SCREEN_HEIGHT = constants.SCREEN_HEIGHT
SCREEN_TITLE = constants.SCREEN_TITLE
window = arcade.Window(SCREEN_WIDTH, SCREEN_HEIGHT, SCREEN_TITLE)
start_view = menu_view()
window.show_view(start_view)
arcade.run()
| 22.9375 | 65 | 0.836512 | 54 | 367 | 5.388889 | 0.296296 | 0.082474 | 0.116838 | 0.158076 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.100817 | 367 | 15 | 66 | 24.466667 | 0.881818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.363636 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
810b128cc1280e3c864be85f0fd7db633ecb097d | 35,104 | py | Python | knx-test.py | WAvdBeek/CoAPthon3 | 5aa9d6a6d9a2903d86b113da538df9bd970e6b44 | [
"MIT"
] | 1 | 2021-11-05T08:04:33.000Z | 2021-11-05T08:04:33.000Z | knx-test.py | WAvdBeek/CoAPthon3 | 5aa9d6a6d9a2903d86b113da538df9bd970e6b44 | [
"MIT"
] | 1 | 2021-07-21T12:40:54.000Z | 2021-07-21T14:42:42.000Z | knx-test.py | WAvdBeek/CoAPthon3 | 5aa9d6a6d9a2903d86b113da538df9bd970e6b44 | [
"MIT"
] | 1 | 2021-07-20T10:18:17.000Z | 2021-07-20T10:18:17.000Z | #!/usr/bin/env python
import getopt
import socket
import sys
import cbor
#from cbor2 import dumps, loads
import json
import time
import traceback
from coapthon.client.helperclient import HelperClient
from coapthon.utils import parse_uri
from coapthon import defines
client = None
paths = {}
paths_extend = {}
my_base = ""
def usage(): # pragma: no cover
print("Command:\tknxcoapclient.py -o -p [-P]")
print("Options:")
print("\t-o, --operation=\tGET|GETNONE|PUT|POST|DELETE|DISCOVER|OBSERVE")
print("\t-p, --path=\t\t\tPath of the request")
print("\t-P, --payload=\t\tPayload of the request")
print("\t-c, --contenttype=\t\tcontenttype of the request")
print("\t-f, --payload-file=\t\tFile with payload of the request")
def get_url(line):
data = line.split(">")
url = data[0]
return url[1:]
def get_ct(line):
tagvalues = line.split(";")
for tag in tagvalues:
if tag.startswith("ct"):
ct_value_all = tag.split("=")
ct_value = ct_value_all[1].split(",")
return ct_value[0]
return ""
def get_base(url):
# python3 knxcoapclient.py -o GET -p coap://[fe80::6513:3050:71a7:5b98]:63914/a -c 50
my_url = url.replace("coap://","")
mybase = my_url.split("/")
return mybase[0]
def get_base_from_link(payload):
print("get_base_from_link\n")
global paths
global paths_extend
lines = payload.splitlines()
# add the
if len(paths) == 0:
my_base = get_base(get_url(lines[0]))
return my_base
def get_sn(my_base):
print("Get SN :");
sn = execute_get("coap://"+my_base+"/dev/sn", 60)
json_data = cbor.loads(sn.payload)
#print ("SN : ", json_data)
return json_data
def install(my_base):
    sn = get_sn(my_base)
    print(" SN : ", sn)
    iid = "5"  # installation id
    if sn == "000001":
        # sensor, e.g. sending
        print("--------------------")
        print("Installing SN: ", sn)
        content = {2: "reset"}
        print("reset :", content)
        execute_post("coap://"+my_base+"/.well-known/knx", 60, 60, content)
        content = True
        print("set PM :", content)
        execute_put("coap://"+my_base+"/dev/pm", 60, 60, content)
        content = 1
        print("set IA :", content)
        execute_put("coap://"+my_base+"/dev/ia", 60, 60, content)
        content = iid
        execute_put("coap://"+my_base+"/dev/iid", 60, 60, content)
        content = {2: "startLoading"}
        print("lsm :", content)
        execute_post("coap://"+my_base+"/a/lsm", 60, 60, content)
        execute_get("coap://"+my_base+"/a/lsm", 60)
        # group object table
        # id (0) = 1
        # url (11) = /p/light
        # ga (7) = 1
        # cflags (8) = ["r"] ; read = 1, write = 2, transmit = 3, update = 4
        content = [{0: 1, 11: "p/push", 7: [1], 8: [2]}]
        execute_post("coap://"+my_base+"/fp/g", 60, 60, content)
        execute_get("coap://"+my_base+"/fp/g", 40)
        # recipient table
        # id (0) = 1
        # ia (12)
        # url (11) = .knx
        # ga (7) = 1
        # cflags (8) = ["r"] ; read = 1, write = 2, transmit = 3, update = 4
        content = [{0: 1, 11: "/p/push", 7: [1], 12: "blah.blah"}]
        execute_post("coap://"+my_base+"/fp/r", 60, 60, content)
        content = False
        print("set PM :", content)
        execute_put("coap://"+my_base+"/dev/pm", 60, 60, content)
        content = {2: "loadComplete"}
        print("lsm :", content)
        execute_post("coap://"+my_base+"/a/lsm", 60, 60, content)
        execute_get("coap://"+my_base+"/a/lsm", 60)
    if sn == "000002":
        # actuator ==> recipient
        # should use /fp/r
        print("--------------------")
        print("installing SN: ", sn)
        content = True
        print("set PM :", content)
        execute_put("coap://"+my_base+"/dev/pm", 60, 60, content)
        content = 2
        print("set IA :", content)
        execute_put("coap://"+my_base+"/dev/ia", 60, 60, content)
        content = iid
        execute_put("coap://"+my_base+"/dev/iid", 60, 60, content)
        content = {2: "startLoading"}
        print("lsm :", content)
        execute_post("coap://"+my_base+"/a/lsm", 60, 60, content)
        execute_get("coap://"+my_base+"/a/lsm", 60)
        # group object table
        # id (0) = 1
        # url (11) = /p/light
        # ga (7) = 1
        # cflags (8) = ["r"] ; read = 1, write = 2, transmit = 3, update = 4
        content = [{0: 1, 11: "/p/light", 7: [1], 8: [1]}]
        execute_post("coap://"+my_base+"/fp/g", 60, 60, content)
        execute_get("coap://"+my_base+"/fp/g", 40)
        # publisher table
        # id (0) = 1
        # ia (12)
        # url (11) = .knx
        # ga (7) = 1
        # cflags (8) = ["r"] ; read = 1, write = 2, transmit = 3, update = 4
        content = [{0: 1, 11: ".knx", 7: [1], 12: "blah.blah"}]
        execute_post("coap://"+my_base+"/fp/p", 60, 60, content)
        content = False
        print("set PM :", content)
        execute_put("coap://"+my_base+"/dev/pm", 60, 60, content)
        content = {2: "loadComplete"}
        print("lsm :", content)
        execute_post("coap://"+my_base+"/a/lsm", 60, 60, content)
        execute_get("coap://"+my_base+"/a/lsm", 60)
        # do a post
        content = {"sia": 5678, "st": 55, "ga": 1, "value": 100}
        content = {4: 5678, "st": 55, 7: 1, "value": 100}
        # st ga value (1)
        #content = {5: {6: 1, 7: 1, 1: True}}
        #execute_post("coap://"+my_base+"/.knx", 60, 60, content)
        content = {4: 5678, 5: {6: 1, 7: 1, 1: False}}
        #execute_post("coap://"+my_base+"/.knx", 60, 60, content)
        #execute_post("coap://[FF02::FD]:5683/.knx", 60, 60, content)
# no json tags as strings
def do_sequence_dev(my_base):
    print("===================")
    print("Get SN :")
    sn = execute_get("coap://"+my_base+"/dev/sn", 60)
    sn = get_sn(my_base)
    print(" SN : ", sn)
    print("===================")
    print("Get HWT :")
    execute_get("coap://"+my_base+"/dev/hwt", 60)
    print("===================")
    print("Get HWV :")
    execute_get("coap://"+my_base+"/dev/hwv", 60)
    print("===================")
    print("Get FWV :")
    execute_get("coap://"+my_base+"/dev/fwv", 60)
    print("===================")
    print("Get Model :")
    execute_get("coap://"+my_base+"/dev/model", 60)
    print("===================")
    content = True
    print("set PM :", content)
    execute_put("coap://"+my_base+"/dev/pm", 60, 60, content)
    execute_get("coap://"+my_base+"/dev/pm", 60)
    content = False
    print("set PM :", content)
    execute_put("coap://"+my_base+"/dev/pm", 60, 60, content)
    execute_get("coap://"+my_base+"/dev/pm", 60)
    print("===================")
    content = 44
    print("set IA :", content)
    execute_put("coap://"+my_base+"/dev/ia", 60, 60, content)
    execute_get("coap://"+my_base+"/dev/ia", 60)
    print("===================")
    content = "my host name"
    print("set hostname :", content)
    execute_put("coap://"+my_base+"/dev/hostname", 60, 60, content)
    execute_get("coap://"+my_base+"/dev/hostname", 60)
    print("===================")
    content = " iid xxx"
    print("set iid :", content)
    execute_put("coap://"+my_base+"/dev/iid", 60, 60, content)
    execute_get("coap://"+my_base+"/dev/iid", 60)
# id ==> 0
# href ==> 11
# ga ==> 7
# cflag ==> 8
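The name-to-number mapping above can be captured in a small helper; `GO_KEYS` and `go_to_wire` are illustrative names, not part of this script:

```python
# Illustrative helper (not used by the sequences below): translate a
# string-keyed group-object entry into the integer-keyed form that the
# *_int variants post on the wire.
GO_KEYS = {"id": 0, "ga": 7, "cflag": 8, "href": 11}

def go_to_wire(entry):
    # Replace each string key with its numeric CBOR map key.
    return {GO_KEYS[key]: value for key, value in entry.items()}
```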
def do_sequence_fp_g_int(my_base):
    # url, content, accept, contents
    content = [{0: 1, 11: "xxxx1", 8: [1, 2, 3, 4, 5], 7: [2222, 3333]}]
    execute_post("coap://"+my_base+"/fp/g", 60, 60, content)
    execute_get("coap://"+my_base+"/fp/g/1", 60)
    execute_get("coap://"+my_base+"/fp/g", 40)
    content = [{0: 2, 11: "xxxxyyy2", 8: [1, 4, 5], 7: [44, 55, 33]},
               {0: 3, 11: "xxxxyyy3", 8: [1, 4, 5], 7: [44, 55, 33]}]
    execute_post("coap://"+my_base+"/fp/g", 60, 60, content)
    execute_get("coap://"+my_base+"/fp/g/2", 60)
    execute_get("coap://"+my_base+"/fp/g/3", 60)
    execute_get("coap://"+my_base+"/fp/g", 40)
    execute_del("coap://"+my_base+"/fp/g/3", 60)
    execute_get("coap://"+my_base+"/fp/g/3", 60)
    execute_get("coap://"+my_base+"/fp/g", 40)
def do_sequence_fp_g(my_base):
    # url, content, accept, contents
    content = [{"id": 1, "href": "xxxx1", "cflag": [1, 2, 3, 4, 5], "ga": [2222, 3333]}]
    execute_post("coap://"+my_base+"/fp/g", 60, 60, content)
    execute_get("coap://"+my_base+"/fp/g/1", 60)
    execute_get("coap://"+my_base+"/fp/g", 40)
    content = [{"id": 2, "href": "xxxxyyy2", "cflag": [1, 4, 5], "ga": [44, 55, 33]},
               {"id": 3, "href": "xxxxyyy3", "cflag": [1, 4, 5], "ga": [44, 55, 33]}]
    execute_post("coap://"+my_base+"/fp/g", 60, 60, content)
    execute_get("coap://"+my_base+"/fp/g/2", 60)
    execute_get("coap://"+my_base+"/fp/g/3", 60)
    execute_get("coap://"+my_base+"/fp/g", 40)
    execute_del("coap://"+my_base+"/fp/g/3", 60)
    execute_get("coap://"+my_base+"/fp/g/3", 60)
    execute_get("coap://"+my_base+"/fp/g", 40)
# id ==> 0
# ia ==> 12
# path ==> 112
# url ==> 10
# ga ==> 7
def do_sequence_fp_p_int(my_base):
    # url, content, accept, contents
    content = [{0: 1, 12: "Ia.IA1", 112: "path1", 7: [2222, 3333]}]
    execute_post("coap://"+my_base+"/fp/p", 60, 60, content)
    execute_get("coap://"+my_base+"/fp/p/1", 60)
    # 40 == application/link-format
    execute_get("coap://"+my_base+"/fp/p", 40)
    content = [{0: 2, 12: "xxxxyyyia2", 112: "path2", 7: [44, 55, 33]},
               {0: 3, 12: "xxxxyyyia3", 112: "path3", 7: [44, 55, 33]}]
    execute_post("coap://"+my_base+"/fp/p", 60, 60, content)
    execute_get("coap://"+my_base+"/fp/p/2", 60)
    execute_get("coap://"+my_base+"/fp/p/3", 60)
    execute_get("coap://"+my_base+"/fp/p", 40)
    execute_del("coap://"+my_base+"/fp/p/3", 60)
    execute_get("coap://"+my_base+"/fp/p/3", 60)
    execute_get("coap://"+my_base+"/fp/p", 40)
def do_sequence_fp_p(my_base):
    # url, content, accept, contents
    content = [{"id": 1, "ia": "Ia.IA1", "path": "path1", "ga": [2222, 3333]}]
    execute_post("coap://"+my_base+"/fp/p", 60, 60, content)
    execute_get("coap://"+my_base+"/fp/p/1", 60)
    # 40 == application/link-format
    execute_get("coap://"+my_base+"/fp/p", 40)
    content = [{"id": 2, "ia": "xxxxyyyia2", "path": "path2", "ga": [44, 55, 33]},
               {"id": 3, "ia": "xxxxyyyia3", "path": "path3", "ga": [44, 55, 33]}]
    execute_post("coap://"+my_base+"/fp/p", 60, 60, content)
    execute_get("coap://"+my_base+"/fp/p/2", 60)
    execute_get("coap://"+my_base+"/fp/p/3", 60)
    execute_get("coap://"+my_base+"/fp/p", 40)
    execute_del("coap://"+my_base+"/fp/p/3", 60)
    execute_get("coap://"+my_base+"/fp/p/3", 60)
    execute_get("coap://"+my_base+"/fp/p", 40)
# id ==> 0
# ia ==> 12
# path ==> 112
# url ==> 10
# ga ==> 7
def do_sequence_fp_r_int(my_base):
    # url, content, accept, contents
    content = [{0: 1, 12: "r-Ia.IA1", 112: "r-path1", 7: [2222, 3333]}]
    execute_post("coap://"+my_base+"/fp/r", 60, 60, content)
    execute_get("coap://"+my_base+"/fp/r/1", 60)
    execute_get("coap://"+my_base+"/fp/r", 40)
    content = [{0: 2, 12: "r-Ia.IA2", 10: "url2", 112: "r-path2", 7: [44, 55, 33]},
               {0: 3, 12: "r-Ia.IA3", 112: "r-path3", 7: [44, 55, 33]}]
    execute_post("coap://"+my_base+"/fp/r", 60, 60, content)
    execute_get("coap://"+my_base+"/fp/r/2", 60)
    execute_get("coap://"+my_base+"/fp/r/3", 60)
    execute_get("coap://"+my_base+"/fp/r", 40)
    execute_del("coap://"+my_base+"/fp/r/3", 60)
    execute_get("coap://"+my_base+"/fp/r/3", 60)
    execute_get("coap://"+my_base+"/fp/r", 40)
def do_sequence_fp_r(my_base):
    # url, content, accept, contents
    content = [{"id": 1, "ia": "r-Ia.IA1", "path": "r-path1", "ga": [2222, 3333]}]
    execute_post("coap://"+my_base+"/fp/r", 60, 60, content)
    execute_get("coap://"+my_base+"/fp/r/1", 60)
    execute_get("coap://"+my_base+"/fp/r", 40)
    content = [{"id": 2, "ia": "r-Ia.IA2", "path": "r-path2", "ga": [44, 55, 33]},
               {"id": 3, "ia": "r-Ia.IA3", "path": "r-path3", "ga": [44, 55, 33]}]
    execute_post("coap://"+my_base+"/fp/r", 60, 60, content)
    execute_get("coap://"+my_base+"/fp/r/2", 60)
    execute_get("coap://"+my_base+"/fp/r/3", 60)
    execute_get("coap://"+my_base+"/fp/r", 40)
    execute_del("coap://"+my_base+"/fp/r/3", 60)
    execute_get("coap://"+my_base+"/fp/r/3", 60)
    execute_get("coap://"+my_base+"/fp/r", 40)
# cmd ==> 2
def do_sequence_lsm_int(my_base):
    # url, content, accept, contents
    execute_get("coap://"+my_base+"/a/lsm", 60)
    content = {2: "startLoading"}
    execute_post("coap://"+my_base+"/a/lsm", 60, 60, content)
    execute_get("coap://"+my_base+"/a/lsm", 60)
    content = {2: "loadComplete"}
    execute_post("coap://"+my_base+"/a/lsm", 60, 60, content)
    execute_get("coap://"+my_base+"/a/lsm", 60)
    content = {2: "unload"}
    execute_post("coap://"+my_base+"/a/lsm", 60, 60, content)
    execute_get("coap://"+my_base+"/a/lsm", 60)
def do_sequence_lsm(my_base):
    # url, content, accept, contents
    execute_get("coap://"+my_base+"/a/lsm", 60)
    content = {"cmd": "startLoading"}
    execute_post("coap://"+my_base+"/a/lsm", 60, 60, content)
    execute_get("coap://"+my_base+"/a/lsm", 60)
    content = {"cmd": "loadComplete"}
    execute_post("coap://"+my_base+"/a/lsm", 60, 60, content)
    execute_get("coap://"+my_base+"/a/lsm", 60)
    content = {"cmd": "unload"}
    execute_post("coap://"+my_base+"/a/lsm", 60, 60, content)
    execute_get("coap://"+my_base+"/a/lsm", 60)
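The only difference between the two lsm sequences is the key encoding: integer keys exist because they serialize smaller in CBOR. A hand-rolled sketch (tiny unsigned ints, short text strings, and small maps only; this is not a real CBOR encoder) illustrates the size difference:

```python
# Minimal CBOR-encoding sketch, for illustration only: {2: "unload"} encodes
# to 9 bytes while the string-keyed {"cmd": "unload"} needs 12.
def cbor_tiny(obj):
    if isinstance(obj, int) and 0 <= obj < 24:
        return bytes([obj])                      # major type 0, value inline
    if isinstance(obj, str) and len(obj) < 24:
        return bytes([0x60 | len(obj)]) + obj.encode()   # major type 3
    if isinstance(obj, dict) and len(obj) < 24:
        out = bytes([0xA0 | len(obj)])           # major type 5 (map)
        for k, v in obj.items():
            out += cbor_tiny(k) + cbor_tiny(v)
        return out
    raise ValueError("unsupported in this sketch")
```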
# /.knx resource
# sia ==> 4
# ga ==> 7
# st ==> 6
def do_sequence_knx_knx_int(my_base):
    # url, content, accept, contents
    execute_get("coap://"+my_base+"/.knx", 60)
    content = {"value": {4: 5, 7: 7777, 6: "rp"}}
    execute_post("coap://"+my_base+"/.knx", 60, 60, content)
    execute_get("coap://"+my_base+"/.knx", 60)
# ./knx resource
def do_sequence_knx_knx(my_base):
    # url, content, accept, contents
    execute_get("coap://"+my_base+"/.knx", 60)
    content = {"value": {"sia": 5, "ga": 7, "st": "rp"}}
    execute_post("coap://"+my_base+"/.knx", 60, 60, content)
    execute_get("coap://"+my_base+"/.knx", 60)
def do_sequence_knx_spake(my_base):
    # url, content, accept, contents
    # sequence:
    # - parameter exchange: 15 (rnd) - return value
    # - credential exchange: 10 - return value
    # - pase verification exchange: 14 - no return value
    content = {15: b"a-15-sdfsdred"}
    execute_post("coap://"+my_base+"/.well-known/knx/spake", 60, 60, content)
    # pa
    content = {10: b"s10dfsdfsfs"}
    execute_post("coap://"+my_base+"/.well-known/knx/spake", 60, 60, content)
    # ca
    content = {14: b"a15sdfsdred"}
    execute_post("coap://"+my_base+"/.well-known/knx/spake", 60, 60, content)
# expecting return
def do_sequence_knx_idevid(my_base):
    # url, content, accept, contents
    execute_get("coap://"+my_base+"/.well-known/knx/idevid", 282)
def do_sequence_knx_ldevid(my_base):
    # url, content, accept, contents
    execute_get("coap://"+my_base+"/.well-known/knx/ldevid", 282)
def do_sequence_knx_osn(my_base):
    # url, content, accept, contents
    execute_get("coap://"+my_base+"/.well-known/knx/osn", 60)
def do_sequence_knx_crc(my_base):
    # url, content, accept, contents
    execute_get("coap://"+my_base+"/.well-known/knx/crc", 60)
def do_sequence_oscore(my_base):
    # url, content, accept, contents
    execute_get("coap://"+my_base+"/f/oscore", 40)
    execute_get("coap://"+my_base+"/p/oscore/replwdo", 60)
    content = 105
    execute_put("coap://"+my_base+"/p/oscore/replwdo", 60, 60, content)
    execute_get("coap://"+my_base+"/p/oscore/replwdo", 60)
    execute_get("coap://"+my_base+"/p/oscore/osndelay", 60)
    content = 1050
    execute_put("coap://"+my_base+"/p/oscore/osndelay", 60, 60, content)
    execute_get("coap://"+my_base+"/p/oscore/osndelay", 60)
def do_sequence_core_knx(my_base):
    # url, content, accept, contents
    execute_get("coap://"+my_base+"/.well-known/knx", 60)
    content = {1: 5, 2: "reset"}
    execute_post("coap://"+my_base+"/.well-known/knx", 60, 60, content)
def do_sequence_a_sen(my_base):
    # url, content, accept, contents
    content = {2: "reset"}
    execute_post("coap://"+my_base+"/a/sen", 60, 60, content)
def do_sequence_auth(my_base):
    # url, content, accept, contents
    execute_get("coap://"+my_base+"/auth", 40)
def do_sequence_auth_at(my_base):
    # url, content, accept, contents
    execute_get("coap://"+my_base+"/auth/at", 40)
    content = {0: b"id", 1: 20, 2: b"ms", 3: "hkdf", 4: "alg", 5: b"salt", 6: b"contextId"}
    execute_post("coap://"+my_base+"/auth/at", 60, 60, content)
    content = {0: b"id2", 1: 20, 2: b"ms", 3: "hkdf", 4: "alg", 5: b"salt", 6: b"contextId2"}
    execute_post("coap://"+my_base+"/auth/at", 60, 60, content)
    execute_get("coap://"+my_base+"/auth/at", 40)
    execute_get("coap://"+my_base+"/auth/at/id", 60)
    execute_del("coap://"+my_base+"/auth/at/id", 60)
def do_sequence_f(my_base):
    # url, content, accept, contents
    execute_get("coap://"+my_base+"/f", 40)
    # note this one is a bit dirty hard coded...
    execute_get("coap://"+my_base+"/f/417", 40)
    execute_get("coap://"+my_base+"/.well-known/core", 40)
def do_sequence(my_base):
    #sn = get_sn(my_base)
    install(my_base)
    return  # note: the sequences below are currently skipped
    do_sequence_dev(my_base)
    #return
    do_sequence_fp_g_int(my_base)
    #do_sequence_fp_g(my_base)
    do_sequence_fp_p_int(my_base)
    #do_sequence_fp_p(my_base)
    do_sequence_fp_r_int(my_base)
    #do_sequence_fp_r(my_base)
    do_sequence_lsm_int(my_base)
    #do_sequence_lsm(my_base)
    do_sequence_lsm_int(my_base)
    # .knx
    do_sequence_knx_knx_int(my_base)
    #do_sequence_knx_knx(my_base)
    do_sequence_knx_spake(my_base)
    do_sequence_knx_idevid(my_base)
    do_sequence_knx_ldevid(my_base)
    do_sequence_knx_crc(my_base)
    do_sequence_knx_osn(my_base)
    do_sequence_oscore(my_base)
    do_sequence_core_knx(my_base)
    do_sequence_a_sen(my_base)
    do_sequence_auth(my_base)
    do_sequence_auth_at(my_base)
    do_sequence_f(my_base)
def client_callback_discovery(response, checkdata=None):
    global my_base
    print(" --- Discovery Callback ---")
    if response is not None:
        print("response code:", response.code)
        print("response type:", response.content_type)
        if response.code > 100:
            print("+++returned error+++")
            return
        if response.content_type == defines.Content_types["application/link-format"]:
            print(response.payload.decode())
            my_base = get_base_from_link(response.payload.decode())
            do_sequence(my_base)
def code2string(code):
    if code == 68:
        return "(Changed)"
    if code == 69:
        return "(Content)"
    if code == 132:
        return "(Not Found)"
    if code == 133:
        return "(METHOD_NOT_ALLOWED)"
    if code == 160:
        return "(INTERNAL_SERVER_ERROR)"
    return ""
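The numeric codes handled above follow CoAP's class.detail packing, where a response code byte is (class << 5) | detail. A hypothetical helper (not part of the original script) makes the correspondence explicit:

```python
# CoAP packs a response code as (class << 5) | detail, so 68 is 2.04 Changed,
# 69 is 2.05 Content, 132 is 4.04 Not Found, 133 is 4.05 Method Not Allowed,
# and 160 is 5.00 Internal Server Error.
def code2dotted(code):
    return "%d.%02d" % (code >> 5, code & 0x1F)
```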
def client_callback(response, checkdata=None):
    print(" --- Callback ---")
    if response is not None:
        print("response code:", response.code, code2string(response.code))
        print("response type:", response.content_type)
        if response.code > 100:
            print("+++returned error+++")
            return
        #print(response.pretty_print())
        if response.content_type == defines.Content_types["text/plain"]:
            if response.payload is not None:
                print(type(response.payload), len(response.payload))
                print("=========")
                print(response.payload)
                print("=========")
            else:
                print("payload: none")
        elif response.content_type == defines.Content_types["application/cbor"]:
            print(type(response.payload), len(response.payload))
            print("=========")
            print(response.payload)
            print("=========")
            #json_data = loads(response.payload)
            #print(json_data)
            #print("=========")
            json_string = ""
            try:
                json_data = cbor.loads(response.payload)
                json_string = json.dumps(json_data, indent=2, sort_keys=True)
            except Exception:
                print("error in cbor..")
            print(json_string)
            print("===+++===")
            if checkdata is not None:
                check_data = cbor.loads(checkdata)
                check_string = json.dumps(check_data, indent=2, sort_keys=True)
                print(" check: ")
                print(check_string)
                if check_string == json_string:
                    print(" =+++===> OK ")
                else:
                    print(" =+++===> NOT OK ")
                    print(json_string)
        elif response.content_type == defines.Content_types["application/vnd.ocf+cbor"]:
            print("application/vnd.ocf+cbor")
            try:
                print(type(response.payload), len(response.payload))
                print("=========")
                print(response.payload)
                print("=========")
                json_data = cbor.loads(response.payload)
                print(json_data)
                print("---------")
            except Exception:
                traceback.print_exc()
            json_string = json.dumps(json_data, indent=2, sort_keys=True)
            print(json_string)
        elif response.content_type == defines.Content_types["application/link-format"]:
            print(response.payload.decode())
        else:
            if response.payload is not None:
                print("type, len", type(response.payload), len(response.payload))
                print(response.payload)
            #else:
            #    print(" not handled: ", response)
    else:
        print(" Response : None")
def client_callback_observe(response): # pragma: no cover
    global client
    print("Callback_observe")
    check = True
    while check:
        chosen = input("Stop observing? [y/N]: ")
        if chosen != "" and not (chosen == "n" or chosen == "N" or chosen == "y" or chosen == "Y"):
            print("Unrecognized choice.")
            continue
        elif chosen == "y" or chosen == "Y":
            while True:
                rst = input("Send RST message? [Y/n]: ")
                if rst != "" and not (rst == "n" or rst == "N" or rst == "y" or rst == "Y"):
                    print("Unrecognized choice.")
                    continue
                elif rst == "" or rst == "y" or rst == "Y":
                    client.cancel_observing(response, True)
                else:
                    client.cancel_observing(response, False)
                check = False
                break
        else:
            break
def execute_get(mypath, ct_value):
    print("---------------------------")
    print("execute_get: ", ct_value, mypath)
    print(type(mypath))
    if mypath is None or len(mypath) < 5:
        return
    if not mypath.startswith("coap://"):
        print(" not executing: ", mypath)
        return
    ct = {}
    ct['accept'] = ct_value
    host, port, path = parse_uri(mypath)
    try:
        tmp = socket.gethostbyname(host)
        host = tmp
    except socket.gaierror:
        pass
    nclient = HelperClient(server=(host, port))
    response = nclient.get(path, None, None, **ct)
    client_callback(response)
    nclient.stop()
    return response
def execute_del(mypath, ct_value):
    print("---------------------------")
    print("execute_del: ", ct_value, mypath)
    do_exit = False
    ct = {}
    ct['accept'] = ct_value
    ct['content_type'] = ct_value
    if not mypath.startswith("coap://"):
        print(" not executing: ", mypath)
        return
    host, port, path = parse_uri(mypath)
    try:
        tmp = socket.gethostbyname(host)
        host = tmp
    except socket.gaierror:
        pass
    nclient = HelperClient(server=(host, port))
    nclientcheck = HelperClient(server=(host, port))
    payload = 0
    response = nclient.delete(path, None, None, **ct)
    client_callback(response)
    #nclient.stop()
    #sys.exit(2)
    print("=======")
def execute_put(mypath, ct_value, accept, content):
    print("---------------------------")
    print("execute_put: ", ct_value, mypath)
    do_exit = False
    ct = {}
    ct['accept'] = accept
    ct['content_type'] = ct_value
    if not mypath.startswith("coap://"):
        print(" not executing: ", mypath)
        return
    host, port, path = parse_uri(mypath)
    try:
        tmp = socket.gethostbyname(host)
        host = tmp
    except socket.gaierror:
        pass
    nclient = HelperClient(server=(host, port))
    nclientcheck = HelperClient(server=(host, port))
    payload = 0
    if accept == 60:
        payload = cbor.dumps(content)
    else:
        payload = content
    print("payload: ", payload)
    response = nclient.put(path, payload, None, None, None, **ct)
    client_callback(response)
    nclient.stop()
def execute_post(mypath, ct_value, accept, content):
    print("---------------------------")
    print("execute_post: ", ct_value, mypath)
    print(content)
    print(" ---------------------")
    do_exit = False
    ct = {}
    ct['accept'] = accept
    ct['content_type'] = ct_value
    if not mypath.startswith("coap://"):
        print(" not executing: ", mypath)
        return
    host, port, path = parse_uri(mypath)
    try:
        tmp = socket.gethostbyname(host)
        host = tmp
    except socket.gaierror:
        pass
    nclient = HelperClient(server=(host, port))
    #nclientcheck = HelperClient(server=(host, port))
    payload = 0
    if accept == 60:
        #print(" content :", content)
        payload = cbor.dumps(content)
    else:
        payload = content
    response = nclient.post(path, payload, None, None, None, **ct)
    client_callback(response)
    nclient.stop()
def main(): # pragma: no cover
    global client
    op = None
    path = None
    payload = None
    content_type = None
    #ct = {'content_type': defines.Content_types["application/link-format"]}
    ct = {}
    ct['accept'] = 40
    try:
        opts, args = getopt.getopt(sys.argv[1:], "ho:p:P:f:c:",
                                   ["help", "operation=", "path=", "payload=",
                                    "payload_file=", "content-type"])
    except getopt.GetoptError as err:
        # print help information and exit:
        print(str(err))  # will print something like "option -a not recognized"
        usage()
        sys.exit(2)
    for o, a in opts:
        if o in ("-o", "--operation"):
            op = a
        elif o in ("-p", "--path"):
            path = a
        elif o in ("-P", "--payload"):
            payload = a
        elif o in ("-c", "--content-type"):
            ct['accept'] = a
            print("content type request : ", ct)
        elif o in ("-f", "--payload-file"):
            with open(a, 'r') as f:
                payload = f.read()
        elif o in ("-h", "--help"):
            usage()
            sys.exit()
        else:
            usage()
            sys.exit(2)
    if op is None:
        print("Operation must be specified")
        usage()
        sys.exit(2)
    if path is None:
        print("Path must be specified")
        usage()
        sys.exit(2)
    if not path.startswith("coap://"):
        print("Path must conform to coap://host[:port]/path")
        usage()
        sys.exit(2)
    host, port, path = parse_uri(path)
    try:
        tmp = socket.gethostbyname(host)
        host = tmp
    except socket.gaierror:
        pass
    client = HelperClient(server=(host, port))
    if op == "GET":
        if path is None:
            print("Path cannot be empty for a GET request")
            usage()
            sys.exit(2)
        response = client.get(path, None, None, **ct)
        print(response.pretty_print())
        if response.content_type == defines.Content_types["application/json"]:
            json_data = json.loads(response.payload)
            json_string = json.dumps(json_data, indent=2, sort_keys=True)
            print("JSON ::")
            print(json_string)
        if response.content_type == defines.Content_types["application/cbor"]:
            json_data = cbor.loads(response.payload)
            json_string = json.dumps(json_data, indent=2, sort_keys=True)
            print("JSON ::")
            print(json_string)
        if response.content_type == defines.Content_types["application/link-format"]:
            #json_data = cbor.loads(response.payload)
            #json_string = json.dumps(json_data, indent=2, sort_keys=True)
            #print("JSON ::")
            print(response.payload.decode())
            print("\n\n")
        if response.content_type == defines.Content_types["application/vnd.ocf+cbor"]:
            json_data = cbor.loads(response.payload)
            json_string = json.dumps(json_data, indent=2, sort_keys=True)
            print("JSON ::")
            print(json_string)
        client.stop()
    elif op == "GETNONE":
        if path is None:
            print("Path cannot be empty for a GET-None request")
            usage()
            sys.exit(2)
        response = client.get_non(path, None, None, **ct)
        print(response.pretty_print())
        if response.content_type == defines.Content_types["application/json"]:
            json_data = json.loads(response.payload)
            json_string = json.dumps(json_data, indent=2, sort_keys=True)
            print("JSON ::")
            print(json_string)
        if response.content_type == defines.Content_types["application/cbor"]:
            json_data = cbor.loads(response.payload)
            json_string = json.dumps(json_data, indent=2, sort_keys=True)
            print("JSON ::")
            print(json_string)
        if response.content_type == defines.Content_types["application/vnd.ocf+cbor"]:
            json_data = cbor.loads(response.payload)
            json_string = json.dumps(json_data, indent=2, sort_keys=True)
            print("JSON ::")
            print(json_string)
        client.stop()
    elif op == "OBSERVE":
        if path is None:
            print("Path cannot be empty for a GET request")
            usage()
            sys.exit(2)
        client.observe(path, client_callback_observe)
    elif op == "DELETE":
        if path is None:
            print("Path cannot be empty for a DELETE request")
            usage()
            sys.exit(2)
        response = client.delete(path)
        print(response.pretty_print())
        client.stop()
    elif op == "POST":
        if path is None:
            print("Path cannot be empty for a POST request")
            usage()
            sys.exit(2)
        if payload is None:
            print("Payload cannot be empty for a POST request")
            usage()
            sys.exit(2)
        print("payload for POST (ascii):", payload)
        print(ct['accept'])
        if ct['accept'] == str(defines.Content_types["application/cbor"]):
            json_data = json.loads(payload)
            cbor_data = cbor.dumps(json_data)
            payload = bytes(cbor_data)
        if ct['accept'] == str(defines.Content_types["application/vnd.ocf+cbor"]):
            json_data = json.loads(payload)
            cbor_data = cbor.dumps(json_data)
            payload = cbor_data
        response = client.post(path, payload, None, None, **ct)
        print(response.pretty_print())
        if response.content_type == defines.Content_types["application/cbor"]:
            json_data = cbor.loads(response.payload)
            json_string = json.dumps(json_data, indent=2, sort_keys=True)
            print(json_string)
        if response.content_type == defines.Content_types["application/vnd.ocf+cbor"]:
            json_data = cbor.loads(response.payload)
            json_string = json.dumps(json_data, indent=2, sort_keys=True)
            print(json_string)
        client.stop()
    elif op == "PUT":
        if path is None:
            print("Path cannot be empty for a PUT request")
            usage()
            sys.exit(2)
        if payload is None:
            print("Payload cannot be empty for a PUT request")
            usage()
            sys.exit(2)
        response = client.put(path, payload)
        print(response.pretty_print())
        client.stop()
    elif op == "DISCOVER":
        #response = client.discover(path, client_callback, None, **ct)
        response = client.discover(path, None, None, **ct)
        if response is not None:
            print(response.pretty_print())
            if response.content_type == defines.Content_types["application/cbor"]:
                json_data = cbor.loads(response.payload)
                json_string = json.dumps(json_data, indent=2, sort_keys=True)
                print(json_string)
            if response.content_type == defines.Content_types["application/vnd.ocf+cbor"]:
                json_data = cbor.loads(response.payload)
                json_string = json.dumps(json_data, indent=2, sort_keys=True)
                print(json_string)
            if response.content_type == defines.Content_types["application/link-format"]:
                #json_data = cbor.loads(response.payload)
                #json_string = json.dumps(json_data, indent=2, sort_keys=True)
                print(response.payload.decode())
                # do_get(response.payload.decode(), client)
                client_callback_discovery(response)
        counter = 2
        try:
            while counter > 0:
                time.sleep(1)
                counter = counter - 1
                #client.stop()
        except KeyboardInterrupt:
            print("Client Shutdown")
            #client.stop()
        #execute_list()
        client.stop()
    else:
        print("Operation not recognized")
        usage()
        sys.exit(2)
if __name__ == '__main__': # pragma: no cover
    main()
# Source: tests/components/zwave_js/test_discovery.py (repo: tbarbette/core, license: Apache-2.0)
"""Test discovery of entities for device-specific schemas for the Z-Wave JS integration."""
async def test_iblinds_v2(hass, client, iblinds_v2, integration):
    """Test that an iBlinds v2.0 multilevel switch value is discovered as a cover."""
    node = iblinds_v2
    assert node.device_class.specific.label == "Unused"

    state = hass.states.get("light.window_blind_controller")
    assert not state

    state = hass.states.get("cover.window_blind_controller")
    assert state


async def test_ge_12730(hass, client, ge_12730, integration):
    """Test GE 12730 Fan Controller v2.0 multilevel switch is discovered as a fan."""
    node = ge_12730
    assert node.device_class.specific.label == "Multilevel Power Switch"

    state = hass.states.get("light.in_wall_smart_fan_control")
    assert not state

    state = hass.states.get("fan.in_wall_smart_fan_control")
    assert state


async def test_inovelli_lzw36(hass, client, inovelli_lzw36, integration):
    """Test LZW36 Fan Controller multilevel switch endpoint 2 is discovered as a fan."""
    node = inovelli_lzw36
    assert node.device_class.specific.label == "Unused"

    state = hass.states.get("light.family_room_combo")
    assert state.state == "off"

    state = hass.states.get("fan.family_room_combo_2")
    assert state
# Source: Final-Project/server/art/serializers.py (repo: wendy006/Web-Dev-Course, license: MIT)
from rest_framework import serializers
from .models import *


class CollectionSerializer(serializers.ModelSerializer):
    class Meta:
        model = Collection
        fields = ('collectionID', 'name', 'display_name', 'description', 'img_url')


class ArtSerializer(serializers.ModelSerializer):
    img_url = serializers.ReadOnlyField()
    thumb_url = serializers.ReadOnlyField()

    class Meta:
        model = Art
        fields = ('artID', 'title', 'filename', 'rarity', 'collection', 'img_url', 'thumb_url')


class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ('id', 'username', 'email', 'password', 'coins', 'art')
        extra_kwargs = {
            'password': {'write_only': True}
        }

    def create(self, validated_data):
        password = validated_data.pop('password', None)
        instance = self.Meta.model(**validated_data)
        if password is not None:
            instance.set_password(password)
        instance.save()
        return instance


class OwnSerializer(serializers.ModelSerializer):
    duplicates = serializers.ReadOnlyField()

    class Meta:
        model = Own
        fields = ('ownID', 'user', 'art', 'duplicates')


class SaleSerializer(serializers.ModelSerializer):
    class Meta:
        model = Sale
        fields = ('saleID', 'seller', 'buyer', 'ownership', 'art', 'price', 'available', 'sold', 'postDate', 'purchaseDate')
# Source: djangocms_redirect/migrations/0003_auto_20190810_1009.py (repo: vsalat/djangocms-redirect, license: BSD-3-Clause)
# Generated by Django 2.2.4 on 2019-08-10 08:09
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('djangocms_redirect', '0002_auto_20170321_1807'),
    ]

    operations = [
        migrations.AddField(
            model_name='redirect',
            name='catchall_redirect',
            field=models.BooleanField(default=False, help_text='If selected all the pages starting with the given string will be redirected to the given redirect path', verbose_name='Catchall redirect'),
        ),
        migrations.AddField(
            model_name='redirect',
            name='subpath_match',
            field=models.BooleanField(default=False, help_text='If selected all the pages starting with the given string will be redirected by replacing the matching subpath with the provided redirect path.', verbose_name='Subpath match'),
        ),
    ]
811898bc6c0124ca8489662af03fc5f7195a1876 | 5,191 | py | Python | octopart/scrape_octopart.py | nicholaschiang/dl-datasheets | 1c5ab2545a85c1ea7643fc655005259544617d90 | [
"MIT"
] | null | null | null | octopart/scrape_octopart.py | nicholaschiang/dl-datasheets | 1c5ab2545a85c1ea7643fc655005259544617d90 | [
"MIT"
] | null | null | null | octopart/scrape_octopart.py | nicholaschiang/dl-datasheets | 1c5ab2545a85c1ea7643fc655005259544617d90 | [
"MIT"
] | 1 | 2019-12-07T20:13:06.000Z | 2019-12-07T20:13:06.000Z | #! /usr/bin/env python
import sys
import json
import urllib
import urllib2
import time
import argparse
import re
# Category ID for Discrete Semiconductors > Transistors > BJTs
TRANSISTOR_ID = "b814751e89ff63d3"
def find_total_hits(search_query):
"""
Function: find_total_hits
--------------------
Returns the number of hits that correspond to the search query.
"""
url = "http://octopart.com/api/v3/parts/search"
# NOTE: Use your API key here (https://octopart.com/api/register)
url += "?apikey=09b32c6c"
args = [
('q', search_query),
('start', 0),
('limit', 1), #change to increase number of datasheets
('include[]','datasheets')
]
url += '&' + urllib.urlencode(args)
data = urllib.urlopen(url).read() # perform a SearchRequest
search_response = json.loads(data) # Grab the SearchResponse
# return number of hits
return search_response['hits']
def download_datasheets(search_query):
"""
Function: download_datasheets
--------------------
Uses the OctoPart API to download all datasheets associated with a given
set of search keywords.
"""
MAX_RESULTS = 100
counter = 0
total_hits = find_total_hits(search_query)
# print number of hits
print "[info] Search Response Hits: %s" % (total_hits)
# Calculate how many multiples of 100s of hits there are
num_hundreds = total_hits / MAX_RESULTS
print "[info] Performing %s iterations of %s results." % (num_hundreds, MAX_RESULTS)
for i in range(num_hundreds+1):
url = "http://octopart.com/api/v3/parts/search"
# NOTE: Use your API key here (https://octopart.com/api/register)
url += "?apikey=09b32c6c"
args = [
('q', search_query),
('start', (i * MAX_RESULTS)),
('limit', MAX_RESULTS), # change to edit number of datasheets
('include[]','datasheets')
# ('include[]','specs'),
# ('include[]','descriptions')
]
url += '&' + urllib.urlencode(args)
data = urllib.urlopen(url).read() # perform a SearchRequest
search_response = json.loads(data) # Grab the SearchResponse
# Iterate through the SearchResults in the SearchResponse
if not search_response.get('results'):
print "[error] no results returned in outer loop: " + str(i)
continue
for result in search_response['results']:
part = result['item'] # Grab the Part in the SearchResult
print ("[info] %s_%s..." % (part['brand']['name'].replace(" ", ""), part['mpn'])),
sys.stdout.flush()
# Iterate through list of datasheets for the given part
for datasheet in part['datasheets']:
# Grab the Datasheet URL
pdflink = datasheet['url']
if pdflink is not None:
# Download the PDF
try:
response = urllib2.urlopen(pdflink)
except urllib2.HTTPError, err:
if err.code == 404:
print "[error] Page not found!...",
elif err.code == 403:
print "[error] Access Denied!...",
else:
print "[error] HTTP Error code ", err.code,
continue  # advance to next datasheet rather than crashing
try:
filename = re.search('([^/]*)\.[^.]*$', datasheet['url']).group(1)
except AttributeError:
continue  # skip to next datasheet rather than crashing
file = open("../datasheets/%s.pdf" % filename, 'wb')
file.write(response.read())
file.close()
counter += 1 # Increment the counter of files downloaded
# NOTE: Not sure if this is necessary. Just a precaution.
time.sleep(0.4) # Limit ourselves to 3 HTTP Requests/second
print("DONE")
print("[info] %s Parts Completed." % MAX_RESULTS)
print("[info] COMPLETED: %s datasheets for the query were downloaded." % counter)
def parse_args():
"""
Function: parse_args
--------------------
Parse the arguments for the Octopart Datasheet Scraper
"""
# Define what commandline arguments can be accepted
parser = argparse.ArgumentParser()
parser.add_argument('query',metavar="\"SEARCH_KEYWORDS\"",
help="keywords to query in quotes (required)")
parser.add_argument('--version', action='version', version='%(prog)s 0.1.0')
args = parser.parse_args()
return args.query
# Main Function
if __name__ == "__main__":
reload(sys)
sys.setdefaultencoding('utf-8')
search_query = parse_args() # Parse commandline arguments
start_time = time.time()
print "[info] Download datasheets for %s" % search_query
download_datasheets(search_query)
finish_time = time.time()
print '[info] Took', finish_time - start_time, 'sec total.'
| 38.169118 | 97 | 0.571374 | 569 | 5,191 | 5.115993 | 0.363796 | 0.034009 | 0.019237 | 0.013054 | 0.231536 | 0.176572 | 0.138097 | 0.138097 | 0.138097 | 0.138097 | 0 | 0.014171 | 0.306685 | 5,191 | 135 | 98 | 38.451852 | 0.794665 | 0.199961 | 0 | 0.204545 | 0 | 0 | 0.191288 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.079545 | null | null | 0.136364 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
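The scraper above pages through results with `start = i * MAX_RESULTS` over `num_hundreds + 1` iterations. The offset arithmetic can be sketched in isolation (a Python 3 sketch; the helper name is illustrative, not part of the script). Ceiling division also avoids the extra empty page that the integer-division-plus-one approach requests when the hit count is an exact multiple of the page size:

```python
def page_offsets(total_hits, page_size=100):
    """Yield (start, limit) pairs covering all hits, mirroring the
    scraper's pagination loop."""
    pages = (total_hits + page_size - 1) // page_size  # ceiling division
    for i in range(pages):
        yield i * page_size, page_size
```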
811c9730165b0d65d78610ed7c5cc6d9f073addc | 5,039 | py | Python | sifter/grammar/grammar.py | russell/sifter | 03e85349fd2329439ae3f7eb3c1f484ba2ebf807 | [
"BSD-2-Clause"
] | null | null | null | sifter/grammar/grammar.py | russell/sifter | 03e85349fd2329439ae3f7eb3c1f484ba2ebf807 | [
"BSD-2-Clause"
] | null | null | null | sifter/grammar/grammar.py | russell/sifter | 03e85349fd2329439ae3f7eb3c1f484ba2ebf807 | [
"BSD-2-Clause"
] | 1 | 2020-08-19T06:30:47.000Z | 2020-08-19T06:30:47.000Z | # Parser based on RFC 5228, especially the grammar as defined in section 8. All
# references are to sections in RFC 5228 unless stated otherwise.
import ply.yacc
import sifter.grammar
from sifter.grammar.lexer import tokens
import sifter.handler
import logging
__all__ = ('parser',)
def parser(**kwargs):
return ply.yacc.yacc(**kwargs)
def p_commands_list(p):
"""commands : commands command"""
p[0] = p[1]
# section 3.2: REQUIRE command must come before any other commands
if p[2].RULE_IDENTIFIER == 'REQUIRE':
if any(command.RULE_IDENTIFIER != 'REQUIRE'
for command in p[0].commands):
log = logging.getLogger("sifter")
log.error(("REQUIRE command on line %d must come before any "
"other non-REQUIRE commands" % p.lineno(2)))
raise SyntaxError
# section 3.1: ELSIF and ELSE must follow IF or another ELSIF
elif p[2].RULE_IDENTIFIER in ('ELSIF', 'ELSE'):
if p[0].commands[-1].RULE_IDENTIFIER not in ('IF', 'ELSIF'):
log = logging.getLogger("sifter")
log.error(("ELSIF/ELSE command on line %d must follow an IF/ELSIF "
"command" % p.lineno(2)))
raise SyntaxError
p[0].commands.append(p[2])
def p_commands_empty(p):
"""commands : """
p[0] = sifter.grammar.CommandList()
def p_command(p):
"""command : IDENTIFIER arguments ';'
| IDENTIFIER arguments block"""
#print("COMMAND:", p[1], p[2], p[3])
tests = p[2].get('tests')
block = None
if p[3] != ';': block = p[3]
handler = sifter.handler.get('command', p[1])
if handler is None:
log = logging.getLogger("sifter")
log.error(("No handler registered for command '%s' on line %d" %
(p[1], p.lineno(1))))
raise SyntaxError
p[0] = handler(arguments=p[2]['args'], tests=tests, block=block)
def p_command_error(p):
"""command : IDENTIFIER error ';'
| IDENTIFIER error block"""
log = logging.getLogger("sifter")
log.error(("Syntax error in command definition after %s on line %d" %
(p[1], p.lineno(1))))
raise SyntaxError
def p_block(p):
"""block : '{' commands '}' """
# section 3.2: REQUIRE command must come before any other commands,
# which means it can't be in the block of another command
if any(command.RULE_IDENTIFIER == 'REQUIRE'
for command in p[2].commands):
log = logging.getLogger("sifter")
log.error(("REQUIRE command not allowed inside of a block (line %d)" %
(p.lineno(2))))
raise SyntaxError
p[0] = p[2]
def p_block_error(p):
"""block : '{' error '}'"""
log = logging.getLogger("sifter")
log.error(("Syntax error in command block that starts on line %d" %
(p.lineno(1),)))
raise SyntaxError
def p_arguments(p):
"""arguments : argumentlist
| argumentlist test
| argumentlist '(' testlist ')'"""
p[0] = { 'args' : p[1], }
if len(p) > 2:
if p[2] == '(':
p[0]['tests'] = p[3]
else:
p[0]['tests'] = [ p[2] ]
def p_testlist_error(p):
"""arguments : argumentlist '(' error ')'"""
log = logging.getLogger("sifter")
log.error(("Syntax error in test list that starts on line %d" % p.lineno(2)))
raise SyntaxError
def p_argumentlist_list(p):
"""argumentlist : argumentlist argument"""
p[0] = p[1]
p[0].append(p[2])
def p_argumentlist_empty(p):
"""argumentlist : """
p[0] = []
def p_test(p):
"""test : IDENTIFIER arguments"""
#print("TEST:", p[1], p[2])
tests = p[2].get('tests')
handler = sifter.handler.get('test', p[1])
if handler is None:
log = logging.getLogger("sifter")
log.error(("No handler registered for test '%s' on line %d" %
(p[1], p.lineno(1))))
raise SyntaxError
p[0] = handler(arguments=p[2]['args'], tests=tests)
def p_testlist_list(p):
"""testlist : test ',' testlist"""
p[0] = p[3]
p[0].insert(0, p[1])
def p_testlist_single(p):
"""testlist : test"""
p[0] = [ p[1] ]
def p_argument_stringlist(p):
"""argument : '[' stringlist ']'"""
p[0] = p[2]
def p_argument_string(p):
"""argument : string"""
# for simplicity, we treat all single strings as a string list
p[0] = [ p[1] ]
def p_argument_number(p):
"""argument : NUMBER"""
p[0] = p[1]
def p_argument_tag(p):
"""argument : TAG"""
p[0] = sifter.grammar.Tag(p[1])
def p_stringlist_error(p):
"""argument : '[' error ']'"""
log = logging.getLogger("sifter")
log.error(("Syntax error in string list that starts on line %d" %
p.lineno(1)))
raise SyntaxError
def p_stringlist_list(p):
"""stringlist : string ',' stringlist"""
p[0] = p[3]
p[0].insert(0, p[1])
def p_stringlist_single(p):
"""stringlist : string"""
p[0] = [ p[1] ]
def p_string(p):
"""string : QUOTED_STRING"""
p[0] = sifter.grammar.String(p[1])
| 29.467836 | 81 | 0.581663 | 695 | 5,039 | 4.14964 | 0.164029 | 0.017337 | 0.010402 | 0.078017 | 0.460125 | 0.407767 | 0.388696 | 0.345007 | 0.333911 | 0.29577 | 0 | 0.024494 | 0.254614 | 5,039 | 170 | 82 | 29.641176 | 0.743344 | 0.227228 | 0 | 0.352381 | 0 | 0 | 0.169852 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.209524 | false | 0 | 0.047619 | 0.009524 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
811ce2660d66f66cb91158b2b6a72ae00e0a02c5 | 3,904 | py | Python | multidoc_mnb.py | dropofwill/author-attr-experiments | a90e2743591358a6253f3b3664f5e398517f84bc | [
"Unlicense"
] | 2 | 2015-01-06T12:53:39.000Z | 2018-02-01T13:57:09.000Z | multidoc_mnb.py | dropofwill/author-attr-experiments | a90e2743591358a6253f3b3664f5e398517f84bc | [
"Unlicense"
] | null | null | null | multidoc_mnb.py | dropofwill/author-attr-experiments | a90e2743591358a6253f3b3664f5e398517f84bc | [
"Unlicense"
] | null | null | null | from sklearn import datasets
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cross_validation import train_test_split
from sklearn.cross_validation import cross_val_score
from sklearn.cross_validation import ShuffleSplit
from sklearn.cross_validation import Bootstrap
from sklearn.naive_bayes import MultinomialNB
from sklearn.grid_search import GridSearchCV
from scipy.stats import sem
from pprint import pprint
import numpy as np
import pylab as pl
import string
import matplotlib.pyplot as plt
# Calculates the mean of the scores with the standard deviation
def mean_sem(scores):
return ("Mean score: {0:.3f} (+/-{1:.3f})").format(np.mean(scores), sem(scores))
def test_docs(dir):
# Load documents
docs = datasets.load_files(container_path="../../sklearn_data/"+dir)
X, y = docs.data, docs.target
baseline = 1/float(len(list(np.unique(y))))
# Select Features via Bag of Words approach without stop words
#X = CountVectorizer(charset_error='ignore', stop_words='english', strip_accents='unicode', ).fit_transform(X)
X = TfidfVectorizer(charset_error='ignore', stop_words='english', analyzer='char', ngram_range=(2,4), strip_accents='unicode', sublinear_tf=True, max_df=0.5).fit_transform(X)
n_samples, n_features = X.shape
# sklearn's grid search
parameters = { 'alpha': np.logspace(-100,0,10)}
bv = Bootstrap(n_samples, n_iter=10, test_size=0.3, random_state=42)
mnb_gv = GridSearchCV(MultinomialNB(), parameters, cv=bv,)
#scores = cross_val_score(mnb_gv, X, y, cv=bv)
mnb_gv.fit(X, y)
mnb_gv_best_params = mnb_gv.best_params_.values()[0]
print mnb_gv.best_score_
print mnb_gv_best_params
# CV with Bootstrap
mnb = MultinomialNB(alpha=mnb_gv_best_params)
boot_scores = cross_val_score(mnb, X, y, cv=bv)
print mean_sem(boot_scores)
improvement = (mnb_gv.best_score_ - baseline) / baseline
rand_baseline.append(baseline)
test_results.append([mnb_gv.best_score_])
com_results.append(improvement)
sem_results.append(sem(boot_scores))
def graph(base_list, results_list, com_list, arange):
N=arange
base=np.array(base_list)
res=np.array(results_list)
com = np.array(com_list)
ind = np.arange(N) # the x locations for the groups
width = 0.3 # the width of the bars: can also be len(x) sequence
#fig, ax = plt.sublots()
p1 = plt.bar(ind, base, width, color='r')
p2 = plt.bar(ind+0.3, res, width, color='y')
p3 = plt.bar(ind+0.6, com, width, color='b')
plt.rcParams['figure.figsize'] = 10, 7.5
plt.rcParams['axes.grid'] = True
plt.gray()
plt.ylabel('Accuracy')
plt.title('AAAC Problem Accuracy')
plt.yticks(np.arange(0,3,30))
plt.xticks(np.arange(0,13,13))
#plt.set_xticks(('A','B','C','D','E','F','G','H','I','J','K','L','M'))
plt.legend( (p1[0], p2[0], p3[0]), ('Baseline', 'Algorithm', 'Improvement'))
plt.show()
rand_baseline = list()
test_results = list()
sem_results = list()
com_results = list()
#test_docs("problemA")
for i in string.uppercase[:13]:
test_docs("problem"+i)
#graph(rand_baseline,test_results,com_results,13)
import os
import time as tm
sub_dir = "Results/"
location = "multiDoc" + tm.strftime("%Y%m%d-%H%M%S") + ".txt"
with open(os.path.join(sub_dir, location), 'w') as myFile:
myFile.write(str(rand_baseline))
myFile.write("\n")
myFile.write(str(test_results))
myFile.write("\n")
myFile.write(str(sem_results))
myFile.write("\n")
myFile.write(str(com_results))
# CV with ShuffleSpit
'''
cv = ShuffleSplit(n_samples, n_iter=100, test_size=0.2, random_state=0)
test_scores = cross_val_score(mnb, X, y, cv=cv)
print np.mean(test_scores)
'''
# Single run through
'''
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
print X_train.shape
print y_train.shape
print X_test.shape
print y_test.shape
mnb = MultinomialNB().fit(X_train, y_train)
print mnb.score(X_test, y_test)
''' | 27.111111 | 175 | 0.733863 | 636 | 3,904 | 4.323899 | 0.319182 | 0.018182 | 0.022909 | 0.037818 | 0.159273 | 0.104727 | 0.042909 | 0.018909 | 0 | 0 | 0 | 0.017748 | 0.119621 | 3,904 | 144 | 176 | 27.111111 | 0.782368 | 0.157018 | 0 | 0.04 | 0 | 0 | 0.072816 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.226667 | null | null | 0.053333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
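The `mean_sem` helper above combines `np.mean` with `scipy.stats.sem`, and `scipy.stats.sem` uses the sample standard deviation (ddof=1) by default. A dependency-free sketch of the same statistic:

```python
import math

def mean_sem(scores):
    """Return (mean, standard error of the mean), using the sample
    variance (ddof=1) to match scipy.stats.sem's default."""
    n = len(scores)
    mean = sum(scores) / float(n)
    variance = sum((s - mean) ** 2 for s in scores) / (n - 1)
    return mean, math.sqrt(variance) / math.sqrt(n)
```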
811eb205fb191ad48270915e49e393d586962cb9 | 26,184 | py | Python | smipyping/_targetstable.py | KSchopmeyer/smipyping | 9c60b3489f02592bd9099b8719ca23ae43a9eaa5 | [
"MIT"
] | null | null | null | smipyping/_targetstable.py | KSchopmeyer/smipyping | 9c60b3489f02592bd9099b8719ca23ae43a9eaa5 | [
"MIT"
] | 19 | 2017-10-18T15:31:25.000Z | 2020-03-04T19:31:59.000Z | smipyping/_targetstable.py | KSchopmeyer/smipyping | 9c60b3489f02592bd9099b8719ca23ae43a9eaa5 | [
"MIT"
] | null | null | null | # (C) Copyright 2017 Inova Development Inc.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Define the base of targets (i.e. systems to be tested)
TargetID = Column(Integer(11), primary_key=True)
IPAddress = Column(String(15), nullable=False)
CompanyID = Column(Integer(11), ForeignKey("Companies.CompanyID"))
Namespace = Column(String(30), nullable=False)
SMIVersion = Column(String(15), nullable=False)
Product = Column(String(30), nullable=False)
Principal = Column(String(30), nullable=False)
Credential = Column(String(30), nullable=False)
CimomVersion = Column(String(30), nullable=False)
InteropNamespace = Column(String(30), nullable=False)
Notify = Column(Enum('Enabled', 'Disabled'), default='Disabled')
NotifyUsers = Column(String(12), nullable=False)
ScanEnabled = Column(Enum('Enabled', 'Disabled'), default='Enabled')
Protocol = Column(String(10), default='http')
Port = Column(String(10), nullable=False)
"""
# TODO change ip_address to hostname where host name is name : port
from __future__ import print_function, absolute_import
import os
import csv
import re
from collections import OrderedDict
from textwrap import wrap
import six
from mysql.connector import Error as mysqlerror
from ._dbtablebase import DBTableBase
from ._mysqldbmixin import MySQLDBMixin
from ._common import get_url_str
from ._logging import AUDIT_LOGGER_NAME, get_logger
from ._companiestable import CompaniesTable
__all__ = ['TargetsTable']
class TargetsTable(DBTableBase):
"""
Class representing the targets db table.
This base contains information on the targets, host systems, etc. in the
environment.
The factory method should be used to construct a new TargetsTable object
since that creates the correct object for the defined database type.
"""
table_name = 'Targets'
key_field = 'TargetID'
# Fields that are required to create new records
required_fields = [
'IPAddress', 'CompanyID', 'Namespace',
'SMIVersion', 'Product', 'Principal', 'Credential',
'CimomVersion', 'InteropNamespace', 'Notify', 'NotifyUsers',
'ScanEnabled', 'Protocol', 'Port']
# All fields in each record.
fields = [key_field] + required_fields
join_fields = ['CompanyName']
all_fields = fields + join_fields
hints = {
'IPAddress': "Host name or ip address",
'CompanyID': "DB id of company",
'Namespace': "User namespace",
'SMIVersion': "SMI version",
'Product': "Product name",
'Principal': "User Name to access target",
'Credential': "User password to access target",
'CimomVersion': "Version of CIMOM",
'InteropNamespace': "Interop Namespace name",
'Notify': "'Enabled' if users to be notified of issues, else "
"'Disabled'",
'NotifyUsers': "List of UserIDs to notify",
'ScanEnabled': "Enabled if this target to be scanned",
'Protocol': '"http" or "https"',
'Port': "Integer defining WBEM server port."}
# # Defines each record for the data base and outputs.
# # The Name is the database name for the property
# # The value tuple is display name and max width for the record
table_format_dict = OrderedDict([
('TargetID', ('ID', 2, int)),
('CompanyName', ('CompanyName', 12, str)),
('Namespace', ('Namespace', 12, str)),
('SMIVersion', ('SMIVersion', 12, str)),
('Product', ('Product', 15, str)),
('Principal', ('Principal', 12, str)),
('Credential', ('Credential', 12, str)),
('CimomVersion', ('CimomVersion', 15, str)),
('IPAddress', ('IPAddress', 12, str)),
('InteropNamespace', ('Interop', 8, str)),
('Notify', ('Notify', 12, str)),
('NotifyUsers', ('NotifyUsers', 12, str)),
('Protocol', ('Prot', 5, str)),
('Port', ('Port', 4, int)),
('ScanEnabled', ('Enabled', 6, str)),
]) # noqa: E123
def __init__(self, db_dict, db_type, verbose, output_format):
"""Initialize the abstract Targets instance.
This is the base for all other targets tables. It defines the common
structure of all targets tables, including field names and common methods.
Parameters:
db_dict (:term: `dictionary')
Dictionary containing all of the parameters to open the database
defined by the db_dict attribute.
db_type (:term: `string`)
String defining one of the allowed database types for the
target database.
verbose (:class:`py:bool`)
Boolean. If true detailed info is displayed on the processing
of the TargetData class
output_format (:term:`string`)
String defining one of the legal report output formats. If not
provided, the default is a simple report format.
"""
super(TargetsTable, self).__init__(db_dict, db_type, verbose)
self.output_format = output_format
# def __str__(self):
# # # TODO this and __repr__ do not really match.
# # """String info on targetdata. TODO. Put more info here"""
# # return ('type=%s db=%s, len=%s' % (self.db_type, self.get_dbdict(),
# # # len(self.data_dict)))
# def __repr__(self):
# # """Rep of target data"""
# # return ('Targetdata db_type %s, rep count=%s' %
# # # (self.db_type, len(self.data_dict)))
def test_fieldnames(self, fields):
"""Test a list of field names. This test generates an exception,
KeyError if a field in fields is not in the table
"""
for field in fields:
self.table_format_dict[field] # pylint: disable=pointless-statement
def get_dbdict(self):
"""Get string for the db_dict"""
return '%s' % self.db_dict
@classmethod
def factory(cls, db_dict, db_type, verbose, output_format='simple'):
"""Factory method to select subclass based on database type (db_type).
Currently the types sql and csv are supported.
Returns instance object of the defined provider type.
"""
inst = None
if verbose:
print('targetdata factory datafile %s dbtype %s verbose %s'
% (db_dict, db_type, verbose))
if db_type == ('csv'):
inst = CsvTargetsTable(db_dict, db_type, verbose,
output_format=output_format)
elif db_type == ('mysql'):
inst = MySQLTargetsTable(db_dict, db_type, verbose,
output_format=output_format)
else:
raise ValueError('Invalid targets factory db_type %s' % db_type)
if verbose:
print('Resulting targets factory inst %r' % inst)
return inst
def get_field_list(self):
"""Return a list of the base table field names in the order defined."""
return list(self.table_format_dict)
def get_format_dict(self, name):
"""Return tuple of display name and length for name."""
return self.table_format_dict[name]
def get_enabled_targetids(self):
"""Get list of target ids that are marked enabled."""
return [x for x in self.data_dict if not self.disabled_target_id(x)]
def get_disabled_targetids(self):
"""Get list of target ids that are marked disabled"""
return [x for x in self.data_dict
if self.disabled_target_id(x)]
# TODO we have multiple of these. See get dict_for_host,get_hostid_list
def get_targets_host(self, host_data):
"""
If a record for `host_data` exists return that record,
otherwise return None.
There may be multiple ipaddress, port entries for a
single ipaddress, port in the database
Parameters:
host_id(tuple of hostname or ipaddress and port)
Returns list of targetdata keys
"""
# TODO clean up for PY 3
return_list = []
for key, value in self.data_dict.items():
port = value["Port"]
# TODO port from database is a string. Should be int internal.
if value["IPAddress"] == host_data[0] and int(port) == host_data[1]:
return_list.append(key)
return return_list
def get_target(self, targetid):
"""
Get the target data for the parameter target_id.
This is an alternative to using [id] directly. It does an additional check
for correct type for target_id
Returns:
target as dictionary
Exceptions:
KeyError if target not in targets dictionary
"""
if not isinstance(targetid, six.integer_types):
targetid = int(targetid)
return self.data_dict[targetid]
def filter_targets(self, ip_filter=None, company_name_filter=None):
"""
Filter for match of ip_filter and companyname filter if they exist
and return list of any targets that match.
The filters are regex strings.
"""
rtn = OrderedDict()
for key, value in self.data_dict.items():
if ip_filter and re.match(ip_filter, value['IPAddress']):
rtn[key] = value
if company_name_filter and \
re.match(value['CompanyName'], company_name_filter):
rtn[key] = value
return rtn
def build_url(self, targetid):
"""Get the string representing the url for targetid. Gets the
Protocol, IPaddress and port and uses the common get_url_str to
create a string. Port info is included only if it is not the
WBEM CIM-XML standard definitions.
"""
target = self[targetid]
return get_url_str(target['Protocol'], target['IPAddress'],
target['Port'])
def get_hostid_list(self, ip_filter=None, company_name_filter=None):
"""
Get all WBEM Server ipaddresses in the targets base.
Returns list of IP addresses:port entries.
TODO: Does not include port right now.
"""
output_list = []
# TODO clean up for python 3
for _id, value in self.data_dict.items():
if self.verbose:
print('get_hostid_list value %s' % (value,))
output_list.append(value['IPAddress'])
return output_list
def tbl_hdr(self, record_list):
"""Return a list of all the column headers from the record_list."""
hdr = []
for name in record_list:
value = self.get_format_dict(name)
hdr.append(value[0])
return hdr
def get_notifyusers(self, targetid):
"""
Get list of entries in the notify users field and split into python
list and return the list of integers representing the userids.
This list stored in db as string of integers separated by commas.
Returns None if there is no data in NotifyUsers.
"""
notify_users = self[targetid]['NotifyUsers']
if notify_users:
notify_users_list = notify_users.split(',')
notify_users_list = [int(userid) for userid in notify_users_list]
return notify_users_list
return None
def format_record(self, record_id, fields, fold=False):
"""Return the fields defined in field_list for the record_id in
display format.
String fields will be folded if their width is greater than the
specification in the format_dictionary and fold=True
"""
# TODO can we make this a std cvt function.
target = self.get_target(record_id)
line = []
for field_name in fields:
field_value = target[field_name]
fmt_value = self.get_format_dict(field_name)
max_width = fmt_value[1]
field_type = fmt_value[2]
if field_type is str and field_value:
if max_width < len(field_value):
line.append('\n'.join(wrap(field_value, max_width)))
else:
line.append('%s' % field_value)
else:
line.append('%s' % field_value)
return line
def disabled_target(self, target_record): # pylint: disable=no-self-use
"""
If target_record disabled, return true, else return false.
"""
val = target_record['ScanEnabled'].lower()
if val == 'enabled':
return False
if val == 'disabled':
return True
raise ValueError('ScanEnabled field must contain "Enabled" or "Disabled"'
' string. %s is invalid.' % val)
def disabled_target_id(self, targetid):
"""
Return True if the target record for this target_id is marked
disabled. Otherwise return False.
Parameters:
target_id(:term:`integer`)
Valid target id for the Targets table.
Returns: (:class:`py:bool`)
True if this target id disabled
Exceptions:
KeyError if target_id not in database
"""
return(self.disabled_target(self.data_dict[targetid]))
def get_output_width(self, col_list):
"""
Get the width of a table from the column names in the list
"""
total_width = 0
for name in col_list:
value = self.get_format_dict(name)
total_width += value[1]
return total_width
def get_unique_creds(self):
"""
Get the set of Credentials and Principal that represents the
unique combination of both. The result could be used to test with
all Principals/Credentials knows in the db.
Return list of targetIDs that represent unique sets of Principal and
Credential
"""
creds = {k: '%s%s' % (v['Principal'], v['Credential'])
for k, v in self.data_dict.items()}
ucreds = dict([[v, k] for k, v in creds.items()])
unique_keys = dict([[v, k] for k, v in ucreds.items()])
unique_creds = [(self.data_dict[k]['Principal'],
self.data_dict[k]['Credential']) for k in unique_keys]
return unique_creds
class SQLTargetsTable(TargetsTable):
"""
Subclass of Targets data for all SQL databases. Subclasses of this class
support specialized sql databases.
"""
def __init__(self, db_dict, dbtype, verbose, output_format):
"""Pass through to SQL"""
if verbose:
print('SQL Database type %s verbose=%s' % (db_dict, verbose))
super(SQLTargetsTable, self).__init__(db_dict, dbtype, verbose,
output_format)
self.connection = None
class MySQLTargetsTable(SQLTargetsTable, MySQLDBMixin):
"""
This subclass of TargetsTable processes targets information from an SQL
database.
Generate the targetstable from the sql database targets table and
the companies table, by mapping the data to the dictionary defined
for targets
"""
# TODO filename is config file name, not actual file name.
def __init__(self, db_dict, dbtype, verbose, output_format):
"""Read the input file into a dictionary."""
super(MySQLTargetsTable, self).__init__(db_dict, dbtype, verbose,
output_format)
self.connectdb(db_dict, verbose)
self._load_table()
self._load_joins()
def _load_joins(self):
"""
Load the tables that would normally be joins. In this case it is the
companies table. Move the companyName into the targets table
TODO we should not be doing this in this manner but with a
join.
"""
# Get companies table and insert into targets table:
# TODO in smipyping name is db_dict. Elsewhere it is db_info
companies_tbl = CompaniesTable.factory(self.db_dict,
self.db_type,
self.verbose)
try:
# set the companyname into the targets table
for target_key in self.data_dict:
target = self.data_dict[target_key]
if target['CompanyID'] in companies_tbl:
company = companies_tbl[target['CompanyID']]
target['CompanyName'] = company['CompanyName']
else:
target['CompanyName'] = "TableError CompanyID %s" % \
target['CompanyID']
except Exception as ex:
raise ValueError('Error: putting Company Name in table %r error %s'
% (self.db_dict, ex))
def update_fields(self, targetid, changes):
"""
Update the database record defined by targetid with the dictionary
of items defined by changes where each item is an entry in the
target record. Update does NOT test if the new value is the same
as the original value.
"""
cursor = self.connection.cursor()
# dynamically build the update sql based on the changes dictionary
set_names = "SET "
values = []
comma = False
for key, value in changes.items():
if comma:
set_names = set_names + ", "
else:
comma = True
set_names = set_names + "{0} = %s".format(key)
values.append(value)
values.append(targetid)
sql = "Update Targets " + set_names
# append targetid component
sql = sql + " WHERE TargetID=%s"
# Record the original data for the audit log.
original_data = {}
target_record = self.get_target(targetid)
for change in changes:
original_data[change] = target_record[change]
try:
cursor.execute(sql, tuple(values))
self.connection.commit()
audit_logger = get_logger(AUDIT_LOGGER_NAME)
audit_logger.info('TargetsTable TargetID: %s, update fields: %s, '
'original fields: %s',
targetid, changes, original_data)
except Exception as ex:
self.connection.rollback()
audit_logger = get_logger(AUDIT_LOGGER_NAME)
audit_logger.error('TargetsTable TargetID: %s failed SQL update. '
'SQL: %s Changes: %s Exception: %s',
targetid, sql, changes, ex)
raise ex
finally:
self._load_table()
self._load_joins()
cursor.close()
def activate(self, targetid, activate_flag):
"""
Activate or deactivate the table entry defined by the
targetid parameter to the value defined by the activate_flag
Parameters:
targetid (:term:`py:integer`):
The database key property for this table
activate_flag (:class:`py:bool`):
Next state that will be set into the database for this target.
Since the db field is an enum it actually sets the 'Enabled' or
'Disabled' string into the field.
"""
cursor = self.connection.cursor()
enabled_kw = 'Enabled' if activate_flag else 'Disabled'
sql = 'UPDATE Targets SET ScanEnabled = %s WHERE TargetID = %s'
try:
cursor.execute(sql, (enabled_kw, targetid)) # noqa F841
self.connection.commit()
audit_logger = get_logger(AUDIT_LOGGER_NAME)
audit_logger.info('TargetTable TargetId %s,set scanEnabled to %s',
targetid, enabled_kw)
except mysqlerror as ex:
audit_logger = get_logger(AUDIT_LOGGER_NAME)
audit_logger.error('TargetTable userid %s failed SQL change '
'ScanEnabled. SQL=%s '
'Change to %s exception %s: %s',
targetid, sql, enabled_kw, ex.__class__.__name__,
ex)
self.connection.rollback()
raise ex
finally:
self._load_table()
self._load_joins()
    def delete(self, targetid):
        """
        Delete the target in the targets table defined by the targetid.
        """
        cursor = self.connection.cursor()
        sql = "DELETE FROM Targets WHERE TargetID=%s"
        try:
            # pylint: disable=unused-variable
            mydata = cursor.execute(sql, (targetid,))  # noqa F841
            self.connection.commit()
            audit_logger = get_logger(AUDIT_LOGGER_NAME)
            audit_logger.info('TargetTable TargetId %s Deleted', targetid)
        except mysqlerror as ex:
            audit_logger = get_logger(AUDIT_LOGGER_NAME)
            audit_logger.error('TargetTable targetid %s failed SQL DELETE. '
                               'SQL=%s exception %s: %s',
                               targetid, sql, ex.__class__.__name__, ex)
            self.connection.rollback()
            raise ex
        finally:
            self._load_table()
            self._load_joins()
            self.connection.close()
    def insert(self, fields):
        """
        Write a new record to the database containing the fields defined in
        the input.

        Parameters:

          fields (dict):
            Dictionary of fields to be inserted into the table. There is
            one entry in the dictionary for each field to be inserted.

        Exceptions:
        """
        cursor = self.connection.cursor()
        placeholders = ', '.join(['%s'] * len(fields))
        columns = ', '.join(fields.keys())
        sql = "INSERT INTO %s ( %s ) VALUES ( %s )" % (self.table_name,
                                                       columns,
                                                       placeholders)
        try:
            cursor.execute(sql, list(fields.values()))
            self.connection.commit()
            new_targetid = cursor.lastrowid
            audit_logger = get_logger(AUDIT_LOGGER_NAME)
            audit_logger.info('TargetsTable TargetId %s added. %s',
                              new_targetid, fields)
        except mysqlerror as ex:
            audit_logger = get_logger(AUDIT_LOGGER_NAME)
            audit_logger.error('TargetTable INSERT failed SQL update. SQL=%s. '
                               'data=%s. Exception %s: %s', sql, fields,
                               ex.__class__.__name__, ex)
            self.connection.rollback()
            raise ex
        finally:
            self._load_table()
            self._load_joins()
            self.connection.close()
class CsvTargetsTable(TargetsTable):
    """Comma Separated Values form of the Target base."""

    def __init__(self, db_dict, dbtype, verbose, output_format):
        """Read the input file into a dictionary."""
        super(CsvTargetsTable, self).__init__(db_dict, dbtype, verbose,
                                              output_format)

        fn = db_dict['targetsfilename']
        self.filename = fn

        # If the filename is not an absolute path, the data file must be
        # either in the local directory or the same directory as the
        # config file defined by the db_dict entry directory
        if os.path.isabs(fn):
            if not os.path.isfile(fn):
                raise ValueError('CSV file %s does not exist ' % fn)
            else:
                self.filename = fn
        else:
            if os.path.isfile(fn):
                self.filename = fn
            else:
                full_fn = os.path.join(db_dict['directory'], fn)
                if not os.path.isfile(full_fn):
                    raise ValueError('CSV file %s does not exist '
                                     'in local directory or config '
                                     'directory %s' %
                                     (fn, db_dict['directory']))
                else:
                    self.filename = full_fn

        with open(self.filename) as input_file:
            reader = csv.DictReader(input_file)
            # create dictionary (id = key) with dictionary for
            # each set of entries
            result = {}
            for row in reader:
                key = int(row['TargetID'])
                if key in result:
                    # duplicate row handling
                    print('ERROR. Duplicate Id in table: %s\nrow=%s' %
                          (key, row))
                    raise ValueError('Input Error. duplicate Id')
                else:
                    result[key] = row
        self.data_dict = result
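# The constructor above loads the CSV into a dict keyed by integer TargetID
# and rejects duplicate keys. A self-contained sketch of that load step
# (load_rows_by_id is a hypothetical helper used only for illustration):

```python
import csv
import io


def load_rows_by_id(csv_text, id_field):
    """Load CSV rows into a dict keyed by an integer ID column.

    Raises ValueError on a duplicate ID instead of silently
    overwriting the earlier row.
    """
    result = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        key = int(row[id_field])
        if key in result:
            raise ValueError('Input Error. duplicate Id: %s' % key)
        result[key] = row
    return result


rows = load_rows_by_id("TargetID,Name\n1,alpha\n2,beta\n", "TargetID")
# rows[1]["Name"] == "alpha"
```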
    def write_updated_record(self, record_id):
        """Backup the existing file and write the new one.
        With csv it writes the whole file back.
        """
        backfile = '%s.bak' % self.filename
        # TODO does this cover directories/clean up for possible exceptions.
        if os.path.isfile(backfile):
            os.remove(backfile)
        os.rename(self.filename, backfile)
        self.write_file(self.filename)

    def write_file(self, file_name):
        """Write the current Target base to the named file."""
        with open(file_name, 'w', newline='') as f:
            writer = csv.DictWriter(f, fieldnames=self.get_field_list())
            writer.writeheader()
            for key, value in sorted(self.data_dict.items()):
                writer.writerow(value)
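# write_updated_record() removes the old backup, renames the current file to
# .bak, then rewrites the whole file. The TODO above notes the failure cases
# are not covered; one safer variant is to write to a temp file first so a
# failed write can never leave a half-written file at the target path
# (write_csv_with_backup is a hypothetical sketch, not the class's method):

```python
import csv
import os
import tempfile


def write_csv_with_backup(path, fieldnames, rows):
    """Write rows to path, keeping the previous contents in path + '.bak'.

    The new data goes to a temp file in the same directory first; only
    after a successful write is it moved over the target path.
    """
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or '.')
    with os.fdopen(fd, 'w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        for row in rows:
            writer.writerow(row)
    if os.path.isfile(path):
        os.replace(path, path + '.bak')
    os.replace(tmp_path, path)
```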
# File: QUANTAXIS/QASU/crawl_eastmoney.py
# Repo: QUANTAXISER/QUANTAXIS (fork: frosthaoz/QUANTAXIS), license: MIT

import os
from QUANTAXIS.QASetting import QALocalize
# from QUANTAXIS_CRAWLY.run_selenium_alone import (read_east_money_page_zjlx_to_sqllite, open_chrome_driver, close_chrome_dirver)
from QUANTAXIS_CRAWLY.run_selenium_alone import *
import urllib.request
import pandas as pd
import time
from QUANTAXIS.QAUtil import DATABASE


def QA_request_eastmoney_zjlx(param_stock_code_list):
    # switched to fetching the zjlx (money flow) page directly
    strUrl = "http://data.eastmoney.com/zjlx/{}.html".format(
        param_stock_code_list[0])

    # polite delay between requests
    time.sleep(1.223)

    response = urllib.request.urlopen(strUrl)
    content = response.read()

    # 🛠 todo: switch to re regular-expression matching
    strings = content.decode("utf-8", "ignore")
    string_lines = strings.split("\r\n")

    # for aline in string_lines:
    #     aline = aline.strip()
    #     if '_stockCode' in aline:
    #         _stockCode = aline[len('var _stockCode = '):]
    #         _stockCode = _stockCode.strip("\"\"\,")
    #     if '_stockMarke' in aline:
    #         _stockMarke = aline[len('_stockMarke = '):]
    #         _stockMarke = _stockMarke.strip("\"\"\,")
    #         # codes starting with 60 -> _stockMarke = 1
    #         # codes starting with 00 -> _stockMarke = 2
    #         # codes starting with 30 -> _stockMarke = 2
    #     if '_stockName' in aline:
    #         _stockName = aline[len('_stockName = '):]
    #         _stockName = _stockName.strip("\"\"\,")
    #     if '_market' in aline:
    #         _market = aline[len('_market = '):]
    #         _market = _market.strip("\"\"\,")
    #         break
    # _market = 'hsa'
    # print(_stockCode)
    # print(_stockMarke)
    # print(_stockName)
    # print(_market)

    values = []
    for aline in string_lines:
        aline = aline.strip()
        if 'EM_CapitalFlowInterface' in aline:
            # print(aline)
            # print('------------------')
            aline = aline.strip()
            if aline.startswith('var strUrl = '):
                if 'var strUrl = ' in aline:
                    aline = aline[len('var strUrl = '):]
                    values = aline.split('+')
                    # print(values)
                break
            # print('------------------')

    print(values)
    for iStockCode in range(len(param_stock_code_list)):
        requestStr = ""
        strCode = param_stock_code_list[iStockCode]

        if strCode[0:2] == '60':
            _stockMarke = '1'
        elif strCode[0:2] == '00' or strCode[0:2] == '30':
            _stockMarke = '2'
        else:
            print(strCode + " is not supported yet; only stock codes "
                            "starting with 60, 00 or 30 are handled")
            return

        for iItem in values:
            if '_stockCode' in iItem:
                requestStr = requestStr + param_stock_code_list[iStockCode]
            elif '_stockMarke' in iItem:
                requestStr = requestStr + _stockMarke
            else:
                if 'http://ff.eastmoney.com/' in iItem:
                    requestStr = 'http://ff.eastmoney.com/'
                else:
                    iItem = iItem.strip(' "')
                    iItem = iItem.rstrip(' "')
                    requestStr = requestStr + iItem

        # print(requestStr)
        # polite delay between requests
        time.sleep(1.456)

        response = urllib.request.urlopen(requestStr)
        content2 = response.read()
        # print(content2)
        strings = content2.decode("utf-8", "ignore")
        # print(strings)
        list_data_zjlx = []

        if 'var aff_data=({data:[["' in strings:
            leftChars = strings[len('var aff_data=({data:[["'):]
            # print(leftChars)
            dataArrays = leftChars.split(',')
            # print(dataArrays)
            for aItemIndex in range(0, len(dataArrays), 13):
                '''
                Each group of 13 fields is:
                date
                closing price
                percent change
                main-force net inflow:        net amount, net share
                extra-large-order net inflow: net amount, net share
                large-order net inflow:       net amount, net share
                medium-order net inflow:      net amount, net share
                small-order net inflow:       net amount, net share
                '''
                dict_row = {}
                dict_row['stock_code'] = param_stock_code_list[iStockCode]

                # date
                # print(aItemIndex)
                data01 = dataArrays[aItemIndex]
                data01 = data01.strip('"')
                # print('date', data01)
                dict_row['date'] = data01

                # main-force net inflow, net amount
                data02 = dataArrays[aItemIndex + 1]
                data02 = data02.strip('"')
                # print('main-force net inflow, net amount', data02)
                dict_row['zljll_je_wy'] = data02

                # main-force net inflow, net share
                data03 = dataArrays[aItemIndex + 2]
                data03 = data03.strip('"')
                # print('main-force net inflow, net share', data03)
                # date01 = aItemData.strip('[\'\'')
                dict_row['zljll_jzb_bfb'] = data03

                # extra-large-order net inflow, net amount
                data04 = dataArrays[aItemIndex + 3]
                data04 = data04.strip('"')
                # print('extra-large-order net inflow, net amount', data04)
                dict_row['cddjll_je_wy'] = data04

                # extra-large-order net inflow, net share
                data05 = dataArrays[aItemIndex + 4]
                data05 = data05.strip('"')
                # print('extra-large-order net inflow, net share', data05)
                dict_row['cddjll_je_jzb'] = data05

                # large-order net inflow, net amount
                data06 = dataArrays[aItemIndex + 5]
                data06 = data06.strip('"')
                # print('large-order net inflow, net amount', data06)
                dict_row['ddjll_je_wy'] = data06

                # large-order net inflow, net share
                data07 = dataArrays[aItemIndex + 6]
                data07 = data07.strip('"')
                # print('large-order net inflow, net share', data07)
                dict_row['ddjll_je_jzb'] = data07

                # medium-order net inflow, net amount
                data08 = dataArrays[aItemIndex + 7]
                data08 = data08.strip('"')
                # print('medium-order net inflow, net amount', data08)
                dict_row['zdjll_je_wy'] = data08

                # medium-order net inflow, net share
                data09 = dataArrays[aItemIndex + 8]
                data09 = data09.strip('"')
                # print('medium-order net inflow, net share', data09)
                dict_row['zdjll_je_jzb'] = data09

                # small-order net inflow, net amount
                data10 = dataArrays[aItemIndex + 9]
                data10 = data10.strip('"')
                # print('small-order net inflow, net amount', data10)
                dict_row['xdjll_je_wy'] = data10

                # small-order net inflow, net share
                data11 = dataArrays[aItemIndex + 10]
                data11 = data11.strip('"')
                # print('small-order net inflow, net share', data11)
                dict_row['xdjll_je_jzb'] = data11

                # closing price
                data12 = dataArrays[aItemIndex + 11]
                data12 = data12.strip('"')
                # print('closing price', data12)
                dict_row['close_price'] = data12

                # percent change
                data13 = dataArrays[aItemIndex + 12]
                data13 = data13.strip('"')
                data13 = data13.strip('"]]})')
                # print('percent change', data13)
                dict_row['change_price'] = data13

                # successfully read one record
                # print("successfully read one record")
                # print(dict_row)
                list_data_zjlx.append(dict_row)

        # print(list_data_zjlx)
        df = pd.DataFrame(list_data_zjlx)
        # print(df)

        client = DATABASE
        coll_stock_zjlx = client.eastmoney_stock_zjlx
        # coll_stock_zjlx.insert_many(QA_util_to_json_from_pandas(df))
        for i in range(len(list_data_zjlx)):
            aRec = list_data_zjlx[i]
            # 🛠 todo: fetch the day's money flow after the close; intraday
            # values are only point-in-time snapshots
            ret = coll_stock_zjlx.find_one(aRec)
            if ret is None:
                coll_stock_zjlx.insert_one(aRec)
                print("🤑 inserted new record ", aRec)
            else:
                print("😵 record already exists ", ret)
'''
Used as a test case: fetch the same data with selenium and compare whether
it matches what the request-based fetch returns.
'''


def QA_read_eastmoney_zjlx_web_page_to_sqllite(stockCodeList=None):
    # todo 🛠 check that the stock codes are valid
    # todo 🛠 QALocalize: read the driver file from a fixed location in the
    # QALocalize directory
    print("📨 current working directory: ", os.getcwd())
    path_check = os.getcwd() + "/QUANTAXIS_WEBDRIVER"
    if not os.path.exists(path_check):
        print("😵 check whether the current path contains the "
              "QUANTAXIS_WEBDRIVER (selenium driver) directory 😰 ")
        return
    else:
        print(os.getcwd() + "/QUANTAXIS_WEBDRIVER", " directory exists 😁")
        print("")

    # path_for_save_data = QALocalize.download_path + "/eastmoney_stock_zjlx"
    # isExists = os.path.exists(path_for_save_data)
    # if isExists == False:
    #     os.mkdir(path_for_save_data)
    #     isExists = os.path.exists(path_for_save_data)
    #     if isExists == True:
    #         print(path_for_save_data, "did not exist; directory created 😢")
    #     else:
    #         print(path_for_save_data, "did not exist and could not be "
    #               "created 🤮, possibly missing permissions 🈲")
    #         return
    # else:
    #     print(path_for_save_data, "directory exists! ready to read data 😋")

    browser = open_chrome_driver()

    for indexCode in range(len(stockCodeList)):
        # full_path_name = path_for_save_data + "/" + \
        #     stockCodeList[indexCode] + "_zjlx.sqlite.db"
        read_east_money_page_zjlx_to_sqllite(stockCodeList[indexCode], browser)
        pass

    close_chrome_dirver(browser)

# create the directory
# start a thread to read the web page and write to the database
# wait for completion
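# The 🛠 TODO in QA_request_eastmoney_zjlx notes that the ad-hoc string
# slicing of the `var aff_data=({data:[[...]]})` response could be replaced
# with a regular expression. A minimal sketch of that idea; parse_aff_data
# and the sample payload below are made up for illustration:

```python
import re


def parse_aff_data(payload):
    """Extract the quoted fields from a JSONP-like aff_data payload."""
    match = re.search(r'var aff_data=\(\{data:\[\[(.*)\]\]\}\)', payload)
    if match is None:
        return []
    # fields are double-quoted and comma-separated
    return re.findall(r'"([^"]*)"', match.group(1))


sample = 'var aff_data=({data:[["2018-09-07","1.23","4.56"]]})'
fields = parse_aff_data(sample)
# fields == ['2018-09-07', '1.23', '4.56']
```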
# File: flexget/tests/test_next_series_seasons.py
# Repo: metaMMA/Flexget, license: MIT

from __future__ import unicode_literals, division, absolute_import
from builtins import * # noqa pylint: disable=unused-import, redefined-builtin
import pytest
from flexget.entry import Entry
# TODO Add more standard tests
class TestNextSeriesSeasonSeasonsPack(object):
    _config = """
        templates:
          global:
            parsing:
              series: internal
        anchors:
          _nss_backfill: &nss_backfill
            next_series_seasons:
              backfill: yes
          _nss_from_start: &nss_from_start
            next_series_seasons:
              from_start: yes
          _nss_backfill_from_start: &nss_backfill_from_start
            next_series_seasons:
              backfill: yes
              from_start: yes
          _series_ep_pack: &series_ep_pack
            identified_by: ep
            tracking: backfill
            season_packs:
              threshold: 1000
              reject_eps: yes
          _series_ep_tracking_pack: &series_ep_tracking_pack
            identified_by: ep
            tracking: backfill
            season_packs:
              threshold: 1000
              reject_eps: yes
          _series_ep_tracking_begin_s02e01: &series_ep_tracking_pack_begin_s02e01
            identified_by: ep
            tracking: backfill
            begin: s02e01
            season_packs:
              threshold: 1000
              reject_eps: yes
          _series_ep_tracking_begin_s04e01: &series_ep_tracking_pack_begin_s04e01
            identified_by: ep
            tracking: backfill
            begin: s04e01
            season_packs:
              threshold: 1000
              reject_eps: yes
        tasks:
          inject_series:
            series:
              settings:
                test_series:
                  season_packs: always
              test_series:
              - Test Series 1
              - Test Series 2
              - Test Series 3
              - Test Series 4
              - Test Series 5
              - Test Series 6
              - Test Series 7
              - Test Series 8
              - Test Series 9
              - Test Series 10
              - Test Series 11
              - Test Series 12
              - Test Series 13
              - Test Series 14
              - Test Series 15
              - Test Series 16
              - Test Series 17
              - Test Series 18
              - Test Series 19
              - Test Series 20
              - Test Series 21
              - Test Series 22
              - Test Series 23
              - Test Series 24
              - Test Series 25
              - Test Series 50
              - Test Series 100
          test_next_series_seasons_season_pack:
            next_series_seasons: yes
            series:
            - Test Series 1:
                <<: *series_ep_pack
            max_reruns: 0
          test_next_series_seasons_season_pack_backfill:
            <<: *nss_backfill
            series:
            - Test Series 2:
                <<: *series_ep_tracking_pack
            max_reruns: 0
          test_next_series_seasons_season_pack_backfill_and_begin:
            <<: *nss_backfill
            series:
            - Test Series 3:
                <<: *series_ep_tracking_pack_begin_s02e01
            max_reruns: 0
          test_next_series_seasons_season_pack_from_start:
            <<: *nss_from_start
            series:
            - Test Series 4:
                <<: *series_ep_pack
            max_reruns: 0
          test_next_series_seasons_season_pack_from_start_backfill:
            <<: *nss_backfill_from_start
            series:
            - Test Series 5:
                <<: *series_ep_tracking_pack
            max_reruns: 0
          test_next_series_seasons_season_pack_from_start_backfill_and_begin:
            <<: *nss_backfill_from_start
            series:
            - Test Series 6:
                <<: *series_ep_tracking_pack_begin_s02e01
            max_reruns: 0
          test_next_series_seasons_season_pack_and_ep:
            next_series_seasons: yes
            series:
            - Test Series 7:
                <<: *series_ep_pack
            max_reruns: 0
          test_next_series_seasons_season_pack_and_ep_backfill:
            <<: *nss_backfill
            series:
            - Test Series 8:
                <<: *series_ep_tracking_pack
            max_reruns: 0
          test_next_series_seasons_season_pack_and_ep_backfill_and_begin:
            <<: *nss_backfill
            series:
            - Test Series 9:
                <<: *series_ep_tracking_pack_begin_s02e01
            max_reruns: 0
          test_next_series_seasons_season_pack_and_ep_from_start:
            <<: *nss_from_start
            series:
            - Test Series 10:
                <<: *series_ep_pack
            max_reruns: 0
          test_next_series_seasons_season_pack_and_ep_from_start_backfill:
            <<: *nss_backfill_from_start
            series:
            - Test Series 11:
                <<: *series_ep_tracking_pack
            max_reruns: 0
          test_next_series_seasons_season_pack_and_ep_from_start_backfill_and_begin:
            <<: *nss_backfill_from_start
            series:
            - Test Series 12:
                <<: *series_ep_tracking_pack_begin_s02e01
            max_reruns: 0
          test_next_series_seasons_season_pack_gap:
            next_series_seasons: yes
            series:
            - Test Series 13:
                <<: *series_ep_pack
            max_reruns: 0
          test_next_series_seasons_season_pack_gap_backfill:
            <<: *nss_backfill
            series:
            - Test Series 14:
                <<: *series_ep_tracking_pack
            max_reruns: 0
          test_next_series_seasons_season_pack_gap_backfill_and_begin:
            <<: *nss_backfill
            series:
            - Test Series 15:
                <<: *series_ep_tracking_pack_begin_s04e01
            max_reruns: 0
          test_next_series_seasons_season_pack_gap_from_start:
            <<: *nss_from_start
            series:
            - Test Series 16:
                <<: *series_ep_pack
            max_reruns: 0
          test_next_series_seasons_season_pack_gap_from_start_backfill:
            <<: *nss_backfill_from_start
            series:
            - Test Series 17:
                <<: *series_ep_tracking_pack
            max_reruns: 0
          test_next_series_seasons_season_pack_gap_from_start_backfill_and_begin:
            <<: *nss_backfill_from_start
            series:
            - Test Series 18:
                <<: *series_ep_tracking_pack_begin_s04e01
            max_reruns: 0
          test_next_series_seasons_season_pack_and_ep_gap:
            next_series_seasons: yes
            series:
            - Test Series 19:
                <<: *series_ep_pack
            max_reruns: 0
          test_next_series_seasons_season_pack_and_ep_gap_backfill:
            <<: *nss_backfill
            series:
            - Test Series 20:
                <<: *series_ep_tracking_pack
            max_reruns: 0
          test_next_series_seasons_season_pack_and_ep_gap_backfill_and_begin:
            <<: *nss_backfill
            series:
            - Test Series 21:
                <<: *series_ep_tracking_pack_begin_s04e01
            max_reruns: 0
          test_next_series_seasons_season_pack_and_ep_gap_from_start:
            <<: *nss_from_start
            series:
            - Test Series 22:
                <<: *series_ep_pack
            max_reruns: 0
          test_next_series_seasons_season_pack_and_ep_gap_from_start_backfill:
            <<: *nss_backfill_from_start
            series:
            - Test Series 23:
                <<: *series_ep_tracking_pack
            max_reruns: 0
          test_next_series_seasons_season_pack_and_ep_gap_from_start_backfill_and_begin:
            <<: *nss_backfill_from_start
            series:
            - Test Series 24:
                <<: *series_ep_tracking_pack_begin_s04e01
            max_reruns: 0
          test_next_series_seasons_season_pack_begin_completed:
            next_series_seasons: yes
            series:
            - Test Series 50:
                identified_by: ep
                begin: S02E01
                season_packs:
                  threshold: 1000
                  reject_eps: yes
            max_reruns: 0
          test_next_series_seasons_season_pack_from_start_multirun:
            next_series_seasons:
              from_start: yes
            series:
            - Test Series 100:
                <<: *series_ep_pack
            max_reruns: 0
    """
    @pytest.fixture()
    def config(self):
        """Season packs aren't supported by guessit yet."""
        return self._config

    def inject_series(self, execute_task, release_name):
        execute_task(
            'inject_series',
            options={'inject': [Entry(title=release_name, url='')], 'disable_tracking': True},
        )
    @pytest.mark.parametrize(
        "task_name,inject,result_find",
        [
            ('test_next_series_seasons_season_pack', ['Test Series 1 S02'], ['Test Series 1 S03']),
            (
                'test_next_series_seasons_season_pack_backfill',
                ['Test Series 2 S02'],
                ['Test Series 2 S01', 'Test Series 2 S03'],
            ),
            (
                'test_next_series_seasons_season_pack_backfill_and_begin',
                ['Test Series 3 S02'],
                ['Test Series 3 S03'],
            ),
            (
                'test_next_series_seasons_season_pack_from_start',
                ['Test Series 4 S02'],
                ['Test Series 4 S03'],
            ),
            (
                'test_next_series_seasons_season_pack_from_start_backfill',
                ['Test Series 5 S02'],
                ['Test Series 5 S03', 'Test Series 5 S01'],
            ),
            (
                'test_next_series_seasons_season_pack_from_start_backfill_and_begin',
                ['Test Series 6 S02'],
                ['Test Series 6 S03'],
            ),
            (
                'test_next_series_seasons_season_pack_and_ep',
                ['Test Series 7 S02', 'Test Series 7 S03E01'],
                ['Test Series 7 S03'],
            ),
            (
                'test_next_series_seasons_season_pack_and_ep_backfill',
                ['Test Series 8 S02', 'Test Series 8 S03E01'],
                ['Test Series 8 S01', 'Test Series 8 S03'],
            ),
            (
                'test_next_series_seasons_season_pack_and_ep_backfill_and_begin',
                ['Test Series 9 S02', 'Test Series 9 S03E01'],
                ['Test Series 9 S03'],
            ),
            (
                'test_next_series_seasons_season_pack_and_ep_from_start',
                ['Test Series 10 S02', 'Test Series 10 S03E01'],
                ['Test Series 10 S03'],
            ),
            (
                'test_next_series_seasons_season_pack_and_ep_from_start_backfill',
                ['Test Series 11 S02', 'Test Series 11 S03E01'],
                ['Test Series 11 S03', 'Test Series 11 S01'],
            ),
            (
                'test_next_series_seasons_season_pack_and_ep_from_start_backfill_and_begin',
                ['Test Series 12 S02', 'Test Series 12 S03E01'],
                ['Test Series 12 S03'],
            ),
            (
                'test_next_series_seasons_season_pack_gap',
                ['Test Series 13 S02', 'Test Series 13 S06'],
                ['Test Series 13 S07'],
            ),
            (
                'test_next_series_seasons_season_pack_gap_backfill',
                ['Test Series 14 S02', 'Test Series 14 S06'],
                [
                    'Test Series 14 S07',
                    'Test Series 14 S05',
                    'Test Series 14 S04',
                    'Test Series 14 S03',
                    'Test Series 14 S01',
                ],
            ),
            (
                'test_next_series_seasons_season_pack_gap_backfill_and_begin',
                ['Test Series 15 S02', 'Test Series 15 S06'],
                ['Test Series 15 S07', 'Test Series 15 S05', 'Test Series 15 S04'],
            ),
            (
                'test_next_series_seasons_season_pack_gap_from_start',
                ['Test Series 16 S02', 'Test Series 16 S06'],
                ['Test Series 16 S07'],
            ),
            (
                'test_next_series_seasons_season_pack_gap_from_start_backfill',
                ['Test Series 17 S02', 'Test Series 17 S06'],
                [
                    'Test Series 17 S07',
                    'Test Series 17 S05',
                    'Test Series 17 S04',
                    'Test Series 17 S03',
                    'Test Series 17 S01',
                ],
            ),
            (
                'test_next_series_seasons_season_pack_gap_from_start_backfill_and_begin',
                ['Test Series 18 S02', 'Test Series 18 S06'],
                ['Test Series 18 S07', 'Test Series 18 S05', 'Test Series 18 S04'],
            ),
            (
                'test_next_series_seasons_season_pack_and_ep_gap',
                ['Test Series 19 S02', 'Test Series 19 S06', 'Test Series 19 S07E01'],
                ['Test Series 19 S07'],
            ),
            (
                'test_next_series_seasons_season_pack_and_ep_gap_backfill',
                ['Test Series 20 S02', 'Test Series 20 S06', 'Test Series 20 S07E01'],
                [
                    'Test Series 20 S07',
                    'Test Series 20 S05',
                    'Test Series 20 S04',
                    'Test Series 20 S03',
                    'Test Series 20 S01',
                ],
            ),
            (
                'test_next_series_seasons_season_pack_and_ep_gap_backfill_and_begin',
                ['Test Series 21 S02', 'Test Series 21 S06', 'Test Series 21 S07E01'],
                ['Test Series 21 S07', 'Test Series 21 S05', 'Test Series 21 S04'],
            ),
            (
                'test_next_series_seasons_season_pack_and_ep_gap_from_start',
                ['Test Series 22 S02', 'Test Series 22 S03E01', 'Test Series 22 S06'],
                ['Test Series 22 S07'],
            ),
            (
                'test_next_series_seasons_season_pack_and_ep_gap_from_start_backfill',
                ['Test Series 23 S02', 'Test Series 23 S03E01', 'Test Series 23 S06'],
                [
                    'Test Series 23 S07',
                    'Test Series 23 S05',
                    'Test Series 23 S04',
                    'Test Series 23 S03',
                    'Test Series 23 S01',
                ],
            ),
            (
                'test_next_series_seasons_season_pack_and_ep_gap_from_start_backfill_and_begin',
                ['Test Series 24 S02', 'Test Series 24 S03E01', 'Test Series 24 S06'],
                ['Test Series 24 S07', 'Test Series 24 S05', 'Test Series 24 S04'],
            ),
            (
                'test_next_series_seasons_season_pack_begin_completed',
                ['Test Series 50 S02'],
                ['Test Series 50 S03'],
            ),
        ],
    )
    def test_next_series_seasons(self, execute_task, task_name, inject, result_find):
        for entity_id in inject:
            self.inject_series(execute_task, entity_id)
        task = execute_task(task_name)
        for result_title in result_find:
            assert task.find_entry(title=result_title)
        assert len(task.all_entries) == len(result_find)
    # Tests which require multiple tasks to be executed in order
    # Each run_parameter is a tuple of lists:
    # [task name, list of series ID(s) to inject, list of result(s) to find]
    @pytest.mark.parametrize(
        "run_parameters",
        [
            (
                [
                    'test_next_series_seasons_season_pack_from_start_multirun',
                    [],
                    ['Test Series 100 S01'],
                ],
                [
                    'test_next_series_seasons_season_pack_from_start_multirun',
                    [],
                    ['Test Series 100 S02'],
                ],
            )
        ],
    )
    def test_next_series_seasons_multirun(self, execute_task, run_parameters):
        for this_test in run_parameters:
            for entity_id in this_test[1]:
                self.inject_series(execute_task, entity_id)
            task = execute_task(this_test[0])
            for result_title in this_test[2]:
                assert task.find_entry(title=result_title)
            assert len(task.all_entries) == len(this_test[2])
# File: pymatgen/analysis/wulff.py
# Repo: hpatel1567/pymatgen, license: MIT

# coding: utf-8
# Copyright (c) Pymatgen Development Team.
# Distributed under the terms of the MIT License.

"""
This module defines a WulffShape class to generate the Wulff shape from
a lattice, a list of indices and their corresponding surface energies.
The total area and volume of the Wulff shape, the weighted surface energy,
the anisotropy and the shape factor can also be calculated.
It supports plotting from a given view in terms of miller index.

The lattice is from the conventional unit cell, and (hkil) is used for
hexagonal lattices.

If you use this code extensively, consider citing the following:

Tran, R.; Xu, Z.; Radhakrishnan, B.; Winston, D.; Persson, K. A.; Ong, S. P.
(2016). Surface energies of elemental crystals. Scientific Data.
"""

from pymatgen.core.structure import Structure
from pymatgen.util.coord import get_angle
import numpy as np
import scipy as sp
from scipy.spatial import ConvexHull
import logging
import warnings

__author__ = 'Zihan Xu, Richard Tran, Shyue Ping Ong'
__copyright__ = 'Copyright 2013, The Materials Virtual Lab'
__version__ = '0.1'
__maintainer__ = 'Zihan Xu'
__email__ = 'zix009@eng.ucsd.edu'
__date__ = 'May 5 2016'

logger = logging.getLogger(__name__)
def hkl_tuple_to_str(hkl):
    """
    Prepare for display on plots: "(hkl)" for surfaces.

    Args:
        hkl: in the form of [h, k, l] or (h, k, l)
    """
    str_format = '($'
    for x in hkl:
        if x < 0:
            str_format += '\\overline{' + str(-x) + '}'
        else:
            str_format += str(x)
    str_format += '$)'
    return str_format
def get_tri_area(pts):
    """
    Given a list of coords for 3 points,
    compute the area of this triangle.

    Args:
        pts: [a, b, c] three points
    """
    a, b, c = pts[0], pts[1], pts[2]
    v1 = np.array(b) - np.array(a)
    v2 = np.array(c) - np.array(a)
    # np.cross replaces the scipy top-level alias, which newer scipy
    # releases no longer provide
    area_tri = abs(sp.linalg.norm(np.cross(v1, v2)) / 2)
    return area_tri
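# get_tri_area() is half the magnitude of the cross product of two edge
# vectors. A quick self-contained check of the same computation, using
# numpy only (tri_area is a throwaway name for this sketch):

```python
import math

import numpy as np


def tri_area(a, b, c):
    """Area of triangle abc via |(b - a) x (c - a)| / 2."""
    v1 = np.array(b, dtype=float) - np.array(a, dtype=float)
    v2 = np.array(c, dtype=float) - np.array(a, dtype=float)
    return abs(np.linalg.norm(np.cross(v1, v2)) / 2)


# right triangle with unit legs has area 0.5
area = tri_area([0, 0, 0], [1, 0, 0], [0, 1, 0])
# area == 0.5
```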
class WulffFacet:
    """
    Helper container for each Wulff plane.
    """

    def __init__(self, normal, e_surf, normal_pt, dual_pt, index, m_ind_orig,
                 miller):
        """
        :param normal:
        :param e_surf:
        :param normal_pt:
        :param dual_pt:
        :param index:
        :param m_ind_orig:
        :param miller:
        """
        self.normal = normal
        self.e_surf = e_surf
        self.normal_pt = normal_pt
        self.dual_pt = dual_pt
        self.index = index
        self.m_ind_orig = m_ind_orig
        self.miller = miller
        self.points = []
        self.outer_lines = []
class WulffShape:
    """
    Generate Wulff Shape from list of miller index and surface energies,
    with given conventional unit cell.
    surface energy (Jm^2) is the length of normal.
    Wulff shape is the convex hull.
    Based on:
    http://scipy.github.io/devdocs/generated/scipy.spatial.ConvexHull.html

    Process:
        1. get wulff simplices
        2. label with color
        3. get wulff_area and other properties

    .. attribute:: debug (bool)

    .. attribute:: alpha
        transparency

    .. attribute:: color_set

    .. attribute:: grid_off (bool)

    .. attribute:: axis_off (bool)

    .. attribute:: show_area

    .. attribute:: off_color
        color of facets off wulff

    .. attribute:: structure
        Structure object, input conventional unit cell (with H) from lattice

    .. attribute:: miller_list
        list of input miller index, for hcp in the form of hkil

    .. attribute:: hkl_list
        modify hkil to hkl, in the same order with input_miller

    .. attribute:: e_surf_list
        list of input surface energies, in the same order with input_miller

    .. attribute:: lattice
        Lattice object, the input lattice for the conventional unit cell

    .. attribute:: facets
        [WulffFacet] for all facets considering symm

    .. attribute:: dual_cv_simp
        simplices from the dual convex hull (dual_pt)

    .. attribute:: wulff_pt_list

    .. attribute:: wulff_cv_simp
        simplices from the convex hull of wulff_pt_list

    .. attribute:: on_wulff
        list for all input_miller, True is on wulff.

    .. attribute:: color_area
        list for all input_miller, total area on wulff, off_wulff = 0.

    .. attribute:: miller_area
        ($hkl$): area for all input_miller
    """
    def __init__(self, lattice, miller_list, e_surf_list, symprec=1e-5):
        """
        Args:
            lattice: Lattice object of the conventional unit cell
            miller_list ([(hkl), ...]): list of hkl or hkil for hcp
            e_surf_list ([float]): list of corresponding surface energies
            symprec (float): for recp_operation, default is 1e-5.
        """
        if any([se < 0 for se in e_surf_list]):
            warnings.warn("Unphysical (negative) surface energy detected.")

        self.color_ind = list(range(len(miller_list)))

        self.input_miller_fig = [hkl_tuple_to_str(x) for x in miller_list]
        # store input data
        self.structure = Structure(lattice, ["H"], [[0, 0, 0]])
        self.miller_list = tuple([tuple(x) for x in miller_list])
        self.hkl_list = tuple([(x[0], x[1], x[-1]) for x in miller_list])
        self.e_surf_list = tuple(e_surf_list)
        self.lattice = lattice
        self.symprec = symprec

        # 2. get all the data for wulff construction
        # get all the surface normal from get_all_miller_e()
        self.facets = self._get_all_miller_e()
        logger.debug(len(self.facets))

        # 3. consider the dual condition
        dual_pts = [x.dual_pt for x in self.facets]
        dual_convex = ConvexHull(dual_pts)
        dual_cv_simp = dual_convex.simplices
        # simplices (ndarray of ints, shape (nfacet, ndim))
        # list of [i, j, k], ndim = 3
        # i, j, k: ind for normal_e_m
        # recalculate the dual of dual, get the wulff shape.
        # corner <-> surface
        # get cross point from the simplices of the dual convex hull
        wulff_pt_list = [self._get_cross_pt_dual_simp(dual_simp)
                         for dual_simp in dual_cv_simp]

        wulff_convex = ConvexHull(wulff_pt_list)
        wulff_cv_simp = wulff_convex.simplices
        logger.debug(", ".join([str(len(x)) for x in wulff_cv_simp]))

        # store simplices and convex
        self.dual_cv_simp = dual_cv_simp
        self.wulff_pt_list = wulff_pt_list
        self.wulff_cv_simp = wulff_cv_simp
        self.wulff_convex = wulff_convex

        self.on_wulff, self.color_area = self._get_simpx_plane()

        miller_area = []
        for m, in_mill_fig in enumerate(self.input_miller_fig):
            miller_area.append(
                in_mill_fig + ' : ' + str(round(self.color_area[m], 4)))
        self.miller_area = miller_area
    def _get_all_miller_e(self):
        """
        From self:
        get miller_list (unique_miller), e_surf_list and symmetry
        operations (symmops) according to the lattice;
        apply symmops to get all the miller indices, then get the normals.
        Get all the facet functions for the wulff shape calculation:
        |normal| = 1, e_surf is the plane's distance to (0, 0, 0),
        normal[0]x + normal[1]y + normal[2]z = e_surf

        return:
            [WulffFacet]
        """
        all_hkl = []
        color_ind = self.color_ind
        planes = []
        recp = self.structure.lattice.reciprocal_lattice_crystallographic
        recp_symmops = self.lattice.get_recp_symmetry_operation(self.symprec)

        for i, (hkl, energy) in enumerate(zip(self.hkl_list,
                                              self.e_surf_list)):
            for op in recp_symmops:
                miller = tuple([int(x) for x in op.operate(hkl)])
                if miller not in all_hkl:
                    all_hkl.append(miller)
                    normal = recp.get_cartesian_coords(miller)
                    normal /= sp.linalg.norm(normal)
                    normal_pt = [x * energy for x in normal]
                    dual_pt = [x / energy for x in normal]
                    color_plane = color_ind[divmod(i, len(color_ind))[1]]
                    planes.append(WulffFacet(normal, energy, normal_pt,
                                             dual_pt, color_plane, i, hkl))

        # sort by e_surf
        planes.sort(key=lambda x: x.e_surf)
        return planes
def _get_cross_pt_dual_simp(self, dual_simp):
"""
|normal| = 1, e_surf is plane's distance to (0, 0, 0),
plane function:
normal[0]x + normal[1]y + normal[2]z = e_surf
from self:
normal_e_m to get the plane functions
dual_simp: (i, j, k) simplices from the dual convex hull
i, j, k: plane index(same order in normal_e_m)
"""
matrix_surfs = [self.facets[dual_simp[i]].normal for i in range(3)]
matrix_e = [self.facets[dual_simp[i]].e_surf for i in range(3)]
        cross_pt = np.dot(np.linalg.inv(matrix_surfs), matrix_e)
return cross_pt
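The method above intersects three facet planes by explicitly inverting a 3x3 matrix. The same system n_i . x = e_i can be solved directly, which is numerically preferable; a standalone sketch with hypothetical facet normals and energies:

```python
import numpy as np

# Hypothetical unit normals and plane distances (surface energies) of three facets.
normals = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
e_surf = np.array([1.0, 1.5, 2.0])

# Solve normals @ x = e_surf for the common intersection point of the planes.
cross_pt = np.linalg.solve(normals, e_surf)
```

`np.linalg.solve` avoids forming the explicit inverse and is both faster and more stable for a single right-hand side.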
def _get_simpx_plane(self):
"""
        Locate the plane for each simplex on wulff_cv, by comparing the center
        of the simplex triangle with the plane functions.
"""
on_wulff = [False] * len(self.miller_list)
surface_area = [0.0] * len(self.miller_list)
for simpx in self.wulff_cv_simp:
pts = [self.wulff_pt_list[simpx[i]] for i in range(3)]
center = np.sum(pts, 0) / 3.0
# check whether the center of the simplices is on one plane
for plane in self.facets:
abs_diff = abs(np.dot(plane.normal, center) - plane.e_surf)
if abs_diff < 1e-5:
on_wulff[plane.index] = True
surface_area[plane.index] += get_tri_area(pts)
plane.points.append(pts)
plane.outer_lines.append([simpx[0], simpx[1]])
plane.outer_lines.append([simpx[1], simpx[2]])
plane.outer_lines.append([simpx[0], simpx[2]])
                    # already found the plane; move on to the next simplex
break
for plane in self.facets:
plane.outer_lines.sort()
plane.outer_lines = [line for line in plane.outer_lines
if plane.outer_lines.count(line) != 2]
return on_wulff, surface_area
def _get_colors(self, color_set, alpha, off_color, custom_colors={}):
"""
assign colors according to the surface energies of on_wulff facets.
return:
(color_list, color_proxy, color_proxy_on_wulff, miller_on_wulff,
e_surf_on_wulff_list)
"""
import matplotlib as mpl
import matplotlib.pyplot as plt
color_list = [off_color] * len(self.hkl_list)
color_proxy_on_wulff = []
miller_on_wulff = []
e_surf_on_wulff = [(i, e_surf)
for i, e_surf in enumerate(self.e_surf_list)
if self.on_wulff[i]]
c_map = plt.get_cmap(color_set)
e_surf_on_wulff.sort(key=lambda x: x[1], reverse=False)
e_surf_on_wulff_list = [x[1] for x in e_surf_on_wulff]
if len(e_surf_on_wulff) > 1:
cnorm = mpl.colors.Normalize(vmin=min(e_surf_on_wulff_list),
vmax=max(e_surf_on_wulff_list))
else:
# if there is only one hkl on wulff, choose the color of the median
cnorm = mpl.colors.Normalize(vmin=min(e_surf_on_wulff_list) - 0.1,
vmax=max(e_surf_on_wulff_list) + 0.1)
scalar_map = mpl.cm.ScalarMappable(norm=cnorm, cmap=c_map)
for i, e_surf in e_surf_on_wulff:
color_list[i] = scalar_map.to_rgba(e_surf, alpha=alpha)
if tuple(self.miller_list[i]) in custom_colors.keys():
color_list[i] = custom_colors[tuple(self.miller_list[i])]
color_proxy_on_wulff.append(
plt.Rectangle((2, 2), 1, 1, fc=color_list[i], alpha=alpha))
miller_on_wulff.append(self.input_miller_fig[i])
scalar_map.set_array([x[1] for x in e_surf_on_wulff])
color_proxy = [plt.Rectangle((2, 2), 1, 1, fc=x, alpha=alpha)
for x in color_list]
return color_list, color_proxy, color_proxy_on_wulff, miller_on_wulff, e_surf_on_wulff_list
def show(self, *args, **kwargs):
r"""
Show the Wulff plot.
Args:
*args: Passed to get_plot.
**kwargs: Passed to get_plot.
"""
self.get_plot(*args, **kwargs).show()
def get_line_in_facet(self, facet):
"""
Returns the sorted pts in a facet used to draw a line
"""
lines = list(facet.outer_lines)
pt = []
prev = None
while len(lines) > 0:
if prev is None:
l = lines.pop(0)
else:
for i, l in enumerate(lines):
if prev in l:
l = lines.pop(i)
if l[1] == prev:
l.reverse()
break
# make sure the lines are connected one by one.
# find the way covering all pts and facets
pt.append(self.wulff_pt_list[l[0]].tolist())
pt.append(self.wulff_pt_list[l[1]].tolist())
prev = l[1]
return pt
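The while-loop above stitches unordered edge segments into a path whose consecutive segments share a vertex. The same idea as a small standalone helper, operating on hypothetical integer vertex indices instead of Wulff points:

```python
def chain_edges(edges):
    """Order [i, j] segments so consecutive segments share a vertex -- a sketch."""
    lines = [list(e) for e in edges]
    path = [lines.pop(0)]
    while lines:
        prev = path[-1][1]
        for i, seg in enumerate(lines):
            if prev in seg:
                seg = lines.pop(i)
                if seg[1] == prev:
                    seg.reverse()  # orient the segment to continue the path
                path.append(seg)
                break
    return path

# Edges of a square given in scrambled order come back as a closed loop.
print(chain_edges([[0, 1], [2, 3], [3, 0], [1, 2]]))
# [[0, 1], [1, 2], [2, 3], [3, 0]]
```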
def get_plot(self, color_set='PuBu', grid_off=True, axis_off=True,
show_area=False, alpha=1, off_color='red', direction=None,
bar_pos=(0.75, 0.15, 0.05, 0.65), bar_on=False, units_in_JPERM2=True,
legend_on=True, aspect_ratio=(8, 8), custom_colors={}):
"""
Get the Wulff shape plot.
Args:
color_set: default is 'PuBu'
grid_off (bool): default is True
            axis_off (bool): default is True
show_area (bool): default is False
alpha (float): chosen from 0 to 1 (float), default is 1
off_color: Default color for facets not present on the Wulff shape.
            direction: viewing direction; defaults to the Miller index of the
                facet with the largest area
bar_pos: default is [0.75, 0.15, 0.05, 0.65]
            bar_on (bool): default is False
            units_in_JPERM2 (bool): default is True; label the color bar in
                J/m^2 if True, in eV/Angstrom^2 otherwise
            legend_on (bool): default is True
            aspect_ratio: default is (8, 8)
            custom_colors ({(h, k, l): [r, g, b, alpha]}): Customize the color
                of each facet with a dictionary. The key is the corresponding
                Miller index and the value is the color. Undefined facets will
                use the default color scheme. Note: If you decide to set your
                own colors, it probably won't make any sense to have the color
                bar on.
Return:
(matplotlib.pyplot)
"""
import matplotlib as mpl
import matplotlib.pyplot as plt
import mpl_toolkits.mplot3d as mpl3
color_list, color_proxy, color_proxy_on_wulff, miller_on_wulff, e_surf_on_wulff = self._get_colors(
color_set, alpha, off_color, custom_colors=custom_colors)
if not direction:
# If direction is not specified, use the miller indices of
# maximum area.
direction = max(self.area_fraction_dict.items(),
key=lambda x: x[1])[0]
fig = plt.figure()
fig.set_size_inches(aspect_ratio[0], aspect_ratio[1])
azim, elev = self._get_azimuth_elev([direction[0], direction[1],
direction[-1]])
wulff_pt_list = self.wulff_pt_list
ax = mpl3.Axes3D(fig, azim=azim, elev=elev)
for plane in self.facets:
# check whether [pts] is empty
if len(plane.points) < 1:
# empty, plane is not on_wulff.
continue
# assign the color for on_wulff facets according to its
# index and the color_list for on_wulff
plane_color = color_list[plane.index]
pt = self.get_line_in_facet(plane)
# plot from the sorted pts from [simpx]
tri = mpl3.art3d.Poly3DCollection([pt])
tri.set_color(plane_color)
tri.set_edgecolor("#808080")
ax.add_collection3d(tri)
# set ranges of x, y, z
# find the largest distance between on_wulff pts and the origin,
# to ensure complete and consistent display for all directions
r_range = max([np.linalg.norm(x) for x in wulff_pt_list])
ax.set_xlim([-r_range * 1.1, r_range * 1.1])
ax.set_ylim([-r_range * 1.1, r_range * 1.1])
ax.set_zlim([-r_range * 1.1, r_range * 1.1])
# add legend
if legend_on:
if show_area:
ax.legend(color_proxy, self.miller_area, loc='upper left',
bbox_to_anchor=(0, 1), fancybox=True, shadow=False)
else:
ax.legend(color_proxy_on_wulff, miller_on_wulff,
loc='upper center',
bbox_to_anchor=(0.5, 1), ncol=3, fancybox=True,
shadow=False)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
# Add colorbar
if bar_on:
cmap = plt.get_cmap(color_set)
cmap.set_over('0.25')
cmap.set_under('0.75')
bounds = [round(e, 2) for e in e_surf_on_wulff]
bounds.append(1.2 * bounds[-1])
norm = mpl.colors.BoundaryNorm(bounds, cmap.N)
# display surface energies
ax1 = fig.add_axes(bar_pos)
cbar = mpl.colorbar.ColorbarBase(
ax1, cmap=cmap, norm=norm, boundaries=[0] + bounds + [10],
extend='both', ticks=bounds[:-1], spacing='proportional',
orientation='vertical')
units = "$J/m^2$" if units_in_JPERM2 else r"$eV/\AA^2$"
cbar.set_label('Surface Energies (%s)' % (units), fontsize=100)
if grid_off:
ax.grid('off')
if axis_off:
ax.axis('off')
return plt
def _get_azimuth_elev(self, miller_index):
"""
Args:
miller_index: viewing direction
Returns:
azim, elev for plotting
"""
if miller_index == (0, 0, 1) or miller_index == (0, 0, 0, 1):
return 0, 90
else:
cart = self.lattice.get_cartesian_coords(miller_index)
azim = get_angle([cart[0], cart[1], 0], (1, 0, 0))
v = [cart[0], cart[1], 0]
elev = get_angle(cart, v)
return azim, elev
@property
def volume(self):
"""
Volume of the Wulff shape
"""
return self.wulff_convex.volume
@property
def miller_area_dict(self):
"""
Returns {hkl: area_hkl on wulff}
"""
return dict(zip(self.miller_list, self.color_area))
@property
def miller_energy_dict(self):
"""
Returns {hkl: surface energy_hkl}
"""
return dict(zip(self.miller_list, self.e_surf_list))
@property
def surface_area(self):
"""
Total surface area of Wulff shape.
"""
return sum(self.miller_area_dict.values())
@property
def weighted_surface_energy(self):
"""
Returns:
sum(surface_energy_hkl * area_hkl)/ sum(area_hkl)
"""
return self.total_surface_energy / self.surface_area
@property
def area_fraction_dict(self):
"""
Returns:
(dict): {hkl: area_hkl/total area on wulff}
"""
return {hkl: self.miller_area_dict[hkl] / self.surface_area
for hkl in self.miller_area_dict.keys()}
@property
def anisotropy(self):
"""
Returns:
(float) Coefficient of Variation from weighted surface energy
The ideal sphere is 0.
"""
square_diff_energy = 0
weighted_energy = self.weighted_surface_energy
area_frac_dict = self.area_fraction_dict
miller_energy_dict = self.miller_energy_dict
for hkl in miller_energy_dict.keys():
square_diff_energy += (miller_energy_dict[hkl] - weighted_energy) \
** 2 * area_frac_dict[hkl]
return np.sqrt(square_diff_energy) / weighted_energy
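The anisotropy above is the area-weighted coefficient of variation of the surface energies. A standalone sketch on made-up facet energies and area fractions (all values illustrative):

```python
import numpy as np

e_hkl = np.array([1.0, 1.2, 1.5])  # hypothetical surface energies
frac = np.array([0.5, 0.3, 0.2])   # area fractions, summing to 1

weighted = np.sum(e_hkl * frac)    # weighted surface energy
anisotropy = np.sqrt(np.sum(frac * (e_hkl - weighted) ** 2)) / weighted
print(round(float(anisotropy), 3))  # ~ 0.164; a uniform-energy shape gives 0
```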
@property
def shape_factor(self):
"""
This is useful for determining the critical nucleus size.
A large shape factor indicates great anisotropy.
        See Balluffi, R. W., Allen, S. M. & Carter, W. C. Kinetics
        of Materials. (John Wiley & Sons, 2005), p. 461.
Returns:
(float) Shape factor.
"""
return self.surface_area / (self.volume ** (2 / 3))
@property
def effective_radius(self):
"""
        Radius of the Wulff shape when the
        Wulff shape is approximated as a sphere.
Returns:
(float) radius.
"""
return ((3 / 4) * (self.volume / np.pi)) ** (1 / 3)
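`effective_radius` inverts the sphere volume formula V = (4/3) pi r^3, and for an actual sphere the `shape_factor` above reduces to the constant (36 pi)^(1/3) which is about 4.836, the minimum possible value by the isoperimetric inequality. A quick standalone check:

```python
import math

r = 2.0
volume = (4.0 / 3.0) * math.pi * r ** 3
area = 4.0 * math.pi * r ** 2

effective_radius = ((3.0 / 4.0) * (volume / math.pi)) ** (1.0 / 3.0)
shape_factor = area / volume ** (2.0 / 3.0)

print(round(effective_radius, 6))  # the sphere's radius, 2.0, is recovered
print(round(shape_factor, 3))      # ~ 4.836 for any sphere, independent of r
```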
@property
def total_surface_energy(self):
"""
Total surface energy of the Wulff shape.
Returns:
(float) sum(surface_energy_hkl * area_hkl)
"""
tot_surface_energy = 0
for hkl in self.miller_energy_dict.keys():
tot_surface_energy += self.miller_energy_dict[hkl] * \
self.miller_area_dict[hkl]
return tot_surface_energy
@property
def tot_corner_sites(self):
"""
Returns the number of vertices in the convex hull.
Useful for identifying catalytically active sites.
"""
return len(self.wulff_convex.vertices)
@property
def tot_edges(self):
"""
Returns the number of edges in the convex hull.
Useful for identifying catalytically active sites.
"""
all_edges = []
for facet in self.facets:
edges = []
pt = self.get_line_in_facet(facet)
lines = []
for i, p in enumerate(pt):
                if i == len(pt) // 2:
break
lines.append(tuple(sorted(tuple([tuple(pt[i * 2]), tuple(pt[i * 2 + 1])]))))
for i, p in enumerate(lines):
if p not in all_edges:
edges.append(p)
all_edges.extend(edges)
return len(all_edges)
| 35.194401 | 107 | 0.576712 | 3,024 | 22,630 | 4.115741 | 0.164683 | 0.024747 | 0.008436 | 0.014462 | 0.186405 | 0.127029 | 0.102925 | 0.070946 | 0.063555 | 0.052627 | 0 | 0.015858 | 0.328458 | 22,630 | 642 | 108 | 35.249221 | 0.803119 | 0.315068 | 0 | 0.088816 | 1 | 0 | 0.021338 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078947 | false | 0 | 0.039474 | 0 | 0.197368 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
812c923f7680b63727b8c0d8a0b724feb7e64f73 | 1,448 | py | Python | src/gausskernel/dbmind/xtuner/test/test_ssh.py | wotchin/openGauss-server | ebd92e92b0cfd76b121d98e4c57a22d334573159 | [
"MulanPSL-1.0"
] | 1 | 2020-06-30T15:00:50.000Z | 2020-06-30T15:00:50.000Z | src/gausskernel/dbmind/xtuner/test/test_ssh.py | wotchin/openGauss-server | ebd92e92b0cfd76b121d98e4c57a22d334573159 | [
"MulanPSL-1.0"
] | null | null | null | src/gausskernel/dbmind/xtuner/test/test_ssh.py | wotchin/openGauss-server | ebd92e92b0cfd76b121d98e4c57a22d334573159 | [
"MulanPSL-1.0"
] | null | null | null | # Copyright (c) 2020 Huawei Technologies Co.,Ltd.
#
# openGauss is licensed under Mulan PSL v2.
# You can use this software according to the terms and conditions of the Mulan PSL v2.
# You may obtain a copy of Mulan PSL v2 at:
#
# http://license.coscl.org.cn/MulanPSL2
#
# THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
# EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
# MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
# See the Mulan PSL v2 for more details.
# -------------------------------------------------------------------------
#
# test_ssh.py
#
# IDENTIFICATION
# src/gausskernel/dbmind/xtuner/test/test_ssh.py
#
# -------------------------------------------------------------------------
from ssh import ExecutorFactory
def test_remote():
    exe = ExecutorFactory().set_host('').set_user('').set_pwd('').get_executor()  # fill in your host, user and password here
print(exe.exec_command_sync("cat /proc/cpuinfo | grep \"processor\" | wc -l"))
print(exe.exec_command_sync("cat /proc/self/cmdline | xargs -0"))
print(exe.exec_command_sync("echo -e 'hello \\n world'")[0].count('\n'))
print(exe.exec_command_sync("echo -e 'hello \\n world'")[0])
print(exe.exec_command_sync('echo $SHELL'))
def test_local():
exe = ExecutorFactory().get_executor()
print(exe.exec_command_sync("ping -h"))
if __name__ == "__main__":
test_remote()
test_local()
| 33.674419 | 108 | 0.631215 | 193 | 1,448 | 4.57513 | 0.595855 | 0.05436 | 0.08154 | 0.129105 | 0.216308 | 0.19026 | 0.19026 | 0.0906 | 0.0906 | 0.0906 | 0 | 0.00974 | 0.149171 | 1,448 | 42 | 109 | 34.47619 | 0.706981 | 0.520718 | 0 | 0 | 0 | 0 | 0.215774 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.071429 | 0 | 0.214286 | 0.428571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
81338229b9f75f52ae6ffcf7ef860588b32f5b97 | 3,915 | py | Python | Harpe-website/website/contrib/communication/utils.py | Krozark/Harpe-Website | 1038a8550d08273806c9ec244cb8157ef9e9101e | [
"BSD-2-Clause"
] | null | null | null | Harpe-website/website/contrib/communication/utils.py | Krozark/Harpe-Website | 1038a8550d08273806c9ec244cb8157ef9e9101e | [
"BSD-2-Clause"
] | null | null | null | Harpe-website/website/contrib/communication/utils.py | Krozark/Harpe-Website | 1038a8550d08273806c9ec244cb8157ef9e9101e | [
"BSD-2-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
import socket as csocket
from struct import pack,unpack
from website.contrib.communication.models import *
def enum(**enums):
return type('Enum', (), enums)
class Socket:
Dommaine = enum(IP=csocket.AF_INET,LOCAL=csocket.AF_UNIX)
Type = enum(TCP=csocket.SOCK_STREAM, UDP=csocket.SOCK_DGRAM)
Down = enum(SEND=0,RECIVE=1,BOTH=2)
NTW_WELCOM_MSG = "hello!\0"
NTW_ERROR_NO = 0
def __init__ (self,dommaine,type,protocole=0):
self.sock = csocket.socket(dommaine,type,protocole)
self.buffer = b""
self.status = 0
def connect(self,host,port):
self.sock.connect((host,port))
def verify_connexion(self):
code = 404
if self.receive() > 0:
msg = self._unpack_str()
if msg == self.NTW_WELCOM_MSG and self.status == self.NTW_ERROR_NO:
print "verify_connexion <%d : %s>" % (self.status,msg)
else:
print "verify_connexion <%d : %s>" % (self.status,msg)
self.clear()
return self.status
def _unpack_str(self):
i = 0
while self.buffer[i]!= '\0':
i+=1
i+=1
res = self.buffer[:i]
self.buffer = self.buffer[i:]
return res
def send(self):
size = len(self.buffer)
_size = pack('!Ih',size,self.status)
data = _size + self.buffer
sent = self.sock.send(data)
if sent == 0:
            print "Connection lost"
return False
return True
def receive(self):
recv = b''
recv = self.sock.recv(6)
if recv == b'':
            print "Connection lost"
return None
size,self.status = unpack('!Ih',recv)
self.buffer = self.sock.recv(size)
return len(recv) + len(self.buffer)
#Format C Type Python type Standard size
#x pad byte no value
#c char string of length 1
#b signed char integer 1
#B unsigned char integer 1
#? _Bool bool 1
#h short integer 2
#H unsigned short integer 2
#i int integer 4
#I unsigned int integer 4
#l long integer 4
#L unsigned long integer 4
#q long long integer 8
#Q unsigned long long integer 8
#f float float 4
#d double float 8
#s char[] string
#p char[] string
#P void * integer
def add(self,typ,*args):
self.buffer +=pack('!'+typ,*args)
def clear(self):
self.buffer = b""
self.status = 0
def call(self,ret_type,func_id,types="",*args):
if len(types) < len(args):
print "Wrong number of args/type"
return 0
self.clear()
self.add("i",func_id)
if types:
self.add(types,*args)
self.send()
size = self.receive()
if size:
if self.status != 0:
                print "received error code: %d" % self.status
else:
return unpack("!"+ret_type,self.buffer)[0]
return 0
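Every message exchanged by `Socket` is framed with a 6-byte `!Ih` header: a big-endian unsigned payload length followed by a signed status. The framing on its own, with no network involved:

```python
from struct import pack, unpack

payload = b"hello!\x00"
status = 0

# Encode: 4-byte unsigned length + 2-byte signed status, then the payload.
frame = pack('!Ih', len(payload), status) + payload

# Decode: read the fixed 6-byte header first, then that many payload bytes.
size, got_status = unpack('!Ih', frame[:6])
assert frame[6:6 + size] == payload and got_status == 0
```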
def create_socket():
sock = Socket(Socket.Dommaine.IP,Socket.Type.TCP)
ser = HarpeServer.objects.filter(is_active=True)[:1]
if not ser:
return False
ser = ser[0]
sock.connect(ser.ip,ser.port)
if sock.verify_connexion() != sock.NTW_ERROR_NO:
        print "An error occurred"
return None
return sock
def send_AnalyseMgf_to_calc(analyseMfg):
sock = create_socket()
if not sock:
return False
data = analyseMfg.mgf.read() + '\0'
return sock.call("i",HarpeServer.FUNCTION_ID.ANALYSE,"i%ds" % (analyseMfg.mgf.size+1) ,analyseMfg.pk,data)
| 29.659091 | 110 | 0.527458 | 490 | 3,915 | 4.132653 | 0.283673 | 0.059259 | 0.014815 | 0.014815 | 0.059259 | 0.059259 | 0.059259 | 0.034568 | 0 | 0 | 0 | 0.016453 | 0.363474 | 3,915 | 131 | 111 | 29.885496 | 0.796148 | 0.203321 | 0 | 0.230769 | 0 | 0 | 0.055215 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.032967 | null | null | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
81368cbcf7560067152788c0a732e279491b5a68 | 7,884 | py | Python | pydeap/feature_extraction/_time_domain_features.py | Wlgls/pyDEAP | b7cec369cedd4a69ea82bc49a2fb8376260e4ad2 | [
"Apache-2.0"
] | null | null | null | pydeap/feature_extraction/_time_domain_features.py | Wlgls/pyDEAP | b7cec369cedd4a69ea82bc49a2fb8376260e4ad2 | [
"Apache-2.0"
] | null | null | null | pydeap/feature_extraction/_time_domain_features.py | Wlgls/pyDEAP | b7cec369cedd4a69ea82bc49a2fb8376260e4ad2 | [
"Apache-2.0"
] | null | null | null | # -*- encoding: utf-8 -*-
'''
@File :_time_domain_features.py
@Time :2021/04/16 20:02:55
@Author :wlgls
@Version :1.0
'''
import numpy as np
def statistics(data, combined=True):
    """Statistical features, including Power, Mean, Std, 1st difference, Normalized 1st difference, 2nd difference, Normalized 2nd difference.
Parameters
----------
data array
data, for DEAP dataset, It's shape may be (n_trials, n_channels, points)
Return
----------
f:
Solved feature, It's shape is similar to the shape of your input data.
e.g. for input.shape is (n_trials, n_channels, points), the f.shape is (n_trials, n_channels, n_features)
Example
----------
In [13]: d.shape, l.shape
Out[13]: ((40, 32, 8064), (40, 1))
    In [14]: statistics(d, combined=False).shape
Out[14]: (40, 32, 7)
"""
# Power
power = np.mean(data**2, axis=-1)
# Mean
ave = np.mean(data, axis=-1)
# Standard Deviation
std = np.std(data, axis=-1)
# the mean of the absolute values of 1st differece mean
diff_1st = np.mean(np.abs(np.diff(data,n=1, axis=-1)), axis=-1)
# the mean of the absolute values of Normalized 1st difference
normal_diff_1st = diff_1st / std
# the mean of the absolute values of 2nd difference mean
diff_2nd = np.mean(np.abs(data[..., 2:] - data[..., :-2]), axis=-1)
# the mean of the absolute values of Normalized 2nd difference
normal_diff_2nd = diff_2nd / std
# Features.append(np.concatenate((Power, Mean, Std, diff_1st, normal_diff_1st, diff_2nd, normal_diff_2nd), axis=2))
f = np.stack((power, ave, std, diff_1st, normal_diff_1st, diff_2nd, normal_diff_2nd), axis=-1)
if combined:
        f = f.reshape((*f.shape[:-2], -1))  # flatten channels and features into one axis
return f
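A standalone sanity check of the seven statistics on synthetic data (the 2-trial, 3-channel batch shape is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3, 128))  # trials x channels x samples

power = np.mean(x ** 2, axis=-1)
ave = np.mean(x, axis=-1)
std = np.std(x, axis=-1)
d1 = np.mean(np.abs(np.diff(x, n=1, axis=-1)), axis=-1)   # mean |1st difference|
d2 = np.mean(np.abs(x[..., 2:] - x[..., :-2]), axis=-1)   # mean |2nd difference|
f = np.stack((power, ave, std, d1, d1 / std, d2, d2 / std), axis=-1)
print(f.shape)  # (2, 3, 7)
```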
def hjorth(data, combined=True):
    """Solve the Hjorth features: activity, mobility and complexity.
Parameters
----------
data array
data, for DEAP dataset, It's shape may be (n_trials, n_channels, points)
Return
----------
f:
Solved feature, It's shape is similar to the shape of your input data.
e.g. for input.shape is (n_trials, n_channels, points), the f.shape is (n_trials, n_channels, n_features)
Example
----------
In [15]: d.shape, l.shape
Out[15]: ((40, 32, 8064), (40, 1))
In [16]: hjorth_features(d).shape
Out[16]: (40, 32, 3)
"""
data = np.array(data)
ave = np.mean(data, axis=-1)[..., np.newaxis]
diff_1st = np.diff(data, n=1, axis=-1)
# print(diff_1st.shape)
diff_2nd = data[..., 2:] - data[..., :-2]
# Activity
activity = np.mean((data-ave)**2, axis=-1)
# print(Activity.shape)
# Mobility
varfdiff = np.var(diff_1st, axis=-1)
# print(varfdiff.shape)
mobility = np.sqrt(varfdiff / activity)
# Complexity
varsdiff = np.var(diff_2nd, axis=-1)
complexity = np.sqrt(varsdiff/varfdiff) / mobility
f = np.stack((activity, mobility, complexity), axis=-1)
if combined:
        f = f.reshape((*f.shape[:-2], -1))  # flatten channels and features into one axis
return f
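On a pure sinusoid the Hjorth parameters have known values: mobility approximates the angular frequency in radians per sample (here 2 pi * 8 / 256) and complexity approaches 1. A standalone check on an assumed 8 Hz tone at 256 Hz:

```python
import numpy as np

fs, f0 = 256.0, 8.0
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * f0 * t)

activity = np.var(x)
mobility = np.sqrt(np.var(np.diff(x)) / activity)
complexity = np.sqrt(np.var(np.diff(x, n=2)) / np.var(np.diff(x))) / mobility
print(round(float(mobility), 3))    # ~ 0.196, i.e. ~ 2*pi*f0/fs
print(round(float(complexity), 2))  # ~ 1.0 for a pure sinusoid
```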
def higher_order_crossing(data, k=10, combined=True):
    """Solve the higher order crossing (HOC) feature. HOC counts the zero crossings of successively differenced versions of the signal.
Parameters
----------
data : array
data, for DEAP dataset, It's shape may be (n_trials, n_channels, points)
k : int, optional
Order, by default 10
Return
----------
nzc:
Solved feature, It's shape is similar to the shape of your input data.
e.g. for input.shape is (n_trials, n_channels, points), the f.shape is (n_trials, n_channels, n_features)
Example
----------
In [4]: d, l = load_deap(path, 0)
    In [5]: higher_order_crossing(d, k=10, combined=False).shape
    Out[5]: (40, 32, 10)
    In [6]: higher_order_crossing(d, k=5, combined=False).shape
    Out[6]: (40, 32, 5)
"""
nzc = []
for i in range(k):
curr_diff = np.diff(data, n=i)
x_t = curr_diff >= 0
x_t = np.diff(x_t)
x_t = np.abs(x_t)
count = np.count_nonzero(x_t, axis=-1)
nzc.append(count)
f = np.stack(nzc, axis=-1)
if combined:
        f = f.reshape((*f.shape[:-2], -1))  # flatten channels and features into one axis
return f
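The first HOC order (i = 0, where `np.diff(data, n=0)` returns the signal unchanged) is the plain zero-crossing count. On an assumed 5 Hz sine sampled for one second at 256 Hz, that count is twice the frequency:

```python
import numpy as np

fs, f0 = 256.0, 5.0
t = np.arange(256) / fs
x = np.sin(2 * np.pi * f0 * t + 0.3)  # small phase shift avoids samples landing exactly on zero

# Sign changes of the raw signal, same mechanics as the loop body above.
nzc = np.count_nonzero(np.diff(x >= 0))
print(nzc)  # 10: a 5 Hz sine crosses zero ten times per second
```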
def sevcik_fd(data, combined=True):
    """Solve the fractal dimension feature, which describes the shape of EEG
    time series data. It seems that this feature can be used to distinguish
    electrooculogram from EEG. Calculation methods include Sevcik, fractional
    Brownian motion, box counting, Higuchi and so on.
    Sevcik method: fast to compute and robust to noise
    Higuchi: closer to the theoretical value than box counting
    The Sevcik method is used here because it is easier to implement.
    Parameters
    ----------
data array
data, for DEAP dataset, It's shape may be (n_trials, n_channels, points)
Return
----------
f:
Solved feature, It's shape is similar to the shape of your input data.
e.g. for input.shape is (n_trials, n_channels, points), the f.shape is (n_trials, n_channels, n_features)
Example
----------
In [7]: d.shape, l.shape
Out[7]: ((40, 32, 8064), (40, 1))
    In [8]: sevcik_fd(d, combined=False).shape
Out[8]: (40, 32, 1)
"""
points = data.shape[-1]
x = np.arange(1, points+1)
x_ = x / np.max(x)
miny = np.expand_dims(np.min(data, axis=-1), axis=-1)
maxy = np.expand_dims(np.max(data, axis=-1), axis=-1)
y_ = (data-miny) / (maxy-miny)
L = np.expand_dims(np.sum(np.sqrt(np.diff(y_, axis=-1)**2 + np.diff(x_)**2), axis=-1), axis=-1)
f = 1 + np.log(L) / np.log(2 * (points-1))
# print(FD.shape)
if combined:
        f = f.reshape((*f.shape[:-2], -1))  # flatten channels and features into one axis
return f
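A quick property check of the Sevcik estimate: a straight line has fractal dimension 1, and the formula above lands close to it at finite N (the small excess comes from the logarithmic normalization):

```python
import numpy as np

n = 1024
x_ = np.arange(1, n + 1) / n                    # normalized abscissa, as above
y = np.linspace(0.0, 5.0, n)                    # a straight line
y_ = (y - y.min()) / (y.max() - y.min())        # normalized ordinate

L = np.sum(np.sqrt(np.diff(y_) ** 2 + np.diff(x_) ** 2))
fd = 1 + np.log(L) / np.log(2 * (n - 1))
print(round(float(fd), 2))  # ~ 1.05, close to the theoretical value of 1
```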
def calc_L(X, k, m):
"""
Return Lm(k) as the length of the curve.
"""
N = X.shape[-1]
n = np.floor((N-m)/k).astype(np.int64)
norm = (N-1) / (n*k)
ss = np.sum(np.abs(np.diff(X[..., m::k], n=1)), axis=-1)
Lm = (ss*norm) / k
return Lm
def calc_L_average(X, k):
"""
Return <L(k)> as the average value over k sets of Lm(k).
"""
calc_L_series = np.frompyfunc(lambda m: calc_L(X, k, m), 1, 1)
L_average = np.average(calc_L_series(np.arange(1, k+1)))
return L_average
def higuchi_fd(data, k_max, combined=True):
    """Solve the fractal dimension feature, which describes the shape of EEG
    time series data. It seems that this feature can be used to distinguish
    electrooculogram from EEG. Calculation methods include Sevcik, fractional
    Brownian motion, box counting, Higuchi and so on.
    Sevcik method: fast to compute and robust to noise
    Higuchi: closer to the theoretical value than box counting
    The Higuchi method is used here because it is closer to the theoretical value.
    Parameters
    ----------
data array
data, for DEAP dataset, It's shape may be (n_trials, n_channels, points)
Return
----------
f:
Solved feature, It's shape is similar to the shape of your input data.
e.g. for input.shape is (n_trials, n_channels, points), the f.shape is (n_trials, n_channels, n_features)
Example
----------
In [7]: d.shape, l.shape
Out[7]: ((40, 32, 8064), (40, 1))
    In [8]: higuchi_fd(d, k_max, combined=False).shape
Out[8]: (40, 32, 1)
"""
calc_L_average_series = np.frompyfunc(lambda k: calc_L_average(data, k), 1, 1)
k = np.arange(1, k_max+1)
L = calc_L_average_series(k)
L = np.stack(L, axis=-1)
fd = np.zeros(data.shape[:-1])
for ind in np.argwhere(L[..., 0]):
tmp = L[ind[0], ind[1], ind[2]]
D, _= np.polyfit(np.log2(k), np.log2(tmp), 1)
        fd[ind[0], ind[1], ind[2]] = -D
f = np.expand_dims(fd, axis=-1)
if combined:
        f = f.reshape((*f.shape[:-2], -1))  # flatten channels and features into one axis
return f
| 29.977186 | 291 | 0.597793 | 1,240 | 7,884 | 3.712097 | 0.164516 | 0.027156 | 0.02607 | 0.05214 | 0.560721 | 0.544645 | 0.525092 | 0.511623 | 0.511623 | 0.504454 | 0 | 0.038592 | 0.250634 | 7,884 | 262 | 292 | 30.091603 | 0.740521 | 0.064688 | 0 | 0.189873 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.012658 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
81399676f0bd08a3b07c20a3a444ab0c8669d9d3 | 1,064 | py | Python | plugins/barracuda_waf/komand_barracuda_waf/actions/create_security_policy/schema.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 46 | 2019-06-05T20:47:58.000Z | 2022-03-29T10:18:01.000Z | plugins/barracuda_waf/komand_barracuda_waf/actions/create_security_policy/schema.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 386 | 2019-06-07T20:20:39.000Z | 2022-03-30T17:35:01.000Z | plugins/barracuda_waf/komand_barracuda_waf/actions/create_security_policy/schema.py | lukaszlaszuk/insightconnect-plugins | 8c6ce323bfbb12c55f8b5a9c08975d25eb9f8892 | [
"MIT"
] | 43 | 2019-07-09T14:13:58.000Z | 2022-03-28T12:04:46.000Z | # GENERATED BY KOMAND SDK - DO NOT EDIT
import komand
import json
class Component:
DESCRIPTION = "Creates a security policy with the default values"
class Input:
NAME = "name"
class Output:
ID = "id"
class CreateSecurityPolicyInput(komand.Input):
schema = json.loads("""
{
"type": "object",
"title": "Variables",
"properties": {
"name": {
"type": "string",
"title": "Name",
"description": "The name of the security policy that needs to be created",
"order": 1
}
},
"required": [
"name"
]
}
""")
def __init__(self):
super(self.__class__, self).__init__(self.schema)
class CreateSecurityPolicyOutput(komand.Output):
schema = json.loads("""
{
"type": "object",
"title": "Variables",
"properties": {
"id": {
"type": "string",
"title": "ID",
"description": "ID of the new policy",
"order": 1
}
},
"required": [
"id"
]
}
""")
def __init__(self):
super(self.__class__, self).__init__(self.schema)
| 17.16129 | 80 | 0.566729 | 110 | 1,064 | 5.263636 | 0.436364 | 0.055268 | 0.051813 | 0.06563 | 0.317789 | 0.317789 | 0.317789 | 0.317789 | 0.148532 | 0.148532 | 0 | 0.002587 | 0.273496 | 1,064 | 61 | 81 | 17.442623 | 0.746442 | 0.034774 | 0 | 0.416667 | 1 | 0 | 0.549268 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.041667 | 0 | 0.291667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
813bbe394d73b1fd28585f58879386377ceda809 | 9,047 | py | Python | sympy/printing/lambdarepr.py | Carreau/sympy | 168de33bb177936fa9517702b2c5a777b3989672 | [
"BSD-3-Clause"
] | 4 | 2018-07-04T17:20:12.000Z | 2019-07-14T18:07:25.000Z | sympy/printing/lambdarepr.py | Carreau/sympy | 168de33bb177936fa9517702b2c5a777b3989672 | [
"BSD-3-Clause"
] | null | null | null | sympy/printing/lambdarepr.py | Carreau/sympy | 168de33bb177936fa9517702b2c5a777b3989672 | [
"BSD-3-Clause"
] | 1 | 2018-09-03T03:02:06.000Z | 2018-09-03T03:02:06.000Z | from __future__ import print_function, division
from .str import StrPrinter
from sympy.utilities import default_sort_key
class LambdaPrinter(StrPrinter):
"""
This printer converts expressions into strings that can be used by
lambdify.
"""
def _print_MatrixBase(self, expr):
return "%s(%s)" % (expr.__class__.__name__,
self._print((expr.tolist())))
_print_SparseMatrix = \
_print_MutableSparseMatrix = \
_print_ImmutableSparseMatrix = \
_print_Matrix = \
_print_DenseMatrix = \
_print_MutableDenseMatrix = \
_print_ImmutableMatrix = \
_print_ImmutableDenseMatrix = \
_print_MatrixBase
def _print_Piecewise(self, expr):
result = []
i = 0
for arg in expr.args:
e = arg.expr
c = arg.cond
result.append('((')
result.append(self._print(e))
result.append(') if (')
result.append(self._print(c))
result.append(') else (')
i += 1
result = result[:-1]
result.append(') else None)')
result.append(')'*(2*i - 2))
return ''.join(result)
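The string this method produces is a nested Python conditional expression. For a two-piece `Piecewise((x, x > 0), (0, True))` it has the shape below (written out by hand here, so `x` is just an assumed free symbol), and evaluating it behaves like the original Piecewise:

```python
# Hand-written example of the nested string LambdaPrinter builds.
expr = '((x) if (x > 0) else (((0) if (True) else None)))'

assert eval(expr, {'x': 5}) == 5    # first branch: condition x > 0 holds
assert eval(expr, {'x': -2}) == 0   # falls through to the (0, True) branch
```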
def _print_Sum(self, expr):
loops = (
'for {i} in range({a}, {b}+1)'.format(
i=self._print(i),
a=self._print(a),
b=self._print(b))
for i, a, b in expr.limits)
return '(builtins.sum({function} {loops}))'.format(
function=self._print(expr.function),
loops=' '.join(loops))
def _print_And(self, expr):
result = ['(']
for arg in sorted(expr.args, key=default_sort_key):
result.extend(['(', self._print(arg), ')'])
result.append(' and ')
result = result[:-1]
result.append(')')
return ''.join(result)
def _print_Or(self, expr):
result = ['(']
for arg in sorted(expr.args, key=default_sort_key):
result.extend(['(', self._print(arg), ')'])
result.append(' or ')
result = result[:-1]
result.append(')')
return ''.join(result)
def _print_Not(self, expr):
result = ['(', 'not (', self._print(expr.args[0]), '))']
return ''.join(result)
def _print_BooleanTrue(self, expr):
return "True"
def _print_BooleanFalse(self, expr):
return "False"
def _print_ITE(self, expr):
result = [
'((', self._print(expr.args[1]),
') if (', self._print(expr.args[0]),
') else (', self._print(expr.args[2]), '))'
]
return ''.join(result)
class NumPyPrinter(LambdaPrinter):
"""
Numpy printer which handles vectorized piecewise functions,
logical operators, etc.
"""
_default_settings = {
"order": "none",
"full_prec": "auto",
}
def _print_seq(self, seq, delimiter=', '):
"General sequence printer: converts to tuple"
# Print tuples here instead of lists because numba supports
# tuples in nopython mode.
return '({},)'.format(delimiter.join(self._print(item) for item in seq))
def _print_MatMul(self, expr):
"Matrix multiplication printer"
return '({0})'.format(').dot('.join(self._print(i) for i in expr.args))
def _print_DotProduct(self, expr):
# DotProduct allows any shape order, but numpy.dot does matrix
# multiplication, so we have to make sure it gets 1 x n by n x 1.
arg1, arg2 = expr.args
if arg1.shape[0] != 1:
arg1 = arg1.T
if arg2.shape[1] != 1:
arg2 = arg2.T
return "dot(%s, %s)" % (self._print(arg1), self._print(arg2))
def _print_Piecewise(self, expr):
"Piecewise function printer"
exprs = '[{0}]'.format(','.join(self._print(arg.expr) for arg in expr.args))
conds = '[{0}]'.format(','.join(self._print(arg.cond) for arg in expr.args))
# If [default_value, True] is a (expr, cond) sequence in a Piecewise object
# it will behave the same as passing the 'default' kwarg to select()
# *as long as* it is the last element in expr.args.
# If this is not the case, it may be triggered prematurely.
return 'select({0}, {1}, default=nan)'.format(conds, exprs)
def _print_Relational(self, expr):
"Relational printer for Equality and Unequality"
op = {
'==' :'equal',
'!=' :'not_equal',
'<' :'less',
'<=' :'less_equal',
'>' :'greater',
'>=' :'greater_equal',
}
if expr.rel_op in op:
lhs = self._print(expr.lhs)
rhs = self._print(expr.rhs)
return '{op}({lhs}, {rhs})'.format(op=op[expr.rel_op],
lhs=lhs,
rhs=rhs)
return super(NumPyPrinter, self)._print_Relational(expr)
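The op table works because numpy exposes a named elementwise function for every comparison operator; a standalone check of the equivalence the printer relies on:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([3, 2, 1])

# Each infix comparison has a vectorized named equivalent.
assert np.array_equal(np.less(a, b), a < b)
assert np.array_equal(np.not_equal(a, b), a != b)
assert np.array_equal(np.greater_equal(a, b), a >= b)
```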
def _print_And(self, expr):
"Logical And printer"
# We have to override LambdaPrinter because it uses Python 'and' keyword.
# If LambdaPrinter didn't define it, we could use StrPrinter's
# version of the function and add 'logical_and' to NUMPY_TRANSLATIONS.
return '{0}({1})'.format('logical_and', ','.join(self._print(i) for i in expr.args))
def _print_Or(self, expr):
"Logical Or printer"
# We have to override LambdaPrinter because it uses Python 'or' keyword.
# If LambdaPrinter didn't define it, we could use StrPrinter's
# version of the function and add 'logical_or' to NUMPY_TRANSLATIONS.
return '{0}({1})'.format('logical_or', ','.join(self._print(i) for i in expr.args))
def _print_Not(self, expr):
"Logical Not printer"
# We have to override LambdaPrinter because it uses Python 'not' keyword.
# If LambdaPrinter didn't define it, we would still have to define our
# own because StrPrinter doesn't define it.
return '{0}({1})'.format('logical_not', ','.join(self._print(i) for i in expr.args))
def _print_Min(self, expr):
return '{0}(({1}))'.format('amin', ','.join(self._print(i) for i in expr.args))
def _print_Max(self, expr):
return '{0}(({1}))'.format('amax', ','.join(self._print(i) for i in expr.args))
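The `_print_Relational` dispatch above is just a table lookup with an infix fallback. A minimal stand-alone sketch (the `print_relational` helper and `_REL_OPS` table are illustrative copies, not part of sympy):

```python
_REL_OPS = {
    '==': 'equal', '!=': 'not_equal',
    '<': 'less', '<=': 'less_equal',
    '>': 'greater', '>=': 'greater_equal',
}

def print_relational(rel_op, lhs, rhs):
    # Known operators become numpy-style function calls; anything else
    # falls back to plain infix, mirroring the super() fallback above.
    if rel_op in _REL_OPS:
        return '{0}({1}, {2})'.format(_REL_OPS[rel_op], lhs, rhs)
    return '{0} {1} {2}'.format(lhs, rel_op, rhs)
```

So `x <= y` prints as `less_equal(x, y)`, which numpy evaluates elementwise instead of Python's scalar comparison.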
# numexpr works by altering the string passed to numexpr.evaluate
# rather than by populating a namespace.  Thus a special printer...
class NumExprPrinter(LambdaPrinter):
    # key, value pairs correspond to sympy name and numexpr name
    # functions not appearing in this dict will raise a TypeError
    _numexpr_functions = {
        'sin': 'sin',
        'cos': 'cos',
        'tan': 'tan',
        'asin': 'arcsin',
        'acos': 'arccos',
        'atan': 'arctan',
        'atan2': 'arctan2',
        'sinh': 'sinh',
        'cosh': 'cosh',
        'tanh': 'tanh',
        'asinh': 'arcsinh',
        'acosh': 'arccosh',
        'atanh': 'arctanh',
        'ln': 'log',
        'log': 'log',
        'exp': 'exp',
        'sqrt': 'sqrt',
        'Abs': 'abs',
        'conjugate': 'conj',
        'im': 'imag',
        're': 'real',
        'where': 'where',
        'complex': 'complex',
        'contains': 'contains',
    }

    def _print_ImaginaryUnit(self, expr):
        return '1j'

    def _print_seq(self, seq, delimiter=', '):
        # simplified _print_seq taken from pretty.py
        s = [self._print(item) for item in seq]
        if s:
            return delimiter.join(s)
        else:
            return ""

    def _print_Function(self, e):
        func_name = e.func.__name__
        nstr = self._numexpr_functions.get(func_name, None)
        if nstr is None:
            # check for implemented_function
            if hasattr(e, '_imp_'):
                return "(%s)" % self._print(e._imp_(*e.args))
            else:
                raise TypeError("numexpr does not support function '%s'" %
                                func_name)
        return "%s(%s)" % (nstr, self._print_seq(e.args))

    def blacklisted(self, expr):
        raise TypeError("numexpr cannot be used with %s" %
                        expr.__class__.__name__)

    # blacklist all Matrix printing
    _print_SparseMatrix = \
        _print_MutableSparseMatrix = \
        _print_ImmutableSparseMatrix = \
        _print_Matrix = \
        _print_DenseMatrix = \
        _print_MutableDenseMatrix = \
        _print_ImmutableMatrix = \
        _print_ImmutableDenseMatrix = \
        blacklisted

    # blacklist some python expressions
    _print_list = \
        _print_tuple = \
        _print_Tuple = \
        _print_dict = \
        _print_Dict = \
        blacklisted

    def doprint(self, expr):
        lstr = super(NumExprPrinter, self).doprint(expr)
        return "evaluate('%s', truediv=True)" % lstr
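The core of `_print_Function` is a rename-or-fail translation of sympy function names into numexpr spellings. A hedged, dependency-free sketch of just that idea (the `translate_call` helper and its small table are illustrative, not part of sympy or numexpr):

```python
# Subset of the sympy-name -> numexpr-name table used above (assumed here
# only for illustration).
_NUMEXPR_NAMES = {'asin': 'arcsin', 'acos': 'arccos', 'ln': 'log', 'Abs': 'abs'}

def translate_call(func_name, printed_args):
    # Rename the function if numexpr knows it; otherwise fail loudly,
    # just as _print_Function raises TypeError for unsupported functions.
    nstr = _NUMEXPR_NAMES.get(func_name)
    if nstr is None:
        raise TypeError("numexpr does not support function '%s'" % func_name)
    return "%s(%s)" % (nstr, ', '.join(printed_args))
```

This is why the printer rewrites strings rather than filling a namespace: numexpr compiles the final string itself, so the translation has to happen at print time.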
def lambdarepr(expr, **settings):
    """
    Returns a string usable for lambdifying.
    """
    return LambdaPrinter(settings).doprint(expr)
| 34.139623 | 92 | 0.559191 | 1,044 | 9,047 | 4.672414 | 0.253831 | 0.055351 | 0.0205 | 0.01722 | 0.359574 | 0.299713 | 0.270193 | 0.259943 | 0.235957 | 0.230217 | 0 | 0.007331 | 0.3064 | 9,047 | 264 | 93 | 34.268939 | 0.77004 | 0.20493 | 0 | 0.238342 | 0 | 0 | 0.128107 | 0.003278 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129534 | false | 0 | 0.015544 | 0.031088 | 0.419689 | 0.435233 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
8143df98ebce82100584c4d53ea2d04b4dccafa6 | 3,351 | py | Python | experiments/rpi/gertboard/dtoa.py | willingc/pingo | 0890bf5ed763e9061320093fc3fb5f7543c5cc2c | [
"MIT"
] | null | null | null | experiments/rpi/gertboard/dtoa.py | willingc/pingo | 0890bf5ed763e9061320093fc3fb5f7543c5cc2c | [
"MIT"
] | 1 | 2021-03-20T05:17:03.000Z | 2021-03-20T05:17:03.000Z | experiments/rpi/gertboard/dtoa.py | willingc/pingo | 0890bf5ed763e9061320093fc3fb5f7543c5cc2c | [
"MIT"
] | null | null | null | #!/usr/bin/python2.7
# Python 2.7 version by Alex Eames of http://RasPi.TV
# functionally equivalent to the Gertboard dtoa test by Gert Jan van Loo & Myra VanInwegen
# Use at your own risk - I'm pretty sure the code is harmless, but check it yourself.
# This will not work unless you have installed py-spidev as in the README.txt file
# spi must also be enabled on your system
import spidev
import sys
from time import sleep
board_type = sys.argv[-1]
# reload spi drivers to prevent spi failures
import subprocess
unload_spi = subprocess.Popen('sudo rmmod spi_bcm2708', shell=True, stdout=subprocess.PIPE)
start_spi = subprocess.Popen('sudo modprobe spi_bcm2708', shell=True, stdout=subprocess.PIPE)
sleep(3)
def which_channel():
    channel = raw_input("Which channel do you want to test? Type 0 or 1.\n")  # User inputs channel number
    while not channel.isdigit():                                              # Check valid user input
        channel = raw_input("Try again - just numbers 0 or 1 please!\n")      # Make them do it again if wrong
    return channel
spi = spidev.SpiDev()
spi.open(0,1) # The Gertboard DAC is on SPI channel 1 (CE1 - aka GPIO7)
channel = 3 # set initial value to force user selection
common = [0,0,0,160,240] # 2nd byte common to both channels
voltages = [0.0,0.5,1.02,1.36,2.04] # voltages for display
while not (channel == 1 or channel == 0):   # channel is set by user input
    channel = int(which_channel())          # continue asking until answer 0 or 1 given

if channel == 1:                            # once proper answer given, carry on
    num_list = [176, 180, 184, 186, 191]    # set correct channel-dependent list for byte 1
else:
    num_list = [48, 52, 56, 58, 63]
print "These are the connections for the digital to analogue test:"
if board_type == "m":
    print "jumper connecting GPIO 7 to CSB"
    print "Multimeter connections (set your meter to read V DC):"
    print "  connect black probe to GND"
    print "  connect red probe to DA%d on D/A header" % channel
else:
    print "jumper connecting GP11 to SCLK"
    print "jumper connecting GP10 to MOSI"
    print "jumper connecting GP9 to MISO"
    print "jumper connecting GP7 to CSnB"
    print "Multimeter connections (set your meter to read V DC):"
    print "  connect black probe to GND"
    print "  connect red probe to DA%d on J29" % channel
raw_input("When ready hit enter.\n")
for i in range(5):
    r = spi.xfer2([num_list[i], common[i]])  # write the two bytes to the DAC
    print "Your meter should read about %.2fV" % voltages[i]
    raw_input("When ready hit enter.\n")
r = spi.xfer2([16,0]) # switch off channel A = 00010000 00000000 [16,0]
r = spi.xfer2([144,0]) # switch off channel B = 10010000 00000000 [144,0]
# The DAC is controlled by writing 2 bytes (16 bits) to it.
# So we need to write a 16 bit word to DAC
# bit 15 = channel, bit 14 = ignored, bit 13 =gain, bit 12 = shutdown, bits 11-4 data, bits 3-0 ignored
# You feed spidev a decimal number and it converts it to 8 bit binary
# each argument is a byte (8 bits), so we need two arguments, which together make 16 bits.
# that's what spidev sends to the DAC. If you need to delve further, have a look at the datasheet. :)
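The 16-bit control word described in the closing comments can be assembled with a small helper (hypothetical `dac_word`, not part of the original script; polarity of the shutdown bit is per the comments above, so check the DAC datasheet before relying on it):

```python
def dac_word(channel, gain, shdn_bit, data):
    # Pack the MCP48xx-style control word described above:
    # bit 15 = channel, bit 13 = gain, bit 12 = shutdown bit,
    # bits 11-4 = 8-bit data, bits 3-0 ignored.
    word = (channel << 15) | (gain << 13) | (shdn_bit << 12) | ((data & 0xFF) << 4)
    # spidev wants one argument per byte, so split into high/low bytes.
    return [word >> 8, word & 0xFF]
```

With data = 0, this reproduces the two "switch off" words sent at the end of the script: `dac_word(0, 0, 1, 0)` gives `[16, 0]` and `dac_word(1, 0, 1, 0)` gives `[144, 0]`.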
| 45.90411 | 110 | 0.664279 | 545 | 3,351 | 4.056881 | 0.46789 | 0.024876 | 0.04749 | 0.019901 | 0.150158 | 0.150158 | 0.150158 | 0.091361 | 0.091361 | 0.091361 | 0 | 0.063821 | 0.251865 | 3,351 | 72 | 111 | 46.541667 | 0.818109 | 0.411817 | 0 | 0.177778 | 0 | 0 | 0.341225 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.088889 | null | null | 0.288889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d48ee17b3f638f1522292d248a4e2094be89792e | 1,244 | py | Python | ribbon/exceptions.py | cloutiertyler/RibbonGraph | 000864dd0ee33da4ed44af2f4bd1f1a83d5a1ba4 | [
"MIT"
] | 2 | 2017-09-20T17:49:09.000Z | 2017-09-20T17:55:43.000Z | ribbon/exceptions.py | cloutiertyler/RibbonGraph | 000864dd0ee33da4ed44af2f4bd1f1a83d5a1ba4 | [
"MIT"
] | null | null | null | ribbon/exceptions.py | cloutiertyler/RibbonGraph | 000864dd0ee33da4ed44af2f4bd1f1a83d5a1ba4 | [
"MIT"
] | null | null | null | from rest_framework.exceptions import APIException
from rest_framework import status
class GraphAPIError(APIException):
    """Base class for exceptions in this module."""
    pass


class NodeNotFoundError(GraphAPIError):
    status_code = status.HTTP_404_NOT_FOUND

    def __init__(self, id):
        self.id = id
        super(NodeNotFoundError, self).__init__("Node with id '{}' does not exist.".format(id))


class NodeTypeNotFoundError(GraphAPIError):
    status_code = status.HTTP_404_NOT_FOUND

    def __init__(self, node_type):
        self.node_type = node_type
        super(NodeTypeNotFoundError, self).__init__("Node type '{}' does not exist.".format(node_type))


class MissingNodeTypeError(GraphAPIError):
    """ Creating a node requires a type. """
    status_code = status.HTTP_400_BAD_REQUEST


class MalformedUpdateDictionaryError(GraphAPIError):
    status_code = status.HTTP_400_BAD_REQUEST


class InvalidPropertyError(GraphAPIError):
    status_code = status.HTTP_400_BAD_REQUEST


class InvalidValueError(GraphAPIError):
    status_code = status.HTTP_400_BAD_REQUEST


class PermissionDenied(GraphAPIError):
    status_code = status.HTTP_403_FORBIDDEN
    default_detail = 'Insufficient permissions for the request.'
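The pattern here is a base exception class plus subclasses that differ only in status code and message. A dependency-free sketch of the same shape (plain integers stand in for DRF's `status` constants and `Exception` for `APIException`; the classes below are illustrative, not the module's own):

```python
class GraphAPIError(Exception):
    """Base class: carries an HTTP status code alongside the message."""
    status_code = 500

class NodeNotFoundError(GraphAPIError):
    status_code = 404

    def __init__(self, id):
        # Keep the offending id around for callers, format the message once.
        self.id = id
        super(NodeNotFoundError, self).__init__(
            "Node with id '{}' does not exist.".format(id))
```

Handlers can then catch the base class and map `status_code` straight onto the HTTP response, which is exactly what DRF's exception handler does with `APIException` subclasses.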
| 28.272727 | 103 | 0.762058 | 145 | 1,244 | 6.193103 | 0.351724 | 0.077951 | 0.124722 | 0.155902 | 0.371938 | 0.335189 | 0.335189 | 0.335189 | 0.292873 | 0.122494 | 0 | 0.020019 | 0.156752 | 1,244 | 43 | 104 | 28.930233 | 0.836034 | 0.059486 | 0 | 0.24 | 0 | 0 | 0.089888 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0.04 | 0.08 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d496c9cfdd316aad01a20acdae3c9c7e998fb11f | 887 | py | Python | Matrix/Python/rotatematrix.py | pratika1505/DSA-Path-And-Important-Questions | a86a0774f0abf5151c852afd2bbf67a5368125c8 | [
"MIT"
] | 26 | 2021-08-04T17:03:26.000Z | 2022-03-08T08:43:44.000Z | Matrix/Python/rotatematrix.py | pratika1505/DSA-Path-And-Important-Questions | a86a0774f0abf5151c852afd2bbf67a5368125c8 | [
"MIT"
] | 25 | 2021-08-04T16:58:33.000Z | 2021-11-01T05:26:19.000Z | Matrix/Python/rotatematrix.py | pratika1505/DSA-Path-And-Important-Questions | a86a0774f0abf5151c852afd2bbf67a5368125c8 | [
"MIT"
] | 16 | 2021-08-14T20:15:24.000Z | 2022-02-23T11:04:06.000Z | # -*- coding: utf-8 -*-
"""RotateMatrix.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1LX-dZFuQCyBXDNVosTp0MHaZZxoc5T4I
"""
# Function to rotate a matrix by 90 degrees (clockwise, in place)
def rotate(mat):
    # `N × N` matrix
    N = len(mat)

    # Transpose the matrix
    for i in range(N):
        for j in range(i):
            temp = mat[i][j]
            mat[i][j] = mat[j][i]
            mat[j][i] = temp

    # swap columns
    for i in range(N):
        for j in range(N // 2):
            temp = mat[i][j]
            mat[i][j] = mat[i][N - j - 1]
            mat[i][N - j - 1] = temp
if __name__ == '__main__':
    # Declaring matrix
    mat = [
        [1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]
    ]
    rotate(mat)
    # printing matrix
    for i in mat:
        print(i)
| 19.282609 | 77 | 0.500564 | 129 | 887 | 3.387597 | 0.496124 | 0.05492 | 0.045767 | 0.073227 | 0.208238 | 0.183066 | 0.183066 | 0.183066 | 0.105263 | 0 | 0 | 0.057192 | 0.349493 | 887 | 45 | 78 | 19.711111 | 0.69844 | 0.347238 | 0 | 0.181818 | 1 | 0 | 0.014184 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0 | 0 | 0.045455 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d49e9592c8658910d6180947346f6788ba5fdb29 | 498 | py | Python | tests/assignments/test_assign7.py | acc-cosc-1336/cosc-1336-spring-2018-vcruz350 | 0cee9fde3d4129c51626c4e0c870972aebec9b95 | [
"MIT"
] | null | null | null | tests/assignments/test_assign7.py | acc-cosc-1336/cosc-1336-spring-2018-vcruz350 | 0cee9fde3d4129c51626c4e0c870972aebec9b95 | [
"MIT"
] | 1 | 2018-03-08T19:46:08.000Z | 2018-03-08T20:00:47.000Z | tests/assignments/test_assign7.py | acc-cosc-1336/cosc-1336-spring-2018-vcruz350 | 0cee9fde3d4129c51626c4e0c870972aebec9b95 | [
"MIT"
] | null | null | null | import unittest
#write the import for function for assignment7 sum_list_values
from src.assignments.assignment7 import sum_list_values
class Test_Assign7(unittest.TestCase):

    def sample_test(self):
        self.assertEqual(1, 1)

    # create a test for the sum_list_values function with list elements:
    # bill 23 16 19 22
    def test_sum_w_23_16_19_22(self):
        test_list = ['bill', 23, 16, 19, 22]
        self.assertEqual(80, sum_list_values(test_list))


#unittest.main(verbosity=2)
| 29.294118 | 71 | 0.736948 | 78 | 498 | 4.474359 | 0.461538 | 0.080229 | 0.148997 | 0.068768 | 0.114613 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078624 | 0.182731 | 498 | 16 | 72 | 31.125 | 0.77887 | 0.341365 | 0 | 0 | 0 | 0 | 0.012346 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d49ef05ecf83504c528cca6ff6237271a4f54a56 | 4,957 | py | Python | setec/__init__.py | kgriffs/setec | c6701ffd757cdfe1cfb9c3919b0fd3aa02396f54 | [
"Apache-2.0"
] | null | null | null | setec/__init__.py | kgriffs/setec | c6701ffd757cdfe1cfb9c3919b0fd3aa02396f54 | [
"Apache-2.0"
] | null | null | null | setec/__init__.py | kgriffs/setec | c6701ffd757cdfe1cfb9c3919b0fd3aa02396f54 | [
"Apache-2.0"
] | null | null | null | # Copyright 2018 by Kurt Griffiths
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from base64 import b64decode, b64encode
import msgpack
import nacl.encoding
import nacl.secret
import nacl.signing
import nacl.utils
from .version import __version__ # NOQA
class Signer:
    """Message signer based on Ed25519 and nacl.signing.

    Arguments:
        key (str): Base64-encoded key obtained from keygen()
    """

    __slots__ = ('_signing_key',)

    def __init__(self, skey):
        self._signing_key = nacl.signing.SigningKey(skey, nacl.encoding.Base64Encoder)

    @staticmethod
    def keygen():
        signing_key = nacl.signing.SigningKey.generate()
        return (
            signing_key.encode(nacl.encoding.Base64Encoder).decode(),
            signing_key.verify_key.encode(nacl.encoding.Base64Encoder).decode(),
        )

    @staticmethod
    def vkey(skey):
        signing_key = nacl.signing.SigningKey(skey, nacl.encoding.Base64Encoder)
        return signing_key.verify_key.encode(nacl.encoding.Base64Encoder)

    def signb(self, message):
        """Sign a binary message with its signature attached.

        Arguments:
            message(bytes): Data to sign.

        Returns:
            bytes: Signed message
        """
        return self._signing_key.sign(message)

    def pack(self, doc):
        return b64encode(self.packb(doc)).decode()

    def packb(self, doc):
        packed = msgpack.packb(doc, encoding='utf-8', use_bin_type=True)
        return self.signb(packed)


class Verifier:
    """Signature verifier based on Ed25519 and nacl.signing.

    Arguments:
        key (str): Base64-encoded verify key
    """

    __slots__ = ('_verify_key',)

    def __init__(self, vkey):
        self._verify_key = nacl.signing.VerifyKey(vkey, nacl.encoding.Base64Encoder)

    def verifyb(self, message):
        """Verify a signed binary message.

        Arguments:
            message(bytes): Data to verify.

        Returns:
            bytes: The original message, sans signature.
        """
        return self._verify_key.verify(message)

    def unpack(self, packed):
        return self.unpackb(b64decode(packed))

    def unpackb(self, packed):
        packed = self.verifyb(packed)
        return msgpack.unpackb(packed, raw=False, encoding='utf-8')


class BlackBox:
    """Encryption engine based on PyNaCl's SecretBox (Salsa20/Poly1305).

    Warning per the SecretBox docs:

        Once you’ve decrypted the message you’ve demonstrated the ability to
        create arbitrary valid messages, so messages you send are repudiable.
        For non-repudiable messages, sign them after encryption.

    (See also: https://pynacl.readthedocs.io/en/stable/signing)

    Arguments:
        key (str): Base64-encoded key obtained from keygen()
    """

    __slots__ = ('_box',)

    def __init__(self, key):
        self._box = nacl.secret.SecretBox(b64decode(key))

    @staticmethod
    def keygen():
        return b64encode(nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)).decode()

    def encrypt(self, doc, signer=None):
        """Serialize and encrypt a document to Base64-encoded ciphertext.

        Arguments:
            doc: The string, dict, array, or other JSON-compatible
                object to serialize and encrypt.

        Keyword Arguments:
            signer: An instance of Signer to use in signing the result. If
                not provided, the ciphertext is not signed.

        Returns:
            str: Ciphertext
        """
        data = msgpack.packb(doc, encoding='utf-8', use_bin_type=True)
        ciphertext = self._box.encrypt(data)
        if signer:
            ciphertext = signer.signb(ciphertext)

        return b64encode(ciphertext).decode()

    def decrypt(self, ciphertext, verifier=None):
        """Unpack Base64-encoded ciphertext.

        Arguments:
            ciphertext (bytes): Ciphertext to decrypt and deserialize.

        Keyword Arguments:
            verifier: An instance of Verifier to use in verifying the
                signed ciphertext. If not provided, the ciphertext is
                assumed to be unsigned.

        Returns:
            doc: Deserialized JSON-compatible object.
        """
        ciphertext = b64decode(ciphertext)
        if verifier:
            ciphertext = verifier.verifyb(ciphertext)

        data = self._box.decrypt(ciphertext)
        return msgpack.unpackb(data, raw=False, encoding='utf-8')
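The SecretBox warning above ("for non-repudiable messages, sign them after encryption") is an authenticate-the-ciphertext pattern. A dependency-free analogue using stdlib HMAC — an encrypt-then-MAC sketch for illustration only, not the Ed25519 signing this module actually performs:

```python
import base64
import hashlib
import hmac

def mac_then_b64(ciphertext, key):
    # Authenticate the ciphertext (not the plaintext), then Base64-encode
    # tag + ciphertext, mirroring signb() wrapping the SecretBox output.
    tag = hmac.new(key, ciphertext, hashlib.sha256).digest()
    return base64.b64encode(tag + ciphertext).decode()

def verify_b64(blob, key):
    # Split off the 32-byte SHA-256 tag and check it in constant time
    # before handing the ciphertext back, like verifyb() does.
    raw = base64.b64decode(blob)
    tag, ciphertext = raw[:32], raw[32:]
    expected = hmac.new(key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad authentication tag")
    return ciphertext
```

Note the design difference: HMAC tags are symmetric, so anyone who can verify can also forge — Ed25519 signatures, as used by `Signer`/`Verifier`, are what make messages non-repudiable.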
| 29.158824 | 86 | 0.656042 | 586 | 4,957 | 5.453925 | 0.320819 | 0.025031 | 0.046934 | 0.020651 | 0.227785 | 0.188673 | 0.156758 | 0.156758 | 0.125469 | 0.087922 | 0 | 0.018413 | 0.254993 | 4,957 | 169 | 87 | 29.331361 | 0.847008 | 0.440387 | 0 | 0.084746 | 0 | 0 | 0.019278 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.237288 | false | 0 | 0.118644 | 0.050847 | 0.644068 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4a0dbe903b46f2ac15b321d70b46c5431fada6b | 4,932 | py | Python | scripts/H5toXMF.py | robertsawko/proteus | 6f1e4c2ca1af85a906b35a5162430006f0343861 | [
"NASA-1.3"
] | null | null | null | scripts/H5toXMF.py | robertsawko/proteus | 6f1e4c2ca1af85a906b35a5162430006f0343861 | [
"NASA-1.3"
] | null | null | null | scripts/H5toXMF.py | robertsawko/proteus | 6f1e4c2ca1af85a906b35a5162430006f0343861 | [
"NASA-1.3"
] | null | null | null |
#import numpy
#import os
#from xml.etree.ElementTree import *
import tables
#from Xdmf import *
def H5toXMF(basename, size, start, finaltime, stride):
    # Open one XMF file per output step
    for step in range(start, finaltime + 1, stride):
        XMFfile = open(basename + "." + str(step) + ".xmf", "w")
        XMFfile.write(r"""<?xml version="1.0" ?>
<!DOCTYPE Xdmf SYSTEM "Xdmf.dtd" []>
<Xdmf Version="2.0" xmlns:xi="http://www.w3.org/2001/XInclude">
 <Domain>""" + "\n")
        XMFfile.write(r'   <Grid GridType="Collection" CollectionType="Spatial">' + "\n")

        for proc in range(0, size):
            filename = "solution.p" + str(proc) + "." + str(step) + ".h5"
            print filename
            f1 = tables.openFile(filename)

            XMFfile.write(r'<Grid GridType="Uniform">' + "\n")
            XMFfile.write(r'  <Time Value="' + str(step) + '" />' + "\n")

            for tmp in f1.root:
                if tmp.name == "elements":
                    XMFfile.write(r'<Topology NumberOfElements="' + str(len(tmp[:])) + '" Type="Tetrahedron">' + "\n")
                    XMFfile.write(r'  <DataItem DataType="Int" Dimensions="' + str(len(tmp[:])) + ' 4" Format="HDF">' + filename + ':/elements</DataItem>' + "\n")
                    XMFfile.write(r'</Topology>' + "\n")

                if tmp.name == "nodes":
                    XMFfile.write(r'<Geometry Type="XYZ">' + "\n")
                    XMFfile.write(r'  <DataItem DataType="Float" Dimensions="' + str(len(tmp[:])) + ' 3" Format="HDF" Precision="8">' + filename + ':/nodes</DataItem>' + "\n")
                    XMFfile.write(r'</Geometry>' + "\n")

                if tmp.name == "u":
                    XMFfile.write(r'<Attribute AttributeType="Scalar" Center="Node" Name="u">' + "\n")
                    XMFfile.write(r'  <DataItem DataType="Float" Dimensions="' + str(len(tmp[:])) + '" Format="HDF" Precision="8">' + filename + ':/u</DataItem>' + "\n")
                    XMFfile.write(r'</Attribute>' + "\n")

                if tmp.name == "v":
                    XMFfile.write(r'<Attribute AttributeType="Scalar" Center="Node" Name="v">' + "\n")
                    XMFfile.write(r'  <DataItem DataType="Float" Dimensions="' + str(len(tmp[:])) + '" Format="HDF" Precision="8">' + filename + ':/v</DataItem>' + "\n")
                    XMFfile.write(r'</Attribute>' + "\n")

                if tmp.name == "w":
                    XMFfile.write(r'<Attribute AttributeType="Scalar" Center="Node" Name="w">' + "\n")
                    XMFfile.write(r'  <DataItem DataType="Float" Dimensions="' + str(len(tmp[:])) + '" Format="HDF" Precision="8">' + filename + ':/w</DataItem>' + "\n")
                    XMFfile.write(r'</Attribute>' + "\n")

                if tmp.name == "p":
                    XMFfile.write(r'<Attribute AttributeType="Scalar" Center="Node" Name="p">' + "\n")
                    XMFfile.write(r'  <DataItem DataType="Float" Dimensions="' + str(len(tmp[:])) + '" Format="HDF" Precision="8">' + filename + ':/p</DataItem>' + "\n")
                    XMFfile.write(r'</Attribute>' + "\n")

                if tmp.name == "phid":
                    XMFfile.write(r'<Attribute AttributeType="Scalar" Center="Node" Name="phid">' + "\n")
                    XMFfile.write(r'  <DataItem DataType="Float" Dimensions="' + str(len(tmp[:])) + '" Format="HDF" Precision="8">' + filename + ':/phid</DataItem>' + "\n")
                    XMFfile.write(r'</Attribute>' + "\n")

            f1.close()
            XMFfile.write('   </Grid>' + "\n")

        XMFfile.write('   </Grid>' + "\n")
        XMFfile.write(' </Domain>' + "\n")
        XMFfile.write(' </Xdmf>' + "\n")
        XMFfile.close()
if __name__ == '__main__':
    from optparse import OptionParser
    usage = ""
    parser = OptionParser(usage=usage)
    parser.add_option("-n", "--size",
                      help="number of processors for run",
                      action="store",
                      type="int",
                      dest="size",
                      default=1)
    parser.add_option("-s", "--stride",
                      help="stride for solution output",
                      action="store",
                      type="int",
                      dest="stride",
                      default=0)
    parser.add_option("-t", "--finaltime",
                      help="finaltime",
                      action="store",
                      type="int",
                      dest="finaltime",
                      default=1000)
    parser.add_option("-f", "--filebase_flow",
                      help="base name for storage files",
                      action="store",
                      type="string",
                      dest="filebase",
                      default="solution")
    (opts, args) = parser.parse_args()

    start = 0
    if opts.stride == 0:
        start = opts.finaltime
        opts.stride = 1

    H5toXMF(opts.filebase, opts.size, start, opts.finaltime, opts.stride)
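The per-field branches in `H5toXMF` all emit the same `<Attribute>` shape, differing only in the field name and array length. A hypothetical refactoring helper (not part of the original script) that builds one such block as a string:

```python
def xdmf_scalar_attribute(name, length, filename):
    # One <Attribute> block like each repeated branch above: a scalar
    # node-centered field whose data lives at filename:/name in the HDF5 file.
    return ('<Attribute AttributeType="Scalar" Center="Node" Name="%s">\n'
            '  <DataItem DataType="Float" Dimensions="%d" Format="HDF" '
            'Precision="8">%s:/%s</DataItem>\n'
            '</Attribute>\n') % (name, length, filename, name)
```

With this helper, the u/v/w/p/phid branches collapse to a single loop over the field names present in `f1.root`.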
| 42.153846 | 172 | 0.491484 | 507 | 4,932 | 4.753452 | 0.242604 | 0.144398 | 0.134855 | 0.092946 | 0.518257 | 0.417427 | 0.385892 | 0.372614 | 0.372614 | 0.258506 | 0 | 0.010098 | 0.317315 | 4,932 | 116 | 173 | 42.517241 | 0.705673 | 0.018045 | 0 | 0.166667 | 0 | 0.011905 | 0.336435 | 0.036601 | 0.202381 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.02381 | null | null | 0.011905 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4a7d95a9f223064052da15a9a7a9eecfe46cfa7 | 3,810 | py | Python | atmosphere/custom_activity/base_class.py | ambiata/atmosphere-python-sdk | 48880a8553000cdea59d63b0fba49e1f0f482784 | [
"MIT"
] | null | null | null | atmosphere/custom_activity/base_class.py | ambiata/atmosphere-python-sdk | 48880a8553000cdea59d63b0fba49e1f0f482784 | [
"MIT"
] | 9 | 2021-02-21T21:53:03.000Z | 2021-11-05T06:06:55.000Z | atmosphere/custom_activity/base_class.py | ambiata/atmosphere-python-sdk | 48880a8553000cdea59d63b0fba49e1f0f482784 | [
"MIT"
] | null | null | null | from abc import ABC, abstractmethod
from typing import Tuple
from requests import Response
from .pydantic_models import (AppliedExclusionConditionsResponse,
BiasAttributeConfigListResponse,
ComputeRewardResponse, DefaultPredictionResponse,
ExclusionRuleConditionListResponse,
PredictionResponsePayloadFormatListResponse)
class BaseActivityCustomCode(ABC):
"""
The main class of this repository: the one to be implemented
"""
is_for_mocker: bool
def __init__(self, is_for_mocker: bool = False):
self.is_for_mocker = is_for_mocker
@abstractmethod
def validate_prediction_request(self, prediction_request: dict) -> None:
"""Raise a ValidationError if the received prediction request is not valid"""
@abstractmethod
def validate_outcome_request(self, outcome_request: dict) -> None:
"""Raise a ValidationError if the received outcome request is not valid"""
@abstractmethod
def compute_reward(self, outcome_request: dict) -> ComputeRewardResponse:
"""From an outcome, compute the reward"""
@abstractmethod
def get_module_version(self) -> str:
"""Return the version of the module."""
@abstractmethod
def send_mock_prediction_request(
self, url_prediction_endpoint: str
) -> Tuple[Response, dict]:
"""
Send a mock request to the provided url and returns the corresponding response
with extra information if required for computing the prediction.
The response and dictionary will be provided to
the `send_mock_outcome_request`.
"""
@abstractmethod
def send_mock_outcome_request(
self,
url_outcome_endpoint: str,
prediction_response: Response,
info_from_prediction: dict,
) -> Response:
"""
Send a mock request to the provided url and returns the corresponding response.
Provide the prediction response and extra information created while
creating the prediction request from `send_mock_prediction_request`.
"""
def get_prediction_response_payload_formats(
self,
) -> PredictionResponsePayloadFormatListResponse:
"""
Return the list of available format of the prediction payload.
Every format should have a name and a description
The name of the format should be unique.
"""
return {"prediction_response_payload_formats": []}
def format_prediction_payload_response(
self,
default_prediction_response: DefaultPredictionResponse,
payload_format: str, # noqa pylint: disable=unused-argument
) -> dict:
"""
You can format the prediction the way you want based
on the information returned by default
"""
return default_prediction_response
def get_exclusion_rule_conditions(self) -> ExclusionRuleConditionListResponse:
"""
Define the exclusion rules for the activity
"""
return ExclusionRuleConditionListResponse(exclusion_rule_conditions=[])
def get_applied_exclusion_conditions(
self, prediction_request: dict # noqa pylint: disable=unused-argument
) -> AppliedExclusionConditionsResponse:
"""
Define the exclusion rules for the activity
"""
return AppliedExclusionConditionsResponse(applied_exclusion_conditions=[])
def get_bias_attribute_configs(self) -> BiasAttributeConfigListResponse:
"""
Define the bias attribute configs, these decide which attributes may be
used by atmospherex as bias attributes
"""
return BiasAttributeConfigListResponse(bias_attribute_configs=[])
| 36.634615 | 87 | 0.684777 | 379 | 3,810 | 6.691293 | 0.313984 | 0.046924 | 0.01735 | 0.01183 | 0.175868 | 0.15142 | 0.124606 | 0.124606 | 0.090694 | 0.05205 | 0 | 0 | 0.25643 | 3,810 | 103 | 88 | 36.990291 | 0.895164 | 0.323622 | 0 | 0.183673 | 0 | 0 | 0.015171 | 0.015171 | 0 | 0 | 0 | 0 | 0 | 1 | 0.244898 | false | 0 | 0.081633 | 0 | 0.469388 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4ade5ab9af89265fbd2d849b58156e138f3d82c | 452 | py | Python | grocery/migrations/0003_alter_item_comments.py | akshay-kapase/shopping | 7bf3bac4a78d07bca9a9f9d44d85e11bb826a366 | [
"MIT"
] | null | null | null | grocery/migrations/0003_alter_item_comments.py | akshay-kapase/shopping | 7bf3bac4a78d07bca9a9f9d44d85e11bb826a366 | [
"MIT"
] | null | null | null | grocery/migrations/0003_alter_item_comments.py | akshay-kapase/shopping | 7bf3bac4a78d07bca9a9f9d44d85e11bb826a366 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.6 on 2021-09-03 15:48
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('grocery', '0002_alter_item_comments'),
]
operations = [
migrations.AlterField(
model_name='item',
name='comments',
field=models.CharField(blank=True, default='null', max_length=200),
preserve_default=False,
),
]
| 22.6 | 79 | 0.606195 | 49 | 452 | 5.469388 | 0.816327 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067485 | 0.278761 | 452 | 19 | 80 | 23.789474 | 0.754601 | 0.099558 | 0 | 0 | 1 | 0 | 0.116049 | 0.059259 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4ae07ad4070643d0ba3b0f74c8b5ba6215fad3c | 2,770 | py | Python | projects/objects/buildings/protos/textures/colored_textures/textures_generator.py | yjf18340/webots | 60d441c362031ab8fde120cc0cd97bdb1a31a3d5 | [
"Apache-2.0"
] | 1 | 2019-11-13T08:12:02.000Z | 2019-11-13T08:12:02.000Z | projects/objects/buildings/protos/textures/colored_textures/textures_generator.py | chinakwy/webots | 7c35a359848bafe81fe0229ac2ed587528f4c73e | [
"Apache-2.0"
] | null | null | null | projects/objects/buildings/protos/textures/colored_textures/textures_generator.py | chinakwy/webots | 7c35a359848bafe81fe0229ac2ed587528f4c73e | [
"Apache-2.0"
] | 1 | 2020-09-25T02:01:45.000Z | 2020-09-25T02:01:45.000Z | #!/usr/bin/env python
# Copyright 1996-2019 Cyberbotics Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Generate textures prepared for OSM, based on image templates."""
import glob
import os
from PIL import Image
# change directory to this script directory in order to allow this script to be called from another directory.
os.chdir(os.path.dirname(os.path.realpath(__file__)))
# get all the template files in put them in a list of tuples
templates = []
for f in glob.glob("*_diffuse_template.jpg"):
templates.append((f, f.replace('_diffuse_', '_color_mask_')))
# target colors
# ref: http://wiki.openstreetmap.org/wiki/Key:colour
# TODO: is it sufficient?
colors = {
'000000': (0.0, 0.0, 0.0),
'FFFFFF': (0.84, 0.84, 0.84),
'808080': (0.4, 0.4, 0.4),
'C0C0C0': (0.65, 0.65, 0.65),
'800000': (0.4, 0.15, 0.15),
'FF0000': (0.45, 0.0, 0.0),
'808000': (0.4, 0.4, 0.2),
'FFFF00': (0.7, 0.6, 0.15),
'008000': (0.15, 0.3, 0.15),
'00FF00': (0.55, 0.69, 0.52),
'008080': (0.15, 0.3, 0.3),
'00FFFF': (0.6, 0.7, 0.7),
'000080': (0.2, 0.2, 0.3),
'0000FF': (0.4, 0.4, 0.75),
'800080': (0.5, 0.4, 0.5),
'FF00FF': (0.9, 0.75, 0.85),
'F5DEB3': (0.83, 0.78, 0.65),
'8B4513': (0.3, 0.1, 0.05)
}
effectFactor = 0.5 # power of the effect, found empirically
# foreach template
for template in templates:
# load the templates
diffuse = Image.open(template[0])
mask = Image.open(template[1])
assert diffuse.size == mask.size
width, height = diffuse.size
# create an image per color
for colorString, color in colors.iteritems():
image = Image.new('RGB', diffuse.size)
pixels = image.load()
for x in range(height):
for y in range(width):
dR, dG, dB = diffuse.getpixel((x, y))
mR, mG, mB = mask.getpixel((x, y))
r = dR + int(255.0 * (mR / 255.0) * (color[0] * 2.0 - 1.0) * effectFactor)
g = dG + int(255.0 * (mG / 255.0) * (color[1] * 2.0 - 1.0) * effectFactor)
b = dB + int(255.0 * (mB / 255.0) * (color[2] * 2.0 - 1.0) * effectFactor)
pixels[x, y] = (r, g, b)
image.save(template[0].replace('_diffuse_template', '_' + colorString))
| 35.063291 | 110 | 0.605415 | 450 | 2,770 | 3.695556 | 0.42 | 0.010824 | 0.014432 | 0.009621 | 0.048707 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123358 | 0.230325 | 2,770 | 78 | 111 | 35.512821 | 0.65666 | 0.36065 | 0 | 0 | 0 | 0 | 0.09868 | 0.012622 | 0 | 0 | 0 | 0.012821 | 0.022222 | 1 | 0 | false | 0 | 0.066667 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4b1cf0c1cabef461b1902ca1dbcbf5165c73bc9 | 45,496 | py | Python | rpython/memory/test/test_transformed_gc.py | jptomo/pypy-lang-scheme | 55edb2cec69d78f86793282a4566fcbc1ef9fcac | [
"MIT"
] | 1 | 2019-11-25T10:52:01.000Z | 2019-11-25T10:52:01.000Z | rpython/memory/test/test_transformed_gc.py | jptomo/pypy-lang-scheme | 55edb2cec69d78f86793282a4566fcbc1ef9fcac | [
"MIT"
] | null | null | null | rpython/memory/test/test_transformed_gc.py | jptomo/pypy-lang-scheme | 55edb2cec69d78f86793282a4566fcbc1ef9fcac | [
"MIT"
] | null | null | null | import py
import inspect
from rpython.rlib.objectmodel import compute_hash, compute_identity_hash
from rpython.translator.c import gc
from rpython.annotator import model as annmodel
from rpython.rtyper.llannotation import SomePtr
from rpython.rtyper.lltypesystem import lltype, llmemory, rffi, llgroup
from rpython.memory.gctransform import framework, shadowstack
from rpython.rtyper.lltypesystem.lloperation import llop, void
from rpython.rlib.objectmodel import compute_unique_id, we_are_translated
from rpython.rlib.debug import ll_assert
from rpython.rlib import rgc
from rpython.conftest import option
from rpython.rlib.rstring import StringBuilder
from rpython.rlib.rarithmetic import LONG_BIT
WORD = LONG_BIT // 8
def rtype(func, inputtypes, specialize=True, gcname='ref',
backendopt=False, **extraconfigopts):
from rpython.translator.translator import TranslationContext
t = TranslationContext()
# XXX XXX XXX mess
t.config.translation.gc = gcname
t.config.translation.gcremovetypeptr = True
t.config.set(**extraconfigopts)
ann = t.buildannotator()
ann.build_types(func, inputtypes)
if specialize:
t.buildrtyper().specialize()
if backendopt:
from rpython.translator.backendopt.all import backend_optimizations
backend_optimizations(t)
if option.view:
t.viewcg()
return t
ARGS = lltype.FixedSizeArray(lltype.Signed, 3)
class GCTest(object):
gcpolicy = None
GC_CAN_MOVE = False
taggedpointers = False
def setup_class(cls):
cls.marker = lltype.malloc(rffi.CArray(lltype.Signed), 1,
flavor='raw', zero=True)
funcs0 = []
funcs2 = []
cleanups = []
name_to_func = {}
mixlevelstuff = []
for fullname in dir(cls):
if not fullname.startswith('define'):
continue
definefunc = getattr(cls, fullname)
_, name = fullname.split('_', 1)
func_fixup = definefunc.im_func(cls)
cleanup = None
if isinstance(func_fixup, tuple):
func, cleanup, fixup = func_fixup
mixlevelstuff.append(fixup)
else:
func = func_fixup
func.func_name = "f_%s" % name
if cleanup:
cleanup.func_name = "clean_%s" % name
nargs = len(inspect.getargspec(func)[0])
name_to_func[name] = len(funcs0)
if nargs == 2:
funcs2.append(func)
funcs0.append(None)
elif nargs == 0:
funcs0.append(func)
funcs2.append(None)
else:
raise NotImplementedError(
"defined test functions should have 0/2 arguments")
# used to let test cleanup static root pointing to runtime
# allocated stuff
cleanups.append(cleanup)
def entrypoint(args):
num = args[0]
func = funcs0[num]
if func:
res = func()
else:
func = funcs2[num]
res = func(args[1], args[2])
cleanup = cleanups[num]
if cleanup:
cleanup()
return res
from rpython.translator.c.genc import CStandaloneBuilder
s_args = SomePtr(lltype.Ptr(ARGS))
t = rtype(entrypoint, [s_args], gcname=cls.gcname,
taggedpointers=cls.taggedpointers)
for fixup in mixlevelstuff:
if fixup:
fixup(t)
cbuild = CStandaloneBuilder(t, entrypoint, config=t.config,
gcpolicy=cls.gcpolicy)
db = cbuild.generate_graphs_for_llinterp()
entrypointptr = cbuild.getentrypointptr()
entrygraph = entrypointptr._obj.graph
if option.view:
t.viewcg()
cls.name_to_func = name_to_func
cls.entrygraph = entrygraph
cls.rtyper = t.rtyper
cls.db = db
def runner(self, name, transformer=False):
db = self.db
name_to_func = self.name_to_func
entrygraph = self.entrygraph
from rpython.rtyper.llinterp import LLInterpreter
llinterp = LLInterpreter(self.rtyper)
gct = db.gctransformer
if self.__class__.__dict__.get('_used', False):
teardowngraph = gct.frameworkgc__teardown_ptr.value._obj.graph
llinterp.eval_graph(teardowngraph, [])
self.__class__._used = True
# FIIIIISH
setupgraph = gct.frameworkgc_setup_ptr.value._obj.graph
# setup => resets the gc
llinterp.eval_graph(setupgraph, [])
def run(args):
ll_args = lltype.malloc(ARGS, immortal=True)
ll_args[0] = name_to_func[name]
for i in range(len(args)):
ll_args[1+i] = args[i]
res = llinterp.eval_graph(entrygraph, [ll_args])
return res
if transformer:
return run, gct
else:
return run
class GenericGCTests(GCTest):
GC_CAN_SHRINK_ARRAY = False
def define_instances(cls):
class A(object):
pass
class B(A):
def __init__(self, something):
self.something = something
def malloc_a_lot():
i = 0
first = None
while i < 10:
i += 1
a = somea = A()
a.last = first
first = a
j = 0
while j < 30:
b = B(somea)
b.last = first
j += 1
return 0
return malloc_a_lot
def test_instances(self):
run = self.runner("instances")
run([])
def define_llinterp_lists(cls):
def malloc_a_lot():
i = 0
while i < 10:
i += 1
a = [1] * 10
j = 0
while j < 30:
j += 1
a.append(j)
return 0
return malloc_a_lot
def test_llinterp_lists(self):
run = self.runner("llinterp_lists")
run([])
def define_llinterp_tuples(cls):
def malloc_a_lot():
i = 0
while i < 10:
i += 1
a = (1, 2, i)
b = [a] * 10
j = 0
while j < 20:
j += 1
b.append((1, j, i))
return 0
return malloc_a_lot
def test_llinterp_tuples(self):
run = self.runner("llinterp_tuples")
run([])
def define_llinterp_dict(self):
class A(object):
pass
def malloc_a_lot():
i = 0
while i < 10:
i += 1
a = (1, 2, i)
b = {a: A()}
j = 0
while j < 20:
j += 1
b[1, j, i] = A()
return 0
return malloc_a_lot
def test_llinterp_dict(self):
run = self.runner("llinterp_dict")
run([])
def skipdefine_global_list(cls):
gl = []
class Box:
def __init__(self):
self.lst = gl
box = Box()
def append_to_list(i, j):
box.lst.append([i] * 50)
llop.gc__collect(lltype.Void)
return box.lst[j][0]
return append_to_list, None, None
def test_global_list(self):
py.test.skip("doesn't fit in the model, tested elsewhere too")
run = self.runner("global_list")
res = run([0, 0])
assert res == 0
for i in range(1, 5):
res = run([i, i - 1])
assert res == i - 1 # crashes if constants are not considered roots
def define_string_concatenation(cls):
def concat(j, dummy):
lst = []
for i in range(j):
lst.append(str(i))
return len("".join(lst))
return concat
def test_string_concatenation(self):
run = self.runner("string_concatenation")
res = run([100, 0])
assert res == len(''.join([str(x) for x in range(100)]))
def define_nongc_static_root(cls):
T1 = lltype.GcStruct("C", ('x', lltype.Signed))
T2 = lltype.Struct("C", ('p', lltype.Ptr(T1)))
static = lltype.malloc(T2, immortal=True)
def f():
t1 = lltype.malloc(T1)
t1.x = 42
static.p = t1
llop.gc__collect(lltype.Void)
return static.p.x
def cleanup():
static.p = lltype.nullptr(T1)
return f, cleanup, None
def test_nongc_static_root(self):
run = self.runner("nongc_static_root")
res = run([])
assert res == 42
def define_finalizer(cls):
class B(object):
pass
b = B()
b.nextid = 0
b.num_deleted = 0
class A(object):
def __init__(self):
self.id = b.nextid
b.nextid += 1
def __del__(self):
b.num_deleted += 1
def f(x, y):
a = A()
i = 0
while i < x:
i += 1
a = A()
llop.gc__collect(lltype.Void)
llop.gc__collect(lltype.Void)
return b.num_deleted
return f
def test_finalizer(self):
run = self.runner("finalizer")
res = run([5, 42]) #XXX pure laziness here too
assert res == 6
def define_finalizer_calls_malloc(cls):
class B(object):
pass
b = B()
b.nextid = 0
b.num_deleted = 0
class AAA(object):
def __init__(self):
self.id = b.nextid
b.nextid += 1
def __del__(self):
b.num_deleted += 1
C()
class C(AAA):
def __del__(self):
b.num_deleted += 1
def f(x, y):
a = AAA()
i = 0
while i < x:
i += 1
a = AAA()
llop.gc__collect(lltype.Void)
llop.gc__collect(lltype.Void)
return b.num_deleted
return f
def test_finalizer_calls_malloc(self):
run = self.runner("finalizer_calls_malloc")
res = run([5, 42]) #XXX pure laziness here too
assert res == 12
def define_finalizer_resurrects(cls):
class B(object):
pass
b = B()
b.nextid = 0
b.num_deleted = 0
class A(object):
def __init__(self):
self.id = b.nextid
b.nextid += 1
def __del__(self):
b.num_deleted += 1
b.a = self
def f(x, y):
a = A()
i = 0
while i < x:
i += 1
a = A()
llop.gc__collect(lltype.Void)
llop.gc__collect(lltype.Void)
aid = b.a.id
b.a = None
# check that __del__ is not called again
llop.gc__collect(lltype.Void)
llop.gc__collect(lltype.Void)
return b.num_deleted * 10 + aid + 100 * (b.a is None)
return f
def test_finalizer_resurrects(self):
run = self.runner("finalizer_resurrects")
res = run([5, 42]) #XXX pure laziness here too
assert 160 <= res <= 165
def define_custom_trace(cls):
#
S = lltype.GcStruct('S', ('x', llmemory.Address))
T = lltype.GcStruct('T', ('z', lltype.Signed))
offset_of_x = llmemory.offsetof(S, 'x')
def customtrace(gc, obj, callback, arg):
gc._trace_callback(callback, arg, obj + offset_of_x)
lambda_customtrace = lambda: customtrace
#
def setup():
rgc.register_custom_trace_hook(S, lambda_customtrace)
tx = lltype.malloc(T)
tx.z = 4243
s1 = lltype.malloc(S)
s1.x = llmemory.cast_ptr_to_adr(tx)
return s1
def f():
s1 = setup()
llop.gc__collect(lltype.Void)
return llmemory.cast_adr_to_ptr(s1.x, lltype.Ptr(T)).z
return f
def test_custom_trace(self):
run = self.runner("custom_trace")
res = run([])
assert res == 4243
def define_weakref(cls):
import weakref, gc
class A(object):
pass
def g():
a = A()
return weakref.ref(a)
def f():
a = A()
ref = weakref.ref(a)
result = ref() is a
ref = g()
llop.gc__collect(lltype.Void)
result = result and (ref() is None)
# check that a further collection is fine
llop.gc__collect(lltype.Void)
result = result and (ref() is None)
return result
return f
def test_weakref(self):
run = self.runner("weakref")
res = run([])
assert res
def define_weakref_to_object_with_finalizer(cls):
import weakref, gc
class A(object):
count = 0
a = A()
class B(object):
def __del__(self):
a.count += 1
def g():
b = B()
return weakref.ref(b)
def f():
ref = g()
llop.gc__collect(lltype.Void)
llop.gc__collect(lltype.Void)
result = a.count == 1 and (ref() is None)
return result
return f
def test_weakref_to_object_with_finalizer(self):
run = self.runner("weakref_to_object_with_finalizer")
res = run([])
assert res
def define_collect_during_collect(cls):
class B(object):
pass
b = B()
b.nextid = 1
b.num_deleted = 0
b.num_deleted_c = 0
class A(object):
def __init__(self):
self.id = b.nextid
b.nextid += 1
def __del__(self):
llop.gc__collect(lltype.Void)
b.num_deleted += 1
C()
C()
class C(A):
def __del__(self):
b.num_deleted += 1
b.num_deleted_c += 1
def f(x, y):
persistent_a1 = A()
persistent_a2 = A()
i = 0
while i < x:
i += 1
a = A()
persistent_a3 = A()
persistent_a4 = A()
llop.gc__collect(lltype.Void)
llop.gc__collect(lltype.Void)
b.bla = persistent_a1.id + persistent_a2.id + persistent_a3.id + persistent_a4.id
# NB print would create a static root!
llop.debug_print(lltype.Void, b.num_deleted_c)
return b.num_deleted
return f
def test_collect_during_collect(self):
run = self.runner("collect_during_collect")
# runs collect recursively 4 times
res = run([4, 42]) #XXX pure laziness here too
assert res == 12
def define_collect_0(cls):
def concat(j, dummy):
lst = []
for i in range(j):
lst.append(str(i))
result = len("".join(lst))
if we_are_translated():
llop.gc__collect(lltype.Void, 0)
return result
return concat
def test_collect_0(self):
run = self.runner("collect_0")
res = run([100, 0])
assert res == len(''.join([str(x) for x in range(100)]))
def define_interior_ptrs(cls):
from rpython.rtyper.lltypesystem.lltype import Struct, GcStruct, GcArray
from rpython.rtyper.lltypesystem.lltype import Array, Signed, malloc
S1 = Struct("S1", ('x', Signed))
T1 = GcStruct("T1", ('s', S1))
def f1():
t = malloc(T1)
t.s.x = 1
return t.s.x
S2 = Struct("S2", ('x', Signed))
T2 = GcArray(S2)
def f2():
t = malloc(T2, 1)
t[0].x = 1
return t[0].x
S3 = Struct("S3", ('x', Signed))
T3 = GcStruct("T3", ('items', Array(S3)))
def f3():
t = malloc(T3, 1)
t.items[0].x = 1
return t.items[0].x
S4 = Struct("S4", ('x', Signed))
T4 = Struct("T4", ('s', S4))
U4 = GcArray(T4)
def f4():
u = malloc(U4, 1)
u[0].s.x = 1
return u[0].s.x
S5 = Struct("S5", ('x', Signed))
T5 = GcStruct("T5", ('items', Array(S5)))
def f5():
t = malloc(T5, 1)
return len(t.items)
T6 = GcStruct("T6", ('s', Array(Signed)))
def f6():
t = malloc(T6, 1)
t.s[0] = 1
return t.s[0]
def func():
return (f1() * 100000 +
f2() * 10000 +
f3() * 1000 +
f4() * 100 +
f5() * 10 +
f6())
assert func() == 111111
return func
def test_interior_ptrs(self):
run = self.runner("interior_ptrs")
res = run([])
assert res == 111111
def define_id(cls):
class A(object):
pass
a1 = A()
def func():
a2 = A()
a3 = A()
id1 = compute_unique_id(a1)
id2 = compute_unique_id(a2)
id3 = compute_unique_id(a3)
llop.gc__collect(lltype.Void)
error = 0
if id1 != compute_unique_id(a1): error += 1
if id2 != compute_unique_id(a2): error += 2
if id3 != compute_unique_id(a3): error += 4
return error
return func
def test_id(self):
run = self.runner("id")
res = run([])
assert res == 0
def define_can_move(cls):
TP = lltype.GcArray(lltype.Float)
def func():
return rgc.can_move(lltype.malloc(TP, 1))
return func
def test_can_move(self):
run = self.runner("can_move")
res = run([])
assert res == self.GC_CAN_MOVE
def define_shrink_array(cls):
from rpython.rtyper.lltypesystem.rstr import STR
def f():
ptr = lltype.malloc(STR, 3)
ptr.hash = 0x62
ptr.chars[0] = '0'
ptr.chars[1] = 'B'
ptr.chars[2] = 'C'
ptr2 = rgc.ll_shrink_array(ptr, 2)
return ((ptr == ptr2) +
ord(ptr2.chars[0]) +
(ord(ptr2.chars[1]) << 8) +
(len(ptr2.chars) << 16) +
(ptr2.hash << 24))
return f
def test_shrink_array(self):
run = self.runner("shrink_array")
if self.GC_CAN_SHRINK_ARRAY:
expected = 0x62024231
else:
expected = 0x62024230
assert run([]) == expected
def define_string_builder_over_allocation(cls):
import gc
def fn():
s = StringBuilder(4)
s.append("abcd")
s.append("defg")
s.append("rty")
s.append_multiple_char('y', 1000)
gc.collect()
s.append_multiple_char('y', 1000)
res = s.build()[1000]
gc.collect()
return ord(res)
return fn
def test_string_builder_over_allocation(self):
fn = self.runner("string_builder_over_allocation")
res = fn([])
assert res == ord('y')
class GenericMovingGCTests(GenericGCTests):
GC_CAN_MOVE = True
GC_CAN_TEST_ID = False
def define_many_ids(cls):
class A(object):
pass
def f():
from rpython.rtyper.lltypesystem import rffi
alist = [A() for i in range(50)]
idarray = lltype.malloc(rffi.SIGNEDP.TO, len(alist), flavor='raw')
# Compute the id of all the elements of the list. The goal is
# to not allocate memory, so that if the GC needs memory to
# remember the ids, it will trigger some collections itself
i = 0
while i < len(alist):
idarray[i] = compute_unique_id(alist[i])
i += 1
j = 0
while j < 2:
if j == 1: # allocate some stuff between the two iterations
[A() for i in range(20)]
i = 0
while i < len(alist):
assert idarray[i] == compute_unique_id(alist[i])
i += 1
j += 1
lltype.free(idarray, flavor='raw')
return 0
return f
def test_many_ids(self):
if not self.GC_CAN_TEST_ID:
py.test.skip("fails for bad reasons in lltype.py :-(")
run = self.runner("many_ids")
run([])
@classmethod
def ensure_layoutbuilder(cls, translator):
jit2gc = getattr(translator, '_jit2gc', None)
if jit2gc:
assert 'invoke_after_minor_collection' in jit2gc
return jit2gc['layoutbuilder']
marker = cls.marker
GCClass = cls.gcpolicy.transformerclass.GCClass
layoutbuilder = framework.TransformerLayoutBuilder(translator, GCClass)
layoutbuilder.delay_encoding()
def seeme():
marker[0] += 1
translator._jit2gc = {
'layoutbuilder': layoutbuilder,
'invoke_after_minor_collection': seeme,
}
return layoutbuilder
def define_do_malloc_operations(cls):
P = lltype.GcStruct('P', ('x', lltype.Signed))
def g():
r = lltype.malloc(P)
r.x = 1
p = llop.do_malloc_fixedsize(llmemory.GCREF) # placeholder
p = lltype.cast_opaque_ptr(lltype.Ptr(P), p)
p.x = r.x
return p.x
def f():
i = 0
while i < 40:
g()
i += 1
return 0
if cls.gcname == 'incminimark':
marker = cls.marker
def cleanup():
assert marker[0] > 0
marker[0] = 0
else:
cleanup = None
def fix_graph_of_g(translator):
from rpython.translator.translator import graphof
from rpython.flowspace.model import Constant
from rpython.rtyper.lltypesystem import rffi
layoutbuilder = cls.ensure_layoutbuilder(translator)
type_id = layoutbuilder.get_type_id(P)
#
# now fix the do_malloc_fixedsize in the graph of g
graph = graphof(translator, g)
for op in graph.startblock.operations:
if op.opname == 'do_malloc_fixedsize':
op.args = [Constant(type_id, llgroup.HALFWORD),
Constant(llmemory.sizeof(P), lltype.Signed),
Constant(False, lltype.Bool), # has_finalizer
Constant(False, lltype.Bool), # is_finalizer_light
Constant(False, lltype.Bool)] # contains_weakptr
break
else:
assert 0, "oops, not found"
return f, cleanup, fix_graph_of_g
def test_do_malloc_operations(self):
run = self.runner("do_malloc_operations")
run([])
def define_do_malloc_operations_in_call(cls):
P = lltype.GcStruct('P', ('x', lltype.Signed))
def g():
llop.do_malloc_fixedsize(llmemory.GCREF) # placeholder
def f():
q = lltype.malloc(P)
q.x = 1
i = 0
while i < 40:
g()
i += q.x
return 0
def fix_graph_of_g(translator):
from rpython.translator.translator import graphof
from rpython.flowspace.model import Constant
from rpython.rtyper.lltypesystem import rffi
layoutbuilder = cls.ensure_layoutbuilder(translator)
type_id = layoutbuilder.get_type_id(P)
#
# now fix the do_malloc_fixedsize in the graph of g
graph = graphof(translator, g)
for op in graph.startblock.operations:
if op.opname == 'do_malloc_fixedsize':
op.args = [Constant(type_id, llgroup.HALFWORD),
Constant(llmemory.sizeof(P), lltype.Signed),
Constant(False, lltype.Bool), # has_finalizer
Constant(False, lltype.Bool), # is_finalizer_light
Constant(False, lltype.Bool)] # contains_weakptr
break
else:
assert 0, "oops, not found"
return f, None, fix_graph_of_g
def test_do_malloc_operations_in_call(self):
run = self.runner("do_malloc_operations_in_call")
run([])
def define_gc_heap_stats(cls):
S = lltype.GcStruct('S', ('x', lltype.Signed))
l1 = []
l2 = []
l3 = []
l4 = []
def f():
for i in range(10):
s = lltype.malloc(S)
l1.append(s)
l2.append(s)
if i < 3:
l3.append(s)
l4.append(s)
# We cheat here and only read the table which we later on
# process ourselves, otherwise this test takes ages
llop.gc__collect(lltype.Void)
tb = rgc._heap_stats()
a = 0
nr = 0
b = 0
c = 0
d = 0
e = 0
for i in range(len(tb)):
if tb[i].count == 10:
a += 1
nr = i
if tb[i].count > 50:
d += 1
for i in range(len(tb)):
if tb[i].count == 4:
b += 1
c += tb[i].links[nr]
e += tb[i].size
return d * 1000 + c * 100 + b * 10 + a
return f
def test_gc_heap_stats(self):
py.test.skip("this test makes the following test crash. Investigate.")
run = self.runner("gc_heap_stats")
res = run([])
assert res % 10000 == 2611
totsize = (res / 10000)
size_of_int = rffi.sizeof(lltype.Signed)
assert (totsize - 26 * size_of_int) % 4 == 0
# ^^^ a crude assumption that totsize - varsize would be divisible by 4
# (and give fixedsize)
def define_writebarrier_before_copy(cls):
S = lltype.GcStruct('S', ('x', lltype.Char))
TP = lltype.GcArray(lltype.Ptr(S))
def fn():
l = lltype.malloc(TP, 100)
l2 = lltype.malloc(TP, 100)
for i in range(100):
l[i] = lltype.malloc(S)
rgc.ll_arraycopy(l, l2, 50, 0, 50)
# force nursery collect
x = []
for i in range(20):
x.append((1, lltype.malloc(S)))
for i in range(50):
assert l2[i] == l[50 + i]
return 0
return fn
def test_writebarrier_before_copy(self):
run = self.runner("writebarrier_before_copy")
run([])
# ________________________________________________________________
class TestSemiSpaceGC(GenericMovingGCTests):
gcname = "semispace"
GC_CAN_SHRINK_ARRAY = True
class gcpolicy(gc.BasicFrameworkGcPolicy):
class transformerclass(shadowstack.ShadowStackFrameworkGCTransformer):
from rpython.memory.gc.semispace import SemiSpaceGC as GCClass
GC_PARAMS = {'space_size': 512*WORD,
'translated_to_c': False}
root_stack_depth = 200
class TestGenerationGC(GenericMovingGCTests):
gcname = "generation"
GC_CAN_SHRINK_ARRAY = True
class gcpolicy(gc.BasicFrameworkGcPolicy):
class transformerclass(shadowstack.ShadowStackFrameworkGCTransformer):
from rpython.memory.gc.generation import GenerationGC as \
GCClass
GC_PARAMS = {'space_size': 512*WORD,
'nursery_size': 32*WORD,
'translated_to_c': False}
root_stack_depth = 200
def define_weakref_across_minor_collection(cls):
import weakref
class A:
pass
def f():
x = 20 # for GenerationGC, enough for a minor collection
a = A()
a.foo = x
ref = weakref.ref(a)
all = [None] * x
i = 0
while i < x:
all[i] = [i] * i
i += 1
assert ref() is a
llop.gc__collect(lltype.Void)
assert ref() is a
return a.foo + len(all)
return f
def test_weakref_across_minor_collection(self):
run = self.runner("weakref_across_minor_collection")
res = run([])
assert res == 20 + 20
def define_nongc_static_root_minor_collect(cls):
T1 = lltype.GcStruct("C", ('x', lltype.Signed))
T2 = lltype.Struct("C", ('p', lltype.Ptr(T1)))
static = lltype.malloc(T2, immortal=True)
def f():
t1 = lltype.malloc(T1)
t1.x = 42
static.p = t1
x = 20
all = [None] * x
i = 0
while i < x: # enough to cause a minor collect
all[i] = [i] * i
i += 1
i = static.p.x
llop.gc__collect(lltype.Void)
return static.p.x + i
def cleanup():
static.p = lltype.nullptr(T1)
return f, cleanup, None
def test_nongc_static_root_minor_collect(self):
run = self.runner("nongc_static_root_minor_collect")
res = run([])
assert res == 84
def define_static_root_minor_collect(cls):
class A:
pass
class B:
pass
static = A()
static.p = None
def f():
t1 = B()
t1.x = 42
static.p = t1
x = 20
all = [None] * x
i = 0
while i < x: # enough to cause a minor collect
all[i] = [i] * i
i += 1
i = static.p.x
llop.gc__collect(lltype.Void)
return static.p.x + i
def cleanup():
static.p = None
return f, cleanup, None
def test_static_root_minor_collect(self):
run = self.runner("static_root_minor_collect")
res = run([])
assert res == 84
def define_many_weakrefs(cls):
# test for the case where allocating the weakref itself triggers
# a collection
import weakref
class A:
pass
def f():
a = A()
i = 0
while i < 17:
ref = weakref.ref(a)
assert ref() is a
i += 1
return 0
return f
def test_many_weakrefs(self):
run = self.runner("many_weakrefs")
run([])
def define_immutable_to_old_promotion(cls):
T_CHILD = lltype.Ptr(lltype.GcStruct('Child', ('field', lltype.Signed)))
T_PARENT = lltype.Ptr(lltype.GcStruct('Parent', ('sub', T_CHILD)))
child = lltype.malloc(T_CHILD.TO)
child2 = lltype.malloc(T_CHILD.TO)
parent = lltype.malloc(T_PARENT.TO)
parent2 = lltype.malloc(T_PARENT.TO)
parent.sub = child
child.field = 3
parent2.sub = child2
child2.field = 8
T_ALL = lltype.Ptr(lltype.GcArray(T_PARENT))
all = lltype.malloc(T_ALL.TO, 2)
all[0] = parent
all[1] = parent2
def f(x, y):
res = all[x]
#all[x] = lltype.nullptr(T_PARENT.TO)
return res.sub.field
return f
def test_immutable_to_old_promotion(self):
run, transformer = self.runner("immutable_to_old_promotion", transformer=True)
run([1, 4])
if not transformer.GCClass.prebuilt_gc_objects_are_static_roots:
assert len(transformer.layoutbuilder.addresses_of_static_ptrs) == 0
else:
assert len(transformer.layoutbuilder.addresses_of_static_ptrs) >= 4
# NB. Remember that the number above does not count
# the number of prebuilt GC objects, but the number of locations
# within prebuilt GC objects that are of type Ptr(Gc).
# At the moment we get additional_roots_sources == 6:
# * all[0]
# * all[1]
# * parent.sub
# * parent2.sub
# * the GcArray pointer from gc.wr_to_objects_with_id
# * the GcArray pointer from gc.object_id_dict.
def define_adr_of_nursery(cls):
class A(object):
pass
def f():
# we need at least 1 obj to allocate a nursery
a = A()
nf_a = llop.gc_adr_of_nursery_free(llmemory.Address)
nt_a = llop.gc_adr_of_nursery_top(llmemory.Address)
nf0 = nf_a.address[0]
nt0 = nt_a.address[0]
a0 = A()
a1 = A()
nf1 = nf_a.address[0]
nt1 = nt_a.address[0]
assert nf1 > nf0
assert nt1 > nf1
assert nt1 == nt0
return 0
return f
def test_adr_of_nursery(self):
run = self.runner("adr_of_nursery")
res = run([])
class TestGenerationalNoFullCollectGC(GCTest):
# test that nursery is doing its job and that no full collection
# is needed when most allocated objects die quickly
gcname = "generation"
class gcpolicy(gc.BasicFrameworkGcPolicy):
class transformerclass(shadowstack.ShadowStackFrameworkGCTransformer):
from rpython.memory.gc.generation import GenerationGC
class GCClass(GenerationGC):
__ready = False
def setup(self):
from rpython.memory.gc.generation import GenerationGC
GenerationGC.setup(self)
self.__ready = True
def semispace_collect(self, size_changing=False):
ll_assert(not self.__ready,
"no full collect should occur in this test")
def _teardown(self):
self.__ready = False # collecting here is expected
GenerationGC._teardown(self)
GC_PARAMS = {'space_size': 512*WORD,
'nursery_size': 128*WORD,
'translated_to_c': False}
root_stack_depth = 200
def define_working_nursery(cls):
def f():
total = 0
i = 0
while i < 40:
lst = []
j = 0
while j < 5:
lst.append(i*j)
j += 1
total += len(lst)
i += 1
return total
return f
def test_working_nursery(self):
run = self.runner("working_nursery")
res = run([])
assert res == 40 * 5
class TestHybridGC(TestGenerationGC):
gcname = "hybrid"
class gcpolicy(gc.BasicFrameworkGcPolicy):
class transformerclass(shadowstack.ShadowStackFrameworkGCTransformer):
from rpython.memory.gc.hybrid import HybridGC as GCClass
GC_PARAMS = {'space_size': 512*WORD,
'nursery_size': 32*WORD,
'large_object': 8*WORD,
'translated_to_c': False}
root_stack_depth = 200
def define_ref_from_rawmalloced_to_regular(cls):
import gc
S = lltype.GcStruct('S', ('x', lltype.Signed))
A = lltype.GcStruct('A', ('p', lltype.Ptr(S)),
('a', lltype.Array(lltype.Char)))
def setup(j):
p = lltype.malloc(S)
p.x = j*2
lst = lltype.malloc(A, j)
# the following line generates a write_barrier call at the moment,
# which is important because the 'lst' can be allocated directly
# in generation 2. This can only occur with varsized mallocs.
lst.p = p
return lst
def f(i, j):
lst = setup(j)
gc.collect()
return lst.p.x
return f
def test_ref_from_rawmalloced_to_regular(self):
run = self.runner("ref_from_rawmalloced_to_regular")
res = run([100, 100])
assert res == 200
def define_write_barrier_direct(cls):
from rpython.rlib import rgc
S = lltype.GcForwardReference()
S.become(lltype.GcStruct('S',
('x', lltype.Signed),
('prev', lltype.Ptr(S)),
('next', lltype.Ptr(S))))
s0 = lltype.malloc(S, immortal=True)
def f():
s = lltype.malloc(S)
s.x = 42
llop.bare_setfield(lltype.Void, s0, void('next'), s)
llop.gc_writebarrier(lltype.Void, llmemory.cast_ptr_to_adr(s0))
rgc.collect(0)
return s0.next.x
def cleanup():
s0.next = lltype.nullptr(S)
return f, cleanup, None
def test_write_barrier_direct(self):
run = self.runner("write_barrier_direct")
res = run([])
assert res == 42
class TestMiniMarkGC(TestHybridGC):
gcname = "minimark"
GC_CAN_TEST_ID = True
class gcpolicy(gc.BasicFrameworkGcPolicy):
class transformerclass(shadowstack.ShadowStackFrameworkGCTransformer):
from rpython.memory.gc.minimark import MiniMarkGC as GCClass
GC_PARAMS = {'nursery_size': 32*WORD,
'page_size': 16*WORD,
'arena_size': 64*WORD,
'small_request_threshold': 5*WORD,
'large_object': 8*WORD,
'card_page_indices': 4,
'translated_to_c': False,
}
root_stack_depth = 200
def define_no_clean_setarrayitems(cls):
# The optimization find_clean_setarrayitems() in
# gctransformer/framework.py does not work with card marking.
# Check that it is turned off.
S = lltype.GcStruct('S', ('x', lltype.Signed))
A = lltype.GcArray(lltype.Ptr(S))
def sub(lst):
lst[15] = lltype.malloc(S) # 'lst' is set the single mark "12-15"
lst[15].x = 123
lst[0] = lst[15] # that would be a "clean_setarrayitem"
def f():
lst = lltype.malloc(A, 16) # 16 > 10
rgc.collect()
sub(lst)
null = lltype.nullptr(S)
lst[15] = null # clear, so that A() is only visible via lst[0]
rgc.collect() # -> crash
return lst[0].x
return f
def test_no_clean_setarrayitems(self):
run = self.runner("no_clean_setarrayitems")
res = run([])
assert res == 123
def define_nursery_hash_base(cls):
class A:
pass
def fn():
objects = []
hashes = []
for i in range(200):
rgc.collect(0) # nursery-only collection, if possible
obj = A()
objects.append(obj)
hashes.append(compute_identity_hash(obj))
unique = {}
for i in range(len(objects)):
assert compute_identity_hash(objects[i]) == hashes[i]
unique[hashes[i]] = None
return len(unique)
return fn
def test_nursery_hash_base(self):
res = self.runner('nursery_hash_base')
assert res([]) >= 195
def define_instantiate_nonmovable(cls):
from rpython.rlib import objectmodel
from rpython.rtyper import annlowlevel
class A:
pass
def fn():
a1 = A()
a = objectmodel.instantiate(A, nonmovable=True)
a.next = a1 # 'a' is known young here, so no write barrier emitted
res = rgc.can_move(annlowlevel.cast_instance_to_base_ptr(a))
rgc.collect()
objectmodel.keepalive_until_here(a)
return res
return fn
def test_instantiate_nonmovable(self):
res = self.runner('instantiate_nonmovable')
assert res([]) == 0
class TestIncrementalMiniMarkGC(TestMiniMarkGC):
gcname = "incminimark"
class gcpolicy(gc.BasicFrameworkGcPolicy):
class transformerclass(shadowstack.ShadowStackFrameworkGCTransformer):
from rpython.memory.gc.incminimark import IncrementalMiniMarkGC \
as GCClass
GC_PARAMS = {'nursery_size': 32*WORD,
'page_size': 16*WORD,
'arena_size': 64*WORD,
'small_request_threshold': 5*WORD,
'large_object': 8*WORD,
'card_page_indices': 4,
'translated_to_c': False,
}
root_stack_depth = 200
def define_malloc_array_of_gcptr(self):
S = lltype.GcStruct('S', ('x', lltype.Signed))
A = lltype.GcArray(lltype.Ptr(S))
def f():
lst = lltype.malloc(A, 5)
return (lst[0] == lltype.nullptr(S)
and lst[1] == lltype.nullptr(S)
and lst[2] == lltype.nullptr(S)
and lst[3] == lltype.nullptr(S)
and lst[4] == lltype.nullptr(S))
return f
def test_malloc_array_of_gcptr(self):
run = self.runner('malloc_array_of_gcptr')
res = run([])
assert res
def define_malloc_struct_of_gcptr(cls):
S1 = lltype.GcStruct('S', ('x', lltype.Signed))
S = lltype.GcStruct('S',
('x', lltype.Signed),
('filed1', lltype.Ptr(S1)),
('filed2', lltype.Ptr(S1)))
s0 = lltype.malloc(S)
def f():
return (s0.filed1 == lltype.nullptr(S1) and s0.filed2 == lltype.nullptr(S1))
return f
def test_malloc_struct_of_gcptr(self):
run = self.runner("malloc_struct_of_gcptr")
res = run([])
assert res
# ________________________________________________________________
# tagged pointers
class TaggedPointerGCTests(GCTest):
taggedpointers = True
def define_tagged_simple(cls):
class Unrelated(object):
pass
u = Unrelated()
u.x = UnboxedObject(47)
def fn(n):
rgc.collect() # check that a prebuilt tagged pointer doesn't explode
if n > 0:
x = BoxedObject(n)
else:
x = UnboxedObject(n)
u.x = x # invoke write barrier
rgc.collect()
return x.meth(100)
def func():
return fn(1000) + fn(-1000)
assert func() == 205
return func
def test_tagged_simple(self):
func = self.runner("tagged_simple")
res = func([])
assert res == 205
def define_tagged_prebuilt(cls):
class F:
pass
f = F()
f.l = [UnboxedObject(10)]
def fn(n):
if n > 0:
x = BoxedObject(n)
else:
x = UnboxedObject(n)
f.l.append(x)
rgc.collect()
return f.l[-1].meth(100)
def func():
return fn(1000) ^ fn(-1000)
assert func() == -1999
return func
def test_tagged_prebuilt(self):
func = self.runner("tagged_prebuilt")
res = func([])
assert res == -1999
def define_gettypeid(cls):
class A(object):
pass
def fn():
a = A()
return rgc.get_typeid(a)
return fn
def test_gettypeid(self):
func = self.runner("gettypeid")
res = func([])
print res
from rpython.rlib.objectmodel import UnboxedValue
class TaggedBase(object):
__slots__ = ()
def meth(self, x):
raise NotImplementedError
class BoxedObject(TaggedBase):
attrvalue = 66
def __init__(self, normalint):
self.normalint = normalint
def meth(self, x):
return self.normalint + x + 2
class UnboxedObject(TaggedBase, UnboxedValue):
__slots__ = 'smallint'
def meth(self, x):
return self.smallint + x + 3
class TestHybridTaggedPointerGC(TaggedPointerGCTests):
gcname = "hybrid"
class gcpolicy(gc.BasicFrameworkGcPolicy):
class transformerclass(shadowstack.ShadowStackFrameworkGCTransformer):
from rpython.memory.gc.generation import GenerationGC as \
GCClass
GC_PARAMS = {'space_size': 512*WORD,
'nursery_size': 32*WORD,
'translated_to_c': False}
root_stack_depth = 200
def test_gettypeid(self):
py.test.skip("fails for obscure reasons")
| 31.904628 | 93 | 0.512638 | 5,174 | 45,496 | 4.327406 | 0.123695 | 0.013444 | 0.020322 | 0.024297 | 0.429879 | 0.340107 | 0.302099 | 0.273828 | 0.256856 | 0.242296 | 0 | 0.027162 | 0.389858 | 45,496 | 1,425 | 94 | 31.927018 | 0.779423 | 0.058027 | 0 | 0.459504 | 0 | 0 | 0.041585 | 0.011524 | 0 | 0 | 0.000561 | 0 | 0.042975 | 0 | null | null | 0.015702 | 0.038843 | null | null | 0.001653 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4b39516d2e47e56ba5e7898643ba4593ea3b27e | 349 | py | Python | change_threshold_migration.py | arcapix/gpfsapi-examples | 15bff7fda7b0a576209253dee48eb44e4c0d565f | [
"MIT"
] | 10 | 2016-05-17T12:58:35.000Z | 2022-01-10T05:23:45.000Z | change_threshold_migration.py | arcapix/gpfsapi-examples | 15bff7fda7b0a576209253dee48eb44e4c0d565f | [
"MIT"
] | null | null | null | change_threshold_migration.py | arcapix/gpfsapi-examples | 15bff7fda7b0a576209253dee48eb44e4c0d565f | [
"MIT"
] | 1 | 2016-09-12T09:07:00.000Z | 2016-09-12T09:07:00.000Z | from arcapix.fs.gpfs.policy import PlacementPolicy
from arcapix.fs.gpfs.rule import MigrateRule
# load placement policy for mmfs1
policy = PlacementPolicy('mmfs1')
# create a new migrate rule for 'sata1'
r = MigrateRule(source='sata1', threshold=(90, 50))
# add rule to start of the policy
policy.rules.insert(r, 0)
# save changes
policy.save()
| 23.266667 | 51 | 0.759312 | 52 | 349 | 5.096154 | 0.634615 | 0.083019 | 0.098113 | 0.128302 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0299 | 0.137536 | 349 | 14 | 52 | 24.928571 | 0.850498 | 0.326648 | 0 | 0 | 0 | 0 | 0.043478 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
d4b5c94f17a9cee798f64b657926900668bb67f6 | 5,431 | py | Python | classify_images.py | rmsare/cs231a-project | 91776ada3512d3805de0e66940c9f1c5b3c4c641 | [
"MIT"
] | 2 | 2017-11-06T10:23:16.000Z | 2019-11-09T15:11:19.000Z | classify_images.py | rmsare/cs231a-project | 91776ada3512d3805de0e66940c9f1c5b3c4c641 | [
"MIT"
] | null | null | null | classify_images.py | rmsare/cs231a-project | 91776ada3512d3805de0e66940c9f1c5b3c4c641 | [
"MIT"
] | null | null | null | """
Classification of pixels in images using color and other features.
General pipeline usage:
1. Load and segment images (img_utils.py)
2. Prepare training data (label_image.py)
3. Train classifier or cluster data (sklearn KMeans, MeanShift, SVC, etc.)
4. Predict labels on new image or directory (classify_directory())
5. Apply classification to 3D points and estimate ground plane orientation (process_pointcloud.py)
Project uses the following directory structure:
images/ - contains binary files of numpy arrays corresponding to survey images and segmentations
labelled/ - contains labelled ground truth images or training data
results/ - contains results of classification
I store randomly split training and testing images in test/ and train/ directories.
Author: Robert Sare
E-mail: rmsare@stanford.edu
Date: 8 June 2017
"""
import numpy as np
import matplotlib.pyplot as plt
import skimage.color, skimage.io
from skimage.segmentation import mark_boundaries
from sklearn.svm import SVC
from sklearn.cluster import KMeans, MeanShift
from sklearn.metrics import confusion_matrix
from sklearn.utils import shuffle
import os, fnmatch
def classify_directory(classifier, test_dir, train_dir='train/'):
"""
Classify all images in a directory using an arbitrary sklearn classifier.
Saves results to results/ directory.
"""
# XXX: This is here if the classifier needs to be trained from scratch
#print("Preparing training data...")
#n_samples = 1000
#train_data, train_labels = load_training_images(train_dir, n_samples)
#
#print("Training classifier...")
#classifier = ImageSVC()
#classifier.fit(train_data, train_labels)
files = os.listdir(test_dir)
for f in files:
image = skimage.io.imread(os.path.join(test_dir, f))
height, width, depth = image.shape
print("Predicting labels for " + f.strip('.JPG') + ".jpg")
features = compute_colorxy_features(image)
features /= features.max(axis=0)
pred_labels = classifier.predict(features)
print("Saving predictions for " + f.strip('.JPG') + ".jpg")
plt.figure()
plt.imshow(image)
plt.imshow(pred_labels.reshape((height, width)), alpha=0.5, vmin=0, vmax=2)
plt.show(block=False)
plt.savefig('results/' + f.strip('.JPG') + '_svm_pred.png')
plt.close()
np.save('results/' + f.strip('.JPG') + 'svm.npy', pred_labels.reshape((height,width)))
def compute_colorxy_features(image):
"""
Extract and normalize color and pixel location features from image data
"""
height, width, depth = image.shape
colors = skimage.color.rgb2lab(image.reshape((height*width, depth)))
X, Y = np.meshgrid(np.arange(height), np.arange(width))
xy = np.hstack([X.reshape((height*width, 1)), Y.reshape((height*width, 1))])
colorxy = np.hstack([xy, colors])
colorxy /= colorxy.max(axis=0)
return colorxy
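A toy-sized sketch of the pixel-coordinate half of this feature construction (the height/width values here are arbitrary illustrations, not from the project):

```python
import numpy as np

# Build the per-pixel (x, y) location features the same way
# compute_colorxy_features does, on a tiny 2x3 "image".
height, width = 2, 3
X, Y = np.meshgrid(np.arange(height), np.arange(width))
xy = np.hstack([X.reshape((height * width, 1)),
                Y.reshape((height * width, 1))])
print(xy.shape)  # (6, 2): one (x, y) row per pixel
```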
def load_ground_truth(filename):
"""
Load ground truth or training image array and redefine labelling for nice
default colors
"""
truth = np.load(filename)
# Change labels for nice default colorscale when plotted
truth = truth - 1
truth[truth == -1] = 0
truth[truth == 0] = 5
truth[truth == 2] = 0
truth[truth == 5] = 2
return truth
def load_image_labels(name):
"""
Load image and labels from previous labelling session
"""
fname = 'images/' + name + '_image.npy'
image = np.load(fname)
fname = 'labelled/' + name + '_labels.npy'
labels = np.load(fname)
return image, labels
def plot_class_image(image, segments, labels):
"""
Display image with segments and class label overlay
"""
plt.figure()
plt.subplot(1,2,1)
plt.imshow(mark_boundaries(image, segments, color=(1,0,0), mode='thick'))
plt.title('segmented image')
plt.subplot(1,2,2)
plt.imshow(image)
plt.imshow(labels, alpha=0.75)
cb = plt.colorbar(orientation='horizontal', shrink=0.5)
plt.title('predicted class labels')
plt.show(block=False)
def load_training_images(train_dir, n_samples=1000, n_features=5):
"""
Load training images from directory and subsample for training or validation
"""
train_data = np.empty((0, n_features))
train_labels = np.empty(0)
files = os.listdir(train_dir)
for f in files:
name = parse_filename(f)
image, labels = load_image_labels(name)
ht, wid, depth = image.shape
train_data = np.append(train_data,
compute_colorxy_features(image), axis=0)
train_labels = np.append(train_labels,
labels.reshape(wid*ht, 1).ravel())
train_data, train_labels = shuffle(train_data, train_labels,
random_state=0, n_samples=n_samples)
return train_data, train_labels
def save_prediction(name, pred_labels):
"""
Save predicted class labels
"""
np.save('results/' + name + '_pred', pred_labels)
if __name__ == "__main__":
# Load training data
train_dir = 'train/'
test_dir = 'test/'
train_data, train_labels = load_training_images(train_dir)
# Train classifier
clf = SVC()
clf.fit(train_data, train_labels)
# Predict labels for test images
classify_directory(clf, test_dir)
| 30.857955 | 104 | 0.662861 | 712 | 5,431 | 4.933989 | 0.308989 | 0.025619 | 0.027896 | 0.039852 | 0.131512 | 0.047253 | 0.019357 | 0 | 0 | 0 | 0 | 0.01364 | 0.230528 | 5,431 | 175 | 105 | 31.034286 | 0.826992 | 0 | 0 | 0.120482 | 0 | 0 | 0.065334 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.108434 | null | null | 0.024096 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4b832afc1a419832477a3ad699f701ea5d77522 | 3,357 | py | Python | ciphers/SKINNY-TK2/SKINNY-TK2/skinnytk2.py | j-danner/autoguess | 712a8dcfb259a277b2b2a499bd7c5fc4aab97b67 | [
"MIT"
] | 7 | 2021-11-29T07:25:43.000Z | 2022-03-02T10:15:30.000Z | ciphers/SKINNY-TK2/SKINNY-TK2/skinnytk2.py | j-danner/autoguess | 712a8dcfb259a277b2b2a499bd7c5fc4aab97b67 | [
"MIT"
] | 1 | 2022-03-30T16:29:50.000Z | 2022-03-30T16:29:50.000Z | ciphers/SKINNY-TK2/SKINNY-TK2/skinnytk2.py | j-danner/autoguess | 712a8dcfb259a277b2b2a499bd7c5fc4aab97b67 | [
"MIT"
] | 1 | 2022-03-30T13:40:12.000Z | 2022-03-30T13:40:12.000Z | # Created on Sep 7, 2020
# author: Hosein Hadipour
# contact: hsn.hadipour@gmail.com
import os
output_dir = os.path.curdir
def skinnytk2(R=1):
"""
This function generates the relations of Skinny-n-n for R rounds.
tk ================================================> TWEAKEY_P(tk) ===> ---
SB AC | P MC SB AC |
x_0 ===> x_0 ===> x_0 ===> + ===> y_0 ===> P(y_0) ===> x_1 ===> x_1 ===> x_1 ===> + ===> y_1 ===> ---
"""
cipher_name = 'skinnytk2'
P = [0, 1, 2, 3, 7, 4, 5, 6, 10, 11, 8, 9, 13, 14, 15, 12]
TKP = [9, 15, 8, 13, 10, 14, 12, 11, 0, 1, 2, 3, 4, 5, 6, 7]
tk1 = ['tk1_%d' % i for i in range(16)]
tk2 = ['tk2_%d' % i for i in range(16)]
# 1 round
# recommended_mg = 8
# recommended_ms = 4
# 2 rounds
# recommended_mg = 16
# recommended_ms = 8
# 3 rounds
# recommended_mg = 19
# recommended_ms = 24
# 4 rounds
# recommended_mg = 21
# recommended_ms = 27
# 5 rounds
# recommended_mg = 22
# recommended_ms = 35
# 6 rounds
# recommended_mg = 25
# recommended_ms = 40
# 7 rounds
# recommended_mg = 26
# recommended_ms = 70
# 8 rounds
# recommended_mg = 28
# recommended_ms = 80
# 9 rounds
# recommended_mg = 28
# recommended_ms = 100
# 10 rounds
recommended_mg = 30
recommended_ms = 100
# 11 rounds
# recommended_mg = 31
# recommended_ms = 100
eqs = '#%s %d Rounds\n' % (cipher_name, R)
eqs += 'connection relations\n'
for r in range(R):
xin = ['x_%d_%d' % (r, i) for i in range(16)]
xout = ['x_%d_%d' % (r + 1, i) for i in range(16)]
y = ['y_%d_%d' % (r, i) for i in range(16)]
tk = ['tk_%d_%d' % (r, i) for i in range(8)]
# Generate AddTweakey relations
for i in range(4):
for j in range(4):
if i < 2:
eqs += '%s, %s, %s\n' % (tk1[j + 4*i], tk2[j + 4*i], tk[j + 4*i])
eqs += '%s, %s, %s\n' % (xin[j + 4*i], tk[j + 4*i], y[j + 4*i])
else:
eqs += '%s, %s\n' % (xin[j + 4*i], y[j + 4*i])
# Apply ShiftRows
py = [y[P[i]] for i in range(16)]
# Generate MixColumn relations
for j in range(4):
eqs += '%s, %s, %s, %s\n' % (py[j + 0*4], py[j + 2*4], py[j + 3*4], xout[j + 0*4])
eqs += '%s, %s\n' % (py[j], xout[j + 1*4])
eqs += '%s, %s, %s\n' % (py[j + 1*4], py[j + 2*4], xout[j + 2*4])
eqs += '%s, %s, %s\n' % (py[j + 0*4], py[j + 2*4], xout[j + 3*4])
# Update Tweakey
temp1 = tk1.copy()
temp2 = tk2.copy()
tk1 = [temp1[TKP[i]] for i in range(16)]
tk2 = [temp2[TKP[i]] for i in range(16)]
plaintext = ['x_0_%d' % i for i in range(16)]
ciphertext = ['x_%d_%d' % (R, i) for i in range(16)]
eqs += 'known\n' + '\n'.join(plaintext + ciphertext)
eqs += '\nend'
relation_file_path = os.path.join(output_dir, 'relationfile_%s_%dr_mg%d_ms%d.txt' % (cipher_name, R, recommended_mg, recommended_ms))
with open(relation_file_path, 'w') as relation_file:
relation_file.write(eqs)
def main():
skinnytk2(R=10)
if __name__ == '__main__':
main()
| 33.909091 | 137 | 0.472148 | 519 | 3,357 | 2.917148 | 0.22736 | 0.069353 | 0.047556 | 0.087186 | 0.258917 | 0.229194 | 0.165786 | 0.073316 | 0.053501 | 0.042272 | 0 | 0.086682 | 0.340185 | 3,357 | 98 | 138 | 34.255102 | 0.59684 | 0.308311 | 0 | 0.044444 | 1 | 0 | 0.104564 | 0.014621 | 0.022222 | 0 | 0 | 0 | 0 | 1 | 0.044444 | false | 0 | 0.022222 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4bc5b3a862989ca34a4883d8781d87ac17bd277 | 592 | py | Python | scrapy_compose/fields/parser/string_field.py | Sphynx-HenryAY/scrapy-compose | bac45ee51bf4a49b3d4a9902767a17072137f869 | [
"MIT"
] | null | null | null | scrapy_compose/fields/parser/string_field.py | Sphynx-HenryAY/scrapy-compose | bac45ee51bf4a49b3d4a9902767a17072137f869 | [
"MIT"
] | 18 | 2019-10-17T10:51:30.000Z | 2020-05-12T10:00:49.000Z | scrapy_compose/fields/parser/string_field.py | Sphynx-HenryAY/scrapy-compose | bac45ee51bf4a49b3d4a9902767a17072137f869 | [
"MIT"
] | null | null | null |
from scrapy_compose.utils.context import realize
from .field import FuncField as BaseField
class StringField( BaseField ):
process_timing = [ "post_pack" ]
def __init__( self, key = None, value = None, selector = None, **kwargs ):
#unify value format
if isinstance( value, str ):
value = { "_type": "string", "value": value }
super( StringField, self ).__init__( key = key, value = value, selector = selector, **kwargs )
def make_field( self, selector, key = None, value = None, **kwargs ):
return { realize( selector, key ): self.post_pack( realize( selector, value ) ) }
| 34.823529 | 96 | 0.6875 | 73 | 592 | 5.383562 | 0.479452 | 0.040712 | 0.061069 | 0.081425 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.182432 | 592 | 16 | 97 | 37 | 0.811983 | 0.030405 | 0 | 0 | 0 | 0 | 0.043706 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.1 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d4c0845bc0b80a14fbe5e783d9ed64b00db19bce | 3,383 | py | Python | app/__init__.py | credwood/bitplayers | 4ca6b6c6a21bb21d7cd963c64028415559c3dcc4 | [
"MIT"
] | 1 | 2020-06-26T21:49:14.000Z | 2020-06-26T21:49:14.000Z | app/__init__.py | credwood/bitplayers | 4ca6b6c6a21bb21d7cd963c64028415559c3dcc4 | [
"MIT"
] | 2 | 2020-03-31T11:11:04.000Z | 2021-12-13T20:38:48.000Z | app/__init__.py | credwood/bitplayers | 4ca6b6c6a21bb21d7cd963c64028415559c3dcc4 | [
"MIT"
] | null | null | null | import dash
from flask import Flask
from flask.helpers import get_root_path
from flask_login import login_required
from flask_wtf.csrf import CSRFProtect
from flask_admin import Admin, BaseView, expose
from flask_admin.contrib.sqla import ModelView
from datetime import datetime
from dateutil import parser
import pytz
from pytz import timezone
from config import BaseConfig
csrf = CSRFProtect()
def create_app():
from app.models import Blog, User, MyModelView, Contact
from app.extensions import db
from app.dashapp1.layout import layout as layout_1
from app.dashapp1.callbacks import register_callbacks as register_callbacks_1
#from app.dashapp2.layout import layout as layout_2
#from app.dashapp2.callbacks import register_callbacks as register_callbacks_2
from app.dashapp3.layout import layout as layout_3
from app.dashapp3.callbacks import register_callbacks as register_callbacks_3
server = Flask(__name__)
server.config.from_object(BaseConfig)
csrf.init_app(server)
csrf._exempt_views.add('dash.dash.dispatch')
admin = Admin(server)
admin.add_view(MyModelView(User, db.session))
admin.add_view(MyModelView(Blog, db.session))
admin.add_view(MyModelView(Contact, db.session))
register_dashapp(server, 'dashapp1', 'dashboard1', layout_1, register_callbacks_1)
#register_dashapp(server, 'dashapp2', 'dashboard2', layout_2, register_callbacks_2)
register_dashapp(server, 'dashapp3', 'dashboard3', layout_3, register_callbacks_3)
register_extensions(server)
register_blueprints(server)
server.jinja_env.filters['formatdatetime'] = format_datetime
return server
def format_datetime(date,fmt=None):
western = timezone("America/Los_Angeles")
native=pytz.utc.localize(date, is_dst=None).astimezone(western)
#date = parser.parse(str(date))
#native = date.astimezone(western)
format='%m-%d-%Y %I:%M %p'
return native.strftime(format)
def register_dashapp(app, title, base_pathname, layout, register_callbacks_fun):
# Meta tags for viewport responsiveness
meta_viewport = {"name": "viewport", "content": "width=device-width, initial-scale=1, shrink-to-fit=no"}
my_dashapp = dash.Dash(__name__,
server=app,
url_base_pathname=f'/{base_pathname}/',
assets_folder=get_root_path(__name__) + f'/{base_pathname}/assets/',
meta_tags=[meta_viewport])
with app.app_context():
my_dashapp.title = title
my_dashapp.layout = layout
register_callbacks_fun(my_dashapp)
#_protect_dashviews(my_dashapp)
def _protect_dashviews(dashapp):
for view_func in dashapp.server.view_functions:
if view_func.startswith(dashapp.config.url_base_pathname):
dashapp.server.view_functions[view_func] = login_required(dashapp.server.view_functions[view_func])
def register_extensions(server):
from app.extensions import db
from app.extensions import login_inst
from app.extensions import migrate
from app.extensions import mail
db.init_app(server)
login_inst.init_app(server)
login_inst.login_view = 'main.login'
migrate.init_app(server, db)
mail.init_app(server)
def register_blueprints(server):
from app.webapp import server_bp
server.register_blueprint(server_bp)
| 35.610526 | 111 | 0.738693 | 442 | 3,383 | 5.411765 | 0.28733 | 0.038043 | 0.035535 | 0.048077 | 0.196906 | 0.145903 | 0.090719 | 0 | 0 | 0 | 0 | 0.008957 | 0.174993 | 3,383 | 94 | 112 | 35.989362 | 0.848083 | 0.100503 | 0 | 0.029851 | 0 | 0 | 0.074769 | 0.007905 | 0 | 0 | 0 | 0 | 0 | 1 | 0.089552 | false | 0 | 0.343284 | 0 | 0.462687 | 0.044776 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
d4c391278bd0cf509c7b23a6660f7d6beb4dfdb7 | 3,960 | py | Python | python/SHA3_hashlib_based_concept.py | feketebv/SCA_proof_SHA3-512 | 5a7689ea307463d5b797e49142c349b02cdcda03 | [
"MIT"
] | 1 | 2021-05-19T00:08:15.000Z | 2021-05-19T00:08:15.000Z | python/SHA3_hashlib_based_concept.py | feketebv/SCA_proof_SHA3-512 | 5a7689ea307463d5b797e49142c349b02cdcda03 | [
"MIT"
] | null | null | null | python/SHA3_hashlib_based_concept.py | feketebv/SCA_proof_SHA3-512 | 5a7689ea307463d5b797e49142c349b02cdcda03 | [
"MIT"
] | null | null | null | '''
Written by: Balazs Valer Fekete fbv81bp@outlook.hu fbv81bp@gmail.com
Last updated: 29.01.2021
'''
# the concept is to generate a side channel resistant initialisation of the hashing function based on
# one secret key and several openly known initialisation vectors (IV) in a manner that the same input
# is not hashed more than two times, which is hopefully not sufficient for side-channel
# measurement based computations: the number of consecutive measurements for a successful attack on
# the CHI function in a practically noiseless computer simulation (see "chi_cpa.py") is around
# measurements based computations: the number of consecutive measurements for a successful attack on
# the CHI function in a practically noiseless computer simulation (see "chi_cpa.py") takes around a
# 100 measurements
# this concept is achieved by taking a counter of a certain bitlength, and twice as many IVs as bits in
# the counter: "IV0s" and "IV1s" and compute a series of hashes starting with the secret key then with a
# correspong IV of the sets 0 and 1 based on whether the counter's corresponding bit - starting at MSB -
# is 0 or 1; this way every hash output is exactly used 2 times if the intermediate values are STORTED
# and the entire series of initial hashes are NOT fully recomputed only such whose corresponding
# counter bits has changed and all the next levels too down to the LSB of the counter
# the working solution is going to based on the algorithms presented here, although
# in this file the algorithm here does the full padding so the results won't equal to
# a scheme where the rate is fully filled with IVs and the data comes only afterwards...
import hashlib
# KEY DATA STRUCTURES' INTERPRETATION
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
IV0s = [658678, 6785697, 254376, 67856, 1432543, 786, 124345, 5443654]
IV1s = [2565, 256658, 985, 218996, 255, 685652, 28552, 3256565]
# LSB ... MSB
hash_copies = [None for i in range(len(IV0s))]
# LSB ... MSB
# counter
# MSB ... LSB
# COMPUTING HASHES FOR EVERY COUNTER VALUE INDIVIDUALLY
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
for counter in range(11):
hash = hashlib.sha3_512()
# looping from MSB to LSB in counter too
for i in range(len(IV0s)-1, -1, -1):
if (counter>>i) & 1 == 1:
IV = bytes(IV1s[i])
else:
IV = bytes(IV0s[i])
hash.update(IV)
print(hash.hexdigest())
print()
# COMPUTING HASHES BASED ON THE NATURE OF BINARY INCREMENTATION:
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# only a few values need to be recomputed: those whose corresponding
# bits have changed, down to the LSB
# initialize
hash = hashlib.sha3_512()
# looping from MSB to LSB
for i in range(len(IV0s)-1, -1, -1):
# addressing "MSB" of IVs at first, "LSB" at last!
IV = bytes(IV0s[i])
hash.update(IV)
# index 0 of hash_copies changes the most frequently, i.e. according to counter's LSB
hash_copies[i] = hash.copy()
# compute
last_counter = 0
for counter in range(11):
IV_mask = last_counter ^ counter
last_counter = counter
# determine the highest non-zero bit of IV_mask, LSB is 1, 0 means there was no change
nz = 0
while IV_mask > 0:
IV_mask >>= 1
nz += 1
# initialize hash to the last value whose corresponding counter bit didn't switch
# have to copy object otherwise the originally pointed version gets updated!
hash = hash_copies[nz].copy() # LSB is index 0
# compute only the remaining hashes
while nz != 0: # nz=0 is the initial condition, nothing needs to be done
nz -= 1
if (counter>>nz) & 1 == 1:
IV = bytes(IV1s[nz])
else:
IV = bytes(IV0s[nz])
hash.update(IV)
# needs to be copied again because of object orientation
hash_copies[nz] = hash.copy()
# showing the hash copies' entire table after each computation
#for hashes in hash_copies:
# print(hashes.hexdigest())
print(hash_copies[0].hexdigest())
| 40 | 105 | 0.65303 | 579 | 3,960 | 4.43696 | 0.419689 | 0.031141 | 0.007007 | 0.012845 | 0.095757 | 0.070845 | 0.063838 | 0.045154 | 0.045154 | 0 | 0 | 0.051095 | 0.238889 | 3,960 | 98 | 106 | 40.408163 | 0.801261 | 0.661364 | 0 | 0.351351 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.027027 | 0 | 0.027027 | 0.081081 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4c4c2df87ed6c462e4aab6092109b050d3d20d5 | 759 | py | Python | sound/serializers.py | Anirudhchoudhary/ApnaGanna__backend | 52e6c3100fdb289e8bf64a1a4007eeb2eb66a022 | [
"MIT"
] | null | null | null | sound/serializers.py | Anirudhchoudhary/ApnaGanna__backend | 52e6c3100fdb289e8bf64a1a4007eeb2eb66a022 | [
"MIT"
] | null | null | null | sound/serializers.py | Anirudhchoudhary/ApnaGanna__backend | 52e6c3100fdb289e8bf64a1a4007eeb2eb66a022 | [
"MIT"
] | null | null | null | from .models import Sound , Album
from rest_framework import serializers
class SoundSerializer(serializers.ModelSerializer):
class Meta:
model = Sound
fields = ["name" , "song_image" , "pk" , "like" , "played" , "tag" , "singer" , "upload_date"]
class SoundDetailSerializer(serializers.ModelSerializer):
class Meta:
model = Sound
fields = "__all__"
class AlbumSerializer(serializers.ModelSerializer):
sound = serializers.SerializerMethodField()
class Meta:
model = Album
fields = ["name" , "datepublish" , "category" , "sound"]
depth = 1
def get_sound(self , obj):
print("WORKING")
return SoundSerializer(instance=obj.sound , many=True).data
| 27.107143 | 102 | 0.637681 | 72 | 759 | 6.611111 | 0.597222 | 0.163866 | 0.088235 | 0.147059 | 0.214286 | 0.214286 | 0.214286 | 0 | 0 | 0 | 0 | 0.001764 | 0.252964 | 759 | 27 | 103 | 28.111111 | 0.837743 | 0 | 0 | 0.263158 | 0 | 0 | 0.116248 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.105263 | 0 | 0.578947 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d4c5d71a8319e8e4743e5c7446b67b54ee62af61 | 256 | py | Python | devtools/api/health.py | ankeshkhemani/devtools | beb9a46c27b6b4c02a2e8729af0c971cc175f134 | [
"Apache-2.0"
] | null | null | null | devtools/api/health.py | ankeshkhemani/devtools | beb9a46c27b6b4c02a2e8729af0c971cc175f134 | [
"Apache-2.0"
] | null | null | null | devtools/api/health.py | ankeshkhemani/devtools | beb9a46c27b6b4c02a2e8729af0c971cc175f134 | [
"Apache-2.0"
] | null | null | null | import datetime
from fastapi import APIRouter
router = APIRouter()
@router.get("", tags=["health"])
async def get_health():
return {
"results": [],
"status": "success",
"timestamp": datetime.datetime.now().timestamp()
}
| 17.066667 | 56 | 0.605469 | 25 | 256 | 6.16 | 0.68 | 0.194805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.230469 | 256 | 14 | 57 | 18.285714 | 0.781726 | 0 | 0 | 0 | 0 | 0 | 0.136719 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.3 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4cc2ada6fd8bd17a6303118a58e9c1a8c44ff7a | 2,265 | py | Python | pytorch_toolkit/face_recognition/model/common.py | AnastasiaaSenina/openvino_training_extensions | 267425d64372dff5b9083dc0ca6abfc305a71449 | [
"Apache-2.0"
] | 1 | 2020-02-09T15:50:49.000Z | 2020-02-09T15:50:49.000Z | pytorch_toolkit/face_recognition/model/common.py | akshayjaryal603/openvino_training_extensions | 7d606a22143db0af97087709d63a2ec2aa02036c | [
"Apache-2.0"
] | 28 | 2020-09-25T22:40:36.000Z | 2022-03-12T00:37:36.000Z | pytorch_toolkit/face_recognition/model/common.py | akshayjaryal603/openvino_training_extensions | 7d606a22143db0af97087709d63a2ec2aa02036c | [
"Apache-2.0"
] | 1 | 2021-04-02T07:51:01.000Z | 2021-04-02T07:51:01.000Z | """
Copyright (c) 2018 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from abc import abstractmethod
from functools import partial
import torch.nn as nn
class ModelInterface(nn.Module):
"""Abstract class for models"""
@abstractmethod
def set_dropout_ratio(self, ratio):
"""Sets dropout ratio of the model"""
@abstractmethod
def get_input_res(self):
"""Returns input resolution"""
from .rmnet_angular import RMNetAngular
from .mobilefacenet import MobileFaceNet
from .landnet import LandmarksNet
from .se_resnet_angular import SEResNetAngular
from .shufflenet_v2_angular import ShuffleNetV2Angular
from .backbones.se_resnet import se_resnet50, se_resnet101, se_resnet152
from .backbones.resnet import resnet50
from .backbones.se_resnext import se_resnext50, se_resnext101, se_resnext152
models_backbones = {'rmnet': RMNetAngular,
'mobilenetv2': MobileFaceNet,
'mobilenetv2_2x': partial(MobileFaceNet, width_multiplier=2.0),
'mobilenetv2_1_5x': partial(MobileFaceNet, width_multiplier=1.5),
'resnet50': partial(SEResNetAngular, base=resnet50),
'se_resnet50': partial(SEResNetAngular, base=se_resnet50),
'se_resnet101': partial(SEResNetAngular, base=se_resnet101),
'se_resnet152': partial(SEResNetAngular, base=se_resnet152),
'se_resnext50': partial(SEResNetAngular, base=se_resnext50),
'se_resnext101': partial(SEResNetAngular, base=se_resnext101),
'se_resnext152': partial(SEResNetAngular, base=se_resnext152),
'shufflenetv2': ShuffleNetV2Angular}
models_landmarks = {'landnet': LandmarksNet}
| 41.944444 | 85 | 0.714349 | 262 | 2,265 | 6.038168 | 0.454198 | 0.097345 | 0.115044 | 0.106195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042434 | 0.209272 | 2,265 | 53 | 86 | 42.735849 | 0.840871 | 0.283444 | 0 | 0.068966 | 0 | 0 | 0.091824 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0 | 0.37931 | 0 | 0.482759 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
d4cd43090d9af44b579f4587a49e6d83acfe093a | 807 | py | Python | src/dataclay/util/logs.py | kpavel/pyclay | 275bc8af5c57301231a20cca1cc88556a9c84c79 | [
"BSD-3-Clause"
] | 1 | 2020-04-16T17:09:15.000Z | 2020-04-16T17:09:15.000Z | src/dataclay/util/logs.py | kpavel/pyclay | 275bc8af5c57301231a20cca1cc88556a9c84c79 | [
"BSD-3-Clause"
] | 35 | 2019-11-06T17:06:16.000Z | 2021-04-12T16:27:20.000Z | src/dataclay/util/logs.py | kpavel/pyclay | 275bc8af5c57301231a20cca1cc88556a9c84c79 | [
"BSD-3-Clause"
] | 1 | 2020-05-06T11:28:16.000Z | 2020-05-06T11:28:16.000Z |
""" Class description goes here. """
import json
import logging
class JSONFormatter(logging.Formatter):
"""Simple JSON formatter for the logging facility."""
def format(self, obj):
"""Note that obj is a LogRecord instance."""
# Copy the dictionary
ret = dict(obj.__dict__)
# Perform the message substitution
args = ret.pop("args")
msg = ret.pop("msg")
ret["message"] = msg % args
# Exceptions must be formatted (they are not JSON-serializable
try:
ei = ret.pop("exc_info")
except KeyError:
pass
else:
if ei is not None:
ret["exc_info"] = self.formatException(ei)
# Dump the dictionary in JSON form
return json.dumps(ret, skipkeys=True)
| 26.032258 | 70 | 0.581165 | 95 | 807 | 4.873684 | 0.621053 | 0.038877 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.319703 | 807 | 30 | 71 | 26.9 | 0.843352 | 0.327138 | 0 | 0 | 0 | 0 | 0.057471 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0.0625 | 0.125 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
d4cf41c3907f30d0f8d4b3c715caa3ef127581dc | 5,353 | py | Python | backend/services/apns_util.py | xuantan/viewfinder | 992209086d01be0ef6506f325cf89b84d374f969 | [
"Apache-2.0"
] | 645 | 2015-01-03T02:03:59.000Z | 2021-12-03T08:43:16.000Z | backend/services/apns_util.py | hoowang/viewfinder | 9caf4e75faa8070d85f605c91d4cfb52c4674588 | [
"Apache-2.0"
] | null | null | null | backend/services/apns_util.py | hoowang/viewfinder | 9caf4e75faa8070d85f605c91d4cfb52c4674588 | [
"Apache-2.0"
] | 222 | 2015-01-07T05:00:52.000Z | 2021-12-06T09:54:26.000Z | # -*- coding: utf-8 -*-
# Copyright 2012 Viewfinder Inc. All Rights Reserved.
"""Apple Push Notification service utilities.
Original copyright for this code: https://github.com/jayridge/apnstornado
TokenToBinary(): converts a hex-encoded token into a binary value
CreateMessage(): formats a binary APNs message from parameters
ParseResponse(): parses APNs binary response for status & identifier
ErrorStatusToString(): converts error status to error message
"""
__author__ = 'spencer@emailscrubbed.com (Spencer Kimball)'
import base64
import json
import struct
import time
from tornado import escape
_MAX_PAYLOAD_BYTES = 256
"""Maximum number of bytes in the APNS payload."""
_ELLIPSIS_BYTES = escape.utf8(u'…')
"""UTF-8 encoding of the Unicode ellipsis character."""
def TokenToBinary(token):
return base64.b64decode(token)
def TokenFromBinary(bin_token):
return base64.b64encode(bin_token)
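These two helpers are inverses of each other; a quick round-trip sketch (the all-zero token is a dummy value, real APNs device tokens are 32 opaque bytes):

```python
import base64

def TokenToBinary(token):
    return base64.b64decode(token)

def TokenFromBinary(bin_token):
    return base64.b64encode(bin_token)

bin_token = b"\x00" * 32               # dummy 32-byte device token
token = TokenFromBinary(bin_token)     # base64 text form, safe to store/log
assert TokenToBinary(token) == bin_token
print(len(TokenToBinary(token)))       # 32
```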
def CreateMessage(token, alert=None, badge=None, sound=None,
identifier=0, expiry=None, extra=None, allow_truncate=True):
token = TokenToBinary(token)
if len(token) != 32:
raise ValueError, u'Token must be a 32-byte binary string.'
if (alert is not None) and (not isinstance(alert, (basestring, dict))):
raise ValueError, u'Alert message must be a string or a dictionary.'
if expiry is None:
expiry = long(time.time() + 365 * 86400)
# Start by determining the length of the UTF-8 encoded JSON with no alert text. This allows us to
# determine how much space is left for the message.
# 'content-available': 1 is necessary to trigger iOS 7's background download processing.
aps = { 'alert' : '', 'content-available': 1 }
if badge is not None:
aps['badge'] = badge
if sound is not None:
aps['sound'] = sound
data = { 'aps' : aps }
if extra is not None:
data.update(extra)
# Create compact JSON representation with no extra space and no escaping of non-ascii chars (i.e. use
# direct UTF-8 representation rather than "\u1234" escaping). This maximizes the amount of space that's
# left for the alert text.
encoded = escape.utf8(json.dumps(escape.recursive_unicode(data), separators=(',', ':'), ensure_ascii=False))
bytes_left = _MAX_PAYLOAD_BYTES - len(encoded)
if allow_truncate and isinstance(alert, basestring):
alert = _TruncateAlert(alert, bytes_left)
elif alert and len(escape.utf8(alert)) > bytes_left:
    raise ValueError(u'max payload (%d) exceeded: %d' % (_MAX_PAYLOAD_BYTES, len(escape.utf8(alert))))
# Now re-encode including the alert text.
aps['alert'] = alert
encoded = escape.utf8(json.dumps(escape.recursive_unicode(data), separators=(',', ':'), ensure_ascii=False))
length = len(encoded)
assert length <= _MAX_PAYLOAD_BYTES, (encoded, length)
return struct.pack('!bIIH32sH%(length)ds' % { 'length' : length },
1, identifier, expiry,
32, token, length, encoded)
def ParseResponse(buf):
  # 'buf' avoids shadowing the builtin 'bytes'
  if len(buf) != 6:
    raise ValueError(u'response must be a 6-byte binary string.')
  command, status, identifier = struct.unpack_from('!bbI', buf, 0)
  if command != 8:
    raise ValueError(u'response command must equal 8.')
return status, identifier, ErrorStatusToString(status)
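The 6-byte error-response layout can be exercised without a network connection. This sketch packs a mock response with the same `'!bbI'` layout (command, status, identifier, big-endian) and parses it back; the names here are standalone, not the module's own:

```python
import struct

def parse_response(buf):
    # command (1 byte), status (1 byte), identifier (4 bytes), big-endian
    if len(buf) != 6:
        raise ValueError('response must be a 6-byte binary string.')
    command, status, identifier = struct.unpack_from('!bbI', buf, 0)
    if command != 8:
        raise ValueError('response command must equal 8.')
    return status, identifier

# pack a mock APNs error response: command 8, status 2 ('Missing device token'), identifier 1234
mock = struct.pack('!bbI', 8, 2, 1234)
assert len(mock) == 6
assert parse_response(mock) == (2, 1234)
```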
_ERROR_STATUS_MESSAGES = {
  0: 'No errors encountered',
  1: 'Processing error',
  2: 'Missing device token',
  3: 'Missing topic',
  4: 'Missing payload',
  5: 'Invalid token size',
  6: 'Invalid topic size',
  7: 'Invalid payload size',
  8: 'Invalid token',
  255: 'None (unknown)',
}
"""Maps APNs error status codes to human-readable messages."""
def ErrorStatusToString(status):
  # Use a dict lookup with value equality; the original chain of
  # 'status is N' comparisons relied on CPython's small-int caching.
  return _ERROR_STATUS_MESSAGES.get(status, '')
def _TruncateAlert(alert, max_bytes):
"""Converts the alert text to UTF-8 encoded JSON format, which is how
the alert will be stored in the APNS payload. If the number of
resulting bytes exceeds "max_bytes", then truncates the alert text
at a Unicode character boundary, taking care not to split JSON
escape sequences. Returns the truncated UTF-8 encoded alert text,
including a trailing ellipsis character.
"""
alert_json = escape.utf8(json.dumps(escape.recursive_unicode(alert), ensure_ascii=False))
# Strip quotes added by JSON.
alert_json = alert_json[1:-1]
# Check if alert fits with no truncation.
if len(alert_json) <= max_bytes:
return escape.utf8(alert)
# Make room for an appended ellipsis.
assert max_bytes >= len(_ELLIPSIS_BYTES), 'max_bytes must be at least %d' % len(_ELLIPSIS_BYTES)
max_bytes -= len(_ELLIPSIS_BYTES)
# Truncate the JSON UTF8 string at a Unicode character boundary.
truncated = alert_json[:max_bytes].decode('utf-8', errors='ignore')
# If JSON escape sequences were split, then the truncated string may not be valid JSON. Keep
# chopping trailing characters until the truncated string is valid JSON. It may take several
# tries, such as in the case where a "\u1234" sequence has been split.
while True:
try:
alert = json.loads(u'"%s"' % truncated)
break
except Exception:
truncated = truncated[:-1]
# Return the UTF-8 encoding of the alert with the ellipsis appended to it.
return escape.utf8(alert) + _ELLIPSIS_BYTES
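The truncation trick used by _TruncateAlert (slice the UTF-8 bytes, then decode with `errors='ignore'` to drop any character the slice split) can be sketched standalone; in Python 3 the slice must be taken on `bytes`, and the helper name here is illustrative:

```python
def truncate_utf8(text, max_bytes, ellipsis=u'…'):
    # Truncate text so its UTF-8 encoding fits in max_bytes, never
    # splitting a multi-byte character, appending an ellipsis when cut.
    encoded = text.encode('utf-8')
    if len(encoded) <= max_bytes:
        return text
    budget = max_bytes - len(ellipsis.encode('utf-8'))
    # decoding with errors='ignore' drops a character split by the slice
    return encoded[:budget].decode('utf-8', errors='ignore') + ellipsis

s = u'héllo wörld' * 30
out = truncate_utf8(s, 64)
assert len(out.encode('utf-8')) <= 64
assert out.endswith(u'…')
```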
# megaboat.py, from the xros/megaboat repository (MIT License)
# -*- coding: utf-8 -*-
# Copyright Alexander Liu, Apr 2014.
# Any redistribution of this copy should credit its author; for commercial use, please contact the author for authorization.
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
from lxml import etree
import time
import json
import urllib
import urllib2
# For media posting
from poster.encode import multipart_encode
from poster.streaminghttp import register_openers
class ParsingContainer(object):
"""Parsing Wechat messages for whose types are of : 'text', 'image', 'voice', 'video', 'location', 'link'
After making a new instance of the class, need to declare the 'MsgType'
For example,
$~ python
>>> holder = ParsingContainer()
>>> hasattr(holder, "_Content")
>>> True
>>> holder.initType(MsgType='video')
>>> hasattr(holder, "_PicUrl")
>>> True
>>> holder.initType(MsgType='text') # Or we can just ellipsis this operation since by default its 'text'
>>> hasattr(holder, "_PicUrl")
>>> False
>>> hasattr(holder, "_Content")
>>> True
>>> holder.getElementByTag('Content')
>>> ''
"""
# By default, MsgType is set as 'text'
MsgType = 'text'
    # Unique tags in each mapping relationship
    #
    # Tags common to all normal messages
global commonTag
commonTag = ['ToUserName', 'FromUserName', 'CreateTime', 'MsgId', 'MsgType']
# For normal message mapping
global normalMapping
normalMapping = {
'text':['Content'],
'image':['PicUrl', 'MediaId'],
'voice':['MediaId','Format'],
        'video':['MediaId','ThumbMediaId'],
'location':['Location_X','Location_Y','Scale', 'Label'],
'link':['Title','Description','Url'],
}
# For event message mapping
global eventMapping
eventMapping = {
# The list presents the combined tag set of the event message
'event':['Event','EventKey','Ticket','Latitude','Longitude','Precision' ],
}
# For recognition message mapping
global recognitionMapping
recognitionMapping = {
'voice':['MediaId','Format','Recognition'],
}
    def __init__(self, incomingMessage='<xml></xml>'):
        # pre-set some common variables
        root = etree.fromstring(incomingMessage)

        def _text(tag, default=''):
            # Return the tag's text, or a default when the tag is absent
            node = root.find(tag)
            return node.text if node is not None else default

        # The 5 tags common to all messages
        self._ToUserName = _text('ToUserName')
        self._FromUserName = _text('FromUserName')
        self._CreateTime = _text('CreateTime', '1000000000')
        self._MsgType = _text('MsgType')
        self._MsgId = _text('MsgId')
        # Tags specific to each message type
        if self.MsgType == 'text':
            self._Content = _text('Content')
        elif self.MsgType == 'image':
            self._PicUrl = _text('PicUrl')
            self._MediaId = _text('MediaId')
        elif self.MsgType == 'voice':
            self._MediaId = _text('MediaId')
            self._Format = _text('Format')
        elif self.MsgType == 'video':
            self._MediaId = _text('MediaId')
            self._ThumbMediaId = _text('ThumbMediaId')
        elif self.MsgType == 'location':
            self._Location_X = _text('Location_X')
            self._Location_Y = _text('Location_Y')
            self._Scale = _text('Scale')
            self._Label = _text('Label')
        elif self.MsgType == 'link':
            self._Title = _text('Title')
            self._Description = _text('Description')
            self._Url = _text('Url')
        elif self.MsgType == 'event':
            # An event message always carries an 'Event' tag
            self._Event = _text('Event')
            # Optional event tags are only set when present
            for tag in ('EventKey', 'Ticket', 'Latitude', 'Longitude', 'Precision'):
                if root.find(tag) is not None:
                    setattr(self, '_' + tag, root.find(tag).text)
    def initType(self, MsgType='text', incomingMessage='<xml></xml>'):
        '''To initialize the message type
        '''
        MsgType_list = ['text', 'image', 'voice', 'video', 'location', 'link', 'event']
        if MsgType not in MsgType_list:
            raise ValueError("MsgType '%s' not valid" % MsgType)
        self.MsgType = MsgType
        # Delete the common tags
        for c in commonTag:
            try:
                delattr(self, '_' + c)
            except AttributeError:
                pass
        # Delete the unused elements in normalMapping
        for k in normalMapping:
            if k != self.MsgType:
                for m in normalMapping[k]:
                    try:
                        delattr(self, '_' + m)
                    except AttributeError:
                        pass
        # Delete the unused elements in eventMapping
        for k in eventMapping:
            for e in eventMapping[k]:
                try:
                    delattr(self, '_' + e)
                except AttributeError:
                    pass
        self.__init__(incomingMessage)
# releasing method
def __del__(self):
pass
    def getElementByTag(self, tag):
        '''To get the element stored for a given tag
        '''
        try:
            return getattr(self, '_' + tag)
        except AttributeError:
            return None
    def digest(self, incomingMessage):
        '''Digests the XML message passed from the Wechat server and
        assigns the values to instance variables.
        The 'incomingMessage' is XML. Based on its content, values such as
        ```self.MsgType``` are assigned. The logic is as follows:
        1) check the parent message type: "MsgType"
        2) check whether the subclass type is "Voice Recognition", "Event" or "Normal"
        3) check the child message tags
        '''
root = etree.fromstring(incomingMessage)
msgType = root.find("MsgType").text
# Get message type based from the ```incomingMessage``` variable
if msgType in ['text', 'image', 'voice', 'video', 'location', 'link', 'event']:
# Check if the incomingMessage has tag 'Recognition' then, it is a voice recognition message
if root.find("Recognition") is not None:
self.type = 'recognition'
# Check if the incomingMessage has tag 'Event' then, it is a voice event message
elif root.find("Event") is not None:
self.type = 'event'
# After all then 'normal' message
else:
self.type = 'normal'
        # For normal messages
        if self.type == 'normal':
            # every branch forwards msgType unchanged, so dispatch directly
            self.initType(msgType, incomingMessage)
        # For recognition messages
        if self.type == 'recognition':
            self.initType('voice', incomingMessage)
            # Construct ```self._Recognition```; it is the only tag beyond a normal 'voice' message
            self._Recognition = root.find("Recognition").text
        # For event messages
        if self.type == 'event':
            self.initType('event', incomingMessage)
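ParsingContainer's find-then-default extraction pattern works the same with the standard library's `xml.etree.ElementTree`, used here instead of lxml so the sketch has no third-party dependency; the helper name and field selection are illustrative:

```python
import xml.etree.ElementTree as ET

def extract(xml_string, tags, defaults=None):
    # Return {tag: text} for each tag, falling back to '' (or a
    # per-tag default) when the element is missing, as __init__ does.
    defaults = defaults or {}
    root = ET.fromstring(xml_string)
    out = {}
    for tag in tags:
        node = root.find(tag)
        out[tag] = node.text if node is not None else defaults.get(tag, '')
    return out

msg = '<xml><ToUserName>me</ToUserName><MsgType>text</MsgType><Content>hi</Content></xml>'
fields = extract(msg, ['ToUserName', 'MsgType', 'Content', 'MsgId'])
assert fields['Content'] == 'hi'
assert fields['MsgId'] == ''  # absent tag falls back to ''
```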
class RespondingContainer(object):
"""Package XML to reponse to determained wechat message
For more information please visit: http://mp.weixin.qq.com/wiki/index.php?title=%E5%8F%91%E9%80%81%E8%A2%AB%E5%8A%A8%E5%93%8D%E5%BA%94%E6%B6%88%E6%81%AF
Usage:
>>> rc = RespondingContainer()
>>> rc.initType('text') # Or we can ellipsis this since it is of 'text' by default
>>> # Notice we don't need to set the 'CreateTime' since it has been generated automatically :)
>>> rc.setElementByTag(FromUserName='the_server', ToUserName='the_wechat_client',Content='Hello dude!')
>>> tpl_out = rc.dumpXML()
>>> tpl_out
>>><xml>
<ToUserName>the_wechat_client</ToUserName>
<FromUserName>the_server</FromUserName>
<CreateTime>1397808770</CreateTime>
<MsgType>text</MsgType>
<Content>Hello dude!</Content>
</xml>
>>>
"""
    def __init__(self, MsgType='text'):
        self._MsgType = MsgType
        # By default load the 'text' XML template
        the_tpl = globals()['tpl_' + self._MsgType].encode('utf-8').decode('utf-8')
        self.root = etree.fromstring(the_tpl)
    def initType(self, MsgType='text'):
        tpl_list = ['text', 'image', 'voice', 'video', 'music', 'news']
        if MsgType not in tpl_list:
            raise ValueError("Invalid responding message MsgType '%s'" % MsgType)
        # Reload the matching template and reset the default tag values
        self.__init__(MsgType)
def setElementByTag(self, **kwargs):
""" To package XML message into an object
Usage:
>>> setElementByTag(FromUserName='the_wechat_server',ToUserName='the_wechat_client',Content='Hello dude!')
# In this way we can then use ```dumpXML()``` to get the XML we need to reponse to wechat clients! :)
"""
        ## assign the creation time
        self.root.find('CreateTime').text = str(int(time.time()))
        # Tags that live inside a wrapper element named after the message
        # type, e.g. <Image><MediaId>...</MediaId></Image>
        wrapped = {
            'image': ('Image', ['MediaId']),
            'voice': ('Voice', ['MediaId']),
            'video': ('Video', ['MediaId', 'Title', 'Description']),
            'music': ('Music', ['Title', 'Description', 'MusicUrl', 'HQMusicUrl', 'ThumbMediaId']),
        }
        # TODO: 'news' messages still need per-item handling of <Articles>
        wrapper, inner_tags = wrapped.get(self._MsgType, (None, []))
        for k, v in kwargs.items():
            try:
                if wrapper is not None and k in inner_tags:
                    ## assign/update a value nested inside the wrapper element
                    self.root.find(wrapper).find(k).text = v
                else:
                    ## assign/update a top-level value
                    self.root.find(k).text = v
            except Exception as e:
                print e
                raise e
def dumpXML(self):
# To dump the XML we need
# the ```self.root``` has been assigned already
return etree.tostring(self.root, encoding='utf-8',method='xml',pretty_print=True)
# Below are the templates for all the response messages valid for Wechat.
# For more information, please visit: http://mp.weixin.qq.com/wiki/index.php?title=%E5%8F%91%E9%80%81%E8%A2%AB%E5%8A%A8%E5%93%8D%E5%BA%94%E6%B6%88%E6%81%AF
global tpl_text
global tpl_image
global tpl_voice
global tpl_video
global tpl_music
global tpl_news
tpl_text = u'''<xml>
<ToUserName><![CDATA[toUser]]></ToUserName>
<FromUserName><![CDATA[fromUser]]></FromUserName>
<CreateTime>12345678</CreateTime>
<MsgType><![CDATA[text]]></MsgType>
<Content><![CDATA[你好]]></Content>
</xml>'''
tpl_image = '''<xml>
<ToUserName><![CDATA[toUser]]></ToUserName>
<FromUserName><![CDATA[fromUser]]></FromUserName>
<CreateTime>12345678</CreateTime>
<MsgType><![CDATA[image]]></MsgType>
<Image>
<MediaId><![CDATA[media_id]]></MediaId>
</Image>
</xml>'''
tpl_voice = '''<xml>
<ToUserName><![CDATA[toUser]]></ToUserName>
<FromUserName><![CDATA[fromUser]]></FromUserName>
<CreateTime>12345678</CreateTime>
<MsgType><![CDATA[voice]]></MsgType>
<Voice>
<MediaId><![CDATA[media_id]]></MediaId>
</Voice>
</xml>'''
tpl_video = '''<xml>
<ToUserName><![CDATA[toUser]]></ToUserName>
<FromUserName><![CDATA[fromUser]]></FromUserName>
<CreateTime>12345678</CreateTime>
<MsgType><![CDATA[video]]></MsgType>
<Video>
<MediaId><![CDATA[media_id]]></MediaId>
<Title><![CDATA[title]]></Title>
<Description><![CDATA[description]]></Description>
</Video>
</xml>'''
tpl_music = '''<xml>
<ToUserName><![CDATA[toUser]]></ToUserName>
<FromUserName><![CDATA[fromUser]]></FromUserName>
<CreateTime>12345678</CreateTime>
<MsgType><![CDATA[music]]></MsgType>
<Music>
<Title><![CDATA[TITLE]]></Title>
<Description><![CDATA[DESCRIPTION]]></Description>
<MusicUrl><![CDATA[MUSIC_Url]]></MusicUrl>
<HQMusicUrl><![CDATA[HQ_MUSIC_Url]]></HQMusicUrl>
<ThumbMediaId><![CDATA[media_id]]></ThumbMediaId>
</Music>
</xml>'''
tpl_news = '''<xml>
<ToUserName><![CDATA[toUser]]></ToUserName>
<FromUserName><![CDATA[fromUser]]></FromUserName>
<CreateTime>12345678</CreateTime>
<MsgType><![CDATA[news]]></MsgType>
<ArticleCount>2</ArticleCount>
<Articles>
<item>
<Title><![CDATA[title1]]></Title>
<Description><![CDATA[description1]]></Description>
<PicUrl><![CDATA[picurl]]></PicUrl>
<Url><![CDATA[url]]></Url>
</item>
<item>
<Title><![CDATA[title]]></Title>
<Description><![CDATA[description]]></Description>
<PicUrl><![CDATA[picurl]]></PicUrl>
<Url><![CDATA[url]]></Url>
</item>
</Articles>
</xml>'''
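Filling one of these templates can be sketched with the standard library's ElementTree. Note that unlike lxml, ElementTree does not emit CDATA sections, so this only illustrates the find-and-set flow RespondingContainer.setElementByTag uses, on a CDATA-free copy of the text template:

```python
import time
import xml.etree.ElementTree as ET

tpl = '''<xml>
<ToUserName>toUser</ToUserName>
<FromUserName>fromUser</FromUserName>
<CreateTime>12345678</CreateTime>
<MsgType>text</MsgType>
<Content>placeholder</Content>
</xml>'''

root = ET.fromstring(tpl)
# the same find-and-set flow setElementByTag performs
root.find('CreateTime').text = str(int(time.time()))
root.find('ToUserName').text = 'the_wechat_client'
root.find('FromUserName').text = 'the_server'
root.find('Content').text = 'Hello dude!'
out = ET.tostring(root, encoding='unicode')
assert '<Content>Hello dude!</Content>' in out
```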
# Positive response
class PositiveRespondingContainer(object):
    '''Uses the Wechat custom service API to push 6 types of messages to
    Wechat clients who sent messages to the public Wechat service. Those 6 types are:
    text, image, voice, video, music, news
    The dumped result is of dict format.
    Use json.dumps(the_dict_object) to serialize it before posting the response.
    '''
    def __init__(self, MsgType='text'):
        self._MsgType = MsgType
        # By default build ```self.the_dict``` from the 'text' JSON template
        the_json_tpl = globals()['json_' + self._MsgType].encode('utf-8').decode('utf-8')
        self.the_dict = json.loads(the_json_tpl)
    def initType(self, MsgType='text'):
        if MsgType not in ['text', 'image', 'voice', 'video', 'music', 'news']:
            raise ValueError("It has no message type: '%s'" % MsgType)
        else:
            # reload ```self.the_dict``` for the new message type
            self.__init__(MsgType)
    def setElementByKey(self, **kwargs):
        '''Sets ```self.the_dict``` according to the message type chosen via ```initType(MsgType='text')```.
        Notice: all kwargs keys in this function must be lower case; official Wechat defines that.'''
        # Keys that live inside the nested per-type dict,
        # e.g. the_dict['music']['musicurl']
        nested_keys = {
            'text': ['content'],
            'image': ['media_id'],
            'voice': ['media_id'],
            'video': ['media_id', 'title', 'description'],
            'music': ['musicurl', 'title', 'description', 'hqmusicurl', 'thumb_media_id'],
        }
        for k, v in kwargs.items():
            try:
                if self._MsgType == 'news' and k == 'articles':
                    # must already be a packaged list of article dicts, e.g.
                    # [{"title": ..., "description": ..., "url": ..., "picurl": ...}, ...]
                    if type(v) == list:
                        self.the_dict['news'][k] = v
                    else:
                        raise ValueError("The value of the key 'articles' should be of type list")
                elif k in nested_keys.get(self._MsgType, []):
                    self.the_dict[self._MsgType][k] = v
                else:
                    self.the_dict[k] = v
            except Exception as e:
                print e
                raise e
    # package an article
    @staticmethod
    def packageArticle(title="default title", description="default description", url="http://www.baidu.com", picurl="http://www.baidu.com/img/bdlogo.gif"):
        '''Returns an article list containing a single dict, for constructing the JSON dump.
        Used together with ```setElementByKey(touser='someone', msgtype='news', articles=packageArticle())```
        '''
        return [{"title": title, "description": description, "url": url, "picurl": picurl}]
    # dump the dict for later JSON serialization
    def dumpDict(self):
        return self.the_dict
json_text = '''{
"touser":"OPENID",
"msgtype":"text",
"text":
{
"content":"Hello World"
}
}'''
json_image = '''{
"touser":"OPENID",
"msgtype":"image",
"image":
{
"media_id":"MEDIA_ID"
}
}'''
json_voice = '''{
"touser":"OPENID",
"msgtype":"voice",
"voice":
{
"media_id":"MEDIA_ID"
}
}'''
json_video = '''{
"touser":"OPENID",
"msgtype":"video",
"video":
{
"media_id":"MEDIA_ID",
"title":"TITLE",
"description":"DESCRIPTION"
}
}'''
json_music = '''{
"touser":"OPENID",
"msgtype":"music",
"music":
{
"title":"MUSIC_TITLE",
"description":"MUSIC_DESCRIPTION",
"musicurl":"MUSIC_URL",
"hqmusicurl":"HQ_MUSIC_URL",
"thumb_media_id":"THUMB_MEDIA_ID"
}
}'''
json_news = '''{
"touser":"OPENID",
"msgtype":"news",
"news":{
"articles": [
{
"title":"Happy Day",
"description":"Is Really A Happy Day",
"url":"URL",
"picurl":"PIC_URL"
},
{
"title":"Happy Day",
"description":"Is Really A Happy Day",
"url":"URL",
"picurl":"PIC_URL"
}
]
}
}'''
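The JSON templates above are just dict shapes. A standalone sketch of producing the text payload the custom-service API expects (the builder function is illustrative, not part of the module):

```python
import json

def build_text_message(touser, content):
    # shape matches the json_text template above
    return {'touser': touser, 'msgtype': 'text', 'text': {'content': content}}

payload = json.dumps(build_text_message('OPENID', 'Hello World'), ensure_ascii=False)
decoded = json.loads(payload)
assert decoded['msgtype'] == 'text'
assert decoded['text']['content'] == 'Hello World'
```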
class SubscriberManager(object):
'''To manage the subscriber groups, profile, location, list.
Usage:
>>> sm = SubscriberManager()
>>> sm.loadToken('abcdefg1234567')
>>> hisprofile = sm.getSubscriberProfile(openid='his_open_id', lang='zh_CN')
'''
def __init__(self, token=''):
self._token = token
def loadToken(self, token=''):
'''Firstly load the access token, then use the functions below'''
self._token = token
    def getSubscriberProfile(self, openid='', lang='zh_CN'):
        '''The openid parameter is unique per Wechat public service.
        Returns a dict if ```token``` and ```openid``` are valid.
        If not valid or not existing, returns None.
        Besides 'zh_CN', the lang parameter also accepts 'zh_TW' and 'en'.
        For more information please visit: http://mp.weixin.qq.com/wiki/index.php?title=%E8%8E%B7%E5%8F%96%E7%94%A8%E6%88%B7%E5%9F%BA%E6%9C%AC%E4%BF%A1%E6%81%AF'''
url = "https://api.weixin.qq.com/cgi-bin/user/info?access_token=" + self._token + "&openid=" + openid + "&lang=" + lang
try:
a = urllib2.urlopen(url)
except Exception as e:
print e
return None
else:
            gotten = a.read()
            a_dict = json.loads(gotten)
            # an 'errcode' key means a wrong appid or openid
            if 'errcode' in a_dict:
                return None
            else:
                return a_dict
    def createGroup(self, name=''):
        '''Creates a group with the given name.
        If created, returns the new group id as an 'int'.
        If not, returns None.
        '''
url = "https://api.weixin.qq.com/cgi-bin/groups/create?access_token=" + self._token
postData = '{"group": {"name": "%s"} }' % name
request = urllib2.Request(url,data=postData)
request.get_method = lambda : 'POST'
try:
response = urllib2.urlopen(request)
except Exception as e:
print e
return None
else:
a_dict = json.loads(response.read())
            if 'errcode' in a_dict:
                return None
            else:
                return a_dict['group']['id']
def getAllgroups(self):
''' A dict will be returned.
For more information please visit:
http://mp.weixin.qq.com/wiki/index.php?title=%E5%88%86%E7%BB%84%E7%AE%A1%E7%90%86%E6%8E%A5%E5%8F%A3#.E6.9F.A5.E8.AF.A2.E6.89.80.E6.9C.89.E5.88.86.E7.BB.84
'''
url = "https://api.weixin.qq.com/cgi-bin/groups/get?access_token=" + self._token
try:
response = urllib2.urlopen(url)
except Exception as e:
print e
return None
else:
a_dict = json.loads(response.read())
            if 'errcode' in a_dict:
                return None
            else:
                return a_dict
def getHisGroupID(self, openid=''):
'''Get a subscriber's group ID. The ID is of type 'int'.
If openid wrong or token invalid, 'None' will be returned.
For more information, please visit:
http://mp.weixin.qq.com/wiki/index.php?title=%E5%88%86%E7%BB%84%E7%AE%A1%E7%90%86%E6%8E%A5%E5%8F%A3#.E6.9F.A5.E8.AF.A2.E7.94.A8.E6.88.B7.E6.89.80.E5.9C.A8.E5.88.86.E7.BB.84'''
url = "https://api.weixin.qq.com/cgi-bin/groups/getid?access_token="+ self._token
postData = '{"openid":"%s"}' % openid
request = urllib2.Request(url,data=postData)
try:
response = urllib2.urlopen(request)
except Exception as e:
print e
return None
else:
a_dict = json.loads(response.read())
            if 'errcode' in a_dict:
                return None
            else:
                return a_dict['groupid']
    def updateGroupName(self, groupid='', new_name=''):
        '''Renames the given group id to new_name.
        Returns True if updated, False otherwise.
        For more information, please visit:
        http://mp.weixin.qq.com/wiki/index.php?title=%E5%88%86%E7%BB%84%E7%AE%A1%E7%90%86%E6%8E%A5%E5%8F%A3#.E4.BF.AE.E6.94.B9.E5.88.86.E7.BB.84.E5.90.8D
        '''
url = "https://api.weixin.qq.com/cgi-bin/groups/update?access_token=" + self._token
postData = '{"group":{"id":%s,"name":"%s"}}' % (groupid, new_name)
request = urllib2.Request(url,data=postData)
try:
response = urllib2.urlopen(request)
except Exception as e:
print e
return False
else:
            a_dict = json.loads(response.read())
            return a_dict.get('errcode') == 0
def moveHimToGroup(self, openid='', groupid=''):
        '''Moves a subscriber to another group.
        Returns True if moved, False otherwise.
        For more information please visit:
        http://mp.weixin.qq.com/wiki/index.php?title=%E5%88%86%E7%BB%84%E7%AE%A1%E7%90%86%E6%8E%A5%E5%8F%A3#.E7.A7.BB.E5.8A.A8.E7.94.A8.E6.88.B7.E5.88.86.E7.BB.84'''
url = "https://api.weixin.qq.com/cgi-bin/groups/members/update?access_token=" + self._token
postData = '{"openid":"%s","to_groupid":%s}' % (openid, groupid)
request = urllib2.Request(url,data=postData)
try:
response = urllib2.urlopen(request)
except Exception as e:
print e
return False
else:
            a_dict = json.loads(response.read())
            return a_dict.get('errcode') == 0
def getSubscriberList(self, next_openid=''):
        '''Gets the subscriber list.
        If ```token``` and ```next_openid``` are valid, a dict is returned; otherwise None.
        If ```next_openid``` does not exist, the official Wechat server treats it as '' by default.
        For more information please visit:
        http://mp.weixin.qq.com/wiki/index.php?title=%E8%8E%B7%E5%8F%96%E5%85%B3%E6%B3%A8%E8%80%85%E5%88%97%E8%A1%A8
        '''
url = "https://api.weixin.qq.com/cgi-bin/user/get?access_token=" + self._token + "&next_openid=" + next_openid
try:
response = urllib2.urlopen(url)
except Exception as e:
print e
return None
else:
            a_dict = json.loads(response.read())
            if 'errcode' in a_dict:
                return None
            else:
                return a_dict
def getAPIToken(appid='', appsecret=''):
    '''Gets a Wechat API token for customer service or other calls.
    If ```appid``` and ```appsecret``` are correct, the token string is returned.
    If not, returns None.'''
default_url = 'https://api.weixin.qq.com/cgi-bin/token?grant_type=client_credential&'
url = default_url + 'appid=' + appid + '&secret=' + appsecret
try:
a = urllib2.urlopen(url)
except Exception as e:
print e
return None
else:
gotten = a.read()
a_dict = json.loads(gotten)
        if 'access_token' in a_dict:
            return a_dict['access_token']
        else:
            # a missing token means a wrong appid or secret
            return None
def postMessage2API(token='', messageString=''):
    '''Using the token, posts the message to the target user.
    Returns a Boolean value.'''
url = "https://api.weixin.qq.com/cgi-bin/message/custom/send?access_token=" + token
request = urllib2.Request(url, messageString)
request.get_method = lambda : 'POST'
try:
response = urllib2.urlopen(request)
except Exception as e:
print e
return False
else:
        j = json.loads(response.read())
        # check whether the message was accepted
        return j.get('errcode') == 0
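The POST plumbing used throughout these helpers can be checked offline. In Python 3, urllib2 becomes urllib.request, and a Request built with a data body already defaults to POST; the URL matches the endpoint this module targets, with a placeholder token (nothing is actually sent):

```python
import json
import urllib.request

token = 'PLACEHOLDER_TOKEN'  # hypothetical; normally obtained via getAPIToken()
url = ('https://api.weixin.qq.com/cgi-bin/message/custom/send'
       '?access_token=' + token)
body = json.dumps({'touser': 'OPENID', 'msgtype': 'text',
                   'text': {'content': 'Hello World'}}).encode('utf-8')
req = urllib.request.Request(url, data=body)
# we only inspect how the request would go out; no network I/O happens
assert req.get_method() == 'POST'
assert req.full_url.startswith('https://api.weixin.qq.com')
```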


class MenuManager(object):
    '''Manage the bottom menu of the WeChat service.

    Usage:
    >>> mm = MenuManager()
    >>> mm.loadToken('something_the_api_token')
    >>> flag = mm.createMenu('the_menu_format_constructed_from_a_JSON_as_a_string')
    >>> flag
    True
    >>> menu_got = mm.getMenu()
    >>> menu_got
    {u'menu': {u'button': [{u'type': u'click', u'name': u'\u7b2c\u4e00\u94ae', u'key': u'V1001_TODAY_MUSIC', u'sub_button': []}, {u'type': u'click', u'name': u'\u7b2c\u4e8c\u94ae', u'key': u'V1001_TODAY_SINGER', u'sub_button': []}, {u'name': u'\u7b2c\u4e09\u94ae', u'sub_button': [{u'url': u'http://www.soso.com/', u'type': u'view', u'name': u'\u641c\u641c', u'sub_button': []}, {u'url': u'http://v.qq.com/', u'type': u'view', u'name': u'\u770b\u7535\u5f71', u'sub_button': []}, {u'type': u'click', u'name': u'\u5938\u6211\u5e05', u'key': u'V1001_GOOD', u'sub_button': []}]}]}}
    >>> flag2 = mm.deleteMenu()
    >>> flag2
    True
    >>> mm.getMenu()
    >>> # nothing returned: it means there is no menu at all
    '''

    def __init__(self, token=''):
        self._token = token

    def loadToken(self, token=''):
        '''Load the token before using the other methods.'''
        self._token = token

    def createMenu(self, menu_format=''):
        '''Create the menu; this needs a token and the menu format.

        ```menu_format``` is of type string, constructed from a JSON document.
        For more information please visit:
        http://mp.weixin.qq.com/wiki/index.php?title=%E8%87%AA%E5%AE%9A%E4%B9%89%E8%8F%9C%E5%8D%95%E5%88%9B%E5%BB%BA%E6%8E%A5%E5%8F%A3
        '''
        token = self._token
        url = "https://api.weixin.qq.com/cgi-bin/menu/create?access_token=" + token
        request = urllib2.Request(url, menu_format)
        request.get_method = lambda: 'POST'
        try:
            response = urllib2.urlopen(request)
        except Exception as e:
            print e
            return False
        else:
            j = json.loads(response.read())
            # check if the menu was accepted
            if j['errcode'] == 0:
                return True
            else:
                return False

    def getMenu(self):
        '''Get the menu format from the API.

        If a menu exists, a dict is returned.
        If not, None is returned.
        '''
        token = self._token
        url = "https://api.weixin.qq.com/cgi-bin/menu/get?access_token=" + token
        try:
            response = urllib2.urlopen(url)
        except Exception as e:
            # it would be better to raise here if the WeChat remote server is down
            print e
            return None
        else:
            a_dict = json.loads(response.read())
            if 'errcode' in a_dict:
                if a_dict['errcode'] != 0:
                    return None
                else:
                    return a_dict
            else:
                return a_dict

    def deleteMenu(self):
        token = self._token
        url = "https://api.weixin.qq.com/cgi-bin/menu/delete?access_token=" + token
        try:
            response = urllib2.urlopen(url)
        except Exception as e:
            print e
            return False
        else:
            a_dict = json.loads(response.read())
            if 'errcode' in a_dict and a_dict['errcode'] == 0:
                return True
            else:
                return False


class MediaManager(object):
    '''There are four types of media supported by WeChat:
    image, voice, video, thumb.
    Post the file to the official WeChat server and get the response.
    '''

    def __init__(self, media_type='image', token=''):
        self._media_type = media_type
        self._token = token

    def loadToken(self, token=''):
        self._token = token

    def uploadMedia(self, media_type='image', media_path=''):
        '''Post the determined media file to the official URL.

        If the file is valid, a dict is returned.
        If not, None is returned.
        For more information, please visit: http://mp.weixin.qq.com/wiki/index.php?title=%E4%B8%8A%E4%BC%A0%E4%B8%8B%E8%BD%BD%E5%A4%9A%E5%AA%92%E4%BD%93%E6%96%87%E4%BB%B6'''
        if media_type not in ['image', 'voice', 'video', 'thumb']:
            raise ValueError("Media type: '%s' not valid" % media_type)
        else:
            self._media_type = media_type
        url = "http://file.api.weixin.qq.com/cgi-bin/media/upload?access_token=" + self._token + "&type=" + self._media_type
        register_openers()
        try:
            datagen, headers = multipart_encode({"image1": open(media_path, "rb")})
        except Exception as e:
            # print e
            return None
        else:
            request = urllib2.Request(url, data=datagen, headers=headers)
            try:
                response = urllib2.urlopen(request)
            except Exception as e:
                print e
                return None
| 37.391187 | 577 | 0.527237 | 4,799 | 41,579 | 4.487602 | 0.113982 | 0.028603 | 0.011283 | 0.016298 | 0.474043 | 0.434389 | 0.4095 | 0.384751 | 0.333488 | 0.320347 | 0 | 0.018928 | 0.343082 | 41,579 | 1,111 | 578 | 37.424842 | 0.769532 | 0.102143 | 0 | 0.572592 | 0 | 0.004071 | 0.216631 | 0.070952 | 0 | 0 | 0 | 0.0018 | 0 | 0 | null | null | 0.006784 | 0.010855 | null | null | 0.033921 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4d8056be31284c17cf40684370c5ac0209b3ede | 1,296 | py | Python | tg/release.py | TurboGears/tg2 | f40a82d016d70ce560002593b4bb8f83b57f87b3 | [
"MIT"
] | 812 | 2015-01-16T22:57:52.000Z | 2022-03-27T04:49:40.000Z | tg/release.py | KonstantinKlepikov/tg2 | b230e98bf6f64b3620dcb4214fa45dafddb0d60f | [
"MIT"
] | 74 | 2015-02-18T17:55:31.000Z | 2021-12-13T10:41:08.000Z | tg/release.py | KonstantinKlepikov/tg2 | b230e98bf6f64b3620dcb4214fa45dafddb0d60f | [
"MIT"
] | 72 | 2015-06-10T06:02:45.000Z | 2022-03-27T08:37:24.000Z | """TurboGears project related information"""
version = "2.4.3"
description = "Next generation TurboGears"
long_description="""
TurboGears brings together best-of-breed python tools
to create a flexible, full featured, and easy to use web
framework.
TurboGears 2 provides an integrated and well tested set of tools for
everything you need to build dynamic, database driven applications.
It provides a full range of tools for front end javascript
development, back-end database development and everything in between:
* dynamic javascript powered widgets (ToscaWidgets2)
* automatic JSON generation from your controllers
* powerful, designer friendly XHTML based templating
* object or route based URL dispatching
* powerful Object Relational Mappers (SQLAlchemy)
The latest development version is available in the
`TurboGears Git repositories`_.
.. _TurboGears Git repositories:
https://github.com/TurboGears
"""
url = "http://www.turbogears.org/"
author = "Alessandro Molina, Mark Ramm, Christopher Perkins, Jonathan LaCour, Rick Copland, Alberto Valverde, Michael Pedersen and the TurboGears community"
email = "amol@turbogears.org"
copyright = """Copyright 2005-2020 Kevin Dangoor, Alberto Valverde, Mark Ramm, Christopher Perkins, Alessandro Molina and contributors"""
license = "MIT"
| 41.806452 | 155 | 0.794753 | 167 | 1,296 | 6.149701 | 0.688623 | 0.013632 | 0.019474 | 0.050633 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011638 | 0.138117 | 1,296 | 30 | 156 | 43.2 | 0.907789 | 0.029321 | 0 | 0 | 0 | 0.04 | 0.907348 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4d8f82be29e6cb13695308004bac74a741d2095 | 8,111 | py | Python | bogglesolver.py | gammazero/pybogglesolver | 71d2c6d6ae8c9b5f580f6b27479aea3450a2895a | [
"MIT"
] | null | null | null | bogglesolver.py | gammazero/pybogglesolver | 71d2c6d6ae8c9b5f580f6b27479aea3450a2895a | [
"MIT"
] | null | null | null | bogglesolver.py | gammazero/pybogglesolver | 71d2c6d6ae8c9b5f580f6b27479aea3450a2895a | [
"MIT"
] | null | null | null | """
Module to generate solutions for Boggle grids.

Andrew Gillis 22 Dec. 2009

"""
from __future__ import print_function

import os
import sys
import collections

import trie

if sys.version < '3':
    range = xrange


class BoggleSolver(object):
    """
    This class uses an external words file as a dictionary of acceptable boggle
    words. When an instance of this class is created, it sets up an internal
    dictionary to look up valid boggle answers. The class' solve method can be
    used repeatedly to generate solutions for different boggle grids.

    """

    def __init__(self, words_file, xlen=4, ylen=4, pre_compute_adj=False):
        """Create and initialize BoggleSolver instance.

        This creates the internal trie for fast word lookup letter-by-letter.
        Words that begin with capital letters and words that are not within the
        specified length limits are filtered out.

        Arguments:
        xlen            -- X dimension (width) of board.
        ylen            -- Y dimension (height) of board.
        pre_compute_adj -- Pre-compute adjacency matrix.

        """
        assert(xlen > 1)
        assert(ylen > 1)
        self.xlen = xlen
        self.ylen = ylen
        self.board_size = xlen * ylen
        if pre_compute_adj:
            self.adjacency = BoggleSolver._create_adjacency_matrix(xlen, ylen)
        else:
            self.adjacency = None
        self.trie = BoggleSolver._load_dictionary(
            words_file, self.board_size, 3)

    def solve(self, grid):
        """Generate all solutions for the given boggle grid.

        Arguments:
        grid -- A string of 16 characters representing the letters in a boggle
                grid, from top left to bottom right.

        Returns:
        A list of words found in the boggle grid.
        None if given invalid grid.

        """
        if self.trie is None:
            raise RuntimeError('words file not loaded')
        if len(grid) != self.board_size:
            raise RuntimeError('invalid board')
        board = list(grid)
        trie = self.trie
        words = set()
        q = collections.deque()
        adjs = self.adjacency

        for init_sq in range(self.board_size):
            c = board[init_sq]
            q.append((init_sq, c, trie.get_child(c), [init_sq]))

        while q:
            parent_sq, prefix, pnode, seen = q.popleft()
            pnode_get_child = pnode.get_child
            if adjs:
                adj = adjs[parent_sq]
            else:
                adj = self._calc_adjacency(self.xlen, self.ylen, parent_sq)
            for cur_sq in adj:
                if cur_sq in seen:
                    continue
                c = board[cur_sq]
                cur_node = pnode_get_child(c)
                if cur_node is None:
                    continue
                s = prefix + c
                q.append((cur_sq, s, cur_node, seen + [cur_sq]))
                if cur_node._is_word:
                    if s[0] == 'q':
                        # Rehydrate q-words with 'u'.
                        words.add('qu' + s[1:])
                    else:
                        words.add(s)

        return words

    def show_grid(self, grid):
        """Utility method to print a 4x4 boggle grid.

        Arguments:
        grid -- A string of X*Y characters representing the letters in a boggle
                grid, from top left to bottom right.

        """
        for y in range(self.ylen):
            print('+' + '---+' * self.xlen)
            yi = y * self.xlen
            line = ['| ']
            for x in range(self.xlen):
                cell = grid[yi+x].upper()
                if cell == 'Q':
                    line.append('Qu')
                    line.append('| ')
                else:
                    line.append(cell)
                    line.append(' | ')
            print(''.join(line))
        print('+' + '---+' * self.xlen)

    def find_substrings(self, string):
        """Find all valid substrings in the given string.

        This method is not necessary for the boggle solver, but is a utility
        for testing that all substrings of a word are correctly found.

        Arguments:
        string -- The string in which to search for valid substrings.

        Returns:
        List of substrings that are valid words.

        """
        found = set()
        for start in range(len(string)):
            cur = self.trie
            letters = [None] * self.board_size
            count = 0
            for l in string[start:]:
                letters[count] = l
                count += 1
                cur = cur.get_child(l)
                if cur is None:
                    break
                if cur._is_word:
                    found.add(''.join(letters[:count]))
                if not cur.has_children():
                    break
        return found

    @staticmethod
    def _load_dictionary(words_file, max_len, min_len):
        """Private method to create the trie for finding words.

        Arguments:
        words_file -- Path of file containing words for reference.

        Return:
        Count of words inserted into trie.

        """
        if not os.path.isfile(words_file):
            raise RuntimeError('words file not found: ' + words_file)
        print('creating dictionary...')
        root = trie.Trie()
        word_count = 0
        if words_file.endswith('gz'):
            import gzip
            f = gzip.open(words_file)
        elif words_file.endswith('bz2'):
            import bz2
            f = bz2.BZ2File(words_file)
        else:
            f = open(words_file)
        try:
            for word in f:
                if sys.version < '3':
                    word = word.strip()
                else:
                    word = word.strip().decode("utf-8")
                # Skip words that are too long or too short.
                word_len = len(word)
                if word_len > max_len or word_len < min_len:
                    continue
                # Skip words that start with capital letter.
                if word[0].isupper():
                    continue
                if word[0] == 'q':
                    # Skip words starting with q not followed by u.
                    if word[1] != 'u':
                        continue
                    # Remove "u" from q-words so that only the q is matched.
                    word = 'q' + word[2:]
                root.insert(word)
                word_count += 1
        finally:
            f.close()
        print('Loaded', word_count, 'words from file.')
        return root

    @staticmethod
    def _create_adjacency_matrix(xlim, ylim):
        adj_list = [[]] * (ylim * xlim)
        for i in range(ylim * xlim):
            # Current cell index = y * xlim + x
            adj = BoggleSolver._calc_adjacency(xlim, ylim, i)
            adj_list[i] = adj
        return adj_list

    @staticmethod
    def _calc_adjacency(xlim, ylim, sq):
        adj = []
        y = int(sq / xlim)
        x = sq - (y * xlim)
        # Look at row above current cell.
        if y-1 >= 0:
            above = sq - xlim
            # Look to upper left.
            if x-1 >= 0:
                adj.append(above - 1)
            # Look above.
            adj.append(above)
            # Look upper right.
            if x+1 < xlim:
                adj.append(above + 1)
        # Look at same row that current cell is on.
        # Look to left of current cell.
        if x-1 >= 0:
            adj.append(sq - 1)
        # Look to right of current cell.
        if x+1 < xlim:
            adj.append(sq + 1)
        # Look at row below current cell.
        if y+1 < ylim:
            below = sq + xlim
            # Look to lower left.
            if x-1 >= 0:
                adj.append(below - 1)
            # Look below.
            adj.append(below)
            # Look to lower right.
            if x+1 < xlim:
                adj.append(below + 1)
        return adj
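For a quick sanity check of the adjacency logic above, here is a standalone sketch using offset loops instead of the hand-unrolled checks; it yields neighbor indices in the same row-major order:

```python
def calc_adjacency(xlim, ylim, sq):
    """Standalone re-implementation of BoggleSolver._calc_adjacency.

    Visits the eight surrounding offsets in row-major order and keeps
    those that fall inside the xlim-by-ylim board.
    """
    adj = []
    y, x = divmod(sq, xlim)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue  # skip the cell itself
            ny, nx = y + dy, x + dx
            if 0 <= ny < ylim and 0 <= nx < xlim:
                adj.append(ny * xlim + nx)
    return adj


# On a 2x2 board every square touches the other three:
print(calc_adjacency(2, 2, 0))  # [1, 2, 3]
```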
| 31.076628 | 79 | 0.501911 | 959 | 8,111 | 4.151199 | 0.240876 | 0.03165 | 0.006029 | 0.003768 | 0.113288 | 0.082643 | 0.059784 | 0.034665 | 0.034665 | 0.034665 | 0 | 0.010509 | 0.413389 | 8,111 | 260 | 80 | 31.196154 | 0.826187 | 0.267784 | 0 | 0.181818 | 1 | 0 | 0.02463 | 0 | 0 | 0 | 0 | 0 | 0.012987 | 1 | 0.045455 | false | 0 | 0.045455 | 0 | 0.12987 | 0.038961 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4db73effedd714b6a4d9b15c4a8d627cf47c849 | 1,151 | py | Python | tests/manage/monitoring/pagerduty/test_ceph.py | MeridianExplorer/ocs-ci | a33d5116128b88f176f5eff68a3ef805125cdba1 | [
"MIT"
] | null | null | null | tests/manage/monitoring/pagerduty/test_ceph.py | MeridianExplorer/ocs-ci | a33d5116128b88f176f5eff68a3ef805125cdba1 | [
"MIT"
] | null | null | null | tests/manage/monitoring/pagerduty/test_ceph.py | MeridianExplorer/ocs-ci | a33d5116128b88f176f5eff68a3ef805125cdba1 | [
"MIT"
] | null | null | null | import logging
import pytest

from ocs_ci.framework.testlib import (
    managed_service_required,
    skipif_ms_consumer,
    tier4,
    tier4a,
)
from ocs_ci.ocs import constants
from ocs_ci.utility import pagerduty

log = logging.getLogger(__name__)


@tier4
@tier4a
@managed_service_required
@skipif_ms_consumer
@pytest.mark.polarion_id("OCS-2771")
def test_corrupt_pg_pd(measure_corrupt_pg):
    """
    Test that there is an appropriate incident in PagerDuty when a placement
    group on one OSD is corrupted, and that this incident is cleared when the
    corrupted ceph pool is removed.
    """
    api = pagerduty.PagerDutyAPI()

    # get incidents from the time when the ceph pool was corrupted
    incidents = measure_corrupt_pg.get("pagerduty_incidents")
    target_label = constants.ALERT_CLUSTERERRORSTATE

    # TODO(fbalak): check the whole string in summary and incident alerts
    assert pagerduty.check_incident_list(
        summary=target_label,
        incidents=incidents,
        urgency="high",
    )
    api.check_incident_cleared(
        summary=target_label,
        measure_end_time=measure_corrupt_pg.get("stop"),
    )
| 26.159091 | 80 | 0.741095 | 148 | 1,151 | 5.52027 | 0.533784 | 0.044064 | 0.033048 | 0.068543 | 0.093023 | 0.093023 | 0 | 0 | 0 | 0 | 0 | 0.008611 | 0.192876 | 1,151 | 43 | 81 | 26.767442 | 0.870829 | 0.264987 | 0 | 0.068966 | 0 | 0 | 0.042631 | 0 | 0 | 0 | 0 | 0.023256 | 0.034483 | 1 | 0.034483 | false | 0 | 0.172414 | 0 | 0.206897 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4e302bb88e4c014fa9f911add690a08d53c06f0 | 2,578 | py | Python | aiounittest/case.py | tmaila/aiounittest | c43d3b619fd6a8fd071758996a5f42310b0293dc | [
"MIT"
] | 55 | 2017-08-18T10:24:05.000Z | 2022-03-21T08:29:19.000Z | aiounittest/case.py | tmaila/aiounittest | c43d3b619fd6a8fd071758996a5f42310b0293dc | [
"MIT"
] | 15 | 2017-09-22T13:14:43.000Z | 2022-01-23T16:29:22.000Z | aiounittest/case.py | tmaila/aiounittest | c43d3b619fd6a8fd071758996a5f42310b0293dc | [
"MIT"
] | 4 | 2019-11-26T18:08:43.000Z | 2021-06-01T22:12:00.000Z | import asyncio
import unittest

from .helpers import async_test


class AsyncTestCase(unittest.TestCase):
    ''' AsyncTestCase allows to test asynchronous functions.

    The usage is the same as :code:`unittest.TestCase`. It works with other test frameworks
    and runners (eg. `pytest`, `nose`) as well.

    AsyncTestCase can run:
        - test of synchronous code (:code:`unittest.TestCase`)
        - test of asynchronous code, supports syntax with
          :code:`async`/:code:`await` (Python 3.5+) and
          :code:`asyncio.coroutine`/:code:`yield from` (Python 3.4)

    Code to test:

    .. code-block:: python

        import asyncio

        async def async_add(x, y, delay=0.1):
            await asyncio.sleep(delay)
            return x + y

        async def async_one():
            await async_nested_exc()

        async def async_nested_exc():
            await asyncio.sleep(0.1)
            raise Exception('Test')

    Tests:

    .. code-block:: python

        import aiounittest

        class MyTest(aiounittest.AsyncTestCase):

            async def test_await_async_add(self):
                ret = await async_add(1, 5)
                self.assertEqual(ret, 6)

            async def test_await_async_fail(self):
                with self.assertRaises(Exception) as e:
                    await async_one()

    '''

    def get_event_loop(self):
        ''' Method provides an event loop for the test

        It is called before each test; by default :code:`aiounittest.AsyncTestCase` creates the brand new event
        loop every time. After completion, the loop is closed and then recreated, set as default,
        leaving asyncio clean.

        .. note::

            In the most common cases you don't have to bother about this method; the default implementation is the recommended one.
            But if, for some reason, you want to provide your own event loop just override it. Note that :code:`AsyncTestCase` won't close such a loop.

            .. code-block:: python

                class MyTest(aiounittest.AsyncTestCase):

                    def get_event_loop(self):
                        self.my_loop = asyncio.get_event_loop()
                        return self.my_loop

        '''
        return None

    def __getattribute__(self, name):
        attr = super().__getattribute__(name)
        if name.startswith('test_') and asyncio.iscoroutinefunction(attr):
            return async_test(attr, loop=self.get_event_loop())
        else:
            return attr
| 29.976744 | 152 | 0.600465 | 310 | 2,578 | 4.880645 | 0.425806 | 0.041639 | 0.031725 | 0.027759 | 0.054197 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006264 | 0.318852 | 2,578 | 85 | 153 | 30.329412 | 0.855353 | 0.742824 | 0 | 0 | 0 | 0 | 0.011574 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.25 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d4e990995bc970a5eeb5c450531463a5dff36df5 | 2,026 | py | Python | pytouch/elements.py | Krai53n/pytouch | 8a1c69c4ba5981f3cb0bf00db3bcef5dd15e8375 | [
"MIT"
] | null | null | null | pytouch/elements.py | Krai53n/pytouch | 8a1c69c4ba5981f3cb0bf00db3bcef5dd15e8375 | [
"MIT"
] | null | null | null | pytouch/elements.py | Krai53n/pytouch | 8a1c69c4ba5981f3cb0bf00db3bcef5dd15e8375 | [
"MIT"
] | null | null | null | from random import randint
import pyxel

from constants import Screen
import cursors


class Text:
    def __init__(self, text):
        self._text = text
        self._symbol_len = 3
        self._padding_len = 1

    def _count_text_len(self):
        return (
            self._symbol_len + self._padding_len
        ) * len(self._text) - self._padding_len

    def _x_text_center_position(self):
        return (Screen.width - self._count_text_len()) // 2

    def draw(self):
        pyxel.text(self._x_text_center_position(), 0, self._text, 2)


class Score:
    def __init__(self, padding_right=2, padding_top=2):
        self._padding_right = padding_right
        self._padding_top = padding_top
        self.score = 0

    def increase(self):
        self.score += 1

    def reduce(self):
        self.score -= 1

    def draw(self):
        pyxel.text(self._padding_right, self._padding_top,
                   f"Score: {self.score}", (Screen.bg - 2) % 16)


class Circle:
    def __init__(self):
        self._r = 0
        self._col = (Screen.bg - 1) % 16

    def zero(self):
        self._r = 0

    def increase(self, size=1):
        self._r += size

    @property
    def r(self):
        return self._r

    @r.setter
    def r(self, r):
        self._r = r

    @property
    def col(self):
        return self._col

    @col.setter
    def col(self, color):
        self._col = color

    def draw(self, x, y):
        pyxel.circ(x, y, self._r, self._col)


class ReachCircle(Circle):
    def __init__(self):
        super().__init__()
        self.min_r = 10
        self.respawn()

    @property
    def x(self):
        return self._x

    @property
    def y(self):
        return self._y

    def respawn(self):
        self._x = randint(self._r, Screen.width - self._r)
        self._y = randint(self._r, Screen.height - self._r)
        self._r = randint(self.min_r, min(Screen.width, Screen.height) // 2) - 4

    def draw(self):
        pyxel.circb(self._x, self._y, self._r, self._col)
| 21.104167 | 80 | 0.579961 | 278 | 2,026 | 3.92446 | 0.190647 | 0.059578 | 0.064161 | 0.043996 | 0.143905 | 0.043996 | 0 | 0 | 0 | 0 | 0 | 0.016289 | 0.30306 | 2,026 | 95 | 81 | 21.326316 | 0.756374 | 0 | 0 | 0.161765 | 0 | 0 | 0.009378 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.308824 | false | 0 | 0.058824 | 0.088235 | 0.514706 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4ed66dc63c65bd461e9e3340f0322d30f2b6c89 | 319 | py | Python | count_split_inversions/test_count_split_inversions.py | abaldwin/algorithms | 8c8722394c9115c572dadcd8ab601885512fd494 | [
"Apache-2.0"
] | null | null | null | count_split_inversions/test_count_split_inversions.py | abaldwin/algorithms | 8c8722394c9115c572dadcd8ab601885512fd494 | [
"Apache-2.0"
] | null | null | null | count_split_inversions/test_count_split_inversions.py | abaldwin/algorithms | 8c8722394c9115c572dadcd8ab601885512fd494 | [
"Apache-2.0"
] | null | null | null | import unittest
from count_split_inversions import count_inversions


class TestCountSplitInversions(unittest.TestCase):

    def test_count_inversions(self):
        input = [1, 3, 5, 2, 4, 6]
        result = count_inversions(input)
        self.assertEqual(result, 3)


if __name__ == '__main__':
    unittest.main()
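The `count_inversions` function under test lives in the `count_split_inversions` module, which is not part of this file. A sketch of a typical merge-sort-based implementation that satisfies the test:

```python
def count_inversions(arr):
    """Count inversions in O(n log n) with merge sort.

    A hedged sketch of the imported function, not the actual module
    under test. Sorts arr in place as a side effect of merging.
    """
    if len(arr) <= 1:
        return 0
    mid = len(arr) // 2
    left, right = arr[:mid], arr[mid:]
    count = count_inversions(left) + count_inversions(right)
    # Merge while counting split inversions.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
            count += len(left) - i  # every remaining left element is inverted
    arr[:] = merged + left[i:] + right[j:]
    return count


print(count_inversions([1, 3, 5, 2, 4, 6]))  # 3
```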
| 22.785714 | 51 | 0.705329 | 38 | 319 | 5.552632 | 0.631579 | 0.21327 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027451 | 0.200627 | 319 | 13 | 52 | 24.538462 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0.025078 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 1 | 0.111111 | false | 0 | 0.222222 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4f0759288304875f2de20fc2b91d86d509cb718 | 3,820 | py | Python | examples/add_compensation_to_sample.py | whitews/ReFlowRESTClient | 69369bbea501382291b71facea7a511ab8f7848b | [
"BSD-3-Clause"
] | null | null | null | examples/add_compensation_to_sample.py | whitews/ReFlowRESTClient | 69369bbea501382291b71facea7a511ab8f7848b | [
"BSD-3-Clause"
] | null | null | null | examples/add_compensation_to_sample.py | whitews/ReFlowRESTClient | 69369bbea501382291b71facea7a511ab8f7848b | [
"BSD-3-Clause"
] | null | null | null | import getpass
import sys
import json
from reflowrestclient.utils import *

host = raw_input('Host: ')
username = raw_input('Username: ')
password = getpass.getpass('Password: ')

token = get_token(host, username, password)
if token:
    print "Authentication successful"
    print '=' * 40
else:
    print "No token for you!!!"
    sys.exit()


def start():
    # Projects
    project_list = get_projects(host, token)
    for i, result in enumerate(project_list['data']):
        print i, ':', result['project_name']
    project_choice = raw_input('Choose Project: ')
    project = project_list['data'][int(project_choice)]

    # Subjects
    subject_list = get_subjects(host, token, project_pk=project['id'])
    for i, result in enumerate(subject_list['data']):
        print i, ':', result['subject_id']
    subject_choice = raw_input('Choose Subject (leave blank for all subjects): ')
    subject = None
    if subject_choice:
        subject = subject_list['data'][int(subject_choice)]

    # Sites
    site_list = get_sites(host, token, project_pk=project['id'])
    if not site_list:
        sys.exit('There are no sites')
    for i, result in enumerate(site_list['data']):
        print i, ':', result['site_name']
    site_choice = raw_input('Choose Site (required): ')
    site = site_list['data'][int(site_choice)]

    # Samples
    sample_args = [host, token]
    sample_kwargs = {'site_pk': site['id']}
    if subject:
        sample_kwargs['subject_pk'] = subject['id']
    sample_list = get_samples(*sample_args, **sample_kwargs)
    if not sample_list:
        sys.exit('There are no samples')
    for i, result in enumerate(sample_list['data']):
        print i, ':', result['original_filename']
    sample_choice = raw_input('Choose Sample (leave blank for all samples): ')
    sample = None
    if sample_choice:
        sample = sample_list['data'][int(sample_choice)]

    # Compensation
    compensation_list = get_compensations(host, token, site_pk=site['id'], project_pk=project['id'])
    if not compensation_list:
        sys.exit('There are no compensations')
    for i, result in enumerate(compensation_list['data']):
        print i, ':', result['original_filename']
    compensation_choice = raw_input('Choose Compensation (required): ')
    compensation = compensation_list['data'][int(compensation_choice)]

    # Now have user verify information
    print '=' * 40
    print 'You chose to add this compensation to these samples:'
    print '\tCompensation: %s' % compensation['original_filename']
    print 'Samples:'
    if sample:
        print '\t%s' % sample['original_filename']
    else:
        for s in sample_list['data']:
            print '\t%s' % s['original_filename']
    print '=' * 40

    apply_choice = None
    while apply_choice not in ['continue', 'exit']:
        apply_choice = raw_input("Type 'continue' to upload, or 'exit' to abort: ")
    if apply_choice == 'exit':
        sys.exit()
    print 'continue'

    if sample:
        response_dict = add_compensation_to_sample(
            host,
            token,
            sample_pk=str(sample['id']),
            compensation_pk=str(compensation['id'])
        )
        print "Response: ", response_dict['status'], response_dict['reason']
        print 'Data: '
        print json.dumps(response_dict['data'], indent=4)
    else:
        for sample in sample_list['data']:
            response_dict = add_compensation_to_sample(
                host,
                token,
                sample_pk=str(sample['id']),
                compensation_pk=str(compensation['id']),
            )
            print "Response: ", response_dict['status'], response_dict['reason']
            print 'Data: '
            print json.dumps(response_dict['data'], indent=4)

while True:
start() | 28.939394 | 100 | 0.625654 | 458 | 3,820 | 5.028384 | 0.189956 | 0.041685 | 0.033869 | 0.026053 | 0.330004 | 0.258359 | 0.195397 | 0.164134 | 0.164134 | 0.164134 | 0 | 0.002776 | 0.24555 | 3,820 | 132 | 101 | 28.939394 | 0.796322 | 0.020157 | 0 | 0.27957 | 0 | 0 | 0.192668 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.032258 | 0.043011 | null | null | 0.236559 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4f12c3a663d3edb5021b78314c1afd940fc7b1a | 412 | py | Python | accountifie/toolkit/urls.py | imcallister/accountifie | 094834c9d632e0353e3baf8d924eeb10cba0add4 | [
"MIT",
"Unlicense"
] | 4 | 2017-06-02T08:48:48.000Z | 2021-11-21T23:57:15.000Z | accountifie/toolkit/urls.py | imcallister/accountifie | 094834c9d632e0353e3baf8d924eeb10cba0add4 | [
"MIT",
"Unlicense"
] | 3 | 2020-06-05T16:55:42.000Z | 2021-06-10T17:43:12.000Z | accountifie/toolkit/urls.py | imcallister/accountifie | 094834c9d632e0353e3baf8d924eeb10cba0add4 | [
"MIT",
"Unlicense"
] | 4 | 2015-12-15T14:27:51.000Z | 2017-04-21T21:42:27.000Z | from django.conf import settings
from django.conf.urls import url, static

from . import views
from . import jobs


urlpatterns = [
    url(r'^choose_company/(?P<company_id>.*)/$', views.choose_company, name='choose_company'),
    url(r'^cleanlogs/$', jobs.cleanlogs, name='cleanlogs'),
    url(r'^primecache/$', jobs.primecache, name='primecache'),
    url(r'^dump_fixtures/$', views.dump_fixtures),
]
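The route patterns above are plain regular expressions; the capture group can be checked with the stdlib `re` module (Django matches against the request path with the leading slash stripped):

```python
import re

# The company-chooser pattern from urlpatterns above.
pattern = re.compile(r'^choose_company/(?P<company_id>.*)/$')

match = pattern.match('choose_company/42/')
print(match.group('company_id'))  # 42
```

Note that `.*` is greedy and also matches slashes, so `choose_company/a/b/` captures `a/b` as the company id; a stricter pattern such as `[^/]+` would reject nested paths.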
| 27.466667 | 98 | 0.686893 | 53 | 412 | 5.226415 | 0.396226 | 0.057762 | 0.101083 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145631 | 412 | 14 | 99 | 29.428571 | 0.786932 | 0 | 0 | 0 | 0 | 0 | 0.26764 | 0.087591 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
d4f1aa99ca10cb206e4f7702a9c7de6f3d6dfd4e | 5,975 | py | Python | intersight/models/niaapi_version_regex_all_of.py | sdnit-se/intersight-python | 551f7685c0f76bb8af60ec83ffb6f9672d49a4ae | [
"Apache-2.0"
] | 21 | 2018-03-29T14:20:35.000Z | 2021-10-13T05:11:41.000Z | intersight/models/niaapi_version_regex_all_of.py | sdnit-se/intersight-python | 551f7685c0f76bb8af60ec83ffb6f9672d49a4ae | [
"Apache-2.0"
] | 14 | 2018-01-30T15:45:46.000Z | 2022-02-23T14:23:21.000Z | intersight/models/niaapi_version_regex_all_of.py | sdnit-se/intersight-python | 551f7685c0f76bb8af60ec83ffb6f9672d49a4ae | [
"Apache-2.0"
] | 18 | 2018-01-03T15:09:56.000Z | 2021-07-16T02:21:54.000Z | # coding: utf-8
"""
Cisco Intersight
Cisco Intersight is a management platform delivered as a service with embedded analytics for your Cisco and 3rd party IT infrastructure. This platform offers an intelligent level of management that enables IT organizations to analyze, simplify, and automate their environments in more advanced ways than the prior generations of tools. Cisco Intersight provides an integrated and intuitive management experience for resources in the traditional data center as well as at the edge. With flexible deployment options to address complex security needs, getting started with Intersight is quick and easy. Cisco Intersight has deep integration with Cisco UCS and HyperFlex systems allowing for remote deployment, configuration, and ongoing maintenance. The model-based deployment works for a single system in a remote location or hundreds of systems in a data center and enables rapid, standardized configuration and deployment. It also streamlines maintaining those systems whether you are working with small or very large configurations. # noqa: E501
The version of the OpenAPI document: 1.0.9-1295
Contact: intersight@cisco.com
Generated by: https://openapi-generator.tech
"""
import pprint
import re # noqa: F401
import six
from intersight.configuration import Configuration
class NiaapiVersionRegexAllOf(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'apic': 'NiaapiVersionRegexPlatform',
'dcnm': 'NiaapiVersionRegexPlatform',
'version': 'str'
}
attribute_map = {'apic': 'Apic', 'dcnm': 'Dcnm', 'version': 'Version'}
def __init__(self,
apic=None,
dcnm=None,
version=None,
local_vars_configuration=None): # noqa: E501
"""NiaapiVersionRegexAllOf - a model defined in OpenAPI""" # noqa: E501
if local_vars_configuration is None:
local_vars_configuration = Configuration()
self.local_vars_configuration = local_vars_configuration
self._apic = None
self._dcnm = None
self._version = None
self.discriminator = None
if apic is not None:
self.apic = apic
if dcnm is not None:
self.dcnm = dcnm
if version is not None:
self.version = version
@property
def apic(self):
"""Gets the apic of this NiaapiVersionRegexAllOf. # noqa: E501
:return: The apic of this NiaapiVersionRegexAllOf. # noqa: E501
:rtype: NiaapiVersionRegexPlatform
"""
return self._apic
@apic.setter
def apic(self, apic):
"""Sets the apic of this NiaapiVersionRegexAllOf.
:param apic: The apic of this NiaapiVersionRegexAllOf. # noqa: E501
:type: NiaapiVersionRegexPlatform
"""
self._apic = apic
@property
def dcnm(self):
"""Gets the dcnm of this NiaapiVersionRegexAllOf. # noqa: E501
:return: The dcnm of this NiaapiVersionRegexAllOf. # noqa: E501
:rtype: NiaapiVersionRegexPlatform
"""
return self._dcnm
@dcnm.setter
def dcnm(self, dcnm):
"""Sets the dcnm of this NiaapiVersionRegexAllOf.
:param dcnm: The dcnm of this NiaapiVersionRegexAllOf. # noqa: E501
:type: NiaapiVersionRegexPlatform
"""
self._dcnm = dcnm
@property
def version(self):
"""Gets the version of this NiaapiVersionRegexAllOf. # noqa: E501
Version number for the Version Regex data, also used as identity. # noqa: E501
:return: The version of this NiaapiVersionRegexAllOf. # noqa: E501
:rtype: str
"""
return self._version
@version.setter
def version(self, version):
"""Sets the version of this NiaapiVersionRegexAllOf.
Version number for the Version Regex data, also used as identity. # noqa: E501
:param version: The version of this NiaapiVersionRegexAllOf. # noqa: E501
:type: str
"""
self._version = version
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(
map(lambda x: x.to_dict()
if hasattr(x, "to_dict") else x, value))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(
map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, NiaapiVersionRegexAllOf):
return False
return self.to_dict() == other.to_dict()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
if not isinstance(other, NiaapiVersionRegexAllOf):
return True
return self.to_dict() != other.to_dict()
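The generated model above compares instances through `to_dict()`. A minimal standalone sketch of that pattern, using a hypothetical one-field model (not the generated class, which also recurses into lists, dicts and nested models):

```python
class Model(object):
    # maps attribute name -> OpenAPI type, as in the generated classes
    openapi_types = {'version': 'str'}

    def __init__(self, version=None):
        self.version = version

    def to_dict(self):
        # simplified: plain values only, no nested to_dict() recursion
        return {attr: getattr(self, attr) for attr in self.openapi_types}

    def __eq__(self, other):
        if not isinstance(other, Model):
            return False
        return self.to_dict() == other.to_dict()

    def __ne__(self, other):
        return not self == other

print(Model("5.2(1)") == Model("5.2(1)"))  # True
print(Model("5.2(1)") == Model("4.1(2)"))  # False
```

Equality is value-based: two instances are equal exactly when their dict representations match.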
| 34.738372 | 1,052 | 0.627113 | 683 | 5,975 | 5.405564 | 0.286969 | 0.030336 | 0.094258 | 0.080444 | 0.303088 | 0.273023 | 0.270585 | 0.153846 | 0.095883 | 0.053629 | 0 | 0.013539 | 0.295397 | 5,975 | 171 | 1,053 | 34.94152 | 0.86342 | 0.450209 | 0 | 0.064103 | 0 | 0 | 0.04408 | 0.018944 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.051282 | 0 | 0.371795 | 0.025641 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4f20508bec1fb3b3210c9cb30a6481120876c56 | 2,158 | py | Python | ROS/fprime_ws/src/genfprime/src/genfprime/generate_modmk.py | genemerewether/fprime | fcdd071b5ddffe54ade098ca5d451903daba9eed | [
"Apache-2.0"
] | 5 | 2019-10-22T03:41:02.000Z | 2022-01-16T12:48:31.000Z | ROS/fprime_ws/src/genfprime/src/genfprime/generate_modmk.py | genemerewether/fprime | fcdd071b5ddffe54ade098ca5d451903daba9eed | [
"Apache-2.0"
] | 27 | 2019-02-07T17:58:58.000Z | 2019-08-13T00:46:24.000Z | ROS/fprime_ws/src/genfprime/src/genfprime/generate_modmk.py | genemerewether/fprime | fcdd071b5ddffe54ade098ca5d451903daba9eed | [
"Apache-2.0"
] | 3 | 2019-01-01T18:44:37.000Z | 2019-08-01T01:19:39.000Z | #
# Copyright 2004-2016, by the California Institute of Technology.
# ALL RIGHTS RESERVED. United States Government Sponsorship
# acknowledged. Any commercial use must be negotiated with the Office
# of Technology Transfer at the California Institute of Technology.
#
# This software may be subject to U.S. export control laws and
# regulations. By accepting this document, the user agrees to comply
# with all U.S. export laws and regulations. User has the
# responsibility to obtain export licenses, or other export authority
# as may be required before exporting such information to foreign
# countries or providing access to foreign persons.
#
from __future__ import print_function
import os
from genmsg import MsgGenerationException
#from . name import *
## :param str outdir: Full path to output directory
## :returns int: status. 0 if successful
def write_modmk(outdir): #, msg_types, srv_types):
if not os.path.isdir(outdir):
#TODO: warn?
return 0
xml_in_dir = set([f for f in os.listdir(outdir)
if f.endswith('.xml')])
_write_modmk(outdir, sorted(xml_in_dir))
# TODO(mereweth) if we want to independently specify the generated XML files
# generated_xml = [_msg_serializable_xml_name(f) for f in sorted(msg_types)]
# generated_xml.extend([_port_xml_name(f) for f in sorted(msg_types)]
# write_msg_modmk(outdir, generated_xml)
# generated_xml = [_srv_serializable_xml_name(f) for f in sorted(srv_types)]
# generated_xml.extend([_port_xml_name(f) for f in sorted(srv_types)]
# write_msg_modmk(outdir, generated_xml)
return 0
def _write_modmk(outdir, generated_xml):
if not os.path.exists(outdir):
os.makedirs(outdir)
elif not os.path.isdir(outdir):
raise MsgGenerationException("file preventing the creation of Fprime directory: %s" % outdir)
p = os.path.join(outdir, 'mod.mk')
with open(p, 'w') as f:
f.write('SRC = \\\n')
if len(generated_xml) != 0:
for xml in generated_xml[:-1]:
f.write('%s \\\n'%xml)
f.write('%s\n'%generated_xml[-1])
return 0
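The `mod.mk` layout `_write_modmk` produces can be sketched standalone (re-implemented here so it runs without genmsg; note the continuation backslash on every entry except the last, and the hypothetical XML file names):

```python
import os
import tempfile

def write_modmk_sketch(outdir, generated_xml):
    # mirrors _write_modmk: a single SRC variable listing the XML files
    path = os.path.join(outdir, 'mod.mk')
    with open(path, 'w') as f:
        f.write('SRC = \\\n')
        if len(generated_xml) != 0:
            for xml in generated_xml[:-1]:
                f.write('%s \\\n' % xml)
            f.write('%s\n' % generated_xml[-1])
    return path

with tempfile.TemporaryDirectory() as d:
    p = write_modmk_sketch(d, ['ImuSerializableAi.xml', 'ImuPortAi.xml'])
    content = open(p).read()
print(content)
# SRC = \
# ImuSerializableAi.xml \
# ImuPortAi.xml
```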
| 37.206897 | 96 | 0.698332 | 315 | 2,158 | 4.628571 | 0.419048 | 0.090535 | 0.017147 | 0.024005 | 0.240055 | 0.165981 | 0.165981 | 0.123457 | 0.106996 | 0.064472 | 0 | 0.008793 | 0.209453 | 2,158 | 57 | 97 | 37.859649 | 0.845838 | 0.580167 | 0 | 0.130435 | 0 | 0 | 0.09589 | 0 | 0 | 0 | 0 | 0.017544 | 0 | 1 | 0.086957 | false | 0 | 0.130435 | 0 | 0.347826 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d4f523ec6d8e4a47a69a4a400a7f08b9647af175 | 1,154 | py | Python | src/cut_link/utils.py | true7/srt | d5accd411e73ade4ed40a41759e95cb20fbda98d | [
"MIT"
] | null | null | null | src/cut_link/utils.py | true7/srt | d5accd411e73ade4ed40a41759e95cb20fbda98d | [
"MIT"
] | null | null | null | src/cut_link/utils.py | true7/srt | d5accd411e73ade4ed40a41759e95cb20fbda98d | [
"MIT"
] | null | null | null | import string
import random
import json
from calendar import month_name
from django.conf import settings
SHORTLINK_MIN = getattr(settings, "SHORTLINK_MIN", 6)
def code_generator(size=SHORTLINK_MIN):
chars = string.ascii_letters + string.digits
return ''.join(random.choice(chars) for _ in range(size))
def create_shortlink(instance):
new_link = code_generator()
class_ = instance.__class__
query_set = class_.objects.filter(shortlink=new_link)
if query_set.exists():
return create_shortlink(instance)
return new_link
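The retry-until-unique idea in `create_shortlink` can be shown without the ORM; here a plain set stands in for the `shortlink` column lookup:

```python
import random
import string

SHORTLINK_MIN = 6

def code_generator(size=SHORTLINK_MIN):
    chars = string.ascii_letters + string.digits
    return ''.join(random.choice(chars) for _ in range(size))

existing = {'abc123'}  # stand-in for links already stored in the database

def create_shortlink_sketch():
    new_link = code_generator()
    if new_link in existing:   # plays the role of query_set.exists()
        return create_shortlink_sketch()
    return new_link

link = create_shortlink_sketch()
print(len(link))  # 6
```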
def json_data_func(instance):
''' Return JSON-formatted data, ready to pass into AmCharts.
Each entry contains two items: the name of the month and the
count of distinct links that were shortened on the website.
'''
class_ = instance.__class__
# FIXME: every following year the counts will be added on top of the previous year's results
result = []
for month in range(1, len(month_name)):
count_use = class_.objects.filter(pub_date__month=month).count()
data = dict(month=month_name[month], count=count_use)
result.append(data)
json_data = json.dumps(result)
return json_data
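The payload shape handed to AmCharts, built here with hypothetical per-month counts instead of the `pub_date__month` query:

```python
import json
from calendar import month_name

# hypothetical counts, one per month (month_name[0] is the empty string,
# so months run from index 1 to 12)
counts = {month: 0 for month in range(1, len(month_name))}
counts[1], counts[2] = 3, 5

result = [dict(month=month_name[m], count=counts[m])
          for m in range(1, len(month_name))]
json_data = json.dumps(result)
print(json_data[:40])
```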
| 27.47619 | 72 | 0.710572 | 159 | 1,154 | 4.918239 | 0.509434 | 0.034527 | 0.051151 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003272 | 0.205373 | 1,154 | 41 | 73 | 28.146341 | 0.849509 | 0.189775 | 0 | 0.08 | 0 | 0 | 0.014396 | 0 | 0 | 0 | 0 | 0.02439 | 0 | 1 | 0.12 | false | 0 | 0.2 | 0 | 0.48 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
be01c82117aa2911b241e39136b462d24502c315 | 793 | py | Python | dash/graphs.py | fuzzylabs/wearable-my-foot | 5e7d818fc51a3d3babbe1c0ec49450b1a1f030c6 | [
"Apache-2.0"
] | 5 | 2020-09-04T13:49:41.000Z | 2021-07-30T02:33:49.000Z | dash/graphs.py | archena/wearable-my-foot | 5e7d818fc51a3d3babbe1c0ec49450b1a1f030c6 | [
"Apache-2.0"
] | 2 | 2020-09-24T07:55:43.000Z | 2020-09-24T09:30:19.000Z | dash/graphs.py | archena/wearable-my-foot | 5e7d818fc51a3d3babbe1c0ec49450b1a1f030c6 | [
"Apache-2.0"
] | 1 | 2021-03-04T03:18:37.000Z | 2021-03-04T03:18:37.000Z | import plotly.graph_objs as go
class GraphsHelper:
template = "plotly_dark"
'''
Generate a plot for a timeseries
'''
def generate_timeseries_plot(self, dataframe):
pressure_plots = []
for sensor in ["p1", "p2", "p3"]:
series = dataframe[sensor]
scatter = go.Scatter(x = dataframe.index,
y = series,
name = f"Sensor {sensor}",
opacity = 0.4)
pressure_plots.append(scatter)
pressure_figure = go.Figure(
data = pressure_plots,
layout = go.Layout(
title = "Pressure timeseries",
template = self.template
)
)
return pressure_figure
| 29.37037 | 59 | 0.493064 | 72 | 793 | 5.305556 | 0.569444 | 0.102094 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010893 | 0.421185 | 793 | 26 | 60 | 30.5 | 0.821351 | 0 | 0 | 0 | 1 | 0 | 0.068456 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.05 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
be0243ad78899348119ce102fbea0418e12871e2 | 5,379 | py | Python | telethon/tl/functions/stickers.py | polisitni1/DogeClickBot | ac57eaeefca2c6ab9e48458f9f928a6a421a162e | [
"MIT"
] | null | null | null | telethon/tl/functions/stickers.py | polisitni1/DogeClickBot | ac57eaeefca2c6ab9e48458f9f928a6a421a162e | [
"MIT"
] | null | null | null | telethon/tl/functions/stickers.py | polisitni1/DogeClickBot | ac57eaeefca2c6ab9e48458f9f928a6a421a162e | [
"MIT"
] | null | null | null | """File generated by TLObjects' generator. All changes will be ERASED"""
from ...tl.tlobject import TLRequest
from typing import Optional, List, Union, TYPE_CHECKING
import os
import struct
if TYPE_CHECKING:
from ...tl.types import TypeInputStickerSet, TypeInputUser, TypeInputStickerSetItem, TypeInputDocument
class AddStickerToSetRequest(TLRequest):
CONSTRUCTOR_ID = 0x8653febe
SUBCLASS_OF_ID = 0x9b704a5a
def __init__(self, stickerset, sticker):
"""
:param TypeInputStickerSet stickerset:
:param TypeInputStickerSetItem sticker:
:returns messages.StickerSet: Instance of StickerSet.
"""
self.stickerset = stickerset # type: TypeInputStickerSet
self.sticker = sticker # type: TypeInputStickerSetItem
def to_dict(self):
return {
'_': 'AddStickerToSetRequest',
'stickerset': None if self.stickerset is None else self.stickerset.to_dict(),
'sticker': None if self.sticker is None else self.sticker.to_dict()
}
def __bytes__(self):
return b''.join((
b'\xbe\xfeS\x86',
bytes(self.stickerset),
bytes(self.sticker),
))
@classmethod
def from_reader(cls, reader):
_stickerset = reader.tgread_object()
_sticker = reader.tgread_object()
return cls(stickerset=_stickerset, sticker=_sticker)
class ChangeStickerPositionRequest(TLRequest):
CONSTRUCTOR_ID = 0xffb6d4ca
SUBCLASS_OF_ID = 0x9b704a5a
def __init__(self, sticker, position):
"""
:param TypeInputDocument sticker:
:param int position:
:returns messages.StickerSet: Instance of StickerSet.
"""
self.sticker = sticker # type: TypeInputDocument
self.position = position # type: int
def to_dict(self):
return {
'_': 'ChangeStickerPositionRequest',
'sticker': None if self.sticker is None else self.sticker.to_dict(),
'position': self.position
}
def __bytes__(self):
return b''.join((
b'\xca\xd4\xb6\xff',
bytes(self.sticker),
struct.pack('<i', self.position),
))
@classmethod
def from_reader(cls, reader):
_sticker = reader.tgread_object()
_position = reader.read_int()
return cls(sticker=_sticker, position=_position)
class CreateStickerSetRequest(TLRequest):
CONSTRUCTOR_ID = 0x9bd86e6a
SUBCLASS_OF_ID = 0x9b704a5a
def __init__(self, user_id, title, short_name, stickers, masks=None):
"""
:param TypeInputUser user_id:
:param str title:
:param str short_name:
:param List[TypeInputStickerSetItem] stickers:
:param Optional[bool] masks:
:returns messages.StickerSet: Instance of StickerSet.
"""
self.user_id = user_id # type: TypeInputUser
self.title = title # type: str
self.short_name = short_name # type: str
self.stickers = stickers # type: List[TypeInputStickerSetItem]
self.masks = masks # type: Optional[bool]
async def resolve(self, client, utils):
self.user_id = utils.get_input_user(await client.get_input_entity(self.user_id))
def to_dict(self):
return {
'_': 'CreateStickerSetRequest',
'user_id': None if self.user_id is None else self.user_id.to_dict(),
'title': self.title,
'short_name': self.short_name,
'stickers': [] if self.stickers is None else [None if x is None else x.to_dict() for x in self.stickers],
'masks': self.masks
}
def __bytes__(self):
return b''.join((
b'jn\xd8\x9b',
struct.pack('<I', (0 if self.masks is None or self.masks is False else 1)),
bytes(self.user_id),
self.serialize_bytes(self.title),
self.serialize_bytes(self.short_name),
b'\x15\xc4\xb5\x1c',struct.pack('<i', len(self.stickers)),b''.join(bytes(x) for x in self.stickers),
))
@classmethod
def from_reader(cls, reader):
flags = reader.read_int()
_masks = bool(flags & 1)
_user_id = reader.tgread_object()
_title = reader.tgread_string()
_short_name = reader.tgread_string()
reader.read_int()
_stickers = []
for _ in range(reader.read_int()):
_x = reader.tgread_object()
_stickers.append(_x)
return cls(user_id=_user_id, title=_title, short_name=_short_name, stickers=_stickers, masks=_masks)
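Two of the constants used in `__bytes__` above can be checked in isolation: the flags word packs `masks` into bit 0 of a little-endian uint32, and the magic prefix `b'\x15\xc4\xb5\x1c'` before the stickers list is simply the little-endian encoding of the TL vector constructor id `0x1cb5c415`:

```python
import struct

# bit 0 of the little-endian uint32 flags word carries `masks`
masks = True
flags = struct.pack('<I', 0 if masks is None or masks is False else 1)
print(flags)  # b'\x01\x00\x00\x00'

# the prefix before the serialized stickers list is the TL vector id
vector_id = struct.pack('<I', 0x1cb5c415)
print(vector_id == b'\x15\xc4\xb5\x1c')  # True
```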
class RemoveStickerFromSetRequest(TLRequest):
CONSTRUCTOR_ID = 0xf7760f51
SUBCLASS_OF_ID = 0x9b704a5a
def __init__(self, sticker):
"""
:param TypeInputDocument sticker:
:returns messages.StickerSet: Instance of StickerSet.
"""
self.sticker = sticker # type: TypeInputDocument
def to_dict(self):
return {
'_': 'RemoveStickerFromSetRequest',
'sticker': None if self.sticker is None else self.sticker.to_dict()
}
def __bytes__(self):
return b''.join((
b'Q\x0fv\xf7',
bytes(self.sticker),
))
@classmethod
def from_reader(cls, reader):
_sticker = reader.tgread_object()
return cls(sticker=_sticker)
| 31.641176 | 117 | 0.622421 | 584 | 5,379 | 5.511986 | 0.200342 | 0.047841 | 0.021746 | 0.021746 | 0.324324 | 0.289531 | 0.265921 | 0.215284 | 0.170861 | 0.150668 | 0 | 0.015393 | 0.27533 | 5,379 | 169 | 118 | 31.828402 | 0.810416 | 0.151143 | 0 | 0.380531 | 1 | 0 | 0.056875 | 0.022841 | 0 | 0 | 0.018273 | 0 | 0 | 1 | 0.141593 | false | 0 | 0.044248 | 0.070796 | 0.39823 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
be05301485051b024d0504eecb5189daad437a58 | 3,242 | py | Python | 600/unit-1/recursion/problem-set/mit-solutions/ps2_hangman_sol1.py | marioluan/mit-opencourseware-cs | 5de013f8e321fed2ff3b7a13e8929a44805db78b | [
"MIT"
] | null | null | null | 600/unit-1/recursion/problem-set/mit-solutions/ps2_hangman_sol1.py | marioluan/mit-opencourseware-cs | 5de013f8e321fed2ff3b7a13e8929a44805db78b | [
"MIT"
] | null | null | null | 600/unit-1/recursion/problem-set/mit-solutions/ps2_hangman_sol1.py | marioluan/mit-opencourseware-cs | 5de013f8e321fed2ff3b7a13e8929a44805db78b | [
"MIT"
] | 1 | 2020-05-19T13:29:18.000Z | 2020-05-19T13:29:18.000Z | # 6.00 Problem Set 2
#
# Hangman
# Name : Solutions
# Collaborators : <your collaborators>
# Time spent : <total time>
# -----------------------------------
# Helper code
# You don't need to understand this helper code,
# but you will have to know how to use the functions
import random
import string
WORDLIST_FILENAME = "words.txt"
def load_words():
"""
Returns a list of valid words. Words are strings of lowercase letters.
Depending on the size of the word list, this function may
take a while to finish.
"""
print "Loading word list from file..."
# inFile: file
inFile = open(WORDLIST_FILENAME, 'r', 0)
# line: string
line = inFile.readline()
# wordlist: list of strings
wordlist = string.split(line)
print " ", len(wordlist), "words loaded."
return wordlist
def choose_word(wordlist):
"""
wordlist (list): list of words (strings)
Returns a word from wordlist at random
"""
return random.choice(wordlist)
# end of helper code
# -----------------------------------
# load the list of words into the wordlist variable
# so that it can be accessed from anywhere in the program
wordlist = load_words()
def partial_word(secret_word, guessed_letters):
"""
Return the secret_word in user-visible format, with underscores used
to replace characters that have not yet been guessed.
"""
result = ''
for letter in secret_word:
if letter in guessed_letters:
result = result + letter
else:
result = result + '_'
return result
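`partial_word` is pure string logic, so it is easy to check in isolation (same body as above, repeated so the snippet is self-contained):

```python
def partial_word(secret_word, guessed_letters):
    result = ''
    for letter in secret_word:
        if letter in guessed_letters:
            result = result + letter
        else:
            result = result + '_'
    return result

print(partial_word('hangman', 'an'))  # _an__an
print(partial_word('hangman', ''))    # _______
```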
def hangman():
"""
Runs the hangman game.
"""
print 'Welcome to the game, Hangman!'
secret_word = choose_word(wordlist)
print 'I am thinking of a word that is ' + str(len(secret_word)) + ' letters long.'
num_guesses = 8
word_guessed = False
guessed_letters = ''
available_letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i',
'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r',
's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
# Letter-guessing loop. Ask the user to guess a letter and respond to the
# user based on whether the word has yet been correctly guessed.
while num_guesses > 0 and not word_guessed:
print '-------------'
print 'You have ' + str(num_guesses) + ' guesses left.'
print 'Available letters: ' + ''.join(available_letters)
guess = raw_input('Please guess a letter:')
if guess not in available_letters:
print 'Oops! You\'ve already guessed that letter: ' + partial_word(secret_word, guessed_letters)
elif guess not in secret_word:
num_guesses -= 1
available_letters.remove(guess)
print 'Oops! That letter is not in my word: ' + partial_word(secret_word, guessed_letters)
else:
available_letters.remove(guess)
guessed_letters += guess
print 'Good guess: ' + partial_word(secret_word, guessed_letters)
if secret_word == partial_word(secret_word, guessed_letters):
word_guessed = True
if word_guessed:
print 'Congratulations, you won!'
else:
print 'Game over.'
| 32.42 | 108 | 0.604874 | 415 | 3,242 | 4.616867 | 0.404819 | 0.057411 | 0.044363 | 0.054802 | 0.095511 | 0.095511 | 0.04071 | 0 | 0 | 0 | 0 | 0.003383 | 0.270512 | 3,242 | 99 | 109 | 32.747475 | 0.806765 | 0.189081 | 0 | 0.096154 | 0 | 0 | 0.1527 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.038462 | null | null | 0.230769 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
be077745c0ef294c19a02fb08ff66ab17f79fb99 | 898 | py | Python | day1/files_ex1.py | grenn72/pynet-ons-feb19 | 5aff7dfa6a697214dc24818819a60b46a261d0d3 | [
"Apache-2.0"
] | null | null | null | day1/files_ex1.py | grenn72/pynet-ons-feb19 | 5aff7dfa6a697214dc24818819a60b46a261d0d3 | [
"Apache-2.0"
] | null | null | null | day1/files_ex1.py | grenn72/pynet-ons-feb19 | 5aff7dfa6a697214dc24818819a60b46a261d0d3 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
from __future__ import print_function
# READ ####
f = open("my_file.txt")
print("\nLoop directly over file")
print("-" * 60)
for line in f:
print(line.strip())
print("-" * 60)
f.seek(0)
my_content = f.readlines()
print("\nUse readlines method")
print("-" * 60)
for line in my_content:
print(line.strip())
print("-" * 60)
f.seek(0)
my_content = f.read()
print("\nUse read + splitlines")
print("-" * 60)
for line in my_content.splitlines():
print(line)
print("-" * 60)
f.close()
with open("my_file.txt") as f:
print("\nUse with and loop over file")
print("-" * 60)
for line in f:
print(line.strip())
print("-" * 60)
# WRITE ####
print("\nWriting file.")
f = open("new_file.txt", "w")
f.write("whatever2\n")
f.close()
# APPEND ####
print("\nAppending file.")
with open("new_file.txt", "a") as f:
f.write("something else\n")
print()
| 18.708333 | 42 | 0.614699 | 138 | 898 | 3.905797 | 0.333333 | 0.103896 | 0.074212 | 0.103896 | 0.361781 | 0.361781 | 0.361781 | 0.269017 | 0.269017 | 0.269017 | 0 | 0.025641 | 0.174833 | 898 | 47 | 43 | 19.106383 | 0.701754 | 0.044543 | 0 | 0.472222 | 0 | 0 | 0.254459 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.027778 | 0 | 0.027778 | 0.555556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
be10e301876952317779fb802d1ea27b44f1342a | 2,188 | py | Python | ks_engine/variable_scoring.py | FilippoRanza/ks.py | 47d909fb70fec50f8d3174855bf5d0c05527bf03 | [
"MIT"
] | 2 | 2021-01-29T11:45:39.000Z | 2022-03-10T03:17:12.000Z | ks_engine/variable_scoring.py | Optimization-Algorithms/ks.py | 44890d33a744c5c4865b96f97efc1e5241b719b1 | [
"MIT"
] | 1 | 2020-05-12T16:18:34.000Z | 2020-05-12T16:18:34.000Z | ks_engine/variable_scoring.py | Optimization-Algorithms/ks.py | 44890d33a744c5c4865b96f97efc1e5241b719b1 | [
"MIT"
] | 1 | 2021-01-29T11:45:45.000Z | 2021-01-29T11:45:45.000Z | #! /usr/bin/python
from .solution import Solution
try:
import gurobipy
except ImportError:
print("Gurobi not found: error ignored to allow tests")
def variable_score_factory(sol: Solution, base_kernel: dict, config: dict):
if config.get("VARIABLE_RANKING"):
output = VariableRanking(sol, base_kernel)
else:
output = ReducedCostScoring(sol, base_kernel)
return output
class AbstactVariableScoring:
def __init__(self, solution: Solution, base_kernel: dict):
self.score = {k: 0 if base_kernel[k] else v for k, v in solution.vars.items()}
def get_value(self, var_name):
return self.score[var_name]
def success_update_score(self, curr_kernel, curr_bucket):
raise NotImplementedError
def failure_update_score(self, curr_kernel, curr_bucket):
raise NotImplementedError
class ReducedCostScoring(AbstactVariableScoring):
def success_update_score(self, curr_kernel, curr_bucket):
pass
def failure_update_score(self, curr_kernel, curr_bucket):
pass
class VariableRanking(AbstactVariableScoring):
def cb_update_score(self, name, value):
if value == 0:
self.score[name] += 0.1
else:
self.score[name] -= 0.1
def success_update_score(self, curr_kernel, curr_bucket):
for var in curr_bucket:
if curr_kernel[var]:
self.score[var] -= 15
else:
self.score[var] += 15
def failure_update_score(self, curr_kernel, curr_bucket):
for var in curr_bucket:
if curr_kernel[var]:
self.score[var] += 1
else:
self.score[var] -= 1
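The reward/penalty arithmetic of `VariableRanking` in isolation, with plain dicts standing in for the solution and kernel objects:

```python
# score starts at 0 for kernel variables and at the solution value otherwise
score = {'x1': 0, 'x2': 7}
curr_kernel = {'x1': True, 'x2': False}
curr_bucket = ['x1', 'x2']

# on success, variables in the current kernel have their score decreased
# by 15 and the rest increased by 15, mirroring success_update_score
for var in curr_bucket:
    if curr_kernel[var]:
        score[var] -= 15
    else:
        score[var] += 15

print(score)  # {'x1': -15, 'x2': 22}
```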
def callback_factory(scoring: AbstactVariableScoring):
if isinstance(scoring, VariableRanking):
output = __build_callback__(scoring)
else:
output = None
return output
def __build_callback__(scoring):
def callback(model, where):
if where == gurobipy.GRB.Callback.MIPSOL:
for var in model.getVars():
value = model.cbGetSolution(var)
scoring.cb_update_score(var.varName, value)
return callback
| 27.012346 | 86 | 0.65128 | 261 | 2,188 | 5.237548 | 0.279693 | 0.05267 | 0.076811 | 0.083394 | 0.326262 | 0.304316 | 0.304316 | 0.304316 | 0.298464 | 0.117045 | 0 | 0.007435 | 0.26234 | 2,188 | 80 | 87 | 27.35 | 0.839529 | 0.00777 | 0 | 0.375 | 0 | 0 | 0.028571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.232143 | false | 0.035714 | 0.053571 | 0.017857 | 0.410714 | 0.017857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
be18b88ab1937677b7e3d5583d09538c7f91bce2 | 2,460 | py | Python | pdf2write.py | codeunik/stylus_labs_write_pdf_importer | 25d7aa037647a86284c24527bda7b222cf95bb62 | [
"MIT"
] | null | null | null | pdf2write.py | codeunik/stylus_labs_write_pdf_importer | 25d7aa037647a86284c24527bda7b222cf95bb62 | [
"MIT"
] | null | null | null | pdf2write.py | codeunik/stylus_labs_write_pdf_importer | 25d7aa037647a86284c24527bda7b222cf95bb62 | [
"MIT"
] | null | null | null | import base64
import os
import sys
import PyPDF2
svg = '''<svg id="write-document" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<rect id="write-doc-background" width="100%" height="100%" fill="#808080"/>
<defs id="write-defs">
<script type="text/writeconfig">
<int name="docFormatVersion" value="2" />
<int name="pageColor" value="-1" />
<int name="pageNum" value="0" />
<int name="ruleColor" value="0" />
<float name="marginLeft" value="0" />
<float name="xOffset" value="-380.701752" />
<float name="xRuling" value="0" />
<float name="yOffset" value="1536.84216" />
<float name="yRuling" value="0" />
</script>
</defs>
'''
pdf_path = sys.argv[1]
pdf = PyPDF2.PdfFileReader(pdf_path, "rb")
img_width = 720
n_pages = pdf.getNumPages() + 1
page = pdf.getPage(0)
width = page.mediaBox.getWidth()
height = page.mediaBox.getHeight()
aspect_ratio = height/width
img_height = int(aspect_ratio * img_width)
os.system('mkdir -p /tmp/pdf2write')
new_page_height = 0
for page in range(n_pages):
print(f"Processing {page}/{n_pages}", end='\r')
os.system(f'pdftoppm {pdf_path} /tmp/pdf2write/tmp{page} -png -f {page} -singlefile')
with open(f'/tmp/pdf2write/tmp{page}.png', 'rb') as f:
base64_data = base64.b64encode(f.read()).decode('utf-8')
tmp_svg = f'''<svg class="write-page" color-interpolation="linearRGB" x="10" y="{new_page_height+10}" width="{img_width}px" height="{img_height}px" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<g class="write-content write-v3" width="{img_width}" height="{img_height}" xruling="0" yruling="0" marginLeft="0" papercolor="#FFFFFF" rulecolor="#00000000">
<g class="ruleline write-std-ruling write-scale-down" fill="none" stroke="none" stroke-width="1" shape-rendering="crispEdges" vector-effect="non-scaling-stroke">
<rect class="pagerect" fill="#FFFFFF" stroke="none" x="0" y="0" width="{img_width}" height="{img_height}" />
</g>
<image x="0" y="0" width="{img_width}" height="{img_height}" xlink:href="data:image/png;base64,{base64_data}"/>
</g>
</svg>'''
new_page_height += (img_height+10)
svg += tmp_svg
svg += '''</svg>'''
os.system('rm -rf /tmp/pdf2write')
with open(f'{os.path.dirname(pdf_path)}/{os.path.basename(pdf_path).split(".")[0]}.svg', 'w') as f:
f.write(svg)
os.system(f'gzip -S z {os.path.dirname(pdf_path)}/{os.path.basename(pdf_path).split(".")[0]}.svg')
| 37.846154 | 230 | 0.667073 | 379 | 2,460 | 4.240106 | 0.350923 | 0.030492 | 0.046671 | 0.029869 | 0.215308 | 0.187928 | 0.170504 | 0.170504 | 0.170504 | 0.170504 | 0 | 0.049564 | 0.114228 | 2,460 | 64 | 231 | 38.4375 | 0.68793 | 0 | 0 | 0.039216 | 0 | 0.196078 | 0.713008 | 0.231301 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.078431 | 0 | 0.078431 | 0.019608 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
be18cd8c90ebbd40ae9aadcbac8dd9bce504b9ec | 2,462 | py | Python | py_headless_daw/project/having_parameters.py | hq9000/py-headless-daw | 33e08727c25d3f00b2556adf5f25c9f7ff4d4304 | [
"MIT"
] | 22 | 2020-06-09T18:46:56.000Z | 2021-09-28T02:11:42.000Z | py_headless_daw/project/having_parameters.py | hq9000/py-headless-daw | 33e08727c25d3f00b2556adf5f25c9f7ff4d4304 | [
"MIT"
] | 19 | 2020-06-03T06:34:57.000Z | 2021-01-26T07:36:17.000Z | py_headless_daw/project/having_parameters.py | hq9000/py-headless-daw | 33e08727c25d3f00b2556adf5f25c9f7ff4d4304 | [
"MIT"
] | 1 | 2020-06-18T09:25:21.000Z | 2020-06-18T09:25:21.000Z | from typing import Dict, List, cast
from py_headless_daw.project.parameter import Parameter, ParameterValueType, ParameterRangeType
class HavingParameters:
def __init__(self):
self._parameters: Dict[str, Parameter] = {}
super().__init__()
def has_parameter(self, name: str) -> bool:
return name in self._parameters
def add_parameter(self,
name: str,
value: ParameterValueType,
param_type: str,
value_range: ParameterRangeType):
if name in self._parameters:
raise Exception('parameter named ' + name + ' already added to this object')
parameter = Parameter(name, value, param_type, value_range)
self._parameters[name] = parameter
def add_parameter_object(self, parameter: Parameter) -> None:
self._parameters[parameter.name] = parameter
def get_parameter(self, name: str) -> Parameter:
for parameter in self.parameters:
if parameter.name == name:
return parameter
list_of_names: List[str] = [p.name for p in self.parameters]
# noinspection PyTypeChecker
available_names: List[str] = cast(List[str], list_of_names)
raise Exception('parameter named ' + name + ' not found. Available: ' + ', '.join(available_names))
def get_parameter_value(self, name: str) -> ParameterValueType:
param = self.get_parameter(name)
return param.value
def get_float_parameter_value(self, name: str) -> float:
param = self.get_parameter(name)
if param.type != Parameter.TYPE_FLOAT:
raise ValueError(f"parameter {name} was expected to be float (error: f009d0ef)")
value = self.get_parameter_value(name)
cast_value = cast(float, value)
return cast_value
def get_enum_parameter_value(self, name: str) -> str:
param = self.get_parameter(name)
if param.type != Parameter.TYPE_ENUM:
raise ValueError(f"parameter {name} was expected to be enum (error: 80a1d180)")
value = self.get_parameter_value(name)
cast_value = cast(str, value)
return cast_value
def set_parameter_value(self, name: str, value: ParameterValueType):
param = self.get_parameter(name)
param.value = value
@property
def parameters(self) -> List[Parameter]:
return list(self._parameters.values())
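A sketch of the type-guard pattern used by `get_float_parameter_value`, with a minimal hypothetical stand-in for the `Parameter` class (the real one is imported from `py_headless_daw.project.parameter`):

```python
class FakeParameter:
    TYPE_FLOAT = 'float'
    TYPE_ENUM = 'enum'

    def __init__(self, name, value, param_type):
        self.name = name
        self.value = value
        self.type = param_type

def get_float_value(params, name):
    # raises when the stored parameter is not a float, as in
    # get_float_parameter_value above
    param = params[name]
    if param.type != FakeParameter.TYPE_FLOAT:
        raise ValueError(f"parameter {name} was expected to be float")
    return param.value

params = {'cutoff': FakeParameter('cutoff', 0.25, FakeParameter.TYPE_FLOAT),
          'mode': FakeParameter('mode', 'lowpass', FakeParameter.TYPE_ENUM)}
print(get_float_value(params, 'cutoff'))  # 0.25
```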
| 35.681159 | 107 | 0.644598 | 283 | 2,462 | 5.413428 | 0.229682 | 0.076371 | 0.050261 | 0.057441 | 0.399478 | 0.269582 | 0.177546 | 0.177546 | 0.177546 | 0.063969 | 0 | 0.005513 | 0.263201 | 2,462 | 68 | 108 | 36.205882 | 0.83903 | 0.010561 | 0 | 0.163265 | 0 | 0 | 0.083402 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.204082 | false | 0 | 0.040816 | 0.040816 | 0.387755 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
be21dcede1ec1af84c0ccb9e8297bd042d23271a | 1,712 | py | Python | CondTools/BeamSpot/test/BeamSpotRcdPrinter_cfg.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 13 | 2015-11-30T15:49:45.000Z | 2022-02-08T16:11:30.000Z | CondTools/BeamSpot/test/BeamSpotRcdPrinter_cfg.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 640 | 2015-02-11T18:55:47.000Z | 2022-03-31T14:12:23.000Z | CondTools/BeamSpot/test/BeamSpotRcdPrinter_cfg.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 51 | 2015-08-11T21:01:40.000Z | 2022-03-30T07:31:34.000Z | import FWCore.ParameterSet.Config as cms
import os
process = cms.Process("summary")
process.MessageLogger = cms.Service( "MessageLogger",
debugModules = cms.untracked.vstring( "*" ),
cout = cms.untracked.PSet( threshold = cms.untracked.string( "DEBUG" ) ),
destinations = cms.untracked.vstring( "cout" )
)
process.maxEvents = cms.untracked.PSet(
input = cms.untracked.int32(1)
)
process.source = cms.Source("EmptySource",
numberEventsInRun = cms.untracked.uint32(1),
firstRun = cms.untracked.uint32(1)
)
process.load("CondCore.CondDB.CondDB_cfi")
process.load("CondTools.BeamSpot.BeamSpotRcdPrinter_cfi")
### 2018 Prompt
process.BeamSpotRcdPrinter.tagName = "BeamSpotObjects_PCL_byLumi_v0_prompt"
process.BeamSpotRcdPrinter.startIOV = 1350646955507767
process.BeamSpotRcdPrinter.endIOV = 1406876667347162
process.BeamSpotRcdPrinter.output = "summary2018_Prompt.txt"
### 2017 ReReco
#process.BeamSpotRcdPrinter.tagName = "BeamSpotObjects_LumiBased_v4_offline"
#process.BeamSpotRcdPrinter.startIOV = 1275820035276801
#process.BeamSpotRcdPrinter.endIOV = 1316235677532161
### 2018 ABC ReReco
#process.BeamSpotRcdPrinter.tagName = "BeamSpotObjects_LumiBased_v4_offline"
#process.BeamSpotRcdPrinter.startIOV = 1354018504835073
#process.BeamSpotRcdPrinter.endIOV = 1374668707594734
### 2018D Prompt
#process.BeamSpotRcdPrinter.tagName = "BeamSpotObjects_PCL_byLumi_v0_prompt"
#process.BeamSpotRcdPrinter.startIOV = 1377280047710242
#process.BeamSpotRcdPrinter.endIOV = 1406876667347162
process.p = cms.Path(process.BeamSpotRcdPrinter)
| 38.044444 | 110 | 0.733645 | 155 | 1,712 | 7.993548 | 0.412903 | 0.282486 | 0.100081 | 0.151735 | 0.421308 | 0.33414 | 0.33414 | 0.33414 | 0.33414 | 0.33414 | 0 | 0.113221 | 0.169393 | 1,712 | 44 | 111 | 38.909091 | 0.758087 | 0.352804 | 0 | 0 | 0 | 0 | 0.152855 | 0.115101 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
be237e880ccb11dff8fac9488a75005cce1dd897 | 381 | py | Python | django/authentication/api/urls.py | NAVANEETHA-BS/Django-Reactjs-Redux-Register-login-logout-Homepage--Project | f29ed189b988a2d46d76b3c58cf77d1ed58ca64d | [
"MIT"
] | 2 | 2021-05-13T18:02:00.000Z | 2022-03-30T19:53:38.000Z | django/authentication/api/urls.py | NAVANEETHA-BS/Django-Reactjs-Redux-Register-login-logout-Homepage--Project | f29ed189b988a2d46d76b3c58cf77d1ed58ca64d | [
"MIT"
] | null | null | null | django/authentication/api/urls.py | NAVANEETHA-BS/Django-Reactjs-Redux-Register-login-logout-Homepage--Project | f29ed189b988a2d46d76b3c58cf77d1ed58ca64d | [
"MIT"
] | null | null | null | from django.urls import path
from rest_framework_simplejwt.views import (
TokenObtainPairView,
TokenRefreshView,
TokenVerifyView
)
urlpatterns = [
path('obtain/', TokenObtainPairView.as_view(), name='token_obtain_pair'),
path('refresh/', TokenRefreshView.as_view(), name='token_refresh'),
path('verify/', TokenVerifyView.as_view(), name='token_verify'),
]
| 29.307692 | 77 | 0.734908 | 40 | 381 | 6.775 | 0.525 | 0.066421 | 0.110701 | 0.166052 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131234 | 381 | 12 | 78 | 31.75 | 0.818731 | 0 | 0 | 0 | 0 | 0 | 0.167979 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
be23b9cced5e521037b8711e7bde05f5d17925f0 | 7,257 | py | Python | yue/core/explorer/ftpsource.py | nsetzer/YueMusicPlayer | feaf6fe5c046b1a7f6b7774d4e86a2fbb1e431cf | [
"MIT"
] | null | null | null | yue/core/explorer/ftpsource.py | nsetzer/YueMusicPlayer | feaf6fe5c046b1a7f6b7774d4e86a2fbb1e431cf | [
"MIT"
] | null | null | null | yue/core/explorer/ftpsource.py | nsetzer/YueMusicPlayer | feaf6fe5c046b1a7f6b7774d4e86a2fbb1e431cf | [
"MIT"
] | 1 | 2019-03-06T14:29:27.000Z | 2019-03-06T14:29:27.000Z |
from ftplib import FTP,error_perm, all_errors
import posixpath
from io import BytesIO,SEEK_SET
from .source import DataSource
import sys
import re
reftp = re.compile(r'(ssh|ftp)://(([^@:]+)?:?([^@]+)?@)?([^:]+)(:[0-9]+)?/(.*)')
def parseFTPurl( url ):
m = reftp.match( url )
if m:
g = m.groups()
result = {
"mode" : g[0],
"username" : g[2] or "",
"password" : g[3] or "",
"hostname" : g[4] or "",
"port" : int(g[5][1:]) if g[5] else 0,
"path" : g[6] or "/",
}
if result['port'] == 0:
            if result['mode'] == 'ssh':
result['port'] = 22
else:
result['port'] = 21 # ftp port default
return result
raise ValueError("invalid: %s"%url)
def utf8_fix(s):
return ''.join([ a if ord(a)<128 else "%02X"%ord(a) for a in s])
class FTPWriter(object):
"""docstring for FTPWriter"""
def __init__(self, ftp, path):
super(FTPWriter, self).__init__()
self.ftp = ftp
self.path = path
self.file = BytesIO()
def write(self,data):
return self.file.write(data)
def seek(self,pos,whence=SEEK_SET):
return self.file.seek(pos,whence)
def tell(self):
return self.file.tell()
def close(self):
self.file.seek(0)
text = "STOR " + utf8_fix(self.path)
self.ftp.storbinary(text, self.file)
def __enter__(self):
return self
def __exit__(self,typ,val,tb):
if typ is None:
self.close()
class FTPReader(object):
    """docstring for FTPReader"""
def __init__(self, ftp, path):
super(FTPReader, self).__init__()
self.ftp = ftp
self.path = path
self.file = BytesIO()
# open the file
text = "RETR " + utf8_fix(self.path)
self.ftp.retrbinary(text, self.file.write)
self.file.seek(0)
def read(self,n=None):
return self.file.read(n)
def seek(self,pos,whence=SEEK_SET):
return self.file.seek(pos,whence)
def tell(self):
return self.file.tell()
def close(self):
self.file.close()
def __enter__(self):
return self
def __exit__(self,typ,val,tb):
if typ is None:
self.close()
class FTPSource(DataSource):
"""
there is some sort of problem with utf-8/latin-1 and ftplib
    storbinary must accept a STRING, since it builds a cmd and adds
    the CRLF to the input argument using the plus operator.
the command fails when given unicode text (ord > 127) and also
    fails when given a byte string.
"""
# TODO: turn this into a directory generator
# which first loads the directory, then loops over
# loaded items.
# TODO: on windows we need a way to view available
# drive letters
def __init__(self, host, port, username="", password=""):
super(FTPSource, self).__init__()
self.ftp = FTP()
self.ftp.connect(host,port)
self.ftp.login(username,password)
self.hostname = "%s:%d"%(host,port)
def root(self):
return "/"
def close(self):
try:
self.ftp.quit()
except all_errors as e:
sys.stderr.write("Error Closing FTP connection\n")
sys.stderr.write("%s\n"%e)
super().close()
def fix(self, path):
return utf8_fix(path)
def join(self,*args):
return posixpath.join(*args)
def breakpath(self,path):
return [ x for x in path.replace("/","\\").split("\\") if x ]
def relpath(self,path,base):
return posixpath.relpath(path,base)
def normpath(self,path,root=None):
if root and not path.startswith("/"):
path = posixpath.join(root,path)
return posixpath.normpath( path )
def listdir(self,path):
return self.ftp.nlst(path)
def parent(self,path):
# TODO: if path is C:\\ return empty string ?
# empty string returns drives
p,_ = posixpath.split(path)
return p
def move(self,oldpath,newpath):
self.ftp.rename(oldpath,newpath)
def delete(self,path):
# todo support removing directory rmdir()
path = utf8_fix(path)
if self.exists( path ):
if self.isdir(path):
try:
self.ftp.rmd(path)
except Exception as e:
print("ftp delete error: %s"%e)
else:
try:
self.ftp.delete(path)
except Exception as e:
print("ftp delete error: %s"%e)
def open(self,path,mode):
if mode=="wb":
return FTPWriter(self.ftp,path)
elif mode=="rb":
return FTPReader(self.ftp,path)
raise NotImplementedError(mode)
def exists(self,path):
path = utf8_fix(path)
p,n=posixpath.split(path)
lst = set(self.listdir(p))
return n in lst
def isdir(self,path):
path = utf8_fix(path)
try:
return self.ftp.size(path) is None
except error_perm:
# TODO: to think about more later,
# under my use-case, I'm only asking if a path is a directory
# if I Already think it exists. Under the current FTP impl
# ftp.size() fails for various reasons unless the file exists
            # and is an accessible file. I can infer that a failure to
# determine the size means that the path is a directory,
# but this does not hold true under other use cases.
# I can't cache listdir calls, but if I could, then I could
# use that to determine if the file exists
            return True  # self.exists(path)
def mkdir(self,path):
# this is a really ugly quick and dirty solution
path = utf8_fix(path)
if not self.exists(path):
p = self.parent( path )
try:
if not self.exists(p):
self.ftp.mkd( p )
self.ftp.mkd(path)
except Exception as e:
print("ftp mkd error: %s"%e)
def split(self,path):
return posixpath.split(path)
def splitext(self,path):
return posixpath.splitext(path)
def stat(self,path):
try:
size = self.ftp.size(path)
except error_perm:
size = None
result = {
"isDir" : size is None,
"isLink": False,
"mtime" : 0,
"ctime" : 0,
"size" : size or 0,
"name" : self.split(path)[1],
"mode" : 0
}
return result
def stat_fast(self,path):
        # not fast for this file system :(
try:
size = self.ftp.size(path)
except error_perm:
size = None
result = {
"name" : self.split(path)[1],
"size" : size or 0,
"isDir" : size is None,
"isLink" : False,
}
return result
def chmod(self,path,mode):
print("chmod not implemented")
def getExportPath(self,path):
return self.hostname+path
| 27.384906 | 83 | 0.539893 | 929 | 7,257 | 4.153929 | 0.276642 | 0.038093 | 0.021767 | 0.015548 | 0.253952 | 0.235812 | 0.193314 | 0.18554 | 0.18554 | 0.18554 | 0 | 0.009011 | 0.342428 | 7,257 | 264 | 84 | 27.488636 | 0.799665 | 0.166598 | 0 | 0.38674 | 0 | 0 | 0.053309 | 0.010194 | 0 | 0 | 0 | 0.003788 | 0 | 1 | 0.209945 | false | 0.016575 | 0.033149 | 0.099448 | 0.414365 | 0.022099 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
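The `parseFTPurl` helper in the FTP source module above hinges on one regex plus default-port logic. A standalone sketch of the same parsing (with the mode compared against the string literal `'ssh'`, and the pattern written as a raw string) can be exercised without any FTP server:

```python
import re

# Same pattern as ftpsource.py, written as a raw string to avoid
# invalid-escape warnings on modern Python.
REFTP = re.compile(r'(ssh|ftp)://(([^@:]+)?:?([^@]+)?@)?([^:]+)(:[0-9]+)?/(.*)')

def parse_ftp_url(url):
    """Parse an ssh:// or ftp:// URL into its components."""
    m = REFTP.match(url)
    if not m:
        raise ValueError("invalid: %s" % url)
    g = m.groups()
    result = {
        "mode": g[0],
        "username": g[2] or "",
        "password": g[3] or "",
        "hostname": g[4] or "",
        "port": int(g[5][1:]) if g[5] else 0,
        "path": g[6] or "/",
    }
    if result["port"] == 0:
        # default ports: 22 for ssh, 21 for ftp
        result["port"] = 22 if result["mode"] == "ssh" else 21
    return result

print(parse_ftp_url("ftp://user:pw@example.com/music"))
```

The hostname, credentials, and path in the example call are placeholders, not values from the module above.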
be2a32ef4dd37c381a36c7a58f2812962caeb4d5 | 502 | py | Python | logger_application/logger.py | swatishayna/OnlineEDAAutomation | a1bfe8b1dee51a4872529a98f6e1136922329e3e | [
"MIT"
] | 1 | 2022-03-24T20:26:44.000Z | 2022-03-24T20:26:44.000Z | logger_application/logger.py | surajaiswal13/OnlineEDAAutomation | a1bfe8b1dee51a4872529a98f6e1136922329e3e | [
"MIT"
] | null | null | null | logger_application/logger.py | surajaiswal13/OnlineEDAAutomation | a1bfe8b1dee51a4872529a98f6e1136922329e3e | [
"MIT"
] | 2 | 2022-02-08T16:35:32.000Z | 2022-03-04T06:56:54.000Z | from datetime import datetime
from src.utils import uploaded_file
import os
class App_Logger:
def __init__(self):
pass
def log(self, file_object, email, log_message, log_writer_id):
self.now = datetime.now()
self.date = self.now.date()
self.current_time = self.now.strftime("%H:%M:%S")
file_object.write(
email+ "_eda_" + log_writer_id + "\t\t" +str(self.date) + "/" + str(self.current_time) + "\t\t" +email+ "\t\t" +log_message +"\n")
| 27.888889 | 143 | 0.621514 | 73 | 502 | 4.027397 | 0.465753 | 0.071429 | 0.07483 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.227092 | 502 | 17 | 144 | 29.529412 | 0.757732 | 0 | 0 | 0 | 0 | 0 | 0.055888 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.083333 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
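The `App_Logger.log` method above emits one tab-separated line per call. A minimal, dependency-free sketch of the same format (using `io.StringIO` as a stand-in for the file object; the email and message values are made up) shows what each call appends:

```python
import io
from datetime import datetime

class AppLogger:
    """Minimal stand-in for App_Logger: writes one tab-separated line per call."""
    def log(self, file_object, email, log_message, log_writer_id):
        now = datetime.now()
        date, current_time = now.date(), now.strftime("%H:%M:%S")
        file_object.write(
            email + "_eda_" + log_writer_id + "\t\t" + str(date) + "/"
            + current_time + "\t\t" + email + "\t\t" + log_message + "\n")

buf = io.StringIO()
AppLogger().log(buf, "user@example.com", "started EDA run", "42")
print(buf.getvalue())
```
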
be2cf6688bc9f36adc898b8d1394b2bd6f967ed1 | 854 | py | Python | fobi_custom/plugins/form_elements/fields/intercept/household_tenure/fobi_form_elements.py | datamade/just-spaces | cc2b7d1518e5da65a403413d39a309fa3e2ac122 | [
"MIT"
] | 6 | 2019-04-09T06:52:31.000Z | 2021-08-31T04:31:59.000Z | fobi_custom/plugins/form_elements/fields/intercept/household_tenure/fobi_form_elements.py | datamade/just-spaces | cc2b7d1518e5da65a403413d39a309fa3e2ac122 | [
"MIT"
] | 176 | 2019-01-11T21:05:50.000Z | 2021-03-16T17:04:13.000Z | fobi_custom/plugins/form_elements/fields/intercept/household_tenure/fobi_form_elements.py | datamade/just-spaces | cc2b7d1518e5da65a403413d39a309fa3e2ac122 | [
"MIT"
] | 1 | 2019-05-10T15:30:25.000Z | 2019-05-10T15:30:25.000Z | from django import forms
from fobi.base import FormFieldPlugin, form_element_plugin_registry
from .forms import HouseholdTenureForm
class HouseholdTenurePlugin(FormFieldPlugin):
"""HouseholdTenurePlugin."""
uid = "household_tenure"
name = "What year did you move into your current address?"
form = HouseholdTenureForm
    group = "Intercept"  # Group to which the plugin belongs
def get_form_field_instances(self, request=None, form_entry=None,
form_element_entries=None, **kwargs):
field_kwargs = {
'required': self.data.required,
'label': self.data.label,
'widget': forms.widgets.NumberInput(attrs={}),
}
return [(self.data.name, forms.IntegerField, field_kwargs)]
form_element_plugin_registry.register(HouseholdTenurePlugin)
| 29.448276 | 70 | 0.686183 | 90 | 854 | 6.344444 | 0.588889 | 0.057793 | 0.059545 | 0.087566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.225995 | 854 | 28 | 71 | 30.5 | 0.863843 | 0.070258 | 0 | 0 | 0 | 0 | 0.11802 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.176471 | 0 | 0.588235 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
076c3b7d76dce4361980237fd24f6e7d24b9f302 | 368 | py | Python | utils/scripts/OOOlevelGen/src/sprites/__init__.py | fullscreennl/monkeyswipe | c56192e202674dd5ab18023f6cf14cf51e95fbd0 | [
"MIT"
] | null | null | null | utils/scripts/OOOlevelGen/src/sprites/__init__.py | fullscreennl/monkeyswipe | c56192e202674dd5ab18023f6cf14cf51e95fbd0 | [
"MIT"
] | null | null | null | utils/scripts/OOOlevelGen/src/sprites/__init__.py | fullscreennl/monkeyswipe | c56192e202674dd5ab18023f6cf14cf51e95fbd0 | [
"MIT"
] | null | null | null | __all__ = ['EnemyBucketWithStar',
'Nut',
'Beam',
'Enemy',
'Friend',
'Hero',
'Launcher',
'Rotor',
'SpikeyBuddy',
'Star',
'Wizard',
'EnemyEquipedRotor',
'CyclingEnemyObject',
'Joints',
'Bomb',
'Contacts']
| 21.647059 | 33 | 0.366848 | 17 | 368 | 7.705882 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.497283 | 368 | 16 | 34 | 23 | 0.708108 | 0 | 0 | 0 | 0 | 0 | 0.347826 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
076e350bd997dc6e64e333caef566c1b62991f65 | 970 | py | Python | evaluate.py | adelmassimo/EM-Algorithm-for-MMPP | 23ae031076a464bfba5286cf6b5a1fa5e1cc66b1 | [
"MIT"
] | null | null | null | evaluate.py | adelmassimo/EM-Algorithm-for-MMPP | 23ae031076a464bfba5286cf6b5a1fa5e1cc66b1 | [
"MIT"
] | null | null | null | evaluate.py | adelmassimo/EM-Algorithm-for-MMPP | 23ae031076a464bfba5286cf6b5a1fa5e1cc66b1 | [
"MIT"
] | null | null | null | import model
import numpy as np
import datasetReader as df
import main
# Number of traces loaded T
T = 1
# Generate traces
traces_factory = df.DatasetFactory()
traces_factory.createDataset(T)
traces = traces_factory.traces
# first row padded to three columns so the 3x3 matrix string parses (assumed fix)
P0 = np.matrix("[.02 0 0;"
               "0 0 0.5;"
               "0 0 0]")
P1 = np.matrix("[0.1 0 0;"
"0 0.5 0;"
"0 0 0.9]")
M = np.matrix("[0.25 0 0;"
"0 0.23 0;"
"0 0 0.85]")
def backward_likelihood(i, trace):
N = model.N
M = len( trace )
likelihoods = np.ones((N, 1))
if i < M:
P = main.randomization(P0, model.uniformization_rate, trace[i][0])
# P = stored_p_values[i, :, :]
likelihoods = np.multiply(
P.dot( model.P1 ).dot( backward_likelihood(i+1, trace) ),
model.M[:, trace[i][1]] )
if likelihoods.sum() != 0:
likelihoods = likelihoods / likelihoods.sum()
return likelihoods | 23.095238 | 74 | 0.541237 | 137 | 970 | 3.773723 | 0.357664 | 0.065764 | 0.06383 | 0.038685 | 0.030948 | 0.030948 | 0.030948 | 0.030948 | 0 | 0 | 0 | 0.071212 | 0.319588 | 970 | 42 | 75 | 23.095238 | 0.712121 | 0.072165 | 0 | 0 | 1 | 0 | 0.083612 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.137931 | 0 | 0.206897 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
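The recursion in `backward_likelihood` above rescales the likelihood vector to sum to 1 at each step, which guards against numerical underflow in long traces. The normalization step in isolation, with made-up values:

```python
import numpy as np

def normalize_likelihoods(likelihoods):
    """Rescale a likelihood column vector to sum to 1 (underflow guard),
    leaving an all-zero vector unchanged, as in backward_likelihood above."""
    s = likelihoods.sum()
    return likelihoods / s if s != 0 else likelihoods

v = np.array([[0.2], [0.1], [0.1]])
print(normalize_likelihoods(v).ravel())  # a vector summing to 1
```
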
076ea8e320bea4958c4967806ffb3361e0b72568 | 2,400 | py | Python | Imaging/Core/Testing/Python/TestHSVToRGB.py | forestGzh/VTK | bc98327275bd5cfa95c5825f80a2755a458b6da8 | [
"BSD-3-Clause"
] | 1,755 | 2015-01-03T06:55:00.000Z | 2022-03-29T05:23:26.000Z | Imaging/Core/Testing/Python/TestHSVToRGB.py | forestGzh/VTK | bc98327275bd5cfa95c5825f80a2755a458b6da8 | [
"BSD-3-Clause"
] | 29 | 2015-04-23T20:58:30.000Z | 2022-03-02T16:16:42.000Z | Imaging/Core/Testing/Python/TestHSVToRGB.py | forestGzh/VTK | bc98327275bd5cfa95c5825f80a2755a458b6da8 | [
"BSD-3-Clause"
] | 1,044 | 2015-01-05T22:48:27.000Z | 2022-03-31T02:38:26.000Z | #!/usr/bin/env python
import vtk
from vtk.util.misc import vtkGetDataRoot
VTK_DATA_ROOT = vtkGetDataRoot()
# Use the painter to draw using colors.
# This is not a pipeline object. It will support pipeline objects.
# Please do not use this object directly.
imageCanvas = vtk.vtkImageCanvasSource2D()
imageCanvas.SetNumberOfScalarComponents(3)
imageCanvas.SetScalarTypeToUnsignedChar()
imageCanvas.SetExtent(0,320,0,320,0,0)
imageCanvas.SetDrawColor(0,0,0)
imageCanvas.FillBox(0,511,0,511)
# r, g, b
imageCanvas.SetDrawColor(255,0,0)
imageCanvas.FillBox(0,50,0,100)
imageCanvas.SetDrawColor(128,128,0)
imageCanvas.FillBox(50,100,0,100)
imageCanvas.SetDrawColor(0,255,0)
imageCanvas.FillBox(100,150,0,100)
imageCanvas.SetDrawColor(0,128,128)
imageCanvas.FillBox(150,200,0,100)
imageCanvas.SetDrawColor(0,0,255)
imageCanvas.FillBox(200,250,0,100)
imageCanvas.SetDrawColor(128,0,128)
imageCanvas.FillBox(250,300,0,100)
# intensity scale
imageCanvas.SetDrawColor(5,5,5)
imageCanvas.FillBox(0,50,110,210)
imageCanvas.SetDrawColor(55,55,55)
imageCanvas.FillBox(50,100,110,210)
imageCanvas.SetDrawColor(105,105,105)
imageCanvas.FillBox(100,150,110,210)
imageCanvas.SetDrawColor(155,155,155)
imageCanvas.FillBox(150,200,110,210)
imageCanvas.SetDrawColor(205,205,205)
imageCanvas.FillBox(200,250,110,210)
imageCanvas.SetDrawColor(255,255,255)
imageCanvas.FillBox(250,300,110,210)
# saturation scale
imageCanvas.SetDrawColor(245,0,0)
imageCanvas.FillBox(0,50,220,320)
imageCanvas.SetDrawColor(213,16,16)
imageCanvas.FillBox(50,100,220,320)
imageCanvas.SetDrawColor(181,32,32)
imageCanvas.FillBox(100,150,220,320)
imageCanvas.SetDrawColor(149,48,48)
imageCanvas.FillBox(150,200,220,320)
imageCanvas.SetDrawColor(117,64,64)
imageCanvas.FillBox(200,250,220,320)
imageCanvas.SetDrawColor(85,80,80)
imageCanvas.FillBox(250,300,220,320)
convert = vtk.vtkImageRGBToHSV()
convert.SetInputConnection(imageCanvas.GetOutputPort())
convertBack = vtk.vtkImageHSVToRGB()
convertBack.SetInputConnection(convert.GetOutputPort())
cast = vtk.vtkImageCast()
cast.SetInputConnection(convertBack.GetOutputPort())
cast.SetOutputScalarTypeToFloat()
cast.ReleaseDataFlagOff()
viewer = vtk.vtkImageViewer()
viewer.SetInputConnection(convertBack.GetOutputPort())
#viewer SetInputConnection [imageCanvas GetOutputPort]
viewer.SetColorWindow(256)
viewer.SetColorLevel(127.5)
viewer.SetSize(320,320)
viewer.Render()
# --- end of script --
| 34.285714 | 67 | 0.814583 | 325 | 2,400 | 6.009231 | 0.298462 | 0.223758 | 0.048643 | 0.069124 | 0.108039 | 0.023554 | 0 | 0 | 0 | 0 | 0 | 0.153102 | 0.052917 | 2,400 | 69 | 68 | 34.782609 | 0.706115 | 0.11625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.033898 | 0 | 0.033898 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
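The VTK test above round-trips an image through RGB→HSV→RGB. The same per-pixel invariant can be checked without VTK using the standard-library `colorsys` module, which works on values scaled to [0, 1]:

```python
import colorsys

def rgb_roundtrip(r, g, b):
    """Convert an 8-bit RGB triple to HSV and back, mirroring what the
    vtkImageRGBToHSV -> vtkImageHSVToRGB pipeline does per pixel."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)

# a few of the draw colors used in the canvas above survive the round trip
for color in [(255, 0, 0), (128, 128, 0), (245, 0, 0), (85, 80, 80)]:
    assert rgb_roundtrip(*color) == color
print("round trip ok")
```
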
076eec8de4f676b9d586492c7ab7750df189a96a | 296 | py | Python | kelas_2b/echa.py | barizraihan/belajarpython | 57df4c939600dd34a519599d6c78178bfb55063b | [
"MIT"
] | null | null | null | kelas_2b/echa.py | barizraihan/belajarpython | 57df4c939600dd34a519599d6c78178bfb55063b | [
"MIT"
] | null | null | null | kelas_2b/echa.py | barizraihan/belajarpython | 57df4c939600dd34a519599d6c78178bfb55063b | [
"MIT"
] | null | null | null | import csv
class echa:
def werehousing(self):
with open('kelas_2b/echa.csv', 'r') as csvfile:
csv_reader = csv.reader(csvfile, delimiter=',')
for row in csv_reader:
print("menampilkan data barang:", row[0], row[1], row[2], row[3], row[4])
| 32.888889 | 93 | 0.567568 | 41 | 296 | 4.02439 | 0.682927 | 0.163636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028302 | 0.283784 | 296 | 8 | 94 | 37 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.14527 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.428571 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
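The `werehousing` method above iterates over a comma-delimited CSV with `csv.reader` and prints five columns per row. The same loop can be run against an in-memory file (the rows below are hypothetical, not the contents of `kelas_2b/echa.csv`):

```python
import csv
import io

# In-memory stand-in for kelas_2b/echa.csv (hypothetical rows)
data = io.StringIO("p001,pensil,12,2000,gudang A\np002,buku,30,5000,gudang B\n")

for row in csv.reader(data, delimiter=','):
    print("menampilkan data barang:", row[0], row[1], row[2], row[3], row[4])
```
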
076f84eca9f11a3725b25d5cf7a8fa60fb6dd720 | 3,399 | py | Python | tests/test_handler_surface_distance.py | dyollb/MONAI | 9084c452c48095c82c71d4391b3684006e5a3c56 | [
"Apache-2.0"
] | 2,971 | 2019-10-16T23:53:16.000Z | 2022-03-31T20:58:24.000Z | tests/test_handler_surface_distance.py | dyollb/MONAI | 9084c452c48095c82c71d4391b3684006e5a3c56 | [
"Apache-2.0"
] | 2,851 | 2020-01-10T16:23:44.000Z | 2022-03-31T22:14:53.000Z | tests/test_handler_surface_distance.py | dyollb/MONAI | 9084c452c48095c82c71d4391b3684006e5a3c56 | [
"Apache-2.0"
] | 614 | 2020-01-14T19:18:01.000Z | 2022-03-31T14:06:14.000Z | # Copyright 2020 - 2021 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
from typing import Tuple
import numpy as np
import torch
from ignite.engine import Engine
from monai.handlers import SurfaceDistance
def create_spherical_seg_3d(
radius: float = 20.0, centre: Tuple[int, int, int] = (49, 49, 49), im_shape: Tuple[int, int, int] = (99, 99, 99)
) -> np.ndarray:
"""
Return a 3D image with a sphere inside. Voxel values will be
1 inside the sphere, and 0 elsewhere.
Args:
radius: radius of sphere (in terms of number of voxels, can be partial)
centre: location of sphere centre.
im_shape: shape of image to create
See also:
:py:meth:`~create_test_image_3d`
"""
# Create image
image = np.zeros(im_shape, dtype=np.int32)
spy, spx, spz = np.ogrid[
-centre[0] : im_shape[0] - centre[0], -centre[1] : im_shape[1] - centre[1], -centre[2] : im_shape[2] - centre[2]
]
circle = (spx * spx + spy * spy + spz * spz) <= radius * radius
image[circle] = 1
image[~circle] = 0
return image
sampler_sphere = torch.Tensor(create_spherical_seg_3d(radius=20, centre=(20, 20, 20))).unsqueeze(0).unsqueeze(0)
# test input a list of channel-first tensor
sampler_sphere_gt = [torch.Tensor(create_spherical_seg_3d(radius=20, centre=(10, 20, 20))).unsqueeze(0)]
sampler_sphere_zeros = torch.zeros_like(sampler_sphere)
TEST_SAMPLE_1 = [sampler_sphere, sampler_sphere_gt]
TEST_SAMPLE_2 = [sampler_sphere_gt, sampler_sphere_gt]
TEST_SAMPLE_3 = [sampler_sphere_zeros, sampler_sphere_gt]
TEST_SAMPLE_4 = [sampler_sphere_zeros, sampler_sphere_zeros]
class TestHandlerSurfaceDistance(unittest.TestCase):
# TODO test multi node Surface Distance
def test_compute(self):
sur_metric = SurfaceDistance(include_background=True)
def _val_func(engine, batch):
pass
engine = Engine(_val_func)
sur_metric.attach(engine, "surface_distance")
y_pred, y = TEST_SAMPLE_1
sur_metric.update([y_pred, y])
self.assertAlmostEqual(sur_metric.compute(), 4.17133, places=4)
y_pred, y = TEST_SAMPLE_2
sur_metric.update([y_pred, y])
self.assertAlmostEqual(sur_metric.compute(), 2.08566, places=4)
y_pred, y = TEST_SAMPLE_3
sur_metric.update([y_pred, y])
self.assertAlmostEqual(sur_metric.compute(), float("inf"))
y_pred, y = TEST_SAMPLE_4
sur_metric.update([y_pred, y])
self.assertAlmostEqual(sur_metric.compute(), float("inf"))
def test_shape_mismatch(self):
sur_metric = SurfaceDistance(include_background=True)
with self.assertRaises((AssertionError, ValueError)):
y_pred = TEST_SAMPLE_1[0]
y = torch.ones((1, 1, 10, 10, 10))
sur_metric.update([y_pred, y])
if __name__ == "__main__":
unittest.main()
| 35.778947 | 120 | 0.692262 | 490 | 3,399 | 4.602041 | 0.344898 | 0.06918 | 0.023947 | 0.035477 | 0.303769 | 0.22306 | 0.213747 | 0.149889 | 0.149889 | 0.109978 | 0 | 0.036229 | 0.204178 | 3,399 | 94 | 121 | 36.159574 | 0.797412 | 0.283024 | 0 | 0.176471 | 0 | 0 | 0.0126 | 0 | 0 | 0 | 0 | 0.010638 | 0.098039 | 1 | 0.078431 | false | 0.019608 | 0.117647 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
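`create_spherical_seg_3d` above builds its sphere with `np.ogrid` and a squared-distance test. A numpy-only sketch of the same construction, on a smaller (hypothetical) grid, makes the mask easy to sanity-check against the analytic sphere volume:

```python
import numpy as np

def spherical_mask(radius, centre, shape):
    """Binary mask of a sphere, mirroring create_spherical_seg_3d above."""
    spy, spx, spz = np.ogrid[
        -centre[0]:shape[0] - centre[0],
        -centre[1]:shape[1] - centre[1],
        -centre[2]:shape[2] - centre[2],
    ]
    return ((spx * spx + spy * spy + spz * spz) <= radius * radius).astype(np.int32)

m = spherical_mask(5.0, (10, 10, 10), (21, 21, 21))
print(m.sum(), "voxels inside; analytic volume is about",
      round(4 / 3 * np.pi * 5 ** 3))
```
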
0773947b769d5f943efc051b2beaf2ee562da724 | 1,231 | py | Python | AppImageBuilder/commands/file.py | gouchi/appimage-builder | 40e9851c573179e066af116fb906e9cad8099b59 | [
"MIT"
] | null | null | null | AppImageBuilder/commands/file.py | gouchi/appimage-builder | 40e9851c573179e066af116fb906e9cad8099b59 | [
"MIT"
] | null | null | null | AppImageBuilder/commands/file.py | gouchi/appimage-builder | 40e9851c573179e066af116fb906e9cad8099b59 | [
"MIT"
] | null | null | null | # Copyright 2020 Alexis Lopez Zubieta
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
import os
from .command import Command
class FileError(RuntimeError):
pass
class File(Command):
def __init__(self):
super().__init__('file')
self.log_stdout = False
self.log_command = False
def query(self, path):
self._run(['file', '-b', '--exclude', 'ascii', path])
if self.return_code != 0:
raise FileError('\n'.join(self.stderr))
return '\n'.join(self.stdout)
def is_executable_elf(self, path):
output = self.query(path)
result = ('ELF' in output) and ('executable' in output)
return result
| 31.564103 | 80 | 0.685621 | 166 | 1,231 | 5 | 0.554217 | 0.066265 | 0.031325 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005225 | 0.222583 | 1,231 | 38 | 81 | 32.394737 | 0.862069 | 0.490658 | 0 | 0 | 0 | 0 | 0.066775 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.055556 | 0.111111 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
077860d7dfef7192b10ddd84d4a9115cb45934f6 | 290 | py | Python | config.py | Pasmikh/quiz_please_bot | 2b619b359d8021be57b404525013c53403d6cde1 | [
"MIT"
] | null | null | null | config.py | Pasmikh/quiz_please_bot | 2b619b359d8021be57b404525013c53403d6cde1 | [
"MIT"
] | null | null | null | config.py | Pasmikh/quiz_please_bot | 2b619b359d8021be57b404525013c53403d6cde1 | [
"MIT"
] | null | null | null | days_of_week = ['Monday','Tuesday','Wednesday','Thursday','Friday','Saturday', 'Sunday']
operation = ''
options = ['Info', 'Check-in/Out', 'Edit games', 'Back']
admins = ['admin1_telegram_nickname', 'admin2_telegram_nickname']
avail_days = []
TOKEN = 'bot_token'
group_id = id_of_group_chat | 41.428571 | 88 | 0.713793 | 37 | 290 | 5.27027 | 0.810811 | 0.164103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007576 | 0.089655 | 290 | 7 | 89 | 41.428571 | 0.731061 | 0 | 0 | 0 | 0 | 0 | 0.47079 | 0.164948 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0779ab4524c7785b80eb2c94fee42447c65c7dbc | 8,824 | py | Python | utils.py | g4idrijs/CardiacUltrasoundPhaseEstimation | 6bd2e157240133b6e306a7ca931d3d3b96647b88 | [
"Apache-2.0"
] | 1 | 2020-11-17T16:14:06.000Z | 2020-11-17T16:14:06.000Z | utils.py | g4idrijs/CardiacUltrasoundPhaseEstimation | 6bd2e157240133b6e306a7ca931d3d3b96647b88 | [
"Apache-2.0"
] | null | null | null | utils.py | g4idrijs/CardiacUltrasoundPhaseEstimation | 6bd2e157240133b6e306a7ca931d3d3b96647b88 | [
"Apache-2.0"
] | 1 | 2020-06-28T09:19:02.000Z | 2020-06-28T09:19:02.000Z | import os, time
import numpy as np
import scipy.signal
import scipy.misc
import scipy.ndimage.filters
import matplotlib.pyplot as plt
import PIL
from PIL import ImageDraw
import angles
import cv2
import SimpleITK as sitk
def cvShowImage(imDisp, strName, strAnnotation='', textColor=(0, 0, 255),
resizeAmount=None):
if resizeAmount is not None:
imDisp = cv2.resize(imDisp.copy(), None, fx=resizeAmount,
fy=resizeAmount)
imDisp = cv2.cvtColor(imDisp, cv2.COLOR_GRAY2RGB)
if len(strAnnotation) > 0:
cv2.putText(imDisp, strAnnotation, (10, 20), cv2.FONT_HERSHEY_PLAIN,
2.0, textColor, thickness=2)
cv2.imshow(strName, imDisp)
def cvShowColorImage(imDisp, strName, strAnnotation='', textColor=(0, 0, 255),
resizeAmount=None):
if resizeAmount is not None:
imDisp = cv2.resize(imDisp.copy(), None, fx=resizeAmount,
fy=resizeAmount)
if len(strAnnotation) > 0:
cv2.putText(imDisp, strAnnotation, (10, 20), cv2.FONT_HERSHEY_PLAIN,
2.0, textColor, thickness=2)
cv2.imshow(strName, imDisp)
def mplotShowImage(imInput):
plt.imshow(imInput, cmap=plt.cm.gray)
plt.grid(False)
plt.xticks(())
plt.yticks(())
def normalizeArray(a):
return np.single(0.0 + a - a.min()) / (a.max() - a.min())
def AddTextOnImage(imInput, strText, loc=(2, 2), color=255):
imInputPIL = PIL.Image.fromarray(imInput)
d = ImageDraw.Draw(imInputPIL)
d.text(loc, strText, fill=color)
return np.asarray(imInputPIL)
def AddTextOnVideo(imVideo, strText, loc=(2, 2)):
imVideoOut = np.zeros_like(imVideo)
for i in range(imVideo.shape[2]):
imVideoOut[:, :, i] = AddTextOnImage(imVideo[:, :, i], strText, loc)
return imVideoOut
def cvShowVideo(imVideo, strWindowName, waitTime=30, resizeAmount=None):
if not isinstance(imVideo, list):
imVideo = [imVideo]
strWindowName = [strWindowName]
# find max number of frames
maxFrames = 0
for vid in range(len(imVideo)):
if imVideo[vid].shape[-1] > maxFrames:
maxFrames = imVideo[vid].shape[2]
# display video
blnLoop = True
fid = 0
while True:
for vid in range(len(imVideo)):
curVideoFid = fid % imVideo[vid].shape[2]
imCur = imVideo[vid][:, :, curVideoFid]
# resize image if requested
if resizeAmount:
imCur = scipy.misc.imresize(imCur, resizeAmount)
# show image
cvShowImage(imCur, strWindowName[vid], '%d' % (curVideoFid + 1))
# look for "esc" key
k = cv2.waitKey(waitTime) & 0xff
if blnLoop:
if k == 27:
break
elif k == ord(' '):
blnLoop = False
else:
fid = (fid + 1) % maxFrames
else:
if k == 27: # escape
break
elif k == ord(' '): # space
blnLoop = True
elif k == 81: # left arrow
fid = (fid - 1) % maxFrames
elif k == 83: # right arrow
fid = (fid + 1) % maxFrames
for vid in range(len(imVideo)):
cv2.destroyWindow(strWindowName[vid])
def normalizeArray(a, bounds=None):
if bounds is None:
return (0.0 + a - a.min()) / (a.max() - a.min())
else:
b = (0.0 + a - bounds[0]) / (bounds[1] - bounds[0])
b[b < 0] = bounds[0]
b[b > bounds[1]] = bounds[1]
return b
def loadVideoFromFile(dataFilePath, sigmaSmooth=None, resizeAmount=None):
vidseq = cv2.VideoCapture(dataFilePath)
print vidseq, vidseq.isOpened()
# print metadata
metadata = {}
numFrames = vidseq.get(cv2.CAP_PROP_FRAME_COUNT)
print '\tFRAME_COUNT = ', numFrames
metadata['FRAME_COUNT'] = numFrames
frameHeight = vidseq.get(cv2.CAP_PROP_FRAME_HEIGHT)
if frameHeight > 0:
print '\tFRAME HEIGHT = ', frameHeight
metadata['FRAME_HEIGHT'] = frameHeight
frameWidth = vidseq.get(cv2.CAP_PROP_FRAME_WIDTH)
if frameWidth > 0:
print '\tFRAME WIDTH = ', frameWidth
metadata['FRAME_WIDTH'] = frameWidth
fps = vidseq.get(cv2.CAP_PROP_FPS)
if fps > 0:
print '\tFPS = ', fps
metadata['FPS'] = fps
fmt = vidseq.get(cv2.CAP_PROP_FORMAT)
if fmt > 0:
        print '\tFORMAT = ', fmt
metadata['FORMAT'] = fmt
vmode = vidseq.get(cv2.CAP_PROP_MODE)
if vmode > 0:
        print '\tMODE = ', vmode
        metadata['MODE'] = vmode
# smooth if wanted
if sigmaSmooth:
wSmooth = 4 * sigmaSmooth + 1
print metadata
# read video frames
imInput = []
fid = 0
prevPercent = 0
print '\n'
while True:
valid_object, frame = vidseq.read()
if not valid_object:
break
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
if resizeAmount:
frame = scipy.misc.imresize(frame, resizeAmount)
if sigmaSmooth:
frame = cv2.GaussianBlur(frame, (wSmooth, wSmooth), 0)
imInput.append(frame)
# update progress
fid += 1
curPercent = np.floor(100.0 * fid / numFrames)
if curPercent > prevPercent:
prevPercent = curPercent
print '%.2d%%' % curPercent,
print '\n'
imInput = np.dstack(imInput)
vidseq.release()
return (imInput, metadata)
def writeVideoToFile(imVideo, filename, codec='DIVX', fps=30, isColor=False):
# start timer
tStart = time.time()
# write video
# fourcc = cv2.FOURCC(*list(codec)) # opencv 2.4
fourcc = cv2.VideoWriter_fourcc(*list(codec))
height, width = imVideo.shape[:2]
writer = cv2.VideoWriter(filename, fourcc, fps=fps,
frameSize=(width, height), isColor=isColor)
print writer.isOpened()
numFrames = imVideo.shape[-1]
for fid in range(numFrames):
if isColor:
writer.write(imVideo[:, :, :, fid].astype('uint8'))
else:
writer.write(imVideo[:, :, fid].astype('uint8'))
# end timer
tEnd = time.time()
print 'Writing video {} took {} seconds'.format(filename, tEnd - tStart)
# release
writer.release()
def writeVideoAsTiffStack(imVideo, strFilePrefix):
# start timer
tStart = time.time()
for fid in range(imVideo.shape[2]):
plt.imsave(strFilePrefix + '.%.3d.tif' % (fid + 1), imVideo[:, :, fid])
# end timer
tEnd = time.time()
print 'Writing video {} took {} seconds'.format(strFilePrefix,
tEnd - tStart)
def mplotShowMIP(im, axis, xlabel=None, ylabel=None, title=None):
plt.imshow(im.max(axis))
if title:
plt.title(title)
if xlabel:
plt.xlabel(xlabel)
if ylabel:
plt.ylabel(ylabel)
def convertFromRFtoBMode(imInputRF):
return np.abs(scipy.signal.hilbert(imInputRF, axis=0))
def normalizeAngles(angleList, angle_range):
return np.array(
[angles.normalize(i, angle_range[0], angle_range[1]) for i in
angleList])
def SaveFigToDisk(saveDir, fileName, saveext=('.png', '.eps'), **kwargs):
    for ext in saveext:
        plt.savefig(os.path.join(saveDir, fileName + ext), **kwargs)


def SaveImageToDisk(im, saveDir, fileName, saveext=('.png',)):
    for ext in saveext:
        plt.imsave(os.path.join(saveDir, fileName + ext), im)
def generateGatedVideoUsingSplineInterp(imInput, numOutFrames, minFrame,
                                        maxFrame, splineOrder):
    tZoom = float(numOutFrames) / (maxFrame - minFrame + 1)  # np.float was removed in NumPy 1.24
    return scipy.ndimage.interpolation.zoom(
        imInput[:, :, minFrame:maxFrame + 1], (1, 1, tZoom), order=splineOrder)
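The zoom call above resamples the selected frames along the time axis by the factor tZoom. A numpy-only sketch of the same idea with linear interpolation (the order=1 case), using a hypothetical helper name that is not part of the original file:

```python
import numpy as np

def resample_frames_linear(im_input, num_out_frames, min_frame, max_frame):
    """Linearly resample frames min_frame..max_frame (inclusive) along the
    last axis to num_out_frames frames -- the order=1 analogue of the
    spline zoom above (hypothetical helper, not from the original file)."""
    clip = im_input[:, :, min_frame:max_frame + 1].astype(float)
    n_in = clip.shape[2]
    t_in = np.arange(n_in)
    t_out = np.linspace(0, n_in - 1, num_out_frames)
    out = np.empty(clip.shape[:2] + (num_out_frames,))
    # interpolate each pixel's time series independently
    for r in range(clip.shape[0]):
        for c in range(clip.shape[1]):
            out[r, c, :] = np.interp(t_out, t_in, clip[r, c, :])
    return out

vid = np.dstack([np.full((2, 2), float(v)) for v in (0, 10, 20)])
stretched = resample_frames_linear(vid, 5, 0, 2)
print(stretched[0, 0])  # [ 0.  5. 10. 15. 20.]
```

Higher spline orders smooth the in-between frames instead of connecting them with straight segments, which is why the original exposes splineOrder as a parameter.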
def ncorr(imA, imB):
    imA = (imA - imA.mean()) / imA.std()
    imB = (imB - imB.mean()) / imB.std()
    return np.mean(imA * imB)
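ncorr is a zero-mean normalized cross-correlation, so identical images score 1 and contrast-inverted images score -1. A small self-contained check (the random test image is arbitrary):

```python
import numpy as np

def ncorr(imA, imB):
    # zero-mean, unit-variance normalisation, as in the function above
    imA = (imA - imA.mean()) / imA.std()
    imB = (imB - imB.mean()) / imB.std()
    return np.mean(imA * imB)

rng = np.random.default_rng(0)
im = rng.random((8, 8))
same = ncorr(im, im)      # identical images -> 1.0 (up to float error)
flipped = ncorr(im, -im)  # inverted contrast -> -1.0
print(same, flipped)
```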
def vis_checkerboard(im1, im2):
    im_chk = sitk.CheckerBoard(sitk.GetImageFromArray(im1),
                               sitk.GetImageFromArray(im2))
    return sitk.GetArrayFromImage(im_chk)
def fig2data(fig):
    """
    @brief Convert a Matplotlib figure to a numpy array with
    RGBA channels and return it
    @param fig a matplotlib figure
    @return a numpy 3D array (height x width x 4) of RGBA values
    """
    # draw the renderer
    fig.canvas.draw()

    # Get the ARGB buffer from the figure
    w, h = fig.canvas.get_width_height()
    buf = np.frombuffer(fig.canvas.tostring_argb(), dtype=np.uint8)  # np.fromstring is deprecated
    buf = buf.reshape(h, w, 4)  # rows are the image height, columns the width

    # canvas.tostring_argb gives the pixmap in ARGB order.
    # Roll the alpha channel to the end to get RGBA
    buf = np.roll(buf, 3, axis=2)
    return buf
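The channel reordering above relies on np.roll along the channel axis. A tiny demonstration on one hypothetical ARGB pixel:

```python
import numpy as np

# One hypothetical pixel in ARGB channel order: A=255, R=10, G=20, B=30
argb = np.array([[[255, 10, 20, 30]]], dtype=np.uint8)

# np.roll(..., 3, axis=2) maps channel i to old channel (i + 1) % 4,
# which moves alpha to the end: ARGB -> RGBA
rgba = np.roll(argb, 3, axis=2)
print(rgba[0, 0].tolist())  # [10, 20, 30, 255]
```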

# ---- File: tests/test_missing_process.py (repo: ricklupton/sphinx_probs_rdf, license: MIT) ----
import pytest
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS
from sphinx_probs_rdf.directives import PROBS
SYS = Namespace("http://example.org/system/")
@pytest.mark.sphinx(
    'probs_rdf', testroot='missing',
    confoverrides={'probs_rdf_system_prefix': str(SYS)})
def test_builder_reports_warning_for_missing_process(app, status, warning):
    app.builder.build_all()

    assert "build succeeded" not in status.getvalue()

    warnings = warning.getvalue().strip()
    assert 'WARNING: Requested child "http://example.org/system/Missing" of "http://example.org/system/ErrorMissingProcess" is not a Process' in warnings

# ---- File: tiktorch/server/session/process.py (repo: FynnBe/tiktorch, license: MIT) ----
import dataclasses
import io
import multiprocessing as _mp
import uuid
import zipfile
from concurrent.futures import Future
from multiprocessing.connection import Connection
from typing import List, Optional, Tuple
import numpy
from tiktorch import log
from tiktorch.rpc import Shutdown
from tiktorch.rpc import mp as _mp_rpc
from tiktorch.rpc.mp import MPServer
from tiktorch.server.reader import eval_model_zip
from .backend import base
from .rpc_interface import IRPCModelSession
@dataclasses.dataclass
class ModelInfo:
    # TODO: Test for model info
    name: str
    input_axes: str
    output_axes: str
    valid_shapes: List[List[Tuple[str, int]]]
    halo: List[Tuple[str, int]]
    offset: List[Tuple[str, int]]
    scale: List[Tuple[str, float]]


class ModelSessionProcess(IRPCModelSession):
    def __init__(self, model_zip: bytes, devices: List[str]) -> None:
        with zipfile.ZipFile(io.BytesIO(model_zip)) as model_file:
            self._model = eval_model_zip(model_file, devices)
        self._datasets = {}
        self._worker = base.SessionBackend(self._model)

    def forward(self, input_tensor: numpy.ndarray) -> Future:
        res = self._worker.forward(input_tensor)
        return res

    def create_dataset(self, mean, stddev):
        id_ = uuid.uuid4().hex
        self._datasets[id_] = {"mean": mean, "stddev": stddev}
        return id_

    def get_model_info(self) -> ModelInfo:
        return ModelInfo(
            self._model.name,
            self._model.input_axes,
            self._model.output_axes,
            valid_shapes=[self._model.input_shape],
            halo=self._model.halo,
            scale=self._model.scale,
            offset=self._model.offset,
        )

    def shutdown(self) -> Shutdown:
        self._worker.shutdown()
        return Shutdown()


def _run_model_session_process(
    conn: Connection, model_zip: bytes, devices: List[str], log_queue: Optional[_mp.Queue] = None
):
    try:
        # from: https://github.com/pytorch/pytorch/issues/973#issuecomment-346405667
        import resource

        rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)
        resource.setrlimit(resource.RLIMIT_NOFILE, (4096, rlimit[1]))
    except ModuleNotFoundError:
        pass  # probably running on windows

    if log_queue:
        log.configure(log_queue)

    session_proc = ModelSessionProcess(model_zip, devices)
    srv = MPServer(session_proc, conn)
    srv.listen()


def start_model_session_process(
    model_zip: bytes, devices: List[str], log_queue: Optional[_mp.Queue] = None
) -> Tuple[_mp.Process, IRPCModelSession]:
    client_conn, server_conn = _mp.Pipe()
    proc = _mp.Process(
        target=_run_model_session_process,
        name="ModelSessionProcess",
        kwargs={"conn": server_conn, "devices": devices, "log_queue": log_queue, "model_zip": model_zip},
    )
    proc.start()
    return proc, _mp_rpc.create_client(IRPCModelSession, client_conn)
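start_model_session_process wires the two ends of a duplex multiprocessing Pipe: the server end goes to the child process, the client end is wrapped in an RPC client. A minimal sketch of that channel, with both ends kept in one process and a hypothetical request payload just to show the message flow:

```python
import multiprocessing as mp

# A duplex Pipe returns two connected endpoints.
client_conn, server_conn = mp.Pipe()

client_conn.send({"method": "forward", "args": [1, 2, 3]})  # hypothetical RPC payload
request = server_conn.recv()
server_conn.send({"result": sum(request["args"])})

reply = client_conn.recv()
print(reply)  # {'result': 6}
```

In the real code the MPServer loop plays the role of the recv/send pair on the server end.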

# ---- File: typogrify/templatetags/typogrify_tags.py (repo: tylerbutler/typogrify, license: BSD-3-Clause) ----
from typogrify.filters import amp, caps, initial_quotes, smartypants, titlecase, typogrify, widont, TypogrifyError
from functools import wraps
from django.conf import settings
from django import template
from django.utils.safestring import mark_safe
from django.utils.encoding import force_unicode
register = template.Library()
def make_safe(f):
"""
A function wrapper to make typogrify play nice with django's
unicode support.
"""
@wraps(f)
def wrapper(text):
text = force_unicode(text)
f.is_safe = True
out = text
try:
out = f(text)
except TypogrifyError, e:
if settings.DEBUG:
raise e
return text
return mark_safe(out)
wrapper.is_safe = True
return wrapper
register.filter('amp', make_safe(amp))
register.filter('caps', make_safe(caps))
register.filter('initial_quotes', make_safe(initial_quotes))
register.filter('smartypants', make_safe(smartypants))
register.filter('titlecase', make_safe(titlecase))
register.filter('typogrify', make_safe(typogrify))
register.filter('widont', make_safe(widont))
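The make_safe pattern — run the filter, re-raise in debug, otherwise fall back to the untouched input — can be shown without Django. In this sketch, TypogrifyError and the shout filter are stand-ins, not the real library objects:

```python
from functools import wraps

class TypogrifyError(Exception):
    """Stand-in for the library's error class; not the real import."""

def make_safe(f):
    # Same fallback shape as the Django version above, minus
    # force_unicode/mark_safe: on the filter's own error, return the
    # input unchanged.
    @wraps(f)
    def wrapper(text):
        try:
            return f(text)
        except TypogrifyError:
            return text
    return wrapper

@make_safe
def shout(text):
    if not text:
        raise TypogrifyError("empty input")
    return text.upper()

ok = shout("hello")   # 'HELLO'
fallback = shout("")  # error swallowed, original text returned
print(ok, repr(fallback))
```

Returning the original text on failure keeps a broken filter from taking a whole template down, which is the point of wrapping every registered filter this way.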

# ---- File: narwhallet/core/kws/http/enumerations/mediatypes.py (repo: Snider/narwhallet, license: MIT) ----
from enum import Enum
class content_type(Enum):
    # https://www.iana.org/assignments/media-types/media-types.xhtml
    css = 'text/css'
    gif = 'image/gif'
    htm = 'text/html'
    html = 'text/html'
    ico = 'image/bmp'
    jpg = 'image/jpeg'
    jpeg = 'image/jpeg'
    js = 'application/javascript'
    png = 'image/png'
    txt = 'text/plain; charset=us-ascii'
    json = 'application/json'
    svg = 'image/svg+xml'
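Because the member names are file extensions, an Enum like this supports direct extension-to-MIME lookup via indexing by name. A sketch using a trimmed, hypothetical copy of the table (the helper name is not from the original file):

```python
from enum import Enum

class ContentType(Enum):
    # trimmed copy of the table above (hypothetical subset)
    css = 'text/css'
    html = 'text/html'
    json = 'application/json'

def mime_for(filename, default='application/octet-stream'):
    ext = filename.rsplit('.', 1)[-1].lower()
    try:
        return ContentType[ext].value  # Enum lookup by member name
    except KeyError:
        return default

page = mime_for('index.html')
blob = mime_for('data.bin')
print(page, blob)  # text/html application/octet-stream
```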

# ---- File: apps/delivery/migrations/0001_initial.py (repo: jimforit/lagou, license: MIT) ----
# Generated by Django 2.0.2 on 2019-03-08 13:03
from django.db import migrations, models
class Migration(migrations.Migration):
    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Delivery',
            fields=[
                ('create_time', models.DateTimeField(auto_now_add=True, verbose_name='创建时间')),
                ('update_time', models.DateTimeField(auto_now=True, verbose_name='更新时间')),
                ('is_delete', models.BooleanField(default=False, verbose_name='删除标记')),
                ('id', models.AutoField(primary_key=True, serialize=False, verbose_name='投递ID')),
                ('delivery_status', models.CharField(choices=[('DD', '待定'), ('YQ', '邀请面试'), ('WJ', '婉拒')], default='DD', max_length=2, verbose_name='投递状态')),
            ],
            options={
                'verbose_name': '面试',
                'verbose_name_plural': '面试',
            },
        ),
    ]

# ---- File: shop/migrations/0009_auto_20200310_1430.py (repo: manson800819/test, license: MIT) ----
# -*- coding: utf-8 -*-
# Generated by Django 1.11.29 on 2020-03-10 14:30
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
    dependencies = [
        ('shop', '0008_auto_20200310_1134'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='category',
            name='id',
        ),
        migrations.AlterField(
            model_name='category',
            name='name',
            field=models.CharField(db_index=True, max_length=200, primary_key=True, serialize=False),
        ),
        migrations.AlterField(
            model_name='product',
            name='type1',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='products', to='shop.Type1'),
        ),
    ]

# ---- File: G5/DerivedData/ParameterProbing/checkme.py (repo: shooking/ZoomPedalFun, license: CC0-1.0) ----
# -*- coding: ascii -*-
import sys
import json
def check(data):
    OnOffstart = data.find(b"OnOff")
    if OnOffstart != -1:
        fxName = ""
        OnOffblockSize = 0x30
        for j in range(12):
            if data[OnOffstart + j + OnOffblockSize] == 0x00:
                break
            fxName = fxName + chr(data[OnOffstart + j + OnOffblockSize])
        tD = {
            "fxname": fxName
        }

        mmax = []
        mdefault = []
        name = []
        mpedal = []
        numParameters = 0
        # print("OnOffStart at {}".format(OnOffstart))
        try:
            # this is WAY too large, let the except break the loop
            for j in range(0, 2000):
                """
                if not (data[OnOffstart + (j+1) * OnOffblockSize - 1] == 0x00
                        and data[OnOffstart + (j+1) * OnOffblockSize - 2] == 0x00):
                    # ZD2 format has a length and PRME offset. ZDL has none of this.
                    print("End of the parameters")
                    break
                if not (data[OnOffstart + (j) * OnOffblockSize + 0x18] == 0x00
                        and data[OnOffstart + (j) * OnOffblockSize + 0x19] == 0x00
                        and data[OnOffstart + (j) * OnOffblockSize + 0x1A] == 0x00
                        and data[OnOffstart + (j) * OnOffblockSize + 0x1B] == 0x00):
                    print("Empty next slot")
                    break
                """
                currName = ""
                for i in range(12):
                    if data[OnOffstart + j * OnOffblockSize + i] == 0x00:
                        break
                    currName = currName + chr(data[OnOffstart + j * OnOffblockSize + i])
                    if data[OnOffstart + j * OnOffblockSize + i] & 0x80:
                        raise Exception("Non binary char")

                if currName == "":
                    break

                name.append(currName)
                mmax.append(data[OnOffstart + j * OnOffblockSize + 12] +
                            data[OnOffstart + j * OnOffblockSize + 13] * 256)
                mdefault.append(data[OnOffstart + j * OnOffblockSize + 16] +
                                data[OnOffstart + j * OnOffblockSize + 17] * 256)
                if data[OnOffstart + j * OnOffblockSize + 0x2C]:
                    mpedal.append(True)
                else:
                    mpedal.append(False)
                # print(mmax[j])
                # print(mdefault[j])
                """
                print("[{}] {} {} {} {}".format(
                    OnOffstart + (j+1) * OnOffblockSize,
                    hex(data[OnOffstart + (j+1) * OnOffblockSize]),
                    hex(data[OnOffstart + (j+1) * OnOffblockSize + 1]),
                    hex(data[OnOffstart + (j+1) * OnOffblockSize + 2]),
                    hex(data[OnOffstart + (j+1) * OnOffblockSize + 3])))
                """
                # print("increment params")
                numParameters = numParameters + 1
        except:
            # reading past the end of data raises IndexError and ends the scan
            pass

        # print("Found {} parameters.".format(numParameters))
        tD['Parameters'] = []
        # 0 is the OnOff state
        # 1 is the name
        # so actual parameters start from index 2, but clearly there are 2 less
        for i in range(numParameters - 2):
            # print(i)
            tD['Parameters'].append({'name': name[i+2], 'mmax': mmax[i + 2], 'mdefault': mdefault[i + 2], 'pedal': mpedal[i+2]})

        # json.dump(tD, sys.stdout, indent=4)
        f = open(fxName + '.json', "w")
        json.dump(tD, f, indent=4)
        f.close()
        return fxName + '.OnOff'
# handles a zoom firmware
if __name__ == "__main__":
    if len(sys.argv) == 2:
        f = open(sys.argv[1], "rb")
        data = f.read()
        f.close()
        check(data)
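check() walks fixed 0x30-byte records: a NUL-terminated ASCII name in the first 12 bytes, then little-endian 16-bit fields for max and default. A stand-alone sketch of that record layout using struct on a constructed buffer (the helper and field offsets mirror the parser above, but the record itself is hypothetical):

```python
import struct

BLOCK = 0x30  # the 0x30-byte record size check() assumes

def parse_param(block):
    """Parse one hypothetical parameter record: NUL-terminated ASCII name in
    the first 12 bytes, little-endian uint16 max at +12 and default at +16."""
    pname = block[:12].split(b'\x00', 1)[0].decode('ascii')
    pmax = struct.unpack_from('<H', block, 12)[0]
    pdefault = struct.unpack_from('<H', block, 16)[0]
    return pname, pmax, pdefault

record = bytearray(BLOCK)
record[:4] = b'Gain'
struct.pack_into('<H', record, 12, 100)  # max
struct.pack_into('<H', record, 16, 50)   # default

parsed = parse_param(bytes(record))
print(parsed)  # ('Gain', 100, 50)
```

The `data[off + 12] + data[off + 13] * 256` arithmetic in check() is exactly the `<H` (little-endian unsigned short) decode done by hand.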

# ---- File: geoist/cattools/Smoothing.py (repo: wqqpp007/geoist, license: MIT) ----
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
import numpy as np
from . import Selection as Sel
from . import Exploration as Exp
from . import CatUtils as CU
#-----------------------------------------------------------------------------------------
def GaussWin(Dis, Sig):
    return np.exp(-(Dis**2)/(Sig**2.))
#-----------------------------------------------------------------------------------------
def SmoothMFD (Db, a, Wkt, Window=GaussWin, Par=50.,
               Delta=0.1, SphereGrid=False,
               Box=[], Buffer=[], Grid=[],
               Threshold=-100, Unwrap=False,
               ZeroRates=False):

    if Par <= 0:
        Par = np.inf

    # Catalogue selection
    DbS = Sel.AreaSelect(Db, Wkt, Owrite=0, Buffer=Buffer, Unwrap=Unwrap)
    x, y, z = Exp.GetHypocenter(DbS)

    # Creating the mesh grid
    P = CU.Polygon()
    P.Load(Wkt)

    # Unwrapping coordinates
    if Unwrap:
        x = [i if i > 0. else i+360. for i in x]
        P.Unwrap()

    if Grid:
        XY = [G for G in Grid if P.IsInside(G[0], G[1])]
    else:
        if SphereGrid:
            XY = P.SphereGrid(Delta=Delta, Unwrap=Unwrap)
        else:
            XY = P.CartGrid(Dx=Delta, Dy=Delta, Bounds=Box)

    Win = []
    for xyP in XY:
        Win.append(0)
        for xyE in zip(x, y):
            Dis = CU.WgsDistance(xyP[1], xyP[0], xyE[1], xyE[0])
            Win[-1] += Window(Dis, Par)

    # Scaling and normalising the rates
    Norm = np.sum(Win)

    A = []; X = []; Y = []
    for I, W in enumerate(Win):
        aT = -np.inf
        if Norm > 0. and W > 0.:
            aT = a + np.log10(W/Norm)
            if aT < Threshold:
                # Filter below threshold
                aT = -np.inf

        if ZeroRates:
            A.append(aT)
            X.append(XY[I][0])
            Y.append(XY[I][1])
        else:
            if aT > -np.inf:
                A.append(aT)
                X.append(XY[I][0])
                Y.append(XY[I][1])

    if Unwrap:
        # Wrap back longitudes
        X = [x if x < 180. else x-360. for x in X]
    return X, Y, A
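SmoothMFD weights every event by GaussWin of its distance to a grid node and then normalizes by the total weight before taking log10. A numpy-only sketch of that kernel weighting, on hypothetical distances:

```python
import numpy as np

def gauss_win(dis, sig):
    # same kernel as GaussWin above
    return np.exp(-(dis**2) / (sig**2.))

# Hypothetical distances (km) from one grid node to three events
dis = np.array([0.0, 25.0, 100.0])
w = gauss_win(dis, 50.0)  # Par=50 plays the role of the bandwidth Sig
norm = w.sum()            # the normalising constant applied to the rates

print(w, norm)  # weight is 1 at distance 0 and decays with distance
```

Events closer than the bandwidth dominate the node's rate; distant events contribute almost nothing, which is what makes Par <= 0 (mapped to an infinite bandwidth) a uniform-weighting special case.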

# ---- File: audio.py (repo: fernandoq/quiz-show, license: MIT) ----
import time
import subprocess
import os
print(os.uname())
if not os.uname()[0].startswith("Darw"):
    import pygame
    pygame.mixer.init()


# Plays a song
def playSong(filename):
    print("play song")
    if not os.uname()[0].startswith("Darw"):
        pygame.mixer.music.fadeout(1000)  # fadeout current music over 1 sec.
        pygame.mixer.music.load("music/" + filename)
        pygame.mixer.music.play()
    else:
        subprocess.call(["afplay", "music/" + filename])
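The os.uname()[0].startswith("Darw") check works on macOS and Linux but fails on Windows, where os.uname does not exist. A hedged, portable alternative (a sketch, not from the original file) uses platform.system():

```python
import platform

def is_macos():
    # platform.system() exists on every OS, including Windows,
    # and returns 'Darwin' on macOS -- equivalent to the startswith("Darw") test
    return platform.system() == 'Darwin'

print(is_macos())
```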