import datetime
import couchdb
import couchdb.mapping
import blackbook.database.couch.database
import blackbook.database.couch.models
import blackbook.tools.tools
import blackbook.api.basecollection
import blackbook.api.errors
from blackbook.api import APIField
from blackbook.api import APIType
from blackbook.api import API
from blackbook.lib import collection_plus_json
from flask import Blueprint
from flask import current_app
from flask import request
from flask import Response
from flask import session
__author__ = 'ievans3024'

class CouchAPI(API):
    """Abstract base class for interfacing with couch Document classes."""

    db = APIField(couchdb.Database)
    model = APIType(couchdb.mapping.Document)

    def _generate_document(self, *args, href='/', **kwargs):
        """
        Generate a document.

        Implementations should return a collection+json document object.
        """
        raise NotImplementedError()

    def _get_authenticated_user(self, user_api, session_api):
        """Return the User for the current session token, or None if unauthenticated or expired."""
        user = None
        if session.get("id"):
            sessions_by_token = session_api.model.by_token(self.db, key=session["id"])
            if sessions_by_token.rows:
                get_session = sessions_by_token.rows[0]
                if get_session.expiry > datetime.datetime.now():
                    user = user_api.model.load(self.db, get_session.user)
        return user

    def delete(self, *args, **kwargs):
        raise NotImplementedError()

    def get(self, *args, **kwargs):
        raise NotImplementedError()

    def head(self, *args, **kwargs):
        raise NotImplementedError()

    def options(self, *args, **kwargs):
        raise NotImplementedError()

    def patch(self, *args, **kwargs):
        raise NotImplementedError()

    def post(self, *args, **kwargs):
        raise NotImplementedError()

    def put(self, *args, **kwargs):
        raise NotImplementedError()

    def search(self, *args, **kwargs):
        raise NotImplementedError()

class Contact(CouchAPI):
    """
    Contact API class

    /contact/[?[after=<id>][before=<id>][q=<query>][name=<name>][surname=<surname>][email=<email>][phone=<phone_number>]]

        GET: retrieve list of contacts
            - requires authenticated user
            - if not authenticated:
                - HTTP 401 response
                - collection.items and collection.template will be empty
                - collection.error will contain 401 error code, title and message
            - if authenticated:
                - non-admin users may only view their own contacts
                - admin users may view contacts for all users
                - HTTP 200 response
                - collection.items will contain a paginated list of contacts
                - collection.links will contain a list of pagination links
                - collection.queries will contain a list of queries that can be performed
                    - q: general query/search (searches all fields)
                    - name: search by first name
                    - surname: search by last name
                    - email: search by email
                    - phone: search by phone number
                - collection.template will contain the creation template

        POST: create a new contact
            - requires authenticated user and completed creation form
            - if not authenticated:
                - unauthenticated users cannot create new contacts
                - HTTP 401 response
                - collection.items will be empty
                - collection.template will be empty
                - collection.error will contain 401 error code, title and message
            - if authenticated:
                - new contact will be associated with authenticated user
                - if form is complete:
                    - HTTP 201 response
                    - collection.items will contain a one-item list with the new contact's information
                    - collection.template will contain the creation template
                - if form is incomplete:
                    - HTTP 400 response
                    - collection.items will be empty
                    - collection.template will contain the creation template
                    - collection.error will contain 400 error code, title and message

    /contact/<id>/

        GET: retrieve information about a specific contact
            - requires authenticated user
            - if not authenticated:
                - HTTP 401 response
                - collection.items and collection.template will be empty
                - collection.error will contain 401 error code, title and message
            - if authenticated:
                - non-admin users may only view their own contacts
                - admin users may view contacts for all users
                - if (non-admin user and contact.user == user.id) or (admin user):
                    - HTTP 200 response
                    - collection.items will contain a one-item list containing the contact's information
                    - collection.links will contain a one-link list containing a link to the contact's owner User
                        - link rel=owner
                    - collection.template will contain the update template
                - if non-admin user and (contact.user != user.id or <id> does not exist):
                    - HTTP 404 response
                    - collection.items and collection.template will be empty
                    - collection.error will contain 404 error code, title and message

        PUT: update information about a specific contact
            - requires authenticated user and complete template
            - if not authenticated:
                - HTTP 401 response
                - collection.items and collection.template will be empty
                - collection.error will contain 401 error code, title and message
            - if authenticated:
                - non-admin users may only update their own contacts
                - admin users may update contacts for all users
                - if (non-admin user and contact.user == user.id) or (admin user):
                    - if template complete:
                        - HTTP 200 response
                        - collection.items will contain a one-item list containing the contact's updated information
                        - collection.links will contain a one-link list containing a link to the contact's owner User
                            - link rel=owner
                        - collection.template will contain the update template
                    - if template incomplete:
                        - HTTP 400 response
                        - collection.items will be empty
                        - collection.template will contain the update template
                        - collection.error will contain 400 error code, title and message
                - if non-admin user and (contact.user != user.id or <id> does not exist):
                    - HTTP 404 response
                    - collection.items and collection.template will be empty
                    - collection.error will contain 404 error code, title and message

        PATCH: update information about a specific contact
            - requires authenticated user and partial or complete template
            - if not authenticated:
                - HTTP 401 response
                - collection.items and collection.template will be empty
                - collection.error will contain 401 error code, title and message
            - if authenticated:
                - non-admin users may only update their own contacts
                - admin users may update contacts for all users
                - data in the submitted template that does not match the server's template will be ignored
                - if (non-admin user and contact.user == user.id) or (admin user):
                    - if template contains matching data:
                        - HTTP 200 response
                        - collection.items will contain a one-item list containing the contact's updated information
                        - collection.links will contain a one-link list containing a link to the contact's owner User
                            - link rel=owner
                        - collection.template will contain the update template
                    - if template does not contain any matching data:
                        - HTTP 400 response
                        - collection.items will be empty
                        - collection.template will contain the update template
                        - collection.error will contain 400 error code, title and message
                - if non-admin user and (contact.user != user.id or <id> does not exist):
                    - HTTP 404 response
                    - collection.items and collection.template will be empty
                    - collection.error will contain 404 error code, title and message

        POST: unsupported
            - HTTP 405 response
            - All collection fields will be empty, if possible, except error
            - collection.error will contain 405 error code, title and message

        DELETE: delete a specific contact
            - requires authenticated user
            - if not authenticated:
                - HTTP 401 response
                - collection.items and collection.template will be empty
                - collection.error will contain 401 error code, title and message
            - if authenticated:
                - non-admin users may only delete their own contacts
                - admin users may delete any contact
                - if (non-admin user and contact.user == user.id) or (admin user):
                    - HTTP 204 response
                    - No body
                - if (non-admin user and contact.user != user.id) or <id> does not exist:
                    - HTTP 404 response
                    - collection.items and collection.template will be empty
                    - collection.error will contain 404 error code, title and message

    /user/<user_id>/contacts/[?[page=<pagenum>][q=<query>][name=<name>][surname=<surname>][email=<email>][phone=<phone_number>]]

        GET: retrieve list of contacts for a particular user
            - only displays contacts a specific user has created
            - requires authenticated user
            - if not authenticated:
                - HTTP 401 response
                - collection.items and collection.template will be empty
                - collection.error will contain 401 error code, title and message
            - if authenticated:
                - non-admin users may only view their own contacts
                - admin users may view contacts for all users
                - if (non-admin user and <id> == user.id) or (admin user):
                    - HTTP 200 response
                    - collection.items will contain a paginated list of the user's contacts
                    - collection.links will contain a list of pagination links
                        - also contains a special "rel=owner" link, referring to the owning user
                    - collection.queries will contain a list of queries that can be performed
                        - q: general query/search (searches all fields)
                        - name: search by first name
                        - surname: search by last name
                        - email: search by email
                        - phone: search by phone number
                    - collection.template will contain the creation template
                - if non-admin user and (<id> != user.id or <id> does not exist):
                    - HTTP 403 response
                    - collection.items and collection.template will be empty
                    - collection.error will contain 403 error code, title and message
                - if admin user and <id> does not exist:
                    - HTTP 404 response
                    - collection.items and collection.template will be empty
                    - collection.error will contain 404 error code, title and message

        POST: create a new contact for a particular user
            - requires authenticated user and completed creation form
            - if not authenticated:
                - unauthenticated users cannot create new contacts
                - HTTP 401 response
                - collection.items will be empty
                - collection.template will be empty
                - collection.error will contain 401 error code, title and message
            - if (authenticated non-admin and <id> == user.id) or (authenticated admin user):
                - non-admins may only create contacts for themselves
                - admins may create contacts for any user
                - if form is complete:
                    - HTTP 201 response
                    - collection.items will contain a one-item list with the new contact's information
                    - collection.template will contain the creation template
                - if form is incomplete:
                    - HTTP 400 response
                    - collection.items will be empty
                    - collection.template will contain the creation template
                    - collection.error will contain 400 error code, title and message
            - if authenticated non-admin and <id> != user.id:
                - non-admins may not create contacts for other users
                - HTTP 403 response
                - collection.items and collection.template will be empty
                - collection.error will contain 403 error code, title and message
    """
    def __init__(self, db):
        super(Contact, self).__init__(db, blackbook.database.couch.models.Contact)

    def _generate_document(self, *args, href='/contact/', **kwargs):
        """Generate a Contact collection+json document representation."""
        document = blackbook.api.basecollection.ContactCollection(href=href)
        return document
    def delete(self, contact_id=None, *args, **kwargs):
        user_api = User(self.db)
        session_api = Session(self.db)
        user = self._get_authenticated_user(user_api, session_api)
        if not blackbook.tools.tools.check_angular_xsrf():
            # reject requests that fail the XSRF check
            document = self._generate_document()
            document.error = blackbook.api.errors.APIBadRequestError()
            return Response(response=str(document), status=int(document.error.code), mimetype=document.mimetype)
        if not user:
            document = self._generate_document()
            document.error = blackbook.api.errors.APIUnauthorizedError()
            return Response(response=str(document), status=int(document.error.code), mimetype=document.mimetype)
        if contact_id:
            contact = self.model.load(self.db, contact_id)
            if (not contact) or \
                    (
                        contact.user != user.id and
                        not user.has_permission(
                            self.db,
                            ".".join([self.db.name, "delete", self.model.__name__.lower()])
                        )
                    ):
                document = self._generate_document()
                document.error = blackbook.api.errors.APINotFoundError()
                return Response(response=str(document), status=int(document.error.code), mimetype=document.mimetype)
            else:
                self.db.delete(contact)
                return Response(response="", status=204)
    def get(self, contact_id=None, user_id=None):
        user_api = User(self.db)
        session_api = Session(self.db)
        user = self._get_authenticated_user(user_api, session_api)
        document = self._generate_document()
        spec_properties = self.api_spec["properties"]
        if not self._request_origin_consistent():
            # TODO: handle bad CSRF -- APIBadRequestError?
            pass
        if not user:
            document.error = blackbook.api.errors.APIUnauthorizedError()
            return Response(str(document), status=int(document.error.code), mimetype=document.mimetype)
        if contact_id:
            contact = self.model.load(self.db, contact_id)
            template_data = self.api_spec["template_data"]["update"]
            template_meta = self.api_spec["template_meta"]["update"]
            if (not contact) or \
                    (
                        contact.user != user.id and
                        not user.has_permission(
                            self.db,
                            ".".join([self.db.name, "read", self.model.__name__.lower()])
                        )
                    ):
                document.error = blackbook.api.errors.APINotFoundError()
                return Response(str(document), status=int(document.error.code), mimetype=document.mimetype)
            else:
                contacts = [contact]
        else:
            prev_viewargs = {}
            next_viewargs = {}
            _range = {}
            template_data = self.api_spec["template_data"]["create"]
            template_meta = self.api_spec["template_meta"]["create"]
            if request.args.get("end"):
                _range["endkey_docid"] = request.args.get("end")
            if request.args.get("start"):
                _range["startkey_docid"] = request.args.get("start")
            if (request.args.get("start") and not request.args.get("end")) or \
                    (request.args.get("end") and not request.args.get("start")):
                _range["limit"] = current_app.config.get("API_PAGINATION_PER_PAGE") or 10
            if user_id:
                if not user_api.model.load(self.db, id=user_id):
                    document.error = blackbook.api.errors.APINotFoundError()
                    return Response(response=str(document), status=int(document.error.code), mimetype=document.mimetype)
                if user.id == user_id or user.has_permission(
                        self.db, ".".join([self.db.name, "read", user_api.model.__name__.lower()])):
                    contacts = self.model.by_user(self.db, key=user_id, **_range)
                    viewfunc = self.model.by_user
                    if _range.get("endkey_docid"):
                        next_viewargs.update(key=user_id, startkey_docid=_range["endkey_docid"], limit=2)
                    if _range.get("startkey_docid"):
                        prev_viewargs.update(key=user_id, endkey_docid=_range["startkey_docid"], limit=2)
                else:
                    document.error = blackbook.api.errors.APINotFoundError()
                    return Response(str(document), status=int(document.error.code), mimetype=document.mimetype)
            elif user.has_permission(self.db, ".".join([self.db.name, "read", self.model.__name__.lower()])):
                contacts = self.model.view(self.db, "_all_docs", **_range)
                viewfunc = self.model.view
                if _range.get("endkey_docid"):
                    next_viewargs.update(viewname="_all_docs", startkey_docid=_range["endkey_docid"], limit=2)
                if _range.get("startkey_docid"):
                    prev_viewargs.update(viewname="_all_docs", endkey_docid=_range["startkey_docid"], limit=2)
            else:
                contacts = self.model.by_user(self.db, key=user.id, **_range)
                viewfunc = self.model.by_user
                if _range.get("endkey_docid"):
                    next_viewargs.update(key=user.id, startkey_docid=_range["endkey_docid"], limit=2)
                if _range.get("startkey_docid"):
                    prev_viewargs.update(key=user.id, endkey_docid=_range["startkey_docid"], limit=2)
            if _range.get("startkey_docid"):
                # get prev page link, if applicable
                prev_contacts_endkey = viewfunc(self.db, **prev_viewargs)
                if prev_contacts_endkey.rows:
                    key = prev_contacts_endkey.rows[0].id
                    # request.url_rule is a Rule object, not a string; use the concrete request path
                    url = request.path + "?end={docid}".format(docid=key)
                    document.links.append(collection_plus_json.Link(href=url, rel="prev", name="Previous", prompt="<"))
            if _range.get("endkey_docid"):
                # get next page link, if applicable
                next_contacts_startkey = viewfunc(self.db, **next_viewargs)
                if next_contacts_startkey.rows:
                    key = next_contacts_startkey.rows[1].id
                    url = request.path + "?start={docid}".format(docid=key)
                    document.links.append(collection_plus_json.Link(href=url, rel="next", name="Next", prompt=">"))
        for contact in contacts:
            document.items.append(
                collection_plus_json.Item(
                    href="{endpoint}{id}/".format(endpoint=self.api_spec["endpoint"], id=contact.id),
                    data=[
                        prop["data"] for prop in spec_properties
                        # owning users have <dbname>.read.<modelname>.<propertyname>
                        # admin users have <dbname>.read.<modelname>
                        if prop["permissions"]["public"] or user.has_permission(self.db, *prop["permissions"]["read"])
                    ],
                    links=[
                        collection_plus_json.Link(
                            href="{endpoint}{id}/".format(endpoint=user_api.api_spec["endpoint"], id=user.id),
                            rel="owner",
                            prompt="Created by {name}".format(name=user.name)
                        )
                    ]
                )
            )
        # authenticated users have permission <dbname>.update.<modelname>
        # therefore they can see the update template
        if template_meta["permissions"]["public"] or \
                user.has_permission(self.db, *template_meta["permissions"]["read"]):
            document.template = collection_plus_json.Template(data=template_data)
        return Response(response=str(document), mimetype=document.mimetype)
    def head(self, *args, **kwargs):
        pass

    def options(self, *args, **kwargs):
        pass

    def patch(self, *args, **kwargs):
        pass

    def post(self, *args, **kwargs):
        pass

    def put(self, *args, **kwargs):
        pass

    def search(self, *args, **kwargs):
        pass

class Session(CouchAPI):
    """
    Session API class

    /session/

        GET: get authentication information
            - if (authenticated and authentication has not expired) or not authenticated:
                - HTTP 200 response
                - if authenticated:
                    - collection.items will be a one-item list containing the user's session info
                    - collection.template will be empty
                - if not authenticated:
                    - collection.items will be empty
                    - collection.template will contain creation template (login form)
            - if authenticated and authentication has expired:
                - HTTP 419 response
                - collection.items will be empty
                - collection.template will contain creation template (login form)
                - collection.error will contain 419 error code, title and message

        POST: create a new session (log in)
            - requires complete creation template
            - if creation template is complete and login is successful:
                - HTTP 201 response
                - session var id set to Session.token value
                - collection.items will be a one-item list containing new session info
                - collection.template will be empty
            - if creation template is complete and login is unsuccessful:
                - HTTP 401 response
                - collection.items will be empty
                - collection.template will contain creation template
                - collection.error will contain 401 error code, title and message
            - if creation template is not complete:
                - HTTP 400 response
                - collection.items will be empty
                - collection.template will contain creation template
                - collection.error will contain 400 error code, title and message

    /session/<token>/

        GET: get information about a session
            - requires authenticated user
            - if not authenticated:
                - HTTP 401 response
                - collection.items will be empty
                - collection.template will contain creation template
                - collection.error will contain 401 error code, title and message
            - if authenticated and session.user == user.id:
                - HTTP 200 response
                - collection.items will be a one-item list containing the session info
                - collection.template will be empty
            - if (authenticated and session.user != user.id) or token does not exist:
                - HTTP 404 response
                - collection.items and collection.template will be empty
                - collection.error will contain 404 error code, title and message

        PUT: update session expiry
            - requires authenticated user
            - if not authenticated:
                - HTTP 401 response
                - collection.items will be empty
                - collection.template will contain creation template
                - collection.error will contain 401 error code, title and message
            - if authenticated and session.user == user.id:
                - HTTP 200 response
                - session.expiry gets updated
                - collection.items will be a one-item list containing updated session info
                - collection.template will be empty
            - if (authenticated and session.user != user.id) or token does not exist:
                - HTTP 404 response
                - collection.items and collection.template will be empty
                - collection.error will contain 404 error code, title and message

        PATCH: update session expiry
            - clones the functionality of the PUT method for this endpoint

        POST: unsupported
            - HTTP 405 response
            - All collection fields will be empty, if possible, except error
            - collection.error will contain 405 error code, title and message

        DELETE: de-authenticate and delete the current session (log out)
            - requires authenticated user
            - if not authenticated:
                - HTTP 401 response
                - collection.items will be empty
                - collection.template will contain creation template
                - collection.error will contain 401 error code, title and message
            - if authenticated and session.user == user.id:
                - HTTP 204 response
                - No response body
            - if (authenticated and session.user != user.id) or token does not exist:
                - HTTP 404 response
                - collection.items and collection.template will be empty
                - collection.error will contain 404 error code, title and message
    """
    def __init__(self, db):
        super(Session, self).__init__(db, blackbook.database.couch.models.Session)

    def _generate_document(self, *args, **kwargs):
        pass

    def delete(self, *args, **kwargs):
        pass

    def get(self, *args, **kwargs):
        pass

    def head(self, *args, **kwargs):
        pass

    def options(self, *args, **kwargs):
        pass

    def patch(self, *args, **kwargs):
        pass

    def post(self, *args, **kwargs):
        pass

    def put(self, *args, **kwargs):
        pass

    def search(self, *args, **kwargs):
        pass

class User(CouchAPI):
    """
    User API class

    /user/[?[page=<pagenum>][username=<name>][email=<email>]]

        GET: retrieve list of users
            - serves creation template
            - requires authenticated admin user to see user list
            - optionally requires authenticated admin user to see creation template
            - if not authenticated:
                - if public registration is off:
                    - HTTP 401 response
                    - collection.items and collection.template will be empty
                    - collection.error will contain 401 error code, title and message
                - if public registration is on:
                    - HTTP 200 response
                    - collection.items will be empty
                    - collection.template will contain creation template
            - if authenticated:
                - if not authorized:
                    - HTTP 403 response
                    - collection.items and collection.template will be empty
                    - collection.error will contain 403 error code, title and message
                - if authorized:
                    - HTTP 200 response
                    - collection.items will contain a paginated list of users
                    - collection.links will contain a list of pagination links
                    - collection.queries will contain a list of queries that can be performed
                        - username: search the list by username
                        - email: search the list by email
                    - collection.template will contain creation template

        POST: create a new user
            - optionally allows public creation of user accounts
            - requires completed creation form
            - if (not authenticated and public registration is on) or (authenticated admin user):
                - if form is complete:
                    - HTTP 201 response
                    - collection.items will contain a one-item list of the new user's information
                    - collection.template will contain the creation template
                - if form is incomplete:
                    - HTTP 400 response
                    - collection.items will be empty
                    - collection.template will contain the creation template
                    - collection.error will contain 400 error code, title and message
            - if not authenticated and public registration is off:
                - HTTP 401 response
                - collection.items will be empty
                - collection.template will be empty
                - collection.error will contain 401 error code, title and message
            - if authenticated non-admin user:
                - HTTP 403 response
                - collection.items and collection.template will be empty
                - collection.error will contain 403 error code, title and message

    /user/<id>/

        GET: retrieve information about a specific user
            - serves update template
            - requires authenticated user
            - if not authenticated:
                - HTTP 401 response
                - collection.items and collection.template will be empty
                - collection.error will contain 401 error code, title and message
            - if authenticated:
                - non-admin users may only retrieve their own info
                - admin users may retrieve info about any user
                - certain info (such as passwords) cannot be retrieved through the api
                - if (non-admin user and <id> == user.id) or (admin user):
                    - HTTP 200 response
                    - collection.items will contain a one-item list with the user's information
                    - collection.template will contain the update template
                - if non-admin user and (<id> != user.id or <id> does not exist):
                    - HTTP 403 response
                    - collection.items and collection.template will be empty
                    - collection.error will contain 403 error code, title and message
                - if admin user and <id> does not exist:
                    - HTTP 404 response
                    - collection.items and collection.template will be empty
                    - collection.error will contain 404 error code, title and message

        PUT: update information about a specific user
            - requires authenticated user and complete template
            - if not authenticated:
                - HTTP 401 response
                - collection.items and collection.template will be empty
                - collection.error will contain 401 error code, title and message
            - if authenticated:
                - non-admin users may only update themselves
                - admin users may update any user
                - certain info (such as passwords) can only be modified by the user themselves through the api
                - if (non-admin user and <id> == user.id) or (admin user):
                    - if template complete:
                        - HTTP 200 response
                        - collection.items will contain a one-item list with the user's updated information
                        - collection.template will contain the update template
                    - if template incomplete:
                        - HTTP 400 response
                        - collection.items will be empty
                        - collection.template will contain update template
                        - collection.error will contain 400 error code, title and message
                - if non-admin user and (<id> != user.id or <id> does not exist):
                    - HTTP 403 response
                    - collection.items and collection.template will be empty
                    - collection.error will contain 403 error code, title and message
                - if admin user and <id> does not exist:
                    - HTTP 404 response
                    - collection.items and collection.template will be empty
                    - collection.error will contain 404 error code, title and message

        PATCH: update information about a specific user
            - requires authenticated user and partial or complete template
            - if not authenticated:
                - HTTP 401 response
                - collection.items and collection.template will be empty
                - collection.error will contain 401 error code, title and message
            - if authenticated:
                - non-admin users may only update themselves
                - admin users may update any user
                - certain info (such as passwords) can only be modified by the user themselves through the api
                - data in submitted template that does not match the server's template will be ignored
                - if (non-admin user and <id> == user.id) or (admin user):
                    - if template contains matching data:
                        - HTTP 200 response
                        - collection.items will contain a one-item list with the user's updated information
                        - collection.template will contain the update template
                    - if template does not contain any matching data:
                        - HTTP 400 response
                        - collection.items will be empty
                        - collection.template will contain update template
                        - collection.error will contain 400 error code, title and message
                - if non-admin user and (<id> != user.id or <id> does not exist):
                    - HTTP 403 response
                    - collection.items and collection.template will be empty
                    - collection.error will contain 403 error code, title and message
                - if admin user and <id> does not exist:
                    - HTTP 404 response
                    - collection.items and collection.template will be empty
                    - collection.error will contain 404 error code, title and message

        DELETE: delete a specific user
            - requires authenticated user
            - if not authenticated:
                - HTTP 401 response
                - collection.items and collection.template will be empty
                - collection.error will contain 401 error code, title and message
            - if authenticated:
                - if (non-admin user and <id> == user.id) or (admin user):
                    - HTTP 204 response
                    - No body
                - if non-admin user and (<id> != user.id or <id> does not exist):
                    - HTTP 403 response
                    - collection.items and collection.template will be empty
                    - collection.error will contain 403 error code, title and message
                - if admin and <id> does not exist:
                    - HTTP 404 response
                    - collection.items and collection.template will be empty
                    - collection.error will contain 404 error code, title and message
    """
    def __init__(self, db):
        super(User, self).__init__(db, blackbook.database.couch.models.User)

    def _generate_document(self, *args, **kwargs):
        pass

    def delete(self, *args, **kwargs):
        pass

    def get(self, *args, **kwargs):
        pass

    def head(self, *args, **kwargs):
        pass

    def options(self, *args, **kwargs):
        pass

    def patch(self, *args, **kwargs):
        pass

    def post(self, *args, **kwargs):
        pass

    def put(self, *args, **kwargs):
        pass

    def search(self, *args, **kwargs):
        pass
def init_api(app):
    database = blackbook.database.couch.database.init_db(app)

    api_blueprint = Blueprint("api", __name__, url_prefix="/api")  # TODO: use config API_ROOT

    def api_root():
        document = collection_plus_json.Collection(
            href="/api/",  # TODO: use config API_ROOT
            links=[
                collection_plus_json.Link(href="/api/contact/", rel="more", prompt="Contacts Endpoint"),
                collection_plus_json.Link(href="/api/user/", rel="more", prompt="Users Endpoint"),
                collection_plus_json.Link(href="/api/session/", rel="more", prompt="Sessions API")
            ]
        )
        if request.method in {"GET", "OPTIONS"}:
            return Response(response=document, mimetype=document.mimetype)
        else:
            return Response()

    contact_view = Contact(database).as_view('contact_api')

    api_blueprint.add_url_rule('/', view_func=api_root, methods=["GET", "HEAD", "OPTIONS"])
    api_blueprint.add_url_rule('/contact/', defaults={'user_id': None}, view_func=contact_view, methods=["GET", "POST"])
    api_blueprint.add_url_rule('/contact/<contact_id>/', defaults={'user_id': None, 'contact_id': None},
                               view_func=contact_view, methods=["GET", "PATCH", "PUT", "DELETE"])
    api_blueprint.add_url_rule('/user/<user_id>/contacts/', defaults={'user_id': None},
                               view_func=contact_view, methods=["GET", "POST"])

    return api_blueprint
| 49.328449 | 128 | 0.557005 | 4,277 | 40,400 | 5.194996 | 0.069675 | 0.048022 | 0.064359 | 0.049147 | 0.806247 | 0.778163 | 0.736217 | 0.714659 | 0.694991 | 0.675998 | 0 | 0.014698 | 0.373515 | 40,400 | 818 | 129 | 49.388753 | 0.863177 | 0.640965 | 0 | 0.482759 | 0 | 0 | 0.068293 | 0.005507 | 0 | 0 | 0 | 0.002445 | 0 | 1 | 0.16092 | false | 0.099617 | 0.061303 | 0 | 0.295019 | 0.02682 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
a62206cff6c8c86c0b3e5a8a7ef1d367e731f7f3 | 1,983 | py | Python | tests/application_tests/test_jinja_helpers.py | joelvisroman/dataviva-site | b4219558457746fd5c6b8f4b65b04c738c656fbd | [
"MIT"
] | 126 | 2015-03-24T12:30:43.000Z | 2022-01-06T03:29:54.000Z | tests/application_tests/test_jinja_helpers.py | joelvisroman/dataviva-site | b4219558457746fd5c6b8f4b65b04c738c656fbd | [
"MIT"
] | 694 | 2015-01-14T11:55:28.000Z | 2021-02-08T20:23:11.000Z | tests/application_tests/test_jinja_helpers.py | joelvisroman/dataviva-site | b4219558457746fd5c6b8f4b65b04c738c656fbd | [
"MIT"
] | 52 | 2015-06-19T01:54:56.000Z | 2019-09-23T13:10:46.000Z | #coding: utf-8
from dataviva.utils.jinja_helpers import max_digits
from flask import g
from test_base import BaseTestCase
class MaxDigitsPTTests(BaseTestCase):
    def setUp(self):
        g.locale = 'en'

    def test_max_digits_3_for_1_is_1(self):
        assert '1.00' == max_digits(1, 3)

    def test_max_digits_3_for_10_is_10(self):
        assert '10.0' == max_digits(10, 3)

    def test_max_digits_3_for_100_is_100(self):
        assert '100' == max_digits(100, 3)

    def test_max_digits_3_for_1000_is_1000(self):
        assert '1.00' == max_digits(1000, 3)

    def test_max_digits_3_for_10000_is_10000(self):
        assert '10.0' == max_digits(10000, 3)

    def test_max_digits_3_for_100000_is_100000(self):
        assert '100' == max_digits(100000, 3)

    def test_max_digits_3_for_001_is_001(self):
        assert '0.01' == max_digits(0.01, 3)

    def test_max_digits_3_for_decimal_0001_is_000(self):
        assert '0.00' == max_digits(0.001, 3)

    def test_max_digits_3_for_decimal_0009_is_001(self):
        assert '0.01' == max_digits(0.009, 3)

    def test_max_digits_3_for_decimal_0005_is_001(self):
        assert '0.01' == max_digits(0.005, 3)

    def test_max_digits_3_for_decimal_0003_is_001(self):
        assert '0.00' == max_digits(0.003, 3)

    def test_max_digits_3_for_decimal_50_600_is_50_6(self):
        assert '50.6' == max_digits(50.600, 3)

    def test_max_digits_3_for_decimal_100001000100_00_is_100(self):
        assert '100' == max_digits(100001000100.00, 3)

    def test_max_digits_3_for_decimal_10000100010_00_is_10_0(self):
        assert '10.0' == max_digits(10000100010.00, 3)

    def test_max_digits_3_for_decimal_0_4_is_10_0_40(self):
        assert '0.40' == max_digits(0.4, 3)

    def test_max_digits_3_for_decimal__0_4_is_10__0_40(self):
        assert '-0.40' == max_digits(-0.4, 3)

    def test_max_digits_3_for_decimal__2319086130_00_is_10__0_40(self):
        assert '-2.31' == max_digits(-2319086130.00, 3)
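The expectations above encode a "reduce by thousands-groups, keep three significant characters" rule. A minimal sketch of that rule is below. This is NOT the dataviva implementation: `max_digits_sketch` is a hypothetical name, and the real helper's rounding differs in at least one case (it appears to truncate `-2.319...` to `-2.31`, where ordinary formatting would round to `-2.32`):

```python
def max_digits_sketch(value: float, digits: int = 3) -> str:
    # Illustrative only -- not dataviva's max_digits.
    sign = "-" if value < 0 else ""
    v = abs(value)
    # step down one thousands-group at a time (k, M, B, ...)
    while v >= 10 ** digits:
        v /= 1000.0
    # pad with decimals so the result keeps `digits` significant characters
    int_digits = len(str(int(v))) if v >= 1 else 1
    decimals = digits - int_digits
    return sign + f"{v:.{decimals}f}"
```

Under this rule 1 renders as '1.00', 1000 collapses back to '1.00', and 100001000100.00 collapses to '100', matching the corresponding tests.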
| 31.47619 | 71 | 0.701967 | 350 | 1,983 | 3.5 | 0.16 | 0.257143 | 0.138776 | 0.222041 | 0.680816 | 0.658776 | 0.538776 | 0.354286 | 0.217143 | 0.122449 | 0 | 0.200498 | 0.190116 | 1,983 | 62 | 72 | 31.983871 | 0.562267 | 0.006556 | 0 | 0 | 0 | 0 | 0.035043 | 0 | 0 | 0 | 0 | 0 | 0.425 | 1 | 0.45 | false | 0 | 0.075 | 0 | 0.55 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
a63bfd2f1eae7bb8439df28e5e58979bfcc7fbf0 | 96 | py | Python | terrascript/local/__init__.py | vutsalsinghal/python-terrascript | 3b9fb5ad77453d330fb0cd03524154a342c5d5dc | [
"BSD-2-Clause"
] | null | null | null | terrascript/local/__init__.py | vutsalsinghal/python-terrascript | 3b9fb5ad77453d330fb0cd03524154a342c5d5dc | [
"BSD-2-Clause"
] | null | null | null | terrascript/local/__init__.py | vutsalsinghal/python-terrascript | 3b9fb5ad77453d330fb0cd03524154a342c5d5dc | [
"BSD-2-Clause"
] | null | null | null | # terrascript/local/__init__.py
import terrascript
class local(terrascript.Provider):
    pass
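These one-line provider subclasses look empty, but the class name itself does the work: it becomes the provider key in the generated Terraform JSON. The sketch below mimics that mechanism with a minimal stand-in `Provider` base; it is not the real terrascript API:

```python
import json


class Provider(dict):
    """Minimal stand-in for terrascript.Provider (NOT the real API):
    keyword arguments become the provider's configuration block."""
    def __init__(self, **kwargs):
        super().__init__(**kwargs)


class local(Provider):
    pass  # the class name alone selects the Terraform provider key


def render(provider: Provider) -> str:
    # the subclass name becomes the key under "provider" in Terraform JSON
    return json.dumps({"provider": {type(provider).__name__: dict(provider)}})
```

So `render(local(alias="files"))` yields a `"provider"` object keyed by `"local"`, which is why the subclass body can be just `pass`.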
a67c2d2c3ccc68e58781ead39d3b457accb8d98c | 106 | py | Python | terrascript/cloudflare/__init__.py | vutsalsinghal/python-terrascript | 3b9fb5ad77453d330fb0cd03524154a342c5d5dc | [
"BSD-2-Clause"
] | null | null | null | terrascript/cloudflare/__init__.py | vutsalsinghal/python-terrascript | 3b9fb5ad77453d330fb0cd03524154a342c5d5dc | [
"BSD-2-Clause"
] | null | null | null | terrascript/cloudflare/__init__.py | vutsalsinghal/python-terrascript | 3b9fb5ad77453d330fb0cd03524154a342c5d5dc | [
"BSD-2-Clause"
] | null | null | null | # terrascript/cloudflare/__init__.py
import terrascript
class cloudflare(terrascript.Provider):
    pass
a695c9d7115a0f36f7c9797e05163b2e81398f57 | 11,046 | py | Python | test/test_srv6_mobile.py | yasics/vpp | a4d0956082f12ac8269fd415134af7f605c1f3c9 | [
"Apache-2.0"
] | 751 | 2017-07-13T06:16:46.000Z | 2022-03-30T09:14:35.000Z | test/test_srv6_mobile.py | yasics/vpp | a4d0956082f12ac8269fd415134af7f605c1f3c9 | [
"Apache-2.0"
] | 32 | 2021-03-24T06:04:08.000Z | 2021-09-14T02:02:22.000Z | test/test_srv6_mobile.py | yasics/vpp | a4d0956082f12ac8269fd415134af7f605c1f3c9 | [
"Apache-2.0"
] | 479 | 2017-07-13T06:17:26.000Z | 2022-03-31T18:20:43.000Z | #!/usr/bin/env python3
from framework import VppTestCase
from ipaddress import IPv4Address
from ipaddress import IPv6Address
from scapy.contrib.gtp import *
from scapy.all import *
class TestSRv6EndMGTP4E(VppTestCase):
    """ SRv6 End.M.GTP4.E (SRv6 -> GTP-U) """

    @classmethod
    def setUpClass(cls):
        super(TestSRv6EndMGTP4E, cls).setUpClass()
        try:
            cls.create_pg_interfaces(range(2))
            cls.pg_if_i = cls.pg_interfaces[0]
            cls.pg_if_o = cls.pg_interfaces[1]

            cls.pg_if_i.config_ip6()
            cls.pg_if_o.config_ip4()

            cls.ip4_dst = cls.pg_if_o.remote_ip4
            # cls.ip4_src = cls.pg_if_o.local_ip4
            cls.ip4_src = "192.168.192.10"

            for pg_if in cls.pg_interfaces:
                pg_if.admin_up()
                pg_if.resolve_arp()
        except Exception:
            super(TestSRv6EndMGTP4E, cls).tearDownClass()
            raise

    def create_packets(self, inner):
        ip4_dst = IPv4Address(str(self.ip4_dst))
        # 32bit prefix + 32bit IPv4 DA + 8bit + 32bit TEID + 24bit
        dst = b'\xaa' * 4 + ip4_dst.packed + \
            b'\x11' + b'\xbb' * 4 + b'\x11' * 3
        ip6_dst = IPv6Address(dst)

        ip4_src = IPv4Address(str(self.ip4_src))
        # 64bit prefix + 32bit IPv4 SA + 16 bit port + 16bit
        src = b'\xcc' * 8 + ip4_src.packed + \
            b'\xdd' * 2 + b'\x11' * 2
        ip6_src = IPv6Address(src)

        self.logger.info("ip4 dst: {}".format(ip4_dst))
        self.logger.info("ip4 src: {}".format(ip4_src))
        self.logger.info("ip6 dst (remote srgw): {}".format(ip6_dst))
        self.logger.info("ip6 src (local srgw): {}".format(ip6_src))

        pkts = list()
        for d, s in inner:
            pkt = (Ether() /
                   IPv6(dst=str(ip6_dst), src=str(ip6_src)) /
                   IPv6ExtHdrSegmentRouting() /
                   IPv6(dst=d, src=s) /
                   UDP(sport=1000, dport=23))
            self.logger.info(pkt.show2(dump=True))
            pkts.append(pkt)
        return pkts

    def test_srv6_mobile(self):
        """ test_srv6_mobile """
        pkts = self.create_packets([("A::1", "B::1"), ("C::1", "D::1")])

        self.vapi.cli(
            "sr localsid address {} behavior end.m.gtp4.e v4src_position 64"
            .format(pkts[0]['IPv6'].dst))
        self.logger.info(self.vapi.cli("show sr localsids"))

        self.vapi.cli("clear errors")

        self.pg0.add_stream(pkts)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        self.logger.info(self.vapi.cli("show errors"))
        self.logger.info(self.vapi.cli("show int address"))

        capture = self.pg1.get_capture(len(pkts))

        for pkt in capture:
            self.logger.info(pkt.show2(dump=True))
            self.assertEqual(pkt[IP].dst, self.ip4_dst)
            self.assertEqual(pkt[IP].src, self.ip4_src)
            self.assertEqual(pkt[GTP_U_Header].teid, 0xbbbbbbbb)
class TestSRv6TMGTP4D(VppTestCase):
    """ SRv6 T.M.GTP4.D (GTP-U -> SRv6) """

    @classmethod
    def setUpClass(cls):
        super(TestSRv6TMGTP4D, cls).setUpClass()
        try:
            cls.create_pg_interfaces(range(2))
            cls.pg_if_i = cls.pg_interfaces[0]
            cls.pg_if_o = cls.pg_interfaces[1]

            cls.pg_if_i.config_ip4()
            cls.pg_if_i.config_ip6()
            cls.pg_if_o.config_ip4()
            cls.pg_if_o.config_ip6()

            cls.ip4_dst = "1.1.1.1"
            cls.ip4_src = "2.2.2.2"

            cls.ip6_dst = cls.pg_if_o.remote_ip6

            for pg_if in cls.pg_interfaces:
                pg_if.admin_up()
                pg_if.resolve_arp()
                pg_if.resolve_ndp(timeout=5)
        except Exception:
            super(TestSRv6TMGTP4D, cls).tearDownClass()
            raise

    def create_packets(self, inner):
        ip4_dst = IPv4Address(str(self.ip4_dst))
        ip4_src = IPv4Address(str(self.ip4_src))
        self.logger.info("ip4 dst: {}".format(ip4_dst))
        self.logger.info("ip4 src: {}".format(ip4_src))

        pkts = list()
        for d, s in inner:
            pkt = (Ether() /
                   IP(dst=str(ip4_dst), src=str(ip4_src)) /
                   UDP(sport=2152, dport=2152) /
                   GTP_U_Header(gtp_type="g_pdu", teid=200) /
                   IPv6(dst=d, src=s) /
                   UDP(sport=1000, dport=23))
            self.logger.info(pkt.show2(dump=True))
            pkts.append(pkt)
        return pkts

    def test_srv6_mobile(self):
        """ test_srv6_mobile """
        pkts = self.create_packets([("A::1", "B::1"), ("C::1", "D::1")])

        self.vapi.cli("set sr encaps source addr A1::1")
        self.vapi.cli("sr policy add bsid D4:: next D2:: next D3::")
        self.vapi.cli(
            "sr policy add bsid D5:: behavior t.m.gtp4.d "
            "D4::/32 v6src_prefix C1::/64 nhtype ipv6")
        self.vapi.cli("sr steer l3 {}/32 via bsid D5::".format(self.ip4_dst))
        self.vapi.cli("ip route add D2::/32 via {}".format(self.ip6_dst))

        self.logger.info(self.vapi.cli("show sr steer"))
        self.logger.info(self.vapi.cli("show sr policies"))

        self.vapi.cli("clear errors")

        self.pg0.add_stream(pkts)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        self.logger.info(self.vapi.cli("show errors"))
        self.logger.info(self.vapi.cli("show int address"))

        capture = self.pg1.get_capture(len(pkts))

        for pkt in capture:
            self.logger.info(pkt.show2(dump=True))
            self.logger.info("GTP4.D Address={}".format(
                str(pkt[IPv6ExtHdrSegmentRouting].addresses[0])))
            self.assertEqual(
                str(pkt[IPv6ExtHdrSegmentRouting].addresses[0]),
                "d4:0:101:101::c800:0")
class TestSRv6EndMGTP6E(VppTestCase):
    """ SRv6 End.M.GTP6.E """

    @classmethod
    def setUpClass(cls):
        super(TestSRv6EndMGTP6E, cls).setUpClass()
        try:
            cls.create_pg_interfaces(range(2))
            cls.pg_if_i = cls.pg_interfaces[0]
            cls.pg_if_o = cls.pg_interfaces[1]

            cls.pg_if_i.config_ip6()
            cls.pg_if_o.config_ip6()

            cls.ip6_nhop = cls.pg_if_o.remote_ip6

            for pg_if in cls.pg_interfaces:
                pg_if.admin_up()
                pg_if.resolve_ndp(timeout=5)
        except Exception:
            super(TestSRv6EndMGTP6E, cls).tearDownClass()
            raise

    def create_packets(self, inner):
        # 64bit prefix + 8bit QFI + 32bit TEID + 24bit
        dst = b'\xaa' * 8 + b'\x00' + \
            b'\xbb' * 4 + b'\x00' * 3
        ip6_dst = IPv6Address(dst)
        self.ip6_dst = ip6_dst

        src = b'\xcc' * 8 + \
            b'\xdd' * 4 + b'\x11' * 4
        ip6_src = IPv6Address(src)
        self.ip6_src = ip6_src

        pkts = list()
        for d, s in inner:
            pkt = (Ether() /
                   IPv6(dst=str(ip6_dst),
                        src=str(ip6_src)) /
                   IPv6ExtHdrSegmentRouting(segleft=1,
                                            lastentry=0,
                                            tag=0,
                                            addresses=["a1::1"]) /
                   IPv6(dst=d, src=s) / UDP(sport=1000, dport=23))
            self.logger.info(pkt.show2(dump=True))
            pkts.append(pkt)
        return pkts

    def test_srv6_mobile(self):
        """ test_srv6_mobile """
        pkts = self.create_packets([("A::1", "B::1"), ("C::1", "D::1")])

        self.vapi.cli(
            "sr localsid prefix {}/64 behavior end.m.gtp6.e"
            .format(pkts[0]['IPv6'].dst))
        self.vapi.cli(
            "ip route add a1::/64 via {}".format(self.ip6_nhop))
        self.logger.info(self.vapi.cli("show sr localsids"))

        self.vapi.cli("clear errors")

        self.pg0.add_stream(pkts)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        self.logger.info(self.vapi.cli("show errors"))
        self.logger.info(self.vapi.cli("show int address"))

        capture = self.pg1.get_capture(len(pkts))

        for pkt in capture:
            self.logger.info(pkt.show2(dump=True))
            self.assertEqual(pkt[IPv6].dst, "a1::1")
            self.assertEqual(pkt[IPv6].src, str(self.ip6_src))
            self.assertEqual(pkt[GTP_U_Header].teid, 0xbbbbbbbb)
class TestSRv6EndMGTP6D(VppTestCase):
    """ SRv6 End.M.GTP6.D """

    @classmethod
    def setUpClass(cls):
        super(TestSRv6EndMGTP6D, cls).setUpClass()
        try:
            cls.create_pg_interfaces(range(2))
            cls.pg_if_i = cls.pg_interfaces[0]
            cls.pg_if_o = cls.pg_interfaces[1]

            cls.pg_if_i.config_ip6()
            cls.pg_if_o.config_ip6()

            cls.ip6_nhop = cls.pg_if_o.remote_ip6

            cls.ip6_dst = "2001::1"
            cls.ip6_src = "2002::1"

            for pg_if in cls.pg_interfaces:
                pg_if.admin_up()
                pg_if.resolve_ndp(timeout=5)
        except Exception:
            super(TestSRv6EndMGTP6D, cls).tearDownClass()
            raise

    def create_packets(self, inner):
        ip6_dst = IPv6Address(str(self.ip6_dst))
        ip6_src = IPv6Address(str(self.ip6_src))
        self.logger.info("ip6 dst: {}".format(ip6_dst))
        self.logger.info("ip6 src: {}".format(ip6_src))

        pkts = list()
        for d, s in inner:
            pkt = (Ether() /
                   IPv6(dst=str(ip6_dst), src=str(ip6_src)) /
                   UDP(sport=2152, dport=2152) /
                   GTP_U_Header(gtp_type="g_pdu", teid=200) /
                   IPv6(dst=d, src=s) /
                   UDP(sport=1000, dport=23))
            self.logger.info(pkt.show2(dump=True))
            pkts.append(pkt)
        return pkts

    def test_srv6_mobile(self):
        """ test_srv6_mobile """
        pkts = self.create_packets([("A::1", "B::1"), ("C::1", "D::1")])

        self.vapi.cli("set sr encaps source addr A1::1")
        self.vapi.cli("sr policy add bsid D4:: next D2:: next D3::")
        self.vapi.cli(
            "sr localsid prefix 2001::/64 behavior end.m.gtp6.d D4::/64")
        self.vapi.cli("ip route add D2::/64 via {}".format(self.ip6_nhop))
        self.logger.info(self.vapi.cli("show sr policies"))

        self.vapi.cli("clear errors")

        self.pg0.add_stream(pkts)
        self.pg_enable_capture(self.pg_interfaces)
        self.pg_start()

        self.logger.info(self.vapi.cli("show errors"))
        self.logger.info(self.vapi.cli("show int address"))

        capture = self.pg1.get_capture(len(pkts))

        for pkt in capture:
            self.logger.info(pkt.show2(dump=True))
            self.logger.info("GTP6.D Address={}".format(
                str(pkt[IPv6ExtHdrSegmentRouting].addresses[0])))
            self.assertEqual(
                str(pkt[IPv6ExtHdrSegmentRouting].addresses[0]), "d4::c800:0")
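The address layout commented in `TestSRv6EndMGTP4E.create_packets` above (32-bit SRv6 prefix, 32-bit IPv4 DA, an 8-bit field, 32-bit TEID, 24-bit tail) can be reproduced with only the stdlib `ipaddress` module. The helper name `gtp4e_sid` and the `0x11` filler bytes mirror the test's choices; this is a sketch of the byte packing, not VPP's implementation:

```python
from ipaddress import IPv4Address, IPv6Address


def gtp4e_sid(prefix32: bytes, ipv4_da: str, teid: int) -> IPv6Address:
    # 32-bit SRv6 prefix + 32-bit IPv4 destination + 8-bit field
    # + 32-bit TEID + 24-bit tail = 128-bit IPv6 address
    packed = (prefix32
              + IPv4Address(ipv4_da).packed
              + b"\x11"
              + teid.to_bytes(4, "big")
              + b"\x11" * 3)
    assert len(packed) == 16
    return IPv6Address(packed)
```

With the test's `b'\xaa' * 4` prefix and TEID `0xbbbbbbbb`, the IPv4 destination is recoverable from bytes 4-8 of the resulting SID, which is exactly what the End.M.GTP4.E behavior decapsulates.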
| 32.392962 | 78 | 0.549611 | 1,460 | 11,046 | 4.007534 | 0.126027 | 0.024611 | 0.074175 | 0.019142 | 0.807212 | 0.753205 | 0.726713 | 0.70911 | 0.672193 | 0.658691 | 0 | 0.053449 | 0.31233 | 11,046 | 340 | 79 | 32.488235 | 0.716825 | 0.035126 | 0 | 0.714876 | 0 | 0 | 0.105045 | 0 | 0 | 0 | 0.001886 | 0 | 0.033058 | 1 | 0.049587 | false | 0 | 0.020661 | 0 | 0.103306 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a6fe0b0c80e7d715c7a1fec94649f2f71fe0982c | 78 | py | Python | genelang/results/BranchResult.py | GabrielAmare/Genelang | af5294e900d2f79ff54375f9759c156a4b5a098a | [
"MIT"
] | null | null | null | genelang/results/BranchResult.py | GabrielAmare/Genelang | af5294e900d2f79ff54375f9759c156a4b5a098a | [
"MIT"
] | null | null | null | genelang/results/BranchResult.py | GabrielAmare/Genelang | af5294e900d2f79ff54375f9759c156a4b5a098a | [
"MIT"
] | null | null | null | from .ResultList import ResultList
class BranchResult(ResultList):
    pass
| 13 | 34 | 0.782051 | 8 | 78 | 7.625 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 78 | 5 | 35 | 15.6 | 0.938462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
5b278ed560369942ab18d9161e3f6b195f6d7ab9 | 132 | py | Python | src/navability/services/__init__.py | sam-globus/NavAbilitySDK.py | 46bcf0e8b4b244f26847b59fdf3bf7ba32e35013 | [
"Apache-2.0"
] | null | null | null | src/navability/services/__init__.py | sam-globus/NavAbilitySDK.py | 46bcf0e8b4b244f26847b59fdf3bf7ba32e35013 | [
"Apache-2.0"
] | 29 | 2022-01-17T16:44:49.000Z | 2022-03-31T11:55:01.000Z | src/navability/services/__init__.py | NavAbility/NavAbilitySDK.py | 815cd06574dcdf8bbf4770097c9494db739308f3 | [
"Apache-2.0"
] | null | null | null | # flake8: noqa: F401
from .factor import *
from .solve import *
from .status import *
from .utils import *
from .variable import *
| 16.5 | 23 | 0.712121 | 18 | 132 | 5.222222 | 0.555556 | 0.425532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037383 | 0.189394 | 132 | 7 | 24 | 18.857143 | 0.841122 | 0.136364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5b375b03aed9ecd7297dc78618f1d7fc1a056f04 | 554 | py | Python | distancematrix/consumer/__init__.py | linked-time-series/seriesdistancematrix | 7a02fc0eb83e114c38ac73b6ae845cd460479fe9 | [
"MIT"
] | 12 | 2019-11-22T14:34:51.000Z | 2021-05-04T19:23:55.000Z | distancematrix/consumer/__init__.py | linked-time-series/seriesdistancematrix | 7a02fc0eb83e114c38ac73b6ae845cd460479fe9 | [
"MIT"
] | 1 | 2020-04-28T07:59:03.000Z | 2020-04-28T07:59:03.000Z | distancematrix/consumer/__init__.py | linked-time-series/seriesdistancematrix | 7a02fc0eb83e114c38ac73b6ae845cd460479fe9 | [
"MIT"
] | 3 | 2020-03-02T12:39:00.000Z | 2021-03-22T13:36:25.000Z | from distancematrix.consumer.contextual_matrix_profile import ContextualMatrixProfile
from distancematrix.consumer.distance_matrix import DistanceMatrix
from distancematrix.consumer.matrix_profile_lr import MatrixProfileLR
from distancematrix.consumer.matrix_profile_lr import ShiftingMatrixProfileLR
from distancematrix.consumer.matrix_profile_lr import MatrixProfileLRReservoir
from distancematrix.consumer.multidimensional_matrix_profile_lr import MultidimensionalMatrixProfileLR
from distancematrix.consumer.threshold_counter import ThresholdCounter
| 69.25 | 102 | 0.924188 | 55 | 554 | 9.072727 | 0.327273 | 0.252505 | 0.364729 | 0.168337 | 0.282565 | 0.282565 | 0.282565 | 0 | 0 | 0 | 0 | 0 | 0.050542 | 554 | 7 | 103 | 79.142857 | 0.948669 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
5b95d1c9f54f1e057144f3de071d64996b41a40f | 22 | py | Python | python/tvm/auto_tensorize/hw_abstraction/opencl/__init__.py | QinHan-Erin/AMOS | 634bf48edf4015e4a69a8c32d49b96bce2b5f16f | [
"Apache-2.0"
] | 22 | 2022-03-18T07:29:31.000Z | 2022-03-23T14:54:32.000Z | python/tvm/auto_tensorize/hw_abstraction/opencl/__init__.py | QinHan-Erin/AMOS | 634bf48edf4015e4a69a8c32d49b96bce2b5f16f | [
"Apache-2.0"
] | null | null | null | python/tvm/auto_tensorize/hw_abstraction/opencl/__init__.py | QinHan-Erin/AMOS | 634bf48edf4015e4a69a8c32d49b96bce2b5f16f | [
"Apache-2.0"
] | 2 | 2022-03-18T08:26:34.000Z | 2022-03-20T06:02:48.000Z | from .arm_dot import * | 22 | 22 | 0.772727 | 4 | 22 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 22 | 1 | 22 | 22 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7516acd44193c356a1ab100ef3153249f8e50efa | 82 | py | Python | rem_events/__init__.py | wishful-project/wishrem_rem_events | 3525c3cffe5ddd94f8255e965b32e43aa6406215 | [
"Apache-2.0"
] | null | null | null | rem_events/__init__.py | wishful-project/wishrem_rem_events | 3525c3cffe5ddd94f8255e965b32e43aa6406215 | [
"Apache-2.0"
] | null | null | null | rem_events/__init__.py | wishful-project/wishrem_rem_events | 3525c3cffe5ddd94f8255e965b32e43aa6406215 | [
"Apache-2.0"
] | null | null | null | from .sensing_events import *
from .rrm_events import *
from .rem_events import *
| 20.5 | 29 | 0.780488 | 12 | 82 | 5.083333 | 0.5 | 0.590164 | 0.52459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146341 | 82 | 3 | 30 | 27.333333 | 0.871429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
7526885555014d7b68ef8ec25aa0832ba0a5c6e9 | 71 | py | Python | vi/vi/grid/__init__.py | pveierland/permve-ntnu-it3105 | 6a7e4751de47b091c1c9c59560c19a8452698d81 | [
"CC0-1.0"
] | null | null | null | vi/vi/grid/__init__.py | pveierland/permve-ntnu-it3105 | 6a7e4751de47b091c1c9c59560c19a8452698d81 | [
"CC0-1.0"
] | null | null | null | vi/vi/grid/__init__.py | pveierland/permve-ntnu-it3105 | 6a7e4751de47b091c1c9c59560c19a8452698d81 | [
"CC0-1.0"
] | null | null | null | from .coordinate import *
from .grid import *
from .rectangle import *
| 17.75 | 25 | 0.746479 | 9 | 71 | 5.888889 | 0.555556 | 0.377358 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169014 | 71 | 3 | 26 | 23.666667 | 0.898305 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
753ab90bd8372b5808de14b2757af128140f79f3 | 87 | py | Python | libs/uix/baseclass/root.py | glcod/KivyMD-Project-Creator | 790f2fcce7ab2f08fe117eb910e494fe6f898c16 | [
"MIT"
] | 51 | 2020-12-15T21:29:25.000Z | 2022-03-31T11:41:38.000Z | libs/uix/baseclass/root.py | glcod/KivyMD-Project-Creator | 790f2fcce7ab2f08fe117eb910e494fe6f898c16 | [
"MIT"
] | 8 | 2020-12-23T21:40:12.000Z | 2021-10-04T11:57:16.000Z | libs/uix/baseclass/root.py | glcod/KivyMD-Project-Creator | 790f2fcce7ab2f08fe117eb910e494fe6f898c16 | [
"MIT"
] | 14 | 2021-01-02T04:08:53.000Z | 2022-02-15T19:36:59.000Z | from kivy.uix.screenmanager import ScreenManager
class Root(ScreenManager):
    pass
| 14.5 | 48 | 0.793103 | 10 | 87 | 6.9 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.149425 | 87 | 5 | 49 | 17.4 | 0.932432 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
f362f39c12e15a70775d6e719c924daf5c2b6e07 | 158 | py | Python | src/symbols.py | renan-cunha/Hack-Assembler | c288f67eb37835a0b83380bdc81e168da5ebcba1 | [
"MIT"
] | null | null | null | src/symbols.py | renan-cunha/Hack-Assembler | c288f67eb37835a0b83380bdc81e168da5ebcba1 | [
"MIT"
] | null | null | null | src/symbols.py | renan-cunha/Hack-Assembler | c288f67eb37835a0b83380bdc81e168da5ebcba1 | [
"MIT"
] | null | null | null | def is_label(string: str) -> bool:
if string[0] == "(":
return True
return False
def get_label(string: str) -> str:
return string[1:-1]
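The two helpers are restated below in a slightly hardened form purely as a usage sketch: `startswith` avoids the `IndexError` that indexing `string[0]` raises on an empty line, which an assembler can plausibly encounter:

```python
def is_label(string: str) -> bool:
    # a Hack assembly label pseudo-instruction starts with "("
    return string.startswith("(")


def get_label(string: str) -> str:
    # strip the surrounding parentheses: "(LOOP)" -> "LOOP"
    return string[1:-1]


print(get_label("(LOOP)"))  # LOOP
```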
| 17.555556 | 34 | 0.582278 | 23 | 158 | 3.913043 | 0.565217 | 0.244444 | 0.311111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025862 | 0.265823 | 158 | 8 | 35 | 19.75 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.006329 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.166667 | 0.833333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
f37e3ab1ddf59d4e42768113b163ed580c517f79 | 188 | py | Python | cases/views.py | Wellheor1/l2 | d980210921c545c68fe9d5522bb693d567995024 | [
"MIT"
] | 10 | 2018-03-14T06:17:06.000Z | 2022-03-10T05:33:34.000Z | cases/views.py | Wellheor1/l2 | d980210921c545c68fe9d5522bb693d567995024 | [
"MIT"
] | 512 | 2018-09-10T07:37:34.000Z | 2022-03-30T02:23:43.000Z | cases/views.py | D00dleman/l2 | 0870144537ee340cd8db053a608d731e186f02fb | [
"MIT"
] | 24 | 2018-07-31T05:52:12.000Z | 2022-02-08T00:39:41.000Z | from django.shortcuts import render
from django.views.decorators.csrf import ensure_csrf_cookie


@ensure_csrf_cookie
def home(request):
    return render(request, 'dashboard/cases.html')
| 23.5 | 59 | 0.81383 | 26 | 188 | 5.730769 | 0.653846 | 0.134228 | 0.214765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106383 | 188 | 7 | 60 | 26.857143 | 0.886905 | 0 | 0 | 0 | 0 | 0 | 0.106383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
f39f896fef8d6169487f16576e94f0bd0c123437 | 13,947 | py | Python | exportchannel/exportchannel.py | AAA3A-AAA3A/AAA3A-cogs | 076ff390610e2470a086bdae41647ee21f01c323 | [
"MIT"
] | 1 | 2022-03-17T02:06:37.000Z | 2022-03-17T02:06:37.000Z | exportchannel/exportchannel.py | AAA3A-AAA3A/AAA3A-cogs | 076ff390610e2470a086bdae41647ee21f01c323 | [
"MIT"
] | 2 | 2022-03-07T03:29:33.000Z | 2022-03-17T06:51:43.000Z | exportchannel/exportchannel.py | AAA3A-AAA3A/AAA3A-cogs | 076ff390610e2470a086bdae41647ee21f01c323 | [
"MIT"
] | 2 | 2021-11-24T19:31:55.000Z | 2022-01-02T06:34:22.000Z | from .AAA3A_utils.cogsutils import CogsUtils # isort:skip
from redbot.core import commands # isort:skip
from redbot.core.i18n import Translator, cog_i18n # isort:skip
from redbot.core.bot import Red # isort:skip
import discord # isort:skip
import typing # isort:skip
import chat_exporter
import io
if CogsUtils().is_dpy2:
    from redbot.core.commands import RawUserIdConverter
# Credits:
# Thanks to Red's `Cleanup` cog for the converters and help with the message retrieval function! (https://github.com/Cog-Creators/Red-DiscordBot/blob/V3/develop/redbot/cogs/cleanup/converters.py#L12)
# Thanks to @epic guy on Discord for the basic syntax (command groups, commands) and also commands (await ctx.send, await ctx.author.send, await ctx.message.delete())!
# Thanks to the developers of the cogs I added features to as it taught me how to make a cog! (Chessgame by WildStriker, Captcha by Kreusada, Speak by Epic guy and Rommer by Dav)
# Thanks to all the people who helped me with some commands in the #coding channel of the redbot support server!
_ = Translator("ExportChannel", __file__)
@cog_i18n(_)
class ExportChannel(commands.Cog):
"""A cog to export all or part of a channel's messages to an html file!"""
def __init__(self, bot):
self.bot: Red = bot
self.cogsutils = CogsUtils(cog=self)
self.cogsutils._setup()
async def get_messages(self, channel: discord.TextChannel, number: typing.Optional[int]=None, limit: typing.Optional[int]=None, before: typing.Optional[discord.Message]=None, after: typing.Optional[discord.Message]=None, user_id: typing.Optional[int]=None, bot: typing.Optional[bool]=None):
messages = []
async for message in channel.history(limit=limit, before=before, after=after, oldest_first=False):
if user_id is not None:
if not message.author.id == user_id:
continue
if bot is not None:
if not message.author.bot == bot:
continue
messages.append(message)
if number is not None and number <= len(messages):
break
return messages
    @commands.admin_or_permissions(administrator=True)
    @commands.guild_only()
    @commands.group(name="exportchannel")
    async def exportchannel(self, ctx: commands.Context):
        """Commands to export all or part of a channel's messages to an html file."""

    @exportchannel.command()
    async def all(self, ctx: commands.Context, channel: typing.Optional[discord.TextChannel]=None):
        """Export all of a channel's messages to an html file.

        Please note: all attachments and user avatars are saved with the Discord link in this file.
        Remember that exporting other users' messages from Discord does not respect the TOS.
        """
        async with ctx.typing():
            if channel is None:
                channel = ctx.channel
            messages = await self.get_messages(channel=channel)
            messages = [message for message in messages if not message.id == ctx.message.id]
            count_messages = len(messages)
            if count_messages == 0:
                await ctx.send(_("Sorry. I could not find any message.").format(**locals()))
                return
            transcript = await chat_exporter.raw_export(channel=channel, messages=messages, tz_info="UTC", guild=channel.guild, bot=ctx.bot)
            file = discord.File(io.BytesIO(transcript.encode()),
                                filename=f"transcript-{channel.id}.html")
            await ctx.send(_("Here is the html file of the transcript of all the messages in the channel {channel.mention} ({channel.id}).\nPlease note: all attachments and user avatars are saved with the Discord link in this file.\nThere are {count_messages} exported messages.\nRemember that exporting other users' messages from Discord does not respect the TOS.").format(**locals()), file=file)
        await ctx.tick()
    @exportchannel.command()
    async def messages(self, ctx: commands.Context, channel: typing.Optional[discord.TextChannel], limit: int):
        """Export part of a channel's messages to an html file.

        Specify the number of messages, counted back from the end of the channel.
        Please note: all attachments and user avatars are saved with the Discord link in this file.
        Remember that exporting other users' messages from Discord does not respect the TOS.
        """
        async with ctx.typing():
            if channel is None:
                channel = ctx.channel
            messages = await self.get_messages(channel=channel, limit=limit if not channel == ctx.channel else limit + 1)
            messages = [message for message in messages if not message.id == ctx.message.id]
            count_messages = len(messages)
            if count_messages == 0:
                await ctx.send(_("Sorry. I could not find any message.").format(**locals()))
                return
            transcript = await chat_exporter.raw_export(channel=channel, messages=messages, tz_info="UTC", guild=channel.guild, bot=ctx.bot)
            file = discord.File(io.BytesIO(transcript.encode()),
                                filename=f"transcript-{channel.id}.html")
            await ctx.send(_("Here is the html file of the transcript of part of the messages in the channel {channel.mention} ({channel.id}).\nThere are {count_messages} exported messages.\nPlease note: all attachments and user avatars are saved with the Discord link in this file.\nRemember that exporting other users' messages from Discord does not respect the TOS.").format(**locals()), file=file)
        await ctx.tick()
@exportchannel.command()
async def before(self, ctx: commands.Context, channel: typing.Optional[discord.TextChannel], before: discord.Message):
"""Export part of a channel's messages to an html file.
Specify the before message (id or link).
Please note: all attachments and user avatars are saved with the Discord link in this file.
Remember that exporting other users' messages from Discord does not respect the TOS.
"""
async with ctx.typing():
if channel is None:
channel = ctx.channel
messages = await self.get_messages(channel=channel, before=before)
messages = [message for message in messages if not message.id == ctx.message.id]
count_messages = len(messages)
transcript = await chat_exporter.raw_export(channel=channel, messages=messages, tz_info="UTC", guild=channel.guild, bot=ctx.bot)
file = discord.File(io.BytesIO(transcript.encode()),
filename=f"transcript-{channel.id}.html")
await ctx.send(_("Here is the html file of the transcript of part the messages in the channel {channel.mention} ({channel.id}).\nThere are {count_messages} exported messages.\nPlease note: all attachments and user avatars are saved with the Discord link in this file.\nRemember that exporting other users' messages from Discord does not respect the TOS.").format(**locals()), file=file)
await ctx.tick()
@exportchannel.command()
async def after(self, ctx: commands.Context, channel: typing.Optional[discord.TextChannel], after: discord.Message):
"""Export part of a channel's messages to an html file.
Specify the after message (id or link).
Please note: all attachments and user avatars are saved with the Discord link in this file.
Remember that exporting other users' messages from Discord does not respect the TOS.
"""
async with ctx.typing():
if channel is None:
channel = ctx.channel
messages = await self.get_messages(channel=channel, after=after)
messages = [message for message in messages if not message.id == ctx.message.id]
count_messages = len(messages)
if count_messages == 0:
await ctx.send(_("Sorry. I could not find any message.").format(**locals()))
return
transcript = await chat_exporter.raw_export(channel=channel, messages=messages, tz_info="UTC", guild=channel.guild, bot=ctx.bot)
file = discord.File(io.BytesIO(transcript.encode()),
filename=f"transcript-{channel.id}.html")
await ctx.send(_("Here is the html file of the transcript of part the messages in the channel {channel.mention} ({channel.id}).\nThere are {count_messages} exported messages.\nPlease note: all attachments and user avatars are saved with the Discord link in this file.\nRemember that exporting other users' messages from Discord does not respect the TOS.").format(**locals()), file=file)
await ctx.tick()
@exportchannel.command()
async def between(self, ctx: commands.Context, channel: typing.Optional[discord.TextChannel], before: discord.Message, after: discord.Message):
"""Export part of a channel's messages to an html file.
Specify the between messages (id or link).
Please note: all attachments and user avatars are saved with the Discord link in this file.
Remember that exporting other users' messages from Discord does not respect the TOS.
"""
async with ctx.typing():
if channel is None:
channel = ctx.channel
messages = await self.get_messages(channel=channel, before=before, after=after)
messages = [message for message in messages if not message.id == ctx.message.id]
count_messages = len(messages)
if count_messages == 0:
await ctx.send(_("Sorry. I could not find any message.").format(**locals()))
return
transcript = await chat_exporter.raw_export(channel=channel, messages=messages, tz_info="UTC", guild=channel.guild, bot=ctx.bot)
file = discord.File(io.BytesIO(transcript.encode()),
filename=f"transcript-{channel.id}.html")
await ctx.send(_("Here is the html file of the transcript of part the messages in the channel {channel.mention} ({channel.id}).\nThere are {count_messages} exported messages.\nPlease note: all attachments and user avatars are saved with the Discord link in this file.\nRemember that exporting other users' messages from Discord does not respect the TOS.").format(**locals()), file=file)
await ctx.tick()
if CogsUtils().is_dpy2:
@exportchannel.command()
async def user(self, ctx: commands.Context, channel: typing.Optional[discord.TextChannel], user: typing.Union[discord.Member, RawUserIdConverter]):
"""Export part of a channel's messages to an html file.
Specify the member (id, name or mention).
Please note: all attachments and user avatars are saved with the Discord link in this file.
Remember that exporting other users' messages from Discord does not respect the TOS.
"""
async with ctx.typing():
if channel is None:
channel = ctx.channel
messages = await self.get_messages(channel=channel, user_id=user.id if isinstance(user, discord.Member) else user)
messages = [message for message in messages if not message.id == ctx.message.id]
count_messages = len(messages)
if count_messages == 0:
await ctx.send(_("Sorry. I could not find any message.").format(**locals()))
return
transcript = await chat_exporter.raw_export(channel=channel, messages=messages, tz_info="UTC", guild=channel.guild, bot=ctx.bot)
file = discord.File(io.BytesIO(transcript.encode()),
filename=f"transcript-{channel.id}.html")
await ctx.send(_("Here is the html file of the transcript of part the messages in the channel {channel.mention} ({channel.id}).\nThere are {count_messages} exported messages.\nPlease note: all attachments and user avatars are saved with the Discord link in this file.\nRemember that exporting other users' messages from Discord does not respect the TOS.").format(**locals()), file=file)
await ctx.tick()
@exportchannel.command()
async def bot(self, ctx: commands.Context, channel: typing.Optional[discord.TextChannel], bot: typing.Optional[bool]=True):
"""Export part of a channel's messages to an html file.
Specify the bool option.
Please note: all attachments and user avatars are saved with the Discord link in this file.
Remember that exporting other users' messages from Discord does not respect the TOS.
"""
async with ctx.typing():
if channel is None:
channel = ctx.channel
messages = await self.get_messages(channel=channel, bot=bot)
messages = [message for message in messages if not message.id == ctx.message.id]
count_messages = len(messages)
if count_messages == 0:
await ctx.send(_("Sorry. I could not find any message.").format(**locals()))
return
transcript = await chat_exporter.raw_export(channel=channel, messages=messages, tz_info="UTC", guild=channel.guild, bot=ctx.bot)
file = discord.File(io.BytesIO(transcript.encode()),
filename=f"transcript-{channel.id}.html")
await ctx.send(_("Here is the html file of the transcript of part the messages in the channel {channel.mention} ({channel.id}).\nThere are {count_messages} exported messages.\nPlease note: all attachments and user avatars are saved with the Discord link in this file.\nRemember that exporting other users' messages from Discord does not respect the TOS.").format(**locals()), file=file)
        await ctx.tick()
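The truncated `get_messages` helper above collects channel history until an optional message count is reached. A minimal, synchronous sketch of that collection loop (a plain list stands in for discord.py's async `channel.history`; names here are illustrative):

```python
def collect_messages(history, number=None):
    """Collect messages until `number` of them have been gathered (or history ends)."""
    messages = []
    for message in history:
        messages.append(message)
        if number is not None and number <= len(messages):
            break
    return messages

print(collect_messages(["a", "b", "c", "d"], number=2))  # ['a', 'b']
print(collect_messages([1, 2]))                          # [1, 2]
```

Passing `number=None` keeps the whole history, which matches how the `all` command exports every message.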
# --- new/area_function.py (nightkillerisded/twitched, MIT) ---
def area_circle(r):
    pi = 3.14
    return pi * (r * r)


def area_tringle(h, b):
    return (h * b) / 2


def area_square(s):
    return s * s
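A quick sanity check of the helpers in `area_function.py` (restated here so the sketch is self-contained; note the module approximates pi as 3.14 and spells "triangle" as `tringle`):

```python
def area_circle(r):
    pi = 3.14
    return pi * (r * r)


def area_tringle(h, b):  # function name kept as in the module
    return (h * b) / 2


def area_square(s):
    return s * s


print(area_circle(1))      # 3.14
print(area_tringle(3, 4))  # 6.0
print(area_square(5))      # 25
```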
# --- metrics/__init__.py (tarun-bisht/mlpipe, MIT) ---
from .metrics import Metrics, TopKAccuracyMetrics
# --- dfirtrack_main/tests/task/test_task_is_abandoned.py (dfirtrack, Apache-2.0) ---
from django.contrib.auth.models import User
from django.test import TestCase
from dfirtrack_artifacts.models import (
    Artifact,
    Artifactpriority,
    Artifactstatus,
    Artifacttype,
)
from dfirtrack_main.models import (
    Case,
    System,
    Systemstatus,
    Task,
    Taskname,
    Taskpriority,
    Taskstatus,
)

class TaskIsAbandonedTestCase(TestCase):
    """task view tests"""

    @classmethod
    def setUpTestData(cls):
        # create user
        test_user = User.objects.create_user(
            username='testuser_task_is_abandoned', password='kOlEaeHosQ2H3svhYkzv'
        )
        case_1 = Case.objects.create(
            case_name='case_1',
            case_is_incident=True,
            case_created_by_user_id=test_user,
        )
        # create object
        systemstatus_1 = Systemstatus.objects.create(systemstatus_name='systemstatus_1')
        # create object
        system_1 = System.objects.create(
            system_name='system_1',
            systemstatus=systemstatus_1,
            system_created_by_user_id=test_user,
            system_modified_by_user_id=test_user,
        )
        system_artifact = System.objects.create(
            system_name='system_artifact',
            systemstatus=systemstatus_1,
            system_created_by_user_id=test_user,
            system_modified_by_user_id=test_user,
        )
        # create object
        artifactpriority_1 = Artifactpriority.objects.create(
            artifactpriority_name='artifactpriority_1'
        )
        # create object
        artifactstatus_1 = Artifactstatus.objects.create(
            artifactstatus_name='artifactstatus_1'
        )
        # create object
        artifacttype_1 = Artifacttype.objects.create(artifacttype_name='artifacttype_1')
        # create object
        artifact_1 = Artifact.objects.create(
            artifact_name='artifact_1',
            artifactpriority=artifactpriority_1,
            artifactstatus=artifactstatus_1,
            artifacttype=artifacttype_1,
            artifact_created_by_user_id=test_user,
            artifact_modified_by_user_id=test_user,
            system=system_artifact,
        )
        # create objects
        taskname_none = Taskname.objects.create(taskname_name='taskname_none')
        taskname_artifact = Taskname.objects.create(taskname_name='taskname_artifact')
        taskname_case = Taskname.objects.create(taskname_name='taskname_case')
        taskname_system = Taskname.objects.create(taskname_name='taskname_system')
        # create object
        taskpriority_1 = Taskpriority.objects.create(taskpriority_name='prio_1')
        # create object
        taskstatus_1 = Taskstatus.objects.create(taskstatus_name='taskstatus_1')
        # create object
        Task.objects.create(
            taskname=taskname_none,
            taskpriority=taskpriority_1,
            taskstatus=taskstatus_1,
            task_created_by_user_id=test_user,
            task_modified_by_user_id=test_user,
        )
        Task.objects.create(
            taskname=taskname_artifact,
            taskpriority=taskpriority_1,
            taskstatus=taskstatus_1,
            task_created_by_user_id=test_user,
            task_modified_by_user_id=test_user,
            artifact=artifact_1,
        )
        Task.objects.create(
            taskname=taskname_case,
            taskpriority=taskpriority_1,
            taskstatus=taskstatus_1,
            task_created_by_user_id=test_user,
            task_modified_by_user_id=test_user,
            case=case_1,
        )
        Task.objects.create(
            taskname=taskname_system,
            taskpriority=taskpriority_1,
            taskstatus=taskstatus_1,
            task_created_by_user_id=test_user,
            task_modified_by_user_id=test_user,
            system=system_1,
        )

    def test_task_add_post_fk_none(self):
        """test abandoned setting"""
        # login testuser
        self.client.login(
            username='testuser_task_is_abandoned', password='kOlEaeHosQ2H3svhYkzv'
        )
        # get user
        test_user_id = User.objects.get(username='testuser_task_is_abandoned').id
        # create object
        taskname = Taskname.objects.create(taskname_name='task_add_post_fk_none')
        # get objects
        taskpriority_id = Taskpriority.objects.get(
            taskpriority_name='prio_1'
        ).taskpriority_id
        taskstatus_id = Taskstatus.objects.get(
            taskstatus_name='taskstatus_1'
        ).taskstatus_id
        # get post data
        data_dict = {
            'taskname': taskname.taskname_id,
            'taskpriority': taskpriority_id,
            'taskstatus': taskstatus_id,
            'task_created_by_user_id': test_user_id,
            'task_modified_by_user_id': test_user_id,
        }
        # get response
        self.client.post('/task/add/', data_dict)
        # compare
        self.assertTrue(Task.objects.get(taskname=taskname).task_is_abandoned)

    def test_task_add_post_fk_artifact(self):
        """test abandoned setting"""
        # login testuser
        self.client.login(
            username='testuser_task_is_abandoned', password='kOlEaeHosQ2H3svhYkzv'
        )
        # get user
        test_user_id = User.objects.get(username='testuser_task_is_abandoned').id
        # create object
        taskname = Taskname.objects.create(taskname_name='task_add_post_fk_artifact')
        # get objects
        artifact_id = Artifact.objects.get(artifact_name='artifact_1').artifact_id
        taskpriority_id = Taskpriority.objects.get(
            taskpriority_name='prio_1'
        ).taskpriority_id
        taskstatus_id = Taskstatus.objects.get(
            taskstatus_name='taskstatus_1'
        ).taskstatus_id
        # get post data
        data_dict = {
            'taskname': taskname.taskname_id,
            'artifact': artifact_id,
            'taskpriority': taskpriority_id,
            'taskstatus': taskstatus_id,
            'task_created_by_user_id': test_user_id,
            'task_modified_by_user_id': test_user_id,
        }
        # get response
        self.client.post('/task/add/', data_dict)
        # compare
        self.assertFalse(Task.objects.get(taskname=taskname).task_is_abandoned)

    def test_task_add_post_fk_case(self):
        """test abandoned setting"""
        # login testuser
        self.client.login(
            username='testuser_task_is_abandoned', password='kOlEaeHosQ2H3svhYkzv'
        )
        # get user
        test_user_id = User.objects.get(username='testuser_task_is_abandoned').id
        # create object
        taskname = Taskname.objects.create(taskname_name='task_add_post_fk_case')
        # get objects
        case_id = Case.objects.get(case_name='case_1').case_id
        taskpriority_id = Taskpriority.objects.get(
            taskpriority_name='prio_1'
        ).taskpriority_id
        taskstatus_id = Taskstatus.objects.get(
            taskstatus_name='taskstatus_1'
        ).taskstatus_id
        # get post data
        data_dict = {
            'taskname': taskname.taskname_id,
            'case': case_id,
            'taskpriority': taskpriority_id,
            'taskstatus': taskstatus_id,
            'task_created_by_user_id': test_user_id,
            'task_modified_by_user_id': test_user_id,
        }
        # get response
        self.client.post('/task/add/', data_dict)
        # compare
        self.assertFalse(Task.objects.get(taskname=taskname).task_is_abandoned)

    def test_task_add_post_fk_system(self):
        """test abandoned setting"""
        # login testuser
        self.client.login(
            username='testuser_task_is_abandoned', password='kOlEaeHosQ2H3svhYkzv'
        )
        # get user
        test_user_id = User.objects.get(username='testuser_task_is_abandoned').id
        # create object
        taskname = Taskname.objects.create(taskname_name='task_add_post_fk_system')
        # get objects
        system_id = System.objects.get(system_name='system_1').system_id
        taskpriority_id = Taskpriority.objects.get(
            taskpriority_name='prio_1'
        ).taskpriority_id
        taskstatus_id = Taskstatus.objects.get(
            taskstatus_name='taskstatus_1'
        ).taskstatus_id
        # get post data
        data_dict = {
            'taskname': taskname.taskname_id,
            'system': system_id,
            'taskpriority': taskpriority_id,
            'taskstatus': taskstatus_id,
            'task_created_by_user_id': test_user_id,
            'task_modified_by_user_id': test_user_id,
        }
        # get response
        self.client.post('/task/add/', data_dict)
        # compare
        self.assertFalse(Task.objects.get(taskname=taskname).task_is_abandoned)

    def test_task_edit_post_fk_none(self):
        """test abandoned setting"""
        # login testuser
        self.client.login(
            username='testuser_task_is_abandoned', password='kOlEaeHosQ2H3svhYkzv'
        )
        # get user
        test_user_id = User.objects.get(username='testuser_task_is_abandoned').id
        # get objects
        taskname = Taskname.objects.get(taskname_name='taskname_none')
        taskpriority_id = Taskpriority.objects.get(
            taskpriority_name='prio_1'
        ).taskpriority_id
        taskstatus_id = Taskstatus.objects.get(
            taskstatus_name='taskstatus_1'
        ).taskstatus_id
        task_none = Task.objects.get(taskname=taskname)
        artifact_id = Artifact.objects.get(artifact_name='artifact_1').artifact_id
        case_id = Case.objects.get(case_name='case_1').case_id
        system_id = System.objects.get(system_name='system_1').system_id
        # compare
        self.assertTrue(task_none.task_is_abandoned)
        # get post data
        data_dict = {
            'taskname': taskname.taskname_id,
            'artifact': artifact_id,
            'case': case_id,
            'system': system_id,
            'taskpriority': taskpriority_id,
            'taskstatus': taskstatus_id,
            'task_created_by_user_id': test_user_id,
            'task_modified_by_user_id': test_user_id,
        }
        # get response
        self.client.post(f'/task/{task_none.task_id}/edit/', data_dict)
        # refresh object
        task_none.refresh_from_db()
        # compare
        self.assertFalse(task_none.task_is_abandoned)

    def test_task_edit_post_fk_artifact(self):
        """test abandoned setting"""
        # login testuser
        self.client.login(
            username='testuser_task_is_abandoned', password='kOlEaeHosQ2H3svhYkzv'
        )
        # get user
        test_user_id = User.objects.get(username='testuser_task_is_abandoned').id
        # get objects
        taskname = Taskname.objects.get(taskname_name='taskname_artifact')
        taskpriority_id = Taskpriority.objects.get(
            taskpriority_name='prio_1'
        ).taskpriority_id
        taskstatus_id = Taskstatus.objects.get(
            taskstatus_name='taskstatus_1'
        ).taskstatus_id
        task_artifact = Task.objects.get(taskname=taskname)
        # compare
        self.assertFalse(task_artifact.task_is_abandoned)
        # get post data
        data_dict = {
            'taskname': taskname.taskname_id,
            'taskpriority': taskpriority_id,
            'taskstatus': taskstatus_id,
            'task_created_by_user_id': test_user_id,
            'task_modified_by_user_id': test_user_id,
        }
        # get response
        self.client.post(f'/task/{task_artifact.task_id}/edit/', data_dict)
        # refresh object
        task_artifact.refresh_from_db()
        # compare
        self.assertTrue(task_artifact.task_is_abandoned)

    def test_task_edit_post_fk_case(self):
        """test abandoned setting"""
        # login testuser
        self.client.login(
            username='testuser_task_is_abandoned', password='kOlEaeHosQ2H3svhYkzv'
        )
        # get user
        test_user_id = User.objects.get(username='testuser_task_is_abandoned').id
        # get objects
        taskname = Taskname.objects.get(taskname_name='taskname_case')
        taskpriority_id = Taskpriority.objects.get(
            taskpriority_name='prio_1'
        ).taskpriority_id
        taskstatus_id = Taskstatus.objects.get(
            taskstatus_name='taskstatus_1'
        ).taskstatus_id
        task_case = Task.objects.get(taskname=taskname)
        # compare
        self.assertFalse(task_case.task_is_abandoned)
        # get post data
        data_dict = {
            'taskname': taskname.taskname_id,
            'taskpriority': taskpriority_id,
            'taskstatus': taskstatus_id,
            'task_created_by_user_id': test_user_id,
            'task_modified_by_user_id': test_user_id,
        }
        # get response
        self.client.post(f'/task/{task_case.task_id}/edit/', data_dict)
        # refresh object
        task_case.refresh_from_db()
        # compare
        self.assertTrue(task_case.task_is_abandoned)

    def test_task_edit_post_fk_system(self):
        """test abandoned setting"""
        # login testuser
        self.client.login(
            username='testuser_task_is_abandoned', password='kOlEaeHosQ2H3svhYkzv'
        )
        # get user
        test_user_id = User.objects.get(username='testuser_task_is_abandoned').id
        # get objects
        taskname = Taskname.objects.get(taskname_name='taskname_system')
        taskpriority_id = Taskpriority.objects.get(
            taskpriority_name='prio_1'
        ).taskpriority_id
        taskstatus_id = Taskstatus.objects.get(
            taskstatus_name='taskstatus_1'
        ).taskstatus_id
        task_system = Task.objects.get(taskname=taskname)
        # compare
        self.assertFalse(task_system.task_is_abandoned)
        # get post data
        data_dict = {
            'taskname': taskname.taskname_id,
            'taskpriority': taskpriority_id,
            'taskstatus': taskstatus_id,
            'task_created_by_user_id': test_user_id,
            'task_modified_by_user_id': test_user_id,
        }
        # get response
        self.client.post(f'/task/{task_system.task_id}/edit/', data_dict)
        # refresh object
        task_system.refresh_from_db()
        # compare
        self.assertTrue(task_system.task_is_abandoned)

    def test_task_edit_post_delete_artifact(self):
        """test abandoned setting"""
        # login testuser
        self.client.login(
            username='testuser_task_is_abandoned', password='kOlEaeHosQ2H3svhYkzv'
        )
        # get objects
        taskname = Taskname.objects.get(taskname_name='taskname_artifact')
        task_artifact = Task.objects.get(taskname=taskname)
        artifact_1 = Artifact.objects.get(artifact_name='artifact_1')
        # compare
        self.assertFalse(task_artifact.task_is_abandoned)
        # delete object
        artifact_1.delete()
        # refresh object
        task_artifact.refresh_from_db()
        # compare
        self.assertTrue(task_artifact.task_is_abandoned)

    def test_task_edit_post_delete_case(self):
        """test abandoned setting"""
        # login testuser
        self.client.login(
            username='testuser_task_is_abandoned', password='kOlEaeHosQ2H3svhYkzv'
        )
        # get objects
        taskname = Taskname.objects.get(taskname_name='taskname_case')
        task_case = Task.objects.get(taskname=taskname)
        case_1 = Case.objects.get(case_name='case_1')
        # compare
        self.assertFalse(task_case.task_is_abandoned)
        # delete object
        case_1.delete()
        # refresh object
        task_case.refresh_from_db()
        # compare
        self.assertTrue(task_case.task_is_abandoned)

    def test_task_edit_post_delete_system(self):
        """test abandoned setting"""
        # login testuser
        self.client.login(
            username='testuser_task_is_abandoned', password='kOlEaeHosQ2H3svhYkzv'
        )
        # get objects
        taskname = Taskname.objects.get(taskname_name='taskname_system')
        task_system = Task.objects.get(taskname=taskname)
        system_1 = System.objects.get(system_name='system_1')
        # compare
        self.assertFalse(task_system.task_is_abandoned)
        # delete object
        system_1.delete()
        # refresh object
        task_system.refresh_from_db()
        # compare
        self.assertTrue(task_system.task_is_abandoned)
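Taken together, the tests above pin down one rule: a task counts as abandoned exactly when it is linked to no artifact, case, or system, and the flag is re-evaluated on edit and on deletion of the linked object. A hedged, framework-free sketch of that rule (the function name is illustrative, not DFIRTrack's actual implementation):

```python
def compute_task_is_abandoned(artifact=None, case=None, system=None):
    """A task with no artifact, case, or system link is considered abandoned."""
    return artifact is None and case is None and system is None


print(compute_task_is_abandoned())                    # True  (no links at all)
print(compute_task_is_abandoned(system="system_1"))   # False (linked to a system)
print(compute_task_is_abandoned(artifact="artifact_1", case="case_1"))  # False
```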
# --- terrascript/tls/__init__.py (vutsalsinghal/python-terrascript, BSD-2-Clause) ---
import terrascript
class tls(terrascript.Provider):
    pass
# --- Day_1_Scientific_Python/numpys/_solutions/02_dataset_intro_1.py (BSD-3-Clause) ---
a = np.array([[2, 7, 12, 0], [3, 9, 3, 4], [4, 0, 1, 3]])
print(a)
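The solution snippet assumes `numpy` has already been imported as `np` by the surrounding notebook. A self-contained version with a couple of shape checks:

```python
import numpy as np

a = np.array([[2, 7, 12, 0], [3, 9, 3, 4], [4, 0, 1, 3]])
print(a.shape)  # (3, 4): three rows of four elements each
print(a.ndim)   # 2: a nested list of lists becomes a 2-D array
```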
# --- solving problems in python/1.armstrong.py (Apache-2.0) ---
import names
print(names.get_name())
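The file is named `1.armstrong.py`, but the body shown only prints a random name via the third-party `names` package. For reference, a minimal sketch of the Armstrong-number check the filename suggests (illustrative, not part of the original solution):

```python
def is_armstrong(n: int) -> bool:
    """True if n equals the sum of its digits, each raised to the digit count."""
    digits = str(n)
    return n == sum(int(d) ** len(digits) for d in digits)


print(is_armstrong(153))   # True: 1**3 + 5**3 + 3**3 == 153
print(is_armstrong(10))    # False
```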
# --- data-hub-api/apps/migrator/tests/queries/test_create.py (uktrade/data-hub-api-old, MIT) ---
import datetime
from django.utils import timezone
from reversion import revisions as reversion
from reversion.models import Revision, Version
from cdms_api.tests.rest.utils import mocked_cdms_create
from migrator.tests.models import SimpleObj
from migrator.tests.base import BaseMockedCDMSRestApiTestCase

class CreateWithSaveTestCase(BaseMockedCDMSRestApiTestCase):
    def test_success(self):
        """
        obj.save() should create a new obj in local and cdms if it doesn't exist.
        The operation should create a revision with the change as well.
        """
        modified_on = (timezone.now() - datetime.timedelta(days=1)).replace(microsecond=0)
        cdms_id = 'brand new id'
        self.mocked_cdms_api.create.side_effect = mocked_cdms_create(
            create_data={
                'SimpleId': cdms_id,
                'ModifiedOn': modified_on
            }
        )
        self.assertNoRevisions()

        obj = SimpleObj()
        obj.name = 'simple obj'
        obj.dt_field = datetime.datetime(2016, 1, 1).replace(tzinfo=datetime.timezone.utc)
        obj.int_field = 10

        self.assertEqual(obj.cdms_pk, '')
        self.assertEqual(SimpleObj.objects.skip_cdms().count(), 0)

        obj.save()

        self.assertEqual(SimpleObj.objects.skip_cdms().count(), 1)
        self.assertEqual(obj.cdms_pk, cdms_id)
        self.assertEqual(obj.modified, modified_on)

        self.assertAPICreateCalled(
            SimpleObj, kwargs={
                'data': {
                    'Name': 'simple obj',
                    'DateTimeField': '/Date(1451606400000)/',
                    'IntField': 10,
                    'FKField': None
                }
            }
        )
        self.assertAPINotCalled(['list', 'update', 'delete', 'get'])

        # reload obj and check cdms_pk and modified
        obj = SimpleObj.objects.skip_cdms().get(pk=obj.pk)
        self.assertEqual(obj.cdms_pk, cdms_id)
        self.assertEqual(obj.modified, modified_on)

        # check versions
        self.assertEqual(Version.objects.count(), 1)
        self.assertEqual(Revision.objects.count(), 1)
        version_list = reversion.get_for_object(obj)
        self.assertEqual(len(version_list), 1)
        version = version_list[0]
        self.assertIsNotCDMSRefreshRevision(version.revision)
        version_data = version.field_dict
        self.assertEqual(version_data['cdms_pk'], obj.cdms_pk)
        self.assertEqual(version_data['modified'], obj.modified)
        self.assertEqual(version_data['created'], obj.created)

    def test_exception_triggers_rollback(self):
        """
        In case of exceptions during cdms calls, no changes should be reflected
        in the db and no revisions should be created.
        """
        self.mocked_cdms_api.create.side_effect = Exception

        obj = SimpleObj()
        obj.name = 'simple obj'

        self.assertEqual(SimpleObj.objects.skip_cdms().count(), 0)
        self.assertRaises(Exception, obj.save)
        self.assertEqual(SimpleObj.objects.skip_cdms().count(), 0)

        self.assertAPINotCalled(['list', 'update', 'delete', 'get'])
        self.assertNoRevisions()


class CreateWithManagerTestCase(BaseMockedCDMSRestApiTestCase):
    def test_success(self):
        """
        MyObject.objects.create() should create a new obj in local and cdms.
        The operation should create a revision with the change as well.
        """
        modified_on = (timezone.now() - datetime.timedelta(days=1)).replace(microsecond=0)
        cdms_id = 'brand new id'
        self.mocked_cdms_api.create.side_effect = mocked_cdms_create(
            create_data={
                'SimpleId': cdms_id,
                'ModifiedOn': modified_on
            }
        )

        self.assertEqual(SimpleObj.objects.skip_cdms().count(), 0)

        obj = SimpleObj.objects.create(name='simple obj')

        self.assertEqual(SimpleObj.objects.skip_cdms().count(), 1)
        self.assertEqual(obj.cdms_pk, cdms_id)
        self.assertEqual(obj.modified, modified_on)

        self.assertAPICreateCalled(
            SimpleObj, kwargs={
                'data': {
                    'Name': 'simple obj',
                    'DateTimeField': None,
                    'IntField': None,
                    'FKField': None
                }
            }
        )
        self.assertAPINotCalled(['list', 'update', 'delete', 'get'])

        # reload obj and check cdms_pk and modified
        obj = SimpleObj.objects.skip_cdms().get(pk=obj.pk)
        self.assertEqual(obj.cdms_pk, cdms_id)
        self.assertEqual(obj.modified, modified_on)

        # check versions
        self.assertEqual(Version.objects.count(), 1)
        self.assertEqual(Revision.objects.count(), 1)
        version_list = reversion.get_for_object(obj)
        self.assertEqual(len(version_list), 1)
        version = version_list[0]
        self.assertIsNotCDMSRefreshRevision(version.revision)
        version_data = version.field_dict
        self.assertEqual(version_data['cdms_pk'], obj.cdms_pk)
        self.assertEqual(version_data['modified'], obj.modified)
        self.assertEqual(version_data['created'], obj.created)

    def test_exception_triggers_rollback(self):
        """
        In case of exceptions during cdms calls, no changes should be reflected
        in the db and no revisions should be created.
        """
        self.mocked_cdms_api.create.side_effect = Exception

        self.assertEqual(SimpleObj.objects.skip_cdms().count(), 0)
        self.assertRaises(
            Exception,
            SimpleObj.objects.create, name='simple obj'
        )
        self.assertEqual(SimpleObj.objects.skip_cdms().count(), 0)

        self.assertAPINotCalled(['list', 'update', 'delete', 'get'])
        self.assertNoRevisions()

    def test_with_bulk_create(self):
        """
        bulk_create() not currently implemented.
        """
        self.assertEqual(SimpleObj.objects.skip_cdms().count(), 0)
        self.assertRaises(
            NotImplementedError,
            SimpleObj.objects.bulk_create,
            [
                SimpleObj(name='simple obj1'),
                SimpleObj(name='simple obj2')
            ]
        )
        self.assertNoAPICalled()
        self.assertNoRevisions()

    def test_with_bulk_create_private(self):
        """
        bulk_create() using the private django method.
        """
        self.assertRaises(
            NotImplementedError,
            SimpleObj.objects._insert,
            [
                SimpleObj(id=1000, name='simple obj1'),
                SimpleObj(id=1001, name='simple obj2')
            ], SimpleObj._meta.fields
        )


class CreateWithSaveSkipCDMSTestCase(BaseMockedCDMSRestApiTestCase):
    def test_success(self):
        """
        When calling obj.save(skip_cdms=True), changes should only happen in local, not in cdms.
        The operation should create a revision with the change as usual.
        """
        obj = SimpleObj()
        obj.name = 'simple obj'
self.assertEqual(obj.cdms_pk, '')
self.assertEqual(SimpleObj.objects.skip_cdms().count(), 0)
obj.save(skip_cdms=True)
self.assertEqual(SimpleObj.objects.skip_cdms().count(), 1)
self.assertEqual(obj.cdms_pk, '')
self.assertNoAPICalled()
# check versions
self.assertEqual(Version.objects.count(), 1)
self.assertEqual(Revision.objects.count(), 1)
class CreateWithManagerSkipCDMSTestCase(BaseMockedCDMSRestApiTestCase):
def test_with_create(self):
"""
When calling MyObject.objects.skip_cdms().create(), changes should only happen in local, not in cdms.
The operation should create a revision with the change as usual.
"""
self.assertEqual(SimpleObj.objects.skip_cdms().count(), 0)
obj = SimpleObj.objects.skip_cdms().create(name='simple obj')
self.assertEqual(SimpleObj.objects.skip_cdms().count(), 1)
self.assertEqual(obj.cdms_pk, '')
self.assertNoAPICalled()
# check versions
self.assertEqual(Version.objects.count(), 1)
self.assertEqual(Revision.objects.count(), 1)
def test_with_bulk_create(self):
"""
When calling MyObject.objects.skip_cdms().bulk_create(obj1, obj2), changes should only happen in local,
not in cdms.
The operation does NOT create any revisions as bulk_create is a low level call intended to skip all
custom and non custom logic and hit the db directly.
"""
self.assertEqual(SimpleObj.objects.skip_cdms().count(), 0)
SimpleObj.objects.skip_cdms().bulk_create([
SimpleObj(name='simple obj1'),
SimpleObj(name='simple obj2')
])
self.assertNoAPICalled()
self.assertNoRevisions() # no revisions as this is a low level call without signals
def test_create_without_objects(self):
self.assertEqual(
SimpleObj.objects.skip_cdms()._batched_insert([], None, None),
None
)
| 36.264 | 111 | 0.624752 | 984 | 9,066 | 5.623984 | 0.159553 | 0.116552 | 0.056921 | 0.0824 | 0.820383 | 0.762378 | 0.746838 | 0.739248 | 0.707445 | 0.694977 | 0 | 0.010426 | 0.27002 | 9,066 | 249 | 112 | 36.409639 | 0.825778 | 0.152989 | 0 | 0.672727 | 0 | 0 | 0.056675 | 0.002841 | 0 | 0 | 0 | 0 | 0.387879 | 1 | 0.060606 | false | 0 | 0.042424 | 0 | 0.127273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
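The rollback contract asserted in the `test_exception_triggers_rollback` tests above — a local write is discarded whenever the remote CDMS call raises — can be sketched independently of Django. The function and the dict-backed store below are hypothetical stand-ins for illustration, not part of the project's API:

```python
def create_with_remote(store, obj, remote_create):
    """Commit `obj` to the local dict-backed store only if the remote
    (CDMS-like) call succeeds; otherwise restore the snapshot and re-raise."""
    snapshot = dict(store)              # cheap "savepoint" of the local state
    try:
        remote_id = remote_create(obj)  # remote side assigns the primary key
        store[remote_id] = obj
        return remote_id
    except Exception:
        store.clear()
        store.update(snapshot)          # roll back any partial local write
        raise
```

A Django transaction plays the role of the snapshot/restore pair in the real tests; the sketch only shows why the store's count stays at zero after a failed create.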
cadabd6751f7598209e1fe27c41fcbe6341ae922 | 17 | py | Python | Jogo.py | felipemamore/TemploJedi | a2555767ab578aa075236acc94e53b7a2019ebc8 | [
"Apache-2.0"
] | null | null | null | Jogo.py | felipemamore/TemploJedi | a2555767ab578aa075236acc94e53b7a2019ebc8 | [
"Apache-2.0"
] | null | null | null | Jogo.py | felipemamore/TemploJedi | a2555767ab578aa075236acc94e53b7a2019ebc8 | [
"Apache-2.0"
] | null | null | null | from BJ import *
| 8.5 | 16 | 0.705882 | 3 | 17 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.235294 | 17 | 1 | 17 | 17 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1b0d004cbc455b31640eacb063dddd1f06255f81 | 35 | py | Python | itriage/models/__init__.py | dawei22/iTriage | fe754fce5363f01d4e42f514b83909e6d4c58de8 | [
"MIT"
] | null | null | null | itriage/models/__init__.py | dawei22/iTriage | fe754fce5363f01d4e42f514b83909e6d4c58de8 | [
"MIT"
] | null | null | null | itriage/models/__init__.py | dawei22/iTriage | fe754fce5363f01d4e42f514b83909e6d4c58de8 | [
"MIT"
] | null | null | null | from .models import BasicUserModel
| 17.5 | 34 | 0.857143 | 4 | 35 | 7.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1b462333b59893c71e94324cc56db2307aa2f15c | 35 | py | Python | fitness.py | DilipIITBHU/MLST-IMPLEMENTATION | 6ecfaab85f954171fc5aa9694a511a9e44a4ffa8 | [
"MIT"
] | 1 | 2020-02-26T17:28:37.000Z | 2020-02-26T17:28:37.000Z | fitness.py | DilipIITBHU/MLST-IMPLEMENTATION | 6ecfaab85f954171fc5aa9694a511a9e44a4ffa8 | [
"MIT"
] | null | null | null | fitness.py | DilipIITBHU/MLST-IMPLEMENTATION | 6ecfaab85f954171fc5aa9694a511a9e44a4ffa8 | [
"MIT"
] | 1 | 2020-02-26T17:29:00.000Z | 2020-02-26T17:29:00.000Z | def fitness(l1):
    return len(l1) | 17.5 | 18 | 0.657143 | 6 | 35 | 3.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 0.2 | 35 | 2 | 18 | 17.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6
1b69d3e8b70efbe5c07e05129a3778adae31896c | 1,929 | py | Python | components/micropython/modules/sha2017lite/install_hh_logo.py | badgeteam/Firmware | 6192b2902c70beb7a298a256d9087274d045fbc0 | [
"Apache-2.0"
] | 7 | 2019-02-11T10:02:14.000Z | 2019-08-02T00:08:45.000Z | components/micropython/modules/sha2017lite/install_hh_logo.py | badgeteam/Firmware | 6192b2902c70beb7a298a256d9087274d045fbc0 | [
"Apache-2.0"
] | 17 | 2019-01-05T18:02:11.000Z | 2019-03-09T21:46:43.000Z | components/micropython/modules/sha2017lite/install_hh_logo.py | badgeteam/Firmware | 6192b2902c70beb7a298a256d9087274d045fbc0 | [
"Apache-2.0"
] | 4 | 2019-02-15T16:03:20.000Z | 2019-06-27T22:23:24.000Z | import uos
logo = b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00\x80\x00\x00\x00<\x01\x03\x00\x00\x00\x94\xf8 \x86\x00\x00\x00\x06PLTE\xff\xff\xff\x00\x00\x00U\xc2\xd3~\x00\x00\x00\tpHYs\x00\x00\x0e\xc4\x00\x00\x0e\xc4\x01\x95+\x0e\x1b\x00\x00\x01\xdfIDAT(\x91\xb5\xd2\xb1n\xdb0\x10\x06\xe0\x9f\x95k\xb2\x80[\xc9\xedP\x01U-\xf5\r\xec\xcd\x83\x11\xa9\xe8\x8b\xf8\x11\xe2-\x9b\xe81@\x83\xbe\x81\xf2*22\xb4\x05\x8a\xce\x1d\xe5d(\xba)[\x06A\xd7;J\xb2\x82"k\t\x88"?\x1c\x8f\xd4Q\xd0\xa5\xae\x90S\x83\x18y\rn&\xd1K\x9cc\x8a\xa4<\x0f\x05\x96\x01\xfc\x06S\x0f\xd9\x16_\x88\x80m\x04hja\xb3PYh \x8b$n\xca\xe0\x93\x95a\x86\xae\xd9\x0c\xaa\x94\xc1\xc7J_\xd3=Qq8\x12\xc5T\x8fp\xd3\xc3\x87\x01>\x1d:\x98\x0f\xa0\xf7\x1d\xbc\xa8t\x81]\x85\xc2\xdf\x97\x88\xc0@\xc4\x11\x15\xaeI\xf5\x11\x03\xb4=\xfc\xe9`u\x82\xef\x1d\xe4\xf1\x00\x97\x1d\x9c\xc5\x9c\xa3\x8d\xa9\xc2e\xbf\xcb\xc2\xe7\x93G\xfeg\x07F@\xcb\xb6\xfa\x07\xaenu\x11\xee\x88\x16\x9aO\x1ay?q\xf5[\x17k\x01\xef(p\xc4\x9e#\xb6\x02J@\x1dao\xf5"\xdb\xc9\xf8\xdeu\xc8&pO\xdf\x14\x12\x9e\x94Z\xaa\xeb\x13\xd9\xb4E\xfc\xa0\xf6v\xca%\xc4+\x0eu\x05U\xad\xf4$\x05M-\xc6\xc5%\xfegS5\xdc\x06j\xd8r\x1a\xc2\xdd\xb2G=\xcc&(9\xc4S\xd6vW\x1f\xf2?\x91\x92|\x0c\xcf*\x86\xc0s\x8b=\xe6\xb6\x83w8P\x13\xd3\r5+,\x056\xea\x8e\xda\x94\xbeRsa\xc3\x13\xe4\x0e2#I\x1fA"`\x1e-\t\x05&\x0cv\x81Z[\\D\x02\xcfRb0\xb5\xe6\x88\x99\xc0\xdcA\xe4\x00\x02\xab\xa7!\x19a\xee\x92&.\xe9\xd3\xf0\xe6_x\xbby\xc9\x07[J\x8e\x9c\xbe9x\xcf\x10\x8e\xf0\xdaE\x98\x11\x82\xcd\xf3;jz\xf8%\xe02\x99\xae\xc0\x81\x1b\x06H\x1f\x0c4\x97/\x97\x12j\xaa\xcbrm\xd6\x86\xab\xb5\xedo\x82&\x08\xce\x02\xdd\xccN\xb7\xe7S\xad*x\xed\tf\xe3\xeb/O\x13NGB\xcf\x87.\x00\x00\x00\x00IEND\xaeB`\x82'
try:
    uos.mkdir('/media')
except:
    pass

media = uos.listdir('/media')
if "hackerhotel.png" not in media:
    try:
        f = open("/media/hackerhotel.png", 'wb')
        f.write(logo)
        f.close()
        print("Logo installed.")
    except:
        print("Could not install logo.")
| 96.45 | 1,660 | 0.722136 | 423 | 1,929 | 3.286052 | 0.56974 | 0.077698 | 0.045324 | 0.017266 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.233884 | 0.026957 | 1,929 | 19 | 1,661 | 101.526316 | 0.50666 | 0 | 0 | 0.266667 | 0 | 0.066667 | 0.900985 | 0.865215 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.066667 | 0.066667 | 0 | 0.066667 | 0.133333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
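The MicroPython script above wraps `uos.mkdir` in a bare try/except because `os.makedirs(..., exist_ok=True)` is not available on that platform. On CPython the same idempotent-setup idiom is usually written as follows (a standalone sketch with a made-up helper name, not part of the badge firmware):

```python
import os
import tempfile


def ensure_media_dir(root):
    """Create <root>/media if it does not exist yet and report whether
    the logo file is already present in it."""
    media = os.path.join(root, "media")
    os.makedirs(media, exist_ok=True)   # no-op when the directory exists
    return "hackerhotel.png" in os.listdir(media)


root = tempfile.mkdtemp()
print(ensure_media_dir(root))   # False: directory created, logo absent
```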
1ba66d586fc81708257c6be12d40c993993ae421 | 2,306 | py | Python | file_sockets.py | programmingrakesh/file_server | 05ebaa7e0fd5f168729b4ab41e1d96a19ee758d1 | [
"MIT"
] | null | null | null | file_sockets.py | programmingrakesh/file_server | 05ebaa7e0fd5f168729b4ab41e1d96a19ee758d1 | [
"MIT"
] | null | null | null | file_sockets.py | programmingrakesh/file_server | 05ebaa7e0fd5f168729b4ab41e1d96a19ee758d1 | [
"MIT"
] | null | null | null | import socket
class Server:
    def __init__(self, IP, PORT, SIZE, FORMAT):
        self.IP = IP
        self.PORT = PORT
        self.SIZE = SIZE
        self.FORMAT = FORMAT
        self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.server.bind((self.IP, self.PORT))
        self.server.listen()

    def connect_client(self):
        self.conn, addr = self.server.accept()

    def recv(self):
        msg = self.conn.recv(self.SIZE).decode(self.FORMAT)
        print(msg)

    def send(self, MESSAGE):
        self.conn.send(MESSAGE.encode(self.FORMAT))

    def recv_file(self, FILEPATH):
        msg = self.conn.recv(self.SIZE).decode(self.FORMAT)
        msg = msg.split("@")
        PATH = msg[1]
        content = msg[0]
        PATH = PATH.split("\\")
        PATH = PATH[-1]
        PATH = f'{FILEPATH}\\{PATH}'
        # the with-block closes the file; the original's bare `f.close`
        # (no parentheses) was a no-op
        with open(PATH, 'w') as f:
            f.write(content)

    def send_file(self, PATH):
        with open(PATH, 'r') as f:
            content = f.read()
        content = f'{content}@{PATH}'
        self.conn.send(content.encode(self.FORMAT))


class Client:
    def __init__(self, IP, PORT, SIZE, FORMAT):
        self.IP = IP
        self.PORT = PORT
        self.SIZE = SIZE
        self.FORMAT = FORMAT
        self.client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    def connect_server(self):
        self.client.connect((self.IP, self.PORT))

    def recv(self):
        msg = self.client.recv(self.SIZE).decode(self.FORMAT)
        print(msg)

    def send(self, MESSAGE):
        self.client.send(MESSAGE.encode(self.FORMAT))

    def send_file(self, PATH):
        with open(PATH, 'r') as f:
            content = f.read()
        content = f'{content}@{PATH}'
        self.client.send(content.encode(self.FORMAT))

    def recv_file(self, FILEPATH):
        msg = self.client.recv(self.SIZE).decode(self.FORMAT)
        msg = msg.split("@")
        PATH = msg[1]
        content = msg[0]
        PATH = PATH.split("\\")
        PATH = PATH[-1]
        PATH = f'{FILEPATH}\\{PATH}'
        # as above, rely on the with-block to close the file
        with open(PATH, 'w') as f:
            f.write(content)
| 28.121951 | 72 | 0.524284 | 283 | 2,306 | 4.208481 | 0.159011 | 0.083963 | 0.033585 | 0.060453 | 0.824517 | 0.774139 | 0.742233 | 0.742233 | 0.675063 | 0.646516 | 0 | 0.003942 | 0.339983 | 2,306 | 81 | 73 | 28.469136 | 0.778581 | 0 | 0 | 0.761905 | 0 | 0 | 0.035056 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.190476 | false | 0 | 0.015873 | 0 | 0.238095 | 0.031746 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
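The `send_file`/`recv_file` pair above frames each transfer as `<file contents>@<source path>` and keeps only the last backslash-separated path component on the receiving side. A standalone sketch of just that framing (hypothetical helper names; note the scheme breaks if the content itself contains an `@`):

```python
def frame(path, content):
    # sender side: append the source path after an '@' separator
    return f'{content}@{path}'


def unframe(msg, dest_dir):
    # receiver side: split off the path and keep only its basename,
    # mirroring recv_file's PATH.split("\\")[-1]
    content, path = msg.split('@')
    name = path.split('\\')[-1]
    return f'{dest_dir}\\{name}', content
```

A length-prefixed or delimiter-escaped protocol would be more robust; the sketch only reproduces the convention the classes actually use.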
94642d1068c4de350bedb31b184793dbfc429092 | 195 | py | Python | IEProtLib/mol/__init__.py | luwei0917/IEConv_proteins | 9c79ea000c20088fa48234f1868e42883a9b5a21 | [
"MIT"
] | 24 | 2021-03-09T02:42:12.000Z | 2022-03-25T23:48:14.000Z | IEProtLib/mol/__init__.py | luwei0917/IEConv_proteins | 9c79ea000c20088fa48234f1868e42883a9b5a21 | [
"MIT"
] | 1 | 2021-11-05T20:06:16.000Z | 2021-11-05T20:06:16.000Z | IEProtLib/mol/__init__.py | luwei0917/IEConv_proteins | 9c79ea000c20088fa48234f1868e42883a9b5a21 | [
"MIT"
] | 8 | 2021-05-21T14:07:56.000Z | 2022-01-24T09:52:42.000Z | from .Molecule import Molecule
from .Molecule import MoleculePH
from .Protein import Protein
from .Protein import ProteinPH
from .MolConv import MolConv
from .MolConvBuilder import MolConvBuilder | 32.5 | 42 | 0.851282 | 24 | 195 | 6.916667 | 0.333333 | 0.144578 | 0.216867 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117949 | 195 | 6 | 42 | 32.5 | 0.965116 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
948954f556d0e42503cb6ab7e4713acb84196764 | 160 | py | Python | igem2017/init.py | StrickerLee/SYSU-Software-2017 | 626cc24d347c5525edb8831a41f6e155a9050ab2 | [
"MIT"
] | 40 | 2017-11-02T03:45:21.000Z | 2020-07-03T09:05:16.000Z | igem2017/init.py | StrickerLee/SYSU-Software-2017 | 626cc24d347c5525edb8831a41f6e155a9050ab2 | [
"MIT"
] | 2 | 2020-02-11T23:35:36.000Z | 2020-06-05T17:33:42.000Z | igem2017/init.py | StrickerLee/SYSU-Software-2017 | 626cc24d347c5525edb8831a41f6e155a9050ab2 | [
"MIT"
] | 9 | 2017-11-02T12:35:07.000Z | 2020-02-25T13:30:46.000Z | from sdin.tools.pre_load_data import *
from os.path import join
pre_load_data(join('sdin', 'tools', 'preload'), join("static", "img", "Team_img", "none.jpg"))
| 32 | 94 | 0.70625 | 26 | 160 | 4.153846 | 0.615385 | 0.166667 | 0.203704 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 160 | 4 | 95 | 40 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.257862 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
84b80c61eb279ab56d888228acf791dc5b45fd3e | 52 | py | Python | src/__init__.py | Smtihy305/surface | 583ead8df41684485a47a38ff9c174e1a4565876 | [
"MIT"
] | null | null | null | src/__init__.py | Smtihy305/surface | 583ead8df41684485a47a38ff9c174e1a4565876 | [
"MIT"
] | null | null | null | src/__init__.py | Smtihy305/surface | 583ead8df41684485a47a38ff9c174e1a4565876 | [
"MIT"
] | null | null | null | import src.common
import src.gui
from .log import *
| 13 | 18 | 0.769231 | 9 | 52 | 4.444444 | 0.666667 | 0.45 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 52 | 3 | 19 | 17.333333 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
84c3ceec722402da3e72df93b01844d9c2e94406 | 116 | py | Python | main/test/test_no_db_creation/settings.py | taekwan-hwang/infocom_notice | f45f6608459e6124d7315725ebaa8144a23ea4fe | [
"MIT"
] | 6 | 2018-02-25T14:08:03.000Z | 2018-03-05T14:39:42.000Z | main/test/test_no_db_creation/settings.py | taekwan-hwang/infocom_notice | f45f6608459e6124d7315725ebaa8144a23ea4fe | [
"MIT"
] | 2 | 2018-02-28T02:12:58.000Z | 2018-03-05T02:39:19.000Z | main/test/test_no_db_creation/settings.py | taekwan-hwang/infocom_notice | f45f6608459e6124d7315725ebaa8144a23ea4fe | [
"MIT"
] | null | null | null | from mysite.settings import *
TEST_RUNNER='boing.test_no_db_creation.test_runner_without_db_creation.NoDBTestRunner' | 58 | 86 | 0.896552 | 17 | 116 | 5.647059 | 0.705882 | 0.208333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034483 | 116 | 2 | 86 | 58 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0.615385 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
ca1355e9ebc68744cd3fd29ec6cd649da9637559 | 108 | py | Python | server/websockets/consumers/world/broadcasts/__init__.py | nking1232/html5-msoy | 6e026f1989b15310ad67c050beb69a168c3bdd5f | [
"MIT"
] | null | null | null | server/websockets/consumers/world/broadcasts/__init__.py | nking1232/html5-msoy | 6e026f1989b15310ad67c050beb69a168c3bdd5f | [
"MIT"
] | null | null | null | server/websockets/consumers/world/broadcasts/__init__.py | nking1232/html5-msoy | 6e026f1989b15310ad67c050beb69a168c3bdd5f | [
"MIT"
] | 2 | 2020-12-18T19:19:38.000Z | 2020-12-18T19:53:56.000Z | from .message import broadcast_message
from .avatar import broadcast_avatar_position, broadcast_avatar_state | 54 | 69 | 0.898148 | 14 | 108 | 6.571429 | 0.5 | 0.326087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074074 | 108 | 2 | 69 | 54 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ca24dac560de62ccafafa671ad77b5866b5a71c9 | 155 | py | Python | ummon/transformations/imagetransforms/__init__.py | matherm/ummon3 | 08476d21ce17cc95180525d48202a1690dfc8a08 | [
"BSD-3-Clause"
] | 1 | 2022-02-10T06:47:13.000Z | 2022-02-10T06:47:13.000Z | ummon/transformations/imagetransforms/__init__.py | matherm/ummon3 | 08476d21ce17cc95180525d48202a1690dfc8a08 | [
"BSD-3-Clause"
] | null | null | null | ummon/transformations/imagetransforms/__init__.py | matherm/ummon3 | 08476d21ce17cc95180525d48202a1690dfc8a08 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
from .binarize import *
from .flatten import *
from .embedd_in_empty import *
from .gray_to_rgb import *
from .rgb_to_gray import * | 25.833333 | 30 | 0.722581 | 24 | 155 | 4.416667 | 0.541667 | 0.377358 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007634 | 0.154839 | 155 | 6 | 31 | 25.833333 | 0.801527 | 0.135484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ca272edf60a90340a11f416c43799e418242fd2c | 93 | py | Python | lss_likelihood/joint_boss_likelihoods.py | LBJ-Wade/CobayaLSS | faa233a31cf1fba120258ebd143b1c92c9e13135 | [
"MIT"
] | 1 | 2021-12-14T07:29:17.000Z | 2021-12-14T07:29:17.000Z | lss_likelihood/joint_boss_likelihoods.py | LBJ-Wade/CobayaLSS | faa233a31cf1fba120258ebd143b1c92c9e13135 | [
"MIT"
] | null | null | null | lss_likelihood/joint_boss_likelihoods.py | LBJ-Wade/CobayaLSS | faa233a31cf1fba120258ebd143b1c92c9e13135 | [
"MIT"
] | 1 | 2021-12-14T07:29:18.000Z | 2021-12-14T07:29:18.000Z | from joint_likelihood_zs import JointLikelihood
class NGCZ3_joint(JointLikelihood):
    pass | 23.25 | 47 | 0.849462 | 11 | 93 | 6.909091 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012195 | 0.11828 | 93 | 4 | 48 | 23.25 | 0.914634 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6
ca2c9db5c162b6f2020c591f19c394e5b3bee1fd | 184 | py | Python | gloro/__init__.py | klasleino/gloro | 5ebfe0f3850bca20e4ee4414fa2ee8a4af303023 | [
"MIT"
] | 16 | 2021-02-17T15:06:07.000Z | 2022-03-28T19:08:54.000Z | gloro/__init__.py | klasleino/gloro | 5ebfe0f3850bca20e4ee4414fa2ee8a4af303023 | [
"MIT"
] | 1 | 2021-11-30T15:49:31.000Z | 2021-12-06T20:28:49.000Z | gloro/__init__.py | klasleino/gloro | 5ebfe0f3850bca20e4ee4414fa2ee8a4af303023 | [
"MIT"
] | 1 | 2021-06-20T06:34:51.000Z | 2021-06-20T06:34:51.000Z | __version__ = '1.1.0'
import gloro.constants
from gloro.models import GloroNet
from gloro.relaxations.models import AffinityGloroNet
from gloro.relaxations.models import RtkGloroNet
| 23 | 53 | 0.836957 | 24 | 184 | 6.25 | 0.5 | 0.18 | 0.266667 | 0.346667 | 0.426667 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018182 | 0.103261 | 184 | 7 | 54 | 26.285714 | 0.890909 | 0 | 0 | 0 | 0 | 0 | 0.027174 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ca485cdb2694110b6161e8a239dcc9fc1b68f47e | 3,282 | py | Python | smart_meter.py | ran-sama/python_ehz_smart_meter | f9bd24edaf360916c59a3c948f0334d1414b3476 | [
"WTFPL"
] | 1 | 2019-04-23T21:28:02.000Z | 2019-04-23T21:28:02.000Z | smart_meter.py | ran-sama/python_ehz_smart_meter | f9bd24edaf360916c59a3c948f0334d1414b3476 | [
"WTFPL"
] | null | null | null | smart_meter.py | ran-sama/python_ehz_smart_meter | f9bd24edaf360916c59a3c948f0334d1414b3476 | [
"WTFPL"
] | null | null | null | start = '1b1b1b1b01010101'
stop = '1b1b1b1b1a'
data = ''
runs = 0
result = ''

# NB: this is Python 2 code (print statements, str.encode('HEX'));
# the deprecated `<>` operator has been replaced with `!=` below.
while runs < 1:
    char = open("smart_meter.log", "r")
    data = data + char.read().encode('HEX')
    offset = data.find(start)
    if offset != -1:
        data = data[offset:len(data)]
        offset = data.find(stop)
        if offset != -1:
            search = '070100010800ff'
            offset = data.find(search)
            if offset != -1:
                offset = offset + len(search) + 22
                hex_value = data[offset:offset + 16]
                dec_value = int(hex_value, 16) / 10000
                print 'Active energy: ' + str(dec_value) + ' kWh'
                result = result + ';' + str(dec_value)

            search = '070100010801ff'
            offset = data.find(search)
            if offset != -1:
                offset = offset + len(search) + 14
                hex_value = data[offset:offset + 16]
                dec_value = int(hex_value, 16) / 10000
                print 'Active energy - Pricing 1: ' + str(dec_value) + ' kWh'
                result = result + ';' + str(dec_value)

            search = '070100010802ff'
            offset = data.find(search)
            if offset != -1:
                offset = offset + len(search) + 14
                hex_value = data[offset:offset + 16]
                dec_value = int(hex_value, 16) / 10000
                print 'Active energy - Pricing 2: ' + str(dec_value) + ' kWh'
                result = result + ';' + str(dec_value)

            search = '0701000f0700ff'
            offset = data.find(search)
            if offset != -1:
                offset = offset + len(search) + 14
                hex_value = data[offset:offset + 8]
                dec_value = int(hex_value, 16)
                print 'Active power: ' + str(dec_value) + ' W'
                result = result + ';' + str(dec_value)

            search = '070100150700ff'
            offset = data.find(search)
            if offset != -1:
                offset = offset + len(search) + 14
                hex_value = data[offset:offset + 8]
                dec_value = int(hex_value, 16)
                print 'Active power - L1: ' + str(dec_value) + ' W'
                result = result + ';' + str(dec_value)

            search = '070100290700ff'
            offset = data.find(search)
            if offset != -1:
                offset = offset + len(search) + 14
                hex_value = data[offset:offset + 8]
                dec_value = int(hex_value, 16)
                print 'Active power - L2: ' + str(dec_value) + ' W'
                result = result + ';' + str(dec_value)

            search = '0701003d0700ff'
            offset = data.find(search)
            if offset != -1:
                offset = offset + len(search) + 14
                hex_value = data[offset:offset + 8]
                dec_value = int(hex_value, 16)
                print 'Active power - L3: ' + str(dec_value) + ' W'
                result = result + ';' + str(dec_value)

            search = '070100000009ff'
            offset = data.find(search)
            if offset != -1:
                offset = offset + len(search) + 20
                hex_value = data[offset:offset + 6]
                dec_value = int(hex_value, 16)
                print 'Smart-Meter-ID: ' + str(dec_value)
                result = result + ';' + str(dec_value)

            data = ''
            runs = 1
| 36.876404 | 74 | 0.497867 | 359 | 3,282 | 4.437326 | 0.153203 | 0.120527 | 0.110483 | 0.100439 | 0.7828 | 0.753296 | 0.753296 | 0.736974 | 0.736974 | 0.736974 | 0 | 0.090909 | 0.373248 | 3,282 | 88 | 75 | 37.295455 | 0.68352 | 0 | 0 | 0.620253 | 0 | 0 | 0.106763 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.101266 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
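Each block in the script above locates an OBIS marker (e.g. `070100010800ff` for total active energy) in the hex dump of an SML telegram, skips a fixed header offset, and decodes a fixed-width hex integer, optionally scaling it. A Python 3 sketch of one such lookup; the helper name and the synthetic telegram are illustrative, and the offsets simply mirror the script's assumptions about this particular meter:

```python
def read_obis(hex_data, marker, skip, width, scale=1):
    """Return the scaled value that follows `marker`, or None if absent."""
    offset = hex_data.find(marker)
    if offset == -1:
        return None
    offset += len(marker) + skip          # skip the OBIS header bytes
    return int(hex_data[offset:offset + width], 16) / scale


# synthetic fragment: marker + 14 filler hex chars + 16-char value field
telegram = '070100010800ff' + '00' * 7 + '%016x' % 123456789
print(read_obis(telegram, '070100010800ff', 14, 16, 10000))  # 12345.6789
```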
04d34f96716a2399ef08da9737cff1ff9e5a614f | 73 | py | Python | src/gmmreg-python/src/__init__.py | lpbsscientist/targettrack | dbe261f84c60b5beb1de28ef88693fe56bff0ac6 | [
"MIT"
] | 34 | 2019-11-23T03:50:38.000Z | 2022-01-30T17:23:34.000Z | src/gmmreg-python/src/__init__.py | lpbsscientist/targettrack | dbe261f84c60b5beb1de28ef88693fe56bff0ac6 | [
"MIT"
] | 2 | 2020-12-15T12:21:49.000Z | 2021-10-16T23:06:17.000Z | src/gmmreg-python/src/__init__.py | lpbsscientist/targettrack | dbe261f84c60b5beb1de28ef88693fe56bff0ac6 | [
"MIT"
] | 7 | 2020-08-06T13:09:42.000Z | 2022-02-05T03:10:58.000Z | #!/usr/bin/env python
#coding=utf-8
from ._run_config import run_config
| 14.6 | 35 | 0.767123 | 13 | 73 | 4.076923 | 0.846154 | 0.339623 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015385 | 0.109589 | 73 | 4 | 36 | 18.25 | 0.8 | 0.438356 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b6e5c838e5f49624d489aab5a13a554f0ba24f47 | 107 | py | Python | bionorm/normalizers/gene/GNormPlus/processing/__init__.py | utikeev/bio-normalizers | d7234d8ce01687d24f0f5bbba63a59eb87474bbb | [
"MIT"
] | null | null | null | bionorm/normalizers/gene/GNormPlus/processing/__init__.py | utikeev/bio-normalizers | d7234d8ce01687d24f0f5bbba63a59eb87474bbb | [
"MIT"
] | null | null | null | bionorm/normalizers/gene/GNormPlus/processing/__init__.py | utikeev/bio-normalizers | d7234d8ce01687d24f0f5bbba63a59eb87474bbb | [
"MIT"
] | null | null | null | from .normalization import *
from .paper_processing import *
from .scoring import *
from .species import *
| 21.4 | 31 | 0.775701 | 13 | 107 | 6.307692 | 0.538462 | 0.365854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.149533 | 107 | 4 | 32 | 26.75 | 0.901099 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8e1b03774563f733ddb95b380cc9ae462d40e33e | 71 | py | Python | Aula09/Alunos/Services/ListarAlunos.py | Luclujan7198/Ac8_aplic_Distribuidas | a8589b85415b2e535c3c7682b3bb631411492547 | [
"Unlicense"
] | null | null | null | Aula09/Alunos/Services/ListarAlunos.py | Luclujan7198/Ac8_aplic_Distribuidas | a8589b85415b2e535c3c7682b3bb631411492547 | [
"Unlicense"
] | null | null | null | Aula09/Alunos/Services/ListarAlunos.py | Luclujan7198/Ac8_aplic_Distribuidas | a8589b85415b2e535c3c7682b3bb631411492547 | [
"Unlicense"
] | null | null | null | from Models.Alunos import Alunos
def ListarAlunos():
    return Alunos | 17.75 | 32 | 0.774648 | 9 | 71 | 6.111111 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169014 | 71 | 4 | 33 | 17.75 | 0.932203 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 6
6d081b7d909f8d1f10d641df96742aa645b8405f | 88 | py | Python | torchsat/models/__init__.py | monocilindro/torchsat | 5ac62e1aa9fee1d7a5a4a58914c128cf8e18cc09 | [
"MIT"
] | 316 | 2019-08-14T11:56:13.000Z | 2022-03-31T06:15:50.000Z | torchsat/models/__init__.py | monocilindro/torchsat | 5ac62e1aa9fee1d7a5a4a58914c128cf8e18cc09 | [
"MIT"
] | 8 | 2019-10-07T20:16:08.000Z | 2021-09-03T18:09:20.000Z | torchsat/models/__init__.py | monocilindro/torchsat | 5ac62e1aa9fee1d7a5a4a58914c128cf8e18cc09 | [
"MIT"
] | 49 | 2019-08-14T11:55:22.000Z | 2022-01-31T16:43:41.000Z | from .classification import *
from .segmentation import *
# from .detection import *
| 22 | 30 | 0.738636 | 9 | 88 | 7.222222 | 0.555556 | 0.307692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 88 | 3 | 31 | 29.333333 | 0.902778 | 0.272727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6d3ace86db7b58311e7b9b23cdcc2f39cd5b6978 | 1,119 | py | Python | autokeras/nn/metric.py | S4iz/beta | 21994919156aac15558f77555538346fb702bcbc | [
"MIT"
] | 1 | 2019-06-12T17:02:44.000Z | 2019-06-12T17:02:44.000Z | autokeras/nn/metric.py | S4iz/beta | 21994919156aac15558f77555538346fb702bcbc | [
"MIT"
] | 4 | 2018-10-23T13:08:03.000Z | 2018-10-23T13:18:22.000Z | autokeras/nn/metric.py | S4iz/beta | 21994919156aac15558f77555538346fb702bcbc | [
"MIT"
] | 2 | 2018-11-12T19:43:31.000Z | 2018-11-26T08:14:32.000Z | from abc import abstractmethod
from sklearn.metrics import accuracy_score, mean_squared_error
class Metric:
    @classmethod
    @abstractmethod
    def higher_better(cls):
        pass

    @classmethod
    @abstractmethod
    def compute(cls, prediction, target):
        pass

    @classmethod
    @abstractmethod
    def evaluate(cls, prediction, target):
        pass


class Accuracy(Metric):
    @classmethod
    def higher_better(cls):
        return True

    @classmethod
    def compute(cls, prediction, target):
        prediction = list(map(lambda x: x.argmax(), prediction))
        target = list(map(lambda x: x.argmax(), target))
        return cls.evaluate(prediction, target)

    @classmethod
    def evaluate(cls, prediction, target):
        return accuracy_score(prediction, target)


class MSE(Metric):
    @classmethod
    def higher_better(cls):
        return False

    @classmethod
    def compute(cls, prediction, target):
        return cls.evaluate(prediction, target)

    @classmethod
    def evaluate(cls, prediction, target):
        return mean_squared_error(prediction, target)
| 21.519231 | 64 | 0.671135 | 119 | 1,119 | 6.235294 | 0.268908 | 0.237197 | 0.153639 | 0.072776 | 0.578167 | 0.498652 | 0.342318 | 0.231806 | 0.231806 | 0.231806 | 0 | 0 | 0.244861 | 1,119 | 51 | 65 | 21.941176 | 0.878107 | 0 | 0 | 0.702703 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.243243 | false | 0.081081 | 0.054054 | 0.135135 | 0.540541 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
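`Accuracy.compute` above argmaxes each one-hot/probability row before delegating to `sklearn.metrics.accuracy_score`. The same reduction can be sketched in pure Python — a list-based `argmax` stands in for the ndarray method so the sketch runs without the numpy/sklearn dependency (helper names are assumptions):

```python
def argmax(row):
    # index of the largest entry, like ndarray.argmax()
    return max(range(len(row)), key=row.__getitem__)


def accuracy(predictions, targets):
    """Fraction of rows whose argmax matches the target's argmax."""
    labels = [argmax(p) for p in predictions]
    truth = [argmax(t) for t in targets]
    hits = sum(1 for a, b in zip(labels, truth) if a == b)
    return hits / len(truth)
```

With `predictions = [[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]]` and all targets `[0, 1]`, two of three argmaxes match, so the accuracy is 2/3.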
edb52223146700369ebce20d7a3ec99343794e4d | 8,684 | py | Python | nslsii/tests/test_logutils.py | bruceravel/nslsii | 75a365ea79b65938348d79c64a6aeecb572d4c95 | [
"BSD-3-Clause"
] | null | null | null | nslsii/tests/test_logutils.py | bruceravel/nslsii | 75a365ea79b65938348d79c64a6aeecb572d4c95 | [
"BSD-3-Clause"
] | null | null | null | nslsii/tests/test_logutils.py | bruceravel/nslsii | 75a365ea79b65938348d79c64a6aeecb572d4c95 | [
"BSD-3-Clause"
] | null | null | null | import os
from pathlib import Path
import shutil
import stat
from unittest.mock import MagicMock
import appdirs
import IPython.core.interactiveshell
import pytest
from nslsii import configure_bluesky_logging, configure_ipython_logging
from nslsii.common.ipynb.logutils import log_exception
def test_configure_bluesky_logging(tmpdir):
"""
Set environment variable BLUESKY_LOG_FILE and assert the log
file is created.
"""
log_file_path = Path(tmpdir) / Path("bluesky.log")
ip = IPython.core.interactiveshell.InteractiveShell()
os.environ["BLUESKY_LOG_FILE"] = str(log_file_path)
bluesky_log_file_path = configure_bluesky_logging(ipython=ip,)
assert bluesky_log_file_path == log_file_path
assert log_file_path.exists()
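`configure_bluesky_logging` itself is outside this chunk; a rough stdlib approximation of the contract the tests pin down (the `BLUESKY_LOG_FILE` env var wins, the log file is created eagerly, a missing parent directory surfaces as `FileNotFoundError` and a read-only one as `PermissionError`) might look like the sketch below. `configure_logging` is a hypothetical name, not the real nslsii function.

```python
import logging
import os
import tempfile
from pathlib import Path

def configure_logging(env_var="BLUESKY_LOG_FILE", default_name="bluesky.log"):
    # The env var takes precedence over the default file name. FileHandler
    # opens (and creates) the file immediately, so a missing parent dir
    # raises FileNotFoundError and an unwritable one raises PermissionError.
    log_file_path = Path(os.environ.get(env_var, default_name))
    handler = logging.FileHandler(log_file_path)
    logging.getLogger("bluesky_sketch").addHandler(handler)
    return log_file_path

with tempfile.TemporaryDirectory() as tmpdir:
    os.environ["BLUESKY_LOG_FILE"] = str(Path(tmpdir) / "bluesky.log")
    path = configure_logging()
    assert path.exists()
    # Close the handler so the temporary directory can be removed cleanly
    for h in logging.getLogger("bluesky_sketch").handlers:
        h.close()
```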
def test_configure_bluesky_logging_with_nonexisting_dir(tmpdir):
"""
Set environment variable BLUESKY_LOG_FILE to include a directory
that does not exist. Assert an exception is raised.
"""
log_dir = Path(tmpdir) / Path("does_not_exist")
log_file_path = log_dir / Path("bluesky.log")
ip = IPython.core.interactiveshell.InteractiveShell()
os.environ["BLUESKY_LOG_FILE"] = str(log_file_path)
with pytest.raises(FileNotFoundError):
configure_bluesky_logging(ipython=ip,)
def test_configure_bluesky_logging_with_unwriteable_dir(tmpdir):
"""
Set environment variable BLUESKY_LOG_FILE to include a directory
that is not writeable. Assert an exception is raised.
"""
log_dir = Path(tmpdir)
log_file_path = log_dir / Path("bluesky.log")
# make the log_dir read-only to force an exception
log_dir.chmod(mode=stat.S_IREAD)
ip = IPython.core.interactiveshell.InteractiveShell()
os.environ["BLUESKY_LOG_FILE"] = str(log_file_path)
with pytest.raises(PermissionError):
configure_bluesky_logging(ipython=ip,)
def test_configure_bluesky_logging_creates_default_dir():
"""
Remove environment variable BLUESKY_LOG_FILE and test that
the default log file path is created. This test creates a
directory rather than using pytest's tmp_path so the test
must clean up at the end.
"""
test_appname = "bluesky-test"
log_dir = Path(appdirs.user_log_dir(appname=test_appname))
# remove log_dir if it exists to test that it will be created
if log_dir.exists():
shutil.rmtree(path=log_dir)
log_file_path = log_dir / Path("bluesky.log")
ip = IPython.core.interactiveshell.InteractiveShell()
os.environ.pop("BLUESKY_LOG_FILE", default=None)
bluesky_log_file_path = configure_bluesky_logging(
ipython=ip, appdirs_appname=test_appname
)
assert bluesky_log_file_path == log_file_path
assert log_file_path.exists()
# clean up the file and directory this test creates
bluesky_log_file_path.unlink()
bluesky_log_file_path.parent.rmdir()
def test_configure_bluesky_logging_existing_default_dir():
"""
Remove environment variable BLUESKY_LOG_FILE and test that
the default log file path is used. This test creates a
directory rather than using pytest's tmp_path so the test
must clean up at the end.
"""
test_appname = "bluesky-test"
log_dir = Path(appdirs.user_log_dir(appname=test_appname))
# create the default log directory
log_dir.mkdir(parents=True, exist_ok=True)
log_file_path = log_dir / Path("bluesky.log")
ip = IPython.core.interactiveshell.InteractiveShell()
os.environ.pop("BLUESKY_LOG_FILE", default=None)
bluesky_log_file_path = configure_bluesky_logging(
ipython=ip, appdirs_appname=test_appname
)
assert bluesky_log_file_path == log_file_path
assert log_file_path.exists()
# clean up the file and directory this test creates
bluesky_log_file_path.unlink()
bluesky_log_file_path.parent.rmdir()
def test_ipython_log_exception():
ip = IPython.core.interactiveshell.InteractiveShell()
ip.logger = MagicMock()
ip.set_custom_exc((BaseException,), log_exception)
ip.run_cell("raise Exception")
ip.logger.log_write.assert_called_with("Exception\n", kind="output")
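The assertion style above comes from `unittest.mock`: `assert_called_with` checks that the most recent call's positional and keyword arguments match exactly. A tiny standalone illustration, with no IPython required:

```python
from unittest.mock import MagicMock

logger = MagicMock()
logger.log_write("Exception\n", kind="output")

# Passes only if both args and kwargs match the last call exactly
logger.log_write.assert_called_with("Exception\n", kind="output")
```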
def test_ipython_exc_logging_creates_default_dir():
"""
Remove environment variable BLUESKY_IPYTHON_LOG_FILE and
test that the default log file path is created. This test creates
a directory rather than using pytest's tmp_path so the test
must clean up at the end.
"""
test_appname = "bluesky-test"
log_dir = Path(appdirs.user_log_dir(appname=test_appname))
# remove log_dir if it exists to test that it will be created
if log_dir.exists():
shutil.rmtree(path=log_dir)
log_file_path = log_dir / Path("bluesky_ipython.log")
ip = IPython.core.interactiveshell.InteractiveShell()
os.environ.pop("BLUESKY_IPYTHON_LOG_FILE", default=None)
bluesky_ipython_log_file_path = configure_ipython_logging(
exception_logger=log_exception, ipython=ip, appdirs_appname=test_appname
)
assert bluesky_ipython_log_file_path == log_file_path
assert log_file_path.exists()
bluesky_ipython_log_file_path.unlink()
bluesky_ipython_log_file_path.parent.rmdir()
def test_ipython_exc_logging_existing_default_dir():
"""
Remove environment variable BLUESKY_IPYTHON_LOG_FILE and
test that the default log file path is used. This test creates
a directory rather than using pytest's tmp_path so the test
must clean up at the end.
"""
test_appname = "bluesky-test"
log_dir = Path(appdirs.user_log_dir(appname=test_appname))
# create the default log directory
log_dir.mkdir(parents=True, exist_ok=True)
log_file_path = log_dir / Path("bluesky_ipython.log")
ip = IPython.core.interactiveshell.InteractiveShell()
os.environ.pop("BLUESKY_IPYTHON_LOG_FILE", default=None)
bluesky_ipython_log_file_path = configure_ipython_logging(
exception_logger=log_exception, ipython=ip, appdirs_appname=test_appname
)
assert bluesky_ipython_log_file_path == log_file_path
assert log_file_path.exists()
bluesky_ipython_log_file_path.unlink()
bluesky_ipython_log_file_path.parent.rmdir()
def test_configure_ipython_exc_logging(tmpdir):
log_file_path = Path(tmpdir) / Path("bluesky_ipython.log")
ip = IPython.core.interactiveshell.InteractiveShell()
os.environ["BLUESKY_IPYTHON_LOG_FILE"] = str(log_file_path)
bluesky_ipython_log_file_path = configure_ipython_logging(
exception_logger=log_exception, ipython=ip,
)
assert bluesky_ipython_log_file_path == log_file_path
assert log_file_path.exists()
def test_configure_ipython_exc_logging_with_nonexisting_dir(tmpdir):
log_dir = Path(tmpdir) / Path("does_not_exist")
log_file_path = log_dir / Path("bluesky_ipython.log")
ip = IPython.core.interactiveshell.InteractiveShell()
os.environ["BLUESKY_IPYTHON_LOG_FILE"] = str(log_file_path)
with pytest.raises(UserWarning):
configure_ipython_logging(
exception_logger=log_exception, ipython=ip,
)
def test_configure_ipython_exc_logging_with_unwriteable_dir(tmpdir):
log_dir = Path(tmpdir)
log_file_path = log_dir / Path("bluesky_ipython.log")
log_dir.chmod(stat.S_IREAD)
ip = IPython.core.interactiveshell.InteractiveShell()
os.environ["BLUESKY_IPYTHON_LOG_FILE"] = str(log_file_path)
with pytest.raises(PermissionError):
configure_ipython_logging(
exception_logger=log_exception, ipython=ip,
)
def test_configure_ipython_exc_logging_file_exists(tmpdir):
log_file_path = Path(tmpdir) / Path("bluesky_ipython.log")
with open(log_file_path, "w") as f:
f.write("log log log")
ip = IPython.core.interactiveshell.InteractiveShell()
os.environ["BLUESKY_IPYTHON_LOG_FILE"] = str(log_file_path)
bluesky_ipython_log_file_path = configure_ipython_logging(
exception_logger=log_exception, ipython=ip,
)
assert bluesky_ipython_log_file_path == log_file_path
assert log_file_path.exists()
def test_configure_ipython_exc_logging_rotate(tmpdir):
log_file_path = Path(tmpdir) / Path("bluesky_ipython.log")
with open(log_file_path, "w") as f:
f.write("log log log")
ip = IPython.core.interactiveshell.InteractiveShell()
os.environ["BLUESKY_IPYTHON_LOG_FILE"] = str(log_file_path)
bluesky_ipython_log_file_path = configure_ipython_logging(
exception_logger=log_exception, ipython=ip, rotate_file_size=0
)
assert bluesky_ipython_log_file_path == log_file_path
assert log_file_path.exists()
old_log_file_path = log_file_path.parent / Path(log_file_path.name + ".old")
assert old_log_file_path.exists()
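The rotation behavior exercised by the last test is implemented inside `configure_ipython_logging`, which is not shown in this chunk; a plausible stdlib sketch of a size-triggered rename to `<name>.old` (the helper name is invented here):

```python
import tempfile
from pathlib import Path

def rotate_if_needed(log_file_path: Path, rotate_file_size: int) -> None:
    # Move an existing log aside as "<name>.old" once it reaches the
    # size threshold, matching what the rotate test asserts.
    if log_file_path.exists() and log_file_path.stat().st_size >= rotate_file_size:
        log_file_path.rename(log_file_path.parent / (log_file_path.name + ".old"))

with tempfile.TemporaryDirectory() as tmpdir:
    log = Path(tmpdir) / "bluesky_ipython.log"
    log.write_text("log log log")
    rotate_if_needed(log, rotate_file_size=0)
    assert (Path(tmpdir) / "bluesky_ipython.log.old").exists()
    assert not log.exists()
```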

# File: tphysics/__init__.py (repo: OsmosizBiz/tphysics, license: MIT)
from tphysics.shapes import *
from tphysics.engine import *
from tphysics.verlet import *
from tphysics.keys import *
from tphysics.sprites import *

# File: intersim/viz/__init__.py (repo: sisl/InteractionSimulator, license: MIT)
from intersim.viz.animatedviz import animate, AnimatedViz
from intersim.viz.wrappers import make_action_viz, make_marker_viz, make_observation_viz, make_reward_viz
from intersim.viz.utils import build_map
from intersim.viz.rasta import Rasta

# File: mantra_mixer/__init__.py (repo: bossauh/mantra-mixer, license: MIT)
from .mixer import Mixer, OutputTrack, InputTrack

# File: titan/react_pkg/tailwindcss/props.py (repo: mnieber/gen, license: MIT)
def has_tailwind_css(self):
return [x for x in self.service.tools if x.name == "tailwind_css"]

# File: test.py (repo: kelicht/ordce, license: MIT)
import numpy as np
from lingam import DirectLiNGAM
from lingam.utils import make_dot
from utils import interaction_matrix, cost_order_all_permutations
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from linear_oce import LinearOrderedActionExtractor
from mlp_oce import MLPOrderedActionExtractor
from forest_oce import ForestOrderedActionExtractor
def exp_synthetic(n=10, verbose=False):
N = 1000
c_21 = 1
c_32 = 6
c_34 = 4
c_54 = -0.5
names = ['Education','JobSkill','Income(K)','WorkPerDay','HealthStatus']
x_1 = np.random.randint(1, 5, N)
x_2 = c_21 * x_1 + np.random.randint(-1, 1, N)
x_4 = np.random.randint(2, 6, N) * 2
x_3 = c_32 * x_2 + c_34 * x_4 + np.random.randint(-2, 2, N)
x_5 = c_54 * x_4 + np.random.randint(6, 13, N)
X = np.array([x_1, x_2, x_3, x_4, x_5]).T
_, C = interaction_matrix(X, interaction_type='causal')
w_3, w_5 = 1.0/x_3.mean(), 1.0/x_5.mean()
y = (w_3 * X[:,2] + w_5 * X[:,4] < 2.0).astype(int)
mdl = LogisticRegression(penalty='l2', C=1.0, fit_intercept=True, solver='liblinear', max_iter=10000)
mdl = mdl.fit(X, y)
print('# Model Coef.: \n', mdl.coef_, (mdl.intercept_))
oce = LinearOrderedActionExtractor(mdl, X, feature_names=names, feature_types=['I', 'I', 'I', 'I', 'I'], feature_constraints=['INC']*2+['']*3, target_name='Loan', target_labels=['Accept', 'Reject'], interaction_matrix=C)
denied_individual = X[mdl.predict(X)==1]
costs = ['TLPS', 'MAD', 'DACE', 'SCM']
gammas = [1.0] if verbose else [0.1 + i * 0.1 for i in range(20)]
res_dict = {}; res_dict_ord = {}; res_dict_time = {}
for key in costs:
res_dict[key] = []; res_dict_ord[key] = []; res_dict_time[key] = []
for c in costs:
for g in gammas:
key = c + '_ORDER_{}'.format(g)
res_dict[key] = []; res_dict_ord[key] = []; res_dict_time[key] = []
for i, x in enumerate(denied_individual[:n]):
print('# {}-th Denied Individual: '.format(i+1), x)
for cost in costs:
print('## {}: '.format(cost))
oa = oce.extract(x, K=5, ordering=False, post_ordering=True, post_ordering_mode='greedy', cost_type=cost, ordering_cost_type='standard')
if(oa!=-1):
print(oa)
res_dict[cost].append(oa.c_ordinal_)
res_dict_ord[cost].append(oa.c_ordering_)
res_dict_time[cost].append(oa.time_)
if(verbose): print('## {} + C_order: '.format(cost))
for gamma in gammas:
oa = oce.extract(x, K=5, gamma=gamma, ordering=True, cost_type=cost, ordering_cost_type='standard')
if(oa!=-1):
print(oa)
res_dict[cost+'_ORDER_{}'.format(gamma)].append(oa.c_ordinal_)
res_dict_ord[cost+'_ORDER_{}'.format(gamma)].append(oa.c_ordering_)
res_dict_time[cost+'_ORDER_{}'.format(gamma)].append(oa.time_)
print('---')
if(verbose==False):
import pandas as pd
res_dist = pd.DataFrame(res_dict)
res_dist.to_csv('./res/synthetic_res_dist_lr.csv', index=False)
res_ord = pd.DataFrame(res_dict_ord)
res_ord.to_csv('./res/synthetic_res_ord_lr.csv', index=False)
res_time = pd.DataFrame(res_dict_time)
res_time.to_csv('./res/synthetic_res_time_lr.csv', index=False)
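Both experiment drivers build the same family of result keys: one plain key per cost type and one `"<cost>_ORDER_<gamma>"` key per cost/gamma pair. That setup can be factored into a small helper (a refactoring sketch, not part of the original script):

```python
def init_result_dicts(costs, gammas):
    # One empty list per cost and per (cost, gamma) ordered variant,
    # matching the res_dict / res_dict_ord / res_dict_time setup above.
    keys = list(costs) + ["{}_ORDER_{}".format(c, g) for c in costs for g in gammas]
    return {k: [] for k in keys}

res = init_result_dicts(["TLPS", "MAD"], [0.1, 1.0])
print(sorted(res))
# ['MAD', 'MAD_ORDER_0.1', 'MAD_ORDER_1.0', 'TLPS', 'TLPS_ORDER_0.1', 'TLPS_ORDER_1.0']
```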
def exp_real(clf='lr', dataset='h', n=10, verbose=False, costs=['TLPS','MAD','DACE','SCM'], suf='', tol=1e-6):
from utils import DatasetHelper
D = DatasetHelper(dataset=dataset, feature_prefix_index=False)
X_tr, X_ts, y_tr, y_ts = D.train_test_split()
B, M = interaction_matrix(X_tr, interaction_type='causal')
if(clf=='lr'):
mdl = LogisticRegression(penalty='l2', C=1.0, fit_intercept=True, solver='liblinear', max_iter=10000)
mdl = mdl.fit(X_tr, y_tr)
# print('# Model Coef.: \n', mdl.coef_, (mdl.intercept_))
oce = LinearOrderedActionExtractor(mdl, X_tr, feature_names=D.feature_names, feature_types=D.feature_types, feature_categories=D.feature_categories,
feature_constraints=D.feature_constraints, target_name=D.target_name, target_labels=D.target_labels, interaction_matrix=M)
elif(clf=='mlp'):
mdl = MLPClassifier(hidden_layer_sizes=(200,), max_iter=500, activation='relu', alpha=0.0001)
mdl = mdl.fit(X_tr, y_tr)
oce = MLPOrderedActionExtractor(mdl, X_tr, feature_names=D.feature_names, feature_types=D.feature_types, feature_categories=D.feature_categories,
feature_constraints=D.feature_constraints, target_name=D.target_name, target_labels=D.target_labels, interaction_matrix=M, tol=tol)
elif(clf=='rf'):
h = 6 if dataset=='g' else 4
mdl = RandomForestClassifier(n_estimators=100, max_depth=h)
mdl = mdl.fit(X_tr, y_tr)
oce = ForestOrderedActionExtractor(mdl, X_tr, feature_names=D.feature_names, feature_types=D.feature_types, feature_categories=D.feature_categories,
feature_constraints=D.feature_constraints, target_name=D.target_name, target_labels=D.target_labels, interaction_matrix=M)
denied_individual = X_ts[mdl.predict(X_ts)==1]
gammas = [1.0]
res_dict = {}; res_dict_ord = {}; res_dict_time = {}
for key in costs:
res_dict[key] = []; res_dict_ord[key] = []; res_dict_time[key] = []
for c in costs:
for g in gammas:
key = c + '_ORDER_{}'.format(g)
res_dict[key] = []; res_dict_ord[key] = []; res_dict_time[key] = []
for i, x in enumerate(denied_individual[:n]):
print('# {}-th Denied Individual:'.format(i+1))
for cost in costs:
print('## {}: '.format(cost))
oa = oce.extract(x, K=4, ordering=False, post_ordering=True, post_ordering_mode='greedy', cost_type=cost, ordering_cost_type='standard', time_limit=300, log_stream=False)
if(oa!=-1):
print(oa)
res_dict[cost].append(oa.c_ordinal_)
res_dict_ord[cost].append(oa.c_ordering_)
res_dict_time[cost].append(oa.time_)
else:
res_dict[cost].append(-1)
res_dict_ord[cost].append(-1)
res_dict_time[cost].append(-1)
print('## {} + C_order: '.format(cost))
for gamma in gammas:
oa = oce.extract(x, K=4, gamma=gamma, ordering=True, cost_type=cost, ordering_cost_type='standard', time_limit=300, log_stream=False)
if(oa!=-1):
print(oa)
res_dict[cost+'_ORDER_{}'.format(gamma)].append(oa.c_ordinal_)
res_dict_ord[cost+'_ORDER_{}'.format(gamma)].append(oa.c_ordering_)
res_dict_time[cost+'_ORDER_{}'.format(gamma)].append(oa.time_)
else:
res_dict[cost+'_ORDER_{}'.format(gamma)].append(-1)
res_dict_ord[cost+'_ORDER_{}'.format(gamma)].append(-1)
res_dict_time[cost+'_ORDER_{}'.format(gamma)].append(-1)
if(verbose): print('---')
print('# Results')
print('+ ', res_dict)
print('+ ', res_dict_ord)
print('+ ', res_dict_time)
print('---')
if(verbose==False):
import pandas as pd
res_dist = pd.DataFrame(res_dict)
res_dist.to_csv('./res/{}_res_dist_{}_{}.csv'.format(D.dataset_name, clf, suf), index=False)
res_ord = pd.DataFrame(res_dict_ord)
res_ord.to_csv('./res/{}_res_ord_{}_{}.csv'.format(D.dataset_name, clf, suf), index=False)
res_time = pd.DataFrame(res_dict_time)
res_time.to_csv('./res/{}_res_time_{}_{}.csv'.format(D.dataset_name, clf, suf), index=False)
def exp_real_sens(dataset='h', n=10, verbose=False, time_limit=300, costs=['TLPS', 'MAD', 'DACE', 'SCM']):
from utils import DatasetHelper
D = DatasetHelper(dataset=dataset, feature_prefix_index=False)
X_tr, X_ts, y_tr, y_ts = D.train_test_split()
B, M = interaction_matrix(X_tr, interaction_type='causal')
mdl = LogisticRegression(penalty='l2', C=1.0, fit_intercept=True, solver='liblinear', max_iter=10000)
mdl = mdl.fit(X_tr, y_tr)
oce = LinearOrderedActionExtractor(mdl, X_tr, feature_names=D.feature_names, feature_types=D.feature_types, feature_categories=D.feature_categories,
feature_constraints=D.feature_constraints, target_name=D.target_name, target_labels=D.target_labels, interaction_matrix=M)
denied_individual = X_ts[mdl.predict(X_ts)==1]
gammas = [10**i for i in range(-3, 3)]
res_dict = {}; res_dict_ord = {}
for key in costs:
res_dict[key] = []; res_dict_ord[key] = []
for c in costs:
for g in gammas:
key = c + '_ORDER_{}'.format(g)
res_dict[key] = []; res_dict_ord[key] = []
for i, x in enumerate(denied_individual[:n]):
print('# {}-th Denied Individual:'.format(i+1))
for cost in costs:
print('## {}: '.format(cost))
oa = oce.extract(x, K=4, ordering=False, post_ordering=True, post_ordering_mode='greedy', cost_type=cost, ordering_cost_type='standard', time_limit=time_limit)
if(oa!=-1):
print(oa)
res_dict[cost].append(oa.c_ordinal_)
res_dict_ord[cost].append(oa.c_ordering_)
else:
res_dict[cost].append(-1)
res_dict_ord[cost].append(-1)
print('## {} + C_order: '.format(cost))
for gamma in gammas:
oa = oce.extract(x, K=4, gamma=gamma, ordering=True, cost_type=cost, ordering_cost_type='standard', time_limit=time_limit)
if(oa!=-1):
print(oa)
res_dict[cost+'_ORDER_{}'.format(gamma)].append(oa.c_ordinal_)
res_dict_ord[cost+'_ORDER_{}'.format(gamma)].append(oa.c_ordering_)
else:
res_dict[cost+'_ORDER_{}'.format(gamma)].append(-1)
res_dict_ord[cost+'_ORDER_{}'.format(gamma)].append(-1)
if(verbose): print('---')
print('# Results')
print('+ ', res_dict)
print('+ ', res_dict_ord)
print('---')
if(verbose==False):
import pandas as pd
res_dist = pd.DataFrame(res_dict)
res_dist.to_csv('./res/{}_res_dist_sens.csv'.format(D.dataset_name), index=False)
res_ord = pd.DataFrame(res_dict_ord)
res_ord.to_csv('./res/{}_res_ord_sens.csv'.format(D.dataset_name), index=False)
if(__name__ == '__main__'):
np.random.seed(1)
for dataset in ['g', 'd', 'w', 'h']:
exp_real(clf='lr', dataset=dataset, n=50, costs=['TLPS', 'DACE'])
# for dataset in ['g', 'd', 'w', 'h']:
# exp_real(clf='mlp', dataset=dataset, n=50, costs=['TLPS', 'DACE'])
# for dataset in ['g', 'd', 'w', 'h']:
# exp_real(clf='rf', dataset=dataset, n=50, costs=['TLPS', 'DACE'])
# for dataset in ['g', 'd', 'w', 'h']:
# exp_real_sens(dataset=dataset, n=50, time_limit=60, costs=['TLPS', 'DACE'])

# File: contrail/scr/pages/about_page.py (repo: sisl/Contrail, license: MIT)
import warnings
warnings.filterwarnings("ignore")
import dash
import dash_bootstrap_components as dbc
from dash import dcc
from dash import html
import dash_leaflet as dl
# Import Dash Instance #
from app import app
MIT_license = 'MIT License\n\n\
Copyright (c) 2021 Stanford Intelligent Systems Laboratory \n\
Permission is hereby granted, free of charge, to any person obtaining a copy \
of this software and associated documentation files (the "Software"), to deal \
in the Software without restriction, including without limitation the rights \
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell \
copies of the Software, and to permit persons to whom the Software is \
furnished to do so, subject to the following conditions:\n\
The above copyright notice and this permission notice shall be included in all \
copies or substantial portions of the Software.\n\
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR \
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, \
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE \
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER \
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, \
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE \
SOFTWARE.'
layout = html.Div([
dbc.Row([
dbc.Card(className='card-about-page ml-4', children=[
dbc.CardBody([
dbc.Row([
dcc.Link("Link to Contrail Documentation and Tutorial", href='https://github.com/sisl/Contrail', target='_blank', className="github-link m-25")
],
justify='center',
align='center',
no_gutters=True)
])
])
]),
dbc.Row([
dbc.Card(className='card-about-page-license ml-4 mt-2', children=[
dbc.CardBody([
html.H5(id='mit-license-1', children=["MIT License"], className="card-body-white p-1 ml-1"),
html.H6(id='mit-license-2', children=["Copyright (c) 2021 Stanford Intelligent Systems Laboratory"], className="card-body-white p-1 ml-1"),
                html.H6(id='mit-license-3', children=["Permission is hereby granted, free of charge, to any person obtaining a copy of this software\
and associated documentation files (the \"Software\"), to deal in the Software without\
restriction, including without limitation the rights to use, copy, modify, merge, publish,\
distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom\
the Software is furnished to do so, subject to the following conditions:"], className="card-body-white p-1 ml-1"),
html.H6(id='mit-license-4', children=["The above copyright notice and this permission notice shall be included in all\
copies or substantial portions of the Software"], className="card-body-white p-1 ml-1"),
html.H6(id='mit-license-5', children=["THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\
SOFTWARE."], className="card-body-white p-1 ml-1"),
])
])
])
])

# File: src/amuse/community/pikachu/__init__.py (repo: sibonyves/amuse, license: Apache-2.0)
from .interface import Pikachu

# File: tests/app.py (repo: octue/octue-sdk-python, license: MIT)
CUSTOM_APP_RUN_MESSAGE = "This is a custom app run function"
def run(analysis, *args, **kwargs):
print(CUSTOM_APP_RUN_MESSAGE) # noqa:T001

# File: preprocess/datautils/tgif_qa.py (repo: hdchieh/hcrn-videoqa, license: Apache-2.0)
import os
import pandas as pd
import json
from datautils import utils
import nltk
import pickle
import numpy as np
def load_video_paths(args):
    ''' Load a list of (path, image_id) tuples. '''
input_paths = []
annotation = pd.read_csv(args.annotation_file.format(args.question_type), delimiter='\t')
gif_names = list(annotation['gif_name'])
keys = list(annotation['key'])
print("Number of questions: {}".format(len(gif_names)))
for idx, gif in enumerate(gif_names):
gif_abs_path = os.path.join(args.video_dir, ''.join([gif, '.gif']))
input_paths.append((gif_abs_path, keys[idx]))
input_paths = list(set(input_paths))
print("Number of unique videos: {}".format(len(input_paths)))
return input_paths
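`load_video_paths` reduces the per-question annotation rows to unique `(gif_path, key)` pairs. The core path-building and deduplication step, isolated from pandas (a sketch with made-up directory and file names):

```python
import os

def build_input_paths(video_dir, gif_names, keys):
    # Join "<gif>.gif" onto the video directory, pair it with the
    # annotation key, then drop duplicates via set() (order is not
    # preserved, just as in load_video_paths above).
    input_paths = [
        (os.path.join(video_dir, "".join([gif, ".gif"])), key)
        for gif, key in zip(gif_names, keys)
    ]
    return list(set(input_paths))

paths = build_input_paths("/videos", ["a", "b", "a"], [0, 1, 0])
print(len(paths))  # 2: the duplicate ("a", 0) row collapses
```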

def openeded_encoding_data(args, vocab, questions, video_names, video_ids, answers, mode='train'):
    '''Encode question tokens'''
    print('Encoding data')
    questions_encoded = []
    questions_len = []
    video_ids_tbw = []
    video_names_tbw = []
    all_answers = []
    question_ids = []
    for idx, question in enumerate(questions):
        question = question.lower()[:-1]
        question_tokens = nltk.word_tokenize(question)
        question_encoded = utils.encode(question_tokens, vocab['question_token_to_idx'], allow_unk=True)
        questions_encoded.append(question_encoded)
        questions_len.append(len(question_encoded))
        question_ids.append(idx)
        video_names_tbw.append(video_names[idx])
        video_ids_tbw.append(video_ids[idx])
        if args.question_type == "frameqa":
            answer = answers[idx]
            if answer in vocab['answer_token_to_idx']:
                answer = vocab['answer_token_to_idx'][answer]
            elif mode in ['train']:
                answer = 0
            elif mode in ['val', 'test']:
                answer = 1
        else:
            answer = max(int(answers[idx]), 1)
        all_answers.append(answer)
    # Pad encoded questions
    max_question_length = max(len(x) for x in questions_encoded)
    for qe in questions_encoded:
        while len(qe) < max_question_length:
            qe.append(vocab['question_token_to_idx']['<NULL>'])
    questions_encoded = np.asarray(questions_encoded, dtype=np.int32)
    questions_len = np.asarray(questions_len, dtype=np.int32)
    print(questions_encoded.shape)
    glove_matrix = None
    if mode == 'train':
        token_itow = {i: w for w, i in vocab['question_token_to_idx'].items()}
        print("Load glove from %s" % args.glove_pt)
        glove = pickle.load(open(args.glove_pt, 'rb'))
        dim_word = glove['the'].shape[0]
        glove_matrix = []
        for i in range(len(token_itow)):
            vector = glove.get(token_itow[i], np.zeros((dim_word,)))
            glove_matrix.append(vector)
        glove_matrix = np.asarray(glove_matrix, dtype=np.float32)
        print(glove_matrix.shape)
    print('Writing ', args.output_pt.format(args.question_type, args.question_type, mode))
    obj = {
        'questions': questions_encoded,
        'questions_len': questions_len,
        'question_id': question_ids,
        'video_ids': np.asarray(video_ids_tbw),
        'video_names': np.array(video_names_tbw),
        'answers': all_answers,
        'glove': glove_matrix,
    }
    with open(args.output_pt.format(args.question_type, args.question_type, mode), 'wb') as f:
        pickle.dump(obj, f)
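The padding step above can be sketched in isolation. This is a hypothetical, self-contained version: `pad_encoded` and the toy token ids are made up for illustration, and the `<NULL>` index of 0 follows the vocab convention used in this file (`'<NULL>': 0`).

```python
import numpy as np

def pad_encoded(questions_encoded, null_idx=0):
    # Right-pad each encoded question with the <NULL> index so every
    # row has the length of the longest question in the batch.
    max_len = max(len(q) for q in questions_encoded)
    for q in questions_encoded:
        while len(q) < max_len:
            q.append(null_idx)
    return np.asarray(questions_encoded, dtype=np.int32)

batch = [[5, 7, 2], [9, 4], [3]]   # made-up token ids
padded = pad_encoded(batch)
# padded is a (3, 3) int32 array; shorter rows end in the <NULL> index.
```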

def multichoice_encoding_data(args, vocab, questions, video_names, video_ids, answers, ans_candidates, mode='train'):
    # Encode all questions
    print('Encoding data')
    questions_encoded = []
    questions_len = []
    question_ids = []
    all_answer_cands_encoded = []
    all_answer_cands_len = []
    video_ids_tbw = []
    video_names_tbw = []
    correct_answers = []
    for idx, question in enumerate(questions):
        question = question.lower()[:-1]
        question_tokens = nltk.word_tokenize(question)
        question_encoded = utils.encode(question_tokens, vocab['question_answer_token_to_idx'], allow_unk=True)
        questions_encoded.append(question_encoded)
        questions_len.append(len(question_encoded))
        question_ids.append(idx)
        video_names_tbw.append(video_names[idx])
        video_ids_tbw.append(video_ids[idx])
        # ground truth
        answer = int(answers[idx])
        correct_answers.append(answer)
        # answer candidates
        candidates = ans_candidates[idx]
        candidates_encoded = []
        candidates_len = []
        for ans in candidates:
            ans = ans.lower()
            ans_tokens = nltk.word_tokenize(ans)
            cand_encoded = utils.encode(ans_tokens, vocab['question_answer_token_to_idx'], allow_unk=True)
            candidates_encoded.append(cand_encoded)
            candidates_len.append(len(cand_encoded))
        all_answer_cands_encoded.append(candidates_encoded)
        all_answer_cands_len.append(candidates_len)
    # Pad encoded questions
    max_question_length = max(len(x) for x in questions_encoded)
    for qe in questions_encoded:
        while len(qe) < max_question_length:
            qe.append(vocab['question_answer_token_to_idx']['<NULL>'])
    questions_encoded = np.asarray(questions_encoded, dtype=np.int32)
    questions_len = np.asarray(questions_len, dtype=np.int32)
    print(questions_encoded.shape)
    # Pad encoded answer candidates
    max_answer_cand_length = max(max(len(x) for x in candidate) for candidate in all_answer_cands_encoded)
    for ans_cands in all_answer_cands_encoded:
        for ans in ans_cands:
            while len(ans) < max_answer_cand_length:
                ans.append(vocab['question_answer_token_to_idx']['<NULL>'])
    all_answer_cands_encoded = np.asarray(all_answer_cands_encoded, dtype=np.int32)
    all_answer_cands_len = np.asarray(all_answer_cands_len, dtype=np.int32)
    print(all_answer_cands_encoded.shape)
    glove_matrix = None
    if mode in ['train']:
        token_itow = {i: w for w, i in vocab['question_answer_token_to_idx'].items()}
        print("Load glove from %s" % args.glove_pt)
        glove = pickle.load(open(args.glove_pt, 'rb'))
        dim_word = glove['the'].shape[0]
        glove_matrix = []
        for i in range(len(token_itow)):
            vector = glove.get(token_itow[i], np.zeros((dim_word,)))
            glove_matrix.append(vector)
        glove_matrix = np.asarray(glove_matrix, dtype=np.float32)
        print(glove_matrix.shape)
    print('Writing ', args.output_pt.format(args.question_type, args.question_type, mode))
    obj = {
        'questions': questions_encoded,
        'questions_len': questions_len,
        'question_id': question_ids,
        'video_ids': np.asarray(video_ids_tbw),
        'video_names': np.array(video_names_tbw),
        'ans_candidates': all_answer_cands_encoded,
        'ans_candidates_len': all_answer_cands_len,
        'answers': correct_answers,
        'glove': glove_matrix,
    }
    with open(args.output_pt.format(args.question_type, args.question_type, mode), 'wb') as f:
        pickle.dump(obj, f)

def process_questions_openended(args):
    print('Loading data')
    if args.mode in ["train"]:
        csv_data = pd.read_csv(args.annotation_file.format("Train", args.question_type), delimiter='\t')
    else:
        csv_data = pd.read_csv(args.annotation_file.format("Test", args.question_type), delimiter='\t')
    csv_data = csv_data.iloc[np.random.permutation(len(csv_data))]
    questions = list(csv_data['question'])
    answers = list(csv_data['answer'])
    video_names = list(csv_data['gif_name'])
    video_ids = list(csv_data['key'])
    print('number of questions: %s' % len(questions))
    # Either create the vocab or load it from disk
    if args.mode in ['train']:
        print('Building vocab')
        answer_cnt = {}
        if args.question_type == "frameqa":
            for i, answer in enumerate(answers):
                answer_cnt[answer] = answer_cnt.get(answer, 0) + 1
            answer_token_to_idx = {'<UNK>': 0}
            for token in answer_cnt:
                answer_token_to_idx[token] = len(answer_token_to_idx)
            print('Get answer_token_to_idx, num: %d' % len(answer_token_to_idx))
        elif args.question_type == 'count':
            answer_token_to_idx = {'<UNK>': 0}
        question_token_to_idx = {'<NULL>': 0, '<UNK>': 1}
        for i, q in enumerate(questions):
            question = q.lower()[:-1]
            for token in nltk.word_tokenize(question):
                if token not in question_token_to_idx:
                    question_token_to_idx[token] = len(question_token_to_idx)
        print('Get question_token_to_idx')
        print(len(question_token_to_idx))
        vocab = {
            'question_token_to_idx': question_token_to_idx,
            'answer_token_to_idx': answer_token_to_idx,
            'question_answer_token_to_idx': {'<NULL>': 0, '<UNK>': 1}
        }
        print('Write into %s' % args.vocab_json.format(args.question_type, args.question_type))
        with open(args.vocab_json.format(args.question_type, args.question_type), 'w') as f:
            json.dump(vocab, f, indent=4)
        # split 10% of questions for evaluation
        split = int(0.9 * len(questions))
        train_questions = questions[:split]
        train_answers = answers[:split]
        train_video_names = video_names[:split]
        train_video_ids = video_ids[:split]
        val_questions = questions[split:]
        val_answers = answers[split:]
        val_video_names = video_names[split:]
        val_video_ids = video_ids[split:]
        openeded_encoding_data(args, vocab, train_questions, train_video_names, train_video_ids, train_answers, mode='train')
        openeded_encoding_data(args, vocab, val_questions, val_video_names, val_video_ids, val_answers, mode='val')
    else:
        print('Loading vocab')
        with open(args.vocab_json.format(args.question_type, args.question_type), 'r') as f:
            vocab = json.load(f)
        openeded_encoding_data(args, vocab, questions, video_names, video_ids, answers, mode='test')

def process_questions_mulchoices(args):
    print('Loading data')
    if args.mode in ["train", "val"]:
        csv_data = pd.read_csv(args.annotation_file.format("Train", args.question_type), delimiter='\t')
    else:
        csv_data = pd.read_csv(args.annotation_file.format("Test", args.question_type), delimiter='\t')
    csv_data = csv_data.iloc[np.random.permutation(len(csv_data))]
    questions = list(csv_data['question'])
    answers = list(csv_data['answer'])
    video_names = list(csv_data['gif_name'])
    video_ids = list(csv_data['key'])
    ans_candidates = np.asarray(
        [csv_data['a1'], csv_data['a2'], csv_data['a3'], csv_data['a4'], csv_data['a5']])
    ans_candidates = ans_candidates.transpose()
    print(ans_candidates.shape)
    # ans_candidates: (num_ques, 5)
    print('number of questions: %s' % len(questions))
    # Either create the vocab or load it from disk
    if args.mode in ['train']:
        print('Building vocab')
        answer_token_to_idx = {'<UNK0>': 0, '<UNK1>': 1}
        question_answer_token_to_idx = {'<NULL>': 0, '<UNK>': 1}
        for candidates in ans_candidates:
            for ans in candidates:
                ans = ans.lower()
                for token in nltk.word_tokenize(ans):
                    if token not in answer_token_to_idx:
                        answer_token_to_idx[token] = len(answer_token_to_idx)
                    if token not in question_answer_token_to_idx:
                        question_answer_token_to_idx[token] = len(question_answer_token_to_idx)
        print('Get answer_token_to_idx, num: %d' % len(answer_token_to_idx))
        question_token_to_idx = {'<NULL>': 0, '<UNK>': 1}
        for i, q in enumerate(questions):
            question = q.lower()[:-1]
            for token in nltk.word_tokenize(question):
                if token not in question_token_to_idx:
                    question_token_to_idx[token] = len(question_token_to_idx)
                if token not in question_answer_token_to_idx:
                    question_answer_token_to_idx[token] = len(question_answer_token_to_idx)
        print('Get question_token_to_idx')
        print(len(question_token_to_idx))
        print('Get question_answer_token_to_idx')
        print(len(question_answer_token_to_idx))
        vocab = {
            'question_token_to_idx': question_token_to_idx,
            'answer_token_to_idx': answer_token_to_idx,
            'question_answer_token_to_idx': question_answer_token_to_idx,
        }
        print('Write into %s' % args.vocab_json.format(args.question_type, args.question_type))
        with open(args.vocab_json.format(args.question_type, args.question_type), 'w') as f:
            json.dump(vocab, f, indent=4)
        # split 10% of questions for evaluation
        split = int(0.9 * len(questions))
        train_questions = questions[:split]
        train_answers = answers[:split]
        train_video_names = video_names[:split]
        train_video_ids = video_ids[:split]
        train_ans_candidates = ans_candidates[:split, :]
        val_questions = questions[split:]
        val_answers = answers[split:]
        val_video_names = video_names[split:]
        val_video_ids = video_ids[split:]
        val_ans_candidates = ans_candidates[split:, :]
        multichoice_encoding_data(args, vocab, train_questions, train_video_names, train_video_ids, train_answers, train_ans_candidates, mode='train')
        multichoice_encoding_data(args, vocab, val_questions, val_video_names, val_video_ids, val_answers,
                                  val_ans_candidates, mode='val')
    else:
        print('Loading vocab')
        with open(args.vocab_json.format(args.question_type, args.question_type), 'r') as f:
            vocab = json.load(f)
        multichoice_encoding_data(args, vocab, questions, video_names, video_ids, answers,
                                  ans_candidates, mode='test')
# File: python_module/pypinyin_module/random_test.py (repo: panc-test/python-study, MIT license)
import random
print(random.randint(1, 4))
print(random.choice('1234'))

# File: te_hic_lib/__init__.py (repo: oaxiom/te_hic, MIT license)
from . import common
from .measure_contacts import measure_contacts

# File: venv/lib/python3.8/site-packages/poetry/core/masonry/metadata.py (repo: GiulianaPola/select_repeats, MIT license)
# No source content: the file body was a pip cache pool path, not Python code.

# File: UI/__init__.py (repo: Juwdohr/WGU_Package_Tracker, MIT license)
from .user_interface import UserInterface

# File: sympy/tensor/array/mutable_ndim_array.py (repo: ovolve/sympy, BSD-3-Clause license)
from sympy.tensor.array.ndim_array import NDimArray


class MutableNDimArray(NDimArray):
    pass

# File: simopt/models/contam.py (repo: simopt-admin/simopt, MIT license)
"""
Summary
-------
Simulate contamination rates.
"""
import numpy as np
from base import Model, Problem

class Contamination(Model):
    """
    A model that simulates a contamination problem with a
    beta distribution.

    Returns the probability of violating contamination upper limit
    in each level of supply chain.

    Attributes
    ----------
    name : string
        name of model
    n_rngs : int
        number of random-number generators used to run a simulation replication
    n_responses : int
        number of responses (performance measures)
    factors : dict
        changeable factors of the simulation model
    specifications : dict
        details of each factor (for GUI and data validation)
    check_factor_list : dict
        switch case for checking factor simulatability

    Arguments
    ---------
    fixed_factors : nested dict
        fixed factors of the simulation model

    See also
    --------
    base.Model
    """
    def __init__(self, fixed_factors={}):
        self.name = "CONTAM"
        self.n_rngs = 2
        self.n_responses = 1
        self.specifications = {
            "contam_rate_alpha": {
                "description": "Alpha parameter of beta distribution for growth rate of contamination at each stage.",
                "datatype": float,
                "default": 1.0
            },
            "contam_rate_beta": {
                "description": "Beta parameter of beta distribution for growth rate of contamination at each stage.",
                "datatype": float,
                "default": 17 / 3
            },
            "restore_rate_alpha": {
                "description": "Alpha parameter of beta distribution for rate that contamination decreases by after prevention effort.",
                "datatype": float,
                "default": 1.0
            },
            "restore_rate_beta": {
                "description": "Beta parameter of beta distribution for rate that contamination decreases by after prevention effort.",
                "datatype": float,
                "default": 3 / 7
            },
            "initial_rate_alpha": {
                "description": "Alpha parameter of beta distribution for initial contamination fraction.",
                "datatype": float,
                "default": 1.0
            },
            "initial_rate_beta": {
                "description": "Beta parameter of beta distribution for initial contamination fraction.",
                "datatype": float,
                "default": 30.0
            },
            "stages": {
                "description": "Stage of food supply chain.",
                "datatype": int,
                "default": 5
            },
            "prev_decision": {
                "description": "Prevention decision.",
                "datatype": tuple,
                "default": (0, 0, 0, 0, 0)
            }
        }
        self.check_factor_list = {
            "contam_rate_alpha": self.check_contam_rate_alpha,
            "contam_rate_beta": self.check_contam_rate_beta,
            "restore_rate_alpha": self.check_restore_rate_alpha,
            "restore_rate_beta": self.check_restore_rate_beta,
            "initial_rate_alpha": self.check_initial_rate_alpha,
            "initial_rate_beta": self.check_initial_rate_beta,
            "stages": self.check_stages,
            "prev_decision": self.check_prev_decision
        }
        # Set factors of the simulation model.
        super().__init__(fixed_factors)

    def check_contam_rate_alpha(self):
        return self.factors["contam_rate_alpha"] > 0

    def check_contam_rate_beta(self):
        return self.factors["contam_rate_beta"] > 0

    def check_restore_rate_alpha(self):
        return self.factors["restore_rate_alpha"] > 0

    def check_restore_rate_beta(self):
        return self.factors["restore_rate_beta"] > 0

    def check_initial_rate_alpha(self):
        return self.factors["initial_rate_alpha"] > 0

    def check_initial_rate_beta(self):
        return self.factors["initial_rate_beta"] > 0

    def check_prev_cost(self):
        return all(cost > 0 for cost in self.factors["prev_cost"])

    def check_stages(self):
        return self.factors["stages"] > 0

    def check_prev_decision(self):
        # Chained comparison; the original `u >= 0 & u <= 1` parsed as
        # `u >= (0 & u) <= 1` because bitwise & binds tighter than comparisons.
        return all(0 <= u <= 1 for u in self.factors["prev_decision"])

    def check_simulatable_factors(self):
        # Check for matching number of stages.
        if len(self.factors["prev_decision"]) != self.factors["stages"]:
            return False
        else:
            return True

    def replicate(self, rng_list):
        """
        Simulate a single replication for the current model factors.

        Arguments
        ---------
        rng_list : list of rng.MRG32k3a objects
            rngs for model to use when simulating a replication

        Returns
        -------
        responses : dict
            performance measures of interest
            "level" = a list of contamination levels over time
        gradients : dict of dicts
            gradient estimates for each response
        """
        # Designate separate random number generators.
        # Outputs will be coupled when generating demand.
        contam_rng = rng_list[0]
        restore_rng = rng_list[1]
        # Generate rates with beta distribution.
        X = np.zeros(self.factors["stages"])
        X[0] = restore_rng.betavariate(alpha=self.factors["initial_rate_alpha"], beta=self.factors["initial_rate_beta"])
        u = self.factors["prev_decision"]
        for i in range(1, self.factors["stages"]):
            c = contam_rng.betavariate(alpha=self.factors["contam_rate_alpha"], beta=self.factors["contam_rate_beta"])
            r = restore_rng.betavariate(alpha=self.factors["restore_rate_alpha"], beta=self.factors["restore_rate_beta"])
            X[i] = c * (1 - u[i]) * (1 - X[i - 1]) + (1 - r * u[i]) * X[i - 1]
        # Compose responses and gradients.
        responses = {'level': X}
        gradients = {response_key: {factor_key: np.nan for factor_key in self.specifications} for response_key in responses}
        return responses, gradients
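The stage recursion in `replicate` can be illustrated without the random-number machinery. A minimal sketch, with fixed growth rate `c` and restore rate `r` standing in for the beta-distributed draws (the function name and rate values below are made up for illustration):

```python
def contamination_levels(x0, u, c=0.5, r=0.5):
    # X_i = c * (1 - u_i) * (1 - X_{i-1}) + (1 - r * u_i) * X_{i-1},
    # where u_i is the prevention decision at stage i.
    levels = [x0]
    for i in range(1, len(u)):
        prev = levels[-1]
        levels.append(c * (1 - u[i]) * (1 - prev) + (1 - r * u[i]) * prev)
    return levels

no_prevention = contamination_levels(0.1, (0, 0, 0, 0, 0))
full_prevention = contamination_levels(0.1, (0, 1, 1, 1, 1))
# Without prevention the level creeps toward 1; with full prevention
# each later stage halves it (since r = 0.5 here).
```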
"""
Summary
-------
Minimize the (deterministic) total cost of prevention efforts.
"""

class ContaminationTotalCostDisc(Problem):
    """
    Base class to implement simulation-optimization problems.

    Attributes
    ----------
    name : string
        name of problem
    dim : int
        number of decision variables
    n_objectives : int
        number of objectives
    n_stochastic_constraints : int
        number of stochastic constraints
    minmax : tuple of int (+/- 1)
        indicator of maximization (+1) or minimization (-1) for each objective
    constraint_type : string
        description of constraints types:
            "unconstrained", "box", "deterministic", "stochastic"
    variable_type : string
        description of variable types:
            "discrete", "continuous", "mixed"
    lower_bounds : tuple
        lower bound for each decision variable
    upper_bounds : tuple
        upper bound for each decision variable
    gradient_available : bool
        indicates if gradient of objective function is available
    optimal_value : float
        optimal objective function value
    optimal_solution : tuple
        optimal solution
    model : Model object
        associated simulation model that generates replications
    model_default_factors : dict
        default values for overriding model-level default factors
    model_fixed_factors : dict
        combination of overridden model-level factors and defaults
    rng_list : list of rng.MRG32k3a objects
        list of RNGs used to generate a random initial solution
        or a random problem instance
    factors : dict
        changeable factors of the problem
    initial_solution : list
        default initial solution from which solvers start
    budget : int > 0
        max number of replications (fn evals) for a solver to take
    prev_cost : list
        cost of prevention
    upper_thres : float > 0
        upper limit of amount of contamination
    specifications : dict
        details of each factor (for GUI, data validation, and defaults)

    Arguments
    ---------
    name : str
        user-specified name for problem
    fixed_factors : dict
        dictionary of user-specified problem factors
    model_fixed_factors : dict
        subset of user-specified non-decision factors to pass through to the model

    See also
    --------
    base.Problem
    """
    def __init__(self, name="CONTAM-1", fixed_factors={}, model_fixed_factors={}):
        self.name = name
        self.n_objectives = 1
        self.minmax = (-1,)
        self.constraint_type = "stochastic"
        self.variable_type = "discrete"
        self.gradient_available = False
        self.optimal_value = None
        self.optimal_solution = None
        self.model_default_factors = {}
        self.model_decision_factors = {"prev_decision"}
        self.factors = fixed_factors
        self.specifications = {
            "initial_solution": {
                "description": "Initial solution.",
                "datatype": tuple,
                "default": (1, 1, 1, 1, 1)
            },
            "budget": {
                "description": "Max # of replications for a solver to take.",
                "datatype": int,
                "default": 10000
            },
            "prev_cost": {
                "description": "Cost of prevention.",
                "datatype": list,
                "default": [1, 1, 1, 1, 1]
            },
            "error_prob": {
                "description": "Error probability.",
                "datatype": list,
                "default": [0.2, 0.2, 0.2, 0.2, 0.2]
            },
            "upper_thres": {
                "description": "Upper limit of amount of contamination.",
                "datatype": list,
                "default": [0.1, 0.1, 0.1, 0.1, 0.1]
            }
        }
        self.check_factor_list = {
            "initial_solution": self.check_initial_solution,
            "budget": self.check_budget,
            "prev_cost": self.check_prev_cost,
            "error_prob": self.check_error_prob,
            "upper_thres": self.check_upper_thres,
        }
        super().__init__(fixed_factors, model_fixed_factors)
        # Instantiate model with fixed factors and overridden defaults.
        self.model = Contamination(self.model_fixed_factors)
        self.dim = self.model.factors["stages"]
        self.n_stochastic_constraints = self.model.factors["stages"]
        self.lower_bounds = (0,) * self.model.factors["stages"]
        self.upper_bounds = (1,) * self.model.factors["stages"]

    def check_prev_cost(self):
        if len(self.factors["prev_cost"]) != self.dim:
            return False
        elif any([elem < 0 for elem in self.factors["prev_cost"]]):
            return False
        else:
            return True

    def check_error_prob(self):
        if len(self.factors["error_prob"]) != self.dim:
            return False
        elif any(error < 0 for error in self.factors["error_prob"]):
            # Reject if any error probability is negative; the original
            # `all(...)` only rejected when every entry was negative.
            return False
        else:
            return True

    def check_upper_thres(self):
        return len(self.factors["upper_thres"]) == self.dim

    def vector_to_factor_dict(self, vector):
        """
        Convert a vector of variables to a dictionary with factor keys

        Arguments
        ---------
        vector : tuple
            vector of values associated with decision variables

        Returns
        -------
        factor_dict : dictionary
            dictionary with factor keys and associated values
        """
        factor_dict = {
            "prev_decision": vector[:]
        }
        return factor_dict

    def factor_dict_to_vector(self, factor_dict):
        """
        Convert a dictionary with factor keys to a vector
        of variables.

        Arguments
        ---------
        factor_dict : dictionary
            dictionary with factor keys and associated values

        Returns
        -------
        vector : tuple
            vector of values associated with decision variables
        """
        vector = tuple(factor_dict["prev_decision"])
        return vector

    def response_dict_to_objectives(self, response_dict):
        """
        Convert a dictionary with response keys to a vector
        of objectives.

        Arguments
        ---------
        response_dict : dictionary
            dictionary with response keys and associated values

        Returns
        -------
        objectives : tuple
            vector of objectives
        """
        objectives = (0,)
        return objectives

    def response_dict_to_stoch_constraints(self, response_dict):
        """
        Convert a dictionary with response keys to a vector
        of left-hand sides of stochastic constraints: E[Y] >= 0

        Arguments
        ---------
        response_dict : dictionary
            dictionary with response keys and associated values

        Returns
        -------
        stoch_constraints : tuple
            vector of LHSs of stochastic constraint
        """
        stoch_constraints = tuple(response_dict["level"] <= self.factors["upper_thres"])
        return stoch_constraints
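The conversion above can be demonstrated with toy numbers (the level values below are made up). Each replication yields one Boolean indicator per stage; averaging these indicators across replications estimates P(level <= threshold), which the chance constraint requires to be at least 1 - error_prob, with the deterministic component of the problem supplying the matching -(1 - error_prob) term.

```python
import numpy as np

levels = np.array([0.05, 0.12, 0.08])   # simulated contamination per stage (illustrative)
upper_thres = [0.1, 0.1, 0.1]
stoch_constraints = tuple(levels <= upper_thres)
# Stage 2 exceeds its 0.1 limit, so its indicator is False on this replication.
```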

    def deterministic_stochastic_constraints_and_gradients(self, x):
        """
        Compute deterministic components of stochastic constraints for a solution `x`.

        Arguments
        ---------
        x : tuple
            vector of decision variables

        Returns
        -------
        det_stoch_constraints : tuple
            vector of deterministic components of stochastic constraints
        det_stoch_constraints_gradients : tuple
            vector of gradients of deterministic components of stochastic constraints
        """
        det_stoch_constraints = tuple(-np.ones(self.dim) + self.factors["error_prob"])
        det_stoch_constraints_gradients = ((0,),)
        return det_stoch_constraints, det_stoch_constraints_gradients

    def deterministic_objectives_and_gradients(self, x):
        """
        Compute deterministic components of objectives for a solution `x`.

        Arguments
        ---------
        x : tuple
            vector of decision variables

        Returns
        -------
        det_objectives : tuple
            vector of deterministic components of objectives
        det_objectives_gradients : tuple
            vector of gradients of deterministic components of objectives
        """
        det_objectives = (np.dot(self.factors["prev_cost"], x),)
        det_objectives_gradients = ((self.factors["prev_cost"],),)
        return det_objectives, det_objectives_gradients

    def check_deterministic_constraints(self, x):
        """
        Check if a solution `x` satisfies the problem's deterministic constraints.

        Arguments
        ---------
        x : tuple
            vector of decision variables

        Returns
        -------
        satisfies : bool
            indicates if solution `x` satisfies the deterministic constraints.
        """
        # Convert to an array first: comparing a tuple to a scalar with >=
        # raises a TypeError in Python 3.
        x = np.asarray(x)
        return np.all(x >= 0) & np.all(x <= 1)

    def get_random_solution(self, rand_sol_rng):
        """
        Generate a random solution for starting or restarting solvers.

        Arguments
        ---------
        rand_sol_rng : rng.MRG32k3a object
            random-number generator used to sample a new random solution

        Returns
        -------
        x : tuple
            vector of decision variables
        """
        x = tuple([rand_sol_rng.randint(0, 1) for _ in range(self.dim)])
        return x

class ContaminationTotalCostCont(Problem):
    """
    Base class to implement simulation-optimization problems.

    Attributes
    ----------
    name : string
        name of problem
    dim : int
        number of decision variables
    n_objectives : int
        number of objectives
    n_stochastic_constraints : int
        number of stochastic constraints
    minmax : tuple of int (+/- 1)
        indicator of maximization (+1) or minimization (-1) for each objective
    constraint_type : string
        description of constraints types:
            "unconstrained", "box", "deterministic", "stochastic"
    variable_type : string
        description of variable types:
            "discrete", "continuous", "mixed"
    lower_bounds : tuple
        lower bound for each decision variable
    upper_bounds : tuple
        upper bound for each decision variable
    gradient_available : bool
        indicates if gradient of objective function is available
    optimal_value : float
        optimal objective function value
    optimal_solution : tuple
        optimal solution
    model : Model object
        associated simulation model that generates replications
    model_default_factors : dict
        default values for overriding model-level default factors
    model_fixed_factors : dict
        combination of overridden model-level factors and defaults
    rng_list : list of rng.MRG32k3a objects
        list of RNGs used to generate a random initial solution
        or a random problem instance
    factors : dict
        changeable factors of the problem
    initial_solution : list
        default initial solution from which solvers start
    budget : int > 0
        max number of replications (fn evals) for a solver to take
    prev_cost : list
        cost of prevention
    upper_thres : float > 0
        upper limit of amount of contamination
    specifications : dict
        details of each factor (for GUI, data validation, and defaults)

    Arguments
    ---------
    name : str
        user-specified name for problem
    fixed_factors : dict
        dictionary of user-specified problem factors
    model_fixed_factors : dict
        subset of user-specified non-decision factors to pass through to the model

    See also
    --------
    base.Problem
    """
    def __init__(self, name="CONTAM-2", fixed_factors={}, model_fixed_factors={}):
        self.name = name
        self.n_objectives = 1
        self.minmax = (-1,)
        self.constraint_type = "stochastic"
        self.variable_type = "continuous"
        self.gradient_available = False
        self.optimal_value = None
        self.optimal_solution = None
        self.model_default_factors = {}
        self.model_decision_factors = {"prev_decision"}
        self.factors = fixed_factors
        self.specifications = {
            "initial_solution": {
                "description": "Initial solution.",
                "datatype": tuple,
                "default": (1, 1, 1, 1, 1)
            },
            "budget": {
                "description": "Max # of replications for a solver to take.",
                "datatype": int,
                "default": 10000
            },
            "prev_cost": {
                "description": "Cost of prevention.",
                "datatype": list,
                "default": [1, 1, 1, 1, 1]
            },
            "error_prob": {
                "description": "Error probability.",
                "datatype": list,
                "default": [0.2, 0.2, 0.2, 0.2, 0.2]
            },
            "upper_thres": {
                "description": "Upper limit of amount of contamination.",
                "datatype": list,
                "default": [0.1, 0.1, 0.1, 0.1, 0.1]
            }
        }
        self.check_factor_list = {
            "initial_solution": self.check_initial_solution,
            "budget": self.check_budget,
            "prev_cost": self.check_prev_cost,
            "error_prob": self.check_error_prob,
            "upper_thres": self.check_upper_thres,
        }
        super().__init__(fixed_factors, model_fixed_factors)
        # Instantiate model with fixed factors and overridden defaults.
        self.model = Contamination(self.model_fixed_factors)
        self.dim = self.model.factors["stages"]
        self.n_stochastic_constraints = self.model.factors["stages"]
        self.lower_bounds = (0,) * self.model.factors["stages"]
        self.upper_bounds = (1,) * self.model.factors["stages"]

    def check_initial_solution(self):
        if len(self.factors["initial_solution"]) != self.dim:
            return False
        elif any(u < 0 or u > 1 for u in self.factors["initial_solution"]):
            # Reject if any coordinate falls outside [0, 1]; the original
            # `all(...)` only rejected when every coordinate was out of range.
            return False
        else:
            return True

    def check_prev_cost(self):
        if len(self.factors["prev_cost"]) != self.dim:
            return False
        elif any([elem < 0 for elem in self.factors["prev_cost"]]):
            return False
        else:
            return True

    def check_budget(self):
        return self.factors["budget"] > 0

    def check_error_prob(self):
        if len(self.factors["error_prob"]) != self.dim:
            return False
        elif any(error < 0 for error in self.factors["error_prob"]):
            # As above, `any(...)` replaces the original `all(...)`.
            return False
        else:
            return True

    def check_upper_thres(self):
        return len(self.factors["upper_thres"]) == self.dim

    def check_simulatable_factors(self):
        if len(self.lower_bounds) != self.dim:
            return False
        elif len(self.upper_bounds) != self.dim:
            return False
        else:
            return True
def vector_to_factor_dict(self, vector):
"""
Convert a vector of variables to a dictionary with factor keys
Arguments
---------
vector : tuple
vector of values associated with decision variables
Returns
-------
factor_dict : dictionary
dictionary with factor keys and associated values
"""
factor_dict = {
"prev_decision": vector[:]
}
return factor_dict
def factor_dict_to_vector(self, factor_dict):
"""
Convert a dictionary with factor keys to a vector
of variables.
Arguments
---------
factor_dict : dictionary
dictionary with factor keys and associated values
Returns
-------
vector : tuple
vector of values associated with decision variables
"""
vector = tuple(factor_dict["prev_decision"])
return vector
def response_dict_to_objectives(self, response_dict):
"""
Convert a dictionary with response keys to a vector
of objectives.
Arguments
---------
response_dict : dictionary
dictionary with response keys and associated values
Returns
-------
objectives : tuple
vector of objectives
"""
objectives = (0,)
return objectives
def response_dict_to_stoch_constraints(self, response_dict):
"""
Convert a dictionary with response keys to a vector
of left-hand sides of stochastic constraints: E[Y] >= 0
Arguments
---------
response_dict : dictionary
dictionary with response keys and associated values
Returns
-------
stoch_constraints : tuple
vector of LHSs of stochastic constraint
"""
stoch_constraints = tuple(response_dict["level"] <= self.factors["upper_thres"])
return stoch_constraints
def deterministic_stochastic_constraints_and_gradients(self, x):
"""
Compute deterministic components of stochastic constraints for a solution `x`.
Arguments
---------
x : tuple
vector of decision variables
Returns
-------
det_stoch_constraints : tuple
vector of deterministic components of stochastic constraints
det_stoch_constraints_gradients : tuple
vector of gradients of deterministic components of stochastic constraints
"""
det_stoch_constraints = tuple(-np.ones(self.dim) + self.factors["error_prob"])
        det_stoch_constraints_gradients = ((0,),)  # tuple of tuples (self.dim by self.dim) of zeros
return det_stoch_constraints, det_stoch_constraints_gradients
def deterministic_objectives_and_gradients(self, x):
"""
Compute deterministic components of objectives for a solution `x`.
Arguments
---------
x : tuple
vector of decision variables
Returns
-------
det_objectives : tuple
vector of deterministic components of objectives
det_objectives_gradients : tuple
vector of gradients of deterministic components of objectives
"""
det_objectives = (np.dot(self.factors["prev_cost"], x),)
det_objectives_gradients = ((self.factors["prev_cost"],),)
return det_objectives, det_objectives_gradients
def check_deterministic_constraints(self, x):
"""
Check if a solution `x` satisfies the problem's deterministic constraints.
Arguments
---------
x : tuple
vector of decision variables
Returns
-------
satisfies : bool
indicates if solution `x` satisfies the deterministic constraints.
"""
        # Cast the tuple to an array so the elementwise comparisons are valid.
        x = np.asarray(x)
        return bool(np.all(x >= 0) and np.all(x <= 1))
def get_random_solution(self, rand_sol_rng):
"""
Generate a random solution for starting or restarting solvers.
Arguments
---------
rand_sol_rng : rng.MRG32k3a object
random-number generator used to sample a new random solution
Returns
-------
x : tuple
vector of decision variables
"""
x = tuple([rand_sol_rng.random() for _ in range(self.dim)])
return x
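The box-constraint feasibility test in `check_deterministic_constraints` above reduces to an elementwise bounds check on [0, 1]. A minimal standalone sketch of that check (the helper name `box_feasible` is illustrative, not part of the module):

```python
import numpy as np

def box_feasible(x, lower=0.0, upper=1.0):
    """Return True iff every coordinate of `x` lies in [lower, upper]."""
    arr = np.asarray(x, dtype=float)
    return bool(np.all(arr >= lower) and np.all(arr <= upper))

# The default initial solution (1, 1, 1, 1, 1) sits on the upper bound and is
# feasible; pushing any coordinate outside [0, 1] breaks feasibility.
```

Casting through `np.asarray` matters because comparing a plain tuple against a scalar with `>=` raises a `TypeError` in Python 3.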
# cupyimg/scipy/stats/__init__.py (haesleinhuepf/cupyimg, BSD-3-Clause)
from .distributions import *  # noqa
# .mario/origin/__init__.py (run-hub/run, Unlicense)
from .module import ProjectModule
# utils/__init__.py (YPFoerster/pattern_walker, MIT)
from pattern_walker.utils.utils import *
# __init__.py (kingspp/tensorflow-playground, MIT)
from tfplay import *
# component/model/__init__.py (dfguerrerom/restoration_planning_module, MIT)
from .customize_layer_model import *
from .questionnaire_model import *
# apps/sample/tests/test_search.py (sotkonstantinidis/testcircle, Apache-2.0)
# Prevent logging of Elasticsearch queries
import logging
import pytest
logging.disable(logging.CRITICAL)
import collections
from django.db.models import Q
from qcat.tests import TestCase
from questionnaire.models import Questionnaire
from questionnaire.utils import get_list_values
from search.search import advanced_search
from search.tests.test_index import create_temp_indices
FilterParam = collections.namedtuple(
'FilterParam',
['questiongroup', 'key', 'values', 'operator', 'type'])
@pytest.mark.usefixtures('es')
class AdvancedSearchTest(TestCase):
fixtures = [
'global_key_values',
'sample',
'samplemulti',
'sample_questionnaires_search',
]
def setUp(self):
create_temp_indices([('sample', '2015'), ('samplemulti', '2015')])
def test_advanced_search(self):
filter_param = FilterParam(
questiongroup='qg_11', key='key_14', values=['value_14_1'],
operator='eq', type='image_checkbox')
key_search = advanced_search(
filter_params=[filter_param],
configuration_codes=['sample']).get('hits')
self.assertEqual(key_search.get('total'), 2)
filter_param = FilterParam(
questiongroup='qg_11', key='key_14', values=['value_14_2'],
operator='eq', type='image_checkbox')
key_search = advanced_search(
filter_params=[filter_param],
configuration_codes=['sample']).get('hits')
self.assertEqual(key_search.get('total'), 1)
def test_advanced_search_single_filter(self):
filter_param = FilterParam(
questiongroup='qg_11', key='key_14', values=['value_14_1'],
operator='eq', type='image_checkbox')
search = advanced_search(
filter_params=[filter_param], configuration_codes=['sample']
).get('hits')
self.assertEqual(search.get('total'), 2)
def test_advanced_search_multiple_arguments(self):
query_string = 'key'
filter_param = FilterParam(
questiongroup='qg_35', key='key_48', values=['value_1'],
operator='eq', type='radio')
search = advanced_search(
filter_params=[filter_param],
query_string=query_string,
configuration_codes=['sample']
).get('hits')
self.assertEqual(search.get('total'), 1)
hit_ids = [r.get('_id') for r in search.get('hits')]
self.assertEqual(hit_ids, ['1'])
def test_advanced_search_multiple_arguments_match_one(self):
query_string = 'key'
filter_param = FilterParam(
questiongroup='qg_35', key='key_48', values=['value_1'],
operator='eq', type='radio')
search = advanced_search(
filter_params=[filter_param],
query_string=query_string,
configuration_codes=['sample'],
match_all=False
).get('hits')
self.assertEqual(search.get('total'), 2)
hit_ids = [r.get('_id') for r in search.get('hits')]
self.assertEqual(hit_ids, ['2', '1'])
def test_advanced_search_multiple_arguments_2_match_one(self):
query_string = 'key'
filter_param = FilterParam(
questiongroup='qg_11', key='key_14', values=['value_14_1'],
operator='eq', type='image_checkbox')
search = advanced_search(
filter_params=[filter_param],
query_string=query_string,
configuration_codes=['sample'],
match_all=False
).get('hits')
self.assertEqual(search.get('total'), 3)
hit_ids = [r.get('_id') for r in search.get('hits')]
self.assertEqual(hit_ids, ['2', '1', '5'])
def test_advanced_search_multiple_arguments_2(self):
query_string = 'key'
filter_param = FilterParam(
questiongroup='qg_11', key='key_14', values=['value_14_1'],
operator='eq', type='image_checkbox')
search = advanced_search(
filter_params=[filter_param],
query_string=query_string,
configuration_codes=['sample']
).get('hits')
self.assertEqual(search.get('total'), 1)
hit_ids = [r.get('_id') for r in search.get('hits')]
self.assertEqual(hit_ids, ['1'])
def test_advanced_search_multiple_arguments_same_filter(self):
filter_param = FilterParam(
questiongroup='qg_11', key='key_14',
values=['value_14_1', 'value_14_3'],
operator='eq', type='image_checkbox')
search = advanced_search(
filter_params=[filter_param],
configuration_codes=['sample']
).get('hits')
self.assertEqual(search.get('total'), 3)
hit_ids = [r.get('_id') for r in search.get('hits')]
self.assertEqual(hit_ids, ['1', '5', '4'])
def test_advanced_search_multiple_arguments_same_filter_2(self):
filter_param_1 = FilterParam(
questiongroup='qg_11', key='key_14',
values=['value_14_1', 'value_14_3'],
operator='eq', type='image_checkbox')
filter_param_2 = FilterParam(
questiongroup='qg_35', key='key_48', values=['value_3'],
operator='eq', type='radio')
search = advanced_search(
filter_params=[filter_param_1, filter_param_2],
configuration_codes=['sample']
).get('hits')
self.assertEqual(search.get('total'), 1)
hit_ids = [r.get('_id') for r in search.get('hits')]
self.assertEqual(hit_ids, ['4'])
def test_advanced_search_multiple_arguments_same_filter_2_match_one(self):
filter_param_1 = FilterParam(
questiongroup='qg_11', key='key_14',
values=['value_14_1', 'value_14_3'],
operator='eq', type='image_checkbox')
filter_param_2 = FilterParam(
questiongroup='qg_35', key='key_48', values=['value_2'],
operator='eq', type='radio')
search = advanced_search(
filter_params=[filter_param_1, filter_param_2],
configuration_codes=['sample'],
match_all=False,
).get('hits')
self.assertEqual(search.get('total'), 4)
hit_ids = [r.get('_id') for r in search.get('hits')]
self.assertListEqual(hit_ids, ['1', '2', '5', '4'])
def test_advanced_search_gte(self):
filter_param = FilterParam(
questiongroup='qg_11', key='key_14', values=['2'],
operator='gte', type='image_checkbox')
with self.assertRaises(NotImplementedError):
search = advanced_search(
filter_params=[filter_param],
configuration_codes=['sample']
).get('hits')
self.assertEqual(search.get('total'), 2)
hit_ids = [r.get('_id') for r in search.get('hits')]
self.assertEqual(hit_ids, ['4', '1'])
def test_advanced_search_lt(self):
filter_param = FilterParam(
questiongroup='qg_11', key='key_14', values=['2'],
operator='lt', type='image_checkbox')
with self.assertRaises(NotImplementedError):
search = advanced_search(
filter_params=[filter_param],
configuration_codes=['sample']
).get('hits')
self.assertEqual(search.get('total'), 2)
hit_ids = [r.get('_id') for r in search.get('hits')]
self.assertEqual(hit_ids, ['5', '1'])
def test_advanced_search_lte(self):
filter_param = FilterParam(
questiongroup='qg_35', key='key_48', values=['2'],
operator='lte', type='radio')
with self.assertRaises(NotImplementedError):
search = advanced_search(
filter_params=[filter_param],
configuration_codes=['sample']
).get('hits')
self.assertEqual(search.get('total'), 2)
hit_ids = [r.get('_id') for r in search.get('hits')]
self.assertEqual(hit_ids, ['2', '1'])
def test_advanced_search_gte_lte(self):
filter_param_1 = FilterParam(
questiongroup='qg_11', key='key_14', values=['1'],
operator='lte', type='image_checkbox')
filter_param_2 = FilterParam(
questiongroup='qg_11', key='key_14', values=['3'],
operator='gte', type='image_checkbox')
with self.assertRaises(NotImplementedError):
search = advanced_search(
filter_params=[filter_param_1, filter_param_2],
configuration_codes=['sample'],
match_all=False,
).get('hits')
self.assertEqual(search.get('total'), 3)
hit_ids = [r.get('_id') for r in search.get('hits')]
self.assertEqual(hit_ids, ['5', '4', '1'])
@pytest.mark.usefixtures('es')
class GetListValuesTest(TestCase):
fixtures = [
'global_key_values',
'sample',
'samplemulti',
'sample_questionnaires_search',
]
def setUp(self):
create_temp_indices([('sample', '2015'), ('samplemulti', '2015')])
def test_returns_same_result_for_es_search_and_db_objects(self):
es_hits = advanced_search(
filter_params=[], query_string='key',
configuration_codes=['sample'])
res_1 = get_list_values(
configuration_code='sample', es_hits=es_hits.get(
'hits', {}).get('hits', []))
ids = [q.get('id') for q in res_1]
res_2 = get_list_values(
configuration_code='sample',
questionnaire_objects=Questionnaire.objects.filter(pk__in=ids),
status_filter=Q())
for res in [res_1, res_2]:
for r in res:
self.assertEqual(r.get('configuration'), 'sample')
self.assertIn('key_1', r)
self.assertIn('key_5', r)
self.assertIn('created', r)
self.assertIn('updated', r)
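Every test above drives `advanced_search` through `FilterParam` namedtuples. The tuple itself can be exercised without an Elasticsearch backend; a small sketch using only the standard library (values mirror the fixtures above):

```python
import collections

FilterParam = collections.namedtuple(
    'FilterParam',
    ['questiongroup', 'key', 'values', 'operator', 'type'])

param = FilterParam(
    questiongroup='qg_11', key='key_14', values=['value_14_1'],
    operator='eq', type='image_checkbox')

# Fields are addressable by name, and _asdict() yields the mapping a search
# backend can translate into a nested query against the questionnaire index.
print(param.operator)          # eq
print(param._asdict()['key'])  # key_14
```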
# pyenv/lib/python3.6/stat.py (ronald-rgr/ai-chatbot-smartguide, Apache-2.0):
# XSym symlink stub pointing at
# /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/stat.py
# iuml/annotation/gather_data/__init__.py (fierval/mlsdk, MIT)
# packaging annotation tool
from . import *
# hello/views.py (llcranmer/dunder-mifflin-paper-company, MIT)
from django.http import HttpResponse
from django.shortcuts import render
def hello(request):
return HttpResponse("Online Orders Coming Soon!")
# pytbot/__init__.py (alex3d/pytbot, MIT)
from .api import *
from .bot import *
# deep_aqi/__init__.py (fdanieluk/deep_aqi, MIT)
from deep_aqi.config import ROOT
# hello_world.py (emjosephs/phs-outreach, MIT)
# this is a python script
# this is a comment
print("Hello World!")
# lib/hover_handler.py (jtowers/dxmate, MIT)
import sublime
from .util import *
from .event_hub import *
from .languageServer import *
from .diagnostic import *
# app/user/forms.py (ab7289-tandon-nyu/csgy6083_PDS_Project, MIT)
# TODO implement user form
# TODO implement role model forms
# TODO implement ab_user_role_mtom forms
# configs/deepim/ycbvPbrSO/FlowNet512_1.5AugCosyAAEGray_NoiseRandom_AggressiveR_ClipGrad_fxfy1_Dtw01_LogDz_PM10_Flat_ycbvPbr_SO/FlowNet512_1.5AugCosyAAEGray_NoiseRandom_AggressiveR_ClipGrad_fxfy1_Dtw01_LogDz_PM10_Flat_Pbr_11_19PitcherBase_bop_test.py
# (THU-DA-6D-Pose-Group/self6dpp, Apache-2.0)
_base_ = "./FlowNet512_1.5AugCosyAAEGray_NoiseRandom_AggressiveR_ClipGrad_fxfy1_Dtw01_LogDz_PM10_Flat_Pbr_01_02MasterChefCan_bop_test.py"
OUTPUT_DIR = "output/deepim/ycbvPbrSO/FlowNet512_1.5AugCosyAAEGray_NoiseRandom_AggressiveR_ClipGrad_fxfy1_Dtw01_LogDz_PM10_Flat_ycbvPbr_SO/11_19PitcherBase"
DATASETS = dict(TRAIN=("ycbv_019_pitcher_base_train_pbr",))
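Configs in this family inherit from a `_base_` file and then override a handful of keys. The merge semantics can be sketched with a plain dict update; the actual loader in the repo is more involved (and may merge recursively), so the names and structure below are illustrative only:

```python
def merge_config(base: dict, override: dict) -> dict:
    """Shallow merge: keys present in `override` win over `base`."""
    merged = dict(base)
    merged.update(override)
    return merged

base_cfg = {"OUTPUT_DIR": "output/base",
            "DATASETS": {"TRAIN": ("ycbv_train_pbr",)}}
child_cfg = {"DATASETS": {"TRAIN": ("ycbv_019_pitcher_base_train_pbr",)}}
cfg = merge_config(base_cfg, child_cfg)
# cfg["OUTPUT_DIR"] is inherited; cfg["DATASETS"] is replaced by the child.
```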
# Codeforces/289 Division 2/Problem C/gen.py (VastoLorde95/Competitive-Programming, MIT)
# Test-case generator: a count of 100 followed by 100 lines of the value 18.
print(100)
for _ in range(100):
    print(18)
# markovflow/kernels/matern.py (prakharverma/markovflow, Apache-2.0)
#
# Copyright (c) 2021 The Markovflow Contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Module containing the Matern family of kernels."""
import tensorflow as tf
from gpflow import Parameter, default_float
from gpflow.utilities import positive
from markovflow.kernels.sde_kernel import StationaryKernel
from markovflow.utils import tf_scope_class_decorator, tf_scope_fn_decorator
@tf_scope_class_decorator
class Matern12(StationaryKernel):
r"""
Represents the Matern1/2 kernel. This kernel has the formula:
.. math:: C(x, x') = σ² exp(-|x - x'| / ℓ)
...where lengthscale :math:`ℓ` and signal variance :math:`σ²` are kernel parameters.
This defines an SDE where:
.. math::
&F = - 1/ℓ\\
&L = 1
...so that :math:`Aₖ = exp(-Δtₖ/ℓ)`.
"""
def __init__(
self, lengthscale: float, variance: float, output_dim: int = 1, jitter: float = 0.0
) -> None:
"""
:param lengthscale: A value for the lengthscale parameter.
:param variance: A value for the variance parameter.
:param output_dim: The output dimension of the kernel.
:param jitter: A small non-negative number to add into a matrix's diagonal to
maintain numerical stability during inversion.
"""
super().__init__(output_dim, jitter=jitter)
_check_lengthscale_and_variance(lengthscale, variance)
self._lengthscale = Parameter(lengthscale, transform=positive(), name="lengthscale")
self._variance = Parameter(variance, transform=positive(), name="variance")
@property
def state_dim(self) -> int:
"""Return the state dimension of the kernel, which is always one."""
return 1
def state_transitions(self, transition_times: tf.Tensor, time_deltas: tf.Tensor) -> tf.Tensor:
"""
Return the state transition matrices kernel.
The state dimension is one, so the matrix exponential reduces to a standard one:
.. math:: Aₖ = exp(-Δtₖ/ℓ)
Because this is a stationary kernel, `transition_times` is ignored.
:param transition_times: A tensor of times at which to produce matrices, with shape
``batch_shape + [num_transitions]``. Ignored.
:param time_deltas: A tensor of time gaps for which to produce matrices, with shape
``batch_shape + [num_transitions]``.
:return: A tensor with shape ``batch_shape + [num_transitions, state_dim, state_dim]``.
"""
tf.debugging.assert_rank_at_least(time_deltas, 1, message="time_deltas cannot be a scalar.")
state_transitions = tf.exp(-time_deltas / self._lengthscale)[..., None, None]
shape = tf.concat([tf.shape(time_deltas), [self.state_dim, self.state_dim]], axis=0)
tf.debugging.assert_equal(tf.shape(state_transitions), shape)
return state_transitions
@property
def feedback_matrix(self) -> tf.Tensor:
"""
Return the feedback matrix :math:`F`. This is where:
.. math:: dx(t)/dt = F x(t) + L w(t)
For this kernel, note that :math:`F = - 1 / ℓ`.
:return: A tensor with shape ``[state_dim, state_dim]``.
"""
return tf.identity([[-1.0 / self._lengthscale]])
@property
def steady_state_covariance(self) -> tf.Tensor:
"""
Return the steady state covariance :math:`P∞`. For this kernel,
this is the variance hyperparameter.
:return: A tensor with shape ``[state_dim, state_dim]``.
"""
return tf.identity(tf.reshape(self._variance, (self.state_dim, self.state_dim)))
@property
def lengthscale(self) -> Parameter:
"""
Return the lengthscale parameter. This is a GPflow
`Parameter <https://gpflow.readthedocs.io/en/master/gpflow/index.html#gpflow-parameter>`_.
"""
return self._lengthscale
@property
def variance(self) -> Parameter:
"""
Return the variance parameter. This is a GPflow
`Parameter <https://gpflow.readthedocs.io/en/master/gpflow/index.html#gpflow-parameter>`_.
"""
return self._variance
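As a dependency-free sanity check of the docstring above (standard library only, illustrative numbers): the product of the scalar transitions :math:`Aₖ = exp(-Δtₖ/ℓ)` applied to :math:`P∞ = σ²` reproduces the closed-form Matern1/2 covariance over the total time gap.

```python
import math

def matern12_cov(dt, lengthscale, variance):
    """Closed-form Matern1/2 covariance C(x, x') = σ² exp(-|x - x'| / ℓ)."""
    return variance * math.exp(-abs(dt) / lengthscale)

def state_space_cov(time_deltas, lengthscale, variance):
    """Same quantity from the SDE view: Cov(x₀, x_K) = P∞ ∏ₖ Aₖ with Aₖ = exp(-Δtₖ/ℓ)."""
    a_prod = 1.0
    for dt in time_deltas:
        a_prod *= math.exp(-dt / lengthscale)
    return variance * a_prod

# Illustrative values: three gaps summing to 1.0.
deltas = [0.3, 0.5, 0.2]
lhs = state_space_cov(deltas, lengthscale=1.5, variance=2.0)
rhs = matern12_cov(sum(deltas), lengthscale=1.5, variance=2.0)
assert abs(lhs - rhs) < 1e-12
```

The agreement is exact (up to rounding) because the scalar transitions compose multiplicatively over consecutive gaps.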
@tf_scope_class_decorator
class OrnsteinUhlenbeck(StationaryKernel):
r"""
Represents the Ornstein–Uhlenbeck kernel.
This is an alternative parameterization of the Matern1/2 kernel.
This kernel has the formula:
.. math:: C(x, x') = q/(2λ) exp(-λ|x - x'|)
...where decay :math:`λ` and diffusion coefficient :math:`q` are kernel parameters.
This defines an SDE where:
.. math::
&F = - λ\\
&L = q
...so that :math:`Aₖ = exp(-λ Δtₖ)`.
"""
def __init__(
self, decay: float, diffusion: float, output_dim: int = 1, jitter: float = 0.0
) -> None:
"""
:param decay: A value for the decay parameter.
:param diffusion: A value for the diffusion parameter.
:param output_dim: The output dimension of the kernel.
:param jitter: A small non-negative number to add into a matrix's diagonal to
maintain numerical stability during inversion.
"""
super().__init__(output_dim, jitter=jitter)
_check_lengthscale_and_variance(decay, diffusion)
self._decay = Parameter(decay, transform=positive(), name="decay")
self._diffusion = Parameter(diffusion, transform=positive(), name="diffusion")
@property
def state_dim(self) -> int:
"""Return the state dimension of the kernel, which is always one."""
return 1
def state_transitions(self, transition_times: tf.Tensor, time_deltas: tf.Tensor) -> tf.Tensor:
"""
Return the state transition matrices for the kernel.
The state dimension is one, so the matrix exponential reduces to a standard one:
.. math:: Aₖ = exp(-λ Δtₖ)
Because this is a stationary kernel, `transition_times` is ignored.
:param transition_times: A tensor of times at which to produce matrices, with shape
``batch_shape + [num_transitions]``. Ignored.
:param time_deltas: A tensor of time gaps for which to produce matrices, with shape
``batch_shape + [num_transitions]``.
:return: A tensor with shape ``batch_shape + [num_transitions, state_dim, state_dim]``.
"""
tf.debugging.assert_rank_at_least(time_deltas, 1, message="time_deltas cannot be a scalar.")
state_transitions = tf.exp(-time_deltas * self._decay)[..., None, None]
shape = tf.concat([tf.shape(time_deltas), [self.state_dim, self.state_dim]], axis=0)
tf.debugging.assert_equal(tf.shape(state_transitions), shape)
return state_transitions
@property
def feedback_matrix(self) -> tf.Tensor:
"""
Return the feedback matrix :math:`F`. This is where:
.. math:: dx(t)/dt = F x(t) + L w(t)
For this kernel, note that :math:`F = -λ`.
:return: A tensor with shape ``[state_dim, state_dim]``.
"""
return tf.identity([[-self._decay]])
@property
def steady_state_covariance(self) -> tf.Tensor:
"""
Return the steady state covariance :math:`P∞`. For this kernel,
this is :math:`q/(2λ)`.
:return: A tensor with shape ``[state_dim, state_dim]``.
"""
return tf.identity(
tf.reshape(0.5 * self._diffusion / self._decay, (self.state_dim, self.state_dim))
)
@property
def decay(self) -> Parameter:
"""
Return the decay parameter. This is a GPflow
`Parameter <https://gpflow.readthedocs.io/en/master/gpflow/index.html#gpflow-parameter>`_.
"""
return self._decay
@property
def diffusion(self) -> Parameter:
"""
Return the diffusion parameter. This is a GPflow
`Parameter <https://gpflow.readthedocs.io/en/master/gpflow/index.html#gpflow-parameter>`_.
"""
return self._diffusion
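The two parameterizations can be checked against each other with the standard library alone: with :math:`λ = 1/ℓ` and :math:`q = 2σ²/ℓ`, the Ornstein–Uhlenbeck covariance :math:`q/(2λ) exp(-λ|τ|)` coincides with the Matern1/2 form :math:`σ² exp(-|τ|/ℓ)`. Values below are illustrative.

```python
import math

def ou_cov(tau, decay, diffusion):
    """Ornstein–Uhlenbeck covariance C(τ) = q/(2λ) exp(-λ|τ|)."""
    return diffusion / (2.0 * decay) * math.exp(-decay * abs(tau))

def matern12_cov(tau, lengthscale, variance):
    """Matern1/2 covariance C(τ) = σ² exp(-|τ|/ℓ)."""
    return variance * math.exp(-abs(tau) / lengthscale)

# Matching parameters: λ = 1/ℓ, q = 2σ²/ℓ.
lengthscale, variance = 1.5, 2.0
decay, diffusion = 1.0 / lengthscale, 2.0 * variance / lengthscale
for tau in (0.0, 0.3, 1.7):
    assert abs(ou_cov(tau, decay, diffusion)
               - matern12_cov(tau, lengthscale, variance)) < 1e-12
```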
@tf_scope_class_decorator
class Matern32(StationaryKernel):
r"""
Represents the Matern3/2 kernel. This kernel has the formula:
.. math:: C(x, x') = σ² (1 + λ|x - x'|) exp(-λ|x - x'|)
...where :math:`λ = √3 / ℓ`, and lengthscale :math:`ℓ` and signal variance :math:`σ²`
are kernel parameters.
The transition matrix :math:`F` in the SDE form for this kernel is:
.. math::
F = &[[0, 1],\\
&[-λ², -2λ]]
Covariance for the initial state is:
.. math::
P∞ = [&[1, 0],\\
&[0, λ²]] * \verb|variance|
...where `variance` is a kernel parameter.
Since the characteristic equation for the feedback matrix :math:`F` for this kernel
is :math:`(λI + F)² = 0`, the state transition matrix is:
.. math::
Aₖ &= expm(FΔtₖ)\\
&= exp(-λΔtₖ) expm((λI + F)Δtₖ)\\
&= exp(-λΔtₖ) (I + (λI + F)Δtₖ)
...where :math:`expm` is the matrix exponential operator. Note that all higher order terms of
:math:`expm((λI + F)Δtₖ)` disappear.
"""
def __init__(
self, lengthscale: float, variance: float, output_dim: int = 1, jitter: float = 0.0
) -> None:
"""
:param lengthscale: A value for the lengthscale parameter.
:param variance: A value for the variance parameter.
:param output_dim: The output dimension of the kernel.
:param jitter: A small non-negative number to add into a matrix's diagonal to
maintain numerical stability during inversion.
"""
super().__init__(output_dim, jitter=jitter)
_check_lengthscale_and_variance(lengthscale, variance)
self._lengthscale = Parameter(lengthscale, transform=positive(), name="lengthscale")
self._variance = Parameter(variance, transform=positive(), name="variance")
@property
def _lambda(self) -> tf.Tensor:
"""Return λ, the scalar used in the docstrings above."""
return tf.math.sqrt(tf.constant(3.0, dtype=default_float())) / self._lengthscale
@property
def state_dim(self) -> int:
"""Return the state dimension of the kernel, which is always two."""
return 2
def state_transitions(self, transition_times: tf.Tensor, time_deltas: tf.Tensor) -> tf.Tensor:
"""
Return the state transition matrices for the kernel.
Because this is a stationary kernel, `transition_times` is ignored.
:param transition_times: A tensor of times at which to produce matrices, with shape
``batch_shape + [num_transitions]``. Ignored.
:param time_deltas: A tensor of time gaps for which to produce matrices, with shape
``batch_shape + [num_transitions]``.
:return: A tensor with shape ``batch_shape + [num_transitions, state_dim, state_dim]``.
"""
tf.debugging.assert_rank_at_least(time_deltas, 1, message="time_deltas cannot be a scalar.")
# [state_dim, state_dim]
I = tf.eye(self.state_dim, dtype=default_float())
# [..., num_transitions, 1, 1]
extended_time_deltas = time_deltas[..., None, None]
# (λI + F)t [..., num_transitions, state_dim, state_dim]
F_lambda_I_t = (self.feedback_matrix + self._lambda * I) * extended_time_deltas
# exp(-λΔtₖ)(I + (λI + F)Δtₖ) [..., num_transitions, state_dim, state_dim]
result = tf.exp(-self._lambda * extended_time_deltas) * (I + F_lambda_I_t)
shape = tf.concat([tf.shape(time_deltas), [self.state_dim, self.state_dim]], axis=0)
tf.debugging.assert_equal(tf.shape(result), shape)
return result
@property
def feedback_matrix(self) -> tf.Tensor:
r"""
Return the feedback matrix :math:`F`. This is where:
.. math:: dx(t)/dt = F x(t) + L w(t)
For this kernel, note that:
.. math::
F = &[0 &1]\\
&[-λ² &-2λ]
:return: A tensor with shape ``[state_dim, state_dim]``.
"""
return tf.identity([[0, 1], [-tf.square(self._lambda), -2.0 * self._lambda]])
@property
def steady_state_covariance(self) -> tf.Tensor:
r"""
Return the steady state covariance :math:`P∞`. This is given by:
.. math::
P∞ = σ² [&[1, 0],\\
&[0, λ²]]
:return: A tensor with shape ``[state_dim, state_dim]``.
"""
return self._variance * tf.convert_to_tensor(
value=[[1.0, 0], [0, tf.square(self._lambda)]], dtype=default_float()
)
@property
def lengthscale(self) -> Parameter:
"""
Return the lengthscale parameter. This is a GPflow
`Parameter <https://gpflow.readthedocs.io/en/master/gpflow/index.html#gpflow-parameter>`_.
"""
return self._lengthscale
@property
def variance(self) -> Parameter:
"""
Return the variance parameter. This is a GPflow
`Parameter <https://gpflow.readthedocs.io/en/master/gpflow/index.html#gpflow-parameter>`_.
"""
return self._variance
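The truncation argument above (since :math:`(λI + F)² = 0`, the matrix exponential series stops after the linear term) can be verified numerically without any dependencies. The sketch below compares the closed form against a truncated power series for :math:`expm`, using illustrative parameter values.

```python
import math

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_series(M, terms=40):
    # Truncated power series for expm(M); 40 terms is plenty for small ||M||.
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = mat_scale(1.0 / n, mat_mul(term, M))
        result = mat_add(result, term)
    return result

lengthscale, dt = 1.5, 0.4                       # illustrative values
lam = math.sqrt(3.0) / lengthscale
F = [[0.0, 1.0], [-lam * lam, -2.0 * lam]]
I = [[1.0, 0.0], [0.0, 1.0]]

# Closed form Aₖ = exp(-λΔt)(I + (λI + F)Δt), valid because (λI + F)² = 0.
N = mat_add(mat_scale(lam, I), F)                # nilpotent part λI + F
closed = mat_scale(math.exp(-lam * dt), mat_add(I, mat_scale(dt, N)))
series = expm_series(mat_scale(dt, F))
assert all(abs(closed[i][j] - series[i][j]) < 1e-9
           for i in range(2) for j in range(2))
```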
@tf_scope_class_decorator
class Matern52(StationaryKernel):
r"""
Represents the Matern5/2 kernel. This kernel has the formula:
.. math:: C(x, x') = σ² (1 + λ|x - x'| + λ²|x - x'|²/3) exp(-λ|x - x'|)
...where :math:`λ = √5 / ℓ`, and lengthscale :math:`ℓ` and signal variance :math:`σ²`
are kernel parameters.
The transition matrix :math:`F` in the SDE form for this kernel is::
F = [ 0, 1, 0]
[ 0, 0, 1]
[-λ³, -3λ², -3λ]
Covariance for the initial state is::
P∞ = σ² [ 1, 0, -λ²/3]
[ 0, λ²/3, 0]
[-λ²/3, 0, λ⁴]
Since the characteristic equation for the feedback matrix :math:`F` for this kernel
is :math:`(λI + F)³ = 0`, the state transition matrix is:
.. math::
Aₖ &= expm(FΔtₖ)\\
&= exp(-λΔtₖ) expm((λI + F)Δtₖ)\\
&= exp(-λΔtₖ) (I + (λI + F)Δtₖ + (λI + F)²Δtₖ²/2)
...where :math:`expm` is the matrix exponential operator. Note that all
higher order terms disappear.
"""
def __init__(
self, lengthscale: float, variance: float, output_dim: int = 1, jitter: float = 0.0
) -> None:
"""
:param lengthscale: A value for the lengthscale parameter.
:param variance: A value for the variance parameter.
:param output_dim: The output dimension of the kernel.
:param jitter: A small non-negative number to add into a matrix's diagonal to
maintain numerical stability during inversion.
"""
super().__init__(output_dim, jitter=jitter)
_check_lengthscale_and_variance(lengthscale, variance)
self._lengthscale = Parameter(lengthscale, transform=positive(), name="lengthscale")
self._variance = Parameter(variance, transform=positive(), name="variance")
@property
def _lambda(self) -> tf.Tensor:
"""Return λ, the scalar used in the docstrings above."""
return tf.math.sqrt(tf.constant(5.0, dtype=default_float())) / self._lengthscale
@property
def state_dim(self) -> int:
"""Return the state dimension of the kernel, which is always three."""
return 3
def state_transitions(self, transition_times: tf.Tensor, time_deltas: tf.Tensor) -> tf.Tensor:
"""
Return the state transition matrices for the kernel.
Because this is a stationary kernel, `transition_times` is ignored.
:param transition_times: A tensor of times at which to produce matrices, with shape
``batch_shape + [num_transitions]``. Ignored.
:param time_deltas: A tensor of time gaps for which to produce matrices, with shape
``batch_shape + [num_transitions]``.
:return: A tensor with shape ``batch_shape + [num_transitions, state_dim, state_dim]``.
"""
tf.debugging.assert_rank_at_least(time_deltas, 1, message="time_deltas cannot be a scalar.")
# [state_dim, state_dim]
I = tf.eye(self.state_dim, dtype=default_float())
extended_time_deltas = time_deltas[..., None, None]
# (λI + F)t [..., num_transitions, state_dim, state_dim]
F_lambda_I_t = (self.feedback_matrix + self._lambda * I) * extended_time_deltas
# exp(-λΔtₖ)(I + (λI + F)Δtₖ + (λI + F)²Δtₖ²/2) [..., num_transitions, state_dim, state_dim]
result = tf.exp(-self._lambda * extended_time_deltas) * (
I + F_lambda_I_t + F_lambda_I_t @ F_lambda_I_t / 2.0
)
shape = tf.concat([tf.shape(time_deltas), [self.state_dim, self.state_dim]], axis=0)
tf.debugging.assert_equal(tf.shape(result), shape)
return result
@property
def feedback_matrix(self) -> tf.Tensor:
r"""
Return the feedback matrix :math:`F`. This is where:
.. math:: dx(t)/dt = F x(t) + L w(t)
For this kernel, note that::
F = [[ 0, 1, 0]
[ 0, 0, 1]
[-λ³, -3λ², -3λ]]
:return: A tensor with shape ``[state_dim, state_dim]``.
"""
return tf.identity(
[
[0, 1, 0],
[0, 0, 1],
[-self._lambda ** 3, -3.0 * tf.square(self._lambda), -3.0 * self._lambda],
]
)
@property
def steady_state_covariance(self) -> tf.Tensor:
r"""
Return the steady state covariance :math:`P∞`. This is given by::
P∞ = σ² [ 1, 0, -λ²/3]
[ 0, λ²/3, 0]
[-λ²/3, 0, λ⁴]
:return: A tensor with shape ``[state_dim, state_dim]``.
"""
lambda_23 = tf.square(self._lambda) / 3.0
return self._variance * tf.convert_to_tensor(
value=[[1, 0, -lambda_23], [0, lambda_23, 0], [-lambda_23, 0, self._lambda ** 4]],
dtype=default_float(),
)
@property
def lengthscale(self) -> Parameter:
"""
Return the lengthscale parameter. This is a GPflow
`Parameter <https://gpflow.readthedocs.io/en/master/gpflow/index.html#gpflow-parameter>`_.
"""
return self._lengthscale
@property
def variance(self) -> Parameter:
"""
Return the variance parameter. This is a GPflow
`Parameter <https://gpflow.readthedocs.io/en/master/gpflow/index.html#gpflow-parameter>`_.
"""
return self._variance
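Analogously for Matern5/2, the series truncates after the quadratic term because :math:`(λI + F)³ = 0`: :math:`F` is the companion matrix of :math:`(s + λ)³`, so its minimal polynomial is :math:`(s + λ)³`. A dependency-free check, with an illustrative lengthscale:

```python
import math

lengthscale = 2.0                                # illustrative value
lam = math.sqrt(5.0) / lengthscale
F = [[0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0],
     [-lam ** 3, -3.0 * lam ** 2, -3.0 * lam]]
# The shifted matrix N = λI + F should be nilpotent of index 3.
N = [[F[i][j] + (lam if i == j else 0.0) for j in range(3)] for i in range(3)]

def mul3(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

N2 = mul3(N, N)
N3 = mul3(N2, N)
# N² is nonzero but N³ vanishes (up to rounding), so expm((λI + F)Δt)
# really is I + (λI + F)Δt + (λI + F)²Δt²/2 with no higher-order terms.
assert any(abs(N2[i][j]) > 1e-9 for i in range(3) for j in range(3))
assert all(abs(N3[i][j]) < 1e-9 for i in range(3) for j in range(3))
```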
@tf_scope_fn_decorator
def _check_lengthscale_and_variance(lengthscale: float, variance: float) -> None:
"""Verify that the lengthscale and variance are positive."""
if lengthscale <= 0.0:
raise ValueError("lengthscale must be positive.")
if variance <= 0.0:
raise ValueError("variance must be positive.")
| 36.470588 | 100 | 0.615661 | 2,485 | 19,220 | 4.629779 | 0.104225 | 0.037549 | 0.020339 | 0.025033 | 0.845024 | 0.829987 | 0.819731 | 0.817384 | 0.807736 | 0.802868 | 0 | 0.013269 | 0.262851 | 19,220 | 526 | 101 | 36.539924 | 0.797995 | 0.509261 | 0 | 0.646707 | 0 | 0 | 0.031726 | 0 | 0 | 0 | 0 | 0 | 0.047904 | 1 | 0.185629 | false | 0 | 0.02994 | 0 | 0.39521 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
bf2bc726c7f05cda707358f545875c63fedd9a72 | 32 | py | Python | 02.Data-Structures-and-Algorithms/05.OOPS-2/readme.py | PramitSahoo/Python-with-Data-Structures-and-Algorithms | f0004e2f5f981da2ae9c2b81c36659b1b7d92cc8 | [
"Apache-2.0"
] | null | null | null | 02.Data-Structures-and-Algorithms/05.OOPS-2/readme.py | PramitSahoo/Python-with-Data-Structures-and-Algorithms | f0004e2f5f981da2ae9c2b81c36659b1b7d92cc8 | [
"Apache-2.0"
] | null | null | null | 02.Data-Structures-and-Algorithms/05.OOPS-2/readme.py | PramitSahoo/Python-with-Data-Structures-and-Algorithms | f0004e2f5f981da2ae9c2b81c36659b1b7d92cc8 | [
"Apache-2.0"
] | null | null | null | print("This contains oops - 2")
| 16 | 31 | 0.6875 | 5 | 32 | 4.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 0.15625 | 32 | 1 | 32 | 32 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
17428904a367cfce35db8da2ea89352c33c8dcb5 | 42 | py | Python | pylanelet2/lanelet2/projection.py | FisherTiger95/Lanelet2_standalone | 8f0c6076db5a945f78ec773515035de0e11f07da | [
"BSD-3-Clause"
] | 2 | 2021-10-13T12:53:31.000Z | 2022-03-15T15:15:47.000Z | pylanelet2/lanelet2/projection.py | FisherTiger95/lanelet2_standalone | 8f0c6076db5a945f78ec773515035de0e11f07da | [
"BSD-3-Clause"
] | null | null | null | pylanelet2/lanelet2/projection.py | FisherTiger95/lanelet2_standalone | 8f0c6076db5a945f78ec773515035de0e11f07da | [
"BSD-3-Clause"
] | null | null | null | from liblanelet2_projection_pyapi import * | 42 | 42 | 0.904762 | 5 | 42 | 7.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025641 | 0.071429 | 42 | 1 | 42 | 42 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
174ead7de2c7a5be81292a6af1e0233851453318 | 11,224 | py | Python | fieldbillard/fields.py | DFNaiff/FieldBillard | 0cbdfbe3e0ee516f5820b2dfa27d9c4ca10aaba4 | [
"BSD-3-Clause"
] | null | null | null | fieldbillard/fields.py | DFNaiff/FieldBillard | 0cbdfbe3e0ee516f5820b2dfa27d9c4ca10aaba4 | [
"BSD-3-Clause"
] | null | null | null | fieldbillard/fields.py | DFNaiff/FieldBillard | 0cbdfbe3e0ee516f5820b2dfa27d9c4ca10aaba4 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
import itertools
import torch
from . import utils
class FieldObject(object):
def potential(self, x, y, charge, coupling):
pass
class Ring(FieldObject):
def __init__(self, radius: float, charge_density: float = 1.0,
x0: float = 0.0, y0: float = 0.0):
super().__init__()
self.radius = radius
self.charge_density = charge_density
self.x0 = x0
self.y0 = y0
def potential(self, x: torch.Tensor, y: torch.Tensor, charge: float, coupling: float = 1.0):
"""
Potential generated by the object on (x, y)
Parameters
----------
x : torch.Tensor
Position x-coordinate.
y : torch.Tensor
Position y-coordinate.
charge : float
Charge of particles.
coupling : float, optional
Coupling constant. The default is 1.0.
Returns
-------
value : torch.Tensor
Value of potential.
"""
r, _ = utils.to_polar(x - self.x0, y - self.y0)
value = coupling*self.charge_density*charge*utils.circle_phi(r/self.radius)
return value
class HorizontalLine(FieldObject):
def __init__(self, y0: float, charge_density: float = 1.0):
self.charge_density = charge_density
self.y0 = y0
def potential(self, x: torch.Tensor, y: torch.Tensor, charge: float, coupling: float = 1.0):
"""
Potential generated by the object on (x, y)
Parameters
----------
x : torch.Tensor
Position x-coordinate.
y : torch.Tensor
Position y-coordinate.
charge : float
Charge of particles.
coupling : float, optional
Coupling constant. The default is 1.0.
Returns
-------
value : torch.Tensor
Value of potential.
"""
value = -coupling*self.charge_density*charge*torch.log(torch.abs(y - self.y0))
return value
class VerticalLine(FieldObject):
def __init__(self, x0, charge_density=1.0):
self.charge_density = charge_density
self.x0 = x0
def potential(self, x: torch.Tensor, y: torch.Tensor, charge: float, coupling: float = 1.0):
"""
Potential generated by the object on (x, y)
Parameters
----------
x : torch.Tensor
Position x-coordinate.
y : torch.Tensor
Position y-coordinate.
charge : float
Charge of particles.
coupling : float, optional
Coupling constant. The default is 1.0.
Returns
-------
value : torch.Tensor
Value of potential.
"""
value = -coupling*self.charge_density*charge*torch.log(torch.abs(x - self.x0))
return value
class Hash(FieldObject):
def __init__(self, l, charge_density=1.0, x0=0.0, y0=0.0):
self.l = l
self.charge_density = charge_density
self.x0 = x0
self.y0 = y0
self._set_lines()
def potential(self, x: torch.Tensor, y: torch.Tensor, charge: float, coupling: float = 1.0):
"""
Potential generated by the object on (x, y)
Parameters
----------
x : torch.Tensor
Position x-coordinate.
y : torch.Tensor
Position y-coordinate.
charge : float
Charge of particles.
coupling : float, optional
Coupling constant. The default is 1.0.
Returns
-------
value : torch.Tensor
Value of potential.
"""
value = self._upper.potential(x, y, charge, coupling) + \
self._lower.potential(x, y, charge, coupling) + \
self._left.potential(x, y, charge, coupling) + \
self._right.potential(x, y, charge, coupling)
return value
def _set_lines(self):
self._upper = HorizontalLine(self.y0 + self.l/2, self.charge_density)
self._lower = HorizontalLine(self.y0 - self.l/2, self.charge_density)
self._left = VerticalLine(self.x0 + self.l/2, self.charge_density)
self._right = VerticalLine(self.x0 - self.l/2, self.charge_density)
class HorizontalFiniteLine(FieldObject):
def __init__(self, y0, l, x0=0.0, charge_density=1.0):
self.charge_density = charge_density
self.y0 = y0
self.x0 = x0
self.l = l
def potential(self, x: torch.Tensor, y: torch.Tensor, charge: float, coupling: float = 1.0):
"""
Potential generated by the object on (x, y)
Parameters
----------
x : torch.Tensor
Position x-coordinate.
y : torch.Tensor
Position y-coordinate.
charge : float
Charge of particles.
coupling : float, optional
Coupling constant. The default is 1.0.
Returns
-------
value : torch.Tensor
Value of potential.
"""
dx, dy = x - self.x0, y - self.y0
integral = torch.arcsinh((2*dx + self.l)/(2*torch.abs(dy))) - \
torch.arcsinh((2*dx - self.l)/(2*torch.abs(dy)))
value = coupling*self.charge_density*charge*integral
return value
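The arcsinh expression above is the closed form of the line-charge integral :math:`∫_{-l/2}^{l/2} ds / \sqrt{(dx - s)² + dy²}`. A quick midpoint-rule check with only the standard library (the values are illustrative):

```python
import math

def line_potential_closed(dx, dy, l):
    # asinh closed form for a uniform finite line of unit charge density.
    return (math.asinh((2 * dx + l) / (2 * abs(dy)))
            - math.asinh((2 * dx - l) / (2 * abs(dy))))

def line_potential_quadrature(dx, dy, l, n=20000):
    # Midpoint rule for the integral over s in [-l/2, l/2].
    h = l / n
    return sum(h / math.sqrt((dx - (-l / 2 + (k + 0.5) * h)) ** 2 + dy ** 2)
               for k in range(n))

assert abs(line_potential_closed(0.3, 0.8, 2.0)
           - line_potential_quadrature(0.3, 0.8, 2.0)) < 1e-6
```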
class VerticalFiniteLine(FieldObject):
def __init__(self, x0, l, y0=0, charge_density=1.0):
self.charge_density = charge_density
self.x0 = x0
self.l = l
self.y0 = y0
def potential(self, x: torch.Tensor, y: torch.Tensor, charge: float, coupling: float = 1.0):
"""
Potential generated by the object on (x, y)
Parameters
----------
x : torch.Tensor
Position x-coordinate.
y : torch.Tensor
Position y-coordinate.
charge : float
Charge of particles.
coupling : float, optional
Coupling constant. The default is 1.0.
Returns
-------
value : torch.Tensor
Value of potential.
"""
dx, dy = x - self.x0, y - self.y0
integral = torch.arcsinh((2*dy + self.l)/(2*torch.abs(dx))) - \
torch.arcsinh((2*dy - self.l)/(2*torch.abs(dx)))
value = coupling*self.charge_density*charge*integral
return value
class Square(FieldObject):
def __init__(self, l, charge_density=1.0, x0=0.0, y0=0.0):
self.l = l
self.charge_density = charge_density
self.x0 = x0
self.y0 = y0
self._set_lines()
def potential(self, x: torch.Tensor, y: torch.Tensor, charge: float, coupling: float = 1.0):
"""
Potential generated by the object on (x, y)
Parameters
----------
x : torch.Tensor
Position x-coordinate.
y : torch.Tensor
Position y-coordinate.
charge : float
Charge of particles.
coupling : float, optional
Coupling constant. The default is 1.0.
Returns
-------
value : torch.Tensor
Value of potential.
"""
value = self._upper.potential(x, y, charge, coupling) + \
self._lower.potential(x, y, charge, coupling) + \
self._left.potential(x, y, charge, coupling) + \
self._right.potential(x, y, charge, coupling)
return value
def _set_lines(self):
self._upper = HorizontalFiniteLine(self.y0 + self.l/2, self.l, self.x0,
self.charge_density)
self._lower = HorizontalFiniteLine(self.y0 - self.l/2, self.l, self.x0,
self.charge_density)
self._left = VerticalFiniteLine(self.x0 + self.l/2, self.l, self.y0,
self.charge_density)
self._right = VerticalFiniteLine(self.x0 - self.l/2, self.l, self.y0,
self.charge_density)
class FixedPoints(FieldObject):
def __init__(self, x0, y0, charge=1.0):
super().__init__()
self.x0 = x0 #(n, )
self.y0 = y0 #(n, )
self.charge = charge
# x0 and y0 are expected to be one-dimensional tensors of equal length.
def potential(self, x: torch.Tensor, y: torch.Tensor, charge: float, coupling: float = 1.0):
"""
Potential generated by the object on (x, y)
Parameters
----------
x : torch.Tensor
Position x-coordinate.
y : torch.Tensor
Position y-coordinate.
charge : float
Charge of particles.
coupling : float, optional
Coupling constant. The default is 1.0.
Returns
-------
value : torch.Tensor
Value of potential.
"""
#x : (m,)
#y : (m,)
#x0 : (n,)
#y0 : (n,)
x = x[..., None]
y = y[..., None]
x0 = self.x0 #(1, n)
y0 = self.y0 #(1, n)
d = torch.sqrt((x - x0)**2 + (y - y0)**2) #(m, n)
values = coupling*self.charge*charge*torch.sum(1/d, axis=-1) #(m,)
return values
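The broadcast sum above is a superposition of :math:`1/r` point potentials. A plain-Python sketch of the same computation (positions are illustrative, no torch required):

```python
import math

def fixed_points_potential(x, y, x0, y0, charge=1.0, coupling=1.0):
    """Plain-Python version of the broadcast sum: coupling·q·Σᵢ 1/‖(x,y) − (x0ᵢ,y0ᵢ)‖."""
    return coupling * charge * sum(
        1.0 / math.hypot(x - xi, y - yi) for xi, yi in zip(x0, y0))

# Two unit charges at (±0.5, 0): the midpoint sees 1/0.5 from each, 4.0 in total.
v = fixed_points_potential(0.0, 0.0, [0.5, -0.5], [0.0, 0.0])
assert abs(v - 4.0) < 1e-12
```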
class PeriodicFixedPoints(FieldObject):
def __init__(self, x0, y0, charge=1.0, lx=None, ly=None, cx=0.0, cy=0.0):
super().__init__()
self.x0 = x0 #(n, )
self.y0 = y0 #(n, )
self.lx = lx
self.ly = ly
self.cx = cx
self.cy = cy
self.charge = charge #(n, )
self.nper = 1
def potential(self, x: torch.Tensor, y: torch.Tensor, charge: float, coupling: float = 1.0):
"""
Potential generated by the object on (x, y)
Parameters
----------
x : torch.Tensor
Position x-coordinate.
y : torch.Tensor
Position y-coordinate.
charge : float
Charge of particles.
coupling : float, optional
Coupling constant. The default is 1.0.
Returns
-------
value : torch.Tensor
Value of potential.
"""
x = x[..., None]
y = y[..., None]
xiterator = list(range(-self.nper, self.nper + 1)) if self.lx is not None else [0]
yiterator = list(range(-self.nper, self.nper + 1)) if self.ly is not None else [0]
iterator = itertools.product(xiterator, yiterator)
single_values = [self.single_potential(x, y, n, m, charge, coupling)
for n, m in iterator]
values = sum(single_values)
return values
def single_potential(self, x, y, n, m, charge, coupling):
lx = self.lx if self.lx is not None else 0.0
ly = self.ly if self.ly is not None else 0.0
d = torch.sqrt((x - self.x0 + n*lx)**2 + (y - self.y0 + m*ly)**2) #(m, n)
values = coupling*self.charge*charge*torch.sum(1/d, axis=-1) #(m,)
return values
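A minimal plain-Python sketch of the periodic-image logic above — lattice offsets :math:`n·l_x, m·l_y` with a `None` period meaning no images on that axis. Names and values are illustrative.

```python
import itertools
import math

def periodic_potential(x, y, x0, y0, lx=None, ly=None, nper=1):
    """Unit-charge potential including periodic images, mirroring the
    xiterator/yiterator product: a None period yields no images on that axis."""
    xs = range(-nper, nper + 1) if lx is not None else [0]
    ys = range(-nper, nper + 1) if ly is not None else [0]
    return sum(1.0 / math.hypot(x - x0 + n * (lx or 0.0),
                                y - y0 + m * (ly or 0.0))
               for n, m in itertools.product(xs, ys))

# No periods given: reduces to a single point charge, 1/hypot(3, 4) = 0.2.
assert abs(periodic_potential(3.0, 4.0, 0.0, 0.0) - 0.2) < 1e-12
# lx = 2 with nper = 1: image charges sit at x = -2, 0, 2.
expected = sum(1.0 / math.hypot(1.0 - p, 1.0) for p in (-2.0, 0.0, 2.0))
assert abs(periodic_potential(1.0, 1.0, 0.0, 0.0, lx=2.0) - expected) < 1e-12
```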
| 31.005525 | 96 | 0.531361 | 1,345 | 11,224 | 4.350186 | 0.074349 | 0.084601 | 0.05811 | 0.047171 | 0.870962 | 0.836951 | 0.811143 | 0.802598 | 0.789609 | 0.75235 | 0 | 0.027175 | 0.347559 | 11,224 | 362 | 97 | 31.005525 | 0.771815 | 0.280203 | 0 | 0.609929 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.156028 | false | 0.007092 | 0.021277 | 0 | 0.319149 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1757e75bc04b9b0f83914ab7639ec861d6d8b96d | 123 | py | Python | rest_api/project/users/models.py | Razz21/Nuxt-Django-E-Commerce-Demo | 24834007f7554f9e59758b611c73ea0da85c841e | [
"MIT"
] | 1 | 2020-10-31T12:46:17.000Z | 2020-10-31T12:46:17.000Z | project/users/models.py | Razz21/DRF-Vue-template | cdc175802acabe7b6c8fe801e2134087bd425870 | [
"MIT"
] | 4 | 2021-03-09T12:18:14.000Z | 2022-02-26T15:25:42.000Z | rest_api/project/users/models.py | Razz21/Nuxt-Django-E-Commerce-Demo | 24834007f7554f9e59758b611c73ea0da85c841e | [
"MIT"
] | null | null | null | from django.contrib.auth.models import AbstractUser
from django.db import models
class UserModel(AbstractUser):
pass
| 17.571429 | 51 | 0.804878 | 16 | 123 | 6.1875 | 0.6875 | 0.20202 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138211 | 123 | 6 | 52 | 20.5 | 0.933962 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 6 |
176f8e4bdf16ff3d8104ff48775e0d5e117816af | 97 | py | Python | configs/snippets/aggregate_widgets.py | trunkclub/ontology_etl | 097985be505469258ee6c831e789f64fb804f091 | [
"MIT"
] | null | null | null | configs/snippets/aggregate_widgets.py | trunkclub/ontology_etl | 097985be505469258ee6c831e789f64fb804f091 | [
"MIT"
] | null | null | null | configs/snippets/aggregate_widgets.py | trunkclub/ontology_etl | 097985be505469258ee6c831e789f64fb804f091 | [
"MIT"
] | null | null | null | import random
def total_widgets_purchased(transaction):
return int(random.random() * 10000)
| 19.4 | 41 | 0.773196 | 12 | 97 | 6.083333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059524 | 0.134021 | 97 | 4 | 42 | 24.25 | 0.809524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
17727d28ef696b0633f58f1a009e160dd527c382 | 150 | py | Python | sparc/component/testing.py | davisd50/sparc.component | c94996c8927aeaa7b5c4c480cc1c3682ae57f8cf | [
"MIT"
] | null | null | null | sparc/component/testing.py | davisd50/sparc.component | c94996c8927aeaa7b5c4c480cc1c3682ae57f8cf | [
"MIT"
] | null | null | null | sparc/component/testing.py | davisd50/sparc.component | c94996c8927aeaa7b5c4c480cc1c3682ae57f8cf | [
"MIT"
] | null | null | null | import sparc.component
from sparc.testing.testlayer import SparcZCMLFileLayer
SPARC_COMPONENT_INTEGRATION_LAYER = SparcZCMLFileLayer(sparc.component) | 37.5 | 71 | 0.893333 | 16 | 150 | 8.1875 | 0.5625 | 0.320611 | 0.48855 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06 | 150 | 4 | 71 | 37.5 | 0.929078 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bd8a4ca985f417a701703b861a059d8edd273a4d | 42 | py | Python | vnpy/gateway/bitstamp/__init__.py | ChaunceyDong/vnpy | 1c1b683ffc1c842bb7661e8194eca61af30cf586 | [
"MIT"
] | 19,529 | 2015-03-02T12:17:35.000Z | 2022-03-31T17:18:27.000Z | vnpy/gateway/bitstamp/__init__.py | ChaunceyDong/vnpy | 1c1b683ffc1c842bb7661e8194eca61af30cf586 | [
"MIT"
] | 2,186 | 2015-03-04T23:16:33.000Z | 2022-03-31T03:44:01.000Z | vnpy/gateway/bitstamp/__init__.py | ChaunceyDong/vnpy | 1c1b683ffc1c842bb7661e8194eca61af30cf586 | [
"MIT"
] | 8,276 | 2015-03-02T05:21:04.000Z | 2022-03-31T13:13:13.000Z | from vnpy_bitstamp import BitstampGateway
| 21 | 41 | 0.904762 | 5 | 42 | 7.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 42 | 1 | 42 | 42 | 0.973684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bdb946c644e678f9b00b91a4be889415be9e8521 | 66 | py | Python | bills/models.py | coreyar/swipe-for-rights-api | 703c3f990e8a7ba3036d00d1cff99404d5803cce | [
"MIT"
] | null | null | null | bills/models.py | coreyar/swipe-for-rights-api | 703c3f990e8a7ba3036d00d1cff99404d5803cce | [
"MIT"
] | 3 | 2021-03-19T22:52:05.000Z | 2021-06-10T21:46:04.000Z | bills/models.py | coreyar/swipe-for-rights-api | 703c3f990e8a7ba3036d00d1cff99404d5803cce | [
"MIT"
] | null | null | null | from django.db import models
class Bill(models.Model):
pass
| 11 | 28 | 0.727273 | 10 | 66 | 4.8 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19697 | 66 | 5 | 29 | 13.2 | 0.90566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
bdd497bcb0112a66b9893e146e734570001ceabc | 156 | py | Python | example/test_settings.py | Jyrno42/tg-react | 3efed82f0dff0a67aeabfa43f6f22c868dd91764 | [
"BSD-3-Clause"
] | 1 | 2018-07-26T07:41:35.000Z | 2018-07-26T07:41:35.000Z | example/test_settings.py | Jyrno42/tg-react | 3efed82f0dff0a67aeabfa43f6f22c868dd91764 | [
"BSD-3-Clause"
] | null | null | null | example/test_settings.py | Jyrno42/tg-react | 3efed82f0dff0a67aeabfa43f6f22c868dd91764 | [
"BSD-3-Clause"
] | null | null | null | from __future__ import unicode_literals
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(__file__)))
from .settings import * # NOQA
| 17.333333 | 59 | 0.782051 | 23 | 156 | 4.913043 | 0.565217 | 0.106195 | 0.230089 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121795 | 156 | 8 | 60 | 19.5 | 0.824818 | 0.025641 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.8 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bded829680c834127ca640cb898f114076df4520 | 127 | py | Python | institution/exceptions.py | mmesiti/cogs3 | c48cd48629570f418b93aec73de49bc2fb59edc2 | [
"MIT"
] | 1 | 2020-03-28T23:55:02.000Z | 2020-03-28T23:55:02.000Z | institution/exceptions.py | mmesiti/cogs3 | c48cd48629570f418b93aec73de49bc2fb59edc2 | [
"MIT"
] | 60 | 2018-04-16T13:40:23.000Z | 2020-06-05T18:02:01.000Z | institution/exceptions.py | mmesiti/cogs3 | c48cd48629570f418b93aec73de49bc2fb59edc2 | [
"MIT"
] | 10 | 2018-03-14T22:25:50.000Z | 2020-01-09T21:32:22.000Z | class InvalidInstitutionalEmailAddress(Exception):
pass
class InvalidInstitutionalIndentityProvider(Exception):
pass
| 18.142857 | 55 | 0.826772 | 8 | 127 | 13.125 | 0.625 | 0.247619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125984 | 127 | 6 | 56 | 21.166667 | 0.945946 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
bdf8bb513e458444602da84355a69b2d534ad812 | 110 | py | Python | src/exercises/exercise0/exercise0_solution.py | paramraghavan/beginners-py-learn | 120db42b3ad304915d5be172f4ebc555ef2cb405 | [
"MIT"
] | null | null | null | src/exercises/exercise0/exercise0_solution.py | paramraghavan/beginners-py-learn | 120db42b3ad304915d5be172f4ebc555ef2cb405 | [
"MIT"
] | null | null | null | src/exercises/exercise0/exercise0_solution.py | paramraghavan/beginners-py-learn | 120db42b3ad304915d5be172f4ebc555ef2cb405 | [
"MIT"
] | null | null | null | '''
Given the string below, find the mode value:
'13 14 18 13 13 21 13 16'
Print the mode value, which is: 13
''' | 22 | 41 | 0.672727 | 23 | 110 | 3.217391 | 0.695652 | 0.189189 | 0.324324 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.211765 | 0.227273 | 110 | 5 | 42 | 22 | 0.658824 | 0.927273 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
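The docstring above states the exercise but the solution body is not included in this row. A minimal sketch of one way to compute the mode (the variable names here are illustrative, not from the original file):

```python
from collections import Counter

# Split the string into tokens, count occurrences, and take the most common one.
data = '13 14 18 13 13 21 13 16'
mode_value = Counter(data.split()).most_common(1)[0][0]
print(mode_value)  # → 13
```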
da5bfa8bd719dc9f267995f3bef3d51fe0151003 | 153 | py | Python | decorator_1.py | jepster/python_advanced_techniques | f4b0e0dda7b66be55f650f9f902e735d3f5a9f64 | [
"MIT"
] | null | null | null | decorator_1.py | jepster/python_advanced_techniques | f4b0e0dda7b66be55f650f9f902e735d3f5a9f64 | [
"MIT"
] | null | null | null | decorator_1.py | jepster/python_advanced_techniques | f4b0e0dda7b66be55f650f9f902e735d3f5a9f64 | [
"MIT"
] | null | null | null | def perform_twice(fn, *args, **kwargs):
fn(*args, **kwargs)
fn(*args, **kwargs)
perform_twice(print, 5, 10, sep='&', end='...')
# 5&10...5&10... | 25.5 | 47 | 0.555556 | 23 | 153 | 3.608696 | 0.478261 | 0.216867 | 0.433735 | 0.337349 | 0.433735 | 0.433735 | 0 | 0 | 0 | 0 | 0 | 0.069231 | 0.150327 | 153 | 6 | 48 | 25.5 | 0.569231 | 0.091503 | 0 | 0.5 | 0 | 0 | 0.028986 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.25 | 0.25 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
da646d15c8befe6b6654c2ff5e7cd0ca2da22fa5 | 166 | py | Python | tree/EricD/104. Maximum Depth of Binary Tree - EricD.py | lidongdongbuaa/leetcode | ca5507c30f1177df14e488221b7cc92bb1a747c1 | [
"MIT"
] | 1,232 | 2018-04-20T07:30:43.000Z | 2022-03-31T09:34:56.000Z | tree/EricD/104. Maximum Depth of Binary Tree - EricD.py | lidongdongbuaa/leetcode | ca5507c30f1177df14e488221b7cc92bb1a747c1 | [
"MIT"
] | 98 | 2018-06-25T16:13:28.000Z | 2021-06-28T21:46:15.000Z | tree/EricD/104. Maximum Depth of Binary Tree - EricD.py | lidongdongbuaa/leetcode | ca5507c30f1177df14e488221b7cc92bb1a747c1 | [
"MIT"
] | 283 | 2018-04-20T07:30:46.000Z | 2022-03-20T01:14:10.000Z | def maxDepth(self, root):
"""
:type root: TreeNode
:rtype: int
"""
return max(self.maxDepth(root.left),self.maxDepth(root.right))+1 if root else 0 | 27.666667 | 83 | 0.63253 | 24 | 166 | 4.375 | 0.666667 | 0.228571 | 0.304762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015267 | 0.210843 | 166 | 6 | 83 | 27.666667 | 0.78626 | 0.192771 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
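The method above is written as a class method from a LeetCode solution; a standalone sketch of the same recursion, assuming a minimal `TreeNode` class that is not part of the original snippet:

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def max_depth(root):
    # An empty subtree has depth 0; otherwise 1 + the deeper of the two subtrees.
    return max(max_depth(root.left), max_depth(root.right)) + 1 if root else 0

root = TreeNode(1, TreeNode(2, TreeNode(4)), TreeNode(3))
print(max_depth(root))  # → 3
```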
da858899fe3cca4ac4924972de200b14678ef5da | 130 | py | Python | src/python_qa/utils/iterable.py | Starkov-EG/python-qa | c407051e2d4c8941a2713e8ef2a450d0d91a6372 | [
"Apache-2.0"
] | null | null | null | src/python_qa/utils/iterable.py | Starkov-EG/python-qa | c407051e2d4c8941a2713e8ef2a450d0d91a6372 | [
"Apache-2.0"
] | null | null | null | src/python_qa/utils/iterable.py | Starkov-EG/python-qa | c407051e2d4c8941a2713e8ef2a450d0d91a6372 | [
"Apache-2.0"
] | null | null | null | import typing
def filtered(func: typing.Callable, iterable: typing.Iterable):
return type(iterable)(filter(func, iterable))
| 21.666667 | 63 | 0.761538 | 16 | 130 | 6.1875 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123077 | 130 | 5 | 64 | 26 | 0.868421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
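The `filtered` helper above returns the filtered elements in the same container type as the input. A short usage sketch (the example calls are illustrative, not from the original file):

```python
import typing

def filtered(func: typing.Callable, iterable: typing.Iterable):
    # Keep only elements accepted by func, preserving the container type.
    return type(iterable)(filter(func, iterable))

print(filtered(lambda x: x > 1, [1, 2, 3]))  # → [2, 3]
print(filtered(lambda x: x > 1, (1, 2, 3)))  # → (2, 3)
```

Note this trick works for containers whose constructor accepts an iterable (list, tuple, set), but not for types like `str` or `dict`.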
da933d4a08785b02a317a12c67be5a721e5a4883 | 23,154 | py | Python | tests/test_track.py | spankders/pyspotify | b18ac0c72771e6c3418f0d57b775ae5c6e1ab44e | [
"Apache-2.0"
] | 1 | 2019-07-20T08:31:49.000Z | 2019-07-20T08:31:49.000Z | tests/test_track.py | spankders/pyspotify | b18ac0c72771e6c3418f0d57b775ae5c6e1ab44e | [
"Apache-2.0"
] | null | null | null | tests/test_track.py | spankders/pyspotify | b18ac0c72771e6c3418f0d57b775ae5c6e1ab44e | [
"Apache-2.0"
] | null | null | null | from __future__ import unicode_literals
import unittest
import spotify
import tests
from tests import mock
@mock.patch('spotify.track.lib', spec=spotify.lib)
class TrackTest(unittest.TestCase):
def setUp(self):
self.session = tests.create_session_mock()
def assert_fails_if_error(self, lib_mock, func):
lib_mock.sp_track_error.return_value = (
spotify.ErrorType.BAD_API_VERSION)
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
with self.assertRaises(spotify.Error):
func(track)
def test_create_without_uri_or_sp_track_fails(self, lib_mock):
with self.assertRaises(AssertionError):
spotify.Track(self.session)
@mock.patch('spotify.Link', spec=spotify.Link)
def test_create_from_uri(self, link_mock, lib_mock):
sp_track = spotify.ffi.cast('sp_track *', 42)
link_instance_mock = link_mock.return_value
link_instance_mock.as_track.return_value = spotify.Track(
self.session, sp_track=sp_track)
uri = 'spotify:track:foo'
result = spotify.Track(self.session, uri=uri)
link_mock.assert_called_with(self.session, uri=uri)
link_instance_mock.as_track.assert_called_with()
lib_mock.sp_track_add_ref.assert_called_with(sp_track)
self.assertEqual(result._sp_track, sp_track)
@mock.patch('spotify.Link', spec=spotify.Link)
def test_create_from_uri_fail_raises_error(self, link_mock, lib_mock):
link_instance_mock = link_mock.return_value
link_instance_mock.as_track.return_value = None
uri = 'spotify:track:foo'
with self.assertRaises(ValueError):
spotify.Track(self.session, uri=uri)
def test_adds_ref_to_sp_track_when_created(self, lib_mock):
sp_track = spotify.ffi.cast('sp_track *', 42)
spotify.Track(self.session, sp_track=sp_track)
lib_mock.sp_track_add_ref.assert_called_with(sp_track)
def test_releases_sp_track_when_track_dies(self, lib_mock):
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
track = None # noqa
tests.gc_collect()
lib_mock.sp_track_release.assert_called_with(sp_track)
@mock.patch('spotify.Link', spec=spotify.Link)
def test_repr(self, link_mock, lib_mock):
link_instance_mock = link_mock.return_value
link_instance_mock.uri = 'foo'
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = repr(track)
self.assertEqual(result, 'Track(%r)' % 'foo')
def test_eq(self, lib_mock):
sp_track = spotify.ffi.cast('sp_track *', 42)
track1 = spotify.Track(self.session, sp_track=sp_track)
track2 = spotify.Track(self.session, sp_track=sp_track)
self.assertTrue(track1 == track2)
self.assertFalse(track1 == 'foo')
def test_ne(self, lib_mock):
sp_track = spotify.ffi.cast('sp_track *', 42)
track1 = spotify.Track(self.session, sp_track=sp_track)
track2 = spotify.Track(self.session, sp_track=sp_track)
self.assertFalse(track1 != track2)
def test_hash(self, lib_mock):
sp_track = spotify.ffi.cast('sp_track *', 42)
track1 = spotify.Track(self.session, sp_track=sp_track)
track2 = spotify.Track(self.session, sp_track=sp_track)
self.assertEqual(hash(track1), hash(track2))
def test_is_loaded(self, lib_mock):
lib_mock.sp_track_is_loaded.return_value = 1
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.is_loaded
lib_mock.sp_track_is_loaded.assert_called_once_with(sp_track)
self.assertTrue(result)
def test_error(self, lib_mock):
lib_mock.sp_track_error.return_value = int(
spotify.ErrorType.IS_LOADING)
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.error
lib_mock.sp_track_error.assert_called_once_with(sp_track)
self.assertIs(result, spotify.ErrorType.IS_LOADING)
@mock.patch('spotify.utils.load')
def test_load(self, load_mock, lib_mock):
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
track.load(10)
load_mock.assert_called_with(self.session, track, timeout=10)
def test_offline_status(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.OK
lib_mock.sp_track_offline_get_status.return_value = 2
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.offline_status
lib_mock.sp_track_offline_get_status.assert_called_with(sp_track)
self.assertIs(result, spotify.TrackOfflineStatus.DOWNLOADING)
def test_offline_status_is_none_if_unloaded(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.IS_LOADING
lib_mock.sp_track_is_loaded.return_value = 0
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.offline_status
lib_mock.sp_track_is_loaded.assert_called_with(sp_track)
self.assertIsNone(result)
def test_offline_status_fails_if_error(self, lib_mock):
lib_mock.sp_track_error.return_value = (
spotify.ErrorType.BAD_API_VERSION)
lib_mock.sp_track_offline_get_status.return_value = 2
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
with self.assertRaises(spotify.Error):
track.offline_status
def test_availability(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.OK
lib_mock.sp_track_get_availability.return_value = 1
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.availability
lib_mock.sp_track_get_availability.assert_called_with(
self.session._sp_session, sp_track)
self.assertIs(result, spotify.TrackAvailability.AVAILABLE)
def test_availability_is_none_if_unloaded(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.IS_LOADING
lib_mock.sp_track_is_loaded.return_value = 0
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.availability
lib_mock.sp_track_is_loaded.assert_called_with(sp_track)
self.assertIsNone(result)
def test_availability_fails_if_error(self, lib_mock):
self.assert_fails_if_error(lib_mock, lambda t: t.availability)
def test_is_local(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.OK
lib_mock.sp_track_is_local.return_value = 1
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.is_local
lib_mock.sp_track_is_local.assert_called_with(
self.session._sp_session, sp_track)
self.assertTrue(result)
def test_is_local_is_none_if_unloaded(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.IS_LOADING
lib_mock.sp_track_is_loaded.return_value = 0
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.is_local
lib_mock.sp_track_is_loaded.assert_called_with(sp_track)
self.assertIsNone(result)
def test_is_local_fails_if_error(self, lib_mock):
self.assert_fails_if_error(lib_mock, lambda t: t.is_local)
def test_is_autolinked(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.OK
lib_mock.sp_track_is_autolinked.return_value = 1
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.is_autolinked
lib_mock.sp_track_is_autolinked.assert_called_with(
self.session._sp_session, sp_track)
self.assertTrue(result)
def test_is_autolinked_is_none_if_unloaded(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.IS_LOADING
lib_mock.sp_track_is_loaded.return_value = 0
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.is_autolinked
lib_mock.sp_track_is_loaded.assert_called_with(sp_track)
self.assertIsNone(result)
def test_is_autolinked_fails_if_error(self, lib_mock):
self.assert_fails_if_error(lib_mock, lambda t: t.is_autolinked)
def test_playable(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.OK
sp_track_playable = spotify.ffi.cast('sp_track *', 43)
lib_mock.sp_track_get_playable.return_value = sp_track_playable
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.playable
lib_mock.sp_track_get_playable.assert_called_with(
self.session._sp_session, sp_track)
lib_mock.sp_track_add_ref.assert_called_with(sp_track_playable)
self.assertIsInstance(result, spotify.Track)
self.assertEqual(result._sp_track, sp_track_playable)
def test_playable_is_none_if_unloaded(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.IS_LOADING
lib_mock.sp_track_is_loaded.return_value = 0
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.playable
lib_mock.sp_track_is_loaded.assert_called_with(sp_track)
self.assertIsNone(result)
def test_playable_fails_if_error(self, lib_mock):
self.assert_fails_if_error(lib_mock, lambda t: t.playable)
def test_is_placeholder(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.OK
lib_mock.sp_track_is_placeholder.return_value = 1
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.is_placeholder
lib_mock.sp_track_is_placeholder.assert_called_with(sp_track)
self.assertTrue(result)
def test_is_placeholder_is_none_if_unloaded(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.IS_LOADING
lib_mock.sp_track_is_loaded.return_value = 0
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.is_placeholder
lib_mock.sp_track_is_loaded.assert_called_with(sp_track)
self.assertIsNone(result)
def test_is_placeholder_fails_if_error(self, lib_mock):
self.assert_fails_if_error(lib_mock, lambda t: t.is_placeholder)
def test_is_starred(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.OK
lib_mock.sp_track_is_starred.return_value = 1
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.starred
lib_mock.sp_track_is_starred.assert_called_with(
self.session._sp_session, sp_track)
self.assertTrue(result)
def test_is_starred_is_none_if_unloaded(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.IS_LOADING
lib_mock.sp_track_is_loaded.return_value = 0
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.starred
lib_mock.sp_track_is_loaded.assert_called_with(sp_track)
self.assertIsNone(result)
def test_is_starred_fails_if_error(self, lib_mock):
self.assert_fails_if_error(lib_mock, lambda t: t.starred)
def test_set_starred(self, lib_mock):
lib_mock.sp_track_set_starred.return_value = spotify.ErrorType.OK
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
track.starred = True
lib_mock.sp_track_set_starred.assert_called_with(
self.session._sp_session, mock.ANY, 1, 1)
def test_set_starred_fails_if_error(self, lib_mock):
tests.create_session_mock()
lib_mock.sp_track_set_starred.return_value = (
spotify.ErrorType.BAD_API_VERSION)
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
with self.assertRaises(spotify.Error):
track.starred = True
@mock.patch('spotify.artist.lib', spec=spotify.lib)
def test_artists(self, artist_lib_mock, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.OK
sp_artist = spotify.ffi.cast('sp_artist *', 43)
lib_mock.sp_track_num_artists.return_value = 1
lib_mock.sp_track_artist.return_value = sp_artist
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.artists
self.assertEqual(len(result), 1)
lib_mock.sp_track_num_artists.assert_called_with(sp_track)
item = result[0]
self.assertIsInstance(item, spotify.Artist)
self.assertEqual(item._sp_artist, sp_artist)
self.assertEqual(lib_mock.sp_track_artist.call_count, 1)
lib_mock.sp_track_artist.assert_called_with(sp_track, 0)
artist_lib_mock.sp_artist_add_ref.assert_called_with(sp_artist)
def test_artists_if_no_artists(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.OK
lib_mock.sp_track_num_artists.return_value = 0
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.artists
self.assertEqual(len(result), 0)
lib_mock.sp_track_num_artists.assert_called_with(sp_track)
self.assertEqual(lib_mock.sp_track_artist.call_count, 0)
def test_artists_if_unloaded(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.IS_LOADING
lib_mock.sp_track_is_loaded.return_value = 0
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.artists
lib_mock.sp_track_is_loaded.assert_called_with(sp_track)
self.assertEqual(len(result), 0)
def test_artists_fails_if_error(self, lib_mock):
self.assert_fails_if_error(lib_mock, lambda t: t.artists)
@mock.patch('spotify.album.lib', spec=spotify.lib)
def test_album(self, album_lib_mock, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.OK
sp_album = spotify.ffi.cast('sp_album *', 43)
lib_mock.sp_track_album.return_value = sp_album
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.album
lib_mock.sp_track_album.assert_called_with(sp_track)
self.assertEqual(album_lib_mock.sp_album_add_ref.call_count, 1)
self.assertIsInstance(result, spotify.Album)
self.assertEqual(result._sp_album, sp_album)
@mock.patch('spotify.album.lib', spec=spotify.lib)
def test_album_if_unloaded(self, album_lib_mock, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.IS_LOADING
lib_mock.sp_track_is_loaded.return_value = 0
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.album
self.assertEqual(lib_mock.sp_track_album.call_count, 0)
self.assertIsNone(result)
def test_album_fails_if_error(self, lib_mock):
self.assert_fails_if_error(lib_mock, lambda t: t.album)
def test_name(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.OK
lib_mock.sp_track_name.return_value = spotify.ffi.new(
'char[]', b'Foo Bar Baz')
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.name
lib_mock.sp_track_name.assert_called_once_with(sp_track)
self.assertEqual(result, 'Foo Bar Baz')
def test_name_is_none_if_unloaded(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.IS_LOADING
lib_mock.sp_track_is_loaded.return_value = 0
lib_mock.sp_track_name.return_value = spotify.ffi.new('char[]', b'')
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.name
self.assertEqual(lib_mock.sp_track_name.call_count, 0)
self.assertIsNone(result)
def test_name_fails_if_error(self, lib_mock):
self.assert_fails_if_error(lib_mock, lambda t: t.name)
def test_duration(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.OK
lib_mock.sp_track_duration.return_value = 60000
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.duration
lib_mock.sp_track_duration.assert_called_with(sp_track)
self.assertEqual(result, 60000)
def test_duration_is_none_if_unloaded(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.IS_LOADING
lib_mock.sp_track_is_loaded.return_value = 0
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.duration
self.assertEqual(lib_mock.sp_track_duration.call_count, 0)
self.assertIsNone(result)
def test_duration_fails_if_error(self, lib_mock):
self.assert_fails_if_error(lib_mock, lambda t: t.duration)
def test_popularity(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.OK
lib_mock.sp_track_popularity.return_value = 90
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.popularity
lib_mock.sp_track_popularity.assert_called_with(sp_track)
self.assertEqual(result, 90)
def test_popularity_is_none_if_unloaded(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.IS_LOADING
lib_mock.sp_track_is_loaded.return_value = 0
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.popularity
self.assertEqual(lib_mock.sp_track_popularity.call_count, 0)
self.assertIsNone(result)
def test_popularity_fails_if_error(self, lib_mock):
self.assert_fails_if_error(lib_mock, lambda t: t.popularity)
def test_disc(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.OK
lib_mock.sp_track_disc.return_value = 2
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.disc
lib_mock.sp_track_disc.assert_called_with(sp_track)
self.assertEqual(result, 2)
def test_disc_is_none_if_unloaded(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.IS_LOADING
lib_mock.sp_track_is_loaded.return_value = 0
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.disc
self.assertEqual(lib_mock.sp_track_disc.call_count, 0)
self.assertIsNone(result)
def test_disc_fails_if_error(self, lib_mock):
self.assert_fails_if_error(lib_mock, lambda t: t.disc)
def test_index(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.OK
lib_mock.sp_track_index.return_value = 7
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.index
lib_mock.sp_track_index.assert_called_with(sp_track)
self.assertEqual(result, 7)
def test_index_is_none_if_unloaded(self, lib_mock):
lib_mock.sp_track_error.return_value = spotify.ErrorType.IS_LOADING
lib_mock.sp_track_is_loaded.return_value = 0
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
result = track.index
self.assertEqual(lib_mock.sp_track_index.call_count, 0)
self.assertIsNone(result)
def test_index_fails_if_error(self, lib_mock):
self.assert_fails_if_error(lib_mock, lambda t: t.index)
@mock.patch('spotify.Link', spec=spotify.Link)
def test_link_creates_link_to_track(self, link_mock, lib_mock):
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
sp_link = spotify.ffi.cast('sp_link *', 43)
lib_mock.sp_link_create_from_track.return_value = sp_link
link_mock.return_value = mock.sentinel.link
result = track.link
lib_mock.sp_link_create_from_track.assert_called_once_with(
sp_track, 0)
link_mock.assert_called_once_with(
self.session, sp_link=sp_link, add_ref=False)
self.assertEqual(result, mock.sentinel.link)
@mock.patch('spotify.Link', spec=spotify.Link)
def test_link_with_offset(self, link_mock, lib_mock):
sp_track = spotify.ffi.cast('sp_track *', 42)
track = spotify.Track(self.session, sp_track=sp_track)
sp_link = spotify.ffi.cast('sp_link *', 43)
lib_mock.sp_link_create_from_track.return_value = sp_link
link_mock.return_value = mock.sentinel.link
result = track.link_with_offset(90)
lib_mock.sp_link_create_from_track.assert_called_once_with(
sp_track, 90)
link_mock.assert_called_once_with(
self.session, sp_link=sp_link, add_ref=False)
self.assertEqual(result, mock.sentinel.link)
class TrackAvailability(unittest.TestCase):
def test_has_constants(self):
self.assertEqual(spotify.TrackAvailability.UNAVAILABLE, 0)
self.assertEqual(spotify.TrackAvailability.AVAILABLE, 1)
class TrackOfflineStatusTest(unittest.TestCase):
def test_has_constants(self):
self.assertEqual(spotify.TrackOfflineStatus.NO, 0)
self.assertEqual(spotify.TrackOfflineStatus.DOWNLOADING, 2)
| 39.111486 | 76 | 0.706098 | 3,330 | 23,154 | 4.534535 | 0.042643 | 0.154834 | 0.072119 | 0.106623 | 0.849272 | 0.823377 | 0.78245 | 0.766358 | 0.713709 | 0.705497 | 0 | 0.009931 | 0.199836 | 23,154 | 591 | 77 | 39.177665 | 0.805095 | 0.000173 | 0 | 0.554273 | 0 | 0 | 0.031623 | 0 | 0 | 0 | 0 | 0 | 0.258661 | 1 | 0.145497 | false | 0 | 0.011547 | 0 | 0.163972 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
16f7dbb2ae299b0afc242d8c3dd6c7f16481f8d8 | 48 | py | Python | vulcanmodeling/model_registry.py | vulcan-collaboration/vulcanmodeling | b70c93368f52c5e439a17e1315b7014ed9765484 | [
"MIT"
] | null | null | null | vulcanmodeling/model_registry.py | vulcan-collaboration/vulcanmodeling | b70c93368f52c5e439a17e1315b7014ed9765484 | [
"MIT"
] | null | null | null | vulcanmodeling/model_registry.py | vulcan-collaboration/vulcanmodeling | b70c93368f52c5e439a17e1315b7014ed9765484 | [
"MIT"
] | null | null | null | from .webgme.base.model import VulcanGMEProject
| 24 | 47 | 0.854167 | 6 | 48 | 6.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.931818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e5846f17318af8cc9dc3759dece041ddb4b2dead | 2,068 | py | Python | resources/migrations/0001_initial.py | japsu/tracontent | 169fe84c49c1a30133e927f1be50abba171ebe68 | [
"PostgreSQL",
"Unlicense",
"MIT"
] | null | null | null | resources/migrations/0001_initial.py | japsu/tracontent | 169fe84c49c1a30133e927f1be50abba171ebe68 | [
"PostgreSQL",
"Unlicense",
"MIT"
] | 7 | 2020-11-26T18:41:07.000Z | 2022-01-18T09:27:00.000Z | resources/migrations/0001_initial.py | tracon/tracontent | 65bd8c15b7909a90ebe5ed28cbbf66683a4e3c2c | [
"MIT",
"PostgreSQL",
"Unlicense"
] | null | null | null | from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
]
operations = [
migrations.CreateModel(
name='StyleSheet',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(help_text='Uniikki tunniste, jolla resurssi ladataan koodista tai HTML:st\xe4 k\xe4sin.', unique=True, max_length=63, verbose_name='Nimi')),
('active', models.BooleanField(default=True, help_text='Ei-aktiivisia resursseja ei huomioida.', verbose_name='Aktiivinen')),
('content', models.TextField(verbose_name='Sis\xe4lt\xf6', blank=True)),
('created_at', models.DateTimeField(auto_now_add=True, verbose_name='Luotu')),
('updated_at', models.DateTimeField(auto_now=True, verbose_name='Muokattu')),
],
options={
'verbose_name': 'Tyylitiedosto',
'verbose_name_plural': 'Tyylitiedostot',
},
),
migrations.CreateModel(
name='Template',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(help_text='Uniikki tunniste, jolla resurssi ladataan koodista tai HTML:st\xe4 k\xe4sin.', unique=True, max_length=63, verbose_name='Nimi')),
('active', models.BooleanField(default=True, help_text='Ei-aktiivisia resursseja ei huomioida.', verbose_name='Aktiivinen')),
('content', models.TextField(verbose_name='Sis\xe4lt\xf6', blank=True)),
('created_at', models.DateTimeField(auto_now_add=True, verbose_name='Luotu')),
('updated_at', models.DateTimeField(auto_now=True, verbose_name='Muokattu')),
],
options={
'verbose_name': 'Sivupohja',
'verbose_name_plural': 'Sivupohjat',
},
),
]
| 50.439024 | 182 | 0.605899 | 208 | 2,068 | 5.841346 | 0.346154 | 0.144856 | 0.069136 | 0.082305 | 0.804938 | 0.804938 | 0.804938 | 0.804938 | 0.804938 | 0.804938 | 0 | 0.007818 | 0.257737 | 2,068 | 40 | 183 | 51.7 | 0.783713 | 0 | 0 | 0.611111 | 0 | 0 | 0.249516 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.027778 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e58885d745301ea35b53ef00e5d4e4c160bb184a | 214 | py | Python | external/me_parser/me_parser/__init__.py | snelliott/RCDriver | 89dc5c3ad7bb173212ec64793bddebd09f40832e | [
"Apache-2.0"
] | null | null | null | external/me_parser/me_parser/__init__.py | snelliott/RCDriver | 89dc5c3ad7bb173212ec64793bddebd09f40832e | [
"Apache-2.0"
] | null | null | null | external/me_parser/me_parser/__init__.py | snelliott/RCDriver | 89dc5c3ad7bb173212ec64793bddebd09f40832e | [
"Apache-2.0"
] | null | null | null | from .lib import paper
from .lib import get_temp_pres
from .lib import get_pdep_k
from .lib import fit_pdep
from .lib import print_plog
__all__ = ['paper', 'get_temp_pres', 'get_pdep_k', 'fit_pdep', 'print_plog']
| 26.75 | 76 | 0.766355 | 38 | 214 | 3.894737 | 0.342105 | 0.236486 | 0.439189 | 0.216216 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130841 | 214 | 7 | 77 | 30.571429 | 0.795699 | 0 | 0 | 0 | 0 | 0 | 0.214953 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.833333 | 0 | 0.833333 | 0.333333 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e5af84dfed9b53ae8cdceb8c2b322a2c861cb4b5 | 78,510 | py | Python | server/flask_server_for_capstone/train_util.py | yjun1806/find_receipe | 8489fe8211de0fae96b9298fa4a435883cbd3da7 | [
"MIT"
] | null | null | null | server/flask_server_for_capstone/train_util.py | yjun1806/find_receipe | 8489fe8211de0fae96b9298fa4a435883cbd3da7 | [
"MIT"
] | null | null | null | server/flask_server_for_capstone/train_util.py | yjun1806/find_receipe | 8489fe8211de0fae96b9298fa4a435883cbd3da7 | [
"MIT"
] | null | null | null | from IPython.core.interactiveshell import InteractiveShell # 표를 이쁘게 만들어주는 기능
import seaborn as sns # 데이터 분포를 시각화해주는 라이브러리
# PyTorch
# torchvision : 영상 분야를 위한 패키지, ImageNet, CIFAR10, MNIST와 같은 데이터셋을 위한 데이터 로더와 데이터 변환기 등이 포함되어 있다.
from torchvision import transforms, datasets, models
import torch
# optim : 가중치를 갱신할 Optimizer가 정의된 패키지. SGD + momentum, RMSProp, Adam등과 같은 알고리즘이 정의되어 있다.
# cuda : CUDA 텐서 유형에 대한 지원을 추가하는 패키지이다. CPU텐서와 동일한 기능을 구현하지만 GPU를 사용하여 계산한다.
from torch import optim, cuda
# DataLoader : 학습 데이터를 읽어오는 용도로 사용되는 패키지.
# sampler : 데이터 세트에서 샘플을 추출하는 용도로 사용하는 패키지
from torch.utils.data import DataLoader, sampler
import torch.nn as nn
import warnings
warnings.filterwarnings('ignore', category=FutureWarning)
# Data science tools
import numpy as np
import pandas as pd  # data-analysis library
import os
# Image manipulations
from PIL import Image
# Useful for examining network
from torchsummary import summary
# Timing utility
from timeit import default_timer as timer
# Visualizations
# import matplotlib
# matplotlib.use('TkAgg')
import matplotlib.pyplot as plt  # plots look different depending on whether seaborn is loaded
plt.rcParams['font.size'] = 14
# Printing out all outputs
InteractiveShell.ast_node_interactivity = 'all'
import datetime
import sys
# Which model to train
model_choice = "densenet161"  # the only knob that needs changing
# How many epochs to train
training_epoch = 100
# Batch size
batch_size = 128
selected_opti = 'sgd'
Early_stop = True
def get_date():
now = datetime.datetime.now()
nowDatetime = now.strftime('%Y%m%d%H%M%S')
return nowDatetime
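As a quick sanity check on `get_date`, the `'%Y%m%d%H%M%S'` format string always yields a 14-digit timestamp:

```python
import datetime

def get_date():
    # Same helper as above: timestamp like YYYYMMDDHHMMSS.
    return datetime.datetime.now().strftime('%Y%m%d%H%M%S')

stamp = get_date()
print(stamp)       # e.g. a 14-character string such as 20200101093015
print(len(stamp))  # always 14
```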
def save_result_to_txt(model_name):
script_dir = os.path.dirname(__file__)
results_dir = os.path.join(script_dir, model_name + '_Results/')
save_txt = "result_txt_" + get_date() + ".txt"
if not os.path.isdir(results_dir):
os.makedirs(results_dir)
sys.stdout = open(results_dir + save_txt, 'w')
def setting_save_folder(save_file_name, model_name):
script_dir = os.path.dirname(__file__)
results_dir = os.path.join(script_dir, model_name + '_Results/')
save_file = save_file_name +"_bts" +str(batch_size) + "_ep" + str(training_epoch) + "_" + get_date()
if not os.path.isdir(results_dir):
os.makedirs(results_dir)
return results_dir, save_file
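The save-file naming scheme above concatenates the base name, batch size, epoch count, and timestamp. A side-effect-free sketch (with the date injected so it is deterministic; `make_save_name` is a hypothetical helper, not part of the module):

```python
def make_save_name(save_file_name, batch_size, training_epoch, date):
    # Mirrors setting_save_folder's naming scheme.
    return f"{save_file_name}_bts{batch_size}_ep{training_epoch}_{date}"

name = make_save_name('densenet161', 128, 100, '20200101000000')
print(name)  # densenet161_bts128_ep100_20200101000000
```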
def get_pretrained_model_last_layer_change(model_name, n_classes):
"""
:param model_name: 불러올 pre trained 모델 이름
:param n_classes: 분류할 클래스 갯수
:return: 불러온 모델의 분류기 부분만 수정한 구조를 리턴한다.
"""
if model_name == 'alexnet':
model = models.alexnet(pretrained=True)
# Freeze early layers
for param in model.parameters():
param.requires_grad = False
        # In AlexNet the classifier is a six-layer block; the commented line
        # below shows how to pull in_features out of its sixth layer.
        # n_inputs = model.classifier[6].in_features
        print("\t- layer before replacement")
        print(model.classifier)
        # Replace the whole classifier block. The Linear layers act as
        # fully-connected layers, and LogSoftmax produces log probabilities.
        model.classifier = nn.Sequential(
            nn.Dropout(p=0.5, inplace=False),
            nn.Linear(in_features=9216, out_features=4096, bias=True),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5, inplace=False),
            nn.Linear(in_features=4096, out_features=4096, bias=True),
            nn.ReLU(inplace=True),
            nn.Linear(in_features=4096, out_features=n_classes, bias=True),
            nn.LogSoftmax(dim=1)
        )
        print("\t- layer after replacement")
        print(model.classifier)
    # VGG: the four plain variants share the same classifier shape, so one
    # branch with getattr dispatch replaces four identical copies.
    elif model_name in ('vgg11', 'vgg13', 'vgg16', 'vgg19'):
        model = getattr(models, model_name)(pretrained=True)
        # Freeze early layers
        for param in model.parameters():
            param.requires_grad = False
        n_inputs = model.classifier[0].in_features
        print("\t- layer before replacement")
        print(model.classifier)
        # Replace the classifier: Linear layers act as fully-connected layers,
        # followed by a LogSoftmax output layer.
        model.classifier = nn.Sequential(
            nn.Linear(in_features=n_inputs, out_features=4096, bias=True),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5, inplace=False),
            nn.Linear(in_features=4096, out_features=4096, bias=True),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5, inplace=False),
            nn.Linear(in_features=4096, out_features=n_classes, bias=True),
            nn.LogSoftmax(dim=1)
        )
        print("\t- layer after replacement")
        print(model.classifier)
elif model_name == 'vgg11_bn':
model = models.vgg11_bn(pretrained=True)
elif model_name == 'vgg13_bn':
model = models.vgg13_bn(pretrained=True)
elif model_name == 'vgg16_bn':
model = models.vgg16_bn(pretrained=True)
elif model_name == 'vgg19_bn':
model = models.vgg19_bn(pretrained=True)
    # ResNet: the classifier is a single layer, e.g. for ResNet-50
    # (fc): Linear(in_features=2048, out_features=1000, bias=True)
    elif model_name in ('resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152'):
        model = getattr(models, model_name)(pretrained=True)
        for param in model.parameters():
            param.requires_grad = False
        n_inputs = model.fc.in_features
        print("\t- layer before replacement")
        print(model.fc)
        model.fc = nn.Sequential(
            nn.Linear(n_inputs, n_classes), nn.LogSoftmax(dim=1))
        print("\t- layer after replacement")
        print(model.fc)
    # Inception
    elif model_name in ('googlenet', 'inception_v3'):
        model = getattr(models, model_name)(pretrained=True)
        for param in model.parameters():
            param.requires_grad = False
        n_inputs = model.fc.in_features
        print("\t- layer before replacement")
        print(model.fc)
        model.fc = nn.Sequential(
            nn.Linear(n_inputs, n_classes), nn.LogSoftmax(dim=1))
        print("\t- layer after replacement")
        print(model.fc)
    # DenseNet
    elif model_name in ('densenet121', 'densenet161', 'densenet169', 'densenet201'):
        model = getattr(models, model_name)(pretrained=True)
        for param in model.parameters():
            param.requires_grad = False
        n_inputs = model.classifier.in_features
        print("\t- layer before replacement")
        print(model.classifier)
        # Add on classifier
        model.classifier = nn.Sequential(
            nn.Linear(n_inputs, n_classes, bias=True),
            nn.LogSoftmax(dim=1))
        print("\t- layer after replacement")
        print(model.classifier)
# MobileNet V2
elif model_name == 'mobilenet_v2':
model = models.mobilenet_v2(pretrained=True)
# Freeze early layers
for param in model.parameters():
param.requires_grad = False
print("\t- 변경 전 레이어")
print(model.classifier[1])
n_inputs = model.classifier[1].in_features
# Add on classifier
model.classifier[1] = nn.Sequential(
nn.Linear(n_inputs, n_classes, bias=True),
nn.LogSoftmax(dim=1))
print("\t- 변경 후 레이어")
print(model.classifier[1])
    # ResNeXt
    elif model_name in ('resnext50', 'resnext101'):
        ctor = {'resnext50': models.resnext50_32x4d,
                'resnext101': models.resnext101_32x8d}[model_name]
        model = ctor(pretrained=True)
        for param in model.parameters():
            param.requires_grad = False
        n_inputs = model.fc.in_features
        print("\t- layer before replacement")
        print(model.fc)
        model.fc = nn.Sequential(
            nn.Linear(n_inputs, n_classes, bias=True), nn.LogSoftmax(dim=1))
        print("\t- layer after replacement")
        print(model.fc)
    # ShuffleNet
    elif model_name in ('shufflenet_v2_05', 'shufflenet_v2_10'):
        ctor = {'shufflenet_v2_05': models.shufflenet_v2_x0_5,
                'shufflenet_v2_10': models.shufflenet_v2_x1_0}[model_name]
        model = ctor(pretrained=True)
        for param in model.parameters():
            param.requires_grad = False
        n_inputs = model.fc.in_features
        print("\t- layer before replacement")
        print(model.fc)
        model.fc = nn.Sequential(
            nn.Linear(n_inputs, n_classes, bias=True), nn.LogSoftmax(dim=1))
        print("\t- layer after replacement")
        print(model.fc)
elif model_name == 'shufflenet_v2_15':
model = models.shufflenet_v2_x1_5(pretrained=True)
elif model_name == 'shufflenet_v2_20':
model = models.shufflenet_v2_x2_0(pretrained=True)
    # SqueezeNet
    elif model_name in ('squeezenet1.0', 'squeezenet1.1'):
        ctor = {'squeezenet1.0': models.squeezenet1_0,
                'squeezenet1.1': models.squeezenet1_1}[model_name]
        model = ctor(pretrained=True)
        for param in model.parameters():
            param.requires_grad = False
        print("\t- layer before replacement")
        print(model.classifier)
        # SqueezeNet classifies with a 1x1 conv head, so the output channel
        # count is the class count (n_classes rather than a hard-coded value).
        model.classifier = nn.Sequential(
            nn.Dropout(p=0.5, inplace=False),
            nn.Conv2d(512, n_classes, kernel_size=(1, 1), stride=(1, 1)),
            nn.AdaptiveAvgPool2d(output_size=(1, 1)),
            nn.LogSoftmax(dim=1))
        print("\t- layer after replacement")
        print(model.classifier)
# MNASNet
elif model_name == 'mnasnet05':
model = models.mnasnet0_5(pretrained=True)
elif model_name == 'mnasnet075':
model = models.mnasnet0_75(pretrained=True)
elif model_name == 'mnasnet10':
model = models.mnasnet1_0(pretrained=True)
elif model_name == 'mnasnet13':
model = models.mnasnet1_3(pretrained=True)
# WideResNet
elif model_name == 'wideresnet50':
model = models.wide_resnet50_2(pretrained=True)
elif model_name == 'wideresnet101':
model = models.wide_resnet101_2(pretrained=True)
model = model.to('cuda')
return model
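The per-architecture branches above differ mostly in which constructor they call; `getattr(models, model_name)(pretrained=True)` expresses that dispatch generically. A sketch of the same idea against a dummy namespace, so it runs without torchvision (the `fake_models` names are stand-ins, not real factories):

```python
from types import SimpleNamespace

# Stand-in for torchvision.models: each attribute is a model factory.
fake_models = SimpleNamespace(
    resnet18=lambda pretrained: f'resnet18(pretrained={pretrained})',
    resnet34=lambda pretrained: f'resnet34(pretrained={pretrained})',
)

def load_backbone(name, namespace=fake_models):
    # Look the factory up by name and call it, replacing a long elif chain.
    return getattr(namespace, name)(pretrained=True)

print(load_backbone('resnet18'))  # resnet18(pretrained=True)
```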
def init_dataset():
    # ## Dataset paths / GPU availability
    #
    # - Set the dataset root path.
    # - The data is split into train, validation, and test, so set each path.
    # - Pick a name under which the trained model will be saved.
    # - Set the batch size.
    # - Check whether training can run on the GPU.
    # Location of data
    datadir = '/home/kunde/DeepCNN/ingredient_data_TR7_VA2_TE1/'  # dataset root
traindir = datadir + 'train/'
validdir = datadir + 'valid/'
testdir = datadir + 'test/'
image_transforms = {
# Train uses data augmentation
'train':
transforms.Compose([
            # transforms.RandomResizedCrop(size=256, scale=(0.8, 1.0)),  # earlier, narrower crop range
            transforms.RandomResizedCrop(size=256, scale=(0.08, 1.0)),  # 0.08-1.0 is the default scale range
transforms.RandomRotation(degrees=15),
transforms.ColorJitter(),
transforms.RandomHorizontalFlip(),
transforms.CenterCrop(size=224), # Image net standards
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225]) # Imagenet standards
]),
# Validation does not use augmentation
'val':
transforms.Compose([
transforms.Resize(size=256),
transforms.CenterCrop(size=224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
# Test does not use augmentation
'test':
transforms.Compose([
transforms.Resize(size=256),
transforms.CenterCrop(size=224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
# Datasets from each folder
data = {
'train':
datasets.ImageFolder(root=traindir, transform=image_transforms['train']),
'val':
datasets.ImageFolder(root=validdir, transform=image_transforms['val']),
'test':
datasets.ImageFolder(root=testdir, transform=image_transforms['test'])
}
# Dataloader iterators
dataloaders = {
'train': DataLoader(data['train'], batch_size=batch_size, shuffle=True),
'val': DataLoader(data['val'], batch_size=batch_size, shuffle=True),
'test': DataLoader(data['test'], batch_size=batch_size, shuffle=True)
}
return datadir, traindir, validdir, testdir, image_transforms, data, dataloaders
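`transforms.Normalize` above applies `(x - mean) / std` per channel, after `ToTensor` has scaled pixels into [0, 1]. The arithmetic, sketched without torch:

```python
# ImageNet channel statistics used in the transforms above.
MEAN = [0.485, 0.456, 0.406]
STD = [0.229, 0.224, 0.225]

def normalize_pixel(rgb):
    # Per-channel (x - mean) / std, as transforms.Normalize does.
    return [(x - m) / s for x, m, s in zip(rgb, MEAN, STD)]

out = normalize_pixel([0.485, 0.456, 0.406])  # the mean itself maps to zero
print(out)  # [0.0, 0.0, 0.0]
```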
def init_cv_dataset(total_K, current_k_fold):
    # ## Dataset paths / GPU availability
    #
    # - Set the dataset root path.
    # - The data is split into train, validation, and test, so set each path.
    # - Pick a name under which the trained model will be saved.
    # - Set the batch size.
    # - Check whether training can run on the GPU.
    # Location of data
    datadir = '/home/kunde/DeepCNN/ingredient_data_TR9_TE1/' + str(total_K) + '_fold_cross_validation_dataset/'  # dataset root
    traindir = datadir + 'K_' + str(current_k_fold) + '/train/'
    validdir = datadir + 'K_' + str(current_k_fold) + '/valid/'
    testdir = datadir + 'test/'
    print('\n----------------------------------------------------------------')
    print(f'Setting up fold {current_k_fold} dataset')
    print(f'Dataset root    : {datadir}')
    print(f'Training data   : {traindir}')
    print(f'Validation data : {validdir}')
    print(f'Test data       : {testdir}')
    print('----------------------------------------------------------------\n')
image_transforms = {
# Train uses data augmentation
'train':
transforms.Compose([
            # transforms.RandomResizedCrop(size=256, scale=(0.8, 1.0)),  # earlier, narrower crop range
            transforms.RandomResizedCrop(size=256, scale=(0.08, 1.0)),  # 0.08-1.0 is the default scale range
transforms.RandomRotation(degrees=15),
transforms.ColorJitter(),
transforms.RandomHorizontalFlip(),
transforms.CenterCrop(size=224), # Image net standards
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225]) # Imagenet standards
]),
# Validation does not use augmentation
'val':
transforms.Compose([
transforms.Resize(size=256),
transforms.CenterCrop(size=224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
# Test does not use augmentation
'test':
transforms.Compose([
transforms.Resize(size=256),
transforms.CenterCrop(size=224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
# Datasets from each folder
data = {
'train':
datasets.ImageFolder(root=traindir, transform=image_transforms['train']),
'val':
datasets.ImageFolder(root=validdir, transform=image_transforms['val']),
'test':
datasets.ImageFolder(root=testdir, transform=image_transforms['test'])
}
# Dataloader iterators
dataloaders = {
'train': DataLoader(data['train'], batch_size=batch_size, shuffle=True),
'val': DataLoader(data['val'], batch_size=batch_size, shuffle=True),
'test': DataLoader(data['test'], batch_size=batch_size, shuffle=True)
}
return datadir, traindir, validdir, testdir, image_transforms, data, dataloaders
def category_dataframe(traindir, validdir, testdir):
# Empty lists
categories = []
img_categories = []
n_train = []
n_valid = []
n_test = []
hs = []
ws = []
    # os.listdir(path): list the files and subdirectories under path
    # Iterate through each category
    for d in os.listdir(traindir):  # each subfolder of train/ is one category
        categories.append(d)  # folder names double as category names
# Number of each image
train_imgs = os.listdir(traindir + d)
valid_imgs = os.listdir(validdir + d)
test_imgs = os.listdir(testdir + d)
n_train.append(len(train_imgs))
n_valid.append(len(valid_imgs))
n_test.append(len(test_imgs))
# Find stats for train images
for i in train_imgs:
img_categories.append(d)
            img = Image.open(traindir + d + '/' + i)  # open the image
img_array = np.array(img)
# Shape
hs.append(img_array.shape[0])
ws.append(img_array.shape[1])
# Dataframe of categories
    # Pandas DataFrames hold tabular data as columns, rows, and an index.
cat_df = pd.DataFrame({'category': categories,
'n_train': n_train,
'n_valid': n_valid,
'n_test': n_test}).sort_values('category')
image_df = pd.DataFrame({
'category': img_categories,
'height': hs,
'width': ws
})
return cat_df, image_df
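The counting logic in `category_dataframe` is just "one subfolder per class, count its files". A minimal version without pandas or PIL, run against a throwaway ImageFolder-style tree (class names `apple`/`banana` are illustrative only):

```python
import os
import tempfile

# Build a tiny ImageFolder-style tree: one subfolder per class.
root = tempfile.mkdtemp()
for cls, n in [('apple', 3), ('banana', 2)]:
    os.makedirs(os.path.join(root, cls))
    for i in range(n):
        open(os.path.join(root, cls, f'{i}.jpg'), 'w').close()

# Same counting idea as category_dataframe.
counts = {d: len(os.listdir(os.path.join(root, d)))
          for d in sorted(os.listdir(root))}
print(counts)  # {'apple': 3, 'banana': 2}
```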
def train(model,
criterion,
optimizer,
train_loader,
valid_loader,
save_file_name,
max_epochs_stop=3,
n_epochs=20,
print_every=1,
early_stop=True):
"""
:param model: 학습할 모델을 입력받는다.
:param criterion: 학습에 사용할 손실함수를 입력받는다
:param optimizer: 학습에 사용할 최적화함수를 입력받는다
:param train_loader: 학습에 사용할 training dataset을 입력받는다(dataloader 형식)
:param valid_loader: 학습에 사용할 vaildation dataset을 입력받는다(dataloader 형식)
:param save_file_name: 최적의 모델을 저장하기 위한 이름을 입력받는다.
:param max_epochs_stop: 몇 만큼 vaild loss 값의 감소가 없다면 학습을 중단할지 설정한다.
:param n_epochs: 최대 학습 epoch값을 입력받는다
:param print_every: 몇 epoch마다 학습 상황을 출력할지 입력받는다
:param early_stop: 조기 중단을 할지 말지 결정한다.
:return:
model (PyTorch model): trained cnn with best weights
history (DataFrame): history of train and validation loss and accuracy
"""
    train_on_gpu = cuda.is_available()  # can we train on the GPU?
    # Early stopping initialization
    epochs_no_improve = 0  # incremented each epoch valid_loss fails to drop
    valid_loss_min = np.Inf  # np.Inf: positive infinity
    valid_max_acc = 0  # currently unused
    history = []
    # Number of epochs already trained (if using loaded in model weights)
    try:  # a fresh model has no model.epochs attribute, so this raises and the except branch runs
        print(f'\n\nModel already trained for {model.epochs} epochs; continuing training.\n')
    except AttributeError:
        model.epochs = 0
    print('\n\n----------------------------------------------------------------')
    print('Training start')
    print(f'Training for up to {n_epochs} epochs.')
    print('----------------------------------------------------------------\n')
    overall_start = timer()  # record the time before training starts
    # Main loop
    for epoch in range(n_epochs):
        # keep track of training and validation loss/accuracy each epoch
        train_loss = 0.0
        valid_loss = 0.0
        train_acc = 0
        valid_acc = 0
        # Set to training
        model.train()  # switch to training mode
        start = timer()  # record when this epoch started
        # Training loop
        # data: image batch, target: its labels (here, the folder names)
        for ii, (data, target) in enumerate(train_loader):
            # print('\r', f'\ntest : {data.size}, {target.size}', end='')
            # Tensors to gpu
            if train_on_gpu:
                data, target = data.cuda(), target.cuda()  # move tensors to the GPU
            # Clear gradients
            optimizer.zero_grad()
            # Predicted outputs are log probabilities: the model (VGG, AlexNet, ...)
            # is called as a function on the input batch.
            output = model(data)
            # Loss and backpropagation of gradients
            loss = criterion(output, target)
            # Backward pass: compute gradients of the loss w.r.t. the parameters
            loss.backward()
            # Update the parameters
            optimizer.step()
            # Track train loss by multiplying average loss by number of examples in batch.
            # loss is a (1,)-shaped tensor; loss.item() is its scalar value,
            # and data.size(0) is the batch size.
            train_loss += loss.item() * data.size(0)
            # Calculate accuracy by finding max log probability
            _, pred = torch.max(output, dim=1)  # index of the highest-probability class
            correct_tensor = pred.eq(target.data.view_as(pred))
            # Need to convert correct tensor from int to float to average
            accuracy = torch.mean(correct_tensor.type(torch.FloatTensor))
            # Multiply average accuracy times the number of examples in batch
            train_acc += accuracy.item() * data.size(0)
            # Track training progress
            print('\r',
                  f'Epoch: {epoch}\tTrain progress: {100 * (ii + 1) / len(train_loader):.2f}%' \
                  + f'\t Time in epoch: {timer() - start:.2f}s' \
                  + f'\t Train_Loss : {train_loss / len(train_loader.dataset):.4f}' \
                  + f'\t Train_Acc : {100 * (train_acc / len(train_loader.dataset)):.2f}%',
                  end='')  # '\r' returns to the start of the line and overwrites it
        # After the training loop ends, start validation ===============================================
        else:  # runs once the training loop finishes
            model.epochs += 1  # one more epoch completed
            # Don't need to keep track of gradients
            with torch.no_grad():
                # Set to evaluation mode: PyTorch modules have only train() and
                # eval() modes, and in eval() dropout is disabled.
                model.eval()
                start_eval = timer()
                print('')
                # Validation loop
                for ii, (data, target) in enumerate(valid_loader):
                    # Tensors to gpu
                    if train_on_gpu:
                        data, target = data.cuda(), target.cuda()
                    # Forward pass (no backpropagation during evaluation)
                    output = model(data)
                    # Validation loss
                    loss = criterion(output, target)
                    # Multiply average loss times the number of examples in batch
                    valid_loss += loss.item() * data.size(0)
                    # Calculate validation accuracy
                    _, pred = torch.max(output, dim=1)
                    correct_tensor = pred.eq(target.data.view_as(pred))
                    accuracy = torch.mean(
                        correct_tensor.type(torch.FloatTensor))
                    # Multiply average accuracy times the number of examples
                    valid_acc += accuracy.item() * data.size(0)
                    print('\r',
                          f'\t\t\tValidation progress: {100 * (ii + 1) / len(valid_loader):.2f}%' \
                          + f'\t Time in epoch: {timer() - start_eval:.2f}s' \
                          + f'\t Valid_Loss : {valid_loss / len(valid_loader.dataset):.4f}' \
                          + f'\t Valid_Acc : {100 * (valid_acc / len(valid_loader.dataset)):.2f}%',
                          end='')  # '\r' returns to the start of the line and overwrites it
# Calculate average losses
train_loss = train_loss / len(train_loader.dataset)
valid_loss = valid_loss / len(valid_loader.dataset)
# Calculate average accuracy
train_acc = train_acc / len(train_loader.dataset)
valid_acc = valid_acc / len(valid_loader.dataset)
history.append([train_loss, valid_loss, train_acc, valid_acc])
                # Print training and validation results
                if (epoch + 1) % print_every == 0:
                    print(
                        f'\n\t\t\tTime for train + validation this epoch: {timer() - start:.2f}s\n'
                    )
                # Save the model if validation loss decreases.
                # valid_loss_min starts at infinity, so epoch 0 always improves on it
                # and valid_loss_min becomes that epoch's valid_loss. From then on,
                # every epoch whose valid_loss does not beat the minimum increments
                # epochs_no_improve; once that count reaches max_epochs_stop we stop,
                # since a loss that no longer decreases suggests convergence.
                if valid_loss < valid_loss_min:
                    # Save model
                    # torch.save(model.state_dict(), save_file_name)  # this checkpoint has the best epoch so far
                    # Track improvement
                    epochs_no_improve = 0
                    valid_loss_min = valid_loss
                    valid_best_acc = valid_acc
                    best_epoch = epoch
                # Otherwise increment count of epochs with no improvement
                else:
                    epochs_no_improve += 1
                    # Trigger early stopping
                    if early_stop:  # only when the early-stop option is on
                        if epochs_no_improve >= max_epochs_stop:
                            print(
                                f'\nEarly stop! Validation loss did not decrease for {max_epochs_stop} epochs.\n' \
                                + f'Total epochs run: {epoch}\t Best epoch: {best_epoch} (loss: {valid_loss_min:.4f} and acc: {100 * valid_acc:.4f}%)'
                            )
                            total_time = timer() - overall_start
                            print(
                                f'\n[ Total training time: {total_time:.2f}s, average per epoch: {total_time / (epoch + 1):.2f}s ]')
# Load the best state dict
# model.load_state_dict(torch.load(save_file_name))
# Attach the optimizer
model.optimizer = optimizer
# Format history
history = pd.DataFrame(
history,
columns=[
'train_loss', 'valid_loss', 'train_acc',
'valid_acc'
])
return model, history
# Attach the optimizer
model.optimizer = optimizer
# Record overall time and print out stats
total_time = timer() - overall_start
    print('----------------------------------------------------------------')
    print('Training results')
    print(
        f'\nBest epoch: {best_epoch} (loss: {valid_loss_min:.4f}, acc: {100 * valid_acc:.4f}%)'
    )
    print(f'[ Total training time: {total_time:.2f}s, average per epoch: {total_time / (epoch + 1):.2f}s ]')
    print('----------------------------------------------------------------\n')
# Format history
history = pd.DataFrame(
history,
columns=['train_loss', 'valid_loss', 'train_acc', 'valid_acc'])
return model, history
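The patience logic in `train` can be isolated from the training machinery: keep a running minimum of the validation loss and stop once it has not improved for `max_epochs_stop` consecutive epochs. A pure-Python sketch over a list of losses (the `valid_losses` sequence is invented for illustration):

```python
import math

def early_stop_epoch(valid_losses, max_epochs_stop=3):
    # Mirrors the patience logic in train(): stop once the validation loss
    # has failed to improve for max_epochs_stop consecutive epochs.
    best, no_improve = math.inf, 0
    for epoch, loss in enumerate(valid_losses):
        if loss < best:
            best, no_improve = loss, 0
        else:
            no_improve += 1
            if no_improve >= max_epochs_stop:
                return epoch  # epoch at which training halts
    return len(valid_losses) - 1  # ran to completion

losses = [1.0, 0.8, 0.7, 0.75, 0.72, 0.71, 0.9]
print(early_stop_epoch(losses))  # 5
```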
def train_cv(model,
criterion,
optimizer,
train_loader,
valid_loader,
max_epochs_stop=3,
n_epochs=20,
print_every=1,
early_stop=True):
"""
:param model: 학습할 모델을 입력받는다.
:param criterion: 학습에 사용할 손실함수를 입력받는다
:param optimizer: 학습에 사용할 최적화함수를 입력받는다
:param train_loader: 학습에 사용할 training dataset을 입력받는다(dataloader 형식)
:param valid_loader: 학습에 사용할 vaildation dataset을 입력받는다(dataloader 형식)
:param save_file_name: 최적의 모델을 저장하기 위한 이름을 입력받는다.
:param max_epochs_stop: 몇 만큼 vaild loss 값의 감소가 없다면 학습을 중단할지 설정한다.
:param n_epochs: 최대 학습 epoch값을 입력받는다
:param print_every: 몇 epoch마다 학습 상황을 출력할지 입력받는다
:param early_stop: 조기 중단을 할지 말지 결정한다.
:return:
model (PyTorch model): trained cnn with best weights
history (DataFrame): history of train and validation loss and accuracy
"""
    train_on_gpu = cuda.is_available()  # can we train on the GPU?
    # Early stopping initialization
    epochs_no_improve = 0  # incremented each epoch valid_loss fails to drop
    valid_loss_min = np.Inf  # np.Inf: positive infinity
    valid_max_acc = 0  # currently unused
    history = []
    # Number of epochs already trained (if using loaded in model weights)
    try:  # a fresh model has no model.epochs attribute, so this raises and the except branch runs
        print(f'\n\nModel already trained for {model.epochs} epochs; continuing training.\n')
    except AttributeError:
        model.epochs = 0
    print('\n\n----------------------------------------------------------------')
    print('Training start')
    print(f'Training for up to {n_epochs} epochs.')
    print('----------------------------------------------------------------\n')
    overall_start = timer()  # record the time before training starts
    # Main loop
    for epoch in range(n_epochs):
        # keep track of training and validation loss/accuracy each epoch
        train_loss = 0.0
        valid_loss = 0.0
        train_acc = 0
        valid_acc = 0
        # Set to training
        model.train()  # switch to training mode
        start = timer()  # record when this epoch started
        # Training loop
        # data: image batch, target: its labels (here, the folder names)
        for ii, (data, target) in enumerate(train_loader):
            # print('\r', f'\ntest : {data.size}, {target.size}', end='')
            # Tensors to gpu
            if train_on_gpu:
                data, target = data.cuda(), target.cuda()  # move tensors to the GPU
            # Clear gradients
            optimizer.zero_grad()
            # Predicted outputs are log probabilities: the model (VGG, AlexNet, ...)
            # is called as a function on the input batch.
            output = model(data)
            # Loss and backpropagation of gradients
            loss = criterion(output, target)
            # Backward pass: compute gradients of the loss w.r.t. the parameters
            loss.backward()
            # Update the parameters
            optimizer.step()
            # Track train loss by multiplying average loss by number of examples in batch.
            # loss is a (1,)-shaped tensor; loss.item() is its scalar value,
            # and data.size(0) is the batch size.
            train_loss += loss.item() * data.size(0)
            # Calculate accuracy by finding max log probability
            _, pred = torch.max(output, dim=1)  # index of the highest-probability class
            correct_tensor = pred.eq(target.data.view_as(pred))
            # Need to convert correct tensor from int to float to average
            accuracy = torch.mean(correct_tensor.type(torch.FloatTensor))
            # Multiply average accuracy times the number of examples in batch
            train_acc += accuracy.item() * data.size(0)
            # Track training progress
            print('\r',
                  f'Epoch: {epoch}\tTrain progress: {100 * (ii + 1) / len(train_loader):.2f}%' \
                  + f'\t Time in epoch: {timer() - start:.2f}s' \
                  + f'\t Train_Loss : {train_loss / len(train_loader.dataset):.4f}' \
                  + f'\t Train_Acc : {100 * (train_acc / len(train_loader.dataset)):.2f}%',
                  end='')  # '\r' returns to the start of the line and overwrites it
        # After the training loop ends, start validation ===============================================
        else:  # runs once the training loop finishes
            model.epochs += 1  # one more epoch completed
            # Don't need to keep track of gradients
            with torch.no_grad():
                # Set to evaluation mode: PyTorch modules have only train() and
                # eval() modes, and in eval() dropout is disabled.
                model.eval()
                start_eval = timer()
                print('')
                # Validation loop
                for ii, (data, target) in enumerate(valid_loader):
                    # Tensors to gpu
                    if train_on_gpu:
                        data, target = data.cuda(), target.cuda()
                    # Forward pass (no backpropagation during evaluation)
                    output = model(data)
                    # Validation loss
                    loss = criterion(output, target)
                    # Multiply average loss times the number of examples in batch
                    valid_loss += loss.item() * data.size(0)
                    # Calculate validation accuracy
                    _, pred = torch.max(output, dim=1)
                    correct_tensor = pred.eq(target.data.view_as(pred))
                    accuracy = torch.mean(
                        correct_tensor.type(torch.FloatTensor))
                    # Multiply average accuracy times the number of examples
                    valid_acc += accuracy.item() * data.size(0)
                    print('\r',
                          f'\t\t\tValidation progress: {100 * (ii + 1) / len(valid_loader):.2f}%' \
                          + f'\t Time in epoch: {timer() - start_eval:.2f}s' \
                          + f'\t Valid_Loss : {valid_loss / len(valid_loader.dataset):.4f}' \
                          + f'\t Valid_Acc : {100 * (valid_acc / len(valid_loader.dataset)):.2f}%',
                          end='')  # '\r' returns to the start of the line and overwrites it
# Calculate average losses
train_loss = train_loss / len(train_loader.dataset)
valid_loss = valid_loss / len(valid_loader.dataset)
# Calculate average accuracy
train_acc = train_acc / len(train_loader.dataset)
valid_acc = valid_acc / len(valid_loader.dataset)
history.append([train_loss, valid_loss, train_acc, valid_acc])
# Print training and validation results
if (epoch + 1) % print_every == 0:
# print(
# f'\n\t\t\tTraining Loss: {train_loss:.4f} \t\t Validation Loss: {valid_loss:.4f}'
# )
# print(
# f'\t\t\tTraining Accuracy: {100 * train_acc:.2f}%\t Validation Accuracy: {100 * valid_acc:.2f}%'
# )
print(
f'\n\t\t\tTrain + validation time this epoch : {timer() - start:.2f}s\n'
)
# Save the model if validation loss decreases
# Example: valid_loss_min starts at infinity, so epoch 0 always improves on it,
# and valid_loss_min becomes epoch 0's validation loss.
# From epoch 1 on, every epoch whose valid_loss fails to improve increments epochs_no_improve.
# Once that streak reaches max_epochs_stop, training stops:
# the loss has stopped shrinking, so the model can be considered converged.
if valid_loss < valid_loss_min:
# Save model
# torch.save(model.state_dict(), save_file_name) # the model saved here is the one from the best epoch
# Track improvement
epochs_no_improve = 0
valid_loss_min = valid_loss
valid_best_acc = valid_acc
best_epoch = epoch
# Otherwise increment count of epochs with no improvement
else:
epochs_no_improve += 1
# Trigger early stopping
if early_stop: # only when the early-stop option is enabled
if epochs_no_improve >= max_epochs_stop:
print(
f'\nEarly Stop! Validation loss has not decreased for {max_epochs_stop} epochs.\n' \
+ f'Total epochs run : {epoch}\t Best epoch : {best_epoch} (loss: {valid_loss_min:.2f} and acc: {100 * valid_acc:.2f}%)'
)
total_time = timer() - overall_start
print(
f'\n[ Total training time : {total_time:.2f}s, average per epoch : {total_time / (epoch + 1):.2f}s ]')
# Load the best state dict
# model.load_state_dict(torch.load(save_file_name))
# Attach the optimizer
model.optimizer = optimizer
# Format history
history = pd.DataFrame(
history,
columns=[
'train_loss', 'valid_loss', 'train_acc',
'valid_acc'
])
return model, history, total_time, best_epoch, epoch
# Attach the optimizer
model.optimizer = optimizer
# Record overall time and print out stats
total_time = timer() - overall_start
print('----------------------------------------------------------------')
print('Training results')
print(
f'\nBest epoch : {best_epoch} (loss: {valid_loss_min:.2f}, acc: {100 * valid_acc:.2f}%)'
)
print(f'[ Total training time : {total_time:.2f}s, average per epoch : {total_time / (epoch + 1):.2f}s ]')
print('----------------------------------------------------------------\n')
# Format history
history = pd.DataFrame(
history,
columns=['train_loss', 'valid_loss', 'train_acc', 'valid_acc'])
return model, history, total_time, best_epoch, epoch
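The early-stopping bookkeeping above (valid_loss_min starting at infinity, epochs_no_improve, max_epochs_stop) can be sketched in isolation. This is a minimal stdlib-only illustration of the same rule, not the training function itself:

```python
def early_stop_epoch(valid_losses, max_epochs_stop):
    """Return the epoch at which training would stop, or None if it never stops."""
    best = float('inf')          # valid_loss_min starts at infinity
    epochs_no_improve = 0
    for epoch, loss in enumerate(valid_losses):
        if loss < best:          # improvement: record it and reset the counter
            best = loss
            epochs_no_improve = 0
        else:                    # no improvement: extend the streak
            epochs_no_improve += 1
            if epochs_no_improve >= max_epochs_stop:
                return epoch
    return None

# Loss stops improving after epoch 1; with patience 3 we stop at epoch 4.
print(early_stop_epoch([0.9, 0.7, 0.8, 0.75, 0.71], 3))  # -> 4
```

With a monotonically decreasing loss the counter never fires and the function returns None, matching the code above, which then just runs out of epochs.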
def get_loss_function():
return nn.NLLLoss()
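get_loss_function returns NLLLoss, which expects log-probabilities as input (hence the torch.exp(out) in predict() below). Per example, the value is just the negative log-probability assigned to the true class; a plain-Python rendition with made-up numbers:

```python
import math

def nll(log_probs, targets):
    """Mean negative log-likelihood over a batch of log-probability rows."""
    return -sum(row[t] for row, t in zip(log_probs, targets)) / len(targets)

# One example whose true class got probability 0.7.
row = [math.log(0.7), math.log(0.2), math.log(0.1)]
print(round(nll([row], [0]), 4))  # -> 0.3567
```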
def get_optimizer(parameter):
# Adam's default learning rate is 1e-3 = 0.001
if selected_opti == 'adam':
return optim.Adam(parameter)
elif selected_opti == 'sgd':
return optim.SGD(parameter, lr=0.01, momentum=0.9)
def print_model_architecture(model_name):
"""
모델 구조를 출력해주는 함수이다.
만약 모델이 다운로드가 안되어 있다면 다운받는다.
:param model_name: 출력할 모델 명을 입력받는다.
:return:
"""
    # Map each supported name to its torchvision constructor; one dict lookup
    # replaces the long if/elif chain without changing which models are loaded.
    model_builders = {
        'alexnet': models.alexnet,
        'vgg11': models.vgg11, 'vgg13': models.vgg13,
        'vgg16': models.vgg16, 'vgg19': models.vgg19,
        'vgg11_bn': models.vgg11_bn, 'vgg13_bn': models.vgg13_bn,
        'vgg16_bn': models.vgg16_bn, 'vgg19_bn': models.vgg19_bn,
        'resnet18': models.resnet18, 'resnet34': models.resnet34,
        'resnet50': models.resnet50, 'resnet101': models.resnet101,
        'resnet152': models.resnet152,
        'googlenet': models.googlenet, 'inception_v3': models.inception_v3,
        'densenet121': models.densenet121, 'densenet161': models.densenet161,
        'densenet169': models.densenet169, 'densenet201': models.densenet201,
        'mobilenet_v2': models.mobilenet_v2,
        'resnext50': models.resnext50_32x4d, 'resnext101': models.resnext101_32x8d,
        'shufflenet_v2_05': models.shufflenet_v2_x0_5, 'shufflenet_v2_10': models.shufflenet_v2_x1_0,
        'shufflenet_v2_15': models.shufflenet_v2_x1_5, 'shufflenet_v2_20': models.shufflenet_v2_x2_0,
        'squeezenet1.0': models.squeezenet1_0, 'squeezenet1.1': models.squeezenet1_1,
        'mnasnet05': models.mnasnet0_5, 'mnasnet075': models.mnasnet0_75,
        'mnasnet10': models.mnasnet1_0, 'mnasnet13': models.mnasnet1_3,
        'wideresnet50': models.wide_resnet50_2, 'wideresnet101': models.wide_resnet101_2,
    }
    model = model_builders[model_name](pretrained=True)
    print(model)
def save_checkpoint(model, path, model_name):
"""Save a PyTorch model checkpoint
Params
--------
model (PyTorch model): model to save
path (str): location to save model. Must start with `model_name-` and end in '.pth'
Returns
--------
None, save the `model` to `path`
"""
# model_name = path.split('-')[0]
# assert (model_name in ['vgg16', 'resnet50']), "Path must have the correct model name"
# Basic details
checkpoint = {
'class_to_idx': model.class_to_idx,
'idx_to_class': model.idx_to_class,
'epochs': model.epochs,
}
    # The trained head lives in `.classifier` for some architectures and in `.fc`
    # for others; store whichever this model uses along with its weights.
    classifier_models = {
        'alexnet', 'vgg11', 'vgg13', 'vgg16', 'vgg19',
        'vgg11_bn', 'vgg13_bn', 'vgg16_bn', 'vgg19_bn',
        'densenet121', 'densenet161', 'densenet169', 'densenet201',
        'mobilenet_v2', 'squeezenet1.0', 'squeezenet1.1',
    }
    fc_models = {
        'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152',
        'googlenet', 'inception_v3', 'resnext50', 'resnext101',
        'shufflenet_v2_05', 'shufflenet_v2_10', 'shufflenet_v2_15', 'shufflenet_v2_20',
    }
    if model_name in classifier_models:
        checkpoint['classifier'] = model.classifier
        checkpoint['state_dict'] = model.state_dict()
    elif model_name in fc_models:
        checkpoint['fc'] = model.fc
        checkpoint['state_dict'] = model.state_dict()
    # MNASNet and WideResNet checkpoints are not handled here yet.
# Add the optimizer
checkpoint['optimizer'] = model.optimizer
checkpoint['optimizer_state_dict'] = model.optimizer.state_dict()
# Save the data to the path
torch.save(checkpoint, path)
print('Trained model saved.')
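save_checkpoint expects paths of the form `model_name-<suffix>.pth`; load_checkpoint below recovers the architecture name by splitting on `-` and `/`. A stdlib-only sketch of that parsing rule (the example path is hypothetical):

```python
def model_name_from_path(path):
    """Recover the architecture name from a '<dir>/<model_name>-<suffix>.pth' path."""
    return path.split('-')[0].split('/')[-1]

print(model_name_from_path('checkpoints/resnet50-best.pth'))  # -> resnet50
```

Note this rule breaks if the suffix-free part of the filename contains a `-`, so checkpoint names must keep the architecture name dash-free.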
def load_checkpoint(path, inference_type='gpu'):
"""Load a PyTorch model checkpoint
Params
--------
path (str): saved model checkpoint. Must start with `model_name-` and end in '.pth'
Returns
--------
model, optimizer restored from the checkpoint
"""
# Whether to train on a gpu
train_on_gpu = cuda.is_available() # check whether a GPU is available
print(f'Train on gpu: {train_on_gpu}')
print(f'Inference Type: {inference_type}')
# Get the model name
model_name = path.split('-')[0]
model_name = model_name.split('/')[-1]
print(f'Loaded model : {model_name}')
assert (model_name in ['alexnet',
'vgg11', 'vgg13', 'vgg16', 'vgg19',
'vgg11_bn', 'vgg13_bn', 'vgg16_bn', 'vgg19_bn',
'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152',
'googlenet', 'inception_v3',
'densenet121', 'densenet161', 'densenet169', 'densenet201',
'mobilenet_v2', 'resnext50', 'resnext101',
'shufflenet_v2_05', 'shufflenet_v2_10', 'shufflenet_v2_15', 'shufflenet_v2_20',
'squeezenet1.0', 'squeezenet1.1',
'mnasnet05', 'mnasnet075', 'mnasnet10', 'mnasnet13',
'wideresnet50', 'wideresnet101']), "Path must have the correct model name"
# Load in checkpoint
load_start = timer()
if inference_type == 'gpu':
checkpoint = torch.load(path)
elif inference_type == 'cpu':
# Remap a GPU-saved model so every tensor is loaded onto the CPU.
checkpoint = torch.load(path, map_location=lambda storage, loc: storage)
load_end = timer() - load_start
print(f'## torch.load time : {load_end*1000.0:.2f}ms')
load_state_start = timer()
    # One dict of constructors plus two name sets replace the long if/elif chain;
    # the behavior is unchanged: build the pretrained backbone, freeze it, restore the saved head.
    model_builders = {
        'alexnet': models.alexnet,
        'vgg11': models.vgg11, 'vgg13': models.vgg13,
        'vgg16': models.vgg16, 'vgg19': models.vgg19,
        'vgg11_bn': models.vgg11_bn, 'vgg13_bn': models.vgg13_bn,
        'vgg16_bn': models.vgg16_bn, 'vgg19_bn': models.vgg19_bn,
        'resnet18': models.resnet18, 'resnet34': models.resnet34,
        'resnet50': models.resnet50, 'resnet101': models.resnet101,
        'resnet152': models.resnet152,
        'googlenet': models.googlenet, 'inception_v3': models.inception_v3,
        'densenet121': models.densenet121, 'densenet161': models.densenet161,
        'densenet169': models.densenet169, 'densenet201': models.densenet201,
        'mobilenet_v2': models.mobilenet_v2,
        'resnext50': models.resnext50_32x4d, 'resnext101': models.resnext101_32x8d,
        'shufflenet_v2_05': models.shufflenet_v2_x0_5, 'shufflenet_v2_10': models.shufflenet_v2_x1_0,
        'shufflenet_v2_15': models.shufflenet_v2_x1_5, 'shufflenet_v2_20': models.shufflenet_v2_x2_0,
        'squeezenet1.0': models.squeezenet1_0, 'squeezenet1.1': models.squeezenet1_1,
        'mnasnet05': models.mnasnet0_5, 'mnasnet075': models.mnasnet0_75,
        'mnasnet10': models.mnasnet1_0, 'mnasnet13': models.mnasnet1_3,
        'wideresnet50': models.wide_resnet50_2, 'wideresnet101': models.wide_resnet101_2,
    }
    classifier_models = {
        'alexnet', 'vgg11', 'vgg13', 'vgg16', 'vgg19',
        'vgg11_bn', 'vgg13_bn', 'vgg16_bn', 'vgg19_bn',
        'densenet121', 'densenet161', 'densenet169', 'densenet201',
        'mobilenet_v2', 'squeezenet1.0', 'squeezenet1.1',
    }
    fc_models = {
        'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152',
        'googlenet', 'inception_v3', 'resnext50', 'resnext101',
        'shufflenet_v2_05', 'shufflenet_v2_10', 'shufflenet_v2_15', 'shufflenet_v2_20',
    }
    model = model_builders[model_name](pretrained=True)
    if model_name in classifier_models:
        # Freeze the pretrained backbone and restore the trained classifier head.
        for param in model.parameters():
            param.requires_grad = False
        model.classifier = checkpoint['classifier']
    elif model_name in fc_models:
        # Freeze the pretrained backbone and restore the trained fc head.
        for param in model.parameters():
            param.requires_grad = False
        model.fc = checkpoint['fc']
    # MNASNet and WideResNet only load the pretrained weights (no saved head).
load_state_end = timer() - load_state_start
print(f'## Load Model State : {load_state_end*1000.0:.2f}ms')
flag1 = timer()
# Load in the state dict
model.load_state_dict(checkpoint['state_dict'])
flag2 = timer() - flag1
total_params = sum(p.numel() for p in model.parameters())
# print(f'{total_params:,} total parameters.')
total_trainable_params = sum(
p.numel() for p in model.parameters() if p.requires_grad)
# print(f'{total_trainable_params:,} total gradient parameters.')
if train_on_gpu and inference_type == 'gpu':
model = model.to('cuda')
print("GPU 에서 동작합니다!!!")
flag3 = timer() - flag2 - flag1
# Model basics
model.class_to_idx = checkpoint['class_to_idx']
model.idx_to_class = checkpoint['idx_to_class']
model.epochs = checkpoint['epochs']
flag4 = timer() - flag3 - flag2 - flag1
# Optimizer
optimizer = checkpoint['optimizer']
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
flag5 = timer() - flag4 - flag3 - flag2 -flag1
print(f'## Flag 2 : {flag2*1000.0:.2f}ms, 3 : {flag3*1000.0:.2f}ms, 4: {flag4*1000.0:.2f}ms, 5: {flag5*1000.0:.2f}ms')
return model, optimizer
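The flag2..flag5 arithmetic above looks odd but is consistent: each flagN subtracts all earlier flags from the current clock reading, which collapses to the duration of stage N alone. A stdlib-only sketch of the same cumulative-subtraction scheme with a fake clock:

```python
def stage_durations(timestamps):
    """Given absolute timestamps [t0, t1, ..., tn], return per-stage durations
    computed the same way as the flag chain: flagN = now - flag(N-1) - ... - flag1."""
    flags = [timestamps[0]]            # flag1 is the raw start time
    for t in timestamps[1:]:
        flags.append(t - sum(flags))   # cumulative subtraction yields this stage's duration
    return flags[1:]

print(stage_durations([10.0, 12.0, 15.0, 15.5]))  # -> [2.0, 3.0, 0.5]
```

Subtracting consecutive timestamps directly would be clearer, but this shows the existing code is at least numerically correct.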
def save_distribution_of_images(category_dataframe, model_name):
results_dir, sample_file_name = setting_save_folder("distribution_of_images", model_name)
category_dataframe.set_index('category')['n_train'].plot.bar(
color='r', figsize=(18, 12))
plt.xticks(rotation=80)
plt.ylabel('Count')
plt.title('Training Images by Category')
plt.subplots_adjust(left=0.1, right=0.9, top=0.9, bottom=0.3)
plt.savefig(results_dir + sample_file_name)
print('\n----------------------------------------------------------------')
print('Saved the training-image distribution graph.')
print(f'Save path : {results_dir}')
print(f'Distribution graph file : {sample_file_name}')
print('----------------------------------------------------------------\n')
plt.close('all')
def save_number_of_trainig_image_top1_top5(results, model_name, etc=''):
results_dir, sample_file_name_top1 = setting_save_folder(etc + "number_of_image_top1", model_name)
results_dir, sample_file_name_top5 = setting_save_folder(etc + "number_of_image_top5", model_name)
# Plot using seaborn
sns.lmplot(
y='top1', x='n_train', data=results, height=8) # height=8 alone produces an 800x800 image
plt.xlabel('images')
plt.ylabel('Accuracy (%)')
plt.title('Top 1 Accuracy vs Number of Training Images')
plt.ylim(-5, 105) # y-axis display range: -5 to 105
plt.subplots_adjust(left=0.1, right=0.9, top=0.9, bottom=0.1)
plt.savefig(results_dir + sample_file_name_top1)
sns.lmplot(
y='top3', x='n_train', data=results, height=8)
plt.xlabel('images')
plt.ylabel('Accuracy (%)')
plt.title('Top 3 Accuracy vs Number of Training Images')
plt.ylim(-5, 105)
plt.subplots_adjust(left=0.1, right=0.9, top=0.9, bottom=0.1)
plt.savefig(results_dir + sample_file_name_top5)
print('\n----------------------------------------------------------------')
print("학습 이미지 수에 따른 Top1, Top3 정확도 그래프가 저장되었습니다.")
print(f'저장 경로 : {results_dir}')
print(f'Top1 파일 명 : {sample_file_name_top1}')
print(f'Top5 파일 명 : {sample_file_name_top5}')
print('----------------------------------------------------------------\n')
plt.close('all')
def save_train_valid_loss(history, model_name, etc=''):
results_dir, sample_file_name_loss = setting_save_folder(etc + "loss", model_name)
results_dir, sample_file_name_acc = setting_save_folder(etc + "acc", model_name)
plt.figure(figsize=(8, 6))
for c in ['train_loss', 'valid_loss']:
plt.plot(
history[c], label=c)
plt.legend()
plt.xlabel('Epoch')
plt.ylabel('Average Negative Log Likelihood')
plt.title('Training and Validation Losses')
plt.subplots_adjust(left=0.1, right=0.9, top=0.9, bottom=0.1)
plt.savefig(results_dir + sample_file_name_loss)
plt.figure(figsize=(8, 6))
for c in ['train_acc', 'valid_acc']:
plt.plot(
100 * history[c], label=c)
plt.legend()
plt.xlabel('Epoch')
plt.ylabel('Average Accuracy')
plt.title('Training and Validation Accuracy')
plt.subplots_adjust(left=0.1, right=0.9, top=0.9, bottom=0.1)
plt.savefig(results_dir + sample_file_name_acc)
print('\n----------------------------------------------------------------')
print('Saved the training/validation loss and accuracy graphs.')
print(f'Save path : {results_dir}')
print(f'Loss file : {sample_file_name_loss}')
print(f'Acc file : {sample_file_name_acc}')
print('----------------------------------------------------------------\n')
plt.close('all')
def imshow_tensor(image, ax=None, title=None):
"""Imshow for Tensor."""
if ax is None:
fig, ax = plt.subplots()
# Set the color channel as the third dimension
image = image.numpy().transpose((1, 2, 0))
# Reverse the preprocessing steps
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
image = std * image + mean
# Clip the image pixel values
image = np.clip(image, 0, 1)
ax.imshow(image)
plt.axis('off')
return ax, image
def process_image(image_path):
"""Process an image path into a PyTorch tensor"""
image = Image.open(image_path) # load the image at the given path
print(f'## Image Info : {image}')
# Resize
img = image.resize((256, 256)) # resize to 256x256 (PIL's resize, not numpy's)
# Center crop
width = 256
height = 256
new_width = 224
new_height = 224
left = (width - new_width) / 2 # (256 - 224) / 2 = 16
top = (height - new_height) / 2 # 16
right = (width + new_width) / 2 # (256 + 224) / 2 = 240
bottom = (height + new_height) / 2 # 240
img = img.crop((left, top, right, bottom)) # crop a centered 224x224 region
# Convert to numpy, transpose color dimension and normalize
img = np.array(img).transpose((2, 0, 1)) / 256 # scale to [0, 1); dividing by 255 would map the max pixel exactly to 1.0
# Standardization
means = np.array([0.485, 0.456, 0.406]).reshape((3, 1, 1))
stds = np.array([0.229, 0.224, 0.225]).reshape((3, 1, 1))
img = img - means
img = img / stds
img_tensor = torch.Tensor(img)
return img_tensor
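The center-crop box above is simple arithmetic: offset each edge by half the size difference. The same computation isolated as a stdlib-only helper:

```python
def center_crop_box(width, height, new_width, new_height):
    """Return the (left, top, right, bottom) box of a centered crop."""
    left = (width - new_width) / 2
    top = (height - new_height) / 2
    right = (width + new_width) / 2
    bottom = (height + new_height) / 2
    return left, top, right, bottom

print(center_crop_box(256, 256, 224, 224))  # -> (16.0, 16.0, 240.0, 240.0)
```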
def predict(image_path, model, topk=5, inference_type = 'gpu'):
"""Make a prediction for an image using a trained model
Params
--------
image_path (str): filename of the image
model (PyTorch model): trained model for inference
topk (int): number of top predictions to return
Returns
"""
real_class = image_path.split('/')[-2] # the parent folder name is the category label
# Convert to pytorch tensor
img_process_start = timer()
img_tensor = process_image(image_path)
img_process_end = timer() - img_process_start
print(f'## Image Process Time : {img_process_end*1000.0:.2f}ms')
# Resize
if inference_type == 'gpu':
img_tensor = img_tensor.view(1, 3, 224, 224).cuda()
elif inference_type == 'cpu':
img_tensor = img_tensor.view(1, 3, 224, 224)
# Set to evaluation
with torch.no_grad():
model.eval()
# Model outputs log probabilities
in_model_start = timer()
out = model(img_tensor)
in_model_end = timer() - in_model_start
print(f'## Input image to Model : {in_model_end*1000.0:.2f}ms')
ps = torch.exp(out)
# Find the topk predictions
topk, topclass = ps.topk(topk, dim=1)
# Extract the actual classes and probabilities
top_classes = [
model.idx_to_class[class_] for class_ in topclass.cpu().numpy()[0]
]
top_p = topk.cpu().numpy()[0]
print(f'## Image Process + Model Inference Time : {(img_process_end + in_model_end)*1000.0:.2f}ms')
return img_tensor.cpu().squeeze(), top_p, top_classes, real_class
def display_prediction(image_path, model, topk, model_name, etc=''):
"""Display image and preditions from model"""
results_dir, random_predict = setting_save_folder(etc + "random_predict", model_name)
start_inference = timer() # inference start time
# Get predictions
img, ps, classes, y_obs = predict(image_path, model, topk)
# Convert results to dataframe for plotting
result = pd.DataFrame({'p': ps}, index=classes) # prediction results
inference_time = timer() - start_inference # inference end time minus start time
# Show the image
plt.figure(figsize=(16, 5))
ax = plt.subplot(1, 2, 1)
ax, img = imshow_tensor(img, ax=ax)
# Set title to be the actual class
ax.set_title(y_obs, size=20)
ax = plt.subplot(1, 2, 2)
# Plot a bar plot of predictions
result.sort_values('p')['p'].plot.barh(color='blue', edgecolor='k', ax=ax)
plt.xlabel('Predicted Probability')
plt.tight_layout()
plt.subplots_adjust(left=0.1, right=0.9, top=0.9, bottom=0.1)
plt.savefig(results_dir + random_predict)
print('\n----------------------------------------------------------------')
print('Ground_Truth : ' + y_obs)
print(result)
print(f'Inference time : {inference_time*1000.0:.2f}ms')
print('Saved the Top5 predictions for the test image.')
print(f'Save path : {results_dir}')
print(f'Result file : {random_predict}')
print('----------------------------------------------------------------\n')
plt.close('all')
return inference_time
def img_prediction(image_path, model, topk, gt, inference_type = 'gpu'):
start_inference = timer() # start time for measuring inference
# Get predictions
img, ps, classes, y_obs = predict(image_path, model, topk, inference_type)
# Convert results to dataframe for plotting
result = pd.DataFrame({'p': ps}, index=classes)
inference_time = timer() - start_inference # end time minus start time = inference duration
print('\n----------------------------------------------------------------')
print('Ground_Truth : '+ gt)
print(result)
print(f"Top1 정확도 : {(result['p'][0])*100:.2f}%")
print(f'추론에 걸린 시간 : {inference_time*1000.0:.2f}ms')
print('----------------------------------------------------------------\n')
return inference_time, result, classes
def accuracy(output, target, topk=(1, )):
"""Compute the topk accuracy(s)"""
output = output.to('cuda')
target = target.to('cuda')
# print(f'output : {output}\ntarget: {target}')
with torch.no_grad():
maxk = max(topk)
batch_size = target.size(0)
# print(f'maxk: {maxk}\nbatchsize : {batch_size}')
# Find the predicted classes and transpose
_, pred = output.topk(k=maxk, dim=1, largest=True, sorted=True)
pred = pred.t()
# print(f'pred : {pred}')
# Determine predictions equal to the targets
correct = pred.eq(target.view(1, -1).expand_as(pred))
# print(f'correct : {correct}')
res = []
# For each k, find the percentage of correct
for k in topk:
# print(f'k : {k}\ntopk: {topk}')
correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True) # reshape, not view: correct is non-contiguous after t()
# print(f'correct_k : {correct_k}')
res.append(correct_k.mul_(100.0 / batch_size).item())
# print(f'for in res : {res}')
return res
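accuracy() above computes this with tensors; the same top-k rule in plain Python may make the logic easier to follow (the scores and labels here are made-up):

```python
def topk_accuracy(scores, targets, k):
    """Percent of examples whose true label is among the k highest scores."""
    hits = 0
    for row, true_label in zip(scores, targets):
        # indices of the k largest scores, mirroring torch.topk(..., largest=True, sorted=True)
        topk_idx = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += true_label in topk_idx
    return 100.0 * hits / len(targets)

scores = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]
print(topk_accuracy(scores, [1, 2], 1))  # -> 50.0
print(topk_accuracy(scores, [1, 2], 3))  # -> 100.0
```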
def evaluate(model, test_loader, criterion, n_classes, topk=(1, 3)):
"""Measure the performance of a trained PyTorch model
Params
--------
model (PyTorch model): trained cnn for inference
test_loader (PyTorch DataLoader): test dataloader
topk (tuple of ints): accuracy to measure
Returns
--------
results (DataFrame): results for each category
"""
classes = []
losses = []
# Hold accuracy results
acc_results = np.zeros((len(test_loader.dataset), len(topk)))
i = 0
model.eval()
with torch.no_grad():
# Testing loop
for data, targets in test_loader:
data, targets = data.to('cuda'), targets.to('cuda')
# Raw model output
out = model(data)
# Iterate through each example
for pred, true in zip(out, targets):
# Find topk accuracy
acc_results[i, :] = accuracy(
pred.unsqueeze(0), true.unsqueeze(0), topk)
classes.append(model.idx_to_class[true.item()])
# Calculate the loss
loss = criterion(pred.view(1, n_classes), true.view(1))
losses.append(loss.item())
# print(f'acc_result : {acc_results}')
i += 1
# Send results to a dataframe and calculate average across classes
results = pd.DataFrame(acc_results, columns=[f'top{i}' for i in topk])
# print(f'result : {results}')
# print(f'result top1 : {results["top1"].mean()}, top5 : {results["top5"].mean()}')
results['class'] = classes
results['loss'] = losses
results = results.groupby(classes).mean()
return results.reset_index().rename(columns={'index': 'class'})
def training_result(results):
# Weighted column of test images
results['weighted'] = results['n_test'] / results['n_test'].sum()
# Create weighted accuracies
for i in (1, 3):
results[f'weighted_top{i}'] = results['weighted'] * results[f'top{i}']
# Find final accuracy accounting for frequencies
top1_weighted = results['weighted_top1'].sum()
top3_weighted = results['weighted_top3'].sum()
loss_weighted = (results['weighted'] * results['loss']).sum()
print('\n----------------------------------------------------------------')
print(f'Final test cross entropy per image = {loss_weighted:.4f}.')
print(f'Final test top 1 weighted accuracy = {top1_weighted:.2f}%')
print(f'Final test top 3 weighted accuracy = {top3_weighted:.2f}%')
print('----------------------------------------------------------------\n')
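training_result weights each class's accuracy by its share of the test images before summing. A minimal stdlib-only illustration of that weighting with made-up numbers:

```python
def weighted_accuracy(n_test, top1):
    """Average per-class accuracy, weighted by each class's test-image count."""
    total = sum(n_test)
    return sum(n / total * acc for n, acc in zip(n_test, top1))

# A 90%-accurate class with 3x the images dominates the average.
print(weighted_accuracy([30, 10], [90.0, 50.0]))  # -> 80.0
```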
# ---- Server/src/utils/utils.py (repo: SamuelSlavka/dp, MIT license) ----
] | null | null | null | """ General utils """
def castStrListToHex(list):
return [int(val,16) for val in list]
def castNestedStrListToHex(list):
return [[int(x,16) for x in lst] for lst in list]
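A quick usage check of the two helpers above (names kept as in the source, though they parse hex strings into ints rather than cast to hex; redefined here so the sketch is self-contained):

```python
def castStrListToHex(lst):
    # parse each base-16 string into an int
    return [int(val, 16) for val in lst]

def castNestedStrListToHex(nested):
    return [[int(x, 16) for x in lst] for lst in nested]

print(castStrListToHex(['ff', '10']))           # -> [255, 16]
print(castNestedStrListToHex([['a'], ['0b']]))  # -> [[10], [11]]
```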
# ---- tests/exog/random/random_exog_300_160.py (repo: shaido987/pyaf, BSD-3-Clause) ----
] | 63 | 2017-03-09T14:51:18.000Z | 2022-03-27T20:52:57.000Z | import tests.exog.test_random_exogenous as testrandexog
testrandexog.test_random_exogenous(300, 160) | 25.75 | 55 | 0.864078 | 14 | 103 | 6.071429 | 0.714286 | 0.235294 | 0.447059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 0.067961 | 103 | 4 | 56 | 25.75 | 0.822917 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
e5e045c9b3e9bc77963ff8a4a61b2cb06b769ac6 | 6,461 | py | Python | crawlers/townCode/municipalPage.py | comjoy91/SKorean-Election_result-Crawler | 26674819357628cafc7149b72a220dfca3697bb4 | [
"Apache-2.0"
] | null | null | null | crawlers/townCode/municipalPage.py | comjoy91/SKorean-Election_result-Crawler | 26674819357628cafc7149b72a220dfca3697bb4 | [
"Apache-2.0"
] | null | null | null | crawlers/townCode/municipalPage.py | comjoy91/SKorean-Election_result-Crawler | 26674819357628cafc7149b72a220dfca3697bb4 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- encoding=utf-8 -*-
from crawlers.townCode.base_municipalPage import *
from utils import sanitize, InvalidCrawlerError
def Crawler(nth, election_name, electionType, target, target_eng, target_kor):
    if target == 'local-pp':
if 1 <= nth <= 3:
crawler = Province_townCodeCrawler_GuOld(int(nth), election_name, electionType)
if nth == 3:
crawler.urlParam_PR_sgg_list = dict(electionId='0000000000', electionName=election_name, electionCode=7)
crawler.urlParam_PR_elector_list = dict(electionId='0000000000', electionName=election_name,\
requestURI='/WEB-INF/jsp/electioninfo/0000000000/ep/epei01.jsp',\
statementId='EPEI01_#91',\
oldElectionType=0, electionType=2, electionCode=8,\
townCode=-1)
elif 4 <= nth <= 6:
crawler = Province_townCodeCrawler_Old(int(nth), election_name, electionType)
crawler.urlParam_PR_sgg_list = dict(electionId='0000000000', electionName=election_name, electionCode=8)
crawler.urlParam_PR_elector_list = dict(electionId='0000000000', electionName=election_name,\
requestURI='/WEB-INF/jsp/electioninfo/0000000000/ep/epei01.jsp',\
statementId='EPEI01_#1',\
oldElectionType=1, electionType=2, electionCode=8,\
townCode=-1)
elif nth == 7:
raise InvalidCrawlerError('townCode', nth, election_name, electionType)
            # Code path when entering via "Recent elections" (최근선거): crawler = Province_townCodeCrawler_Recent(int(nth), election_name, electionType)
else:
raise InvalidCrawlerError('townCode', nth, election_name, electionType)
    elif target == 'local-mp':
if 1 <= nth <= 3:
crawler = Province_townCodeCrawler_GuOld(int(nth), election_name, electionType)
elif 4 <= nth <= 6:
crawler = Province_townCodeCrawler_Old(int(nth), election_name, electionType)
crawler.urlParam_PR_sgg_list = dict(electionId='0000000000', electionName=election_name, electionCode=9)
crawler.urlParam_PR_elector_list = dict(electionId='0000000000', electionName=election_name,\
requestURI='/WEB-INF/jsp/electioninfo/0000000000/ep/epei01.jsp',\
statementId='EPEI01_#1',\
oldElectionType=1, electionType=2, electionCode=9,\
townCode=-1)
        else:
            raise InvalidCrawlerError('townCode', nth, election_name, electionType)
    else:
        raise InvalidCrawlerError('townCode', nth, election_name, electionType)
crawler.nth = nth
crawler.target = target
crawler.target_eng = target_eng
crawler.target_kor = target_kor
return crawler
class Province_townCodeCrawler_GuOld(JSONCrawler_municipal):
def __init__(self, nth, _election_name, _election_type):
self.urlPath_city_codes = 'http://info.nec.go.kr/bizcommon/selectbox/selectbox_cityCodeBySgJson_GuOld.json'
self.urlParam_city_codes = dict(electionId='0000000000', electionCode=_election_name, subElectionCode=_election_type)
        # Data crawled here is grouped by administrative division (si/gun/gu, including administrative gu).
self.urlPath_town_list = 'http://info.nec.go.kr/bizcommon/selectbox/selectbox_townCodeBySgJson_GuOld.json'
self.urlParam_town_list = dict(electionId='0000000000', electionCode=_election_name, subElectionCode=_election_type)
        # Data crawled here is grouped by electoral district.
self.urlPath_sgg_list = 'http://info.nec.go.kr/bizcommon/selectbox/selectbox_getSggCityCodeJson_GuOld.json'
self.urlParam_sgg_list = dict(electionId='0000000000', electionName=_election_name, electionCode=_election_type)
        # Data crawled here is grouped by electoral district.
self.urlPath_sggTown_list = 'http://info.nec.go.kr/bizcommon/selectbox/selectbox_getSggTownCodeJson_GuOld.json'
self.urlParam_sggTown_list = dict(electionId='0000000000', electionName=_election_name, electionCode=_election_type)
self.urlPath_elector_list = 'http://info.nec.go.kr/electioninfo/electionInfo_report.xhtml'
self.urlParam_elector_list = dict(electionId='0000000000', electionName=_election_name,\
requestURI='/WEB-INF/jsp/electioninfo/0000000000/ep/epei01.jsp',\
statementId='EPEI01_#91',\
oldElectionType=0, electionType=2, electionCode=_election_type,\
townCode=-1)
class Province_townCodeCrawler_Old(JSONCrawler_municipal):
def __init__(self, nth, _election_name, _election_type):
self.urlPath_city_codes = 'http://info.nec.go.kr/bizcommon/selectbox/selectbox_cityCodeBySgJson_Old.json'
self.urlParam_city_codes = dict(electionId='0000000000', electionCode=_election_name, subElectionCode=_election_type)
        # Data crawled here is grouped by administrative division (si/gun/gu, including administrative gu).
self.urlPath_town_list = 'http://info.nec.go.kr/bizcommon/selectbox/selectbox_townCodeBySgJson_Old.json'
self.urlParam_town_list = dict(electionId='0000000000', electionCode=_election_name, subElectionCode=_election_type)
        # Data crawled here is grouped by electoral district.
self.urlPath_sgg_list = 'http://info.nec.go.kr/bizcommon/selectbox/selectbox_getSggCityCodeJson_Old.json'
self.urlParam_sgg_list = dict(electionId='0000000000', electionName=_election_name, electionCode=_election_type)
        # Data crawled here is grouped by electoral district.
self.urlPath_sggTown_list = 'http://info.nec.go.kr/bizcommon/selectbox/selectbox_getSggTownCodeJson_Old.json'
self.urlParam_sggTown_list = dict(electionId='0000000000', electionName=_election_name, electionCode=_election_type)
self.urlPath_elector_list = 'http://info.nec.go.kr/electioninfo/electionInfo_report.xhtml'
self.urlParam_elector_list = dict(electionId='0000000000', electionName=_election_name,\
requestURI='/WEB-INF/jsp/electioninfo/0000000000/ep/epei01.jsp',\
statementId='EPEI01_#1',\
oldElectionType=1, electionType=2, electionCode=_election_type,\
townCode=-1)
class Province_townCodeCrawler_Recent(JSONCrawler_municipal):
def __init__(self, nth, _election_name, _election_type):
self.urlPath_city_codes = 'http://info.nec.go.kr/bizcommon/selectbox/selectbox_cityCodeBySgJson.json'
self.urlParam_city_codes = dict(electionId=_election_name, electionCode=_election_type)
        # Data crawled here is grouped by administrative division (si/gun/gu, including administrative gu).
self.urlPath_town_list = 'http://info.nec.go.kr/bizcommon/selectbox/selectbox_townCodeJson.json'
self.urlParam_town_list = dict(electionId=_election_name, electionCode=_election_type)
        # Data crawled here is grouped by electoral district.
self.urlPath_sgg_list = 'http://info.nec.go.kr/bizcommon/selectbox/selectbox_getSggCityCodeJson.json'
self.urlParam_sgg_list = dict(electionId=_election_name, electionCode=_election_type)
        # Data crawled here is grouped by electoral district.
self.urlPath_sggTown_list = 'http://info.nec.go.kr/bizcommon/selectbox/selectbox_getSggTownCodeJson_GuOld.json'
self.urlParam_sggTown_list = dict(electionId=_election_name, electionCode=_election_type)
| 52.959016 | 119 | 0.768612 | 792 | 6,461 | 5.989899 | 0.137626 | 0.078415 | 0.064503 | 0.082631 | 0.9043 | 0.897133 | 0.897133 | 0.85371 | 0.846121 | 0.846121 | 0 | 0.047326 | 0.12026 | 6,461 | 121 | 120 | 53.396694 | 0.787298 | 0.066398 | 0 | 0.542169 | 0 | 0 | 0.255648 | 0.041528 | 0 | 0 | 0 | 0 | 0 | 1 | 0.048193 | false | 0 | 0.024096 | 0 | 0.120482 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
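The `Crawler` factory in the record above selects a crawler class from the election ordinal `nth` and then patches attributes onto the instance after construction. A minimal self-contained sketch of that dispatch pattern (the class names below are stand-ins, not the real crawler classes):

```python
class GuOldCrawler:
    label = 'GuOld'  # stands in for Province_townCodeCrawler_GuOld (elections 1-3)

class OldCrawler:
    label = 'Old'    # stands in for Province_townCodeCrawler_Old (elections 4-6)

def make_crawler(nth):
    # Dispatch on the election ordinal, mirroring the nth ranges in Crawler()
    if 1 <= nth <= 3:
        crawler = GuOldCrawler()
    elif 4 <= nth <= 6:
        crawler = OldCrawler()
    else:
        raise ValueError(f'no townCode crawler for election #{nth}')
    crawler.nth = nth  # extra attributes are patched on after construction
    return crawler

print(make_crawler(5).label)  # Old
```

Keeping the unbound-`nth` case as an explicit exception avoids the `NameError` that an unmatched branch would otherwise raise when the attributes are patched on.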
e5e36e30ce21178b0f5d9b09325a3c3dc09a04c2 | 175 | py | Python | python/__init__.py | EnvSys/kealib | f1a7e7281a2fac178008a9c21026f07914afc030 | [
"MIT"
] | 5 | 2020-09-18T03:21:25.000Z | 2021-09-09T02:24:02.000Z | python/__init__.py | EnvSys/kealib | f1a7e7281a2fac178008a9c21026f07914afc030 | [
"MIT"
] | 9 | 2020-05-28T10:45:14.000Z | 2022-03-26T06:44:23.000Z | python/__init__.py | EnvSys/kealib | f1a7e7281a2fac178008a9c21026f07914afc030 | [
"MIT"
] | 5 | 2019-12-01T20:08:41.000Z | 2022-02-21T12:03:54.000Z | """
Module for Kealib 'extras' - functionality that GDAL
doesn't currently support
"""
# load this into mem as symbols are required by the shared libs in here
import awkward
| 25 | 71 | 0.76 | 27 | 175 | 4.925926 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.177143 | 175 | 6 | 72 | 29.166667 | 0.923611 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f91f82f75140415a2987062340c6d8cc5cfc5f05 | 109 | py | Python | tempCodeRunnerFile.py | 22037/22037-Camera | 77d543399ef0e0adef719c93ff4bd956d7057b94 | [
"MIT"
] | 1 | 2022-03-04T21:31:24.000Z | 2022-03-04T21:31:24.000Z | tempCodeRunnerFile.py | 22037/22037-Camera | 77d543399ef0e0adef719c93ff4bd956d7057b94 | [
"MIT"
] | null | null | null | tempCodeRunnerFile.py | 22037/22037-Camera | 77d543399ef0e0adef719c93ff4bd956d7057b94 | [
"MIT"
] | 1 | 2022-03-25T00:11:01.000Z | 2022-03-25T00:11:01.000Z | self.data_cube_corr=cv2.resize(self.data_cube_corr, (540,720), fx=0, fy=0, interpolation = cv2.INTER_NEAREST) | 109 | 109 | 0.788991 | 20 | 109 | 4.05 | 0.7 | 0.197531 | 0.296296 | 0.395062 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097087 | 0.055046 | 109 | 1 | 109 | 109 | 0.68932 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
00bb23f540035d3114088422a99a10b37895fb88 | 198 | py | Python | akutil/akutil/__init__.py | ak47mrj/arkouda | a9167e674aff57e02e1bed49fbb0c3cf1b2f2707 | [
"MIT"
] | 75 | 2019-10-21T17:20:41.000Z | 2021-05-10T22:01:19.000Z | akutil/akutil/__init__.py | ak47mrj/arkouda | a9167e674aff57e02e1bed49fbb0c3cf1b2f2707 | [
"MIT"
] | 424 | 2019-10-21T16:48:45.000Z | 2021-05-12T11:49:18.000Z | akutil/akutil/__init__.py | ak47mrj/arkouda | a9167e674aff57e02e1bed49fbb0c3cf1b2f2707 | [
"MIT"
] | 36 | 2019-10-23T17:45:44.000Z | 2021-04-17T01:15:03.000Z | from akutil.dataframe import *
from akutil.util import *
from akutil.row import *
from akutil.alignment import *
from akutil.plotting import *
from akutil.join import *
from akutil.hdbscan import *
| 24.75 | 30 | 0.787879 | 28 | 198 | 5.571429 | 0.357143 | 0.448718 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141414 | 198 | 7 | 31 | 28.285714 | 0.917647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
00c8228418c8b0bd1ef2566c15523e67a79fddd1 | 29 | py | Python | manage.py | flyinactor91/AVWX-Account | 29f3b9226699243966f9c7b041e94773c79d0314 | [
"MIT"
] | 1 | 2019-09-14T02:20:04.000Z | 2019-09-14T02:20:04.000Z | manage.py | flyinactor91/AVWX-Account | 29f3b9226699243966f9c7b041e94773c79d0314 | [
"MIT"
] | null | null | null | manage.py | flyinactor91/AVWX-Account | 29f3b9226699243966f9c7b041e94773c79d0314 | [
"MIT"
] | 1 | 2019-03-23T09:34:50.000Z | 2019-03-23T09:34:50.000Z | from avwx_account import app
| 14.5 | 28 | 0.862069 | 5 | 29 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |