hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
745fd82ce62ab2730c797bebd1dec7307747441a | 2,612 | py | Python | leviathan/__init__.py | scottywz/leviathan-player | 31e19efd0e88959455371dd605981d98bddd388b | [
"Apache-2.0"
] | null | null | null | leviathan/__init__.py | scottywz/leviathan-player | 31e19efd0e88959455371dd605981d98bddd388b | [
"Apache-2.0"
] | null | null | null | leviathan/__init__.py | scottywz/leviathan-player | 31e19efd0e88959455371dd605981d98bddd388b | [
"Apache-2.0"
] | null | null | null | # Leviathan Music Manager
# A command-line utility to manage your music collection.
#
# Copyright (C) 2010-2011, 2020 S. Zeid
# https://code.s.zeid.me/leviathan
#
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
#
#
# EXCEPTION: Any version of this program, modified or otherwise, or any portion
# or modified portion of this program, which does not use or import the Mutagen
# audio tagging library may (at your option) be used under the following X11
# License instead of the GNU General Public License (this condition is also
# satisfied when this program is imported and used as a library without calling
# its `enable_gpl()` function):
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# Except as contained in this notice, the name(s) of the above copyright holders
# shall not be used in advertising or otherwise to promote the sale, use or
# other dealings in this Software without prior written authorization.
__author__ = "S. Zeid <s@zeid.me>"
__version__ = "0.1"
from leviathan import *
| 45.824561 | 80 | 0.765697 | 409 | 2,612 | 4.867971 | 0.444988 | 0.044199 | 0.026118 | 0.038172 | 0.103466 | 0.04219 | 0 | 0 | 0 | 0 | 0 | 0.007907 | 0.176876 | 2,612 | 56 | 81 | 46.642857 | 0.91814 | 0.929939 | 0 | 0 | 0 | 0 | 0.165414 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
7464f1b7a4e383c5cf1c59e3f31706e1bfe39944 | 1,858 | py | Python | scenario-generator.py | mdenesfe/scenario-generator | 1ab85092669d8d75a361520640285735d0284bc7 | [
"MIT"
] | null | null | null | scenario-generator.py | mdenesfe/scenario-generator | 1ab85092669d8d75a361520640285735d0284bc7 | [
"MIT"
] | null | null | null | scenario-generator.py | mdenesfe/scenario-generator | 1ab85092669d8d75a361520640285735d0284bc7 | [
"MIT"
] | null | null | null |
basrol = input("Name of the lead character: ")
kardes = input("The lead's older brother: ")
anne = input("The lead's mother: ")
baba = input("The lead's father: ")
sevgili1 = input("The lead's beloved: ")
arkadas = input("The lead's close friend: ")
mekan = input("Where: ")
isyeri = input("Where do they work: ")
print("------------------------------------------")
print(basrol, "and", kardes, "are siblings.")
print("Their mother is", anne, "and their father is", baba, ".")
print("At his older brother and", sevgili1, "'s", mekan, ",", basrol, "argued with the photographer taking a photo of", kardes, "and", sevgili1, ", and broke the photographer's camera.")
print("After breaking the photographer's camera, he took a job at the", isyeri, "together with his older brother and saved up money to pay for it.")
print("With the money he earned he bought a gift for", sevgili1, ", whom he loved; planning to give her the gift and confess,", basrol, "then saw his older brother", kardes, "kissing", sevgili1, ".")
print("After this incident,", basrol, "went home, and when his father asked where the camera money was, they argued.")
print("During the argument the mother,", anne, ", stepped in to separate her son and husband, and the father,", baba, ", slapped his wife.")
print("Unable to bear this,", basrol, "punched his father, took the work car and ran away from home.")
print(kardes, "came to pick up", basrol, ", who was drinking with his friend", arkadas, ".")
print("Driving home he argued with", kardes, ", and during the argument", kardes, "failed to notice a young man in front of the car, hit him, and the young man died.")
print("Since", kardes, "had his university exam the day after the incident,", basrol, "took the blame and was sentenced to 4 years in prison.")
print("And this is how the story begins.")
print("------------------------------------------")
| 71.461538 | 213 | 0.700753 | 219 | 1,858 | 5.945205 | 0.570776 | 0.049923 | 0.019969 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004316 | 0.127018 | 1,858 | 25 | 214 | 74.32 | 0.798397 | 0 | 0 | 0.090909 | 0 | 0 | 0.691762 | 0.045827 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.636364 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
74697367474506ce045f60ccd43ed3d1a6431709 | 1,218 | py | Python | database/buyer.py | mithilesh1024/Callback-Warrior_ShoppingMart | 33aaaaee720c0d8c7e3f337c5cab0e4a438a8742 | [
"MIT"
] | null | null | null | database/buyer.py | mithilesh1024/Callback-Warrior_ShoppingMart | 33aaaaee720c0d8c7e3f337c5cab0e4a438a8742 | [
"MIT"
] | null | null | null | database/buyer.py | mithilesh1024/Callback-Warrior_ShoppingMart | 33aaaaee720c0d8c7e3f337c5cab0e4a438a8742 | [
"MIT"
] | 3 | 2020-11-19T11:05:48.000Z | 2020-11-21T16:06:55.000Z | from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()  # the models below reference `db`, which was never defined
class Seller(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(120), unique=True, nullable=False)
    email = db.Column(db.String(120), unique=True, nullable=False)
    password = db.Column(db.String(60), nullable=False)
    address = db.Column(db.String(255), nullable=False)
    phoneNo = db.Column(db.String(10), nullable=False)  # SQLAlchemy's Integer takes no length argument
    companyName = db.Column(db.String(255), unique=True, nullable=False)

    def __init__(self, name, email, password, address, phoneNo, companyName):
        self.name = name
        self.email = email
        self.password = password
        self.address = address
        self.phoneNo = phoneNo
        self.companyName = companyName  # non-nullable column, so it must be set


class Products(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(120), unique=True, nullable=False)
    image = db.Column(db.String(120), nullable=False, default='default.jpg')
    price = db.Column(db.Integer, nullable=False, default=0)  # fixed typo: Interger -> Integer
    description = db.Column(db.String(500), nullable=False)

    def __init__(self, name, image, price, description):
        self.name = name
        self.image = image
        self.price = price
        self.description = description
| 38.0625 | 75 | 0.679803 | 164 | 1,218 | 4.981707 | 0.243902 | 0.117503 | 0.146879 | 0.156671 | 0.400245 | 0.330477 | 0.261934 | 0.261934 | 0.261934 | 0.210526 | 0 | 0.02834 | 0.188834 | 1,218 | 31 | 76 | 39.290323 | 0.798583 | 0 | 0 | 0.230769 | 0 | 0 | 0.009031 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0.115385 | 0.038462 | 0 | 0.653846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
7470ca62a35741134194bea5b06bd86ec8e473b6 | 1,248 | py | Python | ymidi/containers.py | Owen-Cochell/yapmidi | 50a3a1800375a3f390eb40628387fb3d2a520b8e | [
"MIT"
] | null | null | null | ymidi/containers.py | Owen-Cochell/yapmidi | 50a3a1800375a3f390eb40628387fb3d2a520b8e | [
"MIT"
] | null | null | null | ymidi/containers.py | Owen-Cochell/yapmidi | 50a3a1800375a3f390eb40628387fb3d2a520b8e | [
"MIT"
] | null | null | null | """
Components that house MIDI events and other misc. data.
"""
from dataclasses import dataclass
@dataclass
class TrackInfo:
"""
An object that contains info about a specific track.
The data in this object is used for keeping track of track statistics.
We allow for these attributes to be manually defined,
and we also allow for auto-track traversal to fill in this info.
"""
pass
class TrackPattern(list):
"""
A collection of tracks.
We contain a list of tracks that contain MIDI events.
We keep track (haha) of statistics and data related to the
MIDI data we contain.
We do this by handling Meta events and yap-events.
We also support playback of the MIDI data.
This includes ALL MIDI track types,
and supports tracks that are playing at different speeds.
"""
pass
class Track(list):
"""
A track of MIDI events.
We offer some useful helper methods that make
altering and adding MIDI events a relatively painless affair.
We inherit the default python list,
so we support all list operations.
"""
def __init__(self, *args):
super().__init__(*args)
self.name = '' # Name of the track
| 24 | 74 | 0.668269 | 177 | 1,248 | 4.666667 | 0.514124 | 0.048426 | 0.029056 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.275641 | 1,248 | 51 | 75 | 24.470588 | 0.913717 | 0.697917 | 0 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0.222222 | 0.111111 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
7475ef869a204c884139e57953853f7580e07de3 | 373 | py | Python | deepRec1/dataloader4mlleatestWithTs.py | meannoharm/movie_recommend | aa1b82c82d3e66db62e50568414c5b4a1cfa92b6 | [
"MIT"
] | null | null | null | deepRec1/dataloader4mlleatestWithTs.py | meannoharm/movie_recommend | aa1b82c82d3e66db62e50568414c5b4a1cfa92b6 | [
"MIT"
] | null | null | null | deepRec1/dataloader4mlleatestWithTs.py | meannoharm/movie_recommend | aa1b82c82d3e66db62e50568414c5b4a1cfa92b6 | [
"MIT"
] | null | null | null | import pandas as pd
from data_set import filepaths as fp
def readRecData(path, test_ratio=0.1):
    df = pd.read_csv(path, sep='\t', header=None)
    a = df.sort_values(by=[0, 3], axis=0)
    a.to_csv('a.csv')
    print(a)
    return
if __name__ == '__main__':
readRecData(fp.Ml_latest_small.RATING_TS) | 24.866667 | 47 | 0.707775 | 66 | 373 | 3.757576 | 0.681818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016234 | 0.174263 | 373 | 15 | 48 | 24.866667 | 0.788961 | 0 | 0 | 0 | 0 | 0 | 0.040107 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.384615 | 0 | 0.538462 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
747932e9a643cbc1f76b22228d6a2b69edf0bc39 | 118 | py | Python | anyfix_globals.py | alexander-liao/anyfix | d72e53d9382eb1a85dc2ff5ebc6a80094c5e9f57 | [
"Unlicense"
] | 4 | 2019-12-01T11:05:00.000Z | 2020-12-22T14:33:52.000Z | anyfix_globals.py | hyper-neutrino/anyfix | d72e53d9382eb1a85dc2ff5ebc6a80094c5e9f57 | [
"Unlicense"
] | null | null | null | anyfix_globals.py | hyper-neutrino/anyfix | d72e53d9382eb1a85dc2ff5ebc6a80094c5e9f57 | [
"Unlicense"
] | 1 | 2017-06-19T18:34:06.000Z | 2017-06-19T18:34:06.000Z | values = {
'r': 0.000000000001
}
typers = {
'r': float
}
def setGlobal(key, value):
values[key] = value
| 10.727273 | 26 | 0.559322 | 14 | 118 | 4.714286 | 0.714286 | 0.242424 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151163 | 0.271186 | 118 | 10 | 27 | 11.8 | 0.616279 | 0 | 0 | 0 | 0 | 0 | 0.016949 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.125 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
74992e791f0bfb3da96a37fdd4a8ed363d8aa216 | 1,976 | py | Python | gfxlcd/demos/ssd.py | bkosciow/gfxlcd | 953d013abb9b695c2226b348093cc64391a01f6c | [
"MIT"
] | 12 | 2018-01-31T18:43:23.000Z | 2021-10-06T10:23:05.000Z | gfxlcd/demos/ssd.py | bkosciow/gfxlcd | 953d013abb9b695c2226b348093cc64391a01f6c | [
"MIT"
] | 4 | 2018-04-25T15:15:16.000Z | 2021-03-21T09:21:50.000Z | gfxlcd/demos/ssd.py | bkosciow/gfxlcd | 953d013abb9b695c2226b348093cc64391a01f6c | [
"MIT"
] | 4 | 2018-03-15T09:12:09.000Z | 2021-03-19T20:07:33.000Z | import random
import sys
sys.path.append("../../")
from gfxlcd.driver.ssd1306.spi import SPI
from gfxlcd.driver.ssd1306.ssd1306 import SSD1306
def hole(x, y):  # draw a 5x5 ring of pixels (a "hole") with its top-left corner at (x, y)
o.draw_pixel(x+1, y)
o.draw_pixel(x+2, y)
o.draw_pixel(x+3, y)
o.draw_pixel(x+1, y + 4)
o.draw_pixel(x+2, y + 4)
o.draw_pixel(x+3, y + 4)
o.draw_pixel(x, y + 1)
o.draw_pixel(x+4, y + 1)
o.draw_pixel(x, y + 2)
o.draw_pixel(x+4, y + 2)
o.draw_pixel(x, y + 3)
o.draw_pixel(x+4, y + 3)
drv = SPI()
o = SSD1306(128, 64, drv)
o.init()
o.auto_flush = False
for _ in range(0, 50):
hole(random.randint(2, 120), random.randint(2, 56))
hole(10, 10)
hole(15, 13)
hole(18, 23)
hole(40, 10)
o.flush(True)
# o.fill(0)
#
# o.fill(random.randint(0, 255))
#
# o.draw_pixels(2, 0, 128)
# o.draw_pixels(3, 0, 128)
# o.draw_pixels(7, 0, 128)
# o.draw_pixels(8, 0, 128)
# o.draw_pixels(1, 9, 7)
# o.draw_pixels(9, 9, 7)
# o.draw_pixels(2, 9, 8)
# o.draw_pixels(3, 9, 16)
# o.draw_pixels(4, 9, 33)
# o.draw_pixels(5, 9, 66)
# o.draw_pixels(6, 9, 33)
# o.draw_pixels(7, 9, 16)
# o.draw_pixels(8, 9, 8)
#
# o.draw_pixels(15, 9, 127)
# o.draw_pixels(16, 9, 65)
# o.draw_pixels(17, 9, 65)
# o.draw_pixels(18, 9, 62)
#
# o.draw_pixels(20, 9, 38)
# o.draw_pixels(21, 9, 73)
# o.draw_pixels(22, 9, 73)
# o.draw_pixels(23, 9, 50)
#
# o.draw_pixels(25, 9, 127)
# o.draw_pixels(26, 9, 9)
# o.draw_pixels(27, 9, 9)
# o.draw_pixels(28, 9, 6)
#
# o.draw_pixels(30, 9, 98)
# o.draw_pixels(31, 9, 81)
# o.draw_pixels(32, 9, 73)
# o.draw_pixels(33, 9, 70)
#
# o.draw_pixels(35, 9, 62)
# o.draw_pixels(36, 9, 65)
# o.draw_pixels(37, 9, 65)
# o.draw_pixels(38, 9, 62)
#
# o.draw_pixels(40, 9, 4)
# o.draw_pixels(41, 9, 2+64)
# o.draw_pixels(42, 9, 127)
# o.draw_pixels(43, 9, 64)
#
# o.draw_pixels(40, 9, 4)
# o.draw_pixels(41, 9, 2+64)
# o.draw_pixels(42, 9, 127)
# o.draw_pixels(43, 9, 64)
#
# o.draw_pixels(45, 9, 97)
# o.draw_pixels(46, 9, 25)
# o.draw_pixels(47, 9, 5)
# o.draw_pixels(48, 9, 3)
| 21.247312 | 55 | 0.612348 | 426 | 1,976 | 2.701878 | 0.197183 | 0.247611 | 0.430061 | 0.114683 | 0.56907 | 0.261512 | 0.146829 | 0.122502 | 0.122502 | 0.122502 | 0 | 0.174312 | 0.172571 | 1,976 | 92 | 56 | 21.478261 | 0.529664 | 0.583502 | 0 | 0 | 0 | 0 | 0.007843 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.137931 | 0 | 0.172414 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
776b585e85359cdc8f805318534efb9b4298e5fa | 480 | py | Python | core/apps/accounts/urls.py | dansackett/django-project-template | 12264a8300442afda9cb207d7db24e55149d6355 | [
"MIT"
] | 4 | 2017-02-08T19:15:48.000Z | 2019-08-19T01:53:18.000Z | core/apps/accounts/urls.py | dansackett/django-project-template | 12264a8300442afda9cb207d7db24e55149d6355 | [
"MIT"
] | 3 | 2017-01-24T15:05:55.000Z | 2019-11-01T17:26:07.000Z | core/apps/accounts/urls.py | dansackett/django-project-template | 12264a8300442afda9cb207d7db24e55149d6355 | [
"MIT"
] | 6 | 2017-01-23T23:09:05.000Z | 2019-11-18T10:45:27.000Z | from django.conf.urls import url
from accounts import views as account_views
urlpatterns = [
    url(r'^users/$', account_views.users_list, name='users-list'),
    url(r'^users/new/$', account_views.user_create, name='user-create'),
    url(r'^user/(?P<pk>\d+)/$', account_views.user_single, name='user-single'),
    url(r'^user/(?P<pk>\d+)/edit/$', account_views.user_edit, name='user-edit'),
    url(r'^user/(?P<pk>\d+)/delete/$', account_views.user_delete, name='user-delete'),
]
| 40 | 85 | 0.68125 | 76 | 480 | 4.157895 | 0.315789 | 0.227848 | 0.202532 | 0.085443 | 0.113924 | 0.113924 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10625 | 480 | 11 | 86 | 43.636364 | 0.736597 | 0 | 0 | 0 | 0 | 0 | 0.283333 | 0.1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
77746bbadb5cbe2a734fb9a9f8421987eac86fea | 807 | py | Python | patterndetector/detector/engulfing_candle_detector.py | WynnD/PatternDetector | dc25e15b101a066f96a62898a7cb4ec6535d4eb5 | [
"MIT"
] | 1 | 2020-12-08T06:56:33.000Z | 2020-12-08T06:56:33.000Z | patterndetector/detector/engulfing_candle_detector.py | ajmal017/PatternDetector | f71ae4c38481bd88956a8ef2ef162a1b7e617808 | [
"MIT"
] | 4 | 2020-08-03T16:29:31.000Z | 2021-02-24T22:51:33.000Z | patterndetector/detector/engulfing_candle_detector.py | ajmal017/PatternDetector | f71ae4c38481bd88956a8ef2ef162a1b7e617808 | [
"MIT"
] | 1 | 2020-12-08T06:56:25.000Z | 2020-12-08T06:56:25.000Z | from .detector import Detector
from patterndetector.data import Data
class EngulfingCandleDetector(Detector):
def __init__(self, data: Data):
super().__init__(data)
@property
def name(self):
return 'Engulfing Candle'
def isPattern(self, ticker):
openPrice = self.data.getOpeningPriceNDaysAgo(ticker, days=0)
openPrice2 = self.data.getOpeningPriceNDaysAgo(ticker, days=1)
closePrice = self.data.getClosingPriceNDaysAgo(ticker, days=0)
closePrice2 = self.data.getClosingPriceNDaysAgo(ticker, days=1)
        bullish = closePrice > openPrice2 and openPrice < closePrice2 and openPrice2 > closePrice2 and openPrice < closePrice
        bearish = openPrice > closePrice2 and closePrice < openPrice2 and closePrice2 > openPrice2 and closePrice < openPrice
        return bullish or bearish
| 44.833333 | 237 | 0.732342 | 83 | 807 | 7.024096 | 0.349398 | 0.068611 | 0.106346 | 0.12693 | 0.281304 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021341 | 0.187113 | 807 | 17 | 238 | 47.470588 | 0.867378 | 0 | 0 | 0 | 0 | 0 | 0.019827 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.142857 | 0.071429 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
777d64d8f54b28b97381150b318ea3b3a8373ffa | 94 | py | Python | stepik/programming_on_python/3_2_6_my_solve.py | anklav24/Python-Education | 49ebcfabda1376390ee71e1fe321a51e33831f9e | [
"Apache-2.0"
] | null | null | null | stepik/programming_on_python/3_2_6_my_solve.py | anklav24/Python-Education | 49ebcfabda1376390ee71e1fe321a51e33831f9e | [
"Apache-2.0"
] | null | null | null | stepik/programming_on_python/3_2_6_my_solve.py | anklav24/Python-Education | 49ebcfabda1376390ee71e1fe321a51e33831f9e | [
"Apache-2.0"
] | null | null | null | string = input().lower().split()
for word in set(string):
print(word, string.count(word))
| 23.5 | 35 | 0.670213 | 14 | 94 | 4.5 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138298 | 94 | 3 | 36 | 31.333333 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
777f9aa04f81ef4641223c242df10f646f2bc740 | 559 | py | Python | week2/2.03 for/step03 multiplication table.py | project-cemetery/stepik-programming-on-python | 3ca4e6a74b9ca5deb50336737fe2e6d74722a95f | [
"MIT"
] | 7 | 2020-08-03T22:10:29.000Z | 2022-02-23T16:08:44.000Z | week2/2.03 for/step03 multiplication table.py | Joni2701/stepik-programming-on-python | 3ca4e6a74b9ca5deb50336737fe2e6d74722a95f | [
"MIT"
] | null | null | null | week2/2.03 for/step03 multiplication table.py | Joni2701/stepik-programming-on-python | 3ca4e6a74b9ca5deb50336737fe2e6d74722a95f | [
"MIT"
] | 9 | 2020-05-19T16:42:39.000Z | 2022-02-24T22:41:21.000Z | def print_multiplication_table(vertical_interval, horizontal_interval):
print('\t', end='')
for i in range(horizontal_interval[0], horizontal_interval[1] + 1):
print(i, end='\t')
print()
for i in range(vertical_interval[0], vertical_interval[1] + 1):
print(i, end='\t')
for j in range(horizontal_interval[0], horizontal_interval[1] + 1):
print(i * j, end='\t')
print()
intervals = (int(input()), int(input())), (int(input()), int(input()))
print_multiplication_table(intervals[0], intervals[1])
| 32.882353 | 75 | 0.633274 | 75 | 559 | 4.56 | 0.253333 | 0.263158 | 0.087719 | 0.131579 | 0.467836 | 0.467836 | 0.374269 | 0.304094 | 0.304094 | 0.304094 | 0 | 0.02439 | 0.193202 | 559 | 16 | 76 | 34.9375 | 0.733925 | 0 | 0 | 0.333333 | 0 | 0 | 0.014311 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0 | 0 | 0.083333 | 0.666667 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
778043d5864b69a00bdc1a821a0324321bcfd44b | 599 | py | Python | groups/templatetags/group_tags.py | AgentGeek/MangAdventure | 48d8e21355e5dc8122a6833d092190d79fef1674 | [
"MIT"
] | 2 | 2018-09-24T03:46:55.000Z | 2018-10-12T16:20:25.000Z | groups/templatetags/group_tags.py | AgentGeek/MangAdventure | 48d8e21355e5dc8122a6833d092190d79fef1674 | [
"MIT"
] | 6 | 2018-10-08T15:59:40.000Z | 2019-02-02T16:35:33.000Z | groups/templatetags/group_tags.py | AgentGeek/MangAdventure | 48d8e21355e5dc8122a6833d092190d79fef1674 | [
"MIT"
] | null | null | null | """Template tags of the groups app."""
from __future__ import annotations
from typing import TYPE_CHECKING
from django.template.defaultfilters import register
if TYPE_CHECKING: # pragma: no cover
from groups.models import Group, Member
@register.filter
def group_roles(member: Member, group: Group) -> str:
"""
Get the roles of the member within the group.
:param member: A ``Member`` model instance.
:param group: A ``Group``` model instance.
:return: A comma-separated list of roles.
"""
return member.get_roles(group) or 'N/A'
__all__ = ['group_roles']
| 22.185185 | 53 | 0.704508 | 82 | 599 | 4.987805 | 0.487805 | 0.02445 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.193656 | 599 | 26 | 54 | 23.038462 | 0.846791 | 0.378965 | 0 | 0 | 0 | 0 | 0.041298 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.444444 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
779ddf463b66eb24f675d17d27c6919930f89723 | 188 | py | Python | menu/context_processors.py | douglasPinheiro/nirvaris-menu | 3f2d09adbcda1ee5145966bcb2c9b40d9d934f49 | [
"MIT"
] | null | null | null | menu/context_processors.py | douglasPinheiro/nirvaris-menu | 3f2d09adbcda1ee5145966bcb2c9b40d9d934f49 | [
"MIT"
] | null | null | null | menu/context_processors.py | douglasPinheiro/nirvaris-menu | 3f2d09adbcda1ee5145966bcb2c9b40d9d934f49 | [
"MIT"
] | null | null | null |
def globals(request):
data = {}
if 'menu_item' in request.session:
data['menu_item'] = request.session['menu_item']
return data
| 17.090909 | 56 | 0.611702 | 24 | 188 | 4.625 | 0.583333 | 0.216216 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.255319 | 188 | 10 | 57 | 18.8 | 0.792857 | 0.132979 | 0 | 0 | 0 | 0 | 0.16875 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
779fc28c39695eb6df6d6b4def228165b9c64ed4 | 367 | py | Python | altimeter/qj/api/v1/api.py | elliotsegler/altimeter | c3924524938b4bae86b1acda2a4fc3f79ac523ff | [
"MIT"
] | 48 | 2019-11-06T03:20:53.000Z | 2022-02-22T21:10:45.000Z | altimeter/qj/api/v1/api.py | elliotsegler/altimeter | c3924524938b4bae86b1acda2a4fc3f79ac523ff | [
"MIT"
] | 27 | 2020-01-07T23:48:30.000Z | 2022-02-26T00:24:04.000Z | altimeter/qj/api/v1/api.py | elliotsegler/altimeter | c3924524938b4bae86b1acda2a4fc3f79ac523ff | [
"MIT"
] | 21 | 2019-12-20T03:06:35.000Z | 2021-12-15T23:26:00.000Z | """V1 API router"""
from fastapi import APIRouter
from altimeter.qj.api.v1.endpoints.jobs import JOBS_ROUTER
from altimeter.qj.api.v1.endpoints.result_sets import RESULT_SETS_ROUTER
V1_ROUTER = APIRouter()
V1_ROUTER.include_router(JOBS_ROUTER, prefix="/jobs", tags=["jobs"])
V1_ROUTER.include_router(RESULT_SETS_ROUTER, prefix="/result_sets", tags=["result_sets"])
| 36.7 | 89 | 0.80109 | 55 | 367 | 5.090909 | 0.290909 | 0.178571 | 0.107143 | 0.128571 | 0.207143 | 0.207143 | 0 | 0 | 0 | 0 | 0 | 0.017595 | 0.070845 | 367 | 9 | 90 | 40.777778 | 0.803519 | 0.035422 | 0 | 0 | 0 | 0 | 0.091954 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
77ac851550267c373f184de9966d3e44cba6878f | 1,595 | py | Python | tests/parlaclarin/parse_test.py | welfare-state-analytics/pyriksprot | a7dab9246c3d9bb1cf19543208bc72fbacb9e909 | [
"MIT"
] | null | null | null | tests/parlaclarin/parse_test.py | welfare-state-analytics/pyriksprot | a7dab9246c3d9bb1cf19543208bc72fbacb9e909 | [
"MIT"
] | 19 | 2021-09-21T16:24:14.000Z | 2022-03-31T10:06:50.000Z | tests/parlaclarin/parse_test.py | welfare-state-analytics/pyriksprot | a7dab9246c3d9bb1cf19543208bc72fbacb9e909 | [
"MIT"
] | null | null | null | import os
import pytest
from pyriksprot import interface
from pyriksprot.corpus import parlaclarin
from ..utility import RIKSPROT_PARLACLARIN_FAKE_FOLDER, RIKSPROT_PARLACLARIN_FOLDER
jj = os.path.join
def test_to_protocol_in_depth_validation_of_correct_parlaclarin_xml():
protocol: interface.Protocol = parlaclarin.ProtocolMapper.to_protocol(
jj(RIKSPROT_PARLACLARIN_FAKE_FOLDER, "prot-1958-fake.xml")
)
assert protocol is not None
assert len(protocol.utterances) == 4
assert len(protocol) == 4
assert protocol.name == 'prot-1958-fake'
assert protocol.date == '1958'
assert protocol.has_text, 'has text'
assert protocol.checksum(), 'checksum'
# FIXME: More checks
@pytest.mark.parametrize(
'filename',
[
("prot-197879--14.xml"),
],
)
def test_parlaclarin_xml_with_no_utterances(filename):
path: str = jj(RIKSPROT_PARLACLARIN_FOLDER, "protocols", filename.split('-')[1], filename)
protocol = parlaclarin.ProtocolMapper.to_protocol(path, segment_skip_size=0)
assert len(protocol.utterances) == 0, "utterances empty"
assert not protocol.has_text
# FIXME: More checks
def test_to_protocol_by_untangle():
filename = jj(RIKSPROT_PARLACLARIN_FAKE_FOLDER, "prot-1958-fake.xml")
protocol: parlaclarin.XmlUntangleProtocol = parlaclarin.XmlUntangleProtocol(filename)
assert protocol is not None
assert len(protocol.utterances) == 4
assert len(protocol) == 4
assert protocol.name == 'prot-1958-fake'
assert protocol.date == '1958'
assert protocol.has_text, 'has text'
| 27.033898 | 94 | 0.732915 | 195 | 1,595 | 5.8 | 0.317949 | 0.111406 | 0.075155 | 0.076923 | 0.420866 | 0.344828 | 0.344828 | 0.344828 | 0.344828 | 0.263484 | 0 | 0.029367 | 0.167398 | 1,595 | 58 | 95 | 27.5 | 0.822289 | 0.023197 | 0 | 0.324324 | 0 | 0 | 0.09582 | 0 | 0 | 0 | 0 | 0.017241 | 0.405405 | 1 | 0.081081 | false | 0 | 0.135135 | 0 | 0.216216 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
77bc9d18897f9e2e8b8b8586dfd4784401b0f690 | 196 | py | Python | cleanup_instances.py | MISP/dockerized_training_environment | de458b5ef62164da67e23985c768c01dc2ebcd58 | [
"MIT"
] | 7 | 2020-11-18T21:47:55.000Z | 2022-03-09T20:46:56.000Z | cleanup_instances.py | MISP/dokerized_training_environment | 835019c742247e302855e9e4d1192ee5d81616c8 | [
"MIT"
] | null | null | null | cleanup_instances.py | MISP/dokerized_training_environment | 835019c742247e302855e9e4d1192ee5d81616c8 | [
"MIT"
] | 2 | 2021-07-04T22:59:05.000Z | 2021-07-19T12:22:17.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
from misp_instances import MISPInstances
if __name__ == '__main__':
instances = MISPInstances()
instances.cleanup_all_blacklisted_event()
| 19.6 | 45 | 0.719388 | 22 | 196 | 5.863636 | 0.863636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012048 | 0.153061 | 196 | 9 | 46 | 21.777778 | 0.76506 | 0.219388 | 0 | 0 | 0 | 0 | 0.05298 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
77bd451236783e0b666c7ee928e48672d43fb99d | 980 | py | Python | venv/lib/python3.5/site-packages/coalib/bears/meta.py | prashant0598/CoffeeApp | 4fa006aebf06e12ed34766450ddcfa548ee63307 | [
"MIT"
] | null | null | null | venv/lib/python3.5/site-packages/coalib/bears/meta.py | prashant0598/CoffeeApp | 4fa006aebf06e12ed34766450ddcfa548ee63307 | [
"MIT"
] | null | null | null | venv/lib/python3.5/site-packages/coalib/bears/meta.py | prashant0598/CoffeeApp | 4fa006aebf06e12ed34766450ddcfa548ee63307 | [
"MIT"
] | null | null | null | from collections import defaultdict
from coalib.bearlib.aspects.collections import aspectlist
class bearclass(type):
"""
Metaclass for :class:`coalib.bears.Bear.Bear` and therefore all bear
classes.
Pushing bears into the future... ;)
"""
# by default a bear class has no aspects
aspects = defaultdict(lambda: aspectlist([]))
def __new__(mcs, clsname, bases, clsattrs, *varargs, aspects=None):
return type.__new__(mcs, clsname, bases, clsattrs, *varargs)
def __init__(cls, clsname, bases, clsattrs, *varargs, aspects=None):
"""
Initializes the ``.aspects`` dict on new bear classes from the mapping
given to the keyword-only `aspects` argument.
"""
type.__init__(cls, clsname, bases, clsattrs, *varargs)
if aspects is not None:
cls.aspects = defaultdict(
lambda: aspectlist([]),
((k, aspectlist(v)) for (k, v) in dict(aspects).items()))
| 32.666667 | 78 | 0.638776 | 114 | 980 | 5.350877 | 0.491228 | 0.078689 | 0.131148 | 0.177049 | 0.255738 | 0.255738 | 0 | 0 | 0 | 0 | 0 | 0 | 0.247959 | 980 | 29 | 79 | 33.793103 | 0.82768 | 0.276531 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0.083333 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
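The `bearclass` metaclass above works because Python 3 forwards extra keywords from a `class` statement into the metaclass's `__new__` and `__init__`. A minimal standalone sketch of that mechanism (the `Tagged` and `Widget` names are illustrative, not part of coalib):

```python
class Tagged(type):
    """Metaclass that accepts a keyword-only `tags` argument at class creation."""
    def __new__(mcs, clsname, bases, clsattrs, tags=None):
        # Extra class keywords must be swallowed here; type.__new__ rejects them
        return super().__new__(mcs, clsname, bases, clsattrs)

    def __init__(cls, clsname, bases, clsattrs, tags=None):
        super().__init__(clsname, bases, clsattrs)
        cls.tags = list(tags or [])


class Widget(metaclass=Tagged, tags=["ui", "draw"]):
    pass


print(Widget.tags)  # ['ui', 'draw']
```

Note that both `__new__` and `__init__` must declare the extra keyword, mirroring why `bearclass` repeats `aspects=None` in both signatures.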
77bf2a587348db96cf1ee2fcdb711f47d1f4b9d7 | 12,875 | py | Python | object_detector_app/utils/worker_utils.py | edrickwong/w3p | 5537d15310ed1f77f06c3ecff4a9e17dd1f81657 | [
"Unlicense"
] | null | null | null | object_detector_app/utils/worker_utils.py | edrickwong/w3p | 5537d15310ed1f77f06c3ecff4a9e17dd1f81657 | [
"Unlicense"
] | null | null | null | object_detector_app/utils/worker_utils.py | edrickwong/w3p | 5537d15310ed1f77f06c3ecff4a9e17dd1f81657 | [
"Unlicense"
] | null | null | null | import cv2
import multiprocessing
import socket
import tensorflow as tf
import time
from defaults import *
from multiprocessing import Queue, Pool, Process
from object_detector_utils import ObjectDetector
from utils.app_utils import WebcamVideoStream
from reference_objects_utils import ReferenceObjectsHelper
# logger object for more succinct output
logger = multiprocessing.get_logger()
class MessageWorker(Process):
'''
    TODO: add comments; it's 2 AM, I will add comments tomorrow
'''
def __init__(self, request_q, message_q, *args, **kwargs):
# pop out socket info or use global defined here
self.tcp_ip = kwargs.pop('tcp_ip', TCP_IP)
self.tcp_port = kwargs.pop('tcp_port', TCP_PORT)
self.buffer_size = kwargs.pop('buffer_size', BUFFER_SIZE)
# init parent with remaining kwargs
# we want to do this init first so we can fail on the parent's
# exception before having to throw our own exception after.
super(MessageWorker, self).__init__(*args, **kwargs)
self.request_q = request_q
self.message_q = message_q
# also setup socket information here
self._socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self._socket.bind((self.tcp_ip, self.tcp_port))
self._socket.listen(1)
self._socket.settimeout(5)
self._conn = None
# boolean to see if we should end the event loop
self.kill_process = False
self.daemon = True
def run(self):
# Wait for incoming connection before doing anything
self.wait_for_incoming_connection()
# Enter Event loop
logger.debug('Entering event loop for Message Worker')
while not self.kill_process:
request_data = self._conn.recv(self.buffer_size)
logger.debug('Request data: %s' %(request_data))
            print(request_data)
# find object that user is asking for
obj = None
# parse the incoming message
for word in request_data.split():
if word in ALLOWED_CLASSES:
# TODO: if the user asks for something that isn't allowed
# ie. obj is None after this loop, we should do something
# either ask user to repeat or something, this is currently
# terrible UX.
obj = word
break
if obj:
self.request_q.put(obj)
# print obj
            # TODO: The code below makes an assumption that the flop from request_q
            # to message_q will always work, so in case the request_q fails for
            # some reason, we have no protection here (because we are going to
            # be sending back an empty message)
            # Solutions:
            #    a) Implement a timeout (remember we are bounded by a 5s
            #       timeout upstream from Google Home)
            #    b) more complicated solution would be to use a multiprocessing lock,
            #       i.e. we spin until we hear from the above procs (similar
            #       to using signals in threads)
msg = self.message_q.get()
logger.debug(msg)
self._conn.send(msg)
# clean up socket on close
logger.debug('Closing socket connection')
self._conn.close()
def wait_for_incoming_connection(self):
while not self._conn:
try:
self._conn, _ = self._socket.accept()
logger.warning('Accepted connection from: %s' %(self._conn))
except socket.timeout:
logger.warning('Waiting on connection from Google Assistant')
time.sleep(1)
class InputFrameWorker(Process):
'''
TODO: Add comments
'''
def __init__(self, img_input_q, video_source,
*args, **kwargs):
super(InputFrameWorker, self).__init__(*args, **kwargs)
self.img_input_q = img_input_q
#self.test_stream()
self.video_source = video_source
# kill switch
self.kill_process = False
self.daemon = True
def run(self):
logger.debug('Entering event loop for input worker')
self._stream = cv2.VideoCapture(self.video_source)
        # OpenCV doesn't respect these width/height parameters
#self._stream.set(cv2.CAP_PROP_FRAME_WIDTH, self.width)
#self._stream.set(cv2.CAP_PROP_FRAME_HEIGHT, self.height)
while not self.kill_process:
ret, frame = self._stream.read()
if ret:
self.img_input_q.put(frame)
else:
logger.warning('Unable to grab input frame')
def test_stream(self):
captured, frame = self._stream.read()
if not captured:
logger.debug('Unable to capture stream')
class OutputImageStreamWorker(Process):
'''
TODO: Add comments
'''
def __init__(self, img_input_q, img_output_q,
*args, **kwargs):
super(OutputImageStreamWorker, self).__init__(*args, **kwargs)
self.input_img_q = img_input_q
self.output_img_q = img_output_q
        # NOTE: TensorFlow is not fork-safe, so the detector can't be
        # initialized here; it has to be created after the parent process
        # has forked into the child (see run())
self._object_detector = None
self._ref_obj_helper = None
# Bool to kill event loop
self.kill_process = False
self.daemon = True
def run(self):
# encapsulate the necessary helper objects
self._object_detector = ObjectDetector()
self._ref_obj_helper = None
# Enter Event loop for worker
while not self.kill_process:
# grab most recent frame img_input_q and convert to RGB
# as expected by model
frame = self.input_img_q.get()
# HACK:: Anshuman 03/08/2019
# We need the dimensions of the frames, since OpenCV can't actually
# set the width/height for the camera input across all cameras.
# So we are going to look at the input frame to determine the size
# needed, so we can pass in normalized coordinates but work
# with denormalized/real coordinates.
if not self._ref_obj_helper:
height, width, _ = frame.shape
self._ref_obj_helper = ReferenceObjectsHelper(width, height)
frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
logger.debug('got input frame')
# do inference for frame capture
out_image = self._object_detector.detect_objects_visualize(frame_rgb)
self._ref_obj_helper.visualize_reference_objects(out_image)
self.output_img_q.put(out_image)
        # should be done with __enter__/__exit__ context-manager methods, but this works for now
self._object_detector.cleanup()
class ObjectDetectoResponseWorker(Process):
'''
TODO: Add comments
'''
def __init__(self, img_input_q, request_q, message_q,
*args, **kwargs):
super(ObjectDetectoResponseWorker, self).__init__(*args, **kwargs)
self.input_img_q = img_input_q
self.request_q = request_q
self.message_q = message_q
        # NOTE: TensorFlow is not fork-safe, so the detector can't be
        # initialized here; it has to be created after the parent process
        # has forked into the child (see run())
self._object_detector = None
self._ref_obj_helper = None
# Bool to kill event loop
self.kill_process = False
self.daemon = True
def run(self):
# Load object detector after parent process forks
self._object_detector = ObjectDetector()
# Enter Event loop for worker
logger.debug('Entering event loop for object detector response worker')
while not self.kill_process:
# block until the MessageWorker puts something in the
# request_q
obj_to_find = self.request_q.get()
# grab most recent frame img_input_q and convert to RGB
# as expected by model
frame = self.input_img_q.get()
# HACK:: Anshuman 03/08/2019
# We need the dimensions of the frames, since OpenCV can't actually
# set the width/height for the camera input across all cameras.
# So we are going to look at the input frame to determine the size
# needed, so we can pass in normalized coordinates but work
# with denormalized/real coordinates.
if not self._ref_obj_helper:
height, width, _ = frame.shape
self._ref_obj_helper = ReferenceObjectsHelper(width, height)
frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
# do inference for frame capture
detected_objects, confidence_scores = self._object_detector.detect_objects(frame_rgb)
# construct message that we should pass back to user based on
# detected objects
msg = self.build_msg(obj_to_find, detected_objects, confidence_scores)
# add msg to message_q
self.message_q.put(msg)
self._object_detector.cleanup()
def closest_left_edge_distance(self,obj_xmin, obj_xmax,ref_xmin,ref_xmax):
        # check if object is in front of reference
        # FIXME: the edge-distance logic below may not be correct
if ref_xmin <= obj_xmin <= ref_xmax:
print('left edge numbers: 0')
return 0
else:
#if reference is left of object, should return negative distance
print('left edge numbers')
print('obj_xmin :' + str(obj_xmin))
print('obj_xmax: ' + str(obj_xmax))
print('ref_xmax: ' + str(ref_xmax))
print(ref_xmax-obj_xmin)
return obj_xmin-ref_xmax
def closest_right_edge_distance(self,obj_xmin, obj_xmax,ref_xmin,ref_xmax):
        # check if object is in front of reference
if ref_xmin <= obj_xmax <= ref_xmax:
print('right edge numbers: 0')
return 0
else:
#if reference is right of object, should return negative distance
print('right edge numbers')
print('obj_xmin: ' + str(obj_xmin))
print('obj_xmax: ' + str(obj_xmax))
            print('ref_xmin: ' + str(ref_xmin))
print(ref_xmin-obj_xmax)
return obj_xmax-ref_xmin
def calculate_nearest_reference(self, obj_to_find, detected_objects):
'''
returns
distance to closest reference object edge
reference is left or right (0:left, 1:right)
'''
self.ref_list = self._ref_obj_helper.reference_objects
obj_x = [detected_objects.get(obj_to_find)[1],
detected_objects.get(obj_to_find)[3]]
obj_y = [detected_objects.get(obj_to_find)[0],
detected_objects.get(obj_to_find)[2]]
left_distances = [self.closest_left_edge_distance(obj_x[0],obj_x[1], x.norm_xmin, x.norm_xmax)
for x in self.ref_list]
right_distances = [self.closest_right_edge_distance(obj_x[0],obj_x[1], x.norm_xmin, x.norm_xmax)
for x in self.ref_list]
if 0 in left_distances:
return self.ref_list[left_distances.index(0)], 0, 0
elif 0 in right_distances:
return self.ref_list[right_distances.index(0)], 1, 0
else:
left_min = min(left_distances)
right_min = min(right_distances)
if left_min < right_min:
return self.ref_list[left_distances.index(left_min)],0, int(abs(round(left_min * 2)))
else:
return self.ref_list[right_distances.index(right_min)],1, int(abs(round(right_min * 2)))
def build_msg(self, obj_to_find, detected_objects, confidence_scores):
msg = ''
'''
self.reference_objects has a list of
ReferenceObject items that contain hard coded
objects
'''
if obj_to_find not in detected_objects:
msg = 'Unable to locate %s in current view' %(obj_to_find)
else:
uncertainty_threshold = 0.6
uncertain_start = "I'm not certain but I think the {0}".format(obj_to_find)
certain_start = "I see the {0} and it ".format(obj_to_find)
reference, location, distance = self.calculate_nearest_reference(obj_to_find, detected_objects)
if distance == 0:
msg = " is in front of the " + reference.obj_type
elif location == 0:
msg = " is " + str(distance) + " step left of " + reference.obj_type
else:
msg = " is " + str(distance) + " step right of " + reference.obj_type
msg = (uncertain_start + msg) if confidence_scores[obj_to_find] < uncertainty_threshold else (certain_start + msg)
return msg
| 38.204748 | 126 | 0.623068 | 1,669 | 12,875 | 4.587777 | 0.217496 | 0.014627 | 0.016456 | 0.018806 | 0.44874 | 0.385268 | 0.336685 | 0.28967 | 0.28967 | 0.280266 | 0 | 0.007007 | 0.30167 | 12,875 | 336 | 127 | 38.318452 | 0.844622 | 0.24334 | 0 | 0.322404 | 0 | 0 | 0.066913 | 0 | 0 | 0 | 0 | 0.014881 | 0 | 0 | null | null | 0 | 0.054645 | null | null | 0.071038 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
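The pattern shared by all of these workers — a daemonized `Process` subclass that reads requests from one queue and writes replies to another — reduces to a small stdlib sketch (the `EchoWorker` name and sentinel-based shutdown are illustrative, not from the code above):

```python
from multiprocessing import Process, Queue


class EchoWorker(Process):
    """Reads items from request_q and puts transformed replies on message_q."""
    def __init__(self, request_q, message_q):
        super().__init__(daemon=True)
        self.request_q = request_q
        self.message_q = message_q

    def run(self):
        # Event loop: block on the request queue, reply on the message queue
        while True:
            item = self.request_q.get()
            if item is None:  # sentinel: shut the event loop down cleanly
                break
            self.message_q.put(item.upper())


if __name__ == "__main__":
    requests, messages = Queue(), Queue()
    worker = EchoWorker(requests, messages)
    worker.start()
    requests.put("laptop")
    print(messages.get(timeout=10))  # LAPTOP
    requests.put(None)
    worker.join(timeout=10)
```

A sentinel value is a simple alternative to the `kill_process` boolean used above, which the parent process cannot actually flip after the fork.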
77c2220b2033ba3b299257e30d6fd82619ee794b | 718 | py | Python | babysploit/wpseku/lib/printer.py | kevinsegal/BabySploit | 66bafc25e04e7512e8b87b161bd3b7201bb57b63 | [
"MIT"
] | null | null | null | babysploit/wpseku/lib/printer.py | kevinsegal/BabySploit | 66bafc25e04e7512e8b87b161bd3b7201bb57b63 | [
"MIT"
] | null | null | null | babysploit/wpseku/lib/printer.py | kevinsegal/BabySploit | 66bafc25e04e7512e8b87b161bd3b7201bb57b63 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# WPSeku - Wordpress Security Scanner
# by Momo Outaadi (m4ll0k)
from lib.colors import *
def decode(string):
return string.encode('utf-8')
def plus(string):
print("{}[ + ]{} {}{}{}".format(
GREEN%1,RESET,GREEN%0,string,RESET))
def test(string):
print("{}[ * ]{} {}{}{}".format(
BLUE%1,RESET,WHITE%0,string,RESET))
def warn(string):
print("{}[ ! ]{} {}{}{}".format(
RED%1,RESET,RED%0,string,RESET))
def info(string):
print("{}[ i ]{} {}{}{}".format(
YELLOW%1,RESET,YELLOW%0,string,RESET))
def normal(string):
print("{}{}{}".format(WHITE%1,string,RESET))
def more(string):
print(" {}|{} {}{}{}".format(
WHITE%0,RESET,WHITE%1,string,RESET))
| 20.514286 | 45 | 0.601671 | 97 | 718 | 4.453608 | 0.42268 | 0.152778 | 0.196759 | 0.138889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025641 | 0.130919 | 718 | 34 | 46 | 21.117647 | 0.666667 | 0.14624 | 0 | 0 | 0 | 0 | 0.149671 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.35 | false | 0 | 0.05 | 0.05 | 0.45 | 0.3 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
77ccdee8931416244bf1a7a303a5dbd891ef4733 | 2,908 | py | Python | db/models.py | hnzlmnn/dora | 1c7375491af03d3ea8557e68b18afc6f916f65d7 | [
"BSD-3-Clause"
] | 6 | 2020-03-10T14:57:42.000Z | 2021-05-19T19:14:54.000Z | db/models.py | hnzlmnn/dora | 1c7375491af03d3ea8557e68b18afc6f916f65d7 | [
"BSD-3-Clause"
] | null | null | null | db/models.py | hnzlmnn/dora | 1c7375491af03d3ea8557e68b18afc6f916f65d7 | [
"BSD-3-Clause"
] | 3 | 2020-03-08T16:02:11.000Z | 2020-07-24T10:56:01.000Z | import base64
import binascii
from typing import Union
from peewee import DatabaseProxy, Model, CharField, BooleanField, DateTimeField, IntegerField, ForeignKeyField, \
CompositeKey, DoesNotExist
from db.fields import BytesField
database_proxy = DatabaseProxy()
class BaseModel(Model):
class Meta:
database = database_proxy
class Entry(BaseModel):
id = IntegerField(primary_key=True)
context = CharField(32)
source = CharField(40)
v6 = BooleanField()
received_at = DateTimeField()
line = IntegerField()
data = BytesField(64)
def summary(self):
# if self.line == 0:
# return f"{self.received_at}: received metadata for context {self.context}: {self.data.decode('ascii')}"
return f"{self.received_at}: received line {self.line} from {str(self.source)} with content '{self.binary()}' for '{self.context}'"
def _decoded(self, encoding=None) -> Union[str, None]:
try:
data = base64.b64decode(self.data, validate=True)
if encoding:
return data.decode(encoding)
return data
except (binascii.Error, UnicodeDecodeError):
return None
def ascii(self) -> Union[str, None]:
return self._decoded('ascii')
def binary(self) -> Union[str, None]:
return self._decoded()
def to_json(self):
return dict(
id=self.id,
source=self.source,
v6=self.v6,
received_at=self.received_at.timestamp(),
context=self.context,
line=self.line,
data=self.data.decode("ascii"),
)
class Line(BaseModel):
context = CharField(32)
line = IntegerField()
entry = ForeignKeyField(Entry)
selected_at = DateTimeField()
class Meta:
primary_key = CompositeKey('context', 'line')
def summary(self):
return f"{self.selected_at}: {self.context}:{self.line} -> {self.entry}"
def to_json(self):
return dict(
context=self.context,
line=self.line,
entry=self.entry.to_json(),
selected_at=self.selected_at.timestamp(),
)
class Meta(BaseModel):
context = CharField(32, primary_key=True)
source = CharField(40)
v6 = BooleanField()
lines = IntegerField()
updated_at = DateTimeField()
def summary(self):
return f"{self.updated_at}: received metadata for context {self.context}: {self.lines}"
def to_json(self):
return dict(
context=self.context,
source=self.source,
v6=self.v6,
lines=self.lines,
updated_at=self.updated_at.timestamp(),
)
def get_missing(self):
existing = set(map(lambda e: e[0], Line.select(Line.line).where(Line.context == self.context).tuples()))
return list(set(range(1, self.lines + 1)) - existing) | 28.792079 | 139 | 0.615543 | 330 | 2,908 | 5.345455 | 0.251515 | 0.056122 | 0.061224 | 0.022109 | 0.287415 | 0.252268 | 0.132653 | 0.095238 | 0.046485 | 0 | 0 | 0.013121 | 0.266162 | 2,908 | 101 | 140 | 28.792079 | 0.813496 | 0.043329 | 0 | 0.363636 | 0 | 0.012987 | 0.101079 | 0.009353 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12987 | false | 0 | 0.064935 | 0.103896 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
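The strict-decode pattern used by `Entry._decoded` — validate the base64 first, optionally text-decode, and return `None` on any failure — can be exercised on its own (standalone sketch; `decode_or_none` is a hypothetical helper, not part of this module):

```python
import base64
import binascii


def decode_or_none(data, encoding=None):
    """Strict base64 decode; optionally decode to text. Returns None on any failure."""
    try:
        raw = base64.b64decode(data, validate=True)
        return raw.decode(encoding) if encoding else raw
    except (binascii.Error, UnicodeDecodeError):
        return None


print(decode_or_none(base64.b64encode(b"hello"), "ascii"))  # hello
print(decode_or_none(b"not base64!!"))  # None
```

Without `validate=True`, `b64decode` silently discards non-alphabet characters, so the flag is what makes the `binascii.Error` branch reachable.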
77e25ecf9a07d3984e7795af3734f3f79900a595 | 440 | py | Python | objmodel/step00_get_set_field/objmodel.py | suensky/500lines-rewrite | 3b8ec12e4abc777e504dfd5ad77cabce3851a23c | [
"CC-BY-3.0"
] | 80 | 2020-06-04T01:34:30.000Z | 2022-03-21T08:10:01.000Z | objmodel/step00_get_set_field/objmodel.py | suensky/500lines-rewrite | 3b8ec12e4abc777e504dfd5ad77cabce3851a23c | [
"CC-BY-3.0"
] | 5 | 2021-02-09T01:02:09.000Z | 2021-03-29T02:11:06.000Z | objmodel/step00_get_set_field/objmodel.py | suensky/500lines-rewrite | 3b8ec12e4abc777e504dfd5ad77cabce3851a23c | [
"CC-BY-3.0"
] | 11 | 2020-11-06T01:11:11.000Z | 2022-03-06T07:19:58.000Z | class Class:
def __init__(self, name: str):
self.name = name
class Instance:
def __init__(self, cls: Class):
self.cls = cls
self._fields = {}
def get_attr(self, name: str):
if name not in self._fields:
raise AttributeError(f"'{self.cls.name}' has no attribute {name}")
return self._fields[name]
def set_attr(self, name: str, value):
self._fields[name] = value
| 24.444444 | 78 | 0.595455 | 59 | 440 | 4.20339 | 0.389831 | 0.129032 | 0.133065 | 0.120968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.286364 | 440 | 17 | 79 | 25.882353 | 0.789809 | 0 | 0 | 0 | 0 | 0 | 0.093182 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.307692 | false | 0 | 0 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
77f299aaccb2e293e2955e642aaac7da3760011a | 104 | py | Python | 08.Iterators_and_generators/Lab/squares.py | nmoskova/Python-OOP | 07327bcb93eee3a7db5d7c0bbdd1b54eb9e8b864 | [
"MIT"
] | null | null | null | 08.Iterators_and_generators/Lab/squares.py | nmoskova/Python-OOP | 07327bcb93eee3a7db5d7c0bbdd1b54eb9e8b864 | [
"MIT"
] | null | null | null | 08.Iterators_and_generators/Lab/squares.py | nmoskova/Python-OOP | 07327bcb93eee3a7db5d7c0bbdd1b54eb9e8b864 | [
"MIT"
] | null | null | null | def squares(n):
i = 1
while i <= n:
yield i * i
i += 1
print(list(squares(5))) | 13 | 23 | 0.442308 | 17 | 104 | 2.705882 | 0.588235 | 0.086957 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.048387 | 0.403846 | 104 | 8 | 23 | 13 | 0.693548 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.166667 | 0.166667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
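Because `squares` is a generator, values are produced lazily; a small sketch of consuming it incrementally (standalone, with the function repeated for self-containment):

```python
def squares(n):
    i = 1
    while i <= n:
        yield i * i
        i += 1


gen = squares(3)
print(next(gen))   # 1
print(next(gen))   # 4
print(list(gen))   # [9] -- the generator resumes where it left off
```

Each `next` call runs the body only up to the following `yield`, which is why draining the rest with `list` returns just the remaining values.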
77f3fb005629d43f754ea92f3d980821c34780fe | 55,236 | py | Python | kravatte/kravatte.py | inmcm/kravatte | 21a6da899fe2707ff6128a89f83e3aeecf2ea68a | [
"MIT"
] | 13 | 2018-04-16T19:56:50.000Z | 2022-02-07T16:21:43.000Z | kravatte/kravatte.py | inmcm/kravatte | 21a6da899fe2707ff6128a89f83e3aeecf2ea68a | [
"MIT"
] | 2 | 2018-04-27T12:51:19.000Z | 2018-11-25T16:55:49.000Z | kravatte/kravatte.py | inmcm/kravatte | 21a6da899fe2707ff6128a89f83e3aeecf2ea68a | [
"MIT"
] | 2 | 2018-08-20T06:40:08.000Z | 2019-11-12T23:16:47.000Z | """
Kravatte Achouffe Cipher Suite: Encryption, Decryption, and Authentication Tools based on the Farfalle modes
Copyright 2018 Michael Calvin McCoy
see LICENSE file
"""
from multiprocessing import Pool
from math import floor, ceil, log2
from typing import Tuple
from os import cpu_count
from ctypes import memset
import numpy as np
KravatteTagOutput = Tuple[bytes, bytes]
KravatteValidatedOutput = Tuple[bytes, bool]
class Kravatte(object):
"""Implementation of the Farfalle Pseudo-Random Function (PRF) construct utilizing the
Keccak-1600 permutation.
"""
KECCAK_BYTES = 200
'''Number of Bytes in Keccak-1600 state'''
KECCAK_LANES = 25
'''Number of 8-Byte lanes in Keccak-1600 state'''
KECCAK_PLANES_SLICES = 5
''' Size of x/y dimensions of Keccak lane array '''
THETA_REORDER = ((4, 0, 1, 2, 3), (1, 2, 3, 4, 0))
IOTA_CONSTANTS = np.array([0x000000000000800A, 0x800000008000000A, 0x8000000080008081,
0x8000000000008080, 0x0000000080000001, 0x8000000080008008],
dtype=np.uint64)
'''Iota Step Round Constants For Keccak-p(1600, 4) and Keccak-p(1600, 6)'''
RHO_SHIFTS = np.array([[0, 36, 3, 41, 18],
[1, 44, 10, 45, 2],
[62, 6, 43, 15, 61],
[28, 55, 25, 21, 56],
[27, 20, 39, 8, 14]], dtype=np.uint64)
'''Lane Shifts for Rho Step'''
CHI_REORDER = ((1, 2, 3, 4, 0), (2, 3, 4, 0, 1))
'''Lane Re-order Mapping for Chi Step'''
PI_ROW_REORDER = np.array([[0, 3, 1, 4, 2],
[1, 4, 2, 0, 3],
[2, 0, 3, 1, 4],
[3, 1, 4, 2, 0],
[4, 2, 0, 3, 1]])
'''Row Re-order Mapping for Pi Step'''
PI_COLUMN_REORDER = np.array([[0, 0, 0, 0, 0],
[1, 1, 1, 1, 1],
[2, 2, 2, 2, 2],
[3, 3, 3, 3, 3],
[4, 4, 4, 4, 4]])
'''Column Re-order Mapping for Pi Step'''
COMPRESS_ROW_REORDER = np.array([[0, 0, 0, 0, 1],
[1, 1, 1, 1, 2],
[2, 2, 2, 2, 3],
[3, 3, 3, 3, 4],
[4, 4, 4, 4, 0]])
'''Row Re-order Mapping for Compress Step'''
COMPRESS_COLUMN_REORDER = np.array([[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4]])
'''Column Re-order Mapping for Compress Step'''
EXPAND_ROW_REORDER = np.array([[0, 0, 0, 1, 1],
[1, 1, 1, 2, 2],
[2, 2, 2, 3, 3],
[3, 3, 3, 4, 4],
[4, 4, 4, 0, 0]])
'''Row Re-order Mapping for Expand Step'''
EXPAND_COLUMN_REORDER = np.array([[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 4, 4]])
'''Column Re-order Mapping for Expand Step'''
def __init__(self, key: bytes=b'', workers: int=None, mp_input: bool=True, mp_output: bool=True):
"""
Initialize Kravatte with user key
Inputs:
key (bytes): encryption/authentication key
workers (int): parallel processes to use in compression/expansion operations
mp_input (bool): Enable multi-processing for calculations on input data
mp_output (bool): Enable multi-processing for calculations on output data
"""
self.update_key(key)
self.reset_state()
# Enable Standard or Optimized Multi-process codepaths
if workers is not None:
self.collect_message = self._collect_message_mp if mp_input else self._collect_message_sp
self.generate_digest = self._generate_digest_mp if mp_output else self._generate_digest_sp
self.workers = cpu_count() if workers == 0 else workers
else:
self.collect_message = self._collect_message_sp
self.generate_digest = self._generate_digest_sp
self.workers = None
def update_key(self, key: bytes) -> None:
"""
Pad and compute new Kravatte base key from bytes source.
Inputs:
key (bytes): user provided bytes to be padded (if necessary) and computed into Kravatte base key
"""
key_pad = self._pad_10_append(key, self.KECCAK_BYTES)
key_array = np.frombuffer(key_pad, dtype=np.uint64, count=self.KECCAK_LANES,
offset=0).reshape([self.KECCAK_PLANES_SLICES,
self.KECCAK_PLANES_SLICES], order='F')
self.kra_key = self._keccak(key_array)
def reset_state(self) -> None:
"""
        Clear the existing Farfalle/Kravatte state and prepare for new input message collection.
Elements reset include:
- Message block collector
- Rolling key
- Currently stored output digest
- Digest Active and New Collector Flags
Inputs:
None
"""
self.roll_key = np.copy(self.kra_key)
self.collector = np.zeros([5, 5], dtype=np.uint64)
self.digest = bytearray(b'')
self.digest_active = False
self.new_collector = True
def _generate_absorb_queue(self, absorb_steps: int, kra_msg: bytes):
"""
Generator for Keccak-sized blocks of input message for Farfalle compression
Inputs:
absorb_steps (int): Number of blocks to generate for absorption
kra_msg (bytes): padded input message ready for slicing into input blocks
"""
for msg_block in range(absorb_steps):
yield (np.frombuffer(kra_msg, dtype=np.uint64, count=25, offset=msg_block * self.KECCAK_BYTES).reshape([5, 5], order='F') ^ self.roll_key)
self.roll_key = self._kravatte_roll_compress(self.roll_key)
def _collect_message_sp(self, message: bytes, append_bits: int=0, append_bit_count: int=0) -> None:
"""
Pad and Process Blocks of Message into Kravatte collector state
Inputs:
message (bytes): arbitrary number of bytes to be padded into Keccak blocks and absorbed into the collector
append_bits (int): bits to append to the message before padding. Required for more advanced Kravatte modes.
append_bit_count (int): number of bits to append
"""
if self.digest_active:
self.reset_state()
if self.new_collector:
self.new_collector = False
else:
self.roll_key = self._kravatte_roll_compress(self.roll_key)
# Pad Message
msg_len = len(message)
kra_msg = self._pad_10_append(message, msg_len + (self.KECCAK_BYTES - (msg_len % self.KECCAK_BYTES)), append_bits, append_bit_count)
absorb_steps = len(kra_msg) // self.KECCAK_BYTES
# Absorb into Collector
for msg_block in range(absorb_steps):
m = np.frombuffer(kra_msg, dtype=np.uint64, count=25, offset=msg_block * self.KECCAK_BYTES).reshape([5, 5], order='F')
m_k = m ^ self.roll_key
self.roll_key = self._kravatte_roll_compress(self.roll_key)
self.collector = self.collector ^ self._keccak(m_k)
def _collect_message_mp(self, message: bytes, append_bits: int=0, append_bit_count: int=0) -> None:
"""
Pad and Process Blocks of Message into Kravatte collector state - Multi-process Aware Variant
Inputs:
message (bytes): arbitrary number of bytes to be padded into Keccak blocks and absorbed into the collector
append_bits (int): bits to append to the message before padding. Required for more advanced Kravatte modes.
append_bit_count (int): number of bits to append
"""
if self.digest_active:
self.reset_state()
if self.new_collector:
self.new_collector = False
else:
self.roll_key = self._kravatte_roll_compress(self.roll_key)
# Pad Message
msg_len = len(message)
kra_msg = self._pad_10_append(message, msg_len + (self.KECCAK_BYTES - (msg_len % self.KECCAK_BYTES)), append_bits, append_bit_count)
absorb_steps = len(kra_msg) // self.KECCAK_BYTES
workload = 1 if (absorb_steps // self.workers) == 0 else (absorb_steps // self.workers)
with Pool(processes=self.workers) as kravatte_pool:
for output_element in kravatte_pool.imap_unordered(self._keccak, self._generate_absorb_queue(absorb_steps, kra_msg), chunksize=workload):
self.collector ^= output_element
def _generate_digest_sp(self, output_size: int, short_kravatte: bool=False) -> None:
"""
Squeeze an arbitrary number of bytes from collector state
Inputs:
output_size (int): Number of bytes to generate and store in Kravatte digest parameter
            short_kravatte (bool): Enable/disable short Kravatte, required for other Kravatte modes
"""
if not self.digest_active:
self.collector = self.collector if short_kravatte else self._keccak(self.collector)
self.roll_key = self._kravatte_roll_compress(self.roll_key)
self.digest_active = True
self.digest = bytearray(b'')
full_output_size = output_size + (200 - (output_size % 200)) if output_size % 200 else output_size
generate_steps = full_output_size // 200
for _ in range(generate_steps):
collector_squeeze = self._keccak(self.collector)
self.collector = self._kravatte_roll_expand(self.collector)
self.digest.extend((collector_squeeze ^ self.roll_key).tobytes('F'))
self.digest = self.digest[:output_size]
def _generate_squeeze_queue(self, generate_steps: int):
"""
Generator for Keccak-sized blocks of expanded collector state for output squeezing
Inputs:
            generate_steps (int): Number of collector-state blocks to generate for squeezing
"""
for _ in range(generate_steps):
yield self.collector
self.collector = self._kravatte_roll_expand(self.collector)
def _generate_digest_mp(self, output_size: int, short_kravatte: bool=False) -> None:
"""
Squeeze an arbitrary number of bytes from collector state - Multi-process Aware Variant
Inputs:
output_size (int): Number of bytes to generate and store in Kravatte digest parameter
short_kravatte (bool): Enable/disable the short Kravatte variant required by other Kravatte modes
"""
if not self.digest_active:
self.collector = self.collector if short_kravatte else self._keccak(self.collector)
self.roll_key = self._kravatte_roll_compress(self.roll_key)
self.digest_active = True
self.digest = bytearray(b'')
full_output_size = output_size + (self.KECCAK_BYTES - (output_size % self.KECCAK_BYTES)) if output_size % self.KECCAK_BYTES else output_size
generate_steps = full_output_size // self.KECCAK_BYTES
workload = max(1, generate_steps // self.workers)
with Pool(processes=self.workers) as kravatte_pool:
for digest_block in kravatte_pool.imap(self._keccak_xor_key, self._generate_squeeze_queue(generate_steps), chunksize=workload):
self.digest.extend(digest_block.tobytes('F'))
self.digest = self.digest[:output_size]
def _keccak(self, input_array):
"""
Implementation of the 6-round Keccak-p[1600] permutation (reduced-round variant of the Keccak-f[1600] permutation defined in FIPS 202)
Inputs:
input_array (numpy array): Keccak compatible state array: 200-byte as 5x5 64-bit lanes
Return:
numpy array: Keccak compatible state array: 200-byte as 5x5 64-bit lanes
"""
state = np.copy(input_array)
for round_num in range(6):
# theta_step:
# Exclusive-or each slice-lane by state based permutation value
array_shift = state << 1 | state >> 63
state ^= np.bitwise_xor.reduce(state[self.THETA_REORDER[0], ], 1, keepdims=True) ^ np.bitwise_xor.reduce(array_shift[self.THETA_REORDER[1], ], 1, keepdims=True)
# rho_step:
# Left Rotate each lane by pre-calculated value
state = state << self.RHO_SHIFTS | state >> np.uint64(64 - self.RHO_SHIFTS)
# pi_step:
# Shuffle lanes to pre-calculated positions
state = state[self.PI_ROW_REORDER, self.PI_COLUMN_REORDER]
# chi_step:
# Exclusive-or each individual lane based on and/invert permutation
state ^= ~state[self.CHI_REORDER[0], ] & state[self.CHI_REORDER[1], ]
# iota_step:
# Exclusive-or first lane of state with round constant
state[0, 0] ^= self.IOTA_CONSTANTS[round_num]
return state
def _keccak_xor_key(self, input_array):
"""
Implementation of the 6-round Keccak-p[1600] permutation plus an XOR with the current rolling key state
Inputs:
input_array (numpy array): Keccak compatible state array: 200-byte as 5x5 64-bit lanes
Return:
numpy array: Keccak compatible state array: 200-byte as 5x5 64-bit lanes
"""
return self._keccak(input_array) ^ self.roll_key
def scrub(self):
"""
Explicitly zero out both the key and collector array states. Use prior to reinitialization of
key or when finished with object to help avoid leaving secret/interim data in memory.
WARNING: Does not guarantee other copies of these arrays are not present elsewhere in memory
Not applicable in multi-process mode.
Inputs:
None
Return:
None
"""
# Clear collector array
collector_location = self.collector.ctypes.data
memset(collector_location, 0x00, self.KECCAK_BYTES)
# Clear Kravatte base key array
key_location = self.kra_key.ctypes.data
memset(key_location, 0x00, self.KECCAK_BYTES)
# Clear Kravatte rolling key array
key_location = self.roll_key.ctypes.data
memset(key_location, 0x00, self.KECCAK_BYTES)
def _kravatte_roll_compress(self, input_array):
"""
Kravatte defined roll function for compression side of Farfalle PRF
Inputs:
input_array (numpy array): Keccak compatible state array: 200-byte as 5x5 64-bit lanes
Return:
numpy array: Keccak compatible state array: 200-byte as 5x5 64-bit lanes
"""
state = input_array[self.COMPRESS_ROW_REORDER, self.COMPRESS_COLUMN_REORDER]
state[4, 4] = ((state[4, 4] << np.uint64(7)) | (state[4, 4] >> np.uint64(57))) ^ \
(state[0, 4]) ^ \
(state[0, 4] >> np.uint64(3))
return state
def _kravatte_roll_expand(self, input_array):
"""
Kravatte defined roll function for expansion side of Farfalle PRF
Inputs:
input_array (numpy array): Keccak compatible state array: 200-byte as 5x5 64-bit lanes
Return:
numpy array: Keccak compatible state array: 200-byte as 5x5 64-bit lanes
"""
state = input_array[self.EXPAND_ROW_REORDER, self.EXPAND_COLUMN_REORDER]
state[4, 4] = ((input_array[0, 3] << np.uint64(7)) | (input_array[0, 3] >> np.uint64(57))) ^ \
((input_array[1, 3] << np.uint64(18)) | (input_array[1, 3] >> np.uint64(46))) ^ \
((input_array[1, 3] >> np.uint64(1)) & input_array[2, 3])
return state
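Both roll functions build their lane rotations from paired shifts; with NumPy uint64 lanes the 64-bit wrap-around comes for free, while plain Python integers need an explicit mask. A small helper (hypothetical name) showing the same rotation idiom:

```python
MASK64 = (1 << 64) - 1

def rotl64(lane: int, n: int) -> int:
    """Rotate a 64-bit lane left by n bits: (x << n) | (x >> (64 - n)), masked."""
    n %= 64
    return ((lane << n) | (lane >> (64 - n))) & MASK64
```

For example, the compress roll's `<< 7 | >> 57` pair is exactly `rotl64(lane, 7)`.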
@staticmethod
def _pad_10_append(input_bytes: bytes, desired_length: int, append_bits: int=0, append_bit_count: int=0) -> bytes:
"""
Farfalle defined padding function. Limited to byte divisible inputs only
Inputs:
input_bytes (bytes): Collection of bytes
desired_length (int): Total length in bytes to pad the input out to
append_bits (int): one or more bits to be inserted before the padding starts. Allows
"appending" bits as required by several Kravatte modes
append_bit_count (int): number of bits to append
Return:
bytes: input bytes with padding applied
"""
start_len = len(input_bytes)
if start_len == desired_length:
return input_bytes
head_pad_byte = bytes([(0b01 << append_bit_count) | (((2**append_bit_count) - 1) & append_bits)])
pad_len = desired_length - (start_len % desired_length)
padded_bytes = input_bytes + head_pad_byte + (b'\x00' * (pad_len - 1))
return padded_bytes
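Viewed standalone, the padding rule packs any appended bits into the low bits of the first pad byte, sets the single '1' pad bit just above them, and zero-fills the remainder. A self-contained sketch of the same rule (illustrative helper name):

```python
def pad_10_append(data: bytes, desired_length: int,
                  append_bits: int = 0, append_bit_count: int = 0) -> bytes:
    """Farfalle 10* padding with optional pre-pad appended bits (byte-aligned input)."""
    if len(data) == desired_length:
        return data
    # appended bits occupy the low bits; the single '1' pad bit sits just above them
    head = (0b1 << append_bit_count) | (((1 << append_bit_count) - 1) & append_bits)
    return data + bytes([head]) + b'\x00' * (desired_length - len(data) - 1)
```

With no appended bits the head byte is 0x01; appending a single 1-bit yields 0x03 (appended bit in bit 0, pad bit in bit 1).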
@staticmethod
def compare_bytes(a: bytes, b: bytes) -> bool:
"""
Constant-time byte comparison function
Inputs:
a (bytes): first set of bytes
b (bytes): second set of bytes
Return:
bool: True if the byte strings are equal
"""
if len(a) != len(b):
return False
compare = True
for (element_a, element_b) in zip(a, b):
compare &= (element_a == element_b)
return compare
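For comparison, a common accumulator idiom folds all byte differences into one integer so the loop does no data-dependent branching, and the standard library provides a vetted equivalent; a minimal sketch (hypothetical helper name):

```python
def compare_bytes_ct(a: bytes, b: bytes) -> bool:
    """Fold every byte difference into one accumulator; no early exit on mismatch."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

For production use, `hmac.compare_digest(a, b)` performs the same check in C.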
def mac(key: bytes, message: bytes, output_size: int, workers: int=None, mp_input: bool=True,
mp_output: bool=True) -> bytearray:
"""
Kravatte Message Authentication Code Generation of given length from a message
based on a user provided key
Args:
key (bytes): User authentication key (0 - 200 bytes)
message (bytes): User message
output_size (int): Size of authenticated digest in bytes
workers (int): parallel processes to use in compression/expansion operations
mp_input (bool): Enable multi-processing for calculations on input data
mp_output (bool): Enable multi-processing for calculations on output data
Returns:
bytearray: message authentication bytes of length output_size
"""
kravatte_mac_gen = Kravatte(key, workers=workers, mp_input=mp_input, mp_output=mp_output)
kravatte_mac_gen.collect_message(message)
kravatte_mac_gen.generate_digest(output_size)
kravatte_mac_gen.scrub()
return kravatte_mac_gen.digest
def siv_wrap(key: bytes, message: bytes, metadata: bytes, tag_size: int=32, workers: int=None,
mp_input: bool=True, mp_output: bool=True) -> KravatteTagOutput:
"""
Authenticated Encryption with Associated Data (AEAD) of a provided plaintext using a key and
metadata using the Synthetic Initialization Vector method described in the Farfalle/Kravatte
spec. Generates ciphertext (of equivalent length to the plaintext) and verification tag. Inverse
of siv_unwrap function.
Args:
key (bytes): Encryption key; 0-200 bytes in length
message (bytes): Plaintext message for encryption
metadata (bytes): Nonce/Seed value for authenticated encryption
tag_size (int, optional): The tag size in bytes. Defaults to 32 bytes as defined in the
Kravatte spec
workers (int): parallel processes to use in compression/expansion operations
mp_input (bool): Enable multi-processing for calculations on input data
mp_output (bool): Enable multi-processing for calculations on output data
Returns:
tuple (bytes, bytes): Bytes of ciphertext and tag
"""
# Initialize Kravatte
kravatte_siv_wrap = Kravatte(key, workers=workers, mp_input=mp_input, mp_output=mp_output)
# Generate Tag From Metadata and Plaintext
kravatte_siv_wrap.collect_message(metadata)
kravatte_siv_wrap.collect_message(message)
kravatte_siv_wrap.generate_digest(tag_size)
siv_tag = kravatte_siv_wrap.digest
# Generate Key Stream
kravatte_siv_wrap.collect_message(metadata)
kravatte_siv_wrap.collect_message(siv_tag)
kravatte_siv_wrap.generate_digest(len(message))
ciphertext = bytes([p_text ^ key_stream for p_text, key_stream in zip(message, kravatte_siv_wrap.digest)])
kravatte_siv_wrap.scrub()
return ciphertext, siv_tag
def siv_unwrap(key: bytes, ciphertext: bytes, siv_tag: bytes, metadata: bytes, workers: int=None,
mp_input: bool=True, mp_output: bool=True) -> KravatteValidatedOutput:
"""
Decryption of Synthetic Initialization Vector method described in the Farfalle/Kravatte
spec. Given a key, metadata, and validation tag, generates plaintext (of equivalent length to
the ciphertext) and validates message based on included tag, metadata, and key. Inverse of
siv_wrap function.
Args:
key (bytes): Encryption key; 0-200 bytes in length
ciphertext (bytes): Ciphertext SIV Message
siv_tag (bytes): Authenticating byte string
metadata (bytes): Metadata used to encrypt message and generate tag
workers (int): parallel processes to use in compression/expansion operations
mp_input (bool): Enable multi-processing for calculations on input data
mp_output (bool): Enable multi-processing for calculations on output data
Returns:
tuple (bytes, boolean): Bytes of plaintext and message validation boolean
"""
# Initialize Kravatte
kravatte_siv_unwrap = Kravatte(key, workers=workers, mp_input=mp_input, mp_output=mp_output)
# Re-Generate Key Stream
kravatte_siv_unwrap.collect_message(metadata)
kravatte_siv_unwrap.collect_message(siv_tag)
kravatte_siv_unwrap.generate_digest(len(ciphertext))
siv_plaintext = bytes([p_text ^ key_stream for p_text, key_stream in zip(ciphertext, kravatte_siv_unwrap.digest)])
# Re-Generate Tag From Metadata and Recovered Plaintext
kravatte_siv_unwrap.collect_message(metadata)
kravatte_siv_unwrap.collect_message(siv_plaintext)
kravatte_siv_unwrap.generate_digest(len(siv_tag))
generated_tag = kravatte_siv_unwrap.digest
# Check that the provided tag matches the reconstituted tag
valid_tag = kravatte_siv_unwrap.compare_bytes(siv_tag, generated_tag)
kravatte_siv_unwrap.scrub()
return siv_plaintext, valid_tag
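The two-pass SIV structure used by siv_wrap/siv_unwrap is independent of Kravatte itself: pass one derives the tag from metadata plus message, pass two derives the keystream from metadata plus tag. This sketch reproduces the structure with HMAC-SHA256 as a stand-in keyed PRF; the function names and counter-mode expansion are illustrative assumptions, not this module's API:

```python
import hashlib
import hmac

def _prf_stream(key: bytes, data: bytes, n: int) -> bytes:
    """Expand key/data into an n-byte stream via counter-mode HMAC (stand-in PRF)."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out.extend(hmac.new(key, data + counter.to_bytes(4, 'big'),
                            hashlib.sha256).digest())
        counter += 1
    return bytes(out[:n])

def toy_siv_wrap(key, message, metadata, tag_size=32):
    tag = _prf_stream(key, metadata + message, tag_size)       # pass 1: tag
    stream = _prf_stream(key, metadata + tag, len(message))    # pass 2: keystream
    return bytes(m ^ s for m, s in zip(message, stream)), tag

def toy_siv_unwrap(key, ciphertext, tag, metadata):
    stream = _prf_stream(key, metadata + tag, len(ciphertext))
    message = bytes(c ^ s for c, s in zip(ciphertext, stream))
    expected = _prf_stream(key, metadata + message, len(tag))
    return message, hmac.compare_digest(tag, expected)
```

Because the keystream is keyed by the tag, tampering with either ciphertext or metadata changes the recovered plaintext and thus the recomputed tag, failing validation.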
class KravatteSAE(Kravatte):
"""
An authenticated encryption mode designed to track a session consisting of a series of messages
and an initialization nonce. ** DEPRECATED in favor of KravatteSANE **
"""
TAG_SIZE = 16
OFFSET = TAG_SIZE
def __init__(self, nonce: bytes, key: bytes=b'', workers: int=None, mp_input: bool=True,
mp_output: bool=True):
"""
Initialize KravatteSAE with user key and nonce
Args:
nonce (bytes): random unique value to initialize the session with
key (bytes): secret key for encrypting session messages
workers (int): parallel processes to use in compression/expansion operations
mp_input (bool): Enable multi-processing for calculations on input data
mp_output (bool): Enable multi-processing for calculations on output data
"""
super().__init__(key, workers, mp_input, mp_output)
self.initialize_history(nonce)
def initialize_history(self, nonce: bytes) -> None:
"""
Initialize session history by storing Keccak collector state and current internal key
Args:
nonce (bytes): random unique value absorbed to initialize the session history
"""
self.collect_message(nonce)
self.history_collector = np.copy(self.collector)
self.history_key = np.copy(self.roll_key)
self.generate_digest(self.TAG_SIZE)
self.tag = self.digest.copy()
def wrap(self, plaintext: bytes, metadata: bytes) -> KravatteTagOutput:
"""
Encrypt an arbitrary plaintext message using the included metadata as part of an on-going
session. Creates authentication tag for validation during decryption.
Args:
plaintext (bytes): user plaintext of arbitrary length
metadata (bytes): associated data to ensure a unique encryption permutation
Returns:
(bytes, bytes): encrypted cipher text and authentication tag
"""
# Restore Kravatte State to When Latest History was Absorbed
self.collector = np.copy(self.history_collector)
self.roll_key = np.copy(self.history_key)
self.digest = bytearray(b'')
self.digest_active = False
# Generate/Apply Key Stream
self.generate_digest(len(plaintext) + self.OFFSET)
ciphertext = bytes([p_text ^ key_stream for p_text, key_stream in zip(plaintext, self.digest[self.OFFSET:])])
# Update History
if len(metadata) > 0 or len(plaintext) == 0:
self._append_to_history(metadata, 0)
if len(plaintext) > 0:
self._append_to_history(ciphertext, 1)
self.history_collector = np.copy(self.collector)
self.history_key = np.copy(self.roll_key)
# Generate Tag
self.generate_digest(self.TAG_SIZE)
return ciphertext, self.digest
def unwrap(self, ciphertext: bytes, metadata: bytes, validation_tag: bytes) -> KravatteValidatedOutput:
"""
Decrypt an arbitrary ciphertext message using the included metadata as part of an on-going
session. Validates decryption based on the provided authentication tag.
Args:
ciphertext (bytes): user ciphertext of arbitrary length
metadata (bytes): associated data from encryption
validation_tag (bytes): collection of bytes that authenticates the decrypted plaintext as
being encrypted with the same secret key
Returns:
(bytes, bool): decrypted plaintext and boolean indicating if decryption was authenticated against secret key
"""
# Restore Kravatte State to When Latest History was Absorbed
self.collector = np.copy(self.history_collector)
self.roll_key = np.copy(self.history_key)
self.digest = bytearray(b'')
self.digest_active = False
# Generate/Apply Key Stream
self.generate_digest(len(ciphertext) + self.OFFSET)
plaintext = bytes([p_text ^ key_stream for p_text, key_stream in zip(ciphertext, self.digest[self.OFFSET:])])
# Update History
if len(metadata) > 0 or len(ciphertext) == 0:
self._append_to_history(metadata, 0)
if len(ciphertext) > 0:
self._append_to_history(ciphertext, 1)
self.history_collector = np.copy(self.collector)
self.history_key = np.copy(self.roll_key)
# Generate Tag
self.generate_digest(self.TAG_SIZE)
# Store Generated Tag and Validate
self.tag = self.digest.copy()
valid_tag = self.compare_bytes(self.tag, validation_tag)
return plaintext, valid_tag
def _append_to_history(self, message: bytes, pad_bit: int) -> None:
"""
Update history collector state with provided message.
Args:
message (bytes): arbitrary number of bytes to be padded into Keccak blocks and absorbed into the collector
pad_bit (int): Either 1 or 0 to append to the end of the regular message before padding
"""
if self.digest_active:
self.collector = np.copy(self.history_collector)
self.roll_key = np.copy(self.history_key)
self.digest = bytearray(b'')
self.digest_active = False
self.roll_key = self._kravatte_roll_compress(self.roll_key)
# Pad message with a single pad bit and zero-fill out to a full Keccak block
start_len = len(message)
padded_len = start_len + (self.KECCAK_BYTES - (start_len % self.KECCAK_BYTES))
padded_bytes = self._pad_10_append(message, padded_len, pad_bit, 1)
absorb_steps = len(padded_bytes) // self.KECCAK_BYTES
# Absorb into Collector
for msg_block in range(absorb_steps):
m = np.frombuffer(padded_bytes, dtype=np.uint64, count=25, offset=msg_block * self.KECCAK_BYTES).reshape([5, 5], order='F')
m_k = m ^ self.roll_key
self.roll_key = self._kravatte_roll_compress(self.roll_key)
self.collector = self.collector ^ self._keccak(m_k)
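The `np.frombuffer(...).reshape([5, 5], order='F')` step above maps 200 input bytes onto 25 little-endian 64-bit lanes in column-major order. The same mapping with only the standard library (illustrative helper, not this module's API):

```python
import struct

def bytes_to_lanes(block: bytes):
    """Interpret a 200-byte block as a 5x5 matrix of little-endian uint64 lanes,
    filled column by column (NumPy order='F'): matrix[x][y] is flat lane x + 5*y."""
    lanes = struct.unpack('<25Q', block)
    return [[lanes[x + 5 * y] for y in range(5)] for x in range(5)]
```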
class KravatteSANE(Kravatte):
"""
An authenticated encryption mode designed to track a session consisting of a series of messages,
metadata, and an initialization nonce. A replacement for KravatteSAE
"""
TAG_SIZE = 16
OFFSET = TAG_SIZE
def __init__(self, nonce: bytes, key: bytes=b'', workers: int=None, mp_input: bool=True,
mp_output: bool=True):
"""
Initialize KravatteSANE with user key and nonce
Args:
nonce (bytes): random unique value to initialize the session with
key (bytes): secret key for encrypting session messages
workers (int): parallel processes to use in compression/expansion operations
mp_input (bool): Enable multi-processing for calculations on input data
mp_output (bool): Enable multi-processing for calculations on output data
"""
super().__init__(key, workers, mp_input, mp_output)
self.initialize_history(nonce, False)
def initialize_history(self, nonce: bytes, reinitialize: bool=True) -> None:
"""
Initialize session history. Session history is stored pre-compressed within the Keccak collector
and current matching internal key state. Kravatte-SANE session history starts with the user
provided nonce.
Args:
nonce (bytes): user provided bytes to initialize the session history
reinitialize (bool): perform a full reset of the Keccak state when manually restarting the history log
"""
if reinitialize:
self.reset_state()
self.collect_message(nonce)
self.history_collector = np.copy(self.collector)
self.history_key = np.copy(self.roll_key)
self.generate_digest(self.TAG_SIZE)
self.tag = self.digest.copy()
self.e_attr = 0
def wrap(self, plaintext: bytes, metadata: bytes) -> KravatteTagOutput:
"""
Encrypt an arbitrary plaintext message using the included metadata as part of an on-going
session. Creates authentication tag for validation during decryption.
Args:
plaintext (bytes): user plaintext of arbitrary length
metadata (bytes): associated data to ensure a unique encryption permutation
Returns:
(bytes, bytes): encrypted cipher text and authentication tag
"""
# Restore Kravatte State to When Latest History was Absorbed
self.collector = np.copy(self.history_collector)
self.roll_key = np.copy(self.history_key)
self.digest = bytearray(b'')
self.digest_active = False
# Generate/Apply Key Stream
self.generate_digest(len(plaintext) + self.OFFSET)
ciphertext = bytes([p_text ^ key_stream for p_text, key_stream in zip(plaintext, self.digest[self.OFFSET:])])
# Restore/Update History States if required
self._restore_history()
if len(metadata) > 0 or len(plaintext) == 0:
self._append_to_history(metadata, (self.e_attr << 1) | 0, 2)
if len(plaintext) > 0:
self._append_to_history(ciphertext, (self.e_attr << 1) | 1, 2)
# Toggle the e attribute
self.e_attr ^= 1
# Generate Tag
self.generate_digest(self.TAG_SIZE)
return ciphertext, self.digest
def unwrap(self, ciphertext: bytes, metadata: bytes, validation_tag: bytes) -> KravatteValidatedOutput:
"""
Decrypt an arbitrary ciphertext message using the included metadata as part of an on-going
session. Validates decryption based on the provided authentication tag.
Args:
ciphertext (bytes): user ciphertext of arbitrary length
metadata (bytes): associated data from encryption
validation_tag (bytes): collection of bytes that authenticates the decrypted plaintext as
being encrypted with the same secret key
Returns:
(bytes, bool): decrypted plaintext and boolean indicating if decryption was authenticated against secret key
"""
# Restore Kravatte State to When Latest History was Absorbed
self.collector = np.copy(self.history_collector)
self.roll_key = np.copy(self.history_key)
self.digest = bytearray(b'')
self.digest_active = False
# Generate/Apply Key Stream
self.generate_digest(len(ciphertext) + self.OFFSET)
plaintext = bytes([p_text ^ key_stream for p_text, key_stream in zip(ciphertext, self.digest[self.OFFSET:])])
# Restore/Update History States if required
self._restore_history()
if len(metadata) > 0 or len(ciphertext) == 0:
self._append_to_history(metadata, (self.e_attr << 1) | 0, 2)
if len(ciphertext) > 0:
self._append_to_history(ciphertext, (self.e_attr << 1) | 1, 2)
# Toggle the e attribute
self.e_attr ^= 1
# Generate Tag
self.generate_digest(self.TAG_SIZE)
# Store Generated Tag and Validate
self.tag = self.digest.copy()
valid_tag = self.compare_bytes(self.tag, validation_tag)
return plaintext, valid_tag
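The session idea, that every tag commits to all messages exchanged so far, can be sketched with a hash-chained history. SHA-256 and HMAC are stand-ins here and nothing below is Kravatte-specific; all names are illustrative:

```python
import hashlib
import hmac

class ToySession:
    """Each wrap derives its keystream and tag from the running history digest."""
    def __init__(self, key: bytes, nonce: bytes):
        self.key = key
        self.history = hashlib.sha256(nonce).digest()

    def wrap(self, plaintext: bytes):
        # keystream depends on everything absorbed so far (one block for brevity,
        # so plaintext is limited to 32 bytes in this sketch)
        stream = hmac.new(self.key, self.history + b'stream', hashlib.sha256).digest()
        ciphertext = bytes(p ^ s for p, s in zip(plaintext, stream))
        # fold this message into the history, then tag the updated history
        self.history = hashlib.sha256(self.history + ciphertext).digest()
        tag = hmac.new(self.key, self.history + b'tag', hashlib.sha256).digest()[:16]
        return ciphertext, tag
```

Re-sending the same plaintext later in the session yields a different keystream and tag, because the history digest has moved on.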
def _append_to_history(self, message: bytes, pad_bits: int, pad_size: int) -> None:
"""
Update history collector state with provided message.
Args:
message (bytes): arbitrary number of bytes to be padded into Keccak blocks and absorbed into the collector
pad_bits (int): Up to 6 additional bits added to the end of the regular message before padding
pad_size (int): Number of bits to append
"""
self.collect_message(message, pad_bits, pad_size)
self.history_collector = np.copy(self.collector)
self.history_key = np.copy(self.roll_key)
def _restore_history(self) -> None:
"""
Restore the internal kravatte state to the previously saved history state
Args:
None
"""
self.collector = np.copy(self.history_collector)
self.roll_key = np.copy(self.history_key)
self.digest = bytearray(b'')
self.digest_active = False
class KravatteSANSE(Kravatte):
"""
A nonce-less authenticated encryption mode designed to track a session consisting of a series of
messages and metadata. A replacement for Kravatte-SIV
"""
TAG_SIZE = 32
def __init__(self, key: bytes=b'', workers: int=None, mp_input: bool=True, mp_output: bool=True):
"""
Initialize KravatteSANSE with user key
Args:
key (bytes): secret key for encrypting/decrypting session messages
workers (int): parallel processes to use in compression/expansion operations
mp_input (bool): Enable multi-processing for calculations on input data
mp_output (bool): Enable multi-processing for calculations on output data
"""
super().__init__(key, workers, mp_input, mp_output)
self.initialize_history(False)
def initialize_history(self, reinitialize: bool=True) -> None:
"""
Initialize session history. Session history is stored pre-compressed within the Keccak collector
and current matching internal key state. Kravatte-SANSE session history starts empty.
Args:
reinitialize (bool): perform a full reset of the Keccak state when manually restarting the history log
"""
if reinitialize:
self.reset_state()
self.history_collector = np.copy(self.collector)
self.history_key = np.copy(self.roll_key)
self.history_collector_state = np.copy(self.new_collector)
self.e_attr = 0
def wrap(self, plaintext: bytes, metadata: bytes) -> KravatteTagOutput:
"""
Encrypt an arbitrary plaintext message using the included metadata as part of an on-going
session. Creates authentication tag for validation during decryption.
Args:
plaintext (bytes): user plaintext of arbitrary length
metadata (bytes): associated data to ensure a unique encryption permutation
Returns:
(bytes, bytes): encrypted cipher text and authentication tag
"""
# Restore Kravatte State to When Latest History was Absorbed
self._restore_history()
# Update History
if len(metadata) > 0 or len(plaintext) == 0:
self._append_to_history(metadata, (self.e_attr << 1) | 0, 2)
if len(plaintext) > 0:
# Generate Tag
self.collect_message(plaintext, (self.e_attr << 2) | 0b10, 3)
self.generate_digest(self.TAG_SIZE)
tag = self.digest
# Reset History State and Generate/Apply Key Stream
self._restore_history()
self.collect_message(tag, ((self.e_attr << 2) | 0b11), 3)
self.generate_digest(len(plaintext))
ciphertext = bytes([p_text ^ key_stream for p_text, key_stream in zip(plaintext, self.digest)])
# Reset History State and Update it with Plaintext and Padding
self._restore_history()
self._append_to_history(plaintext, (self.e_attr << 2) | 0b10, 3)
else:
ciphertext = b''
self.generate_digest(self.TAG_SIZE)
tag = self.digest
self.e_attr ^= 1
return ciphertext, tag
def unwrap(self, ciphertext: bytes, metadata: bytes, validation_tag: bytes) -> KravatteValidatedOutput:
"""
Decrypt an arbitrary ciphertext message using the included metadata as part of an on-going
session. Validates decryption based on the provided authentication tag.
Args:
ciphertext (bytes): user ciphertext of arbitrary length
metadata (bytes): associated data from encryption
validation_tag (bytes): collection of bytes that authenticates the decrypted plaintext as
being encrypted with the same secret key
Returns:
(bytes, bool): decrypted plaintext and boolean indicating if decryption was authenticated against secret key
"""
# Restore Kravatte State to When Latest History was Absorbed
self._restore_history()
if len(metadata) > 0 or len(ciphertext) == 0:
self._append_to_history(metadata, (self.e_attr << 1) | 0, 2)
if len(ciphertext) > 0:
self.collect_message(validation_tag, ((self.e_attr << 2) | 0b11), 3)
self.generate_digest(len(ciphertext))
plaintext = bytes([p_text ^ key_stream for p_text, key_stream in zip(ciphertext, self.digest)])
# Update History
self._restore_history()
self._append_to_history(plaintext, (self.e_attr << 2) | 0b10, 3)
else:
plaintext = b''
# Generate Tag
self.generate_digest(self.TAG_SIZE)
self.e_attr ^= 1
# Store Generated Tag and Validate
self.tag = self.digest.copy()
valid_tag = self.compare_bytes(self.tag, validation_tag)
return plaintext, valid_tag
def _append_to_history(self, message: bytes, pad_bits: int, pad_size: int) -> None:
"""
Update history collector state with provided message. Save the new history state.
Args:
message (bytes): arbitrary number of bytes to be padded into Keccak blocks and absorbed into the collector
pad_bits (int): Up to 6 additional bits added to the end of the regular message before padding
pad_size (int): Number of bits to append
"""
self.collect_message(message, pad_bits, pad_size)
self.history_collector = np.copy(self.collector)
self.history_key = np.copy(self.roll_key)
self.history_collector_state = np.copy(self.new_collector)
def _restore_history(self) -> None:
"""
Restore the internal kravatte state to the previously saved history state
Args:
None
"""
self.collector = np.copy(self.history_collector)
self.roll_key = np.copy(self.history_key)
self.new_collector = np.copy(self.history_collector_state)
self.digest = bytearray(b'')
self.digest_active = False
class KravatteWBC(Kravatte):
""" Configurable Wide Block Cipher encryption mode with customization tweak """
SPLIT_THRESHOLD = 398
def __init__(self, block_cipher_size: int, tweak: bytes=b'', key: bytes=b'', workers: int=None,
mp_input: bool=True, mp_output: bool=True):
"""
Initialize KravatteWBC object
Inputs:
block_cipher_size (int): size of block cipher in bytes
tweak (bytes): arbitrary value to customize cipher output
key (bytes): secret key for encrypting message blocks
workers (int): parallel processes to use in compression/expansion operations
mp_input (bool): Enable multi-processing for calculations on input data
mp_output (bool): Enable multi-processing for calculations on output data
"""
super().__init__(key, workers, mp_input, mp_output)
self.split_bytes(block_cipher_size)
self.tweak = tweak
def split_bytes(self, message_size_bytes: int) -> None:
"""
Calculates the size (in bytes) of the "left" and "right" components of the block encryption
decryption process. Based on algorithm given in Farfalle spec.
Input
message_size_bytes (int): user defined block size for this instance of KravatteWBC
"""
if message_size_bytes <= self.SPLIT_THRESHOLD:
nL = ceil(message_size_bytes / 2)
else:
q = floor(((message_size_bytes + 1) / self.KECCAK_BYTES)) + 1
x = floor(log2(q - 1))
nL = ((q - (2**x)) * self.KECCAK_BYTES) - 1
self.size_L = nL
self.size_R = message_size_bytes - nL
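The split rule above can be checked in isolation. A standalone sketch of the same arithmetic, with the constants copied from this module (helper name is illustrative):

```python
from math import ceil, floor, log2

KECCAK_BYTES = 200     # copied from this module
SPLIT_THRESHOLD = 398  # copied from this module

def split_sizes(message_size_bytes: int):
    """Return (size_L, size_R) per the Farfalle spec split rule."""
    if message_size_bytes <= SPLIT_THRESHOLD:
        n_l = ceil(message_size_bytes / 2)
    else:
        n_l = message_size_bytes  # placeholder overwritten below
        q = floor((message_size_bytes + 1) / KECCAK_BYTES) + 1
        x = floor(log2(q - 1))
        n_l = (q - 2 ** x) * KECCAK_BYTES - 1
    return n_l, message_size_bytes - n_l
```

Small blocks split near the middle; large blocks give the left side a whole number of Keccak blocks minus one byte.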
def encrypt(self, message: bytes) -> bytes:
"""
Encrypt a user message using KravatteWBC mode
Inputs:
message (bytes): plaintext message to encrypt. Length should be <= the block cipher size
defined in the KravatteWBC object
Returns:
bytes: encrypted block same length as message
"""
L = message[0:self.size_L]
R = message[self.size_L:]
# R0 ← R0 + HK(L||0), with R0 the first min(b, |R|) bits of R
self.collect_message(L, append_bits=0b0, append_bit_count=1)
self.generate_digest(min(self.KECCAK_BYTES, self.size_R), short_kravatte=True)
extended_digest = self.digest + ((self.size_R - len(self.digest)) * b'\x00')
R = bytes([p_text ^ key_stream for p_text, key_stream in zip(R, extended_digest)])
# L ← L + GK (R||1 ◦ W)
self.collect_message(self.tweak)
self.collect_message(R, append_bits=0b1, append_bit_count=1)
self.generate_digest(self.size_L)
L = bytes([p_text ^ key_stream for p_text, key_stream in zip(L, self.digest)])
# R ← R + GK (L||0 ◦ W)
self.collect_message(self.tweak)
self.collect_message(L, append_bits=0b0, append_bit_count=1)
self.generate_digest(self.size_R)
R = bytes([p_text ^ key_stream for p_text, key_stream in zip(R, self.digest)])
# L0 ← L0 + HK(R||1), with L0 the first min(b, |L|) bits of L
self.collect_message(R, append_bits=0b1, append_bit_count=1)
self.generate_digest(min(self.KECCAK_BYTES, self.size_L), short_kravatte=True)
extended_digest = self.digest + ((self.size_L - len(self.digest)) * b'\x00')
L = bytes([p_text ^ key_stream for p_text, key_stream in zip(L, extended_digest)])
# C ← the concatenation of L and R
return L + R
def decrypt(self, ciphertext: bytes) -> bytes:
"""
Decrypt a user message using KravatteWBC mode
Args:
ciphertext (bytes): ciphertext message to decrypt.
Returns:
bytes: decrypted block same length as ciphertext
"""
L = ciphertext[0:self.size_L]
R = ciphertext[self.size_L:]
# L0 ← L0 + HK(R||1), with L0 the first min(b, |L|) bits of L
self.collect_message(R, append_bits=0b1, append_bit_count=1)
self.generate_digest(min(self.KECCAK_BYTES, self.size_L), short_kravatte=True)
extended_digest = self.digest + ((self.size_L - len(self.digest)) * b'\x00')
L = bytes([c_text ^ key_stream for c_text, key_stream in zip(L, extended_digest)])
# R ← R + GK (L||0 ◦ W)
self.collect_message(self.tweak)
self.collect_message(L, append_bits=0b0, append_bit_count=1)
self.generate_digest(self.size_R)
R = bytes([c_text ^ key_stream for c_text, key_stream in zip(R, self.digest)])
# L ← L + GK (R||1 ◦ W)
self.collect_message(self.tweak)
self.collect_message(R, append_bits=0b1, append_bit_count=1)
self.generate_digest(self.size_L)
L = bytes([c_text ^ key_stream for c_text, key_stream in zip(L, self.digest)])
# R0 ← R0 + HK(L||0), with R0 the first min(b, |R|) bits of R
self.collect_message(L, append_bits=0b0, append_bit_count=1)
self.generate_digest(min(self.KECCAK_BYTES, self.size_R), short_kravatte=True)
extended_digest = self.digest + ((self.size_R - len(self.digest)) * b'\x00')
R = bytes([c_text ^ key_stream for c_text, key_stream in zip(R, extended_digest)])
# P ← the concatenation of L and R
return L + R
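encrypt/decrypt above form a four-round Feistel-like network: decryption simply replays the rounds in reverse order. The inverse property can be demonstrated generically, with a domain-separated HMAC stream standing in for the HK/GK functions (all names are illustrative, not this module's API):

```python
import hashlib
import hmac

def _mask(key: bytes, data: bytes, domain: bytes, n: int) -> bytes:
    """Keyed round function producing an n-byte mask (stand-in for HK/GK)."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out.extend(hmac.new(key, domain + data + bytes([counter]),
                            hashlib.sha256).digest())
        counter += 1
    return bytes(out[:n])

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_encrypt(key: bytes, block: bytes, split: int) -> bytes:
    L, R = block[:split], block[split:]
    R = _xor(R, _mask(key, L, b'0', len(R)))
    L = _xor(L, _mask(key, R, b'1', len(L)))
    R = _xor(R, _mask(key, L, b'2', len(R)))
    L = _xor(L, _mask(key, R, b'3', len(L)))
    return L + R

def feistel_decrypt(key: bytes, block: bytes, split: int) -> bytes:
    L, R = block[:split], block[split:]
    L = _xor(L, _mask(key, R, b'3', len(L)))
    R = _xor(R, _mask(key, L, b'2', len(R)))
    L = _xor(L, _mask(key, R, b'1', len(L)))
    R = _xor(R, _mask(key, L, b'0', len(R)))
    return L + R
```

Each round XORs one half with a mask derived from the other half, so replaying the same masks in reverse cancels them exactly, even for unequal half sizes.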
class KravatteWBC_AE(KravatteWBC):
""" Authentication with associated metadata version Kravatte Wide Block Cipher encryption mode """
WBC_AE_TAG_LEN = 16
def __init__(self, block_cipher_size: int, key: bytes=b'', workers: int=None,
mp_input: bool=True, mp_output: bool=True):
"""
Initialize KravatteWBC_AE object
Args:
block_cipher_size (int): size of block cipher in bytes
key (bytes): secret key for encrypting message blocks
workers (int): parallel processes to use in compression/expansion operations
mp_input (bool): Enable multi-processing for calculations on input data
mp_output (bool): Enable multi-processing for calculations on output data
"""
super().__init__(block_cipher_size + self.WBC_AE_TAG_LEN, b'', key=key,
workers=workers, mp_input=mp_input, mp_output=mp_output)
def wrap(self, message: bytes, metadata: bytes) -> bytes:
"""
Encrypt a user message and generate included authenticated data. Requires metadata input
in lieu of customization tweak.
Args:
message (bytes): User message same length as configured object block size
metadata (bytes): associated metadata to ensure unique output
Returns:
bytes: authenticated encrypted block
"""
self.tweak = metadata # metadata treated as tweak
padded_message = message + (self.WBC_AE_TAG_LEN * b'\x00')
return self.encrypt(padded_message)
def unwrap(self, ciphertext: bytes, metadata: bytes) -> KravatteValidatedOutput:
"""
Decrypt a ciphertext block and validate included authenticated data. Requires metadata input
in lieu of customization tweak.
Args:
            ciphertext (bytes): ciphertext of the same length as the configured object block size
metadata (bytes): associated metadata to ensure unique output
Returns:
            (bytes, bool): plaintext bytes and decryption-valid flag
"""
L = ciphertext[0:self.size_L]
R = ciphertext[self.size_L:]
self.tweak = metadata
# L0 ← L0 + HK(R||1), with L0 the first min(b, |L|) bits of L
self.collect_message(R, append_bits=0b1, append_bit_count=1)
self.generate_digest(min(self.KECCAK_BYTES, self.size_L), short_kravatte=True)
extended_digest = self.digest + ((self.size_L - len(self.digest)) * b'\x00')
L = bytes([c_text ^ key_stream for c_text, key_stream in zip(L, extended_digest)])
# R ← R + GK (L||0 ◦ A)
self.collect_message(self.tweak)
self.collect_message(L, append_bits=0b0, append_bit_count=1)
self.generate_digest(self.size_R)
R = bytes([c_text ^ key_stream for c_text, key_stream in zip(R, self.digest)])
# |R| ≥ b+t
if self.size_R >= self.KECCAK_BYTES + self.WBC_AE_TAG_LEN:
# if the last t bytes of R ̸= 0t then return error!
            valid_plaintext = R[-self.WBC_AE_TAG_LEN:] == (self.WBC_AE_TAG_LEN * b'\x00')
# L ← L + GK (R||1 ◦ A)
self.collect_message(self.tweak)
self.collect_message(R, append_bits=0b1, append_bit_count=1)
self.generate_digest(self.size_L)
L = bytes([c_text ^ key_stream for c_text, key_stream in zip(L, self.digest)])
# R0 ← R0 + HK(L||0), with R0 the first b bytes of R
self.collect_message(L, append_bits=0b0, append_bit_count=1)
self.generate_digest(self.KECCAK_BYTES, short_kravatte=True)
extended_digest = self.digest + ((self.size_R - len(self.digest)) * b'\x00')
R = bytes([c_text ^ key_stream for c_text, key_stream in zip(R, extended_digest)])
else:
# L ← L + GK (R||1 ◦ A)
self.collect_message(self.tweak)
self.collect_message(R, append_bits=0b1, append_bit_count=1)
self.generate_digest(self.size_L)
L = bytes([c_text ^ key_stream for c_text, key_stream in zip(L, self.digest)])
# R0 ← R0 + HK(L||0), with R0 the first min(b, |R|) bytes of R
self.collect_message(L, append_bits=0b0, append_bit_count=1)
self.generate_digest(min(self.KECCAK_BYTES, self.size_R), short_kravatte=True)
extended_digest = self.digest + ((self.size_R - len(self.digest)) * b'\x00')
R = bytes([c_text ^ key_stream for c_text, key_stream in zip(R, extended_digest)])
# if the last t bytes of L||R ̸= 0t then return error!
            valid_plaintext = (L + R)[-self.WBC_AE_TAG_LEN:] == (self.WBC_AE_TAG_LEN * b'\x00')
# P′ ← L||R
return (L + R)[:-self.WBC_AE_TAG_LEN], valid_plaintext
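The WBC-AE scheme authenticates by appending a run of zero bytes before encryption and verifying that those bytes survive decryption. A toy sketch of just that framing, with the cipher itself elided (the names are illustrative, not part of this module):

```python
TAG_LEN = 16

def add_zero_tag(message: bytes) -> bytes:
    """Append the all-zero authentication tag before encryption."""
    return message + TAG_LEN * b'\x00'

def check_zero_tag(padded: bytes):
    """Strip the tag after decryption; valid only if the tag is still zero."""
    valid = padded[-TAG_LEN:] == TAG_LEN * b'\x00'
    return padded[:-TAG_LEN], valid

plaintext, ok = check_zero_tag(add_zero_tag(b'secret'))
assert plaintext == b'secret' and ok
```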
class KravatteOracle(Kravatte):
"""Pseudo-random byte stream generator. Accepts an authentication key and arbitrary sized seed
input. Once initialized, the random method can return an arbitrary amount of random output bytes
for each call. Generator collector state can be reinitialized at anytime with the seed_generator
method
"""
def __init__(self, seed: bytes=b'', key: bytes=b'', workers: int=None, mp_input: bool=True,
mp_output: bool=True):
"""
Initialize KravatteOracle with user key and seed.
Inputs:
seed (bytes) - random unique value to initialize the oracle object with
key (bytes) - secret key for authenticating generator
workers (int): parallel processes to use in compression/expansion operations
mp_input (bool): Enable multi-processing for calculations on input data
mp_output (bool): Enable multi-processing for calculations on output data
"""
        super(KravatteOracle, self).__init__(key, workers, mp_input, mp_output)
self.seed_generator(seed)
def seed_generator(self, seed: bytes):
"""
Re-seed Kravatte collector state with new seed data.
Input:
seed (bytes): Collection of seed bytes that are absorbed as single message
"""
self.collect_message(seed)
def random(self, output_size: int) -> bytearray:
"""
Generates a stream of pseudo-random bytes from the current state of the Kravatte collector
state
Input:
output_size (bytes): Number of bytes to return
Returns:
bytearray: Pseudo-random Kravatte squeezed collector output
"""
self.generate_digest(output_size)
return self.digest
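KravatteOracle follows the usual sponge-PRG shape: absorb a seed once, then squeeze arbitrary amounts of output. The same interface can be sketched with the standard library's SHAKE-256 — a different primitive, shown only to illustrate the API shape, not Kravatte itself. Note that consecutive random calls here return prefixes of one fixed stream, which is a deliberate simplification:

```python
import hashlib

class ShakeOracle:
    """Seed once, then squeeze arbitrary-length deterministic output."""
    def __init__(self, seed: bytes):
        self.seed_generator(seed)

    def seed_generator(self, seed: bytes):
        # Re-seeding replaces the absorbed state entirely.
        self._state = hashlib.shake_256(seed)

    def random(self, output_size: int) -> bytes:
        # SHAKE is an XOF, so any output length can be requested.
        return self._state.digest(output_size)

oracle = ShakeOracle(b'my seed')
assert oracle.random(32) == ShakeOracle(b'my seed').random(32)
```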
if __name__ == "__main__":
from time import perf_counter
import hashlib
from binascii import hexlify
import os
my_key = b'\xFF' * 32
my_message = bytes([x % 256 for x in range(4 * 1024 * 1024)])
print("Normal Message MAC Generation")
start = perf_counter()
my_kra = mac(my_key, my_message, 1024 * 1024 * 4)
stop = perf_counter()
print("Process Time:", stop - start)
a1 = hashlib.md5()
a1.update(my_kra)
print(hexlify(a1.digest()))
print("%d Process/Core Message MAC Generation" % os.cpu_count())
start = perf_counter()
my_kra = mac(my_key, my_message, 1024 * 1024 * 4, workers=os.cpu_count())
stop = perf_counter()
print("Process Time:", stop - start)
a2 = hashlib.md5()
a2.update(my_kra)
print(hexlify(a2.digest()))
assert a1.digest() == a2.digest()
77f4e418af722622970c7ce018e3cf661f1435fb | 933 | py | Python | radar/models/user.py | J4LP/radar | c54feac19dc0181223c74d780179095bbe53142a | [
"Unlicense",
"MIT"
] | null | null | null | radar/models/user.py | J4LP/radar | c54feac19dc0181223c74d780179095bbe53142a | [
"Unlicense",
"MIT"
] | null | null | null | radar/models/user.py | J4LP/radar | c54feac19dc0181223c74d780179095bbe53142a | [
"Unlicense",
"MIT"
] | null | null | null | import arrow
from sqlalchemy_utils import IPAddressType, ArrowType
from radar.models import db
class User(db.Model):
id = db.Column(db.Integer, primary_key=True)
user_id = db.Column(db.String, unique=True)
main_character = db.Column(db.String)
main_character_id = db.Column(db.Integer)
alliance_name = db.Column(db.String, nullable=True)
corporation_name = db.Column(db.String)
last_ip = db.Column(IPAddressType, default=u'127.0.0.1')
last_login_on = db.Column(ArrowType, default=lambda: arrow.utcnow())
anonymous = True
authenticated = False
structures = db.relationship('Structure', backref='added_by')
scans = db.relationship('Scan', backref='added_by')
def is_authenticated(self):
return self.authenticated
def is_active(self):
return True
def is_anonymous(self):
return self.anonymous
def get_id(self):
        return str(self.id)
77f931744d12e59cfee289a9376b2a8d113bac9f | 840 | py | Python | ancfindersite/admin.py | kevko/ancfinder | 1368604e9a17f70e2d3981b6c4258d6455555844 | [
"CC0-1.0"
] | null | null | null | ancfindersite/admin.py | kevko/ancfinder | 1368604e9a17f70e2d3981b6c4258d6455555844 | [
"CC0-1.0"
] | null | null | null | ancfindersite/admin.py | kevko/ancfinder | 1368604e9a17f70e2d3981b6c4258d6455555844 | [
"CC0-1.0"
] | null | null | null | # -*- coding: utf-8
from django.contrib import admin
from ancfindersite.models import *
@admin.register(CommissionerInfo)
class CommissionerInfoAdmin(admin.ModelAdmin):
list_display = ['id', 'latest', 'created', 'author', 'anc', 'smd', 'field_name', 'field_value', 'linkage']
raw_id_fields = ['author']
readonly_fields = ['author', 'superseded_by', 'anc', 'smd', 'field_name']
def latest(self, obj):
return obj.superseded_by is None
latest.boolean = True
def linkage(self, obj):
ret = []
if obj.superseded_by is not None: ret.append("next: %d" % obj.superseded_by.id)
try:
ret.append("prev: %d" % obj.supersedes.id)
except:
# obj.supersedes doesn't return None, instead it raises a DoesNotExist exception.
pass
return "; ".join(ret)
admin.site.register(Document)
ae01727cfecda3887bfcd03981090dc1fcddea42 | 66 | py | Python | for in.py | MrAnonymous5635/CSCircles | 010ac82942c88da357e214ea5462ec378f3667b8 | [
"MIT"
] | 17 | 2018-09-19T09:44:33.000Z | 2022-01-17T15:17:11.000Z | for in.py | MrAnonymous5635/CSCircles | 010ac82942c88da357e214ea5462ec378f3667b8 | [
"MIT"
] | 2 | 2020-02-24T15:28:33.000Z | 2021-11-16T00:04:52.000Z | for in.py | MrAnonymous5635/CSCircles | 010ac82942c88da357e214ea5462ec378f3667b8 | [
"MIT"
] | 8 | 2020-02-20T00:02:06.000Z | 2022-01-06T17:25:51.000Z | def prod(L):
p = 1
for i in L:
p *= i
return p
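prod folds multiplication over a sequence, with 1 as the identity for the empty case; since Python 3.8 the standard library offers the same operation as math.prod:

```python
import math

def prod(L):
    p = 1
    for i in L:
        p *= i
    return p

assert prod([2, 3, 4]) == math.prod([2, 3, 4]) == 24
assert prod([]) == 1  # the empty product is the multiplicative identity
```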
ae02de2b9a04ab2ec51a173473001761384a9382 | 781 | py | Python | flask_web/config/default.py | Max-PJB/python-learning2 | e8b05bef1574ee9abf8c90497e94ef20a7f4e3bd | [
"MIT"
] | null | null | null | flask_web/config/default.py | Max-PJB/python-learning2 | e8b05bef1574ee9abf8c90497e94ef20a7f4e3bd | [
"MIT"
] | null | null | null | flask_web/config/default.py | Max-PJB/python-learning2 | e8b05bef1574ee9abf8c90497e94ef20a7f4e3bd | [
"MIT"
] | null | null | null | # coding: utf-8
import os
class Config(object):
    """Base config class."""
    RESULT_ERROR = 0
    RESULT_SUCCESS = 1
    MONGODB_SETTINGS = {'ALIAS': 'default',
                        'DB': 'facepp',
                        'host': 'localhost',
                        'username': 'admin',
                        'password': ''}
# Flask app config
DEBUG = True
TESTING = True
    # token expiry time: 7200 seconds
EXP_SECONDS = 7200
SECRET_KEY = "\xb5\xb3}#\xb7A\xcac\x9d0\xb6\x0f\x80z\x97\x00\x1e\xc0\xb8+\xe9)\xf0}"
# Root path of project
PROJECT_PATH = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
    # URLs exempt from login; these can be visited without authenticating
allow_urls = ['/login',
'/register',
'/admin'
]
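A typical consumer of allow_urls is a before-request hook that skips authentication for whitelisted paths. A minimal sketch (the helper name is hypothetical and not part of this config):

```python
def is_login_exempt(path, allow_urls=('/login', '/register', '/admin')):
    """Return True when the request path falls under a login-free prefix."""
    return any(path.startswith(url) for url in allow_urls)

assert is_login_exempt('/login')
assert not is_login_exempt('/dashboard')
```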
7ac3c07bce8003bb3e2b3c3cce602636e606ad54 | 460 | py | Python | sample_pointcloud.py | jiameng1010/pointNet | 17d230f46f64136baba2c3d6cb7f05ab4bbb9f31 | [
"MIT"
] | null | null | null | sample_pointcloud.py | jiameng1010/pointNet | 17d230f46f64136baba2c3d6cb7f05ab4bbb9f31 | [
"MIT"
] | null | null | null | sample_pointcloud.py | jiameng1010/pointNet | 17d230f46f64136baba2c3d6cb7f05ab4bbb9f31 | [
"MIT"
] | 1 | 2019-02-03T12:19:36.000Z | 2019-02-03T12:19:36.000Z | from pyntcloud import PyntCloud
import pyembree
import numpy as np
import trimesh
from trimesh import sample, ray, triangles
from trimesh.ray.ray_pyembree import RayMeshIntersector
import pandas as pd
cloud = PyntCloud.from_file("/home/mjia/Documents/ShapeCompletion/test.ply")
# Use a distinct name so the `sample` module imported from trimesh is not shadowed.
sampled = cloud.get_sample(name='mesh_random_sampling', as_PyntCloud=True, n=1024)
sampled.plot(mesh=True)
mesh = trimesh.load('../ModelNet40/desk/train/desk_0008.off')
mesh.show()
7acb3839960d46daa774c78e73cb0aeba944f283 | 288 | py | Python | ddtruss/__init__.py | deeepeshthakur/ddtruss | 86aa945d577c6efe752099eee579386762942289 | [
"MIT"
] | 1 | 2020-01-27T12:03:47.000Z | 2020-01-27T12:03:47.000Z | ddtruss/__init__.py | deeepeshthakur/ddtruss | 86aa945d577c6efe752099eee579386762942289 | [
"MIT"
] | null | null | null | ddtruss/__init__.py | deeepeshthakur/ddtruss | 86aa945d577c6efe752099eee579386762942289 | [
"MIT"
] | null | null | null | from .__about__ import __author__, __email__, __license__, __status__, __version__
from .solver import DataDrivenSolver
from .truss import Truss
__all__ = [
"__author__",
"__email__",
"__license__",
"__version__",
"__status__",
"Truss",
"DataDrivenSolver",
]
7ae50d93020389130f7a04ed4c7ed892b8c42fdf | 1,743 | py | Python | tests/test_basics.py | viki-org/logstash-docker | 71d316dfa1792075ffe82398f45561cf25741609 | [
"Apache-2.0"
] | 305 | 2016-10-04T04:17:04.000Z | 2021-11-16T13:53:05.000Z | tests/test_basics.py | viki-org/logstash-docker | 71d316dfa1792075ffe82398f45561cf25741609 | [
"Apache-2.0"
] | 105 | 2016-10-10T15:52:39.000Z | 2019-06-26T13:39:36.000Z | tests/test_basics.py | viki-org/logstash-docker | 71d316dfa1792075ffe82398f45561cf25741609 | [
"Apache-2.0"
] | 133 | 2016-11-03T18:01:43.000Z | 2022-02-02T12:16:55.000Z | from .fixtures import logstash
from .constants import logstash_version_string
def test_logstash_is_the_correct_version(logstash):
assert logstash_version_string in logstash.stdout_of('logstash --version')
def test_the_default_user_is_logstash(logstash):
assert logstash.stdout_of('whoami') == 'logstash'
def test_that_the_user_home_directory_is_usr_share_logstash(logstash):
assert logstash.environment('HOME') == '/usr/share/logstash'
def test_locale_variables_are_set_correctly(logstash):
assert logstash.environment('LANG') == 'en_US.UTF-8'
assert logstash.environment('LC_ALL') == 'en_US.UTF-8'
def test_opt_logstash_is_a_symlink_to_usr_share_logstash(logstash):
assert logstash.stdout_of('realpath /opt/logstash') == '/usr/share/logstash'
def test_all_logstash_files_are_owned_by_logstash(logstash):
assert logstash.stdout_of('find /usr/share/logstash ! -user logstash') == ''
def test_logstash_user_is_uid_1000(logstash):
assert logstash.stdout_of('id -u logstash') == '1000'
def test_logstash_user_is_gid_1000(logstash):
assert logstash.stdout_of('id -g logstash') == '1000'
def test_logging_config_does_not_log_to_files(logstash):
assert logstash.stdout_of('grep RollingFile /logstash/config/log4j2.properties') == ''
# REF: https://docs.openshift.com/container-platform/3.5/creating_images/guidelines.html
def test_all_files_in_logstash_directory_are_gid_zero(logstash):
bad_files = logstash.stdout_of('find /usr/share/logstash ! -gid 0').split()
    assert len(bad_files) == 0
def test_all_directories_in_logstash_directory_are_setgid(logstash):
bad_dirs = logstash.stdout_of('find /usr/share/logstash -type d ! -perm /g+s').split()
    assert len(bad_dirs) == 0
7afcba4478178056c3c4b8f0dd94cece84b8040b | 1,334 | py | Python | tests/application/models.py | YoungAspirations/QA-Projects | 9f24886977d566548f5629371407187d8017f209 | [
"MIT"
] | null | null | null | tests/application/models.py | YoungAspirations/QA-Projects | 9f24886977d566548f5629371407187d8017f209 | [
"MIT"
] | null | null | null | tests/application/models.py | YoungAspirations/QA-Projects | 9f24886977d566548f5629371407187d8017f209 | [
"MIT"
] | null | null | null | from application import db
class Users(db.Model):
id = db.Column(db.Integer, primary_key=True)
First_name = db.Column(db.String(30), nullable = False)
Last_name = db.Column(db.String(50), nullable = False)
User_name = db.Column(db.String(30), nullable = False)
Password = db.Column(db.String(30), nullable = False)
Email = db.Column(db.String(50), nullable = False)
games = db.relationship('Games', backref='users')
class GameDetails(db.Model):
id = db.Column(db.Integer, primary_key=True)
Publisher = db.Column(db.String(30), nullable = True)
Genre = db.Column(db.String(30), nullable = True)
Units_sold = db.Column(db.Integer, nullable = True)
ESRB_Rating = db.Column(db.String(30), nullable = True)
Engine = db.Column(db.String(30), nullable = True)
games = db.relationship('Games', backref='gamedetails')
class Games(db.Model):
id = db.Column(db.Integer, primary_key=True)
Users_id = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False)
GameDetails_id = db.Column(db.Integer, db.ForeignKey('game_details.id'), nullable=False)
Title = db.Column(db.String(30), nullable = False)
Platform = db.Column(db.String(30), nullable = True)
Rating = db.Column(db.Integer, nullable = True)
    Status = db.Column(db.String(30), nullable = False)
7afd81f0836abf10236a169a7a3b6acc4dd1d0a8 | 689 | py | Python | ICS4U/ICS4U-2017-2018-Code/examples/inheritance/Python-based/Vehicle.py | mrseidel-classes/archives | b3e04fd4338e3b3796ad2c681a10fbca69e1dae2 | [
"MIT"
] | 5 | 2018-05-06T21:29:00.000Z | 2022-02-02T12:02:07.000Z | ICS4U/ICS4U-2017-2018-Code/examples/inheritance/Python-based/Vehicle.py | mrseidel-classes/archives | b3e04fd4338e3b3796ad2c681a10fbca69e1dae2 | [
"MIT"
] | null | null | null | ICS4U/ICS4U-2017-2018-Code/examples/inheritance/Python-based/Vehicle.py | mrseidel-classes/archives | b3e04fd4338e3b3796ad2c681a10fbca69e1dae2 | [
"MIT"
] | null | null | null | class Vehicle:
''' Documentation needed here
'''
def __init__(self, numberOfTires, colorOfVehicle):
''' Documentation needed here
'''
self.numberOfTires = numberOfTires
self.colorOfVehicle = colorOfVehicle
def start(self):
''' This function starts the vehicle
'''
print("I started!")
def drive(self):
''' This function drives the vehicle
'''
print("I'm driving!")
    def setColor(self, color):
        ''' This function updates the color of the vehicle based
            on the passed-in information
            Parameters:
            color -> a color to update the vehicle's color with
        '''
        self.colorOfVehicle = color
def __repr__(self):
return "I'm a Vehicle!"
bb091e39ee9f343a16bc795582b949c70dabc617 | 2,469 | py | Python | src/retrocookie/pr/events.py | iamamutt/retrocookie | 4cc4da83c5fc3751377730c06fcef746a06fe60a | [
"MIT"
] | 15 | 2020-06-21T14:35:42.000Z | 2022-03-30T15:48:55.000Z | src/retrocookie/pr/events.py | iamamutt/retrocookie | 4cc4da83c5fc3751377730c06fcef746a06fe60a | [
"MIT"
] | 223 | 2020-05-22T14:35:05.000Z | 2022-03-28T00:19:23.000Z | src/retrocookie/pr/events.py | iamamutt/retrocookie | 4cc4da83c5fc3751377730c06fcef746a06fe60a | [
"MIT"
] | 4 | 2020-11-19T12:55:01.000Z | 2022-03-15T14:24:25.000Z | """Events and contexts for the message bus."""
from dataclasses import dataclass
from typing import List
from retrocookie import git
from retrocookie.pr.base import bus
from retrocookie.pr.protocols import github
@dataclass
class GitNotFound(bus.Event):
"""Cannot find an installation of git."""
@dataclass
class BadGitVersion(bus.Event):
"""The installed version of git is too old."""
version: git.Version
expected: git.Version
@dataclass
class ProjectNotFound(bus.Event):
"""The generated project could not be identified."""
@dataclass
class TemplateNotFound(bus.Event):
"""The project template could not be identified."""
project: github.Repository
@dataclass
class LoadProject(bus.Context):
"""The project repository is being loaded."""
repository: str
@dataclass
class LoadTemplate(bus.Context):
"""A template repository is being loaded."""
repository: str
@dataclass
class RepositoryNotFound(bus.Event):
"""The repository was not found."""
repository: str
@dataclass
class PullRequestNotFound(bus.Event):
"""The pull request was not found."""
pull_request: str
@dataclass
class PullRequestAlreadyExists(bus.Event):
"""The pull request already exists."""
template: github.Repository
template_pull: github.PullRequest
project_pull: github.PullRequest
@dataclass
class GitFailed(bus.Event):
"""The git command exited with a non-zero status."""
command: str
options: List[str]
status: int
stdout: str
stderr: str
@dataclass
class GitHubError(bus.Event):
"""The GitHub API returned an error response."""
url: str
method: str
code: int
message: str
errors: List[str]
@dataclass
class ConnectionError(bus.Event):
"""A connection to the GitHub API could not be established."""
url: str
method: str
error: str
@dataclass
class CreatePullRequest(bus.Context):
"""A pull request is being created."""
template: github.Repository
template_branch: str
project_pull: github.PullRequest
@dataclass
class UpdatePullRequest(bus.Context):
"""A pull request is being updated."""
template: github.Repository
template_pull: github.PullRequest
project_pull: github.PullRequest
@dataclass
class PullRequestCreated(bus.Event):
"""A pull request was created."""
template: github.Repository
template_pull: github.PullRequest
project_pull: github.PullRequest
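The events above are plain dataclasses dispatched through a message bus. The publish/subscribe side of such a bus can be sketched generically — this is a toy, not retrocookie's actual bus module:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Event:
    pass

@dataclass
class RepositoryNotFound(Event):
    repository: str

class Bus:
    """Route each published event to handlers registered for its exact type."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def publish(self, event):
        for handler in self.handlers[type(event)]:
            handler(event)

seen = []
bus = Bus()
bus.subscribe(RepositoryNotFound, seen.append)
bus.publish(RepositoryNotFound(repository="owner/repo"))
assert seen == [RepositoryNotFound(repository="owner/repo")]
```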
bb0e3a4a424cab3d6b9961d60125be743eac980d | 277 | py | Python | backend/src/incidents/migrations/0014_merge_20190918_1515.py | pavan168/IncidentManagement | 7fbf111922a735d4cbe75969159858d6605a1e0b | [
"MIT"
] | 17 | 2019-01-16T13:10:25.000Z | 2021-02-07T02:04:11.000Z | backend/src/incidents/migrations/0014_merge_20190918_1515.py | pavan168/IncidentManagement | 7fbf111922a735d4cbe75969159858d6605a1e0b | [
"MIT"
] | 360 | 2019-02-13T15:24:44.000Z | 2022-02-26T17:42:33.000Z | backend/src/incidents/migrations/0014_merge_20190918_1515.py | mohamednizar/request-management | a88a2ce35a7a1a98630ffd14c1a31a5173b662c8 | [
"MIT"
] | 46 | 2019-01-16T13:10:25.000Z | 2021-06-23T02:44:18.000Z | # Generated by Django 2.2.1 on 2019-09-18 15:15
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('incidents', '0013_auto_20190918_1415'),
('incidents', '0011_auto_20190918_1356'),
]
operations = [
]
bb2444f0f11fe36608e740690e190ab225f4ca44 | 1,633 | py | Python | models/strategy.py | CBarreiro96/PRICE-INSPECTOR | d77984f8c11817af21cbdaa56a610a359c6d8061 | [
"MIT"
] | 1 | 2021-03-21T10:49:46.000Z | 2021-03-21T10:49:46.000Z | models/strategy.py | CBarreiro96/PRICE-INSPECTOR | d77984f8c11817af21cbdaa56a610a359c6d8061 | [
"MIT"
] | 1 | 2021-03-13T23:29:29.000Z | 2021-03-13T23:29:29.000Z | models/strategy.py | CBarreiro96/PRICE-INSPECTOR | d77984f8c11817af21cbdaa56a610a359c6d8061 | [
"MIT"
] | 3 | 2021-03-30T12:56:05.000Z | 2021-06-21T23:20:25.000Z | #!/usr/bin/python
"""class Strategy"""
import models
from models.base_model import BaseModel, Base
from sqlalchemy import Column, String, Float, ForeignKey
from sqlalchemy.orm import relationship
class Strategy(BaseModel, Base):
"""Representation of a Strategy"""
__tablename__ = 'strategies'
name = Column(String(128), nullable=False)
backtests = relationship("Backtest",
backref="strategies",
cascade="all, delete, delete-orphan")
param_0_name = Column(String(60), nullable=False)
param_0_value = Column(Float, nullable=False)
param_1_name = Column(String(60), default=None)
param_1_value = Column(Float, default=None)
param_2_name = Column(String(60), default=None)
param_2_value = Column(Float, default=None)
param_3_name = Column(String(60), default=None)
param_3_value = Column(Float, default=None)
param_4_name = Column(String(60), default=None)
param_4_value = Column(Float, default=None)
param_5_name = Column(String(60), default=None)
param_5_value = Column(Float, default=None)
param_6_name = Column(String(60), default=None)
param_6_value = Column(Float, default=None)
param_7_name = Column(String(60), default=None)
param_7_value = Column(Float, default=None)
param_8_name = Column(String(60), default=None)
param_8_value = Column(Float, default=None)
param_9_name = Column(String(60), default=None)
param_9_value = Column(Float, default=None)
def __init__(self, *args, **kwargs):
"""initializes strategy"""
super().__init__(*args, **kwargs)
bb314ff4d3d72fdc85dd64dff7ae3856c4338bdd | 165 | py | Python | Python/011. Built-ins/04. Athlete Sort.py | subhadeep-123/HackerRank | 4596915097e58d9fd7a8bfad3194ac35f0c25177 | [
"MIT"
] | 2 | 2020-07-20T19:47:54.000Z | 2021-04-19T21:14:59.000Z | Python/011. Built-ins/04. Athlete Sort.py | subhadeep-123/HackerRank | 4596915097e58d9fd7a8bfad3194ac35f0c25177 | [
"MIT"
] | null | null | null | Python/011. Built-ins/04. Athlete Sort.py | subhadeep-123/HackerRank | 4596915097e58d9fd7a8bfad3194ac35f0c25177 | [
"MIT"
] | null | null | null | n, m = map(int, input().split())
array = [input() for _ in range(n)]
k = int(input())
for row in sorted(array, key=lambda row: int(row.split()[k])):
print(row)
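The snippet reads an n-row table of integers as strings and sorts rows on column k with a lambda key; sorted is stable, so rows with equal keys keep their input order. The same idea without stdin, using made-up sample data:

```python
rows = ["10 2 5", "7 1 0", "8 15 9"]
k = 1  # sort on the second column
for row in sorted(rows, key=lambda row: int(row.split()[k])):
    print(row)
# Column values are 2, 1, 15, so the order is "7 1 0", "10 2 5", "8 15 9".
```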
bb37e66022cf5b3dfa7d812f2c6b4a7dea737948 | 169 | py | Python | strings.py | Abhiforcs/mypythonworkspace | ac5014160fc9b29d1a8d835441e6360f470fae59 | [
"MIT"
] | 1 | 2020-05-12T20:19:53.000Z | 2020-05-12T20:19:53.000Z | strings.py | Abhiforcs/mypythonworkspace | ac5014160fc9b29d1a8d835441e6360f470fae59 | [
"MIT"
] | null | null | null | strings.py | Abhiforcs/mypythonworkspace | ac5014160fc9b29d1a8d835441e6360f470fae59 | [
"MIT"
] | null | null | null | def count(char,word):
    total = 0
    for letter in word:
        if letter == char:
            total = total + 1
    return total
result = count('a','banana')
print(result)
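For a single character, the loop above computes the same value as the builtin str.count:

```python
word = 'banana'
assert word.count('a') == 3
assert word.count('n') == 2
```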
bb3ca45e0783016717a57abf87ea77f115739f69 | 14,523 | py | Python | test/testRational.py | turkeydonkey/nzmath3 | a48ae9efcf0d9ad1485c2e9863c948a7f1b20311 | [
"BSD-3-Clause"
] | 1 | 2021-05-26T19:22:17.000Z | 2021-05-26T19:22:17.000Z | test/testRational.py | turkeydonkey/nzmath3 | a48ae9efcf0d9ad1485c2e9863c948a7f1b20311 | [
"BSD-3-Clause"
] | null | null | null | test/testRational.py | turkeydonkey/nzmath3 | a48ae9efcf0d9ad1485c2e9863c948a7f1b20311 | [
"BSD-3-Clause"
] | null | null | null |
import unittest
from nzmath.rational import *
import nzmath.finitefield as finitefield
from nzmath.plugins import FLOATTYPE as Float
# Rational, Integer, theIntegerRing, theRationalField
class RationalTest (unittest.TestCase):
def testInit(self):
self.assertEqual("2/1", str(Rational(2)))
self.assertEqual("2/1", str(Rational(2)))
self.assertEqual("1/2", str(Rational(1,2)))
self.assertEqual("1/2", str(Rational(Rational(1,2))))
self.assertEqual("21/26", str(Rational(Rational(7,13),Rational(2,3))))
self.assertEqual("3/2", str(Rational(1.5)))
self.assertEqual("3/4", str(Rational(1.5, 2.0)))
self.assertRaises(ZeroDivisionError, Rational, 1, 0)
self.assertRaises(TypeError, Rational, 1, finitefield.FinitePrimeFieldElement(1,7))
self.assertRaises(TypeError, Rational, finitefield.FinitePrimeFieldElement(1,7), 4)
def testPos(self):
self.assertEqual("1/2", str(+Rational(2,4)))
self.assertEqual("-3/4", str(+Rational(-3,4)))
def testNeg(self):
self.assertEqual(-Rational(2,4), Rational(-1,2))
self.assertEqual("3/4", str(-Rational(-3,4)))
def testAdd(self):
self.assertEqual(Rational(13,6), Rational(2,3) + Rational(3,2))
self.assertEqual(Rational(31,18), Rational(13,18) + 1)
self.assertEqual(Rational(2000000000000000000000000000000000000001,2),
1000000000000000000000000000000000000000 + Rational(1,2))
self.assertEqual(1, Rational(1,2) + Rational(1,3) + Rational(1,6))
self.assertEqual(1, Rational(1,2) + 0.5)
self.assertEqual(1, 0.5 + Rational(1,2))
def testIadd(self):
a = Rational(1,2)
a += Rational(1,3)
self.assertEqual(Rational(5,6), a)
def testSub(self):
self.assertEqual(Rational(-5,6), Rational(2,3) - Rational(3,2))
self.assertEqual(Rational(-5,18), Rational(13,18) - 1)
self.assertEqual(Rational(1999999999999999999999999999999999999999,2),
1000000000000000000000000000000000000000 - Rational(1,2))
self.assertEqual(0, Rational(1,2) - Rational(1,3) - Rational(1,6))
self.assertEqual(0, Rational(1,2) - 0.5)
self.assertEqual(0, 0.5 - Rational(1,2))
def testIsub(self):
a = Rational(1,2)
a -= Rational(1,3)
self.assertEqual(Rational(1,6), a)
def testMul(self):
self.assertEqual(1, Rational(2,3) * Rational(3,2))
self.assertEqual(Rational(26,18), Rational(13,18) * 2)
self.assertEqual(500000000000000000000000000000000000000,
1000000000000000000000000000000000000000 * Rational(1,2))
self.assertEqual(Rational(1, 36), Rational(1,2) * Rational(1,3) * Rational(1,6))
## self.assertEqual(Rational(1,4), Rational(1,2) * 0.5)
## self.assertEqual(Rational(1,4), 0.5 * Rational(1,2))
self.assertEqual(0.25, Rational(1,2) * 0.5)
self.assertEqual(0.25, 0.5 * Rational(1,2))
self.assertEqual(Float(0.25), Rational(1,2) * Float(0.5))
self.assertEqual(Float(0.25), Float(0.5) * Rational(1,2))
def testImul(self):
a = Rational(1,2)
a *= Rational(1,3)
self.assertEqual(Rational(1,6), a)
def testDiv(self):
self.assertEqual(Rational(4,9), Rational(2,3) / Rational(3,2))
self.assertEqual(Rational(13,36), Rational(13,18) / 2)
self.assertEqual(2000000000000000000000000000000000000000,
1000000000000000000000000000000000000000 / Rational(1,2))
self.assertEqual(9, Rational(1,2) / Rational(1,3) / Rational(1,6))
self.assertEqual(1, Rational(1,2) / 0.5)
self.assertEqual(1, 0.5 / Rational(1,2))
def testIdiv(self):
a = Rational(1,2)
a /= Rational(1,3)
self.assertEqual(Rational(3,2), a)
def testPow(self):
self.assertEqual(Rational(2**4, 3**4), Rational(2,3) ** 4)
self.assertEqual(Rational(3,2), Rational(2,3) ** (-1))
def testIpow(self):
a = Rational(1,2)
a **= 3
self.assertEqual(Rational(1,8), a)
a **= -1
self.assertEqual(8, a)
def testLt(self):
self.assertTrue(Rational(5,7) < Rational(3,4))
self.assertFalse(Rational(3,4) < Rational(5,7))
self.assertFalse(Rational(3,4) < Rational(3,4))
self.assertTrue(Rational(132,133) < 1)
self.assertTrue(Rational(-13,12) < -1)
self.assertTrue(1 > Rational(132,133))
self.assertTrue(Rational(132,133) < 1.000001)
def testLe(self):
self.assertTrue(Rational(5,7) <= Rational(3,4))
self.assertFalse(Rational(3,4) <= Rational(5,7))
self.assertTrue(Rational(3,4) <= Rational(3,4))
self.assertTrue(Rational(132,133) <= 1)
self.assertTrue(Rational(-13,12) <= -1)
self.assertTrue(1 >= Rational(132,133))
def testEq(self):
self.assertTrue(Rational(1,2) == Rational(1,2))
self.assertTrue(Rational(-1,2) == Rational(-1,2))
self.assertTrue(Rational(4,2) == 2)
self.assertTrue(2 == Rational(14,7))
self.assertFalse(Rational(3,5) == Rational(27,46))
def testNe(self):
self.assertTrue(Rational(1,2) != Rational(1,3))
self.assertTrue(Rational(1,2) != Rational(-1,2))
self.assertFalse(Rational(1,2) != Rational(1,2))
def testGt(self):
self.assertTrue(Rational(3,4) > Rational(5,7))
self.assertFalse(Rational(5,7) > Rational(3,4))
self.assertFalse(Rational(3,4) > Rational(3,4))
self.assertTrue(Rational(13,12) > 1)
self.assertTrue(Rational(-11,12) > -1)
self.assertTrue(1 < Rational(134,133))
def testGe(self):
self.assertTrue(Rational(3,4) >= Rational(5,7))
self.assertFalse(Rational(5,7) >= Rational(3,4))
self.assertTrue(Rational(3,4) >= Rational(3,4))
self.assertTrue(Rational(13,12) >= 1)
self.assertTrue(Rational(-11,12) >= -1)
self.assertTrue(1 <= Rational(134,133))
def testLong(self):
self.assertTrue(1 == int(Rational(13,12)))
self.assertTrue(0 == int(Rational(12,13)))
self.assertTrue(-1 == int(Rational(-1,14)))
def testInt(self):
self.assertTrue(1 == int(Rational(13,12)))
self.assertTrue(0 == int(Rational(12,13)))
self.assertTrue(-1 == int(Rational(-1,14)))
def testTrim(self):
self.assertEqual(Rational(1,3), Rational(333,1000).trim(5))
self.assertEqual(Rational(13,21), Rational(34,55).trim(33))
self.assertEqual(Rational(21,34), Rational(34,55).trim(34))
def testExpand(self):
self.assertEqual(Rational(-33,100), Rational(-1, 3).expand(10,100))
def testFloat(self):
self.assertTrue(isinstance(float(Rational(1,4)), float))
self.assertEqual(0.25, float(Rational(1,4)))
def testDecimalString(self):
self.assertEqual("0.25000", Rational(1,4).decimalString(5))
self.assertEqual("0.33333", Rational(1,3).decimalString(5))
def testNonzero(self):
self.assertTrue(Rational(1,1))
self.assertFalse(Rational(0,1))
def testHash(self):
self.assertTrue(hash(Rational(1,2)))
self.assertEqual(hash(Rational(1)), hash(Rational(1)))
self.assertNotEqual(hash(Rational(1)), hash(Rational(2)))
self.assertNotEqual(hash(Rational(1,2)), hash(Rational(2,3)))
self.assertEqual(hash(Rational(3,111)), hash(Rational(36,1332)))
class IntegerTest(unittest.TestCase):
def setUp(self):
self.three = Integer(3)
def testMul(self):
self.assertEqual(24, self.three * 8)
self.assertEqual([0,0,0], self.three * [0])
self.assertEqual((0,0,0), self.three * (0,))
self.assertEqual(Rational(6,5), self.three * Rational(2,5))
def testRmul(self):
self.assertEqual(24, 8 * self.three)
self.assertEqual([0,0,0], [0] * self.three)
self.assertEqual((0,0,0), (0,) * self.three)
def testRmod(self):
self.assertEqual(1, 4 % self.three)
def testTruediv(self):
self.assertEqual(Rational(1, 3), 1 / self.three)
self.assertEqual(Rational(2, 1), 2 / Integer(1))
self.assertEqual(Rational, type(2 / Integer(1)))
def testPow(self):
self.assertEqual(25, pow(5, Integer(2)))
self.assertEqual(1, pow(self.three, 4, 5))
# return Rational when index is negative
self.assertEqual(Rational(1, 2), pow(Integer(2), -1))
# not clear that failing these tests is an issue
# self.assertRaises(TypeError, pow, 3, Integer(4), 5)
# self.assertRaises(TypeError, pow, 3, 4, Integer(5))
# raise ValueError when index is negative and modulus is given
self.assertRaises(ValueError, pow, Integer(2), -1, 5)
def testGetRing(self):
self.assertEqual(theIntegerRing, self.three.getRing())
def testNonzero(self):
self.assertTrue(Integer(1))
self.assertFalse(Integer(0))
def testHash(self):
self.assertTrue(hash(Integer(12)))
self.assertEqual(hash(Integer(1)), hash(Integer(1)))
self.assertNotEqual(hash(Integer(1)), hash(Integer(2)))
class IntegerRingTest(unittest.TestCase):
def testContains(self):
        self.assertTrue(1 in theIntegerRing)
self.assertTrue(Integer(1) in theIntegerRing)
self.assertTrue(Rational(1,2) not in theIntegerRing)
self.assertTrue((1,) not in theIntegerRing)
def testGetQuotientField(self):
self.assertTrue(theRationalField is theIntegerRing.getQuotientField())
def testIssubring(self):
self.assertTrue(theIntegerRing.issubring(theRationalField))
self.assertTrue(theIntegerRing.issubring(theIntegerRing))
def testIssuperring(self):
self.assertFalse(theIntegerRing.issuperring(theRationalField))
self.assertTrue(theIntegerRing.issuperring(theIntegerRing))
def testProperties(self):
self.assertTrue(theIntegerRing.isdomain())
self.assertTrue(theIntegerRing.isnoetherian())
self.assertTrue(theIntegerRing.iseuclidean())
self.assertTrue(theIntegerRing.isufd())
self.assertTrue(theIntegerRing.ispid())
self.assertFalse(theIntegerRing.isfield())
def testGcd(self):
self.assertEqual(1, theIntegerRing.gcd(1, 2))
self.assertEqual(2, theIntegerRing.gcd(2, 4))
self.assertEqual(10, theIntegerRing.gcd(0, 10))
self.assertEqual(10, theIntegerRing.gcd(10, 0))
self.assertEqual(1, theIntegerRing.gcd(13, 21))
def testLcm(self):
self.assertEqual(2, theIntegerRing.lcm(1, 2))
self.assertEqual(4, theIntegerRing.lcm(2, 4))
self.assertEqual(0, theIntegerRing.lcm(0, 10))
self.assertEqual(0, theIntegerRing.lcm(10, 0))
self.assertEqual(273, theIntegerRing.lcm(13, 21))
def testExtGcd(self):
self.assertEqual((1, 0, 1), theIntegerRing.extgcd(1, 2))
def testConstants(self):
self.assertEqual(1, theIntegerRing.one)
self.assertTrue(isinstance(theIntegerRing.one, Integer))
self.assertEqual(0, theIntegerRing.zero)
self.assertTrue(isinstance(theIntegerRing.zero, Integer))
def testStrings(self):
# str
self.assertEqual("Z", str(theIntegerRing))
# repr
self.assertEqual("IntegerRing()", repr(theIntegerRing))
def testHash(self):
dictionary = {}
dictionary[theIntegerRing] = 1
self.assertEqual(1, dictionary[IntegerRing()])
class RationalFieldTest(unittest.TestCase):
def testContains(self):
        self.assertTrue(1 in theRationalField)
self.assertTrue(Integer(1) in theRationalField)
self.assertTrue(Rational(1,2) in theRationalField)
self.assertTrue(3.14 not in theRationalField)
self.assertTrue((1,2) not in theRationalField)
def testGetQuotientField(self):
self.assertTrue(theRationalField is theRationalField.getQuotientField())
def testIssubring(self):
self.assertTrue(theRationalField.issubring(theRationalField))
self.assertFalse(theRationalField.issubring(theIntegerRing))
def testIssuperring(self):
self.assertTrue(theRationalField.issuperring(theRationalField))
self.assertTrue(theRationalField.issuperring(theIntegerRing))
def testProperties(self):
self.assertTrue(theRationalField.isfield())
self.assertTrue(theRationalField.isdomain())
def testConstants(self):
self.assertEqual(1, theRationalField.one)
self.assertTrue(isinstance(theRationalField.one, Rational))
self.assertEqual(0, theRationalField.zero)
self.assertTrue(isinstance(theRationalField.zero, Rational))
def testStrings(self):
# str
self.assertEqual("Q", str(theRationalField))
# repr
self.assertEqual("RationalField()", repr(theRationalField))
def testHash(self):
dictionary = {}
dictionary[theRationalField] = 1
self.assertEqual(1, dictionary[RationalField()])
class IntegerIfIntOrLongTest (unittest.TestCase):
def testInt(self):
b = IntegerIfIntOrLong(1)
self.assertTrue(isinstance(b, Integer))
def testLong(self):
b = IntegerIfIntOrLong(1)
self.assertTrue(isinstance(b, Integer))
def testRational(self):
b = IntegerIfIntOrLong(Rational(1,2))
self.assertFalse(isinstance(b, Integer))
self.assertTrue(isinstance(b, Rational))
def testTuple(self):
s = IntegerIfIntOrLong((1,1))
self.assertTrue(isinstance(s, tuple))
for i in s:
self.assertTrue(isinstance(i, Integer))
def testList(self):
s = IntegerIfIntOrLong([1,1])
self.assertTrue(isinstance(s, list))
for i in s:
self.assertTrue(isinstance(i, Integer))
def testListOfTuple(self):
ss = IntegerIfIntOrLong([(1,1),(2,2)])
self.assertTrue(isinstance(ss, list))
for s in ss:
self.assertTrue(isinstance(s, tuple))
for i in s:
self.assertTrue(isinstance(i, Integer))
def suite(suffix = "Test"):
suite = unittest.TestSuite()
all_names = globals()
for name in all_names:
if name.endswith(suffix):
suite.addTest(unittest.makeSuite(all_names[name], "test"))
return suite
if __name__ == '__main__':
runner = unittest.TextTestRunner()
runner.run(suite())
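The `testTrim` expectations above match the standard library's best-rational-approximation routine. Assuming `trim(n)` returns the best approximation with denominator at most `n`, a quick cross-check against `fractions.Fraction.limit_denominator`:

```python
from fractions import Fraction

# Same values as testTrim, expressed with the stdlib equivalent.
assert Fraction(333, 1000).limit_denominator(5) == Fraction(1, 3)
assert Fraction(34, 55).limit_denominator(33) == Fraction(13, 21)
assert Fraction(34, 55).limit_denominator(34) == Fraction(21, 34)
print("limit_denominator matches the trim expectations")
```

This is only a sanity check of the expected values, not a claim about nzmath's internal algorithm.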
| 38.218421 | 91 | 0.641534 | 1,753 | 14,523 | 5.308614 | 0.111238 | 0.149903 | 0.046207 | 0.021062 | 0.573931 | 0.446486 | 0.35332 | 0.287234 | 0.268644 | 0.221148 | 0 | 0.08932 | 0.212146 | 14,523 | 379 | 92 | 38.319261 | 0.723999 | 0.029333 | 0 | 0.204698 | 0 | 0 | 0.006605 | 0 | 0 | 0 | 0 | 0 | 0.630872 | 1 | 0.208054 | false | 0 | 0.013423 | 0 | 0.241611 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
bb4365a01dbad1b2b19030bedd47cf890a7de254 | 1,014 | py | Python | analysis/NumpyFunctionLoop.py | clearyb1/COVIDDataViz | e3001afafaf67d2447181268d8ca796c27a6df13 | [
"MIT"
] | null | null | null | analysis/NumpyFunctionLoop.py | clearyb1/COVIDDataViz | e3001afafaf67d2447181268d8ca796c27a6df13 | [
"MIT"
] | null | null | null | analysis/NumpyFunctionLoop.py | clearyb1/COVIDDataViz | e3001afafaf67d2447181268d8ca796c27a6df13 | [
"MIT"
] | null | null | null |
import numpy as np
#create array of weekly vaccination numbers from https://opendata-geohive.hub.arcgis.com/datasets/0101ed10351e42968535bb002f94c8c6_0.csv?outSR=%7B%22latestWkid%22%3A3857%2C%22wkid%22%3A102100%7D
a= np.array([3946,
43856,
52659,
49703,
51381,
56267,
32176,
86434,
88578,
88294,
91298,
64535,
133195,
139946,
131038,
155716,
188626,
211497,
245947,
323166,
331292,
305479,
277195,
290362,
357077,
370059,
370544,
390891,
373319,
336086,
300378,
232066,
232234,
229694,
183158,
121650,
108327,
95192,
63718,
43289,
23643,
21081,
24567,
22115,
18434,
15138,
21262,
21259,
19713,
14174,
14862,
])
#print(a.shape)
#print(np.mean(a))
#print(np.median(a))
#print(np.max(a))
#Generate min and % of adult population vaccinated per week
def uptake(value):
WkTotal = np.min(value)
    WkTotalStr = "%s" % WkTotal
Str1="Minimum Uptake "
print(Str1 + WkTotalStr)
Str2="% of Population Vaccinated per Week"
print(Str2)
    for i in value:  # iterate over the argument, not the module-level array
print(i / 4000000 * 100)
uptake(a)
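The commented-out numpy summary calls above have direct standard-library counterparts; a small stand-in for environments without numpy, reusing the first four weekly figures from the array:

```python
import statistics

# First four weeks from the vaccination array above.
weekly = [3946, 43856, 52659, 49703]

print("mean:", statistics.mean(weekly))
print("median:", statistics.median(weekly))
print("min uptake:", min(weekly))

# Percentage of a 4,000,000 adult population, as in uptake().
for n in weekly:
    print(round(n / 4000000 * 100, 4))
```

The 4,000,000 denominator mirrors the hard-coded value in the script; it is the script's assumption, not an official population figure.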
| 12.518519 | 194 | 0.710059 | 143 | 1,014 | 5.027972 | 0.762238 | 0.029207 | 0.022253 | 0.075104 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.399297 | 0.157791 | 1,014 | 80 | 195 | 12.675 | 0.442623 | 0.312623 | 0 | 0 | 0 | 0 | 0.076923 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015873 | false | 0 | 0.015873 | 0 | 0.031746 | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
bb44631e394af313f273110dc22a3affea46815e | 1,136 | py | Python | mExports.py | SkyLined/mProductVersionAndLicense | 89bfe34add2eb3d5866f4ead43ef3b650841cd8d | [
"CC-BY-4.0"
] | 3 | 2018-03-28T12:05:13.000Z | 2021-01-30T07:26:52.000Z | mExports.py | SkyLined/mProductDetails | c5aed22b409f01ea509deb00d0a2440ae7afdacf | [
"CC-BY-4.0"
] | null | null | null | mExports.py | SkyLined/mProductDetails | c5aed22b409f01ea509deb00d0a2440ae7afdacf | [
"CC-BY-4.0"
] | null | null | null |
from .cLicenseServer import cLicenseServer;
from .cProductDetails import cProductDetails;
from .faoGetLicensesFromFile import faoGetLicensesFromFile;
from .faoGetLicensesFromRegistry import faoGetLicensesFromRegistry;
from .faoGetProductDetailsForAllLoadedModules import faoGetProductDetailsForAllLoadedModules;
from .foGetLicenseCollectionForAllLoadedProducts import foGetLicenseCollectionForAllLoadedProducts;
from .fo0GetProductDetailsForMainModule import fo0GetProductDetailsForMainModule;
from .fo0GetProductDetailsForModule import fo0GetProductDetailsForModule;
from .ftasGetLicenseErrorsAndWarnings import ftasGetLicenseErrorsAndWarnings;
from .fsGetSystemId import fsGetSystemId;
from .fWriteLicensesToProductFolder import fWriteLicensesToProductFolder;
__all__ = [
"cLicenseServer",
"cProductDetails",
"faoGetLicensesFromFile",
"faoGetLicensesFromRegistry",
"faoGetProductDetailsForAllLoadedModules",
"foGetLicenseCollectionForAllLoadedProducts",
"fo0GetProductDetailsForMainModule",
"fo0GetProductDetailsForModule",
"ftasGetLicenseErrorsAndWarnings",
"fsGetSystemId",
"fWriteLicensesToProductFolder",
]; | 45.44 | 99 | 0.876761 | 56 | 1,136 | 17.714286 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005687 | 0.071303 | 1,136 | 25 | 100 | 45.44 | 0.934597 | 0 | 0 | 0 | 0 | 0 | 0.257696 | 0.220756 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.458333 | 0 | 0.458333 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
bb53be5d9535e67e41679bc0cccd687e1f49ef89 | 8,349 | py | Python | 3rdPartyLibraries/FuXi-master/test/SPARQL/test.py | mpetyx/pyrif | 2f7ba863030d7337bb39ad502d1e09e26ac950d2 | [
"MIT"
] | null | null | null | 3rdPartyLibraries/FuXi-master/test/SPARQL/test.py | mpetyx/pyrif | 2f7ba863030d7337bb39ad502d1e09e26ac950d2 | [
"MIT"
] | null | null | null | 3rdPartyLibraries/FuXi-master/test/SPARQL/test.py | mpetyx/pyrif | 2f7ba863030d7337bb39ad502d1e09e26ac950d2 | [
"MIT"
] | null | null | null |
# """
# FuXi Harness for W3C SPARQL1.1 Entailment Evaluation Tests
# """
# import unittest
# from pprint import pprint
# from urllib2 import urlopen
# from FuXi.Rete.RuleStore import SetupRuleStore
# from FuXi.Horn.HornRules import HornFromN3
# from FuXi.Rete.Proof import ImmutableDict
# from FuXi.SPARQL.BackwardChainingStore import *
# from FuXi.Rete.Util import setdict
# from rdflib import Namespace, RDF, RDFS, URIRef
# from rdflib import BNode, Graph, Literal, Namespace, RDF, RDFS, URIRef, Variable
# from rdfextras.sparql.parser import parse
# from rdflib import OWL as OWLNS
# from rdflib.store import Store
# from amara.lib import U
# MANIFEST = Namespace('http://www.w3.org/2001/sw/DataAccess/tests/test-manifest#')
# QUERY = Namespace('http://www.w3.org/2001/sw/DataAccess/tests/test-query#')
# SD = Namespace('http://www.w3.org/ns/sparql-service-description#')
# TEST = Namespace('http://www.w3.org/2009/sparql/docs/tests/data-sparql11/entailment/manifest#')
# STRING = Namespace('http://www.w3.org/2000/10/swap/string#')
# ENT = Namespace('http://www.w3.org/ns/entailment/')
# SUPPORTED_ENTAILMENT=[
# ENT.RDF,
# ENT.RDFS
# ]
# SKIP={
# "rdf01" : "Quantification over predicates",
# "rdfs01": "Quantification over predicates",
# "rdf02" : "Reification",
# "rdf03" : "Parse of query fails",
# "rdf10" : "Malformed test", #might be fixed
# "rdfs05": "Quantification over predicates (unary)",
# "rdfs11": "Reflexivity of rdfs:subClassOf (?x -> rdfs:Container)"
# }
# nsMap = {
# u'rdfs' :RDFS,
# u'rdf' :RDF,
# u'owl' :OWLNS,
# u'mf' :MANIFEST,
# u'sd' :SD,
# u'test' :MANIFEST,
# u'qt' :QUERY
# }
# MANIFEST_QUERY = \
# """
# SELECT ?test ?name ?queryFile ?rdfDoc ?regime ?result
# WHERE {
# ?test
# a test:QueryEvaluationTest;
# mf:name ?name;
# mf:action [
# qt:query ?queryFile;
# qt:data ?rdfDoc;
# sd:entailmentRegime ?regime
# ];
# mf:result ?result
# } ORDER BY ?test """
# def GetTests():
# manifestGraph = Graph().parse(
# open(os.getcwd()+'/SPARQL/W3C/entailment/manifest.ttl'),
# format='n3')
# rt = manifestGraph.query(
# MANIFEST_QUERY,
# initNs=nsMap,
# DEBUG = False)
# for test, name, queryFile, rdfDoc, regime, result in rt:
# yield test.split(TEST)[-1], \
# name, \
# queryFile, \
# rdfDoc, \
# regime, \
# result
# def castToTerm(node):
# if node.xml_local == 'bnode':
# return BNode(U(node))
# elif node.xml_local == 'uri':
# return URIRef(U(node))
# elif node.xml_local == 'literal':
# if node.xml_select('string(@datatype)'):
# dT = URIRef(U(node.xpath('string(@datatype)')))
# return Literal(U(node),datatype=dT)
# else:
# return Literal(U(node))
# else:
# raise NotImplementedError()
# def parseResults(sparqlRT):
# from amara import bindery
# actualRT = []
# doc = bindery.parse(sparqlRT,
# prefixes={
# u'sparql':u'http://www.w3.org/2005/sparql-results#'})
# askAnswer=doc.xml_select('string(/sparql:sparql/sparql:boolean)')
# if askAnswer:
# askAnswer = U(askAnswer)
# actualRT=askAnswer==u'true'
# else:
# for result in doc.xml_select('/sparql:sparql/sparql:results/sparql:result'):
# currBind = {}
# for binding in result.binding:
# varVal = U(binding.name)
# var = Variable(varVal)
# term = castToTerm(binding.xml_select('*')[0])
# currBind[var]=term
# if currBind:
# actualRT.append(currBind)
# return actualRT
# class TestSequence(unittest.TestCase):
# verbose = False
# def setUp(self):
# rule_store, rule_graph, self.network = SetupRuleStore(makeNetwork=True)
# self.network.nsMap = nsMap
# self.rules=list(HornFromN3(open('SPARQL/W3C/rdf-rdfs.n3')))
# def test_generator(testName, label, queryFile, rdfDoc, regime, result, debug):
# def test(self,debug=debug):
# print(testName, label)
# query = urlopen(queryFile).read()
# factGraph = Graph().parse(urlopen(rdfDoc),format='n3')
# factGraph.parse(open('SPARQL/W3C/rdfs-axiomatic-triples.n3'),format='n3')
# self.rules.extend(self.network.setupDescriptionLogicProgramming(
# factGraph,
# addPDSemantics=True,
# constructNetwork=False))
# if debug:
# pprint(list(self.rules))
# topDownStore=TopDownSPARQLEntailingStore(
# factGraph.store,
# factGraph,
# idb=self.rules,
# DEBUG=debug,
# nsBindings=nsMap,
# decisionProcedure = BFP_METHOD,
# identifyHybridPredicates = True,
# templateMap={
# STRING.contains : "REGEX(%s,%s)"
# })
# targetGraph = Graph(topDownStore)
# parsedQuery=parse(query)
# for pref,nsUri in (setdict(nsMap) | setdict(
# parsedQuery.prolog.prefixBindings)).items():
# targetGraph.bind(pref,nsUri)
# # rt=targetGraph.query('',parsedQuery=parsedQuery)
# print(query)
# actualSolns=[ImmutableDict([(k,v)
# for k,v in d.items()])
# for d in parseResults(
# targetGraph.query(query).serialize(
# format='xml'))]
# expectedSolns=[ImmutableDict([(k,v)
# for k,v in d.items()])
# for d in parseResults(urlopen(result).read())]
# actualSolns.sort(key=lambda d:hash(d))
# expectedSolns.sort(key=lambda d:hash(d))
# self.failUnless(set(actualSolns) == set(expectedSolns),
# "Answers don't match %s v.s. %s"%(actualSolns,
# expectedSolns)
# )
# if debug:
# for network,goal in topDownStore.queryNetworks:
# pprint(goal)
# network.reportConflictSet(True)
# return test
# if __name__ == '__main__':
# from optparse import OptionParser
# op = OptionParser('usage: %prog [options]')
# op.add_option('--profile',
# action='store_true',
# default=False,
# help = 'Whether or not to run a profile')
# op.add_option('--singleTest',
# help = 'The short name of the test to run')
# op.add_option('--debug','-v',
# action='store_true',
# default=False,
# help = 'Run the test in verbose mode')
# (options, facts) = op.parse_args()
# for test, name, queryFile, rdfDoc, regime, result in GetTests():
# if test in SKIP or options.singleTest is not None and options.singleTest != test:
# if test in SKIP:
# print "\tSkipping (%s)"%test,SKIP[test]#>>sys.stderr,SKIP[test],
# elif regime in SUPPORTED_ENTAILMENT:
# test_name = 'test_%s' % test
# test = test_generator(test, name, queryFile, rdfDoc, regime, result, options.debug)
# setattr(TestSequence, test_name, test)
# if options.profile:
# from hotshot import Profile, stats
# p = Profile('fuxi.profile')
# p.runcall(unittest.TextTestRunner(verbosity=5).run,
# unittest.makeSuite(TestSequence))
# p.close()
# s = stats.load('fuxi.profile')
# s.strip_dirs()
# s.sort_stats('time','cumulative','pcalls')
# s.print_stats(.1)
# s.print_callers(.05)
# s.print_callees(.05)
# else:
# unittest.TextTestRunner(verbosity=5).run(
# unittest.makeSuite(TestSequence)
# )
| 38.474654 | 102 | 0.543538 | 838 | 8,349 | 5.369928 | 0.318616 | 0.010889 | 0.014 | 0.018667 | 0.160667 | 0.144444 | 0.087111 | 0.087111 | 0.040889 | 0.040889 | 0 | 0.012031 | 0.323033 | 8,349 | 216 | 103 | 38.652778 | 0.784147 | 0.948856 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
246d97b601c5bd5a2c4ed52accec7c24dfdc27f3 | 1,265 | py | Python | 3er Elemento/insertar_multiples_registros.py | Antonio985/SeminarioDeProgramacion | a1bdd7fa6fa202a0a7a2e9a0f72def24d0f1c465 | [
"Unlicense"
] | null | null | null | 3er Elemento/insertar_multiples_registros.py | Antonio985/SeminarioDeProgramacion | a1bdd7fa6fa202a0a7a2e9a0f72def24d0f1c465 | [
"Unlicense"
] | null | null | null | 3er Elemento/insertar_multiples_registros.py | Antonio985/SeminarioDeProgramacion | a1bdd7fa6fa202a0a7a2e9a0f72def24d0f1c465 | [
"Unlicense"
] | null | null | null |
# Import the psycopg2 PostgreSQL driver module
import psycopg2
# Connect to the database
conexion = psycopg2.connect(user='postgres',
password='admin',
host='127.0.0.1',
port='5432',
database='testg02_db')
# Create a cursor with the cursor() method
cursor = conexion.cursor()
# Build the SQL statement
sql = 'INSERT INTO persona(nombre, apellido, mail) VALUES(%s,%s,%s)'
# Create a variable holding the tuples of values
valores = (('guillermo', 'MARTINEZ', 'eduardo.martinez@ues.mx'),
('joaquin', 'TORRES', 'enrrique.torre@ues.mx'),
('Maria', 'MONTAÑO', 'jesus.montalo@ues.mx'),
('Pedro', 'MURRIETA', 'alonso.murrieta@ues.mx'),
('Marco', 'NAVARRO', 'luis.navarro@ues.mx') )
# invocar la variable utilizando el metodo execute()
cursor.executemany(sql, valores)
# Guardaremos la informacion en la base de datos
# Utilizaremos el objeto de de conexio, invocando el metodo commit()
conexion.commit()
# Recuperaremos los registros que se han sido insertados
registros_insertados = cursor.rowcount
print(f'Registro insertado: {registros_insertados}')
# cerrar la conexion y cursor
cursor.close()
conexion.close()
| 34.189189 | 68 | 0.651383 | 150 | 1,265 | 5.473333 | 0.626667 | 0.030451 | 0.026797 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01433 | 0.227668 | 1,265 | 36 | 69 | 35.138889 | 0.825998 | 0.334387 | 0 | 0 | 0 | 0 | 0.373045 | 0.105897 | 0 | 0 | 0 | 0.027778 | 0 | 1 | 0 | false | 0.052632 | 0.052632 | 0 | 0.052632 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
246e0a5f8b17a9b0843c7beecad945fa0daad6a6 | 307 | py | Python | 03.Complete Python Developer - Zero to Mastery - AN/05.Advanced Python Decorators/decorators3.py | ptyadana/python-dojo | 98c7234b84f0afea99a091c7198342d66bbdff5b | [
"MIT"
] | 3 | 2020-06-01T04:17:18.000Z | 2020-12-18T03:05:55.000Z | 03.Complete Python Developer - Zero to Mastery - AN/05.Advanced Python Decorators/decorators3.py | ptyadana/python-dojo | 98c7234b84f0afea99a091c7198342d66bbdff5b | [
"MIT"
] | 1 | 2020-04-25T08:01:59.000Z | 2020-04-25T08:01:59.000Z | 03.Complete Python Developer - Zero to Mastery - AN/05.Advanced Python Decorators/decorators3.py | ptyadana/python-dojo | 98c7234b84f0afea99a091c7198342d66bbdff5b | [
"MIT"
] | 7 | 2020-04-26T10:02:36.000Z | 2021-06-08T05:12:46.000Z |
# Decorator Pattern
def my_decorator(func):
    def wrap_func(*args, **kwargs):
        print("**********")
        result = func(*args, **kwargs)
        print("**********")
        return result  # pass the wrapped function's return value through
    return wrap_func
@my_decorator
def hello(greeting, emoji, withLove="your love"):
    print(greeting, emoji, withLove)
hello('yo yo', '<3') | 23.615385 | 48 | 0.589577 | 36 | 307 | 4.916667 | 0.5 | 0.124294 | 0.158192 | 0.214689 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004049 | 0.19544 | 307 | 13 | 49 | 23.615385 | 0.712551 | 0.055375 | 0 | 0.2 | 0 | 0 | 0.124138 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0 | 0 | 0.4 | 0.3 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
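One refinement worth knowing that the file above does not show: a plain wrapper replaces the decorated function's name and docstring with the wrapper's own, and `functools.wraps` preserves them. A minimal sketch with hypothetical names `banner` and `greet`:

```python
from functools import wraps

def banner(func):
    @wraps(func)  # copy __name__, __doc__, etc. onto the wrapper
    def wrap_func(*args, **kwargs):
        print("**********")
        result = func(*args, **kwargs)
        print("**********")
        return result
    return wrap_func

@banner
def greet(name):
    """Return a simple greeting."""
    return f"hello {name}"

print(greet("world"))
print(greet.__name__)  # 'greet', not 'wrap_func', thanks to wraps
```

Without `@wraps(func)`, `greet.__name__` would report `'wrap_func'`, which confuses debugging and introspection.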
247082659fc75104497197168be41753057b14d3 | 88 | py | Python | tester.py | sritzow/sritzow.github.io | 10eb5d30ed425020154dbc9aeaecf2bfbe911b87 | [
"MIT"
] | null | null | null | tester.py | sritzow/sritzow.github.io | 10eb5d30ed425020154dbc9aeaecf2bfbe911b87 | [
"MIT"
] | null | null | null | tester.py | sritzow/sritzow.github.io | 10eb5d30ed425020154dbc9aeaecf2bfbe911b87 | [
"MIT"
] | null | null | null |
import pymongo
conn = pymongo.MongoClient()
db = conn.tweets
# PyMongo collections have no dropDatabase(); drop the database via the client
conn.drop_database('tweets')
| 12.571429 | 28 | 0.761364 | 11 | 88 | 6.090909 | 0.636364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 88 | 6 | 29 | 14.666667 | 0.87013 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
247312d285b054cd39227aef99a3e1dffd566287 | 522 | py | Python | src/simfoni/apps/revenue/models/revenue_group.py | django-stars/simfoni-test | eaca4adc8177505e7c53e708456fd0dbb6be0b71 | [
"MIT"
] | null | null | null | src/simfoni/apps/revenue/models/revenue_group.py | django-stars/simfoni-test | eaca4adc8177505e7c53e708456fd0dbb6be0b71 | [
"MIT"
] | null | null | null | src/simfoni/apps/revenue/models/revenue_group.py | django-stars/simfoni-test | eaca4adc8177505e7c53e708456fd0dbb6be0b71 | [
"MIT"
] | 4 | 2018-04-26T17:43:24.000Z | 2018-05-10T14:11:09.000Z |
from django.core.validators import MinValueValidator
from django.db import models
from django.utils.translation import ugettext_lazy as _
from core.models import AbstractBaseModel
class RevenueGroup(AbstractBaseModel):
name = models.CharField(_('Name'), max_length=255, unique=True)
revenue_from = models.DecimalField(_('From'), max_digits=14, decimal_places=2, validators=[MinValueValidator(0)])
revenue_to = models.DecimalField(_('To'), max_digits=14, decimal_places=2, validators=[MinValueValidator(0)])
| 43.5 | 117 | 0.791188 | 64 | 522 | 6.265625 | 0.5 | 0.074813 | 0.054863 | 0.089776 | 0.264339 | 0.264339 | 0.264339 | 0.264339 | 0.264339 | 0 | 0 | 0.023404 | 0.099617 | 522 | 11 | 118 | 47.454545 | 0.829787 | 0 | 0 | 0 | 0 | 0 | 0.019157 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
2475dede311738c11a698f61d9c18ddd93bd1b1b | 1,972 | py | Python | hgw_frontend/hgw_frontend/__init__.py | crs4/health-gateway | e18d945b593fa5efcebe7ee33f7e8991bbe1803d | [
"MIT"
] | 5 | 2018-05-16T22:58:01.000Z | 2020-01-14T11:12:17.000Z | hgw_frontend/hgw_frontend/__init__.py | PhilanthroLab/health-gateway | e18d945b593fa5efcebe7ee33f7e8991bbe1803d | [
"MIT"
] | 10 | 2018-04-13T15:56:49.000Z | 2019-12-05T08:57:47.000Z | hgw_frontend/hgw_frontend/__init__.py | PhilanthroLab/health-gateway | e18d945b593fa5efcebe7ee33f7e8991bbe1803d | [
"MIT"
] | 6 | 2019-10-02T08:39:12.000Z | 2020-06-23T00:18:03.000Z |
# Copyright (c) 2017-2018 CRS4
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in
# the Software without restriction, including without limitation the rights to use,
# copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all copies or
# substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE
# AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
CONFIRM_ACTIONS = (
'add',
'delete'
)
ERRORS_MESSAGE = {
'MISSING_PARAM': 'Missing parameters',
'UNKNOWN_ACTION': 'Unknown action',
'INVALID_CONFIRMATION_CODE': 'Confirmation code not valid',
'INVALID_FR_STATUS': 'Invalid flow request status',
'EXPIRED_CONFIRMATION_ID': 'Confirmation code expired',
'INVALID_CONSENT_STATUS': 'Invalid consent status',
'UNKNOWN_CONSENT': 'Unknown consent',
'INVALID_DATA': 'Invalid parameters',
'MISSING_PERSON_ID': 'Missing person id',
'INTERNAL_GATEWAY_ERROR': 'internal_health_gateway_error',
'INVALID_CONSENT_CLIENT': 'invalid_consent_client',
'CONSENT_CONNECTION_ERROR': 'consent_connection_error',
'INVALID_BACKEND_CLIENT': 'invalid_backend_client',
'BACKEND_CONNECTION_ERROR': 'backend_connection_error',
'ALL_CONSENTS_ALREADY_CREATED': 'all_required_consents_already_created'
}
| 48.097561 | 104 | 0.766227 | 262 | 1,972 | 5.599237 | 0.484733 | 0.059986 | 0.017723 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005412 | 0.156694 | 1,972 | 40 | 105 | 49.3 | 0.876729 | 0.531947 | 0 | 0 | 0 | 0 | 0.717439 | 0.408389 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
248c70fc15f6f7e34634c4bd3b90f8b9c838674d | 889 | py | Python | src/models/user.py | solnsumei/fastapi-template | a25a81f24ebe198ea2a49657da83b534260afdc7 | ["MIT"] | 3 | 2021-04-19T06:46:48.000Z | 2021-09-16T15:00:05.000Z
from passlib.context import CryptContext
from src.models.base.basemodel import ModelWithStatus, fields

pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")


class User(ModelWithStatus):
    email = fields.CharField(max_length=50, unique=True)
    password = fields.CharField(max_length=250)
    is_admin = fields.BooleanField(default=False)
    role = fields.ForeignKeyField('models.Role', related_name='users', null=True)

    @classmethod
    async def find_by_email(cls, email):
        return await cls.filter(email=email).first()

    @staticmethod
    def generate_hash(password: str):
        return pwd_context.hash(password)

    @staticmethod
    def verify_hash(password: str, hashed_password: str):
        return pwd_context.verify(password, hashed_password)

    class Meta:
        table = "users"

    class PydanticMeta:
        exclude = ['password']
249033ea477e66aafa5d9a09c2840e1dbce46b7f | 224 | py | Python | gbe/views/view_summer_act_view.py | bethlakshmi/gbe-divio-djangocms-python2.7 | 6e9b2c894162524bbbaaf73dcbe927988707231d | ["Apache-2.0"] | 1 | 2021-03-14T11:56:47.000Z | 2021-03-14T11:56:47.000Z
from gbe.forms import (
    SummerActForm,
)
from gbe.views import ViewActView


class ViewSummerActView(ViewActView):
    object_form_type = SummerActForm
    bid_prefix = "The Summer Act"
    edit_name = "summeract_edit"
24908e5bcbccbb113c46014441ec08a078260d01 | 101 | py | Python | 001146StepikPyBegin/Stepik001146PyBeginсh07p03st11С07__20200421.py | SafonovMikhail/python_000577 | 739f764e80f1ca354386f00b8e9db1df8c96531d | ["Apache-2.0"] | null | null | null
n = int(input())
sum1 = 0
for i in range(1, n + 1):
    if n % i == 0:
        sum1 += i
print(sum1)
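The loop above sums the divisors of `n`. Wrapped in a function so it can be reused and checked against a known case (`divisor_sum` is an illustrative name, not from the original exercise file):

```python
def divisor_sum(n: int) -> int:
    # Sum every i in 1..n that divides n with no remainder.
    total = 0
    for i in range(1, n + 1):
        if n % i == 0:
            total += i
    return total

print(divisor_sum(6))  # 12, since 1 + 2 + 3 + 6 = 12
```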
249527376fc3fc9354ce420e6f4ee3bce7d7d193 | 777 | py | Python | main.py | luizcartolano2/reinforcement-learning-dqn-cart-pole | caca97554273afbd213c9e40c9ff2d68342c34e8 | ["MIT"] | null | null | null
import os
# first we need to download the libs
try:
    os.system('pip3 install -r requirements.txt')
except:
    print("Check your Python3 and Pip installations.")

# import libs
import gym
import math
import random
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# from collections import namedtuple
from itertools import count
from PIL import Image

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.transforms as T

# create a gym env
env = gym.make('CartPole-v0').unwrapped

# set up matplotlib
is_ipython = 'inline' in matplotlib.get_backend()
if is_ipython:
    from IPython import display

plt.ion()

# check for a gpu
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
249852694970ead7821411859a291e0c0fde0a31 | 173 | py | Python | src/typeDefs/wbes/schDataRow.py | nagasudhirpulla/wr_rtm_dem_corr_dashboard | d2b9bdd5c76fcec01537af36f86e1c06524a2169 | ["MIT"] | null | null | null
import datetime as dt
from typing import TypedDict


class ISchDataRow(TypedDict):
    utilName: str
    schDate: dt.datetime
    block: int
    schType: str
    val: float
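A `TypedDict` like this is instantiated as a plain dict; the checker only verifies the keys and value types. A self-contained sketch (the class is repeated here, assuming Python 3.8+ where `TypedDict` lives in `typing`, and the sample values are made up):

```python
import datetime as dt
from typing import TypedDict

class ISchDataRow(TypedDict):
    utilName: str
    schDate: dt.datetime
    block: int
    schType: str
    val: float

# At runtime this is just a dict; the annotation is for static checkers.
row: ISchDataRow = {
    'utilName': 'demo-util',             # made-up sample values
    'schDate': dt.datetime(2021, 1, 1),
    'block': 1,
    'schType': 'sch',
    'val': 0.0,
}
print(row['block'])  # 1
```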
249d4d9ba8d9ee7a62b73cf1f1164631e3a4f0ef | 614 | py | Python | migrations/versions/0c2841d4cfcd_.py | perna/podigger | 5aa1a507ec6d5d96b41cb43eceb820ebd8f21fa1 | ["MIT"] | 5 | 2016-08-02T22:37:57.000Z | 2020-12-02T12:38:59.000Z
"""empty message
Revision ID: 0c2841d4cfcd
Revises: 34d97cbc759e
Create Date: 2016-07-28 20:19:00.866339

"""

# revision identifiers, used by Alembic.
revision = '0c2841d4cfcd'
down_revision = '34d97cbc759e'

from alembic import op
import sqlalchemy as sa


def upgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.add_column('user', sa.Column('password', sa.String(length=255), nullable=True))
    ### end Alembic commands ###


def downgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.drop_column('user', 'password')
    ### end Alembic commands ###
249efaee0334887980eabc7717092860ef177340 | 236 | py | Python | Round 1/5.functionvariables/finishedLesson.py | beetlesoup/udemy-python-scripting-a-car | ae41491161821a8f4e63fc86c368a71bb3d6cc15 | ["Unlicense"] | null | null | null
def MoveManyStepsForward(numberOfSteps):
    for everySingleNumberInTheRange in range(numberOfSteps):
        env.step(0)


async def main():
    MoveManyStepsForward(50)
    await sleep()
    MoveManyStepsForward(150)
24a412413e36803f77e81c6d8b52524ead3e34b3 | 142 | py | Python | app/models/Base.py | krisnantobi/flask-python | d0da39d7c7f6e6d3b35def6e61c658ef279811e9 | ["MIT"] | null | null | null
from mongoengine.document import Document
class Base(Document):
    meta = {
        'allow_inheritance': True,
        'abstract': True
    }
24a55eced86573e218221a3b3467c66df61d5a29 | 461 | py | Python | django_workflow_system/api/tests/factories/workflows/workflow_image.py | eikonomega/django-workflow-system | dc0e8807263266713d3d7fa46e240e8d72db28d1 | ["MIT"] | 2 | 2022-01-28T12:35:42.000Z | 2022-03-23T16:06:05.000Z
import django_workflow_system.models as models
from factory.django import DjangoModelFactory


class WorkflowImageTypeFactory(DjangoModelFactory):
    class Meta:
        model = models.WorkflowImageType
        django_get_or_create = ["type"]


class WorkflowImageFactory(DjangoModelFactory):
    class Meta:
        model = models.WorkflowImage
        django_get_or_create = ["image", "type"]


__all__ = ["WorkflowImageTypeFactory", "WorkflowImageFactory"]
24a61d233f3e2d9fe257f8cc2aa8bb5e7883863c | 4,095 | py | Python | scratch/check_bins.py | AdriJD/cmb_sst_ksw | 635f0627a3c8a36c743cdf8955e3671352ab6d90 | ["MIT"] | null | null | null
'''
Test binning
'''
import matplotlib
matplotlib.use('agg')
import matplotlib.pyplot as plt
import sys
import os
import numpy as np
from scipy.special import spherical_jn

sys.path.insert(0, './../')
from sst import Fisher

opj = os.path.join


def bin_test(parity, bins=None, lmin=2, lmax=23):
    '''
    Init bins, run over them and test them.
    '''
    base_dir = '/mn/stornext/d8/ITA/spider/adri/analysis/20181025_sst/test/precomputed'
    F = Fisher(base_dir)
    F.init_bins(lmin=lmin, lmax=lmax, parity=parity, bins=bins)

    bins = F.bins['bins']
    pint = 1 if parity == 'odd' else 0

    if F.mpi_rank == 0:
        for i1, b1 in enumerate(bins):
            for i2, b2 in enumerate(bins[i1:]):
                i2 += i1
                for i3, b3 in enumerate(bins[i2:]):
                    i3 += i2
                    ell1, ell2, ell3 = F.bins['first_pass'][i1, i2, i3]
                    num = F.bins['num_pass'][i1, i2, i3]
                    try:
                        if num == 0:
                            assert (ell1, ell2, ell3) == (0, 0, 0)
                        else:
                            assert (ell1, ell2, ell3) != (0, 0, 0)
                            if parity is not None:
                                assert (ell1 + ell2 + ell3) % 2 == pint
                            # check if first pass ells fit in bins
                            assert ell1 >= b1
                            assert ell2 >= b2
                            assert ell3 >= b3
                            try:
                                assert ell1 < bins[i1 + 1]
                            except IndexError:
                                if (i1 + 1) >= bins.size:
                                    pass
                                else:
                                    raise
                            try:
                                assert ell2 < bins[i2 + 1]
                            except IndexError:
                                if (i2 + 1) >= bins.size:
                                    pass
                                else:
                                    raise
                            try:
                                assert ell3 < bins[i3 + 1]
                            except IndexError:
                                if (i3 + 1) >= bins.size:
                                    pass
                                else:
                                    raise
                            # Check if first pass matches triangle cond.
                            assert abs(ell1 - ell2) <= ell3
                            assert ell3 <= (ell1 + ell2)
                    except:
                        print('error in bin:')
                        print('bin_idx: ({},{},{}), bin: ({},{},{}), no. gd_tuples: {}, '
                              'u_ell: ({},{},{})'.format(i1, i2, i3, b1, b2, b3,
                                                         F.bins['num_pass'][i1, i2, i3],
                                                         ell1, ell2, ell3))
                        raise

        print('bins: ', F.bins['bins'])
        print('lmin: ', F.bins['lmin'])
        print('lmax: ', F.bins['lmax'])
        print('sum num_pass: ', np.sum(F.bins['num_pass']))
        print('unique_ells: ', F.bins['unique_ells'])
        print('num bins: ', F.bins['bins'].size)
        print('shape num_pass: ', F.bins['num_pass'].shape)
        print('shape first_pass: ', F.bins['first_pass'].shape, '\n')


if __name__ == '__main__':

    lmin = 2
    lmax = 40

    for parity in ['even', 'odd', None]:
        print('parity: {}, lmin: {}, lmax: {}'.format(parity, lmin, lmax))
        print('default bins')
        bin_test(lmin=lmin, lmax=lmax, parity=parity, bins=None)
        print('bins up to lmax')
        bin_test(lmin=None, lmax=lmax, parity=parity, bins=[2, 3, 4, 10, lmax])
        print('bins over lmax')
        bin_test(lmin=None, lmax=lmax, parity=parity, bins=[2, 3, 4, 10, lmax + 12])
        # if lmin and lmax of different binning schemes match, they
        # should have matching sum(num_pass)
24b1f9ec5f919bdb1ef1f683673501508c9b17c1 | 553 | py | Python | benchmarks/query_benchmarks/query_dates/benchmark.py | deepakdinesh1123/actions | 859455c8582f6e3fc4d65b7266163f4276d04127 | ["MIT"] | null | null | null
from ...utils import bench_setup
from .models import Book


class QueryDates:
    def setup(self):
        bench_setup(migrate=True)

    def time_query_dates(self):
        list(Book.objects.dates("created_date", "year", "ASC"))
        list(Book.objects.dates("created_date", "year", "DESC"))
        list(Book.objects.dates("created_date", "month", "ASC"))
        list(Book.objects.dates("created_date", "month", "DESC"))
        list(Book.objects.dates("created_date", "day", "ASC"))
        list(Book.objects.dates("created_date", "day", "DESC"))
24b94c38a11e9ef06d09ee6865284e5b1344c0b6 | 549 | py | Python | rldb/db/repo__openai_baselines_cbd21ef/algo__acer/entries.py | seungjaeryanlee/sotarl | 8c471c4666d6210c68f3cb468e439a2b168c785d | ["MIT"] | 45 | 2019-05-13T17:39:33.000Z | 2022-03-07T23:44:13.000Z
entries = [
    {
        'env-title': 'atari-enduro',
        'score': 0.0,
    },
    {
        'env-title': 'atari-space-invaders',
        'score': 656.91,
    },
    {
        'env-title': 'atari-qbert',
        'score': 6433.38,
    },
    {
        'env-title': 'atari-seaquest',
        'score': 1065.98,
    },
    {
        'env-title': 'atari-pong',
        'score': 3.11,
    },
    {
        'env-title': 'atari-beam-rider',
        'score': 1959.22,
    },
    {
        'env-title': 'atari-breakout',
        'score': 82.94,
    },
]
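A record list like this is usually re-indexed by environment name before lookup. A small sketch (with a truncated copy of the list, since the dict-comprehension helper is not part of the original file):

```python
# Truncated copy of the entry list above, for a standalone example.
entries = [
    {'env-title': 'atari-pong', 'score': 3.11},
    {'env-title': 'atari-breakout', 'score': 82.94},
]

# Index the records by environment name for O(1) score lookup.
scores = {e['env-title']: e['score'] for e in entries}
print(scores['atari-breakout'])  # 82.94
```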
24d41ebe163f7fd9334bbc86f8d90c7387ddcfee | 447 | py | Python | setup.py | mathieumast/cellspatialite | f429e3958b5583b4a8c448fd07a61ab29aa7cb65 | ["MIT"] | null | null | null
#!/usr/bin/env python
from setuptools import setup, find_packages

setup(
    name='cellspatialite',
    version='1.0.0',
    packages=['cellspatialite', 'cellspatialite.test'],
    author='Mathieu',
    description='cellspatialite',
    install_requires=['pysqlite', 'pandas', 'docopt'],
    license='MIT',
    entry_points={
        'console_scripts': [
            'cellspatialite = cellspatialite.cellspatialite:main',
        ],
    },
)
24da0999df55443bb1d82163487afe3007901943 | 485 | py | Python | Chapter23.ModuleCodingBasics/use_module2.py | mindnhand/Learning-Python-5th | 3dc1b28d6e048d512bf851de6c7f6445edfe7b84 | ["MIT"] | null | null | null
#!/usr/bin/env python3
#encoding=utf-8
#------------------------------------------------------
# Usage: python3 use_module2.py
# Description: module basic
#------------------------------------------------------
import module2
print(module2.sys)
print(module2.name)
print(module2.klass)
print('The dict of module2 is: ')
print(list(module2.__dict__.keys()))
print('The dict of module2 without __xxx__ is: ')
print([x for x in module2.__dict__.keys() if not x.startswith('__')])
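The same `__dict__` filtering works against any module object; here it is demonstrated with the stdlib `json` module instead of the chapter's local `module2`, so the snippet runs anywhere:

```python
import json

# Names the module defines, minus the __dunder__ machinery.
public = [name for name in json.__dict__ if not name.startswith('__')]
print('dumps' in public)     # True
print('__name__' in public)  # False
```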
24e402299b9f6f621706cdbd330f1ca54378a165 | 196 | py | Python | src/directory_model.py | cmtools/fastapi-cli | d926317d5032b970187f77ff76bdc4ba4780a4d7 | ["MIT"] | null | null | null
import typing as t
from file_model import File


class Dir:
    path: str
    files: t.List[File] = []

    def __init__(self, path, files=[]):
        self.path = path
        self.files = files
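Worth noting: `files=[]` as a default argument means every call that omits `files` shares one list object, since the default is created once at definition time. A hypothetical variant (`Node` is an illustrative name, not from this repo) showing the usual `None`-sentinel fix:

```python
class Node:
    def __init__(self, path, files=None):
        # Create a fresh list per instance instead of sharing one
        # default list across every call that omits `files`.
        self.path = path
        self.files = files if files is not None else []

a = Node('/a')
b = Node('/b')
a.files.append('x.txt')
print(b.files)  # [] -- b's list is untouched
```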
24e5758008f608aff796ffcbe193129bea1fb776 | 373 | py | Python | common/models.py | AsfanUlla/pi-backend | ee1fed18bff9881e341c1022584f9323d2a7561d | ["Apache-2.0"] | null | null | null
from pydantic import BaseModel, Field, EmailStr
from typing import Optional, List, Dict, Any


class SchemalessResponse(BaseModel):
    data: Dict = {}
    status_code: int = 200
    message: Optional[str] = "Request Processed"


class EmailSchema(BaseModel):
    sub: str
    email_to: List[EmailStr]
    body: Dict[str, Any]
    template_name: str = 'default_email.html'
24f79fbbf13f5128ab64a47960256c2400746ede | 1,662 | py | Python | test/test_mod_import.py | kpagacz/NMDownloader | 670652949b538319ee945071e030119e630ec6a1 | ["MIT"] | null | null | null
import unittest
import utils.mod_import as mod_import
import codecov


class ModListImporterTestCase(unittest.TestCase):
    def setUp(self):
        self.importer = mod_import.ModListImporter()
        self.csv_header = "modids,names\n"
        self.csv_values = ["1,abc\n"
                           "2,def\n"]

    def testModListDefaultValue(self):
        self.assertEqual(self.importer.mod_list,
                         [],
                         msg="Default value of mod_list is not []")

    def testModListSetter(self):
        self.importer.mod_list = [1, 2, 3]
        self.assertEqual(self.importer._mod_list,
                         [1, 2, 3],
                         msg="Setter method did not assign [1, 2, 3] to mod_list")

    def testModListGetter(self):
        self.importer.mod_list = [1, 2]
        self.assertEqual([1, 2],
                         self.importer.mod_list,
                         msg="mod_list getter failed to output correctly")

    def testImportFromCSVWithHeader(self):
        cls = self.importer.from_csv("test/testModImporterFromCSV.csv")
        self.assertIsInstance(cls, mod_import.ModListImporter)
        self.assertEqual([1, 2],
                         cls.mod_list,
                         msg="ModImporter succesfully created, but mod_list is incorrect.")

    def testImportFromExcel(self):
        cls = self.importer.from_excel("test/testModImporterFromExcel.xlsx")
        self.assertIsInstance(cls, mod_import.ModListImporter)
        self.assertEqual([1, 2],
                         cls.mod_list,
                         msg="ModImporter succesfully created, but mod_list is incorrect.")
702548bde2333d9384784f1f3e19f50c90d86bf5 | 598 | py | Python | src/web/modules/dataservice/control.py | unkyulee/elastic-cms | 3ccf4476c3523d4fefc0d8d9dee0196815b81489 | ["MIT"] | 2 | 2017-04-30T07:29:23.000Z | 2017-04-30T07:36:27.000Z
def get(p):
    if not p['operation']:
        from .controllers import default
        return default.get(p)
    elif p["operation"] == "json":
        from .controllers import json
        return json.get(p)
    elif p["operation"] == "create":
        from .controllers import create
        return create.get(p)
    elif p["operation"] == "delete":
        from .controllers import delete
        return delete.get(p)
    elif p["operation"] == "edit":
        from .controllers import edit
        return edit.get(p)

    return "dataservice/control"


def authorize(p):
    return True
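The if/elif chain in `get` can also be written as a dispatch table keyed on the operation string. A behavior-approximating sketch with stand-in handlers (names are illustrative, and here unknown operations fall back to the default rather than to a separate template string):

```python
def make_get(handlers, default):
    # Build a get(p)-style dispatcher from a {operation: handler} table.
    def get(p):
        # Missing/empty operations and unknown operations both hit `default`.
        return handlers.get(p.get('operation'), default)(p)
    return get

get = make_get(
    {'json': lambda p: 'json', 'edit': lambda p: 'edit'},
    default=lambda p: 'default',
)
print(get({'operation': 'edit'}))  # edit
print(get({'operation': ''}))      # default
```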
702d62784e667462f72d3df18eda609962caca05 | 2,065 | py | Python | loans/urls.py | FAenX/micro-finance | 27bfd3f716365ecac22d0139f8f5bcda974498e3 | ["MIT"] | null | null | null
from django.conf.urls import url
from loans.views import *


urlpatterns = [
    # Client Loans (apply, view, list)
    url(r'^client/(?P<client_id>\d+)/loan/apply/$', ClientLoanApplicationView.as_view(), name='clientloanapplication'),
    url(r'^client/(?P<client_id>\d+)/loans/list/$', ClientLoansListView.as_view(), name="clientloanaccountslist"),
    url(r'^client/loan/(?P<pk>\d+)/view/$', ClientLoanAccount.as_view(), name='clientloanaccount'),
    url(r'^client/(?P<client_id>\d+)/loan/(?P<loanaccount_id>\d+)/deposits/list/$', ClientLoanDepositsListView.as_view(), name='listofclientloandeposits'),

    # Client Loans - Ledger (view, CSV, Excel, PDF downloads)
    url(r'^client/(?P<client_id>\d+)/loan/(?P<loanaccount_id>\d+)/ledger/$', ClientLoanLedgerView.as_view(), name="clientloanledgeraccount"),
    url(r'^client/(?P<client_id>\d+)/loan/(?P<loanaccount_id>\d+)/ledger/download/csv/$', ClientLedgerCSVDownload.as_view(), name="clientledgercsvdownload"),
    url(r'^client/(?P<client_id>\d+)/loan/(?P<loanaccount_id>\d+)/ledger/download/excel/$', ClientLedgerExcelDownload.as_view(), name="clientledgerexceldownload"),
    url(r'^client/(?P<client_id>\d+)/loan/(?P<loanaccount_id>\d+)/ledger/download/pdf/$', ClientLedgerPDFDownload.as_view(), name="clientledgerpdfdownload"),

    # Group Loans (apply, view, list)
    url(r'^group/(?P<group_id>\d+)/loan/apply/$', GroupLoanApplicationView.as_view(), name='grouploanapplication'),
    url(r'^group/(?P<group_id>\d+)/loans/list/$', GroupLoansListView.as_view(), name="grouploanaccountslist"),
    url(r'^group/loan/(?P<pk>\d+)/view/$', GroupLoanAccount.as_view(), name='grouploanaccount'),
    url(r'^group/(?P<group_id>\d+)/loan/(?P<loanaccount_id>\d+)/deposits/list/$', GroupLoanDepositsListView.as_view(), name='viewgrouploandeposits'),

    # Change Loan Account Status
    url(r'^loan/(?P<pk>\d+)/change-status/$', ChangeLoanAccountStatus.as_view(), name='change_loan_account_status'),

    # Issue Loan
    url(r'^loan/(?P<loanaccount_id>\d+)/issue/$', IssueLoan.as_view(), name='issueloan'),
]
702f40b28d19c269453798fcbb0e966ac84fe386 | 492 | py | Python | zinnia/views/channels.py | zapier/django-blog-zinnia | 2631cbe05fa7b95aecd172fe34b7081ca4beda47 | ["BSD-3-Clause"] | null | null | null
"""Views for Zinnia channels"""
from django.views.generic.list import ListView

from zinnia.models.entry import Entry
from zinnia.settings import PAGINATION


class EntryChannel(ListView):
    """View for displaying a custom selection of entries
    based on a search pattern, useful for SEO/SMO pages"""
    query = ''
    paginate_by = PAGINATION

    def get_queryset(self):
        """Override the get_queryset method to do the search"""
        return Entry.published.search(self.query)
7079d592d3692099180dbebc7dd4c79d080683a6 | 67,931 | py | Python | pysnmp-with-texts/CISCO-IMAGE-UPGRADE-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | ["Apache-2.0"] | 8 | 2019-05-09T17:04:00.000Z | 2021-06-09T06:50:51.000Z
#
# PySNMP MIB module CISCO-IMAGE-UPGRADE-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/CISCO-IMAGE-UPGRADE-MIB
# Produced by pysmi-0.3.4 at Wed May 1 12:01:48 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
Integer, ObjectIdentifier, OctetString = mibBuilder.importSymbols("ASN1", "Integer", "ObjectIdentifier", "OctetString")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ValueRangeConstraint, ConstraintsIntersection, ValueSizeConstraint, SingleValueConstraint, ConstraintsUnion = mibBuilder.importSymbols("ASN1-REFINEMENT", "ValueRangeConstraint", "ConstraintsIntersection", "ValueSizeConstraint", "SingleValueConstraint", "ConstraintsUnion")
ciscoMgmt, = mibBuilder.importSymbols("CISCO-SMI", "ciscoMgmt")
EntPhysicalIndexOrZero, = mibBuilder.importSymbols("CISCO-TC", "EntPhysicalIndexOrZero")
entPhysicalIndex, = mibBuilder.importSymbols("ENTITY-MIB", "entPhysicalIndex")
SnmpAdminString, = mibBuilder.importSymbols("SNMP-FRAMEWORK-MIB", "SnmpAdminString")
ModuleCompliance, ObjectGroup, NotificationGroup = mibBuilder.importSymbols("SNMPv2-CONF", "ModuleCompliance", "ObjectGroup", "NotificationGroup")
MibScalar, MibTable, MibTableRow, MibTableColumn, TimeTicks, Unsigned32, Integer32, iso, Counter64, ModuleIdentity, ObjectIdentity, Gauge32, IpAddress, MibIdentifier, Counter32, Bits, NotificationType = mibBuilder.importSymbols("SNMPv2-SMI", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "TimeTicks", "Unsigned32", "Integer32", "iso", "Counter64", "ModuleIdentity", "ObjectIdentity", "Gauge32", "IpAddress", "MibIdentifier", "Counter32", "Bits", "NotificationType")
RowStatus, DisplayString, TimeStamp, TextualConvention, TruthValue = mibBuilder.importSymbols("SNMPv2-TC", "RowStatus", "DisplayString", "TimeStamp", "TextualConvention", "TruthValue")
ciscoImageUpgradeMIB = ModuleIdentity((1, 3, 6, 1, 4, 1, 9, 9, 360))
ciscoImageUpgradeMIB.setRevisions(('2011-03-28 00:00', '2008-03-18 00:00', '2007-07-18 00:00', '2006-12-21 00:00', '2004-01-20 00:00', '2003-11-04 00:00', '2003-10-28 00:00', '2003-07-11 00:00', '2003-07-08 00:00', '2003-06-01 00:00',))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    if mibBuilder.loadTexts: ciscoImageUpgradeMIB.setRevisionsDescriptions(("Added new group ciuUpgradeOpNewGroup. Added new enum 'systemPreupgradeBegin' to ciuUpgradeOpStatusOperation. Added ciuUpgradeOpLastCommand and ciuUpgradeOpLastStatus to the varbind list of ciuUpgradeOpCompletionNotify. Added new compliance ciuImageUpgradeComplianceRev4 and deprecated ciuImageUpgradeComplianceRev3. Added ciuUpgradeJobStatusNotifyOnCompletion.", "Added new enum 'compactFlashTcamSanity' to ciuUpgradeOpStatusOperation.", 'Added new enums to ciuUpgradeOpStatusOperation.', 'Added new enums to ciuUpgradeOpStatus and ciuUpgradeOpStatusOperation. Added new trap ciuUpgradeJobStatusNotify. Changed type for ciuUpgradeOpStatusModule to EntPhysicalIndexOrZero. Added ciuUpgradeNotificationGroupSup group, deprecated ciuImageUpgradeComplianceRev2 and added ciuImageUpgradeComplianceRev3 ', "Added new enums to ciuUpgradeOpStatus and ciuUpgradeOpStatusOperation. Corrected description for 'configSync' enum defined in ciuUpgradeOpStatusOperation object. ", 'Updated compliance statement. Removed ciuImageLocInputGroup from conditionally mandatory.', 'Added ciuUpgradeMiscInfoTable. Added more enums to ciuUpgradeOpStatusOperation. Added ciuUpgradeMiscInfoGroup, deprecated ciuImageUpgradeComplianceRev1 and added ciuImageUpgradeComplianceRev2.', 'Changed: ciuImageLocInputURI identifier from 2 to 1, ciuImageLocInputEntryStatus identifier from 3 to 2 and ciuImageVariableName from 2 to 1. Added recommendedAction to ciuUpgradeOpStatusOperation.', 'Added ciscoImageUpgradeMisc, added ciuUpgradeMiscAutoCopy under the group ciscoImageUpgradeMisc. Added ciuUpgradeMiscGroup, deprecated ciuImageUpgradeCompliance and added ciuImageUpgradeComplianceReve1.', 'Initial version of this MIB module.',))
if mibBuilder.loadTexts: ciscoImageUpgradeMIB.setLastUpdated('201103280000Z')
if mibBuilder.loadTexts: ciscoImageUpgradeMIB.setOrganization('Cisco Systems Inc.')
if mibBuilder.loadTexts: ciscoImageUpgradeMIB.setContactInfo(' Cisco Systems Customer Service Postal: 170 W Tasman Drive San Jose, CA 95134 USA Tel: +1 800 553 -NETS E-mail: cs-san@cisco.com')
if mibBuilder.loadTexts: ciscoImageUpgradeMIB.setDescription("This mib provides, objects to upgrade images on modules in the system, objects showing the status of the upgrade operation, and objects showing the type of images that could be run in the system. For example the modules could be Controller card, Line card .. etc. The system fills up the ciuImageVariableTable with the type of images the system can support. For performing an upgrade operation a management application must first read this table and use this info in other tables, as explained below. The ciuImageURITable table is also filled by the system and provides the image name presently running for each type of image in the system. The user is allowed to configure a new image name for each image type as listed in ciuImageVariableTable. The system would use this image on the particular module on the next reboot. The management application on deciding to do an upgrade operation must first check if an upgrade operation is already in progress in the system. This is done by reading the ciuUpgradeOpCommand and if it contains 'none', signifies that no other upgrade operation is in progress. Any other value, signifies that upgrade is in progress and a new upgrade operation is not allowed. To start an 'install' operation, first the user must perform a 'check' operation to do the version compatibility for the given set of image files (provided using the ciuImageLocInputTable) against the current system configuration. Only if the result of this operation is 'success' can the user proceed to do an install operation. The tables, ciuVersionCompChkTable, ciuUpgradeImageVersionTable, ciuUpgradeOpStatusTable, provide the result of the 'check' or 'install' operation performed using ciuUpgradeOpCommand. These tables are in addition to objects ciuUpgradeOpStatus, ciuUpgradeOpTimeStarted, ciuUpgradeOpTimeCompleted, ciuUpgradeOpStatusReason. The ciuUpgradeOpStatus object provides the status of the selected upgrade operation. 
An option is available for the user to upgrade only some modules, provided using ciuUpgradeTargetTable. If this table is empty then an upgrade operation would be performed on all the modules in the system.")
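The manager-side workflow described above boils down to composing SET varbinds for ciuUpgradeOpCommand. A small sketch follows; the OID and enum values are taken from the MIB definitions in this module, while the helper name itself is an illustrative assumption (a real manager would send the varbind via an SNMP library):

```python
# Sketch of manager-side request composition for this MIB.
# OID and named values come from the ciuUpgradeOpCommand definition
# ((1,3,6,1,4,1,9,9,360,1,1,4,1) with none=1/done=2/install=3/check=4);
# the helper function is a hypothetical convenience wrapper.

CIU_UPGRADE_OP_COMMAND_OID = '1.3.6.1.4.1.9.9.360.1.1.4.1'

# Named values of ciuUpgradeOpCommand.
OP_COMMANDS = {'none': 1, 'done': 2, 'install': 3, 'check': 4}


def upgrade_command_varbind(command):
    """Return the (oid, integer) pair for a ciuUpgradeOpCommand SET."""
    if command not in OP_COMMANDS:
        raise ValueError('unknown ciuUpgradeOpCommand: %r' % command)
    return (CIU_UPGRADE_OP_COMMAND_OID, OP_COMMANDS[command])


# A 'check' must precede 'install':
print(upgrade_command_varbind('check'))    # ('1.3.6.1.4.1.9.9.360.1.1.4.1', 4)
print(upgrade_command_varbind('install'))  # ('1.3.6.1.4.1.9.9.360.1.1.4.1', 3)
```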
ciscoImageUpgradeMIBNotifs = MibIdentifier((1, 3, 6, 1, 4, 1, 9, 9, 360, 0))
ciscoImageUpgradeMIBObjects = MibIdentifier((1, 3, 6, 1, 4, 1, 9, 9, 360, 1))
ciscoImageUpgradeMIBConform = MibIdentifier((1, 3, 6, 1, 4, 1, 9, 9, 360, 2))
ciscoImageUpgradeConfig = MibIdentifier((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1))
ciscoImageUpgradeOp = MibIdentifier((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 4))
ciscoImageUpgradeMisc = MibIdentifier((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 10))
class CiuImageVariableTypeName(TextualConvention, OctetString):
    description = "The type of image that the system can run. e.g. Let us say that the device has 3 image variable names - 'system', 'kickstart' and 'ilce'. This TC would then be as follows: system kickstart ilce. "
    status = 'current'
    subtypeSpec = OctetString.subtypeSpec + ValueSizeConstraint(1, 32)
ciuTotalImageVariables = MibScalar((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 1), Unsigned32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuTotalImageVariables.setStatus('current')
if mibBuilder.loadTexts: ciuTotalImageVariables.setDescription('Total number of image variables supported in the device at this time.')
ciuImageVariableTable = MibTable((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 2), )
if mibBuilder.loadTexts: ciuImageVariableTable.setStatus('current')
if mibBuilder.loadTexts: ciuImageVariableTable.setDescription('A table listing the image variable types that exist in the device. ')
ciuImageVariableEntry = MibTableRow((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 2, 1), ).setIndexNames((0, "CISCO-IMAGE-UPGRADE-MIB", "ciuImageVariableName"))
if mibBuilder.loadTexts: ciuImageVariableEntry.setStatus('current')
if mibBuilder.loadTexts: ciuImageVariableEntry.setDescription('A ciuImageVariableEntry entry. Each entry provides the image variable type existing in the device. ')
ciuImageVariableName = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 2, 1, 1), CiuImageVariableTypeName()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuImageVariableName.setStatus('current')
if mibBuilder.loadTexts: ciuImageVariableName.setDescription("The type of image that the system can run. The value of this object depends on the underlying agent. e.g. Let us say that the device has 3 image variable names - 'system', 'kickstart' and 'ilce'. This table, then, will list these 3 strings as entries as follows: ciuImageVariableName system kickstart ilce. The user can assign images (using ciuImageURITable) to these variables and the system will use the assigned values to boot. ")
ciuImageURITable = MibTable((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 3), )
if mibBuilder.loadTexts: ciuImageURITable.setStatus('current')
if mibBuilder.loadTexts: ciuImageURITable.setDescription("A table listing the Universal Resource Identifier(URI) of images that are assigned to variables of the ciuImageVariableTable. In the example for ciuImageVariableTable, there are 3 image types. This table will list the names for those image types as follows - entPhysicalIndex ciuImageVariableName ciuImageURI 25 'system' m9200-ek9-mgz.1.0.bin 25 'kickstart' boot-1.0.bin 26 'ilce' linecard-1.0.bin In this example, the 'system' image name is 'm9200-ek9-mgz.1.0.bin', the 'ilce' image name is 'linecard-1.0.bin' and the 'kickstart' image name is 'boot-1.0.bin'. ")
ciuImageURIEntry = MibTableRow((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 3, 1), ).setIndexNames((0, "ENTITY-MIB", "entPhysicalIndex"), (0, "CISCO-IMAGE-UPGRADE-MIB", "ciuImageVariableName"))
if mibBuilder.loadTexts: ciuImageURIEntry.setStatus('current')
if mibBuilder.loadTexts: ciuImageURIEntry.setDescription('A ciuImageURITable entry. Each entry provides the Image URI corresponding to this image variable name, identified by ciuImageVariableName, on this module identified by entPhysicalIndex. Each such module of the type PhysicalClass module(9), has an entry in entPhysicalTable in ENTITY-MIB, where that entry is identified by entPhysicalIndex. Only modules capable of running images, identified by ciuImageVariableName would have an entry in this table. ')
ciuImageURI = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 3, 1, 1), SnmpAdminString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: ciuImageURI.setStatus('current')
if mibBuilder.loadTexts: ciuImageURI.setDescription('This object contains the string value of the image corresponding to ciuImageVariableName on this entity.')
ciuUpgradeOpCommand = MibScalar((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 4, 1), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("none", 1), ("done", 2), ("install", 3), ("check", 4)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: ciuUpgradeOpCommand.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpCommand.setDescription("The command to be executed. Note that it is possible for a system to support only a subset of these commands. If a command is unsupported, it will complete immediately with the 'invalidOperation' error being reported in the ciuUpgradeOpStatus object. The 'check' must be performed first before 'install' command can be executed. If 'install' is performed first the operation would fail. So 'install' will be allowed only if a read of this object returns 'check' and the value of object ciuUpgradeOpStatus is 'success'. Also 'check' will be allowed only if a read of this object returns 'none'. Command Remarks none if this object is read without performing any operation listed above, 'none' would be returned. Also 'none' would be returned for a read operation if a cleanup of the previous upgrade operation is completed either through the issue of 'done' command or the maximum timeout of 5 minutes has elapsed. Setting this object to 'none', agent would return a success without any upgrade operation being performed. done if this object returns any value other than 'none', then setting this to 'done' would do the required cleanup of previous upgrade operation and make the system ready for any new upgrade operation. This is needed because the system maintains the status of the previous upgrade operation for a maximum time of 5 minutes before it does the cleanup. During this period no new upgrade operation is allowed. install for all the physical entities listed in the ciuUpgradeTargetTable perform the required upgrade operation listed in that table. However the upgrade operation for a module would not be done if the current running image and the image to be upgraded given as an input through the ciuImageLocInputTable are the same. check check the version compatibility for the given set of image files against the current system configuration. ")
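The sequencing rules in the description above ('check' only from the idle state, 'install' only after a successful 'check') can be sketched as a small predicate; the function name is an illustrative assumption, not part of the MIB:

```python
# Sketch of the ciuUpgradeOpCommand sequencing rules described above:
# 'check' is only allowed when a read of the command returns 'none',
# and 'install' only when it returns 'check' with status 'success'.
def command_allowed(new_command, current_command, current_status):
    """Decide whether a manager may set ciuUpgradeOpCommand now."""
    if new_command == 'check':
        return current_command == 'none'
    if new_command == 'install':
        return current_command == 'check' and current_status == 'success'
    # Per the description, 'none' is always accepted and 'done' performs
    # cleanup of the previous operation.
    return new_command in ('none', 'done')


print(command_allowed('install', 'check', 'success'))  # True
print(command_allowed('install', 'none', 'none'))      # False
```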
ciuUpgradeOpStatus = MibScalar((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 4, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6, 7, 8, 9, 10))).clone(namedValues=NamedValues(("none", 1), ("invalidOperation", 2), ("failure", 3), ("inProgress", 4), ("success", 5), ("abortInProgress", 6), ("abortSuccess", 7), ("abortFailed", 8), ("successReset", 9), ("fsUpgReset", 10))).clone('none')).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuUpgradeOpStatus.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpStatus.setDescription('The status of the specified operation. none(1) - no operation was performed. invalidOperation(2) - the selected operation is not supported. failure(3) - the selected operation has failed. inProgress(4) - specified operation is active. success(5) - specified operation has completed successfully. abortInProgress(6) - abort in progress. abortSuccess(7) - abort operation successful. abortFailed(8) - abort failed. successReset(9) - specified operation has completed successfully and the system will reset. fsUpgReset(10) - fabric switch upgrade reset.')
ciuUpgradeOpNotifyOnCompletion = MibScalar((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 4, 3), TruthValue().clone('false')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: ciuUpgradeOpNotifyOnCompletion.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpNotifyOnCompletion.setDescription("Specifies whether or not a notification should be generated on the completion of an operation. If 'true', ciuUpgradeOpCompletionNotify will be generated, else if 'false' it would not be. It is the responsibility of the management entity to ensure that the SNMP administrative model is configured in such a way as to allow the notification to be delivered. This object can only be modified along with the ciuUpgradeOpCommand object. This object returns the default value when the ciuUpgradeOpCommand object contains 'none'. To SET this object a multivarbind set containing this object and ciuUpgradeOpCommand must be done in the same PDU for the operation to succeed.")
ciuUpgradeOpTimeStarted = MibScalar((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 4, 4), TimeStamp()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuUpgradeOpTimeStarted.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpTimeStarted.setDescription("Specifies the time the upgrade operation was started. This object would return 0 if ciuUpgradeOpCommand contains 'none'.")
ciuUpgradeOpTimeCompleted = MibScalar((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 4, 5), TimeStamp()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuUpgradeOpTimeCompleted.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpTimeCompleted.setDescription("Specifies the time the upgrade operation completed. This object would return 0 if ciuUpgradeOpCommand contains 'none'. ")
ciuUpgradeOpAbort = MibScalar((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 4, 6), TruthValue().clone('false')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: ciuUpgradeOpAbort.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpAbort.setDescription("Provides the means to abort an operation. If this object is set to 'true' when an upgrade operation is in progress and the corresponding instance of ciuUpgradeOpCommand has the value 'install' or 'check', then the operation will be aborted. Setting this object to 'true' when ciuUpgradeOpCommand has a different value other than 'install' or 'check' will fail. If retrieved, this object always has the value 'false'. ")
ciuUpgradeOpStatusReason = MibScalar((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 4, 7), SnmpAdminString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuUpgradeOpStatusReason.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpStatusReason.setDescription("Specifies the description of the cause of 'failed' state of the object 'ciuUpgradeOpStatus'. This object would be a null string if value of 'ciuUpgradeOpStatus' is anything other than 'failure'.")
ciuUpgradeOpLastCommand = MibScalar((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 4, 8), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("none", 1), ("done", 2), ("install", 3), ("check", 4)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuUpgradeOpLastCommand.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpLastCommand.setDescription("This object indicates previous OpCommand value. It will be updated after new OpCommand is set and delivered to upgrade process. 'none' if this object is read without performing any operation listed above, 'none' would be returned. Also 'none' would be returned for a read operation if a cleanup of the previous upgrade operation is completed either through the issue of 'done' command or the maximum timeout of 5 minutes is elapsed. Setting this object to 'none', agent would return a success without any upgrade operation being performed. 'done' if this object returns any value other than 'none', then setting this to 'done' would do the required cleanup of previous upgrade operation and make the system ready for any new upgrade operation. This is needed because the system maintains the status of the previous upgrade operation for a maximum time of 5 minutes before it does the cleanup. During this period no new upgrade operation is allowed. 'install' perform the required upgrade operation listed in ciuUpgradeTargetTable table. However the upgrade operation for a module would not be done if the current running image and the image to be upgraded given as an input through the ciuImageLocInputTable are the same. 'check' check the version compatibility for the given set of image files against the current system configuration.")
ciuUpgradeOpLastStatus = MibScalar((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 4, 9), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6, 7, 8, 9, 10))).clone(namedValues=NamedValues(("none", 1), ("invalidOperation", 2), ("failure", 3), ("inProgress", 4), ("success", 5), ("abortInProgress", 6), ("abortSuccess", 7), ("abortFailed", 8), ("successReset", 9), ("fsUpgReset", 10)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuUpgradeOpLastStatus.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpLastStatus.setDescription("This object indicates previous OpStatus value. It will be updated after new OpCommand is set and delivered to upgrade process. 'none' - no operation was performed. 'invalidOperation' - the selected operation is not supported. 'failure' - the selected operation has failed. 'inProgress' - specified operation is active. 'success' - specified operation has completed successfully. 'abortInProgress' - abort in progress. 'abortSuccess' - abort operation successful. 'abortFailed' - abort failed. 'successReset' - specified operation has completed successfully and the system will reset. 'fsUpgReset' - fabric switch upgrade reset.")
ciuUpgradeOpLastStatusReason = MibScalar((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 4, 10), SnmpAdminString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuUpgradeOpLastStatusReason.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpLastStatusReason.setDescription('This object indicates the previous OpStatusReason value. It will be updated after new OpCommand is set and delivered to upgrade process.')
ciuUpgradeJobStatusNotifyOnCompletion = MibScalar((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 4, 11), TruthValue()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: ciuUpgradeJobStatusNotifyOnCompletion.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeJobStatusNotifyOnCompletion.setDescription("This object specifies whether or not ciuUpgradeJobStatusCompletionNotify notification should be generated on the completion of an operation. If 'true', ciuUpgradeJobStatusCompletionNotify will be generated, else if 'false' it would not be.")
ciuUpgradeTargetTable = MibTable((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 5), )
if mibBuilder.loadTexts: ciuUpgradeTargetTable.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeTargetTable.setDescription('A table listing the modules and the type of upgrade operation to be performed on these modules. ')
ciuUpgradeTargetEntry = MibTableRow((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 5, 1), ).setIndexNames((0, "ENTITY-MIB", "entPhysicalIndex"))
if mibBuilder.loadTexts: ciuUpgradeTargetEntry.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeTargetEntry.setDescription("Each entry provides the module that needs to be upgraded and the type of operation that needs to be performed on this module. The upgrade operation, selected using the object 'ciuUpgradeOpCommand', would be performed on each and every module represented by an entry in this table. Each such module of the type PhysicalClass module(9), has an entry in entPhysicalTable in ENTITY-MIB, where that entry is identified by entPhysicalIndex. Only modules capable of running images, identified by ciuImageVariableName would have an entry in this table. This table cannot be modified when ciuUpgradeOpCommand object contains value other than 'none'. ")
ciuUpgradeTargetAction = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 5, 1, 1), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("image", 1), ("bios", 2), ("loader", 3), ("bootrom", 4)))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: ciuUpgradeTargetAction.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeTargetAction.setDescription("The type of operation to be performed on this module. image - upgrade image. bios - upgrade bios. loader - upgrade loader. The loader is the program that loads and starts the operating system. bootrom - upgrade boot rom. This object cannot be modified while the corresponding value of ciuUpgradeTargetEntryStatus is equal to 'active'. It is okay to support only a subset of the enums defined above. ")
ciuUpgradeTargetEntryStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 5, 1, 2), RowStatus()).setMaxAccess("readcreate")
if mibBuilder.loadTexts: ciuUpgradeTargetEntryStatus.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeTargetEntryStatus.setDescription('The status of this table entry. A multivarbind set containing this object and ciuUpgradeTargetAction must be done in the same PDU for the operation to succeed. ')
ciuImageLocInputTable = MibTable((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 6), )
if mibBuilder.loadTexts: ciuImageLocInputTable.setStatus('current')
if mibBuilder.loadTexts: ciuImageLocInputTable.setDescription('A table listing the URI of the images that need to be upgraded. ')
ciuImageLocInputEntry = MibTableRow((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 6, 1), ).setIndexNames((0, "CISCO-IMAGE-UPGRADE-MIB", "ciuImageVariableName"))
if mibBuilder.loadTexts: ciuImageLocInputEntry.setStatus('current')
if mibBuilder.loadTexts: ciuImageLocInputEntry.setDescription("Each entry provides the image location URI that needs to be upgraded. This table cannot be modified if the ciuUpgradeOpCommand object contains any value other than 'none'. ")
ciuImageLocInputURI = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 6, 1, 1), SnmpAdminString().subtype(subtypeSpec=ValueSizeConstraint(1, 255))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: ciuImageLocInputURI.setStatus('current')
if mibBuilder.loadTexts: ciuImageLocInputURI.setDescription("An ASCII string specifying the system image location. For example the string could be 'bootflash:file1'. This object cannot be modified while the corresponding value of ciuImageLocInputEntryStatus is equal to 'active'. ")
ciuImageLocInputEntryStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 6, 1, 2), RowStatus()).setMaxAccess("readcreate")
if mibBuilder.loadTexts: ciuImageLocInputEntryStatus.setStatus('current')
if mibBuilder.loadTexts: ciuImageLocInputEntryStatus.setDescription('The status of this table entry. ')
ciuVersionCompChkTable = MibTable((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 7), )
if mibBuilder.loadTexts: ciuVersionCompChkTable.setStatus('current')
if mibBuilder.loadTexts: ciuVersionCompChkTable.setDescription("A table showing the result of the version compatibility check operation performed in response to the option 'check' selected for ciuUpgradeOpCommand. The table would be emptied out once the value of ciuUpgradeOpCommand object is 'none'. ")
ciuVersionCompChkEntry = MibTableRow((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 7, 1), ).setIndexNames((0, "ENTITY-MIB", "entPhysicalIndex"))
if mibBuilder.loadTexts: ciuVersionCompChkEntry.setStatus('current')
if mibBuilder.loadTexts: ciuVersionCompChkEntry.setDescription('An entry containing the results of the version compatibility check operation performed on each module, identified by entPhysicalIndex. Each such module of the type PhysicalClass module(9), has an entry in entPhysicalTable in ENTITY-MIB, where that entry is identified by entPhysicalIndex. Only modules capable of running images, identified by ciuImageVariableName would have an entry in this table. ')
ciuVersionCompImageSame = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 7, 1, 1), TruthValue()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuVersionCompImageSame.setStatus('current')
if mibBuilder.loadTexts: ciuVersionCompImageSame.setDescription(' Specifies whether, for this module, the image provided by the user for upgrade is the same as the current running image. ')
ciuVersionCompUpgradable = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 7, 1, 2), TruthValue()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuVersionCompUpgradable.setStatus('current')
if mibBuilder.loadTexts: ciuVersionCompUpgradable.setDescription(" Specifies whether the set of images provided in ciuImageLocInputTable are compatible with each other as far as this module is concerned. If 'true' the set of images provided are compatible and can be run on this module, else they are not compatible. This module would not come up if it is booted with an incompatible set of images. ")
ciuVersionCompUpgradeAction = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 7, 1, 3), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6, 7, 8))).clone(namedValues=NamedValues(("none", 1), ("other", 2), ("rollingUpgrade", 3), ("switchOverReset", 4), ("reset", 5), ("copy", 6), ("notApplicable", 7), ("plugin", 8)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuVersionCompUpgradeAction.setStatus('current')
if mibBuilder.loadTexts: ciuVersionCompUpgradeAction.setDescription(" Specifies the type of upgrade action that would be performed on this module if ciuUpgradeOpCommand were set to 'install' or to 'check'. none(1) : no upgrade action. other(2) : actions other than defined here. rollingUpgrade(3) : modules would be upgraded one at a time. switchOverReset(4): all the modules would be reset at the same time after a switchover happens. reset(5) : all the modules would be reset without or before a switchover. copy(6) : the image upgrade would not be done, but only bios/loader/bootrom would be updated and will take effect on next reload. notApplicable(7) : upgrade action is not possible because image is not upgradable. plugin(8) : upgrading plugin only instead of full image.")
ciuVersionCompUpgradeBios = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 7, 1, 4), TruthValue()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuVersionCompUpgradeBios.setStatus('current')
if mibBuilder.loadTexts: ciuVersionCompUpgradeBios.setDescription(" Specifies whether the BIOS will be upgraded. If 'true' the bios would be upgraded else it would not.")
ciuVersionCompUpgradeBootrom = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 7, 1, 5), TruthValue()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuVersionCompUpgradeBootrom.setStatus('current')
if mibBuilder.loadTexts: ciuVersionCompUpgradeBootrom.setDescription(" Specifies whether the bootrom will be upgraded. If 'true' the bootrom would be upgraded else it would not.")
ciuVersionCompUpgradeLoader = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 7, 1, 6), TruthValue()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuVersionCompUpgradeLoader.setStatus('current')
if mibBuilder.loadTexts: ciuVersionCompUpgradeLoader.setDescription(" Specifies whether the loader will be upgraded. If 'true' the loader would be upgraded else it would not.")
ciuVersionCompUpgradeImpact = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 7, 1, 7), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("other", 1), ("nonDisruptive", 2), ("disruptive", 3), ("notApplicable", 4)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuVersionCompUpgradeImpact.setStatus('current')
if mibBuilder.loadTexts: ciuVersionCompUpgradeImpact.setDescription(' Specifies the impact that the upgrade operation would have on this module. other(1) : reasons other than defined here. nonDisruptive(2): this module would be upgraded without disruption of traffic. disruptive(3) : this module would be upgraded with disruption of traffic. notApplicable(4): upgrade is not possible because image is not upgradable. ')
ciuVersionCompUpgradeReason = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 7, 1, 8), SnmpAdminString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuVersionCompUpgradeReason.setStatus('current')
if mibBuilder.loadTexts: ciuVersionCompUpgradeReason.setDescription("This object would give the reason for the following cases: 1) the value of object ciuVersionCompUpgradable is 'false'; then it would give the reason why the module is not upgradable. 2) the value of object ciuVersionCompUpgradeAction is either 'switchOverReset' or 'reset' and the value of object ciuVersionCompUpgradable is 'true'. 3) the value of object ciuVersionCompUpgradeImpact is 'disruptive' and the values of objects ciuVersionCompUpgradable is 'true' and ciuVersionCompUpgradeAction is neither 'switchOverReset' nor 'reset'. This object would have the reason in the above listed order. It would be a null string for all the other values of the above mentioned objects. ")
ciuUpgradeImageVersionTable = MibTable((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 8), )
if mibBuilder.loadTexts: ciuUpgradeImageVersionTable.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeImageVersionTable.setDescription("A table showing the current version of images running on the modules and the images they would be upgraded with. The table would be emptied out once the value of ciuUpgradeOpCommand object is 'none'. This table becomes valid when value of ciuUpgradeOpStatus is 'success' in response to 'check' operation selected using ciuUpgradeOpCommand. ")
ciuUpgradeImageVersionEntry = MibTableRow((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 8, 1), ).setIndexNames((0, "ENTITY-MIB", "entPhysicalIndex"), (0, "CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeImageVersionIndex"))
if mibBuilder.loadTexts: ciuUpgradeImageVersionEntry.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeImageVersionEntry.setDescription('An entry containing the current version of image running on a particular module and the images they would be upgraded with. A ciuUpgradeImageVersionVarName identifies the type of software running on this module, identified by entPhysicalIndex. It is possible that the same module, identified by entPhysicalIndex, can run multiple instances of the software type identified by ciuUpgradeImageVersionVarName. Each such module of the type PhysicalClass module(9) has an entry in entPhysicalTable in ENTITY-MIB, where that entry is identified by entPhysicalIndex. Only modules capable of running images, identified by ciuImageVariableName, would have an entry in this table. ')
ciuUpgradeImageVersionIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 8, 1, 1), Unsigned32())
if mibBuilder.loadTexts: ciuUpgradeImageVersionIndex.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeImageVersionIndex.setDescription('This is an arbitrary integer which uniquely identifies different rows which have the same value of entPhysicalIndex.')
ciuUpgradeImageVersionVarName = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 8, 1, 2), CiuImageVariableTypeName()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuUpgradeImageVersionVarName.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeImageVersionVarName.setDescription('The type of image on this module. ')
ciuUpgradeImageVersionRunning = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 8, 1, 3), SnmpAdminString().subtype(subtypeSpec=ValueSizeConstraint(1, 255))).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuUpgradeImageVersionRunning.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeImageVersionRunning.setDescription('An ASCII string specifying the running image version. ')
ciuUpgradeImageVersionNew = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 8, 1, 4), SnmpAdminString().subtype(subtypeSpec=ValueSizeConstraint(1, 255))).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuUpgradeImageVersionNew.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeImageVersionNew.setDescription('An ASCII string specifying what the new image version would be after an upgrade. ')
ciuUpgradeImageVersionUpgReqd = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 8, 1, 5), TruthValue()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuUpgradeImageVersionUpgReqd.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeImageVersionUpgReqd.setDescription(" Specifies whether an upgrade is required for this software component, identified by entPhysicalIndex and ciuUpgradeImageVersionVarName. If the values of objects ciuUpgradeImageVersionRunning and ciuUpgradeImageVersionNew are the same, then the value of this object would be 'false'; otherwise it would be 'true'. If 'true', this software component, identified by ciuUpgradeImageVersionVarName, needs to be upgraded; otherwise it does not.")
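The relationship described above between ciuUpgradeImageVersionRunning, ciuUpgradeImageVersionNew, and ciuUpgradeImageVersionUpgReqd can be sketched as a plain predicate. This is illustrative only, not part of the generated bindings, and the version strings in the comment are hypothetical examples:

```python
def upgrade_required(running_version, new_version):
    """Sketch of ciuUpgradeImageVersionUpgReqd semantics: per the MIB
    description, the value is 'true' exactly when the running image
    version and the new image version differ (e.g. '6.2(1)' vs '6.2(3)'),
    and 'false' when they are the same."""
    return running_version != new_version
```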
ciuUpgradeOpStatusTable = MibTable((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 9), )
if mibBuilder.loadTexts: ciuUpgradeOpStatusTable.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpStatusTable.setDescription("A table showing the result of the upgrade operation selected from ciuUpgradeOpCommand in ciuUpgradeOpTable. The table would be emptied out once the value of ciuUpgradeOpCommand object is 'none'. ")
ciuUpgradeOpStatusEntry = MibTableRow((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 9, 1), ).setIndexNames((0, "CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusOperIndex"))
if mibBuilder.loadTexts: ciuUpgradeOpStatusEntry.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpStatusEntry.setDescription('An entry containing the status of the upgrade operation. ')
ciuUpgradeOpStatusOperIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 9, 1, 1), Unsigned32())
if mibBuilder.loadTexts: ciuUpgradeOpStatusOperIndex.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpStatusOperIndex.setDescription('This is an arbitrary integer which uniquely identifies an entry in this table. ')
ciuUpgradeOpStatusOperation = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 9, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35))).clone(namedValues=NamedValues(("unknown", 1), ("other", 2), ("copy", 3), ("verify", 4), ("versionExtraction", 5), ("imageSync", 6), ("configSync", 7), ("preUpgrade", 8), ("forceDownload", 9), ("moduleOnline", 10), ("hitlessLCUpgrade", 11), ("hitfulLCUpgrade", 12), ("unusedBootvar", 13), ("convertStartUp", 14), ("looseIncompatibility", 15), ("haSeqNumMismatch", 16), ("unknownModuleOnline", 17), ("recommendedAction", 18), ("recoveryAction", 19), ("remainingAction", 20), ("additionalInfo", 21), ("settingBootvars", 22), ("informLcmFsUpg", 23), ("sysmgrSaveRuntimeStateAndSuccessReset", 24), ("kexecLoadUpgImages", 25), ("fsUpgCleanup", 26), ("saveMtsState", 27), ("fsUpgBegin", 28), ("lcWarmBootStatus", 29), ("waitStateVerificationStatus", 30), ("informLcmFsUpgExternalLc", 31), ("externalLcWarmBootStatus", 32), ("total", 33), ("compactFlashTcamSanity", 34), ("systemPreupgradeBegin", 35)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuUpgradeOpStatusOperation.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpStatusOperation.setDescription("Specifies the operation that is currently in progress or completed in response to the ciuUpgradeOpCommand. 'unknown' - operation status unknown. 'other' - operation status other than defined here. 'copy' - the image is being copied from ciuUpgradeOpStatusSrcImageLoc to ciuUpgradeOpStatusDestImageLoc. 'verify' - copied images are being verified for checksum and input consistency. 'versionExtraction' - extracting the version info from image. 'imageSync' - Syncing image to the standby supervisor, if standby supervisor exists. 'configSync' - saving running configuration to startup configuration and syncing it to standby supervisor, if it exists. 'preUpgrade' - Upgrading Bios/loader/bootrom 'forceDownload' - This module is being force downloaded. 'moduleOnline' - waiting for this module to come online 'hitlessLCUpgrade' - Upgrading hitless 'hitfulLCUpgrade' - Upgrading hitful 'unusedBootvar' - The image variable name type supplied as input for upgrade operation is unused. 'convertStartUp' - converting the startup config. 'looseIncompatibility' - incomplete support for current running config in the new image. 'haSeqNumMismatch' - High availability sequence number mismatch, so the module will be power cycled. 'unknownModuleOnline' - this module was powered down before switchover and has now come online. 'recommendedAction' - Specifies the recommended action if upgrading operation fails. If this object value is 'recommendedAction' then the object 'ciuUpgradeOpStatusSrcImageLoc' would contain the string specifying the recommended action. 'recoveryAction' - Specifies that installer is doing a recovery because of install failure. If this object value is 'recoveryAction' then the object 'ciuUpgradeOpStatusSrcImageLoc' would contain the string specifying the recovery action. 'remainingAction' - Specifies the remaining actions that have not been performed due to install failure. If this object value is 'remainingAction' then the object 'ciuUpgradeOpStatusSrcImageLoc' would contain the information about the remaining actions. 'additionalInfo' - Specifies the additional info the installer conveys to the user. If this object value is 'additionalInfo' then the object 'ciuUpgradeOpStatusSrcImageLoc' would contain the information. 'settingBootvars' - setting the boot variables. 'informLcmFsUpg' - save linecard runtime state. 'sysmgrSaveRuntimeStateAndSuccessReset' - save supervisor runtime state and terminate all services. 'kexecLoadUpgImages' - load upgrade images into memory. 'fsUpgCleanup' - cleanup file system for upgrade. 'saveMtsState' - saving persistent transaction messages. 'fsUpgBegin' - notify services that upgrade is about to begin. 'lcWarmBootStatus' - linecard upgrade status. 'waitStateVerificationStatus' - supervisor state verification with the new image. 'informLcmFsUpgExternalLc' - save external linecard runtime state. 'externalLcWarmBootStatus' - external linecard upgrade status. 'total' - total. 'compactFlashTcamSanity' - compact flash and TCAM sanity test. 'systemPreupgradeBegin' - notify services of beginning of upgrade. ")
ciuUpgradeOpStatusModule = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 9, 1, 3), EntPhysicalIndexOrZero()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuUpgradeOpStatusModule.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpStatusModule.setDescription('The physical entity of the module for which this status is being shown. For example such an entity is one of the type PhysicalClass module(9). This object must contain the same value as the entPhysicalIndex of the physical entity from entPhysicalTable in ENTITY-MIB. ')
ciuUpgradeOpStatusSrcImageLoc = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 9, 1, 4), SnmpAdminString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuUpgradeOpStatusSrcImageLoc.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpStatusSrcImageLoc.setDescription("An ASCII string specifying the source image location. For example the string could be 'bootflash:file1'. This object is only valid if the value of ciuUpgradeOpStatusOperation is 'copy'.")
ciuUpgradeOpStatusDestImageLoc = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 9, 1, 5), SnmpAdminString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuUpgradeOpStatusDestImageLoc.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpStatusDestImageLoc.setDescription("An ASCII string specifying the destination image location. For example the string could be 'bootflash:file1'.")
ciuUpgradeOpStatusJobStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 9, 1, 6), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6))).clone(namedValues=NamedValues(("unknown", 1), ("other", 2), ("failed", 3), ("inProgress", 4), ("success", 5), ("planned", 6)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuUpgradeOpStatusJobStatus.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpStatusJobStatus.setDescription("The status of this operation. 'unknown' - operation status unknown. 'other' - operation status other than defined here. 'failed' - this operation has failed 'inProgress' - this operation is active 'success' - this operation has completed successfully. 'planned' - this operation would be executed at later point of time.")
ciuUpgradeOpStatusPercentCompl = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 9, 1, 7), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuUpgradeOpStatusPercentCompl.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpStatusPercentCompl.setDescription('The percentage completion of the upgrade operation selected from ciuUpgradeOpTable. If this object is invalid for a particular operation, identified by ciuUpgradeOpStatusOperation, then the value of this object would be -1. ')
ciuUpgradeOpStatusJobStatusReas = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 9, 1, 8), SnmpAdminString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuUpgradeOpStatusJobStatusReas.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpStatusJobStatusReas.setDescription("Specifies the description of the cause of 'failed' state of the object 'ciuUpgradeOpStatusJobStatus'. This object would be a null string if value of 'ciuUpgradeOpStatusJobStatus' is anything other than 'failed'.")
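The ciuUpgradeOpStatusTable columns above fit together as follows: ciuUpgradeOpStatusJobStatus reports one of six named states, ciuUpgradeOpStatusPercentCompl uses -1 as a "not applicable" sentinel, and ciuUpgradeOpStatusJobStatusReas is non-empty only when the status is 'failed'. A minimal sketch of decoding one row (the helper name and inputs are illustrative, not part of this MIB or pysnmp):

```python
# Named values of ciuUpgradeOpStatusJobStatus, as defined in this MIB.
JOB_STATUS = {
    1: "unknown", 2: "other", 3: "failed",
    4: "inProgress", 5: "success", 6: "planned",
}

def describe_job_status(status_code, percent_complete, reason=""):
    """Render one ciuUpgradeOpStatusTable row as a short human-readable string."""
    name = JOB_STATUS.get(status_code, "unknown")
    # ciuUpgradeOpStatusPercentCompl is -1 when a percentage is not meaningful
    # for the operation identified by ciuUpgradeOpStatusOperation.
    pct = "n/a" if percent_complete == -1 else "%d%%" % percent_complete
    text = "%s (%s)" % (name, pct)
    # ciuUpgradeOpStatusJobStatusReas is a null string unless the job failed.
    if name == "failed" and reason:
        text += ": " + reason
    return text
```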
ciuUpgradeMiscAutoCopy = MibScalar((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 10, 1), TruthValue().clone('false')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: ciuUpgradeMiscAutoCopy.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeMiscAutoCopy.setDescription("Specifies whether or not the images on the active supervisor will be copied to the standby supervisor, if the standby supervisor exists. If the standby supervisor does not exist, the setting of this object to 'true' will not have any effect and no image copy will be done. ciuImageURITable lists all the images for the supervisor cards as well as the line cards. If this object is set to 'true', all the images pointed to by the instances of ciuImageURI will be automatically copied to the standby supervisor. For example, assume that the ciuImageURITable looks like below - entPhysicalIndex ciuImageVariableName ciuImageURI 25 'system' bootflash://image.bin 25 'kickstart' slot0://boot.bin 26 'ilce' bootflash://linecard.bin So, if the ciuUpgradeMiscAutoCopy is 'true', then bootflash://image.bin from the active supervisor will be copied to the bootflash://image.bin on the standby supervisor; slot0://boot.bin will be copied to the slot0://boot.bin on the standby supervisor etc. If this object is set to 'false' then this copying of the images will not be done.")
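The ciuUpgradeMiscAutoCopy behavior described above (each ciuImageURI on the active supervisor is copied to the identical path on the standby supervisor, and nothing happens if auto-copy is off or no standby exists) can be sketched as a pure function. The function name and the row tuples are illustrative assumptions, not part of this MIB or pysnmp:

```python
def standby_copy_plan(image_uri_rows, auto_copy_enabled, standby_present):
    """Given ciuImageURITable rows as (entPhysicalIndex, ciuImageVariableName,
    ciuImageURI) tuples, return (source, destination) pairs describing the
    auto-copy: each image goes to the same path on the standby supervisor.
    Returns an empty plan when auto-copy is 'false' or there is no standby."""
    if not (auto_copy_enabled and standby_present):
        return []
    return [(uri, uri) for (_index, _var_name, uri) in image_uri_rows]
```

For example, with the rows from the description above, bootflash://image.bin on the active supervisor would be copied to bootflash://image.bin on the standby.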
ciuUpgradeMiscInfoTable = MibTable((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 11), )
if mibBuilder.loadTexts: ciuUpgradeMiscInfoTable.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeMiscInfoTable.setDescription("A table showing additional information such as warnings during upgrade. The table would be emptied out once the value of ciuUpgradeOpCommand object is 'none'. ")
ciuUpgradeMiscInfoEntry = MibTableRow((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 11, 1), ).setIndexNames((0, "CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeMiscInfoIndex"))
if mibBuilder.loadTexts: ciuUpgradeMiscInfoEntry.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeMiscInfoEntry.setDescription('An entry containing additional information of upgrade operation being performed on modules. Each entry is uniquely identified by ciuUpgradeMiscInfoIndex. If the info given in object ciuUpgradeMiscInfoDescr is not for any module then the value of ciuUpgradeMiscInfoModule would be 0.')
ciuUpgradeMiscInfoIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 11, 1, 1), Unsigned32())
if mibBuilder.loadTexts: ciuUpgradeMiscInfoIndex.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeMiscInfoIndex.setDescription('This is an arbitrary integer which uniquely identifies an entry in this table. ')
ciuUpgradeMiscInfoModule = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 11, 1, 2), EntPhysicalIndexOrZero()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuUpgradeMiscInfoModule.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeMiscInfoModule.setDescription('The entPhysicalIndex of the module. The value of this object would be 0 if the information shown in ciuUpgradeMiscInfoDescr is not for any module.')
ciuUpgradeMiscInfoDescr = MibTableColumn((1, 3, 6, 1, 4, 1, 9, 9, 360, 1, 1, 11, 1, 3), SnmpAdminString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ciuUpgradeMiscInfoDescr.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeMiscInfoDescr.setDescription('Specifies the miscellaneous information of the upgrade operation.')
ciuUpgradeOpCompletionNotify = NotificationType((1, 3, 6, 1, 4, 1, 9, 9, 360, 0, 1)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpCommand"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatus"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpTimeCompleted"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpLastCommand"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpLastStatus"))
if mibBuilder.loadTexts: ciuUpgradeOpCompletionNotify.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpCompletionNotify.setDescription('A ciuUpgradeOpCompletionNotify is sent at the completion of the upgrade operation denoted by the ciuUpgradeOpCommand object, if such a notification was requested when the operation was initiated. ciuUpgradeOpCommand indicates the type of operation. ciuUpgradeOpStatus indicates the result of the operation. ciuUpgradeOpTimeCompleted indicates the time when the operation was completed. ciuUpgradeOpLastCommand indicates the previous operation that was executed. ciuUpgradeOpLastStatus indicates the result of the previous operation.')
ciuUpgradeJobStatusNotify = NotificationType((1, 3, 6, 1, 4, 1, 9, 9, 360, 0, 2)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusOperation"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusModule"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusSrcImageLoc"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusDestImageLoc"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusJobStatus"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusPercentCompl"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusJobStatusReas"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatus"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusReason"))
if mibBuilder.loadTexts: ciuUpgradeJobStatusNotify.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeJobStatusNotify.setDescription('A ciuUpgradeJobStatusNotify is sent when there is a status change in the upgrade process. ciuUpgradeOpStatusOperation indicates the operation to change the upgrade status. ciuUpgradeOpStatusModule indicates which module is affected. ciuUpgradeOpStatusSrcImageLoc indicates the location of the source image, if applicable. ciuUpgradeOpStatusDestImageLoc indicates the location of the destination image, if applicable. ciuUpgradeOpStatusJobStatus indicates the result of this operation to change the status. ciuUpgradeOpStatusPercentCompl indicates the percentage of the operation that has been completed. ciuUpgradeOpStatusJobStatusReas gives an explanation of the failure if there is a failure. ciuUpgradeOpStatus indicates the result of the operation at a higher level. ciuUpgradeOpStatusReason gives a detailed explanation if ciuUpgradeOpStatus is not successful.')
ciuImageUpgradeCompliances = MibIdentifier((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 1))
ciuImageUpgradeGroups = MibIdentifier((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 2))
ciuImageUpgradeCompliance = ModuleCompliance((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 1, 1)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuImageUpgradeGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuImageVariableGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuImageURIGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuImageLocInputGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuVersionCompChkGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeImageVersionGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeNotificationGroup"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciuImageUpgradeCompliance = ciuImageUpgradeCompliance.setStatus('deprecated')
if mibBuilder.loadTexts: ciuImageUpgradeCompliance.setDescription("Compliance statement for Image Upgrade MIB. For the (mandatory) ciuImageLocInputGroup, it is compliant to allow only a limited number of entries to be created and concurrently 'active' in the ciuImageLocInputTable table. ")
ciuImageUpgradeComplianceRev1 = ModuleCompliance((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 1, 2)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuImageUpgradeGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuImageVariableGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuImageURIGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuImageLocInputGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuVersionCompChkGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeImageVersionGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeNotificationGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeMiscGroup"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciuImageUpgradeComplianceRev1 = ciuImageUpgradeComplianceRev1.setStatus('deprecated')
if mibBuilder.loadTexts: ciuImageUpgradeComplianceRev1.setDescription("Compliance statement for Image Upgrade MIB. For the (mandatory) ciuImageLocInputGroup, it is compliant to allow only a limited number of entries to be created and concurrently 'active' in the ciuImageLocInputTable table. ")
ciuImageUpgradeComplianceRev2 = ModuleCompliance((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 1, 3)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuImageUpgradeGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuImageVariableGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuImageURIGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuImageLocInputGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuVersionCompChkGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeImageVersionGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeNotificationGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeMiscGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeMiscInfoGroup"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciuImageUpgradeComplianceRev2 = ciuImageUpgradeComplianceRev2.setStatus('deprecated')
if mibBuilder.loadTexts: ciuImageUpgradeComplianceRev2.setDescription("Compliance statement for Image Upgrade MIB. For the (mandatory) ciuImageLocInputGroup, it is compliant to allow only a limited number of entries to be created and concurrently 'active' in the ciuImageLocInputTable table.")
ciuImageUpgradeComplianceRev3 = ModuleCompliance((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 1, 4)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuImageUpgradeGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuImageVariableGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuImageURIGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuImageLocInputGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuVersionCompChkGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeImageVersionGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeNotificationGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeNotificationGroupSup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeMiscGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeMiscInfoGroup"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciuImageUpgradeComplianceRev3 = ciuImageUpgradeComplianceRev3.setStatus('deprecated')
if mibBuilder.loadTexts: ciuImageUpgradeComplianceRev3.setDescription("Compliance statement for Image Upgrade MIB. For the (mandatory) ciuImageLocInputGroup, it is compliant to allow only a limited number of entries to be created and concurrently 'active' in the ciuImageLocInputTable table.")
ciuImageUpgradeComplianceRev4 = ModuleCompliance((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 1, 5)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuImageUpgradeGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuImageVariableGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuImageURIGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuImageLocInputGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuVersionCompChkGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeImageVersionGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeNotificationGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeNotificationGroupSup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeMiscGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeMiscInfoGroup"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpNewGroup"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciuImageUpgradeComplianceRev4 = ciuImageUpgradeComplianceRev4.setStatus('current')
if mibBuilder.loadTexts: ciuImageUpgradeComplianceRev4.setDescription("Compliance statement for Image Upgrade MIB. For the (mandatory) ciuImageLocInputGroup, it is compliant to allow only a limited number of entries to be created and concurrently 'active' in the ciuImageLocInputTable table.")
ciuImageUpgradeGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 2, 1)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuTotalImageVariables"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciuImageUpgradeGroup = ciuImageUpgradeGroup.setStatus('current')
if mibBuilder.loadTexts: ciuImageUpgradeGroup.setDescription('A collection of objects providing information about Image upgrade. ')
ciuImageVariableGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 2, 2)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuImageVariableName"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciuImageVariableGroup = ciuImageVariableGroup.setStatus('current')
if mibBuilder.loadTexts: ciuImageVariableGroup.setDescription('A group containing an object providing information about the type of the system images.')
ciuImageURIGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 2, 3)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuImageURI"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciuImageURIGroup = ciuImageURIGroup.setStatus('current')
if mibBuilder.loadTexts: ciuImageURIGroup.setDescription('A group containing an object providing information about the name of system variable or parameter.')
ciuUpgradeOpGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 2, 4)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpCommand"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatus"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpNotifyOnCompletion"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpTimeStarted"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpTimeCompleted"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpAbort"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusReason"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciuUpgradeOpGroup = ciuUpgradeOpGroup.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpGroup.setDescription('A collection of objects for Upgrade operation.')
ciuUpgradeTargetGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 2, 5)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeTargetAction"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeTargetEntryStatus"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciuUpgradeTargetGroup = ciuUpgradeTargetGroup.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeTargetGroup.setDescription('A collection of objects giving the modules and the type of image to be upgraded.')
ciuImageLocInputGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 2, 6)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuImageLocInputURI"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuImageLocInputEntryStatus"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciuImageLocInputGroup = ciuImageLocInputGroup.setStatus('current')
if mibBuilder.loadTexts: ciuImageLocInputGroup.setDescription('A collection of objects giving the location of the images to be upgraded.')
ciuVersionCompChkGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 2, 7)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuVersionCompImageSame"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuVersionCompUpgradable"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuVersionCompUpgradeAction"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuVersionCompUpgradeBios"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuVersionCompUpgradeBootrom"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuVersionCompUpgradeLoader"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuVersionCompUpgradeImpact"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuVersionCompUpgradeReason"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciuVersionCompChkGroup = ciuVersionCompChkGroup.setStatus('current')
if mibBuilder.loadTexts: ciuVersionCompChkGroup.setDescription('A collection of objects showing the results of the version compatibility check done.')
ciuUpgradeImageVersionGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 2, 8)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeImageVersionVarName"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeImageVersionRunning"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeImageVersionNew"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeImageVersionUpgReqd"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciuUpgradeImageVersionGroup = ciuUpgradeImageVersionGroup.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeImageVersionGroup.setDescription('A collection of objects showing the current running images and the images to be upgraded with.')
ciuUpgradeOpStatusGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 2, 9)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusOperation"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusModule"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusSrcImageLoc"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusDestImageLoc"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusJobStatus"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusPercentCompl"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpStatusJobStatusReas"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciuUpgradeOpStatusGroup = ciuUpgradeOpStatusGroup.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpStatusGroup.setDescription('A collection of objects showing the status of the upgrade operation.')
ciuUpgradeNotificationGroup = NotificationGroup((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 2, 10)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpCompletionNotify"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciuUpgradeNotificationGroup = ciuUpgradeNotificationGroup.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeNotificationGroup.setDescription('A collection of notifications for upgrade operations. ')
ciuUpgradeMiscGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 2, 11)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeMiscAutoCopy"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciuUpgradeMiscGroup = ciuUpgradeMiscGroup.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeMiscGroup.setDescription('A collection of objects for miscellaneous operations.')
ciuUpgradeMiscInfoGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 2, 12)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeMiscInfoModule"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeMiscInfoDescr"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciuUpgradeMiscInfoGroup = ciuUpgradeMiscInfoGroup.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeMiscInfoGroup.setDescription('A collection of objects for miscellaneous info for the upgrade operation.')
ciuUpgradeNotificationGroupSup = NotificationGroup((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 2, 13)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeJobStatusNotify"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciuUpgradeNotificationGroupSup = ciuUpgradeNotificationGroupSup.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeNotificationGroupSup.setDescription('A collection of notifications for upgrade operations. ')
ciuUpgradeOpNewGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 9, 9, 360, 2, 2, 14)).setObjects(("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeJobStatusNotifyOnCompletion"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpLastCommand"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpLastStatus"), ("CISCO-IMAGE-UPGRADE-MIB", "ciuUpgradeOpLastStatusReason"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    ciuUpgradeOpNewGroup = ciuUpgradeOpNewGroup.setStatus('current')
if mibBuilder.loadTexts: ciuUpgradeOpNewGroup.setDescription('A collection of objects for Upgrade operation.')
mibBuilder.exportSymbols("CISCO-IMAGE-UPGRADE-MIB", ciuUpgradeImageVersionUpgReqd=ciuUpgradeImageVersionUpgReqd, ciuUpgradeOpAbort=ciuUpgradeOpAbort, ciuImageLocInputURI=ciuImageLocInputURI, ciuUpgradeOpGroup=ciuUpgradeOpGroup, ciuUpgradeOpStatusReason=ciuUpgradeOpStatusReason, ciuVersionCompUpgradeReason=ciuVersionCompUpgradeReason, ciuTotalImageVariables=ciuTotalImageVariables, ciuUpgradeJobStatusNotify=ciuUpgradeJobStatusNotify, ciuUpgradeImageVersionTable=ciuUpgradeImageVersionTable, ciuUpgradeTargetAction=ciuUpgradeTargetAction, ciuUpgradeOpStatusOperation=ciuUpgradeOpStatusOperation, ciuImageVariableName=ciuImageVariableName, ciuUpgradeImageVersionIndex=ciuUpgradeImageVersionIndex, ciuUpgradeOpStatusModule=ciuUpgradeOpStatusModule, ciuVersionCompChkGroup=ciuVersionCompChkGroup, ciuVersionCompUpgradeImpact=ciuVersionCompUpgradeImpact, ciuUpgradeMiscGroup=ciuUpgradeMiscGroup, ciuUpgradeOpStatusOperIndex=ciuUpgradeOpStatusOperIndex, ciuImageUpgradeGroup=ciuImageUpgradeGroup, ciuImageLocInputEntryStatus=ciuImageLocInputEntryStatus, ciuUpgradeOpStatus=ciuUpgradeOpStatus, ciuImageURIGroup=ciuImageURIGroup, ciuUpgradeMiscInfoTable=ciuUpgradeMiscInfoTable, ciuUpgradeTargetEntry=ciuUpgradeTargetEntry, ciscoImageUpgradeMIB=ciscoImageUpgradeMIB, ciuImageVariableTable=ciuImageVariableTable, ciuUpgradeOpStatusJobStatusReas=ciuUpgradeOpStatusJobStatusReas, ciuUpgradeOpLastCommand=ciuUpgradeOpLastCommand, ciuVersionCompUpgradeBios=ciuVersionCompUpgradeBios, ciuImageUpgradeComplianceRev3=ciuImageUpgradeComplianceRev3, ciuVersionCompUpgradeLoader=ciuVersionCompUpgradeLoader, ciuUpgradeTargetTable=ciuUpgradeTargetTable, ciuUpgradeOpCompletionNotify=ciuUpgradeOpCompletionNotify, ciscoImageUpgradeMIBObjects=ciscoImageUpgradeMIBObjects, ciuVersionCompChkTable=ciuVersionCompChkTable, ciuUpgradeOpStatusTable=ciuUpgradeOpStatusTable, ciuImageURI=ciuImageURI, ciuUpgradeOpStatusSrcImageLoc=ciuUpgradeOpStatusSrcImageLoc, ciuImageLocInputEntry=ciuImageLocInputEntry, 
ciuUpgradeImageVersionGroup=ciuUpgradeImageVersionGroup, ciuVersionCompImageSame=ciuVersionCompImageSame, ciuUpgradeMiscInfoGroup=ciuUpgradeMiscInfoGroup, ciuUpgradeOpLastStatusReason=ciuUpgradeOpLastStatusReason, ciuUpgradeMiscInfoIndex=ciuUpgradeMiscInfoIndex, ciuUpgradeMiscInfoEntry=ciuUpgradeMiscInfoEntry, ciuUpgradeImageVersionRunning=ciuUpgradeImageVersionRunning, ciuImageVariableEntry=ciuImageVariableEntry, CiuImageVariableTypeName=CiuImageVariableTypeName, ciscoImageUpgradeMisc=ciscoImageUpgradeMisc, ciscoImageUpgradeConfig=ciscoImageUpgradeConfig, ciuImageUpgradeCompliances=ciuImageUpgradeCompliances, ciuUpgradeOpStatusDestImageLoc=ciuUpgradeOpStatusDestImageLoc, ciuImageLocInputGroup=ciuImageLocInputGroup, ciuUpgradeOpTimeCompleted=ciuUpgradeOpTimeCompleted, ciuUpgradeMiscInfoModule=ciuUpgradeMiscInfoModule, ciuUpgradeTargetGroup=ciuUpgradeTargetGroup, ciuImageVariableGroup=ciuImageVariableGroup, ciuImageURITable=ciuImageURITable, ciscoImageUpgradeMIBNotifs=ciscoImageUpgradeMIBNotifs, ciuVersionCompUpgradeAction=ciuVersionCompUpgradeAction, ciuUpgradeMiscAutoCopy=ciuUpgradeMiscAutoCopy, ciuUpgradeOpNotifyOnCompletion=ciuUpgradeOpNotifyOnCompletion, ciuUpgradeImageVersionNew=ciuUpgradeImageVersionNew, ciuUpgradeOpCommand=ciuUpgradeOpCommand, ciuImageUpgradeGroups=ciuImageUpgradeGroups, ciuVersionCompUpgradeBootrom=ciuVersionCompUpgradeBootrom, ciuUpgradeOpStatusPercentCompl=ciuUpgradeOpStatusPercentCompl, ciuUpgradeNotificationGroupSup=ciuUpgradeNotificationGroupSup, ciuUpgradeOpStatusJobStatus=ciuUpgradeOpStatusJobStatus, ciuUpgradeJobStatusNotifyOnCompletion=ciuUpgradeJobStatusNotifyOnCompletion, ciuUpgradeOpNewGroup=ciuUpgradeOpNewGroup, ciuUpgradeImageVersionEntry=ciuUpgradeImageVersionEntry, ciuUpgradeOpTimeStarted=ciuUpgradeOpTimeStarted, ciuUpgradeTargetEntryStatus=ciuUpgradeTargetEntryStatus, ciuImageUpgradeComplianceRev4=ciuImageUpgradeComplianceRev4, ciuUpgradeOpStatusGroup=ciuUpgradeOpStatusGroup, ciuImageURIEntry=ciuImageURIEntry, 
ciuUpgradeOpLastStatus=ciuUpgradeOpLastStatus, ciuVersionCompUpgradable=ciuVersionCompUpgradable, ciuVersionCompChkEntry=ciuVersionCompChkEntry, ciuUpgradeMiscInfoDescr=ciuUpgradeMiscInfoDescr, ciuImageLocInputTable=ciuImageLocInputTable, ciuUpgradeImageVersionVarName=ciuUpgradeImageVersionVarName, ciuImageUpgradeCompliance=ciuImageUpgradeCompliance, ciuUpgradeNotificationGroup=ciuUpgradeNotificationGroup, ciscoImageUpgradeMIBConform=ciscoImageUpgradeMIBConform, ciuImageUpgradeComplianceRev2=ciuImageUpgradeComplianceRev2, ciuUpgradeOpStatusEntry=ciuUpgradeOpStatusEntry, PYSNMP_MODULE_ID=ciscoImageUpgradeMIB, ciuImageUpgradeComplianceRev1=ciuImageUpgradeComplianceRev1, ciscoImageUpgradeOp=ciscoImageUpgradeOp)
| 222.72459 | 4,682 | 0.794453 | 8,056 | 67,931 | 6.698858 | 0.096698 | 0.032465 | 0.056814 | 0.044472 | 0.478283 | 0.370178 | 0.327799 | 0.29867 | 0.274099 | 0.264908 | 0 | 0.033822 | 0.102957 | 67,931 | 304 | 4,683 | 223.457237 | 0.851781 | 0.005005 | 0 | 0.068966 | 0 | 0.186207 | 0.548371 | 0.122429 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.034483 | 0 | 0.048276 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7084db86248022329a93791876ae956973fe9671 | 842 | py | Python | src/schedulers/capacity_coefficient_scheduler.py | jp172/covid19-hospital-scheduler | 0931ac7b91f3e7fdbad741c5fc92577278dfc823 | [
"MIT"
] | 5 | 2020-03-22T22:46:15.000Z | 2020-03-25T14:16:49.000Z | src/schedulers/capacity_coefficient_scheduler.py | jp172/covid19-hospital-scheduler | 0931ac7b91f3e7fdbad741c5fc92577278dfc823 | [
"MIT"
] | null | null | null | src/schedulers/capacity_coefficient_scheduler.py | jp172/covid19-hospital-scheduler | 0931ac7b91f3e7fdbad741c5fc92577278dfc823 | [
"MIT"
] | 1 | 2020-03-22T20:44:01.000Z | 2020-03-22T20:44:01.000Z | from .utils import get_feasible_hospitals
from ..objects.proposal import RankedProposal
class CapacityScheduler:
def assign_request(self, instance, request):
hospitals = instance.get_hospitals_in_area(request.person.position)
# get feasible hospitals checks for vehicle range and free beds
feasible_hospitals = get_feasible_hospitals(
hospitals, request.person.position
)
# todo: Take the min really from all hospitals here?
if not feasible_hospitals:
best_hospital = min(instance.hospitals.values(), key=lambda h: h.capacity_coefficient)
return RankedProposal([best_hospital])
else:
feasible_hospitals = sorted(feasible_hospitals, key=lambda h: h.capacity_coefficient)[:3]
return RankedProposal(feasible_hospitals)
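The branch logic in `assign_request` above (fall back to the single least-loaded hospital when nothing is feasible, otherwise keep the three feasible hospitals with the lowest capacity coefficient) can be sketched in isolation. `Hospital` here is a hypothetical stand-in for the project's own hospital class:

```python
from dataclasses import dataclass


@dataclass
class Hospital:
    """Hypothetical stand-in for the project's hospital object."""
    name: str
    capacity_coefficient: float


def rank_hospitals(all_hospitals, feasible_hospitals, top_n=3):
    """Mirror CapacityScheduler's selection rule."""
    if not feasible_hospitals:
        # Fall back to the single least-loaded hospital overall.
        return [min(all_hospitals, key=lambda h: h.capacity_coefficient)]
    # Otherwise keep the top_n least-loaded feasible hospitals.
    return sorted(feasible_hospitals, key=lambda h: h.capacity_coefficient)[:top_n]


hospitals = [Hospital("a", 0.9), Hospital("b", 0.2), Hospital("c", 0.5), Hospital("d", 0.7)]
print([h.name for h in rank_hospitals(hospitals, [])])         # fallback branch: ['b']
print([h.name for h in rank_hospitals(hospitals, hospitals)])  # top-3 branch: ['b', 'c', 'd']
```

Since `sorted` is stable, hospitals with equal coefficients keep their original relative order.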
| 36.608696 | 101 | 0.709026 | 93 | 842 | 6.236559 | 0.516129 | 0.234483 | 0.103448 | 0.037931 | 0.103448 | 0.103448 | 0 | 0 | 0 | 0 | 0 | 0.001534 | 0.225653 | 842 | 22 | 102 | 38.272727 | 0.888037 | 0.133017 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 0 | 1 | 0.071429 | false | 0 | 0.142857 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
70855b3dc786603fc81ca670fb90be336edaa6a8 | 643 | py | Python | picha/forms.py | amon-wanyonyi/gallery-app | 89b1e9f68cdb112e732f902e4d2bd5d89f2eb5d7 | [
"MIT"
] | null | null | null | picha/forms.py | amon-wanyonyi/gallery-app | 89b1e9f68cdb112e732f902e4d2bd5d89f2eb5d7 | [
"MIT"
] | null | null | null | picha/forms.py | amon-wanyonyi/gallery-app | 89b1e9f68cdb112e732f902e4d2bd5d89f2eb5d7 | [
"MIT"
] | null | null | null | from django import forms
from django.forms import ModelForm
from .models import Category,Image, Location
class ImageForm(forms.ModelForm):
class Meta:
model = Image
fields = '__all__'
CATEGORIES =(
("1", "Cars"),
("2", "Food"),
("3", "Travel"),
("4", "Animals"),
("5", "Nature"),
("6", "Sports"),
)
class ImagesForm(forms.Form):
image = forms.ImageField(required=True)
pic_name = forms.CharField(required=True)
description = forms.CharField(required=True)
location = forms.CharField(required=True)
pic_category = forms.ChoiceField(choices=CATEGORIES, required=True) | 26.791667 | 71 | 0.645412 | 71 | 643 | 5.760563 | 0.549296 | 0.146699 | 0.161369 | 0.190709 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011765 | 0.206843 | 643 | 24 | 71 | 26.791667 | 0.790196 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.52381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
70aa44c4f89295464ac761beec4c73fc72588ede | 796 | py | Python | rockiot/app/migrations/0039_auto_20211015_1658.py | isocserbia/rockiot | fc0c52cca4ca862ba570d6b2c64b8d30f8989f05 | [
"Apache-2.0"
] | 2 | 2022-01-23T11:03:37.000Z | 2022-02-26T10:54:10.000Z | rockiot/app/migrations/0039_auto_20211015_1658.py | isocserbia/rockiot | fc0c52cca4ca862ba570d6b2c64b8d30f8989f05 | [
"Apache-2.0"
] | null | null | null | rockiot/app/migrations/0039_auto_20211015_1658.py | isocserbia/rockiot | fc0c52cca4ca862ba570d6b2c64b8d30f8989f05 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 3.2.8 on 2021-10-15 16:58
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('app', '0038_auto_20211012_1634'),
]
operations = [
migrations.RenameField(
model_name='device',
old_name='erase_wifi_credentials_at',
new_name='last_event_sent_at',
),
migrations.RemoveField(
model_name='device',
name='zero_config_at',
),
migrations.RenameField(
model_name='devicelogentry',
old_name='erase_wifi_credentials_at',
new_name='last_event_sent_at',
),
migrations.RemoveField(
model_name='devicelogentry',
name='zero_config_at',
),
] | 25.677419 | 49 | 0.579146 | 80 | 796 | 5.425 | 0.525 | 0.082949 | 0.119816 | 0.138249 | 0.373272 | 0.373272 | 0.373272 | 0.373272 | 0.373272 | 0.373272 | 0 | 0.057196 | 0.319095 | 796 | 31 | 50 | 25.677419 | 0.743542 | 0.056533 | 0 | 0.72 | 1 | 0 | 0.24 | 0.097333 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.04 | 0 | 0.16 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
560871495a84c94431f674d70f54871a629205e0 | 215 | py | Python | Practice/Beginner/Number Mirror/PYTHON/number-mirror.py | avantikat/codechef-1 | 11e678a05ad339e799bb876606297dff979e9e03 | [
"MIT"
] | 2 | 2021-01-19T05:48:45.000Z | 2021-01-20T11:58:39.000Z | Practice/Beginner/Number Mirror/PYTHON/number-mirror.py | avantikat/codechef-1 | 11e678a05ad339e799bb876606297dff979e9e03 | [
"MIT"
] | null | null | null | Practice/Beginner/Number Mirror/PYTHON/number-mirror.py | avantikat/codechef-1 | 11e678a05ad339e799bb876606297dff979e9e03 | [
"MIT"
] | 3 | 2021-01-21T06:58:27.000Z | 2021-02-09T13:20:29.000Z | """
AUTHOR - Atharva Deshpande
GITHUB - https://github.com/AtharvaD11
QUESTION LINK - https://www.codechef.com/problems/START01
"""
n = int(input())
print(n)
| 17.916667 | 63 | 0.525581 | 21 | 215 | 5.380952 | 0.809524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022857 | 0.186047 | 215 | 11 | 64 | 19.545455 | 0.622857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
561949f52308487819299b30d31e6d89b37b7253 | 407 | py | Python | test/git_copy_test.py | rec/gitz | cbb07f99dd002c85b5ca95896b33d03150bf9282 | [
"MIT"
] | 24 | 2019-07-26T03:57:16.000Z | 2021-11-22T22:39:13.000Z | test/git_copy_test.py | rec/gitz | cbb07f99dd002c85b5ca95896b33d03150bf9282 | [
"MIT"
] | 212 | 2019-06-13T13:44:26.000Z | 2020-06-02T17:59:51.000Z | test/git_copy_test.py | rec/gitz | cbb07f99dd002c85b5ca95896b33d03150bf9282 | [
"MIT"
] | 2 | 2019-08-09T13:55:38.000Z | 2019-09-07T11:17:59.000Z | from gitz.git import GIT
from gitz.git import functions
from gitz.git import repo
import unittest
class GitCopyTest(unittest.TestCase):
@repo.test
def test_simple(self):
repo.make_commit('1')
GIT.new('one')
GIT.copy('two', '-v')
expected = {'origin': ['master', 'one', 'two'], 'upstream': ['master']}
self.assertEqual(functions.remote_branches(), expected)
| 27.133333 | 79 | 0.643735 | 51 | 407 | 5.078431 | 0.568627 | 0.092664 | 0.127413 | 0.196911 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003086 | 0.203931 | 407 | 14 | 80 | 29.071429 | 0.796296 | 0 | 0 | 0 | 0 | 0 | 0.100737 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 1 | 0.083333 | false | 0 | 0.333333 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
561b49e04cb1faae723f199cd2bb350c8e9b89bf | 1,970 | py | Python | ndnt/paths.py | Masynchin/ndnt | 091c121d452b58bf099db2b4e19b96c3380a984d | [
"MIT"
] | 1 | 2022-01-29T15:39:46.000Z | 2022-01-29T15:39:46.000Z | ndnt/paths.py | Masynchin/ndnt | 091c121d452b58bf099db2b4e19b96c3380a984d | [
"MIT"
] | 1 | 2022-03-31T22:44:20.000Z | 2022-03-31T22:44:20.000Z | ndnt/paths.py | Masynchin/ndnt | 091c121d452b58bf099db2b4e19b96c3380a984d | [
"MIT"
] | null | null | null | """Paths interface and its implementations."""
from pathlib import Path
from typing import Iterable
from pathspec import PathSpec
from pathspec.patterns import GitWildMatchPattern
from ndnt.extension import Extension
class Paths(Iterable[Path]):
"""Paths inteface.
`Paths` behaves like `Iterable[Path]`.
"""
class FilesPaths(Paths):
"""Files paths.
All paths that are files from given directory.
"""
def __init__(self, path: Path):
self.path = path
def __iter__(self) -> Iterable[Path]:
return filter(Path.is_file, self.path.glob("**/*"))
class ExtensionPaths(Paths):
"""Extension paths.
Paths with certain extension.
"""
def __init__(self, origin: Paths, extension: Extension):
self.origin = origin
self.extension = extension
def __iter__(self) -> Iterable[Path]:
return filter(lambda path: self.extension == path.suffix, self.origin)
class ExcludeGitignoredPaths(Paths):
"""Paths exluding paths that matches gitignore.
Optimization inspired by "black" package:
https://github.com/psf/black/blob/fda2561f79e10826dbdeb900b6124d642766229f/src/black/files.py#L177
"""
def __init__(self, path: Path, gitignore_path: Path):
self.path = path
self.gitignore_path = gitignore_path
def __iter__(self) -> Iterable[Path]:
gitignore = PathSpec.from_lines(
GitWildMatchPattern, self.gitignore_path.read_text().splitlines()
)
yield from self._iter(self.path, gitignore)
def _iter(self, path: Path, gitignore: PathSpec) -> Iterable[Path]:
"""Optimized gitignore paths filter.
Do not iterate directories if they are in gitignore.
"""
for p in path.iterdir():
if gitignore.match_file(p):
continue
if p.is_file():
yield p
elif p.is_dir():
yield from self._iter(p, gitignore)
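The recursion in `_iter` prunes whole subtrees: an ignored directory is skipped before it is ever walked, which is the optimization the docstring credits to black. The same shape with a plain predicate standing in for the `PathSpec` match (the names here are illustrative):

```python
import tempfile
from pathlib import Path


def iter_files(root: Path, is_ignored) -> list:
    """Collect files under root, skipping ignored entries without descending into them."""
    found = []
    for p in root.iterdir():
        if is_ignored(p):
            continue  # an ignored directory is never walked at all
        if p.is_file():
            found.append(p)
        elif p.is_dir():
            found.extend(iter_files(p, is_ignored))
    return found


with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "keep.py").touch()
    (root / "node_modules").mkdir()
    (root / "node_modules" / "dep.py").touch()
    names = [p.name for p in iter_files(root, lambda p: p.name == "node_modules")]
    print(names)  # ['keep.py']
```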
| 25.921053 | 102 | 0.649746 | 223 | 1,970 | 5.573991 | 0.358744 | 0.045052 | 0.04827 | 0.045857 | 0.134352 | 0.081255 | 0.056315 | 0 | 0 | 0 | 0 | 0.020202 | 0.246193 | 1,970 | 75 | 103 | 26.266667 | 0.816835 | 0.243655 | 0 | 0.147059 | 0 | 0 | 0.002853 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205882 | false | 0 | 0.147059 | 0.058824 | 0.529412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
564124cab865da25819f725850fcd01e601316a7 | 1,551 | py | Python | src/core_backend/service/plugin.py | jhchen3121/wechat_shop | c9d9ad009df1e5bb0eb23ca8d830dd5c15df5328 | [
"Apache-2.0"
] | null | null | null | src/core_backend/service/plugin.py | jhchen3121/wechat_shop | c9d9ad009df1e5bb0eb23ca8d830dd5c15df5328 | [
"Apache-2.0"
] | 5 | 2021-01-28T21:18:27.000Z | 2022-03-25T19:10:01.000Z | src/core_backend/service/plugin.py | jhchen3121/wechat_shop | c9d9ad009df1e5bb0eb23ca8d830dd5c15df5328 | [
"Apache-2.0"
] | null | null | null | #-*- coding:utf-8 -*-
import sys, traceback
from core_backend import context
from core_backend.libs.exception import Error
import logging
#logger = Log.getDebugLogger()
#logger.setLevel(logging.INFO)
logger = logging.getLogger(__name__)
class plugin(object):
def __init__(self, handler, session):
self.handler = handler
self.session = session
self.context = self.handler.context
self.request = self.context.request
self.service = self.handler._service
def process(self):
pass
def post_process(self):
pass
class PluginHandler(object):
def __init__(self, handler, session):
self.handler = handler
self.session = session
self.plg_modules = []
self.plg_inst_list = []
def import_module(self, module, fromlist):
# return __import__(self.get_module(m), fromlist=["plugins"])
return __import__(module, fromlist=fromlist)
def load_plugins(self, plg_module):
plgconfig = self.import_module(plg_module, [plg_module])
module_files = plgconfig.plugins_modules
for f in module_files:
m = self.import_module(plg_module + '.' + f, [plg_module])
self.plg_modules.append(m)
ins = m.Plugin(self.handler, self.session)
self.plg_inst_list.append(ins)
def run_plugins(self):
for ins in self.plg_inst_list:
ins.process()
def run_post_plugins(self):
for ins in self.plg_inst_list:
ins.post_process()
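`import_module` relies on a subtlety of the built-in `__import__`: with an empty `fromlist` it returns the top-level package, while a non-empty `fromlist` makes it return the leaf module itself, which is what `load_plugins` needs to reach `m.Plugin`. A quick stdlib check:

```python
# With an empty fromlist, __import__("json.decoder") imports the submodule but
# returns the top-level package; a non-empty fromlist returns the leaf module.
top = __import__("json.decoder")
leaf = __import__("json.decoder", fromlist=["json"])
print(top.__name__)   # json
print(leaf.__name__)  # json.decoder
```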
| 27.696429 | 70 | 0.646035 | 188 | 1,551 | 5.058511 | 0.287234 | 0.080967 | 0.046267 | 0.063091 | 0.279706 | 0.227129 | 0.227129 | 0.227129 | 0.227129 | 0.227129 | 0 | 0.000865 | 0.254674 | 1,551 | 55 | 71 | 28.2 | 0.821799 | 0.088975 | 0 | 0.263158 | 0 | 0 | 0.00071 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.210526 | false | 0.052632 | 0.210526 | 0.026316 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
564972393480d24b55b21815eb789d50277da209 | 1,630 | py | Python | tfx/orchestration/portable/input_resolution/exceptions.py | ajmarcus/tfx | 28ac2be5ace31ca733f6292495f8be83484a1730 | [
"Apache-2.0"
] | null | null | null | tfx/orchestration/portable/input_resolution/exceptions.py | ajmarcus/tfx | 28ac2be5ace31ca733f6292495f8be83484a1730 | [
"Apache-2.0"
] | null | null | null | tfx/orchestration/portable/input_resolution/exceptions.py | ajmarcus/tfx | 28ac2be5ace31ca733f6292495f8be83484a1730 | [
"Apache-2.0"
] | null | null | null | # Copyright 2021 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Module for input resolution exceptions.
Theses exceptions can be raised from ResolverStartegy or ResolverOp
implementation, and each exception is specially handled in input resolution
process. Other errors raised during the input resolution will not be catched
and be propagated to the input resolution caller.
"""
# Disable lint errors that enforces all exception name to end with -Error.
# pylint: disable=g-bad-exception-name
class InputResolutionError(Exception):
"""Base exception class for input resolution related errors."""
class InputResolutionSignal(Exception):
"""Base exception class for non-error input resolution signals."""
class SkipSignal(InputResolutionSignal):
"""Special signal to resolve empty input resolution result.
Normally empty input resolution result is regarded as an error (in synchronous
pipeline). Raising SkipSignal would resolve empty list input without an error.
TFX Conditional uses SkipSignal to _not_ execute components if the branch
predicate evaluates to false.
"""
| 38.809524 | 80 | 0.784663 | 227 | 1,630 | 5.625551 | 0.572687 | 0.09397 | 0.02036 | 0.025059 | 0.046985 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005827 | 0.157669 | 1,630 | 41 | 81 | 39.756098 | 0.924253 | 0.878528 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
5658ab7b24fbbc0c083f951f0b6a2a80579889fa | 295 | py | Python | aula8/aula08.py | matheusluz071098/Curso-de-Python | 633bcfd3b444327d969e3e6fd6ad37cef910e048 | [
"MIT"
] | null | null | null | aula8/aula08.py | matheusluz071098/Curso-de-Python | 633bcfd3b444327d969e3e6fd6ad37cef910e048 | [
"MIT"
] | null | null | null | aula8/aula08.py | matheusluz071098/Curso-de-Python | 633bcfd3b444327d969e3e6fd6ad37cef910e048 | [
"MIT"
] | null | null | null | # lesson 8
# tuples
numeros = [1, 4, 6]
usuario = {'nome': 'user', 'passwd': 123456}
pessoa = ('matheus', 'luz', 0, 45, 5, numeros)
print(numeros)
print(usuario)
print(pessoa)
numeros[1] = 5
usuario['passwd'] = 456123
lista_pessoa = []
lista_pessoa.append(pessoa)
# pessoa[4][1] = 6
print(pessoa[5][1])  # the list nested in the tuple (index 5) reflects the change to numeros
56721a21d88665f27b61aff6cca650492e5300f3 | 691 | py | Python | S4/S4 Library/simulation/objects/lighting/lighting_object_interactions.py | NeonOcean/Environment | ca658cf66e8fd6866c22a4a0136d415705b36d26 | [
"CC-BY-4.0"
] | 1 | 2021-05-20T19:33:37.000Z | 2021-05-20T19:33:37.000Z | S4/S4 Library/simulation/objects/lighting/lighting_object_interactions.py | NeonOcean/Environment | ca658cf66e8fd6866c22a4a0136d415705b36d26 | [
"CC-BY-4.0"
] | null | null | null | S4/S4 Library/simulation/objects/lighting/lighting_object_interactions.py | NeonOcean/Environment | ca658cf66e8fd6866c22a4a0136d415705b36d26 | [
"CC-BY-4.0"
] | null | null | null | from objects.lighting.lighting_interactions import SwitchLightImmediateInteraction
from objects.object_state_utils import ObjectStateHelper
import sims4
logger = sims4.log.Logger('LightingAndObjectState', default_owner='mkartika')
class SwitchLightAndStateImmediateInteraction(SwitchLightImmediateInteraction):
INSTANCE_TUNABLES = {'state_settings': ObjectStateHelper.TunableFactory(description='\n Find objects in the same room or lot based on the tags and \n set state to the desired state.\n ')}
def _run_interaction_gen(self, timeline):
yield from super()._run_interaction_gen(timeline)
self.state_settings.execute_helper(self)
| 57.583333 | 224 | 0.778582 | 75 | 691 | 6.986667 | 0.653333 | 0.041985 | 0.064886 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003425 | 0.154848 | 691 | 11 | 225 | 62.818182 | 0.893836 | 0 | 0 | 0 | 0 | 0.111111 | 0.254703 | 0.031838 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
5679b51f6118b18e3e4e239059714dfed0909e87 | 196 | py | Python | contacts/urls.py | melodyPereira05/PropertyDekho | 9cc860cd9645de4c21efbb755dd2312ffea684b0 | [
"MIT"
] | null | null | null | contacts/urls.py | melodyPereira05/PropertyDekho | 9cc860cd9645de4c21efbb755dd2312ffea684b0 | [
"MIT"
] | null | null | null | contacts/urls.py | melodyPereira05/PropertyDekho | 9cc860cd9645de4c21efbb755dd2312ffea684b0 | [
"MIT"
] | null | null | null | from django.urls import path
from . import views
urlpatterns = [
path('<int:sproperty_id>/',views.contact,name="contact"),
path('',views.contact_submit,name="contact-submit"),
] | 19.6 | 61 | 0.673469 | 24 | 196 | 5.416667 | 0.541667 | 0.184615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163265 | 196 | 10 | 62 | 19.6 | 0.792683 | 0 | 0 | 0 | 0 | 0 | 0.203046 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
5683f938e7eeefc9e75b94fdc3d8a1fc14161acb | 1,124 | py | Python | ManualTest/eventCheckAndActionTest.py | terapotan/NewBreakingBlocks | 72927942d4a5088eae2191ce1d46562599f05cc8 | [
"MIT"
] | null | null | null | ManualTest/eventCheckAndActionTest.py | terapotan/NewBreakingBlocks | 72927942d4a5088eae2191ce1d46562599f05cc8 | [
"MIT"
] | null | null | null | ManualTest/eventCheckAndActionTest.py | terapotan/NewBreakingBlocks | 72927942d4a5088eae2191ce1d46562599f05cc8 | [
"MIT"
] | null | null | null | import unittest
class mainTest(unittest.TestCase):
    def test_checkCollectEventCheckListContent(self):
        self.assertEqual(input('Is "EventOccurCheckClassesInAFrame:dummyEventCheck1,dummyEventCheck2,dummyEventCheck3," displayed? (y/n)'), 'y')
    def test_checkCollectEventQueueListContent(self):
        self.assertEqual(input('Is "EventExecuteClassesInAFrame:dummyEventAction1,dummyEventAction2,dummyEventAction3," displayed? (y/n)'), 'y')
    def test_checkCollectEventActionListContent(self):
        self.assertEqual(input('Is "EventExecuseList:dummyEventAction1,dummyEventAction2,dummyEventAction3," displayed? (y/n)'), 'y')
    def test_checkCollectEventQueueListAfterCalling(self):
        self.assertEqual(input('Is "EventAfterQueueListEmpty:Yes" displayed? (y/n)'), 'y')
    def test_checkCollectEventActionQueContent(self):
        self.assertEqual(input('Is "EventActionQue:eventAction1,eventAction2" displayed? (y/n)'), 'y')
    def test_checkCallEventActionQueContent(self):
        self.assertEqual(input('After the function call, does it read "EventActionQue:" with nothing displayed? (y/n)'), 'y')
# Prevent these tests from running when this file is imported by another module
if __name__ == "__main__":
unittest.main() | 53.52381 | 135 | 0.790925 | 95 | 1,124 | 9.210526 | 0.421053 | 0.048 | 0.130286 | 0.164571 | 0.204571 | 0.181714 | 0.16 | 0.16 | 0.16 | 0 | 0 | 0.010827 | 0.096085 | 1,124 | 21 | 136 | 53.52381 | 0.850394 | 0.032028 | 0 | 0 | 0 | 0 | 0.397424 | 0.384545 | 0 | 0 | 0 | 0 | 0.375 | 1 | 0.375 | false | 0 | 0.0625 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
56887c191046d293831578b1ba492a974431002a | 1,276 | py | Python | apps/gsekit/admin.py | iSecloud/bk-process-config-manager | f44c01b7a28dd9328cce6e6066eae42d5365070d | [
"MIT"
] | 8 | 2021-07-08T06:53:57.000Z | 2022-03-14T04:05:27.000Z | apps/gsekit/admin.py | iSecloud/bk-process-config-manager | f44c01b7a28dd9328cce6e6066eae42d5365070d | [
"MIT"
] | 107 | 2021-07-22T02:20:07.000Z | 2022-03-14T08:37:23.000Z | apps/gsekit/admin.py | iSecloud/bk-process-config-manager | f44c01b7a28dd9328cce6e6066eae42d5365070d | [
"MIT"
] | 12 | 2021-07-09T08:59:01.000Z | 2022-03-08T13:40:41.000Z | # -*- coding: utf-8 -*-
"""
Tencent is pleased to support the open source community by making 蓝鲸 (Blueking) available.
Copyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.
Licensed under the MIT License (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at https://opensource.org/licenses/MIT
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
"""
from apps.gsekit.configfile import admin as configfile_admin
from apps.gsekit.process import admin as process_admin
from apps.gsekit.job import admin as job_admin
from apps.gsekit.meta import admin as meta_admin
__all__ = ["configfile_admin", "process_admin", "job_admin", "meta_admin"]
try:
from apps.gsekit.migrate import admin as migrate_admin # noqa
except ImportError:
pass
else:
__all__.append("migrate_admin")
try:
from apps.gsekit.adapters.bscp import admin as bscp_admin # noqa
except ImportError:
pass
else:
__all__.append("bscp_admin")
| 39.875 | 115 | 0.776646 | 194 | 1,276 | 4.984536 | 0.505155 | 0.062048 | 0.086867 | 0.058945 | 0.134436 | 0.088935 | 0.088935 | 0.088935 | 0 | 0 | 0 | 0.010195 | 0.154389 | 1,276 | 31 | 116 | 41.16129 | 0.886006 | 0.547806 | 0 | 0.470588 | 0 | 0 | 0.12522 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.117647 | 0.470588 | 0 | 0.470588 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
568fbabab11c92333a7f2b9aac0fe6461e379248 | 449 | py | Python | Intensivo-Python/Cap-7/Exer-Cap-7.10.py | RodrigoTAbreu/Python-3 | 9bf0578c1ed52283c8d8516a9052557bde038947 | [
"MIT"
] | null | null | null | Intensivo-Python/Cap-7/Exer-Cap-7.10.py | RodrigoTAbreu/Python-3 | 9bf0578c1ed52283c8d8516a9052557bde038947 | [
"MIT"
] | null | null | null | Intensivo-Python/Cap-7/Exer-Cap-7.10.py | RodrigoTAbreu/Python-3 | 9bf0578c1ed52283c8d8516a9052557bde038947 | [
"MIT"
] | null | null | null | local = {}
incluir = True
while incluir:
    nome = input('What is your name? : ')
    local_escolhido = input('Where will you go on your next vacation? : ')
    local[nome] = local_escolhido
    repetir = input('Would you like to add someone else to the poll? (Yes/No): ')
    if repetir.lower() == 'no':
        incluir = False
print('Poll results')
for nome, local_escolhido in local.items():
    print('{} will spend the vacation in {}'.format(nome, local_escolhido))
| 29.933333 | 72 | 0.665924 | 59 | 449 | 5 | 0.559322 | 0.122034 | 0.244068 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.202673 | 449 | 14 | 73 | 32.071429 | 0.824022 | 0 | 0 | 0 | 0 | 0 | 0.336303 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.083333 | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
569169ad848535e2faa15e35b1462c29898e6bd8 | 168 | py | Python | examples/loopyfunction.py | j00st/larry | 9ab8b28a4870b78fa91d2989971e7334e8433cbf | [
"MIT"
] | null | null | null | examples/loopyfunction.py | j00st/larry | 9ab8b28a4870b78fa91d2989971e7334e8433cbf | [
"MIT"
] | null | null | null | examples/loopyfunction.py | j00st/larry | 9ab8b28a4870b78fa91d2989971e7334e8433cbf | [
"MIT"
] | null | null | null | def sommig(n):
result = 0
    while n >= 1:
        result += n
        n -= 1
return result
print(sommig(3))
print(sommig(8))
print(sommig(17))
print(sommig(33)) | 15.272727 | 19 | 0.565476 | 26 | 168 | 3.653846 | 0.5 | 0.463158 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072581 | 0.261905 | 168 | 11 | 20 | 15.272727 | 0.693548 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0 | 0 | 0.2 | 0.4 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
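The loop in `sommig` accumulates n + (n-1) + ... + 1, i.e. the n-th triangular number n*(n+1)/2, which explains the printed values (6, 36, 153, 561). A self-contained restatement with that closed-form check:

```python
def sommig(n):
    """Sum of the integers 1..n, computed by the same loop as above."""
    result = 0
    while n >= 1:
        result += n
        n -= 1
    return result


# The loop agrees with the closed form n*(n+1)/2 for every tested n.
for n in (3, 8, 17, 33):
    assert sommig(n) == n * (n + 1) // 2
print(sommig(3), sommig(8), sommig(17), sommig(33))  # 6 36 153 561
```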
5697cd974f29d6e18eee5667af6ba721d9b34b38 | 159 | py | Python | CursoemVideo/ex006.py | arthxvr/coding--python | 1e91707be6cb8fef816dad0c1a65f2cc3327357e | [
"MIT"
] | null | null | null | CursoemVideo/ex006.py | arthxvr/coding--python | 1e91707be6cb8fef816dad0c1a65f2cc3327357e | [
"MIT"
] | null | null | null | CursoemVideo/ex006.py | arthxvr/coding--python | 1e91707be6cb8fef816dad0c1a65f2cc3327357e | [
"MIT"
] | null | null | null | numero = int(input('Number: '))
dobro = numero*2
triplo = numero*3
raiz = numero**(1/2)
print(f'Double: {dobro}, Triple: {triplo}, Square root: {raiz:.2f}')
| 22.714286 | 69 | 0.647799 | 24 | 159 | 4.291667 | 0.583333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036232 | 0.132075 | 159 | 6 | 70 | 26.5 | 0.710145 | 0 | 0 | 0 | 0 | 0 | 0.421384 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.2 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
569a457d8dd99941191d9661b2fbcf1d45459de3 | 4,509 | py | Python | river/optim/base.py | online-ml/creme | 60872844e6052b5ef20e4075aea30f9031377136 | [
"BSD-3-Clause"
] | 1,105 | 2019-01-24T15:15:30.000Z | 2020-11-10T18:27:00.000Z | river/optim/base.py | online-ml/creme | 60872844e6052b5ef20e4075aea30f9031377136 | [
"BSD-3-Clause"
] | 328 | 2019-01-25T13:48:43.000Z | 2020-11-11T11:41:44.000Z | river/optim/base.py | online-ml/creme | 60872844e6052b5ef20e4075aea30f9031377136 | [
"BSD-3-Clause"
] | 150 | 2019-01-29T19:05:21.000Z | 2020-11-11T11:50:14.000Z | import abc
import numbers
from typing import Union
import numpy as np
from river import base, optim, utils
VectorLike = Union[utils.VectorDict, np.ndarray]
__all__ = ["Initializer", "Scheduler", "Optimizer", "Loss"]
class Initializer(base.Base, abc.ABC):
"""An initializer is used to set initial weights in a model."""
@abc.abstractmethod
def __call__(self, shape=1):
"""Returns a fresh set of weights.
Parameters
----------
shape
Indicates how many weights to return. If `1`, then a single scalar value will be
returned.
"""
class Scheduler(base.Base, abc.ABC):
"""Can be used to program the learning rate schedule of an `optim.base.Optimizer`."""
@abc.abstractmethod
def get(self, t: int) -> float:
"""Returns the learning rate at a given iteration.
Parameters
----------
t
The iteration number.
"""
def __repr__(self):
return f"{self.__class__.__name__}({vars(self)})"
class Optimizer(base.Base):
"""Optimizer interface.
Every optimizer inherits from this base interface.
Parameters
----------
lr
Attributes
----------
learning_rate : float
Returns the current learning rate value.
"""
def __init__(self, lr: Union[Scheduler, float]):
if isinstance(lr, numbers.Number):
lr = optim.schedulers.Constant(lr)
self.lr = lr
self.n_iterations = 0
@property
def learning_rate(self) -> float:
return self.lr.get(self.n_iterations)
def look_ahead(self, w: dict) -> dict:
"""Updates a weight vector before a prediction is made.
        Parameters
        ----------
        w
            A dictionary of weight parameters. The weights are modified in-place.
        Returns
        -------
        The updated weights.
        """
return w
def _step_with_dict(self, w: dict, g: dict) -> dict:
raise NotImplementedError
def _step_with_vector(self, w: VectorLike, g: VectorLike) -> VectorLike:
raise NotImplementedError
def step(
self, w: Union[dict, VectorLike], g: Union[dict, VectorLike]
) -> Union[dict, VectorLike]:
"""Updates a weight vector given a gradient.
Parameters
----------
w
A vector-like object containing weights. The weights are modified in-place.
g
A vector-like object of gradients.
Returns
-------
The updated weights.
"""
if isinstance(w, VectorLike.__args__) and isinstance(g, VectorLike.__args__):
try:
w = self._step_with_vector(w, g)
self.n_iterations += 1
return w
except NotImplementedError:
pass
w = self._step_with_dict(w, g)
self.n_iterations += 1
return w
def __repr__(self):
return f"{self.__class__.__name__}({vars(self)})"
class Loss(base.Base, abc.ABC):
"""Base class for all loss functions."""
def __repr__(self):
return f"{self.__class__.__name__}({vars(self)})"
@abc.abstractmethod
def __call__(self, y_true, y_pred):
"""Returns the loss.
Parameters
----------
y_true
Ground truth(s).
y_pred
Prediction(s).
Returns
-------
The loss(es).
"""
@abc.abstractmethod
def gradient(self, y_true, y_pred):
"""Return the gradient with respect to y_pred.
Parameters
----------
y_true
Ground truth(s).
y_pred
Prediction(s).
Returns
-------
The gradient(s).
"""
@abc.abstractmethod
def mean_func(self, y_pred):
"""Mean function.
This is the inverse of the link function. Typically, a loss function takes as input the raw
output of a model. In the case of classification, the raw output would be logits. The mean
function can be used to convert the raw output into a value that makes sense to the user,
such as a probability.
Parameters
----------
y_pred
Raw prediction(s).
Returns
-------
The adjusted prediction(s).
References
----------
[^1]: [Wikipedia section on link and mean function](https://www.wikiwand.com/en/Generalized_linear_model#/Link_function)
"""
| 23.731579 | 128 | 0.569528 | 521 | 4,509 | 4.74856 | 0.309021 | 0.032336 | 0.04042 | 0.016977 | 0.169361 | 0.137025 | 0.11439 | 0.11439 | 0.094179 | 0.094179 | 0 | 0.001956 | 0.319583 | 4,509 | 189 | 129 | 23.857143 | 0.804433 | 0.413839 | 0 | 0.333333 | 0 | 0 | 0.073278 | 0.057157 | 0 | 0 | 0 | 0 | 0 | 1 | 0.259259 | false | 0.018519 | 0.092593 | 0.074074 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
569f8079d548bf4d08ebe4449569ea28cde1fcb4 | 5,434 | py | Python | haros_plugin_pbt_gen/selectors.py | git-afsantos/haros-plugin-pbt | 388c990117f357fa2079be48918fd3486c895597 | [
"MIT"
] | null | null | null | haros_plugin_pbt_gen/selectors.py | git-afsantos/haros-plugin-pbt | 388c990117f357fa2079be48918fd3486c895597 | [
"MIT"
] | 6 | 2019-10-23T08:13:32.000Z | 2020-02-11T18:58:26.000Z | haros_plugin_pbt_gen/selectors.py | git-afsantos/haros-plugin-pbt | 388c990117f357fa2079be48918fd3486c895597 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# SPDX-License-Identifier: MIT
# Copyright © 2019 André Santos
###############################################################################
# Imports
###############################################################################
from builtins import map
from builtins import object
from builtins import range # Python 2 and 3: forward-compatible
###############################################################################
# Field Selector
###############################################################################
class Accessor(object):
__slots__ = ("field_name", "ros_type")
def __init__(self, field_name, type_token):
self.field_name = field_name
self.ros_type = type_token
@property
def is_dynamic(self):
return False
@property
def is_range(self):
return False
def matches(self, field_name):
return self.field_name == field_name
def __str__(self):
return self.field_name
def __repr__(self):
return "{}({}, {})".format(
type(self).__name__, self.field_name, repr(self.ros_type))
class RangeAccessor(Accessor):
__slots__ = Accessor.__slots__
def __init__(self, field_name, type_token):
raise NotImplementedError()
@property
def is_range(self):
return True
class DynamicAccessor(Accessor):
__slots__ = Accessor.__slots__ + ("predicate",)
def __init__(self, field_name, type_token):
assert not type_token.is_array
Accessor.__init__(self, field_name, type_token)
self.predicate = self._make_predicate(field_name)
@property
def is_dynamic(self):
return True
def matches(self, key):
return self.predicate(key)
def _make_predicate(self, field_name):
assert field_name.startswith("*")
if field_name == "*":
return _universal
star, indices = field_name.split("\\")
indices = list(map(int, indices.split(",")))
return self._all_except(indices)
def _all_except(self, indices):
def predicate(key):
return key not in indices
return predicate
class Selector(object):
__slots__ = ("expression", "base_type", "accessors")
_DYNAMIC = "*"
def __init__(self, field_expr, ros_type):
self.expression = field_expr
self.base_type = ros_type
accessors = []
parts = field_expr.replace("]", "").replace("[", ".").split(".")
if not parts:
raise ValueError(field_expr)
type_token = ros_type
for field_name in parts:
if type_token.is_array:
accessor = self._array_access(field_name, type_token)
type_token = accessor.ros_type
accessors.append(accessor)
elif not type_token.is_primitive:
type_token = type_token.fields[field_name]
accessor = Accessor(field_name, type_token)
accessors.append(accessor)
else:
raise KeyError(field_name)
self.accessors = tuple(accessors)
@property
def field_name(self):
return self.accessors[-1].field_name
@property
def ros_type(self):
return self.accessors[-1].ros_type
@property
def is_dynamic(self):
for accessor in self.accessors:
if accessor.is_dynamic:
return True
return False
@property
def is_array_field(self):
return (len(self.accessors) >= 2
and self.accessors[-2].ros_type.is_array)
def subselect(self, n):
if n <= 0:
raise IndexError(n)
if n >= len(self.accessors):
return self
parts = [self.accessors[0].field_name]
array = self.accessors[0].ros_type.is_array
for i in range(1, n):
f = self.accessors[i]
if array:
parts.append("[")
parts.append(f.field_name)
parts.append("]")
else:
parts.append(".")
parts.append(f.field_name)
array = f.ros_type.is_array
selector = Selector("".join(parts), self.base_type)
return selector
def _array_access(self, field_name, array_type):
if field_name.startswith(self._DYNAMIC):
return DynamicAccessor(field_name, array_type.type_token)
try:
index = int(field_name, 10)
        except ValueError:
            raise KeyError(field_name)
if not array_type.contains_index(index):
raise KeyError(field_name)
return Accessor(field_name, array_type.type_token)
def __iter__(self):
return iter(self.accessors)
def __eq__(self, other):
if not isinstance(other, Selector):
return False
return (self.expression == other.expression
and self.base_type == other.base_type)
def __hash__(self):
return 31 * hash(self.expression) + hash(self.base_type)
def __str__(self):
return self.expression
def __repr__(self):
return "Selector({}, {})".format(self.expression, repr(self.base_type))
###############################################################################
# Predicates
###############################################################################
def _universal(key):
return True
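The `DynamicAccessor` predicate logic above ("*" matches every index, while "*\i,j" matches every index except those listed after the backslash) can be sketched standalone; `make_predicate` below is a hypothetical free-function version of `_make_predicate`:

```python
def make_predicate(field_name):
    # "*" matches any index; "*\1,3" matches all indices except 1 and 3.
    assert field_name.startswith("*")
    if field_name == "*":
        return lambda key: True
    _star, indices = field_name.split("\\")
    excluded = set(map(int, indices.split(",")))
    return lambda key: key not in excluded

pred = make_predicate("*\\1,3")
print([i for i in range(5) if pred(i)])  # [0, 2, 4]
```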
| 29.058824 | 79 | 0.552079 | 577 | 5,434 | 4.882149 | 0.183709 | 0.108626 | 0.050763 | 0.038339 | 0.207313 | 0.126021 | 0.065673 | 0 | 0 | 0 | 0 | 0.004788 | 0.269783 | 5,434 | 186 | 80 | 29.215054 | 0.704889 | 0.02742 | 0 | 0.278195 | 0 | 0 | 0.019571 | 0 | 0 | 0 | 0 | 0 | 0.015038 | 1 | 0.203008 | false | 0 | 0.022556 | 0.12782 | 0.503759 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
3b23404ad012614ddf55740308271ae89117500e | 1,835 | py | Python | positionEncoding.py | GehartM/german-text-watermarking | f9765702225e0cfdce868eec816ed6e6fd4ffc63 | [
"MIT"
] | 1 | 2021-04-08T11:23:46.000Z | 2021-04-08T11:23:46.000Z | positionEncoding.py | GehartM/german-text-watermarking | f9765702225e0cfdce868eec816ed6e6fd4ffc63 | [
"MIT"
] | null | null | null | positionEncoding.py | GehartM/german-text-watermarking | f9765702225e0cfdce868eec816ed6e6fd4ffc63 | [
"MIT"
] | null | null | null | from treeNode import TreeNode
from collections import defaultdict
from treeNodeTravelThrewTreeNodes import TravelThrewTreeNodes
# positionEncoding maps each encoding to its value.
# That value corresponds to the position of the letter in the next line.
positionEncoding = defaultdict(list, {
'0': ['1000001', '1011111'],
'1': ['1000010', '1100000'],
'2': ['1000011', '1100001'],
'3': ['1000100', '1100010'],
'4': ['1000101', '1100011'],
'5': ['1000110', '1100100'],
'6': ['1000111', '1100101'],
'7': ['1001000', '1100110'],
'8': ['1001001', '1100111'],
'9': ['1001010', '1101000'],
'10': ['1001011', '1101001'],
'11': ['1001100', '1101010'],
'12': ['1001101', '1101011'],
'13': ['1001110', '1101100'],
'14': ['1001111', '1101101'],
'15': ['1010000', '1101110'],
'16': ['1010001', '1101111'],
'17': ['1010010', '1110000'],
'18': ['1010011', '1110001'],
'19': ['1010100', '1110010'],
'20': ['1010101', '1110011'],
'21': ['1010110', '1110100'],
'22': ['1010111', '1110101'],
'23': ['1011000', '1110110'],
'24': ['1011001', '1110111'],
'25': ['1011010', '1111000'],
'26': ['1011011', '1111001'],
'27': ['1011100', '1111010'],
'28': ['1011101', '1111011'],
'29': ['1011110', '1111100']
})
# Create the root object of the tree
rootTreeNode = TreeNode()
# Add every branch of the tree, storing the position at the end of each branch.
for position, possibleEncodings in positionEncoding.items():
    # Iterate over every possible encoding
for encoding in possibleEncodings:
TravelThrewTreeNodes(rootTreeNode, encoding, position)
def GetRootTreeNode():
global rootTreeNode
return rootTreeNode
| 32.767857 | 86 | 0.595095 | 170 | 1,835 | 6.423529 | 0.817647 | 0.032967 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.323691 | 0.208719 | 1,835 | 55 | 87 | 33.363636 | 0.428375 | 0.173297 | 0 | 0 | 0 | 0 | 0.322802 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02381 | false | 0 | 0.071429 | 0 | 0.119048 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3b275a2ff20e3730ac6fbbf5f863ef39cb1f2363 | 1,227 | py | Python | revenue/utils/date_utils.py | osnory/BE-Challenge | bd8a3bcfc72624e99a228612a75f092371c1fb29 | [
"MIT"
] | null | null | null | revenue/utils/date_utils.py | osnory/BE-Challenge | bd8a3bcfc72624e99a228612a75f092371c1fb29 | [
"MIT"
] | null | null | null | revenue/utils/date_utils.py | osnory/BE-Challenge | bd8a3bcfc72624e99a228612a75f092371c1fb29 | [
"MIT"
] | null | null | null | import datetime
from datetime import datetime as dt
API_DATE_FORMAT = "%d/%m/%Y"
CSV_DATE_FORMAT = "%d/%m/%y %H:%M"
def from_api_string(date: str):
"""
:param date: in format as comes from api, such as 03/06/2020
:return: datetime object.
"""
return dt.strptime(date, API_DATE_FORMAT)
def from_csv_format(date: str):
"""
:param date: in DATE_FORMAT format, such as 03/06/2020
:return: datetime object.
"""
return dt.strptime(date, CSV_DATE_FORMAT)
def to_string(date: dt):
"""
converts date to string
    :param date: to convert
:return: into DATE_FORMAT format, such as 03/06/2020
"""
return date.strftime(API_DATE_FORMAT)
def get_day_range(start: dt, end: dt):
"""
Helper function to return all dates in between two dates
:param start: day to start from (incl)
:param end: day to end on (incl)
:return: inclusive dates for the range between start and end
"""
return [start + datetime.timedelta(days=i) for i in range((end - start).days + 1)]
def date_to_key(d: dt):
"""
:param d: date object
:return: string key from the day, month and year
"""
return "{}/{}/{}".format(d.day, d.month, d.year)
| 22.722222 | 86 | 0.640587 | 190 | 1,227 | 4.026316 | 0.289474 | 0.091503 | 0.05098 | 0.039216 | 0.290196 | 0.20915 | 0.20915 | 0.20915 | 0.20915 | 0.141176 | 0 | 0.026652 | 0.235534 | 1,227 | 53 | 87 | 23.150943 | 0.788913 | 0.431133 | 0 | 0 | 0 | 0 | 0.051813 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.357143 | false | 0 | 0.142857 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
3b302bae18a10ec14b7a8b3b8cfb72b6ca62106b | 771 | py | Python | argentum-api/api/tests/data/configs.py | devium/argentum | 2bbb0f663fe9be78d106b1afa409b094da449519 | [
"MIT"
] | 1 | 2019-10-07T09:47:08.000Z | 2019-10-07T09:47:08.000Z | argentum-api/api/tests/data/configs.py | devium/argentum | 2bbb0f663fe9be78d106b1afa409b094da449519 | [
"MIT"
] | null | null | null | argentum-api/api/tests/data/configs.py | devium/argentum | 2bbb0f663fe9be78d106b1afa409b094da449519 | [
"MIT"
] | null | null | null | from api.models.config import Config
from api.tests.utils.test_objects import TestObjects
class TestConfigs(TestObjects):
MODEL = Config
POSTPAID_LIMIT: MODEL
POSTPAID_LIMIT_PATCHED: MODEL
@classmethod
def init(cls):
# These models are created in the initial Django signal, not from this data. These are just for easy handling.
cls.POSTPAID_LIMIT = cls.MODEL(id=10010, key='postpaid_limit', value='0.00')
cls.SAVED = [cls.POSTPAID_LIMIT]
cls.POSTPAID_LIMIT_PATCHED = cls.MODEL(id=10011, key='postpaid_limit', value='-10.00')
cls.UNSAVED = [cls.POSTPAID_LIMIT_PATCHED]
@classmethod
def create(cls):
TestConfigs.POSTPAID_LIMIT.id = Config.objects.get(key=TestConfigs.POSTPAID_LIMIT.key).id
| 32.125 | 118 | 0.714656 | 104 | 771 | 5.163462 | 0.451923 | 0.242086 | 0.119181 | 0.070764 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027157 | 0.188067 | 771 | 23 | 119 | 33.521739 | 0.830671 | 0.140078 | 0 | 0.133333 | 0 | 0 | 0.057489 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.133333 | 0 | 0.533333 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
3b35b0560293592911b399dcbbc67e0ebbbf2568 | 3,505 | py | Python | FatherSon/HelloWorld2_source_code/Listing_23-11.py | axetang/AxePython | 3b517fa3123ce2e939680ad1ae14f7e602d446a6 | [
"Apache-2.0"
] | 1 | 2019-01-04T05:47:50.000Z | 2019-01-04T05:47:50.000Z | FatherSon/HelloWorld2_source_code/Listing_23-11.py | axetang/AxePython | 3b517fa3123ce2e939680ad1ae14f7e602d446a6 | [
"Apache-2.0"
] | null | null | null | FatherSon/HelloWorld2_source_code/Listing_23-11.py | axetang/AxePython | 3b517fa3123ce2e939680ad1ae14f7e602d446a6 | [
"Apache-2.0"
] | null | null | null | # Listing_23-11.py
# Copyright Warren & Carter Sande, 2013
# Released under MIT license http://www.opensource.org/licenses/mit-license.php
# Version $version ----------------------------
# Crazy Eights - the main loop with scoring added
# Note that this is not a complete program. It needs to be put together
# with the other parts of Crazy Eights to make a working program.
done = False
p_total = c_total = 0
while not done:
game_done = False
blocked = 0
init_cards() # set up deck and create player's and computer's hands
while not game_done:
player_turn()
if len(p_hand) == 0: # player won
game_done = True
print
print "You won!"
# display game score here
p_points = 0
for card in c_hand:
p_points += card.value # add up points from computer's remaining cards
p_total += p_points # add points from this game to the total
print"You got %i points for computer's hand" % p_points
if not game_done:
computer_turn()
if len(c_hand) == 0: # computer won
game_done = True
print
print "Computer won!"
# display game score here
c_points = 0
for card in p_hand:
c_points += card.value # add up points from player's remaining cards
c_total += c_points # add points from this game to the total
print"Computer got %i points for your hand" % c_points
if blocked >= 2: # both player and computer blocked, so both get points
game_done = True
print "Both players blocked. GAME OVER."
            p_points = 0
for card in c_hand:
p_points += card.value
p_total += p_points
c_points = 0
for card in p_hand:
c_points += card.value
c_total += c_points
# print points for this game
print"You got %i points for computer's hand" % p_points
print"Computer got %i points for your hand" % c_points
play_again = raw_input("Play again (Y/N)? ")
if play_again.lower().startswith('y'):
done = False
# print total points so far
print "\nSo far, you have %i points" % p_total
print "and the computer has %i points.\n" % c_total
else:
done = True
# print final totals
print "\n Final Score:"
print "You: %i Computer: %i" % (p_total, c_total)
| 52.313433 | 131 | 0.417404 | 356 | 3,505 | 3.983146 | 0.308989 | 0.039492 | 0.036671 | 0.039492 | 0.362482 | 0.330042 | 0.294781 | 0.273625 | 0.273625 | 0.273625 | 0 | 0.010204 | 0.524679 | 3,505 | 66 | 132 | 53.106061 | 0.840936 | 0.292725 | 0 | 0.5 | 0 | 0 | 0.134034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.26 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3b3bc432d990ad451477e385dce0ed17ba1791fc | 3,103 | py | Python | urls.py | audaciouscode/PassiveDataKit-Django | ed1e00c436801b9f49a3e0e6657c2adb6b2ba3d4 | [
"Apache-2.0"
] | 5 | 2016-01-26T19:19:44.000Z | 2018-12-12T18:04:04.000Z | urls.py | audacious-software/PassiveDataKit-Django | da91a375c075ceec938f2c9bb6b011f9f019b024 | [
"Apache-2.0"
] | 6 | 2020-02-17T20:16:28.000Z | 2021-12-13T21:51:20.000Z | urls.py | audacious-software/PassiveDataKit-Django | da91a375c075ceec938f2c9bb6b011f9f019b024 | [
"Apache-2.0"
] | 4 | 2020-01-29T15:36:58.000Z | 2021-06-01T18:55:26.000Z | # pylint: disable=line-too-long
from django.conf.urls import url
from django.contrib.auth.views import LogoutView
from django.conf import settings
from .views import pdk_add_data_point, pdk_add_data_bundle, pdk_app_config, pdk_issues, \
pdk_issues_json, pdk_fetch_metadata_json
urlpatterns = [
url(r'^add-point.json$', pdk_add_data_point, name='pdk_add_data_point'),
url(r'^add-bundle.json$', pdk_add_data_bundle, name='pdk_add_data_bundle'),
url(r'^app-config.json$', pdk_app_config, name='pdk_app_config'),
]
try:
from .withings_views import pdk_withings_start, pdk_withings_auth
if settings.PDK_WITHINGS_CLIENT_ID and settings.PDK_WITHINGS_SECRET:
urlpatterns.append(url(r'^withings/start/(?P<source_id>.+)$', pdk_withings_start, name='pdk_withings_start'))
urlpatterns.append(url(r'^withings/auth$', pdk_withings_auth, name='pdk_withings_auth'))
except AttributeError:
pass
try:
if settings.PDK_DASHBOARD_ENABLED:
from .views import pdk_home, pdk_unmatched_sources, pdk_source, pdk_source_generator, \
pdk_visualization_data, pdk_export, pdk_download_report, \
pdk_system_health, pdk_profile
urlpatterns.append(url(r'^visualization/(?P<source_id>.+)/(?P<generator_id>.+)/(?P<page>\d+).json$', \
pdk_visualization_data, name='pdk_visualization_data'))
urlpatterns.append(url(r'^report/(?P<report_id>\d+)/download$', pdk_download_report, name='pdk_download_report'))
urlpatterns.append(url(r'^source/(?P<source_id>.+)/(?P<generator_id>.+)$', pdk_source_generator, name='pdk_source_generator'))
urlpatterns.append(url(r'^source/(?P<source_id>.+)$', pdk_source, name='pdk_source'))
urlpatterns.append(url(r'^export$', pdk_export, name='pdk_export'))
urlpatterns.append(url(r'^system-health$', pdk_system_health, name='pdk_system_health'))
urlpatterns.append(url(r'^profile$', pdk_profile, name='pdk_profile'))
urlpatterns.append(url(r'^fetch-metadata.json$', pdk_fetch_metadata_json, name='pdk_fetch_metadata_json'))
urlpatterns.append(url(r'^issues.json$', pdk_issues_json, name='pdk_issues_json'))
urlpatterns.append(url(r'^issues$', pdk_issues, name='pdk_issues'))
urlpatterns.append(url(r'^unmatched-sources.json$', pdk_unmatched_sources, name='pdk_unmatched_sources'))
urlpatterns.append(url(r'^logout$', LogoutView.as_view(), name='pdk_logout'))
urlpatterns.append(url(r'^$', pdk_home, name='pdk_home'))
except AttributeError:
pass
try:
if settings.PDK_API_ENABLED:
from .api_views import pdk_request_token, pdk_data_point_query, pdk_data_source_query
urlpatterns.append(url(r'^api/request-token.json$', pdk_request_token, name='pdk_request_token'))
urlpatterns.append(url(r'^api/data-points.json$', pdk_data_point_query, name='pdk_data_point_query'))
urlpatterns.append(url(r'^api/data-sources.json$', pdk_data_source_query, name='pdk_data_source_query'))
except AttributeError:
pass
| 55.410714 | 134 | 0.716081 | 425 | 3,103 | 4.898824 | 0.157647 | 0.040346 | 0.172911 | 0.181556 | 0.255524 | 0.191643 | 0.073007 | 0.034582 | 0 | 0 | 0 | 0 | 0.145021 | 3,103 | 55 | 135 | 56.418182 | 0.784772 | 0.009346 | 0 | 0.195652 | 0 | 0 | 0.259766 | 0.135742 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.065217 | 0.152174 | 0 | 0.152174 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
3b48b6b797f56fc6a1d2416f53d86ebe58909abf | 974 | py | Python | aboutme/migrations/0001_initial.py | tharindubasnnayaka/MyWebApp | b9f9adae5df01f495a459eeee1083288ba568be9 | [
"MIT"
] | 2 | 2018-06-22T22:31:55.000Z | 2018-10-09T17:14:35.000Z | aboutme/migrations/0001_initial.py | tharindubasnnayaka/MyWebApp | b9f9adae5df01f495a459eeee1083288ba568be9 | [
"MIT"
] | 7 | 2018-10-04T18:56:33.000Z | 2020-10-11T18:03:22.000Z | aboutme/migrations/0001_initial.py | tharindubasnnayaka/MyWebApp | b9f9adae5df01f495a459eeee1083288ba568be9 | [
"MIT"
] | 17 | 2018-10-04T19:09:44.000Z | 2022-03-17T10:27:28.000Z | # Generated by Django 2.1a1 on 2018-07-30 18:50
import aboutme.models
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Me',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=100)),
('title', models.CharField(max_length=500)),
('location', models.CharField(max_length=250)),
('bio', models.TextField()),
('tags', models.TextField(default="''")),
('work', models.CharField(max_length=500)),
('education', models.CharField(max_length=500)),
('profileimage', models.ImageField(blank=True, null=True, upload_to=aboutme.models.get_image_path)),
],
),
]
| 32.466667 | 116 | 0.572895 | 98 | 974 | 5.581633 | 0.602041 | 0.137112 | 0.164534 | 0.219378 | 0.14808 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043103 | 0.285421 | 974 | 29 | 117 | 33.586207 | 0.742816 | 0.046201 | 0 | 0 | 1 | 0 | 0.061489 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3b5052d4ea1e103c62f23657a9f8a91d6cc99d36 | 6,756 | py | Python | src/dbn_samples.py | jamesrobertlloyd/kmc-research | 291c39a3766c2d06c561044173b582cb251eb85e | [
"MIT"
] | 9 | 2016-01-17T13:14:11.000Z | 2020-09-21T20:19:34.000Z | src/dbn_samples.py | jamesrobertlloyd/kmc-research | 291c39a3766c2d06c561044173b582cb251eb85e | [
"MIT"
] | null | null | null | src/dbn_samples.py | jamesrobertlloyd/kmc-research | 291c39a3766c2d06c561044173b582cb251eb85e | [
"MIT"
] | 5 | 2015-09-08T14:56:00.000Z | 2019-09-10T19:33:07.000Z | """
Generate and save samples from a dbn trained on mnist
Created Decemeber 2013
@authors: James Robert Lloyd (jrl44@cam.ac.uk)
"""
import numpy as np
import matplotlib.pyplot as plt
import itertools
import os.path
import cloud
from deep_learning.rbm_label import train_rbm
from deep_learning.logistic_sgd import load_data
def train_and_sample_from_dbn(random_seed=1,
dataset='bucket/mnist.pkl.gz',
epochs=15,
architecture=[500,500,2000],
samples=1,
plot_every=1000):
# Setup
rbms = []
original_dataset='bucket/mnist.pkl.gz'
# Pretraining loop
for (i, n_hidden) in enumerate(architecture):
# Train
print 'Training rbm %d' % (i+1)
(rbm, train_set_x, train_set_y, test_set_x, test_set_y) = train_rbm(learning_rate=0.1, training_epochs=epochs,
n_hidden=n_hidden,
dataset=dataset,
random_seed=random_seed,
augment_with_labels=(i==len(architecture)-1))
rbms.append(rbm)
if i < len(architecture) - 1:
print 'Passing data through rbm %d' % (i+1)
# Pass data through rbm
# First reload data to get correct object types
datasets = load_data(original_dataset)
pseudo_train_set_x, pseudo_train_set_y = datasets[0]
pseudo_test_set_x, pseudo_test_set_y = datasets[2]
x_train_array = train_set_x.get_value()
x_test_array = test_set_x.get_value()
pseudo_x_train_array = np.zeros((x_train_array.shape[0], architecture[0]))
pseudo_x_test_array = np.zeros((x_test_array.shape[0], architecture[0]))
W = rbm.W.get_value()
bias = np.tile(rbm.hbias.get_value(), (x_train_array.shape[0],1))
#### TODO - should I be using mean activations or random activations?
#### Currently using mean activations
print 'Computing training features'
pre_sigmoid_activation = np.dot(x_train_array, W) + bias
hid_prob = 1 / (1 + np.exp(-pre_sigmoid_activation))
pseudo_x_train_array = hid_prob
bias = np.tile(rbm.hbias.get_value(), (x_test_array.shape[0],1))
print 'Computing testing features'
pre_sigmoid_activation = np.dot(x_test_array, W) + bias
hid_prob = 1 / (1 + np.exp(-pre_sigmoid_activation))
pseudo_x_test_array = hid_prob
pseudo_train_set_x.set_value(pseudo_x_train_array)
pseudo_test_set_x.set_value(pseudo_x_test_array)
dataset = (pseudo_train_set_x, pseudo_train_set_y, pseudo_test_set_x, pseudo_test_set_y)
print 'Pretraining complete'
# Reload original data
datasets = load_data(original_dataset)
train_set_x, train_set_y = datasets[0]
test_set_x, test_set_y = datasets[2]
# Sampling
rng = np.random.RandomState(random_seed)
images = np.zeros((0,28*28))
labels = np.zeros((0,1))
number_of_train_samples = train_set_x.get_value(borrow=True).shape[0]
count = 0
print 'Sampling images'
while count < samples:
# Pick random test example, with which to initialize the persistent chain
train_idx = rng.randint(number_of_train_samples - 1)
starting_image = np.asarray(train_set_x.get_value(borrow=True)[train_idx:train_idx+1])
vis = starting_image
        # Propagate the image up the rbms
for rbm in rbms[:-1]:
pre_sigmoid_activation = np.dot(vis, rbm.W.get_value()) + rbm.hbias.get_value()
hid_prob = 1 / (1 + np.exp(-pre_sigmoid_activation))
vis = (hid_prob > np.random.rand(hid_prob.shape[0], hid_prob.shape[1])) * 1
# Append label
y_list = train_set_y.owner.inputs[0].get_value()
y_ind = np.zeros((1, 10))
y_ind[0,y_list[train_idx]] = 1
vis = np.hstack((vis, y_ind))
W = rbms[-1].W.get_value()
h_bias = rbms[-1].hbias.get_value()
v_bias = rbms[-1].vbias.get_value()
# Gibbs sample in the autoassociative memory
for dummy in range(plot_every):
pre_sigmoid_activation = np.dot(vis, W) + h_bias
hid_prob = 1 / (1 + np.exp(-pre_sigmoid_activation))
hid = (hid_prob > np.random.rand(hid_prob.shape[0], hid_prob.shape[1])) * 1
pre_sigmoid_activation = np.dot(hid, W.T) + v_bias
vis_prob = 1 / (1 + np.exp(-pre_sigmoid_activation))
vis = (vis_prob > np.random.rand(vis_prob.shape[0], vis.shape[1])) * 1
# Clamp
vis[0,-10:] = y_ind
        # Propagate the image down the rbms
vis = vis[:,:-10]
for rbm in reversed(rbms[:-1]):
pre_sigmoid_activation = np.dot(vis, rbm.W.get_value().T) + rbm.vbias.get_value()
vis_prob = 1 / (1 + np.exp(-pre_sigmoid_activation))
vis = (vis_prob > np.random.rand(vis_prob.shape[0], vis_prob.shape[1])) * 1
vis_image = vis_prob
images = np.vstack((images, vis_image))
labels = np.vstack((labels, y_list[train_idx]))
np.savetxt('images.csv', images, delimiter=',')
np.savetxt('labels.csv', labels, delimiter=',')
count += 1
print 'Sampled %d images' % count
return (rbms, images, labels)
def train_and_sample(random_seed):
(rbms, images, labels) = train_and_sample_from_dbn(random_seed=random_seed)
return (images, labels)
def main(n_rbms=5, save_folder='../data/mnist/many-rbm-samples/default', cloud_simulation=True):
execfile('picloud_misc_credentials.py')
if cloud_simulation:
cloud.start_simulator()
if not os.path.isdir(save_folder):
os.makedirs(save_folder)
seeds = [np.random.randint(2**31) for dummy in range(n_rbms)]
print 'Sending jobs'
job_ids = cloud.map(train_and_sample, seeds, _type='f2', _cores=1)
print 'Jobs sent'
images = np.zeros((0,28*28))
labels = np.zeros((0,1))
count = 1
for (some_images, some_labels) in cloud.iresult(job_ids):
print 'Job %d of %d complete' % (count, n_rbms)
count += 1
images = np.vstack((images, some_images))
labels = np.vstack((labels, some_labels))
np.savetxt(os.path.join(save_folder, 'images.csv'), images, delimiter=',')
np.savetxt(os.path.join(save_folder, 'labels.csv'), labels, delimiter=',')
return (images, labels) | 42.490566 | 121 | 0.600503 | 921 | 6,756 | 4.14658 | 0.222584 | 0.031422 | 0.062844 | 0.034564 | 0.391987 | 0.317099 | 0.260539 | 0.197172 | 0.168369 | 0.139303 | 0 | 0.022671 | 0.288336 | 6,756 | 159 | 122 | 42.490566 | 0.771631 | 0.062167 | 0 | 0.136752 | 1 | 0 | 0.054675 | 0.010514 | 0 | 0 | 0 | 0.006289 | 0 | 0 | null | null | 0.017094 | 0.059829 | null | null | 0.08547 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3b60ef8e91e7392daee62726820909fae0e9778a | 2,345 | py | Python | erica/api/v2/endpoints/tax.py | digitalservice4germany/erica | 7e07d88f3db78ab6e4f7cccad8dfef2a4b3a71b2 | [
"MIT"
] | 3 | 2022-01-31T15:17:17.000Z | 2022-03-01T16:15:47.000Z | erica/api/v2/endpoints/tax.py | digitalservice4germany/erica | 7e07d88f3db78ab6e4f7cccad8dfef2a4b3a71b2 | [
"MIT"
] | 59 | 2022-01-31T14:04:20.000Z | 2022-03-31T20:08:47.000Z | erica/api/v2/endpoints/tax.py | digitalservice4germany/erica | 7e07d88f3db78ab6e4f7cccad8dfef2a4b3a71b2 | [
"MIT"
] | 1 | 2022-03-10T09:24:28.000Z | 2022-03-10T09:24:28.000Z | from uuid import UUID
from fastapi import status, APIRouter
from starlette.requests import Request
from starlette.responses import FileResponse, RedirectResponse
from erica.api.v2.responses.model import response_model_get_tax_number_validity_from_queue, response_model_post_to_queue
from erica.application.JobService.job_service_factory import get_job_service
from erica.application.Shared.service_injector import get_service
from erica.application.tax_number_validation.TaxNumberValidityService import TaxNumberValidityServiceInterface
from erica.application.tax_number_validation.check_tax_number_dto import CheckTaxNumberDto
from erica.domain.Shared.EricaRequest import RequestType
router = APIRouter()
@router.post('/tax_number_validity', status_code=status.HTTP_201_CREATED, responses=response_model_post_to_queue)
async def is_valid_tax_number(tax_validity_client_identifier: CheckTaxNumberDto, request: Request):
"""
Route for validation of a tax number using the job queue.
:param request: API request object.
:param tax_validity_client_identifier: payload with client identifier and the JSON input data for the tax number validity check.
"""
result = get_job_service(RequestType.check_tax_number).add_to_queue(
tax_validity_client_identifier.payload, tax_validity_client_identifier.client_identifier,
RequestType.check_tax_number)
return RedirectResponse(
request.url_for("get_valid_tax_number_job", request_id=str(result.request_id)).removeprefix(str(request.base_url)),
status_code=201)
@router.get('/tax_number_validity/{request_id}', status_code=status.HTTP_200_OK,
            responses=response_model_get_tax_number_validity_from_queue)
async def get_valid_tax_number_job(request_id: UUID):
    """
    Route for retrieving the job status of a tax number validity check from the queue.

    :param request_id: the id of the job.
    """
    tax_number_validity_service: TaxNumberValidityServiceInterface = get_service(RequestType.check_tax_number)
    return tax_number_validity_service.get_response_tax_number_validity(request_id)
@router.get('/tax_offices/', status_code=status.HTTP_200_OK)
def get_tax_offices():
    """
    Returns the list of tax offices for all states.
    """
    return FileResponse("erica/infrastructure/static/tax_offices.json")
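The `is_valid_tax_number` endpoint above answers with a 201 redirect whose location is made relative by stripping the request's base URL with `str.removeprefix` (Python 3.9+). A minimal standalone sketch of that trick, with invented example values standing in for `request.base_url` and the result of `request.url_for(...)`:

```python
# Assumed example values; in the endpoint these come from request.base_url
# and request.url_for("get_valid_tax_number_job", request_id=...).
base_url = "http://testserver/"
absolute_url = "http://testserver/tax_number_validity/0f8fad5b-d9cb-469f-a165-70867728950e"

# str.removeprefix (Python 3.9+) strips the base URL, leaving a location
# relative to the API root, which survives reverse proxies and path prefixes.
relative_location = absolute_url.removeprefix(base_url)
print(relative_location)
```

Returning a relative location is a deliberate choice here: the client resolves it against whatever host and prefix it actually used to reach the API.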
| 49.893617 | 132 | 0.816205 | 316 | 2,345 | 5.721519 | 0.265823 | 0.09458 | 0.084624 | 0.059735 | 0.298119 | 0.149336 | 0.07854 | 0.04646 | 0 | 0 | 0 | 0.00628 | 0.117271 | 2,345 | 46 | 133 | 50.978261 | 0.86715 | 0.027719 | 0 | 0 | 0 | 0 | 0.070975 | 0.053496 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037037 | false | 0 | 0.37037 | 0 | 0.518519 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
3b753f30cfeb206f88b5f9dc2a6515035a15ef8b | 7,956 | py | Python | implementations/warris2018/pyPaSWAS/Core/Formatters.py | r-barnes/sw_comparison | 1ac2c9cc10a32badd6b8fb1e96516c97f7800176 | [
"BSD-Source-Code"
] | 24 | 2015-09-03T13:36:10.000Z | 2021-11-18T15:09:53.000Z | pyPaSWAS/Core/Formatters.py | Aaron1993/pyPaSWAS | 0f1edceba2a002ef30a803cc5c5fe45587ffd2aa | [
"MIT"
] | 6 | 2015-02-25T16:07:19.000Z | 2020-02-28T08:19:25.000Z | pyPaSWAS/Core/Formatters.py | Aaron1993/pyPaSWAS | 0f1edceba2a002ef30a803cc5c5fe45587ffd2aa | [
"MIT"
] | 9 | 2017-10-31T09:55:06.000Z | 2021-12-03T02:47:56.000Z | '''This module contains the output formatters for pyPaSWAS'''
class DefaultFormatter(object):
    '''This is the default formatter for pyPaSWAS.
    All available formatters inherit from this formatter.
    The results are parsed into a temporary file, which can be used by the main
    program for permanent storage, printing etc.
    '''

    def __init__(self, logger, hitlist, outputfile):
        self.name = ''
        self.logger = logger
        self.logger.debug('Initializing formatter...')
        self.hitlist = hitlist
        self.outputfile = outputfile
        self._set_name()
        self.logger.debug('Initialized {0}'.format(self.name))

    def _format_hit(self, hit):
        '''This method may be overruled to enable other formats for printed results.'''
        self.logger.debug('Formatting hit {0}'.format(hit.get_seq_id()))
        formatted_hit = ', '.join([hit.get_seq_id(), hit.get_target_id(), str(hit.seq_location[0]), str(hit.seq_location[1]),
                                   str(hit.target_location[0]), str(hit.target_location[1]), str(hit.score), str(hit.matches),
                                   str(hit.mismatches), str(len(hit.alignment) - hit.matches - hit.mismatches),
                                   str(len(hit.alignment)), str(hit.score / len(hit.alignment)),
                                   str(hit.sequence_info.original_length), str(hit.target_info.original_length),
                                   str(hit.score / hit.sequence_info.original_length),
                                   str(hit.score / hit.target_info.original_length), str(hit.distance)])
        formatted_hit = '\n'.join([formatted_hit, hit.sequence_match, hit.alignment, hit.target_match])
        return formatted_hit

    def _set_name(self):
        '''Name of the formatter. Used for logging.'''
        self.name = 'defaultformatter'

    def _get_hits(self):
        '''Returns an ordered list of hits.'''
        hits = self.hitlist.real_hits.values()
        return sorted(hits, key=lambda hit: (hit.get_seq_id(), hit.get_target_id(), hit.score))

    def print_results(self):
        '''Sets, formats and prints the results to a file.'''
        self.logger.debug('printing results...')
        output = open(self.outputfile, 'w')
        for hit in self._get_hits():
            formatted_hit = self._format_hit(hit)
            output.write(formatted_hit + "\n")
        output.close()  # close the handle so results are flushed to disk
        self.logger.debug('finished printing results')
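`_get_hits` orders the hits by a composite tuple key: sequence id, then target id, then score. A standalone sketch of that multi-key ordering, using invented toy tuples in place of hit objects:

```python
# Toy stand-ins for hit objects (values invented for illustration):
# each tuple is (seq_id, target_id, score).
hits = [("seq2", "t1", 50), ("seq1", "t2", 90), ("seq1", "t1", 30)]

# Same idea as DefaultFormatter._get_hits: Python compares the key tuples
# element by element, so hits group by sequence, then target, then score.
ordered = sorted(hits, key=lambda hit: (hit[0], hit[1], hit[2]))
print(ordered)
```

Tuple keys avoid multiple sort passes; a single `sorted` call yields the full lexicographic ordering.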
class SamFormatter(DefaultFormatter):
    '''This formatter is used to create SAM output.
    See http://samtools.sourceforge.net/SAM1.pdf
    '''

    def __init__(self, logger, hitlist, outputfile):
        '''Since the header contains information about the target sequences and must be
        present before alignment lines, formatted lines are stored before printing.
        '''
        DefaultFormatter.__init__(self, logger, hitlist, outputfile)
        self.sq_lines = {}
        self.record_lines = []

    def _set_name(self):
        '''Name of the formatter. Used for logging.'''
        self.name = 'SAM formatter'

    def _format_hit(self, hit):
        '''Adds a header line to self.sq_lines and an alignment line to self.record_lines.
        The following mappings are used for header lines:
            SN: hit.get_target_id()
            LN: hit.full_target.original_length
        '''
        self.logger.debug('Formatting hit {0}'.format(hit.get_seq_id()))
        # add a header line for the target id if not already present
        if hit.get_target_id() not in self.sq_lines:
            if hit.get_target_id()[-2:] != 'RC':
                self.sq_lines[hit.get_target_id()] = hit.get_sam_sq()
            else:
                self.sq_lines[hit.get_target_id()[:-3]] = hit.get_sam_sq()
        # add a line for the hit
        self.record_lines.append(hit.get_sam_line())

    def print_results(self):
        '''Sets, formats and prints the results to a file.'''
        self.logger.info('formatting results...')
        # format header and hit lines
        for hit in self._get_hits():
            self._format_hit(hit)
        self.logger.debug('printing results...')
        output = open(self.outputfile, 'w')
        # write the header lines to the file
        header_string = '@HD\tVN:1.4\tSO:unknown'
        output.write(header_string + '\n')
        for header_line in self.sq_lines:
            output.write(self.sq_lines[header_line] + '\n')
        # program information header line
        output.write('@PG\tID:0\tPN:paswas\tVN:3.0\n')
        # write the hit lines to the output file
        for line in self.record_lines:
            output.write(line + '\n')
        output.close()
        self.logger.debug('finished printing results')
class TrimmerFormatter(DefaultFormatter):
    '''This formatter is used to create trimmer output: one trimmed record per
    hit, produced by hit.get_trimmed_line().
    '''

    def __init__(self, logger, hitlist, outputfile):
        DefaultFormatter.__init__(self, logger, hitlist, outputfile)
        self.sq_lines = {}
        self.record_lines = []

    def _set_name(self):
        '''Name of the formatter. Used for logging.'''
        self.name = 'Trimmer formatter'

    def _format_hit(self, hit):
        '''Adds a trimmed line for the hit to self.record_lines.'''
        self.logger.debug('Formatting hit {0}'.format(hit.get_seq_id()))
        self.record_lines.append(hit.get_trimmed_line())

    def print_results(self):
        '''Sets, formats and prints the results to a file.'''
        self.logger.info('formatting results...')
        # format the hit lines
        for hit in self._get_hits():
            self._format_hit(hit)
        self.logger.debug('printing results...')
        output = open(self.outputfile, 'w')
        # write the hit lines to the output file
        for line in self.record_lines:
            output.write(line + '\n')
        output.close()
        self.logger.debug('finished printing results')
class FASTA(DefaultFormatter):
    '''This formatter is used to create FASTA output.'''

    def __init__(self, logger, hitlist, outputfile):
        DefaultFormatter.__init__(self, logger, hitlist, outputfile)
        self.sq_lines = {}
        self.record_lines = []

    def _set_name(self):
        '''Name of the formatter. Used for logging.'''
        self.name = 'FASTA formatter'

    def _format_hit(self, hit):
        '''Adds the full FASTA record for the hit to self.record_lines.'''
        self.logger.debug('Formatting hit {0}'.format(hit.get_seq_id()))
        self.record_lines.append(hit.get_full_fasta())

    def print_results(self):
        '''Sets, formats and prints the results to a file.'''
        self.logger.info('formatting results...')
        # format the hit lines
        for hit in self._get_hits():
            self._format_hit(hit)
        self.logger.debug('printing results...')
        output = open(self.outputfile, 'w')
        # write the hit lines to the output file
        for line in self.record_lines:
            output.write(line + '\n')
        output.close()
        self.logger.debug('finished printing results')
| 40.8 | 125 | 0.618778 | 1,009 | 7,956 | 4.718533 | 0.152626 | 0.05251 | 0.044108 | 0.026465 | 0.741021 | 0.724008 | 0.693342 | 0.634741 | 0.624239 | 0.624239 | 0 | 0.003109 | 0.272247 | 7,956 | 194 | 126 | 41.010309 | 0.819171 | 0.295626 | 0 | 0.607843 | 0 | 0 | 0.092264 | 0.010124 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.22549 | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3b867a71bab84e93ec2ed8b6bc8821e5088771cc | 23,656 | py | Python | anno2070/list_of_buildings.py | Aprillion/aprillion.github.io | 9327d2324954e941f50c5171cbcc5bb1b69fc039 | [
"MIT"
] | null | null | null | anno2070/list_of_buildings.py | Aprillion/aprillion.github.io | 9327d2324954e941f50c5171cbcc5bb1b69fc039 | [
"MIT"
] | null | null | null | anno2070/list_of_buildings.py | Aprillion/aprillion.github.io | 9327d2324954e941f50c5171cbcc5bb1b69fc039 | [
"MIT"
] | null | null | null | '''
Created on 13.12.2011
@author: peter.hozak@gmail.com
'''
from __future__ import division
import json, re #, sys
from xml.etree import ElementTree as ET
__version__ = "0.3"
# global settings
# txt files must first be converted from UCS-2 to ANSI encoding :( (xml files are ok)
folder = "C:\\Users\\Peter\\Documents\\ANNO 2070\\"
assets_path = folder + "patch3data\\config\\game\\assets.xml"
icons_txt_path = folder + "eng1\\loca\\eng\\txt\\icons.txt"
guids_txt_path = folder + "eng1\\loca\\eng\\txt\\guids.txt"
properties_path = folder + "patch3data\\config\\game\\properties.xml"
icons_path = folder + "patch3data\\config\\game\\icons.xml"
IconWikiaFilessource_path = folder + "wikia_icons_source.txt"
IconWikiaFiles_path = folder + "wikia_icons_map.csv"
output_name = "list_of_buildings_v" + __version__ + ".json"
model_name = "list_of_buildings_model_v" + __version__ + ".json"
model_url = "http://aprilboy.e404.sk/anno2070/" + model_name
def get_building_list():
    AssetGroups = ET.parse(assets_path).findall(".//Group")
    IconFileNames = parse_IconFileNames()
    IconWikiaFiles = parse_IconWikiaFiles()
    Eng1 = parse_Eng1()
    ProductGUIDs = parse_ProductGUIDs()
    BaseGoldPrices = parse_BaseGoldPrices()
    Unlocks = parse_Unlocks()
    buildings = []
    for top_group in AssetGroups:
        if top_group.find("Name").text == "Buildings":
            break
    for faction in top_group.findall("Groups/Group"):
        faction_name = faction.find("Name").text
        for group in faction.findall(".//Groups/Group"):
            group_name = group.find("Name").text
            if group_name == "farmfield":
                group_name = "farmfields"
            for asset in group.findall("Assets/Asset"):
                try:
                    template = asset.find("Template").text
                    if template == "SimpleObject":  # SimpleObjects (farm_field_rows and pirates_props) won't be needed in this database
                        continue
                except:  # scientist_academy does not have a Template, so let's ignore it ...
                    continue
                GUID = int(asset.find("Values/Standard/GUID").text)
                Name = asset.find("Values/Standard/Name").text
                b = {"GUID": GUID, "Name": Name}
                try: b["Eng1"] = Eng1[GUID]
                except: pass
                try: b["IconFileName"] = IconFileNames[GUID]
                except: pass
                try: b["IconWikiaFile"] = IconWikiaFiles[Name]
                except: pass
                try: b["Faction"] = faction_name
                except: pass
                try: b["Group"] = group_name
                except: pass
                try: b["Template"] = template
                except: pass
                try: b["InfluenceRadius"] = int(asset.find("Values/Influence/InfluenceRadius").text)
                except: pass
                try: b[".ifo"] = asset.find("Values/Object/Variations/Item/Filename").text.replace(".cfg", ".ifo").replace("data\\graphics\\buildings\\", "")
                except: pass
                try: b["MaxResidentCount"] = int(asset.find("Values/ResidenceBuilding/MaxResidentCount").text)
                except: pass
                try:
                    (x, z) = get_BuildBlocker(b[".ifo"])
                    b["BuildBlocker.x"] = x
                    b["BuildBlocker.z"] = z
                except: pass
                try:
                    b["FarmField.GUID"] = int(asset.find("Values/Farm/FarmFieldGUID").text)
                    try: b["FarmField.Count"] = int(asset.find("Values/Farm/FarmfieldCount").text)
                    except: pass
                    try: b["FarmField.Fertility"] = asset.find("Values/Farm/Fertility").text
                    except: pass
                    try:
                        # this split and join is to add "_field" to the .ifo filename
                        farmifo = b[".ifo"].split("\\")
                        for i in range(-2, 0):
                            f = farmifo[i].split("_")
                            farmifo[i] = "_".join(f[0:-1] + ["field"] + [f[-1]])
                        # tycoons do not have consistent .ifo filenames for fields
                        farmifo = "\\".join(farmifo).replace("tycoon.ifo", "tycoons.ifo")
                        (x, z) = get_BuildBlocker(farmifo)
                        b["FarmField.BuildBlocker.x"] = x
                        b["FarmField.BuildBlocker.z"] = z
                    except: pass
                except: pass
                try:
                    b["Production.Product.Name"] = asset.find("Values/WareProduction/Product").text
                    # default values:
                    b["Production.ProductionTime"] = 20000  # miliseconds
                    b["Production.ProductionCount"] = 1000  # kilograms
                    b["Production.RawNeeded1"] = 1000
                    b["Production.RawNeeded2"] = 1000
                    try: b["Production.Product.GUID"] = ProductGUIDs[b["Production.Product.Name"]]
                    except: pass
                    try: b["Production.Product.BaseGoldPrice"] = BaseGoldPrices[b["Production.Product.Name"]]
                    except: pass
                    try: b["Production.Product.Eng1"] = Eng1[b["Production.Product.GUID"]]
                    except: pass
                    try: b["Production.ProductionTime"] = int(asset.find("Values/WareProduction/ProductionTime").text)
                    except: pass
                    try: b["Production.ProductionCount"] = int(asset.find("Values/WareProduction/ProductionCount").text)
                    except: pass
                    try: b["Production.RawMaterial1"] = asset.find("Values/Factory/RawMaterial1").text
                    except: del b["Production.RawNeeded1"]
                    try: b["Production.RawMaterial2"] = asset.find("Values/Factory/RawMaterial2").text
                    except: del b["Production.RawNeeded2"]
                    try: b["Production.RawNeeded1"] = int(asset.find("Values/Factory/RawNeeded1").text)
                    except: pass
                    try: b["Production.RawNeeded2"] = int(asset.find("Values/Factory/RawNeeded2").text)
                    except: pass
                    TicksPerMinute = 60000 / b["Production.ProductionTime"]
                    b["Production.ProductionTonsPerMinute"] = (b["Production.ProductionCount"] / 1000) * TicksPerMinute
                    try: b["Production.RawNeeded1TonsPerMinute"] = (b["Production.RawNeeded1"] / 1000) * b["Production.ProductionTonsPerMinute"]
                    except: pass
                    try: b["Production.RawNeeded2TonsPerMinute"] = (b["Production.RawNeeded2"] / 1000) * b["Production.ProductionTonsPerMinute"]
                    except: pass
                except: pass
                try:
                    for cost in asset.findall("Values/BuildCost/*/*"):
                        try:
                            if cost.tag == "Credits":
                                b["BuildCost." + cost.tag] = int(cost.text)
                            else:
                                b["BuildCost." + cost.tag] = int(cost.text) // 1000  # in tons
                        except: pass
                except: pass
                try:
                    for cost in asset.findall("Values/MaintenanceCost/*"):
                        try:
                            c = int(cost.text)
                            if "Cost" in cost.tag:
                                c = -c
                            if c % (2 << 10):
                                b["MaintenanceCost." + cost.tag] = c  # in Credits
                            else:
                                b["MaintenanceCost." + cost.tag] = c >> 12  # in game eco / power / ... units
                        except: pass
                except: pass
                try:
                    b["Unlock.IntermediateLevel"] = asset.find("Values/BuildCost/NeedsIntermediatelevel").text
                    (count, level) = Unlocks[b["Unlock.IntermediateLevel"]]
                    b["Unlock.ResidentCount"] = count
                    b["Unlock.ResidentLevel"] = level
                except: pass
                buildings.append(b)
    return buildings
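The MaintenanceCost loop above distinguishes plain credit amounts from fixed-point game units: costs are negated, and any value divisible by `2 << 10` is treated as a 12-bit fixed-point number and decoded with `>> 12`. A small standalone sketch of that decode step (the raw example values below are invented, not taken from the game data):

```python
def decode_maintenance(tag, raw):
    # Mirror of the MaintenanceCost logic in get_building_list: cost tags are
    # negated, then values divisible by 2 << 10 (2048) are interpreted as
    # fixed-point game units and shifted right by 12 bits (divide by 4096).
    c = int(raw)
    if "Cost" in tag:
        c = -c
    if c % (2 << 10):
        return c        # plain credits
    return c >> 12      # game eco / power / ... units

print(decode_maintenance("ActiveCost", "25"))                # -25 credits
print(decode_maintenance("ActiveEnergyProduction", "8192"))  # 2 game units
```

Note the asymmetry the original preserves: the divisibility test uses `2 << 10` (2048) while the decode shifts by 12 bits (4096); the sketch keeps that behavior rather than "fixing" it.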
#===============================================================================
def parse_Eng1():
    Eng1 = {}
    for line in open(icons_txt_path):
        result = re.search("(\\d*)=(.*)", line)
        if result:
            Eng1[int(result.group(1))] = result.group(2)
    for line in open(guids_txt_path):
        result = re.search("(\\d*)=(.*)", line)
        if result:
            Eng1[int(result.group(1))] = result.group(2)
    return Eng1


def parse_ProductGUIDs():
    ProductGUIDs = {}
    for p in ET.parse(properties_path).findall(".//ProductIconGUID/*"):
        if p.tag != "icon":
            ProductGUIDs[p.tag] = int(p.find("icon").text)
    return ProductGUIDs


def parse_BaseGoldPrices():
    BaseGoldPrices = {}
    for p in ET.parse(properties_path).findall(".//ProductPrices/*"):
        try:
            BaseGoldPrices[p.tag] = int(int(p.find("BaseGoldPrice").text) * 2.5)
        except: pass
    return BaseGoldPrices


def parse_IconFileNames():
    prefix = "icon_"
    midfix = "_"
    postfix = ".png"
    IconFileNames = {}
    for i in ET.parse(icons_path).findall("i"):
        IconFileID = i.find("Icons/i/IconFileID").text
        try:
            IconIndex = i.find("Icons/i/IconIndex").text
        except:
            IconIndex = "0"
        IconFileNames[int(i.find("GUID").text)] = prefix + IconFileID + midfix + IconIndex + postfix
    return IconFileNames


def parse_IconWikiaFiles():
    IconWikiaFiles = {}
    with open(IconWikiaFiles_path) as f:
        f.readline()  # first line contains headers
        for line in f:
            (key, value) = line.strip().replace("\"", "").split(";")[0:2]
            IconWikiaFiles[key] = value
    return IconWikiaFiles


def parse_IconWikiaFilesSource():
    buildings = get_building_list()
    WikiaCSVString = "Name;Wikia Icon File;Wikia Label\n"

    def prep(string):
        return re.sub("[ ._-]*", "", string.lower())

    with open(IconWikiaFilessource_path) as f:
        for line in f:
            if ".png" in line and "File:" not in line:
                try:
                    (png, label) = line.strip().replace(";", "").split("|")[0:2]
                except:
                    (png, label) = (line.strip().replace(";", "").split("|")[0], "")
                name = ""
                for b in buildings:
                    names = [prep(b["Name"])]
                    try: names.append(prep(b["Eng1"]))
                    except: pass
                    try: names.append(prep(b["IconWikiaFile"]))
                    except: pass
                    try: names.append(prep(b["Production.Product.Name"]))
                    except: pass
                    try: names.append(prep(b["Production.Product.Eng1"]))
                    except: pass
                    try: names.append(prep(b["IconFileName"]))
                    except: pass
                    if prep(label) in names or prep(png.split(".")[0]) in names:
                        name = b["Name"]
                        break
                WikiaCSVString += "{0};{1};{2}\n".format(name, png, label)
    return WikiaCSVString


def update_IconWikiaFiles():
    pass  # manually for now ...


def get_BuildBlocker(detail_path):
    ifo_path = folder + "data2\\graphics\\buildings\\" + detail_path
    b = ET.parse(ifo_path).find(".//BuildBlocker/Position")
    x = abs(int(b.find("x").text)) >> 11
    z = abs(int(b.find("z").text)) >> 11
    return [x, z]


def parse_Unlocks():
    Unlocks = {}
    for p in ET.parse(properties_path).findall(".//SortedLevels")[1].getchildren():
        for i in p.findall("levels/Item"):
            try: Unlocks[i.find("IntermediateLevel").text] = (int(i.find("ResidentCount").text), p.tag)
            except: pass
    return Unlocks
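`get_BuildBlocker` converts the raw `.ifo` blocker coordinates into tile counts by taking the absolute value and right-shifting by 11 bits, i.e. one tile corresponds to 2048 raw units. A standalone sketch of just that conversion (the raw coordinate pair below is invented for illustration):

```python
def blocker_to_tiles(raw_x, raw_z):
    # Same conversion as get_BuildBlocker: coordinates in the .ifo may be
    # negative, hence abs() before the shift; >> 11 divides by 2048,
    # the number of raw units per tile.
    return [abs(int(raw_x)) >> 11, abs(int(raw_z)) >> 11]

print(blocker_to_tiles(-6144, 4096))  # [3, 2]
```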
#===============================================================================
def validate(buildings, model):
    valid_keys = model.keys()
    result = set()
    for b in buildings:
        for k in b.keys():
            if k not in valid_keys:
                result.add("\"{0}\": \"\",".format(k))
                break
            t = model[k].split(":")[0]
            if t not in ("text", "int", "float", "int(+/-)", "float(+/-)"):
                result.add("unknown type <{0}> for key: {1}".format(t, k))
            elif (t == "text" and not isinstance(b[k], str)
                  or t[:3] == "int" and not isinstance(b[k], int)
                  or t[:5] == "float" and not isinstance(b[k], float)):  # t[:5], not t[:3]: "float"[:3] could never match "float"
                result.add("{0} should be <{1}> type, but for b[\"Name\"] = {2} the value is: {3}".format(k, t, b["Name"], b[k]))
    if result:
        text_result = "Invalid keys not found in model:\n\n"
        for r in result:
            text_result += r + "\n"
    else:
        text_result = "ok"
    return text_result


def out_json(buildings, model):
    json.dump(model,
              fp=open(folder + model_name, "w"),
              indent=2,
              sort_keys=True)
    json.dump({"_version": __version__,
               "_model": model_url,
               "buildings": buildings},
              fp=open(folder + output_name, "w"),
              indent=2,
              sort_keys=True)
    return None


def out_csv(objects, model, object_type):
    csv = "This is a csv dump for anno data version {0}, see model {1} ...\n".format(__version__, model_url)
    csv += "To calculate with the data in spreadsheet formulas, try something like the following instead of fixed ranges (order of columns might change in future versions):\n"
    csv += "\" =INDEX(data_range;MATCH($A2;OFFSET(data_first_column;0;MATCH('Eng1';data_headers;0)-1);0);MATCH(B$1;data_headers;0))\"\n\n"
    headers = sorted(model[object_type].keys())
    for h in headers:
        csv += "{0};".format(h)
    csv += "\n"
    for b in objects:
        for h in headers:
            try: temp = b[h]
            except: temp = ""
            csv += "{0};".format(temp)
        csv += "\n"
    with open(folder + output_name.replace(".json", "_{0}.csv".format(object_type)), "w") as f:
        f.write(csv)
    return None
#===============================================================================
def main():
    model = {
        "_description": "this is a list of Anno 2070 buildings with properties that help fan-made tools out there .. i tried to name the properties somewhat close to actual xml elements in game data files .. you can contact me on http://anno2070.wikia.com/wiki/User:DeathApril or peter.hozak@gmail.com",
        "_version": __version__,
        "_gameversion": "v1.02 (patch3.rda)",
        "_missing_keys": "not all objects use all the keys in this model, please check for KeyError exceptions before working with them (e.g. Production.RawMaterial2 will be missing for factories with 1 input only)",
        "_changelog": {
            "0.3": ["2011-12-17",
                    "IconFileName changed, the second number corresponds to IconIndex without +1 added (for icons numbered from 0 instead of 1)",
                    "IconWikiaFile added",
                    "Production.Product.BaseGoldPrice added (in default trade price, not in datafile format)",
                    "FarmField.BuildBlocker.* added (for convenience)",
                    "BuildCost.* added",
                    "MaintenanceCost.* added",
                    "Unlock.* added"
                    ],
            "0.2": ["2011-12-15",
                    "BuildBlocker array of 2 ints split to 2 properties *.x and *.z (so the .csv dump could be 1:1 to JSON)",
                    "ProductName, ProductGUID and ProductEng1 renamed to Production.Product.* (for naming consistency)",
                    "MaxResidentCount, Faction and Group added",
                    "FarmField.* added + the farmfields themselves can be found in the buildings array by GUID (for farm size)",
                    "Production.* added"
                    ]
        },
        "buildings": {
            "GUID": "int: GUID as appears in assets.xml and other files",
            "Name": "text: base name that appears in data files",
            "Eng1": "text: english localisation labels from Eng1.rda for building GUID",
            "IconFileName": "text: filename from the folder http://odegroot.nl/anno2070/img/orig/icon/ (or see the icon folder in the zip file from http://odegroot.nl/anno2070/all_icons.php; the first number is IconFileID, the second is IconIndex)",
            "IconWikiaFile": "text: icon filename as used on the Anno 2070 wikia (taken from the manually maintained wikia_icons_map.csv)",
            "Faction": "text: tycoons, ecos, techs, others, ...",
            "Group": "text: residence, public, production, special, ... (farms and factories are both production, see template)",
            "Template": "text: type of building",
            "InfluenceRadius": "int: radius from the center of the building in tiles",
            ".ifo": "text: path to .ifo relative to data2.rda data\\graphics\\buildings, based on the first Object/Variations/Item/Filename for each asset",
            "MaxResidentCount": "int: max. number of inhabitants (houses only)",
            "BuildBlocker.x": "int: 'x' dimension of the building in tiles (right shift by 11 bits of the number found in the BuildBlocker/x element of the .ifo file)",
            "BuildBlocker.z": "int: 'z' dimension of the building in tiles (right shift by 11 bits of the number found in the BuildBlocker/z element of the .ifo file)",
            "FarmField.GUID": "int: GUID of a farmfield (farms only)",
            "FarmField.Count": "int: number of farmfields needed (farms only)",
            "FarmField.Fertility": "text: type of fertility needed on island (farms only)",
            "FarmField.BuildBlocker.x": "int: copy of the x dimension from the farmfield building BuildBlocker.x property (farms only)",
            "FarmField.BuildBlocker.z": "int: copy of the z dimension from the farmfield building BuildBlocker.z property (farms only)",
            "Production.Product.Name": "text: name of product (factories and farms only)",
            "Production.Product.GUID": "int: GUID of product (factories and farms only)",
            "Production.Product.Eng1": "text: english localisation labels from Eng1.rda for product GUID (factories and farms only)",
            "Production.Product.BaseGoldPrice": "int: default trade price of product in credits [2.5 times the number from properties.xml //ProductPrices/*/BaseGoldPrice] (factories and farms only)",
            "Production.ProductionTime": "int: miliseconds (factories and farms only)",
            "Production.ProductionCount": "int: kilograms (factories and farms only)",
            "Production.ProductionTonsPerMinute": "float: tons per minute, calculated (factories and farms only)",
            "Production.RawMaterial1": "text: reference to Production.Product.Name of 1st supplier factory/farm (factories only)",
            "Production.RawMaterial2": "text: reference to Production.Product.Name of 2nd supplier factory/farm (factories only)",
            "Production.RawNeeded1": "int: kilograms (factories only)",
            "Production.RawNeeded2": "int: kilograms (factories only)",
            "Production.RawNeeded1TonsPerMinute": "float: tons per minute, calculated (factories only)",
            "Production.RawNeeded2TonsPerMinute": "float: tons per minute, calculated (factories only)",
            "BuildCost.Credits": "int: credits needed to build",
            "BuildCost.BuildingModules": "int: raw material needed to build in tons",
            "BuildCost.Wood": "int: raw material needed to build in tons",
            "BuildCost.Glass": "int: raw material needed to build in tons",
            "BuildCost.Carbon": "int: raw material needed to build in tons",
            "BuildCost.Concrete": "int: raw material needed to build in tons",
            "BuildCost.Steel": "int: raw material needed to build in tons",
            "BuildCost.Tools": "int: raw material needed to build in tons",
            "BuildCost.Weapons": "int: raw material needed to build in tons",
            "BuildCost.HeavyWeapons": "int: raw material needed to build in tons",
            "BuildCost.AdvancedWeapons": "int: raw material needed to build in tons",
            "MaintenanceCost.ActiveCost": "int: credits for maintenance per tick",
            "MaintenanceCost.InactiveCost": "int: credits for maintenance of paused building per tick",
            "MaintenanceCost.ActiveEcoEffect": "float(+/-): eco effect (game unit)",
            "MaintenanceCost.InactiveEcoEffect": "float(+/-): eco effect of paused building (game unit)",
            "MaintenanceCost.ActiveEnergyCost": "float: energy consumption (game unit)",
            "MaintenanceCost.InactiveEnergyCost": "float: energy consumption of paused building (game unit)",
            "MaintenanceCost.EcoEffectFadingSpeed": "float: game unit",
            "MaintenanceCost.InitTime": "float: miliseconds",
            "MaintenanceCost.ActiveEnergyProduction": "float: game unit",
            "MaintenanceCost.InactiveEnergyProduction": "float: game unit",
            "MaintenanceCost.MinimumEnergyLevel": "float: game unit",
            "MaintenanceCost.ActiveAtStart": "float: game unit",
            "Unlock.IntermediateLevel": "text: NeedsIntermediatelevel from assets.xml to pair the building to properties.xml's SortedLevels/IntermediateLevel",
            "Unlock.ResidentCount": "int: number of residents from properties.xml",
            "Unlock.ResidentLevel": "text: workers, employees, engineers or executives",
        }
    }

    # print parse_IconWikiaFilesSource()
    buildings = get_building_list()
    print validate(buildings, model["buildings"])
    out_json(buildings, model)
    out_csv(buildings, model, "buildings")


if __name__ == "__main__":
main() | 55.661176 | 307 | 0.534283 | 2,408 | 23,656 | 5.184801 | 0.19103 | 0.028835 | 0.028114 | 0.021306 | 0.30004 | 0.229876 | 0.177733 | 0.147297 | 0.123268 | 0.074489 | 0 | 0.014587 | 0.33928 | 23,656 | 425 | 308 | 55.661176 | 0.784197 | 0.032592 | 0 | 0.224932 | 0 | 0.00271 | 0.390555 | 0.122555 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.100271 | 0.00813 | null | null | 0.00271 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
3b8ea57cf350f46a0b5175e07f66d2bedeb67fa8 | 8,795 | py | Python | environments/transfer_env/reacher_env.py | WilsonWangTHU/neural_graph_evolution | 9142b7f1c32c476aa44c83da38d3f0bacc276d73 | [
"MIT"
] | 33 | 2019-06-06T13:59:51.000Z | 2022-03-07T10:36:42.000Z | environments/transfer_env/reacher_env.py | WilsonWangTHU/neural_graph_evolution | 9142b7f1c32c476aa44c83da38d3f0bacc276d73 | [
"MIT"
] | 8 | 2019-07-09T16:58:28.000Z | 2022-02-10T00:27:39.000Z | environments/transfer_env/reacher_env.py | WilsonWangTHU/neural_graph_evolution | 9142b7f1c32c476aa44c83da38d3f0bacc276d73 | [
"MIT"
] | 12 | 2019-06-07T10:43:12.000Z | 2022-03-09T13:47:01.000Z | #!/usr/bin/env python2
# -----------------------------------------------------------------------------
# @brief:
# The reacher task
# @author:
# Tingwu (Wilson) Wang, July 22nd, 2017
# -----------------------------------------------------------------------------
import pdb
import numpy as np
from gym import utils
from gym.envs.mujoco import mujoco_env
import init_path
import os
import num2words
class ReacherEnv(mujoco_env.MujocoEnv, utils.EzPickle):
    '''
    @brief:
        In this modified ReacherEnv, we take out the cos and sin functions
        applied on the joint angles in the observation.
        In this case, the env is compatible with all the other environments.
    @reward:
        normalize the distance reward to -1
        normalize the ctrl reward to -0.1
    '''

    def __init__(self, pod_number=2):
        # get the path of the environments
        xml_name = 'Reacher' + self.get_env_num_str(pod_number) + '.xml'
        xml_path = os.path.join(os.path.join(init_path.get_base_dir(),
                                             'environments', 'assets', xml_name))
        xml_path = str(os.path.abspath(xml_path))

        # the environment coeff
        self.num_body = pod_number + 1
        self._task_indicator = -1.0
        self._ctrl_coeff = 2.0 / (self.num_body / 2 + 1)
        # norm the max penalty to be 1, max norm is self.num_body * 0.1 * 2
        self._dist_coeff = 2.0 / self.num_body

        mujoco_env.MujocoEnv.__init__(self, xml_path, 2)
        utils.EzPickle.__init__(self)

    def step(self, a):
        vec = self.sim.data.site_xpos[0] - self.get_body_com("target")
        '''
        max is: 0.4
        reward_ctrl = - np.square(a).sum()
        reward_dist = self._task_indicator * \
            sum(np.square(vec)) * self._dist_coeff
        '''
        reward_ctrl = - np.square(a).sum() * self._ctrl_coeff
        reward_dist = -1 * np.linalg.norm(vec) * self._dist_coeff
        reward = reward_dist + reward_ctrl
        self.do_simulation(a, self.frame_skip)
        ob = self._get_obs()
        done = False
        return ob, reward, done, \
            dict(reward_dist=reward_dist, reward_ctrl=reward_ctrl)

    def viewer_setup(self):
        self.viewer.cam.trackbodyid = 0
        self.viewer.cam.distance = self.model.stat.extent * 0.8

    def reset_model(self):
        # init qpos
        qpos = self.np_random.uniform(low=-0.1, high=0.1, size=self.model.nq) \
            + self.init_qpos

        # get goal position
        self.goal = self.np_random.uniform(
            low=-.1 * self.num_body, high=.1 * self.num_body, size=2
        )
        '''
        length = self.np_random.uniform(
            low=0.0, high=0.1 * self.num_body, size=1
        )[0]
        theta = self.np_random.uniform(
            low=-3.1415, high=3.1415, size=1
        )[0]
        self.goal = [length * np.sin(theta), length * np.cos(theta)]
        '''
        qpos[-2:] = self.goal

        # init qvel
        qvel = self.init_qvel + \
            self.np_random.uniform(low=-.005, high=.005, size=self.model.nv)
        if self._task_indicator < 0:
            qvel[-2:] = 0
        else:
            qvel[-2:] = self.np_random.uniform(low=-3, high=3, size=2)

        # set state
        self.set_state(qpos, qvel)
        # self.set_color()  # disable set_color() for now. No solution to
        return self._get_obs()

    def _get_obs(self):
        # pdb.set_trace()
        return np.concatenate([
            self.sim.data.qpos.flat[:-2],
            self.sim.data.qvel.flat[:-2],
            (self.sim.data.site_xpos[0] - self.get_body_com("target"))[:2],
            [self._task_indicator],  # indicating task, -1: reacher, 1: avoider
            self.sim.data.qpos.flat[-2:]  # target pos
            # self.model.data.qvel.flat[-2:]  # target vel
        ])

    def get_env_num_str(self, number):
        num_str = num2words.num2words(number)
        return num_str[0].upper() + num_str[1:]

    def set_color(self):
        reacher_id = self.model.geom_names.index('reacherIndicator')
        avoider_id = self.model.geom_names.index('avoiderIndicator')
        temp = np.array(self.model.geom_size)
        if self._task_indicator < 0:
            temp[reacher_id, 0] = 0.015
            temp[avoider_id, 0] = 0.001
        else:
            temp[reacher_id, 0] = 0.001
            temp[avoider_id, 0] = 0.015
        self.model.geom_size = temp
        return
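`step` scores each frame as a distance penalty plus a control penalty, each normalized by a coefficient derived from the body count. A dependency-free sketch of that reward formula (toy numbers; the real code uses `np.linalg.norm` and `np.square` on MuJoCo state, and note the original is a Python 2 file, where `num_body / 2` truncates for odd body counts — the sketch assumes true division):

```python
def reacher_reward(vec, action, num_body):
    # Coefficients as set in ReacherEnv.__init__ (assuming true division).
    ctrl_coeff = 2.0 / (num_body / 2 + 1)
    dist_coeff = 2.0 / num_body
    # Control penalty: squared action magnitude, scaled.
    reward_ctrl = -sum(a * a for a in action) * ctrl_coeff
    # Distance penalty: Euclidean norm of the fingertip-to-target vector, scaled.
    reward_dist = -1 * (sum(v * v for v in vec) ** 0.5) * dist_coeff
    return reward_dist + reward_ctrl

# Toy values: fingertip-to-target vector (3, 4) has norm 5.
r = reacher_reward(vec=(3.0, 4.0), action=(0.5,), num_body=4)
print(r)
```

With `num_body=4`, `dist_coeff` is 0.5 and `ctrl_coeff` is 2/3, so the toy call yields -2.5 - 1/6.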
class AvoiderEnv(ReacherEnv):
    '''
    @brief:
        Same as ReacherEnv, but the task indicator is fixed to 1.0 (avoider
        task), which also gives the target a random velocity on reset.
    '''

    def __init__(self, pod_number=2):
        ReacherEnv.__init__(self, pod_number=pod_number)

    def reset_model(self):
        self._task_indicator = 1.0
        ReacherEnv.reset_model(self)  # reset the model
        return self._get_obs()


class SwitcherEnv(ReacherEnv):
    '''
    @brief:
        Same as ReacherEnv, but the task indicator is sampled uniformly from
        {-1.0, 1.0} on every reset, switching between the reacher and avoider
        tasks.
    '''

    def reset_model(self):
        self._task_indicator = float(self.np_random.randint(2) * 2 - 1)
        ReacherEnv.reset_model(self)  # reset the model
        return self._get_obs()

'''
Reachers
'''


class ReacherZeroEnv(ReacherEnv):
    def __init__(self):
        ReacherEnv.__init__(self, pod_number=0)


class ReacherOneEnv(ReacherEnv):
    def __init__(self):
        ReacherEnv.__init__(self, pod_number=1)


class ReacherTwoEnv(ReacherEnv):
    def __init__(self):
        ReacherEnv.__init__(self, pod_number=2)


class ReacherThreeEnv(ReacherEnv):
    def __init__(self):
        ReacherEnv.__init__(self, pod_number=3)


class ReacherFourEnv(ReacherEnv):
    def __init__(self):
        ReacherEnv.__init__(self, pod_number=4)


class ReacherFiveEnv(ReacherEnv):
    def __init__(self):
        ReacherEnv.__init__(self, pod_number=5)


class ReacherSixEnv(ReacherEnv):
    def __init__(self):
        ReacherEnv.__init__(self, pod_number=6)


class ReacherSevenEnv(ReacherEnv):
    def __init__(self):
        ReacherEnv.__init__(self, pod_number=7)


class ReacherEightEnv(ReacherEnv):
    def __init__(self):
        ReacherEnv.__init__(self, pod_number=8)


class ReacherNineEnv(ReacherEnv):
    def __init__(self):
        ReacherEnv.__init__(self, pod_number=9)


'''
Avoiders
'''


class AvoiderZeroEnv(AvoiderEnv):
    def __init__(self):
        AvoiderEnv.__init__(self, pod_number=0)


class AvoiderOneEnv(AvoiderEnv):
    def __init__(self):
        AvoiderEnv.__init__(self, pod_number=1)


class AvoiderTwoEnv(AvoiderEnv):
    def __init__(self):
        AvoiderEnv.__init__(self, pod_number=2)


class AvoiderThreeEnv(AvoiderEnv):
    def __init__(self):
        AvoiderEnv.__init__(self, pod_number=3)


class AvoiderFourEnv(AvoiderEnv):
    def __init__(self):
        AvoiderEnv.__init__(self, pod_number=4)


class AvoiderFiveEnv(AvoiderEnv):
    def __init__(self):
        AvoiderEnv.__init__(self, pod_number=5)


class AvoiderSixEnv(AvoiderEnv):
    def __init__(self):
        AvoiderEnv.__init__(self, pod_number=6)


class AvoiderSevenEnv(AvoiderEnv):
    def __init__(self):
        AvoiderEnv.__init__(self, pod_number=7)


class AvoiderEightEnv(AvoiderEnv):
    def __init__(self):
        AvoiderEnv.__init__(self, pod_number=8)


class AvoiderNineEnv(AvoiderEnv):
    def __init__(self):
        AvoiderEnv.__init__(self, pod_number=9)


'''
Switchers
'''


class SwitcherZeroEnv(SwitcherEnv):
    def __init__(self):
        SwitcherEnv.__init__(self, pod_number=0)


class SwitcherOneEnv(SwitcherEnv):
    def __init__(self):
        SwitcherEnv.__init__(self, pod_number=1)


class SwitcherTwoEnv(SwitcherEnv):
    def __init__(self):
        SwitcherEnv.__init__(self, pod_number=2)


class SwitcherThreeEnv(SwitcherEnv):
    def __init__(self):
        SwitcherEnv.__init__(self, pod_number=3)


class SwitcherFourEnv(SwitcherEnv):
    def __init__(self):
        SwitcherEnv.__init__(self, pod_number=4)


class SwitcherFiveEnv(SwitcherEnv):
    def __init__(self):
        SwitcherEnv.__init__(self, pod_number=5)


class SwitcherSixEnv(SwitcherEnv):
    def __init__(self):
        SwitcherEnv.__init__(self, pod_number=6)


class SwitcherSevenEnv(SwitcherEnv):
    def __init__(self):
        SwitcherEnv.__init__(self, pod_number=7)


class SwitcherEightEnv(SwitcherEnv):
    def __init__(self):
        SwitcherEnv.__init__(self, pod_number=8)


class SwitcherNineEnv(SwitcherEnv):
    def __init__(self):
        SwitcherEnv.__init__(self, pod_number=9)
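The thirty numbered wrappers above differ only in the `pod_number` they pin. Assuming that is the sole variation, the same registry could be generated at runtime with `type()`; `make_numbered_envs` and `_NUM_WORDS` below are hypothetical helpers sketched for illustration, not part of the original module:

```python
_NUM_WORDS = ['Zero', 'One', 'Two', 'Three', 'Four',
              'Five', 'Six', 'Seven', 'Eight', 'Nine']


def make_numbered_envs(base_cls, num=10):
    """Return a dict like {'ReacherThreeEnv': <class>} with pod_number fixed."""
    envs = {}
    base_name = base_cls.__name__[:-len('Env')]  # e.g. 'ReacherEnv' -> 'Reacher'
    for n in range(num):
        name = '%s%sEnv' % (base_name, _NUM_WORDS[n])

        def __init__(self, _n=n, _base=base_cls):
            # Default arguments freeze n and base_cls per loop iteration.
            _base.__init__(self, pod_number=_n)

        # type(name, bases, namespace) builds each subclass at runtime,
        # mirroring one of the hand-written classes above.
        envs[name] = type(name, (base_cls,), {'__init__': __init__})
    return envs
```

The generated classes behave like the hand-written ones, e.g. `envs['ReacherThreeEnv']()` constructs an env with `pod_number=3`.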
| 24.430556 | 79 | 0.621717 | 1,080 | 8,795 | 4.686111 | 0.186111 | 0.102746 | 0.071725 | 0.110848 | 0.571429 | 0.515906 | 0.424422 | 0.411579 | 0.411579 | 0.123098 | 0 | 0.021478 | 0.253553 | 8,795 | 359 | 80 | 24.498607 | 0.749429 | 0.144741 | 0 | 0.255814 | 0 | 0 | 0.010705 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.232558 | false | 0 | 0.040698 | 0.005814 | 0.505814 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
3b90e01f97c9f9b164bb39ffe2d1651f169409d3 | 647 | py | Python | test/test_constraints.py | maffettone/constrained_sampler | 7674116d330ce3a56e44511797929127b4374a85 | [
"MIT"
] | null | null | null | test/test_constraints.py | maffettone/constrained_sampler | 7674116d330ce3a56e44511797929127b4374a85 | [
"MIT"
] | null | null | null | test/test_constraints.py | maffettone/constrained_sampler | 7674116d330ce3a56e44511797929127b4374a85 | [
"MIT"
] | null | null | null | import unittest
import os
from constrained_sampler.constraints import Constraint

class ConstraintTestCase(unittest.TestCase):
    def setUp(self):
        self.constraint = Constraint(
            os.path.join(os.path.dirname(__file__), '../examples/mixture.txt'))

    def test_example(self):
        self.assertListEqual([0., 0.], self.constraint.get_example())

    def test_ndim(self):
        self.assertEqual(self.constraint.get_ndim(), 2)

    def test_apply(self):
        self.assertTrue(self.constraint.apply(self.constraint.get_example()))
        self.assertFalse(self.constraint.apply([1., 1.]))


if __name__ == '__main__':
    unittest.main()
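The real `constrained_sampler.constraints.Constraint` parses a definition file such as `examples/mixture.txt` and is not reproduced in this dump. A hypothetical stand-in exposing the same three methods the tests exercise might look like the sketch below; the constraint `x + y <= 1` is an assumption chosen only so the test expectations hold:

```python
class FakeConstraint:
    """Hypothetical stand-in for Constraint, for illustration only."""

    def __init__(self, ndim=2, example=None):
        self._ndim = ndim
        self._example = list(example) if example is not None else [0.0] * ndim

    def get_example(self):
        # A known-feasible starting point, as asserted in test_example.
        return list(self._example)

    def get_ndim(self):
        return self._ndim

    def apply(self, x):
        # Assumed constraint: the point is feasible iff sum(x) <= 1.
        return sum(x) <= 1.0
```

Under this stand-in, `get_example()` returns `[0.0, 0.0]`, which satisfies the constraint, while `[1.0, 1.0]` violates it, matching `test_apply`.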
| 28.130435 | 104 | 0.704791 | 78 | 647 | 5.602564 | 0.461538 | 0.19222 | 0.116705 | 0.10984 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009174 | 0.157651 | 647 | 22 | 105 | 29.409091 | 0.792661 | 0 | 0 | 0 | 0 | 0 | 0.047913 | 0.035549 | 0 | 0 | 0 | 0 | 0.266667 | 1 | 0.266667 | false | 0 | 0.2 | 0 | 0.533333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
3b91a60fc28a44b7333d8840574bee24164ae81f | 318 | py | Python | doc/week1/w1d3/fasta.py | VinceSD/mebioda | 79021e7e0313ed7b2a116a4f2c1099eac9d5ed63 | [
"MIT"
] | null | null | null | doc/week1/w1d3/fasta.py | VinceSD/mebioda | 79021e7e0313ed7b2a116a4f2c1099eac9d5ed63 | [
"MIT"
] | null | null | null | doc/week1/w1d3/fasta.py | VinceSD/mebioda | 79021e7e0313ed7b2a116a4f2c1099eac9d5ed63 | [
"MIT"
] | 2 | 2019-12-09T12:05:59.000Z | 2019-12-09T12:09:57.000Z | from Bio import SeqIO # sudo pip install biopython
with open("Danaus.fas", "r") as handle:  # "rU" mode was removed in Python 3
    # Example: retain COI
    for record in SeqIO.parse(handle, "fasta"):
        fields = record.description.split('|')
        if fields[2] == 'COI-5P':
            print('>' + record.description)  # Python 3 print() syntax
            print(record.seq)
| 31.8 | 50 | 0.610063 | 40 | 318 | 4.85 | 0.775 | 0.175258 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008511 | 0.261006 | 318 | 9 | 51 | 35.333333 | 0.817021 | 0.144654 | 0 | 0 | 0 | 0 | 0.092937 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.142857 | null | null | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3ba1265751c6c07d0afae1857dc2c19c788c3648 | 209 | py | Python | Easy/Delete_Columns_to_Make_Sorted/Delete_Columns_to_Make_Sorted.py | nitin3685/LeetCode_Solutions | ab920e96cd27e0b2c3c895ce20853edceef0cce8 | [
"MIT"
] | null | null | null | Easy/Delete_Columns_to_Make_Sorted/Delete_Columns_to_Make_Sorted.py | nitin3685/LeetCode_Solutions | ab920e96cd27e0b2c3c895ce20853edceef0cce8 | [
"MIT"
] | null | null | null | Easy/Delete_Columns_to_Make_Sorted/Delete_Columns_to_Make_Sorted.py | nitin3685/LeetCode_Solutions | ab920e96cd27e0b2c3c895ce20853edceef0cce8 | [
"MIT"
] | null | null | null | from typing import List


class Solution:
    def minDeletionSize(self, A: List[str]) -> int:
        res = 0
        # Count columns whose characters are not already in sorted order.
        for col_str in zip(*A):
            if list(col_str) != sorted(col_str):
                res += 1
        return res
| 26.125 | 51 | 0.502392 | 28 | 209 | 3.642857 | 0.678571 | 0.176471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015504 | 0.382775 | 209 | 7 | 52 | 29.857143 | 0.775194 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.428571 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8e67d515242da662f008461e6ce926c1409d9c17 | 123 | py | Python | demo_broadcast.py | Monster880/pytorch_py | 9c5ac5974f48edb5ea3d897a1100a63d488c61d9 | [
"MIT"
] | null | null | null | demo_broadcast.py | Monster880/pytorch_py | 9c5ac5974f48edb5ea3d897a1100a63d488c61d9 | [
"MIT"
] | null | null | null | demo_broadcast.py | Monster880/pytorch_py | 9c5ac5974f48edb5ea3d897a1100a63d488c61d9 | [
"MIT"
] | null | null | null | import torch
a = torch.rand(2, 4, 1, 3)
b = torch.rand(4, 2, 3)
# Broadcasting: b is treated as shape (1, 4, 2, 3); the size-1 dims of
# a and b expand, so c has shape (2, 4, 2, 3).
c = a + b
print(a)
print(b)
print(c)
print(c.shape)  # torch.Size([2, 4, 2, 3])
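The result shape of `a + b` can be derived without torch by applying the standard broadcasting rule by hand; `broadcast_shape` below is a small torch-free sketch of that rule, not the library's own implementation:

```python
from itertools import zip_longest


def broadcast_shape(s1, s2):
    """Align shapes from the right, padding the shorter with 1s; each
    aligned pair must match or contain a 1, and the result keeps the max."""
    out = []
    for d1, d2 in zip_longest(reversed(s1), reversed(s2), fillvalue=1):
        if d1 != d2 and 1 not in (d1, d2):
            raise ValueError('shapes %s and %s do not broadcast' % (s1, s2))
        out.append(max(d1, d2))
    return tuple(reversed(out))
```

For the demo above, `broadcast_shape((2, 4, 1, 3), (4, 2, 3))` gives `(2, 4, 2, 3)`, agreeing with `c.shape`.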
| 10.25 | 23 | 0.601626 | 31 | 123 | 2.387097 | 0.387097 | 0.243243 | 0.081081 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107843 | 0.170732 | 123 | 11 | 24 | 11.181818 | 0.617647 | 0.056911 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0.5 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |