hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4f8f41c2ea101096bc2b2d8aa474d7d243f80fed | 1,177 | py | Python | python/legacy_upload_document.py | Glynt-ai/Glynt | 1820e2869d191c7148f8e5b4513ef218d4c28918 | [
"MIT"
] | 1 | 2020-01-09T19:19:52.000Z | 2020-01-09T19:19:52.000Z | python/legacy_upload_document.py | Glynt-ai/Glynt | 1820e2869d191c7148f8e5b4513ef218d4c28918 | [
"MIT"
] | 1 | 2020-02-26T20:06:52.000Z | 2020-02-26T20:06:52.000Z | python/legacy_upload_document.py | Glynt-ai/Glynt | 1820e2869d191c7148f8e5b4513ef218d4c28918 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import fire
import requests
from utils import get_token, get_content_md5
def legacy_upload_document(
data_pool_id,
label,
content_type,
filepath
):
token, token_type = get_token()
content_md5 = get_content_md5(filepath)
# POST document and store response content
resp = requests.post(
url=f'https://api.glynt.ai/v6/data-pools/{data_pool_id}/documents/',
headers={
'Authorization': f'{token_type} {token}',
'content-type': 'application/json'
},
json={
'label': label,
'content_type': content_type,
'content_md5': content_md5
}
)
assert resp.status_code == 201
document = resp.json()
# PUT document content
with open(filepath, 'rb') as f:
resp = requests.put(
url=document['file_upload_url'],
headers={
'content-type': content_type,
'content-md5': content_md5
},
data=f.read()
)
assert resp.status_code == 200
return document
if __name__ == '__main__':
fire.Fire(legacy_upload_document)
| 23.078431 | 76 | 0.585387 | 133 | 1,177 | 4.909774 | 0.428571 | 0.107198 | 0.11026 | 0.067381 | 0.128637 | 0.128637 | 0.128637 | 0.128637 | 0 | 0 | 0 | 0.01836 | 0.305862 | 1,177 | 50 | 77 | 23.54 | 0.780906 | 0.070518 | 0 | 0.052632 | 0 | 0 | 0.180568 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 1 | 0.026316 | false | 0 | 0.078947 | 0 | 0.131579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4f8f81517e49eb19bf7c0194e7d8701b98532c6b | 695 | py | Python | src/filters/filter_points_to_tube.py | grosenkj/ParaViewGeophysics | 217e2f12b53beba3668e5153b79233d2215ab715 | [
"BSD-3-Clause"
] | null | null | null | src/filters/filter_points_to_tube.py | grosenkj/ParaViewGeophysics | 217e2f12b53beba3668e5153b79233d2215ab715 | [
"BSD-3-Clause"
] | null | null | null | src/filters/filter_points_to_tube.py | grosenkj/ParaViewGeophysics | 217e2f12b53beba3668e5153b79233d2215ab715 | [
"BSD-3-Clause"
] | null | null | null | Name = 'PointsToTube'
Label = 'Points To Tube'
FilterCategory = 'PVGP Filters'
Help = 'Takes points from a vtkPolyData object and constructs a line of those points then builds a polygonal tube around that line with some specified radius and number of sides.'
NumberOfInputs = 1
InputDataType = 'vtkPolyData'
OutputDataType = 'vtkPolyData'
ExtraXml = ''
Properties = dict(
Number_of_Sides=20,
Radius=10.0,
Use_nearest_nbr=True,
)
def RequestData():
from PVGPpy.filt import pointsToTube
pdi = self.GetInput() # VTK PolyData Type
pdo = self.GetOutput() # VTK PolyData Type
pointsToTube(pdi, radius=Radius, numSides=Number_of_Sides, nrNbr=Use_nearest_nbr, pdo=pdo)
| 27.8 | 179 | 0.743885 | 92 | 695 | 5.532609 | 0.652174 | 0.047151 | 0.076621 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010453 | 0.174101 | 695 | 24 | 180 | 28.958333 | 0.876307 | 0.05036 | 0 | 0 | 0 | 0.055556 | 0.350076 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.055556 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4f9144e99d2d48aecabed295d16a29cfffdc9f32 | 1,957 | py | Python | project_euler/python/lib/search.py | hacktoolkit/code_challenges | d71f8362496a72963a53abba7bcc9dd4d35a2920 | [
"MIT"
] | 10 | 2015-01-31T09:04:45.000Z | 2022-01-08T04:09:48.000Z | project_euler/python/lib/search.py | hacktoolkit/code_challenges | d71f8362496a72963a53abba7bcc9dd4d35a2920 | [
"MIT"
] | 3 | 2016-05-16T07:37:01.000Z | 2016-05-18T14:14:16.000Z | project_euler/python/lib/search.py | hacktoolkit/code_challenges | d71f8362496a72963a53abba7bcc9dd4d35a2920 | [
"MIT"
] | 6 | 2015-02-06T06:00:00.000Z | 2020-02-13T16:13:48.000Z | def binary_search(items, value, exact=True, ascending=True, initial_guess=None):
"""Performs binary search for an item matching `value`
in a list of `items`.
Finds an exact match if `exact is True`, else as close as possible without crossing `value`
`items` sorted in ascending order if `ascending is True`
Optionally takes in `initial_guess`, an index `k` between
`0 <= k <= len(items)`
Returns the `index` of the matching item
Test Cases:
- 745
"""
index = None
lower = 0
upper = len(items) - 1
def _update_guess():
return int((lower + upper) / 2.0)
if initial_guess is not None:
k = initial_guess
else:
k = _update_guess()
while index is None and lower <= upper:
        # loop until item is found, or lower crosses upper
item = items[k]
next_item = items[k + 1] if k + 1 < len(items) else None
if exact:
criteria = item == value
elif ascending:
criteria = (
item <= value
and (
next_item is None
or next_item > value
)
)
else:
# `items` are in descending order
criteria = (
item >= value
and (
next_item is None
or next_item < value
)
)
if criteria:
index = k
elif ascending:
if item > value:
upper = k - 1
k = _update_guess()
elif item < value:
lower = k + 1
k = _update_guess()
else:
# `items` are in descending order
if item < value:
upper = k - 1
k = _update_guess()
elif item > value:
lower = k + 1
k = _update_guess()
return index
| 27.180556 | 95 | 0.479305 | 221 | 1,957 | 4.144796 | 0.289593 | 0.088428 | 0.065502 | 0.039301 | 0.305677 | 0.305677 | 0.242358 | 0.242358 | 0.242358 | 0.242358 | 0 | 0.012927 | 0.446602 | 1,957 | 71 | 96 | 27.56338 | 0.832872 | 0.244251 | 0 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0 | 0.020833 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
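A minimal usage sketch for the `binary_search` helper in the row above. It is not part of the dataset row; the import assumes the file is saved on the path as `search.py`, and the lists and values are illustrative.

```python
from search import binary_search  # assuming the module above is importable as search

ascending = [1, 3, 5, 7, 9, 11]
descending = list(reversed(ascending))

# Exact match: returns the index of the matching item, or None if absent.
print(binary_search(ascending, 7))  # 3
print(binary_search(ascending, 8))  # None

# Inexact match: closest index without crossing `value`.
print(binary_search(ascending, 8, exact=False))                     # 3 (items[3] == 7 <= 8 < items[4])
print(binary_search(descending, 8, exact=False, ascending=False))   # 1 (items[1] == 9 >= 8 > items[2])
```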
4f92d1a3f563bb77823c2ad0f42a53c11e38b971 | 17,157 | py | Python | ShapeShifter/SSFile.py | kimballh/ShapeShifter | a9897cabec700726629466eea0159e75ba68ba91 | [
"MIT"
] | null | null | null | ShapeShifter/SSFile.py | kimballh/ShapeShifter | a9897cabec700726629466eea0159e75ba68ba91 | [
"MIT"
] | null | null | null | ShapeShifter/SSFile.py | kimballh/ShapeShifter | a9897cabec700726629466eea0159e75ba68ba91 | [
"MIT"
] | null | null | null | import shutil
import tempfile
class SSFile:
"""
Abstract base class for all the supported file types in ShapeShifter. Subclasses must implement reading a file to pandas and exporting a dataframe to the filetype
"""
def __init__(self, filePath, fileType):
self.filePath=filePath
self.fileType=fileType
self.isGzipped= self.__is_gzipped()
def read_input_to_pandas(self, columnList=[], indexCol="Sample"):
"""
Reads from a file into a Pandas data frame. File may be gzipped. Must be implemented by subclasses
:param columnList: List of string column names to be read in. If blank, all columns will be read in
:param indexCol: String name of the column representing the index of the data set
:return: Pandas data frame with the requested data
"""
raise NotImplementedError("Reading from this file type is not currently supported.")
def export_filter_results(self, inputSSFile, column_list=[], query=None, transpose=False, include_all_columns=False,
gzip_results=False, index_col="Sample"):
"""
Filters and then exports data to a file
:param inputSSFile: SSFile object representing the file to be read and filtered
:param column_list: list of columns to include in the output. If blank, all columns will be included.
:param query: string representing the query or filter to apply to the data set
:param transpose: boolean indicating whether the results will be transposed
:param include_all_columns: boolean indicating whether to include all columns in the output. If True, overrides columnList
:param gzip_results: boolean indicating whether the resulting file will be gzipped
:param index_col: string name of the index column of the data set
"""
df = None
includeIndex = False
null = 'NA'
query, inputSSFile, df, includeIndex = self._prep_for_export(inputSSFile, column_list, query, transpose,
include_all_columns, df, includeIndex, index_col)
self.write_to_file(df, gzip_results, includeIndex, null)
def _prep_for_export(self, inputSSFile, columnList, query, transpose, includeAllColumns, df, includeIndex,
indexCol):
"""
Prepares a file to be exported by checking query syntax, unzipping the input, filtering, and transposing the data. This function is used
in every file type's export_filter_results function with the exception of SQLiteFile
:param inputSSFile: SSFile containing the data to be filtered
:param columnList: list of column names to be included in the output. If the list is empty, all columns will be included
:param query: string representing the query or filter to be applied to the data set
:param transpose: boolean indicating if the resulting data should be transposed
:param includeAllColumns: boolean indicating to include all columns in the output. If True, it overrides columnList
:param df: the Pandas data frame that will contain the results of the filters
:param includeIndex: boolean that will store whether or not the output file will include the index column
:param indexCol: string representing the name of the index column of the data set
:return: updated query, inputSSFile, df, includeIndex. These updated values will be used by export_filter_results
"""
if query != None:
query = self._translate_null_query(query)
df = inputSSFile._filter_data(columnList=columnList, query=query,
includeAllColumns=includeAllColumns, indexCol=indexCol)
if transpose:
df = df.set_index(indexCol) if indexCol in df.columns else df
df = df.transpose()
includeIndex = True
#TODO: remove returning inputSSFile for every file type, it is no longer needed since gzip is taken care of elsewhere
return query, inputSSFile, df, includeIndex
def factory(filePath, type=None):
"""
Constructs the appropriate subclass object based on the type of file passed in
:param filePath: string representing a file's path
:param type: string representing the type of file
:return: SSFile subclass object
"""
if type==None:
type = SSFile.__determine_extension(filePath)
if type.lower() == 'parquet': return ParquetFile.ParquetFile(filePath, type)
elif type.lower() == 'tsv': return TSVFile.TSVFile(filePath,type)
elif type.lower() == 'csv': return CSVFile.CSVFile(filePath,type)
elif type.lower() == 'json': return JSONFile.JSONFile(filePath,type)
elif type.lower() == 'excel': return ExcelFile.ExcelFile(filePath,type)
elif type.lower() == 'hdf5': return HDF5File.HDF5File(filePath,type)
elif type.lower() == 'pickle': return PickleFile.PickleFile(filePath,type)
elif type.lower() == 'msgpack': return MsgPackFile.MsgPackFile(filePath, type)
elif type.lower() == 'stata': return StataFile.StataFile(filePath,type)
elif type.lower() == 'sqlite': return SQLiteFile.SQLiteFile(filePath,type)
elif type.lower() == 'html': return HTMLFile.HTMLFile(filePath,type)
elif type.lower() == 'arff': return ARFFFile.ARFFFile(filePath,type)
elif type.lower() == 'gct': return GCTFile.GCTFile(filePath,type)
elif type.lower() == 'jupyternotebook': return JupyterNotebookFile.JupyterNBFile(filePath,type)
elif type.lower() == 'rmarkdown': return RMarkdownFile.RMarkdownFile(filePath,type)
elif type.lower() == 'kallistotpm': return KallistoTPMFile.KallistoTPMFile(filePath,type)
elif type.lower() == 'kallisto_est_counts': return Kallisto_est_counts_File.Kallisto_est_counts_File(filePath,type)
elif type.lower() == 'salmontpm': return SalmonTPMFile.SalmonTPMFile(filePath, type)
elif type.lower() == 'salmonnumreads': return SalmonNumReadsFile.SalmonNumReadsFile(filePath,type)
else:
raise Exception("File type not recognized. Supported file types include: TSV, CSV, Parquet, JSON, Excel, HDF5, Pickle, MsgPack, Stata, SQLite, HTML, ARFF, GCT")
factory=staticmethod(factory)
def __determine_extension(fileName):
"""
Determines the file type of a given file based off its extension
:param fileName: Name of a file whose extension will be examined
:return: string representing the file type indicated by the file's extension
"""
extensions = fileName.rstrip("\n").split(".")
if len(extensions) > 1:
extension = extensions[len(extensions) - 1]
if extension == 'gz':
extension = extensions[len(extensions) - 2]
else:
extension = None
if extension == "tsv" or extension == "txt":
return 'tsv'
elif extension == "csv":
return 'csv'
elif extension == "json":
return 'json'
elif extension == "xlsx":
return 'excel'
elif extension == "hdf" or extension == "h5":
return 'hdf5'
elif extension == "pq":
return 'parquet'
elif extension == "mp":
return 'msgpack'
elif extension == "dta":
return 'stata'
elif extension == "pkl":
return 'pickle'
elif extension == "html":
return 'html'
elif extension == "db":
return 'sqlite'
elif extension == "arff":
return 'arff'
elif extension == "gct":
return 'gct'
elif extension == "ipynb":
return 'jupyternotebook'
elif extension == "rmd":
return 'rmarkdown'
else:
raise Exception("Error: Extension on " + fileName + " not recognized. Please use appropriate file extensions or explicitly specify file type.")
__determine_extension = staticmethod(__determine_extension)
def write_to_file(self, df, gzipResults=False, includeIndex=False, null='NA', indexCol="Sample", transpose=False):
"""
Writes a Pandas data frame to a file
:param transpose:
:param indexCol:
:param df: Pandas data frame to be written to file
:param gzipResults: boolean indicating whether the written file will be gzipped
:param includeIndex: boolean indicating whether the index column should be written to the file
:param null: string representing how null or None values should be represented in the output file
"""
raise NotImplementedError("Writing to this file type is not currently supported.")
def _update_index_col(self, df, indexCol="Sample"):
"""
Function for internal use. If the given index column is not in the data frame, it will default to the first column name
"""
if indexCol not in df.columns:
return
def __is_gzipped(self):
"""
Function for internal use. Checks if a file is gzipped based on its extension
"""
extensions = self.filePath.rstrip("\n").split(".")
if extensions[len(extensions) - 1] == 'gz':
return True
return False
def _filter_data(self, columnList=[], query=None,
includeAllColumns=False, indexCol="Sample"):
"""
Filters a data set down according to queries and requested columns
:param columnList: List of string column names to include in the results. If blank, all columns will be included
:param query: String representing a query to be applied to the data set
:param includeAllColumns: boolean indicating whether all columns should be included. If true, overrides columnList
:param indexCol: string representing the name of the index column of the data set
:return: filtered Pandas data frame
"""
if includeAllColumns:
columnList = []
df = self.read_input_to_pandas(columnList, indexCol)
self.__report_if_missing_columns(df, [indexCol])
df = self.__replace_index(df, indexCol)
if query != None:
df = df.query(query)
return df
if len(columnList) == 0 and query == None:
df = self.read_input_to_pandas(columnList, indexCol)
self.__report_if_missing_columns(df, [indexCol])
df = self.__replace_index(df, indexCol)
return df
if query != None:
columnNamesFromQuery = self.__parse_column_names_from_query(query)
columnList = columnNamesFromQuery + columnList
if indexCol not in columnList:
columnList.insert(0, indexCol)
else:
columnList.insert(0, columnList.pop(columnList.index(indexCol)))
df = self.read_input_to_pandas(columnList, indexCol)
self.__report_if_missing_columns(df, columnList)
if query != None:
df = df.query(query)
return df
def _translate_null_query(self, query):
"""
For internal use only. Because pandas does not support querying for null values by "columnname == None", this function translates such queries into valid syntax
"""
regex1 = r"\S*\s*!=\s*None\s*"
regex2 = r"\S*\s*==\s*None\s*"
matchlist1 = re.findall(regex1, query, flags=0)
matchlist2 = re.findall(regex2, query, flags=0)
for match in matchlist1:
col = match.split("!=")[0].rstrip()
query = query.replace(match, col + "==" + col + " ")
for match in matchlist2:
col = match.split("==")[0].rstrip()
query = query.replace(match, col + "!=" + col + " ")
return query
def get_column_names(self) -> list:
"""
Retrieves all column names from a data set stored in a parquet file
:return: All column names
:rtype: list
"""
raise NotImplementedError("This method should have been implemented, but has not been")
def __parse_column_names_from_query(self, query):
"""
For internal use. Takes a query and determines what columns are being queried on
"""
query = re.sub(r'\band\b', '&', query)
query = re.sub(r'\bor\b', '|', query)
args = re.split('==|<=|>=|!=|<|>|\&|\|', query)
colList = []
for arg in args:
# first remove all whitespace and parentheses and brackets
arg = arg.strip()
arg = arg.replace("(", "")
arg = arg.replace(")", "")
arg = arg.replace("[", "")
arg = arg.replace("]", "")
# if it is a number, it isn't a column name
try:
float(arg)
except:
# check if the string is surrounded by quotes. If so, it is not a column name
if len(arg) > 0 and arg[0] != "'" and arg[0] != '"':
# check for duplicates
if arg not in colList and arg != "True" and arg != "False":
colList.append(arg)
#check if any columns begin with a number
for col in colList:
if col[0].isdigit():
raise Exception("Error: columns whose names begin with numbers cannot be queried on")
return colList
def __check_if_columns_exist(self, df, columnList):
"""
For internal use. Checks to see if certain columns are found in a data frame
:param df: Pandas data frame to be examined
:param columnList: List of string column names to be checked
:return: A list of string column names representing columns that were not found
"""
missingColumns = []
for column in columnList:
if column not in df.columns:
missingColumns.append(column)
# if len(missingColumns)>0:
# raise ColumnNotFoundError(missingColumns)
return missingColumns
def _append_gz(self, outFilePath):
"""
For internal use. If a file is to be gzipped, this function appends '.gz' to the file path if necessary.
"""
if not (outFilePath[len(outFilePath) - 3] == '.' and outFilePath[len(outFilePath) - 2] == 'g' and outFilePath[
len(outFilePath) - 1] == 'z'):
outFilePath += '.gz'
return outFilePath
def _remove_gz(self, outFilePath):
"""
For internal use. If a file is to be gzipped, this function removes '.gz' to the file path if necessary.
"""
if (outFilePath[len(outFilePath) - 3] == '.' and outFilePath[len(outFilePath) - 2] == 'g' and outFilePath[
len(outFilePath) - 1] == 'z'):
outFilePath=outFilePath[:-3]
return outFilePath
def _gzip_results(self, tempFilePath, outFilePath):
"""
For internal use. Manually gzips result files if Pandas does not inherently do so for the given file type.
"""
with open(tempFilePath, 'rb') as f_in:
with gzip.open(self._append_gz(outFilePath), 'wb') as f_out:
#f_out.writelines(f_in)
shutil.copyfileobj(f_in,f_out)
os.remove(tempFilePath)
def _gunzip_to_temp_file(self):
"""
Takes a gzipped file with extension 'gz' and unzips it to a temporary file location so it can be read into pandas
"""
with gzip.open(self.filePath, 'rb') as f_in:
# with open(self._remove_gz(self.filePath), 'wb') as f_out:
# shutil.copyfileobj(f_in, f_out)
f_out=tempfile.NamedTemporaryFile(delete=False)
shutil.copyfileobj(f_in, f_out)
f_out.close()
return f_out
def __replace_index(selfs, df, indexCol):
"""
For internal use. If the user requests a certain column be the index, this function puts that column as the first in the data frame df
"""
if indexCol in df.columns:
df.set_index(indexCol, drop=True, inplace=True)
df.reset_index(inplace=True)
return df
def __report_if_missing_columns(self,df, columnList):
"""
Prints out a warning showing which of the columns in the given columnList are not found in the data frame df
"""
missingColumns = self.__check_if_columns_exist(df, columnList)
if len(missingColumns) > 0:
print("Warning: the following columns were not found and therefore not included in output: " + ", ".join(
missingColumns))
import gzip
import os
import re
import ARFFFile
import CSVFile
import ExcelFile
import GCTFile
import HDF5File
import HTMLFile
import JSONFile
import MsgPackFile
import ParquetFile
import PickleFile
import SQLiteFile
import StataFile
import TSVFile
import JupyterNotebookFile
import RMarkdownFile
import KallistoTPMFile
import Kallisto_est_counts_File
import SalmonTPMFile
import SalmonNumReadsFile | 45.874332 | 172 | 0.63391 | 2,066 | 17,157 | 5.180058 | 0.175218 | 0.022426 | 0.026911 | 0.033639 | 0.248178 | 0.175201 | 0.163895 | 0.159036 | 0.126238 | 0.097645 | 0 | 0.00316 | 0.28076 | 17,157 | 374 | 173 | 45.874332 | 0.8641 | 0.322376 | 0 | 0.12 | 0 | 0.004444 | 0.092193 | 0.001938 | 0 | 0 | 0 | 0.002674 | 0 | 1 | 0.088889 | false | 0 | 0.106667 | 0 | 0.337778 | 0.004444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
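A small, hypothetical sketch of how the `SSFile` abstraction in the row above is typically driven. The file names and column names here are made up; `factory` infers the concrete subclass from the extension (`csv` maps to `CSVFile`, `pq` to `ParquetFile`).

```python
from SSFile import SSFile

# Pick the right subclass from the file extension.
input_file = SSFile.factory('measurements.csv')
output_file = SSFile.factory('filtered.pq')

# Filter rows, keep a subset of columns, and write the result out.
output_file.export_filter_results(
    input_file,
    column_list=['Sample', 'Temperature'],
    query='Temperature > 20',
    gzip_results=False,
    index_col='Sample',
)
```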
4f933cca9a376532a3bc93f78b79788387ab7bbc | 7,565 | py | Python | GZP_GTO_ArcMap/scripts/SCR_PFLICHT_Layer.py | msgis/swwat-gzp-template | 080afbe9d49fb34ed60ba45654383d9cfca01e24 | [
"MIT"
] | 3 | 2019-06-18T15:28:09.000Z | 2019-07-11T07:31:45.000Z | GZP_GTO_ArcMap/scripts/SCR_PFLICHT_Layer.py | msgis/swwat-gzp-template | 080afbe9d49fb34ed60ba45654383d9cfca01e24 | [
"MIT"
] | 2 | 2019-07-11T14:03:25.000Z | 2021-02-08T16:14:04.000Z | GZP_GTO_ArcMap/scripts/SCR_PFLICHT_Layer.py | msgis/swwat-gzp-template | 080afbe9d49fb34ed60ba45654383d9cfca01e24 | [
"MIT"
] | 1 | 2019-06-12T11:07:37.000Z | 2019-06-12T11:07:37.000Z | # -*- coding: utf-8 -*-
"""
@author: ms.gis, June 2020
Script for ArcGIS GTO for Modul GZP
"""
##
import arcpy
import pythonaddins
## -------------------------
# Open progress dialog
with pythonaddins.ProgressDialog as dialog:
dialog.title = "PRUEFUNG PFLICHTDATENSAETZE"
dialog.description = "Pruefe Pflichtdatensaetze ... Bitte warten..."
dialog.animation = "Spiral"
# --- Identify compulsory layers without entries/ features ---
# Create List for Message Content
lyrList = []
countBGef = 0
countObj = 0
# domainvalues of DOM_FG_V_KLASSE and DOM_WT_T_KLASSE
list_NOE_STM = { 1, 2, 3, 4, 5, 6, 7, 8, 9} # BWV regional NOE_STM
list_B_O_K_S_T_V_W = {10, 11, 12, 13, 14, 15, 16, 17, 18, 19} # BWV regional B_O_K_ST_V_W
list_AT_2021 = {20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30} # BWV national AT_2021
# Access current map document
mxd = arcpy.mapping.MapDocument("CURRENT")
# --- Check TABLES
# Clear all previous selections
for tbl in arcpy.mapping.ListTableViews(mxd):
arcpy.SelectLayerByAttribute_management(tbl.name, "CLEAR_SELECTION")
# Query tables
for tbl in arcpy.mapping.ListTableViews(mxd):
tblSrcName = tbl.datasetName
if tblSrcName in ["TBGEN", "TBGGN", "TBGZP", "TBPRJ"]:
result = arcpy.GetCount_management(tbl)
count = int(result.getOutput(0))
if count == 0:
lyrList.append(tblSrcName)
# --- Check FEATURE LAYERS
# Clear all previous selections
for lyr in arcpy.mapping.ListLayers(mxd):
if lyr.isFeatureLayer:
arcpy.SelectLayerByAttribute_management(lyr.name, "CLEAR_SELECTION")
# Eliminate multiple listed layers in TOC
lyr_set = set()
for feat in arcpy.mapping.ListLayers(mxd):
if feat.isFeatureLayer:
lyr_set.add((feat.datasetName, feat))
# Query tables
for (lyrSrcName, lyr) in sorted(lyr_set):
if lyrSrcName in ["FLUSS", "GSCHUTZ", "LPAKT", "MODEL", "PLGBT"]:
result = arcpy.GetCount_management(lyr)
count = int(result.getOutput(0))
if count == 0:
lyrList.append(lyrSrcName)
elif lyrSrcName == "BWERT":
listKat = []
with arcpy.da.SearchCursor(lyr, ["SZENARIO"]) as cursor:
for row in cursor:
listKat.append(row[0])
# Check that all szenarios (30, 100, 300) present
if not {30, 100, 300}.issubset(listKat):
lyrList.append(lyrSrcName)
elif lyrSrcName == "FG":
listKat = set()
with arcpy.da.SearchCursor(lyr, ["V_KLASSE"]) as cursor:
for row in cursor:
listKat.add(row[0])
# check if listKat are in only one group available
is_valid = False
if listKat.issubset(list_NOE_STM):
is_valid = True
if listKat.issubset(list_B_O_K_S_T_V_W):
is_valid = True
if listKat.issubset(list_AT_2021):
is_valid = True
if not is_valid:
lyrList.append(lyrSrcName)
elif lyrSrcName == "FUNKT":
listKat = []
with arcpy.da.SearchCursor(lyr, ["L_KATEGO"]) as cursor:
for row in cursor:
listKat.append(row[0])
# Check that category "Rot-Gelb-schraffierter Funktionsbereich" (1) present
if 1 not in set(listKat):
lyrList.append(lyrSrcName)
elif lyrSrcName in ["GFPKT", "GFLIN", "GFFLA"]:
result = arcpy.GetCount_management(lyr)
countBGef += int(result.getOutput(0))
elif lyrSrcName == "GPLBAU":
listKat = []
with arcpy.da.SearchCursor(lyr, ["L_KATEGO"]) as cursor:
for row in cursor:
listKat.append(row[0])
# Check that category "beplant od. verbaut" (1) present
if 1 not in set(listKat):
lyrList.append(lyrSrcName)
elif lyrSrcName == "GZ100":
# Access unfiltered source layer
SrcLayer = lyr.dataSource
listKat = []
with arcpy.da.SearchCursor(SrcLayer, ["L_KATEGO"]) as cursor:
for row in cursor:
listKat.append(row[0])
# Check that categories (1, 2) present
if not {1, 2}.issubset(listKat):
lyrList.append(lyrSrcName)
elif lyrSrcName == "GZ300":
listKat = []
with arcpy.da.SearchCursor(lyr, ["L_KATEGO"]) as cursor:
for row in cursor:
listKat.append(row[0])
# Check that category "Gelb-schraffierte Zone" (2) present
if 2 not in set(listKat):
lyrList.append(lyrSrcName)
elif lyrSrcName == "KNTPKT":
listKat = []
with arcpy.da.SearchCursor(lyr, ["SZENARIO"]) as cursor:
for row in cursor:
listKat.append(row[0])
# Check that all szenarios (30, 100, 300) present
if not {30, 100, 300}.issubset(listKat):
lyrList.append(lyrSrcName)
elif lyrSrcName in ["OBPKT", "OBLIN", "OBFLA"]:
result = arcpy.GetCount_management(lyr)
countObj += int(result.getOutput(0))
elif lyrSrcName == "QPLIN":
listKat = []
with arcpy.da.SearchCursor(lyr, ["L_KATEGO"]) as cursor:
for row in cursor:
listKat.append(row[0])
# Check that at least categories 1 & 2 present
if not {1,2}.issubset(listKat):
lyrList.append(lyrSrcName)
elif lyrSrcName in ["UFHQN", "UFHQNLIN"]:
listKat = []
with arcpy.da.SearchCursor(lyr, ["L_KATEGO"]) as cursor:
for row in cursor:
listKat.append(row[0])
# Check that all scenario categories (1,2,3) present
if not {1, 2, 3}.issubset(listKat):
lyrList.append(lyrSrcName)
elif lyrSrcName == "WT":
listKat = set()
with arcpy.da.SearchCursor(lyr, ["T_KLASSE"]) as cursor:
for row in cursor:
listKat.add(row[0])
# check if listKat are in only one group available
is_valid = False
if listKat.issubset(list_NOE_STM):
is_valid = True
if listKat.issubset(list_B_O_K_S_T_V_W):
is_valid = True
if listKat.issubset(list_AT_2021):
is_valid = True
if not is_valid:
lyrList.append(lyrSrcName)
# Test if at least one feature of Besondere Gefährdungen or Objekte present
if countBGef == 0:
lyrList.append("GFPKT, GFLIN oder GFFLA")
if countObj == 0:
lyrList.append("OBPKT, OBLIN oder OBFLA")
##
MessageContent = ""
for l in lyrList:
MessageContent += "\n{}".format(l)
##
# Define Message
if len(lyrList) == 0:
pythonaddins.MessageBox("Pruefung erfolgreich.\nAlle Pflichtdatensaetze befuellt.", "INFORMATION", 0)
else:
MessageFinal = "Folgende Pflichtdatensaetze sind nicht (ausreichend) befuellt:\n" + MessageContent + "\n\nBitte korrigieren! \n"
pythonaddins.MessageBox(MessageFinal, "FEHLERMELDUNG", 0)
del lyrList
| 35.186047 | 132 | 0.556642 | 840 | 7,565 | 4.92381 | 0.277381 | 0.044004 | 0.06117 | 0.06528 | 0.572534 | 0.516199 | 0.486219 | 0.437863 | 0.434236 | 0.421663 | 0 | 0.030631 | 0.339722 | 7,565 | 214 | 133 | 35.350467 | 0.797397 | 0.155056 | 0 | 0.568345 | 0 | 0 | 0.085867 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.014388 | 0 | 0.014388 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4f9705caafa2e6523e728d53ad95dca1eb3d602f | 546 | py | Python | cpp/select.py | Qvery-mm/JB-application | 6cef44cbdb3d35e83c93cc66676e5ee7a62a9938 | [
"MIT"
] | 3 | 2021-11-30T21:51:42.000Z | 2021-12-15T22:11:04.000Z | cpp/select.py | Qvery-mm/JB-application | 6cef44cbdb3d35e83c93cc66676e5ee7a62a9938 | [
"MIT"
] | 4 | 2020-04-23T21:08:22.000Z | 2022-02-10T01:34:32.000Z | cpp/select.py | Qvery-mm/JB-application | 6cef44cbdb3d35e83c93cc66676e5ee7a62a9938 | [
"MIT"
] | null | null | null | import sys
import os
import csv
from time import sleep
dir = "Clones"
try:
print("Specify threshold:")
threshold = float(input())
except Exception as e:
print(e)
exit(-1)
with open("finalClones", "w") as output:
with open("ClonesWithDistance", "r") as csvfile:
reader = csv.reader(csvfile, delimiter=',')
for row in reader:
row[8] = float(row[8])
if row[8] <= threshold:
totalNumber+=1
print(','.join(row[:-1]), file=output)
print(totalNumber)
| 23.73913 | 54 | 0.575092 | 67 | 546 | 4.686567 | 0.58209 | 0.038217 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015385 | 0.285714 | 546 | 22 | 55 | 24.818182 | 0.789744 | 0 | 0 | 0 | 0 | 0 | 0.104396 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4f9726928dfbdcfb8c92c7b858bd71589864d0f5 | 4,358 | py | Python | backend/core/service/data_source/data_provider/database_provider.py | pecimuth/synthia | a54265ca6f772959d395de789bfe16bf054d97ad | [
"MIT"
] | null | null | null | backend/core/service/data_source/data_provider/database_provider.py | pecimuth/synthia | a54265ca6f772959d395de789bfe16bf054d97ad | [
"MIT"
] | null | null | null | backend/core/service/data_source/data_provider/database_provider.py | pecimuth/synthia | a54265ca6f772959d395de789bfe16bf054d97ad | [
"MIT"
] | null | null | null | from typing import Iterator, Tuple, Any, Optional
from sqlalchemy import MetaData, select, Column, func
from sqlalchemy.engine import Connection
from sqlalchemy.exc import SQLAlchemyError
from core.service.data_source.data_provider.base_provider import DataProvider
from core.service.data_source.database_common import DatabaseConnectionManager
from core.service.data_source.identifier import Identifiers, Identifier
from core.service.exception import DataSourceIdentifierError, DatabaseNotReadable, FatalDatabaseError
class DatabaseDataProvider(DataProvider):
"""Provide data from a database."""
def scalar_data(self) -> Iterator[Any]:
idf = self._identifiers[0]
for tup in self._select([idf]):
yield tup[0]
def vector_data(self) -> Iterator[Tuple]:
for tup in self._select(self._identifiers):
yield tup
@property
def _conn(self) -> Connection:
"""Return database connection."""
conn_manager = self._injector.get(DatabaseConnectionManager)
return conn_manager.get_connection(self._data_source)
@property
def _first_column(self) -> Column:
"""Return column (bound to a DB connection) identified by the first identifier."""
return self._get_column(self._identifiers[0])
def _safe_exec(self, *args, **kwargs):
"""Execute statement and convert SQLAlchemy exception
to FatalDatabaseError so that it can be caught by our handlers."""
try:
return self._conn.execute(*args, **kwargs)
except SQLAlchemyError:
raise FatalDatabaseError()
def _select(self, identifiers: Identifiers) -> Iterator[Tuple]:
"""Yield tuples of values selected by identifiers."""
columns = [self._get_column(idf) for idf in identifiers]
for row in self._safe_exec(select(columns)):
yield row
def _get_column(self, idf: Identifier) -> Column:
"""Convert identifier to a column bound to a database connection."""
conn_manager = self._injector.get(DatabaseConnectionManager)
engine = conn_manager.get_engine(self._data_source)
meta = MetaData()
try:
meta.reflect(bind=engine)
except SQLAlchemyError:
raise DatabaseNotReadable(self._data_source)
if idf.table not in meta.tables:
raise DataSourceIdentifierError('Table not found', self._data_source, repr(idf))
table = meta.tables[idf.table]
if idf.column not in table.columns:
raise DataSourceIdentifierError('Column not found', self._data_source, repr(idf))
return table.columns[idf.column]
def scalar_data_not_none(self) -> Iterator[Any]:
column = self._first_column
query = select([column]).where(column.isnot(None))
for row in self._safe_exec(query):
yield row[0]
def estimate_min(self) -> Any:
column = self._first_column
query = select([func.min(column)])
return self._safe_exec(query).scalar()
def estimate_max(self) -> Any:
column = self._first_column
query = select([func.max(column)])
return self._safe_exec(query).scalar()
def get_null_count(self) -> int:
column = self._first_column
query = select([func.count()]).where(column.is_(None))
return self._safe_exec(query).scalar()
def get_not_null_count(self) -> int:
column = self._first_column
query = select([func.count(column)])
return self._safe_exec(query).scalar()
def estimate_null_frequency(self) -> Optional[float]:
null_count = self.get_null_count()
not_null_count = self.get_not_null_count()
count = null_count + not_null_count
if count == 0:
return None
return null_count / count
def estimate_mean(self) -> Optional[float]:
column = self._first_column
query = select([func.avg(column)])
return self._safe_exec(query).scalar()
def estimate_variance(self) -> Optional[float]:
column = self._first_column
query = select([
func.avg(column),
func.avg(column * column)
])
avg, square_avg = self._safe_exec(query).fetchone()
if avg is None:
return None
return max(square_avg - avg ** 2, 0)
| 37.895652 | 101 | 0.662919 | 518 | 4,358 | 5.378378 | 0.216216 | 0.035894 | 0.034458 | 0.052764 | 0.343144 | 0.288227 | 0.273869 | 0.240488 | 0.163676 | 0.085427 | 0 | 0.00211 | 0.238871 | 4,358 | 114 | 102 | 38.22807 | 0.837805 | 0.082836 | 0 | 0.25 | 0 | 0 | 0.007832 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.170455 | false | 0 | 0.090909 | 0 | 0.420455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
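For reference, a tiny standalone check (plain Python, independent of the provider class above) of the identity used in `estimate_variance`, namely Var(X) = E[X²] − (E[X])², with the result clamped at zero to guard against floating-point round-off. The values are illustrative.

```python
values = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

avg = sum(values) / len(values)                         # E[X]   = 5.0
square_avg = sum(v * v for v in values) / len(values)   # E[X^2] = 29.0

variance = max(square_avg - avg ** 2, 0)                # 29.0 - 25.0 = 4.0 (population variance)
print(variance)
```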
4f976db5c1efb3d31ce7d39ff102615d14caf375 | 6,271 | py | Python | utils.py | frosinastojanovska/emoji2vec | f18626f95f96392b5e6abebf078cae733896096e | [
"MIT"
] | null | null | null | utils.py | frosinastojanovska/emoji2vec | f18626f95f96392b5e6abebf078cae733896096e | [
"MIT"
] | null | null | null | utils.py | frosinastojanovska/emoji2vec | f18626f95f96392b5e6abebf078cae733896096e | [
"MIT"
] | null | null | null | """Utility functions for training and evaluation"""
# External dependencies
import pickle as pk
from sklearn import metrics
import numpy as np
import os.path
from gensim import matutils
from naga.shared.kb import KB
# Internal dependencies
from phrase2vec import Phrase2Vec
# Authorship
__author__ = "Ben Eisner, Tim Rocktaschel"
__email__ = "beisner@princeton.edu"
def generate_embeddings(ind2phr, kb, embeddings_file, word2vec_file, word2vec_dim=400):
"""Generate a numpy array of phrase embeddings for all phrases in the knowledge base.
Since it is expensive to calculate these phrase embeddings every time, we cache the output
in a file, which we can load from if this function is called on a set we've seen before.
Args:
word2vec_dim: Dimension of the static word2vec model we use
ind2phr: Mapping from phrase indices to phrases in the KB
kb: Knowledge base
embeddings_file: File where we store the embeddings
word2vec_file: word2vec model file
Returns:
"""
phrase_vector_sums = dict()
# get the complete word vectors from the second argument
if not (os.path.isfile(embeddings_file)):
print('reading embedding data from: ' + word2vec_file)
phrase_vec_model = Phrase2Vec.from_word2vec_paths(word2vec_dim, w2v_path=word2vec_file)
print('generating vector subset')
for phrase in kb.get_vocab(1):
phrase_vector_sums[phrase] = phrase_vec_model[phrase]
pk.dump(phrase_vector_sums, open(embeddings_file, 'wb'))
else:
print('loading embeddings...')
phrase_vector_sums = pk.load(open(embeddings_file, 'rb'))
# build the embeddings array, for lookup later
embeddings_array = np.zeros(shape=[len(ind2phr), 400], dtype=np.float32)
for ind, phr in ind2phr.items():
embeddings_array[ind] = phrase_vector_sums[phr]
return embeddings_array
# Read data from a file and inject it into a knowledge base
def __read_data(filename, base, ind_to_phr, ind_to_emoj, typ):
with open(filename, 'r', encoding="utf8") as f:
# build the data line by line
lines = f.readlines()
for line in lines:
ph, em, truth = line.rstrip().split('\t')
base.add((truth == 'True'), typ, em, ph)
ind_to_phr[base.get_id(ph, 1)] = ph
ind_to_emoj[base.get_id(em, 0)] = em
def build_kb(data_folder):
"""Read training data from the training directory and generate a KB
Args:
data_folder: Directory containing a train.txt, a dev.txt, and a test.txt from which
we can assemble our knowledge base.
"""
base = KB()
# KB indices to phrase
ind_to_phr = dict()
# KB indices to emoji
ind_to_emoj = dict()
__read_data(data_folder + 'train.txt', base, ind_to_phr, ind_to_emoj, 'train')
__read_data(data_folder + 'dev.txt', base, ind_to_phr, ind_to_emoj, 'dev')
__read_data(data_folder + 'test.txt', base, ind_to_phr, ind_to_emoj, 'test')
return base, ind_to_phr, ind_to_emoj
def get_examples_from_kb(kb, example_type='train'):
"""Extract all the examples of a type (i.e. train, dev, test) from the knowledge base
Args:
kb: Knowledge base
example_type: Name of example type (i.e. train, dev, test)
Returns:
Lists of the rows, columns, and targets from the dataset
"""
# prepare the training set
batch = list(kb.get_all_facts([example_type]))
rows = list()
cols = list()
targets = list()
for i in range(len(batch)):
example = batch[i]
cols.append(kb.get_id(example[0][0], 0))
rows.append(kb.get_id(example[0][1], 1))
targets.append(example[1])
return rows, cols, targets
def __sigmoid(x):
return 1 / (1 + np.math.exp(-x))
def generate_predictions(e2v, dset, phr_embeddings, ind2emoji, threshold):
"""Calculate whether a set of emoji/phrase pairs are correlated
This implementation doesn't use TensorFlow, and relies instead of injected vectors
Args:
e2v: Mapping from emoji to vector, typically the trained emoji vectors from our model.
dset: KB that contains pairs of emoji and phrases, as well as whether they are correlated.
phr_embeddings: Map between phrase indices and phrase vectors, as computed by the vector sum of
word vectors for that phrase.
ind2emoji: Map between emoji index and emoji, for converting dset indices into emoji.
threshold: Threshold for classifying correlation as true or false.
Returns:
y_pred_labels: List of predicted labels for pairs in the dataset
y_pred_values: List of predicted scores for pairs in the dataset
y_true_labels: List of true labels for pairs in the dataset
y_true_values: List of true scores for pairs in the dataset
"""
y_pred_labels = list()
y_pred_values = list()
phr_ixs, em_ixs, truths = dset
for (phr_ix, em_ix, truth) in zip(phr_ixs, em_ixs, truths):
prob = __sigmoid(
np.dot(matutils.unitvec(phr_embeddings[phr_ix]), matutils.unitvec(e2v[ind2emoji[em_ix]])))
y_pred_values.append(prob)
y_pred_labels.append(prob >= threshold) # Threshold predicted probability
y_true_values = [float(v) for v in truths]
return y_pred_labels, y_pred_values, truths, y_true_values
def get_metrics(pred_labels, pred_values, truth_labels, truth_values):
"""Get a set of metrics, including accuracy, f1 score, and area under the curve.
This method takes in predictions and spits out performance metrics.
Args:
pred_labels: Predicted labels for correlation between an emoji and a phrase.
pred_values: Predicted correlation value between an emoji and a phrase.
truth_labels: True labels for correlation between an emoji and a phrase.
truth_values: True correlation value between an emoji and a phrase
Returns:
"""
acc = metrics.accuracy_score(y_true=truth_labels, y_pred=pred_labels)
f1 = metrics.f1_score(y_true=truth_labels, y_pred=pred_labels)
try:
auc = metrics.roc_auc_score(y_true=truth_values, y_score=pred_values)
except:
auc = 'N/A'
return acc, f1, auc
| 34.456044 | 103 | 0.689364 | 921 | 6,271 | 4.514658 | 0.274701 | 0.016835 | 0.013468 | 0.01443 | 0.1443 | 0.136123 | 0.117364 | 0.075036 | 0.03848 | 0 | 0 | 0.010149 | 0.230107 | 6,271 | 181 | 104 | 34.646409 | 0.851077 | 0.439324 | 0 | 0 | 0 | 0 | 0.054865 | 0.006366 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09589 | false | 0 | 0.09589 | 0.013699 | 0.273973 | 0.041096 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
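A toy sketch of the evaluation helper in the row above, assuming the module is importable as `utils` with its dependencies installed. The four emoji/phrase pairs and their scores are illustrative only.

```python
from utils import get_metrics  # assumption: the file above is on the path as utils.py

# Toy predictions for four emoji/phrase pairs.
pred_values = [0.9, 0.2, 0.7, 0.4]
pred_labels = [v >= 0.5 for v in pred_values]
truth_labels = [True, False, False, True]
truth_values = [1.0, 0.0, 0.0, 1.0]

acc, f1, auc = get_metrics(pred_labels, pred_values, truth_labels, truth_values)
print(acc, f1, auc)  # 0.5 0.5 0.75
```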
4f9790f20093c4af6de2079c822d49385298f35d | 2,768 | py | Python | cratedigger/media/library.py | adammillerio/cratedigger | d18b809009ae03aed4b1741f1aa12614fd6d211b | [
"MIT"
] | 11 | 2019-09-09T00:41:22.000Z | 2022-02-07T02:48:00.000Z | cratedigger/media/library.py | adammillerio/cratedigger | d18b809009ae03aed4b1741f1aa12614fd6d211b | [
"MIT"
] | null | null | null | cratedigger/media/library.py | adammillerio/cratedigger | d18b809009ae03aed4b1741f1aa12614fd6d211b | [
"MIT"
] | 1 | 2020-02-02T04:27:45.000Z | 2020-02-02T04:27:45.000Z | #!/usr/bin/env python3
import os
from json import dumps
from cratedigger.media.crate import MediaCrate
from cratedigger.serato.library import SeratoLibrary
class MediaLibrary(SeratoLibrary):
"""A library of media folders represented as Serato crates.
This is composed of a volume and associated metadata, as well as a list of
folders and their equivalent Serato crate representation.
Attributes:
path (str): Path to the loaded Serato Library
volume_type (str): Type of the volume, either mac or windows
volume (str): Name of the volume. On windows, this is a drive letter. On a
mac, this is either root if on the root drive, or an arbitrary
volume name if on a volume.
volume_path (str): Path to the root of the volume
crates_path (str): Path to the Subcrates folder on the volume
crates (obj:`MediaCrate`): Tree of all crates in the Serato library
root_crate (obj:`MediaCrate`): Root crate of this folder's tree, all loaded
Serato libraries are grouped under this
"""
root_crate = MediaCrate(parent=SeratoLibrary.root_crate)
root_crate.crate_name = 'Media'
def __init__(self) -> None:
"""Initialize a Media Library.
This invokes the initialization method for the Serato Library.
"""
super().__init__()
def load(self, path: str) -> None:
"""Load a Media Library from a given path.
This method traverses all folders in a given path and creates Media crates
with all compatible files.
Args:
path (str): Path to load crates for.
"""
# Store the path
self.path = path
# Determine volume name and type
self.split_volume(path)
# Add Media root crate to the global tree
self.crates = MediaCrate(parent=MediaLibrary.root_crate)
self.crates.crate_name = self.volume
# Load crates
self.load_crates(path, self.crates)
def load_crates(self, path: str, parent: MediaCrate) -> None:
"""Load crates in a given media folder.
This creates a MediaCrate for a given path and parent, and loads all
compatible files into it. Then, if there are any subdirectories, it
recursively invokes this method to load the subcrates.
Args:
path (str): Path to load a crate from
parent (obj:`MediaCrate`): Parent MediaCrate for the created subcrate
"""
# Create new subcrate and load it
child = MediaCrate(parent=parent)
child.load_crate(path, self.volume, self.volume_path, MediaLibrary.root_crate.crate_name)
for file in os.listdir(path):
# If this crate has subdirectories, load the subcrates
full_path = os.path.join(path, file)
if os.path.isdir(full_path):
self.load_crates(full_path, child)
| 32.186047 | 93 | 0.693642 | 398 | 2,768 | 4.751256 | 0.273869 | 0.038075 | 0.029085 | 0.034373 | 0.047594 | 0.02221 | 0 | 0 | 0 | 0 | 0 | 0.000473 | 0.235549 | 2,768 | 85 | 94 | 32.564706 | 0.893195 | 0.609104 | 0 | 0 | 0 | 0 | 0.005192 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.136364 | false | 0 | 0.181818 | 0 | 0.409091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
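A brief, hypothetical usage sketch for the `MediaLibrary` class above; the media path is illustrative, and `volume`/`volume_path` are set by the parent `SeratoLibrary` as described in the class docstring.

```python
from cratedigger.media.library import MediaLibrary

# Load every folder under a media directory as a tree of Serato crates.
library = MediaLibrary()
library.load('/Volumes/Music/Albums')

# The loaded crates hang off the shared root crate, grouped by volume name.
print(library.volume, library.volume_path)
```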
4f9b26be443455b6fc6f649c8f2f555a313c6d6e | 5,928 | py | Python | language/mentionmemory/tasks/text_classifier_test.py | urikz/language | 503aca178c98fed4c606cf83e58ae0f84012a4d9 | [
"Apache-2.0"
] | null | null | null | language/mentionmemory/tasks/text_classifier_test.py | urikz/language | 503aca178c98fed4c606cf83e58ae0f84012a4d9 | [
"Apache-2.0"
] | null | null | null | language/mentionmemory/tasks/text_classifier_test.py | urikz/language | 503aca178c98fed4c606cf83e58ae0f84012a4d9 | [
"Apache-2.0"
] | null | null | null | # coding=utf-8
# Copyright 2018 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for text classifier model."""
import copy
import json
from absl.testing import absltest
from absl.testing import parameterized
import jax
from language.mentionmemory.encoders import import_encoders # pylint: disable=unused-import
from language.mentionmemory.tasks import text_classifier
from language.mentionmemory.utils import test_utils
import ml_collections
import numpy as np
import tensorflow as tf
# easiest to define as constant here
MENTION_SIZE = 2
class TextClassifierTest(test_utils.TestCase):
"""Tests for text classifier model."""
encoder_config = {
'dtype': 'bfloat16',
'vocab_size': 1000,
'entity_vocab_size': 1000,
'max_positions': 512,
'max_length': 128,
'hidden_size': 64,
'intermediate_dim': 128,
'entity_dim': 32,
'num_attention_heads': 8,
'num_initial_layers': 4,
'num_final_layers': 8,
'dropout_rate': 0.1,
}
model_config = {
'encoder_config': encoder_config,
'vocab_size': 3,
'encoder_name': 'eae',
'dtype': 'bfloat16',
}
config = {
'model_config': model_config,
'seed': 0,
'per_device_batch_size': 2,
'samples_per_example': 1,
'max_sample_mentions': 24,
'max_mentions': 10,
'max_length_with_entity_tokens': 150,
}
def setUp(self):
super().setUp()
self.config = ml_collections.ConfigDict(self.config)
self.model_config = self.config.model_config
encoder_config = self.model_config.encoder_config
self.max_length = encoder_config.max_length
self.max_sample_mentions = self.config.max_sample_mentions
self.collater_fn = text_classifier.TextClassifier.make_collater_fn(
self.config)
self.postprocess_fn = text_classifier.TextClassifier.make_output_postprocess_fn(
self.config)
model = text_classifier.TextClassifier.build_model(self.model_config)
dummy_input = text_classifier.TextClassifier.dummy_input(self.config)
init_rng = jax.random.PRNGKey(0)
self.init_parameters = model.init(init_rng, dummy_input, True)
def _gen_raw_batch(
self,
n_mentions,
):
"""Generate raw example."""
bsz = self.config.per_device_batch_size
text_ids = np.random.randint(
low=1,
high=self.model_config.encoder_config.vocab_size,
size=(bsz, self.max_length),
dtype=np.int64)
text_mask = np.ones_like(text_ids)
pad_size = max(0, self.max_sample_mentions - n_mentions)
mention_pad_shape = (0, pad_size)
mention_start_positions = np.random.choice(
self.max_length // MENTION_SIZE, size=n_mentions,
replace=False) * MENTION_SIZE
mention_start_positions.sort()
mention_start_positions = mention_start_positions.astype(np.int64)
mention_end_positions = mention_start_positions + MENTION_SIZE - 1
mention_mask = np.ones_like(mention_start_positions)
mention_start_positions = np.pad(
mention_start_positions[:self.max_sample_mentions],
pad_width=mention_pad_shape,
mode='constant')
mention_end_positions = np.pad(
mention_end_positions[:self.max_sample_mentions],
pad_width=mention_pad_shape,
mode='constant')
mention_mask = np.pad(
mention_mask[:self.max_sample_mentions],
pad_width=mention_pad_shape,
mode='constant')
target = np.random.randint(self.model_config.vocab_size, size=bsz)
raw_batch = {
'text_ids': tf.constant(text_ids),
'text_mask': tf.constant(text_mask),
'target': tf.constant(target),
'mention_start_positions': tf.constant(mention_start_positions),
'mention_end_positions': tf.constant(mention_end_positions),
'mention_mask': tf.constant(mention_mask),
}
for key in [
'mention_start_positions', 'mention_end_positions', 'mention_mask'
]:
raw_batch[key] = tf.tile(tf.reshape(raw_batch[key], (1, -1)), (bsz, 1))
return raw_batch
@parameterized.parameters(
{'n_mentions': 0},
{'n_mentions': 1},
{'n_mentions': 10},
{'n_mentions': 24},
{'n_mentions': 30},
{
'n_mentions': 0,
'apply_mlp': True
},
{
'n_mentions': 24,
'apply_mlp': True
},
)
def test_loss_fn(self, n_mentions, apply_mlp=False):
"""Test loss function runs and produces expected values."""
config = copy.deepcopy(self.config)
config['model_config']['apply_mlp'] = apply_mlp
raw_batch = self._gen_raw_batch(n_mentions)
batch = self.collater_fn(raw_batch)
batch = jax.tree_map(np.asarray, batch)
loss_fn = text_classifier.TextClassifier.make_loss_fn(config)
_, metrics, auxiliary_output = loss_fn(
model_config=self.model_config,
model_params=self.init_parameters['params'],
model_vars={},
batch=batch,
deterministic=True)
self.assertEqual(metrics['agg']['denominator'],
config.per_device_batch_size)
features = self.postprocess_fn(batch, auxiliary_output)
# Check features are JSON-serializable
json.dumps(features)
# Check features match the original batch
for key in batch.keys():
self.assertArrayEqual(np.array(features[key]), batch[key])
if __name__ == '__main__':
absltest.main()
| 31.365079 | 92 | 0.689609 | 762 | 5,928 | 5.082677 | 0.2979 | 0.034082 | 0.059644 | 0.027111 | 0.195972 | 0.093984 | 0.05164 | 0.05164 | 0.05164 | 0.05164 | 0 | 0.015718 | 0.205803 | 5,928 | 188 | 93 | 31.531915 | 0.806924 | 0.14693 | 0 | 0.086331 | 0 | 0 | 0.125324 | 0.027496 | 0 | 0 | 0 | 0 | 0.014388 | 1 | 0.021583 | false | 0 | 0.079137 | 0 | 0.136691 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4f9d50b5dd00d9408b20a8ea33db22497190a070 | 659 | py | Python | examples/worker_example.py | charsyam/simplekiq | cd8b02078e06af64d79c5498af55fcdfbaf81676 | [
"Apache-2.0"
] | null | null | null | examples/worker_example.py | charsyam/simplekiq | cd8b02078e06af64d79c5498af55fcdfbaf81676 | [
"Apache-2.0"
] | null | null | null | examples/worker_example.py | charsyam/simplekiq | cd8b02078e06af64d79c5498af55fcdfbaf81676 | [
"Apache-2.0"
] | null | null | null | import redis
from simplekiq import KiqQueue
from simplekiq import EventBuilder
from simplekiq import Worker
class MyEventWorker(Worker):
def __init__(self, queue, failed_queue):
super().__init__(queue, failed_queue)
def _process(self, event_type, value):
print(event_type, value)
queue = KiqQueue("127.0.0.1:6379", "api_worker", True)
failed_queue = KiqQueue("127.0.0.1:6379", "api_failed", True)
#event_builder = EventBuilder(queue)
#value = event_builder.emit("test_event", {"age": 13, "value": "test", "1": {"2": 2}}, 3)
#queue.enqueue(value)
worker = MyEventWorker(queue, failed_queue)
while True:
worker.process(True)
| 25.346154 | 89 | 0.713202 | 90 | 659 | 5 | 0.388889 | 0.097778 | 0.126667 | 0.075556 | 0.115556 | 0.115556 | 0.115556 | 0.115556 | 0 | 0 | 0 | 0.046181 | 0.145675 | 659 | 25 | 90 | 26.36 | 0.753108 | 0.216995 | 0 | 0 | 0 | 0 | 0.093567 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.5 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4fa4fdfadedb85e618d18869c67ff69488aba81d | 8,254 | py | Python | main.py | Marcel-Velez/CLMR | 730bd9078756650a53b4c6438b29e5aeb2c15134 | [
"Apache-2.0"
] | null | null | null | main.py | Marcel-Velez/CLMR | 730bd9078756650a53b4c6438b29e5aeb2c15134 | [
"Apache-2.0"
] | null | null | null | main.py | Marcel-Velez/CLMR | 730bd9078756650a53b4c6438b29e5aeb2c15134 | [
"Apache-2.0"
] | null | null | null | import argparse
from gc import callbacks
import pytorch_lightning as pl
from pytorch_lightning.callbacks.early_stopping import EarlyStopping
from pytorch_lightning import Trainer
# from pytorch_lightning.loggers import TensorBoardLogger
from pytorch_lightning.loggers import WandbLogger # newline 1
from torch.utils.data import DataLoader
# Audio Augmentations
from torchaudio_augmentations import (
RandomApply,
ComposeMany,
RandomResizedCrop,
PolarityInversion,
Noise,
Gain,
HighLowPass,
Delay,
PitchShift,
Reverb,
)
from clmr.data import ContrastiveDataset
from clmr.datasets import get_dataset
from clmr.evaluation import evaluate
from clmr.modules import ContrastiveLearning, SupervisedLearning
from clmr.utils import yaml_config_hook
from clmr.models import VanillaTailedU
from clmr.models import SampleCNN
from clmr.models import VanillaHybridTailedUWithFirst, DefrostedHybridTailedUWithFirst
from clmr.models import TailedWithPinkLinksU #, TailedNoBlueLinksU, TailedNoLinksU
from clmr.models import TailedU1Tail, TailedU2Tail, TailedU3Tail, TailedU4Tail, TailedU5Tail
from clmr.models import TailedU1Cont, TailedU2Cont, TailedU3Cont #, TailedUMin1Cont, TailedUMin2Cont
from clmr.models import TailedU1Expa, TailedU2Expa, TailedU3Expa
from clmr.models import BigPink,BigTail
from clmr.models import VanillaTailedUXS, TailedU5TailSmallEnd
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="CLMR")
parser = Trainer.add_argparse_args(parser)
config = yaml_config_hook("./config/config.yaml")
for k, v in config.items():
parser.add_argument(f"--{k}", default=v, type=type(v))
args = parser.parse_args()
pl.seed_everything(args.seed)
# ------------
# data augmentations
# ------------
if args.supervised:
train_transform = [RandomResizedCrop(n_samples=args.audio_length)]
num_augmented_samples = 1
else:
train_transform = [
RandomResizedCrop(n_samples=args.audio_length),
RandomApply([PolarityInversion()], p=args.transforms_polarity),
RandomApply([Noise()], p=args.transforms_noise),
RandomApply([Gain()], p=args.transforms_gain),
RandomApply(
[HighLowPass(sample_rate=args.sample_rate)], p=args.transforms_filters
),
RandomApply([Delay(sample_rate=args.sample_rate)], p=args.transforms_delay),
RandomApply(
[
PitchShift(
n_samples=args.audio_length,
sample_rate=args.sample_rate,
)
],
p=args.transforms_pitch,
),
RandomApply(
[Reverb(sample_rate=args.sample_rate)], p=args.transforms_reverb
),
]
num_augmented_samples = 2
# ------------
# dataloaders
# ------------
# train_dataset = get_dataset(args.dataset, args.dataset_dir, subset="train")
if args.dataset == "magnatagatune":
        train_dataset = get_dataset(args.dataset, args.dataset_dir, subset="train")
else:
train_dataset = get_dataset(args.dataset, args.dataset_dir, subset=None)
valid_dataset = get_dataset(args.dataset, args.dataset_dir, subset="valid")
contrastive_train_dataset = ContrastiveDataset(
train_dataset,
input_shape=(1, args.audio_length),
transform=ComposeMany(
train_transform, num_augmented_samples=num_augmented_samples
),
)
contrastive_valid_dataset = ContrastiveDataset(
valid_dataset,
input_shape=(1, args.audio_length),
transform=ComposeMany(
train_transform, num_augmented_samples=num_augmented_samples
),
)
train_loader = DataLoader(
contrastive_train_dataset,
batch_size=args.batch_size,
num_workers=args.workers,
drop_last=True,
shuffle=True,
)
valid_loader = DataLoader(
contrastive_valid_dataset,
batch_size=args.batch_size,
num_workers=args.workers,
drop_last=True,
shuffle=False,
)
# ------------
# encoder
# ------------
if args.model == "vanilla_tailed_u":
encoder = VanillaTailedU(n_classes=train_dataset.n_classes)
elif args.model == "tailed_u_1_tail":
encoder = TailedU1Tail(n_classes=train_dataset.n_classes)
elif args.model == "tailed_u_2_tail":
encoder = TailedU2Tail(n_classes=train_dataset.n_classes)
elif args.model == "tailed_u_3_tail":
encoder = TailedU3Tail(n_classes=train_dataset.n_classes)
elif args.model == "tailed_u_4_tail":
encoder = TailedU4Tail(n_classes=train_dataset.n_classes)
elif args.model == "tailed_u_5_tail":
encoder = TailedU5Tail(n_classes=train_dataset.n_classes)
elif args.model == "tailed_u_janne_frozen":
encoder = VanillaHybridTailedUWithFirst(n_classes=train_dataset.n_classes)
elif args.model == "tailed_u_janne_defrosted":
encoder = DefrostedHybridTailedUWithFirst(n_classes=train_dataset.n_classes)
elif args.model == "tailed_with_pink_u":
encoder = TailedWithPinkLinksU(n_classes=train_dataset.n_classes)
elif args.model == "tailed_u_1_cont":
encoder = TailedU1Cont(n_classes=train_dataset.n_classes)
elif args.model == "tailed_u_2_cont":
encoder = TailedU2Cont(n_classes=train_dataset.n_classes)
elif args.model == "tailed_u_3_cont":
encoder = TailedU3Cont(n_classes=train_dataset.n_classes)
elif args.model == "tailed_u_1_expa":
encoder = TailedU1Expa(n_classes=train_dataset.n_classes)
elif args.model == "tailed_u_2_expa":
encoder = TailedU2Expa(n_classes=train_dataset.n_classes)
elif args.model == "tailed_u_3_expa":
encoder = TailedU3Expa(n_classes=train_dataset.n_classes)
elif args.model == "bigpink":
encoder = BigPink(n_classes=train_dataset.n_classes)
elif args.model == "vanilla_small":
encoder = VanillaTailedUXS(n_classes=train_dataset.n_classes)
elif args.model == "pink_small_end":
encoder = TailedU5TailSmallEnd(n_classes=train_dataset.n_classes)
elif args.model == "clmr":
encoder = SampleCNN(
strides=[3, 3, 3, 3, 3, 3, 3, 3, 3],
supervised=args.supervised,
out_dim=train_dataset.n_classes,
)
else:
print("no correct model given as args.model, given: '", args.model,"'")
exit()
# ------------
# model
# ------------
if args.supervised:
module = SupervisedLearning(args, encoder, output_dim=train_dataset.n_classes)
else:
module = ContrastiveLearning(args, encoder)
# logger = TensorBoardLogger("runs", name="CLMRv2-{}".format(args.dataset))
logger = WandbLogger(save_dir="runs", name="ISMIR-{}-{}".format(args.dataset, args.model))
if args.checkpoint_path:
trainer = Trainer.from_argparse_args(
args,
logger=logger,
sync_batchnorm=True,
max_epochs=args.max_epochs,
log_every_n_steps=10,
check_val_every_n_epoch=1,
accelerator=args.accelerator,
)
trainer.fit(module, train_loader, valid_loader, ckpt_path=args.checkpoint_path)
else:
# ------------
# training
# ------------
if args.supervised:
early_stopping = EarlyStopping(monitor="Valid/loss", patience=20)
else:
early_stopping = None
# early_stopping = EarlyStopping(monitor="Valid/loss", patience=5)
trainer = Trainer.from_argparse_args(
args,
logger=logger,
sync_batchnorm=True,
max_epochs=args.max_epochs,
log_every_n_steps=10,
check_val_every_n_epoch=1,
accelerator=args.accelerator,
)
trainer.fit(module, train_loader, valid_loader)
trainer.save_checkpoint(filepath="./custom_checkpoints/Ismir-models-{}-{}-epoch{}-step{}-after-training.ckpt".format(module.hparams.dataset, module.hparams.model, module.current_epoch, module.global_step))
| 35.886957 | 209 | 0.667434 | 903 | 8,254 | 5.829457 | 0.213732 | 0.057751 | 0.049392 | 0.075988 | 0.43788 | 0.420973 | 0.410714 | 0.391717 | 0.339856 | 0.299582 | 0 | 0.009578 | 0.228374 | 8,254 | 229 | 210 | 36.043668 | 0.816926 | 0.067119 | 0 | 0.252809 | 0 | 0 | 0.063575 | 0.015503 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.011236 | 0.123596 | 0 | 0.123596 | 0.005618 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4fa53da032d155c026ca9e19f8c65ea5e5a3ff94 | 1,839 | py | Python | supervisely/labeling-tool/src/sly_globals.py | supervisely-ecosystem/gl-metric-learning | 3dd0f70616c3a743ed4113ade4cf401fd816ef47 | [
"MIT"
] | null | null | null | supervisely/labeling-tool/src/sly_globals.py | supervisely-ecosystem/gl-metric-learning | 3dd0f70616c3a743ed4113ade4cf401fd816ef47 | [
"MIT"
] | null | null | null | supervisely/labeling-tool/src/sly_globals.py | supervisely-ecosystem/gl-metric-learning | 3dd0f70616c3a743ed4113ade4cf401fd816ef47 | [
"MIT"
] | null | null | null | from pathlib import Path
import sys
import os
import supervisely_lib as sly
from supervisely_lib import Api
root_source_dir = str(Path(sys.argv[0]).parents[1])
sly.logger.info(f"Root source directory: {root_source_dir}")
sys.path.append(root_source_dir) # adds labeling-tool to path
source_path = str(Path(sys.argv[0]).parents[0])
sly.logger.info(f"App source directory: {source_path}")
sys.path.append(source_path) # adds labeling-tool/src to path
ui_sources_dir = os.path.join(source_path, "ui")
sly.logger.info(f"UI source directory: {ui_sources_dir}")
sys.path.append(ui_sources_dir) # adds labeling-tool/src/ui to path
sly.logger.info(f"Added to sys.path: {ui_sources_dir}")
owner_id = int(os.environ['context.userId'])
team_id = int(os.environ['context.teamId'])
my_app: sly.AppService = sly.AppService(ignore_task_id=True)
api = my_app.public_api
task_id = my_app.task_id
spawn_api = Api(server_address=os.environ['SERVER_ADDRESS'], token=os.environ['_SPAWN_API_TOKEN'],
ignore_task_id=True, retry_count=5) # api of spawner (admin / manager)
spawn_user_login = os.environ['_SPAWN_USER_LOGIN']
model_info = None
calculator_info = None
nn_session_id = None
calculator_session_id = None
# nn_session_id = 10726 # DEBUG
# calculator_session_id = 10727 # DEBUG
tags_examples = None
examples_data = None
model_tag_names = None
project2meta = {} # project_id -> project_meta
image2info = {}
image2ann = {} # image_id -> annotation
figures2embeddings = {} # image_id -> annotation
figures_in_reference = []
items_database = None
cache_path = os.path.join(my_app.data_dir, "cache")
sly.fs.mkdir(cache_path)
unknown_tag_meta = sly.TagMeta("unknown", sly.TagValueType.NONE, color=[255, 165, 0])
items_preview_size = 250
items_preview_count = 5
annotated_figures_count = 0
figures_on_frame_count = 0
| 28.292308 | 98 | 0.76074 | 290 | 1,839 | 4.548276 | 0.351724 | 0.034117 | 0.039424 | 0.042456 | 0.065201 | 0.033359 | 0 | 0 | 0 | 0 | 0 | 0.019802 | 0.121262 | 1,839 | 64 | 99 | 28.734375 | 0.796411 | 0.1441 | 0 | 0 | 0 | 0 | 0.151088 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.116279 | 0 | 0.116279 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4fa719c1a1304246d4350399acbbe6dd26dec668 | 4,038 | py | Python | Competition/RNN-based Baseline/DataManager.py | zhengcj1/ChID-Dataset | f7d9b7b75cccd50455987a623c898b490e8450f6 | [
"Apache-2.0"
] | 72 | 2019-06-08T13:21:36.000Z | 2019-09-24T04:11:29.000Z | Competition/RNN-based Baseline/DataManager.py | a626709452/ChID-Dataset | f7d9b7b75cccd50455987a623c898b490e8450f6 | [
"Apache-2.0"
] | 7 | 2019-06-12T07:12:30.000Z | 2019-10-01T04:11:40.000Z | Competition/RNN-based Baseline/DataManager.py | a626709452/ChID-Dataset | f7d9b7b75cccd50455987a623c898b490e8450f6 | [
"Apache-2.0"
] | 31 | 2019-06-27T03:38:18.000Z | 2019-09-24T04:11:16.000Z | # -*- coding: utf-8 -*-
import os
import pickle
import numpy as np
import random
import re
import jieba
import time
from utils import Vocabulary
random.seed(time.time())
class DataManager:
def __init__(self):
self.vocab = Vocabulary()
self.ans = {}
for line in open("../data/train_answer.csv"):
line = line.strip().split(',')
self.ans[line[0]] = int(line[1])
print("*** Finish building vocabulary")
def get_num(self):
num_word, num_idiom = len(self.vocab.id2word) - 2, len(self.vocab.id2idiom) - 1
print("Numbers of words and idioms: %d %d" % (num_word, num_idiom))
return num_word, num_idiom
def _prepare_data(self, temp_data):
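        # split each passage on the #idiom...# placeholders, tokenize the plain-text spans with jieba,
        # and record the position (loc), known gold labels (labs) and tag of every idiom blank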
cans = temp_data["candidates"]
cans = [self.vocab.tran2id(each, True) for each in cans]
for text in temp_data["content"]:
content = re.split(r'(#idiom\d+#)', text)
doc = []
loc = []
labs = []
tags = []
for i, segment in enumerate(content):
if re.match(r'#idiom\d+#', segment) is not None:
tags.append(segment)
if segment in self.ans:
labs.append(self.ans[segment])
loc.append(len(doc))
doc.append(self.vocab.tran2id('#idiom#'))
else:
doc += [self.vocab.tran2id(each) for each in jieba.lcut(segment)]
yield doc, cans, labs, loc, tags
def train(self, dev=False):
if dev:
file = open("../data/train.txt")
lines = file.readlines()[:10000]
else:
file = open("../data/train.txt")
lines = file.readlines()[10000:]
random.shuffle(lines)
for line in lines:
temp_data = eval(line)
for doc, cans, labs, loc, tags in self._prepare_data(temp_data):
yield doc, cans, labs, loc, tags
def test(self, file):
for line in open(file):
temp_data = eval(line)
for doc, cans, _, loc, tags in self._prepare_data(temp_data):
yield doc, cans, loc, tags
def get_embed_matrix(self): # DataManager
np.random.seed(37)
def embed_matrix(file, dic, dim=200):
fr = open(file, encoding="utf8")
wv = {}
for line in fr:
vec = line.split(" ")
word = vec[0]
if word in dic:
vec = [float(value) for value in vec[1:]]
assert len(vec) == dim
wv[dic[word]] = vec
                    # wv is keyed by the ids from id2idiom/id2word, so the matrix below can be filled in vocabulary order
lost_cnt = 0
matrix = []
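            # ids without a pretrained vector fall back to a small random embedding (and are counted as lost)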
for i in range(len(dic)):
if i in wv:
matrix.append(wv[i])
else:
lost_cnt += 1
matrix.append(np.random.uniform(-0.1, 0.1, [dim]))
return matrix, lost_cnt
if os.path.exists("newWordvector.txt"):
self.word_embed_matrix, lost_word = embed_matrix("newWordvector.txt", self.vocab.word2id)
else:
self.word_embed_matrix = np.random.rand(len(self.vocab.word2id), 200)
lost_word = len(self.vocab.word2id)
if os.path.exists("newIdiomvector.txt"):
self.idiom_embed_matrix, lost_idiom = embed_matrix("newIdiomvector.txt", self.vocab.idiom2id)
else:
self.idiom_embed_matrix = np.random.rand(len(self.vocab.idiom2id), 200)
lost_idiom = len(self.vocab.idiom2id)
self.word_embed_matrix = np.array(self.word_embed_matrix, dtype=np.float32)
self.idiom_embed_matrix = np.array(self.idiom_embed_matrix, dtype=np.float32)
print("*** %d idioms and %d words not found" % (lost_idiom, lost_word))
print("*** Embed matrixs built")
return self.word_embed_matrix, self.idiom_embed_matrix
| 33.65 | 105 | 0.541852 | 502 | 4,038 | 4.239044 | 0.25498 | 0.072368 | 0.033835 | 0.044643 | 0.219925 | 0.157895 | 0.157895 | 0.114662 | 0.081767 | 0.041353 | 0 | 0.019505 | 0.339772 | 4,038 | 119 | 106 | 33.932773 | 0.778695 | 0.026003 | 0 | 0.11828 | 0 | 0 | 0.077119 | 0.006108 | 0 | 0 | 0 | 0 | 0.010753 | 1 | 0.075269 | false | 0 | 0.086022 | 0 | 0.204301 | 0.043011 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4fa92dbb34ca0620685882514ead1ec8e915c063 | 6,738 | py | Python | tests/test_decorator.py | maximlt/tranquilizer | 9b4a738d1f24af7f4c2397d43454e0fe2ee5e86b | [
"BSD-3-Clause"
] | 13 | 2020-09-08T08:59:23.000Z | 2022-01-16T20:03:27.000Z | tests/test_decorator.py | maximlt/tranquilizer | 9b4a738d1f24af7f4c2397d43454e0fe2ee5e86b | [
"BSD-3-Clause"
] | 14 | 2020-08-06T17:27:11.000Z | 2022-03-03T05:24:57.000Z | tests/test_decorator.py | maximlt/tranquilizer | 9b4a738d1f24af7f4c2397d43454e0fe2ee5e86b | [
"BSD-3-Clause"
] | 5 | 2021-02-22T18:25:16.000Z | 2021-12-11T18:12:21.000Z | from tranquilizer.decorator import tranquilize, publish
from tranquilizer.decorator import _prepare, _prepare_arg, _prepare_arg_docs
from tranquilizer.decorator import _prepare_error_docs
from inspect import signature
import datetime
import typing
import PIL.Image
import numpy
def test_attributes():
def _func():
return 0
decorated = tranquilize()(_func)
assert hasattr(decorated, '_spec')
assert hasattr(decorated, '_method')
assert hasattr(decorated, '_methods')
assert decorated._methods is None
def test_publish_attributes():
def _func():
return 0
decorated = publish()(_func)
assert hasattr(decorated, '_spec')
assert hasattr(decorated, '_method')
assert hasattr(decorated, '_methods')
assert decorated._method is None
def test_method():
# separate functions are used for
# get and post. Calling the decorator
# a second time updates the original
# function
def _funcg():
return 0
get = tranquilize(method='GET')(_funcg)
assert get._method == 'get'
def _funcp():
return 0
post = tranquilize(method='PosT')(_funcp)
assert post._method == 'post'
def _funcput():
return 0
post = tranquilize(method='pUt')(_funcput)
assert post._method == 'put'
def test_methods():
# separate functions are used for
# get and post. Calling the decorator
# a second time updates the original
# function
def _funcg():
return 0
get = publish(methods=['GET'])(_funcg)
assert get._methods == ['get']
def _funcp():
return 0
post = publish(methods=['PosT'])(_funcp)
assert post._methods == ['post']
def _funcpg():
return 0
post_get = publish(methods=['GET', 'PosT'])(_funcpg)
assert post_get._methods == ['get', 'post']
def test_prepare():
def _func(arg: float):
'''docstring
:param arg: number
:raises ValueError: not a number'''
return arg
spec = _prepare(_func)
assert isinstance(spec, dict)
assert spec.keys() == set(['name','docstring','args',
'param_docs','error_docs'])
assert spec['name'] == '_func'
assert spec['docstring'] == 'docstring\n\n '
def test_prepare_no_docstring():
def _empty(arg: float):
pass
spec = _prepare(_empty)
assert spec['param_docs'] == {}
assert spec['error_docs'] == {}
assert spec['docstring'] == ''
def test_prepare_args():
def _func(
s: str,
i: int,
f: float,
b: bool,
d: datetime.date,
dt: datetime.datetime,
l: list,
L: typing.List,
Ls: typing.List[str],
Li: typing.List[int],
Lf: typing.List[float],
Lb: typing.List[bool],
Ld: typing.List[datetime.date],
Ldt: typing.List[datetime.datetime],
fnb: typing.BinaryIO,
fnt: typing.TextIO,
img: PIL.Image.Image,
arr: numpy.ndarray,
untyped,
untyped_default = None,
typed_default: str = 'python'
):
pass
sig = signature(_func)
assert _prepare_arg(sig.parameters['s']) == {'name':'s','type':'str','annotation':str}
assert _prepare_arg(sig.parameters['i']) == {'name':'i','type':'int','annotation':int}
assert _prepare_arg(sig.parameters['f']) == {'name':'f','type':'float','annotation':float}
assert _prepare_arg(sig.parameters['b']) == {'name':'b','type':'bool','annotation':bool}
assert _prepare_arg(sig.parameters['d']) == {'name':'d','type':'date','annotation':datetime.date}
assert _prepare_arg(sig.parameters['dt']) == {'name':'dt','type':'datetime','annotation':datetime.datetime}
assert _prepare_arg(sig.parameters['l']) == {'name':'l','type':'list','annotation':list}
assert _prepare_arg(sig.parameters['L']) == {'name':'L','type':'List','annotation':typing.List}
assert _prepare_arg(sig.parameters['Ls']) == {'name':'Ls','type':'List','annotation':typing.List[str]}
assert _prepare_arg(sig.parameters['Li']) == {'name':'Li','type':'List','annotation':typing.List[int]}
assert _prepare_arg(sig.parameters['Lf']) == {'name':'Lf','type':'List','annotation':typing.List[float]}
assert _prepare_arg(sig.parameters['Lb']) == {'name':'Lb','type':'List','annotation':typing.List[bool]}
assert _prepare_arg(sig.parameters['Ld']) == {'name':'Ld','type':'List','annotation':typing.List[datetime.date]}
assert _prepare_arg(sig.parameters['Ldt']) == {'name':'Ldt','type':'List','annotation':typing.List[datetime.datetime]}
assert _prepare_arg(sig.parameters['fnb']) == {'name':'fnb','type':'BinaryIO','annotation':typing.BinaryIO}
assert _prepare_arg(sig.parameters['fnt']) == {'name':'fnt','type':'TextIO','annotation':typing.TextIO}
assert _prepare_arg(sig.parameters['img']) == {'name':'img','type':'Image','annotation':PIL.Image.Image}
assert _prepare_arg(sig.parameters['arr']) == {'name':'arr','type':'ndarray','annotation':numpy.ndarray}
assert _prepare_arg(sig.parameters['untyped']) == {'name':'untyped'}
assert _prepare_arg(sig.parameters['untyped_default']) == {'name':'untyped_default', 'default': None}
assert _prepare_arg(sig.parameters['typed_default']) == {'name':'typed_default', 'type':'str', 'annotation':str, 'default': 'python'}
def test_prepare_arg_docs():
doc = '''docstring
docstring
:param arg1: number
:param arg2: string
:raises ValueError: not a number'''
param_docs, remainder = _prepare_arg_docs(doc)
assert param_docs == {'arg1': 'number', 'arg2': 'string'}
assert remainder == 'docstring\n\n docstring\n\n :raises ValueError: not a number'
def test_prepare_arg_doc_noargs():
doc = '''docstring
docstring
:raises ValueError: not a number'''
param_docs, remainder = _prepare_arg_docs(doc)
assert param_docs == {}
assert remainder == doc
def test_prepare_error_docs():
doc = '''docstring
docstring
:param arg1: number
:param arg2: string
:raises ValueError: not a number'''
error_docs, remainder = _prepare_error_docs(doc)
assert error_docs == {500:'ValueError:not a number'}
assert remainder == 'docstring\n\n docstring\n\n :param arg1: number\n :param arg2: string\n '
def test_prepare_error_noerror():
doc = '''docstring
docstring
:param arg1: number
:param arg2: string'''
error_docs, remainder = _prepare_error_docs(doc)
assert error_docs is None
assert remainder == doc | 31.933649 | 137 | 0.621549 | 787 | 6,738 | 5.135959 | 0.139771 | 0.066799 | 0.083127 | 0.098714 | 0.535626 | 0.427759 | 0.294904 | 0.259525 | 0.241217 | 0.2286 | 0 | 0.004 | 0.220837 | 6,738 | 211 | 138 | 31.933649 | 0.765905 | 0.042594 | 0 | 0.30137 | 0 | 0.006849 | 0.220925 | 0 | 0 | 0 | 0 | 0 | 0.342466 | 1 | 0.150685 | false | 0.013699 | 0.054795 | 0.054795 | 0.267123 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4faae048fa4d1796efc72ddd79750287cb0981a1 | 5,503 | py | Python | exercise/exercise9.py | FrancescoPenasa/Intro2MachineLearning | 2a0176acc52d786f0c7435a3a53b2eff06573069 | [
"MIT"
] | null | null | null | exercise/exercise9.py | FrancescoPenasa/Intro2MachineLearning | 2a0176acc52d786f0c7435a3a53b2eff06573069 | [
"MIT"
] | null | null | null | exercise/exercise9.py | FrancescoPenasa/Intro2MachineLearning | 2a0176acc52d786f0c7435a3a53b2eff06573069 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thu Mar 14 17:50:04 2019
@author: francesco
Given the iris dataset CSV file,
apply the k-nearest neighbors algorithm to all the elements of the dataset
using k ∈ {3, 5, 10, 20} and build the confusion matrix.
Using the confusion matrix, compute total precision, total accuracy and total recall.
TIPS:
total metrics correspond to the average of the relative metric computed on the elements of each class
when evaluating, do not include the point you are predicting in the neighbors (this would be cheating)
"""
import pandas as pd
import numpy as np
import math
import matplotlib.pyplot as plt
import sys
def euclidean_distance(s1,s2):
"""
Compute the Euclidean distance between two n-dimensional objects.
"""
tmpsum = 0
for index,value in enumerate(s1):
tmpsum += (s1[index]-s2[index])**2
return math.sqrt(tmpsum)
def find_distances(frame, newPoint):
"""
Find the distance between a point and all the points in a dataframe, and
sort the elements in ascending order.
"""
distances = []
# iterate over all rows in the dataframe
for index in range(frame.shape[0]):
# get all columns of a row (except the label)
point = frame.iloc[index,:-1]
# compute the distance, then save distance and label
# (use distance as first value)
distance = euclidean_distance(point, newPoint)
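        # a distance of zero means this is the query point itself; give it the largest
        # possible distance so it is never counted among the k nearest neighbours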
if distance != 0:
distances.append((distance, frame.iloc[index,-1]))
else:
distances.append((sys.maxsize, frame.iloc[index,-1]))
distances.sort()
return distances
def k_nn(frame, newPoint, colClass, k):
"""
Predict the class of a point by using the k-nearest neighbor algorithm on
the points of a dataframe.
"""
counts = []
# find all distances wrt the newPoint
dist = find_distances(frame, newPoint)
# find the nearest k points, extract their labels and save them in a list
labels = [label for distance,label in dist[:k]]
# for each class label, count how many occurrencies have been found
for label in frame[colClass].unique():
# save the number of occurrencies in a list of tuples (number, label)
counts.append((labels.count(label), label))
# sort the list in descending order, and use the first label of the tuples'
# list to make the prediction
counts.sort(reverse=True)
prediction = counts[0][1]
return prediction
def compute_accuracy(confusionMatrix):
"""
Compute accuracy based on a Confusion Matrix
with prediction on rows
and truth on columns.
"""
correct = 0
for elem in range(confusionMatrix.shape[0]):
correct += confusionMatrix[elem, elem]
tot = confusionMatrix.sum()
return correct / tot
def compute_precision(confusionMatrix):
"""
Compute precision based on a Confusion Matrix
with prediction on rows
and truth on columns.
precision = true positive / (true positive + false positive)
"""
precision = []
for i in range(confusionMatrix.shape[0]):
tot = 0
for j in range(confusionMatrix.shape[0]):
tot += confusionMatrix[i, j]
correct = confusionMatrix[i, i]
precision.append(correct/tot)
return precision
def compute_recall(confusionMatrix):
"""
Compute recall based on a Confusion Matrix
with prediction on rows
and truth on columns.
recall = true positive / (true positive + false negative)
"""
recall = []
for elem in range(confusionMatrix.shape[0]):
tot = 0
for j in range(confusionMatrix.shape[0]):
tot += confusionMatrix[j, elem]
correct = confusionMatrix[elem, elem]
recall.append(correct/tot)
return recall
def init_confusionMatrix(df, spec):
"""
Init confusion matrix with rows and columns based on the df[spec].unique()
"""
rows = 0
names = []
for name in df[spec].unique():
rows += 1
names.append(name)
confusionMatrix = [[0] * rows for x in range(rows)]
confusionMatrix = np.matrix(confusionMatrix)
return confusionMatrix,names
def k_nn_all(df, k, spec):
"""
k_nn on all the rows of the frame excluding the one tested
"""
confusionMatrix,names = init_confusionMatrix(df, spec)
for tested in range(df.shape[0]):
prediction = k_nn(df, df.iloc[tested], spec, k)
if prediction == df.iloc[tested,-1]:
i = names.index(prediction)
confusionMatrix[i,i] += 1
elif prediction != df.iloc[tested,-1]:
i = names.index(prediction)
j = names.index(df.iloc[tested,-1])
confusionMatrix[i,j] += 1
print(confusionMatrix)
print("k:" , k)
print("accuracy: ", compute_accuracy(confusionMatrix))
print("precision: ", compute_precision(confusionMatrix))
print("recall: ", compute_recall(confusionMatrix))
print("")
# --------------------------------------------------------------------------- #
df = pd.read_csv("iris.data", names = ["SepalLength","SepalWidth","PetalLength","PetalWidth","Class"])
#
#k_nn_all(df, 3, "Class")
#k_nn_all(df, 5, "Class")
#k_nn_all(df, 10, "Class")
k_nn_all(df, 20, "Class") | 31.445714 | 102 | 0.620571 | 697 | 5,503 | 4.863702 | 0.271162 | 0.016519 | 0.032448 | 0.039823 | 0.19174 | 0.159292 | 0.139823 | 0.127434 | 0.127434 | 0.101475 | 0 | 0.014176 | 0.269308 | 5,503 | 175 | 103 | 31.445714 | 0.82865 | 0.371252 | 0 | 0.097561 | 0 | 0 | 0.028083 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.097561 | false | 0 | 0.060976 | 0 | 0.243902 | 0.073171 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4fabc369428b02566684980c209f3e0046e1c075 | 4,779 | py | Python | env_data/env_data.py | michaelandrewblum/be534-final-project | 53f1bea7a7a73e3a1b9d36e7766707be4b7c46a2 | [
"MIT"
] | null | null | null | env_data/env_data.py | michaelandrewblum/be534-final-project | 53f1bea7a7a73e3a1b9d36e7766707be4b7c46a2 | [
"MIT"
] | null | null | null | env_data/env_data.py | michaelandrewblum/be534-final-project | 53f1bea7a7a73e3a1b9d36e7766707be4b7c46a2 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
"""
Author : michaelblum <michaelblum@localhost>
Date : 2021-11-17
Purpose: Environmental Data Dashboard
"""
import argparse
import os
import csv
from collections import Counter
from pathlib import Path
# --------------------------------------------------
def get_args():
"""Get command-line arguments"""
parser = argparse.ArgumentParser(
description='Environmental Data Dashboard',
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('files',
metavar='FILE',
nargs='+',
type=argparse.FileType('rt'),
help='CSV input file(s)')
parser.add_argument('-o',
'--outfile',
                        metavar='FILE',
type=argparse.FileType('at'),
default='output/data.csv',
                        help='CSV output file')
parser.add_argument('-d',
'--dashboard',
                        help='Print a dashboard of sensor averages to stdout',
action='store_true')
parser.add_argument('-n',
'--nottofile',
                        help='Do not copy input data to the output file',
action='store_true')
parser.add_argument('-r',
'--remain',
                        help='Leave input files in place (do not move them to old_data)',
action='store_true')
args = parser.parse_args()
for file in args.files:
if os.path.splitext(file.name)[1] != '.csv':
            parser.error('All input files must be CSV files.')
return args
# --------------------------------------------------
def main():
"""Make a jazz noise here"""
args = get_args()
outfile = args.outfile.name
# Sort infiles, which will put them in chronological order
infiles_sorted = sorted(args.files, key=lambda fh: fh.name)
if not args.nottofile:
# Copy contents of input files to output file
for infile in infiles_sorted:
in_fh = open(infile.name, 'r')
out_fh = open(outfile, 'a')
for row in in_fh:
out_fh.write(row)
# Print to stdout what files input and where output
for infile in infiles_sorted:
print(f'Data input from {infile.name}.')
print(f'Data copied to {outfile}.')
# Get averages for data input in select columns
if args.dashboard:
headers = [
"TIMESTAMP", "RECORD", "batt_volt_Min", "PanelT", "RH_East_Avg",
"RH_West_Avg", "RH_Center_Avg", "AirT_East_Avg", "AirT_West_Avg",
"AirT_Center_Avg", "PAR_E_Avg", "PAR_W_Avg", "PAR_E_Total",
"PAR_W_Total", "Incoming_SW_Avg", "Outgoing_SW_Avg",
"Incoming_LW_Avg", "Outgoing_LW_Avg", "TargmV_E_Avg",
"SBTempC_E_Avg", "TargTempC_E_Avg", "TargmV_W_Avg",
"SBTempC_W_Avg", "TargTempC_W_Avg"
]
headers_to_print = [
"RH_East_Avg", "RH_West_Avg", "RH_Center_Avg", "AirT_East_Avg",
"AirT_West_Avg", "AirT_Center_Avg", "PAR_E_Avg", "PAR_W_Avg",
"TargTempC_E_Avg", "TargTempC_W_Avg"
]
for infile in infiles_sorted:
print('')
print('{:20}{:7}'.format('Sensor', 'Average'))
print('-' * 27)
avg_list = get_averages(infile)
avg_dict = {}
for i, header in enumerate(headers):
avg_dict[header] = avg_list[i]
for header in headers_to_print:
print(f'{header:<20}{avg_dict[header]:>7}')
# Move new input data file into old_data folder if not remain flag.
if not args.remain:
for infile in infiles_sorted:
Path(infile.name).rename('old_data/' +
os.path.basename(infile.name))
# --------------------------------------------------
def get_averages(fh):
""" format data dashboard """
col_totals = Counter()
with open(fh.name, 'rt') as f:
reader = csv.reader(f)
row_count = 0.0
for row in reader:
for col_id, col_value in enumerate(row):
try:
n = float(col_value)
col_totals[col_id] += n
except ValueError:
col_totals[col_id] = 'N/A'
row_count += 1.0
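        # one line (presumably the header row) is excluded from the average denominator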
row_count -= 1.0
col_indexes = col_totals.keys()
averages = []
for i in col_indexes:
try:
averages.append(round(col_totals[i] / row_count, 2))
except TypeError:
averages.append(col_totals[i])
return averages
# --------------------------------------------------
if __name__ == '__main__':
main()
| 30.634615 | 77 | 0.514961 | 533 | 4,779 | 4.390244 | 0.326454 | 0.017949 | 0.036325 | 0.030769 | 0.17906 | 0.145727 | 0.12094 | 0.107692 | 0.107692 | 0.107692 | 0 | 0.007862 | 0.334589 | 4,779 | 155 | 78 | 30.832258 | 0.727987 | 0.138941 | 0 | 0.116505 | 0 | 0 | 0.198187 | 0.008084 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029126 | false | 0 | 0.048544 | 0 | 0.097087 | 0.07767 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4fb0212464a03e0f7ea4553bfadd19dc3fee9475 | 2,017 | py | Python | utils.py | tivaliy/cloudwatch_importer | 46704a4001642a3d00ab1281bf11a4a8dbb900d4 | [
"Apache-2.0"
] | 2 | 2017-04-26T19:22:14.000Z | 2018-07-09T09:15:37.000Z | utils.py | tivaliy/cloudwatch_importer | 46704a4001642a3d00ab1281bf11a4a8dbb900d4 | [
"Apache-2.0"
] | null | null | null | utils.py | tivaliy/cloudwatch_importer | 46704a4001642a3d00ab1281bf11a4a8dbb900d4 | [
"Apache-2.0"
] | null | null | null | #
# Copyright 2017 Vitalii Kulanov
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import yaml
import json
SUPPORTED_FILE_FORMATS = ('json', 'yaml')
def safe_load(data_format, stream):
loaders = {'json': json.load,
'yaml': yaml.safe_load}
if data_format not in loaders:
raise ValueError('Unsupported data format. '
'Only {} are allowed'.format(SUPPORTED_FILE_FORMATS))
loader = loaders[data_format]
return loader(stream)
def safe_dump(data_format, stream, data):
yaml_dumper = lambda data, stream: yaml.safe_dump(data,
stream,
default_flow_style=False)
json_dumper = lambda data, stream: json.dump(data, stream, indent=4)
dumpers = {'json': json_dumper,
'yaml': yaml_dumper}
if data_format not in dumpers:
raise ValueError('Unsupported data format. '
'Only {} are allowed.'.format(SUPPORTED_FILE_FORMATS))
dumper = dumpers[data_format]
dumper(data, stream)
def read_from_file(file_path):
data_format = os.path.splitext(file_path)[1].lstrip('.')
with open(file_path, 'r') as stream:
return safe_load(data_format, stream)
def write_to_file(file_path, data):
data_format = os.path.splitext(file_path)[1].lstrip('.')
with open(file_path, 'w') as stream:
safe_dump(data_format, stream, data)
| 32.532258 | 79 | 0.644522 | 261 | 2,017 | 4.835249 | 0.402299 | 0.095087 | 0.050713 | 0.025357 | 0.316957 | 0.251981 | 0.207607 | 0.207607 | 0.207607 | 0.207607 | 0 | 0.007407 | 0.263758 | 2,017 | 61 | 80 | 33.065574 | 0.842424 | 0.287556 | 0 | 0.125 | 0 | 0 | 0.082452 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.09375 | 0 | 0.28125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96c4b664bc3413b1f9e355f0a102ef1ef7fec4e3 | 3,563 | py | Python | src/frr/tests/topotests/bgp_prefix_sid2/test_bgp_prefix_sid2.py | zhouhaifeng/vpe | 9c644ffd561988e5740021ed26e0f7739844353d | [
"Apache-2.0"
] | null | null | null | src/frr/tests/topotests/bgp_prefix_sid2/test_bgp_prefix_sid2.py | zhouhaifeng/vpe | 9c644ffd561988e5740021ed26e0f7739844353d | [
"Apache-2.0"
] | null | null | null | src/frr/tests/topotests/bgp_prefix_sid2/test_bgp_prefix_sid2.py | zhouhaifeng/vpe | 9c644ffd561988e5740021ed26e0f7739844353d | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
#
# test_bgp_prefix_sid2.py
# Part of NetDEF Topology Tests
#
# Copyright (c) 2020 by LINE Corporation
# Copyright (c) 2020 by Hiroki Shirokura <slank.dev@gmail.com>
#
# Permission to use, copy, modify, and/or distribute this software
# for any purpose with or without fee is hereby granted, provided
# that the above copyright notice and this permission notice appear
# in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND NETDEF DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NETDEF BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
#
"""
test_bgp_prefix_sid2.py: Test BGP topology with EBGP on prefix-sid
"""
import json
import os
import sys
import functools
import pytest
CWD = os.path.dirname(os.path.realpath(__file__))
sys.path.append(os.path.join(CWD, "../"))
# pylint: disable=C0413
from lib import topotest
from lib.topogen import Topogen, TopoRouter, get_topogen
from lib.topolog import logger
pytestmark = [pytest.mark.bgpd]
def build_topo(tgen):
router = tgen.add_router("r1")
switch = tgen.add_switch("s1")
switch.add_link(router)
switch = tgen.gears["s1"]
peer1 = tgen.add_exabgp_peer("peer1", ip="10.0.0.101", defaultRoute="via 10.0.0.1")
switch.add_link(peer1)
def setup_module(module):
tgen = Topogen(build_topo, module.__name__)
tgen.start_topology()
router = tgen.gears["r1"]
router.load_config(
TopoRouter.RD_ZEBRA, os.path.join(CWD, "{}/zebra.conf".format("r1"))
)
router.load_config(
TopoRouter.RD_BGP, os.path.join(CWD, "{}/bgpd.conf".format("r1"))
)
router.start()
logger.info("starting exaBGP")
peer_list = tgen.exabgp_peers()
for pname, peer in peer_list.items():
logger.info("starting exaBGP on {}".format(pname))
peer_dir = os.path.join(CWD, pname)
env_file = os.path.join(CWD, pname, "exabgp.env")
logger.info("Running ExaBGP peer on {}".format(pname))
peer.start(peer_dir, env_file)
logger.info(pname)
def teardown_module(module):
tgen = get_topogen()
tgen.stop_topology()
def open_json_file(filename):
try:
with open(filename, "r") as f:
return json.load(f)
except IOError:
assert False, "Could not read file {}".format(filename)
def test_r1_rib():
def _check(name, cmd, expected_file):
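        # single polling attempt: fetch vtysh JSON output and diff it against the expected file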
logger.info("polling")
tgen = get_topogen()
router = tgen.gears[name]
output = json.loads(router.vtysh_cmd(cmd))
expected = open_json_file("{}/{}".format(CWD, expected_file))
return topotest.json_cmp(output, expected)
def check(name, cmd, expected_file):
logger.info('[+] check {} "{}" {}'.format(name, cmd, expected_file))
tgen = get_topogen()
func = functools.partial(_check, name, cmd, expected_file)
success, result = topotest.run_and_expect(func, None, count=10, wait=0.5)
assert result is None, "Failed"
check("r1", "show bgp ipv6 vpn 2001:1::/64 json", "r1/vpnv6_rib_entry1.json")
check("r1", "show bgp ipv6 vpn 2001:2::/64 json", "r1/vpnv6_rib_entry2.json")
if __name__ == "__main__":
args = ["-s"] + sys.argv[1:]
ret = pytest.main(args)
sys.exit(ret)
| 30.452991 | 87 | 0.681448 | 513 | 3,563 | 4.596491 | 0.405458 | 0.017812 | 0.021204 | 0.027566 | 0.133164 | 0.078032 | 0.052587 | 0.031383 | 0 | 0 | 0 | 0.022632 | 0.193938 | 3,563 | 116 | 88 | 30.715517 | 0.798398 | 0.26691 | 0 | 0.074627 | 0 | 0 | 0.127421 | 0.01859 | 0 | 0 | 0 | 0 | 0.029851 | 1 | 0.104478 | false | 0 | 0.119403 | 0 | 0.253731 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96c57c612456a9bf43b8ed89c8bf5efd335e5221 | 3,021 | py | Python | cloudroast/bare_metal/api/nodes/test_create_node.py | lmaycotte/cloudroast | c1835aa45e0e86c755d4b24b33e12ba30eee1995 | [
"Apache-2.0"
] | null | null | null | cloudroast/bare_metal/api/nodes/test_create_node.py | lmaycotte/cloudroast | c1835aa45e0e86c755d4b24b33e12ba30eee1995 | [
"Apache-2.0"
] | null | null | null | cloudroast/bare_metal/api/nodes/test_create_node.py | lmaycotte/cloudroast | c1835aa45e0e86c755d4b24b33e12ba30eee1995 | [
"Apache-2.0"
] | null | null | null | """
Copyright 2014 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from cloudroast.bare_metal.fixtures import BareMetalFixture
class CreateNodeTest(BareMetalFixture):
@classmethod
def setUpClass(cls):
super(CreateNodeTest, cls).setUpClass()
cls._create_chassis()
cls._create_node()
def test_create_node_response_code(self):
"""Verify that the response code for the create node
request is correct.
"""
self.assertEqual(self.create_node_resp.status_code, 201)
def test_created_node_properties(self):
"""Verify that the properties provided to the create node request
are reflected in the created node.
"""
self.assertEqual(self.node.driver, self.node_driver)
self.assertEqual(self.node.chassis_uuid, self.chassis.uuid)
self.assertEqual(self.node.properties, self.node_properties)
self.assertEqual(self.node.driver_info, self.driver_info)
self.assertEqual(self.node.extra, self.node_extra)
def test_new_node_in_list_of_nodes(self):
"""Verify that the newly created node exists in the
list of nodes.
"""
existing_nodes = self.nodes_client.list_nodes().entity
node_uuids = [node.uuid for node in existing_nodes]
self.assertIn(self.node.uuid, node_uuids)
def test_new_node_in_detailed_list_of_nodes(self):
"""Verify that the newly created node exists in the
detailed list of nodes.
"""
resp = self.nodes_client.list_nodes_with_details()
existing_nodes = resp.entity
node_uuids = [node.uuid for node in existing_nodes]
self.assertIn(self.node.uuid, node_uuids)
def test_get_node(self):
"""Verify the details returned by a get node request match
the expected values.
"""
resp = self.nodes_client.get_node(self.node.uuid)
self.assertEqual(resp.status_code, 200)
self.assertEqual(self.node.driver, self.node_driver)
self.assertEqual(self.node.properties, self.node_properties)
self.assertEqual(self.node.driver_info, self.driver_info)
self.assertEqual(self.node.extra, self.node_extra)
def test_list_nodes_by_chassis(self):
"""Verify that all nodes assigned to a chassis are returned."""
resp = self.chassis_client.list_nodes_for_chassis(self.chassis.uuid)
nodes = resp.entity
node_uuids = [node.uuid for node in nodes]
self.assertIn(self.node.uuid, node_uuids)
| 38.240506 | 76 | 0.706389 | 413 | 3,021 | 5.002421 | 0.290557 | 0.073572 | 0.091965 | 0.100194 | 0.404163 | 0.372217 | 0.372217 | 0.372217 | 0.353824 | 0.353824 | 0 | 0.005885 | 0.212512 | 3,021 | 78 | 77 | 38.730769 | 0.862547 | 0.327375 | 0 | 0.361111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.388889 | 1 | 0.194444 | false | 0 | 0.027778 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96c7b2860052551b46f0ebebeb105580c3b47caf | 6,004 | py | Python | pacman/actors/actor.py | JCatrielLopez/pacman | 5989291d4a55f2fa01b3c4c7f0e27857983d0d0d | [
"MIT"
] | 2 | 2020-07-23T00:28:56.000Z | 2020-07-23T16:59:26.000Z | pacman/actors/actor.py | JCatrielLopez/pacman | 5989291d4a55f2fa01b3c4c7f0e27857983d0d0d | [
"MIT"
] | 1 | 2019-09-06T14:09:23.000Z | 2019-09-16T13:04:49.000Z | pacman/actors/actor.py | JCatrielLopez/pacman | 5989291d4a55f2fa01b3c4c7f0e27857983d0d0d | [
"MIT"
] | null | null | null | import pygame as pg
from pacman import constants
from pacman import spritesheet as sp
class Actor(pg.sprite.Sprite):
rect = None
image = None
def __init__(self, x, y, width, height, color, *groups):
super().__init__(*groups)
self.image = pg.Surface([width, height])
self.image.fill(color)
self.rect = self.image.get_rect()
self.rect.x = x
self.rect.y = y
self.timer = 0.0
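    # no-op collision hook; subclasses are expected to override this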
def in_collision(self, hitbox):
pass
class MovingActor(Actor):
direction = None
sprites_up = None
sprites_down = None
sprites_left = None
sprites_right = None
current_sprite = None
timer = None
increment = 1 / constants.FPS
def __init__(self, x, y, width, height, color, res_path, current_map, *groups):
super().__init__(x, y, width, height, color, *groups)
self.set_spritesheet(res_path)
self.current_sprite = 0
self.direction = constants.LEFT
self.next_dir = constants.LEFT
self.original_x = x
self.original_y = y
self.current_map = current_map
def restart_position(self):
self.rect.x = self.original_x
self.rect.y = self.original_y
self.direction = constants.LEFT
def set_spritesheet(self, path):
coord = []
sprites_dim = constants.TILE_SIZE * 2
for i in range(8):
coord.append((i * sprites_dim, 0, sprites_dim, sprites_dim))
sp_left = sp.Spritesheet(f"{path}/spritesheet_left.png")
sp_right = sp.Spritesheet(f"{path}/spritesheet_right.png")
sp_up = sp.Spritesheet(f"{path}/spritesheet_up.png")
sp_down = sp.Spritesheet(f"{path}/spritesheet_down.png")
self.sprites_left = [sprite for sprite in sp_left.images_at(coord, -1)]
self.sprites_right = [sprite for sprite in sp_right.images_at(coord, -1)]
self.sprites_up = [sprite for sprite in sp_up.images_at(coord, -1)]
self.sprites_down = [sprite for sprite in sp_down.images_at(coord, -1)]
def get_sprite(self):
out_index = self.current_sprite
self.current_sprite += 1
self.current_sprite = self.current_sprite % len(self.sprites_left)
out_sprite = None
if self.direction == constants.UP:
out_sprite = self.sprites_up[out_index]
if self.direction == constants.DOWN:
out_sprite = self.sprites_down[out_index]
if self.direction == constants.LEFT:
out_sprite = self.sprites_left[out_index]
if self.direction == constants.RIGHT:
out_sprite = self.sprites_right[out_index]
return out_sprite
def get_pos(self):
return (
self.rect.centerx - constants.TILE_SIZE,
self.rect.centery - constants.TILE_SIZE,
)
def move_up(self):
if self.direction != constants.UP:
if self.direction == constants.DOWN:
self.direction = constants.UP
self.next_dir = None
else:
self.next_dir = constants.UP
def move_down(self):
if self.direction != constants.DOWN:
if self.direction == constants.UP:
self.direction = constants.DOWN
self.next_dir = None
else:
self.next_dir = constants.DOWN
def move_left(self):
if self.direction != constants.LEFT:
if self.direction == constants.RIGHT:
self.direction = constants.LEFT
self.next_dir = None
else:
self.next_dir = constants.LEFT
def move_right(self):
if self.direction != constants.RIGHT:
if self.direction == constants.LEFT:
self.direction = constants.RIGHT
self.next_dir = None
else:
self.next_dir = constants.RIGHT
def move(self):
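        # a queued direction change is only applied once the actor is aligned to the tile grid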
if (
self.rect.x % constants.TILE_SIZE == 0
and self.rect.y % constants.TILE_SIZE == 0
and self.next_dir is not None
):
j = int(self.rect.x / constants.TILE_SIZE)
i = int(self.rect.y / constants.TILE_SIZE)
j += self.next_dir[0]
i += self.next_dir[1]
if (
self.current_map.is_valid((j, i))
or self.current_map.get_value((j, i)) == 4
):
self.direction = self.next_dir
self.next_dir = None
self.rect.x += self.direction[0]
if 0 < self.rect.x < constants.COLS * constants.TILE_SIZE:
self.rect.y += self.direction[1]
self.adjust_movement()
self.add_timer()
self.check_limits()
def check_limits(self):
if self.rect.x == -(constants.TILE_SIZE // 2):
self.rect.x = constants.TILE_SIZE * constants.COLS
if self.rect.x == (constants.COLS + 1) * constants.TILE_SIZE:
self.rect.x = 0
def adjust_movement(self):
if self.direction == constants.LEFT or self.direction == constants.RIGHT:
block_hit_list = pg.sprite.spritecollide(
self, self.current_map.wall_group, False
)
for block in block_hit_list:
if self.direction == constants.RIGHT:
self.rect.right = block.rect.left
else:
self.rect.left = block.rect.right
else:
block_hit_list = pg.sprite.spritecollide(
self, self.current_map.wall_group, False
)
for block in block_hit_list:
if self.direction == constants.DOWN:
self.rect.bottom = block.rect.top
else:
self.rect.top = block.rect.bottom
def get_direction(self):
return self.direction
def add_timer(self):
self.timer += self.increment
def get_timer(self):
return self.timer
| 31.767196 | 83 | 0.576616 | 742 | 6,004 | 4.483827 | 0.137466 | 0.101593 | 0.145476 | 0.108206 | 0.50526 | 0.327622 | 0.169222 | 0.150286 | 0.113616 | 0.066727 | 0 | 0.005682 | 0.325783 | 6,004 | 188 | 84 | 31.93617 | 0.816206 | 0 | 0 | 0.246667 | 0 | 0 | 0.017821 | 0.017821 | 0 | 0 | 0 | 0 | 0 | 1 | 0.113333 | false | 0.006667 | 0.02 | 0.02 | 0.24 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96c94ac855c8bc2dc948c51a7ccdae11acb22223 | 4,595 | py | Python | src/subcommands/payments.py | dguo/churn | 4e599e47b87ace2bec816602f25b0e18e2338e2d | [
"MIT"
] | 1 | 2020-03-26T22:28:59.000Z | 2020-03-26T22:28:59.000Z | src/subcommands/payments.py | dguo/churn | 4e599e47b87ace2bec816602f25b0e18e2338e2d | [
"MIT"
] | null | null | null | src/subcommands/payments.py | dguo/churn | 4e599e47b87ace2bec816602f25b0e18e2338e2d | [
"MIT"
] | null | null | null | import click
from pick import pick
from tabulate import tabulate
from ..util import (pick_with_cancel, prompt_for_date, prompt_for_money,
format_money)
from .cards import select_card
def _get_payments(connection):
command = '''SELECT date(payment_date) as payment_date,
card_networks.name,
card_issuers.name,
cards.name,
cast(amount AS FLOAT) / 100 AS amount
FROM payments
JOIN cards
ON cards.id = payments.card_id
JOIN card_networks
ON card_networks.id = cards.card_network_id
JOIN card_issuers
ON card_issuers.id = cards.card_issuer_id
ORDER BY payment_date
'''
return connection.execute(command).fetchall()
def _prompt_for_payment_date():
return prompt_for_date('Payment date (YYYY-MM-DD)')
def select_payment(connection, payment_id):
command = '''SELECT payments.id,
date(payment_date) as payment_date,
card_networks.name as network,
card_issuers.name as issuer,
cards.name as card,
cast(amount AS FLOAT) / 100 AS amount
FROM payments
JOIN cards
ON cards.id = payments.card_id
JOIN card_networks
ON card_networks.id = cards.card_network_id
JOIN card_issuers
ON card_issuers.id = cards.card_issuer_id
'''
if payment_id:
command += ' WHERE payments.id = ?'
payment = connection.execute(command, (payment_id,)).fetchone()
return payment
command += ' ORDER BY payment_date'
payments = connection.execute(command).fetchall()
if not payments:
return None
options = [payment['payment_date'] + ' | ' + payment['issuer'] + ' ' +
payment['card'] + ' | ' + format_money(payment['amount'])
for payment in payments]
selection = pick(options + ['(cancel)'], 'Select the payment:')
index = selection[1]
return None if index == len(payments) else payments[index]
def list_payments(connection):
payments = _get_payments(connection)
headers = ['Date', 'Network', 'Issuer', 'Name', 'Amount']
click.echo_via_pager(tabulate(payments, headers, 'fancy_grid',
floatfmt=',.2f'))
def add_payment(connection):
card = select_card(connection, None)
if not card:
return
card_id = card['id']
payment_date = _prompt_for_payment_date()
amount = prompt_for_money('Amount')
command = '''INSERT INTO payments (payment_date, amount, card_id)
VALUES (?, ?, ?)'''
with connection:
connection.execute(command, (payment_date, amount, card_id))
def remove_payment(connection):
payment = select_payment(connection, None)
if not payment:
click.secho('There is no payment to remove.', fg='red')
return
command = 'DELETE FROM payments WHERE id = ?'
with connection:
connection.execute(command, (payment['id'],))
click.secho('Removed the payment.', fg='green')
def update_payment(connection, payment_id):
payment = select_payment(connection, payment_id)
if not payment:
click.secho('There is no payment to update.', fg='red')
return
attributes = [
('payment_date', 'Payment date: ' + payment['payment_date']),
('card_id', 'Card: ' + payment['network'] + ' ' + payment['card']),
('amount', 'Amount: ' + format_money(payment['amount']))
]
current_attributes = [attribute[1] for attribute in attributes]
selected_attribute = pick_with_cancel('Select an attribute to update.',
current_attributes)
if selected_attribute:
index = selected_attribute[1]
name = attributes[index][0]
if name == 'payment_date':
value = _prompt_for_payment_date()
elif name == 'amount':
value = prompt_for_money('Amount')
elif name == 'card_id':
card = select_card(connection, None)
if not card:
return
value = card['id']
command = 'UPDATE payments SET ' + name + ' = ? WHERE id = ?'
with connection:
connection.execute(command, (value, payment['id']))
update_payment(connection, payment['id'])
| 35.076336 | 75 | 0.579325 | 493 | 4,595 | 5.20284 | 0.182556 | 0.077193 | 0.05614 | 0.040546 | 0.364912 | 0.28616 | 0.265887 | 0.230799 | 0.230799 | 0.162963 | 0 | 0.003516 | 0.319042 | 4,595 | 130 | 76 | 35.346154 | 0.816235 | 0 | 0 | 0.28972 | 0 | 0 | 0.385201 | 0.00914 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065421 | false | 0 | 0.046729 | 0.009346 | 0.196262 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96ccefc9c2020e43c6626f2e0bf7918eaf2c2aea | 5,530 | py | Python | moip_sdk/tests/test_payment.py | mastertech/moip-sdk-python | 15c28b8643dfb63242c0cd55f1a2cbee04d9fbaa | [
"MIT"
] | 2 | 2020-06-26T16:27:43.000Z | 2021-06-01T20:21:04.000Z | moip_sdk/tests/test_payment.py | mastertech/moip-sdk-python | 15c28b8643dfb63242c0cd55f1a2cbee04d9fbaa | [
"MIT"
] | null | null | null | moip_sdk/tests/test_payment.py | mastertech/moip-sdk-python | 15c28b8643dfb63242c0cd55f1a2cbee04d9fbaa | [
"MIT"
] | 1 | 2020-06-26T16:28:57.000Z | 2020-06-26T16:28:57.000Z | import random
import unittest
from moip_sdk.customer.schemas import CustomerSchema
from moip_sdk.customer.service import register_customer
from moip_sdk.order.schemas import OrderSchema
from moip_sdk.order.service import register_order
from moip_sdk.payment.service import register_payment
from moip_sdk.payment.schemas import PaymentSchema
from moip_sdk.payment.enums import MoipPaymentMethod
payment_payload = {
'installmentCount': 1,
'statementDescriptor': 'Teste'
}
class PaymentTestCase(unittest.TestCase):
def setUp(self):
customer_payload = {
'ownId': random.randint(900000, 90000000),
'fullname': 'João da Silva Teste',
'email': 'joao.teste@mastertech.com.br',
'birthDate': '2000-02-02',
'taxDocument': {
'type': 'CPF',
'number': '55237990096'
},
'phone': {
'countryCode': '55',
'areaCode': '11',
'number': '40028922'
},
'shippingAddress': {
'city': 'Santa Catarina',
'zipCode': '88132241',
'street': 'Rua Bréscia',
'streetNumber': '12',
'state': 'SC',
'country': 'Brazil',
'district': 'Passa Vinte'
}
}
self.customer = CustomerSchema().load(customer_payload)
registered_customer = register_customer(self.customer)
order_payload = {
'ownId': random.randint(900000, 90000000),
'amount': {
'currency': 'BRL'
},
'items': [{
'product': 'Curso Mastertech',
'quantity': 1,
'detail': 'Este é um teste para venda de um produto',
'price': 40000,
}],
'customer': {
'id': registered_customer['id']
}
}
order = OrderSchema().load(order_payload)
self.registered_order = register_order(order)
def test_register_credit_card_payment(self):
payment_payload['fundingInstrument'] = {
'method': MoipPaymentMethod.CREDIT_CARD.name,
'creditCard': {
'hash': 'MD6ZDloloRbBcYCnQKjluRzblLmUrGqfd0U0FuzTcmaWkhpHMX1Im9lh'
'MwzhA3YDrYWui9GY3hVef37c6rSEWsb6ztZZqRbUz5dElpm3AKcKhVHpm'
'LayKTcAWNLVynw+Fy3nfpTboN756e6nM8DmfaPBkUfQ2OXtgZKUWS6kGCPG'
'Q4pIHRSA/dxSkxVmzUmTtbUsToT9fAZJbXIh88/Q6tznlV3Ulsb/WE8jkZm'
'872zebB2fkfyQS+6IExDOuRa3WndiFGJHdTHS/JdpHe+lRXondIFjBrJ9lW'
'8+EK4yZjLTvWxUMNbgGRui1dQ6Y5KDJHc5bVPVHFuVaH50lmcbnw==',
'store': False,
'holder': {
'fullname': 'João da Silva',
'birthDate': '2000-03-02',
'taxDocument': {
'type': 'CPF',
'number': '55237990096'
},
'phone': {
'countryCode': '55',
'areaCode': '11',
'number': '40028922'
}
}
}
}
payment = PaymentSchema().load(payment_payload)
response = register_payment(payment, self.registered_order['id'])
self.assertIsNotNone(response['id'])
self.assertEqual(len(response), 14)
def test_register_credit_card_payment_with_invalid_data(self):
payment_payload['fundingInstrument'] = {
'method': MoipPaymentMethod.CREDIT_CARD.name,
'creditCard': {
'hash': 'MD6ZDloloRbBcYCnQKjluRzblLmUrGqfd0U0FuzTcmaWkhpHMX1Im9lh'
'MwzhA3YDrYWui9GY3hVef37c6rSEWsb6ztZZqRbUz5dElpm3AKcKhVHpm'
'LayKTcAWNLVynw+Fy3nfpTboN756e6nM8DmfaPBkUfQ2OXtgZKUWS6kGCPG'
'Q4pIHRSA/dxSkxVmzUmTtbUsToT9fAZJbXIh88/Q6tznlV3Ulsb/WE8jkZm'
'872zebB2fkfyQS+6IExDOuRa3WndiFGJHdTHS/JdpHe+lRXondIFjBrJ9lW'
'8+EK4yZjLTvWxUMNbgGRui1dQ6Y5KDJHc5bVPVHFuVaH50lmcbnw=='
}
}
payment = PaymentSchema().load(payment_payload)
response = register_payment(payment, self.registered_order['id'])
self.assertIsNotNone(response['ERROR'])
self.assertEqual(response['ERROR'], 'Ops... We were not waiting for it')
def test_register_boleto_payment(self):
payment_payload['fundingInstrument'] = {
'method': MoipPaymentMethod.BOLETO.name,
'boleto': {
'expirationDate': '2030-02-02'
}
}
payment = PaymentSchema().load(payment_payload)
response = register_payment(payment, self.registered_order['id'])
self.assertIsNotNone(response['id'])
self.assertEqual(len(response), 13)
def test_register_boleto_payment_with_invalid_expiration_date(self):
payment_payload['fundingInstrument'] = {
'method': MoipPaymentMethod.BOLETO.name,
'boleto': {
'expirationDate': '2012-02-02'
}
}
payment = PaymentSchema().load(payment_payload)
response = register_payment(payment, self.registered_order['id'])
self.assertIsNotNone(response['errors'])
self.assertEqual(response['errors'][0]['code'], 'PAY-644')
if __name__ == '__main__':
unittest.main()
| 36.866667 | 85 | 0.564557 | 407 | 5,530 | 7.506143 | 0.366093 | 0.041244 | 0.025205 | 0.045827 | 0.621931 | 0.605892 | 0.561702 | 0.557119 | 0.557119 | 0.557119 | 0 | 0.059061 | 0.329476 | 5,530 | 149 | 86 | 37.114094 | 0.764833 | 0 | 0 | 0.375 | 0 | 0 | 0.275226 | 0.129476 | 0 | 0 | 0 | 0 | 0.0625 | 1 | 0.039063 | false | 0.007813 | 0.070313 | 0 | 0.117188 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96cd9d5ad388df57e6ac05e8d62eec0d043c5b5e | 5,472 | py | Python | evaluation/webnlg-automatic-evaluation/significance_block_creation.py | sedrickkeh/dart | a2c9ced6928a52d8a5de3d66880f69ecb05f495a | [
"MIT"
] | 107 | 2020-07-07T05:48:03.000Z | 2022-03-31T03:23:20.000Z | evaluation/webnlg-automatic-evaluation/significance_block_creation.py | sedrickkeh/dart | a2c9ced6928a52d8a5de3d66880f69ecb05f495a | [
"MIT"
] | 5 | 2020-07-21T12:38:58.000Z | 2021-09-30T20:24:36.000Z | evaluation/webnlg-automatic-evaluation/significance_block_creation.py | sedrickkeh/dart | a2c9ced6928a52d8a5de3d66880f69ecb05f495a | [
"MIT"
] | 14 | 2020-07-07T05:48:09.000Z | 2022-02-10T08:21:14.000Z | import random
# shuffle always the same
random.seed(5)
# create shuffle indices for all, old, new, and sample
index_shuf_all = list(range(1, 1863))
index_shuf_old = list(range(1, 972))
index_shuf_new = list(range(1, 892))
random.shuffle(index_shuf_all)
random.shuffle(index_shuf_old)
random.shuffle(index_shuf_new)
categories = ['Astronaut',
'Airport',
'Monument',
'University',
'Food',
'SportsTeam',
'City',
'Building',
'WrittenWork',
'ComicsCharacter',
'Politician',
'Athlete',
'MeanOfTransportation',
'Artist',
'CelestialBody']
teams = ['ADAPT_Centre',
'GKB_Unimelb',
'PKUWriter',
'Tilburg_University-1',
'Tilburg_University-2',
'Tilburg_University-3',
'UIT-DANGNT-CLNLP',
'UPF-TALN',
'Baseline']
def randomise_data(filelines, param):
# randomise; sort list based on numbers from another list
if param == 'all-cat':
filelines = [x for _, x in sorted(zip(index_shuf_all, filelines))]
elif param == 'old-cat':
filelines = [x for _, x in sorted(zip(index_shuf_old, filelines))]
elif param == 'new-cat':
filelines = [x for _, x in sorted(zip(index_shuf_new, filelines))]
return filelines
def randomise_meteor_ter_data(filelines, param):
# randomise; keep every three lines frozen;
# match each shuffle index to three corresponding lines
filelines_randomised = []
if param == 'all-cat':
filelines_per_3 = [filelines[i:i + 3] for i in range(0, len(filelines), 3)]
# flat three references
filelines_randomised = [ref for _, x in sorted(zip(index_shuf_all, filelines_per_3)) for ref in x]
print('')
elif param == 'old-cat':
filelines_per_3 = [filelines[i:i + 3] for i in range(0, len(filelines), 3)]
# flat three references
filelines_randomised = [ref for _, x in sorted(zip(index_shuf_old, filelines_per_3)) for ref in x]
elif param == 'new-cat':
filelines_per_3 = [filelines[i:i + 3] for i in range(0, len(filelines), 3)]
# flat three references
filelines_randomised = [ref for _, x in sorted(zip(index_shuf_new, filelines_per_3)) for ref in x]
return filelines_randomised
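# Split every team's metric files (one shared file for BLEU/METEOR, a separate one for TER) into shuffled 20-line blocks, one file per block.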
def metric_create_blocks():
params = ['all-cat', 'old-cat', 'new-cat']
for team in teams:
for param in params:
# for bleu and meteor
write_files(team, param, '')
# for ter
write_files(team, param, 'ter')
print('Blocks for teams were successfully created!')
def write_files(team, param, metric):
filelines = []
if metric == 'ter':
option = '_ter'
else:
option = ''
with open('teams/' + team + '_' + str(param) + option + '.txt', 'r') as f:
filelines += [line for line in f]
filelines = randomise_data(filelines, param)
# each block has 20 elements
blocks = [filelines[i:i + 20] for i in range(0, len(filelines), 20)]
for block_id, block in enumerate(blocks[:-1]): # except the last block of the list
with open('teams/metric_per_block/' + team + '_' + param + option + '_' + str(block_id + 1) + '.txt', 'w+') as f_block:
f_block.write(''.join(block))
def reference_create_blocks():
params = ['all-cat', 'old-cat', 'new-cat']
for param in params:
# for bleu, ter, and meteor
write_reference_files(param, '0')
write_reference_files(param, '1')
write_reference_files(param, '2')
write_reference_files(param, 'meteor')
write_reference_files(param, 'ter')
print('Blocks for references were successfully created!')
def write_reference_files(param, metric):
filelines = []
if metric == 'meteor':
option = '-3ref.meteor'
elif metric == 'ter':
option = '-3ref-space.ter'
else:
option = metric + '.lex'
with open('references/gold-' + param + '-reference' + option, 'r') as f:
filelines += [line for line in f]
# each block has 20 elements in .lex and 60 elements in .meteor
if metric == 'meteor' or metric == 'ter':
# need to randomise every three lines, i.e. keep every three lines frozen
filelines = randomise_meteor_ter_data(filelines, param)
blocks = [filelines[i:i + 60] for i in range(0, len(filelines), 60)]
else:
# randomise
filelines = randomise_data(filelines, param)
blocks = [filelines[i:i + 20] for i in range(0, len(filelines), 20)]
for block_id, block in enumerate(blocks[:-1]): # except the last block of the list
if metric == 'meteor':
with open('references/metric_per_block/gold-' + param + '-reference-3ref-' + str(block_id + 1) + '.meteor', 'w+') as f_block:
f_block.write(''.join(block))
if metric == 'ter':
with open('references/metric_per_block/gold-' + param + '-reference-3ref-' + str(block_id + 1) + '.ter', 'w+') as f_block:
f_block.write(''.join(block))
else:
with open('references/metric_per_block/gold-' + param + '-reference' + metric + '-' + str(block_id + 1) + '.lex', 'w+') as f_block:
f_block.write(''.join(block))
# for teams
metric_create_blocks()
# for references
reference_create_blocks()
| 35.076923 | 143 | 0.600512 | 699 | 5,472 | 4.543634 | 0.191702 | 0.034005 | 0.011335 | 0.02267 | 0.573363 | 0.437028 | 0.406486 | 0.369962 | 0.369962 | 0.278023 | 0 | 0.016529 | 0.270285 | 5,472 | 155 | 144 | 35.303226 | 0.778863 | 0.111842 | 0 | 0.318182 | 0 | 0 | 0.15585 | 0.025217 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054545 | false | 0 | 0.009091 | 0 | 0.081818 | 0.027273 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96cf9b2b7b80c7fe7b34ceb3a499fe84ba0f6cf3 | 1,134 | py | Python | makeico.py | NeverDecaf/Alastore | 3da74f1e6bace392b26793a68c40257a18971657 | [
"MIT"
] | null | null | null | makeico.py | NeverDecaf/Alastore | 3da74f1e6bace392b26793a68c40257a18971657 | [
"MIT"
] | 2 | 2020-06-18T21:25:27.000Z | 2020-09-28T05:53:45.000Z | makeico.py | NeverDecaf/Alastore | 3da74f1e6bace392b26793a68c40257a18971657 | [
"MIT"
] | null | null | null | '''downloads an image and resizes it to create an icon. sets desired folder to use said icon.'''
import pyico
import iconchange
from PIL import Image
from io import BytesIO
import os
import re
import anidb
def resize_center_image(image):
IMAGE_SIZE=(256,256)
_, _, w, h = image.getbbox()
w+=0.0
h+=0.0
ratio = min(IMAGE_SIZE[0]/w, IMAGE_SIZE[1]/h)
if ratio != 1:
image = image.resize((int(w*ratio), int(h*ratio)), Image.ANTIALIAS)
# paste the (possibly resized) image centred on a transparent 256x256 canvas
background = Image.new('RGBA', IMAGE_SIZE, (255,255,255,0))
x = int((IMAGE_SIZE[0]-w*ratio)/2)
y = int((IMAGE_SIZE[1]-h*ratio)/2)
background.paste(image, (x,y))
return background
def makeIcon(aid,url,dest_folder):
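# Download the poster art from AniDB, centre it on a 256x256 canvas, save it as '<aid>.ico' and set it as the folder icon.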
img = anidb.anidb_dl_poster_art(url)
img = resize_center_image(img)
buf = BytesIO()
img.save(buf, format="PNG")
ico = pyico.Icon([BytesIO(buf.getvalue())],os.path.join(dest_folder,'%i.ico'%aid))
ico.save()
iconchange.seticon_unicode(dest_folder,'%i.ico'%aid,0) # dest_folder.encode('utf8') removed this and instead use seticon_unicode.
| 35.4375 | 133 | 0.693122 | 191 | 1,134 | 4.005236 | 0.403141 | 0.094118 | 0.039216 | 0.028758 | 0.044444 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033299 | 0.152557 | 1,134 | 31 | 134 | 36.580645 | 0.762747 | 0.164021 | 0 | 0 | 0 | 0 | 0.020191 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0 | 0.241379 | 0 | 0.37931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96d944b44e0c0537f7bc0f8a95a0de26952f30da | 13,756 | py | Python | MLTrainer/models.py | nicholaslaw/ML_Trainer | 1808fae595304e8806a82347e532ab769b014e2b | [
"Apache-2.0"
] | null | null | null | MLTrainer/models.py | nicholaslaw/ML_Trainer | 1808fae595304e8806a82347e532ab769b014e2b | [
"Apache-2.0"
] | null | null | null | MLTrainer/models.py | nicholaslaw/ML_Trainer | 1808fae595304e8806a82347e532ab769b014e2b | [
"Apache-2.0"
] | null | null | null | from sklearn import ensemble, linear_model, naive_bayes, neighbors, svm, tree, model_selection, metrics
from xgboost import XGBClassifier
import numpy as np, pandas as pd, logging, os, joblib
from .model_params import classf_grids
import warnings
from typing import Union
class MLTrainer:
def __init__(self, ensemble: bool=True, linear: bool=True, naive_bayes: bool=True, neighbors: bool=True, svm: bool=True, decision_tree: bool=True, seed: int=100) -> None:
"""
PARAMS
==========
ensemble: bool
True if want ensemble models
linear: bool
True if want linear models
naive_bayes: bool
True if want naive bayes models
neighbors: bool
True if want neighbors models
svm: bool
True if want svm models
decision_tree: bool
True if want decision tree models
NOTE: the naive bayes handling and the folder names still need fixing
"""
self.models = [] # list containing names of models, i.e. strings
self.n_classes = None # Number of classes
self.fitted = False
self.ensemble = ensemble
self.linear = linear
self.naive_bayes = naive_bayes
self.neighbors = neighbors
self.svm = svm
self.decision_trees = decision_tree
self.seed = seed
self.cv_scores = dict()
self.model_keys = dict()
self.idx_label_dic = dict()
self.init_all_models()
def init_ensemble(self) -> None:
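# Instantiate the candidate ensemble classifiers and record which family each model name belongs to.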
all_models = [ensemble.AdaBoostClassifier(), ensemble.BaggingClassifier(), ensemble.ExtraTreesClassifier(),
ensemble.GradientBoostingClassifier(), ensemble.RandomForestClassifier(), XGBClassifier()]
self.models.extend(all_models)
models = ["adaboost", "bagging", "extratrees", "gradientboosting", 'randomforest', "xgboost"]
for mod in models:
self.model_keys[mod] = "ensemble"
def init_linear(self) -> None:
all_models = [linear_model.LogisticRegression()]
self.models.extend(all_models)
models = ["logreg"]
for mod in models:
self.model_keys[mod] = "linear"
def init_naive_bayes(self) -> None:
"""
MultinomialNB works with occurrence counts
BernoulliNB is designed for binary/boolean features
"""
all_models = [naive_bayes.BernoulliNB(), naive_bayes.GaussianNB(), naive_bayes.MultinomialNB(), naive_bayes.ComplementNB()]
self.models.extend(all_models)
models = ["bernoulli", "gaussian", "multinomial", "complement"]
for mod in models:
self.model_keys[mod] = "nb"
def init_neighbors(self) -> None:
all_models = [neighbors.KNeighborsClassifier()]
self.models.extend(all_models)
models = ["knn"]
for mod in models:
self.model_keys[mod] = "neighbors"
def init_svm(self) -> None:
all_models = [svm.NuSVC(probability=True), svm.SVC(probability=True)]
self.models.extend(all_models)
models = ["nu", "svc"]
for mod in models:
self.model_keys[mod] = "svm"
def init_decision_tree(self) -> None:
all_models = [tree.DecisionTreeClassifier(), tree.ExtraTreeClassifier()]
self.models.extend(all_models)
models = ["decision", "extra"]
for mod in models:
self.model_keys[mod] = "tree"
def init_all_models(self) -> None:
if self.ensemble:
self.init_ensemble()
if self.linear:
self.init_linear()
if self.naive_bayes:
self.init_naive_bayes()
if self.neighbors:
self.init_neighbors()
if self.svm:
self.init_svm()
if self.decision_trees:
self.init_decision_tree()
if len(self.models) == 0:
raise Exception("No Models Selected, Look at the Parameters of ___init__")
def fit(self, X: Union[tuple, list, np.ndarray], Y: Union[tuple, list, np.ndarray], n_folds: int=5, scoring: str="accuracy", n_jobs: int=-1, gridsearchcv: bool=False, param_grids: dict={}, greater_is_better: bool=True):
"""
PARAMS
==========
X: numpy array
shape is (n_samples, n_features)
Y: numpy array
shape is (n_samples,)
n_folds: int
number of cross validation folds
n_jobs: int
number of parallel jobs passed to sklearn (-1 uses all available cores)
scoring: str
string indicating scoring metric, reference can be found at https://scikit-learn.org/stable/modules/model_evaluation.html#scoring-parameter
gridsearchcv: bool
True if want parameter search with gridsearch
param_grids: nested dictionary
contains several parameter grids
greater_is_better: bool
True if the evaluation metric is better when it is greater, the results dataframe will be sorted with ascending = not greater_is_better
"""
self.n_classes = len(np.unique(Y))
cv_metric = "mean_cv_"+scoring
self.cv_scores = {"model": [], "parameters": [], cv_metric: [], "remarks": []}
if gridsearchcv:
param_grids = classf_grids
counter = 0
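# Fit every candidate model (optionally wrapped in GridSearchCV), recording its cross-validated score, fitted parameters and any error.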
for model_name, model in zip(list(self.model_keys.keys()),self.models):
if gridsearchcv:
mod = model_selection.GridSearchCV(model, param_grids[self.model_keys[model_name]][model_name], n_jobs=n_jobs)
else:
mod = model
if hasattr(mod, "n_jobs"):
mod.n_jobs = n_jobs
if hasattr(mod, "random_state"):
mod.random_state = self.seed
mod.set_params(**param_grids.get(model_name, dict()))
params = None
score = None
remark = ""
try:
if gridsearchcv:
mod.fit(X, Y)
score = mod.best_score_
mod = mod.best_estimator_
else:
score = np.mean(model_selection.cross_val_score(mod, X, Y, cv=n_folds, scoring=scoring))
mod.fit(X, Y)
params = mod.get_params()
except Exception as e:
remark = e
self.models[counter] = mod
counter += 1
self.cv_scores["model"].append(model_name)
self.cv_scores["parameters"].append(params)
self.cv_scores["remarks"].append(remark)
self.cv_scores[cv_metric].append(score)
self.cv_scores = pd.DataFrame(self.cv_scores)
self.fitted = True
return self
def predict(self, X: Union[tuple, list, np.ndarray]) -> dict:
"""
PARAMS
==========
X: numpy array
shape is (n_samples, n_features)
RETURNS
==========
result: dict
maps each model name to its predicted labels (numpy array of shape (n_samples,)) or to the raised exception
"""
assert self.fitted == True, "Call .fit() method first"
result = dict()
model_names = list(self.model_keys.keys())
for idx, model in enumerate(self.models):
model_name = model_names[idx]
try:
predictions = model.predict(X)
except Exception as e:
predictions = e
result[model_name] = predictions
return result
def predict_proba(self, X: Union[tuple, list, np.ndarray]) -> dict:
"""
PARAMS
==========
X: numpy array
shape is (n_samples, n_features)
RETURNS
==========
result: dict
maps each model name to its predicted class probabilities (numpy array of shape (n_samples, n_classes)) or to the raised exception
"""
assert self.fitted == True, "Call .fit() method first"
result = dict()
model_names = list(self.model_keys.keys())
for idx, model in enumerate(self.models):
model_name = model_names[idx]
try:
proba = model.predict_proba(X)
except Exception as e:
proba = e
result[model_name] = proba
return result
def evaluate(self, test_X: Union[tuple, list, np.ndarray], test_Y: Union[tuple, list, np.ndarray], idx_label_dic: dict=None, class_report: str="classf_report.csv", con_mat: str="confusion_matrix.csv", pred_proba: str="predictions_proba.csv") -> None:
"""
PARAMS
==========
test_X: numpy array
shape is (n_samples, n_features), test features
test_Y: numpy array
shape is (n_samples, 1), test labels
idx_label_dic: dictionary
keys are indices, values are string labels
class_report: str
file path to save classification report
con_mat: str
file path to save confusion matrix
pred_proba: str
file path to save csv containing prediction probabilities
RETURNS
==========
Saves classification report, confusion matrix and label probabilities in CSV
"""
assert self.fitted == True, "Call .fit() method first"
if idx_label_dic is None:
idx_label_dic = {idx: str(idx) for idx in range(self.n_classes)}
self.idx_label_dic = idx_label_dic
del idx_label_dic
for model_name, model in zip(list(self.model_keys.keys()) ,self.models):
folder = "./" + model_name + "/"
if not os.path.exists(folder):
os.makedirs(folder)
self.evaluate_model(model, test_X, test_Y, folder, class_report=class_report, con_mat=con_mat, pred_proba=pred_proba)
def evaluate_model(self, model, test_X: Union[tuple, list, np.ndarray], test_Y: Union[tuple, list, np.ndarray], folder: str="", class_report: str="classf_report.csv", con_mat: str="confusion_matrix.csv", pred_proba: str="predictions_proba.csv") -> None:
"""
PARAMS
==========
model: Sklearn model object
test_X: numpy array
shape is (n_samples, n_features), test features
test_Y: numpy array
shape is (n_samples, 1), test labels
folder: string
path to folder where all files are saved in
class_report: string
path to save classification report in csv
confusion_mat: string
path to save confusion matrix
pred_proba: string
path to save predicted probabilities
RETURNS
==========
Saves classification report, confusion matrix and label probabilities in CSV
"""
try:
predictions = model.predict(test_X)
predictions_proba = model.predict_proba(test_X)
except:
return
else:
self.save_classf_report(metrics.classification_report(test_Y, predictions, labels=list(self.idx_label_dic.keys())), folder+class_report) # Save sklearn classification report in csv
self.save_conf_mat(test_Y, predictions, folder+con_mat)
self.save_label_proba(predictions_proba, folder+pred_proba)
def save_classf_report(self, report, file_path: str):
"""
PARAMS
==========
report: sklearn classification report
file_path: string
path to save classification report as csv
RETURNS
==========
Saves classification report in CSV
"""
report_data = []
lines = report.split('\n')
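# Parse the plain-text report line by line, skipping the two header lines and the four summary lines at the end.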
for line in lines[2:-4]:
row_data = line.split()
if len(row_data) != 0:
row = {}
row['precision'] = float(row_data[-4])
row['recall'] = float(row_data[-3])
row['f1_score'] = float(row_data[-2])
row['support'] = float(row_data[-1])
row['class'] = self.idx_label_dic[int(row_data[0])]
report_data.append(row)
df = pd.DataFrame.from_dict(report_data)
df.to_csv(file_path, index=False)
def save_conf_mat(self, test_Y: Union[tuple, list, np.ndarray], predictions: Union[tuple, list, np.ndarray], file_path: str):
"""
PARAMS
==========
test_Y: numpy array
shape is (n_samples, 1), true labels
predictions: numpy array
shape is (n_samples, 1), predicted labels
file_path: string
path to save confusion matrix
RETURNS
==========
Saves confusion matrix in CSV
"""
confusion_mat = metrics.confusion_matrix(test_Y, predictions, labels=list(self.idx_label_dic.keys()))
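# Append row/column totals so the CSV carries per-class counts plus an overall 'All' margin.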
total_row = confusion_mat.sum(axis=0)
total_col = [np.nan] + list(confusion_mat.sum(axis=1)) + [sum(total_row)]
confusion_mat_df = pd.DataFrame({})
confusion_mat_df["Predicted"] = ["True"] + list(self.idx_label_dic.values()) + ["All"]
for idx, label in self.idx_label_dic.items():
temp = [np.nan] + list(confusion_mat[:, idx]) + [total_row[idx]]
confusion_mat_df[label] = temp
confusion_mat_df["All"] = total_col
confusion_mat_df.to_csv(file_path, index=False)
def save_label_proba(self, pred_proba: np.ndarray, file_path: str):
"""
PARAMS
==========
pred_proba: numpy array
shape is (n_samples, 1), predicted probabilities
file_path: string
file path to save label probabilities in CSV
RETURNS
==========
Saves label probabilities in CSV
"""
proba_df = pd.DataFrame({})
for idx, label in self.idx_label_dic.items():
proba_df[label] = pred_proba[:, idx]
proba_df.to_csv(file_path, index=False) | 37.791209 | 257 | 0.582582 | 1,621 | 13,756 | 4.759408 | 0.160395 | 0.016591 | 0.019961 | 0.028646 | 0.339987 | 0.311471 | 0.261309 | 0.238496 | 0.201167 | 0.180298 | 0 | 0.002541 | 0.313391 | 13,756 | 364 | 258 | 37.791209 | 0.814293 | 0.229354 | 0 | 0.231579 | 0 | 0 | 0.056354 | 0.004408 | 0 | 0 | 0 | 0 | 0.015789 | 1 | 0.084211 | false | 0 | 0.031579 | 0 | 0.142105 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96dc7ce7fb17e70368eee365480b119807ca055c | 8,226 | py | Python | xfel/small_cell/command_line/small_cell_index.py | Anthchirp/cctbx | b8064f755b1dbadf05b8fbf806b7d50d73ef69bf | [
"BSD-3-Clause-LBNL"
] | null | null | null | xfel/small_cell/command_line/small_cell_index.py | Anthchirp/cctbx | b8064f755b1dbadf05b8fbf806b7d50d73ef69bf | [
"BSD-3-Clause-LBNL"
] | null | null | null | xfel/small_cell/command_line/small_cell_index.py | Anthchirp/cctbx | b8064f755b1dbadf05b8fbf806b7d50d73ef69bf | [
"BSD-3-Clause-LBNL"
] | null | null | null | from __future__ import absolute_import, division, print_function
#-*- Mode: Python; c-basic-offset: 2; indent-tabs-mode: nil; tabwidth: 8 -*-
#
# LIBTBX_SET_DISPATCHER_NAME cctbx.small_cell_index
# LIBTBX_PRE_DISPATCHER_INCLUDE_SH export PHENIX_GUI_ENVIRONMENT=1
# LIBTBX_PRE_DISPATCHER_INCLUDE_SH export BOOST_ADAPTBX_FPE_DEFAULT=1
import xfel.small_cell.small_cell
from xfel.small_cell.small_cell import small_cell_index
import libtbx.load_env
import libtbx.option_parser
import sys,os
from six.moves import zip
small_cell_phil_str = """
small_cell {
powdercell = None
.type=unit_cell
.help = "Specify unit cell for powder rings"
spacegroup = None
.type=str
.help = "Specify spacegroup for the unit cell"
high_res_limit = 1.5
.type=float
.help= "Highest resolution limit to process"
min_spots_to_integrate = 5
.type=int
.help= "At least this many spots needed to have been indexed to integrate the image"
interspot_distance = 5
.type=int
.help= "Minimum distance in pixels between a prediction and a spotfinder spot to be accepted"
faked_mosaicity = 0.005
.type=float
.help= "Non-experimentally determined mosaicity to use for each image"
spot_connection_epsilon = 2.e-3
.type=float
.help= "Epsilon for comparing measured vs. predicted inter-spot distances when building the maximum clique"
d_ring_overlap_limit = 5
.type = int
.help = "Number of d rings a spot can overlap before it is removed from consideration. Set to None to use all spots, but this can be time consuming"
override_wavelength = None
.type=float
.help = "Use to override the wavelength found in the image file"
write_gnuplot_input = False
.type = bool
.help = "Use to produce a series of files as inputs to gnuplot to show the indexing results"
max_calls_to_bronk = 100000
.type = int
.help = "Terminate indexing on this many calls to the maximum clique finder."
"This eliminates a long tail of slow images with too many spots."
}
"""
dials_phil_str = """
include scope dials.algorithms.spot_finding.factory.phil_scope
"""
def run(argv=None):
if (argv is None):
argv = sys.argv
from iotbx.phil import parse
small_cell_phil = parse(small_cell_phil_str+dials_phil_str,process_includes=True)
welcome_message = """
%s [-s] -t PATH <directory or image paths>
cctbx.small_cell: software for indexing sparse, still patterns.
An excellent knowledge of the unit cell, detector distance, wavelength and
beam center is required. Specify at least the unit cell in the target phil
file passed in with the -t parameter.
If the image can be integrated, the integrated intensities will be found in
a *.int file (plain text) and in a cctbx.xfel integration pickle file.
See Brewster, A.S., Sawaya, M.R., Rodriguez, J., Hattne, J., Echols, N.,
McFarlane, H.T., Cascio, D., Adams, P.D., Eisenberg, D.S. & Sauter, N.K.
(2015). Acta Cryst. D71, doi:10.1107/S1399004714026145.
Showing phil parameters:
""" % libtbx.env.dispatcher_name
welcome_message += small_cell_phil.as_str(attributes_level = 2)
command_line = (libtbx.option_parser.option_parser(
usage=welcome_message)
.option(None, "--target", "-t",
type="string",
default=None,
dest="target",
metavar="PATH",
help="Target phil file")
.option(None, "--skip_processed_files", "-s",
action="store_true",
default=False,
dest="skip_processed_files",
help="Will skip images that have a .int file already created")
).process(args=argv[1:])
paths = command_line.args
# Target phil file and at least one file to process are required
if len(paths) == 0:
command_line.parser.print_usage()
return
# Parse the target
args = []
if command_line.options.target is not None:
args.append(parse(file_name=command_line.options.target,process_includes=True))
horiz_phil = small_cell_phil.fetch(sources = args).extract()
for path in paths:
# process an entire directory
if os.path.isdir(path):
files = os.listdir(path)
try:
from mpi4py import MPI
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
# determine which subset of the files in this directory this process will
# work on
chunk = len(files) // size
myfiles = files[rank*chunk:(rank+1)*chunk]
if rank == 0:
myfiles += files[len(files)-len(files)%size:len(files)]
except ImportError as e:
print("MPI not found, multiprocessing disabled")
myfiles = files
counts = []
processed = []
for file in myfiles:
if (os.path.splitext(file)[1] == ".pickle" or os.path.splitext(file)[1] == ".edf") and os.path.basename(file)[0:3].lower() != "int" and file != "spotfinder.pickle":
if command_line.options.skip_processed_files and os.path.exists(file + ".int"):
print("Skiping %s as it has already been processed"%file)
continue
counts.append(small_cell_index(os.path.join(path,file),horiz_phil))
if counts[-1] == None: counts[-1] = 0
processed.append(file)
for file, count in zip(processed,counts):
print("%s %4d spots in max clique"%(file,count))
# process a single file
elif os.path.isfile(path):
if os.path.splitext(path)[1] == ".txt":
# Given a list of a file names in a text file, process each file listed
f = open(path, "r")
for line in f.readlines():
if os.path.isfile(line.strip()):
count = small_cell_index(line.strip(),horiz_phil)
if count != None:
print("%s %4d spots in max clique"%(line.strip(),count))
f.close()
elif os.path.splitext(path)[1] == ".int":
# Summarize a .int file, providing completeness and multiplicity statistics
f = open(path, "r")
hkls_all = []
hkls_unique = []
files = []
for line in f.readlines():
strs = line.strip().split()
src = strs[0].split(":")[0]
if not src in files:
files.append(src)
hkl = (int(strs[7]), int(strs[8]), int(strs[9]))
if not hkl in hkls_unique:
hkls_unique.append(hkl)
hkls_all.append(hkl)
print("%d unique hkls from %d orginal files. Completeness: "%(len(hkls_unique),len(files)))
from cctbx.crystal import symmetry
import cctbx.miller
from cctbx.array_family import flex
sym = symmetry(unit_cell=horiz_phil.small_cell.powdercell,
space_group_symbol=horiz_phil.small_cell.spacegroup)
millerset = cctbx.miller.set(sym,flex.miller_index(hkls_unique),anomalous_flag=False)
millerset = millerset.resolution_filter(d_min=horiz_phil.small_cell.high_res_limit)
millerset.setup_binner(n_bins=10)
data = millerset.completeness(True)
data.show()
data = millerset.completeness(False)
print("Total completeness: %d%%\n"%(data * 100))
print("%d measurements total from %d original files. Multiplicity (measurements/expected):"%(len(hkls_all),len(files)))
millerset = cctbx.miller.set(sym,flex.miller_index(hkls_all),anomalous_flag=False)
millerset = millerset.resolution_filter(d_min=horiz_phil.small_cell.high_res_limit)
millerset.setup_binner(n_bins=10)
data = millerset.completeness(True)
data.show()
print("Total multiplicty: %.3f"%(len(hkls_all)/len(millerset.complete_set().indices())))
f.close()
else:
# process a regular image file
count = small_cell_index(path,horiz_phil)
if count != None:
print("%s %4d spots in max clique"%(path,count))
else:
print("Not a file or directory: %s"%path)
if xfel.small_cell.small_cell.app is not None:
del xfel.small_cell.small_cell.app
if __name__=='__main__':
sys.exit(run())
| 38.801887 | 172 | 0.650134 | 1,132 | 8,226 | 4.583039 | 0.313604 | 0.041635 | 0.013493 | 0.017348 | 0.149961 | 0.119507 | 0.096762 | 0.092136 | 0.092136 | 0.074788 | 0 | 0.012893 | 0.245684 | 8,226 | 211 | 173 | 38.985782 | 0.823207 | 0.077924 | 0 | 0.164706 | 0 | 0.029412 | 0.373266 | 0.022322 | 0 | 0 | 0 | 0 | 0 | 1 | 0.005882 | false | 0.005882 | 0.076471 | 0 | 0.088235 | 0.070588 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96dcbbd324ea5f3e5a79e8d1114a8c35997f91dd | 4,097 | py | Python | scripts/getNVDData.py | clowee/Exploring-Factors-and-Measures-to-Select-Open-Source-Software | 321a1ee3ab7081bbec81d2fe024d320bb493465f | [
"Apache-2.0"
] | 3 | 2021-08-19T01:20:35.000Z | 2021-11-23T06:58:29.000Z | scripts/getNVDData.py | clowee/Exploring-Factors-and-Measures-to-Select-Open-Source-Software | 321a1ee3ab7081bbec81d2fe024d320bb493465f | [
"Apache-2.0"
] | null | null | null | scripts/getNVDData.py | clowee/Exploring-Factors-and-Measures-to-Select-Open-Source-Software | 321a1ee3ab7081bbec81d2fe024d320bb493465f | [
"Apache-2.0"
] | 2 | 2021-02-19T16:21:35.000Z | 2021-03-16T06:14:17.000Z | """
version 2.1
@author: Sergio Moreschini, Xiaozhou Li
Main file for scraping data from NVD.
What we want:
- total number of vulnerabilities
- severity
- average vulnerabilities over the last 12 months
- average vulnerabilities overall
Therefore we also need:
- the publication date
"""
####################################################### Imports ########################################################
import os
from scripts.updateInfo import your_email, getGithubToken
from prawcore import NotFound
import requests, csv
from scripts import updateFlag
####################################################### Configs ########################################################
#start_time = time.time()
count = 0
github_personal_token = getGithubToken(your_email)
github_token = os.getenv('GITHUB_TOKEN', github_personal_token)
github_headers = {'Authorization': f'token {github_token}'}
####################################################### Functions ######################################################
def check_extra(project_info):
totalResults = project_info['totalResults']
resultsPerPage = project_info['resultsPerPage']
numberOfExtraRequests = int(totalResults/resultsPerPage)
return numberOfExtraRequests
def check_availability(project_info):
exists = True
try:
project_id = project_info['result']
except KeyError: # the NVD response is a plain dict, so a missing 'result' key raises KeyError
exists = False
return exists
def getNVDDataProjectsInRange(fromN, toN):
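# For each GitHub project slug in the given range, query the NVD CVE API by project name and append one CSV row per reported vulnerability.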
with open("dataset/the100k.txt", 'r', encoding='utf-8') as txtfile:
projectList = [x.strip('\n') for x in txtfile.readlines()][fromN:toN]
count = fromN + 1
thereturn = []
for item in projectList:
#theProjectQuery = f"https://api.github.com/repos/{item}"
#p_search = requests.get(theProjectQuery, headers=github_headers)
#project_info = p_search.json()
#if "message" in project_info and "documentation_url" in project_info:
# print("Project not exist on Github any more. Sad.")
# return 0
#else:
# project_id = project_info['id']
try:
projecttitle = item.split('/')[1]
theProjectQuery = f"https://services.nvd.nist.gov/rest/json/cves/1.0?keyword={projecttitle}&resultsPerPage=1000"
p_search = requests.get(theProjectQuery)
project_info = p_search.json()
if projecttitle =='electron':
print('Check Now!')
exist = check_availability(project_info)
#project_id = project_info['id']
if exist:
project_result = project_info['result']
numberOfExtraRequests = check_extra(project_info)
total_number_of_vulnerabilities = project_info['totalResults']
listOfCVE = project_info['result']
listOfCVE = listOfCVE['CVE_Items']
for item2 in range(total_number_of_vulnerabilities):
# need to make a separate CSV file to check all of the single severities
temp = listOfCVE[item2]
publishedDate = temp['publishedDate']
lastModifiedDate = temp['lastModifiedDate']
temp = temp['impact']
metric3 = temp['baseMetricV3']
severity3 = metric3['cvssV3']
severity3 = severity3['baseSeverity']
metric2 = temp['baseMetricV2']
severity2 = metric2['severity']
thereturn = [item, item2, publishedDate, lastModifiedDate, severity3, severity2]
with open("dataset/nvdData.csv", 'a', encoding='utf-8') as csvfile:
writer = csv.writer(csvfile, delimiter=',')
writer.writerow(thereturn)
else:
print("Project {} NOT exist any more... very sad".format(count))
except KeyError:
print(Exception)
print("Project {} NOT exist any more... very sad".format(count))
updateFlag.updateflag("dataset/flag.csv", your_email, 'nvd', count, toN)
count = count + 1
| 41.383838 | 124 | 0.573835 | 395 | 4,097 | 5.827848 | 0.417722 | 0.076455 | 0.016942 | 0.026064 | 0.107732 | 0.059948 | 0.039096 | 0.039096 | 0.039096 | 0.039096 | 0 | 0.011796 | 0.255065 | 4,097 | 98 | 125 | 41.806122 | 0.742464 | 0.174518 | 0 | 0.064516 | 0 | 0.016129 | 0.151385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.048387 | false | 0 | 0.080645 | 0 | 0.16129 | 0.064516 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96dd0bb755972bcdb625ad07bf1a0e15f45e3990 | 1,180 | py | Python | src/utilities.py | notprash/Bluey | 0bde812d6e4714139649ba83efbda39f910dbd35 | [
"MIT"
] | 1 | 2021-07-30T15:09:01.000Z | 2021-07-30T15:09:01.000Z | src/utilities.py | notprash/Bluey | 0bde812d6e4714139649ba83efbda39f910dbd35 | [
"MIT"
] | null | null | null | src/utilities.py | notprash/Bluey | 0bde812d6e4714139649ba83efbda39f910dbd35 | [
"MIT"
] | 1 | 2021-06-09T08:39:26.000Z | 2021-06-09T08:39:26.000Z | from os import read
import sqlite3
from discord.ext import commands
import discord
def read_database(guildId):
with sqlite3.connect('db.sqlite3') as db:
# Fetching Data
command = f"SELECT * FROM Settings WHERE GuildId = '{guildId}'"
data = db.execute(command)
data = data.fetchone()
return data
def update_database(database, setting, value, condition_parameter, condition_value):
with sqlite3.connect("db.sqlite3") as db:
command = f"UPDATE {database} SET {setting} = {value} WHERE {condition_parameter} = {condition_value}"
db.execute(command)
db.commit()
def has_admin_permissions():
return commands.has_permissions(administrator=True)
def not_none(value):
return value != None
async def help_embed(channel, syntax, none_value):
if none_value == None or none_value == ():
prefix = read_database(channel.guild.id)[8]
description = f'The command input is incomplete. \n```{prefix}{syntax}```'
embed = discord.Embed(title="Error", description=description, color=discord.Color.red())
await channel.send(embed=embed)
return True
return False
| 30.25641 | 111 | 0.679661 | 146 | 1,180 | 5.390411 | 0.431507 | 0.045743 | 0.045743 | 0.050826 | 0.07878 | 0.07878 | 0.07878 | 0 | 0 | 0 | 0 | 0.006445 | 0.211017 | 1,180 | 38 | 112 | 31.052632 | 0.838883 | 0.011017 | 0 | 0 | 0 | 0 | 0.1897 | 0.038627 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148148 | false | 0 | 0.148148 | 0.074074 | 0.481481 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96de7db54652ba7ef516d3e2f899aa9d39451c1b | 20,197 | py | Python | mmdet/models/necks/fpn_CSA.py | iPriest001/my_mmdetection_v218 | 229c7da16908d08abcf65f55b241184f6b61b47d | [
"Apache-2.0"
] | null | null | null | mmdet/models/necks/fpn_CSA.py | iPriest001/my_mmdetection_v218 | 229c7da16908d08abcf65f55b241184f6b61b47d | [
"Apache-2.0"
] | null | null | null | mmdet/models/necks/fpn_CSA.py | iPriest001/my_mmdetection_v218 | 229c7da16908d08abcf65f55b241184f6b61b47d | [
"Apache-2.0"
] | null | null | null | import warnings
import torch
import torch.nn as nn
import torch.nn.functional as F
from mmcv.cnn import ConvModule, xavier_init
from mmcv.runner import auto_fp16, BaseModule
from timm.models.layers import DropPath, to_2tuple, trunc_normal_
from ..builder import NECKS
class GroupAttention(BaseModule):
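"""Multi-head self-attention computed inside non-overlapping ws x ws windows."""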
def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0., ws=1, init_cfg=None):
"""
ws=1 reduces this to standard (non-windowed) attention
"""
super(GroupAttention, self).__init__(init_cfg)
assert dim % num_heads == 0, f"dim {dim} should be divided by num_heads {num_heads}."
self.dim = dim
self.num_heads = num_heads
head_dim = dim // num_heads
self.scale = qk_scale or head_dim ** -0.5
self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Linear(dim, dim)
self.proj_drop = nn.Dropout(proj_drop)
self.ws = ws
@auto_fp16()
def forward(self, x, H, W):
B, N, C = x.shape
x = x.view(B, H, W, C)
pad_l = pad_t = 0
pad_r = (self.ws - W % self.ws) % self.ws
pad_b = (self.ws - H % self.ws) % self.ws
x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
_, Hp, Wp, _ = x.shape
_h, _w = Hp // self.ws, Wp // self.ws
x = x.reshape(B, _h, self.ws, _w, self.ws, C).transpose(2, 3)
qkv = self.qkv(x).reshape(B, _h * _w, self.ws * self.ws, 3, self.num_heads,
C // self.num_heads).permute(3, 0, 1, 4, 2, 5)
q, k, v = qkv[0], qkv[1], qkv[2]
attn = (q @ k.transpose(-2, -1)) * self.scale
attn = attn.softmax(dim=-1)
attn = self.attn_drop(attn)
attn = (attn @ v).transpose(2, 3).reshape(B, _h, _w, self.ws, self.ws, C)
x = attn.transpose(2, 3).reshape(B, _h * self.ws, _w * self.ws, C)
if pad_r > 0 or pad_b > 0:
x = x[:, :H, :W, :].contiguous()
x = x.reshape(B, N, C)
x = self.proj(x)
x = self.proj_drop(x)
return x
class Attention(BaseModule):
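"""Global multi-head self-attention; sr_ratio > 1 spatially reduces the keys/values with a strided convolution."""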
def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0., sr_ratio=1, init_cfg=None):
super().__init__(init_cfg)
assert dim % num_heads == 0, f"dim {dim} should be divided by num_heads {num_heads}."
self.dim = dim
self.num_heads = num_heads
head_dim = dim // num_heads
self.scale = qk_scale or head_dim ** -0.5
self.q = nn.Linear(dim, dim, bias=qkv_bias)
self.kv = nn.Linear(dim, dim * 2, bias=qkv_bias)
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Linear(dim, dim)
self.proj_drop = nn.Dropout(proj_drop)
self.sr_ratio = sr_ratio
if sr_ratio > 1:
self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)
self.norm = nn.LayerNorm(dim)
@auto_fp16()
def forward(self, x, H, W):
B, N, C = x.shape
q = self.q(x).reshape(B, N, self.num_heads, C // self.num_heads).permute(0, 2, 1, 3)
if self.sr_ratio > 1:
x_ = x.permute(0, 2, 1).reshape(B, C, H, W)
x_ = self.sr(x_).reshape(B, C, -1).permute(0, 2, 1) #conv, maybe it can be replaced by pooling
x_ = self.norm(x_)
kv = self.kv(x_).reshape(B, -1, 2, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
else:
kv = self.kv(x).reshape(B, -1, 2, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
k, v = kv[0], kv[1]
attn = (q @ k.transpose(-2, -1)) * self.scale
attn = attn.softmax(dim=-1)
attn = self.attn_drop(attn)
out = (attn @ v).transpose(1, 2).reshape(B, N, C)
out = self.proj(out)
out = self.proj_drop(out)
return out
class Cross_Attention(BaseModule):
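"""Cross-attention: queries come from x, keys/values from y (optionally spatially reduced)."""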
def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0., sr_ratio=1, init_cfg=None):
super().__init__(init_cfg)
assert dim % num_heads == 0, f"dim {dim} should be divided by num_heads {num_heads}."
self.dim = dim
self.num_heads = num_heads
head_dim = dim // num_heads
self.scale = qk_scale or head_dim ** -0.5
self.q = nn.Linear(dim, dim, bias=qkv_bias)
self.kv = nn.Linear(dim, dim * 2, bias=qkv_bias)
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Linear(dim, dim)
self.proj_drop = nn.Dropout(proj_drop)
self.sr_ratio = sr_ratio
if sr_ratio > 1:
self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)
self.norm = nn.LayerNorm(dim)
@auto_fp16()
def forward(self, x, H, W, y, H1, W1):
B, N, C = x.shape
B1, N1, C1 = y.shape
q = self.q(x).reshape(B, N, self.num_heads, C // self.num_heads).permute(0, 2, 1, 3)
if self.sr_ratio > 1:
y_ = y.permute(0, 2, 1).reshape(B1, C1, H1, W1)
y_ = self.sr(y_).reshape(B1, C1, -1).permute(0, 2, 1)
y_ = self.norm(y_)
kv = self.kv(y_).reshape(B1, -1, 2, self.num_heads, C1 // self.num_heads).permute(2, 0, 3, 1, 4)
else:
kv = self.kv(y).reshape(B1, -1, 2, self.num_heads, C1 // self.num_heads).permute(2, 0, 3, 1, 4)
k, v = kv[0], kv[1]
attn = (q @ k.transpose(-2, -1)) * self.scale
attn = attn.softmax(dim=-1)
attn = self.attn_drop(attn)
out = (attn @ v).transpose(1, 2).reshape(B, N, C)
out = self.proj(out)
out = self.proj_drop(out)
return out
class self_attn(BaseModule):
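"""Attention block: windowed (local) attention followed by global attention, each wrapped in a residual connection with drop path."""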
def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, drop_path=0., attn_drop=0., proj_drop=0., ws=1, sr_ratio=1.0, init_cfg=None):
super(self_attn, self).__init__(init_cfg)
self.dim = dim
self.num_heads = num_heads
self.qkv_bias = qkv_bias
self.qk_scale = qk_scale
self.attn_drop = attn_drop
self.proj_drop = proj_drop
self.ws = ws
self.sr_ratio = sr_ratio
self.group_attn = GroupAttention(dim=self.dim, num_heads=self.num_heads, ws = self.ws)
self.global_attn = Attention(dim=self.dim, num_heads=self.num_heads, sr_ratio=self.sr_ratio) # self-attention
self.layernorm1 = nn.LayerNorm(dim)
self.layernorm2 = nn.LayerNorm(dim)
self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
@auto_fp16()
def forward(self, x):
B, C, H, W = x.size()
x1 = x.reshape(B, C, -1).permute(0, 2, 1).contiguous() # (B, H*W, C)
x1 = x1 + self.drop_path(self.group_attn(self.layernorm1(x1), H, W))
x1 = x1 + self.drop_path(self.global_attn(self.layernorm2(x1), H, W))
x1 = x1.permute(0, 2, 1).reshape(B, C, H, W).contiguous() # (B,C,H,W)
return x1
class high2low_attn(BaseModule):
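"""Propagate a coarser (higher pyramid level) feature map into a finer one via local attention plus cross-attention."""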
def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0., ws=1, sr_ratio=1.0, init_cfg=None):
super(high2low_attn, self).__init__(init_cfg)
self.dim = dim
self.num_heads = num_heads
self.qkv_bias = qkv_bias
self.qk_scale = qk_scale
self.attn_drop = attn_drop
self.proj_drop = proj_drop
self.ws = ws
self.sr_ratio = sr_ratio
# attention operation
self.group_attn = GroupAttention(dim=self.dim, num_heads=self.num_heads, ws=self.ws)
self.global_attn = Cross_Attention(dim=self.dim, num_heads=self.num_heads, sr_ratio=self.sr_ratio) # cross_attention
self.layernorm1 = nn.LayerNorm(dim)
self.layernorm2 = nn.LayerNorm(dim)
self.layernorm3 = nn.LayerNorm(dim)
@auto_fp16()
def forward(self, x_low, x_high):
B, C, H, W = x_low.size()
B1, C1, H1, W1 = x_high.size()
x_low1 = x_low.reshape(B, C, -1).permute(0, 2, 1).contiguous() # (B, H*W, C)
x_high1 = x_high.reshape(B1, C1, -1).permute(0, 2, 1).contiguous()
x_low1 = x_low1 + self.group_attn(self.layernorm1(x_low1), H, W)
x_low1 = x_low1 + self.global_attn(self.layernorm2(x_low1), H, W, self.layernorm3(x_high1), H1, W1)
x_low1 = x_low1.permute(0, 2, 1).reshape(B, C, H, W).contiguous() # (B,C,H,W)
x_high1 = x_high1.permute(0, 2, 1).reshape(B1, C1, H1, W1).contiguous() # (B,C,H,W)
return x_low1
class low2high_attn(BaseModule):
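"""Fuse a finer (lower pyramid level) feature map into a coarser one using cross-scale coordinate attention."""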
def __init__(self, channels_high, channels_low, ratio, init_cfg=None):
super(low2high_attn, self).__init__(init_cfg)
self.conv1x1 = nn.Conv2d(channels_low, channels_high, kernel_size=1, stride=1, padding=0, bias=False)
self.bn_reduction = nn.BatchNorm2d(channels_high)
self.relu = nn.ReLU(inplace=True)
self.coorattention = cross_scale_CoordAtt(channels_low, channels_low, ratio)
def forward(self, x_low, x_high):
x_att = self.coorattention(x_low, x_high)
out = self.relu(self.bn_reduction(self.conv1x1(x_high + x_att)))
return out
# coordAttention !!!
class h_sigmoid(BaseModule):
def __init__(self, inplace=True, init_cfg=None):
super(h_sigmoid, self).__init__(init_cfg)
self.relu = nn.ReLU6(inplace=inplace)
def forward(self, x):
return self.relu(x + 3) / 6
class h_swish(BaseModule):
def __init__(self, inplace=True, init_cfg=None):
super(h_swish, self).__init__(init_cfg)
self.sigmoid = h_sigmoid(inplace=inplace)
def forward(self, x):
return x * self.sigmoid(x)
class cross_scale_CoordAtt(BaseModule):
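"""Coordinate attention in which descriptors pooled from the low-level map gate the high-level map; ratio is the spatial stride between the two levels."""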
def __init__(self, inp, oup, ratio, reduction=32, init_cfg=None):
super(cross_scale_CoordAtt, self).__init__(init_cfg)
self.pool_h = nn.AdaptiveAvgPool2d((None, 1))
self.pool_w = nn.AdaptiveAvgPool2d((1, None))
self.ratio = ratio
mip = max(8, inp // reduction)
if self.ratio == 2:
self.conv1 = nn.Conv2d(inp, mip, kernel_size=3, stride=2, padding=1) # change k=1,s=1
elif self.ratio == 4:
self.conv1 = nn.Sequential(nn.Conv2d(inp, mip, kernel_size=3, stride=2, padding=1, bias=False),
nn.Conv2d(mip, mip, kernel_size=3, stride=2, padding=1, bias=False))
else:
self.conv1 = nn.Sequential(nn.Conv2d(inp, mip, kernel_size=3, stride=2, padding=1, bias=False),
nn.Conv2d(mip, mip, kernel_size=3, stride=2, padding=1, bias=False),
nn.Conv2d(mip, mip, kernel_size=3, stride=2, padding=1, bias=False))
self.bn1 = nn.BatchNorm2d(mip)
self.act = h_swish()
self.conv_h = nn.Conv2d(mip, oup, kernel_size=1, stride=1, padding=0)
self.conv_w = nn.Conv2d(mip, oup, kernel_size=1, stride=1, padding=0)
def forward(self, x_low, x_high):
identity = x_high
n, c, h, w = x_low.size()
n1, c1, h1, w1 = x_high.size()
x_h = self.pool_h(x_low)
x_w = self.pool_w(x_low).permute(0, 1, 3, 2)
y = torch.cat([x_h, x_w], dim=2)
y = self.conv1(y)
y = self.bn1(y)
y = self.act(y)
x_h, x_w = torch.split(y, [h1, w1], dim=2)
x_w = x_w.permute(0, 1, 3, 2)
a_h = self.conv_h(x_h).sigmoid()
a_w = self.conv_w(x_w).sigmoid()
out = identity * a_w * a_h
return out
def add_conv(in_ch, out_ch, ksize, stride, leaky=True):
"""
Add a conv2d / batchnorm / leaky ReLU block.
Args:
in_ch (int): number of input channels of the convolution layer.
out_ch (int): number of output channels of the convolution layer.
ksize (int): kernel size of the convolution layer.
stride (int): stride of the convolution layer.
Returns:
stage (Sequential) : Sequential layers composing a convolution block.
"""
stage = nn.Sequential()
pad = (ksize - 1) // 2
stage.add_module('conv', nn.Conv2d(in_channels=in_ch,
out_channels=out_ch, kernel_size=ksize, stride=stride,
padding=pad, bias=False))
stage.add_module('batch_norm', nn.BatchNorm2d(out_ch))
if leaky:
stage.add_module('leaky', nn.LeakyReLU(0.1))
else:
stage.add_module('relu6', nn.ReLU6(inplace=True))
return stage
# adaptive scale feature fusion
class ASFF(BaseModule):
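"""Learns per-pixel softmax weights to blend three same-resolution feature maps into a single output."""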
def __init__(self, rfb=False, vis=False, init_cfg=None):
super(ASFF, self).__init__(init_cfg)
self.dim = 256
self.inter_dim = self.dim
compress_c = 8 if rfb else 16 #when adding rfb, we use half number of channels to save memory
self.weight_level_0 = add_conv(self.inter_dim, compress_c, 1, 1)
self.weight_level_1 = add_conv(self.inter_dim, compress_c, 1, 1)
self.weight_level_2 = add_conv(self.inter_dim, compress_c, 1, 1)
self.weight_levels = nn.Conv2d(compress_c*3, 3, kernel_size=1, stride=1, padding=0)
self.vis= vis
self.expand = add_conv(self.inter_dim, 256, 3, 1)
def forward(self, x_level_0, x_level_1, x_level_2):
level_0_weight_v = self.weight_level_0(x_level_0)
level_1_weight_v = self.weight_level_1(x_level_1)
level_2_weight_v = self.weight_level_2(x_level_2)
levels_weight_v = torch.cat((level_0_weight_v, level_1_weight_v, level_2_weight_v) ,1)
levels_weight = self.weight_levels(levels_weight_v)
levels_weight = F.softmax(levels_weight, dim=1)
fused_out_reduced = x_level_0 * levels_weight[:,0:1,:,:]+\
x_level_1 * levels_weight[:,1:2,:,:]+\
x_level_2 * levels_weight[:,2:,:,:]
out = self.expand(fused_out_reduced)
if self.vis:
return out, levels_weight, fused_out_reduced.sum(dim=1)
else:
return out
@NECKS.register_module()
class FPN_CSA(BaseModule):
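"""FPN-style neck: lateral 1x1 convs on the top three backbone levels, cross-scale attention plus ASFF fusion per level, and extra P6/P7 from stride-2 convs."""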
def __init__(self,
in_channels,
out_channels,
start_level=1,
end_level=-1,
add_extra_convs=True, # use P6, P7
extra_convs_on_inputs=False,
relu_before_extra_convs=False,
num_outs=5, # in = out
with_norm=False,
upsample_method='bilinear',
init_cfg=None):
super(FPN_CSA, self).__init__(init_cfg)
assert isinstance(in_channels, list)
self.in_channels = in_channels
self.feature_dim = out_channels
self.num_ins = len(in_channels)
self.num_outs = num_outs
self.start_level = start_level
self.end_level = end_level
self.add_extra_convs = add_extra_convs
self.extra_convs_on_inputs = extra_convs_on_inputs
self.relu_before_extra_convs = relu_before_extra_convs
assert upsample_method in ['nearest', 'bilinear']
if end_level == -1:
self.backbone_end_level = self.num_ins
assert num_outs >= self.num_ins - start_level
else:
# if end_level < inputs, no extra level is allowed
self.backbone_end_level = end_level
assert end_level <= len(in_channels)
assert num_outs == end_level - start_level
self.start_level = start_level
self.end_level = end_level
self.add_extra_convs = add_extra_convs
assert isinstance(add_extra_convs, (str, bool))
if isinstance(add_extra_convs, str):
# Extra_convs_source choices: 'on_input', 'on_lateral', 'on_output'
assert add_extra_convs in ('on_input', 'on_lateral', 'on_output')
elif add_extra_convs: # True
if extra_convs_on_inputs:
# TODO: deprecate `extra_convs_on_inputs`
warnings.simplefilter('once')
warnings.warn(
'"extra_convs_on_inputs" will be deprecated in v2.9.0,'
'Please use "add_extra_convs"', DeprecationWarning)
self.add_extra_convs = 'on_input'
else:
self.add_extra_convs = 'on_output'
if with_norm:
self.fpna_p5_1x1 = nn.Sequential(
*[nn.Conv2d(in_channels[3], out_channels, 1, bias=False), nn.BatchNorm2d(out_channels)])
self.fpna_p4_1x1 = nn.Sequential(
*[nn.Conv2d(in_channels[2], out_channels, 1, bias=False), nn.BatchNorm2d(out_channels)])
self.fpna_p3_1x1 = nn.Sequential(
*[nn.Conv2d(in_channels[1], out_channels, 1, bias=False), nn.BatchNorm2d(out_channels)])
# self.fpna_p2_1x1 = nn.Sequential(*[nn.Conv2d(in_channels[0], out_channels, 1, bias=False), nn.BatchNorm2d(out_channels)])
else:
self.fpna_p5_1x1 = nn.Conv2d(in_channels[3], out_channels, 1)
self.fpna_p4_1x1 = nn.Conv2d(in_channels[2], out_channels, 1)
self.fpna_p3_1x1 = nn.Conv2d(in_channels[1], out_channels, 1)
# add attention
# self_attention
self.self_p3 = self_attn(dim=out_channels, num_heads=8, ws=7, sr_ratio=8)
self.self_p4 = self_attn(dim=out_channels, num_heads=8, ws=7, sr_ratio=4)
self.self_p5 = self_attn(dim=out_channels, num_heads=8, ws=7, sr_ratio=2)
# high_to_low attention
self.h2l_p4_p3 = high2low_attn(dim=out_channels, num_heads=8, ws=7, sr_ratio=4)
self.h2l_p5_p3 = high2low_attn(dim=out_channels, num_heads=8, ws=7, sr_ratio=2)
self.h2l_p5_p4 = high2low_attn(dim=out_channels, num_heads=8, ws=7, sr_ratio=2)
# low_to_high attention
self.l2h_p3_p4 = low2high_attn(out_channels, out_channels, 2)
self.l2h_p3_p5 = low2high_attn(out_channels, out_channels, 4)
self.l2h_p4_p5 = low2high_attn(out_channels, out_channels, 2)
# adaptive feature fusion
self.p3_fusion = ASFF(rfb=False, vis=False)
self.p4_fusion = ASFF(rfb=False, vis=False)
self.p5_fusion = ASFF(rfb=False, vis=False)
# add extra conv layers (e.g., RetinaNet)
if self.add_extra_convs == 'on_input':
self.fpna_p6 = nn.Conv2d(in_channels[3], out_channels, kernel_size=3, stride=2, padding=1)
self.fpna_p7 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=2, padding=1)
else:
self.fpna_p6 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=2, padding=1)
self.fpna_p7 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=2, padding=1)
# default init_weights for conv(msra) and norm in ConvModule
def init_weights(self):
"""Initialize the weights of FPN module."""
for m in self.modules():
if isinstance(m, nn.Conv2d):
xavier_init(m, distribution='uniform')
elif isinstance(m, nn.LayerNorm):
nn.init.constant_(m.bias, 0)
nn.init.constant_(m.weight, 1.0)
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1.0)
m.bias.data.zero_()
@auto_fp16()
def forward(self, inputs):
assert len(inputs) == len(self.in_channels)
p5 = self.fpna_p5_1x1(inputs[self.end_level])
p4 = self.fpna_p4_1x1(inputs[self.start_level + 1])
p3 = self.fpna_p3_1x1(inputs[self.start_level])
fpna_p3_out = self.p3_fusion(self.self_p3(p3), self.h2l_p4_p3(p3, p4), self.h2l_p5_p3(p3, p5))
fpna_p4_out = self.p4_fusion(self.self_p4(p4), self.h2l_p5_p4(p4, p5), self.l2h_p3_p4(p3, p4))
fpna_p5_out = self.p5_fusion(self.self_p5(p5), self.l2h_p3_p5(p3, p5), self.l2h_p4_p5(p4, p5))
# part 2: add extra levels
if self.add_extra_convs == 'on_input':
fpna_p6_out = self.fpna_p6(inputs[-1])
fpna_p7_out = self.fpna_p7(fpna_p6_out)
else:
fpna_p6_out = self.fpna_p6(fpna_p5_out)
fpna_p7_out = self.fpna_p7(fpna_p6_out)
fpn_csa_out = [fpna_p3_out, fpna_p4_out, fpna_p5_out, fpna_p6_out, fpna_p7_out]
return tuple(fpn_csa_out)
| 40.884615 | 149 | 0.605684 | 3,109 | 20,197 | 3.679318 | 0.089418 | 0.038465 | 0.024128 | 0.01049 | 0.589125 | 0.531602 | 0.499781 | 0.456159 | 0.429583 | 0.405368 | 0 | 0.039261 | 0.268555 | 20,197 | 493 | 150 | 40.967546 | 0.735057 | 0.061197 | 0 | 0.360215 | 0 | 0 | 0.018981 | 0.001219 | 0 | 0 | 0 | 0.002028 | 0.02957 | 1 | 0.064516 | false | 0 | 0.021505 | 0.005376 | 0.150538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96e20883205607038aa8e251bf0037ad7aaf4289 | 10,055 | py | Python | utils/utils.py | mo-vic/standalone-center-loss | 49730909be09d4eefbd43511227f4e787ad8af51 | [
"MIT"
] | 9 | 2019-09-09T00:29:16.000Z | 2020-03-25T10:18:07.000Z | utils/utils.py | mo-vic/standalone-center-loss | 49730909be09d4eefbd43511227f4e787ad8af51 | [
"MIT"
] | null | null | null | utils/utils.py | mo-vic/standalone-center-loss | 49730909be09d4eefbd43511227f4e787ad8af51 | [
"MIT"
] | null | null | null | import numpy as np
from tqdm import tqdm
from PIL import Image
import torch
import torchvision
from torch.utils.data import DataLoader
from torchvision.transforms import Compose, RandomHorizontalFlip, RandomRotation, Pad, RandomCrop, ToTensor
from models.resnet import ResNet
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
def load_dataset(dataset, batch_size, use_gpu, num_workers):
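# Build train/test DataLoaders with light augmentation for MNIST, Fashion-MNIST or CIFAR-10.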
if dataset == "mnist":
transform = Compose([RandomRotation(degrees=5, resample=Image.BILINEAR), ToTensor()])
trainset = torchvision.datasets.MNIST("data/mnist", train=True, download=True, transform=transform)
testset = torchvision.datasets.MNIST("data/mnist", train=False, download=True, transform=ToTensor())
input_shape = (1, 28, 28)
classes = torchvision.datasets.MNIST.classes
elif dataset == "fashion-mnist":
transform = Compose([RandomHorizontalFlip(), RandomRotation(degrees=5, resample=Image.BILINEAR), ToTensor()])
trainset = torchvision.datasets.FashionMNIST("data/fashion-mnist", train=True, download=True,
transform=transform)
testset = torchvision.datasets.FashionMNIST("data/fashion-mnist", train=False, download=True,
transform=ToTensor())
input_shape = (1, 28, 28)
classes = torchvision.datasets.FashionMNIST.classes
elif dataset == "cifar-10":
transform_tr = Compose([RandomHorizontalFlip(), Pad(4), RandomCrop(32), ToTensor()])
trainset = torchvision.datasets.CIFAR10("data/cifar-10", train=True, download=True,
transform=transform_tr)
testset = torchvision.datasets.CIFAR10("data/cifar-10", train=False, download=True,
transform=ToTensor())
input_shape = (3, 32, 32)
classes = ["airplane", "automobile", "bird", "cat", "deer",
"dog", "frog", "horse", "ship", "truck"]
trainloader = DataLoader(trainset, batch_size, True, num_workers=num_workers, pin_memory=use_gpu, drop_last=True)
testloader = DataLoader(testset, batch_size, False, num_workers=num_workers, pin_memory=use_gpu, drop_last=False)
return trainloader, testloader, input_shape, classes
def build_model(model, input_shape, feature_dims, num_classes):
if model == "resnet":
model = ResNet(input_shape, feature_dims, num_classes)
else:
raise NotImplementedError
return model
def train(model, dataloader, criterion, weight_intra, weight_inter, optimizer, use_gpu, writer, epoch, max_epoch, vis,
feat_dim, classes):
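# One training epoch: the intra-class term is minimised while the inter-class term is maximised (its weight is negated before summing), with nearest-center accuracy logged per batch.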
model.train()
criterion.train()
if vis:
if feat_dim == 2 or epoch == max_epoch - 1:
all_features, all_labels = [], []
all_images = []
all_acc = []
all_loss = []
all_inter_loss = []
all_intra_loss = []
distmat = np.array([]).reshape((0, len(classes)))
for idx, (data, labels) in tqdm(enumerate(dataloader), desc="Training Epoch {}".format(epoch)):
optimizer.zero_grad()
if use_gpu:
data, labels = data.cuda(), labels.cuda()
features, outputs = model(data)
intra_loss, inter_loss, intra_dist_data = criterion(features, labels)
intra_loss *= weight_intra
inter_loss *= -weight_inter
loss = intra_loss + inter_loss
all_inter_loss.append(inter_loss.item())
all_intra_loss.append(intra_loss.item())
loss.backward()
optimizer.step()
distmat = np.concatenate([distmat, intra_dist_data], axis=0)
all_loss.append(loss.item())
centers = criterion.get_centers().data
batch_size = features.size(0)
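# Squared Euclidean distance from every feature to every class center; the nearest center gives the predicted class.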
batch_distmat = torch.pow(features.data, 2).sum(dim=1, keepdim=True).expand(batch_size, len(classes)) + \
torch.pow(centers, 2).sum(dim=1, keepdim=True).expand(len(classes), batch_size).t()
batch_distmat.addmm_(1, -2, features.data, centers.t())
acc = (batch_distmat.data.min(1)[1] == labels.data).double().mean()
all_acc.append(acc.item())
writer.add_scalar("loss", loss.item(), global_step=epoch * len(dataloader) + idx)
writer.add_scalar("acc", acc.item(), global_step=epoch * len(dataloader) + idx)
writer.add_scalar("inter_loss", inter_loss.item(), global_step=epoch * len(dataloader) + idx)
writer.add_scalar("intra_loss", intra_loss.item(), global_step=epoch * len(dataloader) + idx)
if vis:
if feat_dim == 2 or epoch == max_epoch - 1:
all_features.append(features.data.cpu().numpy())
all_labels.append(labels.data.cpu().numpy())
if feat_dim != 2 and epoch == max_epoch - 1:
all_images.append(data.data.cpu().numpy())
mean = np.mean(distmat, axis=0)
std = np.std(distmat, axis=0)
for i, (m, s) in enumerate(zip(mean, std)):
writer.add_scalar("mean of %s" % i, m, global_step=epoch)
writer.add_scalar("std of %s" % i, s, global_step=epoch)
centers = criterion.get_centers()
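# Pairwise squared distances between the class centers, logged so that the
# separation between classes can be tracked over training.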
with torch.no_grad():
distmat = torch.pow(centers, 2).sum(dim=1, keepdim=True).expand(len(classes), len(classes)) + \
torch.pow(centers, 2).sum(dim=1, keepdim=True).expand(len(classes), len(classes)).t()
distmat.addmm_(1, -2, centers, centers.t())
distmat = distmat.cpu().data
for i in range(len(classes)):
for j in range(i):
writer.add_scalar("%s-%s" % (i, j), distmat[i][j], global_step=epoch)
print("Epoch {}: total trainset loss: {}, global trainset accuracy:{}, global inter_loss:{}, global intra_loss:{}" \
.format(epoch, np.mean(all_loss), np.mean(all_acc), np.mean(all_inter_loss), np.mean(all_intra_loss)))
if vis:
if feat_dim == 2 or epoch == max_epoch - 1:
visualize(all_images, all_features, all_labels, feat_dim, classes, epoch, writer, tag="train")
def eval(model, dataloader, criterion, scheduler, use_gpu, writer, epoch, max_epoch, vis, feat_dim, classes):
model.eval()
criterion.eval()
if vis:
if feat_dim == 2 or epoch == max_epoch - 1:
all_features, all_labels = [], []
all_images = []
all_acc = []
all_loss = []
all_inter_loss = []
all_intra_loss = []
distmat = np.array([]).reshape((0, len(classes)))
with torch.no_grad():
centers = criterion.get_centers()
for idx, (data, labels) in tqdm(enumerate(dataloader), desc="Evaluating Epoch {}".format(epoch)):
if use_gpu:
data, labels = data.cuda(), labels.cuda()
features, outputs = model(data)
intra_loss, inter_loss, intra_dist_data = criterion(features, labels)
inter_loss *= -1.0
loss = intra_loss + inter_loss
all_inter_loss.append(inter_loss.item())
all_intra_loss.append(intra_loss.item())
distmat = np.concatenate([distmat, intra_dist_data], axis=0)
all_loss.append(loss.item())
batch_size = features.size(0)
batch_distmat = torch.pow(features, 2).sum(dim=1, keepdim=True).expand(batch_size, len(classes)) + \
torch.pow(centers, 2).sum(dim=1, keepdim=True).expand(len(classes), batch_size).t()
batch_distmat.addmm_(1, -2, features, centers.t())
acc = (batch_distmat.data.min(1)[1] == labels.data).double().mean()
all_acc.append(acc.item())
if vis:
if feat_dim == 2 or epoch == max_epoch - 1:
all_features.append(features.data.cpu().numpy())
all_labels.append(labels.data.cpu().numpy())
if feat_dim != 2 and epoch == max_epoch - 1:
all_images.append(data.data.cpu().numpy())
val_loss = np.mean(all_loss)
val_acc = np.mean(all_acc)
val_inter_loss = np.mean(all_inter_loss)
val_intra_loss = np.mean(all_intra_loss)
writer.add_scalar("val_loss", val_loss, global_step=epoch)
writer.add_scalar("val_acc", val_acc, global_step=epoch)
writer.add_scalar("val_inter_loss", val_inter_loss, global_step=epoch)
writer.add_scalar("val_intra_loss", val_intra_loss, global_step=epoch)
mean = np.mean(distmat, axis=0)
std = np.std(distmat, axis=0)
for i, (m, s) in enumerate(zip(mean, std)):
writer.add_scalar("val_mean of %s" % i, m, global_step=epoch)
writer.add_scalar("val_std of %s" % i, s, global_step=epoch)
print("Epoch {}: testset loss: {}, testset accuracy:{}, val_inter_loss:{}, " \
"val_intra_loss:{}".format(epoch, val_loss, val_acc, val_inter_loss, val_intra_loss))
scheduler.step(val_acc)
if vis:
if feat_dim == 2 or epoch == max_epoch - 1:
visualize(all_images, all_features, all_labels, feat_dim, classes, epoch, writer, tag="val")
def visualize(images, features, labels, feat_dim, classes, epoch, writer, tag):
if feat_dim == 2:
colors = ["C0", "C1", "C2", "C3", "C4", "C5", "C6", "C7", "C8", "C9"]
features = np.concatenate(features, axis=0)
labels = np.concatenate(labels, axis=0)
figure = plt.figure(figsize=[8., 8.])
for idx in range(len(classes)):
plt.scatter(features[labels == idx, 0],
features[labels == idx, 1],
c=colors[idx], s=1)
figure.legend(classes, loc="upper right")
writer.add_figure(tag=tag, figure=figure, global_step=epoch, close=True)
else:
images = torch.tensor(np.concatenate(images, axis=0))
labels = np.concatenate(labels, axis=0)
features = torch.tensor(np.concatenate(features, axis=0))
writer.add_embedding(features, tag=tag, metadata=np.array(classes)[labels], label_img=images)
| 43.717391 | 120 | 0.618299 | 1,278 | 10,055 | 4.687793 | 0.149452 | 0.034552 | 0.035053 | 0.015023 | 0.646303 | 0.624437 | 0.590052 | 0.557503 | 0.5111 | 0.496077 | 0 | 0.01334 | 0.247041 | 10,055 | 229 | 121 | 43.908297 | 0.777969 | 0 | 0 | 0.430939 | 0 | 0 | 0.055097 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027624 | false | 0 | 0.055249 | 0 | 0.093923 | 0.01105 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96e21776bf9d96621c9d080d4af4563c744a2443 | 37,588 | py | Python | analysis/ana_som_lp_control.py | tgbugs/mlab | dacc1663cbe714bb45c31b1b133fddb7ebcf5c79 | [
"MIT"
] | null | null | null | analysis/ana_som_lp_control.py | tgbugs/mlab | dacc1663cbe714bb45c31b1b133fddb7ebcf5c79 | [
"MIT"
] | null | null | null | analysis/ana_som_lp_control.py | tgbugs/mlab | dacc1663cbe714bb45c31b1b133fddb7ebcf5c79 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3.3
from rig.ipython import embed
import socket
import numpy as np
import gc
#import scipy as sci
import pylab as plt
from neo import AxonIO, AnalogSignal
from abf_analysis import get_abf_path, load_abf
#magic numbers!
_mu='\u03BC'
def pA(sig,gain): #XXX dep
if sig.unit=='pA':
return sig/gain
else:
raise TypeError('units werent pA')
def mV(pA_signal,clx_rescale=2): #XXX dep
base=pA_signal.base
out=AnalogSignal(base/clx_rescale, units='mV', name=pA_signal.name,
sampling_rate=pA_signal.sampling_rate,
t_start=pA_signal.t_start,#*0,
channel_index=pA_signal.channel_index)
return out #FIXME the raw for the abf file seems to be pA no matter what to get mV divide by two?!
def V(pA_signal): #FIXME, only need this if the file is stopped half way through recording (wtf)
out=mV(pA_signal)
out.units='V'
return out
def set_gain_and_time_func(gain=1,zero_times=False):
def set_gain(sig):
if zero_times:
sig.t_start=sig.t_start*0
return sig/gain
return set_gain
def plot_signal(sig):
plt.plot(sig.times.base,sig.base) #use base because Quantities is slow as BALLS
plt.xlabel(sig.times.units)
plt.ylabel(sig.units)
def plot_abf(filepath,signal_map): #FIXME need better abstraction for the sigmap
raw,block,segments=load_abf(filepath)
nseg=len(segments)
plt.figure()
for seg,n in zip(segments,range(nseg)):
plt.title(block.file_origin)
#plt.title('%s Segment %s'%(block.file_origin,n))
nas=seg.size()['analogsignals']
for anasig,i in zip(seg.analogsignals,range(nas)):
plt.subplot(nas,1,i+1)
stp=signal_map[i](anasig)
plot_signal(stp)
#plot_signal(anasig)
fADC_DO_0=9.5017
fADC_DA_0=24.7521
fDAC_scale_0=20
fADC_DO_1=2.1144
fADC_DA_1=4.2731
fDAC_scale_1=1
def transform_maker(offset,gain):#,scale):
def t_signal(signal):
rescale=(signal.base/gain)+offset
#rescale=signal.base
out=AnalogSignal(rescale, units=signal.units, name=signal.name,
sampling_rate=signal.sampling_rate,
t_start=signal.t_start,
channel_index=signal.channel_index)
return out
return t_signal
zero=transform_maker(0,400) #FIXME why is this *20 again? gain is only @ 20
one=transform_maker(0,5) #FIXME why do I need this!?
image_path='D:/tom_data/macroscope/'
#path='D:/tom_data/clampex/'
#path='/mnt/tgdata/clampex/'
path='D:/clampex/'
filenames=[
'2013_12_04_0024.abf',
'2013_12_04_0025.abf',
'2013_12_04_0026.abf',
'2013_12_04_0036.abf', #perfect pathalogical example
]
sig_map={0:set_gain_and_time_func(20,True),1:set_gain_and_time_func(1,True)}
sig_map_nt={0:set_gain_and_time_func(20,False),1:set_gain_and_time_func(1,False)} #FIXME why is the time doubled?
test_map={0:zero,1:one}
#[plot_abf(path+fn,sig_map_nt) for fn in filenames]
#[plot_abf(path+fn,test_map) for fn in filenames]
#raw,block,segments=load_abf(path+filenames[0])
def detect_spikes(array,thresh=3,max_thresh=5,threshold=None,space=5): #FIXME need an actual threshold?
#TODO local max using f>=thrs s<thrs switch on the way down, and get the indexes and then call max() on it?
try:
iter(array.base)
array=array.base
except:
pass
avg=np.mean(array)
std=np.std(array)
if threshold:
pass
elif max(array) > avg+std*max_thresh:
threshold=avg+thresh*std
else:
#print('guessing noise')
threshold=avg+std*max_thresh
if max(array) < threshold:
#print('yep it was')
return [],threshold
first=array[:-space] #5 to hopefully prevent the weirdness with tiny fluctuations
second=array[space:]
base=len(first)-space
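# Index i counts as a spike onset only if the signal stays at/below threshold for `space`
# consecutive samples starting at i and above threshold for the `space` samples that follow,
# i.e. a sustained upward crossing rather than a single-sample fluctuation.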
def check_thrs(i):
#for n in range((space+1)//2,space):
for n in range(space):
if (first[i+n]<=threshold and second[i+n]>threshold):
pass
else:
return 0
return 1
s_list=[ i for i in range(base) if check_thrs(i) ]
#s_list=[ i for i in range(base) if first[i]<=threshold and second[i]>threshold ]
#s_list=[ i for i in range(base-1) if first[i]<=threshold and second[i]>threshold and first[i+1]<=threshold and second[i+1]>threshold ]
#s_index=s_list[0::space] #take only the first spike #need the -1 to prevent aliasing?
return s_list,threshold
#return s_index
def detect_led(led_array):
led_array=np.asarray(led_array) #unwrap AnalogSignal/Quantity data; plain ndarrays pass through unchanged
maxV=max(led_array)
minV=min(led_array)
if maxV-minV < .095:
return [],np.array([]),maxV,minV #no LED detected; keep the (index, base, maxV, minV) return shape
half=maxV-(maxV-minV)/2
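# The LED is treated as "on" wherever the trace drops below the half-amplitude level
# (the LED channel is assumed to swing downward while the stimulus is on).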
led_on_index=np.where(led_array < half)[0]
#led_on_index=np.where(led_array.base)[0]
base=np.ones_like(led_on_index)
return led_on_index,base,maxV,minV
def plot_count_spikes(filepath,signal_map): #FIXME integrate this with count_spikes properly
""" useful for manual review """
raw,block,segments,header=load_abf(filepath)
if list(raw.read_header()['nADCSamplingSeq'][:2]) != [1,2]: #FIXME
print('Not a ledstim file')
return None,None
nseg=len(segments)
gains=header['fTelegraphAdditGain'] #TODO
zero=transform_maker(0,20*gains[0]) #cell
one=transform_maker(0,5) #led
plt.figure()
counts=[]
for seg,n in zip(segments,range(nseg)):
if len(seg.analogsignals) != 2:
print('No led channel found.')
continue
plt.subplot(nseg,1,n+1)
nas=seg.size()['analogsignals']
signal=zero(seg.analogsignals[0])
led=one(seg.analogsignals[1])
led_on_index,base,maxV,minV=detect_led(led) #FIXME move this to count_spikes
if not len(led_on_index):
print('No led detected maxV: %s minV: %s'%(maxV,minV))
continue
sig_on=signal.base[led_on_index]
sig_mean=np.mean(sig_on)
sig_std=np.std(sig_on)
#plt.plot(sig_std+sig_mean)
sm_arr=base*sig_mean
std_arr=base*sig_std
thresh=3.3
s_list,_=detect_spikes(sig_on,thresh,5) #detect_spikes returns (indexes, threshold)
counts.append(len(s_list))
[plt.plot(s,sig_on[s],'ro') for s in s_list] #plot all the spikes
plt.plot(sig_on,'k-')
plt.plot(sm_arr,'b-')
plt.fill_between(np.arange(len(sm_arr)),sm_arr-std_arr*thresh,sm_arr+std_arr*thresh,color=(.8,.8,1))
plt.xlim((0,len(sig_on)))
plt.title('%s spikes %s'%(len(s_list),block.file_origin))
#plt.title(block.file_origin)
#plt.title('%s Segment %s'%(block.file_origin,n))
return np.mean(counts),np.var(counts),counts
def count_spikes(filepath,signal_map): #TODO
raw,block,segments,header=load_abf(filepath)
nseg=len(segments)
gains=header['fTelegraphAdditGain'] #TODO
zero=transform_maker(0,20*gains[0]) #cell
one=transform_maker(0,5) #led
counts=[]
for seg,n in zip(segments,range(nseg)):
nas=seg.size()['analogsignals']
signal=zero(seg.analogsignals[0])
led=one(seg.analogsignals[1])
led_on_index=np.where(led.base < led.base[0]-.05)[0]
base=np.ones_like(led_on_index)
sig_on=signal.base[led_on_index]
thresh=3.3
#print(filepath)
s_list,_=detect_spikes(sig_on,thresh,5) #detect_spikes returns (indexes, threshold)
counts.append(len(s_list))
return np.mean(counts),np.var(counts),counts
#count_spikes(path+filenames[0],None)
#[count_spikes(path+fn,test_map) for fn in filenames]
#plt.show()
#embed()
def get_disp(origin,target):
a2=(origin[0]-target[0])**2
b2=(origin[1]-target[1])**2
return (a2+b2)**.5
sqrt2=get_disp([0,0],[1,1])
#print('This should be square root of 2',sqrt2)
sqrt2=get_disp([0,0],[-1,-1])
#print('And so should this!',sqrt2)
def esp_fix_x(x_vector):
""" the positive axis for the espX is on the left hand side so multiply by -1"""
return -x_vector
from database.models import *
from database.engines import engine
from sqlalchemy.orm import Session #FIXME need logics
import os
import sys
session=Session(engine)
s=session
def get_metadata_query(MappedClass,id_,mds_name):
MD=MappedClass.MetaData
md_query=s.query(MD).filter_by(parent_id=id_).\
join(MetaDataSource,MD.metadatasource).\
filter_by(name=mds_name).order_by(MD.id)
return md_query
#name:spikes #FIXME make it so manual threshold works AND align all spikes from the detection forward
for_review={
'03_0117':[0,0,0,0,0],
'03_0121':[0,0,0,0,0],
'03_0125':[0,0,0,0,0],
'04_0007':[26,23,22,20,20],
'04_0009':[28,25,22,20,19],
'04_0010':[14,13,12,13,12],
'04_0011':[19,18,16,15,15],
'04_0012':[17,15,14,14,15],
'04_0013':[17,19,17,17,16],
'04_0014':[15,17,15,14,12],
'04_0015':[11,9,11,11,11],
'04_0031':[0,0,0,0,0],
}
reviews=['2013_12_'+review+'.abf' for review in for_review.keys()]
#[plot_count_spikes(path+fn,test_map) for fn in reviews]
reviewed={'2013_12_'+review+'.abf':list_ for review,list_ in for_review.items()}
to_ignore=[ #see LB1:81
'03_0041', #aperature was closed
'03_0042', #aperature about half open
'08_0002', #something got recalibrated in the middle
'08_0001', #shutter closed
'08_0002', #shutter closed
'08_0003', #shutter closed
'08_0022', #min was at 0V so way more spikes #FIXME automate?
'08_0023', #min was at 0V so way more spikes
'08_0024', #min was at 0V so way more spikes
'08_0025', #min was at 0V so way more spikes
'08_0026', #min was at 0V so way more spikes
'08_0027', #min was at 0V so way more spikes
'08_0028', #min was at 0V so way more spikes
'08_0053', #shutter open
'08_0076', #accident with membrane test
'08_0077', #accident with membrane test
'10_0001', #was a test run with a completely open aperature
'10_0010', #missing 3 channels not sure what happened
'10_0011', #complete garbage and not even in the list for cell 36
'10_0012', #somehow this was recorded as a control despite being even? thing it has to do with the problems above
]
to_ignore.extend(['10_0%s'%i for i in range(143,177)]) #lost the cell here
ignored=['2013_12_'+ignore+'.abf' for ignore in to_ignore]
#TODO threading with a callback that returns our numbers
from threading import Thread
class accumulator: #FIXME Queue??
positions=[]
distances=[]
spike_means=[]
spike_vars=[]
def append(self,pos,dist,mean,var):
self.positions.append(pos)
self.distances.append(dist)
self.spike_means.append(mean)
self.spike_vars.append(var)
def bin_dists(dist,mean,std,bin_width=.025): #FIXME return dont plot?
shift=bin_width/2
bin_lefts=np.arange(-shift,10*bin_width,bin_width)
bin_dict={}
for v in bin_lefts:
bin_dict[v]=[]#[[],[]]
for d,m,s in zip(dist,mean,std):
left=bin_lefts[bin_lefts<=d][-1] #left inclusive
#right=left+bin_width
bin_dict[left].append(m)
#bin_dict[left][1].append(s)
for left_bin,means in bin_dict.items():
new_mean=np.mean(means)
new_std=np.std(means) #FIXME ignores the individual vars
new_sem=new_std/np.sqrt(len(means))
#plt.errorbar(left_bin+shift,new_mean,fmt='bo',ecolor=(.8,.8,1),yerr=new_std,capthick=2)
plt.errorbar(left_bin+shift,new_mean,fmt='ko',ecolor=(.2,.2,.2),yerr=new_sem,capsize=8,capthick=4,elinewidth=3,markersize=20)
return bin_dict
def get_cell_data(cid,abfpath):
cell=s.query(Cell).get(cid)
#DFMD=DataFile.MetaData
#cell_pos=s.query(Cell.MetaData).filter_by(parent_id=cid).\
#join(MetaDataSource,Cell.MetaData.metadatasource).\
#filter_by(name='getPos').order_by(Cell.MetaData.id).first().value
#print(cell_pos)
#TODO get slice points
slice_=cell.parent
s_pos=get_metadata_query(Slice,slice_.id,'getPos').all()[:3]
#print(s_pos)
files=s.query(DataFile).join(Cell,DataFile.subjects).filter(Cell.id==cid).order_by(DataFile.creationDateTime).all()
positions=[]
dists=[]
smeans=[] #for spike counts
svars=[] #for spike counts
counts={} #raw counts
for filename,distance in cell.distances.items(): #XXX NOTE XXX not ordered
if filename in ignored:
continue
filepath=abfpath+filename
size=os.path.getsize(filepath)
#print(size)
if size != 10009088 and size != 2009088: #FIXME another way to detect filetype really just need a way to get the protocol
#print(size)
continue
fp=s.query(DataFile).get(('file:///'+abfpath,filename)).position #FIXME ;_; add eq ne hash to datafile
positions.append(fp)
dists.append(distance)
try:
_scount=reviewed[filename]
spike_mean=np.mean(_scount)
spike_var=np.var(_scount)
except KeyError:
spike_mean,spike_var,_scount=count_spikes(filepath,None) #TODO OH NOSE MEMORY USAGE
counts[filename]=_scount
smeans.append(spike_mean)
svars.append(spike_var)
#TODO use a Queue to block on threads until all the spikes are gotten
pos=np.array(positions)
#embed()
return cid,cell.position,pos,dists,smeans,svars,s_pos,counts
def get_cell_traces(cid,abfpath):
cell=s.query(Cell).get(cid)
filepaths=[]
dists=[]
files=cell.datafiles
files.sort(key=lambda file:file.filename) #FIXME assuming not sorted
for file in files:
filename=file.filename
distance=file.distances[cid]
if filename in ignored:
print(filename,'ignored')
continue
filepath=abfpath+filename #FIXME
size=os.path.getsize(filepath)
if size != 10009088 and size != 2009088: #FIXME another way to detect filetype really just need a way to get the protocol
print(filepath,size)
continue
filepaths.append(abfpath+filename)
dists.append(distance)
return filepaths,dists
def plot_abf_traces(filepaths,dists,spikes=False,spike_func=lambda filepath:None,std_thrs=3,std_max=5,threshold_func=lambda filepath:None,cell_id=None,do_plot=False): #FIXME this is for 58 alternating
#for filepath,distance in
from abf_analysis import parmap
#from queue import Queue
#args=zip(filepaths,dists)
#[print(arg) for arg in args]
#baseline_spikes=Queue()
#baseline_spikes.put(mean_base)
def dothing(args):
fp1,fp2,d1,d2=args
fn=fp1.split('/')[-1]
figure=plt.figure(figsize=(20,20))
spike_counts=[]
spike_dist=None
base_counts=[]
base_dist=None
for filepath,distance in zip((fp1,fp2),(d1,d2)):
raw,block,segments,header=load_abf(filepath)
if list(raw.read_header()['nADCSamplingSeq'][:2]) != [1,2]: #FIXME
print('Not a ledstim file')
return None,None
#print(header['nADCSamplingSeq']) #gain debugging
#print(header['nTelegraphEnable'])
#print(header['fTelegraphAdditGain'])
#gains=header['fTelegraphAdditGain'] #TODO
#print(fn,gains)
#TODO cell.headstage number?!
#headstage_number=1 #FIXME
#gain=gains[headstage_number] #FIXME the problem I think is still in AxonIO
#zero=transform_maker(0,20*gain) #FIXME where does the first 20 come from !?
#one=transform_maker(0,5) #led
#one=transform_maker(0,10) #led no idea why the gain is different for these
nseg=len(segments)
for seg,n in zip(segments,range(nseg)):
title_base=''
if nseg == 1:
plt.subplot(6,1,6)
base_fp=filepath
else:
dist_fp=filepath
plt.subplot(6,1,n+1)
signal=seg.analogsignals[0].base
units_sig=seg.analogsignals[0].units
sig_times=seg.analogsignals[0].times.base
units_sig_times=seg.analogsignals[0].times.units
led=seg.analogsignals[1].base
led_on_index,base,maxV,minV=detect_led(led) #FIXME move this to count_spikes
if not len(led_on_index):
print('No led detected maxV: %s minV: %s'%(maxV,minV))
continue
sig_on=signal[led_on_index]
sig_on_times=sig_times[led_on_index] #FIXME may not be synch?
sig_mean=np.mean(sig_on)
sig_std=np.std(sig_on)
#plt.plot(sig_std+sig_mean)
sm_arr=base*sig_mean
sm_arr_times=sig_on_times #[led_on_index]
std_arr=base*sig_std
#do spike detection
if spikes:
spike_indexes,threshold=detect_spikes(sig_on,std_thrs,std_max,threshold_func(filepath))
spike_times=sig_on_times[spike_indexes]
if spike_func(filepath):
sc=spike_func(filepath)[seg.index]
else:
sc=len(spike_indexes)
if nseg == 1:
base_counts.append(sc)
base_dist=distance
else:
spike_counts.append(sc)
spike_dist=distance
#TODO find a way to associate the spikecount with the DataFile
plt.plot(spike_times,sig_on[spike_indexes],'ro')
title_base+='%s spikes '%(sc)
if do_plot:
title_base+='%s $\mu$m from cell %s $\\verb|%s|$ '%(int(distance*1000),cell_id,block.file_origin[:-4])
#title_base+='gain = %s *20'%gain
plt.title(title_base)
plt.xlabel(units_sig_times)
plt.ylabel(units_sig)
#plt.xlabel(led.times.units)
#plt.ylabel(led.units)
#plt.plot(led.times.base[led_on_index],led.base[led_on_index])
lrf=(sig_on_times[0],sig_on_times[-1])
plt.plot(sig_on_times[::10],sig_on[::10],'k-',linewidth=1)
plt.plot(lrf,[sig_mean]*2,'b-', label = 'sig mean' )
plt.plot(lrf,[threshold]*2,'m-', label = 'threshold' )
plt.plot(lrf,[sig_mean+sig_std*std_max]*2,'y-', label = 'max thresh' )
if threshold_func(filepath):
plt.plot(lrf,[threshold_func(filepath)]*2,'g-', label = 'manual thresh' )
plt.fill_between(np.arange(len(sm_arr)),sm_arr-std_arr*std_thrs,sm_arr+std_arr*std_thrs,color=(.8,.8,1))
plt.xlim((sig_on_times[0],sig_on_times[-1]))
plt.legend(loc='upper right',fontsize='xx-small',frameon=False)
spike_count_callback(base_counts,spike_counts,base_dist,spike_dist,dist_fp,base_fp)
spath='/tmp/'+str(cell_id)+'_'+fn[:-4]+'.png'
if do_plot:
print(spath)
plt.savefig(spath,bbox_inches='tight',pad_inches=0)
#plt.show()
figure.clf() #might reduce mem footprint
plt.close()
gc.collect()
cont={}
cont['base_dist_stats']=[]
cont['spike_dist_stats']=[]
cont['norm_dist_stats']=[]
cont['bnorm_dist_stats']=[]
spike_fn_dict={}
args=[(f1,f2,d1,d2) for f1,f2,d1,d2 in zip(filepaths[0::2],filepaths[1::2],dists[0::2],dists[1::2])]
def spike_count_callback(base_counts,spike_counts,base_dist,spike_dist,filepath,base_fp):
mean_base=np.mean(base_counts)
std_base=np.std(base_counts)
mean_spikes=np.mean(spike_counts)
std_spikes=np.std(spike_counts)
norm_counts=spike_counts/mean_base #normalize the counts at distance by the mean base
mean_norm=np.mean(norm_counts)
std_norm=np.std(norm_counts)
spike_fn_dict[filepath]=spike_counts
spike_fn_dict[base_fp]=base_counts
cont['base_dist_stats'].append((base_dist,mean_base,std_base))#,base_counts))
cont['spike_dist_stats'].append((spike_dist,mean_spikes,std_spikes))#,spike_counts))
cont['norm_dist_stats'].append((spike_dist,mean_norm,std_norm))#,norm_counts))
#print(cont.base_dist_stats)
#print(cont.spike_dist_stats)
#print(cont.norm_dist_stats)
#TODO add position...
[dothing(arg) for arg in args] #bloody callbacks not working
#parmap(dothing,args)
b=np.array(cont['base_dist_stats'])
#bdm=np.mean(b[:,0]) #get the mean distance for all the baselines
bm=np.mean(b[:,1])
bstd=np.std(b[:,1])
base_normed=(b[:,1]-bm)/bstd #mean subtracted and divided by std
bn_list=[(d,m,0) for d,m in zip(b[:,0],base_normed)]
#bsem=bstd/np.sqrt(len(b[:,1]))
cont['bnorm_dist_stats']=bn_list
dist_stats=cont['base_dist_stats']#+cont['bnorm_dist_stats']
plot_by_dist(dist_stats,cell_id,filepaths[-1][-8:-4])
return cont,spike_fn_dict
def plot_by_dist(dist_stats,cell_id,end_file,yl='Normalized',yu='um'):
normed=np.array(dist_stats)#np.array(cont['norm_dist_stats'])
fig=plt.figure(figsize=(20,20))
dist,mean,std,bin_width=normed[:,0]*1000,normed[:,1],normed[:,2],25 #convert distances to um; leave the means unscaled
shift=bin_width/2
bin_lefts=np.arange(-shift,10*bin_width,bin_width)
bin_dict={}
print(mean)
for v in bin_lefts:
bin_dict[v]=[]#[[],[]]
for d,m,s in zip(dist,mean,std):
left=bin_lefts[bin_lefts<=d][-1] #left inclusive
#right=left+bin_width
bin_dict[left].append(m)
#bin_dict[left][1].append(s)
for left_bin,means in bin_dict.items():
new_mean=np.mean(means)
new_std=np.std(means) #FIXME ignores the individual vars
new_sem=new_std/np.sqrt(len(means))
#plt.errorbar(left_bin+shift,new_mean,fmt='bo',ecolor=(.8,.8,1),yerr=new_std,capthick=2)
print('hell0')
plt.errorbar(left_bin+shift,new_mean,fmt='ko',ecolor=(.2,.2,.2),yerr=new_sem,capsize=8,capthick=4,elinewidth=3,markersize=20)
#plt.errorbar(normed[:,0]*1000,normed[:,1],yerr=normed[:,2],fmt='ko',ecolor=(1,1,1))
#plt.xlim((.01,.250))
#plt.ylim(0,1.5)
format_axes(fig.axes[0],50,50)
plt.yticks(np.arange(0,1.25,.25))
plt.xticks(np.arange(0,225,25))
plt.xlim(xmin=-2)
plt.xlabel('Distance from cell body in um')
plt.ylabel('%s spikes %s'%(yl,yu))
plt.title('%s spikes as a function of distance for cell %s'%(yl,cell_id))
fig.savefig('/tmp/norm_dist_%s_%s.png'%(cell_id,end_file),bbox_inches='tight',pad_inches=0)
def plot_cell_data(cid,cell_pos,df_pos,dists,smeans,svars,slice_pos):
plt.figure()
plt.title('Cell %s'%cid)
bin_dists(dists,smeans,svars) #TODO
#plt.bar(np.array(dists),smeans,np.ones_like(dists)*.025,color=(.8,.8,1),yerr=svars,align='center')
plt.figure()
plt.title('Cell %s'%cid)
plt.plot(esp_fix_x(df_pos[:,0]),df_pos[:,1],'bo') #TODO these are flipped from reality!
plt.plot(esp_fix_x(cell_pos[0]),cell_pos[1],'ro') #FIXME dake the math and shit out of here for readability
[plt.plot(esp_fix_x(p.value[0]),p.value[1],'go') for p in slice_pos] #plot the slice positions
for count,x,y in zip(smeans,esp_fix_x(df_pos[:,0]),df_pos[:,1]):
plt.annotate(
'%s'%count, #aka label
xy = (x,y), xytext = (-20,20),
textcoords = 'offset points', ha = 'right', va = 'bottom',
bbox = dict(boxstyle = 'round,pad=0.5', fc = 'yellow', alpha = 0.5),
arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3,rad=0'),
)
def get_dist_data(file,callback=None): #get data from a datafile
#TODO
callback()
return
def get_n_before(n,filepaths,end_file):
""" FILEPATHS MUST BE SORTED!!! """
print(len(filepaths))
#base=[0]*len(filepaths)
index=filepaths.index(end_file)
#for i in range(index-57,index+1):
#base[i]=1
#print(base)
return np.arange(index-(n-1),index+1)
def notes():
#_cell_endfile=review;threshold,max_above
_36_0060=1,8,19,39,55;3,5
def threshold_maker(path,THRS_DICT={}):
threshold_dict={path+k+'.abf':v for k,v in THRS_DICT.items()}
def threshold_func(filepath):
try:
threshold=threshold_dict[filepath]
except:
threshold=None
return threshold
return threshold_func
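# Example (sketch; the path is hypothetical): threshold_maker('/data/abf/', {'2013_12_10_0019': .7})
# returns a function that maps '/data/abf/2013_12_10_0019.abf' to 0.7 and any other file to None.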
THRS_DICT={ #FIXME may need to include sub channels?
'2013_12_10_0019':.7,
'2013_12_10_0039':1,
'2013_12_10_0055':1,
'2013_12_10_0069':1.5,
'2013_12_10_0079':1.2,
'2013_12_10_0088':0.9,
'2013_12_10_0097':1.5,
'2013_12_10_0189':2.7,
'2013_12_10_0214':3.3,
'2013_12_10_0216':3.27,
'2013_12_10_0217':3.55,
#'2013_12_10_0221':3.25,
'2013_12_10_0222':3.4,
'2013_12_10_0223':3.4,
'2013_12_10_0225':3.7,
#'2013_12_10_0226':3.45,
'2013_12_10_0227':3.5,
'2013_12_10_0233':3.41,
'2013_12_11_0000':3.5,
'2013_12_11_0074':1.2,
'2013_12_11_0081':1.2,
'2013_12_11_0083':1.5,
'2013_12_11_0085':1.8,
'2013_12_11_0089':2.5,
'2013_12_11_0090':2.0,
'2013_12_11_0092':2.2,
'2013_12_11_0093':2.3,
'2013_12_11_0094':2.5,
'2013_12_11_0100':2.7,
'2013_12_11_0102':2.7,
'2013_12_11_0104':2.7,
'2013_12_11_0116':3.0,
'2013_12_11_0187':5.3,
'2013_12_11_0186':5.2,
'2013_12_11_0188':5.1,
'2013_12_11_0190':5.1,
'2013_12_11_0051':3.6,
'2013_12_11_0052':3.7,
'2013_12_11_0054':3.6,
'2013_12_11_0056':3.6,
'2013_12_11_0058':3.65,
'2013_12_11_0185':5,
}
def make_spike_count_dict(path,COUNT_DICT={}):
count_dict={path+k+'.abf':v for k,v in COUNT_DICT.items()}
def count_func(filepath):
try:
count=count_dict[filepath]
except:
count=None #FIXME length?
return count
return count_func
COUNT_DICT={ #manual counts for traces that are super rough and I don't fell like down sampling right now
'2013_12_10_0221':[8],
'2013_12_10_0222':[7,7,6,7,5],
'2013_12_10_0223':[6],
'2013_12_10_0225':[6],
'2013_12_10_0226':[6,5,5,4,6],
'2013_12_10_0227':[5],
'2013_12_10_0228':[3,3,3,3,3],
'2013_12_10_0229':[6],
'2013_12_10_0230':[4,5,5,4,4],
'2013_12_10_0231':[6],
'2013_12_10_0232':[3,3,3,3,4],
'2013_12_10_0233':[6],
'2013_12_11_0000':[8],
'2013_12_11_0001':[2,0,0,2,2],
'2013_12_11_0002':[8],
'2013_12_11_0004':[8],
'2013_12_11_0005':[6,6,6,6,6,6],
'2013_12_11_0006':[6],
'2013_12_11_0008':[8],
'2013_12_11_0011':[7,7,6,5,6],
'2013_12_11_0012':[6],
'2013_12_11_0014':[8],
'2013_12_11_0016':[8],
'2013_12_11_0018':[9],
'2013_12_11_0019':[8,7,5,7,7],
'2013_12_11_0020':[7],
'2013_12_11_0022':[8],
'2013_12_11_0023':[2,2,2,2,1],
'2013_12_11_0024':[8],
'2013_12_11_0026':[10],
'2013_12_11_0027':[5,4,5,4,4],
'2013_12_11_0028':[8],
'2013_12_11_0029':[3,3,3,3,3],
'2013_12_11_0030':[7],
'2013_12_11_0031':[6,7,7,7,7],
'2013_12_11_0032':[7],
'2013_12_11_0033':[3,4,5,4,4],
'2013_12_11_0034':[9],
'2013_12_11_0036':[9],
'2013_12_11_0038':[9],
'2013_12_11_0040':[9],
'2013_12_11_0042':[8],
'2013_12_11_0044':[8],
'2013_12_11_0045':[6,6,5,6,5],
'2013_12_11_0046':[7],
'2013_12_11_0048':[9], #observe the 9 after nothing prior
'2013_12_11_0049':[7,6,6,6,6],
'2013_12_11_0050':[7],
'2013_12_11_0166':[15],
'2013_12_11_0170':[15],
'2013_12_11_0178':[15],
'2013_12_11_0186':[15],
'2013_12_11_0189':[9,8,8,8,8],
'2013_12_11_0195':[0,0,1,1,2],
'2013_12_11_0196':[15],
'2013_12_11_0197':[0,1,1,1,1],
'2013_12_11_0200':[15],
'2013_12_11_0202':[15],
'2013_12_11_0204':[16],
'2013_12_11_0207':[12,12,12,11,11],
'2013_12_11_0208':[14],
}
def files_by_dist(files,dists,cid,endf):
#all_files,dists=get_cell_traces(cid,abfpath)
afd=[(file,dist) for file,dist in zip(files,dists)]
#afd.sort(key=lambda tup: tup[1])
fig=plt.figure(figsize=(20,20),frameon=False)
for filepath,distance in afd:
raw,block,segments,header=load_abf(filepath) #TODO fp segment dict like I had with queue/threading
if len(segments)==1:
continue
for segment in segments:
#for ass in segment.analogsignals:
ass=segment.analogsignals[0]
xs=np.linspace(0,.1,len(ass.times.base))+distance #center by distance
#ys=(ass.base-np.min(ass.base))/(np.max(ass.base)-np.min(ass.base))-segment.index*1.5 #move down by segment number
ys=(ass.base-np.mean(ass.base))/2.5+segment.index #move down by segment number
plt.plot(xs[:len(xs)//14]*1000,ys[:len(ys)//14],'k-',label='%s'%distance)
plt.ylim(-.2,5)
plt.yticks(np.arange(0,5,1))
plt.xlabel('Distance from cell body in $\mu$m. Normalized trial length')
plt.ylabel('Amplitude and run number')
plt.title('Example traces as a function of distance from cell %s'%cid)
#fig.frameon(False)
#fig.patch.set_visible(False)
ax=fig.axes[0]
ax.spines['left'].set_edgecolor((1,1,1,1))
format_axes(ax)
#embed()
fig.savefig('/tmp/fd_%s_%s.png'%(cid,endf[-4:]),bbox_inches='tight',pad_inches=0)
def format_axes(ax,fontsize=50,ticksize=50):
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
things=[ax.xaxis.label, ax.yaxis.label, ax.title]
ticks=ax.get_xticklabels()+ax.get_yticklabels()
[t.set_fontsize(fontsize) for t in things]
[t.set_fontsize(ticksize) for t in ticks]
def do_files():
abfpath=get_abf_path()
cell_ids=36,37,41,43 #39, 40 no data
end=[#filename, go back, std_thrs, std_max
('2013_12_10_0060', 58, 2.5, 5), #36 missing file 11 from the list due to weird bug #space=5
('2013_12_10_0118', 58, 2.5, 5), #37 4V #space =3
('2013_12_10_0176', 58, 2.5, 5), #37 4.1V #health failed
('2013_12_11_0000', 58, 3.0, 5), #41 3V
('2013_12_11_0058', 58, 3.0, 5), #41 0V
('2013_12_11_0127', 58, 3.3, 5), #43 3V
('2013_12_11_0208', 58, 2.5, 3.8), #43 0V, could go farther back and review the files around crash
]
for cid in cell_ids:
all_files,dists=get_cell_traces(cid,abfpath)
cell=s.query(Cell).get(cid)
files=cell.datafiles
for endf,counts,std_thrs,std_max in end:
try:
indexes=get_n_before(counts,all_files,abfpath+endf+'.abf')
except ValueError as e:
continue
fileset=np.array(all_files)[indexes]
filenames=[f.split('/')[-1] for f in fileset]
distset=np.array(dists)[indexes]
#files_by_dist(fileset,distset,cid,endf)
fig=plt.figure(figsize=(20,20))
pos=np.array([ f.position for f in files if f.filename in filenames])
slice_=cell.parent
s_pos=np.array([d.value for d in get_metadata_query(Slice,slice_.id,'getPos').all()[:3]])
print(s_pos)
plt.plot(-pos[:,0]*1000,pos[:,1]*1000,'bo',color=(.8,.8,1),markersize=10)
plt.plot(-s_pos[:,0]*1000,s_pos[:,1]*1000,'go',markersize=20)
plt.plot(-cell.position[0]*1000,cell.position[1]*1000,'ro',markersize=20)
plt.axis('equal')
plt.xlabel('$\mu$m')
plt.ylabel('$\mu$m')
plt.title('Simulus positions and cortex surface for cell %s'%(cid))
format_axes(fig.axes[0])
plt.savefig('/tmp/positions_%s_%s'%(cid,endf[-4:]),bbox_inches='tight',pad_inches=0)
def main():
#do_files()
#return None
import pickle
try:
spike_fn_dict=pickle.load(open('/tmp/led_spikes.p','rb'))
stats_dict=pickle.load(open('/tmp/led_stats.p','rb'))
except FileNotFoundError:
abfpath=get_abf_path()
cell_ids=36,37,41,43 #39, 40 no data
end=[#filename, go back, std_thrs, std_max
('2013_12_10_0060', 58, 2.5, 5), #36 missing file 11 from the list due to weird bug #space=5
('2013_12_10_0118', 58, 2.5, 5), #37 4V #space =3
('2013_12_10_0176', 58, 2.5, 5), #37 4.1V #health failed
('2013_12_11_0000', 58, 3.0, 5), #41 3V
('2013_12_11_0058', 58, 3.0, 5), #41 0V
('2013_12_11_0127', 58, 3.3, 5), #43 3V
('2013_12_11_0208', 58, 2.5, 3.8), #43 0V, could go farther back and review the files around crash
]
spike_fn_dict={}
stats_dict={}
for cid in cell_ids:
all_files,dists=get_cell_traces(cid,abfpath)
for endf,counts,std_thrs,std_max in end:
try:
indexes=get_n_before(counts,all_files,abfpath+endf+'.abf')
except ValueError as e:
continue
fileset=np.array(all_files)[indexes]
distset=np.array(dists)[indexes]
#files_by_dist(fileset,distset,cid,endf)
stats,sfnd=plot_abf_traces(fileset,distset,std_thrs=std_thrs,std_max=std_max,threshold_func=threshold_maker(abfpath,THRS_DICT),spikes=True,spike_func=make_spike_count_dict(abfpath,COUNT_DICT),cell_id=cid,do_plot=False)
stats_dict['%s_%s'%(cid,endf[-4:])]=stats
spike_fn_dict.update(sfnd)
pickle.dump(spike_fn_dict, open('/tmp/led_spikes.p','wb'))
pickle.dump(stats_dict, open('/tmp/led_stats.p','wb'))
cid=37
#cid=41
abfpath=get_abf_path()
all_files,dists=get_cell_traces(cid,abfpath)
endf='2013_12_10_0118'
std_thrs=2.5
std_max=5
indexes=get_n_before(58,all_files,abfpath+endf+'.abf')
fileset=np.array(all_files)[indexes]
distset=np.array(dists)[indexes]
stats,sfnd=plot_abf_traces(fileset,distset,std_thrs=std_thrs,std_max=std_max,threshold_func=threshold_maker(abfpath,THRS_DICT),spikes=True,spike_func=make_spike_count_dict(abfpath,COUNT_DICT),cell_id=cid,do_plot=True)
#cell_ids=16,26 #based on num files
#cell_ids=32,34
#cell_ids=37,#41,43
#cell_ids=36,
#cell_ids=41,
#plot_abf_traces([abfpath+'2013_12_10_0188.abf',abfpath+'2013_12_10_0189.abf'],[0,0],std_thrs=std_thrs,std_max=std_max,threshold_func=threshold_maker(abfpath,THRS_DICT),spikes=True,cell_id=cid)
#plot_abf_traces([abfpath+'2013_12_10_0216.abf',abfpath+'2013_12_10_0217.abf'],[0,0],std_thrs=std_thrs,std_max=std_max,threshold_func=threshold_maker(abfpath,THRS_DICT),spikes=True,cell_id=cid)
def compile_all(stats_dict):
all_base=[]
all_spike=[]
all_norm=[]
for key,cont in stats_dict.items():
all_base.extend(cont['base_dist_stats'])
all_spike.extend(cont['spike_dist_stats'])
all_norm.extend(cont['norm_dist_stats'])
return all_base,all_spike,all_norm
b,s,n=compile_all(stats_dict)
b=np.array(b)
s=np.array(s)
n=np.array(n)
ndists=n[:,0]
nmeans=n[:,1]
nstds=n[:,2]
sdists=s[:,0]
smeans=s[:,1]
sstds=s[:,2]
bdists=b[:,0]
bmeans=b[:,1]
bstds=b[:,2]
fig=plt.figure(figsize=(20,20))
nbins=bin_dists(ndists*1000,nmeans,nstds,25)
plt.xlim(xmin=-2)
plt.xticks(np.arange(0,225,25))
#plt.ylim(ymax=2)
plt.yticks(np.arange(0,1.25,.25))
plt.title('Normalized spike count vs distance')#' (binned). Error is SEM.')
plt.xlabel(r'Distance in $\mu$m')
plt.ylabel('Normalized spike counts')
format_axes(fig.axes[0],50,50)
fig.savefig('/tmp/norm_dist_all.png',bbox_inches='tight',pad_inches=0)
plt.clf()
#fig=plt.figure(figsize=(20,20))
plt.plot(ndists*1000,nmeans,'ko',markersize=20)
plt.title('All spike counts vs distance')#' (binned). Error is SEM.')
plt.xlabel(r'Distance in $\mu$m')
plt.ylabel('Normalized spike counts')
format_axes(fig.axes[0],50,50)
fig.savefig('/tmp/population_normalized_counts.png',bbox_inches='tight',pad_inches=0)
plt.clf()
#fig=plt.figure(figsize=(20,20))
sbins=bin_dists(sdists*1000,smeans,sstds,25)
plt.xlim(xmin=-2)
plt.xticks(np.arange(0,225,25))
plt.yticks(np.arange(0,8,1))
plt.title('Average spike count vs distance')#' (binned). Error is SEM.')
plt.xlabel(r'Distance in $\mu$m')
plt.ylabel('Average spike count')
format_axes(fig.axes[0],50,50)
fig.savefig('/tmp/spike_dist_all.png',bbox_inches='tight',pad_inches=0)
plt.clf()
bbins=bin_dists(bdists,bmeans,bstds)
#embed()
def get_lmes(bins):
lefts=[]
means=[]
stds=[]
sems=[]
for left,bin_means in bins.items(): #distinct name so the outer `means` accumulator is not shadowed
lefts.append(left)
new_mean=np.mean(bin_means)
new_std=np.std(bin_means)
new_sem=new_std/np.sqrt(len(bin_means)) #standard error of the mean
means.append(new_mean)
stds.append(new_std)
sems.append(new_sem)
return np.array(lefts),np.array(means),np.array(stds),np.array(sems)
lefts,means,stds,sems=get_lmes(nbins)
#plt.figure(figsize=(10,10))
#plt.errorbar(lefts+12.5,means,sems,fmt='ko',ecolor=(.2,.2,.2))
#plt.savefig('/tmp/test3.png')
#useful="plt.figure();[ plt.errorbar(left+.0125,np.mean(lst[0]),np.std(lst[0])/np.sqrt(len(lst[0])) ) for left,lst in bins.items()];plt.xlim(-.1,.250);plt.savefig('/tmp/test2.png')"
#embed()
#data=get_cell_data(cid)
#plot_cell_data(*data[:-1])
#plt.show()
if __name__=='__main__':
main()
| 37.588 | 234 | 0.629403 | 5,963 | 37,588 | 3.763206 | 0.136676 | 0.034492 | 0.028877 | 0.005214 | 0.380481 | 0.32402 | 0.301381 | 0.271034 | 0.248797 | 0.225624 | 0 | 0.097976 | 0.22313 | 37,588 | 999 | 235 | 37.625626 | 0.670491 | 0.212648 | 0 | 0.281617 | 0 | 0 | 0.124603 | 0.0044 | 0 | 0 | 0 | 0.001001 | 0 | 1 | 0.050847 | false | 0.003911 | 0.019557 | 0 | 0.118644 | 0.015645 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96e47d34a931bd4172b2ac8f870aec8f302e1196 | 14,251 | py | Python | gama/utilities/generic/async_evaluator.py | learsi1911/GAMA_pygmo_v4 | 459807db352dd1c9f9c1e0e322f8c1e9b5abbca0 | [
"Apache-2.0"
] | null | null | null | gama/utilities/generic/async_evaluator.py | learsi1911/GAMA_pygmo_v4 | 459807db352dd1c9f9c1e0e322f8c1e9b5abbca0 | [
"Apache-2.0"
] | null | null | null | gama/utilities/generic/async_evaluator.py | learsi1911/GAMA_pygmo_v4 | 459807db352dd1c9f9c1e0e322f8c1e9b5abbca0 | [
"Apache-2.0"
] | null | null | null |
"""
I don't want to be reinventing the wheel but I can't find a satisfying implementation.
I want to be able to execute arbitrary functions asynchronously on a different process.
Any ongoing subprocesses must immediately be able to be terminated without errors.
Results of cancelled subprocesses may be ignored.
`concurrent.futures.ProcessPoolExecutor` gets very close to the desired implementation,
but it has issues:
- by default it waits for subprocesses to close on __exit__.
Unfortunately it is possible the subprocesses can be running non-Python code,
e.g. a C implementation of SVC whose subprocess won't end until fit is complete.
- even if that is overwritten and no wait is performed,
the subprocess will raise an error when it is done.
Though that does not hinder the execution of the program,
I don't want errors for expected behavior.
"""
import datetime
import gc
import logging
import multiprocessing
import os
import psutil
import queue
import struct
import time
import traceback
from typing import Optional, Callable, Dict, List
import uuid
from psutil import NoSuchProcess
try:
import resource
except ModuleNotFoundError:
resource = None # type: ignore
log = logging.getLogger(__name__)
class AsyncFuture:
""" Reference to a function call executed on a different process. """
def __init__(self, fn, *args, **kwargs):
self.id = uuid.uuid4()
self.fn = fn
self.args = args
self.kwargs = kwargs
self.result = None
self.exception = None
self.traceback = None
def execute(self, extra_kwargs):
""" Execute the function call `fn(*args, **kwargs)` and record results. """
try:
# Don't update self.kwargs, as it will be pickled back to the main process
kwargs = {**self.kwargs, **extra_kwargs}
self.result = self.fn(*self.args, **kwargs)
except Exception as e:
self.exception = e
self.traceback = traceback.format_exc()
class AsyncEvaluator:
""" Manages subprocesses on which arbitrary functions can be evaluated.
The function and all its arguments must be picklable.
Using the same AsyncEvaluator in two different contexts raises a `RuntimeError`.
defaults: Dict, optional (default=None)
Default parameter values shared between all submit calls.
This allows these defaults to be transferred only once per process,
instead of twice per call (to and from the subprocess).
Only supports keyword arguments.
"""
defaults: Dict = {}
def __init__(
self,
n_workers: Optional[int] = None,
memory_limit_mb: Optional[int] = None,
logfile: Optional[str] = None,
wait_time_before_forced_shutdown: int = 10,
):
"""
Parameters
----------
n_workers : int, optional (default=None)
Maximum number of subprocesses to run for parallel evaluations.
Defaults to `AsyncEvaluator.n_jobs`, using all cores unless overwritten.
memory_limit_mb : int, optional (default=None)
The maximum number of megabytes that this process and its subprocesses
may use in total. If None, no limit is enforced.
There is no guarantee the limit is not violated.
logfile : str, optional (default=None)
If set, recorded resource usage will be written to this file.
wait_time_before_forced_shutdown : int (default=10)
Number of seconds to wait between asking the worker processes to shut down
and terminating them forcefully if they failed to do so.
"""
self._has_entered = False
self.futures: Dict[uuid.UUID, AsyncFuture] = {}
self._processes: List[psutil.Process] = []
self._n_jobs = n_workers
self._memory_limit_mb = memory_limit_mb
self._mem_violations = 0
self._mem_behaved = 0
self._logfile = logfile
self._wait_time_before_forced_shutdown = wait_time_before_forced_shutdown
self._input: multiprocessing.Queue = multiprocessing.Queue()
self._output: multiprocessing.Queue = multiprocessing.Queue()
self._command: multiprocessing.Queue = multiprocessing.Queue()
pid = os.getpid()
self._main_process = psutil.Process(pid)
def __enter__(self):
if self._has_entered:
raise RuntimeError(
"You can not use the same AsyncEvaluator in two different contexts."
)
self._has_entered = True
self._input = multiprocessing.Queue()
self._output = multiprocessing.Queue()
log.debug(
f"Process {self._main_process.pid} starting {self._n_jobs} subprocesses."
)
if self._n_jobs is None: #remove
self._n_jobs = multiprocessing.cpu_count() #remove
for _ in range(self._n_jobs):
self._start_worker_process()
self._log_memory_usage()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
log.debug(f"Signaling {len(self._processes)} subprocesses to stop.")
for _ in self._processes:
self._command.put("stop")
for i in range(self._wait_time_before_forced_shutdown + 1):
if self._command.empty():
break
time.sleep(1)
else:
# A non-empty command queue indicates a process(es) was unable to shut down.
# All processes need to be terminated to free resources.
for process in self._processes:
try:
process.terminate()
except psutil.NoSuchProcess:
pass
return False
def submit(self, fn: Callable, *args, **kwargs) -> AsyncFuture:
""" Submit fn(*args, **kwargs) to be evaluated on a subprocess.
Parameters
----------
fn: Callable
Function to call on a subprocess.
args
Positional arguments to call `fn` with.
kwargs
Keyword arguments to call `fn` with.
Returns
-------
AsyncFuture
A Future of which the `result` or `exception` field will be populated
once evaluation is finished.
"""
future = AsyncFuture(fn, *args, **kwargs)
self.futures[future.id] = future
self._input.put(future)
return future
def wait_next(self, poll_time: float = 0.05) -> AsyncFuture:
""" Wait until an AsyncFuture has been completed and return it.
Parameters
----------
poll_time: float (default=0.05)
Time to sleep between checking if a future has been completed.
Returns
-------
AsyncFuture
The completed future that completed first.
Raises
------
RuntimeError
If all futures have already been completed and returned.
"""
if len(self.futures) == 0:
raise RuntimeError("No Futures queued, must call `submit` first.")
while True:
self._control_memory_usage()
self._log_memory_usage()
try:
completed_future = self._output.get(block=False)
except queue.Empty:
time.sleep(poll_time)
continue
match = self.futures.pop(completed_future.id)
match.result, match.exception, match.traceback = (
completed_future.result,
completed_future.exception,
completed_future.traceback,
)
self._mem_behaved += 1
return match
def _start_worker_process(self) -> psutil.Process:
""" Start a new worker node and add it to the process pool. """
mp_process = multiprocessing.Process(
target=evaluator_daemon,
args=(self._input, self._output, self._command, AsyncEvaluator.defaults),
daemon=True,
)
mp_process.start()
subprocess = psutil.Process(mp_process.pid)
self._processes.append(subprocess)
return subprocess
def _stop_worker_process(self, process: psutil.Process):
""" Terminate a new worker node and remove it from the process pool. """
process.terminate()
self._processes.remove(process)
def _control_memory_usage(self, threshold=0.05):
""" Dynamically restarts or kills processes to adhere to memory constraints. """
if self._memory_limit_mb is None:
return
# If the memory usage of all processes (the main process, and the evaluation
# subprocesses) exceeds the maximum allowed memory usage, we have to terminate
# one of them.
# If we were never to start new processes, eventually all subprocesses would
# likely be killed due to 'silly' pipelines (e.g. multiple polynomial feature
# steps).
# On the other hand if there is e.g. a big dataset, by always restarting we
# will set up the same scenario for failure over and over again.
# So we want to dynamically find the right amount of evaluation processes, such
# that the total memory usage is not exceeded "too often".
# Here `threshold` defines the ratio of processes that should be allowed to
# fail due to memory constraints. Setting it too high might lead to aggressive
# subprocess killing and underutilizing compute resources. If it is too low,
# the number of concurrent jobs might shrink too slowly inducing a lot of
# loss in compute time due to interrupted evaluations.
# ! Like the rest of this module, I hate to use custom code with this,
# in particular there is a risk that terminating the process might leave
# the multiprocess queue broken.
mem_proc = list(self._get_memory_usage())
if sum(map(lambda x: x[1], mem_proc)) > self._memory_limit_mb:
log.info(
f"GAMA exceeded memory usage "
f"({self._mem_violations}, {self._mem_behaved})."
)
self._log_memory_usage()
self._mem_violations += 1
# Find the process with the most memory usage, that is not the main process
proc, _ = max(mem_proc[1:], key=lambda t: t[1])
n_evaluations = self._mem_violations + self._mem_behaved
fail_ratio = self._mem_violations / n_evaluations
if fail_ratio < threshold or len(self._processes) == 1:
# restart `pid`
log.info(f"Terminating {proc.pid} due to memory usage.")
self._stop_worker_process(proc)
log.info("Starting new evaluations process.")
self._start_worker_process()
else:
# More than one process left alive and a violation of the threshold,
# requires killing a subprocess.
self._mem_behaved = 0
self._mem_violations = 0
log.info(f"Terminating {proc.pid} due to memory usage.")
self._stop_worker_process(proc)
# todo: update the Future of the evaluation that was terminated.
def _log_memory_usage(self):
if not self._logfile:
return
mem_by_pid = self._get_memory_usage()
mem_str = ",".join([f"{proc.pid},{mem_mb}" for (proc, mem_mb) in mem_by_pid])
timestamp = datetime.datetime.now().isoformat()
with open(self._logfile, "a") as memory_log:
memory_log.write(f"{timestamp},{mem_str}\n")
def _get_memory_usage(self):
processes = [self._main_process] + self._processes
for process in processes:
try:
yield process, process.memory_info()[0] / (2 ** 20)
except NoSuchProcess:
# can never be the main process anyway
self._processes = [p for p in self._processes if p.pid != process.pid]
self._start_worker_process()
def evaluator_daemon(
input_queue: queue.Queue,
output_queue: queue.Queue,
command_queue: queue.Queue,
default_parameters: Optional[Dict] = None,
):
""" Function for daemon subprocess that evaluates functions from AsyncFutures.
Parameters
----------
input_queue: queue.Queue[AsyncFuture]
Queue to get AsyncFuture from.
Queue should be managed by multiprocessing.manager.
output_queue: queue.Queue[AsyncFuture]
Queue to put AsyncFuture to.
Queue should be managed by multiprocessing.manager.
command_queue: queue.Queue[Str]
Queue to put commands for the subprocess.
Queue should be managed by multiprocessing.manager.
default_parameters: Dict, optional (default=None)
Additional parameters to pass to AsyncFuture.Execute.
This is useful to avoid passing lots of repetitive data through AsyncFuture.
"""
try:
while True:
try:
command_queue.get(block=False)
break
except queue.Empty:
pass
try:
future = input_queue.get(block=False)
future.execute(default_parameters)
if future.result:
if isinstance(future.result, tuple):
result = future.result[0]
else:
result = future.result
if isinstance(result.error, MemoryError):
# Can't pickle MemoryErrors. Should work around this later.
result.error = "MemoryError"
gc.collect()
output_queue.put(future)
except (MemoryError, struct.error) as e:
future.result = None
future.exception = str(type(e))
gc.collect()
output_queue.put(future)
except queue.Empty:
pass
except Exception as e:
# There are no plans currently for recovering from any exception:
print(f"Stopping daemon:{type(e)}:{str(e)}")
traceback.print_exc() | 40.257062 | 88 | 0.619676 | 1,707 | 14,251 | 5.033978 | 0.253076 | 0.020482 | 0.012219 | 0.011637 | 0.111952 | 0.090306 | 0.04911 | 0.015594 | 0.015594 | 0.015594 | 0 | 0.003235 | 0.305803 | 14,251 | 354 | 89 | 40.257062 | 0.865359 | 0.390008 | 0 | 0.233831 | 0 | 0 | 0.063595 | 0.017032 | 0 | 0 | 0 | 0.002825 | 0 | 1 | 0.064677 | false | 0.014925 | 0.069652 | 0 | 0.18408 | 0.00995 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96e77ed1d364e77b22f4c6a50d95e8b575a56b59 | 585 | py | Python | utils/CSS单位替换脚本.py | hi-noikiy/ColorUI-H5 | f130bfedfe7cfa5b69aeb0258de7246b27fdbd01 | [
"MIT"
] | null | null | null | utils/CSS单位替换脚本.py | hi-noikiy/ColorUI-H5 | f130bfedfe7cfa5b69aeb0258de7246b27fdbd01 | [
"MIT"
] | null | null | null | utils/CSS单位替换脚本.py | hi-noikiy/ColorUI-H5 | f130bfedfe7cfa5b69aeb0258de7246b27fdbd01 | [
"MIT"
] | 1 | 2021-01-24T09:50:40.000Z | 2021-01-24T09:50:40.000Z | """
Replace the units in a file by a fixed ratio
"""
import re
path="E:/projects/ColorUI-H5/css/"
def main():
pattern = "(?P<value>\d+)upx"
file=open(path+"ColorUi-simplified.css","r",encoding="utf-8") #只读方式打开文件
file_new = open(path+"ColorUi-H5.css","w+",encoding="utf-8") #save the modified file; created if it does not exist
lines=file.readlines()
for line in lines:
str=re.sub(pattern, calculate, line)
file_new.write(str)
file.close()
file_new.close()
def calculate(matched):# unit conversion
value = int(matched.group('value'))
return str(int(round(value /2)))+'px'
if __name__ == "__main__":
main() | 24.375 | 81 | 0.634188 | 82 | 585 | 4.390244 | 0.585366 | 0.058333 | 0.066667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010331 | 0.17265 | 585 | 24 | 82 | 24.375 | 0.733471 | 0.071795 | 0 | 0 | 0 | 0 | 0.202247 | 0.09176 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.058824 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96ea0ab5e6f7dd7c4a15ce17744623a63d6b8494 | 836 | py | Python | test_player/showdown_replay_analyzer.py | kokoukohiro/poke-env | 9bc69328af6f7bfa1b5f261a54b3e677990f1e3c | [
"MIT"
] | null | null | null | test_player/showdown_replay_analyzer.py | kokoukohiro/poke-env | 9bc69328af6f7bfa1b5f261a54b3e677990f1e3c | [
"MIT"
] | null | null | null | test_player/showdown_replay_analyzer.py | kokoukohiro/poke-env | 9bc69328af6f7bfa1b5f261a54b3e677990f1e3c | [
"MIT"
] | null | null | null | import re
import urllib.request as ur
class ShowdownReplayAnalyzer:
def __init__(self,replaylist,lowestelo):
self.replaylist = replaylist
self.lowestelo = lowestelo
replayurl = 'https://replay.pokemonshowdown.com{replay}'
for replay in replaylist:
lines = self.__getinfo(replayurl.format(replay=replay))
for line in lines:
dealline = line.decode().strip()
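# Each decoded line comes from the raw replay page, which embeds the battle log
# (protocol messages such as "|move|..."); for now the lines are only printed for inspection.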
print(dealline)
def __getinfo(self,url):
req = ur.Request(url,headers={'User-Agent': 'Mozilla/5.0'})
f = ur.urlopen(req)
lines = f.readlines()
f.close()
return lines
def main():
replay=ShowdownReplayAnalyzer(['/gen8ou-1480854270','/gen8ou-1102257860'],1686)
if __name__ == '__main__':
main()
| 27.866667 | 84 | 0.592105 | 85 | 836 | 5.635294 | 0.564706 | 0.058455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047377 | 0.293062 | 836 | 29 | 85 | 28.827586 | 0.763113 | 0 | 0 | 0 | 0 | 0 | 0.132754 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.136364 | false | 0 | 0.090909 | 0 | 0.318182 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96f199a4c566520b5f745f4cbaf0782438e55113 | 316 | py | Python | submissions/decompress-run-length-encoded-list/solution.py | Wattyyy/LeetCode | 13a9be056d0a0c38c2f8c8222b11dc02cb25a935 | [
"MIT"
] | null | null | null | submissions/decompress-run-length-encoded-list/solution.py | Wattyyy/LeetCode | 13a9be056d0a0c38c2f8c8222b11dc02cb25a935 | [
"MIT"
] | 1 | 2022-03-04T20:24:32.000Z | 2022-03-04T20:31:58.000Z | submissions/decompress-run-length-encoded-list/solution.py | Wattyyy/LeetCode | 13a9be056d0a0c38c2f8c8222b11dc02cb25a935 | [
"MIT"
] | null | null | null | # https://leetcode.com/problems/decompress-run-length-encoded-list
class Solution:
def decompressRLElist(self, nums: List[int]) -> List[int]:
ret = []
N = len(nums)
for i in range(0, N, 2):
for _ in range(nums[i]):
ret.append(nums[i + 1])
return ret
| 26.333333 | 66 | 0.550633 | 42 | 316 | 4.119048 | 0.666667 | 0.080925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013699 | 0.306962 | 316 | 11 | 67 | 28.727273 | 0.776256 | 0.202532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96f22e59e8c29caf98fa35305618c165df1647f5 | 1,262 | py | Python | Aulas2-31/Aula6/exercicio3.py | matheusschuetz/TrabalhoPython | 953957898de633f8f2776681a45a1a15b68e80b9 | [
"MIT"
] | 1 | 2020-01-21T11:43:12.000Z | 2020-01-21T11:43:12.000Z | Aulas2-31/Aula6/exercicio3.py | matheusschuetz/TrabalhoPython | 953957898de633f8f2776681a45a1a15b68e80b9 | [
"MIT"
] | null | null | null | Aulas2-31/Aula6/exercicio3.py | matheusschuetz/TrabalhoPython | 953957898de633f8f2776681a45a1a15b68e80b9 | [
"MIT"
] | null | null | null | # Exercise 3 - foreach
#Write a program that reads the (4) grades of 10 students
#Store the grades and the names in lists
#Print:
# 1- the student's name
# 2- the student's average
# 3- the result (Approved if >= 7.0)
lista1 = []
nota1 = []
nota2 = []
nota3 = []
nota4 = []
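# nota1..nota4 each hold the four grades of students 1..4 respectively (see the prompts below).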
for i in range(1,5):
nome = input(f'digite o nome do aluno em sequencia nome{i}: ')
lista1.append(nome)
for i in range(1,5):
valor = int(input(f'Digite a nota{i} do aluno1: '))
nota1.append(valor)
for i in range(1,5):
valor = int(input(f'Digite a nota{i} do aluno2: '))
nota2.append(valor)
for i in range(1,5):
valor = int(input(f'Digite a nota{i} do aluno3: '))
nota3.append(valor)
for i in range(1,5):
valor = int(input(f'Digite a nota{i} do aluno4: '))
nota4.append(valor)
resultadoaluno1 = (nota1[0] + nota1[1] + nota1[2] + nota1[3]) / 4
print('A média do aluno1 foi:', resultadoaluno1)
resultadoaluno2 = (nota2[0] + nota2[1] + nota2[2] + nota2[3]) / 4
print('A média do aluno2 foi:', resultadoaluno2)
resultadoaluno3 = (nota3[0] + nota3[1] + nota3[2] + nota3[3]) / 4
print('A média do aluno3 foi:', resultadoaluno3)
resultadoaluno4 = (nota4[0] + nota4[1] + nota4[2] + nota4[3]) / 4
print('A média do aluno3 foi:', resultadoaluno4) | 32.358974 | 66 | 0.635499 | 207 | 1,262 | 3.874396 | 0.294686 | 0.043641 | 0.037406 | 0.068579 | 0.369077 | 0.351621 | 0.335411 | 0.245636 | 0.245636 | 0.245636 | 0 | 0.080758 | 0.20523 | 1,262 | 39 | 67 | 32.358974 | 0.718843 | 0.161648 | 0 | 0.178571 | 0 | 0 | 0.233111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96f300fa98b87a8fcb9eff14b133bde7d641b89d | 3,978 | py | Python | gprpy/mergeProfiles.py | ChMaDowns/GPRPyMini | 444cd9f834e8841988c9240f01656b851c1fd6b6 | [
"MIT"
] | null | null | null | gprpy/mergeProfiles.py | ChMaDowns/GPRPyMini | 444cd9f834e8841988c9240f01656b851c1fd6b6 | [
"MIT"
] | null | null | null | gprpy/mergeProfiles.py | ChMaDowns/GPRPyMini | 444cd9f834e8841988c9240f01656b851c1fd6b6 | [
"MIT"
] | null | null | null | import gprpy.gprpy as gp
import numpy as np
from scipy.ndimage import zoom
def mergeProfiles(file1,file2,outfile,gapfill=0):
'''
Merges two GPR profiles by placing the second one at the end
of the first one.
Make sure you preprocessed them in GPRPy and save them to have the
correct starting and end times for the profile, or to both start at
0 to just append the second profile at the end of the first profile.
INPUT:
file1 File name (including path) of the first profile
file2 File name (including path) of the second profile
outfile File name (including path) for the merged file
gapfill If there is a gap between the profiles, fill it with
zeros (0) or NaN ('NaN')? [default: 0]
'''
# Load the two profiles
profile1 = gp.gprpyProfile(file1)
profile2 = gp.gprpyProfile(file2)
# make sure starting and end times are the same
assert (profile1.twtt[0]==profile2.twtt[0] and profile1.twtt[-1]==profile2.twtt[-1]), "\n\nUse GPRPy to cut the profiles to the same two-way travel times\nCurrently: file 1 is %g ns to %g ns and file 2 is %g ns to %g ns \n" %(profile1.twtt[0],profile1.twtt[-1],profile2.twtt[0],profile2.twtt[-1])
# If they don't have the same number of samples,
# then we need to interpolate the data to make them fit
if len(profile1.twtt) > len(profile2.twtt):
zfac = len(profile1.twtt)/len(profile2.twtt)
profile2.data = zoom(profile2.data,[zfac,1])
elif len(profile1.twtt) < len(profile2.twtt):
zfac = len(profile2.twtt)/len(profile1.twtt)
profile1.data = zoom(profile1.data,[zfac,1])
profile1.twtt = profile2.twtt
# If they don't have the same along-profile sampling,
# need to interpolate the data such that it makes sense:
if np.diff(profile1.profilePos)[3] < np.diff(profile2.profilePos)[3]:
zfac = np.diff(profile2.profilePos)[3]/np.diff(profile1.profilePos)[3]
profile2.data = zoom(profile2.data,[1,zfac])
profile2.profilePos=zoom(profile2.profilePos,zfac)
elif np.diff(profile1.profilePos)[3] > np.diff(profile2.profilePos)[3]:
zfac = np.diff(profile1.profilePos)[3]/np.diff(profile2.profilePos)[3]
profile1.data = zoom(profile1.data,[1,zfac])
profile1.profilePos=zoom(profile1.profilePos,zfac)
# Now concatenate the profile positions
# In case someone didn't adjust their profile but just tries to merge them:
if abs(profile2.profilePos[0]) < 1e-5:
profile2.profilePos = profile2.profilePos + profile1.profilePos[-1]+np.diff(profile2.profilePos)[1]
# Otherwise they probably know what they are doing
# If there is a gap, create an array with zeros or NaNs
dx=np.diff(profile2.profilePos)[0]
    if profile2.profilePos[0] - profile1.profilePos[-1] > dx:
nfill = int(np.round((profile2.profilePos[0] -
profile1.profilePos[-1])/dx))
posfill = np.arange(0,nfill)*dx + profile1.profilePos[-1] + dx
datfill = np.empty(((profile2.data).shape[0],nfill))
if gapfill == 0:
datfill.fill(0)
else:
datfill.fill(np.NaN)
#datfill = np.zeros(((profile2.data).shape[0],nfill))
profile2.profilePos=np.append(posfill,profile2.profilePos)
profile2.data = np.hstack((datfill,profile2.data))
# Append profile positions
profile1.profilePos = np.append(profile1.profilePos,profile2.profilePos)
# Now merge them into profile 1
profile1.data = np.asmatrix(np.hstack((profile1.data,profile2.data)))
# Set history to shortest possible:
profile1.history = ["mygpr = gp.gprpyProfile()", "mygpr.importdata('%s.gpr')" %(outfile)]
profile1.info="Merged"
# Save the result in a .gpr file
profile1.save(outfile)
| 45.724138 | 301 | 0.646305 | 550 | 3,978 | 4.674545 | 0.28 | 0.112019 | 0.032672 | 0.056009 | 0.319331 | 0.192532 | 0.140023 | 0.092571 | 0.063788 | 0.063788 | 0 | 0.038257 | 0.244344 | 3,978 | 86 | 302 | 46.255814 | 0.817033 | 0.316742 | 0 | 0 | 0 | 0.02439 | 0.074941 | 0.010148 | 0 | 0 | 0 | 0 | 0.02439 | 1 | 0.02439 | false | 0 | 0.097561 | 0 | 0.121951 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
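# A minimal usage sketch for the mergeProfiles function above, assuming the package is importable
# as gprpy and that both profiles were already cut in GPRPy to the same two-way travel time range;
# the file names are placeholders, not real data files.
from gprpy.mergeProfiles import mergeProfiles

mergeProfiles("line_A.gpr", "line_B.gpr", "line_AB_merged", gapfill='NaN')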
96f30e108086b9a72b85c6856b9fcdd658bb5a38 | 217 | py | Python | Bittorrent/test.py | Leo-xh/C-S-and-P2P-demo | 7ded8b58f6284901ebf179bcb1850b653623724f | [
"MIT"
] | 1 | 2018-04-10T16:48:30.000Z | 2018-04-10T16:48:30.000Z | Bittorrent/test.py | Leo-xh/C-S-and-P2P-demo | 7ded8b58f6284901ebf179bcb1850b653623724f | [
"MIT"
] | null | null | null | Bittorrent/test.py | Leo-xh/C-S-and-P2P-demo | 7ded8b58f6284901ebf179bcb1850b653623724f | [
"MIT"
] | 3 | 2018-04-10T06:27:28.000Z | 2018-04-15T01:58:48.000Z | ##file = open("test.txt",'wb')
##file.seek(1024*1024)
##file.write(b'\x00')
##file.close()
file = open("bitfield", 'wb')
#for i in range(0,64,2):
# file.seek(16384*i, 0)
file.write(b'\xff'*17995)
file.close()
| 19.727273 | 30 | 0.599078 | 38 | 217 | 3.421053 | 0.578947 | 0.123077 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 0.124424 | 217 | 11 | 31 | 19.727273 | 0.552632 | 0.585253 | 0 | 0 | 0 | 0 | 0.17284 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96f3c2b86c4917957cf34cff68d704ffd6c85e6f | 5,830 | py | Python | scripts/rve.py | basic-ph/feat | 0660a34e5eeeab920d1ce8e139ab486e63bd419b | [
"MIT"
] | 2 | 2020-07-13T11:59:19.000Z | 2020-07-13T12:02:05.000Z | scripts/rve.py | basic-ph/feat | 0660a34e5eeeab920d1ce8e139ab486e63bd419b | [
"MIT"
] | null | null | null | scripts/rve.py | basic-ph/feat | 0660a34e5eeeab920d1ce8e139ab486e63bd419b | [
"MIT"
] | null | null | null | import argparse
import csv
import logging
import math
import time
from datetime import datetime
from pathlib import Path
from statistics import mean, stdev
import numpy as np
import fem
from feat import mesh
def main():
# argument parser creation and setup
desc = (
"Complete an RVE analysis for the evaluation of transverse modulus "
"of unidirectional fiber-reinforced composite materials"
)
parser = argparse.ArgumentParser(
prog="python rve.py",
description=desc,
)
parser.add_argument(
"version",
help="specify which version of the FEA code should be used",
choices=["base", "sparse", "vector"],
default="vector",
nargs="?"
)
args = parser.parse_args()
if args.version == "base":
analysis = fem.base_analysis
elif args.version == "sparse":
analysis = fem.sp_base_analysis
else: # if no version arg is provided the default is used
analysis = fem.vector_analysis
# DATA
Vf = 0.3 # fiber volume fraction
max_side = 70
radius = 1.0 # fiber radius
min_distance = 2.1 * radius
offset = 1.1 * radius
max_iter = 100000
coarse_cls = [0.5, 0.25, 0.12, 0.06] # [1.0, 0.5, 0.25, 0.12, 0.06] coarse element dimension (far from matrix-fiber boundary)
fine_cls = [cl / 2 for cl in coarse_cls] # fine element dimension (matrix-fiber boundary)
element_type = "triangle"
# max_number = 500
# max_side = math.sqrt(math.pi * radius**2 * max_number / Vf)
logger.info("-------- RVE ANALYSIS --------")
# logger.info("max number: %s - max side: %s", max_number, max_side)
logger.info("analysis function: %s", analysis)
    num_steps = 5 # number of steps from the smallest to the largest RVE
side_step = max_side / (num_steps*2) # distance between box vertices of different RVE
seeds = [96, 11, 50, 46, 88, 66, 89, 15, 33, 49]
# seeds = [44, 5, 34, 58, 11, 16, 91, 77, 84, 11]
# seeds = [24, 21, 65, 22, 35] # 3, 16, 18, 15, 92]
    num_samples = 5 # can't exceed seeds length
    data = [] # list of [s: sample id, n: number of fibers in the domain, coarse_cl, side, num_nodes, E2, {0|1}]
for p in range(num_samples):
centers = []
seed = seeds[p]
data_file = f"../data/rve_samples/sample-{Vf}-{max_side}-{seed}.csv"
with open(data_file, 'r', newline='') as f:
reader = csv.reader(f, quoting=csv.QUOTE_NONNUMERIC)
for row in reader:
centers.append(row)
# logger.debug("centers:\n%s", centers)
for s in range(num_steps):
r = num_steps - 1 - s # reversing succession, from small to large RVE
box_vertex = [r*side_step, r*side_step, 0.0]
box_side = max_side - (r*2*side_step)
filtered_centers = mesh.filter_centers(centers, radius, box_vertex, box_side)
moduli = [] # clean list used for mesh convergence validation
# logger.info("filtered centers:\n%s", filtered_centers)
for m in range(len(coarse_cls)):
filename = f"rve-{p}-{s}-{m}"
geo_path = "../data/happy/" + filename + ".geo"
msh_path = "../data/happy/" + filename + ".msh"
coarse_cl = coarse_cls[m]
fine_cl = fine_cls[m]
mesh_obj = mesh.create_mesh(
geo_path,
msh_path,
radius,
box_vertex,
box_side,
filtered_centers,
coarse_cl,
fine_cl
)
num_nodes = mesh_obj.points.shape[0]
# run FEM simulation
E2 = analysis(mesh_obj, element_type, post_process=True, vtk_filename=filename)
# E2 = analysis(mesh_obj, element_type)
logger.info("SAMPLE %s - STEP %s - MESH %s - nodes: %s - E2: %s", p, s, m, num_nodes, E2)
if m == 0: # first mesh
moduli.append(E2) # store the value obtained for mesh convergence validation
data.append([p, s, m, box_side, num_nodes, E2, 0]) # 0 means non-converged result
else:
moduli.append(E2)
prev_E2 = moduli[m-1]
rel_diff = abs(E2 - prev_E2) / prev_E2 # difference relative to precedent obtained estimate
if rel_diff < 0.0025: # 0.25%
data.append([p, s, m, box_side, num_nodes, E2, 1]) # 1 means converged result
logger.info("SAMPLE %s - STEP %s - MESH %s - converged!\n", p, s, m)
break # mesh convergence obtained, continue with the next random realization
else:
data.append([p, s, m, box_side, num_nodes, E2, 0]) # 0 means non-converged result
logger.info("SAMPLE %s - STEP %s - MESH %s - NOT converged!\n", p, s, m)
actual_date = datetime.now().strftime('%Y-%m-%dT%H-%M-%S')
data_file = f"../data/csv/{actual_date}.csv"
with open(data_file, 'w', newline='') as file:
writer = csv.writer(file, quoting=csv.QUOTE_NONNUMERIC)
writer.writerows(data)
logger.info("Output written to: %s", data_file)
if __name__ == "__main__":
# LOGGING (you can skip this)
log_lvl = logging.DEBUG
logger = logging.getLogger()
logger.setLevel(log_lvl)
handler = logging.StreamHandler()
handler.setLevel(log_lvl)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
start_time = time.time()
main()
print(f"--- {time.time() - start_time} seconds ---")
| 39.391892 | 131 | 0.568782 | 764 | 5,830 | 4.210733 | 0.342932 | 0.024868 | 0.006528 | 0.017408 | 0.135841 | 0.097606 | 0.080199 | 0.080199 | 0.064967 | 0.064967 | 0 | 0.03575 | 0.313894 | 5,830 | 147 | 132 | 39.659864 | 0.7685 | 0.213894 | 0 | 0.06087 | 0 | 0.008696 | 0.154066 | 0.018022 | 0 | 0 | 0 | 0 | 0 | 1 | 0.008696 | false | 0 | 0.095652 | 0 | 0.104348 | 0.008696 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96f4b86d65f49243b1bd3e199943c822dfb59e6f | 4,047 | py | Python | genome_designer/variants/tests/test_common.py | churchlab/millstone | ddb5d003a5b8a7675e5a56bafd5c432d9642b473 | [
"MIT"
] | 45 | 2015-09-30T14:55:33.000Z | 2021-06-28T02:33:30.000Z | genome_designer/variants/tests/test_common.py | churchlab/millstone | ddb5d003a5b8a7675e5a56bafd5c432d9642b473 | [
"MIT"
] | 261 | 2015-06-03T20:41:56.000Z | 2022-03-07T08:46:10.000Z | genome_designer/variants/tests/test_common.py | churchlab/millstone | ddb5d003a5b8a7675e5a56bafd5c432d9642b473 | [
"MIT"
] | 22 | 2015-06-04T20:43:10.000Z | 2022-02-27T08:27:34.000Z | """
Tests for variants/common.py.
"""
import os
from django.contrib.auth.models import User
from django.test import TestCase
from main.models import Dataset
from main.models import Project
from main.models import ReferenceGenome
from main.models import Variant
from main.models import VariantAlternate
from main.testing_util import create_common_entities_w_variants
from variants.dynamic_snp_filter_key_map import update_filter_key_map
from settings import PWD as GD_ROOT
from variants.common import determine_visible_field_names
from variants.common import extract_filter_keys
from variants.common import SymbolGenerator
from variants.common import update_parent_child_variant_fields
TEST_DIR = os.path.join(GD_ROOT, 'test_data', 'genbank_aligned')
TEST_ANNOTATED_VCF = os.path.join(TEST_DIR, 'bwa_align_annotated.vcf')
class TestCommon(TestCase):
def setUp(self):
user = User.objects.create_user('testuser', password='password',
email='test@test.com')
self.project = Project.objects.create(owner=user.get_profile(),
title='Test Project')
self.ref_genome = ReferenceGenome.objects.create(project=self.project,
label='refgenome')
# Make sure the reference genome has the required vcf keys.
update_filter_key_map(self.ref_genome, TEST_ANNOTATED_VCF)
self.vcf_dataset = Dataset.objects.create(
label='test_data_set',
type=Dataset.TYPE.VCF_FREEBAYES,
filesystem_location=TEST_ANNOTATED_VCF)
def test_extract_filter_keys(self):
"""Tests extracting filter keys.
"""
FILTER_EXPR = 'position > 5'
EXPECTED_FILTER_KEY_SET = set(['POSITION'])
self.assertEqual(EXPECTED_FILTER_KEY_SET,
set(extract_filter_keys(FILTER_EXPR, self.ref_genome)))
FILTER_EXPR = '(position < 5 & gt_type = 2) in ANY(1234, 4567)'
EXPECTED_FILTER_KEY_SET = set(['POSITION', 'GT_TYPE'])
self.assertEqual(EXPECTED_FILTER_KEY_SET,
set(extract_filter_keys(FILTER_EXPR, self.ref_genome)))
def test_determine_visible_field_names(self):
EXPECTED_VISIBLE_KEYS = ['INFO_EFF_EFFECT']
self.assertEqual(EXPECTED_VISIBLE_KEYS,
determine_visible_field_names(EXPECTED_VISIBLE_KEYS, '',
self.ref_genome))
def test_update_parent_child_variant_fields(self):
self.common_entities = create_common_entities_w_variants()
self.common_entities['samples'][0].add_child(
self.common_entities['samples'][1])
self.common_entities['samples'][0].add_child(
self.common_entities['samples'][2])
self.common_entities['samples'][2].add_child(
self.common_entities['samples'][3])
self.common_entities['samples'][4].add_child(
self.common_entities['samples'][5])
self.common_entities['samples'][5].add_child(
self.common_entities['samples'][6])
update_parent_child_variant_fields(
self.common_entities['alignment_group'])
v_1808 = Variant.objects.get(
reference_genome=self.common_entities['reference_genome'],
position=1808)
self.assertEqual(set(v_1808.get_alternates()), set(['A']))
vcc_1808 = v_1808.variantcallercommondata_set.get()
ve_for_uid = lambda uid: (
vcc_1808.variantevidence_set.get(experiment_sample__uid=uid))
ve_sample_3 = ve_for_uid(u'9dd7a7a1')
ve_sample_2 = ve_for_uid(u'9b19e708')
self.assertEqual(ve_sample_3.data['GT_TYPE'],2)
self.assertEqual(ve_sample_2.data['IN_CHILDREN'],1)
class TestSymbolGenerator(TestCase):
"""Tests the symbol generator used for symbolic manipulation.
"""
def test_generator(self):
symbol_maker = SymbolGenerator()
self.assertEqual('A', symbol_maker.next())
self.assertEqual('B', symbol_maker.next())
self.assertEqual('C', symbol_maker.next())
| 35.814159 | 78 | 0.690635 | 503 | 4,047 | 5.246521 | 0.270378 | 0.079576 | 0.08867 | 0.094733 | 0.277378 | 0.190603 | 0.103827 | 0.103827 | 0.103827 | 0.103827 | 0 | 0.019051 | 0.208797 | 4,047 | 112 | 79 | 36.133929 | 0.805122 | 0.047195 | 0 | 0.08 | 0 | 0 | 0.090151 | 0.005993 | 0 | 0 | 0 | 0 | 0.12 | 1 | 0.066667 | false | 0.013333 | 0.2 | 0 | 0.293333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96f4ce26904604a283c0e95683f79269e16a4240 | 6,933 | py | Python | vault/datadog_checks/vault/vault.py | jessehub/integrations-core | 76955b6e55beae7bc5c2fd25867955d2a3c8d5ef | [
"BSD-3-Clause"
] | null | null | null | vault/datadog_checks/vault/vault.py | jessehub/integrations-core | 76955b6e55beae7bc5c2fd25867955d2a3c8d5ef | [
"BSD-3-Clause"
] | null | null | null | vault/datadog_checks/vault/vault.py | jessehub/integrations-core | 76955b6e55beae7bc5c2fd25867955d2a3c8d5ef | [
"BSD-3-Clause"
] | null | null | null | # (C) Datadog, Inc. 2018
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)
from time import time as timestamp
import requests
from simplejson import JSONDecodeError
from datadog_checks.checks import AgentCheck
from datadog_checks.config import is_affirmative
from datadog_checks.utils.containers import hash_mutable
from .errors import ApiUnreachable
class Vault(AgentCheck):
CHECK_NAME = 'vault'
DEFAULT_API_VERSION = '1'
EVENT_LEADER_CHANGE = 'vault.leader_change'
SERVICE_CHECK_CONNECT = 'vault.can_connect'
SERVICE_CHECK_UNSEALED = 'vault.unsealed'
SERVICE_CHECK_INITIALIZED = 'vault.initialized'
HTTP_CONFIG_REMAPPER = {
'ssl_verify': {'name': 'tls_verify'},
'ssl_cert': {'name': 'tls_cert'},
'ssl_private_key': {'name': 'tls_private_key'},
'ssl_ca_cert': {'name': 'tls_ca_cert'},
'ssl_ignore_warning': {'name': 'tls_ignore_warning'},
}
def __init__(self, name, init_config, instances):
super(Vault, self).__init__(name, init_config, instances)
self.api_versions = {
'1': {'functions': {'check_leader': self.check_leader_v1, 'check_health': self.check_health_v1}}
}
self.config = {}
if 'client_token' in self.instance:
self.http.options['headers']['X-Vault-Token'] = self.instance['client_token']
def check(self, instance):
config = self.get_config(instance)
if config is None:
return
api = config['api']
tags = list(config['tags'])
# We access the version of the Vault API corresponding to each instance's `api_url`.
try:
api['check_leader'](config, tags)
api['check_health'](config, tags)
except ApiUnreachable:
raise
self.service_check(self.SERVICE_CHECK_CONNECT, AgentCheck.OK, tags=tags)
def check_leader_v1(self, config, tags):
url = config['api_url'] + '/sys/leader'
leader_data = self.access_api(url, tags)
is_leader = is_affirmative(leader_data.get('is_self'))
tags.append('is_leader:{}'.format('true' if is_leader else 'false'))
self.gauge('vault.is_leader', int(is_leader), tags=tags)
current_leader = leader_data.get('leader_address')
previous_leader = config['leader']
if config['detect_leader'] and current_leader:
if previous_leader is not None and current_leader != previous_leader:
self.event(
{
'timestamp': timestamp(),
'event_type': self.EVENT_LEADER_CHANGE,
'msg_title': 'Leader change',
'msg_text': 'Leader changed from `{}` to `{}`.'.format(previous_leader, current_leader),
'alert_type': 'info',
'source_type_name': self.CHECK_NAME,
'host': self.hostname,
'tags': tags,
}
)
config['leader'] = current_leader
def check_health_v1(self, config, tags):
url = config['api_url'] + '/sys/health'
health_params = {'standbyok': True, 'perfstandbyok': True}
health_data = self.access_api(url, tags, params=health_params)
cluster_name = health_data.get('cluster_name')
if cluster_name:
tags.append('cluster_name:{}'.format(cluster_name))
vault_version = health_data.get('version')
if vault_version:
tags.append('vault_version:{}'.format(vault_version))
unsealed = not is_affirmative(health_data.get('sealed'))
if unsealed:
self.service_check(self.SERVICE_CHECK_UNSEALED, AgentCheck.OK, tags=tags)
else:
self.service_check(self.SERVICE_CHECK_UNSEALED, AgentCheck.CRITICAL, tags=tags)
initialized = is_affirmative(health_data.get('initialized'))
if initialized:
self.service_check(self.SERVICE_CHECK_INITIALIZED, AgentCheck.OK, tags=tags)
else:
self.service_check(self.SERVICE_CHECK_INITIALIZED, AgentCheck.CRITICAL, tags=tags)
def get_config(self, instance):
instance_id = hash_mutable(instance)
config = self.config.get(instance_id)
if config is None:
config = {}
try:
api_url = instance['api_url']
api_version = api_url[-1]
if api_version not in self.api_versions:
self.log.warning(
'Unknown Vault API version `{}`, using version '
'`{}`'.format(api_version, self.DEFAULT_API_VERSION)
)
api_url = api_url[:-1] + self.DEFAULT_API_VERSION
api_version = self.DEFAULT_API_VERSION
config['api_url'] = api_url
config['api'] = self.api_versions[api_version]['functions']
except KeyError:
self.log.error('Vault configuration setting `api_url` is required')
return
config['tags'] = instance.get('tags', [])
# Keep track of the previous cluster leader to detect changes.
config['leader'] = None
config['detect_leader'] = is_affirmative(instance.get('detect_leader'))
self.config[instance_id] = config
return config
def access_api(self, url, tags, params=None):
try:
response = self.http.get(url, params=params)
response.raise_for_status()
json_data = response.json()
except requests.exceptions.HTTPError:
msg = 'The Vault endpoint `{}` returned {}.'.format(url, response.status_code)
self.service_check(self.SERVICE_CHECK_CONNECT, AgentCheck.CRITICAL, message=msg, tags=tags)
self.log.exception(msg)
raise ApiUnreachable
except JSONDecodeError:
msg = 'The Vault endpoint `{}` returned invalid json data.'.format(url)
self.service_check(self.SERVICE_CHECK_CONNECT, AgentCheck.CRITICAL, message=msg, tags=tags)
self.log.exception(msg)
raise ApiUnreachable
except requests.exceptions.Timeout:
msg = 'Vault endpoint `{}` timed out after {} seconds'.format(url, self.http.options['timeout'])
self.service_check(self.SERVICE_CHECK_CONNECT, AgentCheck.CRITICAL, message=msg, tags=tags)
self.log.exception(msg)
raise ApiUnreachable
except (requests.exceptions.RequestException, requests.exceptions.ConnectionError):
msg = 'Error accessing Vault endpoint `{}`'.format(url)
self.service_check(self.SERVICE_CHECK_CONNECT, AgentCheck.CRITICAL, message=msg, tags=tags)
self.log.exception(msg)
raise ApiUnreachable
return json_data
| 40.54386 | 112 | 0.613587 | 772 | 6,933 | 5.281088 | 0.207254 | 0.06181 | 0.07064 | 0.04415 | 0.281825 | 0.241599 | 0.214619 | 0.214619 | 0.172676 | 0.155997 | 0 | 0.002604 | 0.279821 | 6,933 | 170 | 113 | 40.782353 | 0.81394 | 0.035194 | 0 | 0.155556 | 0 | 0 | 0.148586 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044444 | false | 0 | 0.051852 | 0 | 0.185185 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
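# A minimal sketch of driving the Vault check above directly, the way a local test harness might;
# the instance values (URL, tags) are placeholders and only use keys the check itself reads.
instance = {'api_url': 'http://localhost:8200/v1', 'detect_leader': True, 'tags': ['env:dev']}
check = Vault('vault', {}, [instance])
check.check(instance)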
96f5bffbf599d71755927d4883fc6254ba3f55d4 | 6,205 | py | Python | lib/tests/streamlit/arrow_altair_test.py | ChangHoon-Sung/streamlit | 83e0b80d2fa13e29e83d092a9fc4d946460bbf73 | [
"Apache-2.0"
] | 1 | 2019-11-01T08:37:00.000Z | 2019-11-01T08:37:00.000Z | lib/tests/streamlit/arrow_altair_test.py | ChangHoon-Sung/streamlit | 83e0b80d2fa13e29e83d092a9fc4d946460bbf73 | [
"Apache-2.0"
] | 35 | 2021-10-12T04:41:39.000Z | 2022-03-28T04:50:45.000Z | lib/tests/streamlit/arrow_altair_test.py | AlexRogalskiy/streamlit | d153db37d97faada87bf88972886cda5a624f8c8 | [
"Apache-2.0"
] | null | null | null | # Copyright 2018-2022 Streamlit Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
from datetime import date
from functools import reduce
import altair as alt
import pandas as pd
from tests import testutil
import streamlit as st
from streamlit.elements import arrow_altair as altair
from streamlit.elements.arrow_altair import ChartType
from streamlit.type_util import bytes_to_data_frame
def _deep_get(dictionary, *keys):
return reduce(
lambda d, key: d.get(key, None) if isinstance(d, dict) else None,
keys,
dictionary,
)
class ArrowAltairTest(testutil.DeltaGeneratorTestCase):
"""Test ability to marshall arrow_altair_chart proto."""
def test_altair_chart(self):
"""Test that it can be called with args."""
df = pd.DataFrame([["A", "B", "C", "D"], [28, 55, 43, 91]], index=["a", "b"]).T
chart = alt.Chart(df).mark_bar().encode(x="a", y="b")
EXPECTED_DATAFRAME = pd.DataFrame(
{
"a": ["A", "B", "C", "D"],
"b": [28, 55, 43, 91],
}
)
st._arrow_altair_chart(chart)
proto = self.get_delta_from_queue().new_element.arrow_vega_lite_chart
self.assertEqual(proto.HasField("data"), False)
self.assertEqual(len(proto.datasets), 1)
pd.testing.assert_frame_equal(
bytes_to_data_frame(proto.datasets[0].data.data), EXPECTED_DATAFRAME
)
spec_dict = json.loads(proto.spec)
self.assertEqual(
spec_dict["encoding"],
{
"y": {"field": "b", "type": "quantitative"},
"x": {"field": "a", "type": "nominal"},
},
)
self.assertEqual(spec_dict["data"], {"name": proto.datasets[0].name})
self.assertEqual(spec_dict["mark"], "bar")
self.assertTrue("config" in spec_dict)
self.assertTrue("encoding" in spec_dict)
def test_date_column_utc_scale(self):
"""Test that columns with date values have UTC time scale"""
df = pd.DataFrame(
{"index": [date(2019, 8, 9), date(2019, 8, 10)], "numbers": [1, 10]}
).set_index("index")
chart = altair._generate_chart(ChartType.LINE, df)
st._arrow_altair_chart(chart)
proto = self.get_delta_from_queue().new_element.arrow_vega_lite_chart
spec_dict = json.loads(proto.spec)
# The x axis should have scale="utc", because it uses date values.
x_scale = _deep_get(spec_dict, "encoding", "x", "scale", "type")
self.assertEqual(x_scale, "utc")
# The y axis should _not_ have scale="utc", because it doesn't
# use date values.
y_scale = _deep_get(spec_dict, "encoding", "y", "scale", "type")
self.assertNotEqual(y_scale, "utc")
class ArrowChartsTest(testutil.DeltaGeneratorTestCase):
"""Test Arrow charts."""
def test_arrow_line_chart(self):
"""Test st._arrow_line_chart."""
df = pd.DataFrame([[20, 30, 50]], columns=["a", "b", "c"])
EXPECTED_DATAFRAME = pd.DataFrame(
[[0, "a", 20], [0, "b", 30], [0, "c", 50]],
index=[0, 1, 2],
columns=["index", "variable", "value"],
)
st._arrow_line_chart(df)
proto = self.get_delta_from_queue().new_element.arrow_vega_lite_chart
chart_spec = json.loads(proto.spec)
self.assertEqual(chart_spec["mark"], "line")
pd.testing.assert_frame_equal(
bytes_to_data_frame(proto.datasets[0].data.data),
EXPECTED_DATAFRAME,
)
def test_arrow_line_chart_with_generic_index(self):
"""Test st._arrow_line_chart with a generic index."""
df = pd.DataFrame([[20, 30, 50]], columns=["a", "b", "c"])
df.set_index("a", inplace=True)
EXPECTED_DATAFRAME = pd.DataFrame(
[[20, "b", 30], [20, "c", 50]],
index=[0, 1],
columns=["a", "variable", "value"],
)
st._arrow_line_chart(df)
proto = self.get_delta_from_queue().new_element.arrow_vega_lite_chart
chart_spec = json.loads(proto.spec)
self.assertEqual(chart_spec["mark"], "line")
pd.testing.assert_frame_equal(
bytes_to_data_frame(proto.datasets[0].data.data),
EXPECTED_DATAFRAME,
)
def test_arrow_area_chart(self):
"""Test st._arrow_area_chart."""
df = pd.DataFrame([[20, 30, 50]], columns=["a", "b", "c"])
EXPECTED_DATAFRAME = pd.DataFrame(
[[0, "a", 20], [0, "b", 30], [0, "c", 50]],
index=[0, 1, 2],
columns=["index", "variable", "value"],
)
st._arrow_area_chart(df)
proto = self.get_delta_from_queue().new_element.arrow_vega_lite_chart
chart_spec = json.loads(proto.spec)
self.assertEqual(chart_spec["mark"], "area")
pd.testing.assert_frame_equal(
bytes_to_data_frame(proto.datasets[0].data.data),
EXPECTED_DATAFRAME,
)
def test_arrow_bar_chart(self):
"""Test st._arrow_bar_chart."""
df = pd.DataFrame([[20, 30, 50]], columns=["a", "b", "c"])
EXPECTED_DATAFRAME = pd.DataFrame(
[[0, "a", 20], [0, "b", 30], [0, "c", 50]],
index=[0, 1, 2],
columns=["index", "variable", "value"],
)
st._arrow_bar_chart(df)
proto = self.get_delta_from_queue().new_element.arrow_vega_lite_chart
chart_spec = json.loads(proto.spec)
self.assertEqual(chart_spec["mark"], "bar")
pd.testing.assert_frame_equal(
bytes_to_data_frame(proto.datasets[0].data.data),
EXPECTED_DATAFRAME,
)
| 35.457143 | 87 | 0.604029 | 808 | 6,205 | 4.428218 | 0.231436 | 0.033818 | 0.018446 | 0.026831 | 0.492733 | 0.454723 | 0.408329 | 0.408329 | 0.408329 | 0.408329 | 0 | 0.026157 | 0.254472 | 6,205 | 174 | 88 | 35.66092 | 0.747298 | 0.159549 | 0 | 0.411765 | 0 | 0 | 0.053886 | 0 | 0 | 0 | 0 | 0 | 0.151261 | 1 | 0.058824 | false | 0 | 0.084034 | 0.008403 | 0.168067 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96f743b89cd8297d644907e7ca8b728d94812284 | 425 | py | Python | mros/ro/views.py | blocknodes/mros | 9b70a1751e171ef826dbb4b5e9fee32c9e1dd0c4 | [
"MIT"
] | null | null | null | mros/ro/views.py | blocknodes/mros | 9b70a1751e171ef826dbb4b5e9fee32c9e1dd0c4 | [
"MIT"
] | null | null | null | mros/ro/views.py | blocknodes/mros | 9b70a1751e171ef826dbb4b5e9fee32c9e1dd0c4 | [
"MIT"
] | null | null | null | from django.shortcuts import render
from django.core.paginator import Paginator
from .models import MeetingRoom, MeetingRoomBinding
# Create your views here.
def index(request):
rooms = MeetingRoom.objects.all()
paginator = Paginator(rooms, 2)
page_num = request.GET.get('page')
page_objs = paginator.get_page(page_num)
context = {'rooms': page_objs}
return render(request, 'ro/index.html', context)
| 32.692308 | 52 | 0.741176 | 55 | 425 | 5.636364 | 0.527273 | 0.064516 | 0.070968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002786 | 0.155294 | 425 | 12 | 53 | 35.416667 | 0.860724 | 0.054118 | 0 | 0 | 0 | 0 | 0.055 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.3 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
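# A minimal URLconf sketch wiring up the paginated index view above; the route and app module
# layout are assumptions, not taken from the project.
from django.urls import path
from . import views

urlpatterns = [
    path('', views.index, name='index'),
]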
96f79fd746e74084abf15ce5e4c8483eeeac1e07 | 8,178 | py | Python | Acquisition/io_operations.py | LCAV/Lauzhack2020 | c12a788daac869acc675fda760cfec65850791ae | [
"CC0-1.0"
] | null | null | null | Acquisition/io_operations.py | LCAV/Lauzhack2020 | c12a788daac869acc675fda760cfec65850791ae | [
"CC0-1.0"
] | null | null | null | Acquisition/io_operations.py | LCAV/Lauzhack2020 | c12a788daac869acc675fda760cfec65850791ae | [
"CC0-1.0"
] | null | null | null | #! /usr/bin/env python3
# -*- coding: utf-8 -*-
"""
io_operations.py: Some input-output operations used throughout the repository.
"""
import datetime
import json
import time
import os
import numpy as np
def get_available_files(output_dir, exclude=[]):
available = [f[:-4] for f in os.listdir(output_dir) if f[-3:] == 'csv']
available.sort()
[available.remove(excl) for excl in exclude if excl in available]
print('available files:')
for i, a in enumerate(available):
try:
value = datetime.datetime.fromtimestamp(int(a))
print(i, ':', a, '\t', value.strftime('%Y-%m-%d %H:%M:%S'))
except (TypeError, ValueError):
print(i, ':', a)
return available
def to_int_if_possible(input):
input = input.replace(' ', '')
try:
num = int(input)
except ValueError:
return input
return num
def make_dirs_safe(path):
""" Make directory only if it does not exist yet. """
dirname = os.path.dirname(path)
if not os.path.exists(dirname):
os.makedirs(dirname)
def read_json(file_json):
with open(file_json, 'r') as json_data:
return json.load(json_data)
def read_dataset(dataset_name, context=None):
''' Return panda dataframe of dataset, read from csv.
:param dataset_name: name of csv file.
:param context: optional, if given then all anchors that do not appear in context are ignored.
'''
import pandas as pd
if dataset_name[-4:] != '.csv':
dataset_name += '.csv'
# Sample 100 rows of data to determine dtypes.
df_test = pd.read_csv(dataset_name, nrows=100)
float_cols = [c for c in df_test if df_test[c].dtype == "float64"]
float32_cols = {c: np.float32 for c in float_cols}
# read data in correct precision.
raw_data = pd.read_csv(dataset_name, engine='c', dtype=float32_cols)
if context is not None:
valid_anchors = raw_data.anchor_id.isin(np.append(context.anchor_ids, np.nan))
if np.sum(valid_anchors) == 0:
raise ValueError('file {} does not contain any valid measurements.'.format(dataset_name))
print('number of measurements from unknown anchors: {} / {}'.format(
np.sum(~valid_anchors), len(raw_data)))
raw_data = raw_data[valid_anchors]
# convert anchor_id to string.
raw_data.loc[:, 'anchor_id'] = raw_data.loc[:, 'anchor_id'].astype(str)
# remove trailing white spaces
raw_data['anchor_id'] = raw_data['anchor_id'].map(str.strip)
# TODO(FD) this used to be necessary for pozyx, but not anymore
# and it actually messed up results at some point. I removed it for now
    # but we should do it and make sure that this has the correct effect.
# raw_data = raw_data.sort_values('timestamp')
return raw_data
def find_next(output_dir, mask):
""" Increase i until file of structure output_dir + mask.format(i) does not exist. """
i = 0
while True:
file_path = output_dir + mask.format(i)
if os.path.exists(file_path):
i += 1
else:
return file_path
def parse_config(config_json):
""" Parse configuration json file
:param str config_json: path to config json file, relative to the repository's root
directory (where this file is stored).
:return: configuration dict
"""
from os import path
rootdir = path.dirname(path.abspath(__file__)) + "/"
try:
print("reading", config_json)
config = read_json(config_json)
try:
config["anchors_file"] = rootdir + config["anchors_file"]
if not path.isfile(config["anchors_file"]):
config["anchors_file"] = path.abspath(config["anchors_file"])
except:
print('no anchors_file found.')
try:
config["calibration_file"] = rootdir + config["calibration_file"]
if not path.isfile(config["calibration_file"]):
config["calibration_file"] = path.abspath(config["calibration_file"])
except:
print('no calibration_file found.')
except Exception as e:
print(
"Warning: did not find a valid config file at {}. Continuing with empty parameters. ".format(config_json))
print(e)
config = {"systems": []}
return config
def prepare_output(output_dir='./', output_name=''):
""" Creates output directory and output file.
:param str output_dir: output directory
:param str output_name: output name (optional)
    :return output file path. If output_name is not given, it is set to <current timestamp in seconds>.csv.
"""
if output_name.find('/') >= 0:
raise NameError('output_name needs to be a file name without directory.')
if output_name == '':
output_name = "{}.csv".format(int(time.time()))
if output_dir == '':
output_dir = './'
elif output_dir[-1] != '/':
output_dir = output_dir + '/'
make_dirs_safe(output_dir)
outfile = output_dir + output_name
# if os.path.exists(outfile):
# raise NameError("File {} already exists. Don't want to overwrite".format(outfile))
return outfile
# TODO: compare this with Context. We could
# potentially replace one by the other.
def fill_from_file(anchors_file, system_id, anchors_position=None, anchors_orientation=None, anchors_ids=None,
anchors_scale=None):
""" Read data about anchors from file.
:param anchors_file: file name.
:param system_id: system id to consider.
:param anchors_position: empty dict.
:param anchors_orientation: empty dict.
:param anchors_ids: empty array.
:param anchors_scale: empty dict. For now used only by Pixlive.
"""
import ubiment_parameters as UBI
with open(anchors_file, 'r') as af:
headers = af.readline()
for line in af:
spl = line.split(',')
this_system_id = int(spl[UBI.anchor_system_id])
if this_system_id == system_id:
anchor_id = to_int_if_possible(spl[UBI.anchor_anchor_id])
position = [
float(spl[UBI.anchor_px]),
float(spl[UBI.anchor_py]),
float(spl[UBI.anchor_pz])
]
try:
orientation = [
float(spl[UBI.anchor_theta_x]),
float(spl[UBI.anchor_theta_y]),
float(spl[UBI.anchor_theta_z])
]
except IndexError:
orientation = [0, 0, 0]
try:
scale = [
float(spl[UBI.anchor_scale_x]),
float(spl[UBI.anchor_scale_y])
]
except IndexError:
scale = [1, 1]
if anchors_position is not None:
anchors_position[anchor_id] = position
if anchors_orientation is not None:
anchors_orientation[anchor_id] = orientation
if anchors_ids is not None:
anchors_ids.append(anchor_id)
if anchors_scale is not None:
anchors_scale[anchor_id] = scale
def generate_empty_anchors_file(filename):
import ubiment_parameters as UBI
with open(filename, 'w') as f:
start = True
for a in UBI.anchor_fields:
if not start:
f.write(',')
f.write(a)
start = False
def extract_safe(dict_, key):
if key in dict_:
return dict_[key]
else:
return None
def save_params(filename, **kwargs):
import json
make_dirs_safe(filename)
for key in kwargs.keys():
try:
# convert numpy arrays to lists because they are ugly otherwise.
kwargs[key] = kwargs[key].tolist()
except AttributeError as e:
pass
with open(filename, 'w') as fp:
json.dump(kwargs, fp, indent=4)
print('saved parameters as', filename)
| 32.452381 | 118 | 0.597212 | 1,042 | 8,178 | 4.522073 | 0.268714 | 0.02674 | 0.025467 | 0.028862 | 0.080645 | 0.025891 | 0.01528 | 0 | 0 | 0 | 0 | 0.00524 | 0.299951 | 8,178 | 251 | 119 | 32.581673 | 0.817817 | 0.220836 | 0 | 0.115385 | 0 | 0 | 0.091771 | 0 | 0 | 0 | 0 | 0.007968 | 0 | 1 | 0.076923 | false | 0.00641 | 0.064103 | 0 | 0.205128 | 0.064103 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
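# A short usage sketch of the helpers above, assuming the module is importable as io_operations;
# the paths and config file name are placeholders.
from io_operations import parse_config, prepare_output, find_next

config = parse_config("config.json")            # falls back to {"systems": []} if the file is bad
outfile = prepare_output("results/")            # e.g. "results/1607012345.csv"
next_log = find_next("results/", "run_{}.csv")  # first "results/run_<i>.csv" that does not exist yet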
96f978cec5aef77ba7742ff595f0727b3045dce9 | 2,462 | py | Python | tasks/build.py | dtczest/syrupy | c37d6521852c96cf1ae01873c02b94410d38b663 | [
"Apache-2.0"
] | 147 | 2019-11-24T22:44:39.000Z | 2022-03-28T17:19:34.000Z | tasks/build.py | dtczest/syrupy | c37d6521852c96cf1ae01873c02b94410d38b663 | [
"Apache-2.0"
] | 560 | 2019-11-19T19:15:19.000Z | 2022-03-18T19:29:14.000Z | tasks/build.py | dtczest/syrupy | c37d6521852c96cf1ae01873c02b94410d38b663 | [
"Apache-2.0"
] | 13 | 2020-03-07T23:23:10.000Z | 2022-01-25T17:05:07.000Z | import os
import re
from invoke import task
from setup import install_requires
from .utils import ctx_run
def _parse_min_versions(requirements):
result = []
for req in sorted(requirements):
match = re.match(r"([\w_]+)>=([^,]+),.*", req)
if match is None:
continue
pkg_name = match.group(1)
min_version = match.group(2)
result.append(f"{pkg_name}=={min_version}")
return result
@task
def requirements(ctx, upgrade=False):
"""
Build test & dev requirements lock file
"""
args = [
"--no-emit-find-links",
"--no-emit-index-url",
"--allow-unsafe",
"--rebuild",
]
if upgrade:
args.append("--upgrade")
ctx_run(
ctx,
f"echo '-e .[dev]' | python -m piptools compile "
f"{' '.join(args)} - -qo- | sed '/^-e / d' > dev_requirements.txt",
)
with open("min_requirements.constraints", "w", encoding="utf-8") as f:
min_requirements = _parse_min_versions(install_requires)
f.write("\n".join(min_requirements))
f.write("\n")
@task
def clean(ctx):
"""
Remove build files e.g. package, distributable, compiled etc.
"""
ctx_run(ctx, "rm -rf *.egg-info dist build __pycache__ .pytest_cache artifacts/*")
@task(pre=[clean])
def dist(ctx):
"""
Generate version from scm and build package distributable
"""
ctx_run(ctx, "python setup.py sdist bdist_wheel")
@task
def publish(ctx, dry_run=True):
"""
Upload built package to pypi
"""
repo_url = "--repository-url https://test.pypi.org/legacy/" if dry_run else ""
ctx_run(ctx, f"twine upload --skip-existing {repo_url} dist/*")
@task(pre=[dist])
def release(ctx, dry_run=True):
"""
Build and publish package to pypi index based on scm version
"""
from semver import parse_version_info
if not dry_run and not os.environ.get("CI"):
print("This is a CI only command")
exit(1)
# get version created in build
with open("version.txt", "r", encoding="utf-8") as f:
version = str(f.read())
try:
should_publish_to_pypi = not dry_run and parse_version_info(version)
except ValueError:
should_publish_to_pypi = False
# publish to test to verify builds
if dry_run:
publish(ctx, dry_run=True)
# publish to pypi if test succeeds
if should_publish_to_pypi:
publish(ctx, dry_run=False)
| 24.62 | 86 | 0.615353 | 335 | 2,462 | 4.370149 | 0.41791 | 0.032787 | 0.02459 | 0.032787 | 0.047814 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002712 | 0.251015 | 2,462 | 99 | 87 | 24.868687 | 0.791215 | 0.14013 | 0 | 0.05 | 0 | 0.016667 | 0.244237 | 0.025993 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.1 | 0 | 0.216667 | 0.016667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96fa8ecbff91a15e0a540ef770f260b4e75ea43d | 15,880 | py | Python | src/serialbox-python/serialbox/savepoint.py | elsagermann/serialbox | c590561d0876f3ce9a07878e4862a46003a37879 | [
"BSD-2-Clause"
] | 10 | 2017-04-18T14:28:07.000Z | 2019-10-23T03:22:16.000Z | src/serialbox-python/serialbox/savepoint.py | elsagermann/serialbox | c590561d0876f3ce9a07878e4862a46003a37879 | [
"BSD-2-Clause"
] | 172 | 2017-02-16T14:24:33.000Z | 2019-11-06T08:46:34.000Z | src/serialbox-python/serialbox/savepoint.py | elsagermann/serialbox | c590561d0876f3ce9a07878e4862a46003a37879 | [
"BSD-2-Clause"
] | 21 | 2016-12-15T15:22:02.000Z | 2019-10-02T09:40:10.000Z | #!/usr/bin/python3
# -*- coding: utf-8 -*-
##===-----------------------------------------------------------------------------*- Python -*-===##
##
## S E R I A L B O X
##
## This file is distributed under terms of BSD license.
## See LICENSE.txt for more information.
##
##===------------------------------------------------------------------------------------------===##
##
## This file contains the savepoint implementation of the Python Interface.
##
##===------------------------------------------------------------------------------------------===##
from abc import ABCMeta
from ctypes import c_char_p, c_void_p, c_int, Structure, POINTER, c_size_t
from .common import get_library, to_c_string
from .error import invoke, SerialboxError
from .metainfomap import MetainfoMap, MetainfoImpl
from .type import StringTypes
from .util import levenshtein
lib = get_library()
class SavepointImpl(Structure):
""" Mapping of serialboxSavepoint_t """
_fields_ = [("impl", c_void_p), ("ownsData", c_int)]
def register_library(library):
library.serialboxSavepointCreate.argtypes = [c_char_p]
library.serialboxSavepointCreate.restype = POINTER(SavepointImpl)
library.serialboxSavepointCreateFromSavepoint.argtypes = [POINTER(SavepointImpl)]
library.serialboxSavepointCreateFromSavepoint.restype = POINTER(SavepointImpl)
library.serialboxSavepointDestroy.argtypes = [POINTER(SavepointImpl)]
library.serialboxSavepointDestroy.restype = None
library.serialboxSavepointGetName.argtypes = [POINTER(SavepointImpl)]
library.serialboxSavepointGetName.restype = c_char_p
library.serialboxSavepointEqual.argtypes = [POINTER(SavepointImpl), POINTER(SavepointImpl)]
library.serialboxSavepointEqual.restype = c_int
library.serialboxSavepointToString.argtypes = [POINTER(SavepointImpl)]
library.serialboxSavepointToString.restype = c_char_p
library.serialboxSavepointHash.argtypes = [POINTER(SavepointImpl)]
library.serialboxSavepointHash.restype = c_size_t
library.serialboxSavepointGetMetainfo.argtypes = [POINTER(SavepointImpl)]
library.serialboxSavepointGetMetainfo.restype = POINTER(MetainfoImpl)
# ===--------------------------------------------------------------------------------------------===
# Savepoint
# ==---------------------------------------------------------------------------------------------===
class Savepoint(object):
"""Savepoints are used within the :class:`Serializer <serialbox.Serializer>` to discriminate
fields at different points in time. Savepoints in the :class:`Serializer <serialbox.Serializer>`
are unique and primarily identified by their :attr:`name <serialbox.Savepoint.name>`
>>> savepoint = Savepoint('savepoint')
>>> savepoint.name
'savepoint'
>>>
and further distinguished by their :attr:`metainfo <serialbox.Savepoint.metainfo>`
>>> savepoint = Savepoint('savepoint', {'key': 5})
>>> savepoint.metainfo
<MetainfoMap {"key": 5}>
>>>
"""
def __init__(self, name, metainfo=None, impl=None):
"""Initialize the Savepoint.
This method prepares the Savepoint for usage and gives a name, which is the only required
information for the savepoint to be usable. Meta-information can be added after the
initialization has been performed.
:param str name: Name of the savepoint
:param dict metainfo: {Key:value} pair dictionary used for initializing the meta-information
of the Savepont
:param SavepointImpl impl: Directly set the implementation pointer [internal use]
:raises serialbox.SerialboxError: if Savepoint could not be initialized
"""
if impl:
self.__savepoint = impl
else:
namestr = to_c_string(name)[0]
self.__savepoint = invoke(lib.serialboxSavepointCreate, namestr)
if metainfo:
if isinstance(metainfo, MetainfoMap):
metainfo = metainfo.to_dict()
metainfomap = self.metainfo
for key, value in metainfo.items():
metainfomap.insert(key, value)
@property
def name(self):
"""Name of the Savepoint.
>>> s = Savepoint('savepoint')
>>> s.name
'savepoint'
>>>
:return str: Name of the savepoint
:rtype: str
"""
return invoke(lib.serialboxSavepointGetName, self.__savepoint).decode()
@property
def metainfo(self):
"""Refrence to the meta-information of the Savepoint.
>>> s = Savepoint('savepoint', {'key': 5})
>>> s.metainfo['key']
5
>>> type(s.metainfo)
<class 'serialbox.metainfomap.MetainfoMap'>
>>> s.metainfo.insert('key2', 'str')
>>> s
<MetainfoMap {"key": 5, "key2": str}>
>>>
        :return: Reference to the meta-information map
:rtype: :class:`MetainfoMap <serialbox.MetainfoMap>`
"""
return MetainfoMap(impl=invoke(lib.serialboxSavepointGetMetainfo, self.__savepoint))
def clone(self):
"""Clone the Savepoint by performing a deepcopy.
>>> s = Savepoint('savepoint', {'key': 5})
>>> s_clone = s.clone()
>>> s.metainfo.clear()
>>> s_clone
<Savepoint sp {"key": 5}>
>>>
:return: Clone of the savepoint
:rtype: Savepoint
"""
return Savepoint('',
impl=invoke(lib.serialboxSavepointCreateFromSavepoint, self.__savepoint))
def __eq__(self, other):
"""Test for equality.
Savepoints compare equal if their :attr:`names <serialbox.Savepoint.name>` and
:attr:`metainfos <serialbox.Savepoint.metainfo>` compare equal.
>>> s1 = Savepoint('savepoint', {'key': 'str'})
>>> s2 = Savepoint('savepoint', {'key': 5})
>>> s1 == s2
False
>>>
:return: `True` if self == other, `False` otherwise
:rtype: bool
"""
return bool(invoke(lib.serialboxSavepointEqual, self.__savepoint, other.__savepoint))
def __ne__(self, other):
"""Test for inequality.
Savepoints compare equal if their :attr:`names <serialbox.Savepoint.name>` and
:attr:`metainfos <serialbox.Savepoint.metainfo>` compare equal.
>>> s1 = Savepoint('savepoint', {'key': 'str'})
>>> s2 = Savepoint('savepoint', {'key': 5})
>>> s1 != s2
True
>>>
:return: `True` if self != other, `False` otherwise
:rtype: bool
"""
return not self.__eq__(other)
def impl(self):
return self.__savepoint
def __del__(self):
invoke(lib.serialboxSavepointDestroy, self.__savepoint)
def __repr__(self):
return '<Savepoint {0}>'.format(self.__str__())
def __str__(self):
return invoke(lib.serialboxSavepointToString, self.__savepoint).decode()
def __hash__(self):
return invoke(lib.serialboxSavepointHash, self.__savepoint)
# ===--------------------------------------------------------------------------------------------===
# SavepointCollection
# ==---------------------------------------------------------------------------------------------===
class SavepointCollection(object, metaclass=ABCMeta):
"""Collection of savepoints. A collection can be obtained by using the
:attr:`savepoint <serialbox.Serializer.savepoint>` attribute of the
:class:`Serializer <serialbox.Serializer>`.
>>> ser = Serializer(OpenModeKind.Write, '.', 'field')
>>> ser.register_savepoint(Savepoint('s1'))
>>> ser.register_savepoint(Savepoint('s2'))
>>> isinstance(ser.savepoint, SavepointCollection)
True
        >>> ser.savepoint.savepoints()
[<Savepoint s1 {}>, <Savepoint s2 {}>]
>>> ser.savepoint.as_savepoint()
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "savepoint.py", line 227, in as_savepoint
raise SerialboxError(errstr)
serialbox.error.SerialboxError: Savepoint is ambiguous. Candidates are:
s1 {}
s2 {}
>>>
"""
def savepoints(self):
""" Get the list of savepoints in this collection. The savepoints are ordered in the way
they were inserted.
:return: List of savepoints in the collection.
:rtype: :class:`list` [:class:`Savepoint <serialbox.Savepoint>`]
"""
raise NotImplementedError()
def as_savepoint(self):
""" Return the unique savepoint in the list or raise an
:class:`SerialboxError <serialbox.SerialboxError>` if the list has more than 1 element.
:return: Unique savepoint in this collection.
:rtype: Savepoint
:raises serialbox.SerialboxError: if list has more than one Savepoint
"""
num_savepoints = len(self.savepoints())
if num_savepoints == 1:
return self.savepoints()[0]
if num_savepoints > 1:
errstr = "Savepoint is ambiguous. Candidates are:\n"
for sp in self.savepoints():
errstr += " {0}\n".format(str(sp))
raise SerialboxError(errstr)
else:
raise SerialboxError("SavepointCollection is empty")
def __str__(self):
s = "["
for sp in self.savepoints():
s += sp.__str__() + ", "
return s[:-2] + "]"
def __repr__(self):
return '<SavepointCollection {0}>'.format(self.__str__())
def transformed_equal(name, key):
""" Return True if ``name`` can be mapped to the transformed ``key`` such that ``key`` is a valid
python identifier.
The following transformation of ``key`` will be considered:
' ' ==> '_'
'-' ==> '_'
'.' ==> '_'
'[0-9]' ==> _[0-9]
"""
key_transformed = key.replace(' ', '_').replace('-', '_').replace('.', '_')
if key_transformed[0].isdigit():
key_transformed = '_' + key_transformed
return key_transformed == name
class SavepointTopCollection(SavepointCollection):
""" Collection of all savepoints.
"""
def __init__(self, savepoint_list):
self.__savepoint_list = savepoint_list
def savepoints(self):
return self.__savepoint_list
def __make_savepoint_collection(self, name, match_exact=False):
savepoint_list = []
for sp in self.__savepoint_list:
sp_name = sp.name
if name == sp_name:
savepoint_list += [sp]
elif not match_exact and transformed_equal(name, sp_name):
savepoint_list += [sp]
if not savepoint_list:
errstr = "savepoint with name '%s' does not exist" % name
# Make a suggestion if possible
dist = []
for sp in self.__savepoint_list:
dist += [levenshtein(name, sp.name)]
if min(dist) <= 3:
errstr += ", did you mean '%s'?" % self.__savepoint_list[dist.index(min(dist))].name
raise SerialboxError(errstr)
return SavepointNamedCollection(savepoint_list, None)
def __getattr__(self, name):
""" Access a collection of savepoints identified by `name`
:param name: Name of the savepoint
:type name: str
:return: Collection of savepoints sharing the same `name`
:rtype: SavepointNamedCollection
"""
return self.__make_savepoint_collection(name, False)
def __getitem__(self, index):
""" Access a collection of savepoints identified by `index`
        If `index` is an integer (`isinstance(index, int)`), the method returns the unique Savepoint
        at position `index` in the savepoint list. Otherwise, `index` is treated as a savepoint name
        and a collection of the savepoints sharing that name is returned.
:param index: Name or index of the savepoint
:type index: str, int
:return: Collection of savepoints sharing the same ``name`` or unique savepoint.
:rtype: SavepointNamedCollection, Savepoint
"""
if isinstance(index, int):
return self.__savepoint_list[index]
return self.__make_savepoint_collection(index, True)
class SavepointNamedCollection(SavepointCollection):
""" Collection of Savepoints which all share the same `name`.
"""
def __init__(self, savepoint_list, prev_key):
self.__savepoint_list = savepoint_list
self.__prev_key = prev_key
def savepoints(self):
return self.__savepoint_list
def __make_named_savepoint_collection(self, key, match_exact=False):
savepoint_list = []
for sp in self.__savepoint_list:
# Exact match
if sp.metainfo.has_key(key):
savepoint_list += [sp]
if not savepoint_list:
# Try a little harder ... we iterate now over all keys of the savepoints in the
# collection.
keys = []
if not match_exact:
for sp in self.__savepoint_list:
sp_keys = sp.metainfo.to_dict()
for k in sp_keys:
if transformed_equal(key, k):
keys += [k]
savepoint_list += [sp]
            # At this point we have to give up.. but not before we make a suggestion ;)
if not savepoint_list:
errstr = "no savepoint named '%s' has meta-info with key '%s'" % (
self.__savepoint_list[0].name, key)
raise SerialboxError(errstr)
# If we used match_exact=False and matched for example for key 'key_1': 'key 1' and
# 'key-1', we just abort as there is no point to handle this case ...
if keys.count(keys[0]) != len(keys):
errstr = "ambiguous match for key '%s' for savepoint with name '%s'" % (
key, self.__savepoint_list[0].name)
errstr += "Found matches:\n"
for k in keys:
errstr += " %s\n" % k
raise SerialboxError(errstr)
key = keys[0]
return SavepointNamedCollection(savepoint_list, key)
def __getattr__(self, key):
return self.__make_named_savepoint_collection(key, False)
def __getitem__(self, index):
#
# If `self.__prev_key` is not None, we have a query of the form
# `serializer.savepoint.key[1]` meaning we access the meta-info key=value pair with
# key=self.__prev_key and value=index. Otherwise, we have a query of the form
# `serializer.savepoint['key1']`.
#
if self.__prev_key:
savepoint_list = []
# Check if key=value pair exists
for sp in self.__savepoint_list:
if sp.metainfo[self.__prev_key] == index:
savepoint_list += [sp]
# Nothing found.. list the available savepoints and raise
if not savepoint_list:
errstr = "no savepoint named '%s' has meta-info: {\"%s\": %s}. Candidates are:\n" % (
self.__savepoint_list[0].name, self.__prev_key, index)
for sp in self.savepoints():
errstr += " {0}\n".format(str(sp))
raise SerialboxError(errstr)
return SavepointNamedCollection(savepoint_list, None)
else:
if not type(index) in StringTypes:
raise SerialboxError("expected string in query for meta-info of Savepoint '%s'" %
self.__savepoint_list[0].name)
return self.__make_named_savepoint_collection(index, True)
register_library(lib)
| 36.255708 | 101 | 0.580416 | 1,610 | 15,880 | 5.551553 | 0.191304 | 0.050906 | 0.032334 | 0.009846 | 0.257552 | 0.182927 | 0.149362 | 0.129671 | 0.106064 | 0.086597 | 0 | 0.005047 | 0.276385 | 15,880 | 437 | 102 | 36.338673 | 0.772779 | 0.415428 | 0 | 0.274854 | 0 | 0 | 0.054598 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0 | 0.040936 | 0.046784 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
96fab8207f023099b6525642ce859fcc781430ec | 6,269 | py | Python | __main__.py | quantcast/apex-linter | 7e32ff6cdd823c73cec74a34a0308eea81d4c618 | [
"Apache-2.0"
] | 3 | 2019-02-04T16:24:45.000Z | 2020-01-03T17:47:55.000Z | __main__.py | quantcast/apex-linter | 7e32ff6cdd823c73cec74a34a0308eea81d4c618 | [
"Apache-2.0"
] | null | null | null | __main__.py | quantcast/apex-linter | 7e32ff6cdd823c73cec74a34a0308eea81d4c618 | [
"Apache-2.0"
] | 2 | 2019-06-07T00:39:55.000Z | 2022-02-18T10:39:27.000Z | # Copyright 2018-19 Quantcast Corporation. All rights reserved.
#
# This file is part of Quantcast Apex Linter for Salesforce
#
# Licensed under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied. See the License for the specific language governing
# permissions and limitations under the License.
#
import argparse
import functools
import itertools
import logging
import multiprocessing.pool
import pathlib
import sys
from typing import IO, Iterable, Optional, Sequence, Tuple, Type
from . import validators # import validators to register them
from . import PROGNAME, base, match, pathtools, terminfo
log = logging.getLogger(__name__)
def render_parallel(
paths: Iterable[pathlib.Path],
*,
pool: Optional[multiprocessing.pool.Pool],
**kwargs,
):
if not pool:
return match.render(paths, **kwargs)
return itertools.chain.from_iterable(
pool.imap(functools.partial(render, **kwargs), ((p,) for p in paths))
)
def render(*args, **kwargs):
return list(match.render(*args, **kwargs))
def lint(
paths: Iterable[pathlib.Path],
*,
jobs: Optional[int] = None,
output: Optional[IO] = sys.stdout,
output_count: Optional[IO] = None,
suppress: bool = True,
term: Optional[Type[terminfo.TermInfo]] = None,
validators: Sequence[Type[base.Validator]],
verbose: int = 0,
) -> Tuple[Iterable[str], Iterable[Exception]]:
messages = []
errors = []
with multiprocessing.Pool(jobs) as pool:
for message in render_parallel(
paths,
pool=(pool if jobs != 1 else None),
suppress=suppress,
term=term,
validators=validators,
verbose=verbose,
):
if isinstance(message, Exception):
errors.append(message)
continue
messages.append(message)
if output is not None:
print(message, file=output)
if output_count is not None:
print(len(messages), file=output_count)
return messages, errors
def main(
config: argparse.Namespace,
*,
output: IO = sys.stdout,
output_count: IO = sys.stderr,
):
messages, errors = lint(
pathtools.unique(pathtools.walk(pathtools.paths(config.files))),
jobs=config.jobs,
        output=output,
        output_count=output_count if config.count else None,
suppress=config.suppress,
term=terminfo.TermInfo.get(color=config.color),
validators=tuple(
validators.library(
select=frozenset(config.select),
ignore=frozenset(config.ignore),
)
),
verbose=config.verbose,
)
if messages:
return 1
if errors:
return 2
return 0
def parse_args(args: Sequence[str]) -> argparse.Namespace:
validator_names = validators.names()
parser = argparse.ArgumentParser(
description="Validate Salesforce code for common errors", prog=PROGNAME
)
parser.add_argument(
"files",
metavar="FILE",
default=["-"],
nargs="*",
help="files to validate",
)
class ColorAction(argparse.Action):
def __call__(self, parser, namespace, values, *args, **kwargs):
namespace.color = self.parse(values)
@staticmethod
def parse(values):
if values == "never":
return False
if values == "always":
return True
if values == "auto":
return sys.stdout.isatty()
parser.add_argument(
"--color",
action=ColorAction,
choices=("always", "auto", "never"),
default=ColorAction.parse("auto"),
metavar="WHEN",
help=("colorize the output; WHEN can be 'always', 'auto', or 'never'"),
)
parser.add_argument(
"--count",
action="store_true",
help="print total number of errors to standard error",
)
parser.add_argument(
"--debug", action="count", default=0, help="debug output"
)
parser.add_argument(
"--ignore",
action="append",
choices=validator_names,
default=[],
metavar="VALIDATOR",
help="list of errors to ignore (default: none)",
)
parser.add_argument(
"-j",
"--jobs",
default=None,
metavar="N",
type=int,
help="number of parallel checks (default: number of CPUs)",
)
parser.add_argument(
"--no-suppress",
action="store_false",
default=True,
dest="suppress",
        help='disable the effect of "# noqa" comments, so suppressions are ignored',
)
class QuietAction(argparse.Action):
def __call__(self, parser, namespace, values, *args, **kwargs):
namespace.verbose -= 1
parser.add_argument(
"-q",
"--quiet",
action=QuietAction,
nargs=0,
help="less verbose messages; see --verbose",
)
parser.add_argument(
"--select",
action="append",
choices=validator_names,
default=[],
metavar="VALIDATOR",
help="list of errors to enable (default: all)",
)
parser.add_argument(
"-v",
"--verbose",
action="count",
default=0,
help="more verbose messages",
)
return parser.parse_args(args)
if __name__ == "__main__":
config = parse_args(sys.argv[1:])
logging.basicConfig(
level=logging.DEBUG if config.debug else logging.INFO,
format=(("%(levelname)s: " if config.debug else "") + "%(message)s"),
handlers=(logging.StreamHandler(),),
)
try:
sys.exit(main(config))
except KeyboardInterrupt as e:
if config.debug:
log.exception("")
sys.exit(130)
except Exception as e:
log.exception(f"{PROGNAME}: {e}")
sys.exit(3)
| 26.676596 | 79 | 0.600574 | 691 | 6,269 | 5.382055 | 0.324168 | 0.0242 | 0.045711 | 0.008604 | 0.098951 | 0.074751 | 0.074751 | 0.074751 | 0.074751 | 0.074751 | 0 | 0.00534 | 0.283139 | 6,269 | 234 | 80 | 26.790598 | 0.822207 | 0.107194 | 0 | 0.152174 | 0 | 0 | 0.119735 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.054348 | 0.005435 | 0.168478 | 0.016304 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
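For reference, a minimal sketch of driving the lint() helper above from Python instead of the CLI; the import path ("apex_linter") and the .cls file name are assumptions, not taken from the repository.

import pathlib
import sys

# Assumed import path for this __main__.py and its sibling validators module.
from apex_linter.__main__ import lint
from apex_linter import validators

messages, errors = lint(
    [pathlib.Path("src/classes/Example.cls")],  # placeholder Apex source file
    jobs=1,                                     # jobs=1 bypasses the worker pool (see render_parallel)
    output=sys.stdout,
    validators=tuple(validators.library(select=frozenset(), ignore=frozenset())),
)
print(f"{len(messages)} finding(s), {len(errors)} error(s)")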
96fb4a7637e4d6a4277cd557d44c5a9021c7ec3b | 1,758 | py | Python | ENV/lib/python3.5/site-packages/pyrogram/__init__.py | block1o1/CryptoPredicted | 7f660cdc456fb8252b3125028f31fd6f5a3ceea5 | [
"MIT"
] | 4 | 2021-10-14T21:22:25.000Z | 2022-03-12T19:58:48.000Z | ENV/lib/python3.5/site-packages/pyrogram/__init__.py | inevolin/CryptoPredicted | 7f660cdc456fb8252b3125028f31fd6f5a3ceea5 | [
"MIT"
] | null | null | null | ENV/lib/python3.5/site-packages/pyrogram/__init__.py | inevolin/CryptoPredicted | 7f660cdc456fb8252b3125028f31fd6f5a3ceea5 | [
"MIT"
] | 1 | 2022-03-15T22:52:53.000Z | 2022-03-15T22:52:53.000Z | # Pyrogram - Telegram MTProto API Client Library for Python
# Copyright (C) 2017-2018 Dan Tès <https://github.com/delivrance>
#
# This file is part of Pyrogram.
#
# Pyrogram is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Pyrogram is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Pyrogram. If not, see <http://www.gnu.org/licenses/>.
import sys
__copyright__ = "Copyright (C) 2017-2018 Dan Tès <https://github.com/delivrance>".replace(
"\xe8",
"e" if sys.getfilesystemencoding() != "utf-8" else "\xe8"
)
__license__ = "GNU Lesser General Public License v3 or later (LGPLv3+)"
__version__ = "0.7.5"
from .api.errors import Error
from .client.types import (
Audio, Chat, ChatMember, ChatPhoto, Contact, Document, InputMediaPhoto,
InputMediaVideo, InputPhoneContact, Location, Message, MessageEntity,
PhotoSize, Sticker, Update, User, UserProfilePhotos, Venue, GIF, Video,
VideoNote, Voice, CallbackQuery, Messages
)
from .client.types.reply_markup import (
ForceReply, InlineKeyboardButton, InlineKeyboardMarkup,
KeyboardButton, ReplyKeyboardMarkup, ReplyKeyboardRemove
)
from .client import (
Client, ChatAction, ParseMode, Emoji,
MessageHandler, DeletedMessagesHandler, CallbackQueryHandler,
RawUpdateHandler, DisconnectHandler, Filters
)
| 39.954545 | 90 | 0.758248 | 223 | 1,758 | 5.919283 | 0.641256 | 0.027273 | 0.048485 | 0.066667 | 0.170455 | 0.148485 | 0.124242 | 0.072727 | 0.072727 | 0.072727 | 0 | 0.016892 | 0.158134 | 1,758 | 43 | 91 | 40.883721 | 0.875 | 0.438567 | 0 | 0 | 0 | 0 | 0.141383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.217391 | 0 | 0.217391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
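For context, a minimal sketch of how the names re-exported above were typically used with Pyrogram 0.7.x; the session name is a placeholder and valid api_id/api_hash credentials are assumed to be configured separately.

from pyrogram import Client, Filters, MessageHandler

# "example_account" is a placeholder session name; api_id/api_hash must be
# configured (e.g. in config.ini) before the client can actually connect.
app = Client("example_account")

def echo(client, message):
    # Echo any incoming text message back to the chat it came from.
    client.send_message(message.chat.id, message.text)

app.add_handler(MessageHandler(echo, Filters.text))
app.start()
# ... updates are now dispatched to the handler; call app.stop() to disconnect.
app.stop()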
96fe6e5114c7371fadc8d7365c65dd1d9ea56075 | 2,026 | py | Python | scripts/rule_builder.py | DeNeutoy/Blackstone | 559d478389fa4af8c8c808a558921d220f7b7ae0 | [
"Apache-2.0"
] | 541 | 2019-03-27T09:48:51.000Z | 2022-03-23T08:58:29.000Z | scripts/rule_builder.py | DeNeutoy/Blackstone | 559d478389fa4af8c8c808a558921d220f7b7ae0 | [
"Apache-2.0"
] | 20 | 2019-04-25T19:43:59.000Z | 2022-01-15T22:59:11.000Z | scripts/rule_builder.py | DeNeutoy/Blackstone | 559d478389fa4af8c8c808a558921d220f7b7ae0 | [
"Apache-2.0"
] | 96 | 2019-04-25T19:37:46.000Z | 2022-03-29T13:09:38.000Z | """
Scrappy little function for generating a JSONL patterns file
from a terminology list for use in Prodigy and spaCy's EntityRuler()
"""
import spacy
from wasabi import Printer
import tqdm as tqdm
import plac
from pathlib import Path
msg = Printer()
@plac.annotations(
model=("Model name", "positional", None, Path),
TERMINOLOGY=("Terminlogy file with data", "positional", None, Path),
output_file=("Output JSONL file", "positional", None, Path),
label=("Label to add to rules", "positional", None, str),
)
def main(model=None, TERMINOLOGY=None, output_file=None, label=None):
"""
Create a JSONL patterns file from a terminology list for use in
Prodigy and spaCy's EntityRuler. This function receives a spaCy model,
the terminology list (a text file with a term per line), an output path
and the label to assign to the rule.
"""
nlp = spacy.load(model)
RULES = []
msg.info("Reading terminology list...")
with open(TERMINOLOGY) as data_in:
data = data_in.readlines()
msg.good("Terminlogy list in loaded.")
msg.info("Applying tokeniser...")
for i in tqdm.tqdm(data):
i = i.replace("\n", "")
i = i.replace("\\", "")
i = i.replace('"', "")
doc = nlp(i)
TOKENS = []
for token in doc:
TOKENS.append('{"ORTH": "%s"},' % (token.text))
rule = '{"label": "%s", "pattern": %s' % (label, TOKENS)
rule = str(rule).replace("'", "")
rule = str(rule).replace(",]", "]")
rule = rule + "}"
rule = rule.replace(",,", ", ")
# Get rid of the first determiner
rule = rule.replace('{"ORTH": "The"},', "")
RULES.append(rule)
TOKENS = []
msg.info("Writing rules to JSONL patterns file...")
with open(output_file, "a+") as data_out:
for i in tqdm.tqdm(RULES):
print(i)
data_out.write(i + "\n")
msg.good("Done!")
if __name__ == "__main__":
plac.call(main)
| 29.794118 | 75 | 0.583909 | 261 | 2,026 | 4.475096 | 0.329502 | 0.05137 | 0.043664 | 0.030822 | 0.183219 | 0.125 | 0.125 | 0.125 | 0.125 | 0.125 | 0 | 0 | 0.264561 | 2,026 | 67 | 76 | 30.238806 | 0.783893 | 0.200395 | 0 | 0.045455 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022727 | false | 0 | 0.113636 | 0 | 0.136364 | 0.022727 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
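The JSONL file written above is intended for spaCy's EntityRuler; a sketch of loading it back, assuming a spaCy 2.x API and placeholder model/file names:

import spacy
from spacy.pipeline import EntityRuler

# Placeholders: any spaCy model and the patterns file produced by main() above.
nlp = spacy.load("en_core_web_sm")
ruler = EntityRuler(nlp)
ruler.from_disk("patterns.jsonl")  # one {"label": ..., "pattern": [...]} object per line
nlp.add_pipe(ruler, before="ner")

doc = nlp("The Human Rights Act 1998 was cited by the court.")
print([(ent.text, ent.label_) for ent in doc.ents])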
8c019819699bb13701e8c7e0f766adadeeb18dab | 8,681 | py | Python | daisee_data_preprocessing.py | aenoboa1/Engagement-recognition-using-DAISEE-dataset | 30c43bca4aecc41e2b5e2a4bbb6f86395c2b46dd | [
"MIT"
] | null | null | null | daisee_data_preprocessing.py | aenoboa1/Engagement-recognition-using-DAISEE-dataset | 30c43bca4aecc41e2b5e2a4bbb6f86395c2b46dd | [
"MIT"
] | null | null | null | daisee_data_preprocessing.py | aenoboa1/Engagement-recognition-using-DAISEE-dataset | 30c43bca4aecc41e2b5e2a4bbb6f86395c2b46dd | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
import numpy as np
import cv2
import os
from tqdm import tqdm
import tensorflow_datasets as tfds
import pandas as pd
import random
AUTOTUNE = tf.data.experimental.AUTOTUNE
batch_size = 32
np.random.seed(0)
class DataPreprocessing:
def __init__(self,
IMG_HEIGHT=224,
IMG_WIDTH=224,
dataset_dir='/content/DAiSEE/DAiSEE/DataSet/Data/',
test_dir='Test/',
train_dir='Train/',
val_dir='Validation/',
labels_dir='/content/DAiSEE/DAiSEE/Labels/',
test_label='TestLabels.csv',
train_label='TrainLabels.csv',
val_label='ValidationLabels.csv',
data_augmentation_flag=False,
max_frames=3
):
self.IMG_HEIGHT = IMG_HEIGHT
self.IMG_WIDTH = IMG_WIDTH
self.dataset_dir = dataset_dir
self.train_dir = self.dataset_dir+train_dir
self.test_dir = self.dataset_dir+test_dir
self.val_dir = self.dataset_dir+val_dir
self.labels_dir = labels_dir
self.train_label_dir = self.labels_dir + train_label
self.test_label_dir = self.labels_dir + test_label
self.val_label_dir = self.labels_dir + val_label
self.data_augmentation_flag = data_augmentation_flag
self.max_frames = max_frames
self.face_cascade = cv2.CascadeClassifier('/content/Engagement-recognition-using-DAISEE-dataset/dataset/haarcascade_frontalface_default.xml')
def get_images_from_set_dir(self, setdir):
'''
        Method to find all images in the folder tree
'''
set_dir_images = []
humans = os.listdir(setdir)
for human in humans:
human_dir = setdir + human + "/"
videos = os.listdir(human_dir)
for video in videos:
video_dir = human_dir + video + "/"
pictures = os.listdir(video_dir)
pictures = random.sample(pictures, self.max_frames)
for picture in pictures:
picture_dir = video_dir + picture
if picture.endswith(".jpg"):
set_dir_images.append(picture_dir)
return set_dir_images
def get_labels_dataframe(self):
'''
        Method to read the label files into pandas dataframes
'''
train_df = pd.read_csv(self.train_label_dir, sep=",")
test_df = pd.read_csv(self.test_label_dir, sep=",")
val_df = pd.read_csv(self.val_label_dir, sep=",")
return train_df, test_df, val_df
def resize(self, image):
return cv2.resize(image, (self.IMG_HEIGHT, self.IMG_WIDTH), interpolation=cv2.INTER_AREA)
def face_cropping(self, image):
# Crop and resize
faces = self.face_cascade.detectMultiScale(image, 1.3, 5)
try:
            if len(faces) != 0:  # detectMultiScale returns an array of boxes
x, y, w, h = faces[0]
image = image[y:y+h, x:x+w]
except:
pass
return self.resize(image)
def random_crop(self, image, crop_height, crop_width):
max_x = image.shape[1] - crop_width
max_y = image.shape[0] - crop_height
x = np.random.randint(0, max_x)
y = np.random.randint(0, max_y)
crop = image[y: y + crop_height, x: x + crop_width]
return self.face_cropping(crop)
def augment_image(self, image):
'''
Applies some augmentation techniques
'''
# Mirror flip
flipped = tf.image.flip_left_right(image).numpy()
# Transpose flip
transposed = tf.image.transpose(image).numpy()
# Saturation
satured = tf.image.adjust_saturation(image, 3).numpy()
# Brightness
brightness = tf.image.adjust_brightness(image, 0.4).numpy()
# Contrast
contrast = tf.image.random_contrast(image, lower=0.0, upper=1.0).numpy()
# Resize at the end
images = [self.resize(image) for image in [flipped, transposed, satured, brightness, contrast]]
return images
def get_label_picture(self, image_path, label_df):
error_ = False
video = image_path.split("/")[-2]
label_series = label_df.loc[((label_df['ClipID'] == video+'.avi') | (label_df['ClipID'] == video+'.mp4'))]
try:
index = label_series.index.values[0]
label = np.array([label_series['Boredom'].get(index),
label_series['Engagement'].get(index),
label_series['Confusion'].get(index),
label_series['Frustration '].get(index)])
label_one_hot = (label >= 1).astype(np.uint8)
except:
print('Error in label picture')
print(image_path)
label_one_hot = ''
error_ = True
return label_one_hot, error_
def _int64_feature(self, value):
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
def _bytes_feature(self, value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def writeTfRecord(self, output_dir, data_augmentation=False):
'''
Method to write tfrecord
'''
# open the TFRecords file
if not os.path.exists(output_dir):
os.makedirs(output_dir)
# Read dataframes
train_df, test_df, val_df = self.get_labels_dataframe()
# Objects to iterate
objs = [('train', self.train_dir, train_df),
('test', self.test_dir, test_df),
('val', self.val_dir, val_df)]
for name, dataset, label_df in tqdm(objs):
# Open Writer
writer = tf.io.TFRecordWriter(output_dir+name+'.tfrecords')
# Get all the images of a set
images_path = self.get_images_from_set_dir(dataset)
for image_path in tqdm(images_path, total=len(images_path)):
# Read the image from path
img = cv2.imread(image_path)[..., ::-1]
img = self.face_cropping(img)
# Read the label
label, error_ = self.get_label_picture(image_path, label_df)
if error_:
continue
# Create a feature
if data_augmentation:
images = self.augment_image(img)
else:
images = img
for image in images:
feature = {'label': self._bytes_feature(tf.compat.as_bytes(label.tostring())),
'image': self._bytes_feature(tf.compat.as_bytes(image.tostring()))}
# Create an example protocol buffer
example = tf.train.Example(features=tf.train.Features(feature=feature))
# Serialize to string and write on the file
writer.write(example.SerializeToString())
writer.close()
def decode(self, serialized_example):
"""
Parses an image and label from the given `serialized_example`.
It is used as a map function for `dataset.map`
"""
IMAGE_SHAPE = (self.IMG_HEIGHT, self.IMG_WIDTH, 3)
# 1. define a parser
features = tf.io.parse_single_example(
serialized_example,
# Defaults are not specified since both keys are required.
features={
'image': tf.io.FixedLenFeature([], tf.string),
'label': tf.io.FixedLenFeature([], tf.string),
})
# 2. Convert the data
image = tf.io.decode_raw(features['image'], tf.uint8)
label = tf.io.decode_raw(features['label'], tf.uint8)
# Cast
label = tf.cast(label, tf.float32)
# 3. reshape
image = tf.convert_to_tensor(tf.reshape(image, IMAGE_SHAPE))
return image, label
if __name__ == '__main__':
preprocessing_class = DataPreprocessing()
    # Write the TFRecord files
preprocessing_class.writeTfRecord('tfrecords/', data_augmentation=True)
# Read TfRecord
tfrecord_path = 'tfrecords/train.tfrecords'
dataset = tf.data.TFRecordDataset(tfrecord_path)
# Parse the record into tensors with map.
# map takes a Python function and applies it to every sample.
dataset = dataset.map(preprocessing_class.decode)
# Divide in batch
dataset = dataset.batch(batch_size)
# Create an iterator
iterator = iter(dataset)
# Element of iterator
a = iterator.get_next()
| 36.47479 | 149 | 0.591407 | 1,033 | 8,681 | 4.754114 | 0.242982 | 0.017104 | 0.010588 | 0.013032 | 0.106088 | 0.044797 | 0.027286 | 0 | 0 | 0 | 0 | 0.008839 | 0.309296 | 8,681 | 237 | 150 | 36.628692 | 0.810207 | 0.103329 | 0 | 0.025806 | 0 | 0 | 0.054777 | 0.024505 | 0 | 0 | 0 | 0 | 0 | 1 | 0.077419 | false | 0.006452 | 0.058065 | 0.019355 | 0.206452 | 0.019355 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
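As a hypothetical continuation, the decoded dataset built in the __main__ block above could feed a Keras model; the tiny network below is only a placeholder, not part of the original pipeline.

import tensorflow as tf

# Placeholder head: any model mapping 224x224x3 frames to the four binary
# labels (boredom, engagement, confusion, frustration) would fit here.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Lambda(lambda x: tf.cast(x, tf.float32) / 255.0),  # uint8 frames -> [0, 1]
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(dataset, epochs=1)  # `dataset` is the batched pipeline from the __main__ block above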
8c03569d017df8177b055018ea3ba5ba4d11939e | 5,802 | py | Python | tests/test_pagination.py | eadwinCode/django-ninja-extra | 16246c466ab8895ba1bf29d69f3d3e9337031edd | [
"MIT"
] | 43 | 2021-09-09T14:20:59.000Z | 2022-03-28T00:38:52.000Z | tests/test_pagination.py | eadwinCode/django-ninja-extra | 16246c466ab8895ba1bf29d69f3d3e9337031edd | [
"MIT"
] | 6 | 2022-01-04T10:53:11.000Z | 2022-03-28T19:53:46.000Z | tests/test_pagination.py | eadwinCode/django-ninja-extra | 16246c466ab8895ba1bf29d69f3d3e9337031edd | [
"MIT"
] | null | null | null | from ninja import Schema
from ninja_extra import NinjaExtraAPI, api_controller, route
from ninja_extra.pagination import (
PageNumberPagination,
PageNumberPaginationExtra,
PaginationBase,
paginate,
)
from ninja_extra.testing import TestClient
ITEMS = list(range(100))
class CustomPagination(PaginationBase):
# only offset param, defaults to 5 per page
class Input(Schema):
skip: int
def paginate_queryset(self, items, request, **params):
skip = params["pagination"].skip
return items[skip : skip + 5]
@api_controller
class SomeAPIController:
@route.get("/items_1")
@paginate # WITHOUT brackets (should use default pagination)
def items_1(self, **kwargs):
return ITEMS
@route.get("/items_2")
@paginate() # with brackets (should use default pagination)
def items_2(self, someparam: int = 0, **kwargs):
# also having custom param `someparam` - that should not be lost
return ITEMS
@route.get("/items_3")
@paginate(CustomPagination)
def items_3(self, **kwargs):
return ITEMS
@route.get("/items_4")
@paginate(PageNumberPaginationExtra, page_size=10)
def items_4(self, **kwargs):
return ITEMS
@route.get("/items_5")
@paginate(PageNumberPagination, page_size=10)
def items_5_without_kwargs(self):
return ITEMS
api = NinjaExtraAPI()
api.register_controllers(SomeAPIController)
client = TestClient(SomeAPIController)
class TestPagination:
def test_case1(self):
response = client.get("/items_1?limit=10").json()
assert response == ITEMS[:10]
schema = api.get_openapi_schema()["paths"]["/api/items_1"]["get"]
# print(schema)
assert schema["parameters"] == [
{
"in": "query",
"name": "limit",
"schema": {
"title": "Limit",
"default": 100,
"exclusiveMinimum": 0,
"type": "integer",
},
"required": False,
},
{
"in": "query",
"name": "offset",
"schema": {
"title": "Offset",
"default": 0,
"exclusiveMinimum": -1,
"type": "integer",
},
"required": False,
},
]
def test_case2(self):
response = client.get("/items_2?limit=10").json()
assert response == ITEMS[:10]
schema = api.get_openapi_schema()["paths"]["/api/items_2"]["get"]
# print(schema["parameters"])
assert schema["parameters"] == [
{
"in": "query",
"name": "someparam",
"schema": {"title": "Someparam", "default": 0, "type": "integer"},
"required": False,
},
{
"in": "query",
"name": "limit",
"schema": {
"title": "Limit",
"default": 100,
"exclusiveMinimum": 0,
"type": "integer",
},
"required": False,
},
{
"in": "query",
"name": "offset",
"schema": {
"title": "Offset",
"default": 0,
"exclusiveMinimum": -1,
"type": "integer",
},
"required": False,
},
]
def test_case3(self):
response = client.get("/items_3?skip=5").json()
assert response == ITEMS[5:10]
schema = api.get_openapi_schema()["paths"]["/api/items_3"]["get"]
# print(schema)
assert schema["parameters"] == [
{
"in": "query",
"name": "skip",
"schema": {"title": "Skip", "type": "integer"},
"required": True,
}
]
def test_case4(self):
response = client.get("/items_4?page=2").json()
assert response.get("results") == ITEMS[10:20]
assert response.get("count") == 100
assert response.get("next") == "http://testlocation/?page=3"
assert response.get("previous") == "http://testlocation/"
schema = api.get_openapi_schema()["paths"]["/api/items_4"]["get"]
# print(schema)
assert schema["parameters"] == [
{
"in": "query",
"name": "page",
"schema": {
"title": "Page",
"default": 1,
"exclusiveMinimum": 0,
"type": "integer",
},
"required": False,
},
{
"in": "query",
"name": "page_size",
"schema": {
"title": "Page Size",
"default": 10,
"exclusiveMaximum": 200,
"type": "integer",
},
"required": False,
},
]
def test_case5(self):
response = client.get("/items_5?page=2").json()
assert response == ITEMS[10:20]
schema = api.get_openapi_schema()["paths"]["/api/items_5"]["get"]
# print(schema)
assert schema["parameters"] == [
{
"in": "query",
"name": "page",
"schema": {
"title": "Page",
"default": 1,
"exclusiveMinimum": 0,
"type": "integer",
},
"required": False,
}
]
| 29.451777 | 82 | 0.454843 | 484 | 5,802 | 5.355372 | 0.206612 | 0.030864 | 0.038194 | 0.074074 | 0.557099 | 0.463349 | 0.438657 | 0.366898 | 0.327932 | 0.276235 | 0 | 0.023954 | 0.402792 | 5,802 | 196 | 83 | 29.602041 | 0.724098 | 0.048776 | 0 | 0.408537 | 0 | 0 | 0.179887 | 0 | 0 | 0 | 0 | 0 | 0.079268 | 1 | 0.067073 | false | 0 | 0.02439 | 0.030488 | 0.152439 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c04c851fb6af5f1cc953dfe76cec42218d1c383 | 2,195 | py | Python | notifier/handlers.py | Gerschtli/teamspeak-update-notifier | 644474d7aee98a591c55b56a10f1d72cd0eaf8c7 | [
"MIT"
] | null | null | null | notifier/handlers.py | Gerschtli/teamspeak-update-notifier | 644474d7aee98a591c55b56a10f1d72cd0eaf8c7 | [
"MIT"
] | 8 | 2020-11-13T19:08:12.000Z | 2022-03-21T11:15:25.000Z | notifier/handlers.py | Gerschtli/teamspeak-update-notifier | 644474d7aee98a591c55b56a10f1d72cd0eaf8c7 | [
"MIT"
] | null | null | null | import logging
from abc import abstractmethod
from typing import Optional
from . import commands, errors, version_manager
from .message import Message
LOGGER: logging.Logger = logging.getLogger(__name__)
class Handler:
@staticmethod
@abstractmethod
def match(message: Message) -> bool:
raise NotImplementedError()
@abstractmethod
def execute(self, message: Message) -> Optional[commands.Command]:
raise NotImplementedError()
class ClientEnter(Handler):
def __init__(self, server_group_id: str, current_version: str) -> None:
self._server_group_id = server_group_id
self._current_version = current_version
@staticmethod
def match(message: Message) -> bool:
return message.command == "notifycliententerview"
def execute(self, message: Message) -> Optional[commands.Command]:
client_id = message.param("clid")
servergroups = message.param("client_servergroups")
nickname = message.param("client_nickname")
LOGGER.debug("client %s (id: %s) with server group %s entered", nickname, client_id,
servergroups)
if (servergroups != self._server_group_id
or client_id is None
or nickname is None
or not version_manager.need_update(self._current_version)):
return None
LOGGER.info("send message to client %s", nickname)
return commands.SendMessage(client_id, version_manager.build_message())
class ClientLeft(Handler):
def __init__(self, client_id: str) -> None:
self._client_id = client_id
@staticmethod
def match(message: Message) -> bool:
return message.command == "notifyclientleftview"
def execute( # pylint: disable=useless-return
self, message: Message
) -> Optional[commands.Command]:
# check for server down
if message.param("reasonid") == "11":
raise errors.ServerDisconnectError("server shutdown received")
# check for client disconnect
if message.param("clid") == self._client_id:
raise errors.ServerDisconnectError("client disconnected")
return None
| 31.811594 | 92 | 0.670615 | 235 | 2,195 | 6.07234 | 0.302128 | 0.044849 | 0.03644 | 0.046251 | 0.19972 | 0.1815 | 0.152768 | 0.152768 | 0.081289 | 0 | 0 | 0.001198 | 0.239636 | 2,195 | 68 | 93 | 32.279412 | 0.853805 | 0.036446 | 0 | 0.291667 | 0 | 0 | 0.098532 | 0.009948 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.104167 | 0.041667 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
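A schematic sketch of how these handlers would be dispatched by the surrounding notifier; the group/version/client-id values and the message source are placeholders, since only the Message interface used above is visible here.

from notifier.handlers import ClientEnter, ClientLeft

# Example values only; the real ids come from the ServerQuery connection.
handlers = [
    ClientEnter(server_group_id="6", current_version="3.13.7"),
    ClientLeft(client_id="42"),
]

def dispatch(message):
    """Run the first matching handler and return the command it produces, if any."""
    for handler in handlers:
        if handler.match(message):
            return handler.execute(message)  # may raise ServerDisconnectError
    return None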
8c0a27257d594cf93ba255a1b10bbc6ecef5cbef | 4,897 | py | Python | scripts/extract/squad_data.py | SBUNetSys/DeQA | 5baf2e151b8230dde3147d2a1e216a3e434375bb | [
"BSD-3-Clause"
] | 1 | 2020-01-09T04:42:10.000Z | 2020-01-09T04:42:10.000Z | scripts/extract/squad_data.py | SBUNetSys/DeQA | 5baf2e151b8230dde3147d2a1e216a3e434375bb | [
"BSD-3-Clause"
] | null | null | null | scripts/extract/squad_data.py | SBUNetSys/DeQA | 5baf2e151b8230dde3147d2a1e216a3e434375bb | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python3
import argparse
import json
import os
import random
def prettify_json(f):
if not f.endswith(".json"):
return
print("prettifying : {}".format(f))
parsed = json.load(open(f, 'r'))
pretty_path = "{}.txt".format(f)
with open(pretty_path, 'w') as p:
p.write(json.dumps(parsed, indent=2))
print("saved to : {}\n".format(pretty_path))
def extract_squad(input_json, size=100, seed=0):
if not input_json.endswith(".json"):
print("{} is not json file".format(input_json))
return
json_data = json.load(open(input_json, 'r'))
article_size = len(json_data['data'])
paragraph_size = 0
question_size = 0
for article_index, article in enumerate(json_data['data']):
paragraph_size += len(article["paragraphs"])
for paragraph_index, paragraph in enumerate(article["paragraphs"]):
question_size += len(paragraph["qas"])
print("total articles: {}".format(article_size))
print("total paragraphs: {}".format(paragraph_size))
print("total questions: {}".format(question_size))
random.seed(seed)
selected_indices = random.sample(range(0, question_size), size)
print("selected_indices: {}".format(selected_indices))
question_index = 0
selected_data = {}
articles = []
for article_index, article in enumerate(json_data['data']):
# print("article index:{}, title:{}".format(article_index, article["title"]))
# print(" paragraphs: {}".format(len(article["paragraphs"])))
paragraph_size += len(article["paragraphs"])
paragraphs = []
append_article = False
for paragraph_index, paragraph in enumerate(article["paragraphs"]):
# print("paragraph index:{}".format(paragraph_index))
# print(" questions: {}".format(len(paragraph["qas"])))
question_size += len(paragraph["qas"])
qas = []
append_paragraph = False
for qa_index, qa in enumerate(paragraph["qas"]):
if question_index in selected_indices:
# this is the question we want
append_article = True
append_paragraph = True
qas.append(qa)
question_index += 1
if append_paragraph:
paragraph["qas"] = qas
paragraphs.append(paragraph)
if append_article:
article["paragraphs"] = paragraphs
articles.append(article)
selected_data["data"] = articles
selected_data["version"] = "1.1"
output_dir = os.path.dirname(input_json)
output_name = "{}-{}.json".format(os.path.splitext(os.path.basename(input_json))[0], size)
output_path = os.path.join(output_dir, output_name)
json.dump(selected_data, open(output_path, 'w'))
prettify_json(output_path)
return output_path
def print_data_stats(input_json):
if not input_json.endswith(".json"):
print("{} is not json file".format(input_json))
return
json_data = json.load(open(input_json, 'r'))
article_size = len(json_data['data'])
paragraph_size = 0
question_size = 0
context_string_lengths = []
question_string_lengths = []
for article_index, article in enumerate(json_data['data']):
paragraph_size += len(article["paragraphs"])
for paragraph_index, paragraph in enumerate(article["paragraphs"]):
context_string_lengths.append(len(paragraph["context"]))
question_size += len(paragraph["qas"])
for qa in paragraph["qas"]:
question_string_lengths.append(len(qa["question"]))
print("total articles: {}".format(article_size))
print("total paragraphs: {}".format(paragraph_size))
print("total questions: {}".format(question_size))
print("context_string_lengths (sorted:{}".format(sorted(context_string_lengths)))
print("question_string_lengths (sorted): {}".format(sorted(question_string_lengths)))
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('-s', "--print_data_stats")
parser.add_argument('-e', "--extract_squad", nargs='*')
parser.add_argument('-p', "--prettify_json", nargs='*')
args = parser.parse_args()
if args.print_data_stats:
print_data_stats(args.print_data_stats)
if args.extract_squad:
input_squad_path = args.extract_squad[0]
extract_size = int(args.extract_squad[1])
if len(args.extract_squad) == 3:
extracted_path = extract_squad(input_squad_path, extract_size, int(args.extract_squad[2]))
else:
extracted_path = extract_squad(input_squad_path, extract_size)
if args.prettify_json:
prettify_json(extracted_path)
elif args.prettify_json:
for j in args.prettify_json:
prettify_json(j) | 38.865079 | 102 | 0.637942 | 580 | 4,897 | 5.144828 | 0.17931 | 0.030161 | 0.020107 | 0.02815 | 0.431635 | 0.345174 | 0.328753 | 0.328753 | 0.310657 | 0.262064 | 0 | 0.005291 | 0.228099 | 4,897 | 126 | 103 | 38.865079 | 0.784127 | 0.061466 | 0 | 0.317308 | 0 | 0 | 0.110869 | 0.009802 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028846 | false | 0 | 0.038462 | 0 | 0.105769 | 0.163462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
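The same extraction can be driven from Python; the SQuAD file name below is a placeholder and the script is assumed to be importable as a module.

from squad_data import extract_squad, print_data_stats

# "train-v1.1.json" is a placeholder for a SQuAD v1.1-format file.
subset_path = extract_squad("train-v1.1.json", size=100, seed=42)
print_data_stats(subset_path)  # sanity-check the sampled subset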
8c0ac731a894a3c3e63d272800948e1e88427bed | 1,263 | py | Python | scripts/merge_classifier_dataset.py | klangner/dataset-tools | 32453859123753cb40d42ee6df7df539c9ee0ec1 | [
"Apache-2.0"
] | null | null | null | scripts/merge_classifier_dataset.py | klangner/dataset-tools | 32453859123753cb40d42ee6df7df539c9ee0ec1 | [
"Apache-2.0"
] | 1 | 2018-04-17T14:12:15.000Z | 2018-04-17T14:12:15.000Z | scripts/merge_classifier_dataset.py | klangner/dataset-tools | 32453859123753cb40d42ee6df7df539c9ee0ec1 | [
"Apache-2.0"
] | null | null | null | #
# Merge dataset exported from Dataset-Recorder into format used by
# image classifiers.
# In this format images are saved in the subfolders with names taken from labels
# E.g. For Card-Colors dataset we have the following structure
# card-colors
# train
# clubs
# diamonds
# hearts
# spades
import os
import sys
import shutil
import pandas as pd
def merge(source_dataset, destination_folder):
"""
Create folders based on data labels and put files there
"""
source_folder = os.path.dirname(source_dataset)
df_source = pd.read_csv(source_dataset)
for row in df_source.values:
fname = row[0]
source_file = os.path.join(source_folder, fname)
label_folder = os.path.join(destination_folder, row[1])
if not os.path.exists(label_folder):
os.makedirs(label_folder)
destination_file = os.path.join(label_folder, fname)
if os.path.exists(destination_file):
print('File {:} already exists'.format(destination_file))
else:
shutil.copyfile(os.path.realpath(source_file), os.path.realpath(destination_file))
if __name__ == '__main__':
if len(sys.argv) != 3:
print("Usage:")
print(" python merge_classifier_dataset.py <source_dataset> <destination_folder>")
else:
merge(sys.argv[1], sys.argv[2]) | 29.372093 | 91 | 0.728424 | 183 | 1,263 | 4.852459 | 0.480874 | 0.054054 | 0.033784 | 0.067568 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004753 | 0.167063 | 1,263 | 43 | 92 | 29.372093 | 0.839354 | 0.2692 | 0 | 0.083333 | 0 | 0 | 0.122788 | 0.029867 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.166667 | 0 | 0.208333 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
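Called from Python instead of the command line, the same merge looks like this; both paths are placeholders and the script is assumed to be importable as a module.

from merge_classifier_dataset import merge

merge("card-colors/annotations.csv",  # placeholder: CSV exported by Dataset-Recorder (filename, label)
      "card-colors/train")            # placeholder: output root; one sub-folder per label is created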
8c0c0d1bde58c13dbe759d4f328500a7dee623f3 | 8,921 | py | Python | activereader/tcx.py | aaron-schroeder/py-activityreaders | cfbb391303ecd8c5af0febf411b19ab29b53691b | [
"MIT"
] | null | null | null | activereader/tcx.py | aaron-schroeder/py-activityreaders | cfbb391303ecd8c5af0febf411b19ab29b53691b | [
"MIT"
] | null | null | null | activereader/tcx.py | aaron-schroeder/py-activityreaders | cfbb391303ecd8c5af0febf411b19ab29b53691b | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
""".tcx file reader architecture.
Originated in `heartandsole <https://github.com/aaron-schroeder/heartandsole/blob/affc028c266e108e669d93a99b19dbb8e176db49/heartandsole/filereaders.py#L295>`_
back in the day.
See also:
`Garmin's TCX schema <https://www8.garmin.com/xmlschemas/TrainingCenterDatabasev2.xsd>`_
XML file describing the schema for TCX files.
`Garmin's ActivityExtension schema <https://www8.garmin.com/xmlschemas/ActivityExtensionv2.xsd>`_
XML file describing Garmin's extensions to the TCX schema.
"""
import datetime
import io
from dateutil import tz
from lxml import etree
from . import util
from .base import (
ActivityElement,
add_xml_data, add_xml_attr, add_xml_descendents,
# add_data_props, add_attr_props, add_descendent_props,
create_data_prop, create_attr_prop, create_descendent_prop
)
class Trackpoint(ActivityElement):
"""Represents a single data sample corresponding to a point in time.
The most granular of data contained in the file.
"""
TAG = 'Trackpoint'
time = create_data_prop('Time', datetime.datetime)
"""datetime.datetime: Timestamp when trackpoint was recorded.
See also:
:ref:`data.timestamp`
"""
# OR
# @data_prop('dummy_tag', float)
#
lat = create_data_prop('Position/LatitudeDegrees', float)
"""float: Latitude in degrees N (-90 to 90)."""
lon = create_data_prop('Position/LongitudeDegrees', float)
"""float: Longitude in degrees E (-180 to 180)."""
altitude_m = create_data_prop('AltitudeMeters', float)
"""float: Elevation of ground surface in meters above sea level."""
distance_m = create_data_prop('DistanceMeters', float)
"""float: Cumulative distance from the start of the activity, in meters.
See also:
:ref:`data.distance`
"""
hr = create_data_prop('HeartRateBpm/Value', int)
"""int: Heart rate."""
speed_ms = create_data_prop('Extensions/TPX/Speed', float)
"""float: Speed in meters per second.
TODO:
* Consider looping this in to the explanation for distance data,
maybe it can be a whole thing on sensor fusion with links.
"""
cadence_rpm = create_data_prop('Extensions/TPX/RunCadence', int)
"""int: Cadence in RPM.
See also:
:ref:`data.cadence`
"""
class Track(ActivityElement):
"""In a running TCX file, there is typically one Track per Lap.
As far as I can tell, in a running file, Tracks and Laps are one
and the same; when a Lap starts or ends, so does its contained Track.
Made up of 1 or more :class:`Trackpoint` in xml file.
"""
TAG = 'Track'
trackpoints = create_descendent_prop(Trackpoint)
@add_xml_data(
intensity=('Intensity', str),
trigger_method=('TriggerMethod', str),
)
class Lap(ActivityElement):
"""Represents one bout from {start/lap} -> {lap/stop}.
There is at least one lap per activity file, created by the `start` button
press and ended by the `stop` button press. Hitting the `lap` button begins
a new lap. Hitting the pause button stops data recording, but the same lap
resumes after the pause.
Made up of 1 or more :class:`Track` in the XML structure.
"""
TAG = 'Lap'
start_time = create_attr_prop('StartTime', datetime.datetime)
"""datetime.datetime: Timestamp of lap start.
See also:
:ref:`data.timestamp`
"""
total_time_s = create_data_prop('TotalTimeSeconds', float)
"""float: Total lap time, in seconds, as reported by the device.
This is timer time, not elapsed time; it does not include any time when
the device is paused.
See also:
:ref:`data.tcx.start_stop_pause`
"""
distance_m = create_data_prop('DistanceMeters', float)
"""float: Total lap distance, in meters, as reported by the device.
See also:
:ref:`data.distance`
"""
max_speed_ms = create_data_prop('MaximumSpeed', float)
"""float: The maximum speed achieved during the lap as reported by the
device, in meters per second.
"""
avg_speed_ms = create_data_prop('Extensions/LX/AvgSpeed', float)
"""float: The average speed during the lap as reported by the device,
in meters per second.
"""
hr_avg = create_data_prop('AverageHeartRateBpm/Value', int)
"""float: average heart rate during the lap as reported by the device."""
hr_max = create_data_prop('MaximumHeartRateBpm/Value', int)
"""float: maximum heart rate during the lap as reported by the device."""
cadence_avg = create_data_prop('Extensions/LX/AvgRunCadence', int)
"""float: average cadence during the lap as reported by the device, in RPM.
See also:
:ref:`data.cadence`
"""
cadence_max = create_data_prop('Extensions/LX/MaxRunCadence', int)
"""float: maximum cadence during the lap as reported by the device, in RPM.
See also:
:ref:`data.cadence`
"""
calories = create_data_prop('Calories', int)
"""float: Calories burned during the lap as approximated by the device."""
tracks = create_descendent_prop(Track)
trackpoints = create_descendent_prop(Trackpoint)
@add_xml_data(product_id=('Creator/ProductID', int))
class Activity(ActivityElement):
"""TCX files representing a run should only contain one Activity.
Contains one or more :class:`Lap` elements.
"""
TAG = 'Activity'
start_time = create_data_prop('Id', datetime.datetime)
"""datetime.datetime: Timestamp for activity start time.
See also:
:ref:`data.timestamp`
"""
sport = create_attr_prop('Sport', str)
"""str: Activity sport.
Restricted to "Running", "Biking", or "Other" according to Garmin's TCX
file schema.
"""
device = create_data_prop('Creator/Name', str)
"""str: Device brand name."""
device_id = create_data_prop('Creator/UnitId', int)
"""int: Device ID - specific to the individual device."""
@property
def version(self):
major = self.get_data('Creator/Version/VersionMajor', int)
minor = self.get_data('Creator/Version/VersionMinor', int)
return f'{major}.{minor}'
@property
def build(self):
major = self.get_data('Creator/Version/BuildMajor', int)
minor = self.get_data('Creator/Version/BuildMinor', int)
return f'{major}.{minor}'
laps = create_descendent_prop(Lap)
tracks = create_descendent_prop(Track)
trackpoints = create_descendent_prop(Trackpoint)
@add_xml_data(
creator=('Author/Name', str),
part_number=('Author/PartNumber', str)
)
class Tcx(ActivityElement):
"""Represents an entire .tcx file object."""
TAG = 'TrainingCenterDatabase'
@classmethod
def from_file(cls, file_obj):
"""Initialize a Tcx element from a file-like object.
Args:
file_obj (str, bytes, io.StringIO, io.BytesIO): File-like object.
If str, either filename or a string representation of XML
object. If str or StringIO, the encoding should not be declared
within the string.
Returns:
Tcx: An instance initialized with the :class:`~lxml.etree._Element`
that was read in.
See also:
https://lxml.de/tutorial.html#the-parse-function
TODO:
* Consider if the master file readers should use a separate for
creation, akin to etree.parse or from_string. Gah. I could make
it more flexible, sure. But it seems weird to just have one
random subclass with a different init. Maybe Tcx is the only
one who gets this special extra class. Also naturally am
thinking about delegating it.
* How many delegated methods are we looking at now? parse, from_string,
find_text, get, xpath, ... others?
"""
if not isinstance(file_obj, (str, bytes, io.StringIO, io.BytesIO)):
raise TypeError(f'file object type not accepted: {type(file_obj)}')
if isinstance(file_obj, str) and not file_obj.lower().endswith('.tcx'):
file_obj = io.StringIO(file_obj)
elif isinstance(file_obj, bytes):
file_obj = io.BytesIO(file_obj)
# Note: tree is an ElementTree, which is just a thin wrapper
# around root, which is an element
tree = etree.parse(file_obj)
root = tree.getroot()
util.strip_namespaces(root)
return cls(root)
# Below here are convenience properties that access data from
# descendent elements. Not sure if they all stay.
@property
def device(self):
return self.activities[0].device
@property
def distance_m(self):
return sum([lap.distance_m for lap in self.laps])
@property
def calories(self):
return sum([lap.calories for lap in self.laps])
@property
def lap_time_s(self):
return sum([lap.total_time_s for lap in self.laps])
@property
def num_laps(self):
return len(self.laps)
@property
def num_bouts(self):
return len(self.tracks)
@property
def num_records(self):
return len(self.trackpoints)
activities = create_descendent_prop(Activity)
laps = create_descendent_prop(Lap)
tracks = create_descendent_prop(Track)
trackpoints = create_descendent_prop(Trackpoint)
| 29.153595 | 158 | 0.705638 | 1,242 | 8,921 | 4.950081 | 0.289855 | 0.028627 | 0.04782 | 0.020494 | 0.273748 | 0.200878 | 0.179733 | 0.134027 | 0.107677 | 0.098569 | 0 | 0.006217 | 0.188656 | 8,921 | 305 | 159 | 29.24918 | 0.843189 | 0.295819 | 0 | 0.230769 | 0 | 0 | 0.164851 | 0.081683 | 0 | 0 | 0 | 0.006557 | 0 | 1 | 0.096154 | false | 0 | 0.057692 | 0.067308 | 0.653846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
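A small usage sketch of the reader defined above; "activity.tcx" is a placeholder file path.

from activereader.tcx import Tcx

# "activity.tcx" stands in for any TCX export from a watch or service.
tcx = Tcx.from_file("activity.tcx")
activity = tcx.activities[0]
print(activity.sport, activity.start_time, activity.device)
for lap in activity.laps:
    print(lap.total_time_s, lap.distance_m, lap.hr_avg)
print(len(tcx.trackpoints), "trackpoints in total")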
8c0e9ac07f9238cb7d27983e805d702872d54598 | 508 | py | Python | ex3.2.2/f.py | GU1LH3RME-LIMA/EDL | ac0c0622552bfee51f16cfa3a14f0de748118137 | [
"MIT"
] | null | null | null | ex3.2.2/f.py | GU1LH3RME-LIMA/EDL | ac0c0622552bfee51f16cfa3a14f0de748118137 | [
"MIT"
] | null | null | null | ex3.2.2/f.py | GU1LH3RME-LIMA/EDL | ac0c0622552bfee51f16cfa3a14f0de748118137 | [
"MIT"
] | null | null | null | import math
def primo(n):#verá se o numero é primo ou não
if(n==1):
return False
for count in range(2,int(math.sqrt(n))+1):
if (n % count == 0):
return False
return True
def ffilter(function,lista):
newlist=[]
for i in range(0,len(lista)):
        if(function(lista[i])):  # if it is prime, it will be added to the list
newlist.append(lista[i])
return newlist
lista=[1,2,3,4,5]
print(ffilter(primo,lista))  # will return a list with the prime numbers
| 24.190476 | 71 | 0.614173 | 82 | 508 | 3.804878 | 0.54878 | 0.019231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026596 | 0.259843 | 508 | 20 | 72 | 25.4 | 0.803191 | 0.212598 | 0 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.0625 | 0 | 0.4375 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c1192f07d41179fcd6426af595a10043d1337ff | 2,710 | py | Python | test/entities.py | adly/blingalytics | 2e532108bda1cc7bd4a9a863c8c98546d249267c | [
"MIT"
] | 4 | 2015-04-01T06:02:49.000Z | 2021-08-16T08:08:37.000Z | test/entities.py | adly/blingalytics | 2e532108bda1cc7bd4a9a863c8c98546d249267c | [
"MIT"
] | null | null | null | test/entities.py | adly/blingalytics | 2e532108bda1cc7bd4a9a863c8c98546d249267c | [
"MIT"
] | null | null | null | from decimal import Decimal
import elixir
from elixir import Boolean, Entity, Field, Integer, Numeric, using_options
DB_URL = 'postgresql://bling:bling@localhost:5432/bling'
# Aggregate function `first` (http://wiki.postgresql.org/wiki/First_%28aggregate%29)
FIRST_FUNCTION = '''
DROP AGGREGATE IF EXISTS public.first(anyelement);
-- Create a function that always returns the first non-NULL item
CREATE OR REPLACE FUNCTION public.first_agg ( anyelement, anyelement )
RETURNS anyelement AS $$
SELECT CASE WHEN $1 IS NULL THEN $2 ELSE $1 END;
$$ LANGUAGE SQL STABLE;
-- And then wrap an aggregate around it
CREATE AGGREGATE public.first (
sfunc = public.first_agg,
basetype = anyelement,
stype = anyelement
);
COMMIT;
'''
def init_db():
"""Perform setup tasks to be able to connect to bling test db."""
elixir.metadata.bind = DB_URL
elixir.metadata.bind.echo = False
elixir.setup_all()
elixir.session.close()
def init_db_from_scratch():
"""Build the necessary stuff in the db to run."""
init_db()
elixir.drop_all()
elixir.create_all()
elixir.metadata.bind.execute(FIRST_FUNCTION)
filler_data()
def filler_data():
datas = [
{'user_id': 1, 'user_is_active': True, 'widget_id': 1, 'widget_price': Decimal('1.23')},
{'user_id': 1, 'user_is_active': True, 'widget_id': 2, 'widget_price': Decimal('2.34')},
{'user_id': 1, 'user_is_active': True, 'widget_id': 3, 'widget_price': Decimal('3.45')},
{'user_id': 2, 'user_is_active': False, 'widget_id': 4, 'widget_price': Decimal('50.00')},
]
for data in datas:
AllTheData(**data)
elixir.session.commit()
class AllTheData(Entity):
"""Star-schema-style Entity for testing purposes."""
using_options(tablename='all_the_data')
user_id = Field(Integer)
user_is_active = Field(Boolean)
widget_id = Field(Integer)
widget_price = Field(Numeric(10, 2))
class Compare(object):
"""
Value that compares equal for anything as long as it's always the same.
The first time you compare it, it always returns True. But it remembers
what you compared it to that first time. Every subsequent time you compare
it, it performs a standard comparison between that first value and the new
comparison. So...
i = Compare()
i == 2 # True
i == 6 # False
i == 2 # True
"""
def __init__(self):
self._compared = False
self._compare_value = None
def __eq__(self, other):
if self._compared:
return self._compare_value == other
else:
self._compared = True
self._compare_value = other
return True
| 31.882353 | 98 | 0.661624 | 371 | 2,710 | 4.663073 | 0.38814 | 0.017341 | 0.034682 | 0.019075 | 0.074566 | 0.053757 | 0.053757 | 0.053757 | 0.053757 | 0 | 0 | 0.018078 | 0.224354 | 2,710 | 84 | 99 | 32.261905 | 0.804948 | 0.224354 | 0 | 0 | 0 | 0 | 0.351643 | 0.034331 | 0 | 0 | 0 | 0 | 0 | 1 | 0.087719 | false | 0 | 0.052632 | 0 | 0.280702 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
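For illustration, roughly how this fixture would be exercised; it assumes the Postgres database at DB_URL is reachable and that the module is importable as entities.

import elixir
from entities import AllTheData, init_db_from_scratch  # assumed module name

# Requires the bling/bling Postgres database from DB_URL to be running.
init_db_from_scratch()
active = elixir.session.query(AllTheData).filter_by(user_is_active=True).all()
print(len(active))  # the filler data above inserts 3 rows for the active user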
8c14ecf124b508cf4a7da03640cad481ded199a3 | 8,587 | py | Python | docker/create_inventory.py | tdm-project/tdm-manage-cluster | 977e0b65081cee84a47c8d83823f5e227e700a85 | [
"Apache-2.0"
] | null | null | null | docker/create_inventory.py | tdm-project/tdm-manage-cluster | 977e0b65081cee84a47c8d83823f5e227e700a85 | [
"Apache-2.0"
] | 12 | 2019-02-19T10:57:16.000Z | 2021-09-23T23:24:18.000Z | docker/create_inventory.py | tdm-project/tdm-manage-cluster | 977e0b65081cee84a47c8d83823f5e227e700a85 | [
"Apache-2.0"
] | 2 | 2019-02-19T11:39:59.000Z | 2019-03-06T16:33:13.000Z | #!/usr/bin/env python3
# Copyright 2018-2019 CRS4 (http://www.crs4.it/)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import json
import argparse
import random
import configparser
VERSION = '0.2.0'
class KubeSprayGroupName(object):
# Legal group names
ALL = "all"
BASTION = 'bastion'
ETCD = 'etcd'
CLUSTER = 'k8s-cluster'
MASTER = 'kube-master'
NODE = 'kube-node'
# cache list
__list__ = None
@staticmethod
def list():
if KubeSprayGroupName.__list__ is None:
KubeSprayGroupName.__list__ = [KubeSprayGroupName.__dict__[x] for x in KubeSprayGroupName.__dict__ if
not x.startswith('_') and x != 'list']
KubeSprayGroupName.__list__.sort()
return KubeSprayGroupName.__list__
class Instance(object):
def __init__(self, raw_data):
super(Instance, self).__init__()
self._raw_data = raw_data
@property
def id(self):
return self._raw_data['primary']['attributes']['id']
@property
def name(self):
return self._raw_data['primary']['attributes']['name']
@property
def kubespray_groups(self):
return self._raw_data['primary']['attributes']['all_metadata.kubespray_groups'].split(',')
def is_bastion_node(self):
return 'bastion' in self.kubespray_groups
@property
def private_ip(self):
return self._raw_data['primary']['attributes']['access_ip_v4']
@property
def floating_ip(self):
return self._raw_data['primary']['attributes']['network.0.floating_ip']
@floating_ip.setter
def floating_ip(self, fip):
self._raw_data['primary']['attributes']['network.0.floating_ip'] = fip
@property
def ssh_user(self):
return self._raw_data['primary']['attributes']['metadata.ssh_user']
class TerraformState(object):
def __init__(self, json_data):
super(TerraformState, self).__init__()
self._json_data = json_data
self._instances, self._fip_associations, self._groups = TerraformState.parse_json_data(json_data)
@property
def instances(self):
return self._instances
def get_instance_by_id(self, id):
return self.instances[id]
@property
def kubespray_groups(self):
return self._groups
def has_private_instances(self):
for i in self._instances.values():
if not i.floating_ip:
return True
return False
def get_bastion_instance(self):
for i in self._instances.values():
if i.is_bastion_node():
return i
return None
def choose_random_fip_instance(self):
if len(self._fip_associations) == 0:
raise Exception("No public IP available!!!")
return random.choice(list(self._fip_associations.values()))
@staticmethod
def parse_json_data(json_data):
fips = {}
public_ips = {}
instances = {}
groups = {x: [] for x in KubeSprayGroupName.list()}
for m in json_data['modules']:
for r in m['resources']:
resource = m['resources'][r]
rtype = resource['type']
if rtype == 'openstack_compute_instance_v2':
instance = Instance(resource)
instances[instance.id] = instance
if not instance.is_bastion_node():
groups['all'].append(instance.id)
for g in instance.kubespray_groups:
if g in KubeSprayGroupName.list():
groups[g].append(instance.id)
if "k8s_master_ext_net" in r: # check master subtype by name
instance.floating_ip = resource['primary']['attributes']['network.0.fixed_ip_v4']
public_ips[instance.floating_ip] = instance
elif rtype == 'openstack_compute_floatingip_associate_v2':
fips[resource['primary']['attributes']['instance_id']] = resource
# associate floating ip to instances
for i_id, instance in instances.items():
if i_id in fips:
instance.floating_ip = fips[i_id]['primary']['attributes']['floating_ip']
public_ips[instance.floating_ip] = instance
return instances, public_ips, groups
@staticmethod
def load(filename):
with open(filename) as f:
return TerraformState(json.load(f))
class Inventory(object):
@staticmethod
def __format_node_name(instance):
return 'bastion' if instance.is_bastion_node() else instance.name
@staticmethod
def generate(terraform_state, output_stream):
"""
:param terraform_state:
:type terraform_state: TerraformState
:param output_stream:
:return:
"""
config = configparser.RawConfigParser(allow_no_value=True)
# get map group_name -> instance_id list
groups = terraform_state.kubespray_groups
# search for a bastion node
bastion = terraform_state.get_bastion_instance()
# fill the 'all' section
config.add_section("all")
for i in groups['all']:
instance = terraform_state.instances[i]
if not instance.is_bastion_node():
node_name = Inventory.__format_node_name(instance)
config.set('all', "{} ansible_host={} ip={} ansible_ssh_user={}"
.format(node_name,
instance.floating_ip or instance.private_ip,
instance.private_ip,
instance.ssh_user))
if bastion or terraform_state.has_private_instances():
bastion = bastion or terraform_state.choose_random_fip_instance()
config.set('all', "{} ansible_host={} ansible_user={}"
.format(KubeSprayGroupName.BASTION, bastion.floating_ip, bastion.ssh_user))
config.add_section(KubeSprayGroupName.BASTION)
config.set(KubeSprayGroupName.BASTION, KubeSprayGroupName.BASTION)
# fill remaining sections
for group in KubeSprayGroupName.list()[2:]:
instance_id_list = groups[group]
config.add_section(group)
for instance_id in instance_id_list:
instance = terraform_state.instances[instance_id]
config.set(group, Inventory.__format_node_name(instance))
all_vars = "all:vars"
config.add_section(all_vars)
config.set(all_vars, 'ansible_python_interpreter', '/usr/bin/python3')
# write the output file
if output_stream:
config.write(output_stream)
def run(terraform_state_filepath, inventory_filepath):
# load the terraform state
terraform_state = TerraformState.load(terraform_state_filepath)
# ensure the path exists
filepath = os.path.dirname(inventory_filepath)
if filepath and not os.path.exists(filepath):
os.makedirs(filepath)
# write the inventory file
with open(inventory_filepath, 'w') as inventory_file:
Inventory.generate(terraform_state, inventory_file)
def _build_parse_args():
parser = argparse.ArgumentParser(__file__, __doc__,
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('--version', action='store_true', help='print version and exit')
parser.add_argument('-s', '--terraform-state', default='terraform.tfstate',
help='path of the terraform state file')
parser.add_argument('-o', '--output', default='hosts.ini', help='path of the output file')
return parser
def main():
parser = _build_parse_args()
args = parser.parse_args()
if args.version:
print('%s %s' % (__file__, VERSION))
parser.exit()
run(args.terraform_state, args.output)
parser.exit()
if __name__ == '__main__':
main()
| 34.211155 | 113 | 0.626528 | 970 | 8,587 | 5.281443 | 0.230928 | 0.046457 | 0.019325 | 0.024595 | 0.147179 | 0.11868 | 0.082764 | 0.040211 | 0.017958 | 0 | 0 | 0.004813 | 0.274135 | 8,587 | 250 | 114 | 34.348 | 0.817103 | 0.114475 | 0 | 0.136905 | 0 | 0 | 0.111421 | 0.024967 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.029762 | 0.065476 | 0.357143 | 0.011905 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
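In practice the script is run against a Terraform state file; a minimal programmatic sketch with placeholder paths, assuming the module is importable as create_inventory:

import sys
from create_inventory import TerraformState, Inventory

# "terraform.tfstate" is a placeholder for the state written by `terraform apply`.
state = TerraformState.load("terraform.tfstate")
Inventory.generate(state, sys.stdout)  # prints the KubeSpray hosts.ini to stdout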
8c1600767fc6b7ba8a792794fad2eba8e31e339b | 3,787 | py | Python | See/Model_000_f00/server.py | arc144/siim-pneumothorax | 98fdb1fe08e9c001e0191d5024ba6c56ec82a9c8 | [
"MIT"
] | 18 | 2019-09-08T02:12:52.000Z | 2021-09-11T09:45:53.000Z | See/Model_000_f00/server.py | rajatmodi62/siim-pneumothorax | 98fdb1fe08e9c001e0191d5024ba6c56ec82a9c8 | [
"MIT"
] | 10 | 2020-03-24T17:36:36.000Z | 2022-01-13T01:35:43.000Z | See/Model_000_f00/server.py | rajatmodi62/siim-pneumothorax | 98fdb1fe08e9c001e0191d5024ba6c56ec82a9c8 | [
"MIT"
] | 9 | 2019-09-09T06:03:26.000Z | 2021-04-11T13:28:46.000Z | import pandas as pd # noqa
import numpy as np
import argparse
from tqdm import tqdm; tqdm.monitor_interval = 0 # noqa
from concurrent.futures import ThreadPoolExecutor
from data import DicomDataset, load_gt
def score(yt, yp):
assert yt.dtype in ('uint8', 'int32'), yt.dtype
assert yp.dtype in ('uint8', 'int32'), yp.dtype
assert yt.shape == (1024, 1024), yt.shape
assert yt.shape == yp.shape, yp.shape
assert yt.max() <= 1, yt.max()
assert yp.max() <= 1, yp.max()
yt = (yt == 1)
yp = (yp == 1)
yt_sum = yt.sum()
yp_sum = yp.sum()
if yt_sum == 0:
if yp_sum != 0:
score = (0, 'empty', 'non-empty')
else:
score = (1, 'empty', 'empty')
return score
intersection = np.logical_and(yt, yp).sum()
dice_coeff = (2 * intersection) / (yt_sum + yp_sum)
score = (dice_coeff, 'non-empty',
'empty' if yp_sum == 0 else 'non-empty')
return score
def run_server(prediction_fn, gt_fn):
submission = load_gt(prediction_fn, rle_key='EncodedPixels')
gt = load_gt(gt_fn)
def compute_score(key):
yt = DicomDataset.rles_to_mask(gt[key], merge_masks=True)
yp = DicomDataset.rles_to_mask(submission[key], merge_masks=True)
return score(yt, yp)
scores = []
keys = list(submission)
with ThreadPoolExecutor(1) as e:
scores = list(tqdm(e.map(compute_score, keys), total=len(keys)))
empty_score = np.sum([s[0] for s in scores if s[1] == 'empty'])
num_empty = sum(1 for s in scores if s[1] == 'empty')
num_empty_pred = sum(1 for s in scores if s[-1] == 'empty')
num_non_empty_pred = sum(1 for s in scores if s[-1] == 'non-empty')
non_empty_score = np.sum([s[0] for s in scores if s[1] == 'non-empty'])
num_non_empty = len(scores) - num_empty
final_score = np.sum([s[0] for s in scores]) / len(scores)
print("[GT: %5d | P: %5d] %012s %.4f | %.4f" % (num_empty, num_empty_pred,
'Empty: ', empty_score / num_empty, empty_score / len(scores)))
print("[GT: %5d | P: %5d] %012s %.4f | %.4f" % (num_non_empty,
num_non_empty_pred, 'Non-Empty: ', non_empty_score / num_non_empty,
non_empty_score / len(scores)))
print("[%5d] Final: %.4f" % (len(scores), final_score))
return final_score
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--fn', type=str)
args = parser.parse_args()
final_score = run_server(args.fn, 'train-rle.csv')
print(round(final_score, 4))
if __name__ == '__main__':
main()
# def score_v2(yt, yp):
# assert yt.dtype == 'int32', yt.dtype
# assert yp.dtype == 'int32', yp.dtype
# assert yt.shape == (1024, 1024), yt.shape
# assert yt.shape == yp.shape, yp.shape
# num_gt_masks = yt.max()
# num_pred_masks = yp.max()
# if num_gt_masks == 0:
# if num_pred_masks != 0:
# score = (0, 'empty', 'non-empty')
# else:
# score = (1, 'empty', 'empty')
# return score
# per_image_scores = []
# matched_pred_indices = []
# for gt_index in range(1, num_gt_masks + 1):
# gt_mask = yt == gt_index
# best_dice_coeff = 0.
# best_pred_index = None
# for pred_index in range(1, num_pred_masks + 1):
# if pred_index in matched_pred_indices:
# continue
# pred_mask = yp == pred_index
# intersection = np.logical_and(gt_mask, pred_mask).sum()
# dice_coeff = (2 * intersection) / (gt_mask.sum() + pred_mask.sum())
# if dice_coeff > best_dice_coeff:
# best_dice_coeff = dice_coeff
# best_pred_index = pred_index
# matched_pred_indices.append(best_pred_index)
# per_image_scores.append(best_dice_coeff)
# # too many predictions
# per_image_scores.extend([0] * (num_pred_masks - len(matched_pred_indices)))
# score = (np.mean(per_image_scores), 'non-empty',
# 'empty' if num_gt_masks == 0 else 'non-empty')
# return score
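# Illustrative note (not part of the original file): a tiny worked example of the
# Dice computation performed by score() above, kept in comments so module behaviour
# is unchanged. Uncommented, it would run as written:
#
#   yt = np.zeros((1024, 1024), dtype='uint8'); yt[:10, :10] = 1    # 100 ground-truth pixels
#   yp = np.zeros((1024, 1024), dtype='uint8'); yp[:10, 2:10] = 1   # 80 predicted pixels, all overlapping
#   print(score(yt, yp))   # -> (2*80/(100+80) = 0.888..., 'non-empty', 'non-empty')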
| 32.930435 | 79 | 0.642461 | 590 | 3,787 | 3.89322 | 0.183051 | 0.059208 | 0.015673 | 0.031345 | 0.388768 | 0.265999 | 0.223335 | 0.223335 | 0.223335 | 0.205921 | 0 | 0.026228 | 0.204647 | 3,787 | 114 | 80 | 33.219298 | 0.736388 | 0.341695 | 0 | 0.032787 | 0 | 0 | 0.099837 | 0 | 0 | 0 | 0 | 0 | 0.098361 | 1 | 0.065574 | false | 0 | 0.098361 | 0 | 0.229508 | 0.065574 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c172cc1e123e977829368c08b82c64684f18d06 | 5,057 | py | Python | fantalega/forms.py | bancaldo/djangofantalega | ffc93ff607106f4f90cfd9c84c175af8798a36d3 | [
"Unlicense"
] | null | null | null | fantalega/forms.py | bancaldo/djangofantalega | ffc93ff607106f4f90cfd9c84c175af8798a36d3 | [
"Unlicense"
] | null | null | null | fantalega/forms.py | bancaldo/djangofantalega | ffc93ff607106f4f90cfd9c84c175af8798a36d3 | [
"Unlicense"
] | null | null | null | # noinspection PyUnresolvedReferences
from django import forms
from .models import Player
# noinspection PyUnresolvedReferences
from django.utils.safestring import mark_safe
class AuctionPlayer(forms.Form):
def __init__(self, *args, **kwargs):
self.dict_values = kwargs.pop('initial')
super(AuctionPlayer, self).__init__(*args, **kwargs)
self.fields['player'] = forms.ChoiceField(
label=u'player', choices=self.dict_values['players'],
widget=forms.Select(), required=False)
self.fields['auction_value'] = forms.IntegerField()
self.fields['team'] = forms.ChoiceField(
label=u'team', choices=self.dict_values['teams'],
widget=forms.Select(), required=False)
class MatchDeadLineForm(forms.Form):
def __init__(self, *args, **kwargs):
self.dict_values = kwargs.pop('initial')
super(MatchDeadLineForm, self).__init__(*args, **kwargs)
m_day = self.dict_values.get('day')
days = self.dict_values.get('days')
self.fields['day'] = forms.ChoiceField(initial=m_day,
label=u'day', choices=days,
widget=forms.Select(), required=False)
self.fields['dead_line'] = forms.DateTimeField(required=True)
class TradeForm(forms.Form):
def __init__(self, *args, **kwargs):
self.dict_values = kwargs.pop('initial')
super(TradeForm, self).__init__(*args, **kwargs)
self.fields['player_out'] = forms.ChoiceField(
label=u'OUT', choices=self.dict_values['players'],
widget=forms.Select(), required=False)
self.fields['player_in'] = forms.ChoiceField(
label=u'IN', choices=self.dict_values['others'],
widget=forms.Select(), required=False)
class UploadVotesForm(forms.Form):
def __init__(self, *args, **kwargs):
self.dict_values = kwargs.pop('initial')
super(UploadVotesForm, self).__init__(*args, **kwargs)
self.fields['day'] = forms.IntegerField()
self.fields['season'] = forms.ChoiceField(label=u'season',
choices=self.dict_values['seasons'],
widget=forms.Select(),)
self.fields['file_in'] = forms.FileField()
class UploadLineupForm(forms.Form):
def __init__(self, *args, **kwargs):
self.dict_values = kwargs.pop('initial')
super(UploadLineupForm, self).__init__(*args, **kwargs)
self.fields['module'] = forms.ChoiceField(
label=u'module', choices=self.dict_values['modules'],
widget=forms.Select())
self.fields['day'] = forms.IntegerField(initial=self.dict_values['day'])
self.fields['holders'] = forms.MultipleChoiceField(
choices=self.dict_values['players'],
widget=forms.CheckboxSelectMultiple())
for n in range(1, 11):
self.fields['substitute_%s' % n] = forms.ChoiceField(
label=u'substitute %s' % n, choices=self.dict_values['players'],
widget=forms.Select(), required=False)
def check_holders(self):
error = ''
data = self.cleaned_data['holders']
substitutes = [self.cleaned_data.get('substitute_%s' % n)
for n in range(1, 11)]
if len(data) != 11:
return "holder players number is wrong!"
module = dict(self.fields['module'].choices)[
int(self.cleaned_data['module'])]
mod_defs, mod_mids, mod_forws = module
goalkeepers = len([code for code in data if int(code) < 200])
defenders = len([code for code in data if 200 < int(code) < 500])
midfielders = len([code for code in data if 500 < int(code) < 800])
forwarders = len([code for code in data if int(code) > 800])
if goalkeepers > 1:
return "To many goalkeepers!"
if defenders != int(mod_defs):
return "number of defenders doesn't match module!"
if midfielders != int(mod_mids):
return "number of midfielders doesn't match module!"
if forwarders != int(mod_forws):
return "number of forwarders doesn't match module!"
for code in substitutes:
player = Player.get_by_code(int(code),
self.dict_values['league'].season)
if code in data:
return "substitute %s is in holders!" % player.name
if substitutes.count(code) > 1:
return "Duplicate substitute %s in list!" % player.name
return error
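# Illustrative note (not part of the original project): check_holders() expects the
# selected module string to encode the defenders/midfielders/forwarders counts, e.g.
# a module value of '343' is unpacked as mod_defs='3', mod_mids='4', mod_forws='3',
# so a valid 11-player lineup needs 1 goalkeeper (code < 200), 3 defenders
# (200 < code < 500), 4 midfielders (500 < code < 800) and 3 forwarders (code > 800).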
class TeamSellPlayersForm(forms.Form):
def __init__(self, *args, **kwargs):
self.dict_values = kwargs.pop('initial')
super(TeamSellPlayersForm, self).__init__(*args, **kwargs)
self.fields['team_players'] = forms.MultipleChoiceField(
choices=self.dict_values['team_players'],
widget=forms.CheckboxSelectMultiple()) | 45.558559 | 80 | 0.600949 | 560 | 5,057 | 5.264286 | 0.194643 | 0.05156 | 0.090231 | 0.064111 | 0.412483 | 0.367707 | 0.282564 | 0.217775 | 0.217775 | 0.1981 | 0 | 0.007594 | 0.270912 | 5,057 | 111 | 81 | 45.558559 | 0.791972 | 0.01404 | 0 | 0.210526 | 0 | 0 | 0.109551 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073684 | false | 0 | 0.031579 | 0 | 0.252632 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c182ba22d059d50545ce6b9ad7ba73861064bb7 | 12,134 | py | Python | kolejka/worker/stage0.py | dtracz/kolejka | ff62bd242b2a3f64f44c3b8e6379e083a67f211d | [
"MIT"
] | null | null | null | kolejka/worker/stage0.py | dtracz/kolejka | ff62bd242b2a3f64f44c3b8e6379e083a67f211d | [
"MIT"
] | null | null | null | kolejka/worker/stage0.py | dtracz/kolejka | ff62bd242b2a3f64f44c3b8e6379e083a67f211d | [
"MIT"
] | null | null | null | # vim:ts=4:sts=4:sw=4:expandtab
import copy
import datetime
import dateutil.parser
import glob
import json
import logging
import math
import os
import shutil
import subprocess
import sys
import tempfile
import time
import uuid
from kolejka.common.settings import OBSERVER_SOCKET, TASK_SPEC, RESULT_SPEC, WORKER_HOSTNAME, WORKER_DIRECTORY, WORKER_PYTHON_VOLUME
from kolejka.common import kolejka_config, worker_config
from kolejka.common import KolejkaTask, KolejkaResult, KolejkaLimits
from kolejka.common import ControlGroupSystem
from kolejka.common import MemoryAction, TimeAction
from kolejka.worker.volume import check_python_volume
def silent_call(*args, **kwargs):
kwargs['stdin'] = kwargs.get('stdin', subprocess.DEVNULL)
    kwargs['stdout'] = kwargs.get('stdout', subprocess.DEVNULL)
    kwargs['stderr'] = kwargs.get('stderr', subprocess.DEVNULL)
return subprocess.run(*args, **kwargs)
def stage0(task_path, result_path, temp_path=None, consume_task_folder=False):
config = worker_config()
cgs = ControlGroupSystem()
task = KolejkaTask(task_path)
if not task.id:
task.id = uuid.uuid4().hex
logging.warning('Assigned id {} to the task'.format(task.id))
if not task.image:
logging.error('Task does not define system image')
sys.exit(1)
if not task.args:
logging.error('Task does not define args')
sys.exit(1)
if not task.files.is_local:
logging.error('Task contains non-local files')
sys.exit(1)
limits = KolejkaLimits()
limits.cpus = config.cpus
limits.memory = config.memory
limits.swap = config.swap
limits.pids = config.pids
limits.storage = config.storage
limits.image = config.image
limits.workspace = config.workspace
limits.time = config.time
limits.network = config.network
task.limits.update(limits)
docker_task = 'kolejka_worker_{}'.format(task.id)
docker_cleanup = [
[ 'docker', 'kill', docker_task ],
[ 'docker', 'rm', docker_task ],
]
with tempfile.TemporaryDirectory(dir=temp_path) as jailed_path:
#TODO jailed_path size remains unlimited?
logging.debug('Using {} as temporary directory'.format(jailed_path))
jailed_task_path = os.path.join(jailed_path, 'task')
os.makedirs(jailed_task_path, exist_ok=True)
jailed_result_path = os.path.join(jailed_path, 'result')
os.makedirs(jailed_result_path, exist_ok=True)
jailed = KolejkaTask(os.path.join(jailed_path, 'task'))
jailed.load(task.dump())
jailed.files.clear()
volumes = list()
check_python_volume()
if os.path.exists(OBSERVER_SOCKET):
volumes.append((OBSERVER_SOCKET, OBSERVER_SOCKET, 'rw'))
else:
logging.warning('Observer is not running.')
volumes.append((jailed_result_path, os.path.join(WORKER_DIRECTORY, 'result'), 'rw'))
for key, val in task.files.items():
if key != TASK_SPEC:
src_path = os.path.join(task.path, val.path)
dst_path = os.path.join(jailed_path, 'task', key)
os.makedirs(os.path.dirname(dst_path), exist_ok=True)
if consume_task_folder:
shutil.move(src_path, dst_path)
else:
shutil.copy(src_path, dst_path)
jailed.files.add(key)
jailed.files.add(TASK_SPEC)
#jailed.limits = KolejkaLimits() #TODO: Task is limited by docker, no need to limit it again?
jailed.commit()
volumes.append((jailed.path, os.path.join(WORKER_DIRECTORY, 'task'), 'rw'))
if consume_task_folder:
try:
shutil.rmtree(task_path)
except:
logging.warning('Failed to remove {}'.format(task_path))
pass
for spath in [ os.path.dirname(__file__) ]:
stage1 = os.path.join(spath, 'stage1.sh')
if os.path.isfile(stage1):
volumes.append((stage1, os.path.join(WORKER_DIRECTORY, 'stage1.sh'), 'ro'))
break
for spath in [ os.path.dirname(__file__) ]:
stage2 = os.path.join(spath, 'stage2.py')
if os.path.isfile(stage2):
volumes.append((stage2, os.path.join(WORKER_DIRECTORY, 'stage2.py'), 'ro'))
break
docker_call = [ 'docker', 'run' ]
docker_call += [ '--detach' ]
docker_call += [ '--name', docker_task ]
docker_call += [ '--entrypoint', os.path.join(WORKER_DIRECTORY, 'stage1.sh') ]
for key, val in task.environment.items():
docker_call += [ '--env', '{}={}'.format(key, val) ]
docker_call += [ '--hostname', WORKER_HOSTNAME ]
docker_call += [ '--init' ]
if task.limits.cpus is not None:
docker_call += [ '--cpuset-cpus', ','.join([str(c) for c in cgs.limited_cpuset(cgs.full_cpuset(), task.limits.cpus, task.limits.cpus_offset)]) ]
if task.limits.memory is not None:
docker_call += [ '--memory', str(task.limits.memory) ]
if task.limits.swap is not None:
docker_call += [ '--memory-swap', str(task.limits.memory + task.limits.swap) ]
if task.limits.storage is not None:
docker_info_run = subprocess.run(['docker', 'system', 'info', '--format', '{{json .Driver}}'], stdout=subprocess.PIPE, check=True)
storage_driver = str(json.loads(str(docker_info_run.stdout, 'utf-8')))
if storage_driver == 'overlay2':
docker_info_run = subprocess.run(['docker', 'system', 'info', '--format', '{{json .DriverStatus}}'], stdout=subprocess.PIPE, check=True)
storage_fs = dict(json.loads(str(docker_info_run.stdout, 'utf-8')))['Backing Filesystem']
if storage_fs in [ 'xfs' ]:
storage_limit = task.limits.storage
docker_call += [ '--storage-opt', 'size='+str(storage_limit) ]
else:
logging.warning("Storage limit on {} ({}) is not supported".format(storage_driver, storage_fs))
else:
logging.warning("Storage limit on {} is not supported".format(storage_driver))
if task.limits.network is not None:
if not task.limits.network:
docker_call += [ '--network=none' ]
docker_call += [ '--cap-add', 'SYS_NICE' ]
if task.limits.pids is not None:
docker_call += [ '--pids-limit', str(task.limits.pids) ]
if task.limits.time is not None:
docker_call += [ '--stop-timeout', str(int(math.ceil(task.limits.time.total_seconds()))) ]
docker_call += [ '--volume', '{}:{}:{}'.format(WORKER_PYTHON_VOLUME, os.path.join(WORKER_DIRECTORY, 'python3'), 'ro') ]
for v in volumes:
docker_call += [ '--volume', '{}:{}:{}'.format(os.path.realpath(v[0]), v[1], v[2]) ]
docker_call += [ '--workdir', WORKER_DIRECTORY ]
docker_image = task.image
docker_call += [ docker_image ]
docker_call += [ '--consume' ]
if config.debug:
docker_call += [ '--debug' ]
if config.verbose:
docker_call += [ '--verbose' ]
docker_call += [ os.path.join(WORKER_DIRECTORY, 'task') ]
docker_call += [ os.path.join(WORKER_DIRECTORY, 'result') ]
logging.debug('Docker call : {}'.format(docker_call))
pull_image = config.pull
if not pull_image:
docker_inspect_run = subprocess.run(['docker', 'image', 'inspect', docker_image], stdout=subprocess.DEVNULL, stderr=subprocess.STDOUT)
if docker_inspect_run.returncode != 0:
pull_image = True
if pull_image:
subprocess.run(['docker', 'pull', docker_image], check=True)
for docker_clean in docker_cleanup:
silent_call(docker_clean)
if os.path.exists(result_path):
shutil.rmtree(result_path)
os.makedirs(result_path, exist_ok=True)
result = KolejkaResult(result_path)
result.id = task.id
result.limits = task.limits
result.stdout = task.stdout
result.stderr = task.stderr
start_time = datetime.datetime.now()
docker_run = subprocess.run(docker_call, stdout=subprocess.PIPE)
cid = str(docker_run.stdout, 'utf-8').strip()
logging.info('Started container {}'.format(cid))
while True:
try:
docker_state_run = subprocess.run(['docker', 'inspect', '--format', '{{json .State}}', cid], stdout=subprocess.PIPE)
state = json.loads(str(docker_state_run.stdout, 'utf-8'))
except:
break
try:
result.stats.update(cgs.name_stats(cid))
except:
pass
time.sleep(0.1)
if not state['Running']:
result.result = state['ExitCode']
try:
result.stats.time = dateutil.parser.parse(state['FinishedAt']) - dateutil.parser.parse(state['StartedAt'])
except:
result.stats.time = None
break
if task.limits.time is not None and datetime.datetime.now() - start_time > task.limits.time + datetime.timedelta(seconds=2):
docker_kill_run = subprocess.run([ 'docker', 'kill', docker_task ])
subprocess.run(['docker', 'logs', cid], stdout=subprocess.PIPE)
try:
summary = KolejkaResult(jailed_result_path)
result.stats.update(summary.stats)
except:
pass
stop_time = datetime.datetime.now()
if result.stats.time is None:
result.stats.time = stop_time - start_time
result.stats.pids.usage = None
result.stats.memory.usage = None
result.stats.memory.swap = None
for dirpath, dirnames, filenames in os.walk(jailed_result_path):
for filename in filenames:
abspath = os.path.join(dirpath, filename)
realpath = os.path.realpath(abspath)
if realpath.startswith(os.path.realpath(jailed_result_path)+'/'):
relpath = abspath[len(jailed_result_path)+1:]
if relpath != RESULT_SPEC:
destpath = os.path.join(result.path, relpath)
os.makedirs(os.path.dirname(destpath), exist_ok=True)
shutil.move(realpath, destpath)
os.chmod(destpath, 0o640)
result.files.add(relpath)
result.commit()
os.chmod(result.spec_path, 0o640)
for docker_clean in docker_cleanup:
silent_call(docker_clean)
def config_parser(parser):
parser.add_argument("task", type=str, help='task folder')
parser.add_argument("result", type=str, help='result folder')
parser.add_argument("--temp", type=str, help='temp folder')
parser.add_argument('--pull', action='store_true', help='always pull images, even if local version is present', default=False)
parser.add_argument('--consume', action='store_true', default=False, help='consume task folder')
parser.add_argument('--cpus', type=int, help='cpus limit')
parser.add_argument('--memory', action=MemoryAction, help='memory limit')
parser.add_argument('--swap', action=MemoryAction, help='swap limit')
parser.add_argument('--pids', type=int, help='pids limit')
parser.add_argument('--storage', action=MemoryAction, help='storage limit')
parser.add_argument('--image', action=MemoryAction, help='image size limit')
parser.add_argument('--workspace', action=MemoryAction, help='workspace size limit')
parser.add_argument('--time', action=TimeAction, help='time limit')
    parser.add_argument('--network', type=bool, help='allow networking')
def execute(args):
kolejka_config(args=args)
config = worker_config()
stage0(args.task, args.result, temp_path=config.temp_path, consume_task_folder=args.consume)
parser.set_defaults(execute=execute)
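# Illustrative note (not part of the original file): for a task with cpu, memory and
# pid limits set, stage0() assembles a `docker run` call roughly equivalent to the
# following shell command (all angle-bracketed names are placeholders):
#
#   docker run --detach --name kolejka_worker_<task.id> \
#       --entrypoint <WORKER_DIRECTORY>/stage1.sh \
#       --hostname <WORKER_HOSTNAME> --init \
#       --cpuset-cpus 0,1 --memory <bytes> --pids-limit <n> --cap-add SYS_NICE \
#       --volume <task dir>:<WORKER_DIRECTORY>/task:rw \
#       --volume <result dir>:<WORKER_DIRECTORY>/result:rw \
#       --workdir <WORKER_DIRECTORY> <task.image> \
#       --consume <WORKER_DIRECTORY>/task <WORKER_DIRECTORY>/result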
| 46.312977 | 156 | 0.614719 | 1,446 | 12,134 | 5.012448 | 0.174965 | 0.023179 | 0.023455 | 0.01766 | 0.201297 | 0.146799 | 0.096578 | 0.05574 | 0.05574 | 0.046082 | 0 | 0.004649 | 0.255398 | 12,134 | 261 | 157 | 46.490421 | 0.797565 | 0.013186 | 0 | 0.141667 | 0 | 0 | 0.108948 | 0 | 0 | 0 | 0 | 0.003831 | 0 | 1 | 0.016667 | false | 0.0125 | 0.083333 | 0 | 0.104167 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c190583fb5783eba2740c28934157e38a3fd01b | 7,526 | py | Python | src/keri/vc/proving.py | hello0827/keripy | db41d612357acb231354ba3f353995635d91a02e | [
"Apache-2.0"
] | null | null | null | src/keri/vc/proving.py | hello0827/keripy | db41d612357acb231354ba3f353995635d91a02e | [
"Apache-2.0"
] | null | null | null | src/keri/vc/proving.py | hello0827/keripy | db41d612357acb231354ba3f353995635d91a02e | [
"Apache-2.0"
] | null | null | null | # -*- encoding: utf-8 -*-
"""
keri.vc.proving module
"""
from collections.abc import Iterable
from typing import Union
from .. import help
from ..core import coring
from ..core.coring import (Serials, Versify)
from ..db import subing
from ..kering import Version
KERI_REGISTRY_TYPE = "KERICredentialRegistry"
logger = help.ogler.getLogger()
def credential(schema,
issuer,
subject,
status=None,
source=None,
rules=None,
version=Version,
kind=Serials.json):
""" Returns Credentialer of new credential
    Creates the SAD for the credential and saidifies it before creation.
Parameters:
schema (SAID): of schema for this credential
issuer (str): qb64 identifier prefix of the issuer
status (str): qb64 said of the credential registry
subject (dict): of the values being assigned to the subject of this credential
source (list): of source credentials to which this credential is chained
rules (list): ACDC rules section for credential
version (Version): version instance
kind (Serials): serialization kind
Returns:
Credentialer: credential instance
"""
vs = Versify(ident=coring.Idents.acdc, version=version, kind=kind, size=0)
source = source if source is not None else []
vc = dict(
v=vs,
d="",
s=schema,
i=issuer,
a={},
p=source
)
if status is not None:
subject["ri"] = status
if rules is not None:
vc["r"] = rules
_, sad = coring.Saider.saidify(sad=subject, kind=kind, label=coring.Ids.d)
vc["a"] = sad
_, vc = coring.Saider.saidify(sad=vc)
return Credentialer(ked=vc)
class Credentialer(coring.Sadder):
""" Credentialer is for creating ACDC chained credentials
Sub class of Sadder that adds credential specific validation and properties
Inherited Properties:
.raw is bytes of serialized event only
.ked is key event dict
.kind is serialization kind string value (see namedtuple coring.Serials)
.version is Versionage instance of event version
        .size is int of number of bytes in serialized event only
.diger is Diger instance of digest of .raw
.dig is qb64 digest from .diger
.digb is qb64b digest from .diger
.verfers is list of Verfers converted from .ked["k"]
.werfers is list of Verfers converted from .ked["b"]
        .tholder is Tholder instance from .ked["kt"] else None
.sn is int sequence number converted from .ked["s"]
.pre is qb64 str of identifier prefix from .ked["i"]
.preb is qb64b bytes of identifier prefix from .ked["i"]
.said is qb64 of .ked['d'] if present
.saidb is qb64b of .ked['d'] of present
Properties:
.crd (dict): synonym for .ked
.issuer (str): qb64 identifier prefix of credential issuer
.schema (str): qb64 SAID of JSONSchema for credential
        .subject (str): qb64 identifier prefix of credential subject
        .status (str): qb64 identifier prefix of issuance / revocation registry
"""
def __init__(self, raw=b'', ked=None, kind=None, sad=None, code=coring.MtrDex.Blake3_256):
""" Creates a serializer/deserializer for a ACDC Verifiable Credential in CESR Proof Format
Requires either raw or (crd and kind) to load credential from serialized form or in memory
Parameters:
raw (bytes): is raw credential
ked (dict): is populated credential
            kind (Serials): is serialization kind
sad (Sadder): is clonable base class
code (MtrDex): is hashing codex
"""
super(Credentialer, self).__init__(raw=raw, ked=ked, kind=kind, sad=sad, code=code)
if self._ident != coring.Idents.acdc:
raise ValueError("Invalid ident {}, must be ACDC".format(self._ident))
@property
def crd(self):
""" issuer property getter"""
return self._ked
@property
def issuer(self):
""" issuer property getter"""
return self._ked["i"]
@property
def schema(self):
""" schema property getter"""
return self._ked["s"]
@property
def subject(self):
""" subject property getter"""
return self._ked["a"]
@property
def status(self):
""" status property getter"""
return self._ked["a"]["ri"]
class CrederSuber(subing.Suber):
""" Data serialization for Credentialer
Sub class of Suber where data is serialized Credentialer instance
Automatically serializes and deserializes using Credentialer methods
"""
def __init__(self, *pa, **kwa):
"""
Parameters:
*pa (list): list arguments passed through to Suber
**kwa (dict): keyword arguments passed through to Suber
"""
super(CrederSuber, self).__init__(*pa, **kwa)
def put(self, keys: Union[str, Iterable], val: Credentialer):
""" Puts val at key made from keys. Does not overwrite
Parameters:
keys (tuple): of key strs to be combined in order to form key
val (Credentialer): instance
Returns:
bool: True If successful, False otherwise, such as key
already in database.
"""
return (self.db.putVal(db=self.sdb,
key=self._tokey(keys),
val=val.raw))
def pin(self, keys: Union[str, Iterable], val: Credentialer):
""" Pins (sets) val at key made from keys. Overwrites.
Parameters:
keys (tuple): of key strs to be combined in order to form key
val (Credentialer): instance
Returns:
bool: True If successful. False otherwise.
"""
return (self.db.setVal(db=self.sdb,
key=self._tokey(keys),
val=val.raw))
def get(self, keys: Union[str, Iterable]):
""" Gets Credentialer at keys
Parameters:
keys (tuple): of key strs to be combined in order to form key
Returns:
Credentialer: instance at keys
None: if no entry at keys
Usage:
Use walrus operator to catch and raise missing entry
if (creder := mydb.get(keys)) is None:
raise ExceptionHere
use creder here
"""
val = self.db.getVal(db=self.sdb, key=self._tokey(keys))
return Credentialer(raw=bytes(val)) if val is not None else None
def rem(self, keys: Union[str, Iterable]):
""" Removes entry at keys
Parameters:
keys (tuple): of key strs to be combined in order to form key
Returns:
bool: True if key exists so delete successful. False otherwise
"""
return self.db.delVal(db=self.sdb, key=self._tokey(keys))
def getItemIter(self, keys: Union[str, Iterable] = b""):
""" Return iterator over the all the items in subdb
Parameters:
keys (tuple): of key strs to be combined in order to form key
Returns:
iterator: of tuples of keys tuple and val coring.Serder for
each entry in db
"""
for key, val in self.db.getTopItemIter(db=self.sdb, key=self._tokey(keys)):
yield self._tokeys(key), Credentialer(raw=bytes(val))
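# Illustrative usage sketch (not part of the original module); every SAID and
# identifier prefix below is an obvious placeholder, not a real value:
#
#   creder = credential(
#       schema="E...schema-said",        # placeholder SAID of the credential schema
#       issuer="E...issuer-prefix",      # placeholder qb64 identifier prefix of the issuer
#       subject={"d": "", "i": "E...subject-prefix", "LEI": "254900OPPU84GM83MG36"},
#       status="E...registry-said",      # placeholder SAID of the issuance/revocation registry
#   )
#   print(creder.said, creder.issuer)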
| 31.358333 | 99 | 0.604571 | 935 | 7,526 | 4.829947 | 0.270588 | 0.017715 | 0.022143 | 0.026572 | 0.280558 | 0.235164 | 0.178034 | 0.114039 | 0.114039 | 0.114039 | 0 | 0.00575 | 0.306803 | 7,526 | 239 | 100 | 31.48954 | 0.859881 | 0.525777 | 0 | 0.121622 | 0 | 0 | 0.021284 | 0.007552 | 0 | 0 | 0 | 0 | 0 | 1 | 0.175676 | false | 0 | 0.094595 | 0 | 0.432432 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c1960727db055be31c109bf50e5cdfd1838d20a | 4,675 | py | Python | 2_import/08_de.py | weng-lab/SCREEN | e8e7203e2f9baa2de70e2f75bdad3ae24b568367 | [
"MIT"
] | 5 | 2020-07-30T02:35:20.000Z | 2020-12-24T01:26:47.000Z | 2_import/08_de.py | weng-lab/SCREEN | e8e7203e2f9baa2de70e2f75bdad3ae24b568367 | [
"MIT"
] | 6 | 2021-03-04T10:30:11.000Z | 2022-03-16T16:47:47.000Z | 2_import/08_de.py | weng-lab/SCREEN | e8e7203e2f9baa2de70e2f75bdad3ae24b568367 | [
"MIT"
] | 2 | 2020-12-08T10:05:02.000Z | 2022-03-10T09:41:19.000Z | #!/usr/bin/env python2
# SPDX-License-Identifier: MIT
# Copyright (c) 2016-2020 Michael Purcaro, Henry Pratt, Jill Moore, Zhiping Weng
from __future__ import print_function
import os
import sys
import json
import psycopg2
import argparse
import gzip
import StringIO
sys.path.append(os.path.join(os.path.dirname(__file__), '../common/'))
from dbconnect import db_connect
from constants import paths
sys.path.append(os.path.join(os.path.dirname(__file__), '../../metadata/utils/'))
from utils import Utils, printt
from db_utils import getcursor, makeIndex
from files_and_paths import Dirs
class ImportDE:
def __init__(self, curs):
self.curs = curs
self.tableName = "mm10_de"
self.ctTableName = "mm10_de_cts"
def setupDb(self):
printt("dropping and creating", self.tableName)
self.curs.execute("""
DROP TABLE IF EXISTS {tn};
CREATE TABLE {tn}(
id serial PRIMARY KEY,
leftCtId integer,
rightCtId integer,
ensembl text,
log2FoldChange real,
padj numeric
);
""".format(tn=self.tableName))
printt("dropping and creating", self.ctTableName)
self.curs.execute("""
DROP TABLE IF EXISTS {tn};
CREATE TABLE {tn}(
id serial PRIMARY KEY,
deCtName text,
biosample_summary text,
tissues text
);
""".format(tn=self.ctTableName))
def setupCellTypes(self, cts):
outF = StringIO.StringIO()
for ct in sorted(list(cts)):
outF.write(ct + '\n')
outF.seek(0)
printt("copying into", self.ctTableName)
cols = ["deCtName"]
self.curs.copy_from(outF, self.ctTableName, '\t', columns=cols)
printt("imported", self.curs.rowcount, "rows", self.ctTableName)
self.curs.execute("""
SELECT id, deCtName FROM {tn}
""".format(tn=self.ctTableName))
ctsToId = {r[1]: r[0] for r in self.curs.fetchall()}
return ctsToId
def loadFileLists(self):
cts = set()
d = os.path.join(paths.v4d, "mouse_epigenome/de_all_pairs/data")
fnps = []
for fn in os.listdir(d):
if not fn.endswith(".txt.gz"):
continue
toks = fn.replace(".txt.gz", '').split("_VS_")
cts.add(toks[0])
cts.add(toks[1])
fnps.append((os.path.join(d, fn), toks[0], toks[1]))
return cts, fnps
def readFile(self, fnp):
with gzip.open(fnp) as f:
f.readline() # consume header
data = []
skipped = 0
for r in f:
toks = r.rstrip().split('\t')
if "NA" == toks[2]:
skipped += 1
continue
padj = toks[5]
if 'NA' == padj:
padj = "1"
etoks = toks[0].split('.')
data.append([etoks[0], toks[2], padj])
return data, skipped
def setupAll(self, sample):
self.setupDb()
cts, fnps = self.loadFileLists()
ctsToId = self.setupCellTypes(cts)
cols = ["leftCtId", "rightCtId", "ensembl", "log2FoldChange", "padj"]
# baseMean log2FoldChange lfcSE stat pvalue padj
counter = 0
for fnp, ct1, ct2 in fnps:
counter += 1
if sample:
# if "_0" not in ct1 and "_0" not in ct2:
if "limb_15" not in ct1 or "limb_11" not in ct2:
continue
printt(counter, len(fnps), fnp)
data, skipped = self.readFile(fnp)
outF = StringIO.StringIO()
for d in data:
outF.write('\t'.join([str(ctsToId[ct1]),
str(ctsToId[ct2])] + d) + '\n')
outF.seek(0)
self.curs.copy_from(outF, self.tableName, '\t', columns=cols)
printt("copied in", self.curs.rowcount, "skipped", skipped)
def index(self):
makeIndex(self.curs, self.tableName, ["leftCtId", "rightCtId", "ensembl"])
def run(args, DBCONN):
printt('***********', "mm10")
with getcursor(DBCONN, "import DEs") as curs:
ide = ImportDE(curs)
if args.index:
return ide.index()
ide.setupAll(args.sample)
ide.index()
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('--index', action="store_true", default=False)
parser.add_argument('--sample', action="store_true", default=False)
args = parser.parse_args()
return args
def main():
args = parse_args()
DBCONN = db_connect(os.path.realpath(__file__))
return run(args, DBCONN)
if __name__ == '__main__':
main()
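# Illustrative note (not part of the original script): typical invocations, assuming
# the surrounding SCREEN import pipeline provides the database configuration:
#
#   ./08_de.py            # full import of all mm10 DE pairs
#   ./08_de.py --sample   # only the limb_15 vs limb_11 pair, for a quick test
#   ./08_de.py --index    # (re)build the index on (leftCtId, rightCtId, ensembl)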
| 28.680982 | 82 | 0.568128 | 561 | 4,675 | 4.634581 | 0.333333 | 0.033846 | 0.015385 | 0.018462 | 0.158077 | 0.099231 | 0.080769 | 0.080769 | 0.080769 | 0.080769 | 0 | 0.015892 | 0.300107 | 4,675 | 162 | 83 | 28.858025 | 0.778729 | 0.049412 | 0 | 0.15748 | 0 | 0 | 0.169896 | 0.012168 | 0 | 0 | 0 | 0 | 0 | 1 | 0.07874 | false | 0 | 0.133858 | 0 | 0.267717 | 0.070866 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c1d894201ca46e3126f0b08cdfeb69da1dc7db6 | 8,423 | py | Python | importers/helpers/__init__.py | codeforIATI/codelist-updater | 624b685756f50444df5eb1af5e6f74f139c8fb46 | [
"MIT"
] | null | null | null | importers/helpers/__init__.py | codeforIATI/codelist-updater | 624b685756f50444df5eb1af5e6f74f139c8fb46 | [
"MIT"
] | 36 | 2019-11-11T09:41:14.000Z | 2022-03-19T23:12:42.000Z | importers/helpers/__init__.py | codeforIATI/codelist-updater | 624b685756f50444df5eb1af5e6f74f139c8fb46 | [
"MIT"
] | null | null | null | """
Converts codelist files from external sources into the format used by IATI.
"""
import argparse
from collections import OrderedDict
from io import StringIO
from os.path import join
import subprocess
import requests
from lxml import etree as ET
import csv
class Importer:
def __init__(self, tmpl_name, source_url, lookup,
source_data=None, order_by=None):
self.tmpl_name = tmpl_name
self.order_by = order_by
if source_data:
self.source_data = [OrderedDict([
(outp, x[inp]) for outp, inp in lookup
]) for x in source_data]
else:
code_lookup = [lookup_value for x, lookup_value in lookup
if x == 'code'][0]
r = fetch(source_url)
reader = csv.DictReader(StringIO(r.content.decode()))
self.source_data = [OrderedDict([
(k, x.get(v)) for k, v in lookup
]) for x in reader if x[code_lookup]]
self.run()
def run(self):
parser = argparse.ArgumentParser()
parser.add_argument('--deploy', action='store_true')
args = parser.parse_args()
self.source_to_xml()
if args.deploy:
self.push_changes()
def indent(self, elem, level=0, shift=2):
"""
Pretty print XML
Adapted from code at http://effbot.org/zone/element-lib.htm
"""
i = '\n' + level * ' ' * shift
if len(elem):
if not elem.text or not elem.text.strip():
elem.text = i + ' ' * shift
if not elem.tail or not elem.tail.strip():
# hack to remove trailing newline
if level:
elem.tail = i
for elem in elem:
self.indent(elem, level + 1, shift)
if not elem.tail or not elem.tail.strip():
elem.tail = i
else:
if level and (not elem.tail or not elem.tail.strip()):
elem.tail = i
def create_codelist_item(self, keys, xml=None, namespaces=None):
if not namespaces:
namespaces = {}
if xml is None:
xml = ET.Element('codelist-item')
xml.set('status', 'active')
for key in keys:
lang = None
if key.startswith('@'):
continue
if '_' in key:
key, lang = key.split('_')
if xml.xpath(key, namespaces=namespaces):
continue
if ':' in key:
ns, key = key.split(':')
key = '{{{namespace}}}{key}'.format(
namespace=namespaces[ns], key=key)
el = ET.Element(key)
if lang:
el.append(ET.Element('narrative'))
xml.append(el)
return xml
def update_codelist_item(self, codelist_item, code_dict, namespaces=None):
if not namespaces:
namespaces = {}
for k, v in code_dict.items():
if k.startswith('@'):
k = k[1:]
if v:
codelist_item.set(k, v)
continue
if '_' in k:
el, lang = k.split('_')
if lang == 'en':
narrative = codelist_item.xpath(
f'{el}/narrative[not(@xml:lang)]',
namespaces=namespaces)[0]
else:
narrative = codelist_item.xpath(
f'{el}/narrative[@xml:lang="{lang}"]',
namespaces=namespaces)
if narrative:
narrative = narrative[0]
if not v:
if narrative.text:
# remove newly empty nodes
narrative.getparent().remove(narrative)
continue
else:
# leave existing empty nodes
continue
elif v:
parent = codelist_item.xpath(el, namespaces=namespaces)[0]
narrative = ET.Element('narrative')
narrative.set(
'{http://www.w3.org/XML/1998/namespace}lang',
lang)
parent.append(narrative)
else:
continue
else:
narrative = codelist_item.xpath(k, namespaces=namespaces)[0]
narrative.text = v
return codelist_item
def source_to_xml(self):
etparser = ET.XMLParser(encoding='utf-8', remove_blank_text=True)
try:
old_xml = ET.parse(
join('codelist_repo', 'xml', '{}.xml'.format(self.tmpl_name)),
etparser)
old_codelist_els = old_xml.xpath('//codelist-item')
except OSError:
old_codelist_els = []
tmpl_path = join('templates', '{}.xml'.format(self.tmpl_name))
xml = ET.parse(tmpl_path, etparser)
namespaces = xml.getroot().nsmap
codelist_items = xml.find('codelist-items')
source_data_dict = OrderedDict([(source_data_row['code'].upper(), source_data_row) for source_data_row in self.source_data])
old_codelist_codes = [
old_codelist_el.find('code').text.upper()
for old_codelist_el in old_codelist_els]
while True:
if not old_codelist_els and not source_data_dict:
break
if source_data_dict:
new_code_dict = list(source_data_dict.values())[0]
if new_code_dict['code'].upper() not in old_codelist_codes:
# add a new code
new_codelist_item = self.create_codelist_item(new_code_dict.keys(), namespaces=namespaces)
new_codelist_item = self.update_codelist_item(new_codelist_item, new_code_dict, namespaces=namespaces)
codelist_items.append(new_codelist_item)
source_data_dict.popitem(last=False)
continue
if old_codelist_els:
old_codelist_el = old_codelist_els[0]
old_codelist_code = old_codelist_el.find('code').text.upper()
else:
old_codelist_code = None
if old_codelist_code in source_data_dict:
# it's in the current codes, so update it
new_code_dict = source_data_dict[old_codelist_code]
updated_codelist_item = self.create_codelist_item(new_code_dict.keys(), old_codelist_el, namespaces=namespaces)
updated_codelist_item = self.update_codelist_item(updated_codelist_item, new_code_dict, namespaces=namespaces)
codelist_items.append(updated_codelist_item)
del source_data_dict[old_codelist_code]
elif old_codelist_el.attrib.get('status') == 'withdrawn':
# it's an old withdrawn code, so just copy it
codelist_items.append(old_codelist_el)
elif codelist_items.xpath('//codelist-item/code[text()="{}"]/..'.format(old_codelist_el.find('code').text)):
# some codelist items are hard-coded, and should just
# be left as is
pass
else:
old_codelist_el.attrib['status'] = 'withdrawn'
# old_codelist_el.attrib['withdrawal-date'] = today
codelist_items.append(old_codelist_el)
old_codelist_els.pop(0)
if self.order_by:
codelist_items[:] = sorted(
codelist_items,
key=lambda x: x.xpath(self.order_by))
output_path = join('codelist_repo', 'xml', '{}.xml'.format(self.tmpl_name))
for el in xml.iter('*'):
if el.text is not None:
if not el.text.strip():
# force tag self-escaping
el.text = None
self.indent(xml.getroot(), 0, 4)
xml.write(output_path, encoding='utf-8', pretty_print=True)
def push_changes(self):
subprocess.run('./update.sh ' + self.tmpl_name, shell=True)
def fetch(url, *args, **kwargs):
r = requests.get(url, *args, **kwargs)
r.raise_for_status()
return r
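# Illustrative usage sketch (not part of the original module); the template name,
# URL and CSV column names below are hypothetical:
#
#   Importer(
#       tmpl_name='Country',
#       source_url='https://example.org/countries.csv',
#       lookup=[('code', 'ISO code'), ('name_en', 'Name (EN)')],
#       order_by='code',
#   )
#
# `lookup` maps output XML element names (left) to source CSV column names (right);
# keys containing '_' such as 'name_en' are split to produce per-language
# <narrative> nodes in the generated codelist items.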
| 39.176744 | 132 | 0.527366 | 944 | 8,423 | 4.509534 | 0.217161 | 0.064599 | 0.033592 | 0.017853 | 0.2288 | 0.209302 | 0.130843 | 0.098896 | 0.098896 | 0.080103 | 0 | 0.003992 | 0.375401 | 8,423 | 214 | 133 | 39.359813 | 0.80517 | 0.056631 | 0 | 0.171429 | 0 | 0 | 0.050899 | 0.012661 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045714 | false | 0.005714 | 0.051429 | 0 | 0.12 | 0.005714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c1db22dc42d5e75d329f133470d4d0f5e20e0f2 | 700 | py | Python | app/request.py | JoyWambui/write-a-way | 2b535406f5c62722d478db8c186562009cf50e65 | [
"MIT"
] | null | null | null | app/request.py | JoyWambui/write-a-way | 2b535406f5c62722d478db8c186562009cf50e65 | [
"MIT"
] | null | null | null | app/request.py | JoyWambui/write-a-way | 2b535406f5c62722d478db8c186562009cf50e65 | [
"MIT"
] | null | null | null | from urllib import response
import urllib.request,json
from .models import Quote
def get_quotes():
random_quote_url= 'http://quotes.stormconsultancy.co.uk/random.json'
with urllib.request.urlopen(random_quote_url) as url:
get_quotes_data= url.read()
response = json.loads(get_quotes_data)
if response:
quote_response= process_response(response)
return quote_response
def process_response(response):
id = response.get('id')
author = response.get('author')
random_quote= response.get('quote')
permalink = response.get('permalink')
quote_object= Quote(id,author,random_quote,permalink)
return quote_object
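# Illustrative usage sketch (not part of the original module):
#
#   quote = get_quotes()
#   print(quote.author)   # attribute names depend on how the Quote model is defined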
| 29.166667 | 72 | 0.7 | 87 | 700 | 5.436782 | 0.344828 | 0.093023 | 0.059197 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.204286 | 700 | 23 | 73 | 30.434783 | 0.849192 | 0 | 0 | 0 | 0 | 0 | 0.100143 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.166667 | 0 | 0.388889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c1f7787f9533c0ad462597947a09f0a76ccdd0c | 7,283 | py | Python | pithy/ansi.py | gwk/glossy | 6976ca4fd1efc09d9cd670b1fe37817c05b4b529 | [
"CC0-1.0"
] | 7 | 2019-05-04T00:51:38.000Z | 2021-12-10T15:36:31.000Z | pithy/ansi.py | gwk/glossy | 6976ca4fd1efc09d9cd670b1fe37817c05b4b529 | [
"CC0-1.0"
] | null | null | null | pithy/ansi.py | gwk/glossy | 6976ca4fd1efc09d9cd670b1fe37817c05b4b529 | [
"CC0-1.0"
] | 1 | 2016-07-30T22:38:08.000Z | 2016-07-30T22:38:08.000Z | # Dedicated to the public domain under CC0: https://creativecommons.org/publicdomain/zero/1.0/.
'''
ANSI Control Sequences.
ANSI Select Graphics Rendition (SGR) sequences.
RST: reset
BOLD: bold
ULINE: underline
BLINK: blink
INVERT: invert
TXT: color text
BG: color background
K: black
W: white
D: dim gray
R: red
G: green
Y: yellow
B: blue
M: magenta
C: cyan
L: light gray
Incomplete
Additional, unimplemented commands are documented below.
As these are implemented, the command chars should be added to the cs_re pattern.
CSI n A CUU – Cursor Up Moves the cursor n (default 1) cells in the given direction.
If the cursor is already at the edge of the screen, this has no effect.
CSI n B: Cursor Down
CSI n C: Cursor Forward
CSI n D: Cursor Back
CSI n E: Moves cursor to beginning of the line n (default 1) lines down.
CSI n F: Moves cursor to beginning of the line n (default 1) lines up.
CSI n G: Moves the cursor to column n.
CSI n S: Scroll whole page up by n (default 1) lines. New lines are added at the bottom. (not ANSI.SYS)
CSI n T: Scroll whole page down by n (default 1) lines. New lines are added at the top. (not ANSI.SYS)
'''
import re as _re
from sys import stderr, stdout
from typing import Any, List
is_err_tty = stderr.isatty()
is_out_tty = stdout.isatty()
# Use these with `and` expressions to omit sgr for non-tty output, e.g. `TTY_OUT and sgr(...)`.
TTY_ERR = '!TTY_ERR' if is_err_tty else ''
TTY_OUT = '!TTY_OUT' if is_out_tty else ''
# ANSI control sequence indicator.
CSI = '\x1B['
# regex for detecting control sequences in strings.
# TODO: replace .*? wildcard with stricter character set.
ctrl_seq_re = _re.compile(r'\x1B\[.*?[hHJKlmsu]')
def ctrl_seq(c:str, *args:Any) -> str:
'Format a control sequence string for command character `c` and arguments.'
return f'{CSI}{";".join(str(a) for a in args)}{c}'
def strip_ctrl_seq(text: str) -> str:
'Strip control sequences from a string.'
return ctrl_seq_re.sub('', text)
def len_strip_ctrl_seq(s: str) -> int:
'Calculate the length of string if control sequences were stripped.'
l = len(s)
for m in ctrl_seq_re.finditer(s):
l -= m.end() - m.start()
return l
def sgr(*seq:Any) -> str:
'Select Graphic Rendition control sequence string.'
return ctrl_seq('m', *seq)
# reset command strings.
RST = sgr() # Equivalent to sgr(0).
RST_ERR = (TTY_ERR and RST)
RST_OUT = (TTY_OUT and RST)
(RST_BOLD, RST_ULINE, RST_BLINK, RST_INVERT, RST_TXT, RST_BG) = (22, 24, 25, 27, 39, 49)
# effect command strings.
(BOLD, ULINE, BLINK, INVERT) = (1, 4, 5, 7)
# "Primary" colors.
# Note that black and white acronyms are suffixed with T,
# because we prefer to use true black and white from xterm-256color, defined below.
# color text: black, red, green, yellow, blue, magenta, cyan, white.
txt_primary_indices = range(30, 38)
txt_primaries = tuple(sgr(i) for i in txt_primary_indices)
TXT_KT, TXT_R, TXT_G, TXT_Y, TXT_B, TXT_M, TXT_C, TXT_WT = txt_primaries
TXT_KT_ERR, TXT_R_ERR, TXT_G_ERR, TXT_Y_ERR, TXT_B_ERR, TXT_M_ERR, TXT_C_ERR, TXT_WT_ERR = (
(TTY_ERR and c) for c in txt_primaries)
TXT_KT_OUT, TXT_R_OUT, TXT_G_OUT, TXT_Y_OUT, TXT_B_OUT, TXT_M_OUT, TXT_C_OUT, TXT_WT_OUT = (
(TTY_OUT and c) for c in txt_primaries)
# color background: black, red, green, yellow, blue, magenta, cyan, white.
bg_primary_indices = range(40, 48)
bg_primaries = tuple(sgr(i) for i in bg_primary_indices)
BG_KT, BG_R, BG_G, BG_Y, BG_B, BG_M, BG_C, BG_WT = bg_primaries
BG_KT_ERR, BG_R_ERR, BG_G_ERR, BG_Y_ERR, BG_B_ERR, BG_M_ERR, BG_C_ERR, BG_WT_ERR = (
(TTY_ERR and c) for c in bg_primaries)
BG_KT_OUT, BG_R_OUT, BG_G_OUT, BG_Y_OUT, BG_B_OUT, BG_M_OUT, BG_C_OUT, BG_WT_OUT = (
(TTY_OUT and c) for c in bg_primaries)
# xterm-256 sequence initiators; these should be followed by a single color index.
# both text and background can be specified in a single sgr call.
TXT = '38;5'
BG = '48;5'
# RGB6 color cube: 6x6x6, from black to white.
K = 16 # black.
W = 231 # white.
# Grayscale: the 24 palette values have a suggested 8 bit grayscale range of [8, 238].
middle_gray_indices = range(232, 256)
KD = W + 4
D = W + 7
DN = W + 10
N = W + 13
NL = W + 16
L = W + 19
LW = W + 22
named_gray_indices = (K, KD, D, DN, N, NL, L, LW, W)
def gray26(n:int) -> int:
assert 0 <= n < 26
if n == 0: return K
if n == 25: return W
return W + n
def rgb6(r:int, g:int, b:int) -> int:
'index RGB triples into the 256-color palette (returns 16 for black, 231 for white).'
assert 0 <= r < 6
assert 0 <= g < 6
assert 0 <= b < 6
return (((r * 6) + g) * 6) + b + 16
TXT_K, TXT_KD, TXT_D, TXT_DN, TXT_N, TXT_NL, TXT_L, TXT_LW, TXT_W = txt_grays = tuple(sgr(TXT, code) for code in named_gray_indices)
TXT_K_OUT, TXT_KD_OUT, TXT_D_OUT, TXT_DN_OUT, TXT_N_OUT, TXT_NL_OUT, TXT_L_OUT, TXT_LW_OUT, TXT_W_OUT = ((TTY_OUT and c) for c in txt_grays)
TXT_K_ERR, TXT_KD_ERR, TXT_D_ERR, TXT_DN_ERR, TXT_N_ERR, TXT_NL_ERR, TXT_L_ERR, TXT_LW_ERR, TXT_W_ERR = ((TTY_ERR and c) for c in txt_grays)
BG_K, BG_KD, BG_D, BG_DN, BG_N, BG_NL, BG_L, BG_LW, BG_W = bg_grays = tuple(sgr(BG, code) for code in named_gray_indices)
BG_K_OUT, BG_KD_OUT, BG_D_OUT, BG_DN_OUT, BG_N_OUT, BG_NL_OUT, BG_L_OUT, BG_LW_OUT, BG_W_OUT = ((TTY_OUT and c) for c in bg_grays)
BG_K_ERR, BG_KD_ERR, BG_D_ERR, BG_DN_ERR, BG_N_ERR, BG_NL_ERR, BG_L_ERR, BG_LW_ERR, BG_W_ERR = ((TTY_ERR and c) for c in bg_grays)
def cursor_pos(x:int, y:int) -> str:
'''
Position the cursor.
Supposedly the 'f' suffix does the same thing.
x and y parameters are zero based.
'''
return ctrl_seq('H', y + 1, x + 1)
ERASE_LINE_F, ERASE_LINE_B, ERASE_LINE = (ctrl_seq('K', i) for i in range(3))
CLEAR_SCREEN_F, CLEAR_SCREEN_B, CLEAR_SCREEN = (ctrl_seq('J', i) for i in range(3))
FILL = ERASE_LINE_F + RST # Erase-line fills the background color to the end of line.
FILL_ERR = (TTY_ERR and FILL)
FILL_OUT = (TTY_OUT and FILL)
CURSOR_SAVE = ctrl_seq('s')
CURSOR_RESTORE = ctrl_seq('u')
CURSOR_HIDE = ctrl_seq('?25l')
CURSOR_SHOW = ctrl_seq('?25h')
CURSOR_REPORT = ctrl_seq('6n') # '\x1B[{x};{y}R' appears as if typed into the terminal.
ALT_ENTER = ctrl_seq('?1049h')
ALT_EXIT = ctrl_seq('?1049l')
def term_pos(x: int, y: int) -> str:
'''
position the cursor using 0-indexed x, y integer coordinates.
(supposedly the 'f' suffix does the same thing).
'''
return ctrl_seq('H', y + 1, x + 1)
def show_cursor(show_cursor: Any) -> str:
return CURSOR_SHOW if show_cursor else CURSOR_HIDE
def sanitize_for_console(*text:str, allow_sgr=False, allow_tab=False, escape=sgr(INVERT), unescape=sgr(RST_INVERT)) -> List[str]:
sanitized = []
for t in text:
for m in _sanitize_re.finditer(t):
s = m[0]
k = m.lastgroup
if k == 'vis' or (allow_sgr and k == 'sgr') or (allow_tab and k == 'tab'):
sanitized.append(s)
else: # Sanitize.
sanitized.append(f'{escape}{escape_char_for_console(s)}{unescape}')
return sanitized
_sanitize_re = _re.compile(r'''(?x)
(?P<vis> [\n -~]+ )
| (?P<sgr> \x1b (?= \[ [\d;]* m ))
| (?P<tab> \t )
| .
''')
def escape_char_for_console(char:str) -> str:
'Escape characters using ploy syntax.'
return escape_reprs.get(char) or f'\\{ord(char):x};'
escape_reprs = {
'\r': '\\r',
'\t': '\\t',
'\v': '\\v',
}
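# Illustrative usage sketch (not part of the original module), showing the SGR
# helpers and the xterm-256 color cube defined above:
#
#   print(f'{TXT_R}error{RST} and {sgr(BOLD)}{sgr(TXT, rgb6(0, 3, 5))}info{RST}')
#   print(''.join(sanitize_for_console('tab\there')))   # the tab is shown as an inverted \t escape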
| 29.366935 | 140 | 0.691885 | 1,380 | 7,283 | 3.428986 | 0.223188 | 0.026627 | 0.011834 | 0.013525 | 0.16568 | 0.16568 | 0.160186 | 0.125951 | 0.086644 | 0.034658 | 0 | 0.022058 | 0.184539 | 7,283 | 247 | 141 | 29.48583 | 0.774541 | 0.373335 | 0 | 0.017241 | 0 | 0 | 0.12971 | 0.013795 | 0 | 0 | 0 | 0.004049 | 0.034483 | 1 | 0.094828 | false | 0 | 0.025862 | 0.008621 | 0.215517 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c236be905c2526a229334d3b2ad9ab2cbc93dc9 | 770 | py | Python | pesto-orm/pesto_orm/pipeline/repository.py | Dreampie/pesto | 3d48637c61396cdc3a5427b8f51378c06c2473d7 | [
"Apache-2.0"
] | 13 | 2020-05-22T09:32:48.000Z | 2021-09-09T09:57:43.000Z | pesto-orm/pesto_orm/pipeline/repository.py | Dreampie/pesto | 3d48637c61396cdc3a5427b8f51378c06c2473d7 | [
"Apache-2.0"
] | null | null | null | pesto-orm/pesto_orm/pipeline/repository.py | Dreampie/pesto | 3d48637c61396cdc3a5427b8f51378c06c2473d7 | [
"Apache-2.0"
] | 2 | 2020-05-25T18:05:20.000Z | 2020-05-25T18:50:22.000Z | #!/usr/bin/env python
# encoding: utf-8
from pesto_common.pipeline.step import PipelineStep
from pesto_orm.dialect.mysql.domain import MysqlBaseRepository
class MysqlPipelineRepository(PipelineStep, MysqlBaseRepository):
def __init__(self, db_name=None, table_name=None, primary_key='id', sql='', data={}, next_step=None, model_class=None, yield_able=True):
MysqlBaseRepository.__init__(self, model_class=model_class)
PipelineStep.__init__(self, data=data, next_step=next_step)
self.db_name = db_name
self.table_name = table_name
self.primary_key = primary_key
self.sql = sql
self.yield_able = yield_able
def _run(self):
self.result.rows = self.query(sql=self.sql, yield_able=self.yield_able)
| 36.666667 | 140 | 0.731169 | 104 | 770 | 5.076923 | 0.403846 | 0.085227 | 0.037879 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00156 | 0.167532 | 770 | 20 | 141 | 38.5 | 0.822153 | 0.046753 | 0 | 0 | 0 | 0 | 0.002732 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c2a72a171117903af475131459c480272dfc566 | 375 | py | Python | backend/src/webhooks/serializers.py | ggcarrots/HighFive | f8610c30240fd80cf45a4147d4e6237aa9d3f82c | [
"MIT"
] | 1 | 2019-06-08T09:15:18.000Z | 2019-06-08T09:15:18.000Z | backend/src/webhooks/serializers.py | ggcarrots/HighFive | f8610c30240fd80cf45a4147d4e6237aa9d3f82c | [
"MIT"
] | 13 | 2020-09-04T23:28:00.000Z | 2022-03-02T04:18:43.000Z | backend/src/webhooks/serializers.py | ggcarrots/HighFive | f8610c30240fd80cf45a4147d4e6237aa9d3f82c | [
"MIT"
] | null | null | null | from rest_framework import serializers
from webhooks.models import Topic
class TopicSerializer(serializers.ModelSerializer):
class Meta:
model = Topic
fields = '__all__'
read_only_fields = [
'id',
'initiator_id',
'dialogflow_sessions_id',
'date_created',
'source'
] | 22.058824 | 51 | 0.565333 | 32 | 375 | 6.28125 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.362667 | 375 | 17 | 52 | 22.058824 | 0.841004 | 0 | 0 | 0 | 0 | 0 | 0.162234 | 0.058511 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c2b310b0a884dd88387deff9a316b6321018c3a | 1,762 | py | Python | exam_scheduler/main.py | bhukyavamshirathod/Exam_scheduler | 0a890350bad4a6d3453d0b379ec2248f2ccbe449 | [
"MIT"
] | 3 | 2019-02-07T11:53:35.000Z | 2020-11-18T08:44:30.000Z | exam_scheduler/main.py | bhukyavamshirathod/Exam_scheduler | 0a890350bad4a6d3453d0b379ec2248f2ccbe449 | [
"MIT"
] | 1 | 2019-05-15T22:49:12.000Z | 2019-05-15T22:49:12.000Z | exam_scheduler/main.py | bhukyavamshirathod/Exam_scheduler | 0a890350bad4a6d3453d0b379ec2248f2ccbe449 | [
"MIT"
] | 3 | 2019-01-11T05:52:24.000Z | 2019-10-17T06:55:00.000Z | #!/usr/bin/env python3
# PYTHON_ARGCOMPLETE_OK
import os
import sys
from srblib import Colour
from . import __version__, __mod_name__
from .scheduler import Scheduler
from .parser import get_parser
from .configurations import default_output_xlsx_path
from .verifier import Verifier
def main():
args = get_parser()
if args.version:
print(__mod_name__+'=='+__version__)
sys.exit()
if args.vr:
Verifier.verify_room_list(args.vr)
sys.exit()
if args.vs:
Verifier.verify_schedule_list(args.vs)
sys.exit()
if args.vt:
Verifier.verify_teachers_list(args.vt)
sys.exit()
if args.vw:
Verifier.verify_work_ratio(args.vw)
sys.exit()
if args.seed <= 0:
Colour.print('seed value should be a positive integer, got : ' + str(args.seed),Colour.RED)
sys.exit(1)
global default_output_xlsx_path
if args.output: default_output_xlsx_path = args.output
if args.reserved < 0:
Colour.print('Reserved number should be a non-negative integer, got : ' + str(args.reserved), Colour.RED)
sys.exit(1)
try:
scheduler = Scheduler(int(args.seed),int(args.reserved))
if args.debug: scheduler.debug = True
scheduler._configure_paths() # done manually
res = scheduler.compileall()
if not res:
Colour.print('Error during compilation',Colour.RED)
print(res)
sys.exit(1)
scheduler.schedule(default_output_xlsx_path)
Colour.print('Output written to : ' + Colour.END + default_output_xlsx_path, Colour.BLUE)
except KeyboardInterrupt:
Colour.print('Exiting on KeyboardInterrupt ...',Colour.YELLOW)
if(__name__=="__main__"):
main()
| 28.885246 | 113 | 0.660613 | 227 | 1,762 | 4.885463 | 0.361233 | 0.048693 | 0.076646 | 0.09468 | 0.079351 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004461 | 0.236663 | 1,762 | 60 | 114 | 29.366667 | 0.820074 | 0.03235 | 0 | 0.166667 | 0 | 0 | 0.111046 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020833 | false | 0 | 0.166667 | 0 | 0.1875 | 0.145833 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c2e735e033b142f1c9ab984e5bcc14270435e6e | 3,997 | py | Python | mirai/session.py | lemon-chat/mirai-py | e898cdcaed6a4a3338bc7749ddceb67474c88bdb | [
"MIT"
] | 1 | 2021-03-29T17:30:24.000Z | 2021-03-29T17:30:24.000Z | mirai/session.py | lemon-chat/mirai-py | e898cdcaed6a4a3338bc7749ddceb67474c88bdb | [
"MIT"
] | null | null | null | mirai/session.py | lemon-chat/mirai-py | e898cdcaed6a4a3338bc7749ddceb67474c88bdb | [
"MIT"
] | null | null | null |
import os
import json
from functools import partial
from typing import Optional
from .httpapi import MiraiHttpApi
from .message.chain import MessageChain
from .message.messages import Message
class MiraiSession(object):
def __init__(self, api: MiraiHttpApi, account: str, authKey: str):
self.llapi = api
self.authKey = authKey
self.account = account
self.sessionKey = None
def auth(self):
sess_response = self.llapi.auth(authKey=self.authKey)
self.sessionKey = sess_response['session']
verify_response = self.llapi.verify(
sessionKey=self.sessionKey, qq=self.account)
if verify_response['msg'] != "success":
raise Exception(verify_response['msg'])
def leave(self):
release_response = self.llapi.release(
sessionKey=self.sessionKey, qq=self.account)
self.sessionKey = None
if release_response['msg'] != "success":
raise Exception(release_response['msg'])
def __enter__(self):
self.auth()
        # print(f'opening session: {self.sessionKey}')
return self
def __exit__(self, type, value, trace):
        # print(f'closing session: {self.sessionKey}')
self.leave()
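    # Illustrative usage sketch (not part of the original module), showing the
    # context-manager lifecycle implemented by __enter__/__exit__ above; the URL,
    # account, authKey and MiraiHttpApi constructor signature are assumptions:
    #
    #   api = MiraiHttpApi('http://localhost:8080')
    #   with MiraiSession(api, account='123456789', authKey='secret') as sess:
    #       sess.sendFriendMessage(987654321, MessageChain([...]))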
def recall(self, target:int):
'''
        sessionKey String  the session key of an already activated Session
        target Int  messageId of the message to recall
'''
return self.llapi.recall(sessionKey=self.sessionKey, target=target)
def sendGroupMessage(self, target:int, messageChain: MessageChain, quote:Optional[int]=None):
'''
        sessionKey String  the session key of an already activated Session
        target Long  optional, group number of the target group
        group Long  optional, at least one of target and group must be given; when target is given, group is ignored (same meaning as target)
        quote Int  messageId of a message to quote in the reply
        messageChain Array  the message chain, an array of message objects
'''
if quote is None:
ret = self.llapi.sendGroupMessage(
sessionKey=self.sessionKey,
target=target,
messageChain=messageChain.dict()
)
else:
ret = self.llapi.sendGroupMessage(
sessionKey=self.sessionKey,
target=target,
quote=quote,
messageChain=messageChain.dict()
)
return Message(ret['messageId'], messageChain)
def sendFriendMessage(self, target:int, messageChain: MessageChain, quote:Optional[int]=None):
'''
        sessionKey String  the session key of an already activated Session
        target Long  optional, number of the message target
        qq Long  optional, at least one of target and qq must be given; when target is given, qq is ignored (same meaning as target)
        quote Int  messageId of a message to quote in the reply
        messageChain Array  the message chain, an array of message objects
'''
if quote is None:
ret = self.llapi.sendFriendMessage(
sessionKey=self.sessionKey,
target=target,
messageChain=messageChain.dict()
)
else:
ret = self.llapi.sendFriendMessage(
sessionKey=self.sessionKey,
target=target,
quote=quote,
messageChain=messageChain.dict()
)
return Message(ret['messageId'], messageChain)
def peekLatestMessage(self, count:int):
'''
        sessionKey  your session key
        count  number of messages and events to fetch
'''
ret = self.llapi.peekLatestMessage(sessionKey=self.sessionKey, count=count)
data = ret['data']
return data
def friendList(self):
'''
        Use this method to get the bot's friend list
'''
ret = self.llapi.friendList(sessionKey=self.sessionKey)
return ret
def groupList(self):
'''
        Use this method to get the bot's group list
'''
ret = self.llapi.groupList(sessionKey=self.sessionKey)
return ret
def memberList(self, target:int):
'''
        Use this method to get the member list of the target group
'''
ret = self.llapi.memberList(sessionKey=self.sessionKey, target=target)
return ret | 31.472441 | 98 | 0.589192 | 360 | 3,997 | 6.486111 | 0.244444 | 0.095931 | 0.113062 | 0.077088 | 0.485653 | 0.427409 | 0.364882 | 0.364882 | 0.364882 | 0.336617 | 0 | 0 | 0.323493 | 3,997 | 127 | 99 | 31.472441 | 0.863536 | 0.188141 | 0 | 0.407895 | 0 | 0 | 0.017851 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0 | 0.092105 | 0 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c30d2b3766f8711bfdb4dadb53210987bf14d7c | 554 | py | Python | Day 7/solution2.py | joll05/AdventOfCode2019 | faa61058dd048dfc039889eaa4bd361d34b9dc7b | [
"Unlicense"
] | null | null | null | Day 7/solution2.py | joll05/AdventOfCode2019 | faa61058dd048dfc039889eaa4bd361d34b9dc7b | [
"Unlicense"
] | null | null | null | Day 7/solution2.py | joll05/AdventOfCode2019 | faa61058dd048dfc039889eaa4bd361d34b9dc7b | [
"Unlicense"
] | null | null | null | import computer
import itertools
possibleOrders = list(itertools.permutations(range(5)))
inputs = [0, 0]
bestResult = 0
def RecieveOutput(output):
global inputs
inputs[1] = output
index = 0
def SendInput():
global index
index += 1
return(inputs[(index - 1) % len(inputs)])
for i in possibleOrders:
inputs = [0, 0]
index = 0
for j in i:
inputs[0] = j
computer.Run(RecieveOutput, SendInput)
result = inputs[1]
if(result > bestResult):
bestResult = result
print(bestResult)
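# Illustrative note (not part of the original solution): for each permutation of the
# phase settings 0..4, SendInput alternates between the phase value (inputs[0]) and
# the previous amplifier's output (inputs[1]), and RecieveOutput writes each
# amplifier's result back into inputs[1]; after the inner loop, inputs[1] holds the
# thrust for that ordering and bestResult keeps the maximum over all 120 permutations.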
| 15.828571 | 55 | 0.624549 | 67 | 554 | 5.164179 | 0.41791 | 0.060694 | 0.046243 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032099 | 0.268953 | 554 | 34 | 56 | 16.294118 | 0.822222 | 0 | 0 | 0.173913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.086957 | 0 | 0.173913 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c33fd49a2f0c97ea51a9df3e20d3fb9c5ffeb0a | 5,574 | py | Python | survey/propositionA3.py | pbatko/abcvoting | 55a8e7e23e35a3620921e3f5426a09925e83640e | [
"MIT"
] | null | null | null | survey/propositionA3.py | pbatko/abcvoting | 55a8e7e23e35a3620921e3f5426a09925e83640e | [
"MIT"
] | null | null | null | survey/propositionA3.py | pbatko/abcvoting | 55a8e7e23e35a3620921e3f5426a09925e83640e | [
"MIT"
] | null | null | null | """Proposition A.3
from the survey: "Approval-Based Multi-Winner Voting:
Axioms, Algorithms, and Applications"
by Martin Lackner and Piotr Skowron
"""
from __future__ import print_function
import sys
sys.path.insert(0, '..')
from abcvoting import abcrules
from abcvoting.preferences import Profile
from abcvoting import misc
print(misc.header("Proposition A.3", "*"))
num_cand = 8
a, b, c, d, e, f, g, h = list(range(num_cand)) # a = 0, b = 1, c = 2, ...
names = "abcdefgh"
monotonicity_instances = [
("seqphrag", 3, # from Xavier Mora, Maria Oliver (2015)
[[0, 1]] * 10 + [[2]] * 3 + [[3]] * 12 + [[0, 1, 2]] * 21 + [[2, 3]] * 6,
True, [0, 1], [[0, 1, 3]], [[0, 2, 3], [1, 2, 3]]),
("seqphrag", 3, # from Xavier Mora, Maria Oliver (2015)
[[2]] * 7 + [[0, 1]] * 4 + [[0, 1, 2]] + [[0, 1, 3]] * 16 + [[2, 3]] * 4,
False, [0, 1], [[0, 1, 2]], [[0, 2, 3], [1, 2, 3]]),
("rule-x", 3,
[[1, 3], [0, 1], [1, 3, 4], [0, 4], [2, 3, 4], [2, 4], [2, 3, 4], [0, 2, 4], [1, 2, 3]],
True, [0], [[0, 3, 4]], [[1, 2, 4]]),
("rule-x", 3,
[[2, 4], [0, 1], [0, 4], [3, 4], [0, 1, 2], [3, 4], [1, 2, 4], [3, 4], [2, 4], [0, 1, 2], [1, 3], [2, 4], [0, 3], [3, 4], [2, 3], [1, 2, 3], [1, 2, 4], [1, 3], [2, 4]],
False, [1, 3], [[1, 3, 4]], [[2, 3, 4]]),
("revseqpav", 3,
[[0, 4], [1, 2, 3], [3, 4], [2, 4], [1, 3, 4], [2, 4], [0, 1, 2], [2, 3, 4], [0, 3, 4], [1, 3], [0, 4], [0, 3, 4], [0, 1], [0, 3], [0, 1, 3], [2, 4], [1, 2, 3], [1, 2]],
False, [2, 3], [[2, 3, 4]], [[1, 3, 4]]),
("greedy-monroe", 3,
[[1, 2, 3], [0, 2, 5], [0, 3, 4], [2, 4], [0, 1], [3, 5], [3, 5], [1, 4], [1, 5]],
True, [4], [[1, 4, 5]], [[1, 2, 3]]),
("pav", 4, # from Sanchez-Fernandez and Fisteus (2019)
[[0, 1], [0, 1, 2], [4, 5], [4, 5]] + [[0, 4], [1, 4], [2, 4], [0, 5], [1, 5], [2, 5], [0, 6], [1, 6], [2, 6]] * 3 + [[3]] * 100,
False, [2, 3], [[0, 1, 2, 3]], [[3, 4, 5, 6]]),
("cc", 3, # from Sanchez-Fernandez and Fisteus (2019)
[[a], [a, d], [a, e], [c, d], [c, e], [b]] * 2 + [[d]],
False, [b, c], [[a, b, c]], [[a, b, c], [b, d, e]]),
("monroe", 4,
[[a, e]] * 5 + [[a, g]] * 4 + [[b, e]] * 5 + [[b, h]] * 4 + [[c, f]] * 5 + [[c, g]] * 4 + [[d, f]] * 3 + [[d, h]] * 3,
True, [e], [[e, f, g, h]], [[a, b, c, d], [e, f, g, h]]),
("monroe", 3,
[[a], [a, d], [a, e]] * 2 + [[b], [c, d]] * 4 + [[b, e]] + [[c, e]] * 3,
False, [b, c], [[a, b, c]], [[a, b, c], [b, d, e]]),
("seqpav", 3,
[[1, 2], [1, 3], [4, 5], [0, 4], [2, 5], [0, 1], [1, 5], [0, 4]],
False, [4, 5], [[1, 4, 5]], [[0, 1, 5], [1, 4, 5]]),
("optphrag", 6, # from Sanchez-Fernandez and Fisteus (2019)
[[1, 2, 3, 4, 5]] * 13 + [[0, 6], [0]] * 2 + [[6]] * 1,
True, [0, 1, 2, 3, 4, 5], [[0, 1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 6]],
[[0, 1, 2, 3, 4, 6], [0, 1, 2, 3, 5, 6], [0, 1, 2, 4, 5, 6], [0, 1, 3, 4, 5, 6], [0, 2, 3, 4, 5, 6]]),
("optphrag", 6, # from Sanchez-Fernandez and Fisteus (2019)
[[7]] + [[1, 2, 3, 4, 5]] * 13 + [[0, 6], [0]] * 2 + [[6]] * 1,
False, [0, 1, 2, 3, 4, 5], [[0, 1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 6]],
[[0, 1, 2, 3, 4, 6], [0, 1, 2, 3, 5, 6], [0, 1, 2, 4, 5, 6], [0, 1, 3, 4, 5, 6], [0, 2, 3, 4, 5, 6]]),
]
for inst in monotonicity_instances:
(rule_id, committeesize, apprsets,
addvoter, extravote, commsfirst, commsafter) = inst
print(misc.header(abcrules.rules[rule_id].longname, "-"))
profile = Profile(num_cand, names=names)
profile.add_preferences(apprsets)
origvote = set(apprsets[0])
print(profile.str_compact())
# irresolute if possible
if False in abcrules.rules[rule_id].resolute:
resolute = False
else:
resolute = True
committees = abcrules.compute(
rule_id, profile, committeesize, resolute=resolute)
print("original winning committees:\n"
+ misc.str_candsets(committees, names))
# verify correctness
assert committees == commsfirst
some_variant = any(all(c in comm for c in extravote) for comm in commsfirst)
all_variant = all(all(c in comm for c in extravote) for comm in commsfirst)
assert some_variant or all_variant
if all_variant:
assert not all(all(c in comm for c in extravote) for comm in commsafter)
else:
assert not any(all(c in comm for c in extravote) for comm in commsafter)
if addvoter:
print("additional voter: " + misc.str_candset(extravote, names))
apprsets.append(extravote)
else:
apprsets[0] = list(set(extravote) | set(apprsets[0]))
print("change of voter 0: "
+ misc.str_candset(list(origvote), names)
+ " --> "
+ misc.str_candset(apprsets[0], names))
profile = Profile(num_cand, names=names)
profile.add_preferences(apprsets)
committees = abcrules.compute(
rule_id, profile, committeesize, resolute=resolute)
print("\nwinning committees after the modification:\n"
+ misc.str_candsets(committees, names))
# verify correctness
assert committees == commsafter
print(abcrules.rules[rule_id].shortname + " fails ", end="")
if addvoter:
if len(extravote) == 1:
print("candidate", end="")
else:
print("support", end="")
print(" monotonicity with additional voters")
else:
if len(set(extravote) - origvote) == 1:
print("candidate", end="")
else:
print("support", end="")
print(" monotonicity without additional voters")
print()
| 42.227273 | 174 | 0.483854 | 911 | 5,574 | 2.927552 | 0.15258 | 0.027747 | 0.024747 | 0.016498 | 0.445819 | 0.419948 | 0.387702 | 0.382452 | 0.315711 | 0.315711 | 0 | 0.115159 | 0.266236 | 5,574 | 131 | 175 | 42.549618 | 0.536919 | 0.085038 | 0 | 0.299065 | 0 | 0 | 0.06845 | 0 | 0 | 0 | 0 | 0 | 0.046729 | 1 | 0 | false | 0 | 0.046729 | 0 | 0.046729 | 0.149533 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c340efab3f4f16adcb01fa6b636cf354b1cf01a | 1,102 | py | Python | plib/center.py | slowrunner/GoPiLgc | e86505d83b2d2e7b1c5c2a04c1eed19774cf76b0 | [
"CC0-1.0"
] | null | null | null | plib/center.py | slowrunner/GoPiLgc | e86505d83b2d2e7b1c5c2a04c1eed19774cf76b0 | [
"CC0-1.0"
] | null | null | null | plib/center.py | slowrunner/GoPiLgc | e86505d83b2d2e7b1c5c2a04c1eed19774cf76b0 | [
"CC0-1.0"
] | null | null | null | #!/usr/bin/env python3
#
# FILE: center.py
# Results: When you run this program, the ROSbot Servo will face center, and turn off.
from __future__ import print_function # use python 3 syntax but make it compatible with python 2
from __future__ import division # ''
import time # import the time library for the sleep function
import gopigo3 # import the GoPiGo3 drivers
GPG = gopigo3.GoPiGo3() # Create an instance of the GoPiGo3 class. GPG will be the GoPiGo3 object.
SERVO_1_CENTER = 1424
SERVO_1_LEFT = 2098 # +674 70 degrees left of center
SERVO_1_RIGHT = 750 # -674 70 degrees right of center
SERVO_OFF = 0
SERVO_1_CENTER_DEG = 85
"""
NOTE: To use degree center:
egpg = easygopigo3.EasyGoPiGo3(use_mutex=True)
pan_servo = egpg.init_servo()
pan_servo.rotate_servo(SERVO_1_CENTER_DEG)
time.sleep(1)
"""
try:
GPG.set_servo(GPG.SERVO_1, SERVO_1_CENTER)
time.sleep(1)
except KeyboardInterrupt:
GPG.set_servo(GPG.SERVO_1, SERVO_1_CENTER)
time.sleep(1)
finally:
GPG.set_servo(GPG.SERVO_1, SERVO_OFF) # relax servo
| 29 | 98 | 0.709619 | 170 | 1,102 | 4.382353 | 0.470588 | 0.080537 | 0.080537 | 0.056376 | 0.146309 | 0.146309 | 0.146309 | 0.112752 | 0.112752 | 0.112752 | 0 | 0.055427 | 0.214156 | 1,102 | 37 | 99 | 29.783784 | 0.80485 | 0.393829 | 0 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.222222 | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c392b9c574a0a2b97e90c7e0e7610fb49bb54e9 | 1,313 | py | Python | multiport/scanner.py | shimst3r/multiport | 91547867846cbb1ff9fc35cc5a834c4ee11fb0a3 | [
"MIT"
] | null | null | null | multiport/scanner.py | shimst3r/multiport | 91547867846cbb1ff9fc35cc5a834c4ee11fb0a3 | [
"MIT"
] | null | null | null | multiport/scanner.py | shimst3r/multiport | 91547867846cbb1ff9fc35cc5a834c4ee11fb0a3 | [
"MIT"
] | null | null | null | """
Scanner is a module that implements basic synchronous port-scanning functionality.
"""
import logging
import socket
from dataclasses import dataclass
from typing import List
@dataclass
class Scanner:
"""
Scanner implements port scanning functionality.
"""
host: str
open_ports: List[int]
socket: socket.socket
def __init__(self, host: str):
self.host = host
self.open_ports = []
self.socket = socket.socket()
logging.debug(f"Socket at host {self.host} has been created.")
def __del__(self):
self.socket.close()
logging.debug(f"Socket at host {self.host} has been closed.")
def scan(self, port: int):
"""Scans whether the given port is open."""
try:
logging.debug(f"Trying to connect to {self.host}:{port}.")
self.socket.connect((self.host, port))
self.open_ports.append(port)
logging.debug(f"Port {self.host}:{port} is open.")
except ConnectionRefusedError:
logging.debug(f"Port {self.host}:{port} is closed.")
except OSError as os_error:
logging.debug(os_error)
else:
self.socket.shutdown(socket.SHUT_RDWR)
logging.debug(f"Connection to port {self.host}:{port} has been shut down.")
| 29.840909 | 87 | 0.626809 | 167 | 1,313 | 4.844311 | 0.359281 | 0.088999 | 0.096415 | 0.059333 | 0.175525 | 0.175525 | 0.175525 | 0.175525 | 0.098888 | 0.098888 | 0 | 0 | 0.260472 | 1,313 | 43 | 88 | 30.534884 | 0.833162 | 0.12719 | 0 | 0 | 0 | 0 | 0.223614 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.133333 | 0 | 0.366667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c39b7e432dbdc90984c1367db6a5912778f41c5 | 21,209 | py | Python | admin/views.py | MicroPyramid/ngo-cms | 5f0baf69ce646ab6b895d3ae2f49b782630c9959 | [
"MIT"
] | 5 | 2019-08-12T17:56:25.000Z | 2021-08-31T04:36:42.000Z | admin/views.py | MicroPyramid/ngo-cms | 5f0baf69ce646ab6b895d3ae2f49b782630c9959 | [
"MIT"
] | 12 | 2020-02-12T00:38:11.000Z | 2022-03-11T23:50:12.000Z | admin/views.py | MicroPyramid/ngo-cms | 5f0baf69ce646ab6b895d3ae2f49b782630c9959 | [
"MIT"
] | 8 | 2019-06-19T18:54:02.000Z | 2021-01-05T19:31:30.000Z | from django.shortcuts import render
from django.shortcuts import render_to_response
from django.contrib.auth.decorators import login_required
from django.views.decorators.csrf import csrf_exempt
from django.contrib.auth import authenticate, logout
from django.contrib.auth import login as signin
from django.http.response import HttpResponse, HttpResponseRedirect
from django.http import JsonResponse
import os
# from django.core.context_processors import csrf
from blog.models import Category, Gal_Image, Post, Menu, Page, Image_File, Banner
from blog.forms import CategoryForm, PostForm, MenuForm, PageForm, bannerForm, PasswordForm
from events.forms import EventForm
from events.models import Event
from PIL import Image
from django.core.files.base import File as fle
from django.core.files.storage import default_storage
from django.db.models import Max
@csrf_exempt
def upload_photos(request):
'''
takes an image uploaded from the rich-text (redactor/CKEditor) editor, stores it
in the database together with a generated thumbnail, and returns the editor's upload callback response'''
if request.FILES.get("upload"):
f = request.FILES.get("upload")
obj = Image_File.objects.create(upload=f, is_image=True)
size = (128, 128)
x = f.name
z = 'thumb' + f.name
y = open(x, 'wb')
for i in f.chunks():
y.write(i)
y.close()
im = Image.open(x)
im.thumbnail(size)
im.save(z)
imdata = open(z, 'rb')
obj.thumbnail.save(z, fle(imdata))
imdata.close()
# obj.thumbnail = imdata
os.remove(x)
os.remove(z)
upurl = default_storage.url(obj.upload.url)
upurl = upurl
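# CKEditor's file-browser upload flow expects a small script response that hands
# the uploaded file's URL back through CKEDITOR.tools.callFunction, using the
# callback number passed in the CKEditorFuncNum query parameter.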
return HttpResponse("""
<script type='text/javascript'>
window.parent.CKEDITOR.tools.callFunction({0}, '{1}');
</script>""".format(request.GET['CKEditorFuncNum'], upurl))
@csrf_exempt
def recent_photos(request):
''' returns all the images from the database '''
imgs = []
for obj in Image_File.objects.filter(is_image=True).order_by("-date_created"):
upurl = default_storage.url(obj.upload.url)
thumburl = default_storage.url(obj.thumbnail.url)
imgs.append({'src': upurl, 'thumb': thumburl, 'is_image': True})
return render_to_response('admin/browse.html', {'files': imgs})
def login(request):
if request.user.is_authenticated:
if request.user.is_superuser:
posts = Post.objects.all().count()
categoryies = Category.objects.all().count()
menus = Menu.objects.all().count()
pages = Page.objects.all().count()
events = Event.objects.all().count()
return render_to_response("admin/index.html",
{'posts': posts,
'categoryies': categoryies,
'menus': menus,
'pages': pages,
'events': events})
return HttpResponseRedirect("/")
if request.method == "POST":
user = authenticate(email=request.POST.get("email"),
password=request.POST.get("password"))
if user is not None:
if user.is_superuser and user.is_active:
signin(request, user)
data = {"error": False}
return JsonResponse(data)
data = {"error": True,
"message": "Your account is not yet activated!"}
return JsonResponse(data)
data = {"error": True,
"message": "Username and password were incorrect."}
return JsonResponse(data)
return render(request, "admin/login.html")
@login_required
def category_list(request):
category_list = Category.objects.all()
return render_to_response('admin/category-list.html',
{'category_list': category_list})
@login_required
def post_list(request):
post_list = Post.objects.all().order_by('id')
return render_to_response('admin/post-list.html', {'post_list': post_list})
@login_required
def event_list(request):
event_list = Event.objects.all().order_by('id')
return render_to_response('admin/event-list.html',
{'event_list': event_list})
@login_required
def menu_list(request):
menu_list = Menu.objects.filter(parent=None)
return render_to_response('admin/menu-list.html', {'menu_list': menu_list})
@login_required
def page_list(request):
page_list = Page.objects.all()
return render_to_response('admin/page-list.html', {'page_list': page_list})
@login_required
def banner_list(request):
banner_list = Banner.objects.all()
return render_to_response('admin/banner-list.html',
{'banner_list': banner_list})
@login_required
def add_category(request):
if request.method == 'GET':
category_list = Category.objects.all()
return render(request, 'admin/category-add.html',
{'category_list': category_list})
validate_category = CategoryForm(request.POST)
errors = {}
if validate_category.is_valid():
new_category = validate_category.save(commit=False)
new_category.save()
data = {"data": 'Category created successfully', "error": False}
return JsonResponse(data)
for k in validate_category.errors:
errors[k] = validate_category.errors[k][0]
return JsonResponse(errors)
@login_required
def add_post(request):
if request.method == 'GET':
category_list = Category.objects.all()
post_list = Post.objects.all()
return render(request, 'admin/post-add.html',
{'category_list': category_list, 'post_list': post_list})
validate_post = PostForm(request.POST)
errors = {}
if validate_post.is_valid():
new_post = validate_post.save(commit=False)
if 'image' not in request.FILES:
errors['image'] = 'Please upload Image'
return JsonResponse(errors)
if request.FILES['image']:
new_post.image = request.FILES['image']
new_post.save()
photos = request.FILES.getlist('photos')
for p in photos:
img = Gal_Image.objects.create(image=p)
new_post.photos.add(img)
data = {"data": 'Post created successfully', "error": False}
return JsonResponse(data)
if 'image' not in request.FILES:
validate_post.errors['image'] = 'Please upload Image'
return JsonResponse(validate_post.errors)
@login_required
def add_event(request):
if request.method == 'GET':
event_list = Event.objects.all()
return render(request, 'admin/event-add.html',
{'event_list': event_list})
validate_event = EventForm(request.POST)
errors = {}
if validate_event.is_valid():
if validate_event.cleaned_data['end_date'] and validate_event.cleaned_data['start_date']:
if validate_event.cleaned_data['start_date'] > validate_event.cleaned_data['end_date']:
errors['date_err'] = 'Start Date should not greater than End Date'
return JsonResponse(errors)
if 'image' not in request.FILES:
errors['image'] = 'Please upload Image'
return JsonResponse(errors)
new_event = validate_event.save(commit=False)
new_event.image = request.FILES['image']
new_event.save()
data = {"data": 'event created successfully', "error": False}
return JsonResponse(data)
for k in validate_event.errors:
errors[k] = validate_event.errors[k][0]
if 'image' not in request.FILES:
errors['image'] = 'Please upload Image'
return JsonResponse(errors)
@login_required
def delete_category(request, pk):
category = Category.objects.get(pk=pk)
category.delete()
return HttpResponseRedirect('/admin/category/list/')
@login_required
def delete_post(request, pk):
post = Post.objects.get(pk=pk)
image_path = post.image.url
for img in post.photos.all():
photo_path = img.image.url
try:
os.remove(photo_path)
except FileNotFoundError:
pass
try:
os.remove(image_path)
except FileNotFoundError:
pass
post.delete()
return HttpResponseRedirect('/admin/article/list/')
@login_required
def edit_category(request, pk):
if request.method == "GET":
category = Category.objects.get(pk=pk)
category_list = Category.objects.all()
return render(request, 'admin/category-edit.html',
{'category_list': category_list, 'category': category})
c = Category.objects.get(pk=pk)
validate_category = CategoryForm(request.POST, instance=c)
errors = {}
if validate_category.is_valid():
validate_category.save()
data = {"data": 'Category edited successfully', "error": False}
return JsonResponse(data)
for k in validate_category.errors:
errors[k] = validate_category.errors[k][0]
return JsonResponse(errors)
@login_required
def edit_post(request, pk):
if request.method == "GET":
post = Post.objects.get(pk=pk)
category_list = Category.objects.all()
post_list = Post.objects.all()
return render(request, 'admin/post-edit.html',
{'post': post, 'post_list': post_list,
'category_list': category_list})
p = Post.objects.get(pk=pk)
validate_post = PostForm(request.POST, instance=p)
errors = {}
if validate_post.is_valid():
new_post = validate_post.save(commit=False)
if 'image' in request.FILES:
image_path = p.image.url
try:
os.remove(image_path)
except Exception:
pass
new_post.image = request.FILES['image']
new_post.save()
photos = request.FILES.getlist('photos')
for p in photos:
img = Gal_Image.objects.create(image=p)
new_post.photos.add(img)
return JsonResponse({"data": 'Post edited successfully', "error": False})
for k in validate_post.errors:
errors[k] = validate_post.errors[k][0]
return JsonResponse(errors)
@login_required
def edit_event(request, pk):
if request.method == "GET":
event = Event.objects.get(pk=pk)
event_list = Event.objects.all()
return render(request, 'admin/event-edit.html',
{'event': event, 'event_list': event_list})
e = Event.objects.get(pk=pk)
validate_event = EventForm(request.POST, instance=e)
errors = {}
if validate_event.is_valid():
if validate_event.cleaned_data['end_date'] and validate_event.cleaned_data['start_date']:
if validate_event.cleaned_data['start_date'] > validate_event.cleaned_data['end_date']:
errors['date_err'] = 'Start Date should not greater than End Date'
return JsonResponse(errors)
new_event = validate_event.save(commit=False)
if 'image' in request.FILES:
image_path = e.image.url
try:
os.remove(image_path)
except FileNotFoundError:
pass
new_event.image = request.FILES['image']
new_event.save()
return JsonResponse({"data": 'event edited successfully', "error": False})
for k in validate_event.errors:
errors[k] = validate_event.errors[k][0]
return JsonResponse(errors)
@login_required
def delete_event(request, pk):
event = Event.objects.get(pk=pk)
image_path = event.image.url
try:
os.remove(image_path)
except FileNotFoundError:
pass
event.delete()
return HttpResponseRedirect('/admin/event/list/')
def admin_logout(request):
logout(request)
return HttpResponseRedirect('/')
@login_required
def add_menu(request):
if request.method == 'GET':
menu_list = Menu.objects.filter(parent=None)
return render(request, 'admin/menu-add.html', {'menu_list': menu_list})
validate_menu = MenuForm(request.POST)
errors = {}
if request.POST['slug'] == "":
errors['slug'] = 'This field is required'
if request.POST['name'] == "":
errors['name'] = 'This field is required'
# if len(errors)>0:
# return HttpResponse(json.dumps(errors))
if validate_menu.is_valid():
new_menu = validate_menu.save(commit=False)
lvl_count = Menu.objects.filter(parent=new_menu.parent).count()
new_menu.lvl = lvl_count + 1
new_menu.save()
return JsonResponse({"data": 'Menu created successfully', "error": False})
for e in validate_menu.errors:
errors[e] = validate_menu.errors[e][0]
return JsonResponse(errors)
@login_required
def edit_menu(request, pk):
if request.method == 'GET':
menu = Menu.objects.get(pk=pk)
menu_list = Menu.objects.filter(parent=None)
return render(request, 'admin/menu-edit.html',
{'menu_list': menu_list, 'menu': menu})
m = Menu.objects.get(pk=pk)
old_parent = m.parent
validate_menu = MenuForm(request.POST, instance=m)
errors = {}
if validate_menu.is_valid():
menu = validate_menu.save(commit=False)
if old_parent == menu.parent:
menu.save()
else:
lvl_count = Menu.objects.filter(parent=menu.parent).count()
menu.lvl = lvl_count + 1
menu.save()
return JsonResponse({"data": 'Menu Edited successfully', "error": False})
for e in validate_menu.errors:
errors[e] = validate_menu.errors[e][0]
return JsonResponse(errors)
@login_required
def delete_menu(request, pk):
curent_menu = Menu.objects.get(pk=pk)
menu_parent = curent_menu.parent
menu_lvl = curent_menu.lvl
max_lvl = Menu.objects.filter(
parent=menu_parent).aggregate(Max('lvl'))['lvl__max']
Menu.objects.get(pk=pk).delete()
if max_lvl != 1:
for m in Menu.objects.filter(parent=menu_parent,
lvl__gt=menu_lvl, lvl__lte=max_lvl):
m.lvl -= 1
m.save()
return HttpResponseRedirect('/admin/menu/list/')
@login_required
def menu_state(request, pk):
menu = Menu.objects.get(pk=pk)
if menu.is_active is True:
menu.is_active = False
menu.save()
else:
menu.is_active = True
menu.save()
return HttpResponseRedirect('/admin/menu/list/')
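# Menu ordering within a parent is stored in the integer "lvl" field; the two
# views below move a menu one position up or down by swapping its lvl with the
# adjacent sibling under the same parent.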
@login_required
def menu_lvl_up(request, pk):
m_parent = Menu.objects.get(pk=pk).parent
curent_menu = Menu.objects.get(pk=pk)
up_menu = Menu.objects.get(parent=m_parent, lvl=curent_menu.lvl - 1)
curent_menu.lvl = curent_menu.lvl - 1
up_menu.lvl = up_menu.lvl + 1
curent_menu.save()
up_menu.save()
return HttpResponseRedirect('/admin/menu/list/')
@login_required
def menu_lvl_down(request, pk):
m_parent = Menu.objects.get(pk=pk).parent
curent_menu = Menu.objects.get(pk=pk)
down_menu = Menu.objects.get(parent=m_parent, lvl=curent_menu.lvl + 1)
curent_menu.lvl = curent_menu.lvl + 1
down_menu.lvl = down_menu.lvl - 1
curent_menu.save()
down_menu.save()
return HttpResponseRedirect('/admin/menu/list/')
@login_required
def post_state(request, pk):
post = Post.objects.get(pk=pk)
if post.is_active is True:
post.is_active = False
post.save()
else:
post.is_active = True
post.save()
return HttpResponseRedirect('/admin/article/list/')
@login_required
def event_state(request, pk):
event = Event.objects.get(pk=pk)
if event.is_active:
event.is_active = False
event.save()
else:
event.is_active = True
event.save()
return HttpResponseRedirect('/admin/event/list/')
@login_required
def delete_gal_image(request, pk, pid):
img = Gal_Image.objects.get(pk=pk)
image_path = img.image.url
try:
os.remove(image_path)
except FileNotFoundError:
pass
img.delete()
return HttpResponseRedirect('/admin/article/edit/' + pid)
@login_required
def delete_page_images(request, pk, pid):
img = Gal_Image.objects.get(pk=pk)
image_path = img.image.url
try:
os.remove(image_path)
except FileNotFoundError:
pass
img.delete()
return HttpResponseRedirect('/admin/page/edit/' + pid)
@login_required
def add_page(request):
if request.method == 'GET':
page_list = Page.objects.all()
return render(request, 'admin/page-add.html', {'page_list': page_list})
validate_page = PageForm(request.POST)
errors = {}
if validate_page.is_valid():
new_page = validate_page.save()
photos = request.FILES.getlist('photos')
for p in photos:
img = Gal_Image.objects.create(image=p)
new_page.photos.add(img)
new_page.save()
return JsonResponse({'data': 'Page Created successfully', "error": False})
for e in validate_page.errors:
errors[e] = validate_page.errors[e][0]
return JsonResponse(errors)
@login_required
def edit_page(request, pk):
if request.method == 'GET':
page = Page.objects.get(pk=pk)
page_list = Page.objects.all()
return render(request, 'admin/page-edit.html',
{'page': page, 'page_list': page_list})
p = Page.objects.get(pk=pk)
validate_page = PageForm(request.POST, instance=p)
errors = {}
if validate_page.is_valid():
page = validate_page.save()
photos = request.FILES.getlist('photos')
for p in photos:
img = Gal_Image.objects.create(image=p)
page.photos.add(img)
page.save()
return JsonResponse({'data': 'Page edited successfully', "error": False})
for e in validate_page.errors:
errors[e] = validate_page.errors[e][0]
return JsonResponse(errors)
@login_required
def delete_page(request, pk):
page = Page.objects.get(pk=pk)
page.delete()
return HttpResponseRedirect('/admin/page/list/')
@login_required
def change_password(request):
if request.method == 'GET':
return render(request, 'admin/change-pwd.html')
validate_password = PasswordForm(request.POST)
errors = {}
if validate_password.is_valid():
pwd = validate_password.cleaned_data['old_password']
if request.user.check_password(pwd):
if validate_password.cleaned_data['new_password'] == validate_password.cleaned_data['re_password']:
request.user.set_password(
validate_password.cleaned_data['new_password'])
request.user.save()
return JsonResponse({'data': 'password changed successfully',
'error': False})
errors['repwd'] = 'New password and Re-enter password are not same'
return JsonResponse(errors)
errors['oldpwd'] = 'please enter correct password'
return JsonResponse(errors)
for e in validate_password.errors:
errors[e] = validate_password.errors[e][0]
return JsonResponse(errors)
def add_banner(request):
if request.method == 'GET':
return render(request, 'admin/banner-add.html')
validate_banner = bannerForm(request.POST)
errors = {}
if validate_banner.is_valid():
new_banner = validate_banner.save(commit=False)
if 'image' not in request.FILES:
errors['image'] = 'Please Upload Banner Image'
if request.POST['title'] == "":
errors['title'] = 'This field is required'
if errors:
return JsonResponse(errors)
if request.FILES['image']:
new_banner.image = request.FILES['image']
new_banner.save()
return JsonResponse({"data": 'Banner created successfully', "error": False})
for k in validate_banner.errors:
errors[k] = validate_banner.errors[k][0]
return JsonResponse(errors)
def edit_banner(request, pk):
if request.method == 'GET':
banner = Banner.objects.get(pk=pk)
return render(request, 'admin/banner-edit.html',
{'banner': banner})
b = Banner.objects.get(pk=pk)
validate_banner = bannerForm(request.POST, instance=b)
errors = {}
if validate_banner.is_valid():
banner = validate_banner.save(commit=False)
if 'image' in request.FILES:
image_path = b.image.url
try:
os.remove(image_path)
except Exception:
pass
banner.image = request.FILES['image']
banner.save()
return JsonResponse({'data': 'Banner edited successfully', 'error': False})
for k in validate_banner.errors:
errors[k] = validate_banner.errors[k][0]
return JsonResponse(errors)
def delete_banner(request, pk):
b = Banner.objects.get(pk=pk)
img = b.image.url
try:
os.remove(img)
except FileNotFoundError:
pass
b.delete()
return HttpResponseRedirect('/admin/banner/list')
| 32.232523 | 111 | 0.630298 | 2,586 | 21,209 | 5.021268 | 0.085847 | 0.049904 | 0.035734 | 0.030189 | 0.683327 | 0.547863 | 0.458606 | 0.408394 | 0.36958 | 0.346708 | 0 | 0.001953 | 0.251733 | 21,209 | 657 | 112 | 32.281583 | 0.816257 | 0.013438 | 0 | 0.499055 | 0 | 0 | 0.113397 | 0.013828 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068053 | false | 0.047259 | 0.032136 | 0 | 0.240076 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c3da1d2aa924daff9b137d23508c3b2ced6224c | 5,279 | py | Python | app/library/forms.py | crudgenerator-io/django-admin-panel | b0f2d6a3ffd73a4b6e0608de486aff2cdb046222 | [
"CC-BY-4.0"
] | 1 | 2021-06-23T20:17:01.000Z | 2021-06-23T20:17:01.000Z | app/library/forms.py | crudgenerator-io/django-admin-panel | b0f2d6a3ffd73a4b6e0608de486aff2cdb046222 | [
"CC-BY-4.0"
] | null | null | null | app/library/forms.py | crudgenerator-io/django-admin-panel | b0f2d6a3ffd73a4b6e0608de486aff2cdb046222 | [
"CC-BY-4.0"
] | null | null | null | from django import forms
from django.contrib import admin
# Create your forms here.
from .models import (Catalog, Member, Account, Library, Book, Author)
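# The forms below share one pattern: each exposes a related object set (a reverse
# foreign key or a one-to-one) as an extra form field, pre-populates that field in
# get_initial_for_field(), and in save() removes or unlinks relations that were
# deselected before attaching the updated selection to the saved instance.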
class CatalogForm(forms.ModelForm):
book_set = forms.ModelMultipleChoiceField(queryset=Book.objects.all())
class Meta:
model = Catalog
exclude = ()
def get_initial_for_field(self, field, field_name):
if field_name == 'book_set':
try:
return self.instance.book_set.all()
except ValueError:
pass
return super().get_initial_for_field(field, field_name)
def save(self, commit=True):
if 'book_set' in self.changed_data:
updated_links = self.cleaned_data['book_set']
prev_links = self.instance.book_set.all()
for prev_link in prev_links:
if prev_link not in updated_links:
prev_link.delete()
self.instance.save()
self.instance.book_set.set(updated_links)
return super().save(commit)
class MemberForm(forms.ModelForm):
account_ref = forms.ModelChoiceField(queryset=Account.objects.all(), required=False, blank=True, help_text='Note: Removing or changing this connection will cause currently selected entity to be removed')
class Meta:
model = Member
exclude = ()
def get_initial_for_field(self, field, field_name):
if field_name == 'account_ref':
try:
return self.instance.account
except self._meta.model.account.RelatedObjectDoesNotExist:
pass
return super().get_initial_for_field(field, field_name)
def save(self, commit=True):
if 'account_ref' in self.changed_data:
new_link = self.cleaned_data['account_ref']
if new_link:
try:
self.instance.account.delete()
except self._meta.model.account.RelatedObjectDoesNotExist:
pass
self.instance.account = new_link
self.instance.save()
new_link.member = self.instance
new_link.save()
else:
self.instance.account.delete()
return super().save(commit)
class AccountForm(forms.ModelForm):
book_set = forms.ModelMultipleChoiceField(queryset=Book.objects.all(), required=False, blank=True)
class Meta:
model = Account
exclude = ()
def get_initial_for_field(self, field, field_name):
if field_name == 'book_set':
return self.instance.book_set.all()
return super().get_initial_for_field(field, field_name)
def save(self, commit=True):
if 'book_set' in self.changed_data:
updated_links = self.cleaned_data['book_set']
for existing_link in self.instance.book_set.all():
if existing_link not in updated_links:
self.instance.book_set.remove(existing_link)
for updated_link in updated_links:
try:
if updated_link.account and updated_link.account != self.instance:
updated_link.account = None
updated_link.save()
except Account.DoesNotExist:
pass
self.instance.save()
self.instance.book_set.set(updated_links)
return super().save(commit)
class LibraryForm(forms.ModelForm):
book_set = forms.ModelMultipleChoiceField(queryset=Book.objects.all(), required=False, blank=True, help_text='Note: Unselecting any of the currently selected entities will cause corresponding entity/ies to be removed.')
class Meta:
model = Library
exclude = ()
def get_initial_for_field(self, field, field_name):
if field_name == 'book_set':
return self.instance.book_set.all()
return super().get_initial_for_field(field, field_name)
def save(self, commit=True):
if 'book_set' in self.changed_data:
updated_links = self.cleaned_data['book_set']
for existing_link in self.instance.book_set.all():
if existing_link not in updated_links:
existing_link.delete()
self.instance.save()
self.instance.book_set.set(updated_links)
return super().save(commit)
class BookForm(forms.ModelForm):
author_set = forms.ModelMultipleChoiceField(queryset=Author.objects.all())
class Meta:
model = Book
exclude = ()
def get_initial_for_field(self, field, field_name):
if field_name == 'author_set':
try:
return self.instance.author_set.all()
except ValueError:
pass
return super().get_initial_for_field(field, field_name)
def save(self, commit=True):
if 'author_set' in self.changed_data:
updated_links = self.cleaned_data['author_set']
prev_links = self.instance.author_set.all()
for prev_link in prev_links:
if prev_link not in updated_links:
prev_link.delete()
self.instance.save()
self.instance.author_set.set(updated_links)
return super().save(commit)
| 39.691729 | 223 | 0.619057 | 618 | 5,279 | 5.076052 | 0.153722 | 0.087982 | 0.041441 | 0.05738 | 0.706726 | 0.652215 | 0.629264 | 0.595473 | 0.583041 | 0.56519 | 0 | 0 | 0.293427 | 5,279 | 132 | 224 | 39.992424 | 0.841019 | 0.004357 | 0 | 0.649573 | 0 | 0 | 0.063773 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08547 | false | 0.042735 | 0.025641 | 0 | 0.367521 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c3dcdbfdf72b384d0cffe4a2a2c53606c095fa5 | 3,667 | py | Python | multitest_transport/integration_test/event_handler_integration_test.py | maksonlee/multitest_transport | 9c20a48ac856307950a204854f52be7335705054 | [
"Apache-2.0"
] | null | null | null | multitest_transport/integration_test/event_handler_integration_test.py | maksonlee/multitest_transport | 9c20a48ac856307950a204854f52be7335705054 | [
"Apache-2.0"
] | null | null | null | multitest_transport/integration_test/event_handler_integration_test.py | maksonlee/multitest_transport | 9c20a48ac856307950a204854f52be7335705054 | [
"Apache-2.0"
] | null | null | null | # Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""MTT event handler integration tests."""
import logging
import uuid
from absl.testing import absltest
from multitest_transport.integration_test import integration_util
class EventHandlerIntegrationTest(integration_util.DockerContainerTest):
"""Tests that execution events get correctly handled."""
def setUp(self):
super(EventHandlerIntegrationTest, self).setUp()
# Schedule test run
self.device_serial = str(uuid.uuid4())
self.test_run_id = self.container.ScheduleTestRun(self.device_serial)['id']
self.container.WaitForState(self.test_run_id, 'QUEUED')
# Lease task and start test run
self.task = self.container.LeaseTask(
integration_util.DeviceInfo(self.device_serial))
self.container.SubmitCommandEvent(self.task, 'InvocationStarted')
self.container.WaitForState(self.test_run_id, 'RUNNING')
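# setUp above drives a run through the normal start-up sequence
# (schedule -> QUEUED -> lease task -> InvocationStarted -> RUNNING); each test
# below submits further command events and asserts the resulting run state.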
def testFatalError(self):
"""Test that fatal errors should stop run and set state to ERROR."""
self.container.SubmitCommandEvent(
self.task, 'ConfigurationError',
data={'error_status': 'CUSTOMER_ISSUE'})
self.container.WaitForState(self.test_run_id, 'ERROR')
def testError(self):
"""Test that non-fatal errors should trigger an automatic retry."""
self.container.SubmitCommandEvent(self.task, 'ExecuteFailed')
self.container.WaitForState(self.test_run_id, 'QUEUED') # Back to QUEUED
# Retry attempt can be leased
retry = self.container.LeaseTask(
integration_util.DeviceInfo(self.device_serial))
self.assertIsNotNone(retry)
self.assertNotEqual(self.task['attempt_id'], retry['attempt_id'])
def testCompleted_success(self):
"""Test that run can be notified of successful completion."""
self.container.SubmitCommandEvent(
self.task,
'InvocationCompleted',
data={
'failed_test_count': 0,
'passed_test_count': 2,
})
self.container.WaitForState(self.test_run_id, 'COMPLETED')
# Test run will contain result information
test_run = self.container.GetTestRun(self.test_run_id)
self.assertEqual('2', test_run['total_test_count'])
self.assertEqual('0', test_run['failed_test_count'])
def testCompleted_failure(self):
"""Test that failed tests will trigger an automatic retry."""
self.container.SubmitCommandEvent(
self.task,
'InvocationCompleted',
data={
'failed_test_count': 1,
'passed_test_count': 1,
})
self.container.WaitForState(self.test_run_id, 'QUEUED') # Back to QUEUED
# Retry attempt can be leased
retry = self.container.LeaseTask(
integration_util.DeviceInfo(self.device_serial))
self.assertIsNotNone(retry)
self.assertNotEqual(self.task['attempt_id'], retry['attempt_id'])
# Test run will contain result information
test_run = self.container.GetTestRun(self.test_run_id)
self.assertEqual('2', test_run['total_test_count'])
self.assertEqual('1', test_run['failed_test_count'])
if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG)
absltest.main()
| 38.6 | 79 | 0.72348 | 455 | 3,667 | 5.681319 | 0.356044 | 0.051451 | 0.038298 | 0.045261 | 0.502515 | 0.45029 | 0.45029 | 0.40619 | 0.389168 | 0.356286 | 0 | 0.005607 | 0.173166 | 3,667 | 94 | 80 | 39.010638 | 0.846966 | 0.297246 | 0 | 0.473684 | 0 | 0 | 0.134204 | 0 | 0 | 0 | 0 | 0 | 0.140351 | 1 | 0.087719 | false | 0.035088 | 0.070175 | 0 | 0.175439 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c438e5c82b88a8434ff78a8641e39f4856cea41 | 3,615 | py | Python | PyDSS/pyPlots/Plots/FrequencySweep.py | dvaidhyn/PyDSS | 0d220d00900da4945e2ab6e7774de5edb58b36a9 | [
"BSD-3-Clause"
] | 21 | 2019-02-04T22:19:50.000Z | 2022-03-01T18:06:28.000Z | PyDSS/pyPlots/Plots/FrequencySweep.py | dvaidhyn/PyDSS | 0d220d00900da4945e2ab6e7774de5edb58b36a9 | [
"BSD-3-Clause"
] | 33 | 2020-01-28T22:47:44.000Z | 2022-03-30T20:05:00.000Z | PyDSS/pyPlots/Plots/FrequencySweep.py | dvaidhyn/PyDSS | 0d220d00900da4945e2ab6e7774de5edb58b36a9 | [
"BSD-3-Clause"
] | 11 | 2019-12-28T01:04:55.000Z | 2022-03-01T18:05:30.000Z |
from PyDSS.pyPlots.pyPlotAbstract import PlotAbstract
from bokeh.plotting import figure, curdoc
from bokeh.io import output_file
from bokeh.models import ColumnDataSource
from bokeh.client import push_session
class FrequencySweep(PlotAbstract):
def __init__(self, PlotProperties, dssBuses, dssObjects, dssCircuit, dssSolver):
super(FrequencySweep).__init__()
self.__dssSolver = dssSolver
self.__dssBuses = dssBuses
self.__dssObjs = dssObjects
self.__dssCircuit = dssCircuit
self.__PlotProperties = PlotProperties
self.plotted_object = self.getObject(PlotProperties['Object Name'], PlotProperties['Object Type'])
output_file(PlotProperties['FileName'])
freq = dssSolver.getFrequency()
yVal = self.getObjectValue(self.plotted_object, PlotProperties['Property'], PlotProperties['Indices'])
self.data = {'frequency': [0]}
self.data[PlotProperties['Property']] = [0]
self.data_source = ColumnDataSource(self.data)
self.__Figure = figure(plot_width=self.__PlotProperties['Width'],
plot_height=self.__PlotProperties['Height'],
title= 'Frequency sweep plot: ' + PlotProperties['Object Name'])
self.ScatterPlot = self.__Figure.line(x= 'frequency', y= PlotProperties['Property'], color='green',
source=self.data_source)
self.__Figure.yaxis.axis_label = PlotProperties['Property'] + ' - ' + PlotProperties['Indices']
self.__Figure.xaxis.axis_label = 'frequency [Hz]'
self.doc = curdoc()
self.doc.add_root(self.__Figure)
self.doc.title = "PyDSS"
self.session = push_session(self.doc)
self.session.show(self.__Figure) # open the document in a browser
self.__time = dssSolver.GetDateTime()
return
def GetSessionID(self):
return self.session.id
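# getObjectValue interprets the Index argument as follows: 'SumEven' / 'SumOdd'
# sum the even- or odd-indexed entries of a list-valued property, 'Even' / 'Odd'
# return those entries, 'Index=<n>' picks a single entry, and anything else
# returns the property value unchanged.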
def getObjectValue(self, Obj, ObjPpty, Index):
pptyValue = Obj.GetVariable(ObjPpty)
if pptyValue is not None:
if isinstance(pptyValue, list):
if Index == 'SumEven':
result = sum(pptyValue[::2])
elif Index == 'SumOdd':
result = sum(pptyValue[1::2])
elif Index == 'Even':
result = pptyValue[::2]
elif Index == 'Odd':
result = pptyValue[1::2]
elif 'Index=' in Index:
c = int(Index.replace('Index=', ''))
result = pptyValue[c]
else:
result = pptyValue
return result
def getObject(self, ObjName, ObjType):
if ObjType == 'Element':
Obj = self.__dssObjs[ObjName]
elif ObjType == 'Bus':
Obj = self.__dssBuses[ObjName]
elif ObjType == 'Circuit':
Obj = self.__dssCircuit
else:
Obj = None
return Obj
def UpdatePlot(self):
if self.__dssSolver.GetDateTime() != self.__time:
#self.data_source.data = self.data
self.data = {'frequency': [0]}
self.data[self.__PlotProperties['Property']] = [0]
self.__time = self.__dssSolver.GetDateTime()
yVal = self.getObjectValue(self.plotted_object, self.__PlotProperties['Property'], self.__PlotProperties['Indices'])
freq = self.__dssSolver.getFrequency()
self.data[self.__PlotProperties['Property']].append(yVal)
self.data['frequency'].append(freq)
self.data_source.data = self.data
| 40.617978 | 124 | 0.602766 | 353 | 3,615 | 5.957507 | 0.305949 | 0.049453 | 0.026629 | 0.019971 | 0.17689 | 0.086543 | 0 | 0 | 0 | 0 | 0 | 0.003871 | 0.285477 | 3,615 | 88 | 125 | 41.079545 | 0.810298 | 0.017427 | 0 | 0.054054 | 0 | 0 | 0.074126 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067568 | false | 0 | 0.067568 | 0.013514 | 0.202703 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c449c75c85ba3e900d82667bb52b82f45f25f7e | 1,124 | py | Python | challenges/banner/banner.py | RobVor/Python | 5cfcd9a72c3899a453c0ec8f4fadea71fe453c49 | [
"FSFAP"
] | null | null | null | challenges/banner/banner.py | RobVor/Python | 5cfcd9a72c3899a453c0ec8f4fadea71fe453c49 | [
"FSFAP"
] | 4 | 2021-06-02T03:44:24.000Z | 2022-03-12T00:52:58.000Z | challenges/banner/banner.py | RobVor/Python | 5cfcd9a72c3899a453c0ec8f4fadea71fe453c49 | [
"FSFAP"
] | null | null | null | #!/usr/bin/env python3
#
# banner.py - Script that takes command-line arguments or interactive input and converts the text to a simple banner.
import os, sys, logging
banner_text = None
logging.basicConfig(level=logging.DEBUG, format=' %(asctime)s - %(levelname)s - %(message)s')
logging.disable(logging.DEBUG)
logging.debug('Program Start')
if len(sys.argv) < 2:
logging.debug('No arguments, moving to user input.')
print("Skipping system arguments and using manual input.")
print("Please type in the text or name you want in a banner.")
banner_text = input()
else:
banner_text = ""
for arg in sys.argv:
if arg == sys.argv[0]:
continue
else:
arg = arg.strip()
banner_text = banner_text + " " + arg
logging.debug('Argument or input available, starting routine.')
def banner_me(text):
logging.debug('Building banner.')
if text.startswith(" "):
text = text[1:]
newStr = '*' * (len(text) + 4)
print(newStr)
print()
print('* ' + text + ' *')
print()
print(newStr)
logging.debug('Banner done!')
banner_me(banner_text)
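# Example: running "./banner.py Ada" (an illustrative input) prints:
#   *******
#
#   * Ada *
#
#   *******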
| 27.414634 | 111 | 0.635231 | 152 | 1,124 | 4.644737 | 0.493421 | 0.11898 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005787 | 0.231317 | 1,124 | 40 | 112 | 28.1 | 0.811343 | 0.116548 | 0 | 0.193548 | 0 | 0 | 0.275758 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032258 | false | 0 | 0.032258 | 0 | 0.064516 | 0.225806 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c44d59f133bb95affcc34026ed80634c6371d4c | 883 | py | Python | deepface_private/deepface_private/detectors/MtcnnWrapper.py | michaelfeil/syssec | 52450a085c02ea4266b4eeeaae94ee7e015c9cee | [
"FTL"
] | null | null | null | deepface_private/deepface_private/detectors/MtcnnWrapper.py | michaelfeil/syssec | 52450a085c02ea4266b4eeeaae94ee7e015c9cee | [
"FTL"
] | null | null | null | deepface_private/deepface_private/detectors/MtcnnWrapper.py | michaelfeil/syssec | 52450a085c02ea4266b4eeeaae94ee7e015c9cee | [
"FTL"
] | null | null | null | import cv2
from deepface_private.detectors import FaceDetector
def build_model():
from mtcnn import MTCNN
face_detector = MTCNN()
return face_detector
def detect_face(face_detector, img, align = True):
resp = []
detected_face = None
img_region = [0, 0, img.shape[0], img.shape[1]]
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) #mtcnn expects RGB but OpenCV read BGR
detections = face_detector.detect_faces(img_rgb)
if len(detections) > 0:
for detection in detections:
x, y, w, h = detection["box"]
detected_face = img[int(y):int(y+h), int(x):int(x+w)]
img_region = [x, y, w, h]
if align:
keypoints = detection["keypoints"]
left_eye = keypoints["left_eye"]
right_eye = keypoints["right_eye"]
detected_face = FaceDetector.alignment_procedure(detected_face, left_eye, right_eye)
resp.append((detected_face, img_region))
return resp
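# Illustrative usage sketch (the image path is a placeholder):
#   detector = build_model()
#   img = cv2.imread("photo.jpg")
#   for face, (x, y, w, h) in detect_face(detector, img, align=True):
#       pass  # face is the (optionally aligned) BGR crop, (x, y, w, h) its box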
| 25.228571 | 88 | 0.718007 | 132 | 883 | 4.606061 | 0.409091 | 0.098684 | 0.029605 | 0.013158 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012212 | 0.165345 | 883 | 34 | 89 | 25.970588 | 0.812754 | 0.041903 | 0 | 0 | 0 | 0 | 0.03432 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.125 | 0 | 0.291667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c4516f724322d07b6d52dfa206448370b349c82 | 6,569 | py | Python | lib/yi_simulation.py | maidenlane/five | bf14dd37b0f14d6998893c2b0478275a0fc55a82 | [
"BSD-3-Clause"
] | 1 | 2020-04-24T05:29:26.000Z | 2020-04-24T05:29:26.000Z | lib/yi_simulation.py | maidenlane/five | bf14dd37b0f14d6998893c2b0478275a0fc55a82 | [
"BSD-3-Clause"
] | null | null | null | lib/yi_simulation.py | maidenlane/five | bf14dd37b0f14d6998893c2b0478275a0fc55a82 | [
"BSD-3-Clause"
] | 1 | 2020-04-24T05:34:06.000Z | 2020-04-24T05:34:06.000Z | # Python Module for import Date : 2017-05-15
# vim: set fileencoding=utf-8 ff=unix tw=78 ai syn=python : per Python PEP 0263
'''
_______________| yi_simulation.py : simulation module for financial economics.
- Essential probabilistic functions for simulations.
- Simulate Gaussian mixture model GM(2).
- Pre-compute pool of asset returns.
- SPX 1957-2014
- Normalize, but include fat tails, so that mean and volatility can be specified.
- Design bootstrap to study alternate histories and small-sample statistics.
- Visualize price paths.
CHANGE LOG For latest version, see https://github.com/rsvp/fecon235
2017-05-15 Rewrite simug_mix() in terms of prob(second Gaussian).
Let N generally be the count := sample size.
2017-05-06 Add uniform randou(). Add maybe() random indicator function.
Add Gaussian randog(), simug(), and simug_mix().
2015-12-20 python3 compatible: lib import fix.
2015-12-17 python3 compatible: fix with yi_0sys
2014-12-12 First version adapted from yi_fred.py
'''
from __future__ import absolute_import, print_function, division
import numpy as np
from . import yi_0sys as system
from .yi_1tools import todf, georet
from .yi_fred import readfile
from .yi_plot import plotn
# ACTUAL SPX mean and volatility from 1957-01-03 to 2014-12-11 in percent.
# N = 15116
MEAN_PC_SPX = 7.6306
STD_PC_SPX = 15.5742
N_PC_SPX = 15116
def randou( upper=1.0 ):
'''Single random float, not integer, from Uniform[0.0, upper).'''
# Closed lower bound of zero, and argument for open upper bound.
# To generate arrays, please use np.random.random().
return np.random.uniform(low=0.0, high=upper, size=None)
def maybe( p=0.50 ):
'''Uniformly random indicator function such that prob(I=1=True) = p.'''
# Nice to have for random "if" conditional branching.
# Fun note: Python's boolean True is actually mapped to int 1.
if randou() <= p:
return 1
else:
return 0
def randog( sigma=1.0 ):
'''Single random float from Gaussian N(0.0, sigma^2).'''
# Argument sigma is the standard deviation, NOT the variance!
# For non-zero mean, just add it to randog later.
# To generate arrays, please use simug().
return np.random.normal(loc=0.0, scale=sigma, size=None)
def simug( sigma, N=256 ):
'''Simulate array of shape (N,) from Gaussian Normal(0.0, sigma^2).'''
# Argument sigma is the standard deviation, NOT the variance!
arr = sigma * np.random.randn( N )
# For non-zero mean, simply add it later: mu + simug(sigma)
return arr
def simug_mix( sigma1, sigma2, q=0.10, N=256 ):
'''Simulate array from zero-mean Gaussian mixture GM(2).'''
# Mathematical details in nb/gauss-mix-kurtosis.ipynb
# Pre-populate an array of shape (N,) with the FIRST Gaussian,
# so that most work is done quickly and memory efficient...
arr = simug( sigma1, N )
# ... except for some random replacements:
for i in range(N):
# p = 1-q = probability drawing from FIRST Gaussian.
# So with probability q, replace an element of arr
# with a float from the SECOND Gaussian:
if maybe( q ):
arr[i] = randog( sigma2 )
return arr
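# For reference (standard moment algebra for this zero-mean GM(2); details in
# nb/gauss-mix-kurtosis.ipynb): with p = 1 - q,
#   variance = p*sigma1**2 + q*sigma2**2
#   kurtosis = 3*(p*sigma1**4 + q*sigma2**4) / (p*sigma1**2 + q*sigma2**2)**2 >= 3
# so the mixture has fatter tails than a single Gaussian whenever sigma1 != sigma2.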
#==============================================================================
def GET_simu_spx_pcent():
'''Retrieve normalized SPX daily percent change 1957-2014.'''
# NORMALIZED s.t. sample mean=0 and std=1%.
datafile = 'SIMU-mn0-sd1pc-d4spx_1957-2014.csv.gz'
try:
df = readfile( datafile, compress='gzip' )
# print(' :: Import success: ' + datafile)
except:
df = 0
print(' !! Failed to find: ' + datafile)
return df
def SHAPE_simu_spx_pcent( mean=MEAN_PC_SPX, std=STD_PC_SPX ):
'''Generate SPX percent change (defaults are ACTUAL annualized numbers).'''
# Thus the default arguments can replicate actual time series
# given initial value: 1957-01-02 46.20
# Volatility is std := standard deviation.
spxpc = GET_simu_spx_pcent()
mean_offset = mean / 256.0
# Assumed days in a year.
std_multiple = std / 16.0
# sqrt(256)
return (spxpc * std_multiple) + mean_offset
def SHAPE_simu_spx_returns( mean=MEAN_PC_SPX, std=STD_PC_SPX ):
'''Convert percent form to return form.'''
# So e.g. 2% gain is converted to 1.02.
spxpc = SHAPE_simu_spx_pcent( mean, std )
return 1 + (spxpc / 100.0)
def array_spx_returns( mean=MEAN_PC_SPX, std=STD_PC_SPX ):
'''Array of SPX in return form.'''
# Array far better than list because of numpy efficiency.
# But if needed, use .tolist()
spxret = SHAPE_simu_spx_returns( mean, std )
# Use array to conveniently bootstrap sample later.
# The date index will no longer matter.
return spxret['Y'].values
def bootstrap( N, yarray ):
'''Randomly pick out N without replacment from yarray.'''
# In repeated simulations, yarray should be pre-computed,
# using array_spx_returns( ... ).
# http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.choice.html
return np.random.choice( yarray, size=N, replace=False )
def simu_prices( N, yarray ):
'''Convert bootstrap returns to price time-series into pandas DATAFRAME.'''
# Initial price implicitly starts at 1.
# Realize that its history is just the products of the returns.
ret = bootstrap( N, yarray )
# Cumulative product of array elements:
# cumprod is very fast, and keeps interim results!
# http://docs.scipy.org/doc/numpy/reference/generated/numpy.cumprod.html
return todf( np.cumprod( ret ) )
def simu_plots_spx( charts=1, N=N_PC_SPX, mean=MEAN_PC_SPX, std=STD_PC_SPX ):
'''Display simulated SPX price charts of N days, given mean and std.'''
yarray = array_spx_returns( mean, std )
# Read in the data only once BEFORE the loop...
for i in range( charts ):
px = simu_prices( N, yarray )
plotn( px )
# Plot, then for the given prices, compute annualized:
# geometric mean, arithmetic mean, volatility.
print(' georet: ' + str( georet(px) ))
print(' ____________________________________')
print('')
return
if __name__ == "__main__":
system.endmodule()
| 37.971098 | 84 | 0.638301 | 913 | 6,569 | 4.447974 | 0.388828 | 0.014775 | 0.011081 | 0.012805 | 0.11869 | 0.079783 | 0.079783 | 0.079783 | 0.067964 | 0.044817 | 0 | 0.041233 | 0.254224 | 6,569 | 172 | 85 | 38.19186 | 0.787712 | 0.605419 | 0 | 0.032258 | 0 | 0 | 0.049597 | 0.029435 | 0 | 0 | 0 | 0 | 0 | 1 | 0.193548 | false | 0 | 0.096774 | 0 | 0.5 | 0.080645 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c45582a4af30cd409c9a8b68bcbdc3dd7335389 | 12,598 | py | Python | src/capsgnn.py | Anak2016/CapsGCN | c4549a997dcff80fcc1905074029ba348921c494 | [
"MIT"
] | 1 | 2020-04-14T00:20:32.000Z | 2020-04-14T00:20:32.000Z | src/capsgnn.py | Anak2016/CapsGCN | c4549a997dcff80fcc1905074029ba348921c494 | [
"MIT"
] | 1 | 2019-12-05T17:12:02.000Z | 2019-12-05T17:12:02.000Z | src/capsgnn.py | Anak2016/CapsGCN | c4549a997dcff80fcc1905074029ba348921c494 | [
"MIT"
] | null | null | null | import glob
import json
import torch
import random
import numpy as np
import pandas as pd
from tqdm import tqdm, trange
from torch_geometric.nn import GCNConv
from utils import create_numeric_mapping
from layers import ListModule, PrimaryCapsuleLayer, Attention, SecondaryCapsuleLayer, margin_loss
class CapsGNN(torch.nn.Module):
"""
An implementation of the model described in the following paper:
https://openreview.net/forum?id=Byl8BnRcYm
"""
def __init__(self, args, number_of_features, number_of_targets):
super(CapsGNN, self).__init__()
"""
:param args: Arguments object.
:param number_of_features: Number of vertex features.
:param number_of_targets: Number of classes.
"""
self.args = args
self.number_of_features = number_of_features
self.number_of_targets = number_of_targets
self._setup_layers()
def _setup_base_layers(self):
"""
Creating GCN layers.
"""
self.base_layers = [GCNConv(self.number_of_features, self.args.gcn_filters)]
for layer in range(self.args.gcn_layers-1):
# (N, # feature) -> (N, # features)
self.base_layers.append(GCNConv( self.args.gcn_filters, self.args.gcn_filters))
self.base_layers = ListModule(*self.base_layers)
def _setup_primary_capsules(self):
"""
Creating primary capsules.
"""
self.first_capsule = PrimaryCapsuleLayer(in_units = self.args.gcn_filters, in_channels = self.args.gcn_layers, num_units = self.args.gcn_layers, capsule_dimensions = self.args.capsule_dimensions)
def _setup_attention(self):
"""
Creating attention layer.
"""
self.attention = Attention(self.args.gcn_layers* self.args.capsule_dimensions, self.args.inner_attention_dimension)
def _setup_graph_capsules(self):
"""
Creating graph capsules.
"""
self.graph_capsule = SecondaryCapsuleLayer(self.args.gcn_layers, self.args.capsule_dimensions, self.args.number_of_capsules, self.args.capsule_dimensions)
def _setup_class_capsule(self):
"""
Creating class capsules.
"""
self.class_capsule = SecondaryCapsuleLayer(self.args.capsule_dimensions,self.args.number_of_capsules, self.number_of_targets, self.args.capsule_dimensions)
def _setup_reconstruction_layers(self):
"""
Creating histogram reconstruction layers.
"""
self.reconstruction_layer_1 = torch.nn.Linear(self.number_of_targets*self.args.capsule_dimensions, int((self.number_of_features * 2) / 3))
self.reconstruction_layer_2 = torch.nn.Linear(int((self.number_of_features * 2) / 3), int((self.number_of_features * 3) / 2))
self.reconstruction_layer_3 = torch.nn.Linear(int((self.number_of_features * 3) / 2), self.number_of_features)
def _setup_layers(self):
"""
Creating layers of model.
1. GCN layers.
2. Primary capsules.
3. Attention
4. Graph capsules.
5. Class capsules.
6. Reconstruction layers.
"""
self._setup_base_layers()
self._setup_primary_capsules()
self._setup_attention()
self._setup_graph_capsules()
self._setup_class_capsule()
self._setup_reconstruction_layers()
def calculate_reconstruction_loss(self, capsule_input, features):
"""
Calculating the reconstruction loss of the model.
:param capsule_input: Output of class capsule.
:param features: Feature matrix.
:return reconstrcution_loss: Loss of reconstruction.
"""
v_mag = torch.sqrt((capsule_input**2).sum(dim=1))
_, v_max_index = v_mag.max(dim=0)
v_max_index = v_max_index.data
capsule_masked = torch.autograd.Variable(torch.zeros(capsule_input.size()))
capsule_masked[v_max_index,:] = capsule_input[v_max_index,:]
capsule_masked = capsule_masked.view(1, -1)
feature_counts = features.sum(dim=0)
feature_counts = feature_counts/feature_counts.sum()
reconstruction_output = torch.nn.functional.relu(self.reconstruction_layer_1(capsule_masked))
reconstruction_output = torch.nn.functional.relu(self.reconstruction_layer_2(reconstruction_output))
reconstruction_output = torch.softmax(self.reconstruction_layer_3(reconstruction_output),dim=1)
reconstruction_output = reconstruction_output.view(1, self.number_of_features)
reconstruction_loss = torch.sum((features-reconstruction_output)**2)
return reconstruction_loss
def forward(self, data):
"""
Forward propagation pass.
:param data: Dictionary of tensors with features and edges.
:return class_capsule_output: Class capsule outputs.
"""
features = data["features"]
edges = data["edges"]
hidden_representations = []
for layer in self.base_layers:
features = torch.nn.functional.relu(layer(features, edges))
hidden_representations.append(features)
hidden_representations = torch.cat(tuple(hidden_representations))
hidden_representations = hidden_representations.view(1, self.args.gcn_layers, self.args.gcn_filters,-1)
first_capsule_output = self.first_capsule(hidden_representations)
first_capsule_output = first_capsule_output.view(-1,self.args.gcn_layers* self.args.capsule_dimensions)
rescaled_capsule_output = self.attention(first_capsule_output)
rescaled_first_capsule_output = rescaled_capsule_output.view(-1, self.args.gcn_layers, self.args.capsule_dimensions)
graph_capsule_output = self.graph_capsule(rescaled_first_capsule_output)
reshaped_graph_capsule_output = graph_capsule_output.view(-1, self.args.capsule_dimensions, self.args.number_of_capsules )
class_capsule_output = self.class_capsule(reshaped_graph_capsule_output)
class_capsule_output = class_capsule_output.view(-1, self.number_of_targets*self.args.capsule_dimensions )
class_capsule_output = torch.mean(class_capsule_output,dim=0).view(1,self.number_of_targets,self.args.capsule_dimensions)
reconstruction_loss = self.calculate_reconstruction_loss(class_capsule_output.view(self.number_of_targets,self.args.capsule_dimensions), data["features"])
return class_capsule_output, reconstruction_loss
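# Hedged usage sketch (commented out; the attribute values on `args` below are
# illustrative assumptions, not settings taken from the paper or this repository):
#
#     from types import SimpleNamespace
#     args = SimpleNamespace(gcn_layers=2, gcn_filters=20, capsule_dimensions=8,
#                            inner_attention_dimension=20, number_of_capsules=8)
#     model = CapsGNN(args, number_of_features=50, number_of_targets=2)
#     # `features` is an (N, number_of_features) FloatTensor, `edges` a (2, E) LongTensor.
#     class_capsules, rec_loss = model({"features": features, "edges": edges})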
class CapsGNNTrainer(object):
"""
CapsGNN training and scoring.
"""
def __init__(self,args):
"""
:param args: Arguments object.
"""
self.args = args
self.setup_model()
def enumerate_unique_labels_and_targets(self):
"""
Enumerating the features and targets in order to setup weights later.
"""
print("\nEnumerating feature and target values.\n")
ending = "*.json"
        self.train_graph_paths = glob.glob(self.args.train_graph_folder+ending)
self.test_graph_paths = glob.glob(self.args.test_graph_folder+ending)
graph_paths = self.train_graph_paths + self.test_graph_paths
targets = set()
features = set()
for path in tqdm(graph_paths):
data = json.load(open(path))
targets = targets.union(set([data["target"]]))
features = features.union(set(data["labels"]))
self.target_map = create_numeric_mapping(targets)
self.feature_map = create_numeric_mapping(features)
self.number_of_features = len(self.feature_map)
self.number_of_targets = len(self.target_map)
def setup_model(self):
"""
Enumerating labels and initializing a CapsGNN.
"""
self.enumerate_unique_labels_and_targets()
self.model = CapsGNN(self.args, self.number_of_features, self.number_of_targets)
def create_batches(self):
"""
Batching the graphs for training.
"""
self.batches = [self.train_graph_paths[i:i + self.args.batch_size] for i in range(0,len(self.train_graph_paths), self.args.batch_size)]
def create_data_dictionary(self, target, edges, features):
"""
Creating a data dictionary.
:param target: Target vector.
:param edges: Edge list tensor.
:param features: Feature tensor.
"""
to_pass_forward = dict()
to_pass_forward["target"] = target
to_pass_forward["edges"] = edges
to_pass_forward["features"] = features
return to_pass_forward
def create_target(self, data):
"""
        Target creation based on the data dictionary.
:param data: Data dictionary.
:return : Target vector.
"""
return torch.FloatTensor([0.0 if i != data["target"] else 1.0 for i in range(self.number_of_targets)])
def create_edges(self,data):
"""
Create an edge matrix.
:param data: Data dictionary.
:return : Edge matrix.
"""
return torch.t(torch.LongTensor(data["edges"]))
def create_features(self,data):
"""
Create feature matrix.
:param data: Data dictionary.
:return features: Matrix of features.
"""
features = np.zeros((len(data["labels"]), self.number_of_features))
node_indices = [node for node in range(len(data["labels"]))]
feature_indices = [self.feature_map[label] for label in data["labels"].values()]
features[node_indices,feature_indices] = 1.0
features = torch.FloatTensor(features)
return features
def create_input_data(self, path):
"""
Creating tensors and a data dictionary with Torch tensors.
:param path: path to the data JSON.
:return to_pass_forward: Data dictionary.
"""
data = json.load(open(path))
target = self.create_target(data)
edges = self.create_edges(data)
features = self.create_features(data)
to_pass_forward = self.create_data_dictionary(target, edges, features)
return to_pass_forward
def fit(self):
"""
Training a model on the training set.
"""
print("\nTraining started.\n")
self.model.train()
optimizer = torch.optim.Adam(self.model.parameters(), lr=self.args.learning_rate, weight_decay=self.args.weight_decay)
for epoch in tqdm(range(self.args.epochs), desc = "Epochs: ", leave = True):
random.shuffle(self.train_graph_paths)
self.create_batches()
losses = 0
self.steps = trange(len(self.batches), desc="Loss")
for step in self.steps:
accumulated_losses = 0
optimizer.zero_grad()
batch = self.batches[step]
for path in batch:
data = self.create_input_data(path)
prediction, reconstruction_loss = self.model(data)
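                    # Combined objective: capsule margin loss (margin weighting set by
                    # args.lambd) plus a reconstruction penalty scaled by args.theta.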
loss = margin_loss(prediction, data["target"], self.args.lambd)+self.args.theta*reconstruction_loss
accumulated_losses = accumulated_losses + loss
accumulated_losses = accumulated_losses/len(batch)
accumulated_losses.backward()
optimizer.step()
losses = losses + accumulated_losses.item()
average_loss = losses/(step + 1)
self.steps.set_description("CapsGNN (Loss=%g)" % round(average_loss,4))
def score(self):
"""
Scoring on the test set.
"""
print("\n\nScoring.\n")
self.model.eval()
self.predictions = []
self.hits = []
for path in tqdm(self.test_graph_paths):
data = self.create_input_data(path)
prediction, reconstruction_loss = self.model(data)
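            # The predicted class is the capsule with the largest L2 norm, since
            # capsule length encodes class probability in capsule networks.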
prediction_mag = torch.sqrt((prediction**2).sum(dim=2))
_, prediction_max_index = prediction_mag.max(dim=1)
prediction = prediction_max_index.data.view(-1).item()
self.predictions.append(prediction)
self.hits.append(data["target"][prediction]==1.0)
print("\nAccuracy: " + str(round(np.mean(self.hits),4)))
def save_predictions(self):
"""
Saving the test set predictions.
"""
        identifiers = [path.split("/")[-1].replace(".json", "") for path in self.test_graph_paths]
out = pd.DataFrame()
out["id"] = identifiers
out["predictions"] = self.predictions
out.to_csv(self.args.prediction_path, index = None)
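# Hedged end-to-end sketch (commented out; the fields expected on `args`, such as
# train_graph_folder, test_graph_folder, batch_size, epochs, learning_rate,
# weight_decay, lambd and theta, are inferred from their use above):
#
#     trainer = CapsGNNTrainer(args)
#     trainer.fit()
#     trainer.score()
#     trainer.save_predictions()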
| 41.169935 | 203 | 0.655421 | 1,486 | 12,598 | 5.305518 | 0.15747 | 0.044647 | 0.030441 | 0.041223 | 0.244292 | 0.180112 | 0.135211 | 0.122907 | 0.089295 | 0.066844 | 0 | 0.006091 | 0.244086 | 12,598 | 305 | 204 | 41.304918 | 0.8218 | 0.126687 | 0 | 0.059172 | 0 | 0 | 0.023281 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130178 | false | 0.04142 | 0.059172 | 0 | 0.242604 | 0.023669 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c46bfb0aa6f5f7935e902d15c7d11b53ac2aba8 | 2,813 | py | Python | networks/components/partial_conv.py | zhou3968322/dl-lab | f6f028df2bd3f68146b3285800938afe71eba442 | [
"MIT"
] | null | null | null | networks/components/partial_conv.py | zhou3968322/dl-lab | f6f028df2bd3f68146b3285800938afe71eba442 | [
"MIT"
] | null | null | null | networks/components/partial_conv.py | zhou3968322/dl-lab | f6f028df2bd3f68146b3285800938afe71eba442 | [
"MIT"
] | null | null | null | # -*- coding:utf-8 -*-
# email:bingchengzhou@foxmail.com
# create: 2021/1/12
"""
from https://github.com/naoto0804/pytorch-inpainting-with-partial-conv
"""
import torch
from torch import nn, cuda
import math
def weights_init(init_type='gaussian'):
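    # Builds an initialiser closure for nn.Module.apply(); it only touches
    # Conv*/Linear modules that expose a `weight` attribute.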
def init_fun(m):
classname = m.__class__.__name__
if (classname.find('Conv') == 0 or classname.find(
'Linear') == 0) and hasattr(m, 'weight'):
if init_type == 'gaussian':
nn.init.normal_(m.weight, 0.0, 0.02)
elif init_type == 'xavier':
nn.init.xavier_normal_(m.weight, gain=math.sqrt(2))
elif init_type == 'kaiming':
nn.init.kaiming_normal_(m.weight, a=0, mode='fan_in')
elif init_type == 'orthogonal':
nn.init.orthogonal_(m.weight, gain=math.sqrt(2))
elif init_type == 'default':
pass
else:
assert 0, "Unsupported initialization: {}".format(init_type)
if hasattr(m, 'bias') and m.bias is not None:
nn.init.constant_(m.bias, 0.0)
return init_fun
class PartialConv(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, stride=1,
padding=0, dilation=1, groups=1, bias=True):
super().__init__()
self.input_conv = nn.Conv2d(in_channels, out_channels, kernel_size,
stride, padding, dilation, groups, bias)
self.mask_conv = nn.Conv2d(in_channels, out_channels, kernel_size,
stride, padding, dilation, groups, False)
self.input_conv.apply(weights_init('kaiming'))
torch.nn.init.constant_(self.mask_conv.weight, 1.0)
# mask is not updated
for param in self.mask_conv.parameters():
param.requires_grad = False
def forward(self, input, mask):
# http://masc.cs.gmu.edu/wiki/partialconv
# C(X) = W^T * X + b, C(0) = b, D(M) = 1 * M + 0 = sum(M)
        # W^T* (M .* X) / sum(M) + b = [C(M .* X) - C(0)] / D(M) + C(0)
output = self.input_conv(input * mask)
if self.input_conv.bias is not None:
output_bias = self.input_conv.bias.view(1, -1, 1, 1).expand_as(
output)
else:
output_bias = torch.zeros_like(output)
with torch.no_grad():
output_mask = self.mask_conv(mask)
no_update_holes = output_mask == 0
mask_sum = output_mask.masked_fill_(no_update_holes, 1.0)
output_pre = (output - output_bias) / mask_sum + output_bias
output = output_pre.masked_fill_(no_update_holes, 0.0)
new_mask = torch.ones_like(output)
new_mask = new_mask.masked_fill_(no_update_holes, 0.0)
return output, new_mask
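# Hedged usage sketch (the channel counts and image size are illustrative
# assumptions, not values prescribed by the original repository):
if __name__ == "__main__":
    pconv = PartialConv(3, 64, kernel_size=7, stride=2, padding=3)
    image = torch.randn(1, 3, 256, 256)
    mask = torch.ones(1, 3, 256, 256)  # 1 = valid pixel, 0 = hole
    out, new_mask = pconv(image, mask)
    print(out.shape, new_mask.shape)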
| 37.013158 | 76 | 0.581585 | 382 | 2,813 | 4.054974 | 0.319372 | 0.036152 | 0.041963 | 0.040671 | 0.207876 | 0.207876 | 0.187863 | 0.131698 | 0.131698 | 0.090381 | 0 | 0.024574 | 0.291148 | 2,813 | 75 | 77 | 37.506667 | 0.751755 | 0.113758 | 0 | 0.039216 | 0 | 0 | 0.043969 | 0 | 0 | 0 | 0 | 0 | 0.019608 | 1 | 0.078431 | false | 0.019608 | 0.058824 | 0 | 0.196078 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c4bb4f395ddd51ef9e3b27f20dd5528ae3e426f | 1,081 | py | Python | lab/memory.py | ramalho/leanstr | 3070ac0adc99ec1e7e9df8b490e88a613b690d19 | [
"BSD-2-Clause"
] | 23 | 2020-06-20T10:30:20.000Z | 2020-07-01T17:09:54.000Z | lab/memory.py | ramalho/leanstr | 3070ac0adc99ec1e7e9df8b490e88a613b690d19 | [
"BSD-2-Clause"
] | 1 | 2020-06-25T06:30:28.000Z | 2020-06-25T06:30:28.000Z | lab/memory.py | ramalho/leanstr | 3070ac0adc99ec1e7e9df8b490e88a613b690d19 | [
"BSD-2-Clause"
] | 1 | 2020-11-10T20:01:49.000Z | 2020-11-10T20:01:49.000Z | import sys
import time
sys.path.insert(0, '../')
from leanstr import LeanStr
ascii_text = 'War_and_Peace-ASCII.txt'
same_with_ant = 'War_and_Peace-ASCII-with-ant.txt'
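# The bare @profile decorators below assume this script is run under
# memory_profiler / line_profiler (e.g. `mprof run memory.py` or
# `kernprof -l memory.py`), which inject `profile` into builtins.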
@profile
def load_ascii():
with open(ascii_text) as fp:
py_str = fp.read()
with open(ascii_text, 'rb') as fp:
my_str = LeanStr(data=fp.read())
return py_str, my_str
@profile
def load_with_ant():
with open(same_with_ant) as fp:
py_str = fp.read()
with open(same_with_ant, 'rb') as fp:
my_str = LeanStr(data=fp.read())
return py_str, my_str
def clock(label, py_str, my_str):
print(label)
t0 = time.perf_counter()
n = len(py_str)
print('len(py_str) =', n)
t1 = time.perf_counter()
print(f' dt = {t1 - t0:0.5f}s')
t0 = time.perf_counter()
n = len(my_str)
print('len(my_str) =', n)
t1 = time.perf_counter()
print(f' dt = {t1 - t0:0.5f}s')
print()
if __name__ == '__main__':
a, b = load_ascii()
clock('ASCII', a, b)
a, b = load_with_ant()
clock('ASCII with ant', a, b)
| 22.520833 | 52 | 0.597595 | 178 | 1,081 | 3.376404 | 0.269663 | 0.081531 | 0.099834 | 0.049917 | 0.46589 | 0.415973 | 0.34609 | 0.34609 | 0.269551 | 0.269551 | 0 | 0.016049 | 0.250694 | 1,081 | 47 | 53 | 23 | 0.725926 | 0 | 0 | 0.368421 | 0 | 0 | 0.160037 | 0.050879 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078947 | false | 0 | 0.078947 | 0 | 0.210526 | 0.157895 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c4c44a88055eda645abf2ce1351b9fc820533e7 | 4,888 | py | Python | ml_params/datasets.py | SamuelMarks/ml-params | fe7a98826699f35f619bdc70698b6e7e59950903 | [
"Apache-2.0",
"MIT"
] | 2 | 2020-09-08T08:30:38.000Z | 2020-10-14T10:35:00.000Z | ml_params/datasets.py | SamuelMarks/ml-params | fe7a98826699f35f619bdc70698b6e7e59950903 | [
"Apache-2.0",
"MIT"
] | null | null | null | ml_params/datasets.py | SamuelMarks/ml-params | fe7a98826699f35f619bdc70698b6e7e59950903 | [
"Apache-2.0",
"MIT"
] | null | null | null | """
Implementation of some common datasets, expected to be called from projects which import ml-params.
"""
from functools import partial
from os import environ, path
try:
from ml_prepare.datasets import datasets2classes
except ImportError:
datasets2classes = {}
from ml_params.tf_utils import get_from_tensorflow_datasets
from ml_params.utils import common_dataset_handler
def load_data_from_ml_prepare(
dataset_name,
tfds_dir=environ.get(
"TFDS_DATA_DIR", path.join(path.expanduser("~"), "tensorflow_datasets")
),
generate_dir=None,
retrieve_dir=None,
K=None,
as_numpy=False,
scale=None,
):
"""
Acquire from the official tensorflow_datasets model zoo, or the ophthalmology focussed ml-prepare library
:param dataset_name: name of dataset
:type dataset_name: ```str```
:param tfds_dir: directory to look for models in. Defaults to ~/tensorflow_datasets
:type tfds_dir: ```Optional[str]```
:param generate_dir:
:type generate_dir: ```Optional[str]```
:param retrieve_dir:
:type retrieve_dir: ```Optional[str]```
:param K: backend engine, e.g., `np` or `tf`
:type K: ```Literal['np', 'tf']```
:param as_numpy: Convert to numpy ndarrays
:type as_numpy: ```bool```
:param scale: scale (height, width)
:type scale: ```Tuple[int, int]```
    :return: Train and test dataset splits
:rtype: ```Union[Tuple[tf.data.Dataset,tf.data.Dataset,tfds.core.DatasetInfo], Tuple[np.ndarray,np.ndarray,Any]]```
"""
import tensorflow_datasets.public_api as tfds
from ml_prepare.executors import build_tfds_dataset
assert dataset_name in datasets2classes
ds_builder = build_tfds_dataset(
dataset_name=dataset_name,
tfds_dir=tfds_dir,
generate_dir=generate_dir,
retrieve_dir=retrieve_dir,
**({} if scale is None else {"image_height": scale[0], "image_width": scale[1]})
)
if hasattr(ds_builder, "download_and_prepare_kwargs"):
download_and_prepare_kwargs = getattr(ds_builder, "download_and_prepare_kwargs")
delattr(ds_builder, "download_and_prepare_kwargs")
else:
# Reasonable defaults
download_and_prepare_kwargs = dict(
download_config=tfds.download.DownloadConfig(
extract_dir=tfds_dir,
download_mode=tfds.core.dataset_builder.REUSE_DATASET_IF_EXISTS,
manual_dir=path.join(tfds_dir, "downloads", dataset_name),
),
download_dir=tfds_dir,
)
return common_dataset_handler(
ds_builder=ds_builder,
scale=None, # Keep this as None, the processing is done above
K=K,
as_numpy=as_numpy,
**download_and_prepare_kwargs
)
def load_data_from_tfds_or_ml_prepare___ml_params(
dataset_name,
tfds_dir=environ.get(
"TFDS_DATA_DIR", path.join(path.expanduser("~"), "tensorflow_datasets")
),
K=None,
as_numpy=False,
acquire_and_concat_validation_to_train=True,
**data_loader_kwargs
):
"""
Acquire from the official tensorflow_datasets model zoo, or the ophthalmology focussed ml-prepare library
:param dataset_name: name of dataset
:type dataset_name: ```str```
:param tfds_dir: directory to look for models in.
:type tfds_dir: ```Optional[str]```
:param K: backend engine, e.g., `np` or `tf`
:type K: ```Literal['np', 'tf']```
:param as_numpy: Convert to numpy ndarrays
:type as_numpy: ```bool```
    :param acquire_and_concat_validation_to_train: Whether to acquire the validation split
     and then concatenate it to train
    :type acquire_and_concat_validation_to_train: ```bool```
:param data_loader_kwargs: pass this as arguments to data_loader function
:type data_loader_kwargs: ```**data_loader_kwargs```
    :return: Train and test dataset splits
:rtype: ```Union[Tuple[tf.data.Dataset, tf.data.Dataset], Tuple[np.ndarray, np.ndarray]]```
"""
from ml_prepare.executors import build_tfds_dataset
ds_builder = (
partial(build_tfds_dataset, tfds_dir=tfds_dir)
if dataset_name in datasets2classes
else partial(get_from_tensorflow_datasets, data_dir=tfds_dir)
)(
dataset_name=dataset_name,
**{
k: v
for k, v in data_loader_kwargs.items()
if v is not None and k != "tfds_dir"
}
)
if hasattr(ds_builder, "download_and_prepare_kwargs"):
download_and_prepare_kwargs = getattr(ds_builder, "download_and_prepare_kwargs")
delattr(ds_builder, "download_and_prepare_kwargs")
else:
download_and_prepare_kwargs = {}
return common_dataset_handler(
ds_builder=ds_builder,
scale=None,
K=K,
as_numpy=as_numpy,
acquire_and_concat_validation_to_train=acquire_and_concat_validation_to_train,
**download_and_prepare_kwargs
)
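# Hedged usage sketch (commented out; "mnist" is an illustrative tensorflow_datasets
# name rather than a default of this module, and the unpacking assumes the
# two-element return documented above):
#
#     train_split, test_split = load_data_from_tfds_or_ml_prepare___ml_params(
#         "mnist", as_numpy=True)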
| 31.535484 | 119 | 0.682692 | 634 | 4,888 | 4.963722 | 0.217666 | 0.033365 | 0.068637 | 0.091516 | 0.523991 | 0.49857 | 0.435335 | 0.435335 | 0.407372 | 0.407372 | 0 | 0.001571 | 0.218699 | 4,888 | 154 | 120 | 31.74026 | 0.822467 | 0.361088 | 0 | 0.47619 | 0 | 0 | 0.090418 | 0.054656 | 0 | 0 | 0 | 0 | 0.011905 | 1 | 0.02381 | false | 0 | 0.107143 | 0 | 0.154762 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c5307353bb28538ac5bdfcebd6ddaa70510b35f | 1,957 | py | Python | vanderplas/my-practice/matplotlib/graph-hist.py | CaiqueCoelho/deep-learning | c39a8f30a5219abed7a07d5db1cb7450a3ecde93 | [
"Apache-2.0"
] | 7 | 2020-09-20T02:50:24.000Z | 2021-06-30T03:25:46.000Z | vanderplas/my-practice/matplotlib/graph-hist.py | CaiqueCoelho/deep-learning | c39a8f30a5219abed7a07d5db1cb7450a3ecde93 | [
"Apache-2.0"
] | 8 | 2020-08-07T21:22:52.000Z | 2022-03-31T05:27:51.000Z | vanderplas/my-practice/matplotlib/graph-hist.py | JennEYoon/deep-learning-y20 | e019bc67134beb970f82be05a2e12b67dca2684f | [
"Apache-2.0"
] | 2 | 2020-11-13T03:01:32.000Z | 2021-10-07T01:26:35.000Z | # graph-hist.py
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
plt.style.use('seaborn-whitegrid')
data = np.random.randn(1000)
plt.hist(data)
plt.title("fig-histogram 1")
plt.savefig("fig-hist1.png")
plt.show()
# 3-histograms.
x1 = np.random.normal(0, 0.8, 100) # (loc=mean, scale=std, size=samples drawn)
x2 = np.random.normal(-2, 1, 100)
x3 = np.random.normal(3, 2, 100)
kwargs = dict(histtype='stepfilled', alpha=0.3, density=True, bins=40)  # 'normed' was removed in newer Matplotlib; 'density' replaces it
plt.hist(x1, **kwargs)
plt.hist(x2, **kwargs)
plt.hist(x3, **kwargs)
plt.title("Fig - hist12")
plt.savefig("fig-hist2.png")
plt.show()
############ normal distribution histogram ########
mu, sigma = 0, 0.3
s = np.random.normal(mu, sigma, 1000)
count, bins, ignored = plt.hist(s, 30, density=True)
plt.suptitle("Fig - normal distribution")
plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
linewidth=2, color='r')
plt.savefig("fig-hist2.png")
plt.show()
#### 2D histogram binnings ##########
mean = [0, 0]
cov = [[1, 1], [1, 2]]
x, y = np.random.multivariate_normal(mean, cov, 1000000).T
plt.hist2d(x, y, bins=30, cmap='Blues')
cb = plt.colorbar()
cb.set_label('counts in bin')
plt.savefig("fig-2dhist1.png")
plt.show()
# hexbin is like hist2d but with hexagonal bins; gridsize sets the number of hexagons along the x-axis.
plt.hexbin(x, y, gridsize=30, cmap="Reds")
cb = plt.colorbar(label='count in bin')
plt.savefig("fig-2dhist.png")
plt.show()
####### sns histogram grid ##########
import seaborn as sns
tips = sns.load_dataset("tips")
tips.head()
tips['tip_pct'] = 100 * tips['tip'] / tips['total_bill']
grid = sns.FacetGrid(tips, row='sex', col='time', margin_titles=True)
grid.map(plt.hist, 'tip_pct', bins=np.linspace(0, 40, 15))
plt.suptitle("Figure histogram-grid tips")
plt.subplots_adjust(top = 0.85, bottom=0.1, right=0.9, hspace=0.2, wspace=0.2)
plt.savefig("fig-hist-tips.png")
plt.show()
| 27.180556 | 80 | 0.63209 | 315 | 1,957 | 3.901587 | 0.412698 | 0.039056 | 0.063466 | 0.029292 | 0.074858 | 0.045566 | 0.045566 | 0 | 0 | 0 | 0 | 0.056638 | 0.160961 | 1,957 | 71 | 81 | 27.56338 | 0.691839 | 0.08789 | 0 | 0.16 | 0 | 0 | 0.158721 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.08 | 0 | 0.08 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c533947ac5252cc082ecd55e08977774336c638 | 12,295 | py | Python | Tools/SeeDot/seedot/main.py | krantikiran/EdgeML | e5c7bd7c56884ca61f6d54cedb0074553cfdc896 | [
"MIT"
] | 1 | 2020-03-26T17:19:54.000Z | 2020-03-26T17:19:54.000Z | Tools/SeeDot/seedot/main.py | krantikiran/EdgeML | e5c7bd7c56884ca61f6d54cedb0074553cfdc896 | [
"MIT"
] | 2 | 2020-03-26T02:59:12.000Z | 2020-04-23T19:09:00.000Z | Tools/SeeDot/seedot/main.py | krantikiran/EdgeML | e5c7bd7c56884ca61f6d54cedb0074553cfdc896 | [
"MIT"
] | 3 | 2020-03-25T18:45:39.000Z | 2020-12-17T19:09:54.000Z | # Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT license.
import argparse
import datetime
from distutils.dir_util import copy_tree
import os
import shutil
import sys
import operator
import tempfile
import traceback
from seedot.compiler.converter.converter import Converter
import seedot.config as config
from seedot.compiler.compiler import Compiler
from seedot.predictor import Predictor
import seedot.util as Util
class Main:
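    # Orchestrates the SeeDot pipeline: data conversion, fixed/float code
    # generation, prediction, and the search over fixed-point scaling factors.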
def __init__(self, algo, version, target, trainingFile, testingFile, modelDir, sf):
self.algo, self.version, self.target = algo, version, target
self.trainingFile, self.testingFile, self.modelDir = trainingFile, testingFile, modelDir
self.sf = sf
self.accuracy = {}
def setup(self):
curr_dir = os.path.dirname(os.path.realpath(__file__))
copy_tree(os.path.join(curr_dir, "Predictor"), os.path.join(config.tempdir, "Predictor"))
for fileName in ["arduino.ino", "config.h", "predict.h"]:
srcFile = os.path.join(curr_dir, "arduino", fileName)
destFile = os.path.join(config.outdir, fileName)
shutil.copyfile(srcFile, destFile)
# Generate the fixed-point code using the input generated from the
# Converter project
def compile(self, version, target, sf):
print("Generating code...", end='')
# Set input and output files
inputFile = os.path.join(self.modelDir, "input.sd")
profileLogFile = os.path.join(
config.tempdir, "Predictor", "output", "float", "profile.txt")
logDir = os.path.join(config.outdir, "output")
os.makedirs(logDir, exist_ok=True)
if version == config.Version.floatt:
outputLogFile = os.path.join(logDir, "log-float.txt")
else:
outputLogFile = os.path.join(
logDir, "log-fixed-" + str(abs(sf)) + ".txt")
if target == config.Target.arduino:
outputDir = os.path.join(config.outdir, "arduino")
elif target == config.Target.x86:
outputDir = os.path.join(config.tempdir, "Predictor")
try:
obj = Compiler(self.algo, version, target, inputFile, outputDir,
profileLogFile, sf, outputLogFile)
obj.run()
except:
print("failed!\n")
#traceback.print_exc()
return False
self.scaleForX = obj.scaleForX
print("completed")
return True
# Run the converter project to generate the input files using reading the
# training model
def convert(self, version, datasetType, target):
print("Generating input files for %s %s dataset..." %
(version, datasetType), end='')
# Create output dirs
if target == config.Target.arduino:
outputDir = os.path.join(config.outdir, "input")
datasetOutputDir = outputDir
elif target == config.Target.x86:
outputDir = os.path.join(config.tempdir, "Predictor")
datasetOutputDir = os.path.join(config.tempdir, "Predictor", "input")
else:
assert False
os.makedirs(datasetOutputDir, exist_ok=True)
os.makedirs(outputDir, exist_ok=True)
inputFile = os.path.join(self.modelDir, "input.sd")
try:
obj = Converter(self.algo, version, datasetType, target,
datasetOutputDir, outputDir)
obj.setInput(inputFile, self.modelDir,
self.trainingFile, self.testingFile)
obj.run()
except Exception as e:
traceback.print_exc()
return False
print("done\n")
return True
# Build and run the Predictor project
def predict(self, version, datasetType):
outputDir = os.path.join("output", version)
curDir = os.getcwd()
os.chdir(os.path.join(config.tempdir, "Predictor"))
obj = Predictor(self.algo, version, datasetType,
outputDir, self.scaleForX)
acc = obj.run()
os.chdir(curDir)
return acc
# Compile and run the generated code once for a given scaling factor
def runOnce(self, version, datasetType, target, sf):
res = self.compile(version, target, sf)
if res == False:
return False, False
acc = self.predict(version, datasetType)
if acc == None:
return False, True
self.accuracy[sf] = acc
print("Accuracy is %.3f%%\n" % (acc))
return True, False
# Iterate over multiple scaling factors and store their accuracies
def performSearch(self):
start, end = config.maxScaleRange
searching = False
for i in range(start, end, -1):
print("Testing with max scale factor of " + str(i))
res, exit = self.runOnce(
config.Version.fixed, config.DatasetType.training, config.Target.x86, i)
if exit == True:
return False
# The iterator logic is as follows:
# Search begins when the first valid scaling factor is found (runOnce returns True)
# Search ends when the execution fails on a particular scaling factor (runOnce returns False)
# This is the window where valid scaling factors exist and we
# select the one with the best accuracy
if res == True:
searching = True
elif searching == True:
# break
pass
# If search didn't begin at all, something went wrong
if searching == False:
return False
print("\nSearch completed\n")
print("----------------------------------------------")
print("Best performing scaling factors with accuracy:")
self.sf = self.getBestScale()
return True
# Reverse sort the accuracies, print the top 5 accuracies and return the
# best scaling factor
def getBestScale(self):
sorted_accuracy = dict(
sorted(self.accuracy.items(), key=operator.itemgetter(1), reverse=True)[:5])
print(sorted_accuracy)
return next(iter(sorted_accuracy))
# Find the scaling factor which works best on the training dataset and
# predict on the testing dataset
def findBestScalingFactor(self):
print("-------------------------------------------------")
print("Performing search to find the best scaling factor")
print("-------------------------------------------------\n")
# Generate input files for training dataset
res = self.convert(config.Version.fixed,
config.DatasetType.training, config.Target.x86)
if res == False:
return False
# Search for the best scaling factor
res = self.performSearch()
if res == False:
return False
print("Best scaling factor = %d" % (self.sf))
return True
def runOnTestingDataset(self):
print("\n-------------------------------")
print("Prediction on testing dataset")
print("-------------------------------\n")
print("Setting max scaling factor to %d\n" % (self.sf))
# Generate files for the testing dataset
res = self.convert(config.Version.fixed,
config.DatasetType.testing, config.Target.x86)
if res == False:
return False
# Compile and run code using the best scaling factor
res = self.runOnce(
config.Version.fixed, config.DatasetType.testing, config.Target.x86, self.sf)
if res == False:
return False
return True
# Generate files for training dataset and perform a profiled execution
def collectProfileData(self):
print("-----------------------")
print("Collecting profile data")
print("-----------------------")
res = self.convert(config.Version.floatt,
config.DatasetType.training, config.Target.x86)
if res == False:
return False
res = self.compile(config.Version.floatt, config.Target.x86, self.sf)
if res == False:
return False
acc = self.predict(config.Version.floatt, config.DatasetType.training)
if acc == None:
return False
print("Accuracy is %.3f%%\n" % (acc))
# Generate code for Arduino
def compileFixedForTarget(self):
print("------------------------------")
print("Generating code for %s..." % (self.target))
print("------------------------------\n")
res = self.convert(config.Version.fixed,
config.DatasetType.testing, self.target)
if res == False:
return False
# Copy file
srcFile = os.path.join(config.outdir, "input", "model_fixed.h")
destFile = os.path.join(config.outdir, "model.h")
shutil.copyfile(srcFile, destFile)
# Copy library.h file
curr_dir = os.path.dirname(os.path.realpath(__file__))
srcFile = os.path.join(curr_dir, self.target, "library", "library_fixed.h")
destFile = os.path.join(config.outdir, "library.h")
shutil.copyfile(srcFile, destFile)
res = self.compile(config.Version.fixed, self.target, self.sf)
if res == False:
return False
return True
def runForFixed(self):
# Collect runtime profile
res = self.collectProfileData()
if res == False:
return False
# Obtain best scaling factor
if self.sf == None:
res = self.findBestScalingFactor()
if res == False:
return False
res = self.runOnTestingDataset()
if res == False:
return False
else:
self.testingAccuracy = self.accuracy[self.sf]
# Generate code for target
if self.target == config.Target.arduino:
self.compileFixedForTarget()
print("\nArduino sketch dumped in the folder %s\n" % (config.outdir))
return True
def compileFloatForTarget(self):
print("------------------------------")
print("Generating code for %s..." % (self.target))
print("------------------------------\n")
res = self.convert(config.Version.floatt,
config.DatasetType.testing, self.target)
if res == False:
return False
res = self.compile(config.Version.floatt, self.target, self.sf)
if res == False:
return False
# Copy model.h
srcFile = os.path.join(config.outdir, "Streamer", "input", "model_float.h")
destFile = os.path.join(config.outdir, self.target, "model.h")
shutil.copyfile(srcFile, destFile)
# Copy library.h file
srcFile = os.path.join(config.outdir, self.target, "library", "library_float.h")
destFile = os.path.join(config.outdir, self.target, "library.h")
shutil.copyfile(srcFile, destFile)
return True
def runForFloat(self):
print("---------------------------")
print("Executing for X86 target...")
print("---------------------------\n")
res = self.convert(config.Version.floatt,
config.DatasetType.testing, config.Target.x86)
if res == False:
return False
res = self.compile(config.Version.floatt, config.Target.x86, self.sf)
if res == False:
return False
acc = self.predict(config.Version.floatt, config.DatasetType.testing)
if acc == None:
return False
else:
self.testingAccuracy = acc
print("Accuracy is %.3f%%\n" % (acc))
if self.target == config.Target.arduino:
self.compileFloatForTarget()
print("\nArduino sketch dumped in the folder %s\n" % (config.outdir))
return True
def run(self):
sys.setrecursionlimit(10000)
self.setup()
if self.version == config.Version.fixed:
return self.runForFixed()
else:
return self.runForFloat()
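    # Hedged flow summary: run() -> setup() copies the Predictor/Arduino templates,
    # then either runForFixed() (profile on training data, search the scaling factor,
    # re-run on the test set, optionally emit an Arduino sketch) or runForFloat()
    # (a single float build and prediction).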
| 33.319783 | 105 | 0.572265 | 1,325 | 12,295 | 5.286038 | 0.180377 | 0.024843 | 0.035694 | 0.038835 | 0.431182 | 0.389349 | 0.318961 | 0.289263 | 0.256853 | 0.206453 | 0 | 0.003931 | 0.296462 | 12,295 | 368 | 106 | 33.410326 | 0.80578 | 0.117934 | 0 | 0.414634 | 0 | 0 | 0.121577 | 0.040526 | 0 | 0 | 0 | 0 | 0.004065 | 1 | 0.065041 | false | 0.004065 | 0.056911 | 0 | 0.276423 | 0.146341 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c540b8312dd0e0b9c2b7f2993ff5ba81b1163bb | 2,286 | py | Python | select-key-frames.py | eyeNsky/qgis-scripts | 050e5c622b10211cf89c028b44990604992c99d6 | [
"MIT"
] | null | null | null | select-key-frames.py | eyeNsky/qgis-scripts | 050e5c622b10211cf89c028b44990604992c99d6 | [
"MIT"
] | null | null | null | select-key-frames.py | eyeNsky/qgis-scripts | 050e5c622b10211cf89c028b44990604992c99d6 | [
"MIT"
] | null | null | null | ##[TBT-Tools]=group
##Input_Footprints=vector
##Image_IDs=field Input_Footprints
##Overlap_Threshold_0_to_1=number 0.6
from qgis.utils import *
from osgeo import ogr
from osgeo import osr
KEEP_THRESHOLD=Overlap_Threshold_0_to_1
IMAGE_IDS = Image_IDs
def calcIntersection(fpA,fpB):
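    # Returns (True, fraction of fpB's area covered by the intersection) when the
    # footprints overlap, otherwise (False, 0); Buffer(0) repairs invalid geometries.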
fpA = fpA.Buffer(0)
if not fpA.Intersect(fpB):
return False, 0
if fpA.Intersect(fpB):
areaOfIntersection = fpA.Intersection(fpB).GetArea()
percentOfIntersection = areaOfIntersection/(fpB.GetArea())
return True,percentOfIntersection
def getFPs(fpIn,IMAGE_IDS):
    '''SQLite of footprints as input; selects key frames based on the overlap threshold'''
fp = ogr.Open(fpIn,0)
progress.setText(fpIn)
fpLayer = fp.GetLayer(0) #assumes the footprints are the first layer
newGeom = ogr.Geometry(type=ogr.wkbGeometryCollection)
numFps = fpLayer.GetFeatureCount()
IMAGE_IDS = IMAGE_IDS.encode('utf-8') # str is imported from future, sets type to newstr. ogr does not recognize
currFp = fpLayer.GetFeature(1) # get first geom to populate the keepers poly
currFpGeom = currFp.geometry()
keepGeom = ogr.Geometry(type=ogr.wkbGeometryCollection) # create a geom to hold keepers
keepGeom.AddGeometry(currFpGeom) # add first fp
keepList = [1,numFps] # list to hold the keepers ids, with first and last frame.
keepers = 0
for i in range(2,numFps):
currFp = fpLayer.GetFeature(i)
nextFp = fpLayer.GetFeature(i+1)
thisImage = currFp.GetField(IMAGE_IDS)
nextImage = nextFp.GetField(IMAGE_IDS)
thisTime = int(thisImage[-8:])
nextTime = int(nextImage[-8:])
absDiff = abs(nextTime-thisTime)
if absDiff > 30:
keepList.append(i)
continue
a = keepGeom
b = currFp.geometry()
doesIntersect, percentIntersect = calcIntersection(a,b)
        if percentIntersect < KEEP_THRESHOLD:
keepGeom.AddGeometry(b)
keepers += 1
keepList.append(i)
keepTxt = 'keeping %s of of %s images' %(keepers,numFps)
progress.setInfo(keepTxt)
time.sleep(1)
fpLayer = processing.getObject(fpIn)
fpLayer.select(keepList)
return keepList
fpIn = Input_Footprints
fps = getFPs(fpIn,IMAGE_IDS)
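# Note: as a QGIS Processing script, Input_Footprints, Image_IDs and
# Overlap_Threshold_0_to_1 come from the ##-parameter header at the top, and the
# selected key frames are highlighted in QGIS via fpLayer.select(keepList).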
| 36.285714 | 116 | 0.685914 | 286 | 2,286 | 5.405594 | 0.454545 | 0.046572 | 0.021992 | 0.02458 | 0.076326 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012339 | 0.220035 | 2,286 | 62 | 117 | 36.870968 | 0.854739 | 0.194226 | 0 | 0.038462 | 0 | 0 | 0.017005 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0 | 0.057692 | 0 | 0.153846 | 0.019231 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c54c408d318fc21f77f2e5b5bb834831d378158 | 2,602 | py | Python | src/app/views/shop/catalog.py | samara-hackathon-it-2022/web-merchandise-shop | 9e620dda44227d1788ca2afdcff28a3555232721 | [
"MIT"
] | 5 | 2022-02-08T05:52:21.000Z | 2022-02-23T17:06:06.000Z | src/app/views/shop/catalog.py | samara-hackathon-it-2022/web-merchandise-shop | 9e620dda44227d1788ca2afdcff28a3555232721 | [
"MIT"
] | 13 | 2022-02-09T07:18:20.000Z | 2022-03-03T08:29:43.000Z | src/app/views/shop/catalog.py | samara-hackathon-it-2022/web-merchandise-shop | 9e620dda44227d1788ca2afdcff28a3555232721 | [
"MIT"
] | 1 | 2022-02-23T17:00:26.000Z | 2022-02-23T17:00:26.000Z | #!usr/bin/python
"""
Merchandise shop application catalog views.
"""
from flask import Blueprint, render_template, request, redirect, url_for, Response
from flask_login import current_user
from ...models.item.item import Item
from ...models.category import Category
from ... import db
bp_catalog = Blueprint("catalog", __name__)
@bp_catalog.route("/catalog", methods=["GET"])
def index() -> str:
"""Catalog index page. Displays catalog with list of items. """
limit = request.args.get("l", type=int, default=9)
offset = request.args.get("o", type=int, default=0)
query = request.args.get("q", type=str, default="")
category_id = request.args.get("cid", type=int, default=0)
items, count = Item.search(query, category_id, limit, offset)
category = Category.get_by_id(category_id)
return render_template("catalog/index.jinja",
items=items, count=count,
category=category, query=query,
user=current_user)
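# Illustrative request for the view above (the query values are assumptions):
# GET /catalog?q=hoodie&cid=2&l=9&o=0 -> first 9 items of category 2 matching "hoodie".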
@bp_catalog.route("/categories", methods=["GET"])
def categories() -> str:
"""Categories view page. Displays list of all categories."""
limit = request.args.get("l", type=int, default=30)
offset = request.args.get("o", type=int, default=0)
items, count = Category.get_paginated(limit, offset)
return render_template("catalog/categories.jinja",
categories=items, count=count,
user=current_user)
@bp_catalog.route("/catalog/debug", methods=["GET"])
def debug() -> Response:
"""
Debug view, should be removed later.
Fills catalog with random debug information / items.
:return:
"""
from random import randrange, choice
from ...models.discount import Discount
n = 30
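    # Russian merchandise prefixes: mug, t-shirt, shirt, hoodie, notebook, pen, gift set.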
_random_names = [
"Кружка ", "Футболка ",
"Рубашка ", "Худи ",
"Блокнот ", "Ручка ",
"Подарочный набор "
]
for _ in range(n):
db.session.add(Category(choice(_random_names)))
db.session.commit()
for _ in range(n):
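        # Code points 1072-1102 are lowercase Cyrillic letters, so titles and
        # descriptions get random Russian-looking suffixes.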
title = choice(_random_names) + "".join([chr(randrange(1072, 1103, 1)) for _ in range(5)])
description = "".join([chr(randrange(1072, 1103, 1)) for _ in range(100)])
item = Item(title, description, "{}", randrange(1, 9999, 1), randrange(1, 9999, 1), randrange(1, 30))
db.session.add(item)
db.session.commit()
if choice([True, False]):
db.session.add(Discount(randrange(5, 95, 1), item.id))
db.session.commit()
return redirect(url_for("catalog.index"))
| 31.349398 | 109 | 0.624135 | 320 | 2,602 | 4.971875 | 0.325 | 0.041483 | 0.052797 | 0.028284 | 0.215588 | 0.215588 | 0.131992 | 0.131992 | 0.089252 | 0 | 0 | 0.024537 | 0.232513 | 2,602 | 82 | 110 | 31.731707 | 0.772158 | 0.104151 | 0 | 0.18 | 0 | 0 | 0.076553 | 0.010499 | 0 | 0 | 0 | 0 | 0 | 1 | 0.06 | false | 0 | 0.14 | 0 | 0.26 | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c5a0ba1dff0cfca9b6097fd8fb47d6a046668a7 | 1,367 | py | Python | bkm3t_DHBKHN/Cau2/Product/service_predict/service.py | atheros98/OLP-FOSS-2018 | c3ba261a60e80a6e355da34b6015c767a4d69fba | [
"Apache-2.0"
] | 4 | 2018-11-29T09:17:22.000Z | 2018-12-07T09:11:14.000Z | bkm3t_DHBKHN/Cau2/Product/service_predict/service.py | atheros98/OLP-FOSS-2018 | c3ba261a60e80a6e355da34b6015c767a4d69fba | [
"Apache-2.0"
] | null | null | null | bkm3t_DHBKHN/Cau2/Product/service_predict/service.py | atheros98/OLP-FOSS-2018 | c3ba261a60e80a6e355da34b6015c767a4d69fba | [
"Apache-2.0"
] | 12 | 2018-11-29T00:44:26.000Z | 2018-12-04T06:34:11.000Z |
from flask import request, Flask, jsonify
import numpy as np
import pandas as pd
import pickle
import json
from preprocess import segment_data, get_features, parse_data
from gesture import get_gestures
model_activity = pickle.load(open('model_saved/svm.model', 'rb'))
mapping_activity = {
1: "walking",
2: "upstairs",
3: "downstairs",
4: "sitting",
5: "standing",
6: "laying"
}
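# The 1..6 labels match the UCI HAR activity-recognition convention
# (walking, upstairs, downstairs, sitting, standing, laying).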
app = Flask(__name__)
@app.route('/predict_gesture', methods=['GET', 'POST'])
def predict_gesture():
if request.method == 'POST':
data = request.data.decode('utf8')
print(data)
data = parse_data(data)
a = data[['ax', 'ay', 'az']]
a.to_csv('data/templace_right.csv', index=False)
print(data)
result = get_gestures(data)
print(result)
if result is not None:
return result
else:
return 'unknown'
@app.route('/predict_activity', methods=['GET', 'POST'])
def predict_activity():
if request.method == 'POST':
data = request.data.decode('utf8')
data = parse_data(data)
features = get_features(segment_data(data[['yaw', 'pitch', 'roll', 'ax', 'ay', 'az']], 2, 10, 0.5))
print(features.shape)
return mapping_activity[model_activity.predict(features)[0]]
return 'unknown'
app.run('0.0.0.0', 5000, debug=True)
| 25.792453 | 107 | 0.6218 | 177 | 1,367 | 4.666667 | 0.446328 | 0.038741 | 0.03632 | 0.041162 | 0.164649 | 0.106538 | 0.106538 | 0.106538 | 0.106538 | 0 | 0 | 0.020873 | 0.228969 | 1,367 | 52 | 108 | 26.288462 | 0.762808 | 0 | 0 | 0.238095 | 0 | 0 | 0.14652 | 0.032234 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.166667 | 0 | 0.309524 | 0.095238 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c5baf8ff4d96da979a9734ed30c5559a7ddb447 | 13,970 | py | Python | mlrun/api/utils/projects/follower.py | george0st/mlrun | 6467d3a5ceadf6cd35512b84b3ddc3da611cf39a | [
"Apache-2.0"
] | null | null | null | mlrun/api/utils/projects/follower.py | george0st/mlrun | 6467d3a5ceadf6cd35512b84b3ddc3da611cf39a | [
"Apache-2.0"
] | null | null | null | mlrun/api/utils/projects/follower.py | george0st/mlrun | 6467d3a5ceadf6cd35512b84b3ddc3da611cf39a | [
"Apache-2.0"
] | null | null | null | import typing
import humanfriendly
import mergedeep
import sqlalchemy.orm
import mlrun.api.crud
import mlrun.api.db.session
import mlrun.api.schemas
import mlrun.api.utils.auth.verifier
import mlrun.api.utils.clients.iguazio
import mlrun.api.utils.clients.nuclio
import mlrun.api.utils.periodic
import mlrun.api.utils.projects.member
import mlrun.api.utils.projects.remotes.leader
import mlrun.api.utils.projects.remotes.nop_leader
import mlrun.config
import mlrun.errors
import mlrun.utils
import mlrun.utils.helpers
import mlrun.utils.regex
import mlrun.utils.singleton
from mlrun.utils import logger
class Member(
mlrun.api.utils.projects.member.Member,
metaclass=mlrun.utils.singleton.AbstractSingleton,
):
def initialize(self):
logger.info("Initializing projects follower")
self._leader_name = mlrun.mlconf.httpdb.projects.leader
self._sync_session = None
self._leader_client: mlrun.api.utils.projects.remotes.leader.Member
if self._leader_name == "iguazio":
self._leader_client = mlrun.api.utils.clients.iguazio.Client()
if not mlrun.mlconf.httpdb.projects.iguazio_access_key:
raise mlrun.errors.MLRunInvalidArgumentError(
"Iguazio access key must be configured when the leader is Iguazio"
)
self._sync_session = mlrun.mlconf.httpdb.projects.iguazio_access_key
elif self._leader_name == "nop":
self._leader_client = mlrun.api.utils.projects.remotes.nop_leader.Member()
else:
raise NotImplementedError("Unsupported project leader")
self._periodic_sync_interval_seconds = humanfriendly.parse_timespan(
mlrun.mlconf.httpdb.projects.periodic_sync_interval
)
self._synced_until_datetime = None
# run one sync to start off on the right foot and fill out the cache but don't fail initialization on it
try:
# Basically the delete operation in our projects mechanism is fully consistent, meaning the leader won't
# remove the project from its persistency (the source of truth) until it was successfully removed from all
# followers. Therefore, when syncing projects from the leader, we don't need to search for the deletions
# that may happened without us knowing about it (therefore full_sync by default is false). When we
# introduced the chief/worker mechanism, we needed to change the follower to keep its projects in the DB
# instead of in cache. On the switch, since we were using cache and the projects table in the DB was not
# maintained, we know we may have projects that shouldn't be there anymore, ideally we would have trigger
# the full sync only once on the switch, but since we don't have a good heuristic to identify the switch
# we're doing a full_sync on every initialization
full_sync = (
mlrun.mlconf.httpdb.clusterization.role
== mlrun.api.schemas.ClusterizationRole.chief
)
self._sync_projects(full_sync=full_sync)
except Exception as exc:
logger.warning("Initial projects sync failed", exc=str(exc))
self._start_periodic_sync()
def shutdown(self):
logger.info("Shutting down projects leader")
self._stop_periodic_sync()
def create_project(
self,
db_session: sqlalchemy.orm.Session,
project: mlrun.api.schemas.Project,
projects_role: typing.Optional[mlrun.api.schemas.ProjectsRole] = None,
leader_session: typing.Optional[str] = None,
wait_for_completion: bool = True,
) -> typing.Tuple[typing.Optional[mlrun.api.schemas.Project], bool]:
if self._is_request_from_leader(projects_role):
mlrun.api.crud.Projects().create_project(db_session, project)
return project, False
else:
is_running_in_background = self._leader_client.create_project(
leader_session, project, wait_for_completion
)
created_project = None
if not is_running_in_background:
created_project = self.get_project(
db_session, project.metadata.name, leader_session
)
return created_project, is_running_in_background
def store_project(
self,
db_session: sqlalchemy.orm.Session,
name: str,
project: mlrun.api.schemas.Project,
projects_role: typing.Optional[mlrun.api.schemas.ProjectsRole] = None,
leader_session: typing.Optional[str] = None,
wait_for_completion: bool = True,
) -> typing.Tuple[typing.Optional[mlrun.api.schemas.Project], bool]:
if self._is_request_from_leader(projects_role):
mlrun.api.crud.Projects().store_project(db_session, name, project)
return project, False
else:
try:
self.get_project(db_session, name, leader_session)
except mlrun.errors.MLRunNotFoundError:
return self.create_project(
db_session,
project,
projects_role,
leader_session,
wait_for_completion,
)
else:
self._leader_client.update_project(leader_session, name, project)
return self.get_project(db_session, name, leader_session), False
def patch_project(
self,
db_session: sqlalchemy.orm.Session,
name: str,
project: dict,
patch_mode: mlrun.api.schemas.PatchMode = mlrun.api.schemas.PatchMode.replace,
projects_role: typing.Optional[mlrun.api.schemas.ProjectsRole] = None,
leader_session: typing.Optional[str] = None,
wait_for_completion: bool = True,
) -> typing.Tuple[typing.Optional[mlrun.api.schemas.Project], bool]:
if self._is_request_from_leader(projects_role):
# No real scenario for this to be useful currently - in iguazio patch is transformed to store request
raise NotImplementedError("Patch operation not supported from leader")
else:
current_project = self.get_project(db_session, name, leader_session)
strategy = patch_mode.to_mergedeep_strategy()
current_project_dict = current_project.dict(exclude_unset=True)
mergedeep.merge(current_project_dict, project, strategy=strategy)
patched_project = mlrun.api.schemas.Project(**current_project_dict)
return self.store_project(
db_session,
name,
patched_project,
projects_role,
leader_session,
wait_for_completion,
)
def delete_project(
self,
db_session: sqlalchemy.orm.Session,
name: str,
deletion_strategy: mlrun.api.schemas.DeletionStrategy = mlrun.api.schemas.DeletionStrategy.default(),
projects_role: typing.Optional[mlrun.api.schemas.ProjectsRole] = None,
auth_info: mlrun.api.schemas.AuthInfo = mlrun.api.schemas.AuthInfo(),
wait_for_completion: bool = True,
) -> bool:
if self._is_request_from_leader(projects_role):
mlrun.api.crud.Projects().delete_project(
db_session, name, deletion_strategy
)
else:
return self._leader_client.delete_project(
auth_info.session,
name,
deletion_strategy,
wait_for_completion,
)
return False
def get_project(
self,
db_session: sqlalchemy.orm.Session,
name: str,
leader_session: typing.Optional[str] = None,
) -> mlrun.api.schemas.Project:
return mlrun.api.crud.Projects().get_project(db_session, name)
def get_project_owner(
self,
db_session: sqlalchemy.orm.Session,
name: str,
) -> mlrun.api.schemas.ProjectOwner:
return self._leader_client.get_project_owner(self._sync_session, name)
def list_projects(
self,
db_session: sqlalchemy.orm.Session,
owner: str = None,
format_: mlrun.api.schemas.ProjectsFormat = mlrun.api.schemas.ProjectsFormat.full,
labels: typing.List[str] = None,
state: mlrun.api.schemas.ProjectState = None,
# needed only for external usage when requesting leader format
projects_role: typing.Optional[mlrun.api.schemas.ProjectsRole] = None,
leader_session: typing.Optional[str] = None,
names: typing.Optional[typing.List[str]] = None,
) -> mlrun.api.schemas.ProjectsOutput:
if (
format_ == mlrun.api.schemas.ProjectsFormat.leader
and not self._is_request_from_leader(projects_role)
):
raise mlrun.errors.MLRunAccessDeniedError(
"Leader format is allowed only to the leader"
)
projects_output = mlrun.api.crud.Projects().list_projects(
db_session, owner, format_, labels, state, names
)
if format_ == mlrun.api.schemas.ProjectsFormat.leader:
leader_projects = [
self._leader_client.format_as_leader_project(project)
for project in projects_output.projects
]
projects_output.projects = leader_projects
return projects_output
async def list_project_summaries(
self,
db_session: sqlalchemy.orm.Session,
owner: str = None,
labels: typing.List[str] = None,
state: mlrun.api.schemas.ProjectState = None,
projects_role: typing.Optional[mlrun.api.schemas.ProjectsRole] = None,
leader_session: typing.Optional[str] = None,
names: typing.Optional[typing.List[str]] = None,
) -> mlrun.api.schemas.ProjectSummariesOutput:
return await mlrun.api.crud.Projects().list_project_summaries(
db_session, owner, labels, state, names
)
async def get_project_summary(
self,
db_session: sqlalchemy.orm.Session,
name: str,
leader_session: typing.Optional[str] = None,
) -> mlrun.api.schemas.ProjectSummary:
return await mlrun.api.crud.Projects().get_project_summary(db_session, name)
def _start_periodic_sync(self):
# the > 0 condition is to allow ourselves to disable the sync from configuration
if self._periodic_sync_interval_seconds > 0:
logger.info(
"Starting periodic projects sync",
interval=self._periodic_sync_interval_seconds,
)
mlrun.api.utils.periodic.run_function_periodically(
self._periodic_sync_interval_seconds,
self._sync_projects.__name__,
False,
self._sync_projects,
)
def _stop_periodic_sync(self):
mlrun.api.utils.periodic.cancel_periodic_function(self._sync_projects.__name__)
def _sync_projects(self, full_sync=False):
"""
:param full_sync: when set to true, in addition to syncing project creation/updates from the leader, we will
also sync deletions that may occur without updating us the follower
"""
leader_projects, latest_updated_at = self._leader_client.list_projects(
self._sync_session, self._synced_until_datetime
)
db_session = mlrun.api.db.session.create_session()
db_projects = mlrun.api.crud.Projects().list_projects(
db_session, format_=mlrun.api.schemas.ProjectsFormat.name_only
)
# Don't add projects in non terminal state if they didn't exist before to prevent race conditions
filtered_projects = []
for leader_project in leader_projects:
if (
leader_project.status.state
not in mlrun.api.schemas.ProjectState.terminal_states()
and leader_project.metadata.name not in db_projects.projects
):
continue
filtered_projects.append(leader_project)
for project in filtered_projects:
mlrun.api.crud.Projects().store_project(
db_session, project.metadata.name, project
)
if full_sync:
logger.info("Performing full sync")
leader_project_names = [
project.metadata.name for project in leader_projects
]
projects_to_remove = list(
set(db_projects.projects).difference(leader_project_names)
)
for project_to_remove in projects_to_remove:
logger.info(
"Found project in the DB that is not in leader. Removing",
name=project_to_remove,
)
mlrun.api.crud.Projects().delete_project(
db_session,
project_to_remove,
mlrun.api.schemas.DeletionStrategy.cascading,
)
self._synced_until_datetime = latest_updated_at
def _is_request_from_leader(
self, projects_role: typing.Optional[mlrun.api.schemas.ProjectsRole]
) -> bool:
if projects_role and projects_role.value == self._leader_name:
return True
return False
@staticmethod
def _is_project_matching_labels(
labels: typing.List[str], project: mlrun.api.schemas.Project
):
if not project.metadata.labels:
return False
for label in labels:
if "=" in label:
name, value = [v.strip() for v in label.split("=", 1)]
if name not in project.metadata.labels:
return False
return value == project.metadata.labels[name]
else:
return label in project.metadata.labels
| 42.852761 | 118 | 0.641303 | 1,595 | 13,970 | 5.411912 | 0.178683 | 0.057461 | 0.062558 | 0.025487 | 0.403846 | 0.328892 | 0.295297 | 0.261817 | 0.191497 | 0.168211 | 0 | 0.0003 | 0.283035 | 13,970 | 325 | 119 | 42.984615 | 0.861522 | 0.106299 | 0 | 0.339161 | 0 | 0 | 0.030466 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.048951 | false | 0 | 0.073427 | 0.006993 | 0.192308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c5c89febd06d4828da804829e7d62d9b76b17be | 4,100 | py | Python | src/primaires/scripting/fonctions/hauteur_meche.py | vlegoff/tsunami | 36b3b974f6eefbf15cd5d5f099fc14630e66570b | [
"BSD-3-Clause"
] | 14 | 2015-08-21T19:15:21.000Z | 2017-11-26T13:59:17.000Z | src/primaires/scripting/fonctions/hauteur_meche.py | vincent-lg/tsunami | 36b3b974f6eefbf15cd5d5f099fc14630e66570b | [
"BSD-3-Clause"
] | 20 | 2015-09-29T20:50:45.000Z | 2018-06-21T12:58:30.000Z | src/primaires/scripting/fonctions/hauteur_meche.py | vlegoff/tsunami | 36b3b974f6eefbf15cd5d5f099fc14630e66570b | [
"BSD-3-Clause"
] | 3 | 2015-05-02T19:42:03.000Z | 2018-09-06T10:55:00.000Z | # -*-coding:Utf-8 -*
# Copyright (c) 2010-2017 LE GOFF Vincent
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# * Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software
# without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
# OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
"""Fichier contenant la fonction hauteur_meche."""
from fractions import Fraction
from primaires.scripting.fonction import Fonction
from primaires.scripting.instruction import ErreurExecution
class ClasseFonction(Fonction):
"""Renvoie la hauteur de la mèche (en pourcent)."""
@classmethod
def init_types(cls):
cls.ajouter_types(cls.hauteur_meche, "Objet")
cls.ajouter_types(cls.prototype_hauteur_meche, "PrototypeObjet")
@staticmethod
def hauteur_meche(objet):
"""Retourne la hauteur de la mèche entre 1 et 100.
Paramètres à préciser :
* objet : l'objet (type objet lumière)
Cette fonction retourne la hauteur de la mèche en pourcent.
Une lumière qui n'a pas du tout brûlée est à 100. Une
lumière qui n'a plus de mèche (ne peut plus brûler) est à
0. Quand une lampe est allumée, sa mèche descend
généralement. On parle ici de mèche, mais bien sûr cela
dépend de la lumière (ce peut être une torche, une lampe ou
même quelque chose de magique qui n'a pas de mèche).
Exemple d'utilisation :
hauteur = hauteur_meche(objet)
si hauteur = 100:
dire personnage "Cette torche est intacte et n'a jamais servie."
sinon si hauteur > 80:
dire personnage "Cette torche est à peine brûlée." longtemps."
sinon si hauteur > 60:
dire personnage "Cette torche n'est pas encore à moitié brûlée."
sinon si hauteur > 40:
dire personnage "Cette torche est à moitié brûlée."
...
sinon si hauteur = 0:
# Note : cela ne se produit que quand la torche est
# éteinte et ne peut être rallumée
dire personnage "Cette torche est fichue, passez à la suivante."
finsi
"""
if not objet.est_de_type("lumière"):
raise ErreurExecution("L'objet {} n'est pas une lumière.".format(
objet.identifiant))
if objet.duree is not None and objet.duree >= objet.duree_max:
return Fraction(0)
return Fraction(100 - (objet.duree_en_cours() / objet.duree_max * \
100))
@staticmethod
def prototype_hauteur_meche(prototype):
"""Retourne invariablement 100.
Cette fonction est simplement utilisée pour la compatibilité
avec l'objet. Quand on examine le prototype, la mèche doit
être intacte.
"""
return Fraction(100)
| 40.196078 | 79 | 0.689268 | 552 | 4,100 | 5.088768 | 0.449275 | 0.025632 | 0.03382 | 0.0445 | 0.167675 | 0.119972 | 0.068352 | 0.048416 | 0.048416 | 0.048416 | 0 | 0.013021 | 0.250732 | 4,100 | 101 | 80 | 40.594059 | 0.901367 | 0.707805 | 0 | 0.1 | 0 | 0 | 0.06505 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15 | false | 0 | 0.15 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c5cb8299b1217efd15ac3d94c3b8bae9c563fbc | 4,437 | py | Python | src/fetch-CEDAR-artifacts.py | fair-data-collective/zonmw-project-admin | 4a24b03ea51fd17ece8022ceb07b4daca23c56a1 | [
"CC-BY-4.0"
] | null | null | null | src/fetch-CEDAR-artifacts.py | fair-data-collective/zonmw-project-admin | 4a24b03ea51fd17ece8022ceb07b4daca23c56a1 | [
"CC-BY-4.0"
] | null | null | null | src/fetch-CEDAR-artifacts.py | fair-data-collective/zonmw-project-admin | 4a24b03ea51fd17ece8022ceb07b4daca23c56a1 | [
"CC-BY-4.0"
] | null | null | null | import requests
import json
import os
from pathlib import Path

def _detect_artifact(artifact_id):
    """Detects the type of artifact based on the artifact URL.

    Parameters
    ----------
    artifact_id : str
        String in form of resolvable URL

    Returns
    -------
    artifact_type : str
        String representing the artifact type, one of 'field', 'element',
        'template' or 'instance'
    artifact_url_str : str
        Unique string from the artifact id representing artifact_type
    """
    artifact_type = {
        "field": "/template-fields/",
        "element": "/template-elements/",
        "template": "/templates/",
        "instance": "/template-instances/",
    }

    if artifact_id.find(artifact_type["field"]) != -1:
        return "field", artifact_type["field"].replace("/", "")
    if artifact_id.find(artifact_type["element"]) != -1:
        return "element", artifact_type["element"].replace("/", "")
    if artifact_id.find(artifact_type["template"]) != -1:
        return "template", artifact_type["template"].replace("/", "")
    if artifact_id.find(artifact_type["instance"]) != -1:
        return "instance", artifact_type["instance"].replace("/", "")
    else:
        raise ValueError(
            "artifact_id does not contain information about CEDAR artifact."
        )

def _get_api_url(artifact_id):
    """Gets the artifact UUID and the artifact URL for the API request.

    Parameters
    ----------
    artifact_id : str
        String in form of resolvable URL

    Returns
    -------
    artifact_uuid : str
        Universally unique identifier of the CEDAR artifact
    api_url : str
        URL structured according to the CEDAR API for a GET request
    """
    base_url = "https://resource.metadatacenter.org/artifact_str/https%3A%2F%2Frepo.metadatacenter.org%2Fartifact_str%2F"
    artifact_type, artifact_url_str = _detect_artifact(artifact_id)
    base_url = base_url.replace("artifact_str", artifact_url_str)
    url_drop = "https://repo.metadatacenter.org/artifact_str/".replace(
        "artifact_str", artifact_url_str
    )
    artifact_uuid = artifact_id.replace(url_drop, "")
    return artifact_uuid, base_url + artifact_uuid

def _get_artifact(artifact_id, api_key):
    """Downloads the artifact JSON from the CEDAR resource API."""
    artifact_type, artifact_url_str = _detect_artifact(artifact_id)
    artifact_uuid, api_url = _get_api_url(artifact_id)
    headers = {
        "Accept": "application/json",
        "Authorization": "apiKey " + api_key,
    }
    artifact_json = requests.get(api_url, headers=headers).json()
    artifact_name = artifact_json["schema:name"]
    return artifact_name, artifact_type, artifact_uuid, artifact_json

def store_artifact(artifact_id, api_key, base_path="./cedar-assets/"):
    """Fetches an artifact and writes it as JSON under base_path/<Type>s/."""
    artifact_name, artifact_type, artifact_uuid, artifact_json = _get_artifact(
        artifact_id, api_key
    )
    dir_path = os.path.dirname(base_path + artifact_type.capitalize() + "s/")
    if not os.path.isdir(dir_path):
        os.makedirs(dir_path)
    file_name = artifact_name + "_" + artifact_uuid[-6:] + ".json"
    file_path = dir_path + "/" + file_name
    with open(file_path, "w") as json_file:
        json.dump(artifact_json, json_file, indent=4)

CEDAR_API = os.environ["CEDAR_API"]
HEADER = {
    "Accept": "application/json",
    "Authorization": "apiKey " + CEDAR_API,
}
FOLDER_CMD = "contents?version=all&publication_status=all&sort=name"

uuid_templates = "50b3f988-1443-454f-9f4a-365844e49ef1"  # Project Template Templates
uuid_elements = "8c5c1001-1864-46df-8888-8f60b9e96deb"  # Project Template Element

folder_url_templates = (
    "https://resource.metadatacenter.org/folders/https%3A%2F%2Frepo.metadatacenter.org%2Ffolders%2F"
    + uuid_templates
    + "/"
)
folder_url_elements = (
    "https://resource.metadatacenter.org/folders/https%3A%2F%2Frepo.metadatacenter.org%2Ffolders%2F"
    + uuid_elements
    + "/"
)

# Download every template and element in the two project folders and store them locally
folder_content = requests.get(folder_url_templates + FOLDER_CMD, headers=HEADER).json()
for artifact in folder_content["resources"]:
    if artifact["resourceType"] != "instance" and artifact["resourceType"] != "folder":
        store_artifact(artifact["@id"], CEDAR_API)

folder_content = requests.get(folder_url_elements + FOLDER_CMD, headers=HEADER).json()
for artifact in folder_content["resources"]:
    if artifact["resourceType"] != "instance" and artifact["resourceType"] != "folder":
        store_artifact(artifact["@id"], CEDAR_API)
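# --- Illustrative usage sketch (added) ---
# A single artifact can also be fetched directly; the URL below is hypothetical
# (the UUID is a placeholder, not a real CEDAR resource):
#
#   store_artifact(
#       "https://repo.metadatacenter.org/templates/00000000-0000-0000-0000-000000000000",
#       CEDAR_API)
#
# store_artifact() resolves the artifact type from the URL, downloads the JSON via
# the CEDAR resource API and writes it under ./cedar-assets/Templates/.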
| 32.152174 | 121 | 0.685148 | 536 | 4,437 | 5.421642 | 0.238806 | 0.065382 | 0.049553 | 0.022023 | 0.437371 | 0.388507 | 0.304542 | 0.26841 | 0.235375 | 0.200964 | 0 | 0.018554 | 0.186162 | 4,437 | 137 | 122 | 32.386861 | 0.786209 | 0.160694 | 0 | 0.177215 | 0 | 0.037975 | 0.265672 | 0.034521 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050633 | false | 0 | 0.050633 | 0 | 0.177215 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c5d0144f7894563a5b227971531335da278506b | 575 | py | Python | tests/test_json_models.py | sblack-usu/hs_rdf | 9b681ef8c2178d99439170da10e93680e9b484a6 | [
"BSD-3-Clause"
] | null | null | null | tests/test_json_models.py | sblack-usu/hs_rdf | 9b681ef8c2178d99439170da10e93680e9b484a6 | [
"BSD-3-Clause"
] | 4 | 2020-12-19T03:57:28.000Z | 2021-01-06T20:36:17.000Z | tests/test_json_models.py | sblack-usu/hs_rdf | 9b681ef8c2178d99439170da10e93680e9b484a6 | [
"BSD-3-Clause"
] | null | null | null | from hsclient.json_models import ResourcePreview
def test_resource_preview_authors_field_default_is_empty_list():
"""verify all `authors` fields are instantiated with [] values."""
test_data_dict = {"authors": None}
test_data_json = '{"authors": null}'
base_case = ResourcePreview()
from_kwargs = ResourcePreview(**test_data_dict)
from_dict = ResourcePreview.parse_obj(test_data_dict)
from_json = ResourcePreview.parse_raw(test_data_json)
assert all(
[x.authors == [] for x in [base_case, from_kwargs, from_dict, from_json]]
)
| 33.823529 | 81 | 0.730435 | 74 | 575 | 5.283784 | 0.5 | 0.102302 | 0.092072 | 0.081841 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166957 | 575 | 16 | 82 | 35.9375 | 0.816284 | 0.104348 | 0 | 0 | 0 | 0 | 0.047151 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 1 | 0.090909 | false | 0 | 0.090909 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c60d54b921a6e709780dcffcf29e6543532775b | 1,579 | py | Python | python-scrapy/store.py | readpage/seckill | 1a4861b2df0eb249c9b9762ee42d901c44b5537f | [
"Apache-2.0"
] | null | null | null | python-scrapy/store.py | readpage/seckill | 1a4861b2df0eb249c9b9762ee42d901c44b5537f | [
"Apache-2.0"
] | 1 | 2021-07-19T06:39:57.000Z | 2021-07-19T06:39:57.000Z | python-scrapy/store.py | readpage/seckill | 1a4861b2df0eb249c9b9762ee42d901c44b5537f | [
"Apache-2.0"
] | null | null | null | import requests
from fake_useragent import UserAgent
from bs4 import BeautifulSoup
import pymysql
import time
import random

# Open the database connection
conn = pymysql.connect("localhost", "root", "root", "seckill")

headers = {
    'User-Agent': UserAgent().random
}
url = "https://list.tmall.com/search_product.htm?spm=875.7931836/B.category2016015.1.13af4265iTAVT4&q=%CA%D6%BB%FA&vmarket=&from=mallfp..pc_1_searchbutton&acm=lb-zebra-148799-667863.1003.4.708026&type=p&scm=1003.4.lb-zebra-148799-667863.OTHER_14561662186585_708026"
r = requests.get(url, headers=headers, timeout=10)
soup = BeautifulSoup(r.text, "lxml")

def product(soup):
    # Scrape name, image, price, review count and shop name for each product card
    list = soup.select(".product-iWrap")
    for l in list[10:30]:
        name = l.select(".productTitle")[0].a["title"]
        img = l.select(".productImg-wrap img")[0]
        img_url = img["data-ks-lazyload"]
        index = img_url.rindex("!")
        img = img_url[index+1:]
        if img_url[:2] == "//":
            img_url = "http:" + img_url
        r = requests.get(img_url, headers=headers, timeout=10)
        path = "D:/picture/seckill/" + img
        with open(path, "wb") as f:
            f.write(r.content)
        price = l.select(".productPrice")[0].em["title"]
        evaluate = l.select(".productStatus span a")[0].text
        store = l.select(".productShop .productShop-name")[0].text
        stock = random.randint(500, 2000)
        # Use the cursor() method to obtain a cursor
        cursor = conn.cursor()
        sql = "INSERT INTO goods(name, img, price, evaluate, store, stock) VALUES(%s, %s, %s, %s, %s, %s)"
        cursor.execute(sql, (name, img, price, evaluate, store, stock))
    # Commit the inserts to the database and close the connection
    conn.commit()
    conn.close()

if __name__ == "__main__":
    product(soup)
| 33.595745 | 264 | 0.69221 | 234 | 1,579 | 4.581197 | 0.521368 | 0.039179 | 0.011194 | 0.011194 | 0.110075 | 0.05597 | 0 | 0 | 0 | 0 | 0 | 0.079942 | 0.128562 | 1,579 | 46 | 265 | 34.326087 | 0.699128 | 0.020899 | 0 | 0 | 0 | 0.052632 | 0.362281 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026316 | false | 0 | 0.157895 | 0 | 0.184211 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8c653b610810b89565acb81fed79b7fce4063108 | 7,331 | py | Python | tests/test_publish_from_s3_to_redis.py | virdesai/stock-analysis-engine | 0ca501277c632150717ca499121a34f8f8c71ccb | [
"Apache-2.0"
] | 819 | 2018-09-16T20:33:11.000Z | 2022-03-30T21:18:23.000Z | tests/test_publish_from_s3_to_redis.py | gvpathi/stock-analysis-engine | 0ca501277c632150717ca499121a34f8f8c71ccb | [
"Apache-2.0"
] | 14 | 2018-09-16T20:52:25.000Z | 2020-09-06T12:36:36.000Z | tests/test_publish_from_s3_to_redis.py | gvpathi/stock-analysis-engine | 0ca501277c632150717ca499121a34f8f8c71ccb | [
"Apache-2.0"
] | 226 | 2018-09-16T20:04:32.000Z | 2022-03-31T01:41:14.000Z | """
Test file for - publish from s3 to redis
========================================

Integration Tests
-----------------

Please ensure ``redis`` and ``minio`` are running and export this:

::

    export INT_TESTS=1

"""
import json
import mock
import analysis_engine.mocks.mock_boto3_s3
import analysis_engine.mocks.mock_redis
from analysis_engine.mocks.base_test import BaseTestCase
from analysis_engine.consts import S3_ACCESS_KEY
from analysis_engine.consts import S3_SECRET_KEY
from analysis_engine.consts import S3_REGION_NAME
from analysis_engine.consts import S3_ADDRESS
from analysis_engine.consts import S3_SECURE
from analysis_engine.consts import REDIS_ADDRESS
from analysis_engine.consts import REDIS_KEY
from analysis_engine.consts import REDIS_PASSWORD
from analysis_engine.consts import REDIS_DB
from analysis_engine.consts import REDIS_EXPIRE
from analysis_engine.consts import TICKER
from analysis_engine.consts import SUCCESS
from analysis_engine.consts import ERR
from analysis_engine.consts import ev
from analysis_engine.api_requests \
    import build_cache_ready_pricing_dataset
from analysis_engine.work_tasks.publish_from_s3_to_redis \
    import run_publish_from_s3_to_redis
from analysis_engine.api_requests \
    import build_publish_from_s3_to_redis_request
from spylunking.log.setup_logging import build_colorized_logger

log = build_colorized_logger(
    name=__name__)
def mock_success_task_result(
        **kwargs):
    """mock_success_task_result

    :param kwargs: keyword args dict
    """
    log.info('MOCK - mock_success_task_result')
    res = kwargs
    res['result']['status'] = SUCCESS
    res['result']['err'] = None
    return res
# end of mock_success_task_result

def mock_err_task_result(
        **kwargs):
    """mock_err_task_result

    :param kwargs: keyword args dict
    """
    log.info('MOCK - mock_err_task_result')
    res = kwargs
    res['result']['status'] = ERR
    res['result']['err'] = 'test exception'
    return res
# end of mock_err_task_result

def mock_s3_read_contents_from_key(
        s3,
        s3_bucket_name,
        s3_key,
        encoding='utf-8',
        convert_as_json=True):
    """mock_s3_read_contents_from_key

    Download the S3 key contents as a string. This
    will raise exceptions.

    :param s3: existing S3 object
    :param s3_bucket_name: bucket name
    :param s3_key: S3 key
    :param encoding: utf-8 by default
    :param convert_as_json: auto-convert to a dict
    """
    log.info('MOCK - mock_s3_read_contents_from_key')
    data = build_cache_ready_pricing_dataset()
    if not convert_as_json:
        data = json.dumps(data)
    return data
# end of mock_s3_read_contents_from_key
class TestPublishFromS3ToRedis(BaseTestCase):
    """TestPublishFromS3ToRedis"""

    @mock.patch(
        ('boto3.resource'),
        new=analysis_engine.mocks.mock_boto3_s3.build_boto3_resource)
    @mock.patch(
        ('redis.Redis'),
        new=analysis_engine.mocks.mock_redis.MockRedis)
    @mock.patch(
        ('analysis_engine.get_task_results.'
         'get_task_results'),
        new=mock_success_task_result)
    @mock.patch(
        ('analysis_engine.s3_read_contents_from_key.'
         's3_read_contents_from_key'),
        new=mock_s3_read_contents_from_key)
    def test_success_publish_from_s3_to_redis(self):
        """test_success_publish_from_s3_to_redis"""
        work = build_publish_from_s3_to_redis_request()
        work['s3_enabled'] = 1
        work['redis_enabled'] = 1
        work['s3_access_key'] = S3_ACCESS_KEY
        work['s3_secret_key'] = S3_SECRET_KEY
        work['s3_region_name'] = S3_REGION_NAME
        work['s3_address'] = S3_ADDRESS
        work['s3_secure'] = S3_SECURE
        work['redis_address'] = REDIS_ADDRESS
        work['redis_db'] = REDIS_DB
        work['redis_key'] = REDIS_KEY
        work['redis_password'] = REDIS_PASSWORD
        work['redis_expire'] = REDIS_EXPIRE
        work['s3_bucket'] = 'integration-tests'
        work['s3_key'] = 'integration-test-v1'
        work['redis_key'] = 'integration-test-v1'

        res = run_publish_from_s3_to_redis(
            work)
        self.assertTrue(
            res['status'] == SUCCESS)
        self.assertTrue(
            res['err'] is None)
        self.assertTrue(
            res['rec'] is not None)
        record = res['rec']
        self.assertEqual(
            record['ticker'],
            TICKER)
        self.assertEqual(
            record['s3_enabled'],
            True)
        self.assertEqual(
            record['redis_enabled'],
            True)
        self.assertEqual(
            record['s3_bucket'],
            work['s3_bucket'])
        self.assertEqual(
            record['s3_key'],
            work['s3_key'])
        self.assertEqual(
            record['redis_key'],
            work['redis_key'])
    # end of test_success_publish_from_s3_to_redis

    def test_err_publish_from_s3_to_redis(self):
        """test_err_publish_from_s3_to_redis"""
        work = build_publish_from_s3_to_redis_request()
        work['ticker'] = None
        res = run_publish_from_s3_to_redis(
            work)
        self.assertTrue(
            res['status'] == ERR)
        self.assertTrue(
            res['err'] == 'missing ticker')
    # end of test_err_publish_from_s3_to_redis

    """
    Integration Tests

    Please ensure redis and minio are running and run this:

    ::

        export INT_TESTS=1

    """

    @mock.patch(
        ('analysis_engine.get_task_results.'
         'get_task_results'),
        new=mock_success_task_result)
    def test_integration_publish_from_s3_to_redis(self):
        """test_integration_publish_from_s3_to_redis"""
        if ev('INT_TESTS', '0') == '0':
            return

        work = build_publish_from_s3_to_redis_request()
        work['s3_enabled'] = 1
        work['redis_enabled'] = 1
        work['s3_access_key'] = S3_ACCESS_KEY
        work['s3_secret_key'] = S3_SECRET_KEY
        work['s3_region_name'] = S3_REGION_NAME
        work['s3_address'] = S3_ADDRESS
        work['s3_secure'] = S3_SECURE
        work['redis_address'] = REDIS_ADDRESS
        work['redis_db'] = REDIS_DB
        work['redis_key'] = REDIS_KEY
        work['redis_password'] = REDIS_PASSWORD
        work['redis_expire'] = REDIS_EXPIRE
        work['s3_bucket'] = 'integration-tests'
        work['s3_key'] = 'integration-test-v1'
        work['redis_key'] = 'integration-test-v1'

        res = run_publish_from_s3_to_redis(
            work)
        self.assertTrue(
            res['status'] == SUCCESS)
        self.assertTrue(
            res['err'] is None)
        self.assertTrue(
            res['rec'] is not None)
        record = res['rec']
        self.assertEqual(
            record['ticker'],
            TICKER)
        self.assertEqual(
            record['s3_enabled'],
            True)
        self.assertEqual(
            record['redis_enabled'],
            True)
        self.assertEqual(
            record['s3_bucket'],
            work['s3_bucket'])
        self.assertEqual(
            record['s3_key'],
            work['s3_key'])
        self.assertEqual(
            record['redis_key'],
            work['redis_key'])
    # end of test_integration_publish_from_s3_to_redis

# end of TestPublishFromS3ToRedis
| 29.800813 | 69 | 0.643978 | 916 | 7,331 | 4.789301 | 0.132096 | 0.079781 | 0.056303 | 0.064965 | 0.780032 | 0.689309 | 0.570549 | 0.463871 | 0.463871 | 0.463871 | 0 | 0.018066 | 0.252489 | 7,331 | 245 | 70 | 29.922449 | 0.782482 | 0.137498 | 0 | 0.643275 | 0 | 0 | 0.16486 | 0.030706 | 0 | 0 | 0 | 0 | 0.116959 | 1 | 0.035088 | false | 0.017544 | 0.134503 | 0 | 0.19883 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4fb20d656fc9b2b49adbd512dba7e5b31035e190 | 3,955 | py | Python | 03-file-authentication/authentication.py | mithi/simple-cryptography | 13559f8b4dd9da2545e6f8b3564fda5e33ae7410 | [
"MIT"
] | 49 | 2019-03-16T06:26:36.000Z | 2022-03-13T17:21:11.000Z | basic-cryptography-scripts/basic-cryptography-scripts/03-file-authentication/authentication.py | paulveillard/cybersecurity-cryptography | f081a793130550c68ef668b561f653c21231d5dc | [
"Apache-2.0"
] | 8 | 2019-03-11T05:11:51.000Z | 2019-03-15T18:12:30.000Z | basic-cryptography-scripts/basic-cryptography-scripts/03-file-authentication/authentication.py | paulveillard/cybersecurity-cryptography | f081a793130550c68ef668b561f653c21231d5dc | [
"Apache-2.0"
] | 11 | 2019-03-31T06:09:22.000Z | 2022-03-13T17:21:12.000Z | import argparse
from hashlib import sha256
import os
import subprocess
HASHSIZE = 32
class StreamReceiver:
    def __init__(self, path, h, buffersize=1024):
        self.stream = path
        self.buffersize = buffersize
        self.h0 = bytes.fromhex(h)

    # A generator that outputs a stream of bytes as
    # expected by our system
    def output_stream(self):
        with open(self.stream, 'rb') as f:
            n = HASHSIZE + self.buffersize
            # The file is opened in binary mode, so use b'' as the sentinel
            for chunk in iter(lambda: f.read(n), b''):
                if len(chunk) == 0: break
                yield chunk

    def write_file(self, path):
        print("h0: ", self.h0.hex())
        print("Verifying...")
        chunksize = self.buffersize + HASHSIZE
        h = self.h0
        gen = self.output_stream()
        with open(path, 'wb') as f:
            for chunk in gen:
                if sha256(chunk).digest() != h:
                    raise ValueError
                h = chunk[-HASHSIZE:]
                f.write(chunk[:self.buffersize])
        print("File created: ", path)

class StreamSender:
    def __init__(self, path, buffersize):
        self.file = path
        self.buffersize = buffersize
        self.hashes = []
        self.h0 = None

    # A generator that reads a block of data (size `buffersize` in bytes)
    # at a time, starting from the last block to the first block of the file.
    # The last block of the file (which is the first block we read)
    # might be shorter than the buffersize, while all other blocks are exactly
    # of length `buffersize`
    def read_block_reverse(self):
        with open(self.file, 'rb') as f:
            f.seek(0, os.SEEK_END)
            filesize = f.tell()
            firstchunk = filesize % self.buffersize
            if firstchunk != 0:
                f.seek(filesize - firstchunk)
                yield f.read(firstchunk)
            f.seek(-firstchunk-self.buffersize, os.SEEK_END)
            move = -2*self.buffersize
            while True:
                yield f.read(self.buffersize)
                if f.tell() <= self.buffersize: break
                f.seek(move, 1)

    # Given a file, write all the hashes that must be sent so that the
    # receiver can authenticate the file.
    # The first hash is the hash of the last block.
    # The second hash is the hash of the concatenation of the
    # last block and the first hash,
    # and so on until the last hash written is the hash of
    # the concatenation of the last block and the second-to-last hash.
    # The last hash written is removed from the list and stored
    # in `self.h0`, as this will be distributed to users.
    def build_hashes(self):
        print("Writing hash in memory...")
        gen = self.read_block_reverse()
        h = bytes()
        for i in gen:
            h = sha256(i + h).digest()
            self.hashes.append(h)
        self.h0 = self.hashes.pop()

    # A generator that returns a chunk of bytes to be sent in order;
    # the first hash is not written as part of the file.
    # The first chunk is the first block of the file and the second hash,
    # the third chunk is the second block of the file and the third hash;
    # finally, the last chunk to be returned is the last block of the file.
    def read_block_hash(self):
        self.build_hashes()
        with open(self.file, 'rb') as f:
            while True:
                yield f.read(self.buffersize) + self.hashes.pop()
                if len(self.hashes) == 0:
                    yield f.read(self.buffersize)
                    print("Hashes depleted.")
                    break

    def write_file(self, path):
        print("Signing...")
        gen = self.read_block_hash()
        with open(path, 'wb') as f:
            for chunk in gen:
                f.write(chunk)
        print("h0: ", self.h0.hex())
        print("File created: ", path)

    def get_first_hash(self):
        return self.h0.hex()
4fb57290c5f1a2a949e0708eab1d9cb25bf29117 | 3,245 | py | Python | ch03/ch03-04-time_series.py | alexmalins/kagglebook | 260f6634b6bbaa94c2e989770e75dc7101f5c614 | [
"BSD-3-Clause"
] | 13 | 2021-02-20T08:57:28.000Z | 2022-03-31T12:47:08.000Z | ch03/ch03-04-time_series.py | Tharunkumar01/kagglebook | 260f6634b6bbaa94c2e989770e75dc7101f5c614 | [
"BSD-3-Clause"
] | null | null | null | ch03/ch03-04-time_series.py | Tharunkumar01/kagglebook | 260f6634b6bbaa94c2e989770e75dc7101f5c614 | [
"BSD-3-Clause"
] | 2 | 2021-07-15T03:56:39.000Z | 2021-07-29T00:53:54.000Z | import numpy as np
import pandas as pd
# -----------------------------------
# Wide format, long format
# -----------------------------------
# Load wide format data
df_wide = pd.read_csv('../input/ch03/time_series_wide.csv', index_col=0)
# Convert the index column to datetime dtype
df_wide.index = pd.to_datetime(df_wide.index)
print(df_wide.iloc[:5, :3])
'''
A B C
date
2016-07-01 532 3314 1136
2016-07-02 798 2461 1188
2016-07-03 823 3522 1711
2016-07-04 937 5451 1977
2016-07-05 881 4729 1975
'''
# Convert to long format
df_long = df_wide.stack().reset_index(1)
df_long.columns = ['id', 'value']
print(df_long.head(10))
'''
id value
date
2016-07-01 A 532
2016-07-01 B 3314
2016-07-01 C 1136
2016-07-02 A 798
2016-07-02 B 2461
2016-07-02 C 1188
2016-07-03 A 823
2016-07-03 B 3522
2016-07-03 C 1711
2016-07-04 A 937
...
'''
# Restore wide format
df_wide = df_long.pivot(index=None, columns='id', values='value')
# -----------------------------------
# Lag variables
# -----------------------------------
# Set data to wide format
x = df_wide
# -----------------------------------
# x is the wide format data frame
# The index is the date or timestamp, assume the columns store data of interest such as sales etc. for users or stores
# Create lag data for one period ago
x_lag1 = x.shift(1)
# Create lag data for seven periods ago
x_lag7 = x.shift(7)
# -----------------------------------
# Calculate moving averages for three periods from one period before
x_avg3 = x.shift(1).rolling(window=3).mean()
# -----------------------------------
# Calculate max values over seven periods from one period before
x_max7 = x.shift(1).rolling(window=7).max()
# -----------------------------------
# Calculate average of data from 7, 14, 21 and 28 periods before
x_e7_avg = (x.shift(7) + x.shift(14) + x.shift(21) + x.shift(28)) / 4.0
# -----------------------------------
# Create values for one period ahead
x_lead1 = x.shift(-1)
# -----------------------------------
# Features derived from event history
# -----------------------------------
# Load the data
train_x = pd.read_csv('../input/ch03/time_series_train.csv')
event_history = pd.read_csv('../input/ch03/time_series_events.csv')
train_x['date'] = pd.to_datetime(train_x['date'])
event_history['date'] = pd.to_datetime(event_history['date'])
# -----------------------------------
# train_x is training data in a data frame with columns for user id and date
# event_history contains data from past events in a data frame with date and event columns
# occurrences is a data frame with columns for date and whether a sale was made or not
dates = np.sort(train_x['date'].unique())
occurrences = pd.DataFrame(dates, columns=['date'])
sale_history = event_history[event_history['event'] == 'sale']
occurrences['sale'] = occurrences['date'].isin(sale_history['date'])
# Take cumulative sums to calculate to number of occurrences on each date
# occurrences is now a data frame with columns for date and cumulative number of sales on that date
occurrences['sale'] = occurrences['sale'].cumsum()
# Using the timestamp as a key, combine with the training dataset
train_x = train_x.merge(occurrences, on='date', how='left')
| 31.201923 | 118 | 0.614484 | 495 | 3,245 | 3.933333 | 0.317172 | 0.046225 | 0.016436 | 0.028762 | 0.144838 | 0.115049 | 0.074987 | 0.031844 | 0 | 0 | 0 | 0.092038 | 0.15624 | 3,245 | 103 | 119 | 31.504854 | 0.619065 | 0.481664 | 0 | 0 | 0 | 0 | 0.147662 | 0.086136 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4fb6171b0226aac1dfedc8cf9e176bb3cb8d4b76 | 12,310 | py | Python | 07.gsm_random_forest/wdfproc/convert.py | predora005/wheather-forecasting | deb3592ac52751ccaf81d7aa8bbb00a14d232f9f | [
"MIT"
] | null | null | null | 07.gsm_random_forest/wdfproc/convert.py | predora005/wheather-forecasting | deb3592ac52751ccaf81d7aa8bbb00a14d232f9f | [
"MIT"
] | null | null | null | 07.gsm_random_forest/wdfproc/convert.py | predora005/wheather-forecasting | deb3592ac52751ccaf81d7aa8bbb00a14d232f9f | [
"MIT"
] | null | null | null | # coding: utf-8
import math
from enum import Enum
import numpy as np
import pandas as pd
import re
# Map for converting wind direction names to angles in radians
__WIND_DIRECTION_TO_ANGLE_MAP = {
    '東' : 0.0, '東北東': math.pi * 1 / 8, '北東' : math.pi * 2 / 8,
    '北北東': math.pi * 3 / 8, '北' : math.pi * 4 / 8, '北北西': math.pi * 5 / 8,
    '北西' : math.pi * 6 / 8, '西北西': math.pi * 7 / 8, '西' : math.pi,
    '西南西': -math.pi * 7 / 8, '南西' : -math.pi * 6 / 8, '南南西': -math.pi * 5 / 8,
    '南' : -math.pi * 4 / 8, '南南東': -math.pi * 3 / 8, '南東' : -math.pi * 2 / 8,
    '東南東': -math.pi * 1 / 8,
    #'×' : 0.0
}

# Mode used when converting weather values
class WeatherConvertMode(Enum):
    Coarse = 1      # coarse classes
    Fine = 2        # fine-grained classes
    RainOrNot = 3   # binary choice: rain or not

# Map for converting weather names to integer codes
__WEATHER_TO_INT_MAP = {
    '快晴' : 1,
    '晴れ' : 2,
    '薄曇' : 3,
    '曇' : 4,
    '煙霧' : 5,
    '砂じん嵐' : 6,
    '地ふぶき' : 7,
    '霧' : 8,
    '霧雨' : 9,
    '雨' : 10,
    'みぞれ' : 11,
    '雪' : 12,
    'あられ' : 13,
    'ひょう' : 14,
    '雷' : 15,
    'しゅう雨または止み間のある雨' : 16,
    '着氷性の雨' : 17,
    '着氷性の霧雨' : 18,
    'しゅう雪または止み間のある雪' : 19,
    '霧雪' : 22,
    '凍雨' : 23,
    '細氷' : 24,
    'もや' : 28,
    '降水またはしゅう雨性の降水' : 101,
}

# Map for converting weather codes (coarse classes)
__WEATHER_REPLACE_MAP_COARSE = {
    1: 0, 2: 0,             # clear sky, sunny -> sunny
    3: 1, 4: 1,             # thin clouds, cloudy -> cloudy
    8: 2, 9: 2, 10: 2,      # fog, drizzle, rain -> rain
    11: 2, 12: 2, 13: 2,    # sleet, snow, snow pellets -> rain
    14: 2, 16: 2, 17: 2,    # hail, rain showers, freezing rain -> rain
    18: 2, 19: 2, 22: 2,    # freezing drizzle, snow showers, snow grains -> rain
    23: 2, 24: 2, 28: 2,    # ice pellets, ice crystals, mist -> rain
    101: 2,                 # precipitation -> rain
    5: 3, 6: 3, 7: 3,       # haze, dust storm, drifting snow -> other
    15: 3, 0: 3             # thunder, unknown -> other
}

# Map for converting weather codes (fine-grained classes)
__WEATHER_REPLACE_MAP_FINE = {
    1: 0,                   # clear sky -> clear sky
    2: 1,                   # sunny -> sunny
    3: 2,                   # thin clouds -> thin clouds
    4: 3,                   # cloudy -> cloudy
    8: 4, 9: 4, 10: 4,      # fog, drizzle, rain -> rain
    11: 4, 12: 4, 13: 4,    # sleet, snow, snow pellets -> rain
    14: 4, 16: 4, 17: 4,    # hail, rain showers, freezing rain -> rain
    18: 4, 19: 4, 22: 4,    # freezing drizzle, snow showers, snow grains -> rain
    23: 4, 24: 4, 28: 4,    # ice pellets, ice crystals, mist -> rain
    101: 4,                 # precipitation -> rain
    5: 5, 6: 5, 7: 5,       # haze, dust storm, drifting snow -> other
    15: 5, 0: 5             # thunder, unknown -> other
}

# Map for converting weather codes (binary: rain or not)
__WEATHER_REPLACE_MAP_RAIN_OR_NOT = {
    1: 0, 2: 0,             # clear sky, sunny -> not rain
    3: 0, 4: 0,             # thin clouds, cloudy -> not rain
    8: 1, 9: 1, 10: 1,      # fog, drizzle, rain -> rain
    11: 1, 12: 1, 13: 1,    # sleet, snow, snow pellets -> rain
    14: 1, 16: 1, 17: 1,    # hail, rain showers, freezing rain -> rain
    18: 1, 19: 1, 22: 1,    # freezing drizzle, snow showers, snow grains -> rain
    23: 1, 24: 1, 28: 1,    # ice pellets, ice crystals, mist -> rain
    101: 1,                 # precipitation -> rain
    5: 0, 6: 0, 7: 0,       # haze, dust storm, drifting snow -> not rain
    15: 0, 0: 0             # thunder, unknown -> not rain
}

# Map for converting cloud cover values to floats
__CLOUD_VOLUME_TO_FLOAT_MAP = {
    '0+' : 0.5,
    '10-' : 9.5,
}

##################################################
# Convert weather symbols to numeric values
##################################################
def convert_symbol_to_number(df, inplace=True):
    """ Convert weather symbols to numeric values

    Args:
        df(DataFrame) : DataFrame to convert
        inplace(bool) : whether to modify the original DataFrame in place
    Returns:
        DataFrame : converted DataFrame
    """
    if inplace:
        new_df = df
    else:
        new_df = df.copy()

    # Strip the symbol markers ')' and ']' and keep the numeric value
    def to_number(element):
        if type(element) is str:
            if ')' in element:
                value = element.replace(')', '')
            elif ']' in element:
                value = element.replace(']', '')
            else:
                value = element
        else:
            value = element
        return value

    # Convert the symbols in every column to plain numeric values
    for col in df.columns:
        new_df[col] = new_df[col].map(lambda element : to_number(element))

    return new_df

##################################################
# Convert wind speed/direction to X,Y vector components
# (for surface weather data)
##################################################
def convert_wind_to_vector_ground(df, inplace=True):
    """ Convert wind speed/direction to X,Y vector components
    (for surface weather data)

    Args:
        df(DataFrame) : DataFrame to convert
        inplace(bool) : whether to modify the original DataFrame in place
    Returns:
        DataFrame : converted DataFrame
    """
    if inplace:
        new_df = df
    else:
        new_df = df.copy()

    # Function that converts a wind direction name to an angle (radians)
    def to_angle(wind_direction):
        angle = 0.0
        if wind_direction in __WIND_DIRECTION_TO_ANGLE_MAP:
            angle = __WIND_DIRECTION_TO_ANGLE_MAP[wind_direction]
        else:
            angle = 0.0
        return angle

    # Convert wind direction names to angles
    wind_dir_cols = [col for col in new_df.columns if('風向' in col)]
    for col in wind_dir_cols:
        new_col = col + '(角度)'
        new_df[new_col] = new_df[col].map(lambda col : to_angle(col))

    # Replace invalid wind speed entries ('×') with 0
    wind_speed_cols = [col for col in new_df.columns if('風速' in col)]
    for col in wind_speed_cols:
        if df[col].dtype == object:
            new_df.replace({col: {'×': 0.0}}, inplace=True)

    # Convert direction/speed into X and Y wind speed components
    wind_angle_cols = [col for col in new_df.columns if('風向(角度)' in col)]
    for angle_col in wind_angle_cols:
        result = re.search(r"(\D+)_風向\(角度\)", angle_col)
        place_name = result.group(1)
        speed_col = place_name + '_' + '風速(m/s)'
        new_df = new_df.astype({speed_col: float})

        wind_x_col = place_name + '_' + '風速(m/s)_X'
        wind_y_col = place_name + '_' + '風速(m/s)_Y'
        new_df[wind_x_col] = new_df[speed_col] * np.cos(new_df[angle_col])
        new_df[wind_y_col] = new_df[speed_col] * np.sin(new_df[angle_col])

        # Round the newly added columns to 3 decimal places
        new_df = new_df.round({wind_x_col: 3, wind_y_col: 3})

    # Drop the original direction/speed columns
    new_df = new_df.drop(columns=wind_dir_cols)
    new_df = new_df.drop(columns=wind_speed_cols)
    new_df = new_df.drop(columns=wind_angle_cols)

    return new_df

##################################################
# Convert weather names to integer codes
##################################################
def convert_weather_to_interger(df, inplace=True):
    """ Convert weather names to integer codes

    Args:
        df(DataFrame) : DataFrame to convert
        inplace(bool) : whether to modify the original DataFrame in place
    Returns:
        DataFrame : converted DataFrame
    """
    if inplace:
        new_df = df
    else:
        new_df = df.copy()

    # Function that converts a weather name to an integer code
    def to_integer(name):
        value = 0
        if name in __WEATHER_TO_INT_MAP:
            value = __WEATHER_TO_INT_MAP[name]
        else:
            value = 0
        return value

    # Convert the weather columns to integer codes
    weather_cols = [col for col in new_df.columns if('天気' in col)]
    for col in weather_cols:
        new_df[col] = new_df[col].map(lambda col : to_integer(col))

    return new_df

##################################################
# Convert cloud cover values to floats
##################################################
def convert_cloud_volume_to_float(df, inplace=True):
    """ Convert cloud cover values to floats

    Args:
        df(DataFrame) : DataFrame to convert
        inplace(bool) : whether to modify the original DataFrame in place
    Returns:
        DataFrame : converted DataFrame
    """
    if inplace:
        new_df = df
    else:
        new_df = df.copy()

    # Function that converts a cloud cover value to a float
    def to_float(name):
        value = 0.0
        if name in __CLOUD_VOLUME_TO_FLOAT_MAP:
            value = __CLOUD_VOLUME_TO_FLOAT_MAP[name]
        else:
            value = 0.0
        return value

    # Convert the cloud cover columns to floats
    cloud_volume_cols = [col for col in new_df.columns if('雲量' in col)]
    for col in cloud_volume_cols:
        new_df[col] = new_df[col].map(lambda col : to_float(col))

    return new_df

##################################################
# Classify weather codes with specified boundary values
##################################################
def classify_weather_boundary(df, boudaries=None, colums=None, inplace=True):
    """ Classify weather codes with specified boundary values

    Args:
        df(DataFrame) : DataFrame to convert
        boudaries(List) : list of boundary values
        colums(List) : names of the columns to convert
        inplace(bool) : whether to modify the original DataFrame in place
    Returns:
        DataFrame : converted DataFrame
    """
    # Whether to overwrite the original DataFrame
    if inplace:
        new_df = df
    else:
        new_df = df.copy()

    # Use the default boundary values when none are given
    if boudaries is None:
        boudaries = [3, 10]

    # Function that classifies a weather code by the boundary values
    def classify(weather):
        class_value = -1
        for i, boundary in enumerate(boudaries):
            # Leave the loop as soon as the value falls below a boundary
            if boundary > weather:
                class_value = i
                break
        # The value is greater than or equal to every boundary
        if class_value < 0:
            class_value = len(boudaries)
        return class_value

    # If no columns are given, pick the columns whose names contain '天気' (weather)
    if colums is None:
        weather_cols = [col for col in new_df.columns if('天気' in col)]
    else:
        weather_cols = colums

    # Classify the weather with the specified boundaries
    for col in weather_cols:
        new_df[col] = new_df[col].map(lambda col : classify(col))

    return new_df

##################################################
# Replace weather codes using a specified map
##################################################
def replace_weather(df, rmap=None, columns=None, mode=WeatherConvertMode.Coarse, inplace=True):
    """ Replace weather codes using a specified map

    Args:
        df(DataFrame) : DataFrame to convert
        rmap(Dict) : map used for the replacement
        columns(List) : names of the columns to convert
        inplace(bool) : whether to modify the original DataFrame in place
    Returns:
        DataFrame : converted DataFrame
    """
    # Whether to overwrite the original DataFrame
    if inplace:
        new_df = df
    else:
        new_df = df.copy()

    # Use the default map for the selected mode when none is given
    if rmap is None:
        if mode == WeatherConvertMode.Fine:
            rmap = __WEATHER_REPLACE_MAP_FINE
        elif mode == WeatherConvertMode.RainOrNot:
            rmap = __WEATHER_REPLACE_MAP_RAIN_OR_NOT
        else:
            rmap = __WEATHER_REPLACE_MAP_COARSE

    # If no columns are given, pick the columns whose names contain '天気' (weather)
    if columns is None:
        weather_cols = [col for col in new_df.columns if('天気' in col)]
    else:
        weather_cols = columns

    # Replace the weather codes with the specified map
    for col in weather_cols:
        new_df = new_df.replace({col: rmap})

    return new_df

##################################################
# Convert wind speed/direction to X,Y vector components
# (for upper-air weather data)
##################################################
def convert_wind_to_vector_highrise(df, inplace=True):
    """ Convert wind speed/direction to X,Y vector components
    (for upper-air weather data)

    Args:
        df(DataFrame) : DataFrame to convert
        inplace(bool) : whether to modify the original DataFrame in place
    Returns:
        DataFrame : converted DataFrame
    """
    if inplace:
        new_df = df
    else:
        new_df = df.copy()

    # Function that converts a wind direction from degrees to radians
    def to_radian(wind_deg):
        radian = (-wind_deg + 90)/180 * math.pi
        return radian

    wind_radian_cols = []

    # Convert wind directions from degrees to radians
    wind_dir_cols = [col for col in new_df.columns if('風向' in col)]
    for col in wind_dir_cols:
        new_col = col + '(rad)'
        new_df[new_col] = new_df[col].map(lambda col : to_radian(col))
        wind_radian_cols.append(new_col)

    # Replace invalid wind speed entries ('−' and '静穏' = calm) with 0
    wind_speed_cols = [col for col in new_df.columns if('風速' in col)]
    for col in wind_speed_cols:
        if df[col].dtype == object:
            new_df.replace({col: ['−', '静穏']}, 0.0, inplace=True)

    # Convert direction/speed into X and Y wind speed components
    for radian_col in wind_radian_cols:
        result = re.search(r"(.+)_風向.*\(rad\)", radian_col)
        prifix = result.group(1)
        speed_col = prifix + '_' + '風速(m/s)'
        new_df = new_df.astype({speed_col: float})

        wind_x_col = prifix + '_' + '風速(m/s)_X'
        wind_y_col = prifix + '_' + '風速(m/s)_Y'
        new_df[wind_x_col] = new_df[speed_col] * np.cos(new_df[radian_col])
        new_df[wind_y_col] = new_df[speed_col] * np.sin(new_df[radian_col])

        # Round the newly added columns to 3 decimal places
        new_df = new_df.round({wind_x_col: 3, wind_y_col: 3})

    # Drop the original direction/speed columns
    new_df = new_df.drop(columns=wind_dir_cols)
    new_df = new_df.drop(columns=wind_speed_cols)
    new_df = new_df.drop(columns=wind_radian_cols)

    return new_df