hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
b2656b416920f4d1aba9880f13ed0435a705862e | 4,104 | py | Python | task_geo/data_sources/covid/fr_covidata/fr_covidata.py | cryptox31/task-geo | 3c2ffa7250ba4145f83973b8a9a9cfa810c52f34 | [
"MIT"
] | 12 | 2020-03-29T12:02:24.000Z | 2021-05-08T22:05:11.000Z | task_geo/data_sources/covid/fr_covidata/fr_covidata.py | cryptox31/task-geo | 3c2ffa7250ba4145f83973b8a9a9cfa810c52f34 | [
"MIT"
] | 51 | 2020-03-29T11:31:54.000Z | 2020-06-13T16:55:30.000Z | task_geo/data_sources/covid/fr_covidata/fr_covidata.py | cryptox31/task-geo | 3c2ffa7250ba4145f83973b8a9a9cfa810c52f34 | [
"MIT"
] | 27 | 2020-03-29T09:36:26.000Z | 2021-10-31T19:44:19.000Z | """
fr_covidata.py
Functions:
- fr_covidata_connector: Extracts data from CSV URL
- fr_covidata_formatter: Cleans CSV data
- fr_covidata: Combines the two previous functions
Data Credits:
OpenCOVID19-fr
https://www.data.gouv.fr/en/datasets/chiffres-cles-concernant-lepidemie-de-covid19-en-france/
https://github.com/opencovid19-fr/data
"""
import io
import numpy as np
import pandas as pd
import requests
url = (
'https://raw.githubusercontent.com/opencovid19-fr/'
'data/master/dist/chiffres-cles.csv'
)
def fr_covidata():
"""Data Source for the French COVID-19 Data.
Arguments:
None
Returns:
pandas.DataFrame
"""
df = fr_covidata_connector()
return fr_covidata_formatter(df)
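# Illustrative usage (a sketch; assumes network access to the OpenCOVID19-fr CSV above):
#   df = fr_covidata()
#   print(df[['subregion_name', 'date', 'confirmed', 'deaths']].head())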
def fr_covidata_connector():
"""Extract data from OpenCOVID19-fr's Github repository.
Description:
- Downloads the URL's data in a Unicode CSV Format
- Unicode CSV Format: ACS 5Y UTF-8
Returns:
dataset (DataFrame with CSV Data)
"""
urlData = requests.get(url).content
dataset = pd.read_csv(io.StringIO(urlData.decode('utf-8')))
return dataset
def fr_covidata_formatter(dataset):
"""Formatter for FR COVID-19 Data.
Arguments:
dataset(pandas.DataFrame): Data as returned by fr_covidata_connector.
Description:
    - Drop rows with irrelevant regions' info, keeping only info related to
      subregions in Metropolitan France, and also drop repetitive data
    - Check the dataset for instances where there is more than one source
      of data for the same subregion and date, then combine the sources'
      information, taking the highest value when there are different values
      for the same column, while aggregating the sources info
- Rename/Translate the column titles, and add a country column (France)
Returns:
frcovidata(pandas.DataFrame)
"""
no_gr = ['region', 'monde', 'pays', 'collectivite-outremer']
no_mc = ['DEP-971', 'DEP-972', 'DEP-973', 'DEP-974', 'DEP-976']
dataset = dataset[
(~dataset.granularite.isin(no_gr)) & (~dataset.maille_code.isin(no_mc))
]
dataset = dataset.drop(['depistes', 'granularite'], axis=1)
dataset = dataset.drop_duplicates(
subset=['date', 'maille_code', 'cas_confirmes', 'deces',
'reanimation',
'hospitalises', 'gueris'], keep=False)
dataset['date'] = pd.to_datetime(dataset['date'].astype(str)).dt.date
# Reset indices:
dataset = dataset.reset_index(drop=True)
# Turn source columns' values type to string:
str_columns = ['source_nom', 'source_url',
'source_archive', 'source_type']
dataset[str_columns] = dataset[str_columns].astype(str)
aggre = {
'cas_confirmes': np.max,
'cas_ehpad': np.max,
'cas_confirmes_ehpad': np.max,
'cas_possibles_ehpad': np.max,
'deces': np.max,
'deces_ehpad': np.max,
'reanimation': np.max,
'hospitalises': np.max,
'gueris': np.max,
'source_nom': ','.join,
'source_url': ','.join,
'source_archive': ','.join,
'source_type': ','.join
}
dataset = dataset.groupby(['date',
'maille_code',
'maille_nom']).aggregate(aggre).reset_index()
# Rename/Translate the column titles:
dataset = dataset.rename(
columns={"maille_code": "subregion_code",
"maille_nom": "subregion_name", "cas_confirmes": "confirmed",
"deces": "deaths", "reanimation": "recovering",
"hospitalises": "hospitalized", "gueris": "recovered",
"source_nom": "source_name"})
dataset['country'] = 'France'
    frcovidata = dataset[[
        'subregion_code', 'subregion_name', 'country', 'date', 'confirmed',
        'hospitalized', 'recovering', 'recovered',
        'deaths', 'source_name', 'source_url', 'source_archive',
        'source_type']]
return frcovidata
| 32.832 | 97 | 0.628899 | 477 | 4,104 | 5.280922 | 0.383648 | 0.039698 | 0.030171 | 0.015879 | 0.049226 | 0.025407 | 0 | 0 | 0 | 0 | 0 | 0.010704 | 0.248782 | 4,104 | 124 | 98 | 33.096774 | 0.806357 | 0.356238 | 0 | 0 | 0 | 0 | 0.310359 | 0.021912 | 0 | 0 | 0 | 0 | 0 | 1 | 0.048387 | false | 0 | 0.064516 | 0 | 0.16129 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b2666632240c4d31c7d2c837a894de6d260a7648 | 446 | py | Python | profiling/xi_mm_accuracy/accuracy_script.py | marcpaterno/cluster_toolkit | 3025b3b1733ed05bfe1ac3844a368326b26a78e4 | [
"MIT"
] | 18 | 2017-12-05T18:20:12.000Z | 2021-06-02T06:26:30.000Z | profiling/xi_mm_accuracy/accuracy_script.py | matthewkirby/cluster_toolkit | ee5352d799aa1048cc1d5a1b4e01890be429f94b | [
"MIT"
] | 18 | 2017-11-23T04:23:58.000Z | 2019-09-17T17:48:19.000Z | profiling/xi_mm_accuracy/accuracy_script.py | matthewkirby/cluster_toolkit | ee5352d799aa1048cc1d5a1b4e01890be429f94b | [
"MIT"
] | 9 | 2018-07-27T20:00:35.000Z | 2021-06-13T19:38:01.000Z | import pytest
from cluster_toolkit import xi
import numpy as np
import matplotlib.pyplot as plt
R = np.logspace(-1, 3, 100)
k = np.loadtxt("k.txt")
p = np.loadtxt("p.txt")
for i in range(-5,5,1):
ximm = xi.xi_mm_at_R(R, k, p, step=0.005+i*0.001)
plt.loglog(R, ximm, label='%d'%i)
plt.legend()
plt.show()
for i in range(195, 205, 1):
ximm = xi.xi_mm_at_R(R, k, p, N=i)
plt.loglog(R, ximm, label='%d'%i)
plt.legend()
plt.show()
| 20.272727 | 53 | 0.636771 | 94 | 446 | 2.946809 | 0.43617 | 0.043321 | 0.043321 | 0.079422 | 0.389892 | 0.389892 | 0.389892 | 0.389892 | 0.389892 | 0.389892 | 0 | 0.062331 | 0.172646 | 446 | 21 | 54 | 21.238095 | 0.688347 | 0 | 0 | 0.352941 | 0 | 0 | 0.03139 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.235294 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b267c8136a475ea9645b055001453ef1d5abc120 | 1,247 | py | Python | test_files/read_write_files.py | fchirono/pywopwop | 2dc09a48f91e07d413950fa8d7639976dd2bed70 | [
"BSD-3-Clause"
] | 3 | 2021-11-17T23:15:59.000Z | 2021-12-17T20:04:15.000Z | test_files/read_write_files.py | fchirono/pywopwop | 2dc09a48f91e07d413950fa8d7639976dd2bed70 | [
"BSD-3-Clause"
] | null | null | null | test_files/read_write_files.py | fchirono/pywopwop | 2dc09a48f91e07d413950fa8d7639976dd2bed70 | [
"BSD-3-Clause"
] | null | null | null | """
Test script for pywopwop - https://github.com/fchirono/pywopwop
This script reads a pair of geometry and loading PSU-WOPWOP files, and
rewrites the same data using different file names.
The files can then be compared for bitwise equality in bash using:
cmp /path/to/filename1 /path/to/filename2
Author:
Fabio Casagrande Hirono
Oct 2021
"""
import numpy as np
rng = np.random.default_rng()
import pywopwop as PWW
# %% Define directory containing geometry and loading files, plus filenames
wopwop_dir = '../../../PSU-WOPWOP_v3.4.4/case1/'
geometry_filename = 'gyrodyne.dat'
loading_filename = 'gyrodyneLoading.dat'
# # aperiodic loading not supported yet - loading file will not be identical
# wopwop_dir = '../../../PSU-WOPWOP_v3.4.4/case5/'
# geometry_filename = 'constGeo_short.dat'
# loading_filename = 'AperLoadingShort.dat'
# %%
# initialize new instance, read files
myWopwopData = PWW.PWWPatch()
myWopwopData.read_geometry_file(wopwop_dir + geometry_filename)
myWopwopData.read_loading_file(wopwop_dir + loading_filename)
myWopwopData.print_info()
# rewrite same data in different filenames
myWopwopData.write_geometry_file('geometry_file.dat')
myWopwopData.write_loading_file('loading_file.dat') | 28.340909 | 76 | 0.76263 | 168 | 1,247 | 5.511905 | 0.517857 | 0.038877 | 0.038877 | 0.038877 | 0.047516 | 0.047516 | 0.047516 | 0 | 0 | 0 | 0 | 0.013048 | 0.139535 | 1,247 | 44 | 77 | 28.340909 | 0.849953 | 0.584603 | 0 | 0 | 0 | 0 | 0.192843 | 0.065606 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b26ab41bdfc8b1376d5189050664e232fe0baba3 | 364 | py | Python | demo.py | Peiiii/wkstock | 540cec91a5575716d6e79d99371770509dbea4dd | [
"MIT"
] | 1 | 2020-11-23T13:53:49.000Z | 2020-11-23T13:53:49.000Z | demo.py | Peiiii/pystock | 540cec91a5575716d6e79d99371770509dbea4dd | [
"MIT"
] | null | null | null | demo.py | Peiiii/pystock | 540cec91a5575716d6e79d99371770509dbea4dd | [
"MIT"
] | null | null | null | import wkstock
import pandas_datareader as pdr
import random
stocks=wkstock.load_china_stock_info()
# print(stocks)
codes=wkstock.get_stock_codes()
print(codes)
# x=pdr.get_data_yahoo('000816.sz')
# x=pdr.get_data_yahoo('688179.ss')
x=wkstock.get_yahoo_code(random.choice(codes))
print(x)
x=wkstock.get_stock_info_form_yahoo(x,'2010-01-01','2020-12-31')
print(x) | 24.266667 | 64 | 0.791209 | 64 | 364 | 4.25 | 0.46875 | 0.110294 | 0.110294 | 0.080882 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081871 | 0.06044 | 364 | 15 | 65 | 24.266667 | 0.71345 | 0.222527 | 0 | 0.2 | 0 | 0 | 0.071429 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.3 | 0 | 0.3 | 0.3 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b26d4b1b791c2d3a1dfb1146170732b9d587ce4d | 1,277 | py | Python | src/db-up/azext_db_up/vendored_sdks/azure_mgmt_sql/sql/models/sync_group_schema_py3.py | Mannan2812/azure-cli-extensions | e2b34efe23795f6db9c59100534a40f0813c3d95 | [
"MIT"
] | 207 | 2017-11-29T06:59:41.000Z | 2022-03-31T10:00:53.000Z | src/db-up/azext_db_up/vendored_sdks/azure_mgmt_sql/sql/models/sync_group_schema_py3.py | Mannan2812/azure-cli-extensions | e2b34efe23795f6db9c59100534a40f0813c3d95 | [
"MIT"
] | 4,061 | 2017-10-27T23:19:56.000Z | 2022-03-31T23:18:30.000Z | src/db-up/azext_db_up/vendored_sdks/azure_mgmt_sql/sql/models/sync_group_schema_py3.py | Mannan2812/azure-cli-extensions | e2b34efe23795f6db9c59100534a40f0813c3d95 | [
"MIT"
] | 802 | 2017-10-11T17:36:26.000Z | 2022-03-31T22:24:32.000Z | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.serialization import Model
class SyncGroupSchema(Model):
"""Properties of sync group schema.
:param tables: List of tables in sync group schema.
:type tables: list[~azure.mgmt.sql.models.SyncGroupSchemaTable]
:param master_sync_member_name: Name of master sync member where the
schema is from.
:type master_sync_member_name: str
"""
_attribute_map = {
'tables': {'key': 'tables', 'type': '[SyncGroupSchemaTable]'},
'master_sync_member_name': {'key': 'masterSyncMemberName', 'type': 'str'},
}
def __init__(self, *, tables=None, master_sync_member_name: str=None, **kwargs) -> None:
super(SyncGroupSchema, self).__init__(**kwargs)
self.tables = tables
self.master_sync_member_name = master_sync_member_name
| 37.558824 | 92 | 0.625685 | 143 | 1,277 | 5.391608 | 0.531469 | 0.090791 | 0.145266 | 0.155642 | 0.059663 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00094 | 0.166797 | 1,277 | 33 | 93 | 38.69697 | 0.723684 | 0.566954 | 0 | 0 | 0 | 0 | 0.183236 | 0.087719 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.1 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b26ddefdcfd3245e86c49f5e61dd5ff3a9294f51 | 1,059 | py | Python | odm/build-docker.py | cn-cerc/summer-install | b3bf942c354593d938015f7571667a4a1511ff07 | [
"Apache-2.0"
] | 4 | 2017-10-29T18:29:15.000Z | 2019-11-06T06:06:00.000Z | odm/build-docker.py | cn-cerc/summer-install | b3bf942c354593d938015f7571667a4a1511ff07 | [
"Apache-2.0"
] | 14 | 2016-11-09T02:43:02.000Z | 2020-10-23T02:29:39.000Z | odm/build-docker.py | cn-cerc/summer-install | b3bf942c354593d938015f7571667a4a1511ff07 | [
"Apache-2.0"
] | 4 | 2019-02-21T07:03:34.000Z | 2020-11-11T08:39:08.000Z | #!/usr/bin/python
import os
import sys
import time
def sh(cmd):
print(cmd)
os.system(cmd)
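# For example, sh('docker ps') would print the command and then execute it via os.system.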
hosts = ['bourse', 'wallet', 'tw', 'wcm', 'aw', 'card']
i = 0
for app in hosts:
    # Create the physical directories
sh("""mkdir tomcats""" )
sh("""mkdir tomcats/%s""" % (app))
sh("""mkdir tomcats/%s/webapps""" % (app))
sh("""mkdir tomcats/%s/logs""" % (app))
sh("""mkdir tomcats/%s/root""" % (app))
    # Copy the tomcat configuration
sh("""cp -R ~/summer-install/docker/factory/tomcat-conf/ ~/tomcats/%s/conf/""" % (app))
    # Create the application docker container
i = i + 1
port = 8080 + i
sh("""docker stop %s-app && docker rm %s-app""" % (app, app))
# sh("""docker run -d --name %s-app -p %s:8080 --restart=always -h %s --link=%s-redis:redis \
sh("""docker run -d --name %s-app -p %s:8080 --restart=always -h %s \
-v ~/tomcats/%s/webapps:/opt/tomcat/webapps \
-v ~/tomcats/%s/conf:/opt/tomcat/conf \
-v ~/tomcats/%s/logs:/opt/tomcat/logs \
-v ~/tomcats/%s/root:/root \
summer/tomcat""" % (app, port, app,
app, app, app, app))
| 27.868421 | 96 | 0.539188 | 156 | 1,059 | 3.660256 | 0.365385 | 0.126095 | 0.122592 | 0.105079 | 0.238179 | 0.143608 | 0.143608 | 0.143608 | 0.143608 | 0.143608 | 0 | 0.017094 | 0.226629 | 1,059 | 37 | 97 | 28.621622 | 0.680098 | 0.133144 | 0 | 0 | 0 | 0.08 | 0.544359 | 0.191676 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.12 | 0 | 0.16 | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b26deebe691ea3d5c517bd5bc451c6f6a5a5aa1f | 18,512 | py | Python | utils/scan_tools.py | Syler1984/seismo-performer | c1b9190532d3d952675823a925e1e14bbfb7b5a4 | [
"MIT"
] | 8 | 2021-09-26T00:57:21.000Z | 2022-01-21T14:24:26.000Z | utils/scan_tools.py | Syler1984/seismo-performer | c1b9190532d3d952675823a925e1e14bbfb7b5a4 | [
"MIT"
] | null | null | null | utils/scan_tools.py | Syler1984/seismo-performer | c1b9190532d3d952675823a925e1e14bbfb7b5a4 | [
"MIT"
] | 1 | 2022-03-08T07:29:38.000Z | 2022-03-08T07:29:38.000Z | import obspy.core as oc
from scipy.signal import find_peaks
import numpy as np
import matplotlib.pyplot as plt
import os
from time import time
from obspy.core.utcdatetime import UTCDateTime
def pre_process_stream(stream, no_filter = False, no_detrend = False):
"""
    Does preprocessing on the stream (resamples it to 100 Hz), does linear detrend and
    highpass filtering with frequency of 2 Hz.
    Arguments:
    stream -- obspy.core.stream object to pre process
    no_filter -- skip the highpass filtering if True
    no_detrend -- skip the linear detrend if True
"""
if not no_detrend:
stream.detrend(type="linear")
if not no_filter:
stream.filter(type="highpass", freq = 2)
frequency = 100.
required_dt = 1. / frequency
dt = stream[0].stats.delta
if dt != required_dt:
stream.interpolate(frequency)
def trim_streams(streams, start = None, end = None):
"""
Trims streams to the same overall time span.
:return: list of trimmed streams
"""
max_start_time = start
min_end_time = end
for stream in streams:
current_start = min([x.stats.starttime for x in stream])
current_end = max([x.stats.endtime for x in stream])
if not max_start_time:
max_start_time = current_start
if not min_end_time:
min_end_time = current_end
if current_start > max_start_time:
max_start_time = current_start
if current_end < min_end_time:
min_end_time = current_end
cut_streams = []
for st in streams:
cut_streams.append(st.slice(max_start_time, min_end_time))
return cut_streams
def get_traces(streams, i):
"""
Returns traces with specified index
:return: list of traces
"""
traces = [st[i] for st in streams] # get traces
# Trim traces to the same length
start_time = max([trace.stats.starttime for trace in traces])
end_time = min([trace.stats.endtime for trace in traces])
for j in range(len(traces)):
traces[j] = traces[j].slice(start_time, end_time)
return traces
def progress_bar(progress, characters_count = 20,
erase_line = True,
empty_bar = '.', filled_bar = '=', filled_edge = '>',
prefix = '', postfix = '',
add_space_around = True):
"""
Prints progress bar.
:param progress: percentage (0..1) of progress, or int number of characters filled in progress bar.
:param characters_count: length of the bar in characters.
:param erase_line: preform return carriage.
:param empty_bar: empty bar character.
:param filled_bar: progress character.
:param filled_edge: progress character on the borderline between progressed and empty,
set to None to disable.
:param prefix: progress bar prefix.
:param postfix: progress bar postfix.
:param add_space_around: add space after prefix and before postfix.
:return:
"""
space_characters = ' \t\n'
if add_space_around:
if len(prefix) > 0 and prefix[-1] not in space_characters:
prefix += ' '
if len(postfix) > 0 and postfix[0] not in space_characters:
postfix = ' ' + postfix
if erase_line:
print('\r', end = '')
progress_num = int(characters_count * progress)
if filled_edge is None:
print(prefix + filled_bar * progress_num + empty_bar * (characters_count - progress_num) + postfix, end = '')
else:
bar_str = prefix + filled_bar * progress_num
bar_str += filled_edge * min(characters_count - progress_num, 1)
bar_str += empty_bar * (characters_count - progress_num - 1)
bar_str += postfix
print(bar_str, end = '')
def cut_traces(*_traces):
"""
Cut traces to same timeframe (same start time and end time). Returns list of new traces.
Positional arguments:
Any number of traces (depends on the amount of channels). Unpack * if passing a list of traces.
e.g. scan_traces(*trs)
"""
_start_time = max([x.stats.starttime for x in _traces])
    _end_time = min([x.stats.endtime for x in _traces])
return_traces = [x.slice(_start_time, _end_time) for x in _traces]
return return_traces
def sliding_window(data, n_features, n_shift):
"""
Return NumPy array of sliding windows. Which is basically a view into a copy of original data array.
Arguments:
data -- numpy array to make a sliding windows on
n_features -- length in samples of the individual window
n_shift -- shift between windows starting points
"""
# Get sliding windows shape
win_count = np.floor(data.shape[0]/n_shift - n_features/n_shift + 1).astype(int)
shape = (win_count, n_features)
try:
windows = np.zeros(shape)
except ValueError:
print(f'\ndata.shape: {data.shape}')
print('shape: ', shape)
raise
for _i in range(win_count):
_start_pos = _i * n_shift
_end_pos = _start_pos + n_features
windows[_i][:] = data[_start_pos : _end_pos]
return windows.copy()
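# Worked example: for data of length 1000 with n_features=400 and n_shift=10,
# win_count = floor(1000/10 - 400/10 + 1) = 61, so the returned array has shape (61, 400).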
def sliding_window_strided(data, n_features, n_shift, copy = False):
"""
Return NumPy array of sliding windows. Which is basically a view into a copy of original data array.
Arguments:
data -- numpy array to make a sliding windows on. Shape (n_samples, n_channels)
n_features -- length in samples of the individual window
n_shift -- shift between windows starting points
copy -- copy data or return a view into existing data? Default: False
"""
from numpy.lib.stride_tricks import as_strided
# Get sliding windows shape
stride_shape = (data.shape[0] - n_features + n_shift) // n_shift
stride_shape = [stride_shape, n_features, data.shape[-1]]
strides = [data.strides[0]*n_shift, *data.strides]
windows = as_strided(data, stride_shape, strides)
    if copy:
        return windows.copy()
    else:
        return windows
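# Worked example: for data of shape (1000, 3) with n_features=400 and n_shift=10,
# stride_shape = ((1000 - 400 + 10) // 10, 400, 3) = (61, 400, 3).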
def normalize_windows_global(windows):
"""
    Normalizes sliding windows array. IMPORTANT: windows should have separate memory, strided windows would break.
:param windows:
:return:
"""
# Shape (windows_number, n_features, channels_number)
n_win = windows.shape[0]
ch_num = windows.shape[2]
for _i in range(n_win):
win_max = np.max(np.abs(windows[_i, :, :]))
windows[_i, :, :] = windows[_i, :, :] / win_max
def normalize_global(data):
"""
    Normalizes sliding windows array. IMPORTANT: windows should have separate memory, strided windows would break.
:param data: NumPy array to normalize
:return:
"""
# Shape (windows_number, n_features, channels_number)
m = np.max(np.abs(data[:]))
data /= m
def plot_positives(scores, windows, threshold):
idx = 0
save_name = 'positive_' + str(idx) + '.jpg'
while os.path.exists(save_name):
idx += 1
save_name = 'positive_' + str(idx) + '.jpg'
for i in range(len(scores)):
if scores[i][1] > threshold:
fig, (ax1, ax2, ax3) = plt.subplots(3, sharex = True)
ax1.set_ylabel('N', rotation = 0.)
ax1.plot(windows[i, :, 0], 'r')
ax2.set_ylabel('E', rotation = 0.)
ax2.plot(windows[i, :, 1], 'g')
ax3.set_ylabel('Z', rotation = 0.)
ax3.plot(windows[i, :, 2], 'y')
plt.savefig(save_name)
plt.clf()
"""
np_s_array = np.zeros((400, 3))
np_s_array[:, 0] = windows[i, :, 0]
np_s_array[:, 1] = windows[i, :, 1]
np_s_array[:, 2] = windows[i, :, 2]
np.save(params['plot_path'] + code + '_' + str(i) + '_p' + '.npy', np_s_array)
"""
def plot_original_positives(scores, original_windows, threshold, original_scores = None):
idx = 0
save_name = 'original_positive_' + str(idx) + '.jpg'
while os.path.exists(save_name):
idx += 1
save_name = 'original_positive_' + str(idx) + '.jpg'
for i in range(len(scores)):
if scores[i][1] > threshold:
fig, (ax1, ax2, ax3) = plt.subplots(3, sharex = True)
ax1.set_ylabel('N', rotation = 0.)
ax1.plot(original_windows[i, :, 0], 'r')
ax2.set_ylabel('E', rotation = 0.)
ax2.plot(original_windows[i, :, 1], 'g')
ax3.set_ylabel('Z', rotation = 0.)
ax3.plot(original_windows[i, :, 2], 'y')
plt.savefig(save_name)
plt.clf()
def scan_traces(*_traces, model = None, args = None, n_features = 400, shift = 10, original_data = None):
"""
Get predictions on the group of traces.
Positional arguments:
Any number of traces (depends on the amount of channels). Unpack * if passing a list of traces.
e.g. scan_traces(*trs)
Keyword arguments
    model -- NN model
    args -- argparse.Namespace with runtime options (batch_size, shift, threshold, plotting flags)
    n_features -- number of input features in a single channel
    shift -- amount of samples between windows
    original_data -- optional unfiltered traces, used when plotting original positives
"""
# Check args
import argparse
if not args and type(args) != argparse.Namespace:
raise AttributeError('args should have an argparse.Namespace type')
batch_size = args.batch_size
# Check input types
for x in _traces:
if type(x) != oc.trace.Trace:
raise TypeError('traces should be a list or containing obspy.core.trace.Trace objects')
# plot-positives-original
# Cut all traces to a same timeframe
_traces = cut_traces(*_traces)
# normalize_traces(*traces, global_normalize = global_normalize)
# Get sliding window arrays
l_windows = []
try:
for x in _traces:
l_windows.append(sliding_window(x.data, n_features = n_features, n_shift = shift))
except ValueError:
return None, 0
if args.plot_positives_original:
original_l_windows = []
for x in original_data:
original_l_windows.append(sliding_window(x.data, n_features = n_features, n_shift = args.shift))
w_length = min([x.shape[0] for x in l_windows])
# Prepare data
windows = np.zeros((w_length, n_features, len(l_windows)))
for _i in range(len(l_windows)):
windows[:, :, _i] = l_windows[_i][:w_length]
if args.plot_positives_original:
original_windows = np.zeros((w_length, n_features, len(original_l_windows)))
for _i in range(len(original_l_windows)):
original_windows[:, :, _i] = original_l_windows[_i][:w_length]
# Global max normalization:
normalize_windows_global(windows)
if args.plot_positives_original:
normalize_windows_global(original_windows)
else:
min_size = min([tr.data.shape[0] for tr in _traces])
data = np.zeros((min_size, len(_traces)))
for i, tr in enumerate(_traces):
data[:, i] = tr.data[:min_size]
normalize_global(data)
windows = sliding_window_strided(data, 400, args.shift, False)
if args.plot_positives_original:
original_windows = windows.copy()
# Predict
start_time = time()
_scores = model.predict(windows, verbose = False, batch_size = batch_size)
performance_time = time() - start_time
    # TODO: create another flag for this, e.g. --calculate-original-probs or something
if args.plot_positives_original:
original_scores = model.predict(original_windows, verbose = False, batch_size = batch_size)
# Plot
# if args and args.plot_positives:
# plot_threshold_scores(scores, windows, params['threshold'], file_name, params['plot_labels'])
# Save scores
# if args and args.save_positives:
# save_threshold_scores(scores, windows, params['threshold'],
# params['positives_h5_path'], params['save_h5_labels'])
# Positives plotting
if args.plot_positives:
plot_positives(_scores, windows, args.threshold)
if args.plot_positives_original:
        plot_original_positives(_scores, original_windows, args.threshold, original_scores)
return _scores, performance_time
def restore_scores(_scores, shape, shift):
"""
Restores scores to original size using linear interpolation.
Arguments:
scores -- original 'compressed' scores
shape -- shape of the restored scores
shift -- sliding windows shift
"""
new_scores = np.zeros(shape)
for i in range(1, _scores.shape[0]):
for j in range(_scores.shape[1]):
start_i = (i - 1) * shift
end_i = i * shift
if end_i >= shape[0]:
end_i = shape[0] - 1
new_scores[start_i : end_i, j] = np.linspace(_scores[i - 1, j], _scores[i, j], shift + 1)[:end_i - start_i]
return new_scores
def get_positives(_scores, peak_idx, other_idxs, peak_dist = 10000, avg_window_half_size = 100, threshold = 0.8):
"""
Returns positive prediction list in format: [[sample, pseudo-probability], ...]
"""
_positives = []
x = _scores[:, peak_idx]
peaks = find_peaks(x, distance = peak_dist, height=[threshold, 1.])
for _i in range(len(peaks[0])):
start_id = peaks[0][_i] - avg_window_half_size
if start_id < 0:
start_id = 0
end_id = start_id + avg_window_half_size*2
if end_id > len(x):
end_id = len(x) - 1
start_id = end_id - avg_window_half_size*2
# Get mean values
peak_mean = x[start_id : end_id].mean()
means = []
for idx in other_idxs:
means.append(_scores[:, idx][start_id : end_id].mean())
is_max = True
for m in means:
if m > peak_mean:
is_max = False
if is_max:
_positives.append([peaks[0][_i], peaks[1]['peak_heights'][_i]])
return _positives
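# Illustrative return value (hypothetical numbers): [[15320, 0.93], [48200, 0.87]] --
# the sample index of each detected peak paired with its pseudo-probability (peak height).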
def truncate(f, n):
"""
Floors float to n-digits after comma.
"""
import math
return math.floor(f * 10 ** n) / 10 ** n
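# Example: truncate(0.98765, 2) == math.floor(98.765) / 100 == 0.98 (floored, not rounded).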
def print_results(_detected_peaks, filename, precision = 2, upper_case = True, station = None):
"""
Prints out peaks in the file.
"""
with open(filename, 'a') as f:
for record in _detected_peaks:
line = ''
# Print station if provided
if station:
line += f'{station} '
# Print wave type
tp = record['type'].upper() if upper_case else record['type']
line += f'{tp} '
# Print pseudo-probability
line += f'{truncate(record["pseudo-probability"], precision):1.{precision}f} '
# Print time
dt_str = record["datetime"].strftime("%d.%m.%Y %H:%M:%S.%f").rstrip('0')
line += f'{dt_str}\n'
# Write
f.write(line)
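# Illustrative output line (station, wave type, pseudo-probability, datetime), assuming a
# hypothetical station code 'NYSH':
#   NYSH P 0.95 01.04.2021 13:50:47.48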
def parse_archive_csv(path):
"""
Parses archives names file. Returns list of filename lists: [[archive1, archive2, archive3], ...]
:param path:
:return:
"""
with open(path) as f:
lines = f.readlines()
_archives = []
for line in lines:
_archives.append([x for x in line.split()])
return _archives
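# Illustrative input file contents (whitespace-separated archive paths, one station per line;
# the paths here are hypothetical):
#   archives/NYSH.N.2021.091 archives/NYSH.E.2021.091 archives/NYSH.Z.2021.091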
def plot_wave_scores(file_token, wave, scores,
start_time, predictions, right_shift = 0,
channel_names = ['N', 'E', 'Z'],
score_names = ['P', 'S', 'N']):
"""
Plots waveform and prediction scores as an image
"""
channels_num = wave.shape[1]
classes_num = scores.shape[1]
scores_length = scores.shape[0]
# TODO: Make figure size dynamically chosen, based on the input length
fig = plt.figure(figsize = (9.8, 7.), dpi = 160)
axes = fig.subplots(channels_num + classes_num, 1, sharex = True)
# Plot wave
for i in range(channels_num):
axes[i].plot(wave[:, i], color = '#000000', linewidth = 1.)
axes[i].locator_params(axis = 'both', nbins = 4)
axes[i].set_ylabel(channel_names[i])
# Process events and ticks
freq = 100. # TODO: implement through Trace.stats
labels = {'p': 0, 's': 1} # TODO: configure labels through options
# TODO: make sure that labels are not too close.
ticks = [100, scores_length - 100]
events = {}
for label, index in labels.items():
label_events = []
for pos, _ in predictions[label]:
pos += right_shift
label_events.append(pos)
ticks.append(pos)
events[index] = label_events
# Plot scores
for i in range(classes_num):
axes[channels_num + i].plot(scores[:, i], color = '#0022cc', linewidth = 1.)
if i in events:
for pos in events[i]:
axes[channels_num + i].plot([pos], scores[:, i][pos], 'r*', markersize = 7)
axes[channels_num + i].set_ylabel(score_names[i])
# Set x-ticks
for ax in axes:
ax.set_xticks(ticks)
# Configure ticks labels
xlabels = []
for pos in axes[-1].get_xticks():
c_time = start_time + pos/freq
micro = c_time.strftime('%f')[:2]
xlabels.append(c_time.strftime('%H:%M:%S') + f'.{micro}')
axes[-1].set_xticklabels(xlabels)
# Add date text
date = start_time.strftime('%Y-%m-%d')
fig.text(0.095, 1., date, va = 'center')
# Finalize and save
fig.tight_layout()
fig.savefig(file_token + '.jpg')
fig.clear()
def print_scores(data, scores, predictions, file_token, window_length = 400):
"""
Prints scores and waveforms.
"""
right_shift = window_length // 2
shapes = [d.data.shape[0] for d in data] + [scores.shape[0]]
shapes = set(shapes)
if len(shapes) != 1:
raise AttributeError('All waveforms and scores must have similar length!')
length = shapes.pop()
waveforms = np.zeros((length, len(data)))
for i, d in enumerate(data):
waveforms[:, i] = d.data
# Shift scores
shifted_scores = np.zeros((length, len(data)))
shifted_scores[right_shift:] = scores[:-right_shift]
plot_wave_scores(file_token, waveforms, shifted_scores, data[0].stats.starttime, predictions,
right_shift = right_shift)
# TODO: Save predictions samples in .csv ?
np.save(f'{file_token}_wave.npy', waveforms)
np.save(f'{file_token}_scores.npy', shifted_scores)
| 30.397373 | 119 | 0.615277 | 2,459 | 18,512 | 4.441236 | 0.163481 | 0.015658 | 0.005494 | 0.010072 | 0.272411 | 0.231023 | 0.19696 | 0.166835 | 0.14669 | 0.139731 | 0 | 0.013084 | 0.273336 | 18,512 | 608 | 120 | 30.447368 | 0.798766 | 0.239682 | 0 | 0.157191 | 0 | 0 | 0.041551 | 0.009861 | 0 | 0 | 0 | 0.004934 | 0 | 1 | 0.063545 | false | 0.003344 | 0.033445 | 0 | 0.137124 | 0.023411 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b26ed820a7f3d87f0cd594319c1f9b9754e2220d | 2,374 | py | Python | source/repositories/wait_for_pipeline.py | stackArmor/compliant-framework-for-federal-and-dod-workloads-in-aws-govcloud-us | 8d2aa7426e352242bbded64e3c6da88199aef025 | [
"Apache-2.0"
] | 36 | 2020-12-22T18:40:50.000Z | 2022-03-23T03:48:13.000Z | source/repositories/wait_for_pipeline.py | stackArmor/compliant-framework-for-federal-and-dod-workloads-in-aws-govcloud-us | 8d2aa7426e352242bbded64e3c6da88199aef025 | [
"Apache-2.0"
] | 10 | 2021-03-01T20:29:10.000Z | 2022-02-15T23:28:58.000Z | source/repositories/wait_for_pipeline.py | stackArmor/compliant-framework-for-federal-and-dod-workloads-in-aws-govcloud-us | 8d2aa7426e352242bbded64e3c6da88199aef025 | [
"Apache-2.0"
] | 17 | 2020-12-18T15:23:15.000Z | 2022-03-20T17:47:39.000Z | ######################################################################################################################
# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. #
# #
# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
# with the License. A copy of the License is located at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# or in the 'license' file accompanying this file. This file is distributed on an 'AS IS' BASIS, WITHOUT WARRANTIES #
# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
# and limitations under the License. #
######################################################################################################################
import os
import json
import boto3
import optparse
import time
parser = optparse.OptionParser()
parser.add_option('--name', action='store', dest='name')
options, _remainder = parser.parse_args()
pipeline_client = boto3.client('codepipeline')
retries = 60 # One hour to allow pipeline to build
while retries > 0:
retries = retries-1
response = pipeline_client.get_pipeline_state(
name=options.name
)
# print(json.dumps(response, indent=2, default=str))
pipeline_succeeded = True
for stage_state in response['stageStates']:
if stage_state['latestExecution']['status'] != 'Succeeded':
pipeline_succeeded = False
            break # out of for loop
# Success - exit
if pipeline_succeeded:
print('Pipeline execution complete.')
exit(0) # Success!!
# Try again after a minute
print('Waiting for pipeline to finish...')
time.sleep(60)
exit(1) # Failure
| 46.54902 | 118 | 0.448189 | 205 | 2,374 | 5.131707 | 0.595122 | 0.057034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013094 | 0.388795 | 2,374 | 50 | 119 | 47.48 | 0.711923 | 0.555181 | 0 | 0 | 0 | 0 | 0.163085 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.192308 | 0 | 0.192308 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b26ee057d204124f6e49165aaf55014f24c83cef | 7,694 | py | Python | pandas/tests/groupby/test_min_max.py | luftwurzel/pandas | 8980af7ce9d98713b0f8792e38f0fe43088e8780 | [
"BSD-3-Clause"
] | 1 | 2022-03-29T01:38:03.000Z | 2022-03-29T01:38:03.000Z | pandas/tests/groupby/test_min_max.py | luftwurzel/pandas | 8980af7ce9d98713b0f8792e38f0fe43088e8780 | [
"BSD-3-Clause"
] | 1 | 2022-03-08T02:15:07.000Z | 2022-03-08T02:15:07.000Z | pandas/tests/groupby/test_min_max.py | luftwurzel/pandas | 8980af7ce9d98713b0f8792e38f0fe43088e8780 | [
"BSD-3-Clause"
] | 1 | 2022-03-22T11:50:25.000Z | 2022-03-22T11:50:25.000Z | import numpy as np
import pytest
from pandas._libs.tslibs import iNaT
import pandas as pd
from pandas import (
DataFrame,
Index,
Series,
)
import pandas._testing as tm
from pandas.core.api import Int64Index
def test_max_min_non_numeric():
# #2700
aa = DataFrame({"nn": [11, 11, 22, 22], "ii": [1, 2, 3, 4], "ss": 4 * ["mama"]})
result = aa.groupby("nn").max()
assert "ss" in result
result = aa.groupby("nn").max(numeric_only=False)
assert "ss" in result
result = aa.groupby("nn").min()
assert "ss" in result
result = aa.groupby("nn").min(numeric_only=False)
assert "ss" in result
def test_max_min_object_multiple_columns(using_array_manager):
# GH#41111 case where the aggregation is valid for some columns but not
# others; we split object blocks column-wise, consistent with
# DataFrame._reduce
df = DataFrame(
{
"A": [1, 1, 2, 2, 3],
"B": [1, "foo", 2, "bar", False],
"C": ["a", "b", "c", "d", "e"],
}
)
    df._consolidate_inplace()  # should already be consolidated, but double-check
if not using_array_manager:
assert len(df._mgr.blocks) == 2
gb = df.groupby("A")
with tm.assert_produces_warning(FutureWarning, match="Dropping invalid"):
result = gb.max(numeric_only=False)
# "max" is valid for column "C" but not for "B"
ei = Index([1, 2, 3], name="A")
expected = DataFrame({"C": ["b", "d", "e"]}, index=ei)
tm.assert_frame_equal(result, expected)
with tm.assert_produces_warning(FutureWarning, match="Dropping invalid"):
result = gb.min(numeric_only=False)
# "min" is valid for column "C" but not for "B"
ei = Index([1, 2, 3], name="A")
expected = DataFrame({"C": ["a", "c", "e"]}, index=ei)
tm.assert_frame_equal(result, expected)
def test_min_date_with_nans():
# GH26321
dates = pd.to_datetime(
Series(["2019-05-09", "2019-05-09", "2019-05-09"]), format="%Y-%m-%d"
).dt.date
df = DataFrame({"a": [np.nan, "1", np.nan], "b": [0, 1, 1], "c": dates})
result = df.groupby("b", as_index=False)["c"].min()["c"]
expected = pd.to_datetime(
Series(["2019-05-09", "2019-05-09"], name="c"), format="%Y-%m-%d"
).dt.date
tm.assert_series_equal(result, expected)
result = df.groupby("b")["c"].min()
expected.index.name = "b"
tm.assert_series_equal(result, expected)
def test_max_inat():
# GH#40767 dont interpret iNaT as NaN
ser = Series([1, iNaT])
gb = ser.groupby([1, 1])
result = gb.max(min_count=2)
expected = Series({1: 1}, dtype=np.int64)
tm.assert_series_equal(result, expected, check_exact=True)
result = gb.min(min_count=2)
expected = Series({1: iNaT}, dtype=np.int64)
tm.assert_series_equal(result, expected, check_exact=True)
# not enough entries -> gets masked to NaN
result = gb.min(min_count=3)
expected = Series({1: np.nan})
tm.assert_series_equal(result, expected, check_exact=True)
def test_max_inat_not_all_na():
# GH#40767 dont interpret iNaT as NaN
# make sure we dont round iNaT+1 to iNaT
ser = Series([1, iNaT, 2, iNaT + 1])
gb = ser.groupby([1, 2, 3, 3])
result = gb.min(min_count=2)
# Note: in converting to float64, the iNaT + 1 maps to iNaT, i.e. is lossy
expected = Series({1: np.nan, 2: np.nan, 3: iNaT + 1})
tm.assert_series_equal(result, expected, check_exact=True)
@pytest.mark.parametrize("func", ["min", "max"])
def test_groupby_aggregate_period_column(func):
# GH 31471
groups = [1, 2]
periods = pd.period_range("2020", periods=2, freq="Y")
df = DataFrame({"a": groups, "b": periods})
result = getattr(df.groupby("a")["b"], func)()
idx = Int64Index([1, 2], name="a")
expected = Series(periods, index=idx, name="b")
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize("func", ["min", "max"])
def test_groupby_aggregate_period_frame(func):
# GH 31471
groups = [1, 2]
periods = pd.period_range("2020", periods=2, freq="Y")
df = DataFrame({"a": groups, "b": periods})
result = getattr(df.groupby("a"), func)()
idx = Int64Index([1, 2], name="a")
expected = DataFrame({"b": periods}, index=idx)
tm.assert_frame_equal(result, expected)
def test_aggregate_numeric_object_dtype():
# https://github.com/pandas-dev/pandas/issues/39329
# simplified case: multiple object columns where one is all-NaN
# -> gets split as the all-NaN is inferred as float
df = DataFrame(
{"key": ["A", "A", "B", "B"], "col1": list("abcd"), "col2": [np.nan] * 4},
).astype(object)
result = df.groupby("key").min()
expected = DataFrame(
{"key": ["A", "B"], "col1": ["a", "c"], "col2": [np.nan, np.nan]}
).set_index("key")
tm.assert_frame_equal(result, expected)
# same but with numbers
df = DataFrame(
{"key": ["A", "A", "B", "B"], "col1": list("abcd"), "col2": range(4)},
).astype(object)
result = df.groupby("key").min()
expected = DataFrame(
{"key": ["A", "B"], "col1": ["a", "c"], "col2": [0, 2]}
).set_index("key")
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize("func", ["min", "max"])
def test_aggregate_categorical_lost_index(func: str):
# GH: 28641 groupby drops index, when grouping over categorical column with min/max
ds = Series(["b"], dtype="category").cat.as_ordered()
df = DataFrame({"A": [1997], "B": ds})
result = df.groupby("A").agg({"B": func})
expected = DataFrame({"B": ["b"]}, index=Index([1997], name="A"))
# ordered categorical dtype should be preserved
expected["B"] = expected["B"].astype(ds.dtype)
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize("dtype", ["Int64", "Int32", "Float64", "Float32", "boolean"])
def test_groupby_min_max_nullable(dtype):
if dtype == "Int64":
# GH#41743 avoid precision loss
ts = 1618556707013635762
elif dtype == "boolean":
ts = 0
else:
ts = 4.0
df = DataFrame({"id": [2, 2], "ts": [ts, ts + 1]})
df["ts"] = df["ts"].astype(dtype)
gb = df.groupby("id")
result = gb.min()
expected = df.iloc[:1].set_index("id")
tm.assert_frame_equal(result, expected)
res_max = gb.max()
expected_max = df.iloc[1:].set_index("id")
tm.assert_frame_equal(res_max, expected_max)
result2 = gb.min(min_count=3)
expected2 = DataFrame({"ts": [pd.NA]}, index=expected.index, dtype=dtype)
tm.assert_frame_equal(result2, expected2)
res_max2 = gb.max(min_count=3)
tm.assert_frame_equal(res_max2, expected2)
# Case with NA values
df2 = DataFrame({"id": [2, 2, 2], "ts": [ts, pd.NA, ts + 1]})
df2["ts"] = df2["ts"].astype(dtype)
gb2 = df2.groupby("id")
result3 = gb2.min()
tm.assert_frame_equal(result3, expected)
res_max3 = gb2.max()
tm.assert_frame_equal(res_max3, expected_max)
result4 = gb2.min(min_count=100)
tm.assert_frame_equal(result4, expected2)
res_max4 = gb2.max(min_count=100)
tm.assert_frame_equal(res_max4, expected2)
def test_min_max_nullable_uint64_empty_group():
# don't raise NotImplementedError from libgroupby
cat = pd.Categorical([0] * 10, categories=[0, 1])
df = DataFrame({"A": cat, "B": pd.array(np.arange(10, dtype=np.uint64))})
gb = df.groupby("A")
res = gb.min()
idx = pd.CategoricalIndex([0, 1], dtype=cat.dtype, name="A")
expected = DataFrame({"B": pd.array([0, pd.NA], dtype="UInt64")}, index=idx)
tm.assert_frame_equal(res, expected)
res = gb.max()
expected.iloc[0, 0] = 9
tm.assert_frame_equal(res, expected)
| 31.404082 | 87 | 0.619444 | 1,125 | 7,694 | 4.109333 | 0.2 | 0.043262 | 0.044992 | 0.062297 | 0.497296 | 0.451222 | 0.39974 | 0.364266 | 0.321004 | 0.241834 | 0 | 0.046997 | 0.203535 | 7,694 | 244 | 88 | 31.532787 | 0.707409 | 0.123343 | 0 | 0.322981 | 0 | 0 | 0.058675 | 0 | 0 | 0 | 0 | 0 | 0.186335 | 1 | 0.068323 | false | 0 | 0.043478 | 0 | 0.111801 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b26fa462f0e800854a4efa33e6276134a040eede | 2,035 | py | Python | src/lowrank/pruners/alignment_pruner_loss_based.py | yliu1021/tensorflow-low-rank | c85ef1b766e5e4f9ab3b4fd94a5d20da0601c92a | [
"MIT"
] | 1 | 2021-12-15T05:55:29.000Z | 2021-12-15T05:55:29.000Z | src/lowrank/pruners/alignment_pruner_loss_based.py | yliu1021/tensorflow-low-rank | c85ef1b766e5e4f9ab3b4fd94a5d20da0601c92a | [
"MIT"
] | 1 | 2022-03-16T00:52:37.000Z | 2022-03-24T11:15:08.000Z | src/lowrank/pruners/alignment_pruner_loss_based.py | yliu1021/tensorflow-low-rank | c85ef1b766e5e4f9ab3b4fd94a5d20da0601c92a | [
"MIT"
] | null | null | null | """
Alignment Pruner Loss Based
"""
from typing import List
import numpy as np
import tensorflow as tf
from lowrank.pruners import AbstractPrunerBase, create_mask
class AlignmentPrunerLossBased(AbstractPrunerBase):
"""
    Alignment pruners score singular vectors based on how
much each singular vector perturbs the model output from
the baseline
"""
def compute_scores(self) -> List[np.ndarray]:
"""
        Score = magnitude of the vector difference between the model outputs on a batch of
        input data (with the singular vector zeroed out and not)
Intuition = the singular vectors that change the output vector the most from baseline
activation are the most important
"""
assert self.data_x is not None, "Data x is none, cannot infer input shape"
for layer in self.layers_to_prune:
layer.mask = np.ones(layer.max_rank)
self.model._reset_compile_cache()
scores = []
data_ind = np.random.choice(len(self.data_x), 64, replace=False)
data_x = self.data_x[data_ind]
print("Getting baseline output")
baseline_output = self.model(data_x)
for layer_ind, layer in enumerate(self.layers_to_prune):
print(f"Pruning low rank layer {layer_ind}")
layer_scores = []
for sv_ind in range(layer.max_rank):
# for each singular vector, mask it out and compute new output
print(f"\rEvaluting singular value {sv_ind}", end="", flush=True)
new_mask = np.ones(layer.max_rank)
new_mask[sv_ind] = 0
layer.mask = new_mask
self.model._reset_compile_cache()
new_output = self.model(data_x)
layer_scores.append(tf.norm(tf.math.subtract(baseline_output, new_output)))
layer.mask = np.ones(layer.max_rank)
self.model._reset_compile_cache()
print()
scores.append(np.array(layer_scores))
return scores
| 39.134615 | 93 | 0.638329 | 264 | 2,035 | 4.761364 | 0.409091 | 0.027844 | 0.038186 | 0.0358 | 0.154336 | 0.10183 | 0.084328 | 0.084328 | 0.084328 | 0.084328 | 0 | 0.002745 | 0.284029 | 2,035 | 51 | 94 | 39.901961 | 0.859986 | 0.227027 | 0 | 0.16129 | 0 | 0 | 0.088294 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 1 | 0.032258 | false | 0 | 0.129032 | 0 | 0.225806 | 0.129032 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b27036bbc5948e504c9e4e9ea200d27040810242 | 1,623 | py | Python | Books/WebScrapingwithPython/ImageHandling/ImageTextScraping.py | Tim232/Python-Things | 05f0f373a4cf298e70d9668c88a6e3a9d1cd8146 | [
"MIT"
] | 2 | 2020-12-05T07:42:55.000Z | 2021-01-06T23:23:18.000Z | Books/WebScrapingwithPython/ImageHandling/ImageTextScraping.py | Tim232/Python-Things | 05f0f373a4cf298e70d9668c88a6e3a9d1cd8146 | [
"MIT"
] | null | null | null | Books/WebScrapingwithPython/ImageHandling/ImageTextScraping.py | Tim232/Python-Things | 05f0f373a4cf298e70d9668c88a6e3a9d1cd8146 | [
"MIT"
] | null | null | null | import time
from urllib.request import urlretrieve
import subprocess
from selenium import webdriver
# Create the Selenium driver.
# driver = webdriver.PhantomJS(executable_path='D:\\KYH\\02.PYTHON\\vEnvDjango3_5_2_64\\phantomjs-2.1.1-windows\\bin\\phantomjs')
driver = webdriver.Chrome(executable_path='D:\\KYH\\02.PYTHON\\vEnvDjango3_5_2_64\\chromedriver.exe')
# Sometimes PhantomJS fails to find the elements on this page.
# In that case, remove the comment on the next line and use the Chrome driver instead. (chromedriver.exe must be downloaded)
driver.get('http://www.amazon.com/Alice-Wonderland-Large-Lewis-Carroll/dp/145155558X')
time.sleep(2)
# Click the book preview button.
driver.find_element_by_id('sitbLogoImg').click()
imageList = set()
# Wait for the page to load.
time.sleep(3)
cnt = 1
# While the right arrow can still be clicked, keep clicking it to turn the pages.
while 'pointer' in driver.find_element_by_id('sitbReaderRightPageTurner').get_attribute('style'):
driver.find_element_by_id('sitbReaderRightPageTurner').click()
time.sleep(2)
# Fetch the newly loaded pages. Several pages may be loaded at once,
# but duplicate elements are not added to the set.
# xpath is not supported by BeautifulSoup.
pages = driver.find_elements_by_xpath("//div[@class='pageImage']/div/img")
for page in pages:
image = page.get_attribute('src')
imageList.add(image)
driver.quit()
# Process the collected images with Tesseract.
i = 0
for image in sorted(imageList):
urlretrieve(image, 'ScrapedImages/page'+str(i)+'.jpg')
p = subprocess.Popen(['tesseract', 'ScrapedImages/page'+str(i)+'.jpg', 'ScrapedImages/page'+str(i)], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.wait()
f = open('ScrapedImages/page'+str(i)+'.txt', 'r', encoding='utf-8')
print(f.read())
i += 1 | 34.531915 | 152 | 0.71411 | 245 | 1,623 | 4.640816 | 0.62449 | 0.03518 | 0.070361 | 0.073879 | 0.21372 | 0.153034 | 0.07212 | 0.07212 | 0.07212 | 0.07212 | 0 | 0.023639 | 0.139864 | 1,623 | 47 | 153 | 34.531915 | 0.790831 | 0.278497 | 0 | 0.074074 | 0 | 0.037037 | 0.289655 | 0.119828 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.148148 | 0 | 0.148148 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b270b9af48b935d3ed23c08911b3564628973a74 | 16,133 | py | Python | telemetry/telemetry/internal/results/page_test_results.py | brave-experiments/catapult | 0d8246fe06fb598577f2344efcbc4b4e5b3aa323 | [
"BSD-3-Clause"
] | null | null | null | telemetry/telemetry/internal/results/page_test_results.py | brave-experiments/catapult | 0d8246fe06fb598577f2344efcbc4b4e5b3aa323 | [
"BSD-3-Clause"
] | 1 | 2022-03-02T13:24:12.000Z | 2022-03-02T13:24:12.000Z | telemetry/telemetry/internal/results/page_test_results.py | brave-experiments/catapult | 0d8246fe06fb598577f2344efcbc4b4e5b3aa323 | [
"BSD-3-Clause"
] | null | null | null | # Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import json
import logging
import os
import posixpath
import shutil
import tempfile
import time
import traceback
from telemetry import value as value_module
from telemetry.internal.results import chart_json_output_formatter
from telemetry.internal.results import html_output_formatter
from telemetry.internal.results import gtest_progress_reporter
from telemetry.internal.results import results_processor
from telemetry.internal.results import story_run
from tracing.value import convert_chart_json
from tracing.value import histogram_set
from tracing.value.diagnostics import all_diagnostics
from tracing.value.diagnostics import reserved_infos
class PageTestResults(object):
def __init__(self, output_formatters=None, progress_stream=None,
output_dir=None, should_add_value=None, benchmark_name=None,
benchmark_description=None,
upload_bucket=None, results_label=None):
"""
Args:
output_formatters: A list of output formatters. The output
formatters are typically used to format the test results, such
as CsvOutputFormatter, which output the test results as CSV.
progress_stream: A file-like object where to write progress reports as
stories are being run. Can be None to suppress progress reporting.
output_dir: A string specifying the directory where to store the test
artifacts, e.g: trace, videos, etc.
should_add_value: A function that takes two arguments: a value name and
a boolean (True when the value belongs to the first run of the
corresponding story). It returns True if the value should be added
to the test results and False otherwise.
benchmark_name: A string with the name of the currently running benchmark.
benchmark_description: A string with a description of the currently
running benchmark.
      upload_bucket: A string identifying a cloud storage bucket where to
upload artifacts.
results_label: A string that serves as an identifier for the current
benchmark run.
"""
super(PageTestResults, self).__init__()
self._progress_reporter = gtest_progress_reporter.GTestProgressReporter(
progress_stream)
self._output_formatters = (
output_formatters if output_formatters is not None else [])
self._output_dir = output_dir
self._upload_bucket = upload_bucket
if should_add_value is not None:
self._should_add_value = should_add_value
else:
self._should_add_value = lambda v, is_first: True
self._current_story_run = None
self._all_story_runs = []
self._all_stories = set()
self._representative_value_for_each_value_name = {}
self._all_summary_values = []
self._histograms = histogram_set.HistogramSet()
self._benchmark_name = benchmark_name or '(unknown benchmark)'
self._benchmark_description = benchmark_description or ''
self._benchmark_start_us = time.time() * 1e6
# |_interruption| is None if the benchmark has not been interrupted.
# Otherwise it is a string explaining the reason for the interruption.
# Interruptions occur for unrecoverable exceptions.
self._interruption = None
self._results_label = results_label
self._histogram_dicts_to_add = []
@property
def benchmark_name(self):
return self._benchmark_name
@property
def benchmark_description(self):
return self._benchmark_description
@property
def benchmark_start_us(self):
return self._benchmark_start_us
@property
def benchmark_interrupted(self):
return bool(self._interruption)
@property
def benchmark_interruption(self):
"""Returns a string explaining why the benchmark was interrupted."""
return self._interruption
@property
def label(self):
return self._results_label
@property
def output_dir(self):
return self._output_dir
@property
def upload_bucket(self):
return self._upload_bucket
def AsHistogramDicts(self):
return self._histograms.AsDicts()
def PopulateHistogramSet(self):
if len(self._histograms):
return
# We ensure that html traces are serialized and uploaded if necessary
results_processor.SerializeAndUploadHtmlTraces(self)
chart_json = chart_json_output_formatter.ResultsAsChartDict(self)
chart_json['label'] = self.label
chart_json['benchmarkStartMs'] = self.benchmark_start_us / 1000.0
file_descriptor, chart_json_path = tempfile.mkstemp()
os.close(file_descriptor)
    with open(chart_json_path, 'w') as chart_json_file:
      json.dump(chart_json, chart_json_file)
vinn_result = convert_chart_json.ConvertChartJson(chart_json_path)
os.remove(chart_json_path)
if vinn_result.returncode != 0:
logging.error('Error converting chart json to Histograms:\n' +
vinn_result.stdout)
return []
self._histograms.ImportDicts(json.loads(vinn_result.stdout))
self._histograms.ImportDicts(self._histogram_dicts_to_add)
@property
def all_summary_values(self):
return self._all_summary_values
@property
def current_page(self):
"""DEPRECATED: Use current_story instead."""
return self.current_story
@property
def current_story(self):
assert self._current_story_run, 'Not currently running test.'
return self._current_story_run.story
@property
def current_story_run(self):
return self._current_story_run
@property
def had_successes(self):
"""If there were any actual successes, not including skipped stories."""
return any(run.ok for run in self._all_story_runs)
@property
def num_successful(self):
"""Number of successful stories."""
return sum(1 for run in self._all_story_runs if run.ok)
@property
def num_expected(self):
"""Number of stories that succeeded or were expected skips."""
return sum(1 for run in self._all_story_runs if run.is_expected)
@property
def had_failures(self):
"""If there where any failed stories."""
return any(run.failed for run in self._all_story_runs)
@property
def num_failed(self):
"""Number of failed stories."""
return sum(1 for run in self._all_story_runs if run.failed)
@property
def had_skips(self):
"""If there where any skipped stories."""
return any(run.skipped for run in self._IterAllStoryRuns())
@property
def num_skipped(self):
"""Number of skipped stories."""
return sum(1 for run in self._all_story_runs if run.skipped)
def _IterAllStoryRuns(self):
# TODO(crbug.com/973837): Check whether all clients can just be switched
# to iterate over _all_story_runs directly.
for run in self._all_story_runs:
yield run
if self._current_story_run:
yield self._current_story_run
@property
def empty(self):
"""Whether there were any story runs or results."""
return not self._all_story_runs and not self._all_summary_values
def IterStoryRuns(self):
return iter(self._all_story_runs)
def IterAllLegacyValues(self):
for run in self._IterAllStoryRuns():
for value in run.values:
yield value
def CloseOutputFormatters(self):
"""
Clean up any open output formatters contained within this results object
"""
for output_formatter in self._output_formatters:
output_formatter.output_stream.close()
def __enter__(self):
return self
def __exit__(self, _, __, ___):
self.CloseOutputFormatters()
def WillRunPage(self, page, story_run_index=0):
assert not self._current_story_run, 'Did not call DidRunPage.'
self._current_story_run = story_run.StoryRun(
page, self._output_dir, story_run_index)
self._progress_reporter.WillRunStory(self)
def DidRunPage(self, page): # pylint: disable=unused-argument
"""
Args:
page: The current page under test.
"""
assert self._current_story_run, 'Did not call WillRunPage.'
self._current_story_run.Finish()
self._progress_reporter.DidRunStory(self)
self._all_story_runs.append(self._current_story_run)
story = self._current_story_run.story
self._all_stories.add(story)
self._current_story_run = None
def AddMetricPageResults(self, result):
"""Add results from metric computation.
Args:
result: A dict produced by results_processor._ComputeMetricsInPool.
"""
self._current_story_run = result['run']
try:
for fail in result['fail']:
self.Fail(fail)
if result['histogram_dicts']:
self.ImportHistogramDicts(result['histogram_dicts'])
for scalar in result['scalars']:
self.AddValue(scalar)
finally:
self._current_story_run = None
def InterruptBenchmark(self, reason):
"""Mark the benchmark as interrupted.
Interrupted benchmarks are assumed to be stuck in some irrecoverably
broken state.
Note that the interruption_reason will always be the first interruption.
This is because later interruptions may be simply additional fallout from
the first interruption.
"""
assert reason, 'A reason string to interrupt must be provided.'
logging.fatal(reason)
self._interruption = self._interruption or reason
def AddHistogram(self, hist):
if self._ShouldAddHistogram(hist):
diags = self._GetDiagnostics()
for diag in diags.itervalues():
self._histograms.AddSharedDiagnostic(diag)
self._histograms.AddHistogram(hist, diags)
def _GetDiagnostics(self):
"""Get benchmark and current story details as histogram diagnostics."""
diag_values = [
(reserved_infos.BENCHMARKS, self.benchmark_name),
(reserved_infos.BENCHMARK_START, self.benchmark_start_us),
(reserved_infos.BENCHMARK_DESCRIPTIONS, self.benchmark_description),
(reserved_infos.LABELS, self.label),
(reserved_infos.HAD_FAILURES, self.current_story_run.failed),
(reserved_infos.STORIES, self.current_story.name),
(reserved_infos.STORY_TAGS, self.current_story.GetStoryTagsList()),
(reserved_infos.STORYSET_REPEATS, self.current_story_run.index),
(reserved_infos.TRACE_START, self.current_story_run.start_us),
]
diags = {}
for diag, value in diag_values:
if value is None or value == []:
continue
if diag.type == 'GenericSet' and not isinstance(value, list):
value = [value]
elif diag.type == 'DateRange':
# We store timestamps in microseconds, DateRange expects milliseconds.
value = value / 1e3 # pylint: disable=redefined-variable-type
diag_class = all_diagnostics.GetDiagnosticClassForName(diag.type)
diags[diag.name] = diag_class(value)
return diags
def ImportHistogramDicts(self, histogram_dicts, import_immediately=True):
histograms = histogram_set.HistogramSet()
histograms.ImportDicts(histogram_dicts)
histograms.FilterHistograms(lambda hist: not self._ShouldAddHistogram(hist))
dicts_to_add = histograms.AsDicts()
# For measurements that add both TBMv2 and legacy metrics to results, we
# want TBMv2 histograms be imported at the end, when PopulateHistogramSet is
# called so that legacy histograms can be built, too, from scalar value
# data.
#
# Measurements that add only TBMv2 metrics and also add scalar value data
# should set import_immediately to True (i.e. the default behaviour) to
# prevent PopulateHistogramSet from trying to build more histograms from the
# scalar value data.
if import_immediately:
self._histograms.ImportDicts(dicts_to_add)
else:
self._histogram_dicts_to_add.extend(dicts_to_add)
def _ShouldAddHistogram(self, hist):
assert self._current_story_run, 'Not currently running test.'
is_first_result = (
self._current_story_run.story not in self._all_stories)
# TODO(eakuefner): Stop doing this once AddValue doesn't exist
stat_names = [
'%s_%s' % (hist.name, s) for s in hist.statistics_scalars.iterkeys()]
return any(self._should_add_value(s, is_first_result) for s in stat_names)
def AddValue(self, value):
"""Associate a legacy Telemetry value with the current story run.
This should not be used in new benchmarks. All values/measurements should
be recorded in traces.
"""
assert self._current_story_run, 'Not currently running a story.'
self._ValidateValue(value)
is_first_result = (
self._current_story_run.story not in self._all_stories)
if not self._should_add_value(value.name, is_first_result):
return
self._current_story_run.AddValue(value)
def AddSharedDiagnosticToAllHistograms(self, name, diagnostic):
self._histograms.AddSharedDiagnosticToAllHistograms(name, diagnostic)
def Fail(self, failure):
"""Mark the current story run as failed.
This method will print a GTest-style failure annotation and mark the
current story run as failed.
Args:
failure: A string or exc_info describing the reason for failure.
"""
# TODO(#4258): Relax this assertion.
assert self._current_story_run, 'Not currently running test.'
if isinstance(failure, basestring):
failure_str = 'Failure recorded for page %s: %s' % (
self._current_story_run.story.name, failure)
else:
failure_str = ''.join(traceback.format_exception(*failure))
logging.error(failure_str)
self._current_story_run.SetFailed(failure_str)
def Skip(self, reason, is_expected=True):
assert self._current_story_run, 'Not currently running test.'
self._current_story_run.Skip(reason, is_expected)
def CreateArtifact(self, name):
assert self._current_story_run, 'Not currently running test.'
return self._current_story_run.CreateArtifact(name)
def CaptureArtifact(self, name):
assert self._current_story_run, 'Not currently running test.'
return self._current_story_run.CaptureArtifact(name)
def AddTraces(self, traces, tbm_metrics=None):
"""Associate some recorded traces with the current story run.
Args:
traces: A TraceDataBuilder object with traces recorded from all
tracing agents.
tbm_metrics: Optional list of TBMv2 metrics to be computed from the
input traces.
"""
assert self._current_story_run, 'Not currently running test.'
for part, filename in traces.IterTraceParts():
artifact_name = posixpath.join('trace', part, os.path.basename(filename))
with self.CaptureArtifact(artifact_name) as artifact_path:
shutil.copy(filename, artifact_path)
if tbm_metrics:
self._current_story_run.SetTbmMetrics(tbm_metrics)
def AddSummaryValue(self, value):
assert value.page is None
self._ValidateValue(value)
self._all_summary_values.append(value)
def _ValidateValue(self, value):
assert isinstance(value, value_module.Value)
if value.name not in self._representative_value_for_each_value_name:
self._representative_value_for_each_value_name[value.name] = value
representative_value = self._representative_value_for_each_value_name[
value.name]
assert value.IsMergableWith(representative_value)
def PrintSummary(self):
self._progress_reporter.DidFinishAllStories(self)
    # Only serialize and upload traces when an HTML output formatter is in use.
if (self._output_dir and
any(isinstance(o, html_output_formatter.HtmlOutputFormatter)
for o in self._output_formatters)):
# Just to make sure that html trace is there in artifacts dir
results_processor.SerializeAndUploadHtmlTraces(self)
for output_formatter in self._output_formatters:
output_formatter.Format(self)
output_formatter.PrintViewResults()
def FindAllPageSpecificValuesNamed(self, value_name):
"""DEPRECATED: New benchmarks should not use legacy values."""
return [v for v in self.IterAllLegacyValues() if v.name == value_name]
def IterRunsWithTraces(self):
for run in self._IterAllStoryRuns():
if run.HasArtifactsInDir('trace/'):
yield run
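# Hedged usage sketch (illustrative comments only; not part of the original
# Telemetry module). A typical lifecycle for PageTestResults looks roughly
# like this, where `story` stands for a telemetry Story instance:
#
#   results = PageTestResults(benchmark_name='my_benchmark')
#   results.WillRunPage(story)
#   ...  # run the story; on error call results.Fail(...) or results.Skip(...)
#   results.DidRunPage(story)
#   results.PrintSummary()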
| 36.091723 | 80 | 0.73235 | 2,078 | 16,133 | 5.450433 | 0.201155 | 0.047678 | 0.051651 | 0.057037 | 0.199011 | 0.147625 | 0.127494 | 0.1003 | 0.09306 | 0.052975 | 0 | 0.002534 | 0.192649 | 16,133 | 446 | 81 | 36.172646 | 0.867025 | 0.254261 | 0 | 0.175 | 0 | 0 | 0.043582 | 0 | 0 | 0 | 0 | 0.006726 | 0.05 | 1 | 0.175 | false | 0 | 0.089286 | 0.042857 | 0.378571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b27135b7136a9552d11278c3d41fe7d7a7f43c70 | 8,352 | py | Python | deployment_manager/models.py | Verbozeteam/web | 2aecd67ec823e9d6ac243d6f8a71849dd0f9ed9d | [
"MIT"
] | 1 | 2018-12-17T15:31:03.000Z | 2018-12-17T15:31:03.000Z | deployment_manager/models.py | Verbozeteam/web | 2aecd67ec823e9d6ac243d6f8a71849dd0f9ed9d | [
"MIT"
] | null | null | null | deployment_manager/models.py | Verbozeteam/web | 2aecd67ec823e9d6ac243d6f8a71849dd0f9ed9d | [
"MIT"
] | null | null | null | from django.db import models
from channels import Channel
from django.contrib.auth import get_user_model
from api.models import AdminUser
class Repository(models.Model):
"""
Represents a repository that can be used in a deployment
"""
remote_path = models.CharField(max_length=2048, unique=True) # remote repository (e.g. github)
local_path = models.CharField(max_length=256, default="") # local path relative to mounted FS
def __str__(self):
return self.remote_path
class RepositoryBuildOption(models.Model):
"""
Represents an option to be executed when a repository is cloned (e.g. make)
"""
repo = models.ForeignKey(Repository, on_delete=models.CASCADE)
option_name = models.CharField(max_length=256)
option_command = models.CharField(max_length=1024)
option_priority = models.IntegerField(default=1)
def __str__(self):
return "{}: {} ({})".format(self.repo, self.option_name, self.option_priority)
class DeploymentConfig(models.Model):
    """
    Represents a deployment scheme. This model and all its children cannot be
    modified if there is a deployment made using this configuration.
    """
    class Meta:
        unique_together = ('name', 'version')
parent = models.ForeignKey('DeploymentConfig', on_delete=models.CASCADE, blank=True, null=True, default=None)
name = models.CharField(max_length=256)
version = models.IntegerField(default=1)
def can_be_changed(self):
if not self.pk:
return True # new deployment config - no problem
if Deployment.objects.filter(config=self.pk).exists():
return False # Deployment configuration is deployed somewhere
if DeploymentConfig.objects.filter(name=self.name, version=self.version+1).exists():
return False # If a new version is available, this is not editable
# make sure no child deployment config has been deployed either
children = list(DeploymentConfig.objects.filter(parent=self))
while len(children) > 0:
c = children[0]
children = children[1:]
if Deployment.objects.filter(config=c.pk).exists():
return False # Child deployment configuration is deployed somewhere
children += list(DeploymentConfig.objects.filter(parent=c)) # get grandchildren
return True
def save(self, *args, **kwargs):
if not self.can_be_changed():
raise Exception('Already deployed! you cannot change this!')
super(DeploymentConfig, self).save(*args, **kwargs)
def __str__(self):
return "{} (v{})".format(self.name, self.version)
class DeploymentFile(models.Model):
"""
Represents a file that is to be cp'd (forced) onto the deployed image
"""
class Meta:
unique_together = ('deployment', 'target_filename')
deployment = models.ForeignKey(DeploymentConfig, on_delete=models.CASCADE)
target_filename = models.CharField(max_length=256)
file_contents = models.TextField(default="", blank=True)
is_executable = models.BooleanField(default=False)
def save(self, *args, **kwargs):
if not self.deployment.can_be_changed():
raise Exception('Already deployed! you cannot change this!')
super(DeploymentFile, self).save(*args, **kwargs)
def __str__(self):
return "[{}] {}".format(self.deployment, self.target_filename)
class FileDefaultParameter(models.Model):
"""
A default parameter to use when a file is deployed
"""
class Meta:
unique_together = ('file', 'parameter_name')
file = models.ForeignKey(DeploymentFile, on_delete=models.CASCADE)
is_required = models.BooleanField()
parameter_name = models.CharField(max_length=64)
parameter_value = models.CharField(max_length=512, blank=True)
def __str__(self):
return "[{}] {}:{} ({})".format(self.file, self.parameter_name, self.parameter_value, "required" if self.is_required else "not required")
class DeploymentRepository(models.Model):
"""
A repository to be cloned on deployment
"""
class Meta:
unique_together = ('deployment', 'repo')
deployment = models.ForeignKey(DeploymentConfig, on_delete=models.CASCADE)
repo = models.ForeignKey(Repository, on_delete=models.CASCADE)
commit = models.CharField(max_length=256, default='master')
def save(self, *args, **kwargs):
if not self.deployment.can_be_changed():
raise Exception('Already deployed! you cannot change this!')
super(DeploymentRepository, self).save(*args, **kwargs)
def __str__(self):
return "[{}] {} ({})".format(self.deployment, self.repo, self.commit)
class Deployment(models.Model):
"""
A deployment that has already occurred
"""
config = models.ForeignKey(DeploymentConfig, on_delete=models.PROTECT)
date = models.DateTimeField(auto_now_add=True)
target = models.CharField(max_length=256)
comment = models.TextField(blank=True)
def __str__(self):
return "[{}] {}".format(self.date, self.config)
class DeploymentParameter(models.Model):
"""
A parameter used in a deployment
"""
deployment = models.ForeignKey(Deployment, on_delete=models.CASCADE)
parameter_name = models.CharField(max_length=64)
parameter_value = models.CharField(max_length=512)
def __str__(self):
return "[{}] {}:{}".format(self.deployment, self.parameter_name, self.parameter_value)
class DeploymentBuildOption(models.Model):
"""
A build option used in a deployment
"""
deployment = models.ForeignKey(
Deployment,
on_delete=models.CASCADE,
null=True,
related_name='build_options',
related_query_name='build_option'
)
option_name = models.CharField(max_length=256)
option_command = models.CharField(max_length=1024)
option_priority = models.IntegerField(default=1)
def __str__(self):
return "{}: {}".format(self.deployment, self.option_name, self.option_command, self.option_priority)
class RunningDeployment(models.Model):
"""
A deployment currently happening
"""
deployment = models.ForeignKey(
Deployment,
on_delete=models.CASCADE,
null=True,
related_name="running_deployments",
related_query_name="running_deployment"
)
status = models.TextField(default="", blank=True)
stdout = models.TextField(default="", blank=True)
def __str__(self):
return "{}: {}".format(self.deployment, self.status)
class RemoteDeploymentMachine(models.Model):
"""
    An actively connected remote deployment machine
(record is deleted when websocket disconnects)
"""
channel_name = models.CharField(max_length=128, default="", blank=True)
name = models.CharField(max_length=128, default="", unique=True)
admin_user = models.ForeignKey(
AdminUser,
default=None,
on_delete=models.CASCADE,
related_name="remote_deployment_machines",
related_query_name="remote_deployment_machine"
)
def __str__(self):
return "{}".format(self.name)
def ws_send_message(self, message):
Channel(self.channel_name).send(message)
class DeploymentTarget(models.Model):
"""
A target available on a Remote Deployment Machine
"""
remote_deployment_machine = models.ForeignKey(
RemoteDeploymentMachine,
default=None,
on_delete=models.CASCADE,
related_name="targets",
related_query_name="target"
)
identifier = models.CharField(max_length=128, default="")
status = models.CharField(max_length=128, default="")
def __str__(self):
return "{} on {}".format(self.identifier, self.remote_deployment_machine)
class Firmware(models.Model):
"""
Represents an existing .img file to be dd'd on an SD card
Available on Remote Deployment Machine
"""
remote_deployment_machine = models.ForeignKey(RemoteDeploymentMachine, on_delete=models.CASCADE, related_name="firmwares", related_query_name="firmware")
name = models.CharField(max_length=256, unique=True)
def __str__(self):
return "{} on {}".format(self.name, self.remote_deployment_machine)
| 36.471616 | 157 | 0.681154 | 964 | 8,352 | 5.729253 | 0.200207 | 0.051602 | 0.061923 | 0.082564 | 0.526707 | 0.443599 | 0.359949 | 0.326453 | 0.226688 | 0.195908 | 0 | 0.009823 | 0.207735 | 8,352 | 228 | 158 | 36.631579 | 0.824845 | 0.121049 | 0 | 0.369863 | 0 | 0 | 0.068756 | 0.007367 | 0 | 0 | 0 | 0 | 0 | 1 | 0.123288 | false | 0 | 0.027397 | 0.089041 | 0.678082 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b276906a4a37e37fd6a0f7464238773efb84b1a5 | 733 | py | Python | web_flask/9-states.py | ralexrivero/AirBnB_clone_v3 | f6ae9107e8e1ccd53575bb82eb45f07379f480de | [
"MIT"
] | null | null | null | web_flask/9-states.py | ralexrivero/AirBnB_clone_v3 | f6ae9107e8e1ccd53575bb82eb45f07379f480de | [
"MIT"
] | null | null | null | web_flask/9-states.py | ralexrivero/AirBnB_clone_v3 | f6ae9107e8e1ccd53575bb82eb45f07379f480de | [
"MIT"
] | null | null | null | #!/usr/bin/python3
"""
starts a Flask web application
"""
from flask import Flask, render_template
from models import *
from models import storage
app = Flask(__name__)
@app.route('/states', strict_slashes=False)
@app.route('/states/<state_id>', strict_slashes=False)
def states(state_id=None):
"""display the states and cities listed in alphabetical order"""
states = storage.all("State")
if state_id is not None:
state_id = 'State.' + state_id
return render_template('9-states.html', states=states, state_id=state_id)
@app.teardown_appcontext
def teardown_db(exception):
"""closes the storage on teardown"""
storage.close()
if __name__ == '__main__':
app.run(host='0.0.0.0', port='5000')
| 24.433333 | 77 | 0.706685 | 105 | 733 | 4.695238 | 0.504762 | 0.099391 | 0.079108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016155 | 0.155525 | 733 | 29 | 78 | 25.275862 | 0.780291 | 0.188267 | 0 | 0 | 0 | 0 | 0.117851 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.1875 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b276bc5b205f7c4e62aaa7704ab77db79e83db48 | 393 | py | Python | ABC085/C.py | shimomura314/AtcoderCodes | db1d62a7715f5f1b3c40eceff8d34f0f34839f41 | [
"MIT"
] | null | null | null | ABC085/C.py | shimomura314/AtcoderCodes | db1d62a7715f5f1b3c40eceff8d34f0f34839f41 | [
"MIT"
] | null | null | null | ABC085/C.py | shimomura314/AtcoderCodes | db1d62a7715f5f1b3c40eceff8d34f0f34839f41 | [
"MIT"
] | null | null | null | n,y = map(int,input().split())
y //= 1000
if n*10 == y:
print(n, 0, 0)
exit()
if n*10 < y:
print(-1, -1, -1)
exit()
for number10 in range(y//10 + 1):
for number5 in range((y-number10*10)//5 + 1):
number1 = y - number10*10 - number5*5
if number10 + number5 + number1 == n:
print(number10, number5, number1)
exit()
print(-1, -1, -1) | 21.833333 | 49 | 0.51145 | 62 | 393 | 3.241935 | 0.322581 | 0.039801 | 0.049751 | 0.059701 | 0.109453 | 0 | 0 | 0 | 0 | 0 | 0 | 0.156364 | 0.300254 | 393 | 18 | 50 | 21.833333 | 0.574545 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.266667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b2778ff253236fbfc44dd305804ba4a992cc3ce4 | 837 | py | Python | Problems/Euler Project 95.py | vishwas-21/Project-Euler | ecc6cd843425647582488bcaaaa1815439251d56 | [
"MIT"
] | null | null | null | Problems/Euler Project 95.py | vishwas-21/Project-Euler | ecc6cd843425647582488bcaaaa1815439251d56 | [
"MIT"
] | null | null | null | Problems/Euler Project 95.py | vishwas-21/Project-Euler | ecc6cd843425647582488bcaaaa1815439251d56 | [
"MIT"
] | null | null | null | import time
import math
start = time.time()
def sum_prop_div(n):
sumProp = 1
for i in range(2, int(math.sqrt(n)) + 1):
if n % i == 0:
if i != n // i:
sumProp += i + n // i
else:
sumProp += i
return sumProp
def length_chain(n):
ans = 1
dic = {n : True}
temp = n
while True:
if temp >= 1000000:
return 0
temp = sum_prop_div(temp)
try:
_ = dic[temp]
if temp == n:
return ans
return 0
except:
dic[temp] = True
ans += 1
ma = 0
ans = 0
for i in range(10, 1000000):
temp = length_chain(i)
if temp > ma:
ans = i
ma = temp
if i % 25000 == 0:
print(i, ans, ma, "Time -", time.time() - start)
| 18.6 | 56 | 0.433692 | 113 | 837 | 3.150442 | 0.318584 | 0.067416 | 0.05618 | 0.061798 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070022 | 0.454002 | 837 | 44 | 57 | 19.022727 | 0.708972 | 0 | 0 | 0.054054 | 0 | 0 | 0.007168 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054054 | false | 0 | 0.054054 | 0 | 0.216216 | 0.027027 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b278a24e684e5d54486bbcede5598efebf11f49a | 381 | py | Python | tools/formatTableHTML.py | iuyte/g19abot | e022662753a0ad5fcbe145805ebdd3aaea4b260a | [
"Unlicense"
] | 1 | 2019-07-28T01:26:46.000Z | 2019-07-28T01:26:46.000Z | tools/formatTableHTML.py | iuyte/g19abot | e022662753a0ad5fcbe145805ebdd3aaea4b260a | [
"Unlicense"
] | null | null | null | tools/formatTableHTML.py | iuyte/g19abot | e022662753a0ad5fcbe145805ebdd3aaea4b260a | [
"Unlicense"
] | null | null | null | #!/usr/bin/env python3
import re
c = ""
with open("docs/table.html", mode="r") as f:
c += f.read()
c = c.replace("discussion-timeline js-quote-selection-container","").replace("container new-discussion-timeline experiment-repo-nav", "")
c = c.replace("preview-page", "").replace("timeline-comment-wrapper", "")
with open("docs/table.html", mode="w") as f:
f.write(c)
| 23.8125 | 137 | 0.658793 | 57 | 381 | 4.403509 | 0.596491 | 0.063745 | 0.095618 | 0.135458 | 0.199203 | 0.199203 | 0 | 0 | 0 | 0 | 0 | 0.002994 | 0.12336 | 381 | 15 | 138 | 25.4 | 0.748503 | 0.055118 | 0 | 0 | 0 | 0 | 0.472067 | 0.209497 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b279f52eef122bff5d45afa981e59b2c9f320f83 | 834 | py | Python | atcoder/other/lang_update_202001/practice_2.py | knuu/competitive-programming | 16bc68fdaedd6f96ae24310d697585ca8836ab6e | [
"MIT"
] | 1 | 2018-11-12T15:18:55.000Z | 2018-11-12T15:18:55.000Z | atcoder/other/lang_update_202001/practice_2.py | knuu/competitive-programming | 16bc68fdaedd6f96ae24310d697585ca8836ab6e | [
"MIT"
] | null | null | null | atcoder/other/lang_update_202001/practice_2.py | knuu/competitive-programming | 16bc68fdaedd6f96ae24310d697585ca8836ab6e | [
"MIT"
] | null | null | null | import string
cnt = 0
def merge_sort(left, right, A):
# sort [left, right)
if left + 1 == right:
return [A[left]]
mid = (left + right) >> 1
l_arr = merge_sort(left, mid, A)
r_arr = merge_sort(mid, right, A)
ret = []
while l_arr and r_arr:
print(f"? {l_arr[0]} {r_arr[0]}", flush=True)
ans = input()
if ans == "<":
ret.append(l_arr.pop(0))
else:
ret.append(r_arr.pop(0))
if l_arr:
ret.extend(l_arr)
if r_arr:
ret.extend(r_arr)
return ret
def solve26(N) -> None:
A = list(string.ascii_uppercase)[:N]
ans = merge_sort(0, N, A)
print(f"! {''.join(ans)}", flush=True)
def main() -> None:
N, _ = map(int, input().split())
if N != 5:
solve26(N)
if __name__ == '__main__':
main()
| 19.395349 | 53 | 0.515588 | 127 | 834 | 3.181102 | 0.338583 | 0.059406 | 0.064356 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022767 | 0.315348 | 834 | 42 | 54 | 19.857143 | 0.684764 | 0.021583 | 0 | 0 | 0 | 0 | 0.058968 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096774 | false | 0 | 0.032258 | 0 | 0.193548 | 0.064516 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b27a51c5f0f147e3fa87d792d9d0c9c6ce86db8c | 2,784 | py | Python | RFM/recency.py | simmieyungie/PyRFM | 9db83caab5fed97af83bf8725cbc287e64fb4c3e | [
"MIT"
] | null | null | null | RFM/recency.py | simmieyungie/PyRFM | 9db83caab5fed97af83bf8725cbc287e64fb4c3e | [
"MIT"
] | null | null | null | RFM/recency.py | simmieyungie/PyRFM | 9db83caab5fed97af83bf8725cbc287e64fb4c3e | [
"MIT"
] | null | null | null | import pandas as pd
#define the function
def recency(data, id, date, present_date, bins = {5 : [5,4,3,2,1]}):
'''
This function will calculate the recency.
data: Dataframe
A dataframe containing at least the customer ID, Transaction/Order Date
id: String, Integer
(Order/Customer/Transaction) Id column (string or int)
date: Date, Datetime, Object
A date column (NB: A type conversion will be attempted by the function but you can also ensure the date type is proper)
present_date: datetime
The present date or most recent date to serve as a reference point for recency
bins: dictionary, default 5
        A dictionary containing the number of segments to be created from the recency score and the labels
        associated to the quantile groups. NB: for the Recency score we need to give inverse labels, since
        the more active the customer is, the lower its recency value is.
'''
#error instance to check if input is either a pandas dataframe and is also not none
if isinstance(data, pd.DataFrame) == False or data is None:
raise ValueError("data: Expecting a Dataframe or got 'None'")
#error instance to check if id column is an identified column name
if id not in data.columns:
raise ValueError("id: Expected an id (a column) in Dataframe")
#examine the datatype of the date column
if data.dtypes[date].name not in ["datetime64[ns]", "datetime.datetime"]:
raise ValueError("date: Expected a date datatype, 'convert the date/datetime type'")
#print("This Column is not a of date/datetime data type")
if present_date is None:
raise ValueError("present_date: Specify the recent date to calculate recency")
#aggregation to calculate recency
result = data.groupby(id) \
.agg({date: lambda date: (present_date - date.max()).days})\
.reset_index() #reset index
if isinstance(bins, dict) == False:
raise ValueError("bin: Value needs to be an integer. default is 5")
    if len(list(bins.values())[0]) != int(list(bins.keys())[0]):
print("Warning: The number of bins for quantile is not same as the label. default is {}".format(bins))
#rename column
result.rename(columns = {date : "recency"}, inplace = True)
#get bin value
bin_value = int(list(bins.keys())[0])
#get bin labels
bin_labels = list(bins.values())[0]
#logic to use bins value
result["recency_bins"] = pd.qcut(result.recency, bin_value, bin_labels)
#note that if bin changes then label needs to change also
return result
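# --- Hedged usage sketch (not part of the original module) -------------------
# Minimal, self-contained example of calling recency(); the column names
# ("customer_id", "order_date") and the sample values are assumptions made
# purely for illustration.
if __name__ == "__main__":
    sample = pd.DataFrame({
        "customer_id": list(range(1, 11)),
        "order_date": pd.to_datetime(["2021-01-%02d" % day for day in range(1, 11)]),
    })
    scores = recency(sample, id="customer_id", date="order_date",
                     present_date=pd.Timestamp("2021-02-01"))
    print(scores)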
| 39.771429 | 127 | 0.662356 | 399 | 2,784 | 4.593985 | 0.368421 | 0.036007 | 0.016367 | 0.021822 | 0.041462 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006289 | 0.257543 | 2,784 | 69 | 128 | 40.347826 | 0.880503 | 0.461925 | 0 | 0 | 0 | 0 | 0.273835 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.045455 | 0 | 0.136364 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b27b6ce67b8ae405ee57b280021b8ad7db45bfd2 | 1,413 | py | Python | scripts/albert/common/strutil/stringhandler.py | ArthurRizar/dialog_state_tracking_bert | 8ee16a854d0ccaad44918c4ede16851acf11ceaa | [
"Apache-2.0"
] | null | null | null | scripts/albert/common/strutil/stringhandler.py | ArthurRizar/dialog_state_tracking_bert | 8ee16a854d0ccaad44918c4ede16851acf11ceaa | [
"Apache-2.0"
] | null | null | null | scripts/albert/common/strutil/stringhandler.py | ArthurRizar/dialog_state_tracking_bert | 8ee16a854d0ccaad44918c4ede16851acf11ceaa | [
"Apache-2.0"
] | null | null | null | #coding:utf-8
###################################################
# File Name: stringhandler.py
# Author: Meng Zhao
# mail: @
# Created Time: Mon 26 Mar 2018 10:40:57 AM CST
#=============================================================
def normalize_num(word):
if word.isdigit() and len(word) == 11:
return '<phone>'
elif word.isdigit():
return '<num>'
else:
return word
def filter_sent(sent_str, stop_set):
sent = sent_str.split(' ')
new_sents = []
for word in sent:
new_word = normalize_num(word)
if new_word not in stop_set:
new_sents.append(new_word)
new_sents_str = ' '.join(new_sents)
return new_sents_str
def split_word_and_seg(trunks_str, stop_set):
trunks = trunks_str.split(' ')
words = []
segs = []
for trunk_str in trunks:
trunk = trunk_str.rsplit('/', 1)
if len(trunk) != 2:
continue
word, seg = trunk
new_word = normalize_num(word)
if new_word not in stop_set:
words.append(new_word)
segs.append(seg)
sent = ' '.join(words)
segs_str = ' '.join(segs)
return sent, segs_str
if __name__ == '__main__':
print(normalize_num('12345'))
print(normalize_num('aa'))
print(normalize_num('12a345'))
sent, segs_str = split_word_and_seg('泛微/n 软件/n 大楼/n //a', set())
print(sent)
print(segs_str)
| 26.660377 | 68 | 0.553432 | 185 | 1,413 | 3.967568 | 0.362162 | 0.098093 | 0.065395 | 0.073569 | 0.119891 | 0.119891 | 0.119891 | 0.119891 | 0.119891 | 0.119891 | 0 | 0.025544 | 0.251946 | 1,413 | 52 | 69 | 27.173077 | 0.668874 | 0.122435 | 0 | 0.102564 | 0 | 0 | 0.048183 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0 | 0 | 0.205128 | 0.128205 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b27b77c3afc3eb1321dc07ee4da6effa63a88fac | 6,072 | py | Python | videoflow/processors/vision/transformers.py | Muflhi01/videoflow | c49d3fe6c814574bcda1a4e907ce52ea86e1617c | [
"MIT"
] | 1,022 | 2019-05-24T21:27:49.000Z | 2022-03-30T04:08:35.000Z | videoflow/processors/vision/transformers.py | Muflhi01/videoflow | c49d3fe6c814574bcda1a4e907ce52ea86e1617c | [
"MIT"
] | 57 | 2019-05-25T06:48:44.000Z | 2021-06-23T17:17:51.000Z | videoflow/processors/vision/transformers.py | Muflhi01/videoflow | c49d3fe6c814574bcda1a4e907ce52ea86e1617c | [
"MIT"
] | 88 | 2019-05-23T14:24:14.000Z | 2022-03-28T05:06:33.000Z | from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from typing import Optional, List
import numpy as np
import cv2
from ...core.node import ProcessorNode
from ...utils.transforms import resize_add_padding
class CropImageTransformer(ProcessorNode):
'''
- Arguments:
- crop_dimensions: np.array of shape (nb_boxes, 4) \
second dimension entries are [ymin, xmin, ymax, xmax] \
or None
- Raises:
- ValueError:
- If any of crop_dimensions less than 0
- If ymin > ymax or xmin > xmax
'''
def __init__(self, crop_dimensions: Optional[np.array] = None):
        if crop_dimensions is not None:
self._check_crop_dimensions(crop_dimensions)
self.crop_dimensions = crop_dimensions
super(CropImageTransformer, self).__init__()
@staticmethod
def _check_crop_dimensions(crop_dimensions: np.array):
'''
- Arguments:
- crop_dimensions: np.array of shape (nb_boxes, 4) \
second dimension entries are [ymin, xmin, ymax, xmax]
- Raises:
- ValueError:
- If any of crop_dimensions less than 0
- If ymin > ymax or xmin > xmax
'''
if (crop_dimensions < 0).any():
raise ValueError('One of the crop values is less than 0')
if ((crop_dimensions[:, 0] > crop_dimensions[:, 2]).any()
or (crop_dimensions[:, 1] > crop_dimensions[:, 3]).any()):
raise ValueError('ymin > ymax or xmin > xmax')
def _crop(self, im: np.array, crop_dimensions: Optional[np.array] = None) -> List[np.array]:
'''
- Arguments:
- im (np.array): shape of (h, w, 3)
- crop_dimensions: np.array of shape (nb_boxes, 4) \
second dimension entries are [ymin, xmin, ymax, xmax] \
or None
- Raises:
- ValueError:
- If any of crop_dimensions less than 0
- If any of crop_dimensions out of bounds
- If ymin > ymax or xmin > xmax
- Returns:
- list of np.arrays: Returns a list of cropped images of the same size as crop_dimensions
'''
if crop_dimensions is None:
if self.crop_dimensions is None:
raise RuntimeError("Crop dimensions were not specified")
crop_dimensions = self.crop_dimensions
self._check_crop_dimensions(crop_dimensions)
        if ((crop_dimensions[:, 2] > im.shape[0]).any()
                or (crop_dimensions[:, 3] > im.shape[1]).any()):
raise ValueError('One of the crop indexes is out of bounds')
result = []
for crop_dimensions_x in crop_dimensions:
ymin, ymax = int(crop_dimensions_x[0]), int(crop_dimensions_x[2])
xmin, xmax = int(crop_dimensions_x[1]), int(crop_dimensions_x[3])
im_cropped = im[ymin:ymax, xmin:xmax, :]
result.append(im_cropped)
return result
def process(self, im: np.array, crop_dimensions: Optional[np.array]) -> List[np.array]:
'''
Crops image according to the coordinates in crop_dimensions.
If those coordinates are out of bounds, it will raise errors
- Arguments:
- im (np.array): shape of (h, w, 3)
- crop_dimensions: np.array of shape (nb_boxes, 4) \
second dimension entries are [ymin, xmin, ymax, xmax] \
or None
- Raises:
- ValueError:
- If any of crop_dimensions less than 0
- If any of crop_dimensions out of bounds
- If ymin > ymax or xmin > xmax
- Returns:
- list of np.arrays: Returns a list of cropped images of the same size as crop_dimensions
'''
to_transform = np.array(im)
return self._crop(to_transform, crop_dimensions)
class MaskImageTransformer(ProcessorNode):
def __init__(self):
super(MaskImageTransformer, self).__init__()
def _mask(self, im : np.array, mask : np.array) -> np.array:
if mask.shape[:2] != im.shape[:2]:
raise ValueError("`mask` does not have same dimensions as `im`")
im = im.astype(float)
alpha = cv2.merge((mask, mask, mask))
masked = cv2.multiply(im, alpha)
return masked.astype(np.uint8)
def process(self, im : np.array, mask : np.array) -> np.array:
'''
Masks an image according to given masks
- Arguments:
- im (np.array): shape of (h, w, 3)
- mask (np.array): (h, w) of type np.float32, with \
values between zero and one
- Raises:
- ValueError:
- If ``mask`` does not have same height and width as \
``im``
'''
to_transform = np.array(im)
        return self._mask(to_transform, mask)
class ResizeImageTransformer(ProcessorNode):
def __init__(self, maintain_ratio = False):
self._maintain_ratio = maintain_ratio
super(ResizeImageTransformer, self).__init__()
def _resize(self, im : np.array, new_size) -> np.array:
height, width = new_size
if height < 0 or width < 0:
raise ValueError("One of `width` or `height` is a negative value")
if self._maintain_ratio:
im = resize_add_padding(im, height, width)
else:
im = cv2.resize(im, (width, height))
return im
def process(self, im : np.array, new_size) -> np.array:
'''
Resizes image according to coordinates in new_size
- Arguments:
- im (np.array): shape of (h, w, 3)
- new_size (tuple): (new_height, new_width)
- Raises:
- ValueError:
- If ``new_height`` or ``new_width`` are negative
'''
to_transform = np.array(im)
return self._resize(im, new_size) | 36.359281 | 101 | 0.573617 | 734 | 6,072 | 4.570845 | 0.178474 | 0.183607 | 0.026826 | 0.019672 | 0.482861 | 0.441431 | 0.414307 | 0.369598 | 0.3231 | 0.262891 | 0 | 0.009078 | 0.328722 | 6,072 | 167 | 102 | 36.359281 | 0.814033 | 0.360507 | 0 | 0.072464 | 0 | 0 | 0.066471 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.144928 | false | 0 | 0.115942 | 0 | 0.391304 | 0.014493 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b28035953a6de976e1cb88aa44d13b4ff1968237 | 2,370 | py | Python | tests/test_adam_igt.py | seba-1511/igt.pth | 3c0632fc91c8025f7169c90876ca57503837bcc3 | [
"Apache-2.0"
] | 21 | 2019-06-11T00:44:05.000Z | 2021-02-22T10:00:57.000Z | tests/test_adam_igt.py | ozansener/igt.pth | 3c0632fc91c8025f7169c90876ca57503837bcc3 | [
"Apache-2.0"
] | null | null | null | tests/test_adam_igt.py | ozansener/igt.pth | 3c0632fc91c8025f7169c90876ca57503837bcc3 | [
"Apache-2.0"
] | 4 | 2019-07-23T13:26:27.000Z | 2021-02-12T06:19:45.000Z | #!/usr/bin/env python3
import torch as th
import torch.nn as nn
from torch import Tensor as T
from torch.autograd import Variable as V
import torch.nn.functional as F
import torch_igt
class Convnet(nn.Module):
def __init__(self):
super(Convnet, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.log_softmax(x, dim=1)
def dist(x, y):
return (x - y).pow(2).sum()
def close(x, y):
return dist(x, y) < 1e-8
if __name__ == '__main__':
th.manual_seed(1234)
model1 = Convnet().double()
model2 = Convnet().double()
for p1, p2 in zip(model1.parameters(), model2.parameters()):
p1.data.copy_(p2.data)
ref = torch_igt.AdamIGT(model1.parameters(), lr=0.001)
opt = th.optim.Adam(model2.parameters(), lr=0.001)
igt = torch_igt.IGTransporter(model2.parameters(), opt)
x = V(th.randn(3, 1, 28, 28).double(), requires_grad=False)
for i in range(100):
for p1, p2 in zip(model1.parameters(), model2.parameters()):
assert close(p1.data, p2.data)
# Compute reference gradients
ref.train()
ref.zero_grad()
loss1 = model1.forward(x).pow(2).mean()
loss1.backward()
# Compute wrapper gradients
igt.train()
igt.zero_grad()
loss2 = model2.forward(x).pow(2).mean()
loss2.backward()
assert close(loss1.data, loss2.data)
# Test identical gradients
for p1, p2 in zip(model1.parameters(), model2.parameters()):
assert close(p1.grad.data, p2.grad.data)
        # Take one step
ref.step()
igt.step()
# Test identical parameters (train)
for p1, p2 in zip(model1.parameters(), model2.parameters()):
assert close(p1.data, p2.data)
# Test identical parameters (eval)
ref.eval()
igt.eval()
for p1, p2 in zip(model1.parameters(), model2.parameters()):
assert close(p1.data, p2.data)
ref.train()
igt.train()
| 26.931818 | 68 | 0.589873 | 339 | 2,370 | 4.038348 | 0.321534 | 0.081812 | 0.025566 | 0.032871 | 0.273192 | 0.249817 | 0.249817 | 0.220599 | 0.220599 | 0.188459 | 0 | 0.0616 | 0.267089 | 2,370 | 87 | 69 | 27.241379 | 0.72654 | 0.075949 | 0 | 0.206897 | 0 | 0 | 0.003665 | 0 | 0 | 0 | 0 | 0 | 0.086207 | 1 | 0.068966 | false | 0 | 0.103448 | 0.034483 | 0.241379 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b280a9669692fe9b1d6bc281bdc614de02f7e2c0 | 2,959 | py | Python | pqkmeans/encoder/itq_encoder.py | Hi-king/pqkmeans | 5a0f3dc06df81693168916d3048b610ed9756665 | [
"MIT"
] | 224 | 2017-09-14T00:49:00.000Z | 2022-03-29T02:13:58.000Z | pqkmeans/encoder/itq_encoder.py | Hi-king/pqkmeans | 5a0f3dc06df81693168916d3048b610ed9756665 | [
"MIT"
] | 30 | 2017-09-14T01:47:12.000Z | 2021-12-30T00:54:35.000Z | pqkmeans/encoder/itq_encoder.py | Hi-king/pqkmeans | 5a0f3dc06df81693168916d3048b610ed9756665 | [
"MIT"
] | 49 | 2017-09-14T02:23:02.000Z | 2022-01-17T18:31:23.000Z | import typing
import numpy
import logging
import sklearn.decomposition
from .encoder_base import EncoderBase
class ITQEncoder(EncoderBase):
class TrainedITQEncoder(object):
def __init__(self, R, pca, bits):
self.R, self.pca, self.bits = R, pca, bits
def encode(self, vector):
vector_pca = self.pca.transform(vector)
vector_projection = vector_pca.dot(self.R)
return vector_projection >= 0
def encode_multi(self, data_matrix):
data_matrix_pca = self.pca.transform(data_matrix)
data_matrix_projection = data_matrix_pca.dot(self.R)
return data_matrix_projection >= 0
def __init__(self, iteration=50, num_bit=32):
# type: (int, int) -> None
self.iteration = iteration
self.num_bit = num_bit
self.trained_encoder = None # type: ITQEncoder.TrainedITQEncoder
def __preprocess(self, data, bits):
pca = sklearn.decomposition.PCA(n_components=bits)
pca.fit(data)
data_pca = pca.transform(data)
return data_pca, pca
def __fit(self, data, bits, iteration):
# initialize rotation randomly
R_raw = numpy.random.rand(bits, bits)
R, _S, _V = numpy.linalg.svd(R_raw, full_matrices=True, compute_uv=True)
V = data
for i in range(iteration):
## projection step
logging.debug("R: {}".format(R.shape))
logging.debug("V: {}".format(V.shape))
VR = V.dot(R)
## binary assignment
B = numpy.ones(VR.shape)
B[VR < 0] = -1
## error
error = numpy.sum((B - VR) * (B - VR))
logging.debug("error: {}".format(error))
error = B.T.dot(V)
# update
# minimize whole_error
# https://en.wikipedia.org/wiki/Orthogonal_Procrustes_problem
UB, sigma, UA = numpy.linalg.svd(error)
R = UB.dot(UA)
return R
def fit(self, x_train):
# type: (numpy.array) -> None
assert len(x_train.shape) == 2
        assert x_train.shape[1] >= self.num_bit, "input dimension should not be smaller than the target number of bits"
data_preprocessed, pca = self.__preprocess(x_train, self.num_bit)
R = self.__fit(data_preprocessed, self.num_bit, self.iteration)
self.trained_encoder = self.TrainedITQEncoder(R, pca, self.num_bit)
def transform_generator(self, x_test):
# type: (typing.Iterable[typing.Iterator[float]]) -> Any
assert self.trained_encoder is not None, "This ITQEncoder instance is not fitted yet. Call 'fit' with appropriate arguments before using this method."
return self._buffered_process(x_test, self.trained_encoder.encode_multi)
def inverse_transform_generator(self, x_test):
# type: (typing.Iterable[typing.Iterator[int]]) -> Any
raise ("cannot decode binary to original with ITQ")
| 37.455696 | 158 | 0.623184 | 374 | 2,959 | 4.743316 | 0.350267 | 0.023675 | 0.028185 | 0.021421 | 0.085682 | 0.066516 | 0.066516 | 0.066516 | 0.066516 | 0.066516 | 0 | 0.004634 | 0.2707 | 2,959 | 78 | 159 | 37.935897 | 0.817424 | 0.118959 | 0 | 0 | 0 | 0 | 0.085295 | 0 | 0 | 0 | 0 | 0 | 0.056604 | 1 | 0.169811 | false | 0 | 0.09434 | 0 | 0.396226 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b280f4adff572c94a64e46c4f065a54887c3e792 | 2,963 | py | Python | tests/conftest.py | Cytomine-ULiege/pims | 3c13f054be3ce9b6755428ccd9c5e0c1a8fb02d4 | [
"Apache-2.0"
] | 2 | 2022-01-19T08:58:12.000Z | 2022-01-28T14:40:41.000Z | tests/conftest.py | Cytomine-ULiege/pims | 3c13f054be3ce9b6755428ccd9c5e0c1a8fb02d4 | [
"Apache-2.0"
] | 18 | 2021-09-20T08:47:11.000Z | 2022-03-14T15:51:37.000Z | tests/conftest.py | Cytomine-ULiege/pims | 3c13f054be3ce9b6755428ccd9c5e0c1a8fb02d4 | [
"Apache-2.0"
] | null | null | null | # * Copyright (c) 2020-2021. Authors: see NOTICE file.
# *
# * Licensed under the Apache License, Version 2.0 (the "License");
# * you may not use this file except in compliance with the License.
# * You may obtain a copy of the License at
# *
# * http://www.apache.org/licenses/LICENSE-2.0
# *
# * Unless required by applicable law or agreed to in writing, software
# * distributed under the License is distributed on an "AS IS" BASIS,
# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# * See the License for the specific language governing permissions and
# * limitations under the License.
import os
import shutil
from contextlib import contextmanager
from pathlib import Path
import pytest
from fastapi.testclient import TestClient
from pims import config
os.environ['CONFIG_FILE'] = "./pims-config.env"
with open(os.path.join(os.path.dirname(__file__), 'fake_files.csv'), 'r') as f:
lines = f.read().splitlines()
_fake_files = dict()
for line in lines[1:]:
filetype, filepath, link, role, kind = line.split(",")
_fake_files[filepath] = {
"filetype": filetype,
"filepath": filepath,
"link": link,
"role": role,
"collection": (kind == "collection")
}
fake_files_info = _fake_files.values()
def create_fake_files(fake_files):
root = Path(test_root())
for ff in fake_files.values():
path = root / Path(ff['filepath'])
path.parent.mkdir(exist_ok=True, parents=True)
if ff['filetype'] == "f":
path.touch(exist_ok=True)
elif ff['filetype'] == "d":
path.mkdir(exist_ok=True, parents=True)
elif ff['filetype'] == "l" and not path.exists():
link = root / Path(ff['link'])
target_is_directory = True if fake_files[ff['link']]['filetype'] == "d" else False
path.symlink_to(link, target_is_directory=target_is_directory)
@pytest.fixture(scope="session")
def fake_files(request):
create_fake_files(_fake_files)
def teardown():
shutil.rmtree(test_root())
request.addfinalizer(teardown)
return _fake_files
def test_root():
return get_settings().root
def get_settings():
return config.Settings(
_env_file=os.getenv("CONFIG_FILE")
)
@pytest.fixture
def settings():
return get_settings()
@pytest.fixture
def app():
from pims import application as main
main.app.dependency_overrides[config.get_settings] = get_settings
return main.app
@pytest.fixture
def client(app):
return TestClient(app)
@contextmanager
def not_raises(expected_exc):
try:
yield
except expected_exc as err:
raise AssertionError(
f"Did raise exception {repr(expected_exc)} when it should not!"
)
except Exception as err:
raise AssertionError(
f"An unexpected exception {repr(err)} raised."
)
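# Hedged usage note (illustrative only): inside a test, `not_raises` asserts
# that a call completes without raising the given exception type, e.g.:
#
#     def test_root_is_configured(settings):
#         with not_raises(AttributeError):
#             _ = settings.root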
| 26.455357 | 94 | 0.653392 | 383 | 2,963 | 4.91906 | 0.409922 | 0.062102 | 0.017516 | 0.016985 | 0.080679 | 0.028662 | 0 | 0 | 0 | 0 | 0 | 0.005729 | 0.234222 | 2,963 | 111 | 95 | 26.693694 | 0.824592 | 0.20621 | 0 | 0.071429 | 0 | 0 | 0.111682 | 0 | 0 | 0 | 0 | 0 | 0.028571 | 1 | 0.128571 | false | 0 | 0.114286 | 0.057143 | 0.328571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b285dd65057928f2d0062e21be0d8d213000da4d | 6,841 | py | Python | scanner.py | GioLomia/FinalProject | e9f249fb6129dc104f2e013d49dac8580605a621 | [
"MIT"
] | 3 | 2019-02-15T12:03:29.000Z | 2019-07-27T17:24:44.000Z | scanner.py | GioLomia/Barcode-QR-Detective | e9f249fb6129dc104f2e013d49dac8580605a621 | [
"MIT"
] | null | null | null | scanner.py | GioLomia/Barcode-QR-Detective | e9f249fb6129dc104f2e013d49dac8580605a621 | [
"MIT"
] | null | null | null | ######################################################################
# Author: Giorgi Lomia
# Username: lomiag
# #
# Assignment: Project 1
#
# Class definitions of the Scanner and GUI classes
#######################################################################
from a08_upc_start import *
from bar_detector import *
import tkinter
import tkinter.filedialog as filer
class Scanner:
"""
    A barcode scanner class that can decode a typed-in number, an image file, or a live webcam feed.
"""
all_mods=["NUMBER","IMAGE","LIVE"]
browse_mode=["ON","OFF"]
def __init__(self,value_to_unpack="",on_off="ON",browser_mode_on_off="ON",qr_on_off="OFF",):
self.mode = ""
self.on_off = on_off
self.value_to_unpack=str(value_to_unpack)
self.barcode = ""
self.browser = browser_mode_on_off
self.qr_mode = qr_on_off
def decode(self):
"""
Decodes in different modes
:return:
"""
#If the scanner is ON
#Number mode
if self.mode=="NUMBER":
#makes the barcode parameter of the class equal to the input plus the modulo
return self.barcode
#Image mode
elif self.mode=="IMAGE":
#Asks user for the image file to read until the file is actually in the library
#catches the errors
self.barcode=bar_decoder(self.value_to_unpack)
if self.barcode.isalpha():
self.qr_mode="ON"
#Returns the barcode.
return self.barcode
#Live video feed mode
elif self.mode=="LIVE":
            # Captures video from the webcam and takes a photo at the right moment. For reference, check bar_detector.py
webcam_capture(VideoCapture(0))
try:
#barcode from the image decoded
self.barcode=bar_decoder("Product.png")
os.remove("Product.png")
except:
self.barcode=""
return self.barcode
def decode_anything(self):
"""
full final decoding
:return: None
"""
self.decode()
if self.barcode != None:
if self.barcode.isnumeric():
self.qr_mode="OFF"
else:
self.qr_mode="ON"
webbrowser.open(self.web_browse())
def web_browse(self):
"""
Opens the product page in a web browser
:return: barcode
"""
#If we are reading a regular barcode
if self.qr_mode == "OFF" and self.browser=="ON":
#If the scanner is on
#Open the product page
return 'https://www.barcodelookup.com/'+self.barcode
else:
#Print the code on the screen
return self.barcode
class GUI:
def __init__(self,master,scanner):
"""
Definition of all the attributes of the GUI class
:param master: The window that needs to be used
:param scanner: The scanner that needs to be used
"""
##################Screen##################
self.master = master
self.scanner=scanner
self.label = tkinter.Label(master, text="Barcode Number")
self.label.pack()
self.master.minsize(width=700, height=500)
self.master.maxsize(width=1550, height=1100)
self.master.title("Barcode Detective")
##################Interface###############
#Number entry box
self.number_box=tkinter.Entry()
self.number_box.pack()
#Number mode button
self.number_button=tkinter.Button(self.master, text="Number", command=self.get_number)
self.number_button.pack(padx=30,pady=10)
#Image mode button
self.image_button=tkinter.Button(self.master, text="Image", command=self.get_image)
self.image_button.pack(padx=30,pady=10)
#Live mode button
self.live_button=tkinter.Button(self.master, text="Live", command=self.get_live)
self.live_button.pack(padx=30,pady=10)
#Close button
self.close_button=tkinter.Button(self.master, text="Close", command=exit)
self.close_button.pack(padx=30,pady=10)
#Helper button
self.help_button=tkinter.Button(self.master, text="Help", command=self.helper)
self.help_button.pack(side="bottom")
#Helping Text
self.text_helper=tkinter.StringVar()
self.words=tkinter.Label(master,textvariable=self.text_helper)
self.words.pack()
def get_number(self):
"""
The function necessary for the number decoding
:return: None
"""
self.scanner.mode="NUMBER"
self.scanner.barcode=self.number_box.get()
self.text_helper.set("")
checker=self.scanner.barcode[:11]
if is_valid_input(self.scanner.barcode[:11]):
if is_valid_modulo(checker)==self.scanner.barcode:
self.scanner.decode_anything()
else:
self.text_helper.set("This Is an Invalid Barcode, Please Enter Again.")
else:
self.text_helper.set("Input Too Short, Please Enter Again")
def get_image(self):
"""
The function necessary for the image decoding
:return: None
"""
self.scanner.mode="IMAGE"
self.text_helper.set("")
try:
self.scanner.value_to_unpack=filer.askopenfilename()
self.scanner.decode_anything()
except:
self.text_helper.set("Invalid File, Please Select an Image to open")
def get_live(self):
"""
The function necessary for the live video decoding
:return: None
"""
self.scanner.mode="LIVE"
self.scanner.decode_anything()
def helper(self):
"""
A helping hand for the user.
:return: None
"""
self.text_helper.set("""
Barcode Detective is a scanner application. It has 3 modes of operation.
The scanner reads barcodes and translates them to the EAN system in order to easily look up the product.
It can also translate QR codes into web links and open them.
1) Number mode - Opens a page for the number you type into the entry box, if it is a valid barcode.
*Type in the 12 digit number from the barcode you want to find.
2) Image mode - Allows you to select an image from your computer if it has a barcode. It opens a webpage.
*Select an image from your computer.
3) Live mode - Starts up a web camera, allows live scanning of barcodes and opens a webpage for the product.
*Put a barcode in front of your camera; if you want to quit the webcam mode press 'q'.""")
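# Hedged usage sketch (not part of the original module): one way to wire the
# Scanner and GUI classes together; it assumes a display is available for
# tkinter and that the supporting modules (a08_upc_start, bar_detector)
# import cleanly.
if __name__ == "__main__":
    root = tkinter.Tk()
    gui = GUI(root, Scanner())
    root.mainloop()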
| 36.978378 | 124 | 0.578424 | 844 | 6,841 | 4.591232 | 0.263033 | 0.034065 | 0.028903 | 0.026323 | 0.171871 | 0.144 | 0.014968 | 0 | 0 | 0 | 0 | 0.009232 | 0.303318 | 6,841 | 184 | 125 | 37.179348 | 0.803819 | 0.191639 | 0 | 0.207921 | 0 | 0.039604 | 0.245817 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.089109 | false | 0 | 0.039604 | 0 | 0.217822 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b2863a91e7b4b24907515b543038be555f050df6 | 989 | py | Python | app/spider/utils.py | iszhouhua/weixin-sogou | 5c39fe249a0e053060d76c6e4a7d9cad6783b5b1 | [
"Apache-2.0"
] | 1 | 2021-11-14T13:52:57.000Z | 2021-11-14T13:52:57.000Z | app/spider/utils.py | iszhouhua/weixin-sogou | 5c39fe249a0e053060d76c6e4a7d9cad6783b5b1 | [
"Apache-2.0"
] | 1 | 2021-12-08T07:33:21.000Z | 2021-12-08T07:33:21.000Z | app/spider/utils.py | iszhouhua/weixin-sogou | 5c39fe249a0e053060d76c6e4a7d9cad6783b5b1 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import absolute_import, unicode_literals, print_function
import re
import time
def format_url(url, base):
if not url:
return ''
return base + url if not re.match(r'http(s?)://', url) else url
def format_time(timestamp, format_str="%Y-%m-%d %H:%M:%S"):
if type(timestamp) == str:
timestamp = int(timestamp)
struct_time = time.localtime(timestamp)
return time.strftime(format_str, struct_time)
def get_elem_text(element, path=None):
"""
    Get the text content of a node
"""
if element is None:
return ''
if path:
element = element.xpath(path)
if element is None:
return ''
return ''.join([node.strip() for node in element]).replace('\n', '').replace('\r', '')
def get_first_elem(element, path=None):
"""
    Get the first element of a node
"""
if element is None:
return None
if path:
element = element.xpath(path)
return element[0] if element else None
| 22.477273 | 90 | 0.61274 | 132 | 989 | 4.462121 | 0.416667 | 0.061121 | 0.056027 | 0.076401 | 0.205433 | 0.098472 | 0 | 0 | 0 | 0 | 0 | 0.002706 | 0.252781 | 989 | 43 | 91 | 23 | 0.794317 | 0.042467 | 0 | 0.384615 | 0 | 0 | 0.034935 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.115385 | 0 | 0.576923 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b28a65c03adf3b40d955da1cf03db278f3f102fb | 2,941 | py | Python | src/schoology-extractor/edfi_schoology_extractor/mapping/assignments.py | stephenfuqua/Ed-Fi-X-Fizz | 94597eda585d4f62f69c12e2a58fa8e8846db11b | [
"Apache-2.0"
] | 3 | 2020-10-15T10:29:59.000Z | 2020-12-01T21:40:55.000Z | src/schoology-extractor/edfi_schoology_extractor/mapping/assignments.py | stephenfuqua/Ed-Fi-X-Fizz | 94597eda585d4f62f69c12e2a58fa8e8846db11b | [
"Apache-2.0"
] | 40 | 2020-08-17T21:08:33.000Z | 2021-02-02T19:56:09.000Z | src/schoology-extractor/edfi_schoology_extractor/mapping/assignments.py | stephenfuqua/Ed-Fi-X-Fizz | 94597eda585d4f62f69c12e2a58fa8e8846db11b | [
"Apache-2.0"
] | 10 | 2021-06-10T16:27:27.000Z | 2021-12-27T12:31:57.000Z | # SPDX-License-Identifier: Apache-2.0
# Licensed to the Ed-Fi Alliance under one or more agreements.
# The Ed-Fi Alliance licenses this file to you under the Apache License, Version 2.0.
# See the LICENSE and NOTICES files in the project root for more information.
from datetime import datetime
import pandas as pd
from . import constants
def map_to_udm(assignments_df: pd.DataFrame, section_id: int) -> pd.DataFrame:
"""
Maps a DataFrame containing Schoology assignments into the Ed-Fi LMS Unified Data
Model (UDM) format.
Parameters
----------
assignments_df: DataFrame
Pandas DataFrame containing Schoology assignments for a section
section_id: int
The Section ID to which the assignments belong
Returns
-------
DataFrame
        An LMSUsers-formatted DataFrame
Notes
-----
DataFrame columns are:
SourceSystemIdentifier: A unique number or alphanumeric code assigned to a user by the source system
SourceSystem: The system code or name providing the user data
Title: Assignment's title
AssignmentCategory: Category/type of assignment
AssignmentDescription: Full text description of the assignment
StartDateTime: Date/time stamp when the assignment opens
EndDateTime: Date/time stamp when the assignment closes
DueDateTime: Date/time stamp when the assignment is due for full credit
SubmissionType: Type of submission
MaxPoints: Maximum points available for the submission
LMSSectionSourceSystemIdentifier: Section identifier as recorded in the LMS
CreateDate: datetime at which the record was first retrieved
LastModifiedDate: datetime when the record was modified, or when first retrieved
SourceCreateDate: Date this record was created in the LMS
SourceLastModifiedDate: Date this record was last updated in the LMS
"""
if assignments_df.empty:
return assignments_df
df = assignments_df[
[
"id",
"title",
"description",
"due",
"max_points",
"type",
"CreateDate",
"LastModifiedDate",
]
].copy()
df["SourceSystem"] = constants.SOURCE_SYSTEM
df.rename(
columns={
"id": "SourceSystemIdentifier",
"title": "Title",
"description": "AssignmentDescription",
"max_points": "MaxPoints",
"type": "AssignmentCategory",
"due": "DueDateTime",
},
inplace=True,
)
df["DueDateTime"] = df["DueDateTime"].apply(
lambda x: datetime.strptime(x, "%Y-%m-%d %H:%M:%S")
)
df["LMSSectionSourceSystemIdentifier"] = section_id
df["SubmissionType"] = None
df["SourceCreateDate"] = ""
df["SourceLastModifiedDate"] = ""
df["StartDateTime"] = ""
df["EndDateTime"] = ""
return df
| 31.967391 | 108 | 0.647739 | 322 | 2,941 | 5.875776 | 0.434783 | 0.034355 | 0.011099 | 0.026956 | 0.047569 | 0.047569 | 0 | 0 | 0 | 0 | 0 | 0.001868 | 0.272016 | 2,941 | 91 | 109 | 32.318681 | 0.881831 | 0.551173 | 0 | 0 | 0 | 0 | 0.287764 | 0.081857 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025 | false | 0 | 0.075 | 0 | 0.15 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b28bb2ceaf8a004e16e9ed24cd2eaec8d81074f0 | 5,513 | py | Python | atve/cmd.py | TE-ToshiakiTanaka/atve | e6ad4d2343dc9271d173729c2680eddf3d5dd8a6 | [
"MIT"
] | null | null | null | atve/cmd.py | TE-ToshiakiTanaka/atve | e6ad4d2343dc9271d173729c2680eddf3d5dd8a6 | [
"MIT"
] | null | null | null | atve/cmd.py | TE-ToshiakiTanaka/atve | e6ad4d2343dc9271d173729c2680eddf3d5dd8a6 | [
"MIT"
] | null | null | null | import os
import sys
import time
import errno
import threading
import subprocess
import traceback
from atve import STRING_SET
from atve import PYTHON_VERSION
from atve.exception import *
if PYTHON_VERSION == 2:
FileNotFoundError = IOError
class ThreadWithReturn(threading.Thread):
def __init__(self, *args, **kwargs):
super(ThreadWithReturn, self).__init__(*args, **kwargs)
self._return = None
def run(self):
if self._Thread__target is not None:
self._return = self._Thread__target(*self._Thread__args, **self._Thread__kwargs)
def join(self, timeout=None):
super(ThreadWithReturn, self).join(timeout=timeout)
return self._return
def run_ab(cmd, debug=False, shell=False):
if shell == False and type(cmd) in STRING_SET:
cmd = [c for c in cmd.split() if c != '']
if debug:
sys.stderr.write(''.join(cmd) + '\n')
sys.stderr.flush()
try:
subproc_args = { 'stdin' : subprocess.PIPE,
'stdout' : subprocess.PIPE,
'stderr' : subprocess.STDOUT,
'shell' : shell }
        proc = None
        try:
proc = subprocess.call(cmd, **subproc_args)
except FileNotFoundError as e:
out = "{0}: {1}\n{2}".format(type(e).__name__, e, traceback.format_exc())
raise RunError(cmd, None, message='Raise Exception : %s' % out)
except Exception as e:
if proc != None: proc.kill()
out = "{0}: {1}\n{2}".format(type(e).__name__, e, traceback.format_exc())
raise TimeoutError({
'cmd' : cmd,
'out' : None,
'message' : 'command %s is time out' % cmd
})
except OSError as e:
out = "{0}: {1}\n{2}".format(type(e).__name__, e, traceback.format_exc())
raise RunError(cmd, None, message='Raise Exception : %s' % out)
except RuntimeError as e:
out = "{0}: {1}\n{2}".format(type(e).__name__, e, traceback.format_exc())
raise RunError(cmd, None, message='Raise Exception : %s' % out)
def run(cmd, cwd=None, timeout=60, debug=False, shell=False):
if shell == False and type(cmd) in STRING_SET:
cmd = [c for c in cmd.split() if c != '']
if debug:
sys.stderr.write(''.join(cmd) + '\n')
sys.stderr.flush()
try:
if PYTHON_VERSION == 2:
proc = subprocess.Popen(cmd,
cwd = cwd,
stdout = subprocess.PIPE,
stderr = subprocess.PIPE,
shell = shell)
proc_thread = ThreadWithReturn(target=proc.communicate)
proc_thread.start()
result = proc_thread.join(timeout)
if proc_thread.is_alive():
try:
proc.kill()
except OSError as e:
out = "{0}: {1}\n{2}".format(type(e).__name__, e, traceback.format_exc())
raise RunError(cmd, None, message='Raise Exception : %s' % out)
raise TimeoutError({
'cmd' : cmd,
'out' : None,
'message' : 'command %s is time out' % cmd
})
returncode = proc.returncode
if shell: returncode = 0
if result == None:
out = None; err = None;
else:
out = result[0]; err = result[1]
else:
            proc2 = None
            try:
proc2 = subprocess.Popen(cmd,
cwd = cwd,
stdout = subprocess.PIPE,
stderr = subprocess.PIPE,
shell = shell)
out, err = proc2.communicate(timeout=timeout)
returncode = proc2.returncode
if shell: returncode = 0
except FileNotFoundError as e:
out = "{0}: {1}\n{2}".format(type(e).__name__, e, traceback.format_exc())
raise RunError(cmd, None, message='Raise Exception : %s' % out)
except Exception as e:
if proc2 != None: proc2.kill()
out = "{0}: {1}\n{2}".format(type(e).__name__, e, traceback.format_exc())
raise TimeoutError({
'cmd' : cmd,
'out' : None,
'message' : 'command %s is time out' % cmd
})
except OSError as e:
out = "{0}: {1}\n{2}".format(type(e).__name__, e, traceback.format_exc())
raise RunError(cmd, None, message='Raise Exception : %s' % out)
except RuntimeError as e:
out = "{0}: {1}\n{2}".format(type(e).__name__, e, traceback.format_exc())
raise RunError(cmd, None, message='Raise Exception : %s' % out)
try:
if PYTHON_VERSION == 2:
if isinstance(out, bytes): out = out.decode("utf8")
if isinstance(err, bytes): err = err.decode("utf8")
else:
if isinstance(out, bytes): out = str(out.decode(sys.stdin.encoding))
if isinstance(err, bytes): err = str(err.decode(sys.stdin.encoding))
except UnicodeDecodeError as e:
out = "{0}: {1}\n{2}".format(type(e).__name__, e, traceback.format_exc())
sys.stderr.write(out)
return (returncode, out, err)
| 41.451128 | 93 | 0.506439 | 608 | 5,513 | 4.447368 | 0.154605 | 0.011095 | 0.018491 | 0.022189 | 0.627219 | 0.546228 | 0.546228 | 0.546228 | 0.546228 | 0.546228 | 0 | 0.013241 | 0.369853 | 5,513 | 132 | 94 | 41.765152 | 0.765112 | 0 | 0 | 0.598361 | 0 | 0 | 0.074188 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040984 | false | 0 | 0.07377 | 0 | 0.139344 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b295f892ed9d9bf3578e1626833e04b363d2da97 | 762 | py | Python | sfm/ui/migrations/0020_auto_20180608_1144.py | Xtuden-com/sfm-ui | 4c294a79f0946924e5877e864d94ad76e1edd5bd | [
"MIT"
] | 129 | 2015-10-08T18:49:38.000Z | 2022-02-19T23:16:24.000Z | sfm/ui/migrations/0020_auto_20180608_1144.py | Xtuden-com/sfm-ui | 4c294a79f0946924e5877e864d94ad76e1edd5bd | [
"MIT"
] | 941 | 2015-08-31T17:57:54.000Z | 2022-03-16T22:02:34.000Z | sfm/ui/migrations/0020_auto_20180608_1144.py | Xtuden-com/sfm-ui | 4c294a79f0946924e5877e864d94ad76e1edd5bd | [
"MIT"
] | 31 | 2016-03-06T09:25:02.000Z | 2021-02-03T11:53:29.000Z | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('ui', '0019_auto_20180530_1831'),
]
operations = [
migrations.AddField(
model_name='collection',
name='link',
field=models.CharField(help_text=b'Link to a public version of this collection.', max_length=512, verbose_name=b'Public link', blank=True),
),
migrations.AddField(
model_name='historicalcollection',
name='link',
field=models.CharField(help_text=b'Link to a public version of this collection.', max_length=512, verbose_name=b'Public link', blank=True),
),
]
| 30.48 | 151 | 0.635171 | 87 | 762 | 5.37931 | 0.505747 | 0.076923 | 0.098291 | 0.115385 | 0.495727 | 0.495727 | 0.495727 | 0.495727 | 0.495727 | 0.495727 | 0 | 0.04028 | 0.250656 | 762 | 24 | 152 | 31.75 | 0.779335 | 0.027559 | 0 | 0.444444 | 0 | 0 | 0.2341 | 0.031123 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b296c0631a321b2b8a3f6fd2050b6d562f4be8df | 9,272 | py | Python | qtplotter.py | cover-me/qtplot | d95f085227071a58a3dba69bcfbe56605d93e7bd | [
"MIT"
] | null | null | null | qtplotter.py | cover-me/qtplot | d95f085227071a58a3dba69bcfbe56605d93e7bd | [
"MIT"
] | null | null | null | qtplotter.py | cover-me/qtplot | d95f085227071a58a3dba69bcfbe56605d93e7bd | [
"MIT"
] | null | null | null | # pieces of code taken from project qtplot
import os
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
from scipy import ndimage
plt.rcParams['figure.facecolor'] = 'white'
def create_kernel(x_dev, y_dev, cutoff, distr):
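    # Build a normalized 2D convolution kernel from the chosen radial profile
    # ('gaussian', 'exponential', 'lorentzian' or 'thermal'); x_dev/y_dev set the
    # widths and cutoff limits the extent (in units of the deviation).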
distributions = {
'gaussian': lambda r: np.exp(-(r**2) / 2.0),
'exponential': lambda r: np.exp(-abs(r) * np.sqrt(2.0)),
'lorentzian': lambda r: 1.0 / (r**2+1.0),
'thermal': lambda r: np.exp(r) / (1 * (1+np.exp(r))**2)
}
func = distributions[distr]
hx = int(np.floor((x_dev * cutoff) / 2.0))
hy = int(np.floor((y_dev * cutoff) / 2.0))
x = np.zeros(1) if x_dev==0 else np.linspace(-hx, hx, hx * 2 + 1) / x_dev
y = np.zeros(1) if y_dev==0 else np.linspace(-hy, hy, hy * 2 + 1) / y_dev
xv, yv = np.meshgrid(x, y)
kernel = func(np.sqrt(xv**2+yv**2))
kernel /= np.sum(kernel)
return kernel
def yderiv(d):
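    # Differentiate z = d[2] with respect to y = d[1] in place: central differences
    # in the interior, one-sided differences at the two boundaries.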
# https://en.wikipedia.org/wiki/Finite_difference_coefficient
# https://docs.scipy.org/doc/numpy/reference/generated/numpy.gradient.html
y = d[1]
z = d[2]
dzdy0 = (z[1]-z[0])/(y[1]-y[0])
dzdy1 = (z[-2]-z[-1])/(y[-2]-y[-1])
z[1:-1] = (z[2:] - z[:-2])/(y[2:] - y[:-2])
z[0] = dzdy0
z[-1] = dzdy1
def lowpass(d, x_width=0.5, y_height=0.5, method='gaussian'):
"""Perform a low-pass filter."""
z = d[2]
kernel = create_kernel(x_width, y_height, 7, method)
z[:] = ndimage.filters.convolve(z, kernel)
def scale(d,amp=[]):
for i, ai in enumerate(amp):
d[i] *= ai
def offset(d,off=[]):
for i, oi in enumerate(off):
d[i] += oi
def g_in_g2(d, rin):
"""z = z/(1-(z*Rin))/7.74809e-5. z: conductance in unit 'S', R in unit 'ohm' (SI units)"""
G2 = 7.74809e-5#ohm^-1, 2e^2/h
d[2] = d[2]/(1-(d[2]*rin))/G2
def readMTX(fPath):
'''
read 2D .mtx files, which have the form:
Units, Dataset name, xname, xmin, xmax, yname, ymin, ymax, zname, zmin, zmax
nx ny nz length
[binary data....]
'''
with open(fPath, 'rb') as f:
line1 = f.readline().decode().rstrip('\n\t\r')
if line1.startswith('Units'):#MTX file
_ = line1.split(',')
labels = [x.strip() for x in [_[2],_[5],_[8],_[1]]]
line2 = f.readline().decode()
shape = [int(x) for x in line2.split(' ')]
x = np.linspace(float(_[3]),float(_[4]),shape[0])
y = np.linspace(float(_[6]),float(_[7]),shape[1])
z = np.linspace(float(_[9]),float(_[10]),shape[2])
z,y,x = np.meshgrid(z,y,x,indexing='ij')
dtp = np.float64 if shape[3] == 8 else np.float32#data type
shape = shape[0:3]
w = np.fromfile(f,dtype=dtp).reshape(shape).T
if shape[2] == 1:# only care about .mtx files with 2D data
return x[0],y[0],w[0],[labels[0],labels[1],labels[3]]
def readDat(fPath,cols=[0,1,3],cook=None,a3=2,a3index=0):
'''
read .dat files
'''
    sizes = []  # nx,ny,nz for each dimension of the scan. Assume a 3D scan (1D and 2D scans are treated as special cases of 3D).
labels = []# labels for each column
# read comments
with open(fPath, 'r') as f:
for line in f:
line = line.rstrip('\n\t\r')
if line.startswith('#\tname'):
labels.append(line.split(': ', 1)[1])
elif line.startswith('#\tsize'):
sizes.append(int(line.split(': ', 1)[1]))
if len(line) > 0 and line[0] != '#':# where comments end
break
# load data
print('File: %s, cols: %s'%(os.path.split(fPath)[1],[labels[i] for i in cols]))
d = np.loadtxt(fPath,usecols=cols)
    # assume this is data from a 3D scan; we call the elements of D1/D2/D3 points/lines/pages
n_per_line = sizes[0]
n_per_page = sizes[0]*sizes[1]
n_dp = d.shape[0]# Real number of datapoints
n_pg = int(np.ceil(float(n_dp)/n_per_page))# number of pages, it may be smaller than sizes[2] because sometimes the scan is interrupted by a user
pivot = np.full((len(cols),n_per_page*n_pg), np.nan)# initialize with na.nan
pivot[:,:n_dp] = d.T
pivot = pivot.reshape([len(cols),n_pg,sizes[1],sizes[0]])
# You have a 3D scan, you want to extract a 2D slice. a3 and a3index are the parameters for slicing
if a3 == 0:#slice with x_index=const
pivot = pivot[:,:,:,a3index]
elif a3 == 1:#y_ind=const
pivot = pivot[:,:,a3index,:]
elif a3 == 2:#z_ind=const
pivot = pivot[:,a3index,:,:]
# remove nan lines in x,y,w...
nans = np.isnan(pivot[0,:,0])
pivot = pivot[:,~nans,:]
# Some values in the last line of x and y may be nan. Recalculate these values. Keep w untouched.
nans = np.isnan(pivot[0,-1,:])
pivot[:2,-1,nans] = pivot[:2,-2,nans]*2.-pivot[:2,-3,nans]
# autoflip for filters and imshow()
pivot = autoflip(pivot)
if cook:
cook(pivot)
x,y,w = pivot[:3]
return x,y,w,[labels[cols[i]] for i in range(3)]
def autoflip(d):
# Make the order of elements in x and y good for imshow() and filters
x = d[0]
y = d[1]
xa = abs(x[0,0]-x[0,-1])
xb = abs(x[0,0]-x[-1,0])
ya = abs(y[0,0]-y[0,-1])
yb = abs(y[0,0]-y[-1,0])
if (xa<xb and yb<ya) or (xa>xb and yb<ya and yb/ya<xb/xa) or (xa<xb and yb>ya and ya/yb>xa/xb):
d = np.transpose(d, (0, 2, 1))# swap axis 1 and 2
    x = d[0]  # note: x, y won't update after d changes. There may be NaN in the last lines of x and y.
y = d[1]
if x[0,0]>x[0,-1]:
d = d[:,:,::-1]
if y[0,0]>y[-1,0]:
d = d[:,::-1,:]
return d
def get_quad(x):
'''
Calculate the patch corners for pcolormesh
More discussion can be found here: https://cover-me.github.io/2019/02/17/Save-2d-data-as-a-figure.html, https://cover-me.github.io/2019/04/04/Save-2d-data-as-a-figure-II.html
'''
l0, l1 = x[:,[0]], x[:,[1]]
r1, r0 = x[:,[-2]], x[:,[-1]]
x = np.hstack((2*l0 - l1, x, 2*r0 - r1))
t0, t1 = x[0], x[1]
b1, b0 = x[-2], x[-1]
x = np.vstack([2*t0 - t1, x, 2*b0 - b1])
x = (x[:-1,:-1]+x[:-1,1:]+x[1:,:-1]+x[1:,1:])/4.
return x
def plotMTX(fPath,**kw):
x,y,w,labels = readMTX(fPath)
if 'labels' not in kw:
kw['labels'] = labels
return plot2d(x,y,w,**kw)
def plotDat(fPath,cols=[0,1,3],cook=None,**kw):
x,y,w,labels = readDat(fPath,cols,cook)
if 'labels' not in kw:
kw['labels'] = labels
return plot2d(x,y,w,**kw)
def plot2d(x,y,w,**kw):
'''Plot 2D figure'''
#plot settings
ps = {'labels':['','',''],'useImshow':True,'gamma':0,'gmode':'moveColor',
'cmap':'seismic','vmin':None, 'vmax':None,'plotCbar':True}
for i in ps:
if i in kw:
ps[i] = kw[i]
if 'ax' in kw and 'fig' in kw:
# sometimes you want to use customized axes
fig = kw['fig']
ax = kw['ax']
else:
fig, ax = plt.subplots(figsize=(3.375,2),dpi=120)
x1 = get_quad(x)
y1 = get_quad(y)
imkw = {'cmap':ps['cmap'],'vmin':ps['vmin'],'vmax':ps['vmax']}
gamma_real = 10.0**(ps['gamma'] / 100.0)#to be consistent with qtplot
if gamma_real != 1:
if ps['gmode']=='moveColor':
_ = 1024# default: 256
cmap = mpl.cm.get_cmap(imkw['cmap'], _)
cmap = mpl.colors.ListedColormap(cmap(np.linspace(0, 1, _)**gamma_real))
imkw['cmap'] = cmap
else:
imkw['norm'] = mpl.colors.PowerNorm(gamma=gamma_real)
if ps['useImshow']:#slightly different from pcolormesh, especially if saved as vector formats. Imshow is better if it works. See the links in get_quad().
xy_range = (x1[0,0],x1[0,-1],y1[0,0],y1[-1,0])
im = ax.imshow(w,aspect='auto',interpolation='none',origin='lower',extent=xy_range,**imkw)
else:
im = ax.pcolormesh(x1,y1,w,**imkw)
if ps['plotCbar']:
cbar = fig.colorbar(im,ax=ax)
cbar.set_label(ps['labels'][2])
else:
cbar = None
ax.set_xlabel(ps['labels'][0])
ax.set_ylabel(ps['labels'][1])
return fig,ax,cbar,im
def simpAx(ax,cbar,im,n=(None,None,None),pad=(-5,-15,-10)):
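    # Simplify the axes: for each axis where n[i] is given, snap the x/y/colorbar limits
    # to multiples of 10**(-n[i]), keep only the two end ticks and tighten the label padding.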
_min,_max = ax.get_xlim()
if n[0] is not None:
a = 10**(-n[0])
_min = np.ceil(_min/a)*a
_max = np.floor(_max/a)*a
ax.set_xlim(_min,_max)
ax.set_xticks([_min,_max])
ax.xaxis.labelpad = pad[0]
_min,_max = ax.get_ylim()
if n[1] is not None:
a = 10**(-n[1])
_min = np.ceil(_min/a)*a
_max = np.floor(_max/a)*a
ax.set_ylim(_min,_max)
ax.set_yticks([_min,_max])
ax.yaxis.labelpad = pad[1]
#assumes a vertical colorbar
_min,_max = cbar.ax.get_ylim()
label = cbar.ax.get_ylabel()
if n[2] is not None:
a = 10**(-n[2])
_min = np.ceil(_min/a)*a
_max = np.floor(_max/a)*a
im.set_clim(_min,_max)
cbar.set_ticks([_min,_max])
cbar.ax.yaxis.labelpad = pad[2]
cbar.ax.set_ylabel(label) | 35.937984 | 179 | 0.537317 | 1,537 | 9,272 | 3.173715 | 0.258295 | 0.0041 | 0.00492 | 0.00738 | 0.145141 | 0.102091 | 0.057196 | 0.042435 | 0.042435 | 0.039975 | 0 | 0.04971 | 0.275345 | 9,272 | 258 | 180 | 35.937985 | 0.676291 | 0.203624 | 0 | 0.118557 | 0 | 0 | 0.046154 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072165 | false | 0.005155 | 0.025773 | 0 | 0.139175 | 0.005155 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b299b8c90a5e559e7aa1178f046079755f96cc97 | 18,233 | py | Python | evaluator.py | papercore-dev/dmfont | 3f1bccb5a2022a4b8547951e83b72f19296b257b | [
"MIT"
] | 95 | 2020-07-07T09:11:56.000Z | 2022-03-07T01:11:33.000Z | evaluator.py | papercore-dev/dmfont | 3f1bccb5a2022a4b8547951e83b72f19296b257b | [
"MIT"
] | 11 | 2020-10-16T09:29:59.000Z | 2022-03-13T03:55:33.000Z | evaluator.py | papercore-dev/dmfont | 3f1bccb5a2022a4b8547951e83b72f19296b257b | [
"MIT"
] | 23 | 2020-07-07T13:26:09.000Z | 2021-11-15T10:52:24.000Z | """
DMFont
Copyright (c) 2020-present NAVER Corp.
MIT license
"""
from itertools import chain
from pathlib import Path
import json
import argparse
import random
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import transforms
from tqdm import tqdm
from sconf import Config
import utils
from logger import Logger
from models import MACore
from datasets import uniform_sample
from datasets import kor_decompose as kor
from datasets import thai_decompose as thai
from inference import (
infer, get_val_loader,
infer_2stage, get_val_encode_loader, get_val_decode_loader
)
from ssim import SSIM, MSSSIM
def torch_eval(val_fn):
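    # Decorator for validation methods: run them under torch.no_grad() with the
    # generator switched to eval mode, restoring train mode afterwards.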
@torch.no_grad()
def decorated(self, gen, *args, **kwargs):
gen.eval()
ret = val_fn(self, gen, *args, **kwargs)
gen.train()
return ret
return decorated
class Evaluator:
"""DMFont evaluator.
The evaluator provides pixel-level evaluation and glyphs generation
from the reference style samples.
"""
def __init__(self, data, trn_avails, logger, writer, batch_size, transform,
content_font, language, meta, val_loaders, n_workers=2):
self.data = data
self.logger = logger
self.writer = writer
self.batch_size = batch_size
self.transform = transform
self.n_workers = n_workers
self.unify_resize_method = True
self.trn_avails = trn_avails
self.val_loaders = val_loaders
self.content_font = content_font
self.language = language
if self.language == 'kor':
self.n_comp_types = 3
elif self.language == 'thai':
self.n_comp_types = 4
else:
raise ValueError()
# setup cross-validation
self.SSIM = SSIM().cuda()
weights = [0.25, 0.3, 0.3, 0.15]
self.MSSSIM = MSSSIM(weights=weights).cuda()
n_batches = [len(loader) for loader in self.val_loaders.values()]
self.n_cv_batches = min(n_batches)
self.logger.info("# of cross-validation batches = {}".format(self.n_cv_batches))
# the number of chars/fonts for CV visualization
n_chars = 16
n_fonts = 16
seen_chars = uniform_sample(meta['train']['chars'], n_chars//2)
unseen_chars = uniform_sample(meta['valid']['chars'], n_chars//2)
unseen_fonts = uniform_sample(meta['valid']['fonts'], n_fonts)
self.cv_comparable_fonts = unseen_fonts
self.cv_comparable_chars = seen_chars + unseen_chars
allchars = meta['train']['chars'] + meta['valid']['chars']
self.cv_comparable_avails = {
font: allchars
for font in self.cv_comparable_fonts
}
def validation(self, gen, step, extra_tag=''):
self.comparable_validset_validation(gen, step, True, 'comparable_val'+extra_tag)
plot_dic = {}
for tag, loader in self.val_loaders.items():
tag = tag + extra_tag
l1, ssim, msssim = self.cross_validation(
gen, step, loader, tag, n_batches=self.n_cv_batches
)
plot_dic[f'val/{tag}/l1'] = l1
plot_dic[f'val/{tag}/ssim'] = ssim
plot_dic[f'val/{tag}/ms-ssim'] = msssim if not np.isnan(msssim) else 0.
self.writer.add_scalars(plot_dic, step)
return plot_dic
@torch_eval
def comparable_validset_validation(self, gen, step, compare_inputs=False, tag='comparable_val'):
"""Comparable validation on validation set from CV"""
comparable_grid = self.comparable_validation(
gen, self.cv_comparable_avails, self.cv_comparable_fonts, self.cv_comparable_chars,
n_max_match=1, compare_inputs=compare_inputs
)
self.writer.add_image(tag, comparable_grid, global_step=step)
@torch_eval
def comparable_validation(self, gen, style_avails, target_fonts, target_chars, n_max_match=3,
compare_inputs=False):
"""Compare horizontally for target fonts and chars"""
# infer
loader = get_val_loader(
self.data, target_fonts, target_chars, style_avails,
B=self.batch_size, n_max_match=n_max_match, transform=self.transform,
content_font=self.content_font, language=self.language, n_workers=self.n_workers
)
out = infer(gen, loader) # [B, 1, 128, 128]
# ref original chars
refs = self.get_charimages(target_fonts, target_chars)
compare_batches = [refs, out]
if compare_inputs:
compare_batches += self.get_inputimages(loader)
nrow = len(target_chars)
comparable_grid = utils.make_comparable_grid(*compare_batches, nrow=nrow)
return comparable_grid
@torch_eval
def cross_validation(self, gen, step, loader, tag, n_batches, n_log=64, save_dir=None):
"""Validation using splitted cross-validation set
Args:
n_log: # of images to log
save_dir: if given, images are saved to save_dir
"""
if save_dir:
save_dir = Path(save_dir)
save_dir.mkdir(parents=True, exist_ok=True)
outs = []
trgs = []
n_accum = 0
losses = utils.AverageMeters("l1", "ssim", "msssim")
for i, (style_ids, style_comp_ids, style_imgs,
trg_ids, trg_comp_ids, content_imgs, trg_imgs) in enumerate(loader):
if i == n_batches:
break
style_ids = style_ids.cuda()
style_comp_ids = style_comp_ids.cuda()
style_imgs = style_imgs.cuda()
trg_ids = trg_ids.cuda()
trg_comp_ids = trg_comp_ids.cuda()
trg_imgs = trg_imgs.cuda()
gen.encode_write(style_ids, style_comp_ids, style_imgs)
out = gen.read_decode(trg_ids, trg_comp_ids)
B = len(out)
# log images
if n_accum < n_log:
trgs.append(trg_imgs)
outs.append(out)
n_accum += B
if n_accum >= n_log:
# log results
outs = torch.cat(outs)[:n_log]
trgs = torch.cat(trgs)[:n_log]
self.merge_and_log_image(tag, outs, trgs, step)
l1, ssim, msssim = self.get_pixel_losses(out, trg_imgs, self.unify_resize_method)
losses.updates({
"l1": l1.item(),
"ssim": ssim.item(),
"msssim": msssim.item()
}, B)
# save images
if save_dir:
font_ids = trg_ids.detach().cpu().numpy()
images = out.detach().cpu() # [B, 1, 128, 128]
char_comp_ids = trg_comp_ids.detach().cpu().numpy() # [B, n_comp_types]
for font_id, image, comp_ids in zip(font_ids, images, char_comp_ids):
font_name = loader.dataset.fonts[font_id] # name.ttf
font_name = Path(font_name).stem # remove ext
(save_dir / font_name).mkdir(parents=True, exist_ok=True)
if self.language == 'kor':
char = kor.compose(*comp_ids)
elif self.language == 'thai':
char = thai.compose_ids(*comp_ids)
uni = "".join([f'{ord(each):04X}' for each in char])
path = save_dir / font_name / "{}_{}.png".format(font_name, uni)
utils.save_tensor_to_image(image, path)
self.logger.info(
" [Valid] {tag:30s} | Step {step:7d} L1 {L.l1.avg:7.4f} SSIM {L.ssim.avg:7.4f}"
" MSSSIM {L.msssim.avg:7.4f}"
.format(tag=tag, step=step, L=losses))
return losses.l1.avg, losses.ssim.avg, losses.msssim.avg
def get_pixel_losses(self, out, trg_imgs, unify):
"""
Args:
out: generated images
trg_imgs: target GT images
unify: if True is given, unify glyph size and resize method before evaluation.
This option give us the fair evaluation setting, which is used in the paper.
"""
def unify_resize_method(img):
# Unify various glyph size and resize method for fair evaluation
size = img.size(-1)
if size == 128:
transform = transforms.Compose([
transforms.ToPILImage(),
transforms.Resize([64, 64]),
transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))
])
img = torch.stack([transform(_img) for _img in img.cpu()]).cuda()
img = F.interpolate(img, scale_factor=2.0, mode='bicubic', align_corners=True)
return img
if unify:
out = unify_resize_method(out)
trg_imgs = unify_resize_method(trg_imgs)
l1 = F.l1_loss(out, trg_imgs)
ssim = self.SSIM(out, trg_imgs)
msssim = self.MSSSIM(out, trg_imgs)
return l1, ssim, msssim
@torch_eval
def handwritten_validation_2stage(self, gen, step, fonts, style_chars, target_chars,
comparable=False, save_dir=None, tag='hw_validation_2stage'):
"""2-stage handwritten validation
Args:
fonts: [font_name1, font_name2, ...]
save_dir: if given, do not write image grid, instead save every image into save_dir
"""
if save_dir is not None:
save_dir = Path(save_dir)
save_dir.mkdir(parents=True, exist_ok=True)
outs = []
for font_name in tqdm(fonts):
encode_loader = get_val_encode_loader(
self.data, font_name, style_chars, self.language, self.transform
)
decode_loader = get_val_decode_loader(target_chars, self.language)
out = infer_2stage(gen, encode_loader, decode_loader)
outs.append(out)
if save_dir:
for char, glyph in zip(target_chars, out):
uni = "".join([f'{ord(each):04X}' for each in char])
path = save_dir / font_name / "{}_{}.png".format(font_name, uni)
path.parent.mkdir(parents=True, exist_ok=True)
utils.save_tensor_to_image(glyph, path)
if save_dir: # do not write grid
return
out = torch.cat(outs)
if comparable:
# ref original chars
refs = self.get_charimages(fonts, target_chars)
nrow = len(target_chars)
grid = utils.make_comparable_grid(refs, out, nrow=nrow)
else:
grid = utils.to_grid(out, 'torch', nrow=len(target_chars))
tag = tag + target_chars[:4]
self.writer.add_image(tag, grid, global_step=step)
def get_inputimages(self, val_loader):
# integrate style images
inputs = []
for style_ids, style_comp_ids, style_imgs, trg_ids, trg_comp_ids, content_imgs \
in val_loader:
inputs.append(style_imgs)
inputs = torch.cat(inputs)
shape = inputs.shape
inputs = inputs.view(shape[0]//self.n_comp_types, self.n_comp_types, *shape[1:])
batches = [inputs[:, i] for i in range(self.n_comp_types)]
return batches
def get_charimages(self, fonts, chars, empty_header=False, as_tensor=True):
""" get char images from self.data
Return:
2d list of charimages or 5d tensor:
[
[charimage1, charimage2, ...] (font1),
...
]
or
Tensor [n_fonts, n_chars, 1, 128, 128]
"""
empty_box = torch.ones(1, 128, 128)
charimages = [
[self.data.get(font_name, char, empty_box) for char in chars]
for font_name in fonts
]
if empty_header:
header = [empty_box for _ in chars]
charimages.insert(0, header)
if as_tensor:
charimages = torch.stack(list(chain.from_iterable(charimages)))
return charimages
def merge_and_log_image(self, name, out, target, step):
""" Merge out and target into 2-column grid and log it """
merge = utils.make_merged_grid([out, target], merge_dim=2)
self.writer.add_image(name, merge, global_step=step)
def eval_ckpt():
from train import (
setup_language_dependent, setup_data, setup_cv_dset_loader,
get_dset_loader
)
logger = Logger.get()
parser = argparse.ArgumentParser('MaHFG-eval')
parser.add_argument(
"name", help="name is used for directory name of the user-study generation results"
)
parser.add_argument("resume")
parser.add_argument("img_dir")
parser.add_argument("config_paths", nargs="+")
parser.add_argument("--show", action="store_true", default=False)
parser.add_argument(
"--mode", default="eval",
help="eval (default) / cv-save / user-study / user-study-save. "
"`eval` generates comparable grid and computes pixel-level CV scores. "
"`cv-save` generates and saves all target characters in CV. "
"`user-study` generates comparable grid for the ramdomly sampled target characters. "
"`user-study-save` generates and saves all target characters in user-study."
)
parser.add_argument("--deterministic", default=False, action="store_true")
parser.add_argument("--debug", default=False, action="store_true")
args, left_argv = parser.parse_known_args()
cfg = Config(*args.config_paths)
cfg.argv_update(left_argv)
torch.backends.cudnn.benchmark = True
cfg['data_dir'] = Path(cfg['data_dir'])
if args.show:
exit()
# seed
np.random.seed(cfg['seed'])
torch.manual_seed(cfg['seed'])
random.seed(cfg['seed'])
if args.deterministic:
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
cfg['n_workers'] = 0
logger.info("#" * 80)
logger.info("# Deterministic option is activated !")
logger.info("# Deterministic evaluator only ensure the deterministic cross-validation")
logger.info("#" * 80)
else:
torch.backends.cudnn.benchmark = True
if args.mode.startswith('mix'):
assert cfg['g_args']['style_enc']['use'], \
"Style mixing is only available with style encoder model"
#####################################
# Dataset
####################################
# setup language dependent values
content_font, n_comp_types, n_comps = setup_language_dependent(cfg)
# setup transform
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize([0.5], [0.5])
])
# setup data
hdf5_data, meta = setup_data(cfg, transform)
# setup dataset
trn_dset, loader = get_dset_loader(
hdf5_data, meta['train']['fonts'], meta['train']['chars'], transform, True, cfg,
content_font=content_font
)
val_loaders = setup_cv_dset_loader(
hdf5_data, meta, transform, n_comp_types, content_font, cfg
)
#####################################
# Model
####################################
# setup generator only
g_kwargs = cfg.get('g_args', {})
gen = MACore(
1, cfg['C'], 1, **g_kwargs, n_comps=n_comps, n_comp_types=n_comp_types,
language=cfg['language']
)
gen.cuda()
ckpt = torch.load(args.resume)
logger.info("Use EMA generator as default")
gen.load_state_dict(ckpt['generator_ema'])
step = ckpt['epoch']
loss = ckpt['loss']
logger.info("Resumed checkpoint from {} (Step {}, Loss {:7.3f})".format(
args.resume, step, loss))
writer = utils.DiskWriter(args.img_dir, 0.6)
evaluator = Evaluator(
hdf5_data, trn_dset.avails, logger, writer, cfg['batch_size'],
content_font=content_font, transform=transform, language=cfg['language'],
val_loaders=val_loaders, meta=meta
)
evaluator.n_cv_batches = -1
logger.info("Update n_cv_batches = -1 to evaluate about full data")
if args.debug:
evaluator.n_cv_batches = 10
logger.info("!!! DEBUG MODE: n_cv_batches = 10 !!!")
if args.mode == 'eval':
logger.info("Start validation ...")
dic = evaluator.validation(gen, step)
logger.info("Validation is done. Result images are saved to {}".format(args.img_dir))
elif args.mode.startswith('user-study'):
meta = json.load(open('meta/kor-unrefined.json'))
target_chars = meta['target_chars']
style_chars = meta['style_chars']
fonts = meta['fonts']
if args.mode == 'user-study':
sampled_target_chars = uniform_sample(target_chars, 20)
logger.info("Start generation kor-unrefined ...")
logger.info("Sampled chars = {}".format(sampled_target_chars))
evaluator.handwritten_validation_2stage(
gen, step, fonts, style_chars, sampled_target_chars,
comparable=True, tag='userstudy-{}'.format(args.name)
)
elif args.mode == 'user-study-save':
logger.info("Start generation & saving kor-unrefined ...")
save_dir = Path(args.img_dir) / "{}-{}".format(args.name, step)
evaluator.handwritten_validation_2stage(
gen, step, fonts, style_chars, target_chars,
comparable=True, save_dir=save_dir
)
logger.info("Validation is done. Result images are saved to {}".format(args.img_dir))
elif args.mode == 'cv-save':
save_dir = Path(args.img_dir) / "cv_images_{}".format(step)
logger.info("Save CV results to {} ...".format(save_dir))
utils.rm(save_dir)
for tag, loader in val_loaders.items():
l1, ssim, msssim = evaluator.cross_validation(
gen, step, loader, tag, n_batches=evaluator.n_cv_batches, save_dir=(save_dir / tag)
)
else:
raise ValueError(args.mode)
if __name__ == "__main__":
eval_ckpt()
| 36.248509 | 100 | 0.595349 | 2,263 | 18,233 | 4.583297 | 0.163942 | 0.018897 | 0.009641 | 0.006749 | 0.184632 | 0.112129 | 0.100559 | 0.090629 | 0.060933 | 0.049749 | 0 | 0.011197 | 0.289749 | 18,233 | 502 | 101 | 36.320717 | 0.78973 | 0.082104 | 0 | 0.128852 | 0 | 0.002801 | 0.106466 | 0.00141 | 0 | 0 | 0 | 0 | 0.002801 | 1 | 0.039216 | false | 0 | 0.056022 | 0 | 0.12605 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b29af2a5950c58e35a3aebb86953b7458a83ef63 | 3,843 | py | Python | zhihulogin.py | MichaelWongX/login | 3bbf1445eb58d937598dd79b70f415228742111f | [
"MIT"
] | null | null | null | zhihulogin.py | MichaelWongX/login | 3bbf1445eb58d937598dd79b70f415228742111f | [
"MIT"
] | null | null | null | zhihulogin.py | MichaelWongX/login | 3bbf1445eb58d937598dd79b70f415228742111f | [
"MIT"
] | null | null | null | import requests
import time
import re
from PIL import Image
captcha_url = 'https://www.zhihu.com/captcha.gif?r={}&type=login&lang=cn'
main_url = 'https://zhihu.com'
login_url = 'https://zhihu.com/login/email'
# a dict containing the positions of the upside-down characters; it will be replaced with the true positions
captcha_pos = {"img_size":[200,44],"input_points":[[15.49,27.64],[152.29,22.05]]}
data = {'email':'username', # username to login
'password':'password', # password for the login
        'captcha_type':'cn', # captcha type: 7 Chinese characters, 1 or 2 of them upside down
        '_xsrf':'', # the _xsrf token extracted from the page
'captcha':captcha_pos,
}
def clean_headers(data,delimiter=':',sep='\n'):
""" clean the headers or cookies copied from firefox"""
tmp = {}
lines = [line.strip() for line in data.split(sep) if len(line) > 5 and 'Content-Length' not in line]
for line in lines:
k,v = line.split(delimiter,maxsplit=1)
tmp[k] = v.strip()
return tmp
def get_r():
"""generate the timestamp and format the captcha_url"""
tmp = str(time.time()*1000)[:13]
return captcha_url.format(tmp)
if __name__ == "__main__":
headers_content = """Host: www.zhihu.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:56.0) Gecko/20100101 Firefox/56.0
Accept: */*
Accept-Language: zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate, br
X-Xsrftoken: d59a6b5cbb326bc05328a62d9cbad4e6
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
X-Requested-With: XMLHttpRequest
Referer: https://www.zhihu.com/
"""
cookies_content = """q_c1=0ac7d18b42ef49fca2ed2d2329f08806|1508755382000|1503108834000; \
q_c1=0ac7d18b42ef49fca2ed2d2329f08806|1508755382000|1503108834000; \
_zap=085b7d4a-cc5d-4d46-9484-eddfae6335f9; \
r_cap_id="NGNkYzMzMzVkNjMxNDliMGFhNjI4NjVmN2I4NjZiOTA=|1509270328|3d95cad72848b29ea4ebf4fc23d19625ce3a5daf"; \
cap_id="ZDIxODgxNjg4YjJkNDliZTg4MmJhMTA1NDI5YTRhNTU=|1509270328|6329931af284734c55371d889da754e086deb76f"; \
__utma=51854390.1832205118.1503829450.1507911395.1509268878.5; \
__utmz=51854390.1507911395.4.4.utmcsr=zhihu.com|utmccn=(referral)|utmcmd=referral|utmcct=/question/31851633; \
__utmv=51854390.000--|2=registration_date=20140104=1^3=entry_date=20170819=1; d_c0="ADCCAsLhSAyPTj8w_8ikHEn95wo0plMro9M=|1503829449"; \
aliyungf_tc=AQAAAGtsiFglFgoA2wBa2uB6FZZPn0iP; _xsrf=d59a6b5cbb326bc05328a62d9cbad4e6; __utmb=51854390.0.10.1509268878; \
__utmc=51854390; _xsrf=d59a6b5cbb326bc05328a62d9cbad4e6; \
l_cap_id="ZGYxMmNhYjI1Mjk4NDZlYmJhZjg5MTc0ZmNlMGQyYTM=|1509270328|54b243a5996b47a2732eeebf946f2392f578d50d"
"""
headers = clean_headers(headers_content)
cookies = clean_headers(cookies_content,delimiter='=',sep=';')
sess = requests.session()
r = sess.get(url=main_url)
if r.ok:
tmp =re.search('input type="hidden" name="_xsrf" value="([a-z0-9]*)',r.text)
if tmp:
print(tmp.group(1))
data['_xsrf'] = tmp.group(1)
print('the xsrf is: %s' % tmp.group(1))
r = sess.get(get_r())
if r.ok:
with open('xx.gif','wb') as file:
file.write(r.content)
            print('saved the captcha, please open it and input the positions')
img = Image.open('xx.gif')
img.show()
pos1 = input('input the position 1 ')
pos2 = input('input the position 2')
data['captcha']['input_points'][0][0] = int(pos1) * 45 + 0.25
data['captcha']['input_points'][1][0] = int(pos2) * 48 + 0.67
headers['X-Xsrftoken'] = data['_xsrf']
cookies['_xsrf'] = data['_xsrf']
r = sess.post(url=login_url,data=data,headers=headers,cookies=cookies)
try:
print(r.json())
except:
pass
print(sess.cookies.items())
# print(req.text)
| 39.214286 | 139 | 0.689826 | 503 | 3,843 | 5.145129 | 0.44334 | 0.018547 | 0.012751 | 0.012365 | 0.063369 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163602 | 0.164975 | 3,843 | 97 | 140 | 39.618557 | 0.642879 | 0.093417 | 0 | 0.052632 | 0 | 0.065789 | 0.542148 | 0.303984 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026316 | false | 0.026316 | 0.052632 | 0 | 0.105263 | 0.065789 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b29c022403f064588c61480d99bd115285a5f23d | 559 | py | Python | webapp/database.py | olizhu10/newsroom | 0a6dccd21da28892cc089e0924c53e0723b42785 | [
"Apache-2.0"
] | null | null | null | webapp/database.py | olizhu10/newsroom | 0a6dccd21da28892cc089e0924c53e0723b42785 | [
"Apache-2.0"
] | null | null | null | webapp/database.py | olizhu10/newsroom | 0a6dccd21da28892cc089e0924c53e0723b42785 | [
"Apache-2.0"
] | 1 | 2019-10-04T03:24:35.000Z | 2019-10-04T03:24:35.000Z | import sqlite3
DATABASE_NAME = '../databases/databaseRefined_0.9.db'
def get_articles(cluster_id):
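    # Return every row of the articles table assigned to the given cluster.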
db = sqlite3.connect(DATABASE_NAME)
c = db.cursor()
q = "SELECT * FROM articles WHERE cluster=?"
t = (cluster_id,)
c.execute(q,t)
articles = c.fetchall()
db.close()
return articles
def remove_cluster(cluster_id):
db = sqlite3.connect(DATABASE_NAME)
c = db.cursor()
print(cluster_id)
q = "DELETE FROM articles WHERE cluster=?"
t = (cluster_id,)
c.execute(q,t)
db.commit()
db.close()
return
| 22.36 | 53 | 0.645796 | 76 | 559 | 4.605263 | 0.394737 | 0.128571 | 0.062857 | 0.102857 | 0.514286 | 0.514286 | 0.514286 | 0.514286 | 0.514286 | 0.514286 | 0 | 0.011521 | 0.223614 | 559 | 24 | 54 | 23.291667 | 0.794931 | 0 | 0 | 0.47619 | 0 | 0 | 0.194991 | 0.062612 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0 | 0.047619 | 0 | 0.238095 | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b2a1fe70c700c6e10ca8f81c109319de90e0fd80 | 4,449 | py | Python | run.py | redshoga/DeepNude_NoWatermark_withModel | b7e9c899d35258a23966d8a018e4cd181a3cf06b | [
"MIT"
] | null | null | null | run.py | redshoga/DeepNude_NoWatermark_withModel | b7e9c899d35258a23966d8a018e4cd181a3cf06b | [
"MIT"
] | null | null | null | run.py | redshoga/DeepNude_NoWatermark_withModel | b7e9c899d35258a23966d8a018e4cd181a3cf06b | [
"MIT"
] | null | null | null | import cv2
#Import Neural Network Model
from gan import DataLoader, DeepModel, tensor2im
#OpenCv Transform:
from opencv_transform.mask_to_maskref import create_maskref
from opencv_transform.maskdet_to_maskfin import create_maskfin
from opencv_transform.dress_to_correct import create_correct
from opencv_transform.nude_to_watermark import create_watermark
"""
run.py
This script manages the entire transformation.
Transformation happens in 7 phases:
0: dress -> correct [opencv] dress_to_correct
1: correct -> mask [GAN] correct_to_mask
2: mask -> maskref [opencv] mask_to_maskref
3: maskref -> maskdet [GAN] maskref_to_maskdet
4: maskdet -> maskfin [opencv] maskdet_to_maskfin
5: maskfin -> nude [GAN] maskfin_to_nude
6: nude -> watermark [opencv] nude_to_watermark
"""
phases = ["dress_to_correct", "correct_to_mask", "mask_to_maskref", "maskref_to_maskdet", "maskdet_to_maskfin", "maskfin_to_nude", "nude_to_watermark"]
class Options():
#Init options with default values
def __init__(self):
# experiment specifics
self.norm = 'batch' #instance normalization or batch normalization
self.use_dropout = False #use dropout for the generator
self.data_type = 32 #Supported data type i.e. 8, 16, 32 bit
# input/output sizes
self.batchSize = 1 #input batch size
self.input_nc = 3 # of input image channels
self.output_nc = 3 # of output image channels
# for setting inputs
self.serial_batches = True #if true, takes images in order to make batches, otherwise takes them randomly
self.nThreads = 1 ## threads for loading data (???)
self.max_dataset_size = 1 #Maximum number of samples allowed per dataset. If the dataset directory contains more than max_dataset_size, only a subset is loaded.
# for generator
self.netG = 'global' #selects model to use for netG
self.ngf = 64 ## of gen filters in first conv layer
self.n_downsample_global = 4 #number of downsampling layers in netG
self.n_blocks_global = 9 #number of residual blocks in the global generator network
self.n_blocks_local = 0 #number of residual blocks in the local enhancer network
self.n_local_enhancers = 0 #number of local enhancers to use
self.niter_fix_global = 0 #number of epochs that we only train the outmost local enhancer
#Phase specific options
self.checkpoints_dir = ""
self.dataroot = ""
#Changes options accordlying to actual phase
def updateOptions(self, phase):
if phase == "correct_to_mask":
self.checkpoints_dir = "checkpoints/cm.lib"
elif phase == "maskref_to_maskdet":
self.checkpoints_dir = "checkpoints/mm.lib"
elif phase == "maskfin_to_nude":
self.checkpoints_dir = "checkpoints/mn.lib"
# process(cv_img, mode)
# return:
# watermark image
def process(cv_img):
#InMemory cv2 images:
dress = cv_img
correct = None
mask = None
maskref = None
maskfin = None
maskdet = None
nude = None
watermark = None
for index, phase in enumerate(phases):
print("Executing phase: " + phase)
#GAN phases:
if (phase == "correct_to_mask") or (phase == "maskref_to_maskdet") or (phase == "maskfin_to_nude"):
#Load global option
opt = Options()
#Load custom phase options:
opt.updateOptions(phase)
#Load Data
if (phase == "correct_to_mask"):
data_loader = DataLoader(opt, correct)
elif (phase == "maskref_to_maskdet"):
data_loader = DataLoader(opt, maskref)
elif (phase == "maskfin_to_nude"):
data_loader = DataLoader(opt, maskfin)
dataset = data_loader.load_data()
#Create Model
model = DeepModel()
model.initialize(opt)
#Run for every image:
for i, data in enumerate(dataset):
generated = model.inference(data['label'], data['inst'])
im = tensor2im(generated.data[0])
#Save Data
if (phase == "correct_to_mask"):
mask = cv2.cvtColor(im, cv2.COLOR_RGB2BGR)
elif (phase == "maskref_to_maskdet"):
maskdet = cv2.cvtColor(im, cv2.COLOR_RGB2BGR)
elif (phase == "maskfin_to_nude"):
nude = cv2.cvtColor(im, cv2.COLOR_RGB2BGR)
#Correcting:
elif (phase == 'dress_to_correct'):
correct = create_correct(dress)
#mask_ref phase (opencv)
elif (phase == "mask_to_maskref"):
maskref = create_maskref(mask, correct)
#mask_fin phase (opencv)
elif (phase == "maskdet_to_maskfin"):
maskfin = create_maskfin(maskref, maskdet)
#nude_to_watermark phase (opencv)
elif (phase == "nude_to_watermark"):
watermark = create_watermark(nude)
return watermark
| 29.463576 | 162 | 0.728254 | 621 | 4,449 | 5.020934 | 0.293076 | 0.028865 | 0.025016 | 0.020526 | 0.117704 | 0.065427 | 0.023733 | 0.023733 | 0 | 0 | 0 | 0.011184 | 0.175995 | 4,449 | 150 | 163 | 29.66 | 0.839334 | 0.269724 | 0 | 0.08 | 0 | 0 | 0.166248 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.08 | 0 | 0.146667 | 0.013333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b2a38f01e734df9759c4acadf099eb5d7a5ce4bb | 1,129 | py | Python | vaetc/models/bavae.py | ganmodokix/vaetc | 866b79677b4f06603203376d967989dedadbffae | [
"MIT"
] | null | null | null | vaetc/models/bavae.py | ganmodokix/vaetc | 866b79677b4f06603203376d967989dedadbffae | [
"MIT"
] | null | null | null | vaetc/models/bavae.py | ganmodokix/vaetc | 866b79677b4f06603203376d967989dedadbffae | [
"MIT"
] | null | null | null | from typing import Optional
import math
import torch
from .utils import detach_dict
from vaetc.network.reparam import reparameterize
from vaetc.network.losses import neglogpxz_gaussian, kl_gaussian
from .vae import VAE
class BetaAnnealedVAE(VAE):
""" :math:`\\beta`-Annealed VAE
[Sankarapandian et al., NeurIPS 2020 Workshop (https://arxiv.org/abs/2107.10667)]
"""
def __init__(self, hyperparameters: dict):
super().__init__(hyperparameters)
self.beta = float(hyperparameters["beta"])
def loss(self, x, z, mean, logvar, x2, progress: Optional[float] = None):
# Losses
loss_ae = torch.mean(neglogpxz_gaussian(x, x2))
loss_reg = torch.mean(kl_gaussian(mean, logvar))
# Cyclical Annealing
t = progress if progress is not None else 1.0
t = 1 - t
annealed_beta = self.beta * t
# Total loss
loss = loss_ae + loss_reg * annealed_beta
return loss, detach_dict({
"loss": loss,
"loss_ae": loss_ae,
"loss_reg": loss_reg,
"annealed_beta": annealed_beta,
}) | 26.880952 | 85 | 0.635961 | 140 | 1,129 | 4.942857 | 0.435714 | 0.034682 | 0.043353 | 0.040462 | 0.052023 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021609 | 0.262179 | 1,129 | 42 | 86 | 26.880952 | 0.809124 | 0.129318 | 0 | 0 | 0 | 0 | 0.037306 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.291667 | 0 | 0.458333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b2a8300488076a38efafd42a98c8b3c425b9ca5b | 1,690 | py | Python | upload_image.py | wpwbb510582246/buaa-daka | cd5dadfcc5f5095ad3e2dbc53d0252b970dfdeb8 | [
"MIT"
] | null | null | null | upload_image.py | wpwbb510582246/buaa-daka | cd5dadfcc5f5095ad3e2dbc53d0252b970dfdeb8 | [
"MIT"
] | null | null | null | upload_image.py | wpwbb510582246/buaa-daka | cd5dadfcc5f5095ad3e2dbc53d0252b970dfdeb8 | [
"MIT"
] | 1 | 2021-12-17T04:05:07.000Z | 2021-12-17T04:05:07.000Z | #!/usr/bin/python3
# -*- coding: utf-8 -*-
# @Author : Grayson
# @Time : 2020-12-07 20:47
# @Email : weipengweibeibei@163.com
# @Description :
# encoding:utf-8
# !/usr/bin/env python
from werkzeug.utils import secure_filename
from flask import Flask, render_template, jsonify, request, make_response, send_from_directory, abort
import time
import os
import base64
from Pic_str import Pic_str
from buaa_daka_orc import get_daka
app = Flask(__name__)
UPLOAD_FOLDER = 'upload'
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
app.config['JSON_AS_ASCII'] = False
basedir = os.path.abspath(os.path.dirname(__file__))
ALLOWED_EXTENSIONS = set(['png', 'jpg', 'JPG', 'jpeg', 'PNG', 'gif', 'GIF'])
def allowed_file(filename):
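    # Accept only filenames whose extension appears in ALLOWED_EXTENSIONS.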
return '.' in filename and filename.rsplit('.', 1)[1] in ALLOWED_EXTENSIONS
@app.route('/')
def index():
return render_template('index.html')
@app.route('/upload')
def upload_test():
return render_template('up.html')
# Upload a file
@app.route('/up_photo', methods=['POST'], strict_slashes=False)
def api_upload():
file_dir = os.path.join(basedir, app.config['UPLOAD_FOLDER'])
if not os.path.exists(file_dir):
os.makedirs(file_dir)
f = request.files['photo']
if f and allowed_file(f.filename):
fname = secure_filename(f.filename)
ext = fname.rsplit('.', 1)[1]
new_filename = Pic_str().create_uuid() + '.' + ext
join_path = os.path.join(file_dir, new_filename)
print(join_path)
f.save(join_path)
notice_info = get_daka(join_path)
return notice_info
else:
return jsonify({"error": 1001, "msg": "上传失败"})
if __name__ == '__main__':
app.run(debug=True) | 27.704918 | 101 | 0.674556 | 237 | 1,690 | 4.565401 | 0.468354 | 0.027726 | 0.033272 | 0.038817 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020101 | 0.17574 | 1,690 | 61 | 102 | 27.704918 | 0.75664 | 0.108876 | 0 | 0 | 0 | 0 | 0.089453 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.175 | 0.075 | 0.4 | 0.025 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b2a9341f80d2ea5c6e04e6f3a4d64e832cf36150 | 1,118 | py | Python | manully_tune_point_cloud.py | RustIron/libSmart | fd610c369db3a20a8ea6046061841f69b551b5fc | [
"BSD-3-Clause"
] | 2 | 2019-09-11T05:09:56.000Z | 2020-12-16T09:23:28.000Z | manully_tune_point_cloud.py | RustIron/libSmart | fd610c369db3a20a8ea6046061841f69b551b5fc | [
"BSD-3-Clause"
] | null | null | null | manully_tune_point_cloud.py | RustIron/libSmart | fd610c369db3a20a8ea6046061841f69b551b5fc | [
"BSD-3-Clause"
] | null | null | null | import numpy as np
from libSmart.DataFlow.dataProcessor.preprocessor import Pose,PointCloud, PointProc
from libSmart.DataFlow.dataProcessor.config import distorb_function_list, Jointlists3, board_bounding_box, min_point_before_moving
from IPython.core.debugger import Tracer
pcs_l = []  # list used to put multiple point clouds together
for idx in [0,1,2,3]:
p = np.load('pointcloud_%d.npy'%idx) # load from stored data, collected by "collect_raw_point_cloud.py"
pc = PointCloud(p[0],p[1])
pose = Pose(Jointlists3[idx])
    pc.trans(pose.transmat) # transform the point cloud to the robot base coordinate system
#pc.show()
''' fixed process cascade '''
PProc = PointProc()
PProc.load_data(pc)
bbox = board_bounding_box[idx]
distorb_function = distorb_function_list[idx]
min_p_before_move = min_point_before_moving[idx]
pc = PProc.fixed_tuned_process_cascaded(bbox,min_p_before_move,distorb_function)
print(pc.pts.min(axis = 0))
print(pc.pts.max(axis = 0))
# pc.show()
pcs_l.append(pc)
#pcs = pcs_l[2]
pcs = pcs_l[0] + pcs_l[1] + pcs_l[2] + pcs_l[3]
pcs.show()
#pcs.save("./box1") | 32.882353 | 130 | 0.73703 | 178 | 1,118 | 4.421348 | 0.438202 | 0.035578 | 0.050826 | 0.083863 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016701 | 0.143113 | 1,118 | 34 | 131 | 32.882353 | 0.804802 | 0.194097 | 0 | 0 | 0 | 0 | 0.019653 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.190476 | 0 | 0.190476 | 0.095238 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b2aa4a37e524af0ec9df777e43cc943b205937c8 | 5,775 | py | Python | layers/gcn.py | EricDai0/AdvGCGAN | 325c03e69670b8c950f3599de41d85ce82e023e9 | [
"MIT"
] | 1 | 2022-01-06T08:09:53.000Z | 2022-01-06T08:09:53.000Z | layers/gcn.py | EricDai0/AdvGCGAN | 325c03e69670b8c950f3599de41d85ce82e023e9 | [
"MIT"
] | null | null | null | layers/gcn.py | EricDai0/AdvGCGAN | 325c03e69670b8c950f3599de41d85ce82e023e9 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.init as init
import math
class TreeGCN(nn.Module):
def __init__(self, batch, depth, features, degrees, support=10, node=1, upsample=False, activation=True):
self.batch = batch
self.depth = depth
self.in_feature = features[depth]
self.out_feature = features[depth+1]
self.node = node
self.degree = degrees[depth]
self.upsample = upsample
self.activation = activation
super(TreeGCN, self).__init__()
self.W_root = nn.ModuleList([nn.Linear(features[inx], self.out_feature, bias=False) for inx in range(self.depth+1)])
if self.upsample:
self.W_branch = nn.Parameter(torch.FloatTensor(self.node, self.in_feature, self.degree*self.in_feature))
if self.node > 1024:
self.N_L = nn.Sequential(
nn.Conv2d(self.in_feature*2, self.in_feature*4, [1, 20//2+1], [1, 1]), # Fin, Fout, kernel_size, stride
nn.BatchNorm2d(self.in_feature*4),
nn.LeakyReLU(inplace=True)
)
self.N_branch = nn.Sequential(nn.Conv2d(self.node, self.node, [1, 20], [1, 1]),
nn.BatchNorm2d(self.node))
self.N_fea = nn.Linear(self.in_feature*2, self.in_feature)
self.W_loop = nn.Sequential(nn.Linear(self.in_feature, self.in_feature*support, bias=False),
nn.Linear(self.in_feature*support, self.out_feature, bias=False))
self.bias = nn.Parameter(torch.FloatTensor(1, self.degree, self.out_feature))
self.leaky_relu = nn.LeakyReLU(negative_slope=0.2)
self.init_param()
def init_param(self):
if self.upsample:
init.xavier_uniform_(self.W_branch.data, gain=init.calculate_gain('relu'))
stdv = 1. / math.sqrt(self.out_feature)
self.bias.data.uniform_(-stdv, stdv)
def get_edge_features_xyz(self, x, k=20, num=-1):
"""
Args:
x: point cloud [B, dims, N]
k: kNN neighbours
Return:
[B, 2dims, N, k]
idx
"""
B, dims, N = x.shape
# ----------------------------------------------------------------
        # batched pair-wise distance in feature space; this could perhaps be computed in coordinate space instead
# ----------------------------------------------------------------
xt = x.permute(0, 2, 1)
xi = -2 * torch.bmm(xt, x)
xs = torch.sum(xt**2, dim=2, keepdim=True)
xst = xs.permute(0, 2, 1)
dist = xi + xs + xst # [B, N, N]
# get k NN id
_, idx_o = torch.sort(dist, dim=2)
idx = idx_o[: ,: ,1:k+1] # [B, N, k]
idx = idx.contiguous().view(B, N*k)
# gather
neighbors = []
for b in range(B):
tmp = torch.index_select(x[b], 1, idx[b]) # [d, N*k] <- [d, N], 0, [N*k]
tmp = tmp.view(dims, N, k)
neighbors.append(tmp)
neighbors = torch.stack(neighbors) # [B, d, N, k]
# centralize
central = x.unsqueeze(3).repeat(1, 1, 1, k) # [B, d, N, 1] -> [B, d, N, k]
e_fea = neighbors - central
e_fea = torch.cat((central, e_fea), 1)
return e_fea
def forward(self, tree):
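        # The "root" term projects every ancestor level of the tree and broadcasts it to the
        # current node count; the "branch" term adds per-node detail, either via the learned
        # upsampling matrix W_branch or, for large node counts, via kNN edge features, before
        # the loop transform W_loop and the final bias/activation.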
root = 0
for inx in range(self.depth+1):
root_num = tree[inx].size(1)
repeat_num = int(self.node / root_num)
root_node = self.W_root[inx](tree[inx])
root = root + root_node.repeat(1,1,repeat_num).view(self.batch,-1,self.out_feature)
branch = 0
if self.upsample and self.node <= 1024:
branch = tree[-1].unsqueeze(2) @ self.W_branch
branch = self.leaky_relu(branch)
branch = branch.view(self.batch,self.node*self.degree,self.in_feature)
branch = self.W_loop(branch)
branch = root.repeat(1,1,self.degree).view(self.batch,-1,self.out_feature) + branch
else:
if self.node > 1024:
points = tree[-1].permute(0, 2, 1)
branch = self.get_edge_features_xyz(points)
B, C, N, K = branch.size()
branch = self.N_L(branch)
branch = branch.transpose(2, 1) # BxNx2Cxk/2
branch = branch.contiguous().view(B, N, C, 2, K//2) # BxNxCx2x(k//2+1)
branch = branch.contiguous().view(B, N, C, K) # BxNxCx(k+2)
branch = self.N_branch(branch)
branch = branch.view(self.batch,self.node,self.in_feature*2)
points = tree[-1].unsqueeze(2) @ self.W_branch
points = self.leaky_relu(points)
points = points.view(self.batch,self.node,self.in_feature*2)
branch = torch.cat((points, branch), 1)
branch = self.N_fea(branch)
branch = self.leaky_relu(branch)
branch = branch.view(self.batch,self.node*self.degree,self.in_feature)
branch = self.W_loop(branch)
branch = root.repeat(1,1,self.degree).view(self.batch,-1,self.out_feature) + branch
if not self.upsample:
branch = self.W_loop(tree[-1])
branch = root + branch
if self.activation:
branch = self.leaky_relu(branch + self.bias.repeat(1,self.node,1))
tree.append(branch)
return tree | 37.993421 | 128 | 0.50961 | 732 | 5,775 | 3.911202 | 0.193989 | 0.050297 | 0.06811 | 0.01956 | 0.305623 | 0.232623 | 0.232623 | 0.161369 | 0.149494 | 0.118757 | 0 | 0.027027 | 0.346494 | 5,775 | 152 | 129 | 37.993421 | 0.731585 | 0.088485 | 0 | 0.122449 | 0 | 0 | 0.000771 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040816 | false | 0 | 0.040816 | 0 | 0.112245 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b2aa68d83eb0446159f02c9de3924225f39a8563 | 316 | py | Python | atcoder/abc048/c.py | Ashindustry007/competitive-programming | 2eabd3975c029d235abb7854569593d334acae2f | [
"WTFPL"
] | 506 | 2018-08-22T10:30:38.000Z | 2022-03-31T10:01:49.000Z | atcoder/abc048/c.py | Ashindustry007/competitive-programming | 2eabd3975c029d235abb7854569593d334acae2f | [
"WTFPL"
] | 13 | 2019-08-07T18:31:18.000Z | 2020-12-15T21:54:41.000Z | atcoder/abc048/c.py | Ashindustry007/competitive-programming | 2eabd3975c029d235abb7854569593d334acae2f | [
"WTFPL"
] | 234 | 2018-08-06T17:11:41.000Z | 2022-03-26T10:56:42.000Z | #!/usr/bin/env python3
# https://abc048.contest.atcoder.jp/tasks/arc064_a
n, x = [int(x) for x in input().split()]
a = [int(x) for x in input().split()]
c = 0
for i in range(1, len(a)):
b = a[i - 1] + a[i]
d = b - x
if d > 0:
c += d
e = min(a[i], d)
a[i] -= e
d -= e
if d > 0:
a[i - 1] -= d
print(c)
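# Note on the greedy above (added explanation, not part of the original submission):
# whenever a neighbouring pair holds more than x candies, exactly d = a[i-1]+a[i]-x
# candies must be eaten from that pair; eating from the right box a[i] first is never
# worse, because a[i] also appears in the next pair (a[i], a[i+1]), so c accumulates
# the minimum total number of candies eaten.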
| 18.588235 | 50 | 0.5 | 71 | 316 | 2.211268 | 0.43662 | 0.063694 | 0.089172 | 0.101911 | 0.254777 | 0.254777 | 0.254777 | 0 | 0 | 0 | 0 | 0.055556 | 0.259494 | 316 | 16 | 51 | 19.75 | 0.615385 | 0.221519 | 0 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b2ace989afc849cef1b4695d54a2a214a231035b | 8,727 | py | Python | parallax/parallax/core/python/ps/in_graph_parallel.py | snuspl/parallax | 83791254ccd5d7a55213687a8dff4c2e04372694 | [
"Apache-2.0"
] | 126 | 2018-07-26T07:06:56.000Z | 2022-01-25T08:11:25.000Z | parallal/parallax/core/python/ps/in_graph_parallel.py | nikkkkhil/auto-parallal-deeplearning | 80e350fd933b8f82721bdc4950bc969ba0c630a4 | [
"Apache-2.0"
] | 20 | 2018-08-07T04:51:16.000Z | 2020-05-15T04:47:20.000Z | parallal/parallax/core/python/ps/in_graph_parallel.py | nikkkkhil/auto-parallal-deeplearning | 80e350fd933b8f82721bdc4950bc969ba0c630a4 | [
"Apache-2.0"
] | 30 | 2018-08-01T13:25:43.000Z | 2022-02-28T01:28:34.000Z | # Copyright (C) 2018 Seoul National University
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import time
import tensorflow as tf
from tensorflow.core.protobuf import meta_graph_pb2
from tensorflow.python.framework import ops
from parallax.core.python.common.graph_transform_lib import *
from parallax.core.python.common.lib import *
def _get_ops_to_replicate(gradient_info_list):
grads = [gradient_info._grad for gradient_info in gradient_info_list]
grad_related = set()
for grad in grads:
if isinstance(grad, tf.IndexedSlices):
grad_related.add(grad.indices)
grad_related.add(grad.values)
grad_related.add(grad.dense_shape)
elif isinstance(grad, tf.Tensor):
grad_related.add(grad)
else:
raise RuntimeError("Incorrect grad.")
grads_ancestor_ops = get_ancestors([grad.op for grad in grad_related],
include_control_inputs=True)
pipeline_ops = get_pipeline_ops(grads_ancestor_ops)
global_var_related_ops = set()
for global_var in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES):
global_var_related_ops.add(global_var.op)
global_var_related_ops.add(global_var.initializer)
global_var_related_ops.add(global_var._snapshot.op)
table_related_ops = set()
for table_init in tf.get_collection(tf.GraphKeys.TABLE_INITIALIZERS):
table_related_ops.add(table_init)
table_related_ops.add(table_init.inputs[0].op)
# Assume that all variables are member of either GLOBAL_VARIABLES
# or LOCAL_VARIABLES.
local_var_op_to_var = \
dict([(var.op, var)
for var in tf.get_collection(tf.GraphKeys.LOCAL_VARIABLES)])
local_var_ops = set(local_var_op_to_var.keys())
local_var_ops.intersection_update(grads_ancestor_ops)
ops_to_replicate = set()
ops_to_replicate.update(grads_ancestor_ops)
ops_to_replicate.update(pipeline_ops)
ops_to_replicate.difference_update(global_var_related_ops)
ops_to_replicate.difference_update(table_related_ops)
ops_to_replicate.update(
[local_var_op_to_var[var_op].initializer for var_op in local_var_ops])
return ops_to_replicate
def _get_multi_gpu_meta_graph(single_gpu_meta_graph_def, op_names_to_replicate,
op_names_to_share, num_replicas,
tensor_or_op_name_to_replica_names):
multi_gpu_graph_def = \
construct_multi_gpu_graph_def(
single_gpu_meta_graph_def.graph_def,
op_names_to_replicate,
op_names_to_share,
num_replicas,
tensor_or_op_name_to_replica_names)
multi_gpu_meta_graph_def = meta_graph_pb2.MetaGraphDef()
multi_gpu_meta_graph_def.CopyFrom(single_gpu_meta_graph_def)
multi_gpu_meta_graph_def.graph_def.Clear()
multi_gpu_meta_graph_def.graph_def.CopyFrom(multi_gpu_graph_def)
return multi_gpu_meta_graph_def
def _handle_collection_def(multi_gpu_meta_graph_def, op_names_to_replicate,
num_replicas):
allow_bytes_list_keys = [tf.GraphKeys.QUEUE_RUNNERS,
tf.GraphKeys.GLOBAL_VARIABLES,
tf.GraphKeys.TRAINABLE_VARIABLES,
tf.GraphKeys.MOVING_AVERAGE_VARIABLES,
tf.GraphKeys.LOCAL_VARIABLES,
tf.GraphKeys.MODEL_VARIABLES,
tf.GraphKeys.GRADIENTS_INFO,
tf.GraphKeys.GLOBAL_STEP]
keys_to_remove = []
for key, col_def in multi_gpu_meta_graph_def.collection_def.items():
kind = col_def.WhichOneof("kind")
# Update node_list collections (e.g., GLOBAL_STEP, TRAIN_OP, UPDATE_OP,
# LOSSES, ...)
if kind == 'node_list':
new_col_def = get_new_col_def_of_node_list(
col_def, op_names_to_replicate, num_replicas)
multi_gpu_meta_graph_def.collection_def[key].Clear()
multi_gpu_meta_graph_def.collection_def[key].CopyFrom(new_col_def)
elif kind == 'bytes_list':
if ops.get_from_proto_function(key):
# Collections in allow_bytes_list_keys will be handled
# explicitly below
# (e.g., QUEUE_RUNNERS, LOCAL_VARIABLES, ...)
if key in allow_bytes_list_keys:
continue
# Remove unhandled collections (e.g., COND_CONTEXT)
# TODO: Handle all protos in tf.GraphKeys
else:
keys_to_remove.append(key)
# Keep collections without proto function
# (e.g., user defined string)
else:
continue
else:
raise RuntimeError("Should not reach here")
for key in keys_to_remove:
del multi_gpu_meta_graph_def.collection_def[key]
# Update QUEUE_RUNNERS and LOCAL_VARIABLES collection
update_queue_runners(multi_gpu_meta_graph_def, op_names_to_replicate,
num_replicas)
update_local_variables(multi_gpu_meta_graph_def, op_names_to_replicate,
num_replicas)
update_shard_info_for_in_graph(multi_gpu_meta_graph_def, num_replicas)
def in_graph_auto_parallel_compute(single_gpu_meta_graph_def,
num_replicas,
config,
op_library_path,
tensor_or_op_name_to_replica_names):
"""Returns a graph replica. This is for in-graph replication.
Args:
single_gpu_meta_graph_def: Target meta graph definition proto for replicas.
num_replicas: Number of replicas; as many GPUs as num_replicas are utilized.
config: Parallax configuration; config.average_sparse controls whether
sparse gradient values are averaged across replicas.
op_library_path: Optional path of a custom op library to load before replication.
tensor_or_op_name_to_replica_names: Mapping passed through to the graph
construction helper to record per-replica names.
Returns:
The replicated multi-GPU meta-graph definition proto.
"""
parallax_log.debug("InGraphAutoParallelOpKernel: start")
start_time = time.time()
if op_library_path is not None:
tf.load_op_library(op_library_path)
average_option = SPARSE_AVERAGE_BY_COUNTER\
if config.average_sparse else SPARSE_NO_AVERAGE
with tf.Graph().as_default():
import_start_time = time.time()
tf.train.import_meta_graph(single_gpu_meta_graph_def)
import_duration = time.time() - import_start_time
parallax_log.debug(
"Time to import single-GPU meta graph : %.3f seconds"
% import_duration)
gradient_info_list = \
[gi for gi in tf.get_collection(tf.GraphKeys.GRADIENTS_INFO)]
ops_to_replicate = _get_ops_to_replicate(gradient_info_list)
op_names_to_replicate = set([op.name for op in ops_to_replicate])
ops_to_share = set(tf.get_default_graph().get_operations())
ops_to_share.difference_update(ops_to_replicate)
op_names_to_share = set([op.name for op in ops_to_share])
op_to_control_consumer_ops = \
get_all_control_consumers(tf.get_default_graph())
multi_gpu_meta_graph_def = \
_get_multi_gpu_meta_graph(single_gpu_meta_graph_def,
op_names_to_replicate, op_names_to_share,
num_replicas,
tensor_or_op_name_to_replica_names)
_handle_collection_def(multi_gpu_meta_graph_def, op_names_to_replicate,
num_replicas)
# Delete GRADIENTS_INFO collection temporarily
del multi_gpu_meta_graph_def.collection_def[tf.GraphKeys.GRADIENTS_INFO]
multi_gpu_meta_graph_def = add_aggregate_gradients_ops(
multi_gpu_meta_graph_def,
op_names_to_replicate,
op_to_control_consumer_ops,
gradient_info_list,
num_replicas,
average_option)
duration = time.time() - start_time
parallax_log.debug(
"InGraphAutoParallelOpKernel: end (took %.3f seconds)" % duration)
return multi_gpu_meta_graph_def
| 42.15942 | 80 | 0.670219 | 1,130 | 8,727 | 4.757522 | 0.216814 | 0.05692 | 0.064732 | 0.072545 | 0.362723 | 0.292225 | 0.22433 | 0.14881 | 0.113653 | 0.106213 | 0 | 0.002319 | 0.258737 | 8,727 | 206 | 81 | 42.364078 | 0.828722 | 0.186662 | 0 | 0.165468 | 0 | 0 | 0.027817 | 0.007948 | 0 | 0 | 0 | 0.004854 | 0 | 1 | 0.028777 | false | 0 | 0.079137 | 0 | 0.129496 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a22c47face87529e7f46dd7ea330ea086d1a9edd | 3,299 | py | Python | alteia/apis/client/auth/usersimpl.py | alteia-ai/alteia-python-sdk | 27ec7458334334ed6a1edae52cb25d5ce8734177 | [
"MIT"
] | 11 | 2020-12-22T14:39:21.000Z | 2022-02-18T16:34:34.000Z | alteia/apis/client/auth/usersimpl.py | alteia-ai/alteia-python-sdk | 27ec7458334334ed6a1edae52cb25d5ce8734177 | [
"MIT"
] | 1 | 2021-08-05T14:21:12.000Z | 2021-08-09T13:22:55.000Z | alteia/apis/client/auth/usersimpl.py | alteia-ai/alteia-python-sdk | 27ec7458334334ed6a1edae52cb25d5ce8734177 | [
"MIT"
] | null | null | null | from typing import Generator, List, Union
from alteia.apis.provider import AuthAPI
from alteia.core.resources.resource import Resource, ResourcesWithTotal
from alteia.core.resources.utils import search, search_generator
from alteia.core.utils.typing import ResourceId
class UsersImpl:
def __init__(self, auth_api: AuthAPI, **kwargs):
self._provider = auth_api
def describe(self, user: ResourceId = None, **kwargs) -> Resource:
"""Describe a user.
Args:
user: User identifier to describe.
**kwargs: Optional keyword arguments. Those arguments are
passed as is to the API provider.
Returns:
User : User resource.
"""
data = kwargs
if user:
data['user'] = user
content = self._provider.post(path='describe-user', data=data)
return Resource(**content)
def search(self, *, filter: dict = None, limit: int = None,
page: int = None, sort: dict = None, return_total: bool = False,
**kwargs) -> Union[ResourcesWithTotal, List[Resource]]:
"""Search users.
Args:
filter: Search filter (refer to
``/search-users`` definition in the User
and company management API for a detailed description
of supported operators).
limit: Maximum number of results to extract.
page: Page number (starting at page 1).
sort: Sort the results on the specified attributes
(``1`` is sorting in ascending order,
``-1`` is sorting in descending order).
return_total: Return the number of results found.
**kwargs: Optional keyword arguments. Those arguments are
passed as is to the API provider.
Returns:
Resources: A list of resources OR a namedtuple
with total number of results and list of resources.
"""
return search(
self,
url='search-users',
filter=filter,
limit=limit,
page=page,
sort=sort,
return_total=return_total,
**kwargs
)
def search_generator(self, *, filter: dict = None, limit: int = 50,
page: int = None,
**kwargs) -> Generator[Resource, None, None]:
"""Return a generator to search through users.
The generator frees the user from handling the pagination of
results, while remaining memory-efficient.
Found users are sorted chronologically in order to allow
new resources to be found during the search.
Args:
page: Optional page number to start the search at (default is 1).
filter: Search filter dictionary.
limit: Optional maximum number of results by search
request (default to 50).
**kwargs: Optional keyword arguments. Those arguments are
passed as is to the API provider.
Returns:
A generator yielding found users.
"""
return search_generator(self, first_page=1, filter=filter, limit=limit,
page=page, **kwargs)
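# Illustrative usage sketch (added note; `sdk.users` and the printed attribute are
# hypothetical names, not part of this module):
#
#   for user in sdk.users.search_generator(filter={'company': company_id}, limit=50):
#       print(user.name)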
| 32.029126 | 79 | 0.582904 | 367 | 3,299 | 5.196185 | 0.307902 | 0.023597 | 0.031463 | 0.047195 | 0.184583 | 0.184583 | 0.125852 | 0.125852 | 0.125852 | 0.125852 | 0 | 0.004161 | 0.344347 | 3,299 | 102 | 80 | 32.343137 | 0.877485 | 0.467414 | 0 | 0 | 0 | 0 | 0.020266 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.15625 | 0 | 0.40625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a22c77191fc279573b119f484f866a71bc319830 | 1,458 | py | Python | OCR_DB/GSE129785/get_common_peaks.py | howchihlee/ScATACAnnotater | 5eba0e8a8e6be489339e25925f8633f6749618b3 | [
"MIT"
] | null | null | null | OCR_DB/GSE129785/get_common_peaks.py | howchihlee/ScATACAnnotater | 5eba0e8a8e6be489339e25925f8633f6749618b3 | [
"MIT"
] | null | null | null | OCR_DB/GSE129785/get_common_peaks.py | howchihlee/ScATACAnnotater | 5eba0e8a8e6be489339e25925f8633f6749618b3 | [
"MIT"
] | null | null | null | import os
import pandas as pd
import numpy as np
import scipy.stats as stats
from scipy.stats import fisher_exact
from scipy.io import mmread
cl2ct = {4:'LMPP',
5:'CLP',
1:'HSC_MPP',
2:'MEP',
3:'CMP_BMP',
8:'GMP',
6:'Pro-B',
9:'MDP',
10:'pDC',
7:'Pre-B',
21:'Naive CD4 T1',
22:'Naive CD4 T2',
20:'Mature NK2',
17:'Basophil',
16:'Plasma cell',
19:'Mature NK1',
18:'Immature NK',
15:'Memory B',
11:'cDC',
13:'Monocyte 2',
12:'Monocyte 1',
14:'Naive B',
28:'Naive CD8 T3',
29:'Central memory CD8 T',
27:'Naive CD8 T2',
24:'Memory CD4 T',
23:'Naive Treg',
26:'Naive CD8 T1',
25:'Treg',
30:'Effector memory CD8 T',
31:'Gamma delta T'}
if __name__ == '__main__':
df_barcode = pd.read_csv('./GSE129785_scATAC-Hematopoiesis-All.cell_barcodes.txt.gz', sep = '\t')
id2fea = pd.read_csv('./GSE129785_scATAC-Hematopoiesis-All.peaks.txt.gz').Feature.values
id2fea = [f.split('_') for f in id2fea]
cluster = np.array([cl2ct[int(c[7:])] for c in df_barcode.Clusters.values])
data_mat = mmread('./GSE129785_scATAC-Hematopoiesis-All.mtx')
data_mat = data_mat.tocsr()
for ct in set(cluster):
ind = cluster == ct
vec = np.array(data_mat[:, ind].mean(axis = 1))[:, 0]
ind_peak = vec > 0.25
diffpeaks = [id2fea[i] for i in np.where(ind_peak)[0]]
print(ct, sum(ind_peak))
pd.DataFrame(diffpeaks).to_csv('%s.bed' % ct.replace(' ', '_'), index = False, sep = '\t', header = False) | 25.578947 | 114 | 0.632373 | 241 | 1,458 | 3.705394 | 0.560166 | 0.031355 | 0.094065 | 0.104143 | 0.089586 | 0.089586 | 0.089586 | 0 | 0 | 0 | 0 | 0.084448 | 0.179698 | 1,458 | 57 | 114 | 25.578947 | 0.662207 | 0 | 0 | 0 | 0 | 0 | 0.300206 | 0.100069 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.117647 | 0 | 0.117647 | 0.019608 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a22d511d1c439e0f770769017ad9b54eb1b1ee41 | 3,284 | py | Python | src/ingest-pipeline/airflow/plugins/hubmap_operators/flex_multi_dag_run.py | AustinHartman/ingest-pipeline | 788d9310792c9396a38650deda3dad11483b368c | [
"MIT"
] | 6 | 2020-02-18T19:09:59.000Z | 2021-10-07T20:38:46.000Z | src/ingest-pipeline/airflow/plugins/hubmap_operators/flex_multi_dag_run.py | AustinHartman/ingest-pipeline | 788d9310792c9396a38650deda3dad11483b368c | [
"MIT"
] | 324 | 2020-02-06T22:08:50.000Z | 2022-03-24T20:44:33.000Z | src/ingest-pipeline/airflow/plugins/hubmap_operators/flex_multi_dag_run.py | AustinHartman/ingest-pipeline | 788d9310792c9396a38650deda3dad11483b368c | [
"MIT"
] | 2 | 2020-07-20T14:43:49.000Z | 2021-10-29T18:24:36.000Z | #! /usr/bin/env python
"""
This module is cloned with modifications from https://pypi.org/project/airflow-multi-dagrun/
https://raw.githubusercontent.com/mastak/airflow_multi_dagrun/master/airflow_multi_dagrun/operators/multi_dagrun.py
Author Ihor Liubymov infunt@gmail.com
Maintainer https://pypi.org/user/mastak/
The original iterates over multiple executions of the same trigger_dag_id; this version
allows that to change between iterations.
"""
from airflow import settings
from airflow.models import DagBag
from airflow.operators.dagrun_operator import DagRunOrder
from airflow.utils.decorators import apply_defaults
from airflow.utils.state import State
from airflow.api.common.experimental.trigger_dag import trigger_dag
from airflow.models import BaseOperator
import datetime as dt
class FlexMultiDagRunOperator(BaseOperator):
"""
Triggers zero or more DAG runs based on output of a generator
:param conf: Configuration for the DAG run
:type conf: dict
:param execution_date: Execution date for the dag (templated)
:type execution_date: str or datetime.datetime
"""
template_fields = ("execution_date", "conf")
ui_color = "#ffefeb"
CREATED_DAGRUN_KEY = 'created_dagrun_key'
@apply_defaults
def __init__(self, python_callable,
conf=None, execution_date=None,
op_args=None, op_kwargs=None,
provide_context=False, *args, **kwargs):
super(FlexMultiDagRunOperator, self).__init__(*args, **kwargs)
self.python_callable = python_callable
self.conf = conf
if not isinstance(execution_date, (str, dt.datetime, type(None))):
raise TypeError(
"Expected str or datetime.datetime type for execution_date."
"Got {}".format(type(execution_date))
)
self.execution_date = execution_date
self.op_args = op_args or []
self.op_kwargs = op_kwargs or {}
self.provide_context = provide_context
def execute(self, context):
if self.provide_context:
context.update(self.op_kwargs)
self.op_kwargs = context
session = settings.Session()
created_dr_ids = []
for tuple in self.python_callable(*self.op_args, **self.op_kwargs):
if not tuple:
break
trigger_dag_id, dro = tuple
if not isinstance(dro, DagRunOrder):
dro = DagRunOrder(payload=dro)
now = dt.datetime.now(dt.timezone.utc)
if dro.run_id is None:
dro.run_id = 'trig__' + now.isoformat()
dbag = DagBag(settings.DAGS_FOLDER)
trigger_dag = dbag.get_dag(trigger_dag_id)
dr = trigger_dag.create_dagrun(
run_id=dro.run_id,
execution_date=now,
state=State.RUNNING,
conf=dro.payload,
external_trigger=True,
)
created_dr_ids.append(dr.id)
self.log.info("Created DagRun %s, %s", dr, now)
if created_dr_ids:
session.commit()
context['ti'].xcom_push(self.CREATED_DAGRUN_KEY, created_dr_ids)
else:
self.log.info("No DagRun created")
session.close() | 36.087912 | 115 | 0.651035 | 402 | 3,284 | 5.116915 | 0.378109 | 0.069519 | 0.023335 | 0.022363 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.263094 | 3,284 | 91 | 116 | 36.087912 | 0.85 | 0.204629 | 0 | 0 | 0 | 0 | 0.059441 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032787 | false | 0 | 0.131148 | 0 | 0.229508 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a22daebaa39b536533b79ec7580196961b675d5b | 4,774 | py | Python | Athos/CompilerScripts/change_onnx_output.py | krantikiran68/EzPC | cacf10f31cddf55e4a06908fcfc64f8d7d0f85bd | [
"MIT"
] | 221 | 2019-05-16T16:42:49.000Z | 2022-03-29T14:05:31.000Z | Athos/CompilerScripts/change_onnx_output.py | krantikiran68/EzPC | cacf10f31cddf55e4a06908fcfc64f8d7d0f85bd | [
"MIT"
] | 63 | 2019-07-02T11:50:15.000Z | 2022-03-31T08:14:02.000Z | Athos/CompilerScripts/change_onnx_output.py | krantikiran68/EzPC | cacf10f31cddf55e4a06908fcfc64f8d7d0f85bd | [
"MIT"
] | 67 | 2019-08-30T08:44:47.000Z | 2022-03-23T08:08:33.000Z | """
Authors: Pratik Bhatu.
Copyright:
Copyright (c) 2020 Microsoft Research
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
import onnx
import onnxruntime
import numpy as np
from onnx import helper, shape_inference, checker
from onnx import ValueInfoProto, ModelProto, TensorProto
import os
model_name = "shufflenet_may17.onnx"
output_model_name = "processed_" + model_name
inputs = ["data"]
nodes_to_remove = ["LabelSelector", "LabelIndexExtractor", "ZipMap", "activation37"]
new_output_names = ["fc"]
batch_size = 1
def fix_shape(shape_list, batch_size):
if "None" not in shape_list:
return shape_list
else:
shape_list[0] = batch_size
assert (
"None" not in shape_list
), """Other than batch size there are input
params with unkown dimension"""
return shape_list
def fix_inp_shape(inp, batch_size):
if inp.type.tensor_type.shape.dim[0].dim_param == "None":
inp.type.tensor_type.shape.dim[0].dim_value = batch_size
return
def get_np_type_from_onnxruntime(typ_str):
np_types = {
"tensor(float)": np.float32,
"tensor(float64)": np.float64,
"tensor(int)": np.int32,
"tensor(int64)": np.int64,
}
return np_types[typ_str]
def get_onnx_type(arr):
onnx_types = {
np.float32: TensorProto.FLOAT,
np.float64: TensorProto.DOUBLE,
np.int32: TensorProto.INT32,
np.int64: TensorProto.INT64,
}
return onnx_types[arr.dtype.type]
model = onnx.load(model_name)
# 1. Inputs to remove
# Inputs to dead nodes should not show up as inputs for the model
# and also not in the initialization list.
inputs_to_remove = [
inp for i in model.graph.node if i.name in nodes_to_remove for inp in i.input
]
new_inputs = [i for i in model.graph.input if i.name not in inputs_to_remove]
# Fix batch size
fix_inp_shape(new_inputs[0], batch_size)
# 2. Remove their initializers
new_initializers = [
init
for init in model.graph.initializer
if init.name not in nodes_to_remove and init.name not in inputs_to_remove
]
# 3. Remove nodes
new_nodes = [n for n in model.graph.node if n.name not in nodes_to_remove]
# Get Ouput Tensor Types to create ValueInfo for output info
# by running model on dummy input
temp_model = ModelProto()
temp_model.CopyFrom(model)
for i in new_output_names:
op = ValueInfoProto()
op.name = i
temp_model.graph.output.append(op)
onnx.save(temp_model, "__temp.onnx")
sess = onnxruntime.InferenceSession("__temp.onnx")
sess_inps = sess.get_inputs()
input_dict = {}
for i in sess_inps:
shape = fix_shape(i.shape, batch_size)
typ = get_np_type_from_onnxruntime(i.type)
input_dict[i.name] = np.random.rand(*shape).astype(typ)
output_tensors = sess.run(new_output_names, input_dict)
if os.path.exists("__temp.onnx"):
os.remove("__temp.onnx")
# 4. Create new output list
new_outputs = []
for i in range(0, len(new_output_names)):
name = new_output_names[i]
typ = get_onnx_type(output_tensors[i])
shape = output_tensors[i].shape
val_info = helper.make_tensor_value_info(name, typ, shape)
new_outputs.append(val_info)
new_graph = helper.make_graph(
new_nodes,
model.graph.name,
new_inputs,
new_outputs,
initializer=new_initializers,
doc_string=model.graph.doc_string,
value_info=model.graph.value_info,
)
new_model = helper.make_model(
new_graph,
ir_version=model.ir_version,
doc_string=model.doc_string,
model_version=model.model_version,
domain=model.domain,
producer_name="MPCOpRemover",
)
new_model.metadata_props.extend(model.metadata_props)
new_model.opset_import.pop()
new_model.opset_import.extend(model.opset_import)
onnx.save(new_model, "processed_" + model_name)
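# Optional sanity check (illustrative suggestion, not in the original script):
# checker.check_model(new_model)  # raises if the rewritten graph is malformed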
| 31.407895 | 84 | 0.733138 | 722 | 4,774 | 4.65928 | 0.310249 | 0.024078 | 0.020809 | 0.013377 | 0.085612 | 0.043995 | 0.017241 | 0.017241 | 0 | 0 | 0 | 0.010744 | 0.18119 | 4,774 | 151 | 85 | 31.615894 | 0.849834 | 0.292417 | 0 | 0.020202 | 0 | 0 | 0.096131 | 0.00625 | 0 | 0 | 0 | 0 | 0.010101 | 1 | 0.040404 | false | 0 | 0.080808 | 0 | 0.171717 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a22e6a9f734dc5907c329f911cf0286be792813f | 1,404 | py | Python | crawl_mails/scamdex.py | wardballoon/social-engineering-defense | 0713f6afba31206e86b180f74264b138cd84688e | [
"BSD-2-Clause-FreeBSD",
"MIT"
] | 4 | 2019-11-10T17:06:06.000Z | 2020-05-28T13:08:26.000Z | crawl_mails/scamdex.py | codingsoo/social-engineering-defense | 0713f6afba31206e86b180f74264b138cd84688e | [
"BSD-2-Clause-FreeBSD",
"MIT"
] | 9 | 2017-09-28T08:32:40.000Z | 2017-11-12T20:48:29.000Z | crawl_mails/scamdex.py | zerobugplz/social-engineering-defense | 0713f6afba31206e86b180f74264b138cd84688e | [
"BSD-2-Clause-FreeBSD",
"MIT"
] | 9 | 2017-09-28T01:07:24.000Z | 2018-07-25T22:52:40.000Z | import langid
from bs4 import BeautifulSoup
import requests
import json
url = 'http://www.scamdex.com/__INCLUDES/__getLatestScamEmails.php'
base_url = 'http://www.scamdex.com'
payload = {'num' : 62700, 'frum' : 0}
response = requests.post(url, data=payload)
soup = BeautifulSoup(response.text, 'html.parser')
scam_url = soup.find_all('a')
url_data = []
for scam_number in range(len(scam_url)):
if 'email-scam-database' in str(scam_url[scam_number].get('href')):
url_data.append(scam_url[scam_number].get('href'))
scam_data = []
img_data = []
num_of_image = 0
num = 0
for i, scam_path in enumerate(url_data):
try:
req = requests.get(base_url+scam_path)
soup = BeautifulSoup(req.text, 'html.parser')
scam_content = soup.find(id='HEADBODYSEP')
scam_image = scam_content.find('img')
if scam_image:
num_of_image = num_of_image + 1
img_data.append(scam_image.get('alt'))
if langid.classify(scam_content.text)[0] == 'en' and len(scam_content.text) > 10:
scam_data.append(scam_content.text)
print(i, "/", 62700)
num = num + 1
except Exception as e:
print(e)
with open('scamdex_url_dataset.json','w') as f:
json.dump(url_data,f)
with open('scamdex_data.json','w') as f:
json.dump(scam_data,f)
with open('scamdex_image_name.json','w') as f:
json.dump(img_data,f)
| 28.08 | 89 | 0.658832 | 209 | 1,404 | 4.210526 | 0.344498 | 0.039773 | 0.047727 | 0.027273 | 0.2 | 0.109091 | 0 | 0 | 0 | 0 | 0 | 0.016874 | 0.198006 | 1,404 | 49 | 90 | 28.653061 | 0.764654 | 0 | 0 | 0 | 0 | 0 | 0.160256 | 0.033476 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.102564 | 0 | 0.102564 | 0.051282 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a22f421f047cf0dc2fca812751f8d622f6d2dd18 | 1,517 | py | Python | main.py | JDaniloC/Individual-Bet365Bot | d719805d6f83f6bbe1bbba3794e0aceb1e41442d | [
"MIT"
] | 41 | 2021-01-14T14:35:12.000Z | 2022-03-02T22:46:13.000Z | main.py | profxx/Individual-Bet365Bot | 14b8df8cf16c8ab1fb58c1367f232373552434b9 | [
"MIT"
] | 11 | 2021-02-24T15:56:29.000Z | 2022-02-17T10:41:59.000Z | main.py | profxx/Individual-Bet365Bot | 14b8df8cf16c8ab1fb58c1367f232373552434b9 | [
"MIT"
] | 17 | 2021-01-28T21:54:04.000Z | 2022-03-21T23:39:39.000Z | import eel, time, threading
from datetime import datetime
from src.database import MongoDB
from src.bot import BetBot
class Updater:
def __init__(self, account:dict):
self.MongoDB = MongoDB
self.account = account
def update_balance(self, balance:float):
self.account["license"]['actual_value'] = balance
if self.account["license"]['original_value'] == -1:
self.account["license"]['original_value'] = balance
self.MongoDB.modifica_usuario(self.account, self.account['username'])
eel.updateBalance(balance)
def session_gain(balance:float):
eel.sessionGain(balance)
def expire_warning():
eel.expireWarning()
@eel.expose
def handle_login(account:dict):
conta = MongoDB.login(
account["username"], account["password"])
if conta:
if conta["license"]['to_date'] < time.time():
Updater.expire_warning()
return False
conta['password'] = account["password"]
conta["license"]['from_date'] = datetime.fromtimestamp(
conta["license"]['from_date']).strftime('%d/%m/%Y')
conta["license"]['to_date'] = datetime.fromtimestamp(
conta["license"]['to_date']).strftime('%d/%m/%Y')
return conta
return False
@eel.expose
def operate(account:dict):
atualizador = Updater(account)
bot = BetBot(account, atualizador)
threading.Thread(target=bot.start,
daemon = True).start()
eel.init('src/web')
eel.start('index.html') | 32.978261 | 77 | 0.644034 | 174 | 1,517 | 5.511494 | 0.350575 | 0.080292 | 0.056309 | 0.056309 | 0.173097 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000836 | 0.211602 | 1,517 | 46 | 78 | 32.978261 | 0.801003 | 0 | 0 | 0.097561 | 0 | 0 | 0.137022 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.146341 | false | 0.04878 | 0.097561 | 0 | 0.341463 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a22fc9add84414cd9c974975661107315c5979df | 2,250 | py | Python | src/framework/utils.py | swankyalex/tms-course | cb42d1732b827698683bb078d651a62236c41e8f | [
"MIT"
] | null | null | null | src/framework/utils.py | swankyalex/tms-course | cb42d1732b827698683bb078d651a62236c41e8f | [
"MIT"
] | 4 | 2020-10-16T20:16:28.000Z | 2021-09-23T10:46:12.000Z | src/framework/utils.py | swankyalex/tms-course | cb42d1732b827698683bb078d651a62236c41e8f | [
"MIT"
] | null | null | null | import http
import random
from http.cookies import SimpleCookie
from typing import Any
from typing import Dict
from typing import Optional
from urllib.parse import parse_qs
from framework.consts import DIR_STATIC
from framework.consts import USER_COOKIE
from framework.types import RequestT
from framework.types import ResponseT
def read_static(file_name: str) -> bytes:
path = DIR_STATIC / file_name
with path.open("rb") as fp:
payload = fp.read()
return payload
def generate_404(request: RequestT) -> ResponseT:
url = request.path
pin = random.randint(1, 1000)
msg = f"Hello world! Your path: {url} not found. Pin: {pin}"
headers_strings = [f"{h} -> {v}" for h, v in request.headers.items()]
headers_txt = ""
for item in headers_strings:
headers_txt = headers_txt + item + "\n"
document = f"""404! path: {url}, pin{pin}
{headers_txt}
"""
payload = document.encode()
status = build_status(404)
headers_strings = {
"Content-type": "text/plain",
}
return ResponseT(status, headers_strings, payload)
def get_form_data(body: bytes) -> Dict[str, Any]:
qs = body.decode()
form_data = parse_qs(qs or "")
return form_data
def get_body(environ: dict) -> bytes:
fp = environ["wsgi.input"]
cl = int(environ.get("CONTENT_LENGTH") or 0)
if not cl:
return b""
content = fp.read(cl)
return content
def build_status(code: int) -> str:
status = http.HTTPStatus(code)
def _process_word(_word: str) -> str:
if _word == "OK":
return _word
return _word.capitalize()
reason = " ".join(_process_word(word) for word in status.name.split("_"))
text = f"{code} {reason}"
return text
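# Examples of the formatting above (added note): build_status(200) -> "200 OK",
# build_status(404) -> "404 Not Found", build_status(500) -> "500 Internal Server Error".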
def get_user_id(headers: Dict) -> Optional[str]:
cookies_header = headers.get("COOKIE", "")
cookies = SimpleCookie(cookies_header)
if USER_COOKIE not in cookies:
return None
user_id = cookies[USER_COOKIE].value
return user_id
def get_request_headers(environ: dict) -> dict:
environ_headers = filter(lambda _kv: _kv[0].startswith("HTTP_"), environ.items())
request_headers = {key[5:]: value for key, value in environ_headers}
return request_headers
| 23.195876 | 85 | 0.666667 | 307 | 2,250 | 4.71987 | 0.335505 | 0.035887 | 0.033126 | 0.034507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009681 | 0.219556 | 2,250 | 96 | 86 | 23.4375 | 0.81549 | 0 | 0 | 0 | 0 | 0 | 0.081333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.171875 | 0 | 0.46875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2309362499244546ee8b4343811155900945008 | 530 | py | Python | examples/try12.py | h4l/pyvips | 954468a9bfe03d77c84943a7c2d66137553f5861 | [
"MIT"
] | 142 | 2017-08-01T12:33:20.000Z | 2018-09-15T16:50:32.000Z | examples/try12.py | h4l/pyvips | 954468a9bfe03d77c84943a7c2d66137553f5861 | [
"MIT"
] | 62 | 2017-08-01T16:22:09.000Z | 2018-09-20T08:00:40.000Z | examples/try12.py | h4l/pyvips | 954468a9bfe03d77c84943a7c2d66137553f5861 | [
"MIT"
] | 15 | 2017-08-04T09:51:29.000Z | 2018-08-25T18:42:49.000Z | #!/usr/bin/python
import sys
import pyvips
im = pyvips.Image.new_from_file(sys.argv[1], access=pyvips.Access.SEQUENTIAL)
footer = pyvips.Image.black(im.width, 150)
left_text = pyvips.Image.text("left corner", dpi=300)
right_text = pyvips.Image.text("right corner", dpi=300)
footer = footer.insert(left_text, 50, 50)
footer = footer.insert(right_text, im.width - right_text.width - 50, 50)
footer = footer.ifthenelse(0, [255, 0, 0], blend=True)
im = im.insert(footer, 0, im.height, expand=True)
im.write_to_file(sys.argv[2])
| 27.894737 | 77 | 0.733962 | 89 | 530 | 4.269663 | 0.41573 | 0.115789 | 0.057895 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054968 | 0.107547 | 530 | 18 | 78 | 29.444444 | 0.748414 | 0.030189 | 0 | 0 | 0 | 0 | 0.044834 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a231b2a2cc40995b67cde3cc25f50f2493288878 | 12,871 | py | Python | excut/explanations_mining/simple_miner/description_miner_extended.py | mhmgad/ExCut | 09e943a23207381de3c3a9e6f70015882b8ec4af | [
"Apache-2.0"
] | 5 | 2020-11-17T19:59:49.000Z | 2021-09-23T23:10:39.000Z | excut/explanations_mining/simple_miner/description_miner_extended.py | mhmgad/ExCut | 09e943a23207381de3c3a9e6f70015882b8ec4af | [
"Apache-2.0"
] | null | null | null | excut/explanations_mining/simple_miner/description_miner_extended.py | mhmgad/ExCut | 09e943a23207381de3c3a9e6f70015882b8ec4af | [
"Apache-2.0"
] | null | null | null | from copy import deepcopy
from enum import Enum
import excut.explanations_mining.explanations_quality_functions as qm
from excut.explanations_mining.descriptions import rank
from excut.explanations_mining.simple_miner import constants
from excut.explanations_mining.descriptions_new import Description2, is_var, Atom
from excut.kg.kg_query_interface_extended import KGQueryInterfaceExtended, EndPointKGQueryInterfaceExtended
from excut.utils.logging import logger
from itertools import permutations
import numpy as np
"""
Mines patterns starting from a single variable.
"""
class ExplanationStructure(Enum):
PATH = 1
TREE = 2
CATEGORICAL = 3
SUBGRAPH = 4
class DescriptionMinerExtended:
def __init__(self, query_interface: KGQueryInterfaceExtended,
per_pattern_binding_limit=-1,
min_description_size=0,
with_constants=True,
relations_with_const_object=constants.RELATIONS_WITH_CONSTANTS,
categorical_relations=constants.CATEGORICAL_RELATIONS,
excluded_relations=None,
pattern_structure: ExplanationStructure = ExplanationStructure.SUBGRAPH
):
self.pattern_structure = pattern_structure
self.with_constants = with_constants
self.min_description_size = min_description_size
self.categorical_relations = categorical_relations if self.with_constants else []
self.relations_with_const_object = relations_with_const_object if self.with_constants else []
self.excluded_relations = excluded_relations if excluded_relations else []
self.query_interface = query_interface
# self.min_support=min_support
self.per_pattern_binding_limit = per_pattern_binding_limit
self.init_miner()
def init_miner(self):
logger.debug("Data Statistics")
stats = np.array(self.query_interface.get_data_stats(), dtype=object)
print(stats.dtype)
stats[:, [0, 2]] = stats[:, [0, 2]].astype(int)
relative_stats = np.column_stack([stats[:, 0] / stats[:, 2], stats[:, 1], stats[:, 2] / stats[:, 0]])
self.relations_with_const_object = self.relations_with_const_object + list(relative_stats[relative_stats[:, 0] > 10, 1])
self.categorical_relations = self.categorical_relations + \
list(relative_stats[relative_stats[:, 0] > 50, 1])
# print(relations_with_const_object)
# print(categorical_relations)
logger.debug("Data Statistics")
def mine_with_constants(self, head, max_length=2, min_coverage=-1.0, negative_heads=None):
if isinstance(head, tuple):
head = Atom(head[0], head[1], head[2])
negative_heads = negative_heads if negative_heads else []
logger.info('Learn descriptions for ' + str(head))
# start_var = head.subject if head.subject else '?x'
descriptions = []
# for evaluation
target_head_size = self.query_interface.count(Description2(head=head))
# logger.info('Target head size %i' % target_head_size)
min_support = int(min_coverage * target_head_size)
# print(negative_heads)
negative_heads_sizes = [self.query_interface.count(Description2(head=neg_head)) for neg_head in negative_heads]
# logger.info('Negative head sizes %r' % negative_heads_sizes)
base_description = Description2(head=head)
previous_level_descriptions = [base_description]
# TODO: the last iteration should only bind constants in the last predicate (a better way should be implemented)
# const_iteration = max_length + 1
for i in range(1, max_length + 1):
logger.info("Discovering Level: %i" % (i))
level_descriptions = []
for cur_pattern in previous_level_descriptions:
logger.debug('Expand Description Pattern: %r', cur_pattern)
# expand()
description_extended_patterns = self._expand_pattern(cur_pattern, i)
logger.debug('Expanded patterns Size: %i' % len(description_extended_patterns))
# bind predicates
query_bind_descriptions = self._bind_patterns(description_extended_patterns, min_support)
# bind args if required
descriptions_with_constants = self._get_patterns_with_bindable_args(query_bind_descriptions)
query_bind_descriptions += self._bind_patterns(descriptions_with_constants, min_support)
# Prune bind descriptions
query_bind_descriptions = list(filter(self._filter_level_descriptions, query_bind_descriptions))
# Add Quality Scores to binede descriptions
self._add_quality_to_descriptions(query_bind_descriptions, target_head_size, negative_heads,
negative_heads_sizes)
level_descriptions += query_bind_descriptions
# Remove descriptions whose bodies are identical up to atom order
# WARN: may not work well because of the trivial implementation of __eq__ and __hash__ of Description2
level_descriptions = set(level_descriptions)
# TODO make the filter global
descriptions += list(filter(self._filter_output_descriptions, level_descriptions))
previous_level_descriptions = level_descriptions
logger.info("Done level: " + str(i) + ' level descriptions: ' + str(
len(level_descriptions)) + ' total descriptions: ' + str(len(descriptions)))
return descriptions
def _add_quality_to_descriptions(self, query_bind_descriptions, target_head_size, negative_heads,
negative_heads_sizes):
for description in query_bind_descriptions:
description_n_heads_support = [self.query_interface.count(description, alternative_head=n_head) for n_head
in
negative_heads]
# add quality
self._compute_qualities(description, description.target_head_support, description_n_heads_support,
target_head_size,
negative_heads_sizes)
def _expand_pattern(self, pattern_to_expand: Description2, iteration_number):
# print('Pattern\n%s' % pattern_to_expand.str_readable())
extended_query_patterns = []
new_pred = '?p'
new_var_arg = '?x' + str(iteration_number)
# add predicate with one new variable (once as subject and once as object)
# add predicate with 2 old variables (twice in both directions)
var_permutations = self.get_variable_permutations(pattern_to_expand, new_var_arg)
for var_perm in var_permutations:
new_des = deepcopy(pattern_to_expand)
new_des.add_atom(Atom(var_perm[0], new_pred, var_perm[1]))
extended_query_patterns.append(new_des)
return extended_query_patterns
def get_variable_permutations(self, pattern_to_expand: Description2, new_var_arg):
if self.pattern_structure == ExplanationStructure.PATH:
# x p1 x1 ^ x1 p2 x2 ^ x2 p3 x3
var_args = pattern_to_expand.get_open_var_arg() + [new_var_arg]
if len(var_args) < 2:
return []
return permutations(var_args, 2)
elif self.pattern_structure == ExplanationStructure.CATEGORICAL:
var_args = list(pattern_to_expand.anchor_vars) + [new_var_arg]
return permutations(var_args, 2)
elif self.pattern_structure == ExplanationStructure.SUBGRAPH:
var_args = list(pattern_to_expand.get_uniq_var_args()) + [new_var_arg]
return permutations(var_args, 2)
elif self.pattern_structure == ExplanationStructure.TREE:
perms = []
for arg in pattern_to_expand.get_uniq_var_args():
perms.append((arg, new_var_arg)) # out edge
perms.append((new_var_arg, arg)) # in edge
return perms
else:
raise Exception('%r is not a supported Explanation Language' % self.pattern_structure)
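# Illustrative example (added note): for a SUBGRAPH pattern whose body already uses
# ?x and ?x1, expanding at iteration 2 permutes [?x, ?x1, ?x2] two at a time, so the
# candidate new atoms connect (?x, ?x2), (?x2, ?x), (?x1, ?x2), (?x2, ?x1), (?x, ?x1)
# and (?x1, ?x); TREE only pairs each existing variable with the new one (both
# directions), while PATH only extends from the open (dangling) variables.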
def _bind_patterns(self, level_query_patterns, min_support):
level_descriptions = []
for query_pattern in level_query_patterns:
res = self.query_interface.get_pattern_bindings(query_pattern, min_support, self.per_pattern_binding_limit)
for r in res:
description = deepcopy(query_pattern)
if len(description.get_var_predicates()) > 0:
# it is a predicate
description.get_last_atom().predicate = str(r[0])
else:
description.set_dangling_arg(str(r[0]))
logger.debug("** After binding: %s" % str(description))
description.target_head_support = int(r[1])
level_descriptions.append(description)
return level_descriptions
def _compute_qualities(self, description, target_head_support, n_heads_support, t_head_size, n_heads_sizes):
description.add_quality('c_support', target_head_support)
for q_name, q in qm.quality_functions.items():
description.add_quality(q_name, q(target_head_support, t_head_size, n_heads_support, n_heads_sizes))
def unique(self, desc_dict):
pass
def has_unbind_categorical_atom(self, description: Description2):
# check whether a description has unbound categorical atoms,
# e.g. (?x rdf:type ?y) where ?y should be bound to a constant
return any(a.predicate in self.categorical_relations and is_var(a.object) for a in description.body)
def _filter_output_descriptions(self, description: Description2):
# Remove descriptions with unbound categorical atoms, e.g. (?x rdf:type ?y) where ?y should be bound to a constant
if self.has_unbind_categorical_atom(description):
return False
# More filters may be added
return True
def _filter_level_descriptions(self, description: Description2):
# Remove descriptions with unbound categorical atoms, e.g. (?x rdf:type ?y) where ?y should be bound to a constant
if self.has_unbind_categorical_atom(description):
return False
return True
def close(self):
self.query_interface.close()
def _get_patterns_with_bindable_args(self, query_descriptions):
# if the last predicate is a relation with interesting constant objects, try binding constants
patterns_with_bindable_args = []
for pattern_to_expand in query_descriptions:
if pattern_to_expand.get_last_atom().predicate in self.relations_with_const_object:
if is_var(pattern_to_expand.get_dangling_arg()): # I think this is a redundant check
patterns_with_bindable_args.append(pattern_to_expand)
# logger.debug(str(level_query_patterns[-1]))
# print('Extended to bind constants\n%s' % pattern_to_expand.str_readable())
return patterns_with_bindable_args
if __name__ == '__main__':
vos_executer = EndPointKGQueryInterfaceExtended('http://halimede:8890/sparql',
['http://yago-expr.org', 'http://yago-expr.org.alltypes'],
labels_identifier='http://yago-expr.org.art-labels')
p = DescriptionMinerExtended(vos_executer,
per_pattern_binding_limit=30,
pattern_structure=ExplanationStructure.SUBGRAPH)
# print(p.mine_iteratively(head=('?x', 'http://execute_aux.org/auxBelongsTo', 'http://execute_aux.org/auxC2'),
# min_coverage=0.4,
# negative_heads=[('?x', 'http://execute_aux.org/auxBelongsTo', 'http://execute_aux.org/auxC0'),
# ('?x', 'http://execute_aux.org/auxBelongsTo', 'http://execute_aux.org/auxC1'),
# ('?x', 'http://execute_aux.org/auxBelongsTo', 'http://execute_aux.org/auxC3'),
# ('?x', 'http://execute_aux.org/auxBelongsTo', 'http://execute_aux.org/auxC4')]))
ds = p.mine_with_constants(head=Atom('?x', 'http://excute.org/label', 'http://exp-data.org/wordnet_song_107048000'),
max_length=2,
min_coverage=0.2
)
# ,negative_heads=[Atom('?x', 'http://excute.org/label', 'http://exp-data.org/wordnet_song_107048000')]
for d in rank(ds, method='x_coverage'):
print(d.str_readable())
| 49.503846 | 128 | 0.649367 | 1,479 | 12,871 | 5.342123 | 0.192022 | 0.026326 | 0.026579 | 0.021516 | 0.295785 | 0.208834 | 0.164663 | 0.141121 | 0.141121 | 0.141121 | 0 | 0.010082 | 0.267889 | 12,871 | 259 | 129 | 49.694981 | 0.828399 | 0.180872 | 0 | 0.10241 | 0 | 0 | 0.043278 | 0 | 0 | 0 | 0 | 0.003861 | 0 | 1 | 0.084337 | false | 0.006024 | 0.060241 | 0.006024 | 0.26506 | 0.012048 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a233528f0197b3da5d5b0db9d12f008bd4f4b04a | 2,764 | py | Python | scripts/duck_smd_runs.py | mihaelasmilova/duck | ff0a4fdd6ab2f789ab56f8753f3e3cb03d8cccf0 | [
"Apache-2.0"
] | 1 | 2020-06-20T23:27:46.000Z | 2020-06-20T23:27:46.000Z | scripts/duck_smd_runs.py | mihaelasmilova/duck | ff0a4fdd6ab2f789ab56f8753f3e3cb03d8cccf0 | [
"Apache-2.0"
] | null | null | null | scripts/duck_smd_runs.py | mihaelasmilova/duck | ff0a4fdd6ab2f789ab56f8753f3e3cb03d8cccf0 | [
"Apache-2.0"
] | 1 | 2020-06-02T13:52:40.000Z | 2020-06-02T13:52:40.000Z | import argparse
import shutil
import sys
try:
from duck.steps.normal_md import perform_md
from duck.steps.steered_md import run_steered_md
from duck.utils.check_system import check_if_equlibrated
except ModuleNotFoundError:
print('Dependencies missing; check openmm, pdbfixer, and yank are installed from Omnia.')
def main():
parser = argparse.ArgumentParser(description='Perform SMD runs for dynamic undocking')
parser.add_argument('-i', '--input', help='Equilibrated system as input')
parser.add_argument('-p', '--pickle', help='Pickle file')
parser.add_argument('-n', '--num-runs', type=int, help='Number of SMD runs.')
# parser.add_argument('-o', '--output', help="PDB output")
parser.add_argument('-l', '--md-len', type=float, help='MD run length.')
parser.add_argument('-d', '--start-dist', type=float, help='Starting distance')
parser.add_argument('-v', '--init-velocity', type=float, help='Initial velocity')
parser.add_argument('--gpu-id', type=int, help='GPU ID (optional); if not specified, runs on CPU only.')
args = parser.parse_args()
shutil.copyfile(args.input, "equil.chk")
shutil.copyfile(args.pickle, "complex_system.pickle")
# Now do the MD
# remember start_dist
for i in range(args.num_runs):
if i == 0:
md_start = "equil.chk"
else:
md_start = "md_" + str(i - 1) + ".chk"
log_file = "md_" + str(i) + ".csv"
perform_md(
md_start,
"md_" + str(i) + ".chk",
log_file,
"md_" + str(i) + ".pdb",
md_len=args.md_len,
gpu_id=args.gpu_id,
)
# Open the file and check that the potential is stable and negative
if not check_if_equlibrated(log_file, 3):
print("SYSTEM NOT EQUILIBRATED")
sys.exit()
# Now find the interaction and save to a file
run_steered_md(
300,
"md_" + str(i) + ".chk",
"smd_" + str(i) + "_300.csv",
"smd_" + str(i) + "_300.dat",
"smd_" + str(i) + "_300.pdb",
"smd_" + str(i) + "_300.dcd",
args.start_dist,
init_velocity=args.init_velocity,
gpu_id=args.gpu_id,
)
run_steered_md(
325,
"md_" + str(i) + ".chk",
"smd_" + str(i) + "_325.csv",
"smd_" + str(i) + "_325.dat",
"smd_" + str(i) + "_325.pdb",
"smd_" + str(i) + "_325.dcd",
args.start_dist,
init_velocity=args.init_velocity,
gpu_id=args.gpu_id,
)
# sed -i.bak -e 's/\$\$\$\$/> <SCORE.DUCK_WQB>\n-0.015815293149895382\n\n\$\$\$\$/g' tst.sdf
if __name__ == "__main__":
main()
| 36.853333 | 108 | 0.565847 | 361 | 2,764 | 4.116343 | 0.34903 | 0.037685 | 0.091521 | 0.026918 | 0.148048 | 0.121131 | 0.099596 | 0.078062 | 0.078062 | 0.078062 | 0 | 0.026118 | 0.279667 | 2,764 | 74 | 109 | 37.351351 | 0.720241 | 0.105282 | 0 | 0.196721 | 0 | 0 | 0.229116 | 0.008516 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016393 | false | 0 | 0.081967 | 0 | 0.098361 | 0.032787 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2345be8735cc1849b734848a629591793a4f526 | 2,025 | py | Python | anatomy/surgery_planning_2.py | FedeClaudi/LocomotionControl | 1281f7894825096ad212407351463a2105c5152a | [
"MIT"
] | null | null | null | anatomy/surgery_planning_2.py | FedeClaudi/LocomotionControl | 1281f7894825096ad212407351463a2105c5152a | [
"MIT"
] | null | null | null | anatomy/surgery_planning_2.py | FedeClaudi/LocomotionControl | 1281f7894825096ad212407351463a2105c5152a | [
"MIT"
] | null | null | null | from brainrender.actors import Cylinder, Point
from brainrender import Scene, settings
import numpy as np
from vedo import shapes
from pathlib import Path
# To measure distances and angles
save_folder = Path(
"/Users/federicoclaudi/Dropbox (UCL)/Rotation_vte/Presentations/Presentations/Fiete lab"
)
settings.SHOW_AXES = False
BREGMA = np.array([5400, 0, 0]) # AP # DV # ML
top = np.array([4.136, -2.106, 0]) * 1000 + BREGMA # AP ML DV
tip = np.array([5.507, -0.584, 6.489]) * 1000 + BREGMA
scene = Scene(inset=False, screenshots_folder=save_folder,)
cun, grn, mos = scene.add_brain_region("CUN", "GRN", "MOs", alpha=0.1)
mos5 = scene.add_brain_region("MOs5", alpha=0.7)
orb, olf = scene.add_brain_region("ORB", "OLF")
# CUN/GRN probe
tip[1] = tip[1] + scene.root.centerOfMass()[2]
top[1] = top[1] + scene.root.centerOfMass()[2]
top = top[[0, 2, 1]]
tip = tip[[0, 2, 1]]
scene.add(shapes.Cylinder(pos=[top, tip], c="k", r=50, alpha=1))
BREGMA_centered = [
5400, # AP
0, # DV
5700, # ML
]
M2_probe_position = BREGMA_centered + np.array([-2500, 2500, -1000])
scene.add(Point(M2_probe_position, color="red", res=12, radius=150))
scene.add(Point(BREGMA_centered, color="blue"))
# MOs probe
mos_center = mos.centerOfMass() + np.array([1000, 0, -800])
for x in [0, 250, 500, 750]:
scene.add(
Cylinder(
M2_probe_position + np.array([0, 0, -x]),
scene.root,
color="k",
radius=75 / 2,
)
)
scene.slice(
scene.atlas.get_plane(
M2_probe_position + np.array([-100, 0, 0]), norm=(1, 0, 0)
),
actors=[mos, scene.root, mos5, orb, olf],
)
scene.slice(
scene.atlas.get_plane(
M2_probe_position + np.array([1000, 0, 0]), norm=(-1, 0, 0)
),
)
camera = {
"pos": (-6374, -5444, 26602),
"viewup": (0, -1, 0),
"clippingRange": (19433, 56931),
"focalPoint": (7830, 4296, -5694),
"distance": 36602,
}
scene.render(interactive=True, camera="frontal")
scene.screenshot(name="probes")
| 25 | 92 | 0.626667 | 302 | 2,025 | 4.112583 | 0.407285 | 0.045089 | 0.060386 | 0.045894 | 0.154589 | 0.136876 | 0.080515 | 0.080515 | 0.080515 | 0.080515 | 0 | 0.105166 | 0.197037 | 2,025 | 80 | 93 | 25.3125 | 0.658672 | 0.042469 | 0 | 0.101695 | 0 | 0 | 0.086618 | 0.042012 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.084746 | 0 | 0.084746 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a234fa260bb47c3b8d686d94a94e4b12096d35d7 | 3,342 | py | Python | models/recall/mhcn/dygraph_model.py | renmada/PaddleRec | 89f532f2cb68407e2710e1b4c2438b0057fcd1eb | [
"Apache-2.0"
] | null | null | null | models/recall/mhcn/dygraph_model.py | renmada/PaddleRec | 89f532f2cb68407e2710e1b4c2438b0057fcd1eb | [
"Apache-2.0"
] | null | null | null | models/recall/mhcn/dygraph_model.py | renmada/PaddleRec | 89f532f2cb68407e2710e1b4c2438b0057fcd1eb | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
import math
from net import MHCN
class DygraphModel():
# define model
def create_model(self, config):
n_layers = config.get("hyper_parameters.n_layer")
emb_size = config.get("hyper_parameters.num_factors")
mhcn_model = MHCN(n_layers=n_layers, emb_size=emb_size, config=config)
return mhcn_model
# define feeds which convert numpy of batch data to paddle.tensor
def create_feeds(self, batch_data):
# u_idx, v_idx, neg_idx = inputs
user_input = paddle.squeeze(
paddle.to_tensor(batch_data[0].numpy().astype('int64')
.reshape(-1, 1)),
axis=1)
item_input = paddle.squeeze(
paddle.to_tensor(batch_data[1].numpy().astype('int64')
.reshape(-1, 1)),
axis=1)
neg_item_input = paddle.squeeze(
paddle.to_tensor(batch_data[2].numpy().astype('int64')
.reshape(-1, 1)),
axis=1)
return [user_input, item_input, neg_item_input]
# define loss function
def create_loss(self, outputs):
user_emb, pos_item_emb, neg_item_emb, ss_loss = outputs
score = paddle.sum(paddle.multiply(user_emb, pos_item_emb),
1) - paddle.sum(
paddle.multiply(user_emb, neg_item_emb), 1)
rec_loss = -paddle.sum(paddle.log(F.sigmoid(score) + 10e-8))
ss_loss = ss_loss * 0.01
loss = rec_loss + ss_loss
return loss, rec_loss, ss_loss
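# Spelled out, the loss assembled above is (added note):
# loss = -sum(log(sigmoid(u . i_pos - u . i_neg) + 1e-7)) + 0.01 * ss_loss,
# a BPR-style pairwise ranking term (note 10e-8 == 1e-7) plus the model's
# ss_loss output scaled by 0.01.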
# define optimizer
def create_optimizer(self, dy_model, config):
lr = config.get("hyper_parameters.optimizer.learning_rate", 0.001)
optimizer = paddle.optimizer.Adam(
learning_rate=lr, parameters=dy_model.parameters())
return optimizer
# define metrics such as auc/acc
# multi-task need to define multi metric
def create_metrics(self):
metrics_list_name = []
metrics_list = []
return metrics_list, metrics_list_name
# construct train forward phase
def train_forward(self, dy_model, metrics_list, batch_data, config):
inputs = self.create_feeds(batch_data)
prediction = dy_model.forward(inputs)
loss, rec_loss, ss_loss = self.create_loss(prediction)
# update metrics
print_dict = {"loss": loss, "rec_loss": rec_loss}
return loss, metrics_list, print_dict
def infer_forward(self, dy_model, metrics_list, batch_data, config):
inputs = self.create_feeds(batch_data)
prediction = dy_model.forward(inputs)
return metrics_list, prediction
| 35.553191 | 78 | 0.652603 | 446 | 3,342 | 4.692825 | 0.349776 | 0.0387 | 0.026278 | 0.0344 | 0.270903 | 0.233636 | 0.204969 | 0.204969 | 0.142379 | 0.099379 | 0 | 0.015323 | 0.257929 | 3,342 | 93 | 79 | 35.935484 | 0.828629 | 0.252543 | 0 | 0.188679 | 0 | 0 | 0.048081 | 0.037172 | 0 | 0 | 0 | 0 | 0 | 1 | 0.132075 | false | 0 | 0.09434 | 0 | 0.377358 | 0.037736 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a23594ef3019daa813ac02c09e1a7df7edaa1d3a | 16,748 | py | Python | evaluate.py | doanmanhduy0210/cosface | 5daaab3ae4f5b45cd92ebcbe94d9a4f2c01bee89 | [
"MIT"
] | 13 | 2020-03-30T13:05:24.000Z | 2022-01-03T09:12:15.000Z | evaluate.py | doanmanhduy0210/cosface | 5daaab3ae4f5b45cd92ebcbe94d9a4f2c01bee89 | [
"MIT"
] | 11 | 2020-01-28T22:59:31.000Z | 2022-03-11T23:59:38.000Z | evaluate.py | doanmanhduy0210/cosface | 5daaab3ae4f5b45cd92ebcbe94d9a4f2c01bee89 | [
"MIT"
] | 4 | 2020-05-18T10:31:13.000Z | 2021-11-04T14:11:40.000Z | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import sys
from PIL import Image
import torch
from torch.utils import data
import numpy as np
from torchvision import transforms as T
import torchvision
from sklearn import metrics
from scipy.optimize import brentq
from scipy import interpolate
import argparse
from evaluate_helpers import *
from helpers import get_model
from pdb import set_trace as bp
"""
EXAMPLE:
python3 evaluate.py \
--model_path ./pth/IR_50_MODEL_arcface_ms1celeb_epoch90_lfw9962.pth \
--model_type IR_50 \
--num_workers 8 \
--batch_size 100
"""
class EvaluateDataset(data.Dataset):
def __init__(self, paths, actual_issame, input_size):
self.paths = paths
self.actual_issame = actual_issame
self.nrof_embeddings = len(self.actual_issame)*2 # nrof_pairs * nrof_images_per_pair
self.labels_array = np.arange(0,self.nrof_embeddings)
normalize = T.Normalize(mean=[0.5], std=[0.5])
self.transforms = T.Compose([
T.Resize(input_size),
T.ToTensor(),
normalize
])
def __getitem__(self, index):
img_path = self.paths[index]
img = Image.open(img_path)
data = img.convert('RGB')
data = self.transforms(data)
label = self.labels_array[index]
return data.float(), label
def __len__(self):
return len(self.paths)
def evaluate_forward_pass(model, lfw_loader, lfw_dataset, embedding_size, device, lfw_nrof_folds, distance_metric, subtract_mean):
nrof_images = lfw_dataset.nrof_embeddings
emb_array = np.zeros((nrof_images, embedding_size))
lab_array = np.zeros((nrof_images,))
with torch.no_grad():
for i, (data, label) in enumerate(lfw_loader):
data, label = data.to(device), label.to(device)
feats = model(data)
emb = feats.cpu().numpy()
lab = label.detach().cpu().numpy()
lab_array[lab] = lab
emb_array[lab, :] = emb
if i % 10 == 9:
print('.', end='')
sys.stdout.flush()
print('')
embeddings = emb_array
# np.save('embeddings.npy', embeddings)
# embeddings = np.load('embeddings.npy')
# np.save('embeddings_casia.npy', embeddings)
# embeddings = np.load('embeddings_casia.npy')
assert np.array_equal(lab_array, np.arange(nrof_images))==True, 'Wrong labels used for evaluation, possibly caused by training examples left in the input pipeline'
tpr, fpr, accuracy, val, val_std, far = evaluate(embeddings, lfw_dataset.actual_issame, nrof_folds=lfw_nrof_folds, distance_metric=distance_metric, subtract_mean=subtract_mean)
return tpr, fpr, accuracy, val, val_std, far
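# Note: `emb_array[lab, :] = emb` scatters every batch back to its global row
# index, so the embedding matrix ends up ordered exactly like `lfw_dataset.paths`;
# rows (2k, 2k+1) hold the two images of pair k and actual_issame[k] says whether
# they show the same identity. A minimal, hypothetical check for pair 0 (plain
# L2 distance, not necessarily the metric `evaluate` uses internally):
#
#   d0 = np.linalg.norm(emb_array[0] - emb_array[1])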
#-------------------------------------------------------------
# LFW
def get_paths_issame_LFW(lfw_dir):
lfw_images_dir = lfw_dir + '/images'
lfw_pairs = lfw_dir + '/pairs_LFW.txt'
# Read the file containing the pairs used for testing
pairs = read_pairs(os.path.expanduser(lfw_pairs))
# Get the paths for the corresponding images
paths, actual_issame = get_paths(os.path.expanduser(lfw_images_dir), pairs)
return paths, actual_issame
#-------------------------------------------------------------
# CPLFW
def get_paths_issame_CPLFW(cplfw_dir):
cplfw_images_dir = cplfw_dir + '/images'
cplfw_pairs = cplfw_dir + '/pairs_CPLFW.txt'
return get_paths_issame_ca_or_cp_lfw(cplfw_images_dir, cplfw_pairs)
# CALFW
def get_paths_issame_CALFW(calfw_dir):
calfw_images_dir = calfw_dir + '/images'
calfw_pairs = calfw_dir + '/pairs_CALFW.txt'
return get_paths_issame_ca_or_cp_lfw(calfw_images_dir, calfw_pairs)
def get_paths_issame_ca_or_cp_lfw(lfw_dir, lfw_pairs):
pairs = []
with open(lfw_pairs, 'r') as f:
for line in f.readlines()[0:]:
pair = line.strip().split()
pairs.append(pair)
arr = np.array(pairs)
paths = []
actual_issame = []
for count, person in enumerate(arr, 1): # Start counting from 1
if count % 2 == 0:
first_in_pair = arr[count-2]
second_in_pair = person
dir = os.path.expanduser(lfw_dir)
path1 = os.path.join(dir, first_in_pair[0])
path2 = os.path.join(dir, second_in_pair[0])
paths.append(path1)
paths.append(path2)
if first_in_pair[1] != '0':
actual_issame.append(True)
else:
actual_issame.append(False)
return paths, actual_issame
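# The CALFW/CPLFW pairs files are assumed to list one image per line as
# "<image_name> <flag>", with consecutive lines forming a pair and a flag of
# '0' on the first line of a pair marking a negative (different identity) pair.
# Illustrative, made-up excerpt and the values it would produce:
#
#   Abel_Pacheco_0001.jpg 1
#   Abel_Pacheco_0002.jpg 1
#   Abel_Pacheco_0003.jpg 0
#   Boris_Becker_0001.jpg 0
#
#   paths         -> [<dir>/Abel_Pacheco_0001.jpg, <dir>/Abel_Pacheco_0002.jpg, ...]
#   actual_issame -> [True, False]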
#-------------------------------------------------------------
# CFP_FF and CFP_FP
def get_paths_issame_CFP(cfp_dir, type='FF'):
pairs_list_F = cfp_dir + '/Pair_list_F.txt'
pairs_list_P = cfp_dir + '/Pair_list_P.txt'
path_hash_F = {}
with open(pairs_list_F, 'r') as f:
for line in f.readlines()[0:]:
pair = line.strip().split()
path_hash_F[pair[0]] = cfp_dir + '/' + pair[1]
path_hash_P = {}
with open(pairs_list_P, 'r') as f:
for line in f.readlines()[0:]:
pair = line.strip().split()
path_hash_P[pair[0]] = cfp_dir + '/' + pair[1]
paths = []
actual_issame = []
if type == 'FF':
root_FF_or_FP = cfp_dir + '/Split/FF'
else:
root_FF_or_FP = cfp_dir + '/Split/FP'
for subdir, _, files in os.walk(root_FF_or_FP):
for file in files:
filepath = os.path.join(subdir, file)
pairs_arr = parse_dif_same_file(filepath)
for pair in pairs_arr:
first = path_hash_F[pair[0]]
if type == 'FF':
second = path_hash_F[pair[1]]
else:
second = path_hash_P[pair[1]]
paths.append(first)
paths.append(second)
if file == 'diff.txt':
actual_issame.append(False)
else:
actual_issame.append(True)
return paths, actual_issame
def parse_dif_same_file(filepath):
pairs_arr = []
with open(filepath, 'r') as f:
for line in f.readlines()[0:]:
pair = line.strip().split(',')
pairs_arr.append(pair)
return pairs_arr
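# Assumed CFP on-disk layout, inferred from the parsing above rather than from
# the official dataset documentation:
#
#   <cfp_dir>/Pair_list_F.txt      lines "<index> <relative frontal image path>"
#   <cfp_dir>/Pair_list_P.txt      lines "<index> <relative profile image path>"
#   <cfp_dir>/Split/FF/<fold>/same.txt and diff.txt, each line "<idx1>,<idx2>"
#   <cfp_dir>/Split/FP/<fold>/...  same structure for frontal-profile pairs
#
# Pairs read from same.txt are labelled True, pairs from diff.txt False.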
#-------------------------------------------------------------
def get_evaluate_dataset_and_loader(root_dir, type='LFW', num_workers=2, input_size=[112, 112], batch_size=100):
######## dataset setup
if type == 'CALFW':
paths, actual_issame = get_paths_issame_CALFW(root_dir)
elif type == 'CPLFW':
paths, actual_issame = get_paths_issame_CPLFW(root_dir)
elif type == 'CFP_FF':
paths, actual_issame = get_paths_issame_CFP(root_dir, type='FF')
elif type == 'CFP_FP':
paths, actual_issame = get_paths_issame_CFP(root_dir, type='FP')
else:
paths, actual_issame = get_paths_issame_LFW(root_dir)
dataset = EvaluateDataset(paths=paths, actual_issame=actual_issame, input_size=input_size)
loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=False, num_workers=num_workers)
return dataset, loader
def print_evaluate_result(type, tpr, fpr, accuracy, val, val_std, far):
print("=" * 60)
print("Validation TYPE: {}".format(type))
print('Accuracy: %2.5f+-%2.5f' % (np.mean(accuracy), np.std(accuracy)))
print('Validation rate: %2.5f+-%2.5f @ FAR=%2.5f' % (val, val_std, far))
auc = metrics.auc(fpr, tpr)
print('Area Under Curve (AUC): %1.3f' % auc)
# eer = brentq(lambda x: 1. - x - interpolate.interp1d(fpr, tpr)(x), 0., 1.)
# print('Equal Error Rate (EER): %1.3f' % eer)
print("=" * 60)
def main(ARGS):
    if ARGS.model_path is None:
raise AssertionError("Path should not be None")
######### distance_metric = 1 #### if CenterLoss = 0, If Cosface = 1
####### Device setup
use_cuda = torch.cuda.is_available()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
####### Model setup
print("Use CUDA: " + str(use_cuda))
print('Model type: %s' % ARGS.model_type)
model = get_model(ARGS.model_type, ARGS.input_size)
if use_cuda:
model.load_state_dict(torch.load(ARGS.model_path))
else:
model.load_state_dict(torch.load(ARGS.model_path, map_location='cpu'))
model.to(device)
embedding_size = 512
model.eval()
##########################################################################################
#### Evaluate LFW Example
type='LFW'
root_dir='./data/lfw_112'
dataset, loader = get_evaluate_dataset_and_loader(root_dir=root_dir,
type=type,
num_workers=ARGS.num_workers,
input_size=[112, 112],
batch_size=ARGS.batch_size)
    print('Running forward pass on {} images'.format(type))
tpr, fpr, accuracy, val, val_std, far = evaluate_forward_pass(model,
loader,
dataset,
embedding_size,
device,
lfw_nrof_folds=10,
distance_metric=1,
subtract_mean=False)
print_evaluate_result(type, tpr, fpr, accuracy, val, val_std, far)
#### End of Evaluate LFW Example
##########################################################################################
##########################################################################################
### Evaluate CALFW Example
type='CALFW'
root_dir='./data/calfw_112'
dataset, loader = get_evaluate_dataset_and_loader(root_dir=root_dir,
type=type,
num_workers=ARGS.num_workers,
input_size=[112, 112],
batch_size=ARGS.batch_size)
    print('Running forward pass on {} images'.format(type))
tpr, fpr, accuracy, val, val_std, far = evaluate_forward_pass(model,
loader,
dataset,
embedding_size,
device,
lfw_nrof_folds=10,
distance_metric=1,
subtract_mean=False)
print_evaluate_result(type, tpr, fpr, accuracy, val, val_std, far)
#### End of Evaluate CALFW Example
##########################################################################################
##########################################################################################
### Evaluate CPLFW Example
type='CPLFW'
root_dir='./data/cplfw_112'
dataset, loader = get_evaluate_dataset_and_loader(root_dir=root_dir,
type=type,
num_workers=ARGS.num_workers,
input_size=[112, 112],
batch_size=ARGS.batch_size)
    print('Running forward pass on {} images'.format(type))
tpr, fpr, accuracy, val, val_std, far = evaluate_forward_pass(model,
loader,
dataset,
embedding_size,
device,
lfw_nrof_folds=10,
distance_metric=1,
subtract_mean=False)
print_evaluate_result(type, tpr, fpr, accuracy, val, val_std, far)
#### End of Evaluate CPLFW Example
##########################################################################################
##########################################################################################
### Evaluate CFP_FF Example
type='CFP_FF'
root_dir='./data/cfp_112'
dataset, loader = get_evaluate_dataset_and_loader(root_dir=root_dir,
type=type,
num_workers=ARGS.num_workers,
input_size=[112, 112],
batch_size=ARGS.batch_size)
    print('Running forward pass on {} images'.format(type))
tpr, fpr, accuracy, val, val_std, far = evaluate_forward_pass(model,
loader,
dataset,
embedding_size,
device,
lfw_nrof_folds=10,
distance_metric=1,
subtract_mean=False)
print_evaluate_result(type, tpr, fpr, accuracy, val, val_std, far)
#### End of Evaluate CFP_FF Example
##########################################################################################
##########################################################################################
### Evaluate CFP_FP Example
type='CFP_FP'
root_dir='./data/cfp_112'
dataset, loader = get_evaluate_dataset_and_loader(root_dir=root_dir,
type=type,
num_workers=ARGS.num_workers,
input_size=[112, 112],
batch_size=ARGS.batch_size)
    print('Running forward pass on {} images'.format(type))
tpr, fpr, accuracy, val, val_std, far = evaluate_forward_pass(model,
loader,
dataset,
embedding_size,
device,
lfw_nrof_folds=10,
distance_metric=1,
subtract_mean=False)
print_evaluate_result(type, tpr, fpr, accuracy, val, val_std, far)
#### End of Evaluate CFP_FP Example
##########################################################################################
def parse_arguments(argv):
parser = argparse.ArgumentParser()
parser.add_argument('--model_path', type=str, help='Model weights.', default=None)
parser.add_argument('--model_type', type=str, help='Model type to use for training.', default='IR_50')# support: ['ResNet_50', 'ResNet_101', 'ResNet_152', 'IR_50', 'IR_101', 'IR_152', 'IR_SE_50', 'IR_SE_101', 'IR_SE_152']
parser.add_argument('--input_size', type=str, help='support: [112, 112] and [224, 224]', default=[112, 112])
parser.add_argument('--num_workers', type=int, help='Number of threads to use for data pipeline.', default=8)
    parser.add_argument('--batch_size', type=int, help='Batch size to use while validating the model.', default=100)
return parser.parse_args(argv)
if __name__ == '__main__':
main(parse_arguments(sys.argv[1:]))
| 40.259615 | 225 | 0.47743 | 1,701 | 16,748 | 4.429159 | 0.153439 | 0.035041 | 0.016724 | 0.022299 | 0.389567 | 0.361694 | 0.331564 | 0.309928 | 0.305415 | 0.286302 | 0 | 0.018574 | 0.36673 | 16,748 | 415 | 226 | 40.356627 | 0.691778 | 0.071232 | 0 | 0.387454 | 0 | 0 | 0.065768 | 0 | 0 | 0 | 0 | 0 | 0.00738 | 1 | 0.051661 | false | 0.04059 | 0.066421 | 0.00369 | 0.162362 | 0.081181 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a23777174805cacd4a99fef00e571e09029a49ce | 3,550 | py | Python | intent_detection/batchers_test.py | EdwardBurgin/polyai-models | 20cc6ff59396a2884a748509526a022347a7340c | [
"Apache-2.0"
] | 2 | 2020-10-16T11:30:59.000Z | 2021-03-28T04:51:25.000Z | intent_detection/batchers_test.py | EdwardBurgin/polyai-models | 20cc6ff59396a2884a748509526a022347a7340c | [
"Apache-2.0"
] | null | null | null | intent_detection/batchers_test.py | EdwardBurgin/polyai-models | 20cc6ff59396a2884a748509526a022347a7340c | [
"Apache-2.0"
] | 3 | 2020-10-13T09:13:01.000Z | 2021-05-31T04:20:18.000Z | """Tests for batchers.py
Copyright PolyAI Limited.
"""
import unittest
import numpy as np
from intent_detection import batchers
class SamplingBatcherTest(unittest.TestCase):
def test_rejects_bad_input_labels_not_array(self):
with self.assertRaises(ValueError):
batchers.SamplingBatcher(
examples=np.arange(10),
labels=list(np.arange(10)),
batch_size=1
)
def test_rejects_bad_input_examples_not_array(self):
with self.assertRaises(ValueError):
batchers.SamplingBatcher(
examples=list(np.arange(10)),
labels=np.arange(10),
batch_size=1
)
def test_rejects_bad_input_mismatched_dims(self):
with self.assertRaises(ValueError):
batchers.SamplingBatcher(
examples=np.arange(10),
labels=np.arange(9),
batch_size=1
)
def _test_batcher(self, batch_size, steps, sample_distribution=None):
np.random.seed(0)
examples = np.arange(20)
labels = np.concatenate((
np.full(
shape=(5,),
fill_value=0
),
np.full(
shape=(5,),
fill_value=1
),
np.full(
shape=(5,),
fill_value=3
),
np.full(
shape=(5,),
fill_value=4
)
))
batcher = batchers.SamplingBatcher(
examples=examples,
labels=labels,
batch_size=batch_size,
sample_distribution=sample_distribution,
)
seen_labels = set()
for counter, (ex, lab) in enumerate(batcher):
if counter == steps:
break
self.assertEqual(ex.shape, lab.shape)
self.assertEqual(lab.size, batch_size)
for x in ex:
self.assertTrue(x in examples)
for y in lab:
seen_labels.add(y)
self.assertTrue(y in labels)
for x, y in zip(ex, lab):
self.assertEqual(labels[x], y)
self.assertEqual(steps, counter)
return seen_labels
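    # The tests above pin down the SamplingBatcher contract: it is an iterable
    # over (examples, labels) batches of exactly `batch_size`, where
    # labels_batch[i] == labels[examples_batch[i]], and `sample_distribution`
    # re-weights (or zeroes out) how often each class is drawn. A hedged usage
    # sketch, assuming only the constructor signature exercised here:
    #
    #   batcher = batchers.SamplingBatcher(
    #       examples=np.arange(20), labels=labels, batch_size=8,
    #       sample_distribution={0: 1.0, 1: 2.0})
    #   examples_batch, labels_batch = next(iter(batcher))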
def test_batcher_less_classes_than_size(self):
self._test_batcher(
batch_size=20,
steps=5,
)
def test_batcher_more_classes_than_size(self):
self._test_batcher(
batch_size=3,
steps=20,
)
def test_batcher_zero_weight(self):
seen_labels = self._test_batcher(
batch_size=3,
steps=20,
sample_distribution={0: 1., 1: 2, 3: 3, 4: 0}
)
self.assertNotIn(4, seen_labels)
def test_batcher_bad_label_in_distribution(self):
with self.assertRaisesRegex(
ValueError,
"label 999 in sample distribution does not exist"):
self._test_batcher(
batch_size=3,
steps=20,
sample_distribution={0: 1., 1: 2, 3: 3, 999: 0}
)
def test_batcher_bad_weight_in_distribution(self):
with self.assertRaisesRegex(
ValueError,
"weight -1 for label 4 is negative"):
self._test_batcher(
batch_size=3,
steps=20,
sample_distribution={0: 1., 1: 2, 3: 3, 4: -1}
)
if __name__ == "__main__":
unittest.main()
| 27.734375 | 73 | 0.523099 | 375 | 3,550 | 4.717333 | 0.242667 | 0.061051 | 0.047484 | 0.056529 | 0.50424 | 0.455059 | 0.394008 | 0.334087 | 0.329565 | 0.28095 | 0 | 0.03318 | 0.388732 | 3,550 | 127 | 74 | 27.952756 | 0.782028 | 0.013521 | 0 | 0.365385 | 0 | 0 | 0.025179 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 1 | 0.086538 | false | 0 | 0.028846 | 0 | 0.134615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a23b68cd9c875b20269122ab2a42fbafd602046e | 5,391 | py | Python | src/management/commands/restore.py | t47io/django-Daslab-server | 872cc3d2b586bbe64e01cd1fe0fdbb31618c33b8 | [
"BSD-Source-Code"
] | 1 | 2016-10-27T13:58:42.000Z | 2016-10-27T13:58:42.000Z | src/management/commands/restore.py | t47io/django-Daslab-server | 872cc3d2b586bbe64e01cd1fe0fdbb31618c33b8 | [
"BSD-Source-Code"
] | null | null | null | src/management/commands/restore.py | t47io/django-Daslab-server | 872cc3d2b586bbe64e01cd1fe0fdbb31618c33b8 | [
"BSD-Source-Code"
] | null | null | null | import os
import shutil
import subprocess
import sys
import tarfile
import time
import traceback
from django.core.management.base import BaseCommand
from src.settings import *
from src.console import send_notify_slack, send_error_slack
class Command(BaseCommand):
help = 'Restores MySQL database, static files, Apache2 settings and config settings from local backup/ folder.'
def add_arguments(self, parser):
        parser.add_argument('--item', nargs='+', type=str, help='List of backup items, choose from (\'apache\', \'config\', \'mysql\', \'static\')')
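    # Hypothetical invocations (argument name taken from add_arguments above,
    # the management command name from this file's location under
    # management/commands/):
    #
    #   python manage.py restore                       # restore everything
    #   python manage.py restore --item mysql static   # only MySQL dump + static files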
def handle(self, *args, **options):
t0 = time.time()
self.stdout.write('%s:\t%s' % (time.ctime(), ' '.join(sys.argv)))
if options['item']:
is_apache = 'apache' in options['item']
is_config = 'config' in options['item']
is_mysql = 'mysql' in options['item']
is_static = 'static' in options['item']
else:
is_apache, is_config, is_mysql, is_static = True, True, True, True
flag = False
if is_mysql:
t = time.time()
self.stdout.write("#1: Restoring MySQL database...")
try:
tarfile.open('%s/backup/backup_mysql.tgz' % MEDIA_ROOT, 'r:gz').extractall()
subprocess.check_call('cat %s/backup/backup_mysql | mysql -u %s -p%s %s' % (MEDIA_ROOT, env.db()['USER'], env.db()['PASSWORD'], env.db()['NAME']), shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
os.remove('%s/backup/backup_mysql' % MEDIA_ROOT)
except Exception:
send_error_slack(traceback.format_exc(), 'Restore MySQL Database', ' '.join(sys.argv), 'log_cron_restore.log')
flag = True
else:
self.stdout.write(" \033[92mSUCCESS\033[0m: \033[94mMySQL\033[0m database overwritten.")
self.stdout.write("Time elapsed: %.1f s." % (time.time() - t))
if is_static:
t = time.time()
self.stdout.write("#2: Restoring static files...")
try:
shutil.rmtree('%s/backup/data' % MEDIA_ROOT)
tarfile.open('%s/backup/backup_static.tgz' % MEDIA_ROOT, 'r:gz').extractall()
shutil.rmtree('%s/data' % MEDIA_ROOT)
shutil.move('%s/backup/data' % MEDIA_ROOT, '%s' % MEDIA_ROOT)
shutil.rmtree('%s/backup/data' % MEDIA_ROOT)
if (not DEBUG):
subprocess.check_call('%s/util_chmod.sh' % MEDIA_ROOT, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
except Exception:
send_error_slack(traceback.format_exc(), 'Restore Static Files', ' '.join(sys.argv), 'log_cron_restore.log')
flag = True
else:
self.stdout.write(" \033[92mSUCCESS\033[0m: \033[94mstatic\033[0m files overwritten.")
self.stdout.write("Time elapsed: %.1f s." % (time.time() - t))
if is_apache:
t = time.time()
self.stdout.write("#3: Restoring apache2 settings...")
try:
shutil.rmtree('%s/backup/apache2' % MEDIA_ROOT)
tarfile.open('%s/backup/backup_apache.tgz' % MEDIA_ROOT, 'r:gz').extractall()
shutil.rmtree('/etc/apache2')
shutil.move('%s/backup/apache2' % MEDIA_ROOT, '/etc/apache2')
shutil.rmtree('%s/backup/apache2' % MEDIA_ROOT)
if (not DEBUG):
subprocess.check_call('apache2ctl restart', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
except Exception:
send_error_slack(traceback.format_exc(), 'Restore Apache2 Settings', ' '.join(sys.argv), 'log_cron_restore.log')
flag = True
else:
self.stdout.write(" \033[92mSUCCESS\033[0m: \033[94mapache2\033[0m settings overwritten.")
self.stdout.write("Time elapsed: %.1f s.\n" % (time.time() - t))
if is_config:
t = time.time()
self.stdout.write("#4: Restoring config settings...")
try:
shutil.rmtree('%s/backup/config' % MEDIA_ROOT)
tarfile.open('%s/backup/backup_config.tgz' % MEDIA_ROOT, 'r:gz').extractall()
shutil.rmtree('%s/config' % MEDIA_ROOT)
shutil.move('%s/backup/config' % MEDIA_ROOT, '%s/config' % MEDIA_ROOT)
shutil.rmtree('%s/backup/config' % MEDIA_ROOT)
if (not DEBUG):
subprocess.check_call('apache2ctl restart', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
except Exception:
send_error_slack(traceback.format_exc(), 'Restore Config Settings', ' '.join(sys.argv), 'log_cron_restore.log')
flag = True
else:
self.stdout.write(" \033[92mSUCCESS\033[0m: \033[94mconfig\033[0m settings overwritten.")
self.stdout.write("Time elapsed: %.1f s.\n" % (time.time() - t))
if flag:
self.stdout.write("Finished with errors!")
self.stdout.write("Time elapsed: %.1f s." % (time.time() - t0))
sys.exit(1)
else:
self.stdout.write("All done successfully!")
self.stdout.write("Time elapsed: %.1f s." % (time.time() - t0))
| 47.289474 | 224 | 0.575032 | 638 | 5,391 | 4.750784 | 0.205329 | 0.059386 | 0.084131 | 0.037611 | 0.623887 | 0.585615 | 0.515341 | 0.43385 | 0.39228 | 0.34939 | 0 | 0.025006 | 0.280467 | 5,391 | 113 | 225 | 47.707965 | 0.756381 | 0 | 0 | 0.40625 | 0 | 0 | 0.261733 | 0.056947 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020833 | false | 0.010417 | 0.104167 | 0 | 0.145833 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a23e8cf3c69a0c1e4a7bfd5e5a1bf427171f1d94 | 13,746 | py | Python | models.py | omidsakhi/tpu_glow | c4a878775156729bd48254574fc9b0d3e032c117 | [
"MIT"
] | 5 | 2018-11-10T14:59:05.000Z | 2022-01-09T11:13:42.000Z | models.py | omidsakhi/tpu_glow | c4a878775156729bd48254574fc9b0d3e032c117 | [
"MIT"
] | null | null | null | models.py | omidsakhi/tpu_glow | c4a878775156729bd48254574fc9b0d3e032c117 | [
"MIT"
] | null | null | null | import ops
import numpy as np
import tensorflow as tf
import memory_saving_gradients
def codec(cfg):
def encoder(z, objective):
eps = []
for i in range(cfg.n_levels):
z, objective = revnet2d(i, str(i), z, objective, cfg)
if i < cfg.n_levels-1:
z, objective, _eps = split2d(
"pool"+str(i), z, objective=objective, cfg=cfg)
eps.append(_eps)
return z, objective, eps
def decoder(z, eps=[None]*cfg.n_levels, eps_std=None):
for i in reversed(range(cfg.n_levels)):
if i < cfg.n_levels-1:
z = split2d_reverse(
"pool"+str(i), z, cfg=cfg, eps=eps[i], eps_std=eps_std)
z, _ = revnet2d(i, str(i), z, 0, cfg, reverse=True)
return z
return encoder, decoder
def revnet2d(index, name, z, logdet, cfg, reverse=False):
if cfg.depth == -1:
depth = cfg.depth_dict[index]
else:
depth = cfg.depth
with tf.variable_scope(name):
if not reverse:
for i in range(depth):
if cfg.memory_saving_gradients:
z, logdet = checkpoint(z, logdet)
z, logdet = revnet2d_step(str(i), z, logdet, cfg, reverse)
if cfg.memory_saving_gradients:
z, logdet = checkpoint(z, logdet)
else:
for i in reversed(range(depth)):
z, logdet = revnet2d_step(str(i), z, logdet, cfg, reverse)
return z, logdet
# Simpler, new version
def revnet2d_step(name, z, logdet, cfg, reverse):
shape = ops.int_shape(z)
n_z = shape[3]
assert n_z % 2 == 0
with tf.variable_scope(name):
if not reverse:
z, logdet = ops.scale_bias("actnorm", z, logdet=logdet)
z = ops.reverse_features("reverse", z)
#z, logdet = invertible_1x1_conv("invconv", z, logdet)
z1 = z[:, :, :, :n_z // 2]
z2 = z[:, :, :, n_z // 2:]
h = f_("f1", z1, cfg, n_z)
shift = h[:, :, :, 0::2]
logs = h[:, :, :, 1::2] / 4.0
z2 += shift
z2 *= tf.exp(logs)
logdet += tf.reduce_sum(logs, axis=[1, 2, 3])
z = tf.concat([z1, z2], 3)
else:
z1 = z[:, :, :, :n_z // 2]
z2 = z[:, :, :, n_z // 2:]
h = f_("f1", z1, cfg, n_z)
shift = h[:, :, :, 0::2]
logs = h[:, :, :, 1::2] / 4.0
z2 *= tf.exp(-1.0 * logs)
z2 -= shift
logdet -= tf.reduce_sum(logs, axis=[1, 2, 3])
z = tf.concat([z1, z2], 3)
z = ops.reverse_features("reverse", z)
#z, logdet = invertible_1x1_conv("invconv", z, logdet, reverse=True)
z, logdet = ops.scale_bias(
"actnorm", z, logdet=logdet, reverse=True)
return z, logdet
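# Note on the coupling step above: z2 is translated by `shift` and rescaled
# elementwise by exp(logs), so the Jacobian of the transform is diagonal and its
# log-determinant is exactly sum(logs) over the spatial and channel axes; that
# is why `logdet += tf.reduce_sum(logs, ...)` on the forward pass (and the
# matching subtraction on the reverse pass) keeps the change-of-variables
# objective exact while the step stays trivially invertible.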
def f_(name, h, cfg, n_out=None):
width = cfg.width
if width == -1:
assert(int(h.get_shape()[1]) == int(h.get_shape()[2]))
img_width = int(h.get_shape()[2])
width = cfg.width_dict[img_width]
n_out = n_out or int(h.get_shape()[3])
with tf.variable_scope(name):
h = ops._conv2d("l_1", h, width, [3, 3], 1, relu=True)
h = ops._conv2d("l_2", h, 2 * width, [1, 1], 1, relu=True)
h = ops._conv2d("l_3", h, n_out, [3, 3],
1, relu=False, init_zero=True, pn=True)
return h
def split2d(name, z, cfg, objective=0.):
with tf.variable_scope(name):
n_z = ops.int_shape(z)[3]
z1 = z[:, :, :, :n_z // 2]
z2 = z[:, :, :, n_z // 2:]
pz = split2d_prior(z1, cfg)
objective += pz.logp(z2)
z1 = ops.squeeze2d(z1)
eps = pz.get_eps(z2)
return z1, objective, eps
def split2d_reverse(name, z, eps, eps_std, cfg):
with tf.variable_scope(name):
z1 = ops.unsqueeze2d(z)
pz = split2d_prior(z1, cfg)
if eps is not None:
# Already sampled eps
z2 = pz.sample_eps(eps)
elif eps_std is not None:
# Sample with given eps_std
z2 = pz.sample_eps(pz.eps * tf.reshape(eps_std, [-1, 1, 1, 1]))
else:
# Sample normally
z2 = pz.sample(1.0)
z = tf.concat([z1, z2], 3)
return z
def split2d_prior(z, cfg):
n_z = int(z.get_shape()[3])
h = f_('split2d_prior', z, cfg, n_out=n_z * 2)
mean = h[:, :, :, 0::2]
logs = h[:, :, :, 1::2]
return ops.gaussian_diag(mean, logs)
# Invertible 1x1 conv
def invertible_1x1_conv(name, z, logdet, reverse=False):
if True: # Set to "False" to use the LU-decomposed version
with tf.variable_scope(name):
shape = ops.int_shape(z)
C = shape[3]
w = tf.get_variable("w", shape=(
C, C), dtype=tf.float32, initializer=tf.initializers.orthogonal())
dlogdet = tf.cast(tf.log(abs(tf.matrix_determinant(
tf.cast(w, 'float64')))), 'float32') * shape[1]*shape[2]
if not reverse:
w = tf.reshape(w, [1, 1, C, C])
z = tf.nn.conv2d(z, w, [1, 1, 1, 1],
'SAME', data_format='NHWC')
logdet += dlogdet
return z, logdet
else:
w = tf.matrix_inverse(w)
w = tf.reshape(w, [1, 1, C, C])
z = tf.nn.conv2d(z, w, [1, 1, 1, 1],
'SAME', data_format='NHWC')
logdet -= dlogdet
return z, logdet
else:
# LU-decomposed version
shape = ops.int_shape(z)
with tf.variable_scope(name):
dtype = 'float64'
# Random orthogonal matrix:
import scipy
np_w = scipy.linalg.qr(np.random.randn(shape[3], shape[3]))[
0].astype('float32')
np_p, np_l, np_u = scipy.linalg.lu(np_w) # pylint: disable=E1101
np_s = np.diag(np_u)
np_sign_s = np.sign(np_s)
np_log_s = np.log(abs(np_s))
np_u = np.triu(np_u, k=1)
p = tf.get_variable("P", initializer=np_p, trainable=False)
l = tf.get_variable("L", initializer=np_l)
sign_s = tf.get_variable(
"sign_S", initializer=np_sign_s, trainable=False)
log_s = tf.get_variable("log_S", initializer=np_log_s)
# S = tf.get_variable("S", initializer=np_s)
u = tf.get_variable("U", initializer=np_u)
p = tf.cast(p, dtype)
l = tf.cast(l, dtype)
sign_s = tf.cast(sign_s, dtype)
log_s = tf.cast(log_s, dtype)
u = tf.cast(u, dtype)
w_shape = [shape[3], shape[3]]
l_mask = np.tril(np.ones(w_shape, dtype=dtype), -1)
l = l * l_mask + tf.eye(*w_shape, dtype=dtype)
u = u * np.transpose(l_mask) + tf.diag(sign_s * tf.exp(log_s))
w = tf.matmul(p, tf.matmul(l, u))
if True:
u_inv = tf.matrix_inverse(u)
l_inv = tf.matrix_inverse(l)
p_inv = tf.matrix_inverse(p)
w_inv = tf.matmul(u_inv, tf.matmul(l_inv, p_inv))
else:
w_inv = tf.matrix_inverse(w)
w = tf.cast(w, tf.float32)
w_inv = tf.cast(w_inv, tf.float32)
log_s = tf.cast(log_s, tf.float32)
if not reverse:
w = tf.reshape(w, [1, 1] + w_shape)
z = tf.nn.conv2d(z, w, [1, 1, 1, 1],
'SAME', data_format='NHWC')
logdet += tf.reduce_sum(log_s) * (shape[1]*shape[2])
return z, logdet
else:
w_inv = tf.reshape(w_inv, [1, 1]+w_shape)
z = tf.nn.conv2d(
z, w_inv, [1, 1, 1, 1], 'SAME', data_format='NHWC')
logdet -= tf.reduce_sum(log_s) * (shape[1]*shape[2])
return z, logdet
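# For the LU-decomposed branch above (currently unreachable behind `if True`),
# W = P L U with unit-diagonal L and a diagonal of sign_s * exp(log_s) on U, so
# log|det W| = sum(log_s); that is why only `log_s` contributes to the logdet
# update while P and sign_S can stay non-trainable.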
def checkpoint(z, logdet):
zshape = ops.int_shape(z)
z = tf.reshape(z, [-1, zshape[1]*zshape[2]*zshape[3]])
logdet = tf.reshape(logdet, [-1, 1])
combined = tf.concat([z, logdet], axis=1)
tf.add_to_collection('checkpoints', combined)
logdet = combined[:, -1]
z = tf.reshape(combined[:, :-1], [-1, zshape[1], zshape[2], zshape[3]])
return z, logdet
def prior(name, y_onehot, cfg):
with tf.variable_scope(name):
cfg.top_shape = [1, 1, 768]
n_z = cfg.top_shape[-1]
h = tf.zeros([tf.shape(y_onehot)[0]]+cfg.top_shape[:2]+[2*n_z])
if cfg.learntop:
assert(False)
h = ops._conv2d('p', h, 2*n_z, 3, 1, True)
if cfg.ycond:
assert(False)
h += tf.reshape(ops.dense("y_emb", y_onehot, 2*n_z,
True, init_zero=True), [-1, 1, 1, 2 * n_z])
pz = ops.gaussian_diag(h[:, :, :, :n_z], h[:, :, :, n_z:])
def logp(z1):
objective = pz.logp(z1)
return objective
def sample(eps=None, eps_std=None, temp=1.0):
if eps is not None:
# Already sampled eps. Don't use eps_std
z = pz.sample_eps(eps)
elif eps_std is not None:
# Sample with given eps_std
z = pz.sample_eps(pz.eps * tf.reshape(eps_std, [-1, 1, 1, 1]))
else:
# Sample normally
z = pz.sample(temp)
return z
def eps(z1):
return pz.get_eps(z1)
return logp, sample, eps
class model(object):
cfg = None
encoder = None
decoder = None
def __init__(self, cfg):
self.cfg = cfg
self.encoder, self.decoder = codec(cfg)
self.cfg.n_bins = 2. ** self.cfg.n_bits_x
def _f_loss(self, x, y):
with tf.variable_scope('model', reuse=tf.AUTO_REUSE):
y_onehot = tf.cast(tf.one_hot(y, self.cfg.n_y, 1, 0), 'float32')
# Discrete -> Continuous
objective = tf.zeros_like(x, dtype='float32')[:, 0, 0, 0]
z = x # + tf.random_uniform(tf.shape(x), 0, 1./self.cfg.n_bins)
objective += - np.log(self.cfg.n_bins) * \
np.prod(ops.int_shape(z)[1:])
# Encode
z = ops.squeeze2d(z, 2) # > 16x16x12
z, objective, eps = self.encoder(z, objective)
# Prior
self.cfg.top_shape = ops.int_shape(z)[1:]
logp, _, _ = prior("prior", y_onehot, self.cfg)
objective += logp(z)
# Generative loss
nobj = - objective
bits_x = nobj / (np.log(2.) * int(x.get_shape()[1]) * int(
x.get_shape()[2]) * int(x.get_shape()[3])) # bits per subpixel
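            # bits_x is the negative log-likelihood converted to bits and
            # normalised per subpixel. Hedged worked example: for hypothetical
            # 16x16x3 inputs, nobj = 1000 nats gives
            # 1000 / (ln 2 * 16 * 16 * 3) ~= 1.88 bits per subpixel.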
# Predictive loss
if self.cfg.weight_y > 0 and self.cfg.ycond:
assert(False)
# Classification loss
h_y = tf.reduce_mean(z, axis=[1, 2])
y_logits = ops.dense(
"classifier", h_y, self.cfg.n_y, has_bn=False)
bits_y = tf.nn.softmax_cross_entropy_with_logits_v2(
labels=y_onehot, logits=y_logits) / np.log(2.)
# Classification accuracy
y_predicted = tf.argmax(y_logits, 1, output_type=tf.int32)
classification_error = 1 - \
tf.cast(tf.equal(y_predicted, y), tf.float32)
else:
bits_y = tf.zeros_like(bits_x)
classification_error = tf.ones_like(bits_x)
return bits_x, bits_y, classification_error, eps
def f_loss(self, x, y):
bits_x, bits_y, pred_loss, eps = self._f_loss(x, y)
local_loss = bits_x + self.cfg.weight_y * bits_y
return local_loss, eps
# === Sampling function
def sample(self, y, temp=1.0, eps=None, post_process=True):
if eps is None:
eps = [None]*self.cfg.n_levels
with tf.variable_scope('model', reuse=tf.AUTO_REUSE):
y_onehot = tf.cast(tf.one_hot(y, self.cfg.n_y, 1, 0), 'float32')
_, sample, _ = prior("prior", y_onehot, self.cfg)
z = sample(temp=temp)
x = self.decoder(z, eps)
x = ops.unsqueeze2d(x, 2) # 8x8x12 -> 16x16x3
if post_process:
x = self.postprocess(x)
return x
def postprocess(self, x):
return tf.cast(tf.clip_by_value(tf.floor((x + .5)*self.cfg.n_bins)*(256./self.cfg.n_bins), 0, 255), 'uint8')
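    # postprocess maps the network's roughly [-0.5, 0.5) float output back to
    # uint8 pixels: for example with n_bits_x = 5 (so n_bins = 32), x = 0.25
    # becomes floor((0.25 + 0.5) * 32) * (256 / 32) = 24 * 8 = 192.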
def f_encode(self, x, y):
with tf.variable_scope('model', reuse=tf.AUTO_REUSE):
y_onehot = tf.cast(tf.one_hot(y, self.cfg.n_y, 1, 0), 'float32')
# Discrete -> Continuous
objective = tf.zeros_like(x, dtype='float32')[:, 0, 0, 0]
z = x + tf.random_uniform(tf.shape(x), 0, 1. / self.cfg.n_bins)
objective += - np.log(self.cfg.n_bins) * \
np.prod(ops.int_shape(z)[1:])
# Encode
z = ops.squeeze2d(z, 2) # > 16x16x12
z, objective, eps = self.encoder(z, objective)
# Prior
self.cfg.top_shape = ops.int_shape(z)[1:]
logp, _, _eps = prior("prior", y_onehot, self.cfg)
objective += logp(z)
eps.append(_eps(z))
return eps
def f_decode(self, y, eps):
with tf.variable_scope('model', reuse=tf.AUTO_REUSE):
y_onehot = tf.cast(tf.one_hot(y, self.cfg.n_y, 1, 0), 'float32')
_, sample, _ = prior("prior", y_onehot, self.cfg)
z = sample(eps=eps[-1])
z = self.decoder(z, eps=eps[:-1])
z = ops.unsqueeze2d(z, 2) # 8x8x12 -> 16x16x3
x = self.postprocess(z)
return x
| 34.109181 | 116 | 0.508293 | 1,950 | 13,746 | 3.424103 | 0.120513 | 0.008687 | 0.00629 | 0.034147 | 0.437472 | 0.382357 | 0.3548 | 0.336229 | 0.316459 | 0.288153 | 0 | 0.036814 | 0.347883 | 13,746 | 402 | 117 | 34.19403 | 0.708054 | 0.055725 | 0 | 0.353535 | 0 | 0 | 0.019387 | 0 | 0 | 0 | 0 | 0 | 0.016835 | 1 | 0.074074 | false | 0 | 0.016835 | 0.006734 | 0.185185 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2431696fc397c0d9ddccbdeac6219fd3f93aee4 | 528 | py | Python | src/dataset/info/NorbInfo.py | dhruvtapasvi/implementation | 964980f431517f4548a87172a05107cdf700fb84 | [
"MIT"
] | 1 | 2020-05-25T10:24:58.000Z | 2020-05-25T10:24:58.000Z | src/dataset/info/NorbInfo.py | dhruvtapasvi/implementation | 964980f431517f4548a87172a05107cdf700fb84 | [
"MIT"
] | 1 | 2017-12-18T02:16:44.000Z | 2017-12-18T02:16:44.000Z | src/dataset/info/NorbInfo.py | dhruvtapasvi/implementation | 964980f431517f4548a87172a05107cdf700fb84 | [
"MIT"
] | null | null | null | from enum import Enum
from config import routes
NORB_RANGE = (0, 255)
NORB_VALIDATION_INSTANCES = 7
NORB_TEST_INSTANCES = 9
NORB_IMAGE_DIMENSIONS = (96, 96)
NORB_LABEL_DIMENSIONS = (6,)
NORB_ELEVATION_NAME = "NORB: ELEVATION ANGLE"
NORB_ELEVATION_FACTORS = (0, 6, 3, 8)
NORB_AZIMUTH_NAME = "NORB: AZIMUTH ANGLE"
NORB_AZIMUTH_FACTORS = (0, 8, 4, 12)
class NorbLabelIndex(Enum):
STEREO = 0
CATEGORY = 1
INSTANCE = 2
ELEVATION = 3
AZIMUTH = 4
LIGHTING = 5
NORB_HOME = routes.RESOURCE_ROUTE + "/norb"
| 19.555556 | 45 | 0.712121 | 76 | 528 | 4.697368 | 0.526316 | 0.109244 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061176 | 0.195076 | 528 | 26 | 46 | 20.307692 | 0.778824 | 0 | 0 | 0 | 0 | 0 | 0.085227 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.105263 | 0 | 0.473684 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2449b6f0313ab7787b714121ca645c033eb91e3 | 2,832 | py | Python | oneview_redfish_toolkit/api/thermal.py | lhsvobodaj/oneview-redfish-toolkit | fc5c2055bb0710f706d45e9fe28039d9f0752c20 | [
"Apache-2.0"
] | null | null | null | oneview_redfish_toolkit/api/thermal.py | lhsvobodaj/oneview-redfish-toolkit | fc5c2055bb0710f706d45e9fe28039d9f0752c20 | [
"Apache-2.0"
] | null | null | null | oneview_redfish_toolkit/api/thermal.py | lhsvobodaj/oneview-redfish-toolkit | fc5c2055bb0710f706d45e9fe28039d9f0752c20 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright (2017) Hewlett Packard Enterprise Development LP
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
from oneview_redfish_toolkit.api.redfish_json_validator \
import RedfishJsonValidator
class Thermal(RedfishJsonValidator):
"""Creates a Thermal Redfish dict
Populates self.redfish with Thermal data retrieved from OneView
"""
SCHEMA_NAME = 'Thermal'
def __init__(self, utilization, uuid, name):
"""Thermal constructor
Populates self.redfish with the contents of utilization or
topology dict.
Args:
utilization: Hardware utilization or topology dict from OneView
"""
super().__init__(self.SCHEMA_NAME)
self.redfish["@odata.type"] = "#Thermal.v1_3_0.Thermal"
self.redfish["Id"] = uuid
self.redfish["Name"] = name + " Thermal"
self.redfish["Temperatures"] = list()
self.redfish["Temperatures"].append(collections.OrderedDict())
self.redfish["Temperatures"][0]["@odata.id"] = \
"/redfish/v1/Chassis/" + uuid + "/Thermal/Temperatures/0"
self.redfish["Temperatures"][0]["MemberId"] = "0"
self.redfish["Temperatures"][0]["Name"] = "AmbientTemperature"
self.redfish["Temperatures"][0]["Status"] = collections.OrderedDict()
self.redfish["Temperatures"][0]["Status"]["State"] = "Enabled"
self.redfish["Temperatures"][0]["Status"]["Health"] = "OK"
self.redfish["Temperatures"][0]["PhysicalContext"] = "Intake"
        if name != 'Rack':
self.redfish["Temperatures"][0]["ReadingCelsius"] = \
utilization["metricList"][0]["metricSamples"][0][1]
self.redfish["Temperatures"][0]["UpperThresholdCritical"] = \
utilization["metricList"][0]["metricCapacity"]
self.redfish["Temperatures"][0]["MinReadingRangeTemp"] = 10
self.redfish["Temperatures"][0]["MaxReadingRangeTemp"] = 35
else:
self.redfish["Temperatures"][0]["ReadingCelsius"] = \
utilization["peakTemp"]
self.redfish["@odata.context"] = \
"/redfish/v1/$metadata#Thermal.Thermal"
self.redfish["@odata.id"] = "/redfish/v1/Chassis/" + uuid + "/Thermal"
self._validate()
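        # Abridged shape of the payload built above (values illustrative):
        #
        #   {
        #     "@odata.type": "#Thermal.v1_3_0.Thermal",
        #     "Id": "<uuid>",
        #     "Name": "<name> Thermal",
        #     "Temperatures": [{
        #       "Name": "AmbientTemperature",
        #       "Status": {"State": "Enabled", "Health": "OK"},
        #       "PhysicalContext": "Intake",
        #       "ReadingCelsius": 21.0
        #     }]
        #   }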
| 41.043478 | 79 | 0.644774 | 306 | 2,832 | 5.911765 | 0.441176 | 0.127695 | 0.177999 | 0.159204 | 0.206744 | 0.14262 | 0.03759 | 0 | 0 | 0 | 0 | 0.016712 | 0.21822 | 2,832 | 68 | 80 | 41.647059 | 0.800361 | 0.305438 | 0 | 0.058824 | 0 | 0 | 0.320342 | 0.05606 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029412 | false | 0 | 0.058824 | 0 | 0.147059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a244b33458e34e37c0e52ea963be0b63be2e34a5 | 317 | py | Python | arrays/remove_element.py | ahcode0919/python-ds-algorithms | 0d617b78c50b6c18da40d9fa101438749bfc82e1 | [
"MIT"
] | null | null | null | arrays/remove_element.py | ahcode0919/python-ds-algorithms | 0d617b78c50b6c18da40d9fa101438749bfc82e1 | [
"MIT"
] | null | null | null | arrays/remove_element.py | ahcode0919/python-ds-algorithms | 0d617b78c50b6c18da40d9fa101438749bfc82e1 | [
"MIT"
] | 3 | 2020-10-07T20:24:45.000Z | 2020-12-16T04:53:19.000Z | from typing import List
def remove_element(nums: List[int], val: int) -> int:
index = 0
count = 0
length = len(nums)
while index < length:
if nums[index] == val:
count += 1
else:
nums[index - count] = nums[index]
index += 1
return length - count
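# Worked example of the in-place counting approach above: every element equal
# to `val` increases `count`, every other element is shifted `count` slots to
# the left, so the first `length - count` slots end up holding the kept values.
#
#   nums = [3, 2, 2, 3]
#   k = remove_element(nums, 3)   # k == 2, nums[:k] == [2, 2]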
| 19.8125 | 53 | 0.529968 | 40 | 317 | 4.175 | 0.5 | 0.161677 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019802 | 0.362776 | 317 | 15 | 54 | 21.133333 | 0.806931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a249ea906559790d34bcae5f07d40c8d3fd57c3f | 8,186 | py | Python | pypiprivate/storage.py | helpshift/pypiprivate | 63e78b1d8f9ae67084104d89fa737bd06f8369e8 | [
"MIT"
] | 39 | 2018-10-18T16:56:24.000Z | 2022-01-26T23:00:22.000Z | pypiprivate/storage.py | helpshift/pypiprivate | 63e78b1d8f9ae67084104d89fa737bd06f8369e8 | [
"MIT"
] | 13 | 2018-11-02T10:05:27.000Z | 2020-10-19T17:11:00.000Z | pypiprivate/storage.py | helpshift/pypiprivate | 63e78b1d8f9ae67084104d89fa737bd06f8369e8 | [
"MIT"
] | 16 | 2019-03-19T09:39:03.000Z | 2020-06-24T08:07:49.000Z | import os
import errno
import shutil
import mimetypes
import logging
import boto3
from botocore.exceptions import ClientError
logger = logging.getLogger(__name__)
def guess_content_type(path, default='application/octet-stream'):
ctype = mimetypes.guess_type(path)[0] or default
logger.debug('Guessed ctype of "{0}": "{1}"'.format(path, ctype))
return ctype
class StorageException(Exception):
pass
class PathNotFound(StorageException):
pass
class Storage(object):
def join_path(self, *args):
raise NotImplementedError
def listdir(self, path):
raise NotImplementedError
def path_exists(self, path):
raise NotImplementedError
def put_contents(self, contents, dest, sync=False):
raise NotImplementedError
def put_file(self, src, dest, sync=False):
raise NotImplementedError
class LocalFileSystemStorage(Storage):
def __init__(self, base_path):
self.base_path = base_path
@classmethod
def from_config(cls, config):
storage_config = config.storage_config
return cls(storage_config['base_path'])
def join_path(self, *args):
return os.path.join(*args)
def listdir(self, path):
path = self.join_path(self.base_path, path)
try:
return os.listdir(path)
except OSError as e:
if e.errno == errno.ENOENT:
raise PathNotFound('Path {0} not found'.format(path))
raise e
def path_exists(self, path):
path = self.join_path(self.base_path, path)
return os.path.exists(path)
def ensure_dir(self, path):
if not os.path.exists(path):
os.makedirs(path)
def put_contents(self, contents, dest, sync=False):
dest_path = self.join_path(self.base_path, dest)
self.ensure_dir(os.path.dirname(dest_path))
with open(dest_path, 'w') as f:
f.write(contents)
# In LocalFileSystemStorage sync makes no sense
return dest_path
def put_file(self, src, dest, sync=False):
dest_path = self.join_path(self.base_path, dest)
self.ensure_dir(os.path.dirname(dest_path))
shutil.copy(src, dest_path)
return dest_path
def __repr__(self):
return (
'<LocalFileSystemStorage(base_path="{0}")>'
).format(self.base_path)
class AWSS3Storage(Storage):
def __init__(self, bucket, acl, creds=None, prefix=None,
endpoint=None, region=None):
if creds:
logger.info('S3 Auth: using explicitly passed credentials')
access_key, secret_key, session_token = creds
session = boto3.Session(aws_access_key_id=access_key,
aws_secret_access_key=secret_key,
aws_session_token=session_token)
else:
logger.info('S3 Auth: using default boto3 methods')
session = boto3.Session()
self.endpoint = endpoint
self.region = region
kwargs = dict()
if endpoint is not None:
kwargs['endpoint_url'] = endpoint
if region is not None:
kwargs['region_name'] = region
self.s3 = s3 = session.resource('s3', **kwargs)
self.bucket = s3.Bucket(bucket)
self.prefix = prefix
self.acl = acl
@classmethod
def from_config(cls, config):
storage_config = config.storage_config
env = config.env
bucket = storage_config['bucket']
prefix = storage_config.get('prefix')
acl = storage_config.get('acl', 'private')
endpoint = storage_config.get('endpoint', None)
region = storage_config.get('region', None)
# Following 2 are the required env vars for s3 auth. If any of
# these are not set, we try using the default boto3 methods
# (same as the ones that AWS CLI and other tools support)
pp_cred_keys = ['PP_S3_ACCESS_KEY', 'PP_S3_SECRET_KEY']
if all([(k in env) for k in pp_cred_keys]):
logger.debug('PP_S3_* env vars found: using them for auth')
creds = (env['PP_S3_ACCESS_KEY'],
env['PP_S3_SECRET_KEY'],
env.get('PP_S3_SESSION_TOKEN', None))
else:
logger.debug((
'PP_S3_* env vars not found: '
'Falling back to default methods supported by boto3'
))
creds = None
return cls(bucket, acl, creds=creds, prefix=prefix,
endpoint=endpoint, region=region)
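    # Configuration consumed above: storage_config needs 'bucket' plus optional
    # 'prefix', 'acl' (default 'private'), 'endpoint' and 'region'; credentials
    # come either from the PP_S3_ACCESS_KEY / PP_S3_SECRET_KEY (and optional
    # PP_S3_SESSION_TOKEN) environment variables or from boto3's default
    # credential lookup chain.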
def join_path(self, *args):
return '/'.join(args)
def prefixed_path(self, path):
parts = []
if self.prefix:
parts.append(self.prefix)
if path != '.':
parts.append(path)
return self.join_path(*parts)
def listdir(self, path):
path = self.prefixed_path(path)
if path != '' and not path.endswith('/'):
s3_prefix = '{0}/'.format(path)
else:
s3_prefix = path
logger.debug('Listing objects prefixed with: {0}'.format(s3_prefix))
client = self.s3.meta.client
paginator = client.get_paginator('list_objects')
response = paginator.paginate(Bucket=self.bucket.name,
Prefix=s3_prefix,
Delimiter='/')
file_objs = [c for c in response.search('Contents') if c]
dir_objs = [cp for cp in response.search('CommonPrefixes') if cp]
# If no objs found, it means the path doesn't exist
if len(file_objs) == len(dir_objs) == 0:
raise PathNotFound('Path {0} not found'.format(s3_prefix))
files = (c['Key'][len(s3_prefix):] for c in file_objs)
files = [f for f in files if f != '']
dirs = [cp['Prefix'][len(s3_prefix):].rstrip('/') for cp in dir_objs]
return files + dirs
def path_exists(self, path):
path = self.prefixed_path(path)
logger.debug('Checking if key exists: {0}'.format(path))
client = self.s3.meta.client
try:
client.head_object(Bucket=self.bucket.name, Key=path)
except ClientError as e:
logger.debug('Handled ClientError: {0}'.format(e))
return False
else:
return True
def put_contents(self, contents, dest, sync=False):
dest_path = self.prefixed_path(dest)
client = self.s3.meta.client
logger.debug('Writing content to s3: {0}'.format(dest_path))
client.put_object(Bucket=self.bucket.name,
Key=dest_path,
Body=contents.encode('utf-8'),
ContentType=guess_content_type(dest),
ACL=self.acl)
if sync:
waiter = client.get_waiter('object_exists')
waiter.wait(Bucket=self.bucket.name, Key=dest_path)
def put_file(self, src, dest, sync=False):
dest_path = self.prefixed_path(dest)
client = self.s3.meta.client
logger.debug('Uploading file to s3: {0} -> {1}'.format(src, dest_path))
with open(src, 'rb') as f:
client.put_object(Bucket=self.bucket.name,
Key=dest_path,
Body=f,
ContentType=guess_content_type(dest),
ACL=self.acl)
if sync:
waiter = client.get_waiter('object_exists')
waiter.wait(Bucket=self.bucket.name, Key=dest_path)
def __repr__(self):
return (
'<AWSS3Storage(bucket="{0}", prefix="{1}")>'
).format(self.bucket.name, self.prefix)
def load_storage(config):
if config.storage == 'local-filesystem':
return LocalFileSystemStorage.from_config(config)
elif config.storage == 'aws-s3':
return AWSS3Storage.from_config(config)
elif config.storage == 'azure':
from pypiprivate.azure import AzureBlobStorage
return AzureBlobStorage.from_config(config)
else:
raise ValueError('Unsupported storage "{0}"'.format(config.storage))
| 34.686441 | 79 | 0.59443 | 991 | 8,186 | 4.753784 | 0.196771 | 0.028869 | 0.017831 | 0.025472 | 0.354914 | 0.306092 | 0.253449 | 0.219062 | 0.204415 | 0.204415 | 0 | 0.009771 | 0.299902 | 8,186 | 235 | 80 | 34.834043 | 0.812249 | 0.032983 | 0 | 0.363158 | 0 | 0 | 0.102023 | 0.011631 | 0 | 0 | 0 | 0 | 0 | 1 | 0.131579 | false | 0.015789 | 0.042105 | 0.021053 | 0.294737 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2537a178443ba3da5ab1aa05208e69d6d6cd88c | 8,981 | py | Python | happinesspackets/messaging/tests/test_views.py | glasnt/happinesspackets | b891e8fe67990e6d1af373bb8272b12a41ca9209 | [
"Apache-2.0"
] | 1 | 2021-01-03T00:59:51.000Z | 2021-01-03T00:59:51.000Z | happinesspackets/messaging/tests/test_views.py | glasnt/happinesspackets | b891e8fe67990e6d1af373bb8272b12a41ca9209 | [
"Apache-2.0"
] | null | null | null | happinesspackets/messaging/tests/test_views.py | glasnt/happinesspackets | b891e8fe67990e6d1af373bb8272b12a41ca9209 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.core import mail
from django.core.urlresolvers import reverse
from django.test import TestCase
from django.utils.crypto import salted_hmac
from .test_models import MessageModelFactory, BlacklistedEmailFactory
from ..models import Message, BLACKLIST_HMAC_SALT, BlacklistedEmail
class StartViewTest(TestCase):
url = reverse('messaging:start')
def test_renders(self):
response = self.client.get(self.url)
self.assertEqual(response.status_code, 200)
class FaqViewTest(TestCase):
url = reverse('messaging:faq')
def test_renders(self):
response = self.client.get(self.url)
self.assertEqual(response.status_code, 200)
class InspirationViewTest(TestCase):
url = reverse('messaging:inspiration')
def test_renders(self):
response = self.client.get(self.url)
self.assertEqual(response.status_code, 200)
class ArchiveViewTest(TestCase):
url = reverse('messaging:archive')
def test_renders(self):
response = self.client.get(self.url)
self.assertEqual(response.status_code, 200)
class BlacklistViewTest(TestCase):
url_name = 'messaging:blacklist_email'
def setUp(self):
self.message = MessageModelFactory()
self.correct_digest = salted_hmac(BLACKLIST_HMAC_SALT, self.message.recipient_email).hexdigest()
self.url_kwargs = {'email': self.message.recipient_email, 'digest': self.correct_digest}
self.url = reverse(self.url_name, kwargs=self.url_kwargs)
def test_renders(self):
response = self.client.get(self.url)
self.assertEqual(response.status_code, 200)
def test_confirm(self):
response = self.client.post(self.url)
self.assertRedirects(response, reverse('messaging:start'))
obj = BlacklistedEmail.objects.get()
self.assertEqual(obj.email, self.message.recipient_email)
self.assertEqual(obj.stripped_email, 'recipientrecipient@null')
def test_validates_digest(self):
self.url_kwargs['email'] = self.message.sender_email
self.url = reverse(self.url_name, kwargs=self.url_kwargs)
response = self.client.get(self.url)
self.assertEqual(response.status_code, 404)
response = self.client.post(self.url)
self.assertEqual(response.status_code, 404)
self.assertFalse(BlacklistedEmail.objects.count())
class SendViewTest(TestCase):
url = reverse('messaging:send')
def setUp(self):
super(SendViewTest, self).setUp()
self.post_data = {
'sender_name': 'sender name',
'sender_email': 'SEN.DER+FOOBAR@erik.io',
'recipient_name': 'recipient name',
'recipient_email': 'recipient@erik.io',
'message': 'message',
'sender_named': True,
'sender_approved_public': True,
'sender_approved_public_named': True,
}
def test_renders(self):
response = self.client.get(self.url)
self.assertEqual(response.status_code, 200)
def test_post_valid(self):
response = self.client.post(self.url, self.post_data)
self.assertRedirects(response, reverse('messaging:sender_confirmation_sent'))
self.assertEqual(len(mail.outbox), 1)
message = Message.objects.get()
self.assertEqual(message.status, Message.STATUS.pending_sender_confirmation)
self.assertEqual(mail.outbox[0].recipients(), [message.sender_email])
self.assertTrue(message.identifier in mail.outbox[0].body)
self.assertTrue(message.sender_email_token in mail.outbox[0].body)
def test_post_invalid_conflicting_publicity(self):
self.post_data['sender_approved_public'] = False
response = self.client.post(self.url, self.post_data)
self.assertEqual(response.status_code, 200)
self.assertEqual(len(response.context['form'].errors), 1)
def test_post_blacklisted_sender(self):
BlacklistedEmailFactory(email='sender@erik.io', stripped_email='sender@erikio')
response = self.client.post(self.url, self.post_data)
self.assertRedirects(response, reverse('messaging:sender_confirmation_sent'))
self.assertEqual(len(mail.outbox), 0)
class MessageSentViewTest(TestCase):
url = reverse('messaging:sender_confirmation_sent')
def test_renders(self):
response = self.client.get(self.url)
self.assertEqual(response.status_code, 200)
class MessageSenderConfirmationView(TestCase):
url_name = 'messaging:sender_confirm'
def setUp(self):
self.message = MessageModelFactory(sender_email_token='a-b-c', status=Message.STATUS.pending_sender_confirmation)
url_kwargs = {'identifier': self.message.identifier, 'token': self.message.sender_email_token}
self.url = reverse(self.url_name, kwargs=url_kwargs)
def test_confirm_anonymous(self):
response = self.client.get(self.url)
self.assertRedirects(response, reverse('messaging:sender_confirmed'))
self.assertEqual(len(mail.outbox), 1)
self.message.refresh_from_db()
self.assertEqual(self.message.status, Message.STATUS.sent)
self.assertEqual(mail.outbox[0].recipients(), [self.message.recipient_email])
self.assertFalse(self.message.sender_name in mail.outbox[0].body)
self.assertFalse(self.message.sender_email in mail.outbox[0].body)
self.assertTrue(self.message.identifier in mail.outbox[0].body)
self.assertTrue(self.message.recipient_email_token in mail.outbox[0].body)
def test_confirm_named(self):
self.message.sender_named = True
self.message.save()
response = self.client.get(self.url)
self.assertRedirects(response, reverse('messaging:sender_confirmed'))
self.assertEqual(len(mail.outbox), 1)
self.message.refresh_from_db()
self.assertTrue(mail.outbox[0].recipients(), [self.message.recipient_email])
self.assertTrue(self.message.sender_name in mail.outbox[0].body)
self.assertTrue(self.message.identifier in mail.outbox[0].body)
def test_bad_token(self):
self.message.sender_email_token = 'o-t-h-e-r'
self.message.recipient_email_token = 'a-b-c'
self.message.save()
response = self.client.get(self.url)
self.assertEqual(response.status_code, 200)
self.assertTrue(response.context['not_found'])
def test_bad_status(self):
self.message.status = Message.STATUS.sent
self.message.save()
response = self.client.get(self.url)
self.assertEqual(response.status_code, 200)
self.assertTrue(response.context['already_confirmed'])
def test_confirm_blacklisted_recipient(self):
BlacklistedEmailFactory(email='recipient@erik.io', stripped_email='recipientrecipient@null')
response = self.client.get(self.url)
self.assertRedirects(response, reverse('messaging:sender_confirmed'))
self.assertEqual(len(mail.outbox), 0)
class MessageSenderConfirmedView(TestCase):
url = reverse('messaging:sender_confirmed')
def test_renders(self):
response = self.client.get(self.url)
self.assertEqual(response.status_code, 200)
class MessageRecipientMessageUpdate(TestCase):
url_name = 'messaging:recipient_message_update'
def setUp(self):
self.message = MessageModelFactory(recipient_email_token='a-b-c', status=Message.STATUS.sent)
url_kwargs = {'identifier': self.message.identifier, 'token': self.message.recipient_email_token}
self.url = reverse(self.url_name, kwargs=url_kwargs)
def test_confirm(self):
response = self.client.get(self.url)
self.assertEqual(response.status_code, 200)
def test_post_valid(self):
self.assertFalse(self.message.recipient_approved_public)
response = self.client.post(self.url, {'recipient_approved_public': True})
self.assertRedirects(response, self.url)
self.message.refresh_from_db()
self.assertTrue(self.message.recipient_approved_public)
def test_post_invalid(self):
self.assertFalse(self.message.recipient_approved_public_named)
response = self.client.post(self.url, {'recipient_approved_public_named': True})
self.assertEqual(response.status_code, 200)
self.message.refresh_from_db()
self.assertFalse(self.message.recipient_approved_public_named)
def test_bad_token(self):
self.message.sender_email_token = 'a-b-c'
self.message.recipient_email_token = 'o-t-h-e-r'
self.message.save()
response = self.client.get(self.url)
self.assertEqual(response.status_code, 404)
def test_bad_status(self):
self.message.status = Message.STATUS.pending_sender_confirmation
self.message.save()
response = self.client.get(self.url)
self.assertEqual(response.status_code, 404)
| 38.711207 | 121 | 0.703708 | 1,076 | 8,981 | 5.704461 | 0.11803 | 0.071685 | 0.070381 | 0.058162 | 0.693874 | 0.655425 | 0.607038 | 0.562561 | 0.499185 | 0.419681 | 0 | 0.009398 | 0.182496 | 8,981 | 231 | 122 | 38.878788 | 0.826614 | 0.002338 | 0 | 0.494253 | 0 | 0 | 0.09578 | 0.056486 | 0 | 0 | 0 | 0 | 0.298851 | 1 | 0.155172 | false | 0 | 0.04023 | 0 | 0.310345 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a253aca18897f1102f03662b429488c3367511cf | 13,609 | py | Python | scripts/FoxNetwork.py | hprice99/ENGG4811_code | 0651216e261594008261bacc6af86bfa14cbd04d | [
"MIT"
] | null | null | null | scripts/FoxNetwork.py | hprice99/ENGG4811_code | 0651216e261594008261bacc6af86bfa14cbd04d | [
"MIT"
] | null | null | null | scripts/FoxNetwork.py | hprice99/ENGG4811_code | 0651216e261594008261bacc6af86bfa14cbd04d | [
"MIT"
] | null | null | null | import numpy as np
import math
import os
from numpy.matrixlib.defmatrix import matrix
from FoxPacket import *
from MulticastConfig import *
from Firmware import *
class FoxNetwork:
def __init__(self, *, networkRows, networkCols, resultNodeCoord, \
romNodeCoord, \
totalMatrixSize, foxNetworkStages, multicastGroupBits, \
multicastCoordBits, \
readyFlagBits, resultFlagBits, matrixTypeBits, matrixCoordBits, \
foxFirmware, resultFirmware, A=None, B=None, \
useMatrixInitFile=True, multicastAvailable, useMulticast, multicastGroupNodes, \
multicastNetworkRows, multicastNetworkCols, \
multicastFifoDepth, \
foxNodeFifos, resultNodeFifos, \
resultUartFifoDepth, \
hdlFolder=None, firmwareFolder=None):
# Entire network details
self.networkRows = networkRows
self.networkCols = networkCols
self.networkNodes = self.networkRows * self.networkCols
self.resultNodeCoord = resultNodeCoord
self.romNodeCoord = romNodeCoord
# Fox algorithm network details
self.foxNetworkStages = foxNetworkStages
self.foxNetworkNodes = (self.foxNetworkStages ** 2)
coordBits = math.ceil(math.log2(max(self.networkRows, self.networkCols)))
matrixElementBits = 32
self.packetFormat = FoxPacket(coordBits=coordBits, multicastCoordBits=multicastCoordBits, multicastGroupBits=multicastGroupBits, readyFlagBits=readyFlagBits, resultFlagBits=resultFlagBits, matrixTypeBits=matrixTypeBits, matrixCoordBits=matrixCoordBits, matrixElementBits=matrixElementBits)
# Matrix details
self.totalMatrixSize = totalMatrixSize
self.totalMatrixElements = (self.totalMatrixSize ** 2)
self.foxMatrixSize = int(self.totalMatrixSize / self.foxNetworkStages)
self.foxMatrixElements = (self.foxMatrixSize ** 2)
self.foxFifoDepth = 2 * self.foxMatrixElements
self.resultFifoDepth = self.totalMatrixElements
self.foxNodeFifos = foxNodeFifos
self.resultNodeFifos = resultNodeFifos
self.resultUartFifoDepth = resultUartFifoDepth
# Do not set A or B by default
self.A = A
self.B = B
self.useMatrixInitFile = useMatrixInitFile
if A is not None and B is not None:
assert A.shape[0] == self.totalMatrixSize, "A matrix dimensions do not match totalMatrixSize"
assert B.shape[0] == self.totalMatrixSize, "B matrix dimensions do not match totalMatrixSize"
print(self.A)
print(self.B)
self.foxFirmware = foxFirmware
self.resultFirmware = resultFirmware
self.useMulticast = useMulticast
if multicastAvailable == True:
if self.useMulticast == True:
self.multicastConfig = MulticastConfig(useMulticast=useMulticast, \
multicastGroupNodes=multicastGroupNodes, \
multicastNetworkRows=multicastNetworkRows, \
multicastNetworkCols=multicastNetworkCols, \
multicastFifoDepth=multicastFifoDepth)
else:
self.multicastConfig = MulticastConfig(useMulticast=useMulticast, \
multicastGroupNodes=0, \
multicastNetworkRows=0, \
multicastNetworkCols=0, \
multicastFifoDepth=0)
else:
self.multicastConfig = None
if hdlFolder is None:
raise Exception("HDL folder not given")
self.hdlFolder = hdlFolder
if firmwareFolder is None:
raise Exception("Firmware folder not given")
self.firmwareFolder = firmwareFolder
'''
Convert a node's (x, y) coordinates into a node number
'''
def node_coord_to_node_number(self, coord):
nodeNumber = coord['y'] * self.networkRows + coord['x']
return nodeNumber
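# Illustrative example: with networkRows = 4, the coordinate {'x': 2, 'y': 1} maps to
# node number 1*4 + 2 = 6 (row-major numbering across the network).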
'''
Convert a node's number to (x, y) coordinates
'''
def node_number_to_node_coord(self, nodeNumber):
nodeCoords = {}
nodeCoords['x'] = nodeNumber % self.foxNetworkStages
nodeCoords['y'] = nodeNumber // self.foxNetworkStages
return nodeCoords
'''
Set the A and B matrices that will be multiplied using Fox's algorithm
'''
def set_matrices(self, *, A, B):
assert A.shape[0] == self.totalMatrixSize, "A matrix dimensions do not match totalMatrixSize"
assert B.shape[0] == self.totalMatrixSize, "B matrix dimensions do not match totalMatrixSize"
self.A = A
self.B = B
'''
Encode a matrix to packets and write to file
'''
def write_matrix_to_file(self, *, matrixFile, nodeCoord, multicastCoord, matrixType, matrix):
readyFlag = 0
resultFlag = 0
packets = self.packetFormat.encode_matrix(destCoord=nodeCoord, multicastCoord=multicastCoord, \
resultFlag=resultFlag, readyFlag=readyFlag, matrixType=matrixType, matrix=matrix)
# Append each packet to a file
file = open(matrixFile, "a")
for packet in packets:
file.write(packet)
file.close()
return packets
'''
Encode a matrix to packet
'''
def encode_matrix(self, *, nodeCoord, multicastCoord, matrixType, matrix):
readyFlag = 0
resultFlag = 0
packets = self.packetFormat.encode_matrix(destCoord=nodeCoord, multicastCoord=multicastCoord, \
resultFlag=resultFlag, readyFlag=readyFlag, matrixType=matrixType, matrix=matrix)
return packets
'''
Write a list of packets to a file
'''
def write_packets_to_file(self, *, packets, fileName):
# Append each packet to a file
file = open(fileName, "a")
for packet in packets:
file.write(packet)
file.close()
'''
Pad a matrix file with 0 entries
'''
def pad_matrix_file(self, *, matrixFile, nodeCoord, paddingRequired):
padding = []
multicastCoord = {'x' : 0, 'y' : 0}
for _ in range(paddingRequired):
padding.append(self.packetFormat.create_matrix_packet(destCoord=nodeCoord, multicastCoord=multicastCoord, \
readyFlag=0, resultFlag=0, matrixType=MatrixTypes.A, matrixCoord={'x' : 0, 'y' : 0}, matrixElement=0))
# Append padding to a file
file = open(matrixFile, "a")
for p in padding:
file.write(p)
file.close()
'''
Create memory initialisation files for each node and each matrix
'''
def create_matrix_init_files(self):
if self.useMatrixInitFile == False:
print("Matrix init file not used")
return
if self.A is None or self.B is None:
print("Matrices not initialised")
return
import os
scriptLocation = os.path.realpath(__file__)
scriptDirectory = os.path.dirname(scriptLocation)
initFilePrefix = "{directory}/../{hdlFolder}/memory/".format(directory=scriptDirectory, hdlFolder=self.hdlFolder)
initFileSuffix = ".mif"
aPackets = []
bPackets = []
packets = []
combinedFileName = initFilePrefix + "combined" + initFileSuffix
if os.path.exists(combinedFileName):
os.remove(combinedFileName)
# Loop through the nodes
for nodeNumber in range(self.foxNetworkNodes):
elementsWritten = 0
# Delete the file before writing to it
matrixFileName = initFilePrefix + "node{nodeNumber}".format(nodeNumber=nodeNumber) + initFileSuffix
if os.path.exists(matrixFileName):
os.remove(matrixFileName)
else:
# Make the memory directory
if not os.path.isdir("{directory}/../{hdlFolder}/memory".format(directory=scriptDirectory, hdlFolder=self.hdlFolder)):
os.mkdir("{directory}/../{hdlFolder}/memory".format(directory=scriptDirectory, hdlFolder=self.hdlFolder))
nodeCoord = self.node_number_to_node_coord(nodeNumber)
# Split the matrices
nodeMatrixXStart = int(nodeCoord['x'] * self.foxMatrixSize)
nodeMatrixXEnd = int(nodeCoord['x'] * self.foxMatrixSize + self.foxMatrixSize)
nodeMatrixYStart = int(nodeCoord['y'] * self.foxMatrixSize)
nodeMatrixYEnd = int(nodeCoord['y'] * self.foxMatrixSize + self.foxMatrixSize)
# Write A
nodeA = self.A[nodeMatrixYStart:nodeMatrixYEnd, nodeMatrixXStart:nodeMatrixXEnd]
multicastCoord = {'x' : 0, 'y' : 0}
matrixType = MatrixTypes.A
# Encode the matrix and write to file
newAPackets = self.encode_matrix(nodeCoord=nodeCoord, multicastCoord=multicastCoord, matrixType=matrixType, matrix=nodeA)
aPackets += newAPackets
elementsWritten += np.size(nodeA)
# Write B
nodeB = self.B[nodeMatrixYStart:nodeMatrixYEnd, nodeMatrixXStart:nodeMatrixXEnd]
multicastCoord = {'x' : 0, 'y' : 0}
matrixType = MatrixTypes.B
# Encode the matrix and write to file
newBPackets = self.encode_matrix(nodeCoord=nodeCoord, multicastCoord=multicastCoord, matrixType=matrixType, matrix=nodeB)
bPackets += newBPackets
elementsWritten += np.size(nodeB)
packets = aPackets + bPackets
self.write_packets_to_file(packets=packets, fileName=combinedFileName)
'''
Generate VHDL package containing network parameters
'''
def write_network_header_file(self, fileName="fox_defs.vhd"):
from jinja2 import Environment, FileSystemLoader
import os
scriptLocation = os.path.realpath(__file__)
scriptDirectory = os.path.dirname(scriptLocation)
fileLoader = FileSystemLoader('{directory}/templates'.format(directory=scriptDirectory))
env = Environment(loader=fileLoader, trim_blocks=True, lstrip_blocks=True)
template = env.get_template('fox_defs.vhd')
output = template.render(foxNetwork=self)
# Write output to file
headerFileName = '{directory}/../{hdlFolder}/src/{fileName}'.format(directory=scriptDirectory, hdlFolder=self.hdlFolder, fileName=fileName)
headerFile = open(headerFileName, 'w')
headerFile.write(output)
headerFile.close()
'''
Generate VHDL package containing packet format
'''
def write_packet_header_file(self, fileName="packet_defs.vhd"):
self.packetFormat.write_header_file(hdlFolder=self.hdlFolder, fileName=fileName)
'''
Generate VHDL package containing multicast configuration
'''
def write_multicast_header_file(self, fileName="multicast_defs.vhd"):
if self.multicastConfig is not None:
self.multicastConfig.write_header_file(hdlFolder=self.hdlFolder, fileName=fileName)
'''
Write matrix config files
'''
def write_matrix_config_file(self, vhdlFileName="matrix_config.vhd", \
cFileName="matrix_config.h"):
from jinja2 import Environment, FileSystemLoader
import os
scriptLocation = os.path.realpath(__file__)
scriptDirectory = os.path.dirname(scriptLocation)
fileLoader = FileSystemLoader('{directory}/templates'.format(directory=scriptDirectory))
env = Environment(loader=fileLoader, trim_blocks=True, lstrip_blocks=True)
vhdlTemplate = env.get_template('matrix_config.vhd')
vhdlOutput = vhdlTemplate.render(foxNetwork=self)
# Write output to file
vhdlHeaderFileName = '{directory}/../{hdlFolder}/src/{fileName}'.format(directory=scriptDirectory, hdlFolder=self.hdlFolder, fileName=vhdlFileName)
vhdlHeaderFile = open(vhdlHeaderFileName, 'w')
vhdlHeaderFile.write(vhdlOutput)
vhdlHeaderFile.close()
cTemplate = env.get_template('matrix_config.h')
cOutput = cTemplate.render(foxNetwork=self)
# Write output to file
cHeaderFileName = '{directory}/../{firmwareFolder}/{fileName}'.format(directory=scriptDirectory, firmwareFolder=self.firmwareFolder, fileName=cFileName)
cHeaderFile = open(cHeaderFileName, 'w')
cHeaderFile.write(cOutput)
cHeaderFile.close()
'''
Write firmware config files
'''
def write_firmware_config_file(self, vhdlFileName="firmware_config.vhd"):
from jinja2 import Environment, FileSystemLoader
import os
scriptLocation = os.path.realpath(__file__)
scriptDirectory = os.path.dirname(scriptLocation)
fileLoader = FileSystemLoader('{directory}/templates'.format(directory=scriptDirectory))
env = Environment(loader=fileLoader, trim_blocks=True, lstrip_blocks=True)
vhdlTemplate = env.get_template('firmware_config.vhd')
vhdlOutput = vhdlTemplate.render(foxNetwork=self)
# Write output to file
vhdlHeaderFileName = '{directory}/../{hdlFolder}/src/{fileName}'.format(directory=scriptDirectory, hdlFolder=self.hdlFolder, fileName=vhdlFileName)
vhdlHeaderFile = open(vhdlHeaderFileName, 'w')
vhdlHeaderFile.write(vhdlOutput)
vhdlHeaderFile.close()
| 38.335211 | 297 | 0.644426 | 1,248 | 13,609 | 6.951923 | 0.18109 | 0.007607 | 0.034578 | 0.026971 | 0.432112 | 0.4003 | 0.379783 | 0.364569 | 0.34071 | 0.314431 | 0 | 0.003514 | 0.268205 | 13,609 | 354 | 298 | 38.443503 | 0.867657 | 0.033507 | 0 | 0.327014 | 0 | 0 | 0.066704 | 0.026552 | 0 | 0 | 0 | 0 | 0.018957 | 1 | 0.066351 | false | 0 | 0.066351 | 0 | 0.165877 | 0.018957 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2542d2cec3bf93aa7616b84f3dce07cd3caab66 | 16,499 | py | Python | scripts/data_modification/flatten_dataport.py | jmontp/Prosthetic_Adaptation | 0933a6eb830de744fa84ecbca70838f4e9e7340a | [
"MIT"
] | null | null | null | scripts/data_modification/flatten_dataport.py | jmontp/Prosthetic_Adaptation | 0933a6eb830de744fa84ecbca70838f4e9e7340a | [
"MIT"
] | null | null | null | scripts/data_modification/flatten_dataport.py | jmontp/Prosthetic_Adaptation | 0933a6eb830de744fa84ecbca70838f4e9e7340a | [
"MIT"
] | 1 | 2021-02-07T16:12:40.000Z | 2021-02-07T16:12:40.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Apr 14 22:17:32 2021
@author: jmontp
"""
# %%
from os import remove
import h5py
import numpy as np
import pandas as pd
from pandas import DataFrame
from functools import lru_cache
def get_column_name(column_string_list, num_last_keys):
filter_strings = ['right', 'left']
filtered_list = filter(
lambda x: not x in filter_strings, column_string_list)
column_string = '_'.join(filtered_list)
return column_string
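# Illustrative example: a path tail such as ['jointangles', 'left', 'ankle', 'x'] becomes the
# column name 'jointangles_ankle_x'; the 'left'/'right' tokens are dropped so both legs share a column.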
def get_end_points(d, out_dict, parent_key='', sep='/', num_last_keys=4):
for k, v in d.items():
new_key = parent_key + sep + k if parent_key else k
if isinstance(v, h5py._hl.group.Group):
get_end_points(v, out_dict, new_key, sep=sep)
# Where the magic happens when you reach an end point
else:
column_string_list = (parent_key+sep+k).split(sep)[-num_last_keys:]
column_string = get_column_name(column_string_list, num_last_keys)
if (column_string not in out_dict):
out_dict[column_string] = [[new_key], [v]]
else:
out_dict[column_string][0].append(new_key)
out_dict[column_string][1].append(v)
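# out_dict ends up mapping each flattened column name to a pair of parallel lists:
# [list of HDF5 paths, list of the corresponding h5py datasets].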
#Debug where the bad strides are in the datasets
def determine_zero_data_strides():
pass
#%%
file_name = '../local-storage/InclineExperiment.mat'
h5py_file = h5py.File(file_name)['Gaitcycle']
# Iterate through all the subjects, make a file per subject to keep it RAM bound
for subject in h5py_file.keys():
if subject != 'AB05':
continue
# Initialize variables for the subject
data = h5py_file[subject]
save_name = '../local-storage/test/dataport_flattened_partial_{}.parquet'.format(
subject)
# Store all the end points
columns_to_endpoint_list = {}
get_end_points(data, columns_to_endpoint_list)
for joint in columns_to_endpoint_list.keys():
joint_data_trial_name = zip(*columns_to_endpoint_list[joint])
total_datasets = len(columns_to_endpoint_list[joint][1])
sum = 0
bad_strides = 0
for num_dataset,trial_name_dataset in enumerate(joint_data_trial_name):
trial_name, dataset = trial_name_dataset
total_rows = dataset.shape[0]
for row_number, row in enumerate(dataset):
num_digits = np.count_nonzero(row)
if(num_digits == 0):
bad_strides += 1
if(row_number+1 != total_rows and "forceplate" not in joint):
print(subject + " " + joint + " dataset " + trial_name + " " + str(num_dataset) + "/" + str(total_datasets) + " bad row: " + str(row_number + 1) + "/" + str(total_rows))
sum += 1
#print(subject + " " + joint + " total bad strides = " + str(bad_strides) + " Total strides: " + str(sum))
# This is a helper function to determine where trials have different strides
def determine_different_strides():
pass
#%%
feature1 = 'jointangles_ankle_x'
feature2 = 'jointmoment_knee_x'
print("Comparing " + feature1 + " to " + feature2)
file_name = '../local-storage/InclineExperiment.mat'
h5py_file = h5py.File(file_name)['Gaitcycle']
bad_datasets = []
# Iterate through all the subjects, make a file per subject to keep it RAM bound
for subject in h5py_file.keys():
if(subject != "AB05"):
continue
# Initialize variables for the subject
data = h5py_file[subject]
# Store all the end points
columns_to_endpoint_list = {}
get_end_points(data, columns_to_endpoint_list)
#Get the data for the features that we want
data_feature1 = columns_to_endpoint_list[feature1]
data_feature2 = columns_to_endpoint_list[feature2]
bad_run_filter = lambda x: (x[:,:],[]) if (np.count_nonzero(x[-1,:]) or np.count_nonzero(x[-2,:]) == 0) else (x[:-1,:],[x.shape[0]-1])
#Update the dataset based on the filter implementation
data_feature1[1] = [bad_run_filter(x)[0] for x in data_feature1[1]]
data_feature2[1] = [bad_run_filter(x)[0] for x in data_feature2[1]]
#Create a mapping from trial to trial data shape
#Initialize each trial's entry to an empty list
trial_length_dict_feature1 = {
x.split('/')[0] + " " + x.split('/')[-3]: [] for x in data_feature1[0]}
trial_length_dict_feature2 = {
x.split('/')[0] + " " + x.split('/')[-3]: [] for x in data_feature2[0]}
trial_to_dataset = {}
#Initialize the dictionary taking into consideration left and right legs
for trial_long,data in zip(*data_feature1):
trial = trial_long.split('/')[0] + " " + trial_long.split('/')[-3]
trial_length_dict_feature1[trial].append(data)
for trial_long,data in zip(*data_feature2):
trial = trial_long.split('/')[0] + " " + trial_long.split('/')[-3]
trial_length_dict_feature2[trial].append(data)
sum_len1 = 0
sum_len2 = 0
#Verify each trial shape
for trial in trial_length_dict_feature1.keys():
trial_data_pair = zip(trial_length_dict_feature1[trial],
trial_length_dict_feature2[trial])
for single_data_trial1, single_data_trial2 in trial_data_pair:
len1 = single_data_trial1.shape[0]*single_data_trial1.shape[1]
len2 = single_data_trial2.shape[0]*single_data_trial2.shape[1]
if len1 != len2:
bad_datasets.append((single_data_trial1,single_data_trial2))
pass
print("!!!!!!!!!!!!!!!!! This trial does not match " + subject + " " + trial + " len1 " + str(len1) + " len2 " + str(len2))
else:
pass
print("Good " + subject + " " + trial + " len1 " + str(len1) + " len2 " + str(len2))
for dataset_pair in bad_datasets:
print(np.count_nonzero(np.array(dataset_pair[1]).flatten()[-150:]))
bad_datasets_np = [ (x[:,:], y[:,:]) for x,y in bad_datasets]
bad_datasets_np.insert(0,(feature1,feature2))
#Conclusion, there are datasets that have zero final stride inconsistently
# I found a way to identify them
# Need to implement into the dataset flattening technique
# I think they are all on the left leg
#%%
def quick_flatten_dataport():
pass
# %%
file_name = '../local-storage/InclineExperiment.mat'
h5py_file = h5py.File(file_name)['Gaitcycle']
# Iterate through all the subjects, make a file per subject to keep it RAM bound
for subject in h5py_file.keys():
#Uncomment if you want to debug a specific subject
# if subject != "AB05":
# continue
print("Flattening subject: " + subject)
# Initialize variables for the subject
data = h5py_file[subject]
save_name = '../local-storage/test/dataport_flattened_partial_{}.parquet'.format(
subject)
# Store all the end points
columns_to_endpoint_list = {}
get_end_points(data, columns_to_endpoint_list)
# This dictionary stores dataframes based on the amount of strides that
# they have
strides_to_dataframes_dict = {}
# Which column will be used to get information about each row
selected_column = 'jointangles_ankle_x'
# Main loop - process each potential column
for column_name, endpoint_list in columns_to_endpoint_list.items():
# If the column name contains any of these substrings, ignore the endpoint
if('subjectdetails' in column_name or
'cycles' in column_name or
'stepsout' in column_name or
'description' in column_name or
'mean' in column_name or
'std' in column_name):
#print(column_name + " " + str(len(endpoint_list[1])) + " (ignored)")
continue
# Else: this is a valid column
#print(column_name + " " + str(len(endpoint_list[1])))
# Filter the endpoint list for bad (all-zero) strides
#This version only removes a bad stride at the end
bad_run_filter = lambda x: (x[:,:],[]) if (np.count_nonzero(x[-1,:]) or np.count_nonzero(x[-2,:]) == 0) else (x[:-1,:],[x.shape[0]-1])
#This version removes elements in the middle
# def bad_run_filter(trial_dataset):
# remove_list = []
# for row_index,row in enumerate(trial_dataset):
# num_digits = np.count_nonzero(row)
# if(num_digits == 0):
# remove_list.append(row_index)
# #Remove elements
# # if remove list is empty, nothing is deleted
# return np.delete(trial_dataset[:,:], remove_list, axis=0), remove_list
endpoint_list_filtered = [bad_run_filter(x)[0] for x in endpoint_list[1]]
# Get the data to add it to a dataframe
data_array = np.concatenate(endpoint_list_filtered, axis=0).flatten()
# Calculate how many strides are in the dataframe
len_arr = data_array.shape[0]
len_key = len_arr/150.0
# Add the key to the dataframe
try:
strides_to_dataframes_dict[len_key][column_name] = data_array
except KeyError:
strides_to_dataframes_dict[len_key] = DataFrame()
strides_to_dataframes_dict[len_key][column_name] = data_array
# All the dataframes have been created, add information about phase and task
# Helper functions to get time, ramp to append task information to dataframe
@lru_cache(maxsize=5)
def get_time(trial, leg):
return data[trial]['cycles'][leg]['time']
@lru_cache(maxsize=5)
def get_ramp(trial):
return data[data[trial]['description'][1][1]][0][0]
@lru_cache(maxsize=5)
def get_speed(trial):
return data[data[trial]['description'][1][0]][0][0]
# Iterate by row to get phase information
# Ugly but effective
# We need to use the unfiltered version to get the remove_list again
# This is used to filter the time column
endpoint_list = columns_to_endpoint_list[selected_column]
# Create lists to store all the phase dot and stride length information
trials = []
legs = []
phase_dot_list = []
stride_length_list = []
# Iterate by trial to get time, speed
for experiment_name, dataset in zip(*endpoint_list):
filtered_dataset, remove_list = bad_run_filter(dataset)
endpoint_split = experiment_name.split('/')
trial = endpoint_split[0]
leg = endpoint_split[-3]
trials.extend([trial]*filtered_dataset.shape[0]*filtered_dataset.shape[1])
legs.extend([leg]*filtered_dataset.shape[0]*filtered_dataset.shape[1])
time = get_time(trial, leg)
speed = get_speed(trial)
#Filter out times that are not being used since there is no data
time = np.delete(time,remove_list,axis=0)
time_delta = (time[:, -1]-time[:, 0])
phase_dot_list.append(np.repeat(1/time_delta, 150))
stride_length_list.append(np.repeat(speed*time_delta, 150))
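# phase_dot is the phase rate (1 / stride duration, i.e. strides per second) and stride_length
# is speed * duration; both are repeated 150 times because every stride is resampled to 150 phase points.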
# Get the corresponding dataframe to the selected column
df = None
for dataframe in strides_to_dataframes_dict.values():
if selected_column in dataframe.columns:
df = dataframe
# print(len(trials))
# print(df.shape[0])
df['ramp'] = [get_ramp(trial) for trial in trials]
df['speed'] = [get_speed(trial) for trial in trials]
df['trial'] = trials
# We don't want phase to reach one because 0=1 in terms of phase
phase = np.linspace(0, (1-1/150), 150)
df['leg'] = legs
df['phase'] = np.tile(phase, int(df.shape[0]/150))
df['phase_dot'] = np.concatenate(phase_dot_list, axis=0)
df['stride_length'] = np.concatenate(stride_length_list, axis=0)
print("Number of columns: " + str(len(df.columns)))
print("Columns: " + df.columns)
print("strides to length " + str([(strides,len(dataset.columns)) for strides, dataset in strides_to_dataframes_dict.items()]))
# Comment out to not save
df.to_parquet(save_name)
# Uncomment break to just get one person
#break
# %%
def add_global_shank_angle():
pass
# %%
# #Get the subjects
subjects = [
('AB10', '../local-storage/test/dataport_flattened_partial_AB10.parquet')]
for i in range(1, 10):
subjects.append(
('AB0'+str(i), '../local-storage/test/dataport_flattened_partial_AB0'+str(i)+'.parquet'))
for subject in subjects:
df = pd.read_parquet(subject[1])
print(df.columns)
# Create the shank angles based on foot and ankle
# df.drop(columns=['jointangle_shank_x','jointangle_shank_y','jointangle_shank_z'])
df['jointangles_shank_x'] = df['jointangles_foot_x'] + \
df['jointangles_ankle_x']
df['jointangles_shank_y'] = df['jointangles_foot_y'] + \
df['jointangles_ankle_y']
df['jointangles_shank_z'] = df['jointangles_foot_z'] + \
df['jointangles_ankle_z']
# Create the thigh angle based on pelvis and ip
df['jointangles_thigh_x'] = df['jointangles_pelvis_x'] + \
df['jointangles_hip_x']
df['jointangles_thigh_y'] = df['jointangles_pelvis_y'] + \
df['jointangles_hip_y']
df['jointangles_thigh_z'] = df['jointangles_pelvis_z'] + \
df['jointangles_hip_z']
# Calculate shank dot (the shank angle derivative) manually
shank_angles_cutoff = df['jointangles_shank_x'].values[:-1]
shank_angles_future = df['jointangles_shank_x'].values[1:]
phase_rate = df['phase_dot'].values[:-1]
measured_shank_derivative = (
shank_angles_future-shank_angles_cutoff)*(phase_rate)*150
measured_shank_derivative = np.append(measured_shank_derivative, 0)
df['jointangles_shank_dot_x'] = measured_shank_derivative
# Calculate foot dot (the foot angle derivative) manually
foot_angles_cutoff = df['jointangles_foot_x'].values[:-1]
foot_angles_future = df['jointangles_foot_x'].values[1:]
measured_foot_derivative = (
foot_angles_future-foot_angles_cutoff)*(phase_rate)*150
measured_foot_derivative = np.append(measured_foot_derivative, 0)
df['jointangles_foot_dot_x'] = measured_foot_derivative
# Calculate knee dot (the knee angle derivative) manually
knee_angles_cutoff = df['jointangles_knee_x'].values[:-1]
knee_angles_future = df['jointangles_knee_x'].values[1:]
measured_knee_derivative = (
knee_angles_future-knee_angles_cutoff)*(phase_rate)*150
measured_knee_derivative = np.append(measured_knee_derivative, 0)
df['jointangles_knee_dot_x'] = measured_knee_derivative
# Calculate hip dot (the hip angle derivative) manually
hip_angles_cutoff = df['jointangles_hip_x'].values[:-1]
hip_angles_future = df['jointangles_hip_x'].values[1:]
measured_hip_derivative = (
hip_angles_future-hip_angles_cutoff)*(phase_rate)*150
measured_hip_derivative = np.append(measured_hip_derivative, 0)
df['jointangles_hip_dot_x'] = measured_hip_derivative
# Calculate thigh dot (the thigh angle derivative) manually
thigh_angles_cutoff = df['jointangles_thigh_x'].values[:-1]
thigh_angles_future = df['jointangles_thigh_x'].values[1:]
measured_thigh_derivative = (
thigh_angles_future-thigh_angles_cutoff)*(phase_rate)*150
measured_thigh_derivative = np.append(measured_thigh_derivative, 0)
df['jointangles_thigh_dot_x'] = measured_thigh_derivative
df.to_parquet(subject[1])
# %%
if __name__ == '__main__':
quick_flatten_dataport()
add_global_shank_angle()
#determine_different_strides()
#determine_zero_data_strides()
pass
# %%
| 39.283333 | 197 | 0.616583 | 2,101 | 16,499 | 4.585911 | 0.156116 | 0.044525 | 0.036533 | 0.028334 | 0.389414 | 0.352776 | 0.293721 | 0.262688 | 0.237779 | 0.213908 | 0 | 0.019407 | 0.278562 | 16,499 | 419 | 198 | 39.377088 | 0.790053 | 0.214801 | 0 | 0.236052 | 0 | 0 | 0.111293 | 0.035464 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038627 | false | 0.030043 | 0.025751 | 0.012876 | 0.081545 | 0.042918 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a25445a632ed128aca5965668a5e89d6da17379e | 1,722 | py | Python | tests/workers/test_sig_algos_openssl3_0_0_ed25519_ed448.py | timb-machine-mirrors/tlsmate | 1313161b9170311f466a3a43b3d84797cecc0291 | [
"MIT"
] | null | null | null | tests/workers/test_sig_algos_openssl3_0_0_ed25519_ed448.py | timb-machine-mirrors/tlsmate | 1313161b9170311f466a3a43b3d84797cecc0291 | [
"MIT"
] | null | null | null | tests/workers/test_sig_algos_openssl3_0_0_ed25519_ed448.py | timb-machine-mirrors/tlsmate | 1313161b9170311f466a3a43b3d84797cecc0291 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Implements a class to be used for unit testing.
"""
import pathlib
from tlsmate.workers.sig_algo import ScanSigAlgs
from tlsmate.tlssuite import TlsSuiteTester
from tlsmate.tlssuite import TlsLibrary
sig_algs = ["ED25519", "ED448"]
sig_algs_tls13 = [
"ECDSA_SECP384R1_SHA384",
"RSA_PSS_RSAE_SHA256",
"RSA_PSS_RSAE_SHA384",
"RSA_PSS_RSAE_SHA512",
]
class TestCase(TlsSuiteTester):
"""Class used for tests with pytest.
For more information refer to the documentation of the TcRecorder class.
"""
sp_in_yaml = "profile_supported_groups_openssl3_0_0_ed25519_ed448"
sp_out_yaml = "profile_sig_algos_openssl3_0_0_ed25519_ed448"
recorder_yaml = "recorder_sig_algos_openssl3_0_0_ed25519_ed448"
path = pathlib.Path(__file__)
server_cmd = (
"utils/start_openssl --version {library} --port {server_port} "
"--cert1 server-ed448 --cert2 server-ed25519 "
"-- -www"
)
library = TlsLibrary.openssl3_0_0
server = "localhost"
def check_sig_algo(self, prof):
assert len(prof["algorithms"]) == len(sig_algs)
for a, b in zip(sig_algs, prof["algorithms"]):
assert a == b["name"]
def check_profile(self, profile):
self.check_sig_algo(profile["versions"][4]["signature_algorithms"])
self.check_sig_algo(profile["versions"][5]["signature_algorithms"])
def run(self, tlsmate, is_replaying):
server_profile = tlsmate.server_profile
client = tlsmate.client
client.init_profile()
ScanSigAlgs(tlsmate).run()
self.check_profile(server_profile.make_serializable())
if __name__ == "__main__":
TestCase().entry(is_replaying=False)
| 29.689655 | 76 | 0.695122 | 217 | 1,722 | 5.16129 | 0.442396 | 0.025 | 0.035714 | 0.045536 | 0.128571 | 0.108929 | 0.053571 | 0 | 0 | 0 | 0 | 0.053996 | 0.19338 | 1,722 | 57 | 77 | 30.210526 | 0.75234 | 0.103368 | 0 | 0 | 0 | 0 | 0.288903 | 0.106369 | 0 | 0 | 0 | 0 | 0.052632 | 1 | 0.078947 | false | 0 | 0.105263 | 0 | 0.394737 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2567a31d1608a2be704f6f5c09c7ca757ea66ed | 5,955 | py | Python | examples/application_robot_calibration/model_training/model.py | Tabor-Research-Group/phoenics | e14a4dfc76035aa434f2962f9db5af7bc76dcf58 | [
"Apache-2.0"
] | 72 | 2018-01-19T21:08:38.000Z | 2022-03-26T08:44:49.000Z | examples/application_robot_calibration/model_training/model.py | Tabor-Research-Group/phoenics | e14a4dfc76035aa434f2962f9db5af7bc76dcf58 | [
"Apache-2.0"
] | 6 | 2018-12-14T02:44:51.000Z | 2022-02-16T08:01:12.000Z | examples/application_robot_calibration/model_training/model.py | Tabor-Research-Group/phoenics | e14a4dfc76035aa434f2962f9db5af7bc76dcf58 | [
"Apache-2.0"
] | 16 | 2018-06-20T11:34:30.000Z | 2022-01-07T17:51:22.000Z | #!/usr/bin/env python
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import pickle
import numpy as np
import tensorflow as tf
from single_model import SingleModel
#=====================================================================
NUM_FOLDS = 10
#=====================================================================
class Model(object):
def __init__(self, data_file, index_file, model_path, plot = False):
self.models_are_loaded = False
self.model_path = model_path
self.plot = plot
self.dataset = pickle.load(open(data_file, 'rb'))
self.indices = pickle.load(open(index_file, 'rb'))
self._read_indices()
self._assemble_training_sets()
def _read_indices(self):
self.work_indices = self.indices['work_indices']
self.test_indices = self.indices['test_indices']
self.train_indices = [self.indices['cross_validation_sets'][index]['train_indices'] for index in range(NUM_FOLDS)]
self.valid_indices = [self.indices['cross_validation_sets'][index]['valid_indices'] for index in range(NUM_FOLDS)]
def _assemble_training_sets(self):
params = self.dataset['parameters']
areas = self.dataset['peak_area']
times = self.dataset['execution_time']
self.features = params[self.work_indices]
self.targets = np.array([areas[self.work_indices], times[self.work_indices]]).transpose()
self.test_features = params[self.test_indices]
self.test_targets = np.array([areas[self.test_indices], times[self.test_indices]]).transpose()
max_features = np.amax(np.amax(self.features, axis = 0), axis = 0)
min_features = np.amin(np.amin(self.features, axis = 0), axis = 0)
mean_features = np.mean(np.mean(self.features, axis = 0), axis = 0)
std_features = np.std(np.std(self.features, axis = 0), axis = 0)
max_targets = np.amax(self.targets, axis = 0)
min_targets = np.amin(self.targets, axis = 0)
mean_targets = np.mean(self.targets, axis = 0)
details_dict = {'min_features': min_features, 'max_features': max_features,
'mean_features': mean_features, 'std_features': std_features,
'min_targets': min_targets, 'max_targets': max_targets, 'mean_targets': mean_targets,
'features_shape': self.features.shape, 'targets_shape': self.targets.shape}
pickle.dump(details_dict, open('dataset_details.pkl', 'wb'))
self.dataset_details = 'dataset_details.pkl'
self.train_features, self.train_targets = [], []
self.valid_features, self.valid_targets = [], []
for index in range(NUM_FOLDS):
train_features = params[self.train_indices[index]]
valid_features = params[self.valid_indices[index]]
train_targets = np.array([areas[self.train_indices[index]], times[self.train_indices[index]]]).transpose()
valid_targets = np.array([areas[self.valid_indices[index]], times[self.valid_indices[index]]]).transpose()
self.train_features.append(train_features)
self.train_targets.append(train_targets)
self.valid_features.append(np.concatenate([valid_features for i in range(len(train_features) // len(valid_features))]))
self.valid_targets.append(np.concatenate([valid_targets for i in range(len(train_targets) // len(valid_targets))]))
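# The validation arrays are tiled (integer-repeated) to roughly match the training set length
# for each fold, presumably so SingleModel can batch them the same way as the training data.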
def initialize_models(self, batch_size = 1):
self.models = []
self.graphs = [tf.Graph() for i in range(NUM_FOLDS)]
for fold_index in range(NUM_FOLDS):
with self.graphs[fold_index].as_default():
single_model = SingleModel(self.graphs[fold_index], self.dataset_details, scope = 'fold_%d' % fold_index, batch_size = batch_size)
self.models.append(single_model)
def set_hyperparameters(self, hyperparam_dict):
for model in self.models:
model.set_hyperparameters(hyperparam_dict)
def construct_models(self):
for model_index, model in enumerate(self.models):
print('constructing model %d ...' % model_index)
with self.graphs[model_index].as_default():
model.construct_graph()
def _load_models(self, batch_size = 1):
self.models = []
self.graphs = [tf.Graph() for i in range(NUM_FOLDS)]
for fold_index in range(NUM_FOLDS):
with self.graphs[fold_index].as_default():
single_model = SingleModel(self.graphs[fold_index], self.dataset_details, scope = 'fold_%d' % fold_index, batch_size = batch_size)
single_model.restore('%s/Fold_%d/model.ckpt' % (self.model_path, fold_index))
self.models.append(single_model)
self.models_are_loaded = True
def train(self):
for model_index, model in enumerate(self.models):
model.train(self.train_features[model_index], self.train_targets[model_index],
self.valid_features[model_index], self.valid_targets[model_index],
model_path = '%s/Fold_%d' % (self.model_path, model_index), plot = self.plot)
def predict(self, features):
if not self.models_are_loaded: self._load_models(batch_size = len(features))
pred_dict = {'samples': [], 'averages': [], 'uncertainties': []}
for fold_index in range(NUM_FOLDS):
single_pred_dict = self.models[fold_index].predict(features)
for key in pred_dict.keys():
pred_dict[key].append(single_pred_dict[key])
for key in pred_dict.keys():
pred_dict[key] = np.array(pred_dict[key])
pred_dict['averages'] = np.mean(pred_dict['averages'], axis = 0)
return pred_dict
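# predict() ensembles the NUM_FOLDS cross-validation models: the per-fold samples and
# uncertainties are kept stacked, while the per-fold averages are reduced to a single mean.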
#=====================================================================
if __name__ == '__main__':
hyperparam_dict = {'REG': 0.1, 'LEARNING_RATE': 10**-3.0}
dataset_file = 'data_set/experimental_data.pkl'
index_file = 'data_set/cross_validation_indices.pkl'
model = Model(dataset_file, index_file, model_path = './', plot = True)
model.initialize_models(batch_size = len(model.train_features[0]))
model.set_hyperparameters(hyperparam_dict)
#=== TRAINING
model.construct_models()
model.train()
#=== PREDICTING
# model.set_hyperparameters()
# pred = model.predict(model.test_features)
# import matplotlib.pyplot as plt
# import seaborn as sns
# plt.plot(model.test_targets, pred['averages'], ls = '', marker = '.')
# plt.show()
| 35.658683 | 134 | 0.702267 | 820 | 5,955 | 4.834146 | 0.162195 | 0.015136 | 0.020182 | 0.030272 | 0.335772 | 0.242684 | 0.191978 | 0.14884 | 0.14884 | 0.111504 | 0 | 0.004632 | 0.129975 | 5,955 | 166 | 135 | 35.873494 | 0.760471 | 0.077414 | 0 | 0.186275 | 0 | 0 | 0.092501 | 0.023718 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088235 | false | 0 | 0.04902 | 0 | 0.156863 | 0.009804 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2586b5ff30ddfea5f6314e19097d830a9096f5c | 641 | py | Python | wagtail_managed404/wagtail_hooks.py | mwesterhof/wagtail_managed404 | a961271c7fc70accb43ec329da9defe36e3dab3c | [
"MIT"
] | 1 | 2021-03-11T10:06:04.000Z | 2021-03-11T10:06:04.000Z | wagtail_managed404/wagtail_hooks.py | mwesterhof/wagtail_managed404 | a961271c7fc70accb43ec329da9defe36e3dab3c | [
"MIT"
] | null | null | null | wagtail_managed404/wagtail_hooks.py | mwesterhof/wagtail_managed404 | a961271c7fc70accb43ec329da9defe36e3dab3c | [
"MIT"
] | null | null | null | from wagtail.contrib.modeladmin.helpers import PermissionHelper
from wagtail.contrib.modeladmin.options import ModelAdmin, modeladmin_register
from .models import PageNotFoundEntry
class PageNotFoundPermissionHelper(PermissionHelper):
def user_can_create(self, user):
return False
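# Returning False hides the "Add" button in the Wagtail admin, so PageNotFoundEntry rows
# are presumably only created programmatically (e.g. when the app records a 404 hit).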
@modeladmin_register
class PageNotFoundEntryAdmin(ModelAdmin):
permission_helper_class = PageNotFoundPermissionHelper
model = PageNotFoundEntry
menu_label = '404 results'
list_display = (
'url', 'site', 'hits', 'redirect_to', 'permanent', 'created')
list_filter = ('permanent', 'site')
menu_icon = 'fa-frown-o'
| 30.52381 | 78 | 0.75819 | 64 | 641 | 7.421875 | 0.65625 | 0.046316 | 0.075789 | 0.117895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005525 | 0.152886 | 641 | 20 | 79 | 32.05 | 0.869245 | 0 | 0 | 0 | 0 | 0 | 0.112324 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.2 | 0.066667 | 0.866667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a258ce4f4f56882fe5862ad4efbc32b419382d5d | 2,086 | py | Python | src/collectors/cpuacct_cgroup/test/testcpuacct_cgroup.py | bbinet/Diamond | 05942d83fd1a86b5dc16a5610dcaba7ca4463e3a | [
"MIT"
] | 1 | 2015-10-18T17:37:30.000Z | 2015-10-18T17:37:30.000Z | src/collectors/cpuacct_cgroup/test/testcpuacct_cgroup.py | bbinet/Diamond | 05942d83fd1a86b5dc16a5610dcaba7ca4463e3a | [
"MIT"
] | null | null | null | src/collectors/cpuacct_cgroup/test/testcpuacct_cgroup.py | bbinet/Diamond | 05942d83fd1a86b5dc16a5610dcaba7ca4463e3a | [
"MIT"
] | null | null | null | #!/usr/bin/python
# coding=utf-8
################################################################################
import os
from test import CollectorTestCase
from test import get_collector_config
from test import unittest
from mock import Mock
from mock import patch
try:
from cStringIO import StringIO
StringIO # workaround for pyflakes issue #13
except ImportError:
from StringIO import StringIO
from diamond.collector import Collector
from cpuacct_cgroup import CpuAcctCgroupCollector
dirname = os.path.dirname(__file__)
fixtures_path = os.path.join(dirname, 'fixtures/')
fixtures = []
for root, dirnames, filenames in os.walk(fixtures_path):
fixtures.append([root, dirnames, filenames])
class TestCpuAcctCgroupCollector(CollectorTestCase):
def setUp(self):
config = get_collector_config('CpuAcctCgroupCollector', {
'interval': 10
})
self.collector = CpuAcctCgroupCollector(config, None)
def test_import(self):
self.assertTrue(CpuAcctCgroupCollector)
@patch('__builtin__.open')
@patch('os.walk', Mock(return_value=iter(fixtures)))
@patch.object(Collector, 'publish')
def test_should_open_all_cpuacct_stat(self, publish_mock, open_mock):
open_mock.side_effect = lambda x: StringIO('')
self.collector.collect()
open_mock.assert_any_call(
fixtures_path + 'lxc/testcontainer/cpuacct.stat')
open_mock.assert_any_call(fixtures_path + 'lxc/cpuacct.stat')
open_mock.assert_any_call(fixtures_path + 'cpuacct.stat')
@patch.object(Collector, 'publish')
def test_should_work_with_real_data(self, publish_mock):
CpuAcctCgroupCollector.CPUACCT_PATH = fixtures_path
self.collector.collect()
self.assertPublishedMany(publish_mock, {
'lxc.testcontainer.user': 1318,
'lxc.testcontainer.system': 332,
'lxc.user': 36891,
'lxc.system': 88927,
'system.user': 3781253,
'system.system': 4784004,
})
if __name__ == "__main__":
unittest.main()
| 32.092308 | 80 | 0.671141 | 227 | 2,086 | 5.92511 | 0.38326 | 0.053532 | 0.031227 | 0.037918 | 0.153903 | 0.153903 | 0.153903 | 0.094424 | 0.065428 | 0 | 0 | 0.021441 | 0.19511 | 2,086 | 64 | 81 | 32.59375 | 0.779631 | 0.029722 | 0 | 0.12 | 0 | 0 | 0.118557 | 0.050515 | 0 | 0 | 0 | 0 | 0.1 | 1 | 0.08 | false | 0 | 0.24 | 0 | 0.34 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a25c4ad5c22f95e89a80f59b5eb116b3c2186966 | 4,455 | py | Python | MangaLoader2.py | hukumka/MangaLoader | 0192d5d07002b637e5f3c6f8f4686eff3a6b20bb | [
"MIT"
] | null | null | null | MangaLoader2.py | hukumka/MangaLoader | 0192d5d07002b637e5f3c6f8f4686eff3a6b20bb | [
"MIT"
] | null | null | null | MangaLoader2.py | hukumka/MangaLoader | 0192d5d07002b637e5f3c6f8f4686eff3a6b20bb | [
"MIT"
] | null | null | null | from os import path
import os
from bs4 import BeautifulSoup
import requests
from MangaLoader import ensure_dir
from MangaLoader import PageImage
from MangaLoader import find_manga
class MangaPageScrapper:
BASE_URL = "https://mangapark.me"
def __init__(self, name, version):
"""
initialize main page scrapper
name: Manga url name
version: Desired version. Some version numbers may be skipped on the site
(e.g. versions 1, 2, 3 and 5 exist, but not 4)
"""
self.__name = name
self.__version = version
r = requests.get(self.BASE_URL + "/manga/" + self.__name)
if r.status_code == 200:
self.__page_html = r.text
self.__soup = BeautifulSoup(self.__page_html, 'html.parser')
self.__contents_root = self.__soup.find(id="list")
version_root = self.__contents_root.find_all(id="stream_"+str(version))
if len(version_root) != 1:
raise MangaPageScrapperError("no such version!")
self.__version_root = version_root[0]
self._volumes = self.__scrap_volume_list()
else:
r.raise_for_status()
def __scrap_volume_list(self):
volumes = self.__version_root.find_all(class_=lambda x: x is not None and x.startswith("volume"))
return [{"root": x, "chapters": self.__scrap_volume_chapters(x)} for x in volumes][::-1]
def __scrap_volume_chapters(self, volume_root):
chapters = volume_root.find("ul", class_="chapter")
chapters = chapters.find_all("li")
return [{"root": x, "pages": self.__scrap_chapter_pages(x)} for x in chapters][::-1]
def __scrap_chapter_pages(self, chapter_root):
em = chapter_root.find("em")
page_count = int(em.contents[-1].strip()[3:])
base_url = em.find("a", text="all")["href"]
return ["{}{}/{}".format(self.BASE_URL, base_url, i+1) for i in range(page_count)]
def info(self):
vol_count = len(self._volumes)
chap_count = 0
page_count = 0
for v in self._volumes:
chap_count += len(v["chapters"])
for c in v["chapters"]:
page_count += len(c["pages"])
print("volume count = {}; chapter count = {}; page count = {}".format(vol_count, chap_count, page_count))
def iter_pages(self, volume_filter=lambda _: True):
for (vol_id, vol) in enumerate(self._volumes):
if volume_filter(vol):
for (chap_id, chap) in enumerate(vol["chapters"]):
for (page_id, page) in enumerate(chap["pages"]):
yield {
"volume_id": vol_id,
"chapter_id": chap_id,
"page_id": page_id,
"page_url": page
}
@staticmethod
def is_volume_null(volume):
volume_root = volume["root"]
return len(volume_root.h4.contents) == 3
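# A volume whose <h4> header has exactly three child nodes is treated as "null"; this is
# used as the default volume_policy filter in MangaLoader below.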
class MangaPageScrapperError(Exception):
pass
class MangaLoader:
def __init__(self, name, version=1, volume_policy=MangaPageScrapper.is_volume_null):
self.__volume_policy = volume_policy
self.__scrapper = MangaPageScrapper(name, version)
def load(self, path):
for p in self.__scrapper.iter_pages(self.__volume_policy):
image_dir = "{}/{:03}/{:03}/".format(path, p["volume_id"], p["chapter_id"])
image_path = "{}{:03}".format(image_dir, p["page_id"])
ensure_dir(image_dir)
if self.need_to_load(image_path):
print("downloading: " + p["page_url"])
PageImage(p["page_url"]).save(image_path)
else:
print("skipping: " + p["page_url"])
def need_to_load(self, save_path):
dir, page_id = path.split(save_path)
for file in os.listdir(dir):
if file.startswith(page_id):
return False
return True
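# A page is skipped when any existing file in the chapter directory already starts with its
# zero-padded page id, so an interrupted download can effectively be resumed.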
def info(self):
self.__scrapper.info()
@staticmethod
def find_and_load(name, path, version=1, volume_policy=MangaPageScrapper.is_volume_null):
name = find_manga(name)
print(name)
loader = MangaLoader(name, version, volume_policy)
loader.info()
loader.load(path)
if __name__ == "__main__":
MangaLoader.find_and_load("one piece", "D:/manga/one_piece", volume_policy=lambda _: True)
| 36.219512 | 113 | 0.597082 | 549 | 4,455 | 4.530055 | 0.233151 | 0.033776 | 0.025332 | 0.012063 | 0.054282 | 0.039405 | 0.039405 | 0.039405 | 0 | 0 | 0 | 0.008777 | 0.283951 | 4,455 | 122 | 114 | 36.516393 | 0.770846 | 0.038608 | 0 | 0.064516 | 0 | 0 | 0.088784 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129032 | false | 0.010753 | 0.075269 | 0 | 0.311828 | 0.043011 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2607781899d6ae8b21f0be258c4ecaf3587a40b | 1,109 | py | Python | tests/temp/test_base.py | Gabik21/afancontrol | 4f2c01bf5f20f595125f8b1c89a2b07bf463416e | [
"MIT"
] | 36 | 2019-06-15T15:54:45.000Z | 2022-03-23T06:33:41.000Z | tests/temp/test_base.py | Gabik21/afancontrol | 4f2c01bf5f20f595125f8b1c89a2b07bf463416e | [
"MIT"
] | 5 | 2020-05-07T13:25:08.000Z | 2021-04-18T19:41:22.000Z | tests/temp/test_base.py | Gabik21/afancontrol | 4f2c01bf5f20f595125f8b1c89a2b07bf463416e | [
"MIT"
] | 2 | 2020-06-09T06:47:25.000Z | 2021-03-13T22:45:31.000Z | from typing import Optional
from unittest.mock import patch
import pytest
from afancontrol.temp import Temp, TempCelsius, TempStatus
class DummyTemp(Temp):
def _get_temp(self):
pass
@pytest.mark.parametrize(
"temp, threshold, panic, is_threshold, is_panic",
[
(34.0, None, 60.0, False, False),
(42.0, None, 60.0, False, False),
(57.0, 55.0, 60.0, True, False),
(61.0, 55.0, 61.0, True, True),
(61.0, None, 61.0, False, True),
],
)
def test_temp(
temp: TempCelsius,
threshold: Optional[TempCelsius],
panic: TempCelsius,
is_threshold,
is_panic,
):
min = TempCelsius(40.0)
max = TempCelsius(50.0)
with patch.object(DummyTemp, "_get_temp") as mock_get_temp:
t = DummyTemp(panic=panic, threshold=threshold)
mock_get_temp.return_value = [temp, min, max]
assert t.get() == TempStatus(
temp=temp,
min=min,
max=max,
panic=panic,
threshold=threshold,
is_panic=is_panic,
is_threshold=is_threshold,
)
| 23.595745 | 63 | 0.593327 | 141 | 1,109 | 4.539007 | 0.319149 | 0.04375 | 0.060938 | 0.05625 | 0.05625 | 0.05625 | 0 | 0 | 0 | 0 | 0 | 0.053097 | 0.286745 | 1,109 | 46 | 64 | 24.108696 | 0.756005 | 0 | 0 | 0 | 0 | 0 | 0.049594 | 0 | 0 | 0 | 0 | 0 | 0.026316 | 1 | 0.052632 | false | 0.026316 | 0.105263 | 0 | 0.184211 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a260b7c5b171e1df173dec41f22475e4f3f8f6d2 | 25,813 | py | Python | riboflask.py | skiniry/Trips-Viz | a742a6c7d0c9758e3c439828e804025d7fc44b4f | [
"MIT"
] | 7 | 2019-07-25T14:34:48.000Z | 2021-10-19T07:52:29.000Z | riboflask.py | skiniry/Trips-Viz | a742a6c7d0c9758e3c439828e804025d7fc44b4f | [
"MIT"
] | 3 | 2021-06-07T23:26:38.000Z | 2021-11-15T22:37:43.000Z | riboflask.py | skiniry/Trips-Viz | a742a6c7d0c9758e3c439828e804025d7fc44b4f | [
"MIT"
] | 2 | 2019-09-04T08:51:25.000Z | 2022-03-10T20:58:40.000Z | import matplotlib
matplotlib.use('agg')
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
from matplotlib.transforms import blended_transform_factory
import mpld3
import logging
from mpld3 import plugins,utils
import collections
from sqlitedict import SqliteDict
import pandas as pd
from fetch_shelve_reads2 import get_reads,get_seq_var,get_readlength_breakdown
import sqlite3
import os
import config
from new_plugins import InteractiveLegendPlugin,PointHTMLTooltip,TopToolbar,DownloadProfile,DownloadPNG
import time
# CSS for popup tables that appear when hovering over aug codons
point_tooltip_css = """
table
{
border-collapse: collapse;
}
th
{
color: #000000;
background-color: #d2d4d8;
}
td
{
background-color: #ffffff;
}
table, th, td
{
font-family:Arial, Helvetica, sans-serif;
border: 0px solid black;
text-align: left;
}
"""
color_dict = {'frames': ['#FF4A45', '#64FC44', '#5687F9']}
def get_user_defined_seqs(seq,seqhili):
iupac_dict = {"A":["A"],"U":["U"],"G":["G"],"C":["C"],"R":["A","G"],"Y":["C","U"],"S":["G","C"],"W":["A","U"],"K":["G","U"],
"M":["A","C"],"B":["C","G","U"],"D":["A","G","U"],"H":["A","C","U"],"V":["A","C","G"],"N":["A","U","G","C"]}
signalhtml = {0:[],1:[],2:[]}
seq = seq.replace("T","U")
near_cog_starts = {0:[],1:[],2:[]}
for i in range(0,len(seq)):
for subseq in seqhili:
subseq = subseq.upper()
subseq = subseq.replace("T","U").replace(" ","")
partial_seq = list(seq[i:i+len(subseq)])
if len(partial_seq) != len(subseq):
continue
x= 0
for x in range(0,len(subseq)):
char = subseq[x]
if partial_seq[x] in iupac_dict[char]:
partial_seq[x] = char
partial_seq = "".join(partial_seq)
if partial_seq == subseq:
near_cog_starts[(i)%3].append(i+1)
datadict = {'sequence': [subseq]}
df = pd.DataFrame(datadict, columns=(["sequence"]))
label = df.iloc[[0], :].T
label.columns = ["Position: {}".format(i)]
signalhtml[(i)%3].append(str(label.to_html()))
return near_cog_starts,signalhtml
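# Returns, per reading frame, the transcript positions where a user-supplied (IUPAC-aware)
# motif starts, plus a small pandas-rendered HTML table for each hit, intended for use as
# hover tooltips (see the PointHTMLTooltip plugin imported above).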
def merge_dicts(dict1,dict2):
print ("dict1, dict2,", dict1, dict2)
for nuc in dict2:
if nuc not in dict1:
dict1[nuc] = dict2[nuc]
else:
for pos in dict2[nuc]:
if pos not in dict1[nuc]:
dict1[nuc][pos] = dict2[nuc][pos]
else:
dict1[nuc][pos] += dict2[nuc][pos]
return dict1
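# merge_dicts combines the Ribo-seq and RNA-seq mismatch dictionaries by summing the
# per-position counts for each nucleotide.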
def generate_plot(tran, ambig, min_read, max_read,lite,ribocoverage,organism,readscore, noisered, primetype, minfiles,nucseq, user_hili_starts, user_hili_stops,uga_diff,file_paths_dict, short_code, color_readlen_dist, background_col,uga_col, uag_col, uaa_col,advanced,seqhili,seq_rules,title_size,
subheading_size,axis_label_size,marker_size, transcriptome, trips_uploads_location,cds_marker_size,cds_marker_colour,legend_size,ribo_linewidth, secondary_readscore,pcr,mismatches, hili_start, hili_stop):
if lite == "n" and ribocoverage == True:
return_str = "Error: Cannot display Ribo-Seq Coverage when 'Line Graph' is turned off"
return return_str
labels = ["Frame 1 profiles","Frame 2 profiles","Frame 3 profiles","RNA", "Exon Junctions"]
start_visible=[True, True, True, True, True]
if mismatches == True:
labels.append("Mismatches A")
labels.append("Mismatches T")
labels.append("Mismatches G")
labels.append("Mismatches C")
start_visible.append(False)
start_visible.append(False)
start_visible.append(False)
start_visible.append(False)
start_visible.append(True)
labels.append("CDS markers")
# start_visible is a list of booleans deciding whether each interactive legend box starts filled in; it needs to be the same length as labels
stop_codons = ["TAG","TAA","TGA"]
frame_orfs = {1:[],2:[],3:[]}
connection = sqlite3.connect('{}/trips.sqlite'.format(config.SCRIPT_LOC))
connection.text_factory = str
cursor = connection.cursor()
cursor.execute("SELECT owner FROM organisms WHERE organism_name = '{}' and transcriptome_list = '{}';".format(organism, transcriptome))
owner = (cursor.fetchone())[0]
if owner == 1:
if os.path.isfile("{0}/{1}/{2}/{2}.{3}.sqlite".format(config.SCRIPT_LOC, config.ANNOTATION_DIR,organism,transcriptome)):
transhelve = sqlite3.connect("{0}/{1}/{2}/{2}.{3}.sqlite".format(config.SCRIPT_LOC, config.ANNOTATION_DIR,organism,transcriptome))
else:
return_str = "Cannot find annotation file {}.{}.sqlite".format(organism,transcriptome)
return return_str
else:
transhelve = sqlite3.connect("{0}transcriptomes/{1}/{2}/{3}/{2}_{3}.sqlite".format(trips_uploads_location,owner,organism,transcriptome))
connection.close()
cursor = transhelve.cursor()
cursor.execute("SELECT * from transcripts WHERE transcript = '{}'".format(tran))
result = cursor.fetchone()
traninfo = {"transcript":result[0] , "gene":result[1], "length":result[2] , "cds_start":result[3] , "cds_stop":result[4] , "seq":result[5] ,
"strand":result[6], "stop_list":result[7].split(","),"start_list":result[8].split(","), "exon_junctions":result[9].split(","),
"tran_type":result[10], "principal":result[11]}
try:
traninfo["stop_list"] = [int(x) for x in traninfo["stop_list"]]
except:
traninfo["stop_list"] = []
try:
traninfo["start_list"] = [int(x) for x in traninfo["start_list"]]
except:
traninfo["start_list"] = []
if str(traninfo["exon_junctions"][0]) != "":
traninfo["exon_junctions"] = [int(x) for x in traninfo["exon_junctions"]]
else:
traninfo["exon_junctions"] = []
all_cds_regions = []
# Check if the 'coding_regions' table exists
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='coding_regions';")
result = cursor.fetchone()
if result != None:
cursor.execute("SELECT * from coding_regions WHERE transcript = '{}'".format(tran))
result = cursor.fetchall()
for row in result:
all_cds_regions.append((row[1],row[2]))
transhelve.close()
gene = traninfo["gene"]
tranlen = traninfo["length"]
cds_start = traninfo["cds_start"]
cds_stop = traninfo["cds_stop"]
if cds_start == "NULL" or cds_start == None:
cds_start = 0
if cds_stop == "NULL" or cds_stop == None:
cds_stop = 0
all_starts = traninfo["start_list"]
all_stops = {"TAG":[],"TAA":[],"TGA":[]}
exon_junctions = traninfo["exon_junctions"]
seq = traninfo["seq"].upper()
for i in range(0,len(seq)):
if seq[i:i+3] in stop_codons:
all_stops[seq[i:i+3]].append(i+1)
# Error occurs if one of the frames is empty for any given start/stop, so we initialise with -5 as this won't be seen by user and will prevent the error
start_stop_dict = {1:{"starts":[-5], "stops":{"TGA":[-5],"TAG":[-5],"TAA":[-5]}},
2:{"starts":[-5], "stops":{"TGA":[-5],"TAG":[-5],"TAA":[-5]}},
3:{"starts":[-5], "stops":{"TGA":[-5],"TAG":[-5],"TAA":[-5]}}}
for start in all_starts:
rem = ((start-1)%3)+1
start_stop_dict[rem]["starts"].append(start)
for stop in all_stops:
for stop_pos in all_stops[stop]:
rem = ((stop_pos-1)%3)+1
start_stop_dict[rem]["stops"][stop].append(stop_pos)
#find all open reading frames
for frame in [1,2,3]:
for start in start_stop_dict[frame]["starts"]:
best_stop_pos = 10000000
for stop in start_stop_dict[frame]["stops"]:
for stop_pos in start_stop_dict[frame]["stops"][stop]:
if stop_pos > start and stop_pos < best_stop_pos:
best_stop_pos = stop_pos
if best_stop_pos != 10000000:
frame_orfs[frame].append((start, best_stop_pos))
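# For every start codon the nearest downstream in-frame stop is chosen; the 10000000
# sentinel means "no stop found", in which case the start is not recorded as an ORF.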
#self.update_state(state='PROGRESS',meta={'current': 100, 'total': 100,'status': "Fetching RNA-Seq Reads"})
all_rna_reads, rna_seqvar_dict = get_reads(ambig, min_read, max_read, tran, file_paths_dict,tranlen,True, organism, False,noisered, primetype,"rnaseq",readscore,pcr,get_mismatches=mismatches)
#self.update_state(state='PROGRESS',meta={'current': 100, 'total': 100,'status': "Fetching Ribo-Seq Reads"})
all_subcodon_reads,ribo_seqvar_dict = get_reads(ambig, min_read, max_read, tran, file_paths_dict,tranlen,ribocoverage, organism, True,noisered, primetype,"riboseq",readscore,secondary_readscore,pcr,get_mismatches=mismatches)
print ("ribo_seqvar_dict", ribo_seqvar_dict)
seq_var_dict = merge_dicts(ribo_seqvar_dict, rna_seqvar_dict)
try:
rnamax = max(all_rna_reads.values())
except:
rnamax = 0
try:
subcodonmax = max(all_subcodon_reads.values())
except:
subcodonmax = 0
y_max = max(1,rnamax, subcodonmax)*1.1
fig = plt.figure(figsize=(13,8))
ax_main = plt.subplot2grid((30,1), (0,0),rowspan=22)
ax_main.spines['bottom'].set_visible(False)
for s in ['bottom', 'left','top','right']:
ax_main.spines[s].set_linewidth(15)
ax_main.spines[s].set_color("red")
alt_seq_type_vars = []
# Plot any alternative sequence types if there are any
for seq_type in file_paths_dict:
if seq_type != "riboseq" and seq_type != "rnaseq":
if file_paths_dict[seq_type] == {}:
continue
if seq_rules[seq_type]["frame_breakdown"] == 1:
frame_breakdown = True
else:
frame_breakdown = False
alt_sequence_reads,empty_seqvar_dict = get_reads(ambig, min_read, max_read, tran, file_paths_dict,tranlen,True, organism, frame_breakdown,noisered, primetype,seq_type,readscore)
if frame_breakdown == False:
alt_seq_plot = ax_main.plot(alt_sequence_reads.keys(), alt_sequence_reads.values(), alpha=1, label = seq_type, zorder=2, color='#5c5c5c', linewidth=2)
labels.append(seq_type)
start_visible.append(True)
alt_seq_type_vars.append(alt_seq_plot)
else:
alt_frame_counts = {0: collections.OrderedDict(), 1: collections.OrderedDict(), 2: collections.OrderedDict()}
for key in alt_sequence_reads:
start = key
rem = start % 3
if rem == 1: # frame 1
frame = 2
elif rem == 2: # frame 2
frame = 0
elif rem == 0: # frame 3
frame = 1
alt_frame_counts[frame][key] = alt_sequence_reads[key]
frame0_altseqplot = ax_main.plot(alt_frame_counts[0].keys(), alt_frame_counts[0].values(), alpha=0.75, label = seq_type+"frame0", zorder=2, color= "#FF4A45", linewidth=2)
frame1_altseqplot = ax_main.plot(alt_frame_counts[1].keys(), alt_frame_counts[1].values(), alpha=0.75, label = seq_type+"frame1", zorder=2, color= "#64FC44", linewidth=2)
frame2_altseqplot = ax_main.plot(alt_frame_counts[2].keys(), alt_frame_counts[2].values(), alpha=0.75, label = seq_type+"frame2*", zorder=2, color= "#5687F9", linewidth=2)
labels.append(seq_type+"frame 1")
labels.append(seq_type+"frame 2")
labels.append(seq_type+"frame 3")
start_visible.append(True)
start_visible.append(True)
start_visible.append(True)
alt_seq_type_vars.append(frame0_altseqplot)
alt_seq_type_vars.append(frame1_altseqplot)
alt_seq_type_vars.append(frame2_altseqplot)
if max(alt_sequence_reads.values()) > y_max:
y_max = max(alt_sequence_reads.values())
label = 'Reads'
ax_main.set_ylabel(label, fontsize=axis_label_size, labelpad=30)
label = 'Position (nucleotides)'
ax_main.set_xlabel(label, fontsize=axis_label_size,labelpad=-10)
ax_main.set_ylim(0, y_max)
if lite == "n":
rna_bars = ax_main.bar(all_rna_reads.keys(), all_rna_reads.values(), alpha=1, label = labels, zorder=1,color='lightgray', linewidth=0, width=1)
else:
rna_bars = ax_main.plot(all_rna_reads.keys(), all_rna_reads.values(), alpha=1, label = labels, zorder=1,color='#a7adb7', linewidth=4)
cds_markers = ax_main.plot((cds_start,cds_start), (0, y_max*0.97), color=cds_marker_colour,linestyle = 'solid', linewidth=cds_marker_size)
ax_main.text(cds_start,y_max*0.97,"CDS start",fontsize=18,color="black",ha="center")
#ax_main.annotate('axes fraction',xy=(3, 1), xycoords='data',xytext=(0.8, 0.95), textcoords='axes fraction',arrowprops=dict(facecolor='black', shrink=0.05),horizontalalignment='right', verticalalignment='top')
#trans = blended_transform_factory(ax_main.transData, ax_main.transAxes)
#ax_main.annotate('CDS RELATIVE START',(100,100),transform=trans)
#tform = blended_transform_factory(ax_main.transData, ax_main.transAxes)
#r=10
#ax_main.text(cds_start, 0.9, "CDS START OR WHATEVER", fontsize='xx-large', color='r', transform=tform)
cds_markers += ax_main.plot((cds_stop+1,cds_stop+1), (0, y_max*0.97), color=cds_marker_colour,linestyle = 'solid', linewidth=cds_marker_size)
ax_main.text(cds_stop,y_max*0.97,"CDS stop",fontsize=18,color="black",ha="center")
ax_cds = plt.subplot2grid((31,1), (26,0),rowspan=1,sharex=ax_main)
ax_cds.set_facecolor("white")
ax_cds.set_ylabel('Merged CDS', labelpad=4, verticalalignment='center',horizontalalignment="right",rotation="horizontal",color="black",fontsize=(axis_label_size/1.5))
ax_f1 = plt.subplot2grid((31,1), (27,0),rowspan=1,sharex=ax_main)
ax_f1.set_facecolor(color_dict['frames'][0])
ax_f2 = plt.subplot2grid((31,1), (28,0),rowspan=1,sharex=ax_main)
ax_f2.set_facecolor(color_dict['frames'][1])
ax_f3 = plt.subplot2grid((31,1), (29,0),rowspan=1,sharex=ax_main)
ax_f3.set_facecolor(color_dict['frames'][2])
ax_nucseq = plt.subplot2grid((31,1), (30,0),rowspan=1,sharex=ax_main)
ax_nucseq.set_xlabel('Transcript: {} Length: {} nt'.format(tran, tranlen), fontsize=subheading_size)
for tup in all_cds_regions:
ax_cds.fill_between([tup[0],tup[1]], [1, 1],zorder=0, alpha=1, color="#001285")
#plot a dummy exon junction at position -1, needed in case there are no exon junctions; this won't be seen
allexons = ax_main.plot((-1,-1), (0, 1), alpha=0.01,color='black',linestyle = '-.', linewidth=2)
print ("Exon junctions", exon_junctions)
for exon in exon_junctions:
allexons += ax_main.plot((exon,exon), (0, y_max), alpha=0.95,color='black',linestyle = ':', linewidth=3)
#dictionary for each frame in which the keys are the positions and the values are the counts
frame_counts = {0: collections.OrderedDict(), 1: collections.OrderedDict(), 2: collections.OrderedDict()}
for key in all_subcodon_reads:
rem = key % 3
if rem == 1: # frame 1
frame = 0
elif rem == 2: # frame 2
frame = 1
elif rem == 0: # frame 3
frame = 2
frame_counts[frame][key] = all_subcodon_reads[key]
if lite == "n":
frame_counts[frame][key+1] = 0
frame_counts[frame][key+2] = 0
if lite == "n":
frame0subpro = ax_main.bar(frame_counts[0].keys(), frame_counts[0].values(), alpha=0.75, label = labels, zorder=2, color= "#FF4A45", edgecolor="#FF4A45", width=1, linewidth=4)
frame1subpro = ax_main.bar(frame_counts[1].keys(), frame_counts[1].values(), alpha=0.75, label = labels, zorder=2, color= "#64FC44", edgecolor="#64FC44", width=1, linewidth=4)
frame2subpro = ax_main.bar(frame_counts[2].keys(), frame_counts[2].values(), alpha=0.75, label = labels, zorder=2, color= "#5687F9", edgecolor="#5687F9", width=1, linewidth=4)
else:
frame0subpro = ax_main.plot(frame_counts[0].keys(), frame_counts[0].values(), alpha=0.75, label = labels, zorder=2, color= "#FF4A45", linewidth=ribo_linewidth)
frame1subpro = ax_main.plot(frame_counts[1].keys(), frame_counts[1].values(), alpha=0.75, label = labels, zorder=2, color= "#64FC44", linewidth=ribo_linewidth)
frame2subpro = ax_main.plot(frame_counts[2].keys(), frame_counts[2].values(), alpha=0.75, label = labels, zorder=2, color= "#5687F9", linewidth=ribo_linewidth)
if mismatches == True:
a_mismatches = ax_main.plot(seq_var_dict["A"].keys(), seq_var_dict["A"].values(),alpha=0.01, label = labels, zorder=2, color= "purple", linewidth=2)
t_mismatches = ax_main.plot(seq_var_dict["T"].keys(), seq_var_dict["T"].values(),alpha=0.01, label = labels, zorder=2, color= "yellow", linewidth=2)
g_mismatches = ax_main.plot(seq_var_dict["G"].keys(), seq_var_dict["G"].values(),alpha=0.01, label = labels, zorder=2, color= "orange", linewidth=2)
c_mismatches = ax_main.plot(seq_var_dict["C"].keys(), seq_var_dict["C"].values(),alpha=0.01, label = labels, zorder=2, color= "pink", linewidth=2)
xy = 0
if nucseq == True:
ax_nucseq.set_facecolor(background_col)
mrnaseq = seq.replace("T","U")
color_list = ["#FF4A45","#64FC44","#5687F9"]
char_frame = 0
for char in mrnaseq:
ax_nucseq.text((xy+1)-0.1,0.2,mrnaseq[xy],fontsize=20,color=color_list[char_frame%3])
xy += 1
char_frame += 1
# If the user passed a list of sequences to highlight, find and plot them here.
if seqhili != ['']:
near_cog_starts,signalhtml = get_user_defined_seqs(seq, seqhili)
for slip in near_cog_starts[0]:
try:
hili_sequences += ax_f1.plot((slip, slip),(0,0.5), alpha=1, label = labels, zorder=4,color='black', linewidth=5)
except Exception as e:
hili_sequences = ax_f1.plot((slip, slip),(0,0.5), alpha=1, label = labels, zorder=4, color='black', linewidth=5)
for slip in near_cog_starts[1]:
try:
hili_sequences += ax_f2.plot((slip, slip),(0,0.5), alpha=1, label = labels, zorder=4,color='black', linewidth=5)
except:
hili_sequences = ax_f2.plot((slip, slip),(0,0.5), alpha=1, label = labels, zorder=4,color='black',linewidth=5)
for slip in near_cog_starts[2]:
try:
hili_sequences += ax_f3.plot((slip, slip),(0,0.5), alpha=1, label = labels, zorder=4,color='black', linewidth=5)
except:
hili_sequences = ax_f3.plot((slip, slip),(0,0.5), alpha=1, label = labels, zorder=4,color='black', linewidth=5)
#Plot sequence identifiers which will create a popup telling the user what the subsequence is (useful if they have passed multiple subsequences)
frame1_subsequences = ax_f1.plot(near_cog_starts[0], [0.25]*len(near_cog_starts[0]), 'o', color='b',mec='k', ms=12, mew=1, alpha=0, zorder=4)
frame2_subsequences = ax_f2.plot(near_cog_starts[1], [0.25]*len(near_cog_starts[1]), 'o', color='b',mec='k', ms=12, mew=1, alpha=0, zorder=4)
frame3_subsequences = ax_f3.plot(near_cog_starts[2], [0.25]*len(near_cog_starts[2]), 'o', color='b',mec='k', ms=12, mew=1, alpha=0, zorder=4)
#Attach the labels to the subsequences plotted above
signaltooltip1 = PointHTMLTooltip(frame1_subsequences[0], signalhtml[0], voffset=10, hoffset=10, css=point_tooltip_css)
signaltooltip2 = PointHTMLTooltip(frame2_subsequences[0], signalhtml[1], voffset=10, hoffset=10, css=point_tooltip_css)
signaltooltip3 = PointHTMLTooltip(frame3_subsequences[0], signalhtml[2], voffset=10, hoffset=10, css=point_tooltip_css)
for axisname in (ax_f1, ax_f2, ax_f3,ax_nucseq,ax_cds):
axisname.tick_params(top=False, bottom=False, labelleft=False, labelright=False, labelbottom=False)
for label in ax_main.xaxis.get_majorticklabels():
label.set_fontsize(36)
for axis, frame in ((ax_f1, 1), (ax_f2, 2), (ax_f3, 3)):
axis.set_xlim(1, tranlen)
starts = [(item, 1) for item in start_stop_dict[frame]['starts']]
uag_stops = [(item, 1) for item in start_stop_dict[frame]['stops']['TAG']]
uaa_stops = [(item, 1) for item in start_stop_dict[frame]['stops']['TAA']]
uga_stops = [(item, 1) for item in start_stop_dict[frame]['stops']['TGA']]
#Plot start positions
axis.broken_barh(starts, (0.30, 1),color="white", zorder=2,linewidth=7)
#Plot stop positions
axis.broken_barh(uag_stops, (0, 1), color=uag_col, zorder=2, linewidth=4)
axis.broken_barh(uaa_stops, (0, 1), color=uaa_col, zorder=2, linewidth=4)
axis.broken_barh(uga_stops, (0, 1), color=uga_col, zorder=2, linewidth=4)
axis.set_ylim(0, 1)
axis.set_ylabel('Frame {}'.format(frame), labelpad=4, verticalalignment='center',horizontalalignment="right",rotation="horizontal",color="black",fontsize=(axis_label_size/1.5))
title_str = '{} ({})'.format(gene,short_code)
plt.title(title_str, fontsize=50,y=38)
line_collections = [frame0subpro, frame1subpro, frame2subpro, rna_bars, allexons]
if mismatches == True:
line_collections.append(a_mismatches)
line_collections.append(t_mismatches)
line_collections.append(g_mismatches)
line_collections.append(c_mismatches)
line_collections.append(cds_markers)
if not (hili_start == 0 and hili_stop == 0):
hili_start = int(hili_start)
hili_stop = int(hili_stop)
hili = ax_main.fill_between([hili_start,hili_stop], [y_max, y_max],zorder=0, alpha=0.75, color="#fffbaf")
labels.append("Highligted region")
start_visible.append(True)
line_collections.append(hili)
for alt_plot in alt_seq_type_vars:
line_collections.append(alt_plot)
if 'hili_sequences' in locals():
labels.append("Highligted sequences")
start_visible.append(True)
line_collections.append(hili_sequences)
if user_hili_starts != [] and user_hili_stops != []:
for i in range(0,len(user_hili_starts)):
user_hili_start = int(user_hili_starts[i])
user_hili_stop = int(user_hili_stops[i])
try:
hili += ax_main.fill_between([user_hili_start,user_hili_stop],[y_max, y_max], alpha=0.75,color="#fffbaf")
except:
hili = ax_main.fill_between([user_hili_start,user_hili_stop],[y_max, y_max], alpha=0.75,color="#fffbaf")
labels.append("Highligter")
start_visible.append(True)
line_collections.append(hili)
leg_offset = (legend_size-17)*5
if leg_offset <0:
leg_offset = 0
ilp = InteractiveLegendPlugin(line_collections, labels, alpha_unsel=0.01,alpha_sel=1,start_visible=start_visible,xoffset=50)
htmllabels = {1:[],2:[],3:[]}
all_start_points = {1:[],2:[],3:[]}
try:
con_scores = SqliteDict("{0}/{1}/homo_sapiens/score_dict.sqlite".format(config.SCRIPT_LOC, config.ANNOTATION_DIR))
except Exception as e:
con_scores = []
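#for every ORF in each frame, compute in-frame/out-of-frame ribo-seq counts, RNA coverage, translation efficiency,
#length and an AUG context score (looked up from score_dict.sqlite); these fill the per-start-codon HTML tooltips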
for frame in [1,2,3]:
orf_list = frame_orfs[frame]
for tup in orf_list:
orf_ribo = 0.0
outframe_ribo = 0.0
orf_rna = 0.0001
start = tup[0]
try:
context = (seq[start-7:start+4].upper()).replace("T","U")
except Exception as e:
con_score = "?"
if len(context) != 11 or context[6:9] != "AUG":
con_score = "?"
else:
try:
con_score = con_scores[context.upper()]
except Exception as e:
con_score = "?"
all_start_points[frame].append(start-1)
stop = tup[1]
other_ribo = 0.0
otherother_ribo = 0.0
for i in range(start+2, stop,3):
for subframe in [0,1,2]:
if i in frame_counts[subframe]:
orf_ribo += frame_counts[subframe][i]
for i in range(start, stop,3):
for subframe in [0,1,2]:
if i in frame_counts[subframe]:
outframe_ribo += frame_counts[subframe][i]
for i in range(start+1, stop,3):
for subframe in [0,1,2]:
if i in frame_counts[subframe]:
outframe_ribo += frame_counts[subframe][i]
for i in range(start, stop+1):
if i in all_rna_reads:
orf_rna += all_rna_reads[i]
orf_te = float(orf_ribo)/float(orf_rna)
orf_len = int(stop-start)
try:
in_out_ratio = orf_ribo/outframe_ribo
except:
in_out_ratio = "Null"
datadict = {'inframe ribo': [orf_ribo],
'outframe ribo':[outframe_ribo],
'in/out ratio':[in_out_ratio],
'rna': [orf_rna],
'te': [orf_te],
'len': [orf_len],
'context_score':[str(con_score)+"/150"]}
df = pd.DataFrame(datadict, columns=(["inframe ribo", "outframe ribo", "in/out ratio","rna","te","len","context_score"]))
label = df.iloc[[0], :].T
label.columns = ["Start pos: {}".format(start-1)]
htmllabels[frame].append(str(label.to_html()))
points1 =ax_f1.plot(all_start_points[1], [0.75]*len(all_start_points[1]), 'o', color='b',mec='k', ms=13, mew=1, alpha=0, zorder=3)
points2 =ax_f2.plot(all_start_points[2], [0.75]*len(all_start_points[2]), 'o', color='b',mec='k', ms=13, mew=1, alpha=0, zorder=3)
points3 =ax_f3.plot(all_start_points[3], [0.75]*len(all_start_points[3]), 'o', color='b',mec='k', ms=13, mew=1, alpha=0, zorder=3)
tooltip1 = PointHTMLTooltip(points1[0], htmllabels[1],voffset=10, hoffset=10, css=point_tooltip_css)
tooltip2 = PointHTMLTooltip(points2[0], htmllabels[2],voffset=10, hoffset=10, css=point_tooltip_css)
tooltip3 = PointHTMLTooltip(points3[0], htmllabels[3],voffset=10, hoffset=10, css=point_tooltip_css)
#This works but hides axis labels for all axes
#ax_f1.axes.xaxis.set_ticklabels([])
#ax_f1.axes.yaxis.set_ticklabels([])
#ax_f2.axes.yaxis.set_ticklabels([])
#ax_f3.axes.yaxis.set_ticklabels([])
#ax_cds.axes.yaxis.set_ticklabels([])
#ax_nucseq.axes.yaxis.set_ticklabels([])
#ax_f1.get_xaxis().set_visible(False)
#ax_f1.get_yaxis().set_visible(False)
for key, spine in ax_f1.spines.items():
spine.set_visible(False)
#ax_f1.tick_params(which="both", bottom=False, color="lightgray")
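#build the per-nucleotide CSV (position, base, per-frame ribo-seq counts, RNA-seq count) served by the DownloadProfile button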
returnstr = "Position,Sequence,Frame 1,Frame 2,Frame 3,RNA-Seq\n"
for i in range(0,len(seq)):
f1_count = 0
f2_count = 0
f3_count = 0
rna_count = 0
if i+1 in frame_counts[0]:
f1_count = frame_counts[0][i+1]
elif i+1 in frame_counts[1]:
f2_count = frame_counts[1][i+1]
elif i+1 in frame_counts[2]:
f3_count = frame_counts[2][i+1]
if i+1 in all_rna_reads:
rna_count = all_rna_reads[i+1]
returnstr += "{},{},{},{},{},{}\n".format(i+1,seq[i],f1_count, f2_count, f3_count,rna_count)
if seqhili == ['']:
plugins.connect(fig, ilp, tooltip1, tooltip2, tooltip3, TopToolbar(yoffset=-50,xoffset=-300),DownloadProfile(returnstr=returnstr),DownloadPNG(returnstr=title_str))
else:
plugins.connect(fig, ilp, tooltip1, tooltip2, tooltip3, signaltooltip1,signaltooltip2,signaltooltip3, TopToolbar(yoffset=-50,xoffset=-300),DownloadProfile(returnstr=returnstr),DownloadPNG(returnstr=title_str))
ax_main.set_facecolor("white")
# This changes the size of the tick markers; works on both Firefox and Chrome.
ax_main.tick_params('both', labelsize=marker_size)
ax_main.xaxis.set_major_locator(plt.MaxNLocator(3))
ax_main.yaxis.set_major_locator(plt.MaxNLocator(3))
#ax_main.grid(False, color="white", linewidth=30,linestyle="solid")
#Without this style tag the marker sizes will appear correct in the browser but revert to the original size when downloaded as a PNG
graph = "<style>.mpld3-xaxis {{font-size: {0}px;}} .mpld3-yaxis {{font-size: {0}px;}}</style>".format(marker_size)
graph += "<div style='padding-left: 55px;padding-top: 22px;'> <a href='https://trips.ucc.ie/short/{0}' target='_blank' ><button class='button centerbutton' type='submit'><b>Direct link to this plot</b></button></a> </div>".format(short_code)
graph += mpld3.fig_to_html(fig)
return graph
| 47.537753 | 297 | 0.706737 | 4,043 | 25,813 | 4.320801 | 0.141974 | 0.017173 | 0.017517 | 0.010304 | 0.39945 | 0.343866 | 0.289713 | 0.245349 | 0.212262 | 0.178488 | 0 | 0.038988 | 0.125596 | 25,813 | 542 | 298 | 47.625461 | 0.73497 | 0.093054 | 0 | 0.193617 | 0 | 0.004255 | 0.111339 | 0.009542 | 0 | 0 | 0 | 0 | 0 | 1 | 0.006383 | false | 0 | 0.034043 | 0 | 0.051064 | 0.006383 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a260ec0eb82fb9b63378f28141a3588d88fe9d86 | 1,061 | py | Python | deepclr/utils/parsing.py | mhorn11/deepclr | 6ee21963a402776851950a51709eef849ff96b5f | [
"Apache-2.0"
] | 8 | 2020-12-01T21:22:01.000Z | 2022-03-13T13:11:56.000Z | deepclr/utils/parsing.py | mhorn11/deepclr | 6ee21963a402776851950a51709eef849ff96b5f | [
"Apache-2.0"
] | null | null | null | deepclr/utils/parsing.py | mhorn11/deepclr | 6ee21963a402776851950a51709eef849ff96b5f | [
"Apache-2.0"
] | null | null | null | import argparse
from typing import Any, Optional, Sequence, Union
class ParseEnum(argparse.Action):
"""Argparse action for automatic enumeration parsing."""
def __init__(self, option_strings: Sequence[str], enum_type: Any, *args: Any, **kwargs: Any):
self._enum_type = enum_type
kwargs['choices'] = [f.name for f in list(enum_type)]
if 'default' not in kwargs:
kwargs['default'] = None
super(ParseEnum, self).__init__(option_strings, *args, **kwargs)
def __call__(self, parser: argparse.ArgumentParser, namespace: argparse.Namespace,
values: Union[str, Sequence[Any], None], option_string: Optional[str] = None) -> None:
if isinstance(values, (list, tuple)):
value = str(values[0])
else:
value = str(values)
try:
enum_value = self._enum_type[value]
setattr(namespace, self.dest, enum_value)
except KeyError:
parser.error('Input {} is not a field of enum {}'.format(values, self._enum_type))
| 39.296296 | 103 | 0.63148 | 127 | 1,061 | 5.070866 | 0.456693 | 0.074534 | 0.055901 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001256 | 0.249764 | 1,061 | 26 | 104 | 40.807692 | 0.807789 | 0.047125 | 0 | 0 | 0 | 0 | 0.054726 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.1 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a261f36a86e821b182db55f091a09a2619277289 | 1,025 | py | Python | camera.py | Shock3udt/pi-camera-stream-flask | 62e404ebc67bfced9d8238ec2b35e878aff1d365 | [
"MIT"
] | null | null | null | camera.py | Shock3udt/pi-camera-stream-flask | 62e404ebc67bfced9d8238ec2b35e878aff1d365 | [
"MIT"
] | null | null | null | camera.py | Shock3udt/pi-camera-stream-flask | 62e404ebc67bfced9d8238ec2b35e878aff1d365 | [
"MIT"
] | null | null | null | #Modified by smartbuilds.io
#Date: 27.09.20
#Desc: VideoCamera class that streams frames from the Raspberry Pi camera via PiVideoStream.
import cv2
from imutils.video.pivideostream import PiVideoStream
import imutils
import time
import numpy as np
import threading
class VideoCamera(object):
def __init__(self, flip = False):
self.vs = PiVideoStream().start()
self.flip = flip
self.buffer = None
self.lock = threading.Lock()
time.sleep(2.0)
def _thread(self):
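#Background loop: keep the most recent camera frame cached in self.buffer, guarded by the lock.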
while True:
frame = self.get()
with self.lock:
self.buffer = frame
time.sleep(.1)
def __del__(self):
self.vs.stop()
def flip_if_needed(self, frame):
if self.flip:
return np.flip(frame, 0)
return frame
def get(self):
frame = self.flip_if_needed(self.vs.read())
ret, jpeg = cv2.imencode('.jpg', frame)
return jpeg.tobytes()
def get_frame(self):
frame = None
with self.lock:
frame = self.buffer
return frame | 24.404762 | 53 | 0.587317 | 129 | 1,025 | 4.55814 | 0.44186 | 0.054422 | 0.040816 | 0.054422 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016997 | 0.31122 | 1,025 | 42 | 54 | 24.404762 | 0.815864 | 0.065366 | 0 | 0.117647 | 0 | 0 | 0.004184 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0.176471 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a262f984f5b11108aed02f01a0334b4ba97354f9 | 3,159 | py | Python | hangman.py | doubleu06/My-Python-programs | 24e61f32a70217bd56737bfc1e4c7f8a45f814f4 | [
"MIT"
] | 1 | 2022-02-14T17:56:47.000Z | 2022-02-14T17:56:47.000Z | hangman.py | doubleu06/My-Python-programs | 24e61f32a70217bd56737bfc1e4c7f8a45f814f4 | [
"MIT"
] | null | null | null | hangman.py | doubleu06/My-Python-programs | 24e61f32a70217bd56737bfc1e4c7f8a45f814f4 | [
"MIT"
] | null | null | null | import random
fruitsList = ['apple','kiwi','strawberry','pomegranate','starfruit','walnut','banana','raisin','guava','date','coconut','papaya']
vegetableList = ['cauliflower','brinjal','drumstick','potato','bittergourd','cabbage','onion','chilli','tomato','capsicum']
objectList = ['chair','table','scissors','book','pencil','diary','bottle','tubelight','bed']
appliancesList = ['oven','computer','television','iron','washingmachine','mixer','sewingmachine','pressurecooker']
sportsList = ['cricket','polo','tabletennis','lawntennis','horseracing','football','basketball','baseball','paragliding','judo','boxing','swimming']
animalList = ['lion','tiger','giraffe','alligator','crocodile','rabbit','cheetah','leopard','jackal','fox','bison','snake','elephant']
birdList = ['sparrow','peacock','cuckoo','cock','ostrich','kiwi','hen','crow','maina','flamingo','swan','duck','eagle','woodpecker']
aquaticAnimalList = ['octopus','shark','whale','dolphin','turtle','starfish','eel','seahorse','piranha']
treeList = ['oak','sal','teak','banyan','ashok','gulmohar','willow','peepal']
countryList = ['india','pakistan','america','nepal','indonesia','bangladesh','japan','china','slovakia','brazil','canada','maldives']
wordLists = [fruitsList,vegetableList,objectList,appliancesList,sportsList,animalList,birdList,aquaticAnimalList,treeList,countryList]
wordList = random.choice(wordLists)
word = random.choice(wordList)
word = word.replace('\n', '')
guessword = ''
for i in word:
guessword += '*'
print('the game starts now.......\n\n')
if wordList == fruitsList:
print('this is a fruit.')
elif wordList == vegetableList:
print('this is a vegetable.')
elif wordList == objectList:
print('this is an object.')
elif wordList == sportsList:
print('this is a sport.')
elif wordList == animalList:
print('this is an animal.')
elif wordList == birdList:
print('this is a bird.')
elif wordList == aquaticAnimalList:
print('this is an aquatic animal.')
elif wordList == treeList:
print('this is a tree.')
elif wordList == countryList:
print('this is a country.')
else:
print('this is an appliance.')
print('your word is : ',guessword)
n = 0
guessed = {'',}
wrongCount = 4
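# main loop: the player gets 4 wrong guesses; a correct guess reveals every matching position in guessword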
while wrongCount > 0 and guessword != word:
nextGuess = input('Guess a character : ')
if nextGuess not in word:
print(nextGuess, 'not in word.')
wrongCount -= 1
print(f'you have {wrongCount} attempt(s) left.')
elif nextGuess == '':
print('you have not guessed anything.')
elif nextGuess in guessed:
print('you have already guessed the character.')
else:
print(nextGuess, 'is in word.')
con = []
count = 0
for i in word:
if i == nextGuess:
con.append(count)
count += 1
guessList =[]
for i in guessword:
guessList.append(i)
for j in con:
guessList[j] = nextGuess
guessword = ''.join(guessList)
print(guessword)
guessed.add(nextGuess)
n += 1
print('the word was', word)
| 38.52439 | 149 | 0.630896 | 344 | 3,159 | 5.793605 | 0.517442 | 0.045158 | 0.055193 | 0.036126 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002725 | 0.186768 | 3,159 | 81 | 150 | 39 | 0.773063 | 0 | 0 | 0.056338 | 0 | 0 | 0.359141 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.014085 | 0 | 0.014085 | 0.267606 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2651eb27787a23dbe3a1433c8cfe25b0c866166 | 43,448 | py | Python | pose_model_estimator.py | wagnew3/Amodal-3D-Reconstruction-for-Robotic-Manipulationvia-Stability-and-Connectivity--Release | f55c6b0fac44d9d749e7804d99169a39d30c2111 | [
"MIT"
] | null | null | null | pose_model_estimator.py | wagnew3/Amodal-3D-Reconstruction-for-Robotic-Manipulationvia-Stability-and-Connectivity--Release | f55c6b0fac44d9d749e7804d99169a39d30c2111 | [
"MIT"
] | null | null | null | pose_model_estimator.py | wagnew3/Amodal-3D-Reconstruction-for-Robotic-Manipulationvia-Stability-and-Connectivity--Release | f55c6b0fac44d9d749e7804d99169a39d30c2111 | [
"MIT"
] | null | null | null | import shutil
import util.util as util_
import os
import cv2
import open3d as o3d
import pickle
import numpy as np
from scipy.optimize import linear_sum_assignment
import trimesh
from skimage import measure
import scipy
from sklearn.neighbors import KDTree
from scipy.ndimage.measurements import label
import data_augmentation
from genre.voxelization import voxel
import traceback
from genre.util import util_sph
from scipy import stats
from dm_control.mujoco.engine import Camera
from trajopt.mujoco_utils import add_object_to_mujoco, remove_objects_from_mujoco, get_mesh_list, compute_mujoco_int_transform
mesh_level=0.5
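#symmetric chamfer distance between two point clouds: mean nearest-neighbour distance from pcd_1 to pcd_2
#plus the mean from pcd_2 to pcd_1, using KD-tree lookups in both directions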
def chamfer_distance(pcd_1, pcd_2):
pcd_tree = KDTree(pcd_2)
nearest_distances_1, _=pcd_tree.query(pcd_1)
pcd_tree = KDTree(pcd_1)
nearest_distances_2, _=pcd_tree.query(pcd_2)
return np.sum(nearest_distances_1)/pcd_1.shape[0]+np.sum(nearest_distances_2)/pcd_2.shape[0]
#return outer shell of voxel shape
def hollow_dense_pointcloud(ptcld):
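#a voxel whose full 3x3x3 neighbourhood is occupied (convolution value 27) is interior and gets cleared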
conv=scipy.ndimage.convolve(ptcld, np.ones((3,3,3)))
ptcld=np.where(conv<27, ptcld, 0)
return ptcld
def compute_xyz(depth_img, camera_params):
""" Compute ordered point cloud from depth image and camera parameters
@param depth_img: a [H x W] numpy array of depth values in meters
@param camera_params: a dictionary with parameters of the camera used
"""
# Compute focal length from camera parameters
if 'fx' in camera_params and 'fy' in camera_params:
fx = camera_params['fx']
fy = camera_params['fy']
else: # simulated data
aspect_ratio = camera_params['img_width'] / camera_params['img_height']
e = 1 / (np.tan(np.radians(camera_params['fov']/2.)))
t = camera_params['near'] / e; b = -t
r = t * aspect_ratio; l = -r
alpha = camera_params['img_width'] / (r-l) # pixels per meter
focal_length = camera_params['near'] * alpha # focal length of virtual camera (frustum camera)
fx = focal_length; fy = focal_length
if 'x_offset' in camera_params and 'y_offset' in camera_params:
x_offset = camera_params['x_offset']
y_offset = camera_params['y_offset']
else: # simulated data
x_offset = camera_params['img_width']/2
y_offset = camera_params['img_height']/2
indices = util_.build_matrix_of_indices(camera_params['img_height'], camera_params['img_width'])
indices[..., 0] = np.flipud(indices[..., 0]) # pixel indices start at top-left corner. for these equations, it starts at bottom-left
z_e = depth_img
x_e = (indices[..., 1] - x_offset) * z_e / fx
y_e = (indices[..., 0] - y_offset) * z_e / fy
xyz_img = np.stack([x_e, y_e, z_e], axis=-1) # Shape: [H x W x 3]
return xyz_img
upsample=1
xmap = np.array([[j for i in range(int(upsample*640))] for j in range(int(upsample*480))])
ymap = np.array([[i for i in range(int(upsample*640))] for j in range(int(upsample*480))])
#make pointcloud from depth image
def make_pointcloud_all_points(depth_image):
cam_scale = 1.0
cam_cx = 320.0
cam_cy = 240.0
camera_params={'fx':579.411255, 'fy':579.411255, 'img_width':640, 'img_height': 480}
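#back-project every pixel through the pinhole model; y and z are negated (camera-frame convention used by the rest of the pipeline)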
depth_masked = depth_image.flatten()[:, np.newaxis].astype(np.float32)
xmap_masked = xmap.flatten()[:, np.newaxis].astype(np.float32)
ymap_masked = ymap.flatten()[:, np.newaxis].astype(np.float32)
pt2 = depth_masked / cam_scale
pt0 = (ymap_masked/upsample - cam_cx) * pt2 / (camera_params['fx'])
pt1 = (xmap_masked/upsample - cam_cy) * pt2 / (camera_params['fy'])
cloud = np.concatenate((pt0, -pt1, -pt2), axis=1)
return cloud
def color_code_objects(frame, state_id_to_model_pixels, display=False):
#generate object color mapping
labels=np.unique(frame)
exec_dir=os.path.dirname(os.path.realpath(__file__))
color_map_file_name=exec_dir+'/data/object_color_maps/object_color_map_size_'+str(labels.shape[0])+'.p'
if os.path.isfile(color_map_file_name):
object_color_map=pickle.load(open(color_map_file_name, "rb" ))
else:
object_color_map=glasbey.get_colors(len(state_id_to_model_pixels))
pickle.dump(object_color_map, open(color_map_file_name, "wb" ))
#create labelled image
labelled_frame=np.zeros((frame.shape[0], frame.shape[1], 3))
for label in range(labels.shape[0]):
object_pixel_positions_exact=np.argwhere(frame==label)
object_pixel_positions_exact_in_bounds=object_pixel_positions_exact.astype(int)
if len(object_pixel_positions_exact_in_bounds.shape)==2 and object_pixel_positions_exact_in_bounds.shape[0]>0 and object_pixel_positions_exact_in_bounds.shape[1]==2:
object_color=object_color_map[label]
labelled_frame[object_pixel_positions_exact_in_bounds[:, 0], object_pixel_positions_exact_in_bounds[:, 1]]=object_color
if display:
cv2.imshow('object labels', labelled_frame)
cv2.waitKey(20)
return labelled_frame
border_list = [-1, 40, 80, 120, 160, 200, 240, 280, 320, 360, 400, 440, 480, 520, 560, 600, 640, 680]
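#clamp a bounding box to the 480x640 image, round its height/width up to the next size in border_list,
#re-centre it and shift it back inside the image if needed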
def get_bbox(bbx):
if bbx[0] < 0:
bbx[0] = 0
if bbx[1] >= 480:
bbx[1] = 479
if bbx[2] < 0:
bbx[2] = 0
if bbx[3] >= 640:
bbx[3] = 639
rmin, rmax, cmin, cmax = bbx[0], bbx[1], bbx[2], bbx[3]
r_b = rmax - rmin
for tt in range(len(border_list)):
if r_b > border_list[tt] and r_b < border_list[tt + 1]:
r_b = border_list[tt + 1]
break
c_b = cmax - cmin
for tt in range(len(border_list)):
if c_b > border_list[tt] and c_b < border_list[tt + 1]:
c_b = border_list[tt + 1]
break
center = [int((rmin + rmax) / 2), int((cmin + cmax) / 2)]
rmin = center[0] - int(r_b / 2)
rmax = center[0] + int(r_b / 2)
cmin = center[1] - int(c_b / 2)
cmax = center[1] + int(c_b / 2)
if rmin < 0:
delt = -rmin
rmin = 0
rmax += delt
if cmin < 0:
delt = -cmin
cmin = 0
cmax += delt
if rmax > 480:
delt = rmax - 480
rmax = 480
rmin -= delt
if cmax > 640:
delt = cmax - 640
cmax = 640
cmin -= delt
return rmin, rmax, cmin, cmax
#transform robot meshes into current position
def make_known_meshes(known_meshes, physics, geom_names):
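#apply each geom's current rotation (geom_xmat) and translation (geom_xpos) from the physics state to a copy of its base mesh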
transformed_known_meshes=[]
for known_mesh_ind in range(len(known_meshes)):
transformed_known_mesh=known_meshes[known_mesh_ind].copy()
transform=np.eye(4)
transform[0:3,0:3]=np.reshape(physics.named.data.geom_xmat[geom_names[known_mesh_ind]],(3,3))
transformed_known_mesh.apply_transform(transform)
transform=np.eye(4)
transform[0:3,3]=physics.named.data.geom_xpos[geom_names[known_mesh_ind]]
transformed_known_mesh.apply_transform(transform)
transformed_known_meshes.append(transformed_known_mesh)
return transformed_known_meshes
#select voxel points in cube around target object, also compute table surface height
def select_points_in_cube_voxelize_sphr_proj(self, all_points, i, grid_size=128, estimate_table=False, sub_vox=0, min_z=None, unocc=None):
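#crop the point cloud to the unit cube and voxelize it at grid_size; when estimate_table is set, estimate the table
#surface as the modal z-index of points below the object, shift the voxels down so the table sits at z=0 and add a
#floor plane; then mesh the voxels with marching cubes and compute a spherical projection.
#Returns (128^3 voxel grid, spherical projection, no_points flag[, table z offset])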
low=np.array([-0.5,-0.5,-0.5])
hi=np.array([0.5,0.5,0.5])
points=all_points[np.argwhere(np.all(np.logical_and(all_points>=low, all_points<=hi), axis=1))][:,0,:]
voxels=np.zeros((grid_size,grid_size,grid_size))
inds=np.floor((points + 0.5) * grid_size).astype(int)
if sub_vox!=0:
inds[:,2]=inds[:,2]-sub_vox/(128/grid_size)
az_inds=np.argwhere(inds[:,2]>=0)
inds=inds[az_inds[:,0]]
inds=np.clip(inds, 0, grid_size-1)
if inds.shape[0]==0:
if estimate_table:
return np.zeros((128,128,128)), np.zeros((160,160)), None, 0
else:
return np.zeros((128,128,128)), np.zeros((160,160)), None
voxels[inds[:, 0], inds[:, 1], inds[:, 2]] = 1.0
if unocc is not None:
voxels=np.clip(voxels-unocc, 0, 1)
if estimate_table:
more_points=all_points[np.argwhere(np.all(np.logical_and(all_points>=np.array([-3,-3,-1]), all_points<=np.array([2,2,min_z+0.01])), axis=1))][:,0,:]
more_inds=np.floor((more_points + 0.5) * grid_size).astype(int)
a=more_inds[:,2]
max_inds=scipy.stats.mode(more_inds[:,2], axis=None)[0][0]
inds[:,2]=inds[:,2]-max_inds
az_inds=np.argwhere(inds[:,2]>=0)
inds=inds[az_inds[:,0]]
voxels=np.zeros((grid_size,grid_size,grid_size))
voxels[:,:,0]=1
voxels[inds[:, 0], inds[:, 1], inds[:, 2]] = 1.0
no_points=False
verts, faces, normals, values = measure.marching_cubes_lewiner(
voxels, spacing=(1 / grid_size, 1 / grid_size, 1 / grid_size))
mesh = trimesh.Trimesh(vertices=verts - 0.5, faces=faces, vertex_normals=normals)
trimesh.repair.fix_inversion(mesh)
if verts.shape[0]>50000:
mesh.export(f'/dev/shm/temp_mesh_conv_{i}.ply')
o3d_mesh=o3d.io.read_triangle_mesh(f'/dev/shm/temp_mesh_conv_{i}.ply')
o3d_mesh=o3d_mesh.simplify_vertex_clustering(0.05)
mesh=trimesh.Trimesh(vertices=np.asarray(o3d_mesh.vertices), faces=np.asarray(o3d_mesh.triangles), face_normals=np.asarray(o3d_mesh.triangle_normals), process=False)
os.remove(f'/dev/shm/temp_mesh_conv_{i}.ply')
proj=util_sph.proj_spherical(mesh)
if grid_size!=128:
full_voxels=np.zeros((128,128,128))
voxels=np.zeros((128,128,128))
inds=np.floor((points + 0.5) * 128).astype(int)
inds=np.clip(inds, 0, 128-1)
if sub_vox!=0:
inds[:,2]=inds[:,2]-sub_vox
az_inds=np.argwhere(inds[:,2]>=0)
inds=inds[az_inds[:,0]]
full_voxels[inds[:, 0], inds[:, 1], inds[:, 2]] = 1.0
voxels=full_voxels
if estimate_table:
return voxels, proj, no_points, max_inds
else:
return voxels, proj, no_points
class pose_model_estimator():
def __init__(self, physics, seg_send, seg_receive, recon_send, recon_receive, project_dir, mj_scene_xml, save_id, use_cuda_vox, extrusion_baseline, custom_recon_net=False, max_known_body_id=70, voxels_per_meter=256, model=None, simulate_model_quality=False, model_quality=0, quality_type='', four_channel=False):#
self.obs_cds=[]
self.past_filled_voxels=None
self.top_dir=project_dir
self.scene_xml_file=mj_scene_xml
self.save_id=save_id
self.simulate_model_quality=simulate_model_quality
self.model_quality=model_quality
self.quality_type=quality_type
self.four_channel=four_channel
self.custom_recon_net=custom_recon_net
self.seg_send=seg_send
self.seg_receive=seg_receive
self.recon_send=recon_send
self.recon_receive=recon_receive
#1: load and voxelize all known meshes
mesh_list, self.mesh_name_to_file, self.name_to_scale_dict=get_mesh_list(mj_scene_xml)[:max_known_body_id]
mesh_list=mesh_list[:69]
self.included_meshes=[]
self.geom_names=[]
self.pred_obj_meshes=[]
self.use_cuda_vox=use_cuda_vox
self.extrusion_baseline=extrusion_baseline
self.upsample=1
self.palm_mesh_verts=None
self.xmap = np.array([[j for i in range(int(self.upsample*640))] for j in range(int(self.upsample*480))])
self.geom_ids=[]
self.ymap = np.array([[i for i in range(int(self.upsample*640))] for j in range(int(self.upsample*480))])
for geom_name in model.named.data.geom_xpos.axes.row.names:
num_voxels=256
geom_id=model.model.name2id(geom_name, "geom")
self.geom_ids.append(geom_id)
if geom_id<71:
if model.model.geom_dataid[geom_id]>-1:
mesh_name=model.model.id2name(model.model.geom_dataid[geom_id], "mesh")
mesh=trimesh.load_mesh(self.mesh_name_to_file[model.model.id2name(model.model.geom_dataid[geom_id], "mesh")])
mesh_off_trans, mesh_off_rot=compute_mujoco_int_transform(self.mesh_name_to_file[model.model.id2name(model.model.geom_dataid[geom_id], "mesh")], save_id)
if geom_name=="herb/wam_1/bhand//unnamed_geom_0":
c_mesh=mesh.convex_hull
scale=2*np.amax(np.abs(c_mesh.bounds))
print('cmesh bounds', c_mesh.bounds)
scale_mat=np.eye(4)
scale_mat=scale_mat/scale
scale_mat[3,3]=1.0
s_palm_mesh_vertices=c_mesh.copy().apply_transform(scale_mat)
self.palm_mesh_verts=voxel.voxelize_model_binvox(s_palm_mesh_vertices, 32, self.save_id, binvox_add_param='-bb -.5 -.5 -.5 .5 .5 .5', use_cuda_vox=False)
a=np.argwhere(self.palm_mesh_verts)
self.palm_mesh_verts=(np.argwhere(hollow_dense_pointcloud(self.palm_mesh_verts))/32.0-0.5)*scale
self.palm_mesh_verts=self.palm_mesh_verts[np.argwhere(self.palm_mesh_verts[:,2]>0.075)][:,0,:]
self.palm_mesh_verts=self.palm_mesh_verts[np.argwhere(np.abs(self.palm_mesh_verts[:,1])<0.025)][:,0,:]
self.palm_mesh_verts=self.palm_mesh_verts[np.argwhere(np.abs(self.palm_mesh_verts[:,0])<0.02)][:,0,:]
self.palm_mesh_verts=np.matmul(mesh_off_rot.T, (self.palm_mesh_verts-mesh_off_trans).T).T
trans_mat=np.eye(4)
trans_mat[0:3, 3]=-mesh_off_trans
mesh.apply_transform(trans_mat)
trans_mat=np.eye(4)
trans_mat[0:3, 0:3]=mesh_off_rot.transpose()
mesh.apply_transform(trans_mat)
self.included_meshes.append(mesh)
self.geom_names.append(geom_name)
elif geom_id<len(mesh_list) and mesh_list[geom_id] is not None:
mesh=mesh_list[geom_id]
self.included_meshes.append(mesh)
self.geom_names.append(geom_name)
#2: transform voxel grids into scene, round into global voxel grid
#3: transform predicted voxel into global grid, comput intersection with known voxels, return non-intersecting
#4: marching cubes to convert voxels to mesh
self.camera_params={'fx':579.4112549695428, 'fy':579.4112549695428, 'img_width':640, 'img_height': 480}
self.past_poses=None
self.past_voxels_scales_translations=None
self.tracking_max_distance=0.25
make_known_meshes(self.included_meshes, physics, self.geom_names)
#remove intersections between predicted objects
def subtract_mesh_hull_no_stability_loss(self, resolve_dense_pointcloud, meshes_sub, cam_mat, translation, cam_pos, scale, inv_cm):
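#voxelize the object's dense point cloud in its local frame, drop points that lie inside (or within 5cm of) any of
#the other meshes, re-mesh the remainder with marching cubes and return its convex decomposition plus the combined mesh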
world_translation=np.matmul(cam_mat, translation)
dense_ptcld_cpy=resolve_dense_pointcloud-cam_pos-world_translation
dense_ptcld_cpy=np.round((dense_ptcld_cpy/scale+0.5)*128.0).astype(int)
voxels=np.zeros((128,128,128), dtype=int)
voxels[dense_ptcld_cpy[:, 0],dense_ptcld_cpy[:, 1],dense_ptcld_cpy[:, 2]]=1
verts, faces, normals, values = measure.marching_cubes_lewiner(
voxels, mesh_level, spacing=(1 / 128, 1 / 128, 1 / 128))
verts=(verts-0.5)*scale
verts=verts+cam_pos+world_translation
outside_mesh=np.ones(resolve_dense_pointcloud.shape[0])
for known_mesh in meshes_sub:
outside_mesh=np.logical_and(outside_mesh, 1-known_mesh.ray.contains_points(resolve_dense_pointcloud).astype(int))
pcd_tree = KDTree(known_mesh.vertices)
gt_pred_nn_dists, gt_pred_nn_inds=pcd_tree.query(resolve_dense_pointcloud)
outside_mesh=np.where(np.ndarray.flatten(gt_pred_nn_dists)<0.05, 0, outside_mesh)
dense_ptcld=resolve_dense_pointcloud[np.argwhere(outside_mesh)[:, 0]]
dense_ptcld=dense_ptcld-cam_pos-world_translation
dense_ptcld=np.round((dense_ptcld/scale+0.5)*128.0).astype(int)
voxels=np.zeros((128,128,128), dtype=int)
voxels[dense_ptcld[:, 0],dense_ptcld[:, 1],dense_ptcld[:, 2]]=1
if dense_ptcld.shape[0]>0:
verts, faces, normals, values = measure.marching_cubes_lewiner(
voxels, mesh_level, spacing=(1 / 128, 1 / 128, 1 / 128))
verts=(verts-0.5)*scale
else:
verts=np.zeros([0,3])
faces=np.zeros([0,3])
normals=np.zeros([0,3])
if verts.shape[0]>0:
mesh = trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)
trimesh.repair.fix_inversion(mesh)
mesh.visual.face_colors = [109, 95, 119, 255]
if mesh.faces.shape[0]>0 and mesh.mass>10e-6:
combined_mesh=None
decomps=trimesh.decomposition.convex_decomposition(mesh, maxNumVerticesPerCH=1024, concavity=0.0025, resolution=500000)
if not isinstance(decomps, list):
decomps=[decomps]
new_decomps=[]
num_vertices=0
for decomp in decomps:
if decomp.mass>10e-9:
new_decomps.append(decomp)
num_vertices+=decomp.vertices.shape[0]
if combined_mesh==None:
combined_mesh=decomp
else:
combined_mesh+=decomp
print('num_vertices', num_vertices)
decomps=new_decomps
return decomps, combined_mesh
else:
return None, None
else:
return None, None
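#baseline that projects occupancy straight down: every (x, y) column containing at least one filled voxel gets its bottom (z=0) voxel set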
def projection_baseline(self, pointcloud):
projected_pointcloud=np.copy(pointcloud)
occupied_z_inds=np.argwhere(np.any(pointcloud, axis=2))
projected_pointcloud[occupied_z_inds[:,0], occupied_z_inds[:,1], 0]=1
return projected_pointcloud
#remove predicted mesh intersections with the robot and table
def refine_mesh_no_stability_loss(self, cam_mat, translation, gt_mesh, cam_pos, scale, pred_voxels, inv_cm, known_meshes, refine=False):
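#transform the ground-truth mesh into the prediction's normalized frame and voxelize it to score the predicted voxels
#(chamfer distance), then convert the thresholded prediction back to world coordinates, discard points inside the
#known robot/table meshes or below z=0.3, and re-mesh the remaining voxels with marching cubes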
world_translation=np.matmul(cam_mat, translation)
transform_mesh=gt_mesh.copy()
transform=np.eye(4)
transform[:3,3]=-cam_pos-world_translation
transform_mesh.apply_transform(transform)
inv_cam_mat=np.linalg.inv(cam_mat)
transform=np.eye(4)
transform[:3, :3]=inv_cam_mat
transform_mesh.apply_transform(transform)
scale_mat=np.eye(4)
scale_mat=scale_mat/scale
scale_mat[3,3]=1.0
transform_mesh.apply_transform(scale_mat)
ground_truth_voxels=voxel.voxelize_model_binvox(transform_mesh, 128, self.save_id, binvox_add_param='-bb -.5 -.5 -.5 .5 .5 .5', use_cuda_vox=self.use_cuda_vox)
mesh_losses={}
try:
gt_points=np.argwhere(ground_truth_voxels)
pred_voxels=pred_voxels[0]
if self.simulate_model_quality:
pred_voxels, self.cd=self.change_model_quality(pred_voxels, ground_truth_voxels, scale/128.0)
thres_pred_voxels=pred_voxels>=mesh_level
thres_pred_points=np.argwhere(thres_pred_voxels)
if refine:
pcd_tree = KDTree(gt_points)
pred_gt_nn_dists, pred_gt_nn_inds=pcd_tree.query(thres_pred_points)
pred_gt_nn_dists=(pred_gt_nn_dists/128.0)*scale
pred_gt_nn_inds=pred_gt_nn_inds[:,0]
pcd_tree = KDTree(thres_pred_points)
gt_pred_nn_dists, gt_pred_nn_inds=pcd_tree.query(gt_points)
gt_pred_nn_dists=(gt_pred_nn_dists/128.0)*scale
gt_pred_nn_inds=gt_pred_nn_inds[:,0]
pg_loss=np.sum(pred_gt_nn_dists)/thres_pred_points.shape[0]
gp_loss=np.sum(gt_pred_nn_dists)/gt_points.shape[0]
mesh_losses['chamfer']=pg_loss+gp_loss
except:
print('gt voxels projection error!')
traceback.print_exc()
thres_pred_points=np.argwhere(pred_voxels>=mesh_level)
dense_ptcld=(thres_pred_points/128.0-0.5)*scale
dense_ptcld=dense_ptcld+world_translation+cam_pos
outside_mesh=np.ones(dense_ptcld.shape[0])
for known_mesh in known_meshes:
outside_mesh=np.logical_and(outside_mesh, 1-known_mesh.ray.contains_points(dense_ptcld).astype(int))
dense_ptcld=dense_ptcld[np.argwhere(outside_mesh)[:, 0]]
dense_ptcld=dense_ptcld[np.argwhere(dense_ptcld[:,2]>=0.3)[:, 0]]
resolve_dense_ptcld=np.copy(dense_ptcld)
dense_ptcld=dense_ptcld-cam_pos-world_translation
dense_ptcld=np.round((dense_ptcld/scale+0.5)*128.0).astype(int)
voxels=np.zeros((128,128,128), dtype=int)
voxels[dense_ptcld[:, 0],dense_ptcld[:, 1],dense_ptcld[:, 2]]=1
if dense_ptcld.shape[0]>0:
verts, faces, normals, values = measure.marching_cubes_lewiner(
voxels, mesh_level, spacing=(1 / 128, 1 / 128, 1 / 128))
verts=(verts-0.5)*scale
else:
verts=np.zeros([0,3])
faces=np.zeros([0,3])
normals=np.zeros([0,3])
mesh = trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)
trimesh.repair.fix_inversion(mesh)
try:
trimesh.repair.fix_inversion(mesh)
except:
print('reconstruction error!')
decomps=[mesh]
return decomps, mesh_losses, mesh, resolve_dense_ptcld
#main method, reconstruct meshes in image
def estiamte_poses(self, rbg, depth, physics, cam_pos, cam_mat, pred_object_positions, pred_rotationss, object_labels, task, stability_loss, step=0, gt_mesh=None, use_gt_segs=True, single_threaded=False):
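#pipeline: segment the RGB-D frame (ground-truth MuJoCo masks or UOIS), build per-object voxel inputs, run the shape
#completion network, resolve intersections with the robot/table and between objects, track object identity across
#steps, and write the reconstructed meshes into a temporary MuJoCo scene XML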
stability_loss=False
#preprocess data
target_id=72
background_id=0
table_id=4
rbg_seg_standardized=data_augmentation.standardize_image(rbg)
xyz_img=compute_xyz(depth, self.camera_params)
temp_scene_xml_file=os.path.join(self.top_dir, f'herb_reconf/temp_scene_{self.save_id}_{step}.xml')
shutil.copyfile(self.scene_xml_file, temp_scene_xml_file)
removed_objects=remove_objects_from_mujoco(temp_scene_xml_file, 9)
removed_mesh_name_to_body_name={removed_object[1]: removed_object[0] for removed_object in removed_objects}
removed_meshes=[]
poses=[]
world_translations=[]
ind=0
seg_to_ind={}
segs=[]
depths=[]
rots=[]
target_ind=0
if single_threaded:
u=0
else:
self.seg_send.put([self.save_id, rbg_seg_standardized, xyz_img])
if use_gt_segs: #use gt segmentations (mujoco sim only)
camera=Camera(physics=physics, height=480, width=640, camera_id=1)
object_labels=camera.render(segmentation=True)
seg_id_to_geom_id={camera.scene.geoms[geom_ind][8]: camera.scene.geoms[geom_ind][3] for geom_ind in range(camera.scene.geoms.shape[0])}
seg_masks=np.copy(object_labels[:, :, 0])
for seg_label in np.unique(object_labels[:, :, 0]):
seg_inds=np.argwhere(object_labels[:, :, 0]==seg_label)
if seg_inds.shape[0]<100:
seg_masks[seg_inds[:,0], seg_inds[:,1]]=0
else: #use UOIS segs
seg_masks=self.seg_receive.get(timeout=120)[0]
#remove small/disjoint segmentations
for seg_label in np.unique(seg_masks):
if seg_label>0: #don't estimate background pose
seg=seg_masks==seg_label
seg_inds=np.argwhere(seg)
object_ids=object_labels[seg_inds[:, 0], seg_inds[:,1], 0]
robot_pix=np.argwhere(np.logical_and(object_ids<target_id, np.logical_and(object_ids!=background_id, object_ids!=table_id)))
if robot_pix.shape[0]/float(seg_inds.shape[0])<0.5:
connected_segs=label(seg, structure=np.array([[1,1,1],[1,1,1],[1,1,1]]))[0]
largest_seg=None
num_largest_label=0
for seg_d in np.unique(connected_segs):
if seg_d>0: #no background
connected_label=connected_segs==seg_d
num_labels=np.sum(connected_label)
if num_labels>num_largest_label:
num_largest_label=num_labels
largest_seg=connected_label
if num_largest_label>100:
seg_to_ind[ind]=seg_label
o_id=stats.mode(object_ids)[0][0]
if o_id==72:
target_ind=len(segs)
if physics.model.id2name(physics.model.geom_dataid[o_id], "mesh") in self.mesh_name_to_file:
mesh=trimesh.load_mesh(self.mesh_name_to_file[physics.model.id2name(physics.model.geom_dataid[o_id], "mesh")])
scale=self.name_to_scale_dict[physics.model.id2name(physics.model.geom_dataid[o_id], "mesh")]
scale_mat=np.eye(4)
scale_mat=scale_mat*scale
scale_mat[3,3]=1.0
mesh.apply_transform(scale_mat)
transform=np.eye(4)
transform[:3,:3]=np.reshape(physics.named.data.xmat[removed_mesh_name_to_body_name[physics.model.id2name(physics.model.geom_dataid[o_id], "mesh")]], (3,3))
mesh.apply_transform(transform)
transform=np.eye(4)
transform[:3,3]=physics.named.data.xpos[removed_mesh_name_to_body_name[physics.model.id2name(physics.model.geom_dataid[o_id], "mesh")]]
mesh.apply_transform(transform)
removed_meshes.append(mesh)
segs+=[seg]
depths+=[1000.0*depth]
ind+=1
if len(depths)>0:
projections=None
voxels=None
scales=[]
translations=[]
sub_voxes=[]
accepted_segs=[]
for sample_ind in range(len(depths)):
#compute mesh reconstructions
inputs, scale, translation, rot, sub_vox=self.predict_preprocess(depths[sample_ind], segs[sample_ind], cam_mat, cam_pos, stability_loss=stability_loss)
if not inputs is None:
accepted_segs.append(segs[sample_ind])
scales.append(scale)
translations.append(translation)
rots.append(rot)
sub_voxes.append(sub_vox)
if projections is None:
projections=np.expand_dims(inputs['obs_proj'], 0)
voxels=np.expand_dims(inputs['obs_voxels'], 0)
else:
projections=np.concatenate((projections, np.expand_dims(inputs['obs_proj'], 0)), axis=0)
voxels=np.concatenate((voxels, np.expand_dims(inputs['obs_voxels'], 0)), axis=0)
if not voxels is None:
if single_threaded:
if self.custom_recon_net:
pred_voxelss=self.point_completion_model.forward_with_gt_depth((projections, voxels, rbg, depth, accepted_segs, cam_mat, cam_pos))
else:
pred_voxelss=self.point_completion_model.forward_with_gt_depth(projections, voxels)
else:
self.recon_send.put([self.save_id, projections, voxels, stability_loss])
pred_voxelss=self.recon_receive.get(timeout=120)[0]
for pred_voxels_ind in range(len(pred_voxelss)):
if self.extrusion_baseline:
grid_inds=np.tile(np.arange(128)[None, None, :], (128, 128, 1))
thres_voxels=pred_voxelss[pred_voxels_ind][0]>=mesh_level
z_inds=np.argmax(thres_voxels, axis=2).astype(np.int)
grid_inds=grid_inds-z_inds[:,:,None]
pred_voxelss[pred_voxels_ind][0]=np.where(grid_inds<0, 1, thres_voxels)
shifted_voxels=np.zeros((1, 128,128,128))
if sub_voxes[pred_voxels_ind]>0:
shifted_voxels[:, :,:,sub_voxes[pred_voxels_ind]:]=pred_voxelss[pred_voxels_ind][:,:,:,:-sub_voxes[pred_voxels_ind]]
else:
shifted_voxels=pred_voxelss[pred_voxels_ind]
if self.four_channel: #translate from table
pcd = o3d.geometry.PointCloud()
a=np.argwhere(shifted_voxels[0]>0.5)
pcd.points = o3d.utility.Vector3dVector(np.argwhere(shifted_voxels[0]>0.5).astype(np.float32))
cl, ind = pcd.remove_statistical_outlier(nb_neighbors=40, std_ratio=5)#(nb_neighbors=100, std_ratio=1)#
fil_cloud=pcd.select_down_sample(ind)
other_pointcloud=np.array(fil_cloud.points).astype(np.int)
shifted_voxels[0,:,:,:]=0
shifted_voxels[0,other_pointcloud[:,0], other_pointcloud[:,1], other_pointcloud[:,2]]=1
pred_voxelss[pred_voxels_ind]=shifted_voxels
else:
pred_voxelss=[]
scales=[]
translations=[]
else:
pred_voxelss=[]
scales=[]
translations=[]
known_meshes=make_known_meshes(self.included_meshes, physics, self.geom_names)
inv_cm=np.linalg.inv(cam_mat)
past_voxels_scales_translations=[]
for ind in range(len(pred_voxelss)):
past_voxels_scales_translations.append((pred_voxelss[ind], scales[ind], translations[ind]))
if stability_loss:
world_translation=np.matmul(cam_mat, translations[ind])
poses+=[cam_pos+world_translation]
world_translations+=[world_translation]
else:
world_translation=np.matmul(cam_mat, translations[ind])
poses+=[cam_pos+world_translation]
world_translations+=[cam_pos+world_translation]
poses=np.array(poses)
world_translations=np.array(world_translations)
#mesh tracking (not very important for these experiments)
if self.past_poses is None:
if task in ['hard_pushing', 'grasping']:
target_center=np.array([0,0])
elif task=='easy_pushing':
target_center=np.array([-0.05, -0.35])
abs_distances=np.linalg.norm(poses[:, :2]-target_center, axis=1)
sorted_inds=np.argsort(abs_distances)
min_z_ind=np.argmin(abs_distances)
sorted_inds=np.ndarray.tolist(sorted_inds)
sorted_inds.remove(min_z_ind)
sorted_inds=[min_z_ind]+sorted_inds
sorted_inds=np.array(sorted_inds)
poses=poses[sorted_inds]
self.past_poses=poses
#match colors
removed_object_poses=[np.array(removed_object_info[2]) for removed_object_info in removed_objects]
removed_object_colors=np.array([np.array(removed_object_info[3]) for removed_object_info in removed_objects])
removed_object_poses=np.array(removed_object_poses)
cost_matrix=np.zeros((removed_object_poses.shape[0], poses.shape[0]))
for past_pos_ind in range(removed_object_poses.shape[0]):
for pos_ind in range(poses.shape[0]):
cost=np.linalg.norm(poses[pos_ind]-removed_object_poses[past_pos_ind])
cost_matrix[past_pos_ind, pos_ind]=cost
minimum_matching=linear_sum_assignment(cost_matrix)
removed_object_colors=removed_object_colors[minimum_matching[1]]
else:
sorted_inds=np.zeros(max(poses.shape[0], self.past_poses.shape[0]), dtype=np.int)
cost_matrix=np.zeros((self.past_poses.shape[0], poses.shape[0]))
for past_pos_ind in range(self.past_poses.shape[0]):
for pos_ind in range(poses.shape[0]):
cost=np.linalg.norm(self.past_poses[past_pos_ind]-poses[pos_ind])
if cost<self.tracking_max_distance:
cost_matrix[past_pos_ind, pos_ind]=cost
else:
cost_matrix[past_pos_ind, pos_ind]=1000000000.0
minimum_matching=linear_sum_assignment(cost_matrix)
self.past_poses[minimum_matching[0]]=poses[minimum_matching[1]]#np.zeros(poses.shape)
self.past_voxels_scales_translations=[]
for ind in range(len(sorted_inds)):
self.past_voxels_scales_translations.append(past_voxels_scales_translations[sorted_inds[ind]])
#resolve intersections between predicted meshes and robot/table
target_body_name=None
target_geom_names=None
target_mesh=None
target_cd=None
new_objects=[]
new_meshes=[]
new_mesh_inds=[]
mesh_volumes=[]
resolve_dense_pointclouds=[]
for ind in range(min(len(pred_voxelss), len(removed_meshes))):
mesh, cd, whole_mesh, resolve_dense_ptcld=self.refine_mesh_no_stability_loss(cam_mat, translations[sorted_inds[ind]], removed_meshes[sorted_inds[ind]], cam_pos, scales[sorted_inds[ind]], pred_voxelss[sorted_inds[ind]], inv_cm, known_meshes, refine=True)
for mesh_ind in range(len(mesh)):
resolve_dense_pointclouds.append(resolve_dense_ptcld)
if mesh[mesh_ind].faces.shape[0]>=1:
mesh_copy=mesh[mesh_ind].copy()
transform=np.eye(4)
transform[0:3,3]=self.past_poses[ind]
mesh_copy.apply_transform(transform)
new_meshes.append(mesh_copy)
new_mesh_inds.append(ind)
mesh_volumes.append(whole_mesh.volume)
if sorted_inds[ind]==target_ind:
target_cd=cd
#resolve intersections between predicted meshes
mesh_volumes=np.array(mesh_volumes)
mesh_volumes_inds=np.ndarray.tolist(np.argsort(mesh_volumes))
for new_mesh_ind in reversed(mesh_volumes_inds):
cur_mesh=new_meshes[new_mesh_ind]
other_meshes=new_meshes[:new_mesh_ind]
if new_mesh_ind<len(new_meshes):
other_meshes+=new_meshes[new_mesh_ind+1:]
ind=new_mesh_inds[new_mesh_ind]
cur_mesh, single_mesh=self.subtract_mesh_hull_no_stability_loss(resolve_dense_pointclouds[ind], other_meshes, cam_mat, translations[sorted_inds[ind]], cam_pos, scales[sorted_inds[ind]], inv_cm)
if cur_mesh!=None:
body_name, geom_names=add_object_to_mujoco(temp_scene_xml_file, cur_mesh, self.past_poses[ind], os.path.join(self.top_dir, f'herb_reconf/temp_{self.save_id}/'), ind, step, new_objects, joint=True, include_collisions=True, color=removed_object_colors[ind])#
new_objects+=geom_names
transform=np.eye(4)
transform[0:3,3]=self.past_poses[ind]
single_mesh_c=single_mesh.copy()
single_mesh_c.apply_transform(transform)
new_meshes[new_mesh_ind]=single_mesh_c
if sorted_inds[ind]==target_ind:
target_mesh=single_mesh
target_body_name=body_name
target_geom_names=geom_names
self.obs_cds.append(target_cd)
combined_mesh=target_mesh
combined_mesh.export(os.path.join(self.top_dir, f'herb_reconf/temp_{self.save_id}/target_mesh.stl'))
self.pred_obj_meshes=[combined_mesh]
color_seg_masks=color_code_objects(seg_masks, self.state_id_to_model_pixels)
return self.past_poses[0]+self.pred_obj_meshes[0].center_mass, color_seg_masks, temp_scene_xml_file, target_body_name, target_geom_names
def cleanup_files(self):
if os.path.exists(os.path.join(self.top_dir, f'herb_reconf/temp_{self.save_id}/target_mesh.stl')):
os.remove(os.path.join(self.top_dir, f'herb_reconf/temp_{self.save_id}/target_mesh.stl'))
if os.path.exists(os.path.join(self.top_dir, f'herb_reconf/temp_{self.save_id}')):
shutil.rmtree(os.path.join(self.top_dir, f'herb_reconf/temp_{self.save_id}'))
if os.path.exists(os.path.join(self.top_dir, f'herb_reconf/temp_scene_{self.save_id}_{0}.xml')):
os.remove(os.path.join(self.top_dir, f'herb_reconf/temp_scene_{self.save_id}_{0}.xml'))
def convert_to_float32(self, sample_loaded):
for k, v in sample_loaded.items():
if isinstance(v, np.ndarray):
if v.dtype != np.float32:
sample_loaded[k] = v.astype(np.float32)
def make_o3d_pcd(self, points, color):
pcd2 = o3d.geometry.PointCloud()
pcd2.points = o3d.utility.Vector3dVector(points)
pcd2.paint_uniform_color(np.array(color))
return pcd2
#preprocess images into four channel representation
def predict_preprocess(self, depth, label, cam_mat, cam_pos, stability_loss=False):
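#build object and scene point clouds from the depth image and segmentation mask, filter outliers with Open3D,
#normalize by the object's centroid and 4x its largest extent, then voxelize into the network's input channels
#(object / occupied / unknown when four_channel is set) together with their spherical projections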
obj_points_inds=np.where(label, depth, 0.0).flatten().nonzero()[0]
other_points_inds=np.argwhere(np.where(label, depth, 0.0).flatten()==0)[:,0]
obs_ptcld=make_pointcloud_all_points(depth)
obs_ptcld=obs_ptcld/1000.0
obj_pointcloud=obs_ptcld[obj_points_inds]
other_pointcloud=obs_ptcld[other_points_inds]
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(obj_pointcloud)
pcd.paint_uniform_color(np.array([0,1,0]))
cl, ind = pcd.remove_statistical_outlier(nb_neighbors=50, std_ratio=2)
obj_pointcloud=np.array(pcd.select_down_sample(ind).points)
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(other_pointcloud)
pcd.paint_uniform_color(np.array([0,1,0]))
cl, ind = pcd.remove_statistical_outlier(nb_neighbors=100, std_ratio=2.5)
other_pointcloud=np.array(pcd.select_down_sample(ind).points)
inv_proj_mat=np.reshape(cam_mat, (3,3))
proj_mat=np.linalg.inv(inv_proj_mat)
other_pointcloud=inv_proj_mat.dot(other_pointcloud.T).T
other_pointcloud=proj_mat.dot(other_pointcloud.T).T
rot=None
ground_plane=[]
for x in range(100):
for y in range(100):
ground_plane.append([x/100.0,y/100.0,0.35])
ground_plane=np.array(ground_plane)
obs_ptcld_min=np.amin(obj_pointcloud, axis=0)
obs_ptcld_max=np.amax(obj_pointcloud, axis=0)
scale=4.0*float(np.max(obs_ptcld_max-obs_ptcld_min))
translation=np.mean(obj_pointcloud, axis=0)
obj_pointcloud=(obj_pointcloud-translation)/scale
other_pointcloud=(other_pointcloud-translation)/scale
obj_pointcloud=inv_proj_mat.dot(obj_pointcloud.T).T
other_pointcloud=inv_proj_mat.dot(other_pointcloud.T).T
if self.four_channel:
min_object_z=np.min(obj_pointcloud[:,2])
occupied_voxels, occupied_proj, no_points, sub_vox=self.select_points_in_cube_voxelize_sphr_proj(other_pointcloud, self.save_id, grid_size=128, estimate_table=True, min_z=min_object_z)
if occupied_voxels is None:
return None, None, None, None, None
obj_voxels, obs_proj, _=self.select_points_in_cube_voxelize_sphr_proj(obj_pointcloud, self.save_id, grid_size=128, sub_vox=sub_vox)
if obj_voxels is None:
return None, None, None, None, None
line_points=np.arange(50,200)/200.0
near_obs_ptcld=obs_ptcld[np.argwhere(np.logical_and(np.logical_and(obs_ptcld[:,0]>=translation[0]-0.5*scale, np.logical_and(obs_ptcld[:,1]>=translation[1]-0.5*scale, obs_ptcld[:,2]>=translation[2]-0.5*scale)),
np.logical_and(obs_ptcld[:,0]<=translation[0]+0.5*scale, np.logical_and(obs_ptcld[:,1]<=translation[1]+0.5*scale, obs_ptcld[:,2]<=translation[0]+0.5*scale))))][:,0,:]
unoccupied_points=np.reshape(near_obs_ptcld[:, None, :]*line_points[:, None], (-1, 3))
empty_points=(unoccupied_points-translation)/scale
empty_points=inv_proj_mat.dot(empty_points.T).T
empty_voxels, empty_proj, _=self.select_points_in_cube_voxelize_sphr_proj(empty_points, self.save_id, grid_size=128, sub_vox=sub_vox)
away_line_points=1+np.arange(1,200)/100.0
occupied_points=np.reshape(near_obs_ptcld[:, None, :]*away_line_points[:, None], (-1, 3))
unk_points=(occupied_points-translation)/scale
unk_points=inv_proj_mat.dot(unk_points.T).T
unk_voxels, unk_proj, _=self.select_points_in_cube_voxelize_sphr_proj(unk_points, self.save_id, grid_size=128, sub_vox=sub_vox, unocc=np.logical_or(occupied_voxels,np.logical_or(obj_voxels, empty_voxels)))
sample_loaded={}
sample_loaded['obs_voxels']=np.concatenate((obj_voxels[None,:,:,:], occupied_voxels[None,:,:,:], unk_voxels[None,:,:,:]), axis=0).astype(np.float32)
sample_loaded['obs_proj']=np.concatenate((obs_proj[0], occupied_proj[0], unk_proj[0]), axis=0).astype(np.float32)
else:
sub_vox=0
occupied_voxels, occupied_proj, no_points=self.select_points_in_cube_voxelize_sphr_proj(other_pointcloud, self.save_id, grid_size=128)
if occupied_voxels is None:
return None, None, None, None, None
obj_voxels, obs_proj, _=self.select_points_in_cube_voxelize_sphr_proj(obj_pointcloud, self.save_id, grid_size=128)
if obj_voxels is None:
return None, None, None, None, None
sample_loaded={}
sample_loaded['obs_voxels']=obj_voxels[None,:,:,:].astype(np.float32)
sample_loaded['obs_proj']=obs_proj.detach().cpu().numpy()[0]
self.convert_to_float32(sample_loaded)
return sample_loaded, scale, translation, rot, sub_vox
| 50.228902 | 317 | 0.624125 | 5,933 | 43,448 | 4.277431 | 0.102815 | 0.014974 | 0.007881 | 0.010718 | 0.441091 | 0.352195 | 0.304398 | 0.271928 | 0.248522 | 0.209552 | 0 | 0.034071 | 0.265029 | 43,448 | 864 | 318 | 50.287037 | 0.760655 | 0.036043 | 0 | 0.213499 | 0 | 0 | 0.023842 | 0.013009 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023416 | false | 0 | 0.027548 | 0 | 0.084022 | 0.006887 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a265b40f8dc3cafb7f8d12542f36dc35672dbbfa | 3,103 | py | Python | training.py | chungs31/HAET-2021-competition-baseline-code | 5a7b661886f4273fc2d7314be4d1a54a15f44672 | [
"MIT"
] | null | null | null | training.py | chungs31/HAET-2021-competition-baseline-code | 5a7b661886f4273fc2d7314be4d1a54a15f44672 | [
"MIT"
] | null | null | null | training.py | chungs31/HAET-2021-competition-baseline-code | 5a7b661886f4273fc2d7314be4d1a54a15f44672 | [
"MIT"
] | null | null | null | '''Train CIFAR10 with PyTorch.'''
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torch.backends.cudnn as cudnn
import torchvision
import torchvision.transforms as transforms
import os
import argparse
import time
from utils import progress_bar
from dla import DLA
start_epoch = 0
parser = argparse.ArgumentParser(description='PyTorch CIFAR10 Training')
parser.add_argument('--lr', default=0.1, type=float, help='learning rate')
parser.add_argument('--net_sav', default='./checkpoint/ckpt.pth', type=str, help='network save file')
args = parser.parse_args()
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Data
print('==> Preparing data..')
transform_train = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
transform_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
trainset = torchvision.datasets.CIFAR10(
root='./data', train=True, download=True, transform=transform_train)
trainloader = torch.utils.data.DataLoader(
trainset, batch_size=2048, shuffle=True, num_workers=8, pin_memory=True)
testset = torchvision.datasets.CIFAR10(
root='./data', train=False, download=True, transform=transform_test)
testloader = torch.utils.data.DataLoader(
testset, batch_size=100, shuffle=False, num_workers=4, pin_memory=True)
net = DLA()
net = net.to(device)
if device == 'cuda':
net = torch.nn.DataParallel(net)
cudnn.benchmark = True
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=args.lr,
momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)
# Training
def train(epoch):
print('\nEpoch: %d' % epoch)
net.train()
train_loss = 0
correct = 0
total = 0
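# Mixed-precision training: autocast runs the forward pass in reduced precision
# where it is safe to do so, and GradScaler scales the loss so that small fp16
# gradients do not underflow before the optimizer step.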
scaler = torch.cuda.amp.GradScaler()
for batch_idx, (inputs, targets) in enumerate(trainloader):
inputs, targets = inputs.to(device), targets.to(device)
optimizer.zero_grad()
with torch.cuda.amp.autocast():
outputs = net(inputs)
loss = criterion(outputs, targets)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
train_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
acc = 100.*correct/total
progress_bar(batch_idx, len(trainloader), 'Loss: %.3f | Acc: %.3f%% (%d/%d)'
% (train_loss/(batch_idx+1), 100.*correct/total, correct, total))
state = {
'net': net.state_dict(),
'acc': acc,
'epoch': epoch,
}
torch.save(state, args.net_sav)
def run_training():
for epoch in range(start_epoch, start_epoch+250):
train(epoch)
scheduler.step()
run_training()
| 29.836538 | 101 | 0.667741 | 395 | 3,103 | 5.156962 | 0.379747 | 0.027 | 0.012764 | 0.036328 | 0.104075 | 0.104075 | 0.065783 | 0.065783 | 0.065783 | 0.065783 | 0 | 0.0428 | 0.194328 | 3,103 | 103 | 102 | 30.126214 | 0.772 | 0.013535 | 0 | 0.075 | 0 | 0 | 0.060596 | 0.006878 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025 | false | 0 | 0.15 | 0 | 0.175 | 0.025 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a26616dceddd6ef20f999e2eecbc167126a2bd39 | 20,269 | py | Python | scrapia_shell.py | laughingclouds/Scrapia-World | 2898c142a26be4decbcfbf4c4ceca171fd75af60 | [
"MIT"
] | 1 | 2021-05-14T07:17:38.000Z | 2021-05-14T07:17:38.000Z | scrapia_shell.py | laughingclouds/Scrapia-World | 2898c142a26be4decbcfbf4c4ceca171fd75af60 | [
"MIT"
] | 13 | 2021-08-14T18:06:11.000Z | 2022-02-25T20:58:19.000Z | scrapia_shell.py | r3a10god/Scrapia-World | 2898c142a26be4decbcfbf4c4ceca171fd75af60 | [
"MIT"
] | 1 | 2022-01-31T06:51:37.000Z | 2022-01-31T06:51:37.000Z | """
3) For missing novels, add the functionality to pause the code while it's running and scrape those missing novels first.
4) Sometimes `panels` in the `novel page` have different names for different novels; for those, create a json file
storing what kind of a panel they have. For that, save it as "str type" and "int type", or simply hardcode that stuff...
5) Try to get the chapter no. from the current page; if that works, it should become the new value
of `BH_NO`. Why? We want to be consistent about exactly which chapters have been scraped, and this will help with that.
# scrapia_world = Scrape wuxia world...
import cmd
import threading
from json import load
from os import environ
from sys import exit, exc_info
from time import \
sleep # for timeouts, cuz' you don't wanna get your IP banned...
import click
import colorama
import mysql.connector
from mysql.connector.cursor import MySQLCursor
from selenium.common import exceptions
from selenium.webdriver.firefox.options import Options
from selenium.webdriver import DesiredCapabilities
from selenium.webdriver import Firefox
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.firefox.webdriver import WebDriver
from selenium.webdriver.remote.webelement import WebElement # for type hinting
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from termcolor import colored
def setup_browser(exec_path: str):
firefox_options = Options()
firefox_options.headless = True
prefs: dict = {
'profile.managed_default_content_settings.images': 2,
'disk-cache-size': 4096,
'intl.accept_languages': 'en-US'
}
args = {
"--dns-prefetch-disable",
"--no-sandbox",
}
for pref in prefs:
firefox_options.set_preference(pref, prefs[pref])
for arg in args:
firefox_options.add_argument(arg)
capabilities = DesiredCapabilities.FIREFOX
# headless mode is already enabled above via firefox_options.headless = True
return Firefox(executable_path=exec_path, desired_capabilities=capabilities, options=firefox_options)
colorama.init()
class ScrapiaShell(cmd.Cmd):
"""Shell for scraping...duh..."""
# ctx will be used in the class that overrides this one
def __init__(self, novel_name: str, ctx):
cmd.Cmd.__init__(self)
# self.FIRST_KEY: str = ''
self.SCRAPER_THREAD = threading.Thread(target=self.__start_scraping)
self.SCRAPER_THREAD.daemon = True # using sys.exit will now kill this thread.
self.ctx = ctx
self.__NOVEL = novel_name
self.is_ready: bool = False # To make sure certain functions run only after `setup` is invoked
self._save_src: bool = False # If set, we'll save as html instead.
with open("novel_page_info.json", 'r') as novel_page_fobj, open("panel_struct_info.json", 'r') as panel_struct_fobj: # Reading from the json file
# Refer to the above json files to understand this mess
novel_page_dict: dict = load(novel_page_fobj)
self.__PANEL_STRUCT_DICT: dict = load(panel_struct_fobj)[novel_name]
self.__EXECUTABLE_PATH_GECKO: str = novel_page_dict['drivers_and_extensions']['EXECUTABLE_PATH_GECKO']
self.__LOGIN_INFO: dict[str, str] = novel_page_dict['login_info']
self.__NOVEL_PAGE_INFO: dict[str, str] = novel_page_dict['novel_page_info'][novel_name]
self.__TABLE: str = novel_page_dict['sql_info']['TABLE']
self.__DATABASE: str = novel_page_dict['sql_info']['DATABASE']
# These will be used later on
self.CH_NO: int = 0
self.__CHAPTER_NO_SUF = self.__NOVEL_PAGE_INFO['CHAPTER_NO_SUF']
self.__CHAPTER_NO_PRE = self.__NOVEL_PAGE_INFO['CHAPTER_NO_PRE']
self.__EMAIL = self.__LOGIN_INFO['EMAIL']
self.__INITIAL_SCRAPE = self.__NOVEL_PAGE_INFO['INITIAL_SCRAPE']
self.__LATEST_CH_NO = int(self.__NOVEL_PAGE_INFO['LATEST_CH_NO'])
self.__NOVEL_SAVE_PATH_BASE = self.__NOVEL_PAGE_INFO['NOVEL_SAVE_PATH_BASE']
self.__PASSWORD = self.__LOGIN_INFO['PASSWORD']
self.__mydb = mysql.connector.connect(
host="localhost",
user="root",
database=self.__DATABASE,
password=environ['PASSWORD']
)
self.__cursor: MySQLCursor = self.__mydb.cursor(dictionary=True)
self.__cursor.execute(f"SELECT {self.__NOVEL} FROM {self.__TABLE};")
for row in self.__cursor:
self.CH_NO = row[self.__NOVEL]
self.__driver = setup_browser(self.__EXECUTABLE_PATH_GECKO)
self.prompt = colored(f"({self.__NOVEL}) ", 'red')
self.intro = colored("Hi! Enter `help` for...well...help...", 'green')
def __goto_next_page(self) -> None:
"""Does one simple task, and that is, it clicks on the button, that will take us to
the next page or to the next chapter."""
element: WebElement = self.__driver.find_element_by_class_name("top-bar-area")
elements = element.find_elements_by_xpath(".//a") # Returns a list of elements
elements[2].click() # This clicks the `next button`
def __increment_ch_no(self, commit: bool = False) -> None:
"""This function `by default`, will only increment `CH_NO` by 1.\n
But, setting `commit` to `True` will make it 'NOT INCREMENT' `CH_NO` and rather
just commit to database.
`commit` is to be set to `True` only when the script is about to close selenium."""
if commit:
self.__cursor.execute(f"UPDATE {self.__TABLE} SET {self.__NOVEL} = {self.CH_NO};")
self.__mydb.commit()
return None
self.CH_NO += 1
def __install_addon_clean_tabs_get_login_window(self) -> None:
"""Big name...ikr, this will first install an addon (`ghostery ad blocker`), then also go to the login window
of wuxiaworld. After sleeping for 7 seconds the code will then begin to close any unwanted tabs (closes any tabs that are not `MAIN_HANDLE`)
and also return focus to the login window.
Extra import requirements:\n
`from selenium.webdriver.common.by import By`\n
`from selenium.webdriver.support.ui import WebDriverWait`\n
`from selenium.webdriver.support import expected_conditions as EC`
"""
# driver.install_addon('/opt/WebDriver/fox_ext/touch_vpn_secure_vpn_proxy_for_unlimited_access-4.2.1-fx.xpi')
# Don't need the vpn if we're gonna go with the click implementation
self.__driver.install_addon('/opt/WebDriver/fox_ext/ghostery_privacy_ad_blocker-8.5.5-an+fx.xpi')
self.__driver.install_addon('/opt/WebDriver/fox_ext/privacy_badger-2021.2.2-an+fx.xpi')
self.__driver.get("https://www.wuxiaworld.com/account/login")
WebDriverWait(self.__driver, 7).until(EC.presence_of_all_elements_located((By.TAG_NAME, "form")))
# wait for the page to load
MAIN_HANDLE: str = self.__driver.current_window_handle # Points to ww /account/login
for window_handle in self.__driver.window_handles:
if window_handle != MAIN_HANDLE:
self.__driver.switch_to.window(window_handle)
self.__driver.close() # after closing all irrelevant tabs driver will focus back to the main one
self.__driver.switch_to.window(MAIN_HANDLE) # Puts focus back on ww /account/login
# Use this stuff to setup the login window (You don't want any junk in some new tab)
def __invoke_scrape_sleep_goto_next_page(self) -> None:
"""This function invokes all these three functions"""
self.do_scrape()
sleep(100) # DO NOT DELETE!!! Unless...you want to be seen as a bot and blocked?
self.__goto_next_page()
# For god's sake, don't push the json to github...
# For god's sake, don't push the json to github...
def __login_key_strokes_goto_chapterpage(self) -> None:
"""This will login through your credentials. And then send you to the chapter page.
Extra import requirements:\n
`from selenium.webdriver.common.keys import Keys`"""
inputElement = self.__driver.find_element_by_id('Email')
inputElement.send_keys(self.__EMAIL)
inputElement = self.__driver.find_element_by_id('Password')
inputElement.send_keys(self.__PASSWORD)
inputElement.send_keys(Keys.ENTER)
sleep(3)
self.__driver.get(self.__NOVEL_PAGE_INFO['NOVEL_PAGE']) # go to whatever novel was entered
def __start_scraping(self) -> None:
"""Helper function that will be invoked by `self.do_start_scraping` in a thread.
This is the target function of `self.SCRAPER_THREAD` object."""
try:
if not self.is_ready:
self.do_setup()
if self.do_get_chapter_number_from_url(self.__INITIAL_SCRAPE) == self.CH_NO:
print(f"FOUND----------CHAPTER----------{self.CH_NO}")
self.__driver.get(self.__INITIAL_SCRAPE)
sleep(5)
self.__invoke_scrape_sleep_goto_next_page()
while self.CH_NO <= self.__LATEST_CH_NO:
print("WHILE----------LOOP----------Initialized")
self.__invoke_scrape_sleep_goto_next_page()
if self.CH_NO % 5 == 0:
self.__increment_ch_no(commit=True)
# optional, you could add a line to stop execution when a certain `CH_NO` has been scraped.
print("All present chapters scraped...\nEnding...")
self.do_end_cleanly()
except KeyboardInterrupt:
self.do_end_cleanly()
print("KEYBOARD----------INTERRUPT----------INVOKED")
return None
except Exception as e:
print("----------ERROR----------")
print(e)
self.do_end_cleanly()
return None
def do_ch_no(self, *args) -> None:
"""Perform operations on `self.CH_NO`."""
option = str(input("(show/change)? ")).strip()
if option == "show":
print(self.CH_NO)
elif option == "change":
try:
self.CH_NO = int(input("New value: ").strip())
except Exception as e:
print(e, "Retry with the correct value next time.", sep='\n')
return None
else:
print("Aborting!")
def do_change_values(self, *args):
"""Gives a menu to change values of variables that might alter the behaviour of the code."""
# For now work with `self._save_src` only
new_value = input("change to true?\n(y/n) ")
if new_value == 'y':
self._save_src = True
else:
print("Aborted!")
def do_cls(self, *args) -> None:
"""Clear screen using `click`"""
click.clear()
def do_commit(self, *args) -> None:
"""Commits the current value of `self.CH_NO` to db. You can change the value before calling this
using `ch_no` function"""
self.__increment_ch_no(commit=True)
def do_current_url(self, *args) -> None:
try:
click.echo(f"We are in\n{self.__driver.current_url}")
except Exception as e:
click.echo(str(e) + '\n\n' + "Try invoking `setup` first")
return None
def do_get_chapter_number_from_url(self, url: str, return_as_is: bool=False, *args) -> int:
""""Setting `return_as_is` to True will return the number as a string, this is used by the `end_cleanly` function."""
if not self.is_ready:
click.echo("Can run only after `setup` is invoked!")
return None
url = url.rstrip('/').split('/')[-1] # This returns only the relevant part (the part with the chapter no)
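# e.g. a (hypothetical) URL ending in ".../some-novel-chapter-152" leaves
# "some-novel-chapter-152" here, from which the digit run "152" is collected below.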
number_as_str: str = ''
was_prev_element_digit: bool = False
for element in url:
if element.isdigit():
number_as_str += element
was_prev_element_digit = True
elif not(element.isdigit()) and was_prev_element_digit:
break
else:
continue
if return_as_is:
return number_as_str
else:
return int(number_as_str)
def do_get(self, *args):
"""Prompts for a url and invokes `self.__driver.get(<url>)`"""
url: str = input("Enter url: ").strip()
self.__driver.get(url)
def do_end_cleanly(self, *args) -> None:
"""Invokes two functions:
`increment_ch_no(commit=True)` and
`driver.quit()`
\n
Note that `end_cleanly` does 'NOT' end the program execution, it just ends the browser and commits
to db."""
current_ch_no: str = self.do_get_chapter_number_from_url(self.__driver.current_url, return_as_is=True)
if current_ch_no: # we want to save the ch_no of the chapter we are presently in
self.CH_NO = int(current_ch_no)
self.__increment_ch_no(commit=True)
self.__driver.quit()
def do_exit(self, *args) -> bool:
"""Exits the interactive shell"""
try:
self.CH_NO = int(self.do_get_chapter_number_from_url(self.__driver.current_url, return_as_is=True))
except ValueError:
pass
finally:
self.do_end_cleanly()
exit() # This kills the daemon
def do_is_ready(self, show: bool=False, *args) -> None:
"""This is for manually telling the shell that we have now completed `setup`."""
if show:
print("This is the value of `self.is_ready`:", self.is_ready)
elif self.is_ready:
click.echo("It is already set to True!")
else:
self.is_ready = True
click.echo("Value has been set to True!")
def do_open_chapterhead_then_panel(self, *args) -> None:
"""The name makes it quite obvious...I'll come up with a better description at some later date."""
if not self.is_ready:
click.echo("Can run only after `setup` is invoked!")
return None
# The structure is quite simple, Dictionary{ tuple(lowerLimit, upperLimit): VolumeNumber }
# starts from '2' because in the initial setup the first panel is open by default; clicking on it will close
# and hence hide the chapters within that panel.
self.__driver.find_element_by_partial_link_text("Chapters").click()
# This will open only the required panel in the chapters section of ww /atg
try:
for index, chapter_tuple in enumerate(self.__PANEL_STRUCT_DICT):
chapter_tuple: list[int, int] = eval(chapter_tuple) # Yes, this is actually a list
if chapter_tuple[0] <= self.CH_NO <= chapter_tuple[1]:
if index == 0:
continue
self.__driver.find_element_by_partial_link_text(self.__PANEL_STRUCT_DICT[str(chapter_tuple)]).click()
# Since chapter_tuple is also a key we use it to access the value in panel_struct_dict
return None
if index == 0:
self.__driver.find_element_by_partial_link_text(self.__PANEL_STRUCT_DICT[str(chapter_tuple)]).click()
print("First panel closed")
# This will close the first panel
continue
except exceptions.NoSuchElementException as e:
print(e, "From function: self.do_open_chapterhead_then_panel", sep='\n\n')
except Exception as e:
print(e, exc_info()[0], "From function: self.do_open_chapterhead_then_panel", sep='\n\n')
def do_pr_pgsrc(self, *args):
"""Prints the page source to stdout"""
print(self.__driver.page_source)
def do_reinitiate(self, *args) -> None:
"""Re-initiates the driver object for smoothly re-running from the terminal itself"""
option = input("THIS WILL CLOSE ANY RUNNING INSTANCES OF SELENIUM IN THIS THREAD\nCONTINUE? (y/n): ")
if option == 'y':
self.do_end_cleanly()
self.__driver = Firefox(executable_path=self.__EXECUTABLE_PATH_GECKO)
else:
return None
def do_reinitiate_everything(self, *args) -> None:
"""This will re-initiate everything, including the shell class."""
option = input("THIS WILL CLOSE ANY RUNNING INSTANCES OF SELENIUM IN THIS THREAD\nCONTINUE? (y/n): ")
if option == 'y':
novel_name: str = input(f"{self.prompt}Enter novel name: ").strip()
self.do_end_cleanly()
self.__init__(novel_name, self.ctx)
else:
return None
def do_scrape(self, *args) -> None:
"""`scrape` does the following:\n
Get relevant content from the website and then save it in a file `NOVEL_SAVE_PATH`.\n
Increment the value of global variable `CH_NO` by one and output the title of the webpage scraped."""
if not self.is_ready:
click.echo("Can run only after `setup` is invoked!")
return None
# the filename including the chapters from now on should be saved as `<last_part_of_url>.txt`
URL_LAST_PART: str = self.__driver.current_url.rstrip('/').split('/')[-1]
file_ext: str = '' # default value
if self._save_src:
file_ext = '.html'
story_content = self.__driver.page_source
else:
story_content = self.__driver.find_element_by_id('chapter-content').text
with open(self.__NOVEL_SAVE_PATH_BASE + URL_LAST_PART + file_ext, 'w') as f:
f.write(story_content)
self.__increment_ch_no()
print(f"{URL_LAST_PART} scraped successfully...\n")
# Setting up everything
def do_setup(self, *args) -> None:
"""This has something to do with the way the site is designed, go to any random chapter and inspect the page source
in the source a `div` has the `back button, the link to the novel page, the next button` (in this order)
what this code does is simply returns a list of the WebElement objects that refer to these elements respectively.
and since I know that I'm supposed to go to the next chapter, I simply choose the last element and click it."""
try:
self.is_ready = True
self.__install_addon_clean_tabs_get_login_window()
sleep(5)
self.__login_key_strokes_goto_chapterpage()
self.do_open_chapterhead_then_panel()
sleep(3) # just wait...it's extra safe
element_to_click: str = self.__CHAPTER_NO_PRE + str(self.CH_NO) + self.__CHAPTER_NO_SUF
self.__driver.find_element_by_partial_link_text(element_to_click).click() # For going to the required chapter
self.__driver.implicitly_wait(5)
# This is all it does.
# It's basically creating (or `setting up`) a scenario that makes scraping through the click method possible
except exceptions.NoSuchElementException as e:
print("EXCEPTION-----from self.do_setup")
print(e, "Try to invoke `start_scraping`", sep='\n\n')
except Exception as e:
print(e, "FROM self.do_setup", sep='\n\n')
self.is_ready = False
finally:
print("The start_scraping function should be working no matter what.",
"If you're having trouble with this function, consider manually going to the required chapter.",
"And invoking `start_scraping`, it should start scraping then.\n\n")
return None
# –
# -
def do_start_scraping(self, *args):
"""This will run the `self.__start_scraping` helper function in a thread. This particular function also
deals with any function calls that might try to `start` the same thread again."""
try:
self.SCRAPER_THREAD.start()
except RuntimeError as e:
print(e, "The function is probably already running!", sep='\n')
return None
| 45.961451 | 159 | 0.639499 | 2,756 | 20,269 | 4.463353 | 0.204644 | 0.011381 | 0.010406 | 0.013657 | 0.226973 | 0.18405 | 0.145923 | 0.102918 | 0.079262 | 0.065848 | 0 | 0.002947 | 0.263506 | 20,269 | 440 | 160 | 46.065909 | 0.821008 | 0.287681 | 0 | 0.247423 | 0 | 0 | 0.148383 | 0.039322 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085911 | false | 0.017182 | 0.072165 | 0 | 0.219931 | 0.072165 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2686140bc42896204fa1cb81cc3f85e54f3fd62 | 6,768 | py | Python | src/covid_model_seiir_pipeline/pipeline/forecasting/model/mandate_reimposition.py | ihmeuw/covid-model-seiir-pipeline | 9ec71e4156fe47c14379127936c5131636544b0d | [
"BSD-3-Clause"
] | 23 | 2020-05-25T00:20:32.000Z | 2022-01-18T10:32:09.000Z | src/covid_model_seiir_pipeline/pipeline/forecasting/model/mandate_reimposition.py | ihmeuw/covid-model-seiir-pipeline | 9ec71e4156fe47c14379127936c5131636544b0d | [
"BSD-3-Clause"
] | 15 | 2020-06-15T16:34:22.000Z | 2021-08-15T22:11:37.000Z | src/covid_model_seiir_pipeline/pipeline/forecasting/model/mandate_reimposition.py | ihmeuw/covid-model-seiir-pipeline | 9ec71e4156fe47c14379127936c5131636544b0d | [
"BSD-3-Clause"
] | 11 | 2020-05-24T21:57:29.000Z | 2021-09-07T18:21:15.000Z | from typing import Dict, List, NamedTuple
import numpy as np
import pandas as pd
from covid_model_seiir_pipeline.lib import static_vars
def compute_reimposition_threshold(past_deaths, population, reimposition_threshold, max_threshold):
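# Rough logic: if a location's death rate exceeded its reimposition threshold on
# at least 7 of the last 14 observed days, lift that location's threshold to the
# max threshold (and, on a second identical pass, to twice the max) so that a
# mandate is not re-imposed immediately.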
population = population.reindex(past_deaths.index, level='location_id')
death_rate = past_deaths / population
death_rate = (death_rate
.groupby('location_id')
.apply(lambda x: x.iloc[-14:])
.reset_index(level=0, drop=True))
days_over_death_rate = (
(death_rate > reimposition_threshold.loc[death_rate.index])
.groupby('location_id')
.sum()
)
bad_locs = days_over_death_rate[days_over_death_rate >= 7].index
reimposition_threshold.loc[bad_locs] = max_threshold.loc[bad_locs]
# Do it a second time due to some crazy stuff happening in central Europe.
days_over_death_rate = (
(death_rate > reimposition_threshold.loc[death_rate.index])
.groupby('location_id')
.sum()
)
bad_locs = days_over_death_rate[days_over_death_rate >= 7].index
reimposition_threshold.loc[bad_locs] = 2*max_threshold.loc[bad_locs]
# Locations that have shown a propensity to impose mandates at
# any sign of covid in their populations.
immediate_lockdown_locations = [
71, # Australia
]
for location in immediate_lockdown_locations:
reimposition_threshold.loc[location] = 0.1 / 1_000_000
return reimposition_threshold
def compute_reimposition_date(deaths, population, reimposition_threshold,
min_wait, last_reimposition_end_date) -> pd.Series:
location_dates = deaths.index.to_frame().reset_index(drop=True)
deaths = deaths.reset_index(level='observed', drop=True)
death_rate = ((deaths / population.reindex(deaths.index, level='location_id'))
.rename('death_rate'))
reimposition_threshold = (reimposition_threshold
.reindex(death_rate.index)
.groupby('location_id')
.fillna(method='ffill'))
over_threshold = death_rate > reimposition_threshold
last_observed_date = (location_dates[location_dates.observed == 1]
.groupby('location_id')
.date
.max())
last_observed_date.loc[:] = last_observed_date.max()
min_reimposition_date = last_observed_date + min_wait
previously_implemented = last_reimposition_end_date[last_reimposition_end_date.notnull()].index
min_reimposition_date.loc[previously_implemented] = (
last_reimposition_end_date.loc[previously_implemented] + min_wait
)
min_reimposition_date = min_reimposition_date.loc[location_dates.location_id].reset_index(drop=True)
after_min_reimposition_date = location_dates['date'] >= min_reimposition_date
reimposition_date = (death_rate[over_threshold & after_min_reimposition_date.values]
.reset_index()
.groupby('location_id')['date']
.min()
.rename('reimposition_date'))
return reimposition_date
def compute_mobility_lower_bound(mobility: pd.DataFrame, mandate_effect: pd.DataFrame) -> pd.Series:
min_observed_mobility = mobility.groupby('location_id').min().rename('min_mobility')
max_mandate_mobility = (mandate_effect
.sum(axis=1)
.rename('min_mobility')
.reindex(min_observed_mobility.index, fill_value=100))
mobility_lower_bound = min_observed_mobility.where(min_observed_mobility <= max_mandate_mobility,
max_mandate_mobility)
return mobility_lower_bound
def compute_rampup(reimposition_date: pd.Series,
percent_mandates: pd.DataFrame,
days_on: pd.Timedelta) -> pd.DataFrame:
rampup = pd.merge(reimposition_date, percent_mandates.reset_index(level='date'), on='location_id', how='left')
rampup['rampup'] = rampup.groupby('location_id')['percent'].apply(lambda x: x / x.max())
rampup['first_date'] = rampup.groupby('location_id')['date'].transform('min')
rampup['diff_date'] = rampup['reimposition_date'] - rampup['first_date']
rampup['date'] = rampup['date'] + rampup['diff_date'] + days_on
rampup = rampup.reset_index()[['location_id', 'date', 'rampup']]
return rampup
def compute_new_mobility(old_mobility: pd.Series,
reimposition_date: pd.Series,
mobility_lower_bound: pd.Series,
percent_mandates: pd.DataFrame,
days_on: pd.Timedelta) -> pd.Series:
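# Sketch of what follows: wherever a reimposition date is set, mobility is forced
# down to the mandate lower bound for `days_on` days ("mobility_explosion"), then
# ramped back up along the shifted percent-mandates curve; the returned series is
# the element-wise minimum of the original projection and this override.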
mobility = pd.merge(old_mobility.reset_index(level='date'), reimposition_date, how='left', on='location_id')
mobility = mobility.merge(mobility_lower_bound, how='left', on='location_id')
reimposes = mobility['reimposition_date'].notnull()
dates_on = ((mobility['reimposition_date'] <= mobility['date'])
& (mobility['date'] <= mobility['reimposition_date'] + days_on))
mobility['mobility_explosion'] = mobility['min_mobility'].where(reimposes & dates_on, np.nan)
rampup = compute_rampup(reimposition_date, percent_mandates, days_on)
mobility = mobility.merge(rampup, how='left', on=['location_id', 'date'])
post_reimplementation = ~(mobility['mobility_explosion'].isnull() & mobility['rampup'].notnull())
mobility['mobility_explosion'] = mobility['mobility_explosion'].where(
post_reimplementation,
mobility['min_mobility'] * mobility['rampup']
)
idx_columns = ['location_id', 'date']
mobility = (mobility[idx_columns + ['mobility', 'mobility_explosion']]
.set_index(idx_columns)
.sort_index()
.min(axis=1))
return mobility
class MandateReimpositionParams(NamedTuple):
min_wait: pd.Timedelta
days_on: pd.Timedelta
reimposition_threshold: pd.Series
max_threshold: pd.Series
def unpack_parameters(algorithm_parameters: Dict,
em_scalars: pd.Series) -> MandateReimpositionParams:
min_wait = pd.Timedelta(days=algorithm_parameters['minimum_delay'])
days_on = pd.Timedelta(days=static_vars.DAYS_PER_WEEK * algorithm_parameters['reimposition_duration'])
reimposition_threshold = (algorithm_parameters['death_threshold'] / 1e6 * em_scalars).rename('threshold')
max_threshold = (algorithm_parameters['max_threshold'] / 1e6 * em_scalars).rename('threshold')
return MandateReimpositionParams(min_wait, days_on, reimposition_threshold, max_threshold)
| 47.328671 | 114 | 0.669917 | 753 | 6,768 | 5.691899 | 0.197875 | 0.074662 | 0.035698 | 0.023798 | 0.190854 | 0.14489 | 0.100327 | 0.100327 | 0.100327 | 0.100327 | 0 | 0.005143 | 0.224291 | 6,768 | 142 | 115 | 47.661972 | 0.811238 | 0.026596 | 0 | 0.130435 | 0 | 0 | 0.099803 | 0.00319 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052174 | false | 0 | 0.034783 | 0 | 0.182609 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a26a6b674499aa09d29a00e992f455a1e0cdc63d | 1,406 | py | Python | watcher_metering/publisher/app.py | b-com/watcher-metering | 7c09b243347146e5a421700d5b07d1d0a5c4d604 | [
"Apache-2.0"
] | 2 | 2015-10-22T19:44:57.000Z | 2017-06-15T15:01:07.000Z | watcher_metering/publisher/app.py | b-com/watcher-metering | 7c09b243347146e5a421700d5b07d1d0a5c4d604 | [
"Apache-2.0"
] | 1 | 2015-10-26T13:52:58.000Z | 2015-10-26T13:52:58.000Z | watcher_metering/publisher/app.py | b-com/watcher-metering | 7c09b243347146e5a421700d5b07d1d0a5c4d604 | [
"Apache-2.0"
] | 4 | 2015-10-10T13:59:39.000Z | 2020-05-29T11:47:07.000Z | # -*- encoding: utf-8 -*-
# Copyright (c) 2015 b<>com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import unicode_literals
from oslo_config import cfg
from oslo_log import log
from watcher_metering.publisher.opts import register_publisher_opts
from watcher_metering.publisher.publisher import Publisher
from watcher_metering import version
LOG = log.getLogger(__name__)
def load_config(conf):
log.register_options(conf)
register_publisher_opts(conf)
conf(
version=version.version_info.release_string(),
project='watcher_metering'
)
return conf
def start_publisher():
conf = load_config(cfg.CONF)
log.set_defaults()
log.setup(conf, "watcher_metering")
publisher_server = Publisher(**conf.publisher)
publisher_server.start()
publisher_server.join()
if __name__ == '__main__':
start_publisher() # pragma: no cover
| 27.038462 | 69 | 0.747511 | 193 | 1,406 | 5.243523 | 0.533679 | 0.059289 | 0.056324 | 0.031621 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007719 | 0.170697 | 1,406 | 51 | 70 | 27.568627 | 0.860206 | 0.417496 | 0 | 0 | 0 | 0 | 0.049875 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.25 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a26d63bcbf14af5bcb20f30126154eddbea651fc | 10,103 | py | Python | server/logger_runner.py | anshika-agarwal/openrvdas | 69c0c53902a988b790faad8baa21a5f299d033df | [
"BSD-2-Clause"
] | null | null | null | server/logger_runner.py | anshika-agarwal/openrvdas | 69c0c53902a988b790faad8baa21a5f299d033df | [
"BSD-2-Clause"
] | null | null | null | server/logger_runner.py | anshika-agarwal/openrvdas | 69c0c53902a988b790faad8baa21a5f299d033df | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/env python3
"""Low-level class to run a logger config in its own process and write
its stderr to a file.
Can be run from the command line as follows:
```
server/logger_runner.py \
--config test/NBP1406/NBP1406_cruise.yaml:gyr1-\>net \
--stderr_file /var/log/openrvdas/gyr1.stderr
```
But its main intended use is to be invoked by another module to start
a logger in its own, non-blocking process:
```
runner = LoggerRunner(config=config, name=logger,
stderr_file=stderr_file,
logger_log_level=self.logger_log_level)
self.logger_runner_map[logger] = runner
self.logger_runner_map[logger].start()
```
Simulated Serial Ports:
The NBP1406_cruise.yaml file above specifies configs that read from
simulated serial ports and write to UDP port 6224. To get the configs
to actually run, you'll need to run
```
logger/utils/simulate_data.py --config test/NBP1406/simulate_NBP1406.yaml
```
in a separate terminal window to create the virtual serial ports the
sample config references and feed simulated data through them.
To verify that the scripts are actually working as intended, you can
create a network listener on port 6224 in yet another window:
```
logger/listener/listen.py --network :6224
```
"""
import logging
import multiprocessing
import os
import pprint
import signal
import sys
import time
from importlib import reload
# Add the openrvdas/ directory to module search path
from os.path import dirname, realpath
sys.path.append(dirname(dirname(realpath(__file__))))
from logger.utils.read_config import read_config
from logger.utils.stderr_logging import DEFAULT_LOGGING_FORMAT
from logger.utils.timestamp import LOGGING_TIME_FORMAT
from logger.transforms.to_das_record_transform import ToDASRecordTransform
from logger.writers.cached_data_writer import CachedDataWriter
from logger.writers.composed_writer import ComposedWriter
from logger.listener.listen import ListenerFromLoggerConfig
################################################################################
def kill_handler(self, signum):
"""Translate an external signal (such as we'd get from os.kill) into a
KeyboardInterrupt, which will signal the start() loop to exit nicely."""
logging.info('Received external kill')
raise KeyboardInterrupt('Received external kill signal')
################################################################################
def config_from_filename(filename):
"""Load a logger configuration from a filename. If there's a ':' in
the config file name, then we expect what is before the colon to be
a cruise definition, and what is after to be the name of a
configuration inside that definition.
"""
config_name = None
if filename.find(':') > 0:
(filename, config_name) = filename.split(':', maxsplit=1)
config = read_config(filename)
if config_name:
config_dict = config.get('configs', None)
if not config_dict:
raise ValueError('Configuration name "%s" specified, but no '
'"configs" section found in file "%s"'
% (config_name, filename))
config = config_dict.get(config_name, None)
if not config:
raise ValueError('Configuration name "%s" not found in file "%s"'
% (config_name, filename))
logging.info('Loaded config file: %s', pprint.pformat(config))
return config
################################################################################
def config_is_runnable(config):
"""Is this logger configuration runnable? (Or, e.g. does it just have
a name and no readers/transforms/writers?)
"""
if not config:
return False
return 'readers' in config or 'writers' in config
################################################################################
def run_logger(logger, config, stderr_file=None, log_level=logging.INFO):
"""Run a logger, sending its stderr to a cached data server if so indicated
logger - Name of logger
config - Config dict
stderr_file - If not None, send stderr to this file
log_level - Level at which logger should be logging (e.g logging.WARNING,
logging.INFO, etc.
"""
# Make sure we can write the file in question
if stderr_file:
os.makedirs(os.path.dirname(stderr_file), exist_ok=True)
# Need to reset logging to its freshly-imported state
reload(logging)
logging.basicConfig(format=DEFAULT_LOGGING_FORMAT,
filename=stderr_file,
level=log_level)
config_name = config.get('name', 'no_name')
logging.info('Starting logger %s config %s', logger, config_name)
if config_is_runnable(config):
listener = ListenerFromLoggerConfig(config=config)
try:
listener.run()
except KeyboardInterrupt:
logging.warning('Received quit for %s', logger)
################################################################################
class LoggerRunner:
############################
def __init__(self, config, name=None, stderr_file=None,
logger_log_level=logging.WARNING):
"""Create a LoggerRunner.
```
config - Python dict containing the logger configuration to be run
name - Optional name to give to logger process.
stderr_file - Optional filename to direct stderr to
logger_log_level - At what logging level our logger should operate.
```
"""
self.config = config
self.name = name or config.get('name', 'Unnamed logger')
self.stderr_file = stderr_file
self.logger_log_level = logger_log_level
self.process = None # this will hold the logger process
self.failed = False # flag - has logger failed?
self.quit_flag = False # flag - has quit been signaled?
# Set the signal handler so that an external break will get
# translated into a KeyboardInterrupt. But signal only works if
# we're in the main thread - catch if we're not, and just assume
# everything's gonna be okay and we'll get shut down with a proper
# "quit()" call otherwise.
try:
signal.signal(signal.SIGTERM, kill_handler)
except ValueError:
logging.info('LoggerRunner not running in main thread; '
'shutting down with Ctrl-C may not work.')
############################
def start(self):
"""Start a listener subprocess."""
self.quit_flag = False
self.failed = False
num_tries = 0
# We're going to go ahead and create the process, even if the
# config is not runnable, just so we can get log messages that the
# config has been started.
## If config is not runnable, just say so and be done with it.
#if not self.is_runnable():
# logging.info('Process %s is complete. Not running.', self.name)
# return
run_logger_kwargs = {
'logger': self.name,
'config': self.config,
'stderr_file': self.stderr_file,
'log_level': self.logger_log_level
}
self.process = multiprocessing.Process(target=run_logger,
kwargs=run_logger_kwargs,
daemon=True)
self.process.start()
############################
def is_runnable(self):
"""Is this logger configuration runnable? (Or, e.g. does it just have
a name and no readers/transforms/writers?)
"""
return config_is_runnable(self.config)
############################
def is_alive(self):
"""Is the logger in question alive?"""
return self.process and self.process.is_alive()
############################
def is_failed(self):
"""Return whether the logger has failed."""
return self.failed
############################
def quit(self):
"""Signal loop exit and send termination signal to process."""
self.quit_flag = True
if self.process:
self.process.terminate()
self.process.join()
self.process = None
self.failed = False
################################################################################
if __name__ == '__main__':
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--config', dest='config', action='store', required=True,
help='Logger configuration to run. May either be the '
'name of a file containing a single logger configuration '
'or filename:config_name, for a file containing a cruise '
'definition followed by the name of the specific '
'configuration inside that definition.')
parser.add_argument('--name', dest='name', action='store', default=None,
help='Name to give to logger process.')
parser.add_argument('--stderr_file', dest='stderr_file', default=None,
help='Optional filename to which stderr should be '
'written. Will attempt to create path if it does not '
'exist.')
parser.add_argument('-v', '--verbosity', dest='verbosity',
default=0, action='count',
help='Increase output verbosity')
parser.add_argument('-V', '--logger_verbosity', dest='logger_verbosity',
default=0, action='count',
help='Increase output verbosity of component loggers')
args = parser.parse_args()
# Set up logging first of all
LOG_LEVELS = {0: logging.WARNING, 1: logging.INFO, 2: logging.DEBUG}
log_level = LOG_LEVELS[min(args.verbosity, max(LOG_LEVELS))]
logging.basicConfig(format=DEFAULT_LOGGING_FORMAT)
logging.getLogger().setLevel(log_level)
# What level do we want our component loggers to write?
logger_log_level = LOG_LEVELS[min(args.logger_verbosity, max(LOG_LEVELS))]
config = config_from_filename(args.config)
# Finally, create our runner and run it
runner = LoggerRunner(config=config,
name=args.name,
stderr_file=args.stderr_file,
logger_log_level=logger_log_level)
runner.start()
# Wait for it to complete
runner.process.join()
| 36.472924 | 80 | 0.63397 | 1,271 | 10,103 | 4.926829 | 0.244689 | 0.028745 | 0.022357 | 0.011498 | 0.146918 | 0.099968 | 0.054296 | 0.044714 | 0.044714 | 0.027148 | 0 | 0.005457 | 0.220034 | 10,103 | 276 | 81 | 36.605072 | 0.789213 | 0.351282 | 0 | 0.098485 | 0 | 0 | 0.172144 | 0.003652 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075758 | false | 0 | 0.128788 | 0 | 0.257576 | 0.015152 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a26ff86b2ded126f091a6542a9f7f8d92ee4b90c | 9,976 | py | Python | Python/rpgm_core.py | PIK-ICoNe/SyntheticNetworks_comparison | 9250fcc1105a4376c45a34c5c7c3626172a913e5 | [
"MIT"
] | null | null | null | Python/rpgm_core.py | PIK-ICoNe/SyntheticNetworks_comparison | 9250fcc1105a4376c45a34c5c7c3626172a913e5 | [
"MIT"
] | null | null | null | Python/rpgm_core.py | PIK-ICoNe/SyntheticNetworks_comparison | 9250fcc1105a4376c45a34c5c7c3626172a913e5 | [
"MIT"
] | null | null | null | __author__ = "Paul Schultz"
__date__ = "Mar 30, 2016"
__version__ = "v3.0"
# This file is based on the network creation algorithm published in:
#
# A Random Growth Model for Power Grids and Other Spatially Embedded Infrastructure Networks
# Paul Schultz, Jobst Heitzig, and Juergen Kurths
# Eur. Phys. J. Special Topics on "Resilient power grids and extreme events" (2014)
# DOI: 10.1140/epjst/e2014-02279-6
#
# The single-node basin stability predictor has appeared here:
#
# Detours around basin stability in power networks
# Paul Schultz, Jobst Heitzig, and Juergen Kurths
# New J. Phys. 16, 125001 (2014).
# DOI: 10.1088/1367-2630/16/12/125001
import numpy as np
from scipy.sparse import dok_matrix
from igraph import Graph, plot, palettes, rescale
import os
from rpgm_algo import RpgAlgorithm
#######################################################################################################################
#######################################################################################################################
#######################################################################################################################
class RPG(RpgAlgorithm):
def __init__(self):
super(RPG, self).__init__()
# where am I?
self.basepath = os.getcwd()
self.figdir = os.path.join(self.basepath, "figures/")
self.netdir = os.path.join(self.basepath, "networks/")
if not os.path.exists(self.figdir):
os.makedirs(self.figdir)
if not os.path.exists(self.netdir):
os.makedirs(self.netdir)
i = 0
self.identifier = "random_net" + str(i)
while os.path.exists(self.netdir + self.identifier + ".gml"):
i += 1
self.identifier = "random_net" + str(i)
###############################################################################
# ## PUBLIC FUNCTIONS ## #
###############################################################################
def save_graph(self, info_file=True):
elist = sorted(set([self._s(key) for key in self.adjacency.keys()]))
G = Graph(self.added_nodes)
G.add_edges(elist)
G.vs['lat'] = self.lat
G.vs['lon'] = self.lon
G.write_graphml(self.netdir + self.identifier + ".gml")
if info_file:
if not hasattr(self, "_stat"):
self._stat = self.stats
with open(self.netdir + self.identifier + ".gml.info", "w") as f:
f.write(self._stat + "\n")
return self.netdir + self.identifier + ".gml"
@property
def stats(self):
from scipy.linalg import eigvals
elist = sorted(set([self._s(key) for key in self.adjacency.keys()]))
G = Graph(self.added_nodes)
G.add_edges(elist)
STR = str()
STR += "connected:" + str(G.is_connected()) + "\n"
STR += "undirected:" + str(not G.is_directed()) + "\n"
# STR += "\n"
STR += "* mean degree kbar:" + str(np.mean(G.degree())) + "\n"
STR += "ratio r{k>kbar}:" + str(sum(G.degree() > np.mean(G.degree())) * 1. / self.added_nodes) + "\n"
STR += "avg. neighbour's degree distribution:" + str(np.min(G.knn()[0])) + \
"..." + "<k>=" + str(np.mean(G.knn()[0])) + \
"..." + str(np.max(G.knn()[0])) + "\n"
STR += "degree - degree frequency - avg. neighbour's degree:"
fd, _ = np.histogram(G.degree(), bins=np.max(G.degree()))
for i, val in enumerate(G.knn()[1]):
STR += " " + str(i + 1) + "-" + str(fd[i]) + "-" + str(val) + "\n"
STR += "s.p. betweenness distribution:" + str(np.min(G.betweenness())) + \
"..." + "<b>=" + str(np.mean(G.betweenness())) + \
"..." + str(np.max(G.betweenness())) + "\n"
# STR += "\n"
STR += "* average shortest path length:" + str(G.average_path_length()) + "\n"
STR += "* network transistivity:" + str(G.transitivity_undirected()) + "\n"
STR += "* degree assortativity:" + str(G.assortativity_degree()) + "\n"
STR += "Fiedler eigenvalue lambda2:" + str(sorted(eigvals(G.laplacian()))[1]) + "\n"
# STR += "\n"
STR += "number of dead ends:" + str(G.degree().count(1)) + "\n"
n_a_dt = G.betweenness().count(self.added_nodes - 2) + \
G.betweenness().count(2 * self.added_nodes - 5) + \
G.betweenness().count(2 * self.added_nodes - 6) + \
G.betweenness().count(3 * self.added_nodes - 10) + \
G.betweenness().count(4 * self.added_nodes - 17) + \
G.betweenness().count(5 * self.added_nodes - 26)
STR += "number of nodes adjacent to dead trees (approx.):" + str(n_a_dt) + "\n"
STR += "number of detour nodes:" + str(G.betweenness().count(0) - G.degree().count(1)) + "\n"
STR += "number of edges:" + str(G) #added
# print(G) # added
self._stat = STR
return STR
def plot_net(self, name="random_network", labels=False):
elist = sorted(set([self._s(key) for key in self.adjacency.keys()]))
filename = self.figdir + name + "_" + self.identifier + ".pdf"
x = (np.max(self.lat) - np.min(self.lat))
y = np.cos(np.mean(self.lat)) * (np.max(self.lon) - np.min(self.lon))
#print x, y
G = Graph(self.added_nodes)
G.add_edges(elist)
visual_style = {}
def edgecolor(i,j):
if hasattr(self, "mst_edges"):
if self._s((i,j)) in self.mst_edges:
return "red"
else:
return "black"
scale = 2
visual_style["layout"] = list(zip(self.lat, self.lon))
visual_style["bbox"] = (x / y * 1024, 1024)
visual_style["margin"] = 10 * scale
visual_style["palette"] = palettes["heat"]
visual_style["edge_color"] = [edgecolor(edge.source,edge.target) for edge in G.es]
visual_style["edge_width"] = [2 * scale for edge in G.es]
visual_style["vertex_size"] = [10 * scale for i in range(G.vcount())]
visual_style["vertex_color"] = [int(x) for x in rescale(1. - self.bs_predictor(),
out_range=(0, len(visual_style["palette"]) - 1))]
if labels:
visual_style["vertex_label"] = [str(i) for i in range(G.vcount())]
plot(G, filename, **visual_style)
def bs_predictor(self):
try:
from pyunicorn.core import resistive_network as rn
except ImportError:
raise ImportWarning("pyunicorn not available")
return np.zeros(self.added_nodes)
def min_clust(W, N):
''' W is the admittance matrix'''
C = np.zeros(N)
for i in range(N):
norm = 0
for j in range(N):
for k in range(j):
if j!=k:
norm += W[i,k]*W[i,j]
if W[i,j]*W[i,k]*W[j,k]>0:
C[i] += min( W[i,k],W[j,k] ) * min( W[i,j],W[j,k] )
#print i,norm[i],C[i]
C[i] /= max(norm,1)
#print C[i]
return C
weights = self.adjacency
for key in list(weights.keys()):
weights[key] = self.distance[self._s(key)]
net = rn.ResNetwork(resistances=weights.todense())
''' explanatory variables '''
ED = net.admittive_degree()
#print 'ED'
ANED = np.divide(net.average_neighbors_admittive_degree(), ED)
#print 'ANED'
minC = min_clust(net.get_admittance(), net.N)
#print 'minC'
VCFB = np.zeros(net.N)
for a in range(net.N):
VCFB[a] = 1.0*net.vertex_current_flow_betweenness(a) *((net.N*(net.N-1)) / 2) /net.N
#print 'VCFB'
ERCC = np.zeros(net.N)
for a in range(net.N):
ERCC[a] = net.effective_resistance_closeness_centrality(a)
#print 'ERCC'
dead = np.zeros(net.N)
for i, b in enumerate(net.betweenness()):
if b in [net.N - 2, 2 * net.N - 5, 2 * net.N - 6, 3 * net.N - 10, 4 * net.N - 17, 5 * net.N - 26]:
dead[i] = 1
else:
dead[i] = 0
#print 'dead'
'''predicted probability to have poor basin stability'''
# %Coefficients:
# % Estimate Std. Error z value Pr(>|z|)
# %(Intercept) 3.325612 0.143404 23.19 <2e-16 ***
# %ED 0.088743 0.003832 23.16 <2e-16 ***
# %ANED -0.348762 0.011514 -30.29 <2e-16 ***
# %minC -10.389121 0.271047 -38.33 <2e-16 ***
# %VCFB -0.107782 0.007994 -13.48 <2e-16 ***
# %ERCC -1.515226 0.033785 -44.85 <2e-16 ***
# %dead 4.925139 0.084272 58.44 <2e-16 ***
g = 3.325612 + 0.088743*ED -0.348762*ANED -10.389121*minC -0.107782*VCFB -1.515226*ERCC + 4.925139*dead
prob = 1.0/(1.0+np.exp(-g))
t = 0.15
poor_bs = np.zeros(net.N)
for i in range(net.N):
if prob[i]>t:
poor_bs[i]=1
return prob
#######################################################################################################################
#######################################################################################################################
#######################################################################################################################
def main():
g = RPG()
assert(isinstance(g, RPG))
#g.debug = True
g.set_params(n=100, n0=1, r=1./3.)
g.initialise()
g.grow()
print(g)
print(g.stats)
g.save_graph()
g.plot_net()
if __name__ == "__main__":
main()
| 36.276364 | 119 | 0.475341 | 1,218 | 9,976 | 3.799672 | 0.268473 | 0.014693 | 0.033276 | 0.020743 | 0.19771 | 0.145419 | 0.107822 | 0.084054 | 0.063742 | 0.056396 | 0 | 0.048739 | 0.292502 | 9,976 | 274 | 120 | 36.408759 | 0.606971 | 0.133119 | 0 | 0.095541 | 0 | 0 | 0.092088 | 0 | 0 | 0 | 0 | 0 | 0.006369 | 1 | 0.050955 | false | 0 | 0.050955 | 0 | 0.152866 | 0.012739 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a27387da493382008b9ba5aa08ab6118facee3ba | 4,142 | py | Python | build/sage_bootstrap/flock.py | fchapoton/sage | 765c5cb3e24dd134708eca97e4c52e0221cd94ba | [
"BSL-1.0"
] | 4 | 2020-07-17T04:49:44.000Z | 2020-07-29T06:33:51.000Z | build/sage_bootstrap/flock.py | Ivo-Maffei/sage | 467fbc70a08b552b3de33d9065204ee9cbfb02c7 | [
"BSL-1.0"
] | 1 | 2020-04-18T16:30:43.000Z | 2020-04-18T16:30:43.000Z | build/sage_bootstrap/flock.py | dimpase/sage | 468f23815ade42a2192b0a9cd378de8fdc594dcd | [
"BSL-1.0"
] | 3 | 2020-03-29T17:13:36.000Z | 2021-05-03T18:11:28.000Z | #!/usr/bin/env sage-system-python
# vim: set filetype=python:
"""
This script runs the given command under a file lock (similar to the flock
command on some systems).
"""
# This is originally motivated by pip, but has since been generalized. We
# should avoid running pip while uninstalling a package because that is prone
# to race conditions. This script runs pip under a lock. For details, see
# https://trac.sagemath.org/ticket/21672
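# A minimal usage sketch (assuming the script is installed as ``sage-flock``,
# the name used in the parser error message below):
#
#     sage-flock --exclusive /path/to/lockfile pip uninstall -y some_package
#
# blocks until it holds an exclusive lock on the lockfile, then runs the command.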
import fcntl
import os
import pipes
import sys
# Note that argparse is not part of Python 2.6, so we bundle it
try:
import argparse
except ImportError:
from sage_bootstrap.compat import argparse
class FileType(argparse.FileType):
"""
Version of argparse.FileType with the option to ensure that the full path
to the file exists.
"""
def __init__(self, mode='r', makedirs=False):
# Note, the base class __init__ takes other arguments too depending on
# the Python version but we don't care about them for this purpose
super(FileType, self).__init__(mode=mode)
self._makedirs = makedirs
def __call__(self, string):
if self._makedirs and string != '-':
dirname = os.path.dirname(string)
try:
os.makedirs(dirname)
except OSError as exc:
if not os.path.isdir(dirname):
raise argparse.ArgumentTypeError(
"can't create '{0}': {1}".format(dirname, exc))
return super(FileType, self).__call__(string)
class IntOrFileType(FileType):
"""
Like FileType but also accepts an int (e.g. for a file descriptor).
"""
def __call__(self, string):
try:
return int(string)
except ValueError:
return super(IntOrFileType, self).__call__(string)
def run(argv=None):
parser = argparse.ArgumentParser(description=__doc__)
group = parser.add_mutually_exclusive_group()
group.add_argument('-s', '--shared', action='store_true',
help='create a shared lock')
# Note: An exclusive lock is created by default if no other flags are given,
# but supplying the --exclusive flag explicitly may help clarity
group.add_argument('-x', '--exclusive', action='store_true',
help='create an exclusive lock (the default)')
group.add_argument('-u', '--unlock', action='store_true',
help='remove an existing lock')
parser.add_argument('lock', metavar='LOCK',
type=IntOrFileType('w+', makedirs=True),
help='filename of the lock or an integer file descriptor')
parser.add_argument('command', metavar='COMMAND', nargs=argparse.REMAINDER,
help='command to run with the lock including any '
'arguments to that command')
args = parser.parse_args(argv)
if args.shared:
locktype = fcntl.LOCK_SH
elif args.unlock:
locktype = fcntl.LOCK_UN
else:
locktype = fcntl.LOCK_EX
lock = args.lock
command = args.command
if isinstance(lock, int) and command:
parser.error('sage-flock does not accept a command when passed '
'a file descriptor number')
# First try a non-blocking lock such that we can give an informative
# message while the user is waiting.
try:
fcntl.flock(lock, locktype | fcntl.LOCK_NB)
except IOError as exc:
if locktype == fcntl.LOCK_SH:
kind = "shared"
elif locktype == fcntl.LOCK_UN:
# This shouldn't happen
sys.stderr.write(
"Unexpected error trying to unlock fd: {0}\n".format(exc))
return 1
else:
kind = "exclusive"
sys.stderr.write("Waiting for {0} lock to run {1} ... ".format(
kind, ' '.join(pipes.quote(arg) for arg in command)))
fcntl.flock(lock, locktype)
sys.stderr.write("ok\n")
if not (args.unlock or isinstance(lock, int)):
os.execvp(command[0], command)
if __name__ == '__main__':
sys.exit(run())
| 33.403226 | 79 | 0.620956 | 527 | 4,142 | 4.764706 | 0.41556 | 0.031063 | 0.040621 | 0.0227 | 0.019912 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004706 | 0.281748 | 4,142 | 123 | 80 | 33.674797 | 0.839328 | 0.250845 | 0 | 0.108108 | 0 | 0 | 0.160105 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054054 | false | 0.013514 | 0.094595 | 0 | 0.22973 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a274c52b70083e3b0c72b77002316986830f4ec9 | 4,861 | py | Python | Huy/singleQubit.py | npgh2009/VQE-CQSim | 9bee775bba69a8309ee4f3ac8a875f32913c3948 | [
"MIT"
] | null | null | null | Huy/singleQubit.py | npgh2009/VQE-CQSim | 9bee775bba69a8309ee4f3ac8a875f32913c3948 | [
"MIT"
] | null | null | null | Huy/singleQubit.py | npgh2009/VQE-CQSim | 9bee775bba69a8309ee4f3ac8a875f32913c3948 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Wed Dec 2 23:22:40 2020
@author: lul
"""
import numpy as np
from random import random
from scipy.optimize import minimize
import qiskit
from qiskit.circuit.library.standard_gates import U2Gate
from qiskit.aqua.operators import WeightedPauliOperator
from qiskit.aqua.algorithms import NumPyEigensolver
def hamiltonian_operator(a, b, c, d):
"""
Creates a*I + b*Z + c*X + d*Y pauli sum
that will be our Hamiltonian operator.
"""
pauli_dict = {
'paulis': [{"coeff": {"imag": 0.0, "real": a}, "label": "I"},
{"coeff": {"imag": 0.0, "real": b}, "label": "Z"},
{"coeff": {"imag": 0.0, "real": c}, "label": "X"},
{"coeff": {"imag": 0.0, "real": d}, "label": "Y"}
]
}
return WeightedPauliOperator.from_dict(pauli_dict)
scale = 10
a, b, c, d = (scale*random(), scale*random(),
scale*random(), scale*random())
H = hamiltonian_operator(a, b, c, d)
exact_result = NumPyEigensolver(H).run()
reference_energy = min(np.real(exact_result.eigenvalues))
print('The exact ground state energy is: {}'.format(reference_energy))
def quantum_state_preparation(circuit, parameters):
q = circuit.qregs[0] # q is the quantum register where the info about qubits is stored
circuit.rx(parameters[0], q[0]) # q[0] is our one and only qubit XD
circuit.ry(parameters[1], q[0])
return circuit
"""
H_gate = U2Gate(0, np.pi).to_matrix()
print("H_gate:")
print((H_gate * np.sqrt(2)).round(5))
Y_gate = U2Gate(0, np.pi/2).to_matrix()
print("Y_gate:")
print((Y_gate * np.sqrt(2)).round(5))
"""
def vqe_circuit(parameters, measure):
"""
Creates a device ansatz circuit for optimization.
:param parameters_array: list of parameters for constructing ansatz state that should be optimized.
:param measure: measurement type. E.g. 'Z' stands for Z measurement.
:return: quantum circuit.
"""
q = qiskit.QuantumRegister(1)
c = qiskit.ClassicalRegister(1)
circuit = qiskit.QuantumCircuit(q, c)
# quantum state preparation
circuit = quantum_state_preparation(circuit, parameters)
# measurement
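# Measuring X or Y is done by rotating into the computational (Z) basis first:
# u(pi/2, 0, pi) is a Hadamard up to global phase (X basis), and
# u(pi/2, 0, pi/2) maps the Y eigenvectors onto |0>/|1> before the Z measurement.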
if measure == 'Z':
circuit.measure(q[0], c[0])
elif measure == 'X':
circuit.u(np.pi/2, 0, np.pi, q[0])
circuit.measure(q[0], c[0])
elif measure == 'Y':
circuit.u(np.pi/2, 0, np.pi/2, q[0])
circuit.measure(q[0], c[0])
else:
raise ValueError('Not valid input for measurement: input should be "X" or "Y" or "Z"')
return circuit
def quantum_module(parameters, measure):
# measure
if measure == 'I':
return 1
elif measure == 'Z':
circuit = vqe_circuit(parameters, 'Z')
elif measure == 'X':
circuit = vqe_circuit(parameters, 'X')
elif measure == 'Y':
circuit = vqe_circuit(parameters, 'Y')
else:
raise ValueError('Not valid input for measurement: input should be "I" or "X" or "Z" or "Y"')
shots = 8192
backend = qiskit.BasicAer.get_backend('qasm_simulator')
job = qiskit.execute(circuit, backend, shots=shots)
result = job.result()
counts = result.get_counts()
# expectation value estimation from counts
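# For a single qubit measured in the chosen basis, the estimator is
# <P> ~= (counts['0'] - counts['1']) / shots, i.e. outcome '0' contributes +1
# and outcome '1' contributes -1, which is exactly what the loop below computes.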
expectation_value = 0
for measure_result in counts:
sign = +1
if measure_result == '1':
sign = -1
expectation_value += sign * counts[measure_result] / shots
return expectation_value
def pauli_operator_to_dict(pauli_operator):
"""
from WeightedPauliOperator return a dict:
{I: 0.7, X: 0.6, Z: 0.1, Y: 0.5}.
:param pauli_operator: qiskit's WeightedPauliOperator
:return: a dict in the desired form.
"""
d = pauli_operator.to_dict()
paulis = d['paulis']
paulis_dict = {}
for x in paulis:
label = x['label']
coeff = x['coeff']['real']
paulis_dict[label] = coeff
return paulis_dict
pauli_dict = pauli_operator_to_dict(H)
def vqe(parameters):
# quantum_modules
quantum_module_I = pauli_dict['I'] * quantum_module(parameters, 'I')
quantum_module_Z = pauli_dict['Z'] * quantum_module(parameters, 'Z')
quantum_module_X = pauli_dict['X'] * quantum_module(parameters, 'X')
quantum_module_Y = pauli_dict['Y'] * quantum_module(parameters, 'Y')
# summing the measurement results
classical_adder = quantum_module_I + quantum_module_Z + quantum_module_X + quantum_module_Y
return classical_adder
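# In other words, vqe(parameters) estimates the energy
#     E(theta) = a*<I> + b*<Z> + c*<X> + d*<Y>,
# where a, b, c, d come from pauli_dict and each <P> is the measured
# expectation value returned by quantum_module for that Pauli basis.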
parameters_array = np.array([np.pi, np.pi])
tol = 1e-3 # tolerance for optimization precision.
vqe_result = minimize(vqe, parameters_array, method="Powell", tol=tol)
#print('The exact ground state energy is: {}'.format(reference_energy))
print('The estimated ground state energy from VQE algorithm is: {}'.format(vqe_result.fun)) | 30.572327 | 103 | 0.641432 | 665 | 4,861 | 4.56391 | 0.254135 | 0.055684 | 0.037891 | 0.014498 | 0.208896 | 0.152883 | 0.112026 | 0.112026 | 0.073806 | 0.073806 | 0 | 0.019078 | 0.223617 | 4,861 | 159 | 104 | 30.572327 | 0.785109 | 0.186176 | 0 | 0.125 | 0 | 0.011364 | 0.102319 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068182 | false | 0 | 0.079545 | 0 | 0.227273 | 0.022727 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a275bc57e5427894757a51c95d95d6e82e513d44 | 1,247 | py | Python | readStuff.py | SonderSwift/PythonClass | de1e6076c14a7acf8b3bf605d1f64de9f953833c | [
"MIT"
] | 1 | 2020-01-25T01:02:42.000Z | 2020-01-25T01:02:42.000Z | readStuff.py | SonderSwift/PythonClass | de1e6076c14a7acf8b3bf605d1f64de9f953833c | [
"MIT"
] | null | null | null | readStuff.py | SonderSwift/PythonClass | de1e6076c14a7acf8b3bf605d1f64de9f953833c | [
"MIT"
] | null | null | null | import requests
from os import path
from pprint import pprint
from collections import Counter
def getPythonSite():
r = requests.get('https://www.python.org')
# print(r.content)
# Count # of spans in r.content
s = f'span count: {str(r.content).count("span")}'
print(s)
f = open('newFile.txt', 'w')
f.write(s)
f.close()
def getMobyDick():
"""
Download Moby Dick
"https://www.gutenberg.org/files/76/76-0.txt"
len == 691343
"""
# Implement file check
if path.exists("mobyDick.txt"):
print("Already downloaded!")
return
# f = open('mobyDick.txt', 'r')
# s = f.read()
r = requests.get('https://www.gutenberg.org/files/76/76-0.txt')
a = r.text.replace("\n", ' ').replace("\r", ' ')
# f = open('mobyDick.txt', 'w')
# f.write(str(r.content))
# f.close()
with open('mobyDick.txt', 'w') as f:
pprint(dir(f))
f.write(a)
def readBook(s):
    """Count the 10 most common words in the file `s` and pretty-print them
    as (word, count) pairs, e.g. ('word', 102); also print how often ' a '
    appears in the text.
    """
with open(s) as f:
s = f.read()
words = s.split()
# create data structure of 10
# most common words with counts
c = Counter(words)
pprint(c.most_common(10))
print(f"Count a: {s.count(' a ')}")
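# Hedged illustration of the Counter call above, with made-up data:
# Counter("the whale the sea the".split()).most_common(2)
# evaluates to [('the', 3), ('whale', 1)].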
if __name__ == "__main__":
# getPythonSite()
getMobyDick()
readBook('mobyDick.txt')
| 18.61194 | 64 | 0.628709 | 189 | 1,247 | 4.100529 | 0.396825 | 0.012903 | 0.058065 | 0.043871 | 0.126452 | 0.085161 | 0.085161 | 0.085161 | 0.085161 | 0 | 0 | 0.024462 | 0.180433 | 1,247 | 66 | 65 | 18.893939 | 0.733855 | 0.33761 | 0 | 0 | 0 | 0 | 0.262255 | 0.036765 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.133333 | 0 | 0.266667 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a277b74bb496ffb82161385190b5efe24a322890 | 3,731 | py | Python | python/ray/operator.py | firebolt55439/ray | 215300b070628c06f0106906fc6c03bd70ebf140 | [
"Apache-2.0"
] | 3 | 2020-12-03T17:48:45.000Z | 2022-01-22T08:09:46.000Z | python/ray/operator.py | firebolt55439/ray | 215300b070628c06f0106906fc6c03bd70ebf140 | [
"Apache-2.0"
] | 6 | 2022-03-18T14:06:24.000Z | 2022-03-26T07:13:16.000Z | python/ray/operator.py | firebolt55439/ray | 215300b070628c06f0106906fc6c03bd70ebf140 | [
"Apache-2.0"
] | 1 | 2020-12-12T13:59:22.000Z | 2020-12-12T13:59:22.000Z | """
Ray operator for Kubernetes.
Reads ray cluster config from a k8s ConfigMap, starts a ray head node pod using
create_or_update_cluster(), then runs an autoscaling loop in the operator pod
executing this script. Writes autoscaling logs to the directory
/root/ray-operator-logs.
In this setup, the ray head node does not run an autoscaler. It is important
NOT to supply an --autoscaling-config argument to head node's ray start command
in the cluster config when using this operator.
To run, first create a ConfigMap named ray-operator-configmap from a ray
cluster config. Then apply the manifest at python/ray/autoscaler/kubernetes/operator_configs/operator_config.yaml
For example:
kubectl create namespace raytest
kubectl -n raytest create configmap ray-operator-configmap --from-file=python/ray/autoscaler/kubernetes/operator_configs/test_cluster_config.yaml
kubectl -n raytest apply -f python/ray/autoscaler/kubernetes/operator_configs/operator_config.yaml
""" # noqa
import os
from typing import Any, Dict, IO, Tuple
import kubernetes
import yaml
from ray._private import services
from ray.autoscaler._private.commands import create_or_update_cluster
from ray.autoscaler._private.kubernetes import core_api
from ray.utils import open_log
from ray import ray_constants
RAY_CLUSTER_NAMESPACE = os.environ.get("RAY_OPERATOR_POD_NAMESPACE")
RAY_CONFIG_MAP = "ray-operator-configmap"
RAY_CONFIG_DIR = "/root"
LOG_DIR = "/root/ray-operator-logs"
ERR_NAME, OUT_NAME = "ray-operator.err", "ray-operator.out"
def prepare_ray_cluster_config() -> str:
config_map = core_api().read_namespaced_config_map(
name=RAY_CONFIG_MAP, namespace=RAY_CLUSTER_NAMESPACE)
# config_map.data consists of a single key:value pair
for config_file_name, config_string in config_map.data.items():
config = yaml.safe_load(config_string)
config["provider"]["namespace"] = RAY_CLUSTER_NAMESPACE
cluster_config_path = os.path.join(RAY_CONFIG_DIR, config_file_name)
with open(cluster_config_path, "w") as file:
yaml.dump(config, file)
return cluster_config_path
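# Hedged illustration with hypothetical data: if the ConfigMap's .data were
# {"test_cluster_config.yaml": "<yaml text>"}, the loop above would patch the
# provider namespace, write the result to /root/test_cluster_config.yaml and
# return that path.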
def get_ray_head_pod_ip(config: Dict[str, Any]) -> str:
cluster_name = config["cluster_name"]
label_selector = f"component=ray-head,ray-cluster-name={cluster_name}"
pods = core_api().list_namespaced_pod(
namespace=RAY_CLUSTER_NAMESPACE, label_selector=label_selector).items
assert (len(pods)) == 1
head_pod = pods.pop()
return head_pod.status.pod_ip
def get_logs() -> Tuple[IO, IO]:
try:
os.makedirs(LOG_DIR)
except OSError:
pass
err_path = os.path.join(LOG_DIR, ERR_NAME)
out_path = os.path.join(LOG_DIR, OUT_NAME)
return open_log(err_path), open_log(out_path)
def main():
kubernetes.config.load_incluster_config()
cluster_config_path = prepare_ray_cluster_config()
config = create_or_update_cluster(
cluster_config_path,
override_min_workers=None,
override_max_workers=None,
no_restart=False,
restart_only=False,
yes=True,
no_config_cache=True)
with open(cluster_config_path, "w") as file:
yaml.dump(config, file)
ray_head_pod_ip = get_ray_head_pod_ip(config)
# TODO: Add support for user-specified redis port and password
redis_address = services.address(ray_head_pod_ip,
ray_constants.DEFAULT_PORT)
stderr_file, stdout_file = get_logs()
services.start_monitor(
redis_address,
stdout_file=stdout_file,
stderr_file=stderr_file,
autoscaling_config=cluster_config_path,
redis_password=ray_constants.REDIS_DEFAULT_PASSWORD)
if __name__ == "__main__":
main()
| 34.229358 | 145 | 0.744841 | 536 | 3,731 | 4.902985 | 0.29291 | 0.064307 | 0.045282 | 0.018265 | 0.133181 | 0.133181 | 0.085236 | 0.085236 | 0.085236 | 0.038052 | 0 | 0.000649 | 0.174484 | 3,731 | 108 | 146 | 34.546296 | 0.852597 | 0.289735 | 0 | 0.061538 | 0 | 0 | 0.074621 | 0.045833 | 0 | 0 | 0 | 0.009259 | 0.015385 | 1 | 0.061538 | false | 0.030769 | 0.138462 | 0 | 0.246154 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a279e33bf1c2840ec8d857bd275621e4f84027fe | 400 | py | Python | beegarden/themes/dark/__init__.py | rosin55/beegarden | 6d5173ead78d94ee39fec7182665ef950bf49fcc | [
"BSD-2-Clause-FreeBSD"
] | 4 | 2017-08-08T08:17:55.000Z | 2021-08-31T19:51:01.000Z | beegarden/themes/dark/__init__.py | rosin55/beegarden | 6d5173ead78d94ee39fec7182665ef950bf49fcc | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | beegarden/themes/dark/__init__.py | rosin55/beegarden | 6d5173ead78d94ee39fec7182665ef950bf49fcc | [
"BSD-2-Clause-FreeBSD"
] | 4 | 2016-02-22T18:47:00.000Z | 2019-03-25T21:30:32.000Z | # -*- coding: utf-8 -*-
import os
PICTURES_PATH = os.path.dirname(__file__)
# BACKGROUND_COLOR = (13, 13, 13)
BACKGROUND_IMAGE = 'background.jpg'
METER_1_COLOR = (80, 80, 80)
METER_2_COLOR = (70, 70, 90)
FIELD_WIDTH = 1200
FIELD_HEIGHT = 600
TEAMS_COUNT = 4
DEBUG = False
MAX_HEALTH = 100
STING_POWER = 50
HEALTH_TOP_UP_SPEED = 0.5
BEEHIVE_SAFE_DISTANCE = 200
# See robogame_engine.constants
| 16 | 41 | 0.7275 | 63 | 400 | 4.269841 | 0.777778 | 0.02974 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116071 | 0.16 | 400 | 24 | 42 | 16.666667 | 0.684524 | 0.2075 | 0 | 0 | 0 | 0 | 0.044728 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a27a891adce1d846c2b71dec6a2179f6660bcde2 | 1,767 | py | Python | factoranalysis/test/test_rotation.py | eribean/girth | d56958921d16dc535aceec2d6d47d341ff418036 | [
"MIT"
] | 43 | 2020-03-22T02:34:42.000Z | 2022-03-26T12:56:11.000Z | factoranalysis/test/test_rotation.py | eribean/girth | d56958921d16dc535aceec2d6d47d341ff418036 | [
"MIT"
] | 117 | 2020-03-01T13:35:14.000Z | 2022-01-31T01:13:17.000Z | factoranalysis/test/test_rotation.py | eribean/girth | d56958921d16dc535aceec2d6d47d341ff418036 | [
"MIT"
] | 9 | 2020-10-21T17:04:24.000Z | 2022-02-25T08:49:14.000Z | import unittest
import numpy as np
from girth.factoranalysis import sparsify_loadings
class TestRotation(unittest.TestCase):
"""Test fixture for testing Rotation."""
def test_rotation_orthogonal(self):
"""Testing recovery of orthogonal rotation."""
rotation_angle = np.radians(37.4)
cos_theta, sin_theta = np.cos(rotation_angle), np.sin(rotation_angle)
rotation = np.array([[cos_theta, -sin_theta], [sin_theta, cos_theta]])
# Uncorrelated Loadings
real_loadings = np.array([1, 0] * 5 + [0, 1]* 5).reshape(10, 2)
rotated_loadings = real_loadings @ rotation
np.random.seed(1496) # For the Basin Hopping Routine
loadings, bases = sparsify_loadings(rotated_loadings,
orthogonal=True)
np.testing.assert_allclose(loadings, real_loadings, rtol=1e-4, atol=1e-4)
np.testing.assert_allclose(bases, rotation, rtol=1e-4, atol=1e-4)
def test_rotation_oblique(self):
"""Testing recovery of oblique rotation."""
real_loadings = np.array([1, 0] * 5 + [0, 1]* 5).reshape(10, 2)
rotation_angle = np.radians(75.3)
transformation = np.array([[1, 0], [np.cos(rotation_angle), np.sin(rotation_angle)]])
transformed_loadings = real_loadings @ transformation
np.random.seed(262929) # For the Basin Hopping Routine
loadings, bases = sparsify_loadings(transformed_loadings,
orthogonal=False, alpha=0.0)
np.testing.assert_allclose(loadings, real_loadings, rtol=1e-4, atol=1e-4)
np.testing.assert_allclose(bases, transformation, rtol=1e-4, atol=1e-4)
if __name__ == "__main__":
unittest.main() | 37.595745 | 93 | 0.640068 | 219 | 1,767 | 4.977169 | 0.296804 | 0.022018 | 0.091743 | 0.084404 | 0.411009 | 0.411009 | 0.385321 | 0.385321 | 0.319266 | 0.220183 | 0 | 0.040571 | 0.246746 | 1,767 | 47 | 94 | 37.595745 | 0.778362 | 0.110922 | 0 | 0.148148 | 0 | 0 | 0.005148 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 1 | 0.074074 | false | 0 | 0.111111 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a27c83c46a999e9291f8240041e86357873dd6b4 | 3,731 | py | Python | pyExamples/demo_quads_4x4.py | juandiegopozo/OpenSeesPyDoc | 80b07e7d90f5445381fc28895acb19c52bf8025e | [
"MIT"
] | null | null | null | pyExamples/demo_quads_4x4.py | juandiegopozo/OpenSeesPyDoc | 80b07e7d90f5445381fc28895acb19c52bf8025e | [
"MIT"
] | null | null | null | pyExamples/demo_quads_4x4.py | juandiegopozo/OpenSeesPyDoc | 80b07e7d90f5445381fc28895acb19c52bf8025e | [
"MIT"
] | null | null | null | import openseespy.opensees as ops
import openseespy.postprocessing.ops_vis as opsv
# import opensees as ops # local compilation
# import ops_vis as opsv # local
import matplotlib.pyplot as plt
ops.wipe()
ops.model('basic', '-ndm', 2, '-ndf', 2)
ops.node(1, 0., 0.)
ops.node(2, 0., 1.)
ops.node(3, 0., 2.)
ops.node(4, 0., 3.)
ops.node(5, 0., 4.)
ops.node(6, 1., 0.)
ops.node(7, 1., 1.)
ops.node(8, 1., 2.)
ops.node(9, 1., 3.)
ops.node(10, 1., 4.)
ops.node(11, 2., 0.)
ops.node(12, 2., 1.)
ops.node(13, 2., 2.)
ops.node(14, 2., 3.)
ops.node(15, 2., 4.)
ops.node(16, 3., 0.)
ops.node(17, 3., 1.)
ops.node(18, 3., 2.)
ops.node(19, 3., 3.)
ops.node(20, 3., 4.)
ops.node(21, 4., 0.)
ops.node(22, 4., 1.)
ops.node(23, 4., 2.)
ops.node(24, 4., 3.)
ops.node(25, 4., 4.)
ops.nDMaterial('ElasticIsotropic', 1, 1000, 0.3)
ops.element('quad', 1, 1, 6, 7, 2, 1, 'PlaneStress', 1)
ops.element('quad', 2, 2, 7, 8, 3, 1, 'PlaneStress', 1)
ops.element('quad', 3, 3, 8, 9, 4, 1, 'PlaneStress', 1)
ops.element('quad', 4, 4, 9, 10, 5, 1, 'PlaneStress', 1)
ops.element('quad', 5, 6, 11, 12, 7, 1, 'PlaneStress', 1)
ops.element('quad', 6, 7, 12, 13, 8, 1, 'PlaneStress', 1)
ops.element('quad', 7, 8, 13, 14, 9, 1, 'PlaneStress', 1)
ops.element('quad', 8, 9, 14, 15, 10, 1, 'PlaneStress', 1)
ops.element('quad', 9, 11, 16, 17, 12, 1, 'PlaneStress', 1)
ops.element('quad', 10, 12, 17, 18, 13, 1, 'PlaneStress', 1)
ops.element('quad', 11, 13, 18, 19, 14, 1, 'PlaneStress', 1)
ops.element('quad', 12, 14, 19, 20, 15, 1, 'PlaneStress', 1)
ops.element('quad', 13, 16, 21, 22, 17, 1, 'PlaneStress', 1)
ops.element('quad', 14, 17, 22, 23, 18, 1, 'PlaneStress', 1)
ops.element('quad', 15, 18, 23, 24, 19, 1, 'PlaneStress', 1)
ops.element('quad', 16, 19, 24, 25, 20, 1, 'PlaneStress', 1)
ops.fix(1, 1, 1)
ops.fix(6, 1, 1)
ops.fix(11, 1, 1)
ops.fix(16, 1, 1)
ops.fix(21, 1, 1)
ops.equalDOF(2, 22, 1, 2)
ops.equalDOF(3, 23, 1, 2)
ops.equalDOF(4, 24, 1, 2)
ops.equalDOF(5, 25, 1, 2)
ops.timeSeries('Linear', 1)
ops.pattern('Plain', 1, 1)
ops.load(15, 0., -1.)
ops.analysis('Static')
ops.analyze(1)
# - plot model
plt.figure()
opsv.plot_model()
plt.axis('equal')
# - plot deformation
plt.figure()
opsv.plot_defo()
# opsv.plot_defo(sfac, unDefoFlag=1, fmt_undefo='g:')
plt.axis('equal')
# get values at OpenSees nodes
sig_out = opsv.quad_sig_out_per_node()
print(f'sig_out:\n{sig_out}')
# - visu stress map
# !!! select from sig_out: e.g. vmises
# j, jstr = 0, 'sxx'
# j, jstr = 1, 'syy'
# j, jstr = 2, 'sxy'
j, jstr = 3, 'vmis'
# j, jstr = 4, 's1'
# j, jstr = 5, 's2'
# j, jstr = 6, 'alfa'
nds_val = sig_out[:, j]
# print(f'nds_val:\n{nds_val}')
plt.figure()
opsv.plot_stress_2d(nds_val)
plt.xlabel('x [m]')
plt.ylabel('y [m]')
plt.title(f'{jstr}')
# plt.savefig(f'quads_4x4_{jstr}.png')
# for educational purposes show values at integration points and
# nodes which can finally be averaged at nodes
eles_ips_crd, eles_nds_crd, nds_crd, quads_conn = opsv.quad_crds_node_to_ip()
print(f'\neles_ips_crd:\n{eles_ips_crd}')
print(f'\neles_nds_crd:\n{eles_nds_crd}')
print(f'\nnds_crd:\n{nds_crd}')
print(f'\nquads_conn:\n{quads_conn}')
eles_ips_sig_out, eles_nds_sig_out = opsv.quad_sig_out_per_ele()
print(f'\neles_ips_sig_out:\n{eles_ips_sig_out}')
print(f'\neles_nds_sig_out:\n{eles_nds_sig_out}')
sig_out_indx = j # same as j, jstr
fig = plt.figure(figsize=(22./2.54, 18./2.54)) # centimeter to inch conversion
fig.subplots_adjust(left=.08, bottom=.08, right=.985, top=.94)
opsv.plot_mesh_with_ips_2d(nds_crd, eles_ips_crd, eles_nds_crd, quads_conn,
eles_ips_sig_out, eles_nds_sig_out, sig_out_indx)
plt.xlabel('x [m]')
plt.ylabel('y [m]')
# plt.savefig(f'quads_4x4_{jstr}_ips_nds_vals.png')
plt.show()
exit()
| 26.65 | 79 | 0.641651 | 723 | 3,731 | 3.182573 | 0.214385 | 0.050413 | 0.097349 | 0.111256 | 0.295958 | 0.295958 | 0.083877 | 0.052151 | 0.052151 | 0.030422 | 0 | 0.106575 | 0.139909 | 3,731 | 139 | 80 | 26.841727 | 0.610471 | 0.167515 | 0 | 0.096774 | 0 | 0 | 0.17294 | 0.060999 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.032258 | 0 | 0.032258 | 0.075269 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a27d39b3673bbafe135696951df8576f704f98af | 3,041 | py | Python | src/bound_evaluation/data_frame_to_csv.py | paulnikolaus/python-mgf-calculator | b6a897e211e8003a7b4eee26fafdaf5f06e8a0c5 | [
"MIT"
] | 3 | 2020-06-01T14:46:15.000Z | 2022-03-24T15:02:03.000Z | src/bound_evaluation/data_frame_to_csv.py | paulnikolaus/snc-mgf-toolbox | bda07fedd92657628dd34e00911b344ee3535dad | [
"MIT"
] | null | null | null | src/bound_evaluation/data_frame_to_csv.py | paulnikolaus/snc-mgf-toolbox | bda07fedd92657628dd34e00911b344ee3535dad | [
"MIT"
] | null | null | null | """Writes a data frame of arrivals, service, and performance parameters
into a csv"""
import csv
from typing import Callable, List
import pandas as pd
from nc_arrivals.arrival_distribution import ArrivalDistribution
from nc_server.constant_rate_server import ConstantRateServer
from optimization.opt_method import OptMethod
from utils.perform_parameter import PerformParameter
from utils.perform_param_list import PerformParamList
def perform_param_list_to_csv(prefix: str,
data_frame_creator: Callable,
arr_list: List[ArrivalDistribution],
ser_list: List[ConstantRateServer],
perform_param_list: PerformParamList,
opt_method: OptMethod = None,
suffix="") -> pd.DataFrame:
filename = prefix + perform_param_list.to_name()
if opt_method is None:
data_frame = data_frame_creator(arr_list=arr_list,
ser_list=ser_list,
perform_param_list=perform_param_list)
else:
data_frame = data_frame_creator(arr_list=arr_list,
ser_list=ser_list,
opt_method=opt_method,
perform_param_list=perform_param_list)
filename += "_" + arr_list[0].to_name()
for i in range(len(arr_list)):
filename += "_"
filename += arr_list[i].to_value(number=i + 1)
for j in range(len(ser_list)):
filename += "_"
filename += ser_list[j].to_value(number=j + 1)
data_frame.to_csv(filename + suffix + '.csv',
index=True,
quoting=csv.QUOTE_NONNUMERIC)
return data_frame
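# Hedged summary of the naming scheme implemented above: the CSV path is
# prefix + the performance-parameter list name, then the first arrival's name,
# one "_"-separated to_value() chunk per arrival and per server, and finally
# the optional suffix and ".csv".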
def arrival_list_to_csv(prefix: str,
data_frame_creator: Callable,
list_arr_list: List[List[ArrivalDistribution]],
ser_list: List[ConstantRateServer],
server_index: int,
perform_param: PerformParameter,
opt_method: OptMethod = None,
suffix="_adjusting_arrivals") -> pd.DataFrame:
filename = prefix + perform_param.to_name()
if opt_method is None:
data_frame: pd.DataFrame = data_frame_creator(
list_arr_list=list_arr_list,
ser_list=ser_list,
server_index=server_index,
perform_param=perform_param)
else:
data_frame: pd.DataFrame = data_frame_creator(
list_arr_list=list_arr_list,
ser_list=ser_list,
server_index=server_index,
opt_method=opt_method,
perform_param=perform_param)
filename += "_" + list_arr_list[0][0].to_name()
data_frame.to_csv(filename + suffix + '.csv',
index=True,
quoting=csv.QUOTE_NONNUMERIC)
return data_frame
| 37.54321 | 78 | 0.580072 | 326 | 3,041 | 5.058282 | 0.217791 | 0.081868 | 0.077623 | 0.03396 | 0.56883 | 0.53487 | 0.422074 | 0.354154 | 0.354154 | 0.275318 | 0 | 0.00253 | 0.350214 | 3,041 | 80 | 79 | 38.0125 | 0.831984 | 0.025978 | 0 | 0.603175 | 0 | 0 | 0.010501 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031746 | false | 0 | 0.126984 | 0 | 0.190476 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a27dc045af7993fa6b66e16e9ace0c66c6563040 | 2,777 | py | Python | django_compat_patcher/django_legacy/django1_10/template/defaulttags.py | jayvdb/django-compat-patcher | 09a60f1fa7e860a32d506c92d684997492385dda | [
"MIT"
] | null | null | null | django_compat_patcher/django_legacy/django1_10/template/defaulttags.py | jayvdb/django-compat-patcher | 09a60f1fa7e860a32d506c92d684997492385dda | [
"MIT"
] | null | null | null | django_compat_patcher/django_legacy/django1_10/template/defaulttags.py | jayvdb/django-compat-patcher | 09a60f1fa7e860a32d506c92d684997492385dda | [
"MIT"
] | null | null | null | import os
import re
import sys
from datetime import datetime
from django.conf import settings
from django.template.base import (
Node, Template, TemplateSyntaxError,
)
from django_compat_patcher.deprecation import *
def include_is_allowed(filepath, allowed_include_roots):
filepath = os.path.abspath(filepath)
for root in allowed_include_roots:
if filepath.startswith(root):
return True
return False
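# Hedged illustration: include_is_allowed("/home/html/includes/x.html",
# ["/home/html"]) returns True, while a path outside every allowed root
# (e.g. "/etc/passwd") returns False.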
class SsiNode(Node):
def __init__(self, filepath, parsed):
self.filepath = filepath
self.parsed = parsed
def render(self, context):
filepath = self.filepath.resolve(context)
if not include_is_allowed(filepath, context.template.engine.allowed_include_roots):
if settings.DEBUG:
return "[Didn't have permission to include file]"
else:
return '' # Fail silently for invalid includes.
try:
with open(filepath, 'r') as fp:
output = fp.read()
except IOError:
output = ''
if self.parsed:
try:
t = Template(output, name=filepath, engine=context.template.engine)
return t.render(context)
except TemplateSyntaxError as e:
if settings.DEBUG:
return "[Included template had syntax error: %s]" % e
else:
return '' # Fail silently for invalid included templates.
return output
#@register.tag
def ssi(parser, token):
"""
Outputs the contents of a given file into the page.
Like a simple "include" tag, the ``ssi`` tag includes the contents
of another file -- which must be specified using an absolute path --
in the current page::
{% ssi "/home/html/ljworld.com/includes/right_generic.html" %}
If the optional "parsed" parameter is given, the contents of the included
file are evaluated as template code, with the current context::
{% ssi "/home/html/ljworld.com/includes/right_generic.html" parsed %}
"""
warnings.warn(
"The {% ssi %} tag is deprecated. Use the {% include %} tag instead.",
RemovedInDjango110Warning,
)
bits = token.split_contents()
parsed = False
if len(bits) not in (2, 3):
raise TemplateSyntaxError("'ssi' tag takes one argument: the path to"
" the file to be included")
if len(bits) == 3:
if bits[2] == 'parsed':
parsed = True
else:
raise TemplateSyntaxError("Second (optional) argument to %s tag"
" must be 'parsed'" % bits[0])
filepath = parser.compile_filter(bits[1])
return SsiNode(filepath, parsed)
| 31.91954 | 91 | 0.60713 | 322 | 2,777 | 5.173913 | 0.39441 | 0.018007 | 0.034214 | 0.028812 | 0.092437 | 0.092437 | 0.054022 | 0.054022 | 0.054022 | 0 | 0 | 0.004668 | 0.305726 | 2,777 | 86 | 92 | 32.290698 | 0.85944 | 0.211739 | 0 | 0.152542 | 0 | 0 | 0.126984 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067797 | false | 0 | 0.118644 | 0 | 0.355932 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a27e67458484d19b4af63f71c53d899cfda6065d | 5,949 | py | Python | strong_glm/glm/survival/survival_glm.py | strongio/strong-glm | db05cb8a297858e46961e5d91105a515531dfdbb | [
"MIT"
] | 2 | 2021-04-20T17:00:03.000Z | 2022-03-03T16:33:01.000Z | strong_glm/glm/survival/survival_glm.py | strongio/strong-glm | db05cb8a297858e46961e5d91105a515531dfdbb | [
"MIT"
] | 1 | 2020-02-26T16:48:56.000Z | 2020-02-26T16:48:56.000Z | strong_glm/glm/survival/survival_glm.py | strongio/strong-glm | db05cb8a297858e46961e5d91105a515531dfdbb | [
"MIT"
] | null | null | null | from typing import Sequence, Union, Optional, Type
import torch
from sklearn.compose import ColumnTransformer
from torch.distributions import Distribution
from strong_glm.glm.base import Glm, LBFGS
from strong_glm.glm.survival.censoring import CensScaler, km_summary
from strong_glm.glm.survival.loss import CensNegLogProbLoss
from strong_glm.utils import to_tensor
import numpy as np
class SurvivalGlm(Glm):
criterion_cls = CensNegLogProbLoss
def __init__(self,
distribution: Type[Distribution],
scale_y: bool,
lr: float = .05,
module: Optional[Type[torch.nn.Module]] = None,
optimizer: torch.optim.Optimizer = LBFGS,
distribution_param_names: Optional[Sequence[str]] = None,
**kwargs):
self.scale_y = scale_y
self.y_scaler_ = None
super().__init__(
distribution=distribution,
lr=lr,
module=module,
optimizer=optimizer,
distribution_param_names=distribution_param_names,
**kwargs
)
def fit(self, X, y=None, **fit_params):
# initialize/reset scaler:
if self.scale_y:
if not self.y_scaler_:
self.y_scaler_ = CensScaler()
self.y_scaler_._reset()
return super().fit(X=X, y=y, **fit_params)
def partial_fit(self, X, y=None, classes=None, **fit_params):
# (partial_)fit scaler:
if self.scale_y:
if not self.y_scaler_:
self.y_scaler_ = CensScaler()
self.y_scaler_.partial_fit(y)
y = self.y_scaler_.transform(y)
return super().partial_fit(X=X, y=y, **fit_params)
def predict(self, X: Union[torch.Tensor, 'SliceDict'], type: str = 'mean', *args, **kwargs) -> 'ndarray':
if self.verbose and self.scale_y:
print("Reminder: Model was fit w/scale_y=True, so predictions are after scaling. See `model.y_scaler_`")
return super().predict(X=X, type=type, *args, **kwargs)
def km_summary(self,
dataframe: 'DataFrame',
preprocessor: Union[ColumnTransformer, Sequence],
time_colname: str,
censor_colname: str,
start_time_colname: Optional[str] = None) -> 'DataFrame':
"""
:param dataframe: A pandas DataFrame, or a DataFrameGroupBy (i.e., the result of calling `df.groupby([...])`).
:param preprocessor: Either a sklearn ColumnTransformer that takes the dataframe and returns X, or a list of
column-names (such that `X = dataframe.loc[:,preprocessor].values`)
:param time_colname: The column-name in the dataframe for time-to-event.
:param censor_colname: The column-name in the dataframe for the censoring indicator.
:param start_time_colname: Optional, the column-name in the dataframe for start-times (for left-truncation).
:return: A DataFrame with kaplan-meier estimates.
"""
try:
from pandas.core.groupby.generic import DataFrameGroupBy
except ImportError as e:
raise ImportError("Must install pandas for `km_summary`") from e
if isinstance(dataframe, DataFrameGroupBy):
df_applied = dataframe.apply(
self.km_summary,
preprocessor=preprocessor,
time_colname=time_colname,
censor_colname=censor_colname,
start_time_colname=start_time_colname
)
index_idx = [i for i, _ in enumerate(df_applied.index.names)]
return df_applied.reset_index(level=index_idx[:-1], drop=False)
else:
# preprocess X:
if hasattr(preprocessor, 'transform'):
X = preprocessor.transform(dataframe)
else:
X = dataframe.loc[:, preprocessor].values
X = to_tensor(X, device=self.device, dtype=self.module_dtype_)
# km estimate:
df_km = km_summary(
time=dataframe[time_colname].values,
is_upper_cens=dataframe[censor_colname],
lower_trunc=dataframe[start_time_colname] if start_time_colname else None
)
# generate predicted params, transpose as inputs to distribution:
with torch.no_grad():
y_preds = self.infer(X)
kwargs = {k: y_true[None, :] for k, y_true in zip(self.distribution_param_names_, y_preds)}
distribution = self.distribution(**kwargs)
# get unique times in distribution-friendly format:
y = df_km.loc[:, ['time']].values
if self.scale_y:
y = self.y_scaler_.transform(y)
y = to_tensor(y, device=self.device, dtype=self.module_dtype_)
# b/c dist-kwargs transposed, broadcasting logic means we get array with dims: (times, dataframe_rows)
observed = y[:, [0]]
surv = 1. - distribution.cdf(observed)
if start_time_colname:
# TODO: either figure out if taking the average is valid, or emit a warning
min_ltrunc = np.full_like(observed, fill_value=dataframe[start_time_colname].min())
if self.scale_y:
min_ltrunc = self.y_scaler_.transform(min_ltrunc)
min_ltrunc = to_tensor(min_ltrunc, device=self.device, dtype=self.module_dtype_)
surv /= (1. - distribution.cdf(min_ltrunc))
# this is then reduced, collapsing across dataframe rows, so that we get a mean estimate for this dataset
df_km['model_estimate'] = torch.mean(surv, dim=1)
return df_km
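    # Hedged usage sketch (all names hypothetical): for a fitted model `m` and
    # a pandas DataFrame `df` with feature columns ["age", "dose"], something
    # like
    #   m.km_summary(df.groupby("arm"), preprocessor=["age", "dose"],
    #                time_colname="time", censor_colname="censored")
    # would return per-group Kaplan-Meier estimates with a "model_estimate"
    # column alongside them.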
def estimate_laplace_params(self, X, y, **fit_params):
y = self.y_scaler_.transform(y)
return super().estimate_laplace_params(X=X, y=y, **fit_params)
| 43.108696 | 118 | 0.611195 | 711 | 5,949 | 4.917018 | 0.285513 | 0.040904 | 0.034611 | 0.01373 | 0.177632 | 0.13873 | 0.135011 | 0.088673 | 0.037757 | 0.037757 | 0 | 0.001673 | 0.296689 | 5,949 | 137 | 119 | 43.423358 | 0.833891 | 0.180198 | 0 | 0.132653 | 0 | 0.010204 | 0.040859 | 0 | 0 | 0 | 0 | 0.007299 | 0 | 1 | 0.061224 | false | 0 | 0.122449 | 0 | 0.265306 | 0.010204 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2835f76300ac2af88eed3dcdbc78c9c3bd0f54a | 2,848 | py | Python | src/genpen/axicam.py | ANaka/genpen | 08f811dde40596a774ab4343af45a0ac0896840e | [
"MIT"
] | null | null | null | src/genpen/axicam.py | ANaka/genpen | 08f811dde40596a774ab4343af45a0ac0896840e | [
"MIT"
] | null | null | null | src/genpen/axicam.py | ANaka/genpen | 08f811dde40596a774ab4343af45a0ac0896840e | [
"MIT"
] | null | null | null | import signal
from pov import pov
from pathlib import Path
import fn
import time
import vpype
from pyaxidraw import axidraw
from contextlib import closing
from genpen import genpen as gp
from tqdm import tqdm
class GracefulExiter():
# definitely jacked this from somewhere on stack overflow
def __init__(self):
self.state = False
signal.signal(signal.SIGINT, self.change_state)
def change_state(self, signum, frame):
print("exit flag set to True (repeat to exit now)")
signal.signal(signal.SIGINT, signal.SIG_DFL)
self.state = True
def exit(self):
return self.state
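# Hedged note on intended use (mirrors plot_layers below): create
# flag = GracefulExiter() before a long loop and check flag.exit() on each
# iteration; the first Ctrl-C only sets the flag so the current step can
# finish, while a second Ctrl-C falls back to the default handler and
# interrupts immediately.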
class AxiCam(object):
def __init__(
self,
svg_path=None,
image_savedir=None,
plot_id=None,
cam=None,
):
if plot_id == None:
plot_id = fn.get_current_plot_id()
self.plot_id = plot_id
if svg_path == None:
svg_path = Path(gp.SVG_SAVEDIR).joinpath(plot_id).with_suffix('.svg')
self.svg_path = svg_path
self.ad = axidraw.AxiDraw()
self.ad.plot_setup(self.svg_path)
self.ad.options.mode = "layers"
self.ad.options.units = 2
self.ad.update()
self.doc = vpype.read_multilayer_svg(self.svg_path, 0.1)
self.image_savedir = image_savedir
self.cam = cam
@property
def n_layers(self):
return len(self.doc.layers)
def plot_layer(self, cam, layer_number, wait_time=0.):
self.ad.options.layer = layer_number
self.ad.plot_run()
time.sleep(wait_time)
cam.save_image()
def init_cam(self, camera_index=None):
if self.cam is not None:
try:
self.cam.close()
except:
pass
self.cam = pov.Camera(savedir=self.image_savedir, camera_index=camera_index)
return self.cam
def plot_layers(self, prog_bar=True, wait_times=0., start_layer=0, stop_layer=None):
wait_times = gp.make_callable(wait_times)
if stop_layer is None:
stop_layer = self.n_layers
iterator = range(start_layer, stop_layer)
if prog_bar:
iterator = tqdm(iterator)
flag = GracefulExiter()
self.init_cam()
for layer_number in iterator:
wait_time = wait_times()
self.plot_layer(cam=self.cam, layer_number=layer_number, wait_time=wait_time)
if flag.exit():
self.cam.close()
break
self.cam.save_image()
self.cam.close()
def toggle_pen(self):
self.ad.options.mode = 'toggle'
self.ad.plot_run()
self.ad.options.mode = 'layers' | 28.19802 | 89 | 0.582514 | 362 | 2,848 | 4.378453 | 0.290055 | 0.037855 | 0.041009 | 0.032177 | 0.029022 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003143 | 0.329705 | 2,848 | 101 | 90 | 28.19802 | 0.827135 | 0.019312 | 0 | 0.063291 | 0 | 0 | 0.022923 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.113924 | false | 0.012658 | 0.126582 | 0.025316 | 0.303797 | 0.012658 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a284bcce6d0ebd92b209594a0d385cd9ac12adb1 | 20,043 | py | Python | pythia/pythia.py | ChrisTimperley/Pythia | 2daa6ee6462abe8b6c727d1dc70aca5ab0efa9d3 | [
"MIT"
] | null | null | null | pythia/pythia.py | ChrisTimperley/Pythia | 2daa6ee6462abe8b6c727d1dc70aca5ab0efa9d3 | [
"MIT"
] | 3 | 2016-12-14T18:25:10.000Z | 2017-01-19T17:56:39.000Z | pythia/pythia.py | ChrisTimperley/Pythia | 2daa6ee6462abe8b6c727d1dc70aca5ab0efa9d3 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
from timeit import default_timer as timer
from subprocess import Popen, PIPE, TimeoutExpired
from pprint import pprint
import os
import signal
import re
import types
import argparse
import os.path
import json
import shutil
import subprocess
import sys
import tempfile
import resource
CONSTANT_NOISE = float(os.environ.get("PYTHIA_CONSTANT_NOISE", 0.5))
PLATFORM_SCALE = float(os.environ.get("PYTHIA_PLATFORM_SCALE", 1.0))
SCALING_FACTOR = float(os.environ.get("PYTHIA_LINEAR_NOISE", 2.0))
MEM_LIMIT = 1000 * (1000000) # 1 GB
INPUT_REGEX = r'(?<=\<\<SANDBOX>>\/)[\w|_|\.|-|\/]+\b'
# A special object used to indicate that a test execution timed out
class TestTimeout(object):
def pretty(self):
return "Timed out!"
# This function determines what the time limit should be for a test
# execution, given the duration of the oracle test execution and a flag
# indicating whether the execution is being used to generate coverage
# information.
def time_limit(duration, coverage_enabled):
scale = (10 * SCALING_FACTOR) if coverage_enabled else SCALING_FACTOR
return PLATFORM_SCALE * ((scale * duration) + CONSTANT_NOISE)
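# Hedged worked example with the default environment values above
# (PLATFORM_SCALE=1.0, SCALING_FACTOR=2.0, CONSTANT_NOISE=0.5): a 1.5s oracle
# run gets a limit of 1.0 * (2.0 * 1.5 + 0.5) = 3.5s without coverage, and
# 1.0 * (20.0 * 1.5 + 0.5) = 30.5s with coverage enabled.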
# Describes the state of the sandbox as a dictionary of file names and their
# associated SHA1 hashes.
def sandbox_state(d):
    # find's -exec clause must see a literal '\;'; writing '\\;' here keeps
    # that behaviour while avoiding Python's invalid-escape-sequence warning.
    cmd = "find '%s' -type f -exec sha1sum '{}' \\;" % d
state = subprocess.check_output(cmd, shell=True)
state = state.decode(sys.stdout.encoding)
state = [] if state == "" else state.splitlines(True)
state = map(lambda l: l.split(' ', 1), state)
state = {(f[len(d)+2:].strip()): h for (h, f) in state} # trim sandbox dir
return state
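# Hedged illustration (hypothetical sandbox contents): for a sandbox holding a
# single empty file out/result.txt, sandbox_state would return
# {"out/result.txt": "da39a3ee5e6b4b0d3255bfef95601890afd80709"}.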
# Provides a detailed description of a particular test suite
class TestManifest(object):
# loads a manifest from a given file
def __init__(self, fn):
assert os.path.isfile(fn), "specified manifest file must exist"
assert fn[-5:] == '.json', "specified manifest file must end in .json"
with open(fn, 'r') as f:
cases = json.load(f)
assert isinstance(cases, list), "manifest file must contain a JSON list"
cases = [TestCase.from_json(i, c) for (i, c) in enumerate(cases)]
self.__cases = cases
def contents(self):
return self.__cases
def get(self, num):
return self.__cases[num]
# Provides a particular test suite mapping
class TestMapping(object):
def __init__(self, fn):
assert os.path.isfile(fn), "specified mapping file must exist"
assert fn[-5:] == '.json', "specified mapping file must end in .json"
with open(fn, 'r') as f:
self.__mapping = json.load(f)
def mapping(self):
return self.__mapping
def get(self, id):
return self.__mapping[id]
class TestInput(object):
def __init__(self, maps_to, maps_from):
self.__maps_to = maps_to
self.__maps_from = maps_from
def maps_to(self):
return self.__maps_to
def maps_from(self):
return self.__maps_from
def preexecute():
os.setsid()
resource.setrlimit(resource.RLIMIT_AS, (MEM_LIMIT, MEM_LIMIT))
class TestCase(object):
@staticmethod
def from_json(num, jsn):
assert 'command' in jsn, "test case definition must specify 'command'"
inpts = [TestInput(t, f) for (t, f) in jsn.get('input', {}).items()]
return TestCase(num, jsn['command'], inpts)
def __init__(self, num, command, inpts):
self.__num = num
self.__command = command
self.__inpts = inpts
def number(self):
return self.__num
def command(self):
return self.__command
def to_json(self):
inpts_json = {i.maps_to(): i.maps_from() for i in self.__inpts}
return {'command': self.__command, 'input': inpts_json}
def execute(self, executable_fn, inputd, tlim):
assert tlim is None or tlim > 0
# generate a sandbox directory for this test execution
sandboxd = tempfile.mkdtemp()
# calculate the absolute path for the executable
executable_fn = os.path.abspath(executable_fn)
# execute the test case within the sandbox, then ensure it's destroyed
try:
cmd = self.__command.replace("<<EXECUTABLE>>", executable_fn)
cmd = cmd.replace("<<SANDBOX>>", ".")
# prepare the inputs
# TODO: for now the inputs are copied into the sandbox; in the
# future, we should allow the creation of symbolic links (when
# specified by the user).
for inpt in self.__inpts:
cp_from = os.path.join(inputd, inpt.maps_from())
# if the file doesn't exist within the inputs directory, don't
# try to copy it
if not os.path.exists(cp_from):
continue
cp_to = os.path.join(sandboxd, inpt.maps_to())
cp_to_dir = os.path.dirname(cp_to)
if not os.path.exists(cp_to_dir):
os.makedirs(cp_to_dir)
shutil.copy2(cp_from, cp_to)
# execute the command within the sandbox under the given time limit
#
# Credit to J.F. Sebastian on StackOverflow for advice on enforcing timeouts
# and killing process groups when shell=True is used.
#
# http://stackoverflow.com/questions/36952245/subprocess-timeout-failure
with Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE, preexec_fn=preexecute, cwd=sandboxd) as p:
try:
t_start = timer()
stdout, stderr = p.communicate(timeout=tlim)
t_end = timer()
stdout = str(stdout)[2:-1]
stderr = str(stderr)[2:-1]
retcode = p.returncode
state = sandbox_state(sandboxd)
return TestOutcome(stdout, stderr, retcode, state, (t_end - t_start))
# if the command timed out, return a special TestTimeout object
except TimeoutExpired:
os.killpg(p.pid, signal.SIGKILL)
return TestTimeout()
# ensure the sandbox is destroyed after execution
finally:
if os.path.exists(sandboxd):
shutil.rmtree(sandboxd)
# Describes the outcome of a test case execution in terms of the standard
# output, standard error, return code, and state of the sandbox.
class TestOutcome(object):
@staticmethod
def from_json(jsn):
return TestOutcome(jsn['out'], jsn['err'], jsn['retcode'], jsn['sandbox'], float(jsn['duration']))
def __init__(self, stdout, stderr, retcode, sandbox, duration):
self.__stdout = stdout
self.__stderr = stderr
self.__retcode = retcode
self.__sandbox = sandbox
self.__duration = float(duration)
def stdout(self):
return self.__stdout
def stderr(self):
return self.__stderr
def retcode(self):
return self.__retcode
def sandbox(self):
return self.__sandbox
def duration(self):
return self.__duration
def __eq__(self, other):
return not (other is None) and \
not isinstance(other, TestTimeout) and \
self.__stdout == other.stdout() and \
self.__stderr == other.stderr() and \
self.__retcode == other.retcode() and \
self.__sandbox == other.sandbox()
def pretty(self):
pprint(self.to_json())
def to_json(self):
return {'out': self.__stdout,\
'err': self.__stderr,\
'duration': self.__duration,\
'retcode': self.__retcode,\
'sandbox': self.__sandbox}
# Defines the intended behaviour for a program on a given test suite
class Oracle(object):
@staticmethod
def generate(manifest, executable_fn, input_d, oracle_fn):
assert isinstance(manifest, TestManifest), "manifest should be provided as a TestManifest object"
assert not os.path.exists(oracle_fn), "oracle file must not already exist"
assert oracle_fn[-5:] == '.json', "oracle file must end in '.json'"
# compute the expected outcomes for each test
outcomes = []
for case in manifest.contents():
print("-- running test: {}".format(case.command()))
sys.stdout.flush()
outcomes.append(case.execute(executable_fn, input_d, None))
oracle = Oracle(outcomes)
# write the outcomes to the specified file, ensuring the file is
# destroyed in the event of an exception (preventing a corrupted
# oracle).
try:
with open(oracle_fn, 'w') as f:
json.dump(oracle.to_json(), f, indent=2, sort_keys=True)
except:
if os.path.exists(oracle_fn):
os.remove(oracle_fn)
raise
return oracle
# Attempts to load an oracle from a given file
@staticmethod
def load(oracle_fn):
assert os.path.isfile(oracle_fn), "oracle file must exist"
assert oracle_fn[-5:] == '.json', "oracle file must end in '.json'"
with open(oracle_fn, 'r') as f:
return Oracle([TestOutcome.from_json(o) for o in json.load(f)])
def __init__(self, outcomes):
self.__outcomes = outcomes
# returns the expected outcome for a given test case
    # TODO: handle the case where the requested test case isn't in the oracle
def expected(self, test):
return self.__outcomes[test.number()]
def to_json(self):
return [o.to_json() for o in self.__outcomes]
def run_test(manifest, oracle, executable, inputs, test, coverage_enabled, verbose):
expected = oracle.expected(test)
tlim = time_limit(expected.duration(), coverage_enabled)
outcome = test.execute(executable, inputs, tlim)
passed = outcome == expected
if verbose:
print("Expected:")
expected.pretty()
print("\nActual:")
outcome.pretty()
print("")
if passed:
print("Finished running test case: PASSED")
exit(0)
else:
print("Finished running test case: FAILED")
exit(1)
# Generates the oracle for a given problem, storing its knowledge to disk at a
# specified oracle directory.
def generate(executable, tests="tests.pythia.json", inputs="inputs", output="oracle.pythia.json"):
assert os.path.isfile(executable), "specified executable must exist"
manifest = TestManifest(tests)
print("Generating oracle...")
Oracle.generate(manifest, executable, inputs, output)
print("Finished.\nSaved to disk at: %s" % output)
# Generates the oracle for a given problem, storing its knowledge to disk at a
# specified oracle directory. Uses arguments provided by the command line.
def action_generate(args):
return generate(args.executable, args.tests, args.inputs, args.output)
# Runs a test case with a given number against the oracle
def action_run(args):
manifest = TestManifest(args.tests)
oracle = Oracle.load(args.oracle)
test = manifest.get(args.num)
print("Running test case %d: %s" % (args.num, test.command()))
return run_test(manifest, oracle, args.executable, args.inputs, test, args.coverage, args.verbose)
# Runs a test case with a given ID, supplied by the mapping file, against the
# oracle
def action_run_by_id(args):
manifest = TestManifest(args.tests)
mapping = TestMapping(args.mapping)
oracle = Oracle.load(args.oracle)
test_num = mapping.get(args.id)
test = manifest.get(test_num)
print("Running test case %s: %s" % (args.id, test.command()))
return run_test(manifest, oracle, args.executable, args.inputs, test, args.coverage, args.verbose)
# Constructs a test manifest for a given problem by converting its MTS output
def action_build_mts(args):
# generate mts command list
try:
subprocess.check_call(["mts",\
args.object, args.executable, args.universe,\
"R",\
"commands.txt", "NULL", "NULL"])
print("Generated MTS output")
subprocess.check_call(("grep -e '^%s' commands.txt | sponge commands.txt" % args.executable),\
shell=True)
# WARN:
# really, the matching portion of this should be: ${object}/inputs
# however, MTS seems to produce "../inputs", regardless of where the
# object directory may be
subprocess.check_call("sed -i 's;../inputs;<<SANDBOX>>;g' commands.txt",\
shell=True)
subprocess.check_call(("sed -i 's; > %s/outputs/.\+$;;g' commands.txt" % args.object),\
shell=True)
subprocess.check_call(("sed -i 's;%s;<<EXECUTABLE>>;g' commands.txt" % args.executable),\
shell=True)
print("Sanitised MTS output into list of Pythia commands")
# destroy commands.txt in the event of an error
except:
if os.path.exists("commands.txt"):
os.remove("commands.txt")
raise
# convert the commands list into a JSON Pythia manifest
manifest = []
with open("commands.txt", "r") as f:
for cmd in f:
cmd = cmd.strip()
inpts = {}
for inpt in re.findall(INPUT_REGEX, cmd):
inpts[inpt] = inpt
manifest.append({'command': cmd, 'input': inpts})
# write the manifest to file
with open(args.output, "w") as f:
json.dump(manifest, f, indent=2, sort_keys=True)
# Determines the passing and failing tests for a variant of a program against
# the oracle. These results are used to generate a mapping between GenProg
# style test identifiers and their numbers in the test manifest.
def action_map(args):
assert not os.path.exists("map.pythia.json"), "map.pythia.json must not exist within working directory"
manifest = TestManifest(args.tests)
oracle = Oracle.load(args.oracle)
print("Generating test case mapping...")
m = {}
failed = []
num_passed = 0
num_failed = 0
num_tests = len(manifest.contents())
for (i, test) in enumerate(manifest.contents()):
expected = oracle.expected(test)
tlim = time_limit(expected.duration(), False)
print("Running test case: {} [{}/{}]".format(test.command(), i, num_tests))
actual = test.execute(args.executable, args.inputs, tlim)
outcome = actual == expected
if outcome:
num_passed += 1
m["p%d" % num_passed] = test.number()
else:
num_failed += 1
failed.append(str(test.number()))
m["n%d" % num_failed] = test.number()
# debugging
print("Generated test case mapping")
print("Found %d failing tests: %s" % (num_failed, ', '.join(failed)))
print("Saving map file to disk at: map.pythia.json")
with open('map.pythia.json', 'w') as f:
json.dump(m, f, indent=2, sort_keys=True)
print("Saved map file to: map.pythia.json")
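# Hedged illustration of the mapping written above (hypothetical outcomes):
# for tests 0-3 where only test 2 fails, map.pythia.json would contain
# {"p1": 0, "p2": 1, "n1": 2, "p3": 3}.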
# CLI setup
PARSER = argparse.ArgumentParser()
SUBPARSERS = PARSER.add_subparsers()
PARSER.add_argument('--version', action='version', version='0.0.1')
# generate action
GENERATE_PARSER = SUBPARSERS.add_parser('generate')
GENERATE_PARSER.add_argument('executable',\
help='location of program executable')
GENERATE_PARSER.add_argument('--inputs',\
help='location of test case inputs directory',\
default='inputs')
GENERATE_PARSER.add_argument('-t', '--tests',\
help='location of test suite manifest file',\
default='tests.pythia.json')
GENERATE_PARSER.add_argument('-o', '--output',\
help='file to save oracle at',\
default='oracle.pythia.json')
GENERATE_PARSER.set_defaults(func=action_generate)
# convert-mts action
BUILD_MTS_PARSER = SUBPARSERS.add_parser('build-manifest-from-mts')
BUILD_MTS_PARSER.add_argument('object',\
help='location of root, object directory')
BUILD_MTS_PARSER.add_argument('universe',\
help='location of TSL universe file')
BUILD_MTS_PARSER.add_argument('executable',\
help='location of program executable')
BUILD_MTS_PARSER.add_argument('-o', '--output',\
help='file to save the converted test manifest at',\
default='tests.pythia.json')
BUILD_MTS_PARSER.set_defaults(func=action_build_mts)
# run action
RUN_PARSER = SUBPARSERS.add_parser('run')
RUN_PARSER.add_argument('executable',\
help='location of program executable')
RUN_PARSER.add_argument('num',\
type=int,\
help='number of the test case that should be executed')
RUN_PARSER.add_argument('--inputs',\
help='location of test case inputs directory',\
default='inputs')
RUN_PARSER.add_argument('--oracle',\
help='location of oracle file, used for validation',\
default='oracle.pythia.json')
RUN_PARSER.add_argument('-t', '--tests',\
help='location of test suite manifest file',\
default='tests.pythia.json')
RUN_PARSER.add_argument('--coverage',\
action='store_true',\
help='flag indicating whether coverage is enabled',\
default=False)
RUN_PARSER.add_argument('--verbose',\
action='store_true',\
help='flag to control verbosity',\
default=False)
RUN_PARSER.set_defaults(func=action_run)
# run by id action
RUN_ID_PARSER = SUBPARSERS.add_parser('run-by-id')
RUN_ID_PARSER.add_argument('executable',\
help='location of program executable')
RUN_ID_PARSER.add_argument('id',\
help='id of the test that should be run')
RUN_ID_PARSER.add_argument('--inputs',\
help='location of test case inputs directory',\
default='inputs')
RUN_ID_PARSER.add_argument('--oracle',\
help='location of oracle file, used for validation',\
default='oracle.pythia.json')
RUN_ID_PARSER.add_argument('-t', '--tests',\
help='location of test suite manifest file',\
default='tests.pythia.json')
RUN_ID_PARSER.add_argument('--mapping',\
help='location of test mapping file',\
default='map.pythia.json')
RUN_ID_PARSER.add_argument('--coverage',\
action='store_true',\
help='flag indicating whether coverage is enabled',\
default=False)
RUN_ID_PARSER.add_argument('--verbose',\
action='store_true',\
help='flag to control verbosity',\
default=False)
RUN_ID_PARSER.set_defaults(func=action_run_by_id)
# map action
MAP_PARSER = SUBPARSERS.add_parser('map')
MAP_PARSER.add_argument('executable',\
help='location of program executable')
MAP_PARSER.add_argument('--inputs',\
help='location of test case inputs directory',\
default='inputs')
MAP_PARSER.add_argument('--oracle',\
help='location of oracle file, used for validation',\
default='oracle.pythia.json')
MAP_PARSER.add_argument('-t', '--tests',\
help='location of test suite manifest file',\
default='tests.pythia.json')
MAP_PARSER.set_defaults(func=action_map)
def main():
args = PARSER.parse_args()
if 'func' in vars(args):
args.func(args)
| 40.737805 | 108 | 0.611136 | 2,465 | 20,043 | 4.832454 | 0.161866 | 0.021911 | 0.03996 | 0.0136 | 0.315228 | 0.251427 | 0.222884 | 0.215329 | 0.209453 | 0.164288 | 0 | 0.003811 | 0.279898 | 20,043 | 491 | 109 | 40.820774 | 0.82152 | 0.147184 | 0 | 0.212766 | 0 | 0 | 0.17948 | 0.008927 | 0 | 0 | 0 | 0.002037 | 0.037234 | 1 | 0.111702 | false | 0.015957 | 0.039894 | 0.053191 | 0.25 | 0.053191 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a28659f0ec57ee57b4b3fa8ceb49d7d5a13ba39b | 179,765 | py | Python | lamb/meta.py | rawlins/lambda-notebook | 5da92a8625b9ad516d5084ca833d71eec14e0da3 | [
"BSD-3-Clause"
] | 13 | 2016-01-18T17:33:50.000Z | 2022-01-19T02:43:08.000Z | lamb/meta.py | rawlins/lambda-notebook | 5da92a8625b9ad516d5084ca833d71eec14e0da3 | [
"BSD-3-Clause"
] | 11 | 2016-02-10T20:48:44.000Z | 2019-09-06T15:31:53.000Z | lamb/meta.py | rawlins/lambda-notebook | 5da92a8625b9ad516d5084ca833d71eec14e0da3 | [
"BSD-3-Clause"
] | 4 | 2018-11-27T14:45:49.000Z | 2021-06-05T17:08:40.000Z | #!/usr/local/bin/python3
# -*- coding: utf-8 -*-
import sys, re, logging, random
from numbers import Number
from lamb import types, utils, parsing, display
from lamb.types import TypeMismatch, type_e, type_t, type_n
from lamb.types import type_property, type_transitive, BasicType, FunType
from lamb.utils import *
global logger
def setup_logger():
"""Set up a module-level logger called `logger` for use across `lamb`
modules."""
global logger
logger = logging.getLogger("lamb")
logger.handlers = list() # otherwise, will get double output on reload
# (since this just keeps adding handlers)
logger.setLevel(logging.INFO)
logger.propagate = False
# note that basicConfig does _not_ work for interactive ipython sessions,
# including notebook.
infoch = logging.StreamHandler(sys.stdout)
infoch.setFormatter(logging.Formatter(
'%(levelname)s (%(module)s): %(message)s'))
def info_filter(record):
if record.levelno == logging.INFO:
return 1
else:
return 0
infoch.addFilter(info_filter)
infoch.setLevel(logging.INFO)
errch = logging.StreamHandler(sys.stderr)
#ch.setLevel(logging.INFO)
errch.setLevel(logging.WARNING)
errch.setFormatter(logging.Formatter(
'%(levelname)s (%(module)s): %(message)s'))
logger.addHandler(errch)
logger.addHandler(infoch)
setup_logger()
global _constants_use_custom, _type_system
_constants_use_custom = False
global _parser_assignment
_parser_assignment = None
def constants_use_custom(v):
"""Set whether constants use custom display routines."""
global _constants_use_custom
_constants_use_custom = v
# TODO: could consider associating TypedExpr with a type system rather than
# using the global variable. advantages: generality. Disadvantages: may be a
# little pointless in practice?
def set_type_system(ts):
"""Sets the current type system for the metalanguage. This is a global
setting."""
global _type_system
_type_system = ts
def get_type_system():
"""Gets the current (global) type system for the metalanguage."""
return _type_system
def ts_unify(a, b):
"""Calls the current type system's `unify` function on types `a` and `b`.
This returns a unified type, or `None` if the two can't be unified."""
ts = get_type_system()
return ts.unify(a, b)
global unify
unify = ts_unify
def ts_compatible(a, b):
"""Returns `True` or `False` depending on whether `a` and `b` are
compatible types."""
ts = get_type_system()
return ts.unify(a,b) is not None
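# Hedged illustration, assuming the default polymorphic type system and the
# tp() parser defined below: ts_unify(tp("<e,t>"), tp("<X,t>")) yields <e,t>,
# while ts_compatible(tp("e"), tp("t")) is False.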
def check_type(item, typ, raise_tm=True, msg=None):
ts = get_type_system()
if not ts.eq_check(item.content.type, typ):
if raise_tm:
raise types.TypeMismatch(item, typ, msg)
else:
return None
else:
return item
def tp(s):
"""Convenience wrapper for the current type system's type parser."""
ts = get_type_system()
result = ts.type_parser(s)
return result
def let_wrapper(s):
result = derived(s.compact_type_vars(), s, "Let substitution")
result.let = True
return result
def te(s, let=True, assignment=None):
"""Convenience wrapper for `lang.TypedExpr.factory`."""
result = TypedExpr.factory(s, assignment=assignment)
if let and isinstance(result, TypedExpr):
result = let_wrapper(result)
return result
def term(s, typ=None, assignment=None):
"""Convenience wrapper for building terms.
`s`: the term's name.
`typ`: the term's type, if specified."""
return TypedTerm.term_factory(s, typ=typ, assignment=assignment)
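# Hedged usage sketch of the convenience wrappers above (metalanguage syntax
# is assumed, not verified here): te("L x_e : P_<e,t>(x)") would build a
# lambda expression of type <e,t>, and term("f", tp("<e,t>")) a typed term.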
_type_system = types.poly_system
unary_tf_ops = set(['~'])
binary_tf_ops = set(['>>', '<<', '&', '|', '<=>', '%'])
tf_ops = unary_tf_ops | binary_tf_ops
unary_num_ops = set(['-'])
binary_num_rels = set(['<', '<=', '>=', '>'])
binary_num_ops = {'+', '-', '/', '*', '**'}
num_ops = unary_num_ops | binary_num_ops | binary_num_rels
basic_ops = tf_ops | num_ops
basic_ops_to_latex = {
"&": "\\wedge{}",
"|": "\\vee{}",
"~": "\\neg{}",
">>": "\\rightarrow{}"
}
def text_op_in_latex(op):
if op in basic_ops_to_latex:
return basic_ops_to_latex[op]
return op
global partiality_analysis, partiality_strict, partiality_weak
partiality_strict = "strict"
partiality_weak = "weak"
partiality_analysis = partiality_weak
class TypeEnv(object):
def __init__(self, var_mapping=None, type_mapping=None):
self.type_var_set = set()
if type_mapping is None:
self.type_mapping = dict()
else:
self.type_mapping = type_mapping
if var_mapping is None:
self.var_mapping = dict()
else:
self.var_mapping = var_mapping
self.update_var_set()
def _repr_html_(self):
s = "<table>"
s += ("<tr><td>Term mappings: </td><td>%s</td></tr>" %
utils.dict_latex_repr(self.var_mapping))
s += ("<tr><td>Type mappings: </td><td>%s</td></tr>" %
utils.dict_latex_repr(self.type_mapping))
s += ("<tr><td>Type variables: </td><td>%s</td></tr>" %
utils.set_latex_repr(self.type_var_set))
s += "</table>"
return s
def update_var_set(self):
s = types.vars_in_env(self.var_mapping)
s = s | set(self.type_mapping.keys())
for m in self.type_mapping:
s = s | self.type_mapping[m].bound_type_vars()
self.type_var_set = s
def term_by_name(self, vname):
if vname in self.var_mapping:
return TypedTerm(vname, self.var_mapping[vname],
defer_type_env=True)
else:
return None
def add_var_mapping(self, vname, typ):
result = self.try_add_var_mapping(vname, typ)
if result is None:
raise TypeMismatch(self.term_by_name(vname), typ,
"Failed to unify types across distinct instances of term")
return result
def try_add_var_mapping(self, vname, typ):
ts = get_type_system()
if vname in self.var_mapping:
principal = self.try_unify(self.var_mapping[vname], typ,
update_mapping=True)
if principal is None:
return None
assert principal is not None
self.var_mapping[vname] = principal
self.update_type_vars()
else:
assert typ is not None
self.var_mapping[vname] = typ
principal = typ
self.add_type_to_var_set(principal)
return principal
def try_unify(self, t1, t2, update_mapping=False):
ts = get_type_system()
result = ts.unify_details(t1, t2, assignment=self.type_mapping)
if result is None:
return None
else:
if update_mapping:
self.type_mapping = result.mapping
self.update_var_set()
return result.principal
def add_type_to_var_set(self, typ):
self.type_var_set = self.type_var_set | typ.bound_type_vars()
def update_type_vars(self):
for k in self.var_mapping:
# note that the following is not generally safe, but here we are
# working with TypedTerms that have no TypeEnv
new_type = self.var_mapping[k].sub_type_vars(self.type_mapping)
self.var_mapping[k] = new_type
def try_add_type_mapping(self, type_var, typ, defer=False):
if isinstance(typ, types.VariableType):
if typ in self.type_var_set or type_var in self.type_var_set:
principal = self.try_unify(type_var, typ, update_mapping=True)
else:
principal = type_var
self.type_mapping[type_var] = typ
self.type_var_set = self.type_var_set | {type_var, typ}
else:
principal = self.try_unify(type_var, typ, update_mapping=True)
if not defer:
self.update_type_vars()
return principal
def add_type_mapping(self, type_var, typ, defer=False):
principal = self.try_add_type_mapping(type_var, typ, defer=defer)
if principal is None:
raise TypeMismatch(self.type_mapping[type_var], typ,
"Failed to unify type variable %s across contexts" % type_var)
return principal
def merge(self, tenv):
for v in tenv.type_mapping:
self.add_type_mapping(v, tenv.type_mapping[v], defer=True)
self.update_type_vars()
for v in tenv.var_mapping:
self.add_var_mapping(v, tenv.var_mapping[v])
self.type_var_set |= tenv.type_var_set
return self
def intersect_merge(self, tenv):
for v in tenv.type_mapping:
if (v in self.type_var_set
or len(tenv.type_mapping[v].bound_type_vars()
& self.type_var_set) > 0):
self.add_type_mapping(v, tenv.type_mapping[v], defer=True)
self.update_type_vars()
for v in tenv.var_mapping:
self.add_var_mapping(v, tenv.var_mapping[v])
return self
def copy(self):
env = TypeEnv(self.var_mapping.copy(), self.type_mapping.copy())
env.type_var_set = self.type_var_set.copy()
return env
def __repr__(self):
return ("[TypeEnv: Variables: "
+ repr(self.var_mapping)
+ ", Type mapping: "
+ repr(self.type_mapping)
+ ", Type variables: "
+ repr(self.type_var_set)
+ "]")
def merge_type_envs(env1, env2, target=None):
"""Merge two type environments. A type environment is simply an assignment,
where the mappings to terms are used to define types. Other mappings are
ignored.
If `target` is set, it specifies a set of variable names to specifically
target; anything not in it is ignored.
If `target` is None, all mappings are merged."""
ts = get_type_system()
result = dict()
for k1 in env1:
if target and not k1 in target:
continue
if (not env1[k1].term()):
continue
if k1 in env2:
unify = ts.unify(env1[k1].type, env2[k1].type)
if unify is None:
raise TypeMismatch(env1[k1], env2[k1],
"Failed to unify types across distinct instances of term")
result[k1] = env1[k1].try_adjust_type(unify)
else:
result[k1] = env1[k1]
for k2 in env2:
if target and not k2 in target:
continue
if not env2[k2].term():
continue
if k2 not in env1:
result[k2] = env2[k2]
return result
def merge_tes(te1, te2, symmetric=True):
"""Produce a TypedExpr that is the result of 'merging' `te1` and `te2`.
TypedExprs can be merged only if their types can match. This has two types
of behaviors:
* Symmetric: if `te1` is a term and `te2` is not a term, return te2 coerced
to the principal type; vice versa for `te2` and `te1`. Otherwise, if they are
equal (using `==`, which checks structural/string identity) return the
result at the principal type.
* Non-symmetric: if `te1` is a term, return `te2` at the principal type.
Otherwise, return something (at the principal type) only if `te1` and
`te2` are equal.
The failure cases for both modes will raise a TypeMismatch.
"""
ts = get_type_system()
principal = ts.unify(te1.type, te2.type)
# TODO: these error messages are somewhat cryptic
if principal is None:
raise TypeMismatch(te1, te2,
"Failed to merge typed expressions (incompatible types)")
te1_new = te1.try_adjust_type(principal)
te2_new = te2.try_adjust_type(principal)
if te1_new is None or te2_new is None:
raise TypeMismatch(te1, te2,
"Failed to merge typed expressions (type adjustment failed)")
if te1_new.term():
if symmetric and te2_new.term() and not (te1_new == te2_new):
raise TypeMismatch(te1, te2,
"Failed to merge typed expressions; result is not equal")
return te2_new
elif symmetric and te2_new.term():
return te1_new
else:
if not (te1_new == te2_new):
raise TypeMismatch(te1, te2,
"Failed to merge typed expressions; result is not equal")
return te1_new
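# Sketch of the two modes documented above (hypothetical expressions;
# `nonterm` and `nonterm2` stand in for non-term TypedExprs of compatible
# types and are not defined here):
#     merge_tes(term("x_X"), nonterm)                # symmetric: returns nonterm
#                                                    # at the principal type
#     merge_tes(term("x_e"), term("x_X"))            # both terms: equal after
#                                                    # adjustment, returned at type e
#     merge_tes(nonterm, nonterm2, symmetric=False)  # succeeds only if the two
#                                                    # are == at the principal type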
class TypedExpr(object):
"""Basic class for a typed n-ary logical expression in a formal language.
This class should generally be constructed using the factory method, not the
constructor.
Three key fields:
* type: an object that implements the type interface.
* op: an object representing the operator in the expression.
* args: _n_ args representing the arguments (if any) to the operator.
The op field:
* may be a string representing the operator symbol. This case mostly
covers special hard-coded logical/numeric operators. May be used in
subclasses such as LFun. NOTE: for hard-coded operators this is now
deprecated, call the factory function.
* May be itself a TypedExpr. (For example, an LFun with the right type.)
If so, there must be exactly one argument of the correct type.
* May be a term name, treating this case as either a 0-ary operator or an
unsaturated term. Note that right now, this _only_ occurs in
subclasses. (TypedTerm)
Originally based on logic.Expr (from aima python); now long diverged.
"""
def __init__(self, op, *args, defer=False):
"""
Constructor for TypedExpr class. This should generally not be called
directly; rather, the factory function should be used. In fact,
TypedExpr is not currently ever directly instantiated.
This is intended only for calls from subclass `__init__`. It (at this
stage) amounts to a convenience function that sets some common
variables -- a subclass that does not call this should ensure that
these are all set. self.args must be a list (not a tuple).
WARNING: this function does not set self.type, which _must_ be set.
It does not perform any type-checking.
`defer`: annotate with this if the TypedExpr does not conform to type
constraints. (Useful for derivational histories or error reports.)
"""
self.type_guessed = False
self.derivation = None
self.defer = defer
self.let = False
if (len(args) == 0):
args = list()
self.op = op
self.args = list(args)
def _type_cache_get(self, t):
try:
cache = self._type_adjust_cache
except AttributeError:
self._type_adjust_cache = dict()
return False
if t in cache:
return cache[t] #.deep_copy()
else:
return False
def _type_cache_set(self, t, result):
try:
cache = self._type_adjust_cache
except AttributeError:
self._type_adjust_cache = dict()
cache = self._type_adjust_cache
cache[t] = result
def try_adjust_type_caching(self, new_type, derivation_reason=None,
assignment=None, let_step=None):
cached = self._type_cache_get(new_type)
if cached is not False:
return cached
if let_step is not None:
result = let_step.try_adjust_type(new_type,
derivation_reason=derivation_reason, assignment=assignment)
# TODO: freshen variables again here?
else:
result = self.try_adjust_type(new_type,
derivation_reason=derivation_reason, assignment=assignment)
self._type_cache_set(new_type, result)
return result
def try_adjust_type(self, new_type, derivation_reason=None,
assignment=None):
"""Attempts to adjust the type of `self` to be compatible with
`new_type`.
If the types already match, it returns self.
If it succeeds, it returns a modified _copy_ of self.
If unify suggests a strengthened type, but it can't get there, it
returns self and prints a warning.
If it fails completely, it returns None."""
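# Illustrative behavior sketch (not a doctest; exact results depend on the
# type system and on subclasses' try_adjust_type_local):
#     f = term("f", tp("<X,t>"))
#     f.try_adjust_type(tp("<e,t>"))   # X unifies with e: returns an adjusted copy
#     f.try_adjust_type(tp("n"))       # no unification: returns None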
ts = get_type_system()
env = self.get_type_env().copy()
unify_target = env.try_unify(self.type, new_type, update_mapping=True)
if unify_target is None:
return None
if self.type == unify_target:
self._type_cache_set(self.type, self)
return self
else:
assert not isinstance(self.op, TypedExpr)
if derivation_reason is None:
derivation_reason = "Type adjustment"
if self.term():
new_term = self.copy()
principal = env.try_add_var_mapping(new_term.op, new_type)
if principal is None:
return None
new_term._type_env = env
new_term.type = principal
if assignment is not None and new_term.op in assignment:
assignment[new_term.op] = new_term
return derived(new_term, self, derivation_reason)
else:
# use the subclass' type adjustment function
result = self.try_adjust_type_local(unify_target,
derivation_reason, assignment, env)
if result is not None:
result = result.under_type_assignment(env.type_mapping)
if result is not None:
result._type_env = env
if result is None:
logger.warning(
"In type adjustment, unify suggested a strengthened arg"
" type, but could not accommodate: %s -> %s"
% (self.type, unify_target))
return self
else:
return derived(result, self, derivation_reason)
def try_adjust_type_local(self, unified_type, derivation_reason,
assignment, env):
# write an error instead of throwing an exception -- this is easier for
# the user to handle atm
logger.error("Unimplemented `try_adjust_type_local` in class '%s'"
% type(self).__name__)
return None
def get_type_env(self, force_recalc=False):
if force_recalc:
self._type_env = self.calc_type_env(recalculate=force_recalc)
try:
return self._type_env
except AttributeError:
self._type_env = self.calc_type_env(recalculate=force_recalc)
return self._type_env
def calc_type_env(self, recalculate=False):
env = TypeEnv()
for part in self:
if isinstance(part, TypedExpr):
env.merge(part.get_type_env(force_recalc=recalculate))
return env
def _unsafe_subst(self, i, s):
self.args[i] = s
return self
def subst(self, i, s, assignment=None):
s = TypedExpr.ensure_typed_expr(s)
parts = list(self.args)
old = parts[i]
if not isinstance(old, TypedExpr):
raise ValueError("Cannot perform substitution on non-TypedExpr %s"
% (old))
ts = get_type_system()
# check: is the type of the substitution compatible with the type of
# what it is replacing?
unified = ts.unify(s.type, old.type) # order matters: prioritize type
# variables from the substitution
if unified is None:
raise TypeMismatch(s, old, "Substitution for element %s of '%s'"
% (i, repr(self)))
if unified != s.type:
# compatible but unify suggested a new type for the substitution.
# Try adjusting the type of the expression.
s_a = s.try_adjust_type(unified)
if s_a is None:
raise TypeMismatch(s, old, "Substitution for element %s of '%s'"
% (i, repr(self)))
s = s_a
parts[i] = s
result = self.copy_local(*parts)
return result
@classmethod
def parse(cls, s, assignment=None, locals=None):
"""Attempt to parse a string `s` into a TypedExpr
`assignment`: a variable assignment to use when parsing.
`locals`: a dict to use as the local variables when parsing.
"""
if assignment is None:
assignment = dict()
ts = get_type_system()
(struc, i) = parsing.parse_paren_str(s, 0, ts)
return cls.try_parse_paren_struc_r(struc, assignment=assignment,
locals=locals)
_parsing_locals = dict()
@classmethod
def add_local(cls, l, value):
cls._parsing_locals[l] = value
@classmethod
def del_local(cls, l):
if l == "TypedExpr" or l == "TypedTerm":
raise Exception("Cannot delete parsing local '%s'" % l)
del cls._parsing_locals[l]
@classmethod
def try_parse_flattened(cls, s, assignment=None, locals=None):
"""Attempt to parse a flat, simplified string into a TypedExpr. Binding
expressions should be already handled.
assignment: a variable assignment to use when parsing.
locals: a dict to use as the local variables when parsing.
Do some regular expression magic to expand metalanguage terms into
constructor/factory calls, and then call eval.
The gist of the magic (see expand_terms):
* replace some special cases with less reasonable operator names.
(This is based on AIMA logic.py)
* find things that look like term names, and surround them with calls
to the term factory function.
Certain special case results are wrapped in TypedExprs, e.g. sets and
tuples.
"""
if locals is None:
locals = dict()
# Replace the alternative spellings of operators with canonical
# spellings
to_eval = s.replace('==>', '>>').replace('<==', '<<')
to_eval = to_eval.replace('<=>', '%').replace('=/=', '^').replace('==', '%')
lcopy = locals.copy()
lcopy.update(cls._parsing_locals)
to_eval = TypedExpr.expand_terms(to_eval, assignment=assignment,
ignore=lcopy.keys())
# Now eval the string. (A security hole; do not use with an adversary.)
lcopy.update({'assignment': assignment, 'type_e': type_e})
# cannot figure out a better way of doing this short of actually parsing
# TODO: reimplement as a real parser, don't rely on `eval`
global _parser_assignment
_parser_assignment = assignment # not remotely thread-safe
try:
result = eval(to_eval, dict(), lcopy)
except SyntaxError as e:
raise parsing.ParseError("Failed to parse expression", s=s, e=e)
# other exceptions just get raised directly -- what comes up in
# practice?
_parser_assignment = None
if isinstance(result, tuple):
return Tuple(result)
elif isinstance(result, set):
return ListedSet(result)
elif isinstance(result, dict) and len(result) == 0:
# hack: empty dict is treated as empty set, so that "{}" makes sense
return ListedSet(set())
elif isinstance(result, TypedExpr):
return result
else:
logger.warning("parse_flattened returning non-TypedExpr")
return result
@classmethod
def try_parse_binding_struc(cls, s, assignment=None, locals=None,
vprefix="ilnb"):
"""Try to parse `s` as a binding operator expression. Will return a
subclass of BindingOp, None, or raise a `parsing.ParseError`.
the variable on the exception `met_preconditions` is used to attempt to
figure out whether this was a plausible attempt at a binding operator
expression, so as to get the error message right."""
try:
return BindingOp.try_parse_binding_struc_r(s, assignment=assignment, locals=locals, vprefix=vprefix)
except parsing.ParseError as e:
if not e.met_preconditions:
return None
else:
raise e
@classmethod
def try_parse_paren_struc_r(cls, struc, assignment=None, locals=None,
vprefix="ilnb"):
"""Recursively try to parse a semi-AST with parenthetical structures
matched."""
expr = cls.try_parse_binding_struc(struc, assignment=assignment,
locals=locals, vprefix=vprefix)
if expr is not None:
return expr
# struc is not primarily a binding expression
s = ""
h = dict()
vnum = 1
for sub in struc:
if isinstance(sub, str):
s += sub
else:
sub_expr = cls.try_parse_paren_struc_r(sub,
assignment=assignment, locals=locals, vprefix=vprefix)
var = vprefix + str(vnum)
s += "(" + var + ")"
vnum += 1
h[var] = sub_expr
expr = cls.try_parse_flattened(s, assignment=assignment, locals=h)
return expr
@classmethod
def try_parse_type(cls, s, onto=None):
"""Attempt to get a type name out of s.
Assumes s is already stripped."""
ts = get_type_system()
result = ts.type_parser(s)
return result
@classmethod
def try_parse_term_sequence(cls, s, lower_bound=1, upper_bound=None,
assignment=None):
s = s.strip()
if len(s) == 0:
sequence = list()
i = 0
else:
v, typ, i = cls.parse_term(s, i=0, return_obj=False,
assignment=assignment)
sequence = [(v, typ)]
if i < len(s):
i = parsing.consume_whitespace(s, i)
while i < len(s):
i = parsing.consume_char(s, i, ",",
"expected comma in variable sequence")
i = parsing.consume_whitespace(s, i)
v, typ, i = cls.parse_term(s, i=i, return_obj=False,
assignment=assignment)
if v is None:
raise parsing.ParseError(
"Failed to find term following comma in variable sequence",
s=s, i=i, met_preconditions=True)
sequence.append((v, typ))
if lower_bound and len(sequence) < lower_bound:
raise parsing.ParseError(
("Too few variables (%i < %i) in variable sequence"
% (len(sequence), lower_bound)),
s=s, i=i, met_preconditions=True)
if upper_bound and len(sequence) > upper_bound:
raise parsing.ParseError(
("Too many variables (%i > %i) in variable sequence"
% (len(sequence), upper_bound)),
s=s, i=i, met_preconditions=True)
return sequence
@classmethod
def try_parse_typed_term(cls, s, assignment=None, strict=False):
"""Try to parse string 's' as a typed term.
assignment: a variable assignment to parse s with.
Format: n_t
* 'n': a term name.
- initial numeric: term is a number.
- initial alphabetic: term is a variable or constant. (Variable:
lowercase initial.)
* 't': a type, optional. If absent, will either get it from
assignment, or return None as the 2nd element.
Returns a tuple of a variable name, and a type. If you want a
TypedTerm, call one of the factory functions.
Raises: TypeMismatch if the assignment supplies a type inconsistent
with the specified one.
"""
seq = cls.try_parse_term_sequence(s, lower_bound=1, upper_bound=1,
assignment=assignment)
return seq[0]
@classmethod
def find_term_locations(cls, s, i=0):
"""Find locations in a string `s` that are term names."""
term_re = re.compile(r'([a-zA-Z0-9]+)(_)?')
unfiltered_result = parsing.find_pattern_locations(term_re, s, i=i,
end=None)
result = list()
for r in unfiltered_result:
if r.start() > 0 and s[r.start() - 1] == ".":
# result is prefaced by a ".", and therefore is a functional
# call or attribute
continue
result.append(r)
return result
@classmethod
def expand_terms(cls, s, i=0, assignment=None, ignore=None):
"""Treat terms as macros for term_factory calls. Attempt to find all
term strings, and replace them with eval-able factory calls.
This is an expanded version of the original regex approach; one reason
to move away from that is that this will truly parse the types."""
terms = cls.find_term_locations(s, i)
if ignore is None:
ignore = set()
offset = 0
for t in terms:
if t.start() + offset < i:
# parsing has already consumed this candidate term, ignore.
# (E.g. an "e" in a type signature.)
continue
(name, typ, end) = cls.parse_term(s, t.start() + offset,
return_obj=False, assignment=assignment)
if name is None:
logger.warning("Unparsed term '%s'" % t.group(0)) # TODO: more?
continue
elif name in ignore:
continue
# ugh this is sort of absurd
if typ is None:
replace = ('TypedExpr.term_factory("%s", typ=None, assignment=assignment)' % (name))
else:
replace = ('TypedExpr.term_factory("%s", typ="%s", assignment=assignment)' % (name, repr(typ)))
s = s[0:t.start() + offset] + replace + s[end:]
i = t.start() + offset + len(replace)
len_original = end - (t.start() + offset)
offset += len(replace) - len_original
return s
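# Sketch of the macro expansion (approximate; the generated string is an
# implementation detail and subject to change):
#     expand_terms("P_<e,t>(x_e)")
#     # ~> 'TypedExpr.term_factory("P", typ="<e,t>", ...)'
#     #      '(TypedExpr.term_factory("x", typ="e", ...))'
# Names in `ignore` (e.g. registered parsing locals) are left untouched.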
@classmethod
def parse_term(cls, s, i=0, return_obj=True, assignment=None):
"""Parse position `i` in `s` as a term expression. A term expression
is some alphanumeric sequence followed optionally by an underscore and
a type. If a type is not specified locally, but is present in
`assignment`, use that. If a type is specified and is present in
`assignment`, check type compatibility immediately."""
ts = get_type_system()
term_name, next = parsing.consume_pattern(s, i, r'([a-zA-Z0-9]+)(_)?',
return_match=True)
if not term_name:
if return_obj:
return (None, i)
else:
return (None, None, i)
if term_name.group(2):
# try to parse a type
# if there is a _, will force an attempt
typ, end = ts.type_parser_recursive(s, next)
else:
typ = None
end = next
if return_obj:
term = cls.term_factory(term_name.group(1), typ=typ,
assignment=assignment, preparsed=True)
return (term, end)
else:
return (term_name.group(1), typ, end)
@classmethod
def term_factory(cls, s, typ=None, assignment=None, preparsed=False):
"""Attempt to construct a TypedTerm from argument s.
If s is already a TypedTerm, return a copy of the term.
If s is a string, try to parse the string as a term name. (see
try_parse_typed_term)
Otherwise, fail.
"""
# TODO: if handed a complex TypedExpr, make a term referring to it??
if isinstance(typ, str):
ts = get_type_system()
typ = ts.type_parser(typ)
if (isinstance(s, TypedTerm)):
# todo: handle conversion to custom
result = s.copy()
if typ is not None:
result = result.try_adjust_type(typ, assignment=assignment)
return result
elif (isinstance(s, str)):
if typ is None and not preparsed:
v, typ = cls.try_parse_typed_term(s, assignment=assignment, strict=True)
else:
v = s
v = num_or_str(v)
if typ is not None:
type_vars = typ.bound_type_vars()
if _constants_use_custom and not is_var_symbol(v):
return CustomTerm(v, typ=typ, assignment=assignment)
else:
return TypedTerm(v, typ=typ, assignment=assignment)
else:
raise NotImplementedError
@classmethod
def factory(cls, *args, assignment=None):
"""Factory method for TypedExprs. Will return a TypedExpr or subclass.
Special cases:
* single arg, is a TypedExpr: will return a copy of that arg. (See
ensure_typed_expr for alternate semantics.)
* single arg, is a number: will return a TypedTerm using that number.
* single arg, is a variable/constant name: will return a TypedTerm
using that name. (Happens in parser magic.)
* single arg, complex expression: will parse it using python syntax.
(Happens in parser magic.)
* multiple args: call the standard constructor.
"""
### NOTE: do not edit this function lightly...
global _parser_assignment
if assignment is None:
if _parser_assignment is None:
assignment = dict()
else:
assignment = _parser_assignment # not remotely thread-safe
if len(args) == 1 and isinstance(args[0], TypedExpr):
# handing this a single TypedExpr always returns a copy of the
# object. I set this case aside for clarity. subclasses must
# implement copy() for this to work right.
return args[0].copy()
if len(args) == 0:
return None #TODO something else?
elif len(args) == 1:
# args[0] is either an unsaturated function, a term, or a string
# that needs to be parsed.
# in the first two cases, return a unary TypedExpr
s = args[0]
if s is True:
return true_term
elif s is False:
return false_term
elif isinstance(s, Number):
return TypedTerm(s, type_n)
elif isinstance(s, str):
#return cls.parse_expr_string(s, assignment)
return cls.parse(s, assignment)
else:
raise NotImplementedError
else:
# Argument length > 1.
# This code path is for constructing complex TypedExprs where
# args[0] must be a function / operator. Will potentially recurse
# via ensure_typed_expr on all arguments.
if isinstance(args[0], str):
if args[0] in op_symbols:
# args[0] is a special-cased operator symbol
return op_expr_factory(*args)
# the only kind of operator-expression generated after this point is
# an ApplicationExpr.
operator = cls.ensure_typed_expr(args[0])
# this is redundant with the constructor, but I can't currently find
# a way to simplify. After this point, all elements of args will be
# TypedExprs.
remainder = tuple([cls.ensure_typed_expr(a) for a in args[1:]])
# package longer arg lengths in Tuples. After this point, there are
# only two elements under consideration.
if len(remainder) > 1:
arg = Tuple(args[1:])
else:
arg = remainder[0]
if (not operator.type.functional()) and operator.type_guessed:
# special case: see if the type of the operator is guessed and
# coerce accordingly
# prevent future coercion of the argument
arg.type_not_guessed()
coerced_op = operator.try_coerce_new_argument(arg.type,
assignment=assignment)
if coerced_op is not None:
logger.info(
"Coerced guessed type for '%s' into %s, "
"to match argument '%s'"
% (repr(operator), coerced_op.type, repr(arg)))
operator = coerced_op
else:
logger.warning(
"Unable to coerce guessed type %s for '%s' "
"to match argument '%s' (type %s)"
% (operator.type, repr(operator), repr(arg), arg.type))
result = ApplicationExpr(operator, arg, assignment=assignment)
if result.let:
result = derived(result.compact_type_vars(), result,
"Let substitution")
return result
@classmethod
def ensure_typed_expr(cls, s, typ=None, assignment=None):
"""Coerce s to a typed expression if necessary, otherwise, return s."""
if isinstance(s, TypedExpr):
if assignment is not None:
result = s.under_assignment(assignment)
else:
result = s
else:
try:
result = cls.factory(s, assignment=assignment)
except NotImplementedError:
raise ValueError(
"Do not know how to ensure TypedExpr for '%s'" % repr(s))
if typ is None:
return result
else:
r_adjusted = result.try_adjust_type(typ, assignment=assignment)
if r_adjusted is None:
raise TypeMismatch(result, typ, mode="type adjustment")
else:
return r_adjusted
def try_coerce_new_argument(self, typ, remove_guessed=False,
assignment=None):
return None
def type_not_guessed(self):
"""Recursively set that the type of `self` is not a guess."""
self.type_guessed = False
if isinstance(self.op, TypedExpr):
self.op.type_not_guessed()
def copy(self):
"""Make a copy of the expression. Will not produce a deep copy.
Relies on a correctly implemented `copy_local`.
"""
return self.copy_local(*self)
def copy_local(self, *args, type_check=True):
"""
Make a copy of the element preserving everything *except* the AST.
The default implementation calls the constructor with `args`, so if this
isn't appropriate, you must override."""
return type(self)(*args)
def deep_copy(self):
accum = list()
for p in self:
if isinstance(p, TypedExpr):
accum.append(p.deep_copy())
else:
accum.append(p)
return self.copy_local(*accum, type_check=False)
def type_env(self, constants=False, target=None, free_only=True):
env = dict()
for part in self:
if isinstance(part, TypedExpr):
env = merge_type_envs(env, part.type_env(constants=constants,
target=target, free_only=free_only))
return env
def regularize_type_env(self, assignment=None, constants=False,
target=None):
if assignment is None:
assignment = dict()
env = self.get_type_env()
return self.under_type_assignment(env.type_mapping,
merge_intersect=False)
def compact_type_vars(self, target=None, unsafe=None, used_vars_only=True,
store_mapping=False):
"""Compact the type variables on `self` into X variables with a low
number. By default this will not store the mapping that resulted in
the compaction, i.e. the type environment is a clean slate. For this
reason, it is suitable only for let-bound contexts."""
history_env = self.get_type_env()
if len(history_env.type_var_set) == 0:
return self
c = self.copy()
# note: the following is already triggered by copy. If this behavior
# changes, this needs updating.
env = c.get_type_env()
if len(env.type_var_set) == 0:
return c
if used_vars_only:
tenv = env.type_var_set - set(env.type_mapping.keys())
else:
tenv = env.type_var_set
if len(tenv) == 0:
return self
compacted_map = types.compact_type_set(tenv, unsafe=unsafe)
result = self.under_type_injection(compacted_map)
result._type_env_history = history_env
if not store_mapping:
result.get_type_env(force_recalc=True)
return result
def freshen_type_vars(self, target=None, unsafe=None, used_vars_only=False,
store_mapping=False):
history_env = self.get_type_env()
if len(history_env.type_var_set) == 0:
return self
c = self.copy()
# note: the following is already triggered by copy. If this behavior
# changes, this needs updating.
env = c.get_type_env()
if used_vars_only:
tenv = env.type_var_set - set(env.type_mapping.keys())
else:
tenv = env.type_var_set
if len(tenv) == 0:
return self
fresh_map = types.freshen_type_set(tenv, unsafe=unsafe)
result = self.under_type_injection(fresh_map)
result._type_env_history = history_env
if not store_mapping:
result.get_type_env(force_recalc=True)
return result
def let_type(self, typ):
result = self.try_adjust_type(typ)
if result is None:
return None
if result.let:
result = result.compact_type_vars()
return result
def has_type_vars(self):
return len(self.get_type_env().type_var_set) > 0
def _unsafe_under_type_injection(self, mapping):
if len(mapping) == 0:
return self
for i in range(len(self)):
self._unsafe_subst(i, self[i].under_type_injection(mapping))
self.type = self.type.sub_type_vars(mapping)
return self
def under_type_injection(self, mapping):
accum = list()
for p in self:
accum.append(p.under_type_injection(mapping))
r = self.copy_local(*accum, type_check=False)
r.type = r.type.sub_type_vars(mapping)
if r.term():
r.get_type_env(force_recalc=True)
return r
def under_type_assignment(self, mapping, reset=False, merge_intersect=True):
# TODO: For somewhat irritating reasons, this is currently a _lot_
# slower if reset=True
if len(mapping) == 0:
return self
dirty = False
parts = list()
copy = self
for part in copy:
new_part = part.under_type_assignment(mapping, reset=reset)
if new_part is not part:
dirty = True
else:
if reset:
new_part = new_part.copy()
new_part.get_type_env(force_recalc=True)
parts.append(new_part)
# this may or may not be recalculated by copy_local. The main case
# where it isn't is terms.
copy_type = copy.type.sub_type_vars(mapping)
# Note: we still need to reset the subordinate type environments even
# in this case.
if copy_type == self.type and not dirty:
return self
result = copy.copy_local(*parts)
if result.term():
result.type = copy_type
if reset:
result.get_type_env(force_recalc=True)
if merge_intersect:
result._type_env = result.get_type_env().intersect_merge(
TypeEnv(type_mapping=mapping))
else:
result._type_env = result.get_type_env().merge(
TypeEnv(type_mapping=mapping))
# need to set a derivation step for this in the calling function.
result.derivation = self.derivation
return result
def under_assignment(self, assignment):
"""Use `assignment` to replace any appropriate variables in `self`."""
# do this first so that any errors show up before the recursive step
if assignment is None:
a2 = dict()
else:
a2 = {key: self.ensure_typed_expr(assignment[key])
for key in assignment}
return variable_replace_unify(self, a2)
# the next sequence of functions is clearly inefficient, and could be
# replaced by memoization (e.g. 'director strings' or whatever). But I
# don't think it matters for this application.
def free_variables(self):
"""Find the set of variables that are free in the typed expression.
"""
result = set()
if len(self.args) == 0:
if is_var_symbol(self.op):
# should not be reachable
assert False
result = {self.op}
if isinstance(self.op, TypedExpr):
result.update(self.op.free_variables())
for a in self.args:
if isinstance(a, TypedExpr):
result.update(a.free_variables())
elif is_var_symbol(a):
result.add(a)
return result
def bound_variables(self):
"""Find the set of variables that are bound (somewhere) in a typed
expression.
Note that this may be overlapping with the set of free variables.
"""
result = set()
for a in self.args:
result.update(a.bound_variables())
return result
def find_safe_variable(self, starting="x"):
"""Find an a safe alpha variant of the starting point (by default: 'x'),
that is not used in the expression."""
blockset = self.free_variables() | self.bound_variables()
varname = alpha_variant(starting, blockset)
return varname
def term(self):
return (isinstance(self.op, str) and len(self.args) == 0)
def functional(self):
funtype = unify(self.type, tp("<X,Y>"))
return (funtype is not None)
def atomic(self):
return len(self.args) == 0
def simplify(self):
return self
def simplify_all(self):
result = self
dirty = False
for i in range(len(result.args)):
new_arg_i = result.args[i].simplify_all()
if new_arg_i is not result.args[i]:
dirty = True
result = derived(result.subst(i, new_arg_i), result,
desc=("Recursive simplification of argument %i"
% i),
subexpression=new_arg_i)
result = result.simplify()
return result
def reducible(self):
return False
def reduce(self):
assert (not self.reducible())
return self
def reduce_sub(self, i):
"""Applies reduce to a constituent term, determined by argument i."""
new_arg_i = self.args[i].reduce()
if new_arg_i is not self.args[i]:
result = self.copy()
result.args[i] = new_arg_i
if len(result.args) == 2 and isinstance(result, BindingOp):
reason = "Reduction of body"
else:
reason = "Reduction of operand %s" % (i)
return derived(result, self, desc=reason)
return self
def reduce_all(self):
"""Maximally reduce function-argument combinations in `self`."""
# this is a dumb strategy: it's either not fully general (but I haven't
# found the case yet), or it's way too inefficient, I'm not sure which;
# probably both. The potential overkill is the recursive step.
# TODO: research on reduction strategies.
# TODO: add some kind of memoization?
# uncomment this to see just how bad this function is...
#print("reduce_all on '%s'" % repr(self))
result = self
dirty = False
for i in range(len(result.args)):
new_arg_i = result.args[i].reduce_all()
if new_arg_i is not result.args[i]:
if not dirty:
dirty = True
args = list(result.args)
args[i] = new_arg_i
next_step = result.copy_local(*args)
if len(result.args) == 2 and isinstance(result, BindingOp):
reason = "Recursive reduction of body"
else:
reason = "Recursive reduction of operand %s" % (i)
result = derived(next_step, result, desc=reason,
subexpression=new_arg_i)
self_dirty = False
while result.reducible():
new_result = result.reduce()
if new_result is not result:
dirty = True
self_dirty = True
result = new_result # no need to add a derivation here, reduce
# will do that already
else:
break # should never happen...but prevent loops in case of error
if self_dirty:
new_result = result.reduce_all() # TODO: is this overkill?
result = new_result
return result
def calculate_partiality(self, vars=None):
condition = true_term
new_parts = list()
for part in self:
part_i = part.calculate_partiality(vars=vars)
if isinstance(part_i, Partial):
condition = condition & part_i.condition
part_i = part_i.body
new_parts.append(part_i)
new_self = self.copy_local(*new_parts)
condition = condition.simplify_all()
if condition is true_term:
intermediate = derived(Partial(new_self, condition), self,
"Partiality simplification")
return derived(new_self, intermediate, "Partiality simplification")
else:
return derived(Partial(new_self, condition), self,
"Partiality simplification")
def __call__(self, *args):
"""Attempt to construct a saturated version of self. This constructs a
composite TypedExpr, with the function (`self`) as the operator and the
argument(s) as the arguments. Type checking happens immediately."""
#print("globals: ", globals())
return TypedExpr.factory(self, *args)
def __repr__(self):
"""Return a string representation of the TypedExpr.
This is guaranteed (barring bugs) to produce a parsable string that
builds the same object.
"""
assert not isinstance(self.op, TypedExpr)
if not self.args: # Constant or proposition with arity 0
return repr(self.op)
elif len(self.args) == 1: # Prefix operator
return repr(self.op) + repr(self.args[0])
else: # Infix operator
return '(%s)' % (' '+self.op+' ').join([repr(a) for a in self.args])
def latex_str(self, **kwargs):
"""Return a representation of the TypedExpr suitable for Jupyter
Notebook display.
In this case the output should be pure LaTeX."""
assert not isinstance(self.op, TypedExpr)
if not self.args:
return ensuremath(str(self.op))
# past this point in the list of cases should only get hard-coded
# operators
elif len(self.args) == 1: # Prefix operator
return ensuremath(text_op_in_latex(self.op)
+ self.args[0].latex_str(**kwargs))
else: # Infix operator
return ensuremath('(%s)' %
(' '+text_op_in_latex(self.op)+' ').join(
[a.latex_str(**kwargs) for a in self.args]))
def _repr_latex_(self):
return self.latex_str()
def __str__(self):
return "%s, type %s" % (self.__repr__(), self.type)
def __eq__(self, other):
"""x and y are equal iff their ops and args are equal.
Note that this is a _syntactic_ notion of equality, not a _semantic_
notion -- for example, two expressions would fail this notion of
equality if one reduces to the other but that reduction has not been
done. Alphabetic variants will also not come out as equal."""
# need to explicitly check this in case recursion accidentally descends into a string Op
# TODO revisit
if isinstance(other, TypedExpr):
return (other is self) or (self.op == other.op and self.args == other.args and self.type == other.type)
else:
return False
#TODO: equality by semantics, not syntax?
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
"""Need a hash method so TypedExprs can live in dicts.
Note that there are some special cases to worry about: ListedSets are
not guaranteed to hash correctly.
"""
# TODO: deal with ListedSets
return hash(self.op) ^ hash(tuple(self.args)) ^ hash(self.type)
def __getitem__(self, i):
"""If `i` is a number, returns a part of `self` by index.
Indexing follows `self.args` directly: index 0 gives the first argument,
and the operator itself is not included.
If `i` is a TypedExpr, try to construct an expression representing
indexing."""
if isinstance(i, TypedExpr):
return TupleIndex(self, i)
else:
return self.args[i]
def __len__(self):
"""Return the number of parts of `self`, including the operator."""
return len(self.args)
# See http://www.python.org/doc/current/lib/module-operator.html
# Not implemented: not, abs, pos, concat, contains, *item, *slice
def __and__(self, other): return self.factory('&', self, other)
def __invert__(self): return self.factory('~', self)
def __lshift__(self, other): return self.factory('<<', self, other)
def __rshift__(self, other): return self.factory('>>', self, other)
def __or__(self, other): return self.factory('|', self, other)
def __xor__(self, other): return self.factory('^', self, other)
def __mod__(self, other): return self.factory('<=>', self, other)
def __lt__(self, other): return self.factory('<', self, other)
def __le__(self, other): return self.factory('<=', self, other)
def __ge__(self, other): return self.factory('>=', self, other)
def __gt__(self, other): return self.factory('>', self, other)
def __add__(self, other): return self.factory('+', self, other)
def __sub__(self, other): return self.factory('-', self, other)
def __div__(self, other): return self.factory('/', self, other)
def __truediv__(self, other):return self.factory('/', self, other)
def __mul__(self, other): return self.factory('*', self, other)
def __neg__(self): return self.factory('-', self)
def __pow__(self, other): return self.factory('**', self, other)
def __bool__(self):
# otherwise, python tries to use the fact that these objects implement a
# container interface to convert to bool, which can lead to weird
# results.
# TODO: revisit... (see also false_term)
return True
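# The operator overloads above let TypedExprs compose via ordinary python
# syntax; a brief sketch (illustrative, building on the `term` helper):
#     p = term("p_t"); q = term("q_t")
#     (p & q) | ~p          # conjunction, disjunction, negation via the factory
#     term("n_n") + 3       # numeric operators route through the factory too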
TypedExpr.add_local('TypedExpr', TypedExpr)
class ApplicationExpr(TypedExpr):
def __init__(self, fun, arg, defer=False, assignment=None, type_check=True):
if type_check and not defer:
tc_result = self.fa_type_inference(fun, arg, assignment)
if tc_result is None:
if not fun.functional():
raise TypeMismatch(fun, arg, "Function-argument expression: left subexpression is not a function")
else:
raise TypeMismatch(fun, arg, "Function-argument expression: mismatched types")
fun, arg, out_type, history = tc_result
op = "Apply"
args = [fun, arg]
self.type = out_type
else:
history = False
op = "Apply"
args = [fun, arg]
# note: fun.type MUST be functional!
self.type = fun.type.right
super().__init__(op, *args, defer=defer)
if fun.let and arg.let:
self.let = True
if history:
# bit of a hack: build a derivation with the deferred version as
# the origin
old = ApplicationExpr(fun, arg, defer=True)
derived(self, old, desc="Type inference")
if isinstance(fun, LFun):
arg.type_not_guessed()
else:
# not 100% that the following is the right fix...
try:
self.type_guessed = fun.type_guessed
except AttributeError:
self.type_guessed = False
def copy(self):
return self.copy_local(self.args[0], self.args[1])
def copy_local(self, fun, arg, type_check=True):
result = ApplicationExpr(fun, arg, defer=self.defer,
type_check=type_check)
result.let = self.let
result.type_guessed = self.type_guessed
return result
def latex_str(self, **kwargs):
fun = self.args[0]
arg = self.args[1]
if isinstance(arg, Tuple):
arg_str = arg.latex_str(**kwargs) # tuple already generates parens
else:
arg_str = "(%s)" % (arg.latex_str(**kwargs))
if isinstance(fun, CustomTerm):
return ensuremath(fun.custom_appl_latex(arg_str))
elif isinstance(fun, LFun):
return ensuremath("{[%s]}%s" % (fun.latex_str(**kwargs), arg_str))
else:
return ensuremath('%s%s' % (fun.latex_str(**kwargs), arg_str))
def __repr__(self):
"""Return a string representation of the TypedExpr.
This is guaranteed (barring bugs) to produce a parsable string that
builds the same object.
"""
fun = self.args[0]
arg = self.args[1]
if isinstance(arg, Tuple):
arg_str = repr(arg) # tuple already generates parens
else:
arg_str = "(%s)" % (repr(arg))
if isinstance(fun, CustomTerm):
return fun.custom_appl(arg_str) # TODO: ???
elif isinstance(fun, LFun):
return "(%s)%s" % (repr(fun), arg_str)
else:
return '%s%s' % (repr(fun), arg_str)
def try_adjust_type_local(self, new_type, derivation_reason, assignment,
env):
fun = self.args[0]
arg = self.args[1]
(new_fun_type, new_arg_type, new_ret_type) = get_type_system().unify_fr(
fun.type, new_type, assignment=env.type_mapping)
if new_fun_type is None:
return None
new_fun = fun.try_adjust_type(new_fun_type,
derivation_reason=derivation_reason,
assignment=assignment)
if new_fun is None:
return None
new_arg = arg.try_adjust_type(new_arg_type,
derivation_reason=derivation_reason,
assignment=assignment)
if new_arg is None:
return None
result = self.copy_local(new_fun, new_arg, type_check=False)
return result
def try_coerce_new_argument(self, typ, remove_guessed=False,
assignment=None):
"""For guessed types, see if it is possible to coerce a new argument.
Will recurse to find guessed types.
This is not type inference. Rather, it is a convenience shorthand for
writing n-ary extensional predicates without type annotation."""
if not self.type_guessed:
return None
result = self.args[0].try_coerce_new_argument(typ,
assignment=assignment)
if result is not None:
copy = ApplicationExpr(result, self.args[1])
if (remove_guessed):
result.type_guessed = False
return copy
else:
return None
@classmethod
def fa_type_inference(cls, fun, arg, assignment):
ts = get_type_system()
old_fun = None
old_arg = None
if fun.let:
fun = fun.freshen_type_vars()
if arg.let:
arg = arg.freshen_type_vars()
history = False
(f_type, a_type, out_type) = ts.unify_fa(fun.type, arg.type)
if f_type is None:
return None
if fun.type != f_type:
fun = fun.try_adjust_type_caching(f_type,
derivation_reason="Type inference (external)",
assignment=assignment)
history = True
if a_type != arg.type:
arg = arg.try_adjust_type_caching(a_type,
derivation_reason="Type inference (external)",
assignment=assignment)
history = True
return (fun, arg, out_type, history)
def reducible(self):
if isinstance(self.args[0], LFun) or isinstance(self.args[0],
Disjunctive):
return True
return False
def reduce(self):
"""if there are arguments to op, see if a single reduction is
possible."""
if not self.reducible():
return self
else:
return derived(self.args[0].apply(self.args[1]), self,
desc="Reduction")
def calculate_partiality(self, vars=None):
# defer calculation of the argument until beta reduction has occurred
if isinstance(self.args[0], LFun):
return self
else:
return super().calculate_partiality(vars=vars)
@classmethod
def random(self, random_ctrl_fun):
ftyp = get_type_system().random_from_class(types.FunType)
fun = random_lfun_force_bound(ftyp, random_ctrl_fun)
arg = random_ctrl_fun(typ=ftyp.left)
return ApplicationExpr(fun, arg)
class Tuple(TypedExpr):
"""TypedExpr wrapper on a tuple.
This works basically as a python tuple would, and is indicated using commas
within a parenthetical. `args` is a list containing the elements of the
tuple."""
def __init__(self, args, typ=None, type_check=True):
new_args = list()
type_accum = list()
for i in range(len(args)):
if typ is None or not type_check:
a_i = self.ensure_typed_expr(args[i])
else:
a_i = self.ensure_typed_expr(args[i], typ=typ[i])
new_args.append(a_i)
type_accum.append(a_i.type)
super().__init__("Tuple", *new_args)
self.type = types.TupleType(*type_accum)
def copy(self):
return Tuple(self.args)
def copy_local(self, *args, type_check=True):
return Tuple(args, typ=self.type)
def index(self, i):
return self.args[i]
def term(self):
return False
def tuple(self):
"""Return a python `tuple` version of the Tuple object."""
return tuple(self.args)
def try_adjust_type_local(self, unified_type, derivation_reason,
assignment, env):
content = [self.args[i].try_adjust_type(unified_type[i],
derivation_reason=derivation_reason,
assignment=assignment)
for i in range(len(self.args))]
return self.copy_local(*content)
def __repr__(self):
return "(" + ", ".join([repr(a) for a in self.args]) + ")"
def latex_str(self, parens=True, **kwargs):
inner = ", ".join([a.latex_str(**kwargs) for a in self.args])
if parens:
return ensuremath("(" + inner + ")")
else:
return ensuremath(inner)
@classmethod
def random(cls, ctrl, max_type_depth=1, max_tuple_len=5, allow_empty=True):
if allow_empty:
r = range(max_tuple_len+1)
else:
r = range(max_tuple_len+1)[1:]
length = random.choice(r)
signature = [get_type_system().random_type(max_type_depth, 0.5)
for i in range(length)]
args = [ctrl(typ=t) for t in signature]
return Tuple(args)
# suppress any constant type
global suppress_constant_type
suppress_constant_type = False
# suppress only constant predicates
# a predicate type is either <e,t>, or any characteristic function of a set of
# tuples
global suppress_constant_predicate_type
suppress_constant_predicate_type = True
global suppress_bound_var_types
suppress_bound_var_types = True
class TypedTerm(TypedExpr):
"""used for terms of arbitrary type. Note that this is not exactly
standard usage of 'term'. In general, these cover variables and constants.
The name of the term is 'op', and 'args' is empty.
The attribute 'type_guessed' is flagged if the type was not specified; this
may result in coercion as necessary."""
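# Illustrative constructions (types shown are what the constructor should
# settle on; `tp` is the type-parsing helper used elsewhere in this module):
#     TypedTerm("x", tp("e"))      # lowercase initial: a variable of type e
#     TypedTerm("John", tp("e"))   # uppercase initial: a constant
#     TypedTerm("P")               # no type given: guessed, may be coerced later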
def __init__(self, varname, typ=None, latex_op_str=None, assignment=None,
defer_type_env=False, type_check=True):
# NOTE: does not call super
self.op = varname
self.derivation = None
self.defer = False
self.let = False
update_a = False
if typ is None:
if assignment is not None and self.op in assignment:
self.type = assignment[self.op].type
self.type_guessed = False
else:
self.type = default_type(varname)
self.type_guessed = True
else:
self.type_guessed = False
self.type = typ
if type_check and not defer_type_env: # note: cannot change type in
# place safely with this code here
env = self.calc_type_env()
if assignment is not None:
if self.op in assignment and typ is not None:
env.add_var_mapping(self.op, assignment[self.op].type)
self.type = env.var_mapping[self.op]
self._type_env = env
self.suppress_type = False
if isinstance(self.op, Number): # this isn't very elegant...
if self.type != type_n:
raise TypeMismatch(self.op, self.type,
"Numeric must have type n")
self.type_guessed = False
self.suppress_type = True # suppress types for numbers
self.args = list()
self.latex_op_str = latex_op_str
if update_a:
assignment[self.op] = self
def copy(self):
return TypedTerm(self.op, typ=self.type)
def copy_local(self, type_check=True):
result = TypedTerm(self.op, typ=self.type,
latex_op_str=self.latex_op_str,
type_check=type_check)
if not type_check:
result._type_env = self._type_env.copy()
result.type_guessed = self.type_guessed
return result
def calc_type_env(self, recalculate=False):
env = TypeEnv()
env.add_var_mapping(self.op, self.type)
return env
def type_env(self, constants=False, target=None, free_only=True):
if self.constant() and not constants:
return set()
if not target or self.op in target:
return {self.op: self}
return set()
def free_variables(self):
if is_var_symbol(self.op):
return {self.op}
else:
return set()
def term(self):
return True
def apply(self, arg):
return self(arg)
@property
def term_name(self):
return self.op
def constant(self):
"""Return true iff `self` is a constant.
This follows the prolog convention: a constant is a term with a
capitalized first letter. Numbers are constants."""
return not is_var_symbol(self.op)
def variable(self):
"""Return true iff `self` is a variable.
This follows the prolog convention: a variable is a term with a
lowercase first letter."""
return is_var_symbol(self.op)
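# Sketch of the naming convention encoded by constant()/variable() above:
#     TypedTerm("x", tp("e")).variable()     # True: lowercase initial
#     TypedTerm("John", tp("e")).constant()  # True: uppercase initial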
def __repr__(self):
return "%s_%s" % (self.op, repr(self.type))
def should_show_type(self, assignment=None):
if assignment and suppress_bound_var_types:
if self.op in assignment:
return False
if self.suppress_type:
return False
if suppress_constant_type and self.constant():
return False
if suppress_constant_predicate_type:
if (self.constant() and self.type.functional()
and not isinstance(self.type, types.VariableType)):
if ((self.type.left == types.type_e
or isinstance(self.type.left, types.TupleType))
and self.type.right == types.type_t):
return False
return True
def try_coerce_new_argument(self, typ, remove_guessed = False,
assignment=None):
if not self.type_guessed:
return None
coerced_op = self.term_factory(self.op,
typ=self.type.add_internal_argument(typ),
preparsed=True)
if not remove_guessed:
coerced_op.type_guessed = True
if assignment is not None and self.op in assignment:
assignment[self.op] = coerced_op
return coerced_op
def __hash__(self):
return hash("TypedTerm") ^ super().__hash__()
def latex_str(self, show_types=True, assignment=None, **kwargs):
if self.latex_op_str is None:
op = self.op
else:
op = self.latex_op_str
if not show_types or not self.should_show_type(assignment=assignment):
return ensuremath("{%s}" % op)
else:
return ensuremath("{%s}_{%s}" % (op, self.type.latex_str()))
def _repr_latex_(self):
return self.latex_str()
random_term_base = {type_t : "p", type_e : "x", type_n : "n"}
@classmethod
def random(cls, random_ctrl_fun, typ=None, blockset=None, usedset=set(),
prob_used=0.8, prob_var=0.5, max_type_depth=1):
ts = get_type_system()
if blockset is None:
blockset = set()
varname = None
is_var = (random.random() <= prob_var)
try_used = ((len(usedset) > 0) and (random.random() <= prob_used))
if typ is None:
if try_used:
used_var = random.choice(list(usedset))
varname = used_var.op
typ = used_var.type
else:
typ = ts.random_type(max_type_depth, 0.5)
else:
used_typed = [x for x in list(usedset)
if (x.type==typ and x.variable() == is_var)]
if try_used and len(used_typed) > 0:
varname = (random.choice(list(used_typed))).op
if varname is None:
if typ in random_term_base.keys():
base = random_term_base[typ]
else:
base = "f"
if not is_var:
base = base.upper()
varname = alpha_variant(base, blockset | {n.op for n in usedset})
return TypedExpr.term_factory(varname, typ)
TypedExpr.add_local('TypedTerm', TypedTerm)
class CustomTerm(TypedTerm):
"""A subclass of TypedTerm used for custom displays of term names.
The main application is for English-like metalanguage a la Heim and Kratzer.
This isn't fully implemented as that metalanguage is actually extremely
difficult to get right computationally..."""
def __init__(self, varname, custom_english=None, suppress_type=True,
small_caps=True, typ=None, assignment=None, type_check=True):
TypedTerm.__init__(self, varname, typ=typ, assignment=assignment,
type_check=type_check)
self.custom = custom_english
self.sc = small_caps
self.suppress_type = suppress_type
self.verbal = False
# TODO: check type against custom string
def copy(self):
return CustomTerm(self.op, custom_english=self.custom,
suppress_type=self.suppress_type,
small_caps=self.sc,
typ=self.type)
def copy(self, op):
return CustomTerm(op, custom_english=self.custom,
suppress_type=self.suppress_type,
small_caps=self.sc,
typ=self.type)
def latex_str(self, show_types=True, **kwargs):
s = ""
# custom made small caps
if self.sc:
if len(self.op) == 1:
s += "{\\rm %s}" % (self.op[0].upper())
else:
s += "{\\rm %s {\\small %s}}" % (self.op[0].upper(),
self.op[1:].upper())
else:
s += "{\\rm %s}" % self.op
if show_types and not self.suppress_type:
s += "_{%s}" % self.type.latex_str()
return ensuremath(s)
def __repr__(self):
if self.sc:
return self.op.upper()
else:
return self.op
def get_custom(self):
# needs to be dynamic to deal with coerced types
if self.custom is None:
if self.type == type_property:
if self.verbal:
return "s"
else:
return "is a"
else:
if self.type == type_transitive:
if self.verbal:
return "s"
return ""
else:
return self.custom
def custom_appl_latex(self, arg_str):
if self.verbal:
return "%s\\text{ }%s\\text{%s}" % (arg_str, self.latex_str(),
self.get_custom())
else:
return "%s \\text{ %s }%s" % (arg_str, self.get_custom(),
self.latex_str())
def custom_appl(self, arg_str):
if self.verbal:
return "%s %s%s" % (arg_str, self.latex_str(), self.get_custom())
else:
return "%s %s %s" % (arg_str, repr(self), self.get_custom())
class MiniOp(object):
"""This is a class to pass to a TypeMismatch so that the operator is
displayed nicely."""
def __init__(self, op_uni, op_latex, typ=None):
if typ is not None:
self.type = typ
self.op_uni = op_uni
self.op_latex = op_latex
def __repr__(self):
return self.op_uni
def __str__(self):
return repr(self)
def latex_str(self):
return self.op_latex
def short_str_latex(self):
return self.latex_str()
def latex_str_long(self):
return self.latex_str()
def _repr_latex_(self):
return self.latex_str()
@classmethod
def from_op(cls, op):
return MiniOp(op.op_name, op.op_name_latex)
###############
#
# Partiality
#
###############
class Partial(TypedExpr):
def __init__(self, body, condition, type_check=True):
if condition is None:
condition = true_term
if isinstance(body, Partial):
condition = condition & body.condition
body = body.body
while isinstance(condition, Partial):
condition = condition.body & condition.condition
condition = TypedExpr.ensure_typed_expr(condition, types.type_t)
super().__init__("Partial", body, condition)
self.type = body.type
self.condition = condition
self.body = body
def calculate_partiality(self, vars=None):
new_body = self.body.calculate_partiality(vars=vars)
new_condition = self.condition.calculate_partiality(vars=vars)
if isinstance(new_condition, Partial):
new_condition = new_condition.body & new_condition.condition
if isinstance(new_body, Partial):
new_condition = new_condition & new_body.condition
new_body = new_body.body
new_condition = new_condition.simplify_all()
return derived(Partial(new_body, new_condition), self,
"Partiality simplification")
def term(self):
return self.body.term()
def tuple(self):
return tuple(self.args)
def meta_tuple(self):
return Tuple(self.args)
def try_adjust_type_local(self, unified_type, derivation_reason, assignment,
env):
tuple_version = self.meta_tuple()
revised_type = types.TupleType(unified_type, types.type_t)
result = tuple_version.try_adjust_type(revised_type,
derivation_reason=derivation_reason, assignment=assignment)
return self.copy_local(result[0], result[1])
def latex_str(self, **kwargs):
if self.condition and self.condition != true_term:
return ensuremath("\\left|\\begin{array}{l}%s\\\\%s\\end{array}\\right|"
% (self.body.latex_str(**kwargs),
self.condition.latex_str(**kwargs)))
else:
return ensuremath("%s" % (self.body.latex_str(**kwargs)))
@classmethod
def from_Tuple(cls, t):
if (isinstance(t, TypedExpr)
and (not isinstance(t, Tuple) or len(t) != 2)):
raise parsing.ParseError(
"Partial requires a Tuple of length 2. (Received `%s`.)"
% repr(t))
return Partial(t[0], t[1])
@classmethod
def get_condition(cls, p):
if isinstance(p, Partial) or isinstance(p, PLFun):
return p.condition
else:
return true_term
@classmethod
def get_atissue(cls, p):
if isinstance(p, Partial) or isinstance(p, PLFun):
return p.body
else:
return p
@classmethod
def random(cls, ctrl, max_type_depth=1):
# This will implicitly use the same depth for the body and condition
typ = get_type_system().random_type(max_type_depth, 0.5)
body = ctrl(typ=typ)
condition = ctrl(typ=type_t)
return Partial(body, condition)
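# Usage sketch (illustrative): Partial pairs an at-issue body with a
# definedness condition of type t, and nested Partials are flattened into a
# single conjoined condition by the constructor:
#     Partial(term("x_e"), term("p_t"))
#     Partial(Partial(term("x_e"), term("p_t")), term("q_t"))
#     # second case: body x_e, condition q_t & p_t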
TypedExpr.add_local("Partial", Partial.from_Tuple)
###############
#
# more type underspecification
#
###############
# The `Disjunctive` class allows for the construction of ad-hoc polymorphic
# expressions in the metalanguage. It takes a set of expressions, and gives you
# an object that will simplify to one or more of the expressions depending on
# type adjustment/inference. It enforces the constraint that every (non-
# disjunctive) type it is constructed from must be simplifiable to no more than
# one expression. So, constructing a Disjunctive from two objects of the same
# type is not permitted, but neither are cases where the types overlap (so for
# example, where you have an expression of type e, and an expression of type
# [e|t], because that would lead to a problem if it were adjusted to type e.)
#
# In a very roundabout way, this class acts like a dictionary mapping types to
# expressions.
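# A short behavioral sketch of the above (borrowing the `te` helper from the
# example in the preceding comment; illustrative only):
#     d = Disjunctive(te("x_e"), te("n_n"))
#     d.try_adjust_type(tp("e"))          # resolves to the type-e disjunct
#     d.try_adjust_type(tp("t"))          # no matching disjunct: returns None
#     Disjunctive(te("x_e"), te("y_e"))   # ParseError: duplicate type e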
class Disjunctive(TypedExpr):
def __init__(self, *disjuncts, type_check=True):
ts = get_type_system()
principal_type = types.DisjunctiveType(*[d.type for d in disjuncts])
t_adjust = set()
# this is not a great way to do this (n*m) but I couldn't see a
# cleverer way to catch stuff like:
# > `Disjunctive(te("x_e"), te("y_n"), te("z_[e|t]"))`
# It would work to not have this check here, and let the error happen
# on type adjustment later (e.g. type adjustment to `e` would fail in
# the above example) but I decided that that would be too confusing.
for d in disjuncts:
for t in principal_type:
r = d.try_adjust_type(t)
if r is not None:
if r.type in t_adjust:
raise parsing.ParseError(
"Disjoined expressions must determine unique types"
" (type %s appears duplicated in expression '%s' "
"for disjuncts '%s')"
% (repr(t), repr(d), repr(disjuncts)))
else:
t_adjust |= {r.type}
self.type = types.DisjunctiveType(*t_adjust)
super().__init__("Disjunctive", *disjuncts)
def copy(self):
return Disjunctive(*self.args)
def copy_local(self, *disjuncts, type_check=True):
return Disjunctive(*disjuncts)
def term(self):
return False
def __repr__(self):
return "Disjunctive(%s)" % (",".join([repr(a) for a in self.args]))
def latex_str(self, disj_type=False, **kwargs):
if disj_type:
return ensuremath("{Disjunctive}^{%s}(%s)" % (self.type.latex_str(),
", ".join([a.latex_str(**kwargs) for a in self.args])))
else:
return ensuremath("{Disjunctive}(\\left[%s\\right])"
% (("\\mid{}").join([a.latex_str(**kwargs)
for a in self.args])))
def try_adjust_type_local(self, unified_type, derivation_reason, assignment,
env):
ts = get_type_system()
l = list()
for a in self.args:
t = ts.unify(unified_type, a.type)
if t is None:
continue
l.append(a.try_adjust_type(t, derivation_reason=derivation_reason,
assignment=assignment))
assert len(l) > 0
if (len(l) == 1):
return l[0]
else:
return Disjunctive(*l)
def apply(self, arg):
if not self.type.functional():
raise TypeMismatch(self,arg, "Application to a non-functional Disjunction")
applied_disjuncts = list()
for d in self.args:
if not d.functional():
continue
try:
applied_disjuncts.append(d.apply(arg))
except TypeMismatch:
continue
result = self.factory(*applied_disjuncts)
if result is None:
raise TypeMismatch(self,arg, "Application to a non-functional Disjunction")
return result
@classmethod
def from_tuple(cls, t):
return Disjunctive(*t)
@classmethod
def factory(cls, *disjuncts):
disjuncts = set(disjuncts)
if len(disjuncts) == 0:
return None
elif len(disjuncts) == 1:
(r,) = disjuncts
return r
else:
return Disjunctive(*disjuncts)
@classmethod
def random(cls, ctrl, max_type_depth=1, max_disj_len=3):
r = range(max_disj_len+1)[1:]
length = random.choice(r)
signature = {get_type_system().random_type(max_type_depth, 0.5,
allow_variables=False, allow_disjunction=False)
for i in range(length)}
args = [ctrl(typ=t) for t in signature]
return cls.factory(*args) # may not actually generate a Disjunctive
TypedExpr.add_local("Disjunctive", Disjunctive.from_tuple)
###############
#
# Operators
#
###############
class UnaryOpExpr(TypedExpr):
"""This class abstracts over expressions headed by specific unary operators.
It is not necessarily designed to be instantiated directly, but rather
subclassed for particular hard-coded operators.
Because of the way the copy function works, it is currently not suited for
direct instantiation.
In logical terms, this corresponds to the syncategorematic definitions of
operators that are standard in presentations of logics, e.g. statements
like '~p is a sentence iff p is a sentence'. While semantics is not
currently implemented, it could be added in subclasses."""
def __init__(self, typ, op, arg1, op_name_uni=None, op_name_latex=None,
tcheck_args=True):
if tcheck_args:
super().__init__(op, self.ensure_typed_expr(arg1, typ))
else:
super().__init__(op, self.ensure_typed_expr(arg1))
self.type = typ
if op_name_uni is None:
self.op_name = op
else:
self.op_name = op_name_uni
if op_name_latex is None:
self.op_name_latex = self.op_name
else:
self.op_name_latex = op_name_latex
self.operator_style = True
def copy(self):
return self.copy_local(*self.args)
def copy_local(self, *args, type_check=True):
"""This must be overriden in classes that are not produced by the
factory."""
return op_expr_factory(self.op, *args)
def __str__(self):
return "%s%s\nType: %s" % (self.op_name, repr(self.args[0]), self.type)
def __repr__(self):
if (self.operator_style):
return "%s%s" % (self.op_name, repr(self.args[0]))
else:
return "%s(%s)" % (self.op_name, repr(self.args[0]))
def latex_str_long(self):
return self.latex_str() + "\\\\ Type: %s" % self.type.latex_str()
def latex_str(self, **kwargs):
if (self.operator_style):
return ensuremath("%s %s" % (self.op_name_latex,
self.args[0].latex_str(**kwargs)))
else:
return ensuremath("%s(%s)" % (self.op_name_latex,
self.args[0].latex_str(**kwargs)))
@classmethod
def random(cls, ctrl):
return cls(ctrl(typ=type_t))
class BinaryOpExpr(TypedExpr):
"""This class abstracts over expressions headed by specific binary
operators. It is not necessarily designed to be instantiated directly, but
rather subclassed for particular hard-coded operators.
Because of the way the copy function works, it is currently not suited for
direct instantiation at all."""
def __init__(self, typ, op, arg1, arg2, op_name_uni=None,
op_name_latex=None, tcheck_args=True):
if tcheck_args:
args = [self.ensure_typed_expr(arg1, typ),
self.ensure_typed_expr(arg2, typ)]
else:
args = [self.ensure_typed_expr(arg1), self.ensure_typed_expr(arg2)]
super().__init__(op, *args)
self.type = typ
if op_name_uni is None:
self.op_name = op
else:
self.op_name = op_name_uni
if op_name_latex is None:
self.op_name_latex = self.op_name
else:
self.op_name_latex = op_name_latex
def copy(self):
return self.copy_local(*self.args)
def copy_local(self, *args, type_check=True):
"""This must be overriden by classes that are not produced by the
factory."""
return op_expr_factory(self.op, *args)
def __str__(self):
return "%s\nType: %s" % (repr(self), self.type)
def __repr__(self):
return "(%s %s %s)" % (repr(self.args[0]), self.op_name,
repr(self.args[1]))
def latex_str_long(self):
return self.latex_str() + "\\\\ Type: %s" % self.type.latex_str()
def latex_str(self, **kwargs):
return ensuremath("(%s %s %s)" % (self.args[0].latex_str(**kwargs),
self.op_name_latex,
self.args[1].latex_str(**kwargs)))
@classmethod
def join(cls, *l):
"""Joins an arbitrary number of arguments using the binary operator.
Note that currently association is left to right. Requires a subclass
that defines a two-parameter __init__ function. (I.e. will potentially
fail if called on the abstract class.)
Will also fail on operators that do not take the same type (i.e.
SetContains).
"""
if len(l) == 0:
return true_term
if len(l) == 1:
return l[0]
else:
cur = l[0]
for i in range(len(l) - 1):
cur = cls(cur, l[i+1]) # will raise an error if the subclass
# doesn't define a constructor this way.
return cur
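# A minimal illustrative sketch (term names are hypothetical):
#     BinaryAndExpr.join(te("p_t"), te("q_t"), te("r_t"))
# builds the left-associated conjunction ((p & q) & r); join() with no
# arguments returns true_term.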
@classmethod
def random(cls, ctrl):
return cls(ctrl(typ=type_t), ctrl(typ=type_t))
# could make a factory function for these
class UnaryNegExpr(UnaryOpExpr): #unicode: ¬
def __init__(self, body):
super().__init__(type_t, "~", body, "~", "\\neg{}")
def simplify(self):
if self.args[0] == true_term:
return derived(false_term, self, desc="negation")
elif self.args[0] == false_term:
return derived(true_term, self, desc="negation")
else:
return self
class BinaryAndExpr(BinaryOpExpr): #note: there is a unicode, ∧.
def __init__(self, arg1, arg2):
super().__init__(type_t, "&", arg1, arg2, "&", "\\wedge{}")
def simplify(self):
if self.args[0] == false_term or self.args[1] == false_term:
return derived(false_term, self, desc="conjunction")
elif self.args[0] == true_term:
return derived(self.args[1].copy(), self, desc="conjunction")
elif self.args[1] == true_term:
return derived(self.args[0].copy(), self, desc="conjunction")
elif self.args[0] == self.args[1]:
return derived(self.args[0].copy(), self, desc="conjunction")
else:
return self
class BinaryOrExpr(BinaryOpExpr): #unicode: ∨.
def __init__(self, arg1, arg2):
super().__init__(type_t, "|", arg1, arg2, "|", "\\vee{}")
def simplify(self):
if self.args[0] == true_term or self.args[1] == true_term:
return derived(true_term, self, desc="disjunction")
elif self.args[0] == false_term:
# covers case of False | False
return derived(self.args[1].copy(), self, desc="disjunction")
elif self.args[1] == false_term:
return derived(self.args[0].copy(), self, desc="disjunction")
elif self.args[0] == self.args[1]:
return derived(true_term, self, desc="disjunction")
else:
return self
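# A minimal illustrative sketch of the boolean simplifications above (p_t is a
# hypothetical term of type t):
#     BinaryAndExpr(true_term, te("p_t")).simplify()   # returns p_t
#     BinaryOrExpr(te("p_t"), true_term).simplify()    # returns true_term
# Each result records a derivation step ("conjunction" / "disjunction").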
#unicode arrow: →
class BinaryArrowExpr(BinaryOpExpr):
def __init__(self, arg1, arg2):
super().__init__(type_t, ">>", arg1, arg2, ">>", "\\rightarrow{}")
def simplify(self):
if self.args[0] == false_term:
return derived(true_term, self, desc="material implication")
elif self.args[0] == true_term:
return derived(self.args[1].copy(), self,
desc="material implication")
elif self.args[1] == false_term:
return derived(UnaryNegExpr(self.args[0]), self,
desc="material implication")
elif self.args[1] == true_term:
return derived(true_term, self, desc="material implication")
elif self.args[0] == self.args[1]:
return derived(true_term, self, desc="material implication")
else:
return self
# unicode: ↔
class BinaryBiarrowExpr(BinaryOpExpr):
def __init__(self, arg1, arg2):
super().__init__(type_t, "<=>", arg1, arg2, "<=>", "\\leftrightarrow{}")
def simplify(self):
if self.args[0] == false_term:
if self.args[1] == true_term:
return derived(false_term, self, desc="biconditional")
elif self.args[1] == false_term:
return derived(true_term, self, desc="biconditional")
else:
return derived(UnaryNegExpr(self.args[1]), self,
"biconditional")
elif self.args[0] == true_term:
if self.args[1] == false_term:
return derived(false_term, self, desc="biconditional")
elif self.args[1] == true_term:
return derived(true_term, self, desc="biconditional")
else:
return derived(self.args[1].copy(), self, desc="biconditional")
elif self.args[1] == false_term: # term+term is already taken care of
return derived(UnaryNegExpr(self.args[0]), self,
desc="biconditional")
elif self.args[1] == true_term:
return derived(self.args[0].copy(), self, desc="biconditional")
elif self.args[0] == self.args[1]:
return derived(true_term, self, desc="biconditional")
else:
return self
# TODO: generalize this?
class BinaryNeqExpr(BinaryOpExpr):
def __init__(self, arg1, arg2):
super().__init__(type_t, "^", arg1, arg2, "=/=", "\\not=")
def simplify(self):
if self.args[0] == false_term:
if self.args[1] == true_term:
return derived(true_term, self, desc="neq")
elif self.args[1] == false_term:
return derived(false_term, self, desc="neq")
else:
return derived(self.args[1].copy(), self, desc="neq")
elif self.args[0] == true_term:
if self.args[1] == false_term:
return derived(true_term, self, desc="neq")
elif self.args[1] == true_term:
return derived(false_term, self, desc="neq")
else:
return derived(UnaryNegExpr(self.args[1]), self, desc="neq")
elif self.args[1] == true_term: # term+term is already taken care of
return derived(UnaryNegExpr(self.args[0]), self, desc="neq")
elif self.args[1] == false_term:
return derived(self.args[0].copy(), self, desc="neq")
elif self.args[0] == self.args[1]:
return derived(false_term, self, desc="neq")
else:
# note: don't simplify p =/= q; this would be a job for a prover
return self
class BinaryGenericEqExpr(BinaryOpExpr):
"""Type-generic equality. This places no constraints on the type of `arg1`
and `arg2` save that they be equal. See `eq_factory`."""
def __init__(self, arg1, arg2):
arg1 = self.ensure_typed_expr(arg1)
# maybe raise the exception directly?
arg2 = self.ensure_typed_expr(arg2, arg1.type)
# some problems with equality using '==', TODO recheck, but for now
# just use "<=>" in the normalized form
super().__init__(type_t, "<=>", arg1, arg2, op_name_uni = "<=>",
op_name_latex = "=", tcheck_args = False)
def simplify(self):
if self.args[0] == self.args[1]:
return derived(true_term, self, desc="Equality")
else:
if (isinstance(self.args[0].op, Number)
and isinstance(self.args[1].op, Number)):
return derived(false_term, self, desc="Equality")
else:
return self # this would require a solver for the general case
@classmethod
def random(cls, ctrl, max_type_depth=1):
body_type = get_type_system().random_type(max_type_depth, 0.5)
return cls(ctrl(typ=body_type), ctrl(typ=body_type))
def eq_factory(arg1, arg2):
"""If type is type t, return a biconditional. Otherwise, build an equality
statement."""
arg1 = TypedExpr.ensure_typed_expr(arg1)
arg2 = TypedExpr.ensure_typed_expr(arg2)
ts = get_type_system()
if arg1.type == types.type_t: # this must be exact so as not to trigger on
# combinators. TODO: something more general?
return BinaryBiarrowExpr(arg1, arg2)
else:
return BinaryGenericEqExpr(arg1, arg2)
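# A minimal illustrative sketch (term names are hypothetical):
# eq_factory(te("p_t"), te("q_t")) returns a BinaryBiarrowExpr rendered as
# (p <=> q), while eq_factory(te("x_e"), te("y_e")) returns a
# BinaryGenericEqExpr, which only simplifies for syntactically identical
# arguments or for distinct numeric constants.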
def binary_num_op(op, op_uni=None, op_latex=None, simplify_fun=None):
"""Factory for binary numeric operators."""
if op_uni is None:
op_uni = op
if op_latex is None:
op_latex = op
class BinOp(BinaryOpExpr):
def __init__(self, arg1, arg2):
super().__init__(type_n, op, arg1, arg2, op_uni, op_latex)
def simplify(self):
if simplify_fun is None:
return self
if (isinstance(self.args[0].op, Number)
and isinstance(self.args[1].op, Number)):
return derived(te(simplify_fun(self.args[0].op,
self.args[1].op)),
self, desc=op_uni)
else:
return self
@classmethod
def random(cls, ctrl):
return cls(ctrl(typ=type_n), ctrl(typ=type_n))
return BinOp
def binary_num_rel(op, op_uni=None, op_latex=None, simplify_fun=None):
"""Factory for binary numeric relations."""
if op_uni is None:
op_uni = op
if op_latex is None:
op_latex = op
class BinOp(BinaryOpExpr):
def __init__(self, arg1, arg2):
# this is a bit redundant but it works
super().__init__(type_t, op,
self.ensure_typed_expr(arg1, types.type_n),
self.ensure_typed_expr(arg2, types.type_n),
op_uni, op_latex, tcheck_args=False)
def simplify(self):
if simplify_fun is None:
return self
if (isinstance(self.args[0].op, Number)
and isinstance(self.args[1].op, Number)):
return derived(te(simplify_fun(self.args[0].op,
self.args[1].op)),
self, desc=op_uni)
else:
return self
@classmethod
def random(cls, ctrl):
return cls(ctrl(typ=type_n), ctrl(typ=type_n))
return BinOp
BinaryLExpr = binary_num_rel("<", "<", "<", simplify_fun=lambda x,y: x<y)
BinaryLeqExpr = binary_num_rel("<=", "<=", "\\leq{}",
simplify_fun=lambda x,y: x<=y)
BinaryGeqExpr = binary_num_rel(">=", ">=", "\\geq{}",
simplify_fun=lambda x,y: x>=y)
BinaryGExpr = binary_num_rel(">", ">", ">", simplify_fun=lambda x,y: x>y)
BinaryPlusExpr = binary_num_op("+", "+", "+", simplify_fun=lambda x,y: x+y)
BinaryMinusExpr = binary_num_op("-", "-", "-", simplify_fun=lambda x,y: x-y)
BinaryDivExpr = binary_num_op("/", "/", "/", simplify_fun=lambda x,y: x/y)
BinaryTimesExpr = binary_num_op("*", "*", "*", simplify_fun=lambda x,y: x*y)
BinaryExpExpr = binary_num_op("**", "**", "**", simplify_fun=lambda x,y: x**y)
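# A minimal illustrative sketch of the generated classes (parse forms assume
# the `te` wrapper):
#     BinaryPlusExpr(te("2"), te("3")).simplify()   # reduces to the numeric term 5
#     BinaryLeqExpr(te("2"), te("3")).simplify()    # reduces to a true type-t term
# Non-constant arguments are left unsimplified.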
# There's only one of these, so a factory would be silly
class UnaryNegativeExpr(UnaryOpExpr):
def __init__(self, body):
super().__init__(type_n, "-", body, "-", "-")
def simplify(self):
if isinstance(self.args[0].op, Number):
return derived(te(- self.args[0].op), self, desc="unary -")
else:
return self
@classmethod
def random(cls, ctrl):
return cls(ctrl(typ=type_n))
class SetContains(BinaryOpExpr):
"""Binary relation of set membership. This uses `<<` as the symbol.
Note that this _does_ support reduction if the set describes its members by
condition, as set membership is equivalent to saturation of the
characteristic function of the set."""
def __init__(self, arg1, arg2, type_check=True):
# seems like the best way to do the mutual type checking here?
# Something more elegant?
arg1 = self.ensure_typed_expr(arg1)
arg2 = self.ensure_typed_expr(arg2, types.SetType(arg1.type))
arg1 = self.ensure_typed_expr(arg1, arg2.type.content_type)
#super().__init__(type_t, "<<", arg1, arg2, "∈", "\\in{}", tcheck_args=False)
# was having some trouble with the ∈ symbol; not sure what the problem is,
# so it is disabled for now.
super().__init__(type_t, "<<", arg1, arg2, "<<", "\\in{}",
tcheck_args=False)
def copy(self):
return SetContains(self.args[0], self.args[1])
def copy_local(self, arg1, arg2, type_check=True):
return SetContains(arg1, arg2)
def reduce(self):
if isinstance(self.args[1], ConditionSet):
derivation = self.derivation
step = (self.args[1].to_characteristic()(self.args[0])).reduce()
step.derivation = derivation # suppress the intermediate parts of
# this derivation, if any
return derived(step, self, "∈ reduction")
else:
# leave ListedSets as-is for now. TODO could expand this using
# disjunction.
return self
def reducible(self):
if isinstance(self.args[1], ConditionSet):
return True
return False
@classmethod
def random(cls, ctrl, max_type_depth=1):
content_type = get_type_system().random_type(max_type_depth, 0.5)
return SetContains(ctrl(typ=content_type), ctrl(
typ=types.SetType(content_type)))
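# A minimal illustrative sketch of membership reduction (parse forms assume the
# `te` wrapper; P is a hypothetical <e,t> constant):
#     SetContains(te("y_e"), te("Set x_e : P_<e,t>(x)")).reduce()
# applies the set's characteristic function (L x_e : P(x)) to y_e and
# beta-reduces, yielding P_<e,t>(y_e). Membership in a ListedSet is currently
# left unreduced.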
class TupleIndex(BinaryOpExpr):
def __init__(self, arg1, arg2, type_check=True):
arg1 = self.ensure_typed_expr(arg1)
if not isinstance(arg1.type, types.TupleType):
raise types.TypeMismatch(arg1, arg2,
mode="Tuple indexing expression with a non-tuple")
arg2 = self.ensure_typed_expr(arg2, types.type_n)
if isinstance(arg2.op, Number): # TODO better way to determine whether
# arg2 is a constant of type type_n?
if arg2.op >= len(arg1.type):
raise TypeMismatch(arg1, arg2,
mode="Tuple indexing expression with out-of-range index")
output_type = arg1.type[arg2.op]
else:
output_type = types.VariableType("X") # TODO this is problematic
logger.warning(
"Using non-constant tuple index; not well-supported.")
super().__init__(output_type, "[]", arg1, arg2, "[]", "[]",
tcheck_args=False)
def copy(self):
return TupleIndex(self.args[0], self.args[1])
def copy_local(self, arg1, arg2, type_check=True):
return TupleIndex(arg1, arg2)
def try_adjust_type_local(self, unified_type, derivation_reason, assignment,
env):
if isinstance(self.args[1].op, Number):
ttype = list(self.args[0].type)
ttype[self.args[1].op] = unified_type
adjusted_tuple = self.args[0].try_adjust_type(
types.TupleType(*ttype))
return self.copy_local(adjusted_tuple, self.args[1])
else:
logger.warning(
"Using non-constant index; not well-supported at present.")
return None
def __str__(self):
return "%s\nType: %s" % (repr(self), self.type)
def __repr__(self):
return "(%s[%s])" % (repr(self.args[0]), repr(self.args[1]))
def latex_str_long(self):
return self.latex_str() + "\\\\ Type: %s" % self.type.latex_str()
def latex_str(self, **kwargs):
return ensuremath("(%s[%s])" % (self.args[0].latex_str(**kwargs),
self.args[1].latex_str(**kwargs)))
def reduce(self):
if (isinstance(self.args[0], Tuple)
and isinstance(self.args[1].op, Number)):
result = self.args[0].tuple()[self.args[1].op].copy()
return derived(result, self, "Resolution of index")
else:
return self
def reducible(self):
if (isinstance(self.args[0], Tuple)
and isinstance(self.args[1].op, Number)):
return True
# no support for non-constant indices at present, not even ones that
# should be mathematically simplifiable
return False
@classmethod
def random(cls, ctrl, max_type_depth=1):
content_type = get_type_system().random_type(max_type_depth, 0.5)
tup = Tuple.random(ctrl, max_type_depth=max_type_depth,
allow_empty=False)
index = random.choice(range(len(tup)))
return TupleIndex(tup, index)
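# A minimal illustrative sketch (the tuple parse form assumes the metalanguage's
# parenthesized-tuple syntax):
#     t = te("(x_e, y_t)")
#     TupleIndex(t, 0)          # has type e; .reduce() yields x_e
# An out-of-range constant index raises a TypeMismatch at construction time;
# non-constant indices are accepted but only weakly supported (see the warning
# above).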
unary_symbols_to_op_exprs = {"~" : UnaryNegExpr,
"-" : UnaryNegativeExpr}
# not implemented: << as left implication. I am using << for set membership.
# note that neq is for type t only.
binary_symbols_to_op_exprs = {
"&" : BinaryAndExpr,
"|" : BinaryOrExpr,
">>" : BinaryArrowExpr,
"<=>" : eq_factory,
"==" : eq_factory,
"%" : BinaryNeqExpr,
"^" : BinaryNeqExpr,
"<" : BinaryLExpr,
">" : BinaryGExpr,
"<=" : BinaryLeqExpr,
">=" : BinaryGeqExpr,
"+" : BinaryPlusExpr,
"-" : BinaryMinusExpr,
"/" : BinaryDivExpr,
"*" : BinaryTimesExpr,
"**" : BinaryExpExpr,
"<<" : SetContains}
op_symbols = (set(unary_symbols_to_op_exprs.keys())
| set(binary_symbols_to_op_exprs.keys()))
# TODO raise exceptions
def op_expr_factory(op, *args):
"""Given some operator/relation symbol with arguments, construct an
appropriate TypedExpr subclass for that operator."""
# this conditional is necessary because the same symbol may involve both a
# unary and a binary operator
if len(args) == 0:
raise ValueError("0-length operator")
elif len(args) == 1:
if op not in unary_symbols_to_op_exprs:
raise ValueError("Unknown unary operator symbol '%s'" % op)
else:
return unary_symbols_to_op_exprs[op](args[0])
elif len(args) == 2:
if op not in binary_symbols_to_op_exprs:
raise ValueError("Unknown binary operator symbol '%s'" % op)
else:
return binary_symbols_to_op_exprs[op](args[0], args[1])
else:
raise ValueError("Too many arguments (%s) to operator '%s'"
% (len(args), op))
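# A minimal illustrative sketch (term names are hypothetical):
#     op_expr_factory("~", te("p_t"))              # builds a UnaryNegExpr
#     op_expr_factory("&", te("p_t"), te("q_t"))   # builds a BinaryAndExpr
# An unknown symbol or an unsupported arity raises ValueError.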
###############
#
# Binding expressions
#
###############
global recurse_level
recurse_level = 0
class BindingOp(TypedExpr):
"""Abstract class for a unary operator with a body that binds a single
variable in its body.
Never instantiated directly. To see how to use this, it may be helpful to
look at the definite description tutorial, which shows how to build an iota
operator."""
binding_operators = dict()
canonicalize_names = dict()
unparsed_operators = set()
op_regex = None
init_op_regex = None
# set the following in subclasses
canonical_name = None
secondary_names = set()
allow_multivars = False
allow_novars = False
op_name_uni = None
op_name_latex = None
partiality_weak = True
@classmethod
def binding_op_factory(cls, op_class, var_list, body, assignment=None):
for i in range(len(var_list)):
if not is_var_symbol(var_list[i][0]):
raise parsing.ParseError(
"Need variable name in binding operator expression"
" (received '%s')" % var_list[i][0], None)
if var_list[i][1] is None:
# TODO: flag as a guessed type somehow?
var_list[i] = (var_list[i][0],
default_variable_type(var_list[i][0]))
if op_class.allow_multivars or op_class.allow_novars:
# use alternate constructor
if (not op_class.allow_multivars) and len(var_list) > 1:
raise parsing.ParseError(
"Operator class '%s' does not allow >1 variables"
% (op_class.canonical_name), None)
if (not op_class.allow_novars) and len(var_list) == 0:
raise parsing.ParseError(
"Operator class '%s' does not allow 0 variables"
% (op_class.canonical_name), None)
return op_class(var_list, body, assignment=assignment)
else:
if len(var_list) != 1:
raise parsing.ParseError(
"Operator class '%s' does not allow %i variables"
% (op_class.canonical_name, len(var_list)), None)
return op_class(var_or_vtype=var_list[0][1],
varname=var_list[0][0],
body=body,
assignment=assignment)
def __init__(self, var_or_vtype, typ, body, varname=None, body_type=None,
assignment=None, type_check=True):
# NOTE: not calling superclass
# Warning: can't assume in general that typ is not None.
# I.e. may be set in subclass after a call
# to this function. Subclass is responsible for doing this properly...
if body_type is None:
body_type = typ
if isinstance(var_or_vtype, str): # TODO: support type strings
var_or_vtype = TypedExpr.term_factory(var_or_vtype)
if isinstance(var_or_vtype, TypedTerm):
if varname is not None:
logger.warning("Overriding varname '%s' with '%s'"
% (varname, var_or_vtype.op))
varname = var_or_vtype.op
vartype = var_or_vtype.type
elif isinstance(var_or_vtype, types.TypeConstructor):
if varname is None:
varname = self.default_varname()
vartype = var_or_vtype
else:
logger.error("Unknown var_or_vtype: " + repr(var_or_vtype))
raise NotImplementedError
if not is_var_symbol(varname):
raise ValueError("Need variable name (got '%s')" % varname)
if typ is not None:
self.type = typ
self.derivation = None
self.type_guessed = False
self.defer = False
self.let = False
self.init_args()
self.init_var(varname, vartype)
# TODO: consider overriding __eq__ and __hash__.
if type_check:
sassign = self.scope_assignment(assignment=assignment)
self.init_body(self.ensure_typed_expr(body, body_type,
assignment=sassign))
body_env = self.body.get_type_env()
if self.varname in body_env.var_mapping: # binding can be vacuous
if body_env.var_mapping[self.varname] != self.vartype:
# propagate type inference to binding expression
new_vartype = body_env.var_mapping[self.varname]
assert new_vartype is not None
self.init_var(self.varname, new_vartype)
self.init_body(self.body.regularize_type_env())
self.init_var_by_instance(
self.var_instance.under_type_assignment(body_env.type_mapping,
merge_intersect=False))
else:
self.init_body(body)
def copy_local(self, *args, type_check=True):
return type(self)(*args, type_check=type_check)
def scope_assignment(self, assignment=None):
if assignment is None:
assignment = dict()
else:
assignment = assignment.copy()
assignment[self.varname] = self.var_instance
return assignment
def default_varname(self):
return "x"
def init_args(self):
try:
a = self.args
except AttributeError:
self.args = list([None, None])
assert len(self.args) == 2
def init_var(self, name=None, typ=None):
self.init_args()
if name is None:
if typ is None:
raise ValueError
else:
var_instance = TypedTerm(self.varname, typ)
else:
if typ is None:
var_instance = TypedTerm(name, self.var_instance.type)
else:
var_instance = TypedTerm(name, typ)
self.args[0] = var_instance
self.op = "%s:" % (self.canonical_name)
def init_var_by_instance(self, v):
self.init_var(v.op, v.type)
def init_body(self, b):
self.init_args()
self.args[1] = b
@property
def varname(self):
return self.var_instance.term_name
@property
def vartype(self):
return self.var_instance.type
@property
def var_instance(self):
return self.args[0]
@property
def body(self):
return self.args[1]
@classmethod
def add_op(cls, op):
"""Register an operator to be parsed."""
if op.canonical_name is None:
BindingOp.unparsed_operators.add(op)
else:
if op.canonical_name in BindingOp.binding_operators:
logger.warning(
"Overriding existing binding operator '%s' in registry"
% op.canonical_name)
cls.remove_op(op)
BindingOp.binding_operators[op.canonical_name] = op
for alias in op.secondary_names:
BindingOp.canonicalize_names[alias] = op.canonical_name
BindingOp.compile_ops_re()
@classmethod
def remove_op(cls, op):
"""Remove an operator from the parsing registry."""
for alias in BindingOp.binding_operators[op.canonical_name].secondary_names:
del BindingOp.canonicalize_names[alias]
if op.canonical_name is None:
BindingOp.unparsed_operators.remove(op)
else:
del BindingOp.binding_operators[op.canonical_name]
BindingOp.compile_ops_re()
@classmethod
def compile_ops_re(cls):
"""Recompile the regex for detecting operators."""
op_names = (BindingOp.binding_operators.keys()
| BindingOp.canonicalize_names)
# sort with longer strings first, to avoid matching a prefix of a longer
# name (regex alternation with `|` is not greedy, so we work around that)
op_names = list(op_names)
op_names.sort(reverse=True)
if len(op_names) == 0:
BindingOp.op_regex = None
BindingOp.init_op_regex = None
else:
regex = "(" + ("|".join(op_names)) + ")"
BindingOp.op_regex = re.compile(regex)
BindingOp.init_op_regex = re.compile(r'^\s*' + regex)
def alpha_convert(self, new_varname):
"""Produce an alphabetic variant of the expression w.r.t. the bound
variable, with new_varname as the new name.
Returns a copy. Will not affect types of either the expression or the
variables."""
new_self = self.copy()
new_self.init_body(variable_convert(self.body, {self.varname: new_varname}))
new_self.init_var(name=new_varname)
return new_self
def latex_op_str(self):
return self.latex_op_str_short()
def latex_op_str_short(self):
return "%s %s_{%s} \\: . \\:" % (self.op_name_latex,
self.varname,
self.vartype.latex_str())
def __str__(self):
return "%s %s : %s, Type: %s" % (self.op_name, self.varname,
repr(self.body), self.type)
def latex_str_long(self):
return self.latex_str() + "\\\\ Type: %s" % self.type.latex_str()
def latex_str(self, assignment=None, **kwargs):
assignment = self.scope_assignment(assignment=assignment)
return ensuremath("%s %s" % (self.latex_op_str(),
self.body.latex_str(assignment=assignment, **kwargs)))
def __repr__(self):
return "(%s %s: %s)" % (self.op_name, repr(self.var_instance),
repr(self.body))
@property
def op_name(self):
if (self.op_name_uni is not None
and self.op_name_uni in self.secondary_names):
return self.op_name_uni
else:
return self.canonical_name
def free_variables(self):
return super().free_variables() - {self.varname}
def bound_variables(self):
return super().bound_variables() | {self.varname}
def calc_type_env(self, recalculate=False):
sub_env = self.body.get_type_env(force_recalc=recalculate).copy()
# ensure any variable types introduced by the variable show up even if
# they are not present in the subformula
sub_env.add_type_to_var_set(self.var_instance.type)
if self.varname in sub_env.var_mapping:
del sub_env.var_mapping[self.varname]
return sub_env
def type_env(self, constants=False, target=None, free_only=True):
sub_env = self.body.type_env(constants=constants, target=target,
free_only=free_only)
if free_only and self.varname in sub_env: # binding can be vacuous
del sub_env[self.varname]
return sub_env
def vacuous(self):
"""Return true just in case the operator's variable is not free in the
body expression."""
return self.varname not in self.body.free_variables()
def term(self):
return False
def project_partiality_strict(b, body, condition):
b_cls = type(b)
if isinstance(b, ConditionSet) or isinstance(b, LFun):
return b
else: # IotaPartial handled in subclass
return Partial(b_cls(b.var_instance, body),
ForallUnary(b.var_instance, condition))
def project_partiality_weak(b, body, condition):
b_cls = type(b)
if isinstance(b, ForallUnary):
return Partial(b_cls(b.var_instance, body),
b_cls(b.var_instance, condition))
elif isinstance(b, ExistsUnary) or isinstance(b, ExistsExact):
return Partial(b_cls(b.var_instance, body & condition),
b_cls(b.var_instance, condition))
elif isinstance(b, IotaUnary): # does this lead to scope issues for the condition?
return Partial(b_cls(b.var_instance, body & condition),
ExistsUnary(b.var_instance, condition))
elif isinstance(b, ConditionSet) or isinstance(b, LFun):
return b
else: # IotaPartial handled in subclass
# is this really a type issue?
raise TypeMismatch(b, None,
"No implemented way of projecting partiality for BindingOp %s"
% repr(type(b).__name__))
def calculate_partiality(self, vars=None):
if vars is None:
vars = set()
if isinstance(self, LFun):
vars |= {self.varname}
# defer any further calculation if there are bound variables in the body
if vars & self.body.free_variables():
return self
new_body = self.body.calculate_partiality(vars=vars)
if isinstance(new_body, Partial):
if new_body.condition is true_term:
return derived(self.copy_local(self.var_instance, new_body),
self, "Partiality simplification")
if self.varname in new_body.condition.free_variables():
if BindingOp.partiality_weak:
return derived(
self.project_partiality_weak(new_body.body,
new_body.condition),
self, "Partiality simplification")
else:
return derived(
self.project_partiality_strict(new_body.body,
new_body.condition),
self, "Partiality simplification")
else:
new_condition = new_body.condition
new_self = self.copy_local(self.var_instance, new_body.body)
return derived(Partial(new_self, new_condition), self,
"Partiality simplification")
else:
return derived(self.copy_local(self.var_instance, new_body), self,
"Partiality simplification")
@classmethod
def try_parse_header(cls, s, assignment=None, locals=None):
"""Try and parse the header of a binding operator expression, i.e.
everything up to the body including ':'.
If this succeeds, it will return a tuple with the class object, a list of
(variable name, type) pairs, and the remainder after the ':', if any.
If it fails, it will either return None or raise an exception. That
exception is typically a ParseError.
"""
i = 0
if BindingOp.init_op_regex is None:
return None # no operators to parse
op_match = re.match(BindingOp.init_op_regex, s)
if not op_match:
raise parsing.ParseError(
"Unknown operator when trying to parsing "
"binding operator expression", s, None, met_preconditions=False)
op_name = op_match.group(1) # operator name
i = op_match.end(1)
if op_name in BindingOp.canonicalize_names:
op_name = BindingOp.canonicalize_names[op_name]
if op_name not in BindingOp.binding_operators:
raise ValueError(
"Can't find binding operator '%s'; should be impossible"
% op_name)
op_class = BindingOp.binding_operators[op_name]
split = s.split(":", 1)
if (len(split) != 2):
# possibly should change to met_preconditions = True in the future.
# At this point, we have seen a binding expression token.
raise parsing.ParseError(
"Missing ':' in binding operator expression", s, None,
met_preconditions=False)
header, remainder = split
vname = header[i:].strip() # removes everything but a variable name
var_seq = cls.try_parse_term_sequence(vname, lower_bound=None,
upper_bound=None, assignment=assignment)
return (op_class, var_seq, remainder)
@classmethod
def try_parse_binding_struc_r(cls, struc, assignment=None, locals=None,
vprefix="ilnb"):
"""Attempt to parse structure `s` as a binding structure. Used by the
factory function.
assignment: a variable assignment to use when parsing.
`struc` is a semi-AST with all parenthetical structures parsed.
(See `parsing.parse_paren_str`.)
Format: 'Op v : b'
* 'Op' is one of 'lambda', 'L', 'λ', 'Forall', 'Exists', 'Iota'.
(Subclasses can register themselves to be parsed.)
* 'v' is a variable name expression (see try_parse_typed_term),
e.g. 'x_e'
* 'b' is a function body, i.e. something parseable into a TypedExpr.
If 'v' does not provide a type, it will attempt to guess one based on
the variable name. The body will be parsed using a call to the
recursive `TypedExpr.try_parse_paren_struc_r`, with a shifted assignment
using the new variable 'v'.
Returns a subclass of BindingOp.
"""
if (len(struc) == 0):
return None
if isinstance(struc[0], str) and struc[0] in parsing.brackets:
potential_header = struc[1]
bracketed = True
else:
potential_header = struc[0]
bracketed = False
if not isinstance(potential_header, str):
return None
result = BindingOp.try_parse_header(potential_header)
if result is None:
return None
(op_class, var_list, remainder) = result
# remainder is any string left over from parsing the header.
if bracketed:
# note: syntax checking for bracket matching is already done, this
# does not need to check for that here.
assert(parsing.brackets[struc[0]] == struc[-1])
new_struc = [remainder,] + struc[2:-1]
else:
new_struc = [remainder,] + struc[1:]
if assignment is None:
assignment = dict()
else:
assignment = assignment.copy()
store_old_v = None
for var_tuple in var_list:
(v,t) = var_tuple
assignment[v] = TypedTerm(v, t)
body = None
try:
body = TypedExpr.try_parse_paren_struc_r(new_struc,
assignment=assignment, locals=locals, vprefix=vprefix)
except Exception as e:
if isinstance(e, parsing.ParseError):
raise e
else:
raise parsing.ParseError(
"Binding operator expression has unparsable body",
parsing.flatten_paren_struc(struc), None, e=e)
if body is None:
raise parsing.ParseError(
"Can't create body-less binding operator expression",
parsing.flatten_paren_struc(struc), None)
result = BindingOp.binding_op_factory(op_class, var_list, body,
assignment=assignment)
return result
@classmethod
def random(cls, ctrl, body_type=type_t, max_type_depth=1):
global random_used_vars
var_type = get_type_system().random_type(max_type_depth, 0.5)
variable = random_term(var_type, usedset=random_used_vars,
prob_used=0.2, prob_var=1.0)
random_used_vars |= {variable}
return cls(variable, ctrl(typ=body_type))
class ConditionSet(BindingOp):
"""A set represented as a condition on a variable.
The body must be of type t."""
canonical_name = "Set"
op_name_uni="Set"
op_name_latex="Set"
def __init__(self, var_or_vtype, body, varname=None, assignment=None,
type_check=True):
body = self.ensure_typed_expr(body, assignment=assignment)
super().__init__(var_or_vtype=var_or_vtype, typ=None, body=body,
varname=varname, body_type=types.type_t, assignment=assignment,
type_check=type_check)
self.type = types.SetType(self.vartype)
def structural_singleton(self):
pass
def term(self):
return False
def latex_str(self, parens=True, **kwargs):
return ensuremath("\\{%s_{%s}\\:|\\: "
% (self.varname, self.vartype.latex_str())
+ self.body.latex_str(**kwargs) + "\\}")
def __lshift__(self, i):
return SetContains(i, self)
def to_characteristic(self):
"""Return a LFun based on the condition used to describe the set."""
return LFun(self.vartype, self.body, self.varname)
def try_adjust_type_local(self, unified_type, derivation_reason,
assignment, env):
inner_type = unified_type.content_type
char = self.to_characteristic()
sub_var = TypedTerm(self.varname, inner_type)
new_condition = char.apply(sub_var)
return self.copy_local(sub_var, new_condition)
BindingOp.add_op(ConditionSet)
class ListedSet(TypedExpr):
"""A listed set is a set that simply lists members."""
canonical_name = "ListedSet"
op_name_uni="ListedSet"
op_name_latex="ListedSet"
def __init__(self, iterable, typ=None, assignment=None, type_check=True):
s = set(iterable) # remove duplicates, flatten order
args = [self.ensure_typed_expr(a,assignment=assignment) for a in s]
args = sorted(args, key=repr) # for a canonical ordering
if len(args) == 0 and typ is None:
typ = types.VariableType("X") # could be a set of anything
elif typ is None:
typ = args[0].type
for i in range(len(args)):
# type checking TODO: this isn't right, would need to pick the
# strongest type
args[i] = self.ensure_typed_expr(args[i], typ)
super().__init__("Set", *args)
#self.op = "Set"
self.type = types.SetType(typ)
def subst(self, i, s):
if len(self.args) < 2:
return super().subst(i, s)
else:
raise NotImplementedError(
"Beta reduction into a set of size>1 not currently supported.")
# TODO deal with this
# the problem is the same as usual -- set order isn't stable so we
# need to do this all at once rather than member-by-member.
def copy(self):
return ListedSet(self.args)
def copy_local(self, *args, type_check=True):
return ListedSet(args)
def term(self):
return False
def __lshift__(self, i):
"""Use the `<<` operator for set membership."""
return SetContains(i, self)
def set(self):
"""Return a python `set` version of the ListedSet.
Note that this isn't guaranteed to be defined for anything with a set
type."""
return set(self.args)
def cardinality(self):
return len(self.args)
def to_condition_set(self):
"""Convert to a condition set by disjoining members."""
# ensure that we build a condition set from a variable that is not free
# in any of the members
varname = self.find_safe_variable(starting="x")
conditions = [BinaryGenericEqExpr(TypedTerm(varname, a.type), a)
for a in self.args]
return ConditionSet(self.type.content_type,
BinaryOrExpr.join(*conditions), varname=varname)
def reduce_all(self):
"""Special-cased reduce_all for listed sets. There are two problems.
First, the reduction may actually result in a change in the size of the
set, something generally not true of reduction elsewhere. Second,
because the constructor calls `set`, `copy` is not guaranteed to return
an object with a stable order. Therefore we must batch the reductions
(where the TypedExpr version doesn't).
Note that currently this produces non-ideal derivation sequences."""
dirty = False
accum = list()
result = self
for i in range(len(result.args)):
new_arg_i = result.args[i].reduce_all()
if new_arg_i is not result.args[i]:
dirty = True
reason = "Recursive reduction of set member %s" % (i+1)
# TODO: this isn't quite right but I can't see what else to do
# right now
result = derived(result, result, desc=reason,
subexpression=new_arg_i, allow_trivial=True)
accum.append(new_arg_i)
else:
accum.append(new_arg_i)
if dirty:
new_result = ListedSet(accum)
new_result = derived(new_result, result,
desc="Construction of set from reduced set members")
result = new_result
return result
def __repr__(self):
return repr(set(self.args))
def latex_str(self, **kwargs):
inner = ", ".join([a.latex_str(**kwargs) for a in self.args])
return ensuremath("\\{" + inner + "\\}")
def try_adjust_type_local(self, unified_type, derivation_reason, assignment,
env):
inner_type = unified_type.content_type
content = [a.try_adjust_type(inner_type,
derivation_reason=derivation_reason,
assignment=assignment) for a in self.args]
result = self.copy_local(*content)
return result
@classmethod
def random(cls, ctrl, max_type_depth=1, max_members=6, allow_empty=True):
typ = get_type_system().random_type(max_type_depth, 0.5)
if allow_empty:
r = range(max_members+1)
else:
r = range(max_members+1)[1:]
length = random.choice(r)
members = [ctrl(typ=typ) for i in range(length)]
return ListedSet(members)
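# A minimal illustrative sketch: ListedSet([te("x_e"), te("y_e")]) has type
# types.SetType(type_e), cardinality 2, and .set() returns the members as a
# python set; duplicates in the input are removed at construction.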
class ForallUnary(BindingOp):
"""Universal unary quantifier"""
canonical_name = "Forall"
op_name_uni = "∀"
op_name_latex = "\\forall{}"
def __init__(self, var_or_vtype, body, varname=None, assignment=None,
type_check=True):
super().__init__(var_or_vtype, types.type_t, body, varname=varname,
assignment=assignment, type_check=type_check)
def copy(self):
return ForallUnary(self.vartype, self.body, self.varname)
def copy_local(self, var, arg, type_check=True):
return ForallUnary(var, arg, type_check=type_check)
def simplify(self):
# note: not valid if the domain of individuals is completely empty
# (would return True)
if not self.varname in self.body.free_variables():
return self.body
return self
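# A minimal illustrative sketch: te("Forall x_e : p_t").simplify() returns just
# p_t, since the bound variable does not occur free in the body.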
BindingOp.add_op(ForallUnary)
class ExistsUnary(BindingOp):
"""Existential unary quantifier"""
canonical_name = "Exists"
op_name_uni="∃"
op_name_latex="\\exists{}"
def __init__(self, var_or_vtype, body, varname=None, assignment=None,
type_check=True):
super().__init__(var_or_vtype, types.type_t, body, varname=varname,
assignment=assignment, type_check=type_check)
def copy(self):
return ExistsUnary(self.vartype, self.body, self.varname)
def copy_local(self, var, arg, type_check=True):
return ExistsUnary(var, arg, type_check=type_check)
def simplify(self):
# note: not valid if the domain of individuals is completely empty
# (would return False)
if not self.varname in self.body.free_variables():
return self.body
return self
BindingOp.add_op(ExistsUnary)
class ExistsExact(BindingOp):
"""Existential unary quantifier"""
canonical_name = "ExistsExact"
op_name_uni="∃!"
op_name_latex="\\exists{}!"
def __init__(self, var_or_vtype, body, varname=None, assignment=None,
type_check=True):
super().__init__(var_or_vtype, types.type_t, body, varname=varname,
assignment=assignment, type_check=type_check)
def copy(self):
return ExistsExact(self.vartype, self.body, self.varname)
def copy_local(self, var, arg, type_check=True):
return ExistsExact(var, arg, type_check=type_check)
BindingOp.add_op(ExistsExact)
class IotaUnary(BindingOp):
"""Iota operator. This is best construed as Russellian."""
canonical_name = "Iota"
op_name_uni = "ι"
op_name_latex="\\iota{}"
secondary_names = {"ι"}
def __init__(self, var_or_vtype, body, varname=None, assignment=None,
type_check=True):
super().__init__(var_or_vtype=var_or_vtype, typ=None, body=body,
varname=varname, body_type=types.type_t, assignment=assignment,
type_check=type_check)
self.type = self.vartype
def copy(self):
return IotaUnary(self.vartype, self.body, self.varname)
def copy_local(self, var, arg, type_check=True):
return IotaUnary(var, arg, type_check=type_check)
def to_test(self, x):
"""Return a LFun based on the condition used to describe the set."""
return LFun(self.vartype, self.body, self.varname).apply(x)
def try_adjust_type_local(self, unified_type, derivation_reason, assignment,
env):
sub_var = TypedTerm(self.varname, unified_type)
# TODO: does this need to pass in assignment?
new_condition = self.to_test(sub_var)
result = self.copy_local(sub_var, new_condition)
return result
class IotaPartial(IotaUnary):
canonical_name = "IotaPartial"
op_name_uni = "ι"
op_name_latex="\\iota{}"
secondary_names = set()
def __init__(self, var_or_vtype, body, varname=None, assignment=None,
type_check=True):
super().__init__(var_or_vtype, body, varname, assignment, type_check)
def copy(self):
return IotaPartial(self.vartype, self.body, self.varname)
def copy_local(self, var, arg, type_check=True):
return IotaPartial(var, arg, type_check=type_check)
def calculate_partiality(self, vars=None):
new_body = self.body.calculate_partiality(vars=vars)
# defer any further calculation if there are bound variables in the body
if vars is not None:
if vars & new_body.free_variables():
return derived(self.copy_local(self.var_instance, new_body),
self, "Partiality simplification")
if isinstance(new_body, Partial):
new_body = new_body.body & new_body.condition
new_condition = new_body.copy()
new_body = IotaUnary(self.var_instance, new_body)
if self.varname in new_condition.free_variables():
new_condition = ExistsExact(self.var_instance, new_condition)
return derived(Partial(new_body, new_condition), self,
"Partiality simplification")
BindingOp.add_op(IotaUnary)
BindingOp.add_op(IotaPartial)
class LFun(BindingOp):
"""A typed function. Can itself be used as an operator in a TypedExpr.
"""
canonical_name = "Lambda"
secondary_names = {"L", "λ", "lambda"}
op_name_uni="λ"
op_name_latex="\\lambda{}"
def __init__(self, var_or_vtype, body, varname=None, let=False,
assignment=None, type_check=True):
# Use placeholder typ argument of None. This is because the input type
# won't be known until the var_or_vtype argument is parsed, which is
# done in the superclass constructor.
# sort of a hack, this could potentially cause odd side effects if
# BindingOp.__init__ is changed without taking this into account.
super().__init__(var_or_vtype=var_or_vtype, typ=None, body=body,
varname=varname, body_type=body.type, assignment=assignment,
type_check=type_check)
self.type = FunType(self.vartype, body.type)
self.let = let
@property
def argtype(self):
return self.type.left
@property
def returntype(self):
return self.type.right
def functional(self):
return True # no need to do any calculations
def copy(self):
r = LFun(self.argtype, self.body, self.varname, type_check=False)
r.let = self.let
return r
def copy_local(self, var, arg, type_check=True):
r = LFun(var, arg, type_check=type_check)
r.let = self.let
return r
def try_adjust_type_local(self, unified_type, derivation_reason, assignment,
env):
vacuous = False
# env will not start with bound variable in it
env.add_var_mapping(self.varname, self.argtype)
# update mapping with new type
left_principal = env.try_add_var_mapping(self.varname,
unified_type.left)
if left_principal is None:
return None
new_body = self.body
if self.argtype != left_principal:
# arg type needs to be adjusted.
new_var = TypedTerm(self.varname, left_principal)
else:
new_var = self.var_instance
if self.type.right != unified_type.right:
new_body = new_body.try_adjust_type(unified_type.right,
derivation_reason=derivation_reason,
assignment=assignment)
new_fun = self.copy_local(new_var, new_body)
env.merge(new_body.get_type_env())
if self.varname in env.var_mapping:
del env.var_mapping[self.varname]
new_fun = new_fun.under_type_assignment(env.type_mapping)
return new_fun
def apply(self,arg):
"""Apply an argument directly to the function.
`__call__` plus `reduce` is (almost) equivalent to `apply`, but using
`apply` directly will not generate a derivation."""
# do I really want flexible equality here??
# TODO: return to this. Right now a type mismatch still gets raised
# during beta reduction.
ts = get_type_system()
if ts.eq_check(self.argtype, arg.type):
# first check for potential variable name collisions when
# substituting, and the substitute
#TODO: do I want to actually return the result of alpha converting?
# May be needed later?
new_self = alpha_convert(self, unsafe_variables(self, arg))
# TODO: the copy here is a hack. Right now identity functions
# otherwise result in no copying at all, leading to very
# wrong results. This needs to be tracked down to its root and
# fixed.
return (beta_reduce_ts(new_self.body, new_self.varname, arg)).copy()
else:
raise TypeMismatch(self,arg, "Application")
def compose(self, other):
"""Function composition."""
return fun_compose(self, other)
def __mul__(self, other):
"""Override `*` as function composition for LFuns. Note that this
_only_ works for LFuns currently, not functional constants/variables."""
return self.compose(other)
@classmethod
def random(cls, ctrl):
# not great at reusing bound variables
ftyp = get_type_system().random_from_class(types.FunType)
return random_lfun(ftyp, ctrl)
def geach_combinator(gtype, ftype):
body = term("g", gtype)(term("f", ftype)(term("x", ftype.left)))
combinator = LFun(gtype, LFun(ftype,
LFun(ftype.left, body,varname="x"),varname="f"), varname="g")
return combinator
def fun_compose(g, f):
"""Function composition using the geach combinator for the appropriate type,
defined above."""
if (not (g.type.functional() and f.type.functional()
and g.type.left == f.type.right)):
raise types.TypeMismatch(g, f, "Function composition type constraints not met")
combinator = geach_combinator(g.type, f.type)
result = (combinator(g)(f)).reduce_all()
return result
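# A minimal illustrative sketch (parse forms assume the `te` wrapper): for
#     g = te("L x_t : ~x") and f = te("L y_e : P_<e,t>(y)"),
# g * f (equivalently fun_compose(g, f)) builds the geach combinator for these
# two types, applies it to g and f, and reduces to (an alphabetic variant of)
# L x_e : ~P_<e,t>(x).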
BindingOp.add_op(LFun)
###############
#
# Reduction code
#
###############
def unsafe_variables(fun, arg):
"""For a function and an argument, return the set of variables that are not
safe to use in application."""
return arg.free_variables() | fun.free_variables()
def beta_reduce_ts(t, varname, subst):
if varname in t.free_variables():
if (t.term() and t.op == varname):
return subst # TODO copy??
# we will be changing something in this expression, but not at this
# level of recursion.
parts = list()
for p in t:
parts.append(beta_reduce_ts(p, varname, subst))
t = t.copy_local(*parts)
return t
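# A minimal illustrative sketch (parse forms assume the `te` wrapper):
#     beta_reduce_ts(te("P_<e,t>(x_e) & Q_<e,t>(x_e)"), "x", te("y_e"))
# substitutes y_e for the free occurrences of x, yielding P(y_e) & Q(y_e).
# Callers are responsible for alpha-converting first; see LFun.apply and
# unsafe_variables.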
def variable_replace(expr, m):
def transform(e):
return TypedExpr.factory(m[e.op])
return variable_transform(expr, m.keys(), transform)
def variable_replace_strict(expr, m):
def transform(e):
result = TypedExpr.factory(m[e.op])
if result.type != e.type:
raise TypeMismatch(e, result, "Strict variable replace failed with mismatched types")
return result
return variable_transform(expr, m.keys(), transform)
def variable_replace_unify(expr, m):
def transform(e):
ts = get_type_system()
result = TypedExpr.factory(m[e.op])
if result.type != e.type:
unify = ts.unify(result.type, e.type)
if unify is None:
raise TypeMismatch(e, result, "Variable replace failed with mismatched types")
if unify == e.type: # unify gives us back e. Can we return e?
if result.term() and result.op == e.op:
return e
else:
return result
elif unify == result.type: # unify consistent with result
return result
else: # unify results in a 3rd type
result = result.try_adjust_type(unify, assignment=m)
return result
else:
if result.term() and result.op == e.op:
return e
else:
return result
r = variable_transform_rebuild(expr, m.keys(), transform)
return r
def variable_convert(expr, m):
def transform(e):
return TypedTerm(m[e.op], e.type)
return variable_transform(expr, m.keys(), transform)
def variable_transform(expr, dom, fun):
"""Transform free instances of variables in expr, as determined by the
function fun.
Operates on a copy.
expr: a TypedExpr
dom: a set of variable names
fun: a function from terms to TypedExprs."""
# TODO: check for properly named variables?
# TODO: double check -- what if I recurse into a region where a variable
# becomes free again?? I think this goes wrong
targets = dom & expr.free_variables()
if targets:
if expr.term() and expr.op in targets:
# expr itself is a term to be transformed.
return fun(expr)
expr = expr.copy()
for i in range(len(expr.args)):
expr.args[i] = variable_transform(expr.args[i], dom, fun)
return expr
def variable_transform_rebuild(expr, dom, fun):
"""Transform free instances of variables in expr, as determined by the
function fun.
Operates on a copy.
expr: a TypedExpr
dom: a set of variable names
fun: a function from terms to TypedExprs."""
targets = dom & expr.free_variables()
if targets:
if expr.term() and expr.op in targets:
# expr itself is a term to be transformed.
return fun(expr)
seq = list()
dirty = False
for i in range(len(expr.args)):
seq.append(variable_transform_rebuild(expr.args[i], targets, fun))
if seq[-1] != expr.args[i]:
dirty = True
if dirty:
expr = expr.copy_local(*seq)
return expr
# TODO: these last two functions are very similar, make an abstracted version?
def alpha_variant(x, blockset):
"""find a simple variant of string x that isn't in blocklist. Try adding
numbers to the end, basically.
side effect WARNING: updates blocklist itself to include the new
variable."""
if not x in blockset:
return x
split = utils.vname_split(x)
if len(split[1]) == 0:
count = 1
else:
# TODO: double check this -- supposed to prevent counterintuitive things
# like blocked "a01" resulting in "a1"
count = int(split[1]) + 1
prefix = split[0]
t = prefix + str(count)
while t in blockset:
count += 1
t = prefix + str(count)
blockset.add(t) # note: fails for non-sets
return t
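# A minimal illustrative sketch: alpha_variant("x", {"x", "x1"}) returns "x2"
# (adding it to the block set as a side effect), while alpha_variant("y", {"x"})
# returns "y" unchanged.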
def alpha_convert(t, blocklist):
""" produce an alphabetic variant of t that is guaranteed not to have any
variables in blocklist.
Possibly will not change t."""
overlap = t.bound_variables() & blocklist
full_bl = blocklist | t.free_variables() | t.bound_variables()
# note that this relies on the side effect of alpha_variant...
conversions = {x : alpha_variant(x, full_bl) for x in overlap}
return alpha_convert_r(t, overlap, conversions)
def alpha_convert_r(t, overlap, conversions):
overlap = overlap & t.bound_variables()
if overlap:
if isinstance(t, BindingOp) and t.varname in overlap:
# the operator is binding variables in the overlap set.
# rename instances of this variable that are free in the body of the
# operator expression.
t = t.alpha_convert(conversions[t.varname])
parts = list()
for i in range(len(t.args)):
parts.append(alpha_convert_r(t.args[i], overlap, conversions))
t = t.copy_local(*parts)
return t
###############
#
# Setup
#
###############
global true_term, false_term
# for whatever reason, monkey patching __bool__ doesn't work.
# TODO: is this a good idea?
class FalseTypedTerm(TypedTerm):
def __bool__(self):
return False
true_term = TypedTerm("True", types.type_t)
false_term = FalseTypedTerm("False", types.type_t)
def test_setup():
global ident, ia, ib, P, Q, p, y, t, testf, body, pmw_test1, pmw_test1b, t2
ident = te("L x_e : x")
ia = TypedExpr.factory(ident, "y")
ib = LFun(type_e, ia, "y")
P = TypedTerm("P", FunType(type_e, type_t))
Q = TypedTerm("Q", FunType(type_e, type_t))
x = TypedTerm("x", type_e)
y = TypedTerm("y", type_e)
t = TypedExpr.factory(P, x)
t2 = TypedExpr.factory(Q, x)
body = TypedExpr.factory("&", t, t) | t
p = TypedTerm("p", type_t)
testf = LFun(type_e, body)
pmw_test1 = LFun(type_t, LFun(type_e, t & p, "x"), "p")
pmw_test1b = LFun(type_e, t & t2, "x")
# test: when a free variable in a function scopes under an operator, do not
# bind the variable on application
assert pmw_test1.apply(t2) != pmw_test1b
# Different version of the same test: direct variable substitution
test2 = TypedExpr.factory("L y_e : L x_e : y_e")
test2b = TypedExpr.factory("L x_e : x_e")
assert test2.apply(x) != test2b
def default_variable_type(s):
#TODO something better
return type_e
def default_type(s):
if isinstance(s, TypedExpr):
return s.type
elif isinstance(s, Number):
return type_n
elif isinstance(s, str):
t = num_or_str(s)
if isinstance(t, Number):
return type_n
elif is_var_symbol(t):
return default_variable_type(s)
else:
#TODO, better default
return type_t
else:
# TODO: more default special cases? predicates?
raise NotImplementedError
def typed_expr(s):
# class method replaces this. Call this instead of factory, which has a
# slightly different semantics -- factory will make a copy if handed a
# TypedExpr.
return TypedExpr.ensure_typed_expr(s)
def is_symbol(s):
"A string s is a symbol if it starts with an alphabetic char."
return (isinstance(s, str) and len(s) > 0
and s[:1].isalpha()
and not is_multiword(s))
def is_var_symbol(s):
"A logic variable symbol is an initial-lowercase string."
return is_symbol(s) and s[0].islower()
def is_multiword(s):
"""a string is multiword if there is intermediate (non-initial and
non-trailing) whitespace."""
#TODO this could be more efficient
return (len(s.strip().split()) != 1)
class DerivationStep(object):
"""A single step of a derivation."""
def __init__(self, result, desc=None, origin=None, latex_desc=None,
subexpression=None, trivial=False):
self.result = result
self.subexpression = subexpression
if desc is None:
if latex_desc is None:
self.desc = self.latex_desc = ""
else:
self.desc = latex_desc
else:
self.desc = desc
if latex_desc is None:
self.latex_desc = desc
else:
self.latex_desc = latex_desc
if isinstance(origin, TypedExpr):
self.origin = (origin,)
else:
self.origin = tuple(origin)
self.trivial = trivial
def origin_str(self, latex=False):
if len(self.origin) == 1:
if latex:
return self.origin[0].latex_str()
else:
return repr(self.origin[0])
else:
if latex:
return ensuremath("(" +
(" + ".join([o.latex_str() for o in self.origin])) + ")")
else:
return "(" + (" + ".join([repr(o) for o in self.origin])) + ")"
def __repr__(self):
return ("[DerivationStep origin: "
+ repr(self.origin)
+ ", result: "
+ repr(self.result)
+ ", description: "
+ self.desc
+ "]")
class Derivation(object):
"""A derivation sequence, consisting of DerivationSteps."""
def __init__(self, steps):
self.steps = list()
self.steps_hash = dict()
if steps is not None:
self.add_steps(steps)
self.result = self[-1]
else:
self.result = None
def add_step(self, s):
self.steps_hash[len(self.steps)] = s
self.steps.append(s)
def add_steps(self, steps):
for s in steps:
self.add_step(s)
def __iter__(self):
return iter(self.steps)
def __len__(self):
return len(self.steps)
def __getitem__(self, i):
return self.steps[i]
def steps_sequence(self, latex=False, ignore_trivial=False):
l = list()
if len(self.steps) > 0:
l.append((self.steps[0].origin_str(latex), None, None))
for i in range(len(self.steps)):
# assume that origin matches previous result. Could check this.
if self.steps[i].trivial and ignore_trivial:
continue
if latex:
if self.steps[i].trivial:
l.append(("...", self.steps[i].latex_desc,
self.steps[i].subexpression))
else:
l.append((self.steps[i].result.latex_str(),
self.steps[i].latex_desc,
self.steps[i].subexpression))
else:
l.append((repr(self.steps[i].result),
self.steps[i].desc,
self.steps[i].subexpression))
return l
def equality_display(self, content, style=None):
l = self.steps_sequence(latex=True, ignore_trivial=True)
n = display.DisplayNode(content=content, parts=[step[0] for step in l],
style = display.EqualityDisplay())
return n
def build_display_tree(self, recurse=False, parent=None, reason=None,
style=None):
defaultstyle = {"align": "left"}
style = display.merge_styles(style, defaultstyle)
node_style = display.LRDerivationDisplay(**style)
l = self.steps_sequence(latex=True)
parts = list()
for (expr, subreason, subexpression) in l:
if reason == "":
reason = None
if subexpression and subexpression.derivation and (recurse):
parts.append(subexpression.derivation.build_display_tree(
recurse=recurse,
parent=expr,
reason=subreason,
style=style))
else:
parts.append(display.DisplayNode(content=expr,
explanation=subreason, parts=None, style=node_style))
if len(parts) == 0:
parts = None
return display.DisplayNode(content=parent, explanation=reason,
parts=parts, style=node_style)
def trace(self, recurse=True, style=None):
return self.build_display_tree(recurse=recurse, style=style)
def show(self, recurse=False, style=None):
return self.trace(recurse=recurse, style=style)
def _repr_html_(self):
return self.build_display_tree(recurse=False)._repr_html_()
def steps_str(self):
l = self.steps_sequence(latex=False)
s = ""
i = 1
for (expr, reason, subexpression) in l:
if reason is None:
s += "%2i. %s\n" % (i, expr)
else:
s += "%2i. %s (%s)\n" % (i, expr, reason)
i += 1
return s
def __repr__(self):
return self.steps_str()
def derivation_factory(result, desc=None, latex_desc=None, origin=None,
steps=None, subexpression=None, trivial=False):
"""Factory function for `Derivation`s. See `derived`."""
if origin is None:
if steps is not None and len(steps) > 0:
origin = steps[-1].result
drv = Derivation(steps)
# note: this will copy the steps if `steps` is an existing derivation; something
# more efficient may be better in the long run
drv.add_step(DerivationStep(result, desc=desc, origin=origin,
latex_desc=latex_desc, subexpression=subexpression, trivial=trivial))
return drv
def derived(result, origin, desc=None, latex_desc=None, subexpression=None,
allow_trivial=False):
"""Convenience function to return a derived TypedExpr while adding a
derivational step. Always return result, adds or updates its derivational
history as a side effect."""
if isinstance(result, TypedTerm) and result.derivation is None:
try:
# need to manually copy the typeenv?? TODO: double check...
tenv = result._type_env
# avoid mixing up derivations on terms. TODO: how bad is this?
result = result.copy()
result._type_env = tenv
except AttributeError: # no _type_env set
result = result.copy()
trivial = False
if result == origin: # may be inefficient?
if allow_trivial:
trivial = True
else:
# a bit hacky, but this scenario has come up
if result.derivation is None and result is not origin:
result.derivation = origin.derivation
return result
if result.derivation is None:
d = origin.derivation
else:
d = result.derivation
result.derivation = derivation_factory(result, desc=desc,
latex_desc=latex_desc,
origin=origin,
steps=d,
subexpression=subexpression,
trivial=trivial)
return result
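# Minimal sketch of `derived` (assumes the module-level parser `te` defined
# earlier in this file; the expressions are illustrative): record that
# `p_t & True` rewrites to `p_t`, and return the resulting Derivation.
def _derived_example():
    origin = te("p_t & True")
    result = derived(te("p_t"), origin, desc="simplify conjunction")
    # repr(result.derivation) prints the numbered derivation steps
    return result.derivation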
def add_derivation_step(te, result, origin, desc=None, latex_desc=None,
subexpression=None, allow_trivial=False):
trivial = False
if result == origin: # may be inefficient?
if allow_trivial:
trivial = True
else:
return te
if te.derivation is None:
d = origin.derivation
else:
d = te.derivation
te.derivation = derivation_factory(result, desc=desc,
latex_desc=latex_desc,
origin=origin,
steps=d,
subexpression=subexpression,
trivial=trivial)
return te
def add_subexpression_step(te, subexpr, desc=None, latex_desc=None):
if subexpr.derivation is None or len(subexpr.derivation) == 0:
return te
start = subexpr.derivation[0].origin[0]
end = subexpr.derivation[-1].origin[-1]
add_derivation_step(te, end, start, desc=desc, latex_desc=latex_desc,
subexpression=subexpr)
return te
test_setup()
######################
#
# testing code
#
# * some code for generating random expressions within certain parameters
# * unit tests
#
######################
def repr_parse(e):
result = te(repr(e), let=False)
return result == e
random_types = [type_t]
random_ops = ["&", "|", ">>", "%"]
def random_tf_op_expr(ctrl_fun):
# TODO: don't hardcode this
op = random.choice(random_ops)
while (op in binary_num_ops):
op = random.choice(random_ops)
if op == "~":
return UnaryNegExpr(ctrl_fun(typ=type_t))
elif op in binary_symbols_to_op_exprs.keys():
op_class = binary_symbols_to_op_exprs[op]
if op_class == eq_factory:
raise NotImplementedError
elif op_class == SetContains:
raise NotImplementedError
elif issubclass(op_class, BinaryOpExpr):
if op in binary_num_rels:
return op_class(ctrl_fun(typ=type_n), ctrl_fun(typ=type_n))
elif op in binary_tf_ops:
return op_class(ctrl_fun(typ=type_t), ctrl_fun(typ=type_t))
else:
raise NotImplementedError
else:
#print(repr(op_class))
raise NotImplementedError
else:
raise NotImplementedError
random_term_base = {type_t : "p", type_e : "x", type_n : "n"}
def random_term(typ=None, blockset=None, usedset=set(), prob_used=0.8,
prob_var=0.5, max_type_depth=1):
return TypedTerm.random(random_ctrl_fun=None,
typ=typ,
blockset=blockset,
usedset=usedset,
prob_used=prob_used,
prob_var=prob_var,
max_type_depth=max_type_depth)
# use this to try to get more reused bound variables (which tend to have odd
# types when generated randomly)
def random_pred_combo_from_term(output_type, ctrl, usedset):
ts = get_type_system()
term = (random.choice(list(usedset))).copy()
pred_type = ts.unify_ar(term.type, output_type)
pred = ctrl(typ=pred_type)
return pred(term)
def random_fa_combo(output_type, ctrl, max_type_depth=1):
ts = get_type_system()
input_type = ts.random_type(max_type_depth, 0.5, allow_variables=False)
fun_type = types.FunType(input_type, output_type)
result = (ctrl(typ=fun_type))(ctrl(typ=input_type))
return result
def random_lfun(typ, ctrl):
global random_used_vars
typ = get_type_system().unify(typ, tp("<?,?>"))
input_type = typ.left
body_type = typ.right
variable = random_term(input_type, usedset=random_used_vars, prob_used=0.2,
prob_var=1.0)
random_used_vars |= {variable}
return LFun(variable, ctrl(typ=body_type))
def random_lfun_force_bound(typ, ctrl):
global random_used_vars
typ = get_type_system().unify(typ, tp("<?,?>"))
input_type = typ.left
body_type = typ.right
variable = random_term(input_type, usedset=random_used_vars, prob_used=0.2,
prob_var=1.0)
random_used_vars |= {variable}
return LFun(variable, random_pred_combo_from_term(body_type, ctrl,
usedset=random_used_vars))
def random_binding_expr(ctrl, max_type_depth=1):
global random_used_vars
ts = get_type_system()
options = [ForallUnary, ExistsUnary, ExistsExact]
op_class = random.choice(options)
var_type = ts.random_type(max_type_depth, 0.5)
variable = random_term(var_type, usedset=random_used_vars, prob_used=0.2,
prob_var=1.0)
random_used_vars |= {variable}
return op_class(variable, ctrl(typ=type_t))
def random_from_class(cls, max_depth=1, used_vars=None):
global random_used_vars
if used_vars is None:
used_vars = set()
random_used_vars = used_vars
def ctrl(**args):
global random_used_vars
return random_expr(depth=max_depth-1, used_vars=random_used_vars,
**args)
return cls.random(ctrl)
# ugh, need to find a way to do this not by side effect
global random_used_vars
random_used_vars = set()
def random_expr(typ=None, depth=1, used_vars=None):
"""Generate a random expression of the specified type `typ`, with an AST of
specified `depth`. Leave used_vars as None for expected behavior.
This won't generate absolutely everything, and I haven't tried to make this
use some sensible distribution over expressions (whatever that would be).
If typ is None, it will draw from the random_types module level variable,
which is currently just [type_t].
An alternative approach would be to generate a random AST first, and fill
it in.
"""
global random_used_vars
if used_vars is None:
used_vars = set()
random_used_vars = used_vars
if typ is None:
typ = random.choice(random_types)
if depth == 0:
term = random_term(typ, usedset=random_used_vars)
random_used_vars |= {term}
return term
else:
# possibilities:
# 1. any typ: function-argument combination resulting in typ
# 2. if typ is type_t: operator expression of typ (exclude non type_t
# options for now)
# 3. if typ is type_t: binding expression of type_t
# 4. if typ is functional: LFun of typ
# ignore sets for now (variables with set types can be generated as
# part of option 1)
# ignore iota for now
options = [1]
if typ == type_t:
options.append(2)
options.append(3)
if typ.functional():
options.append(4)
options.append(5)
if depth == 1 and len(random_used_vars) > 0:
options.extend([6,7,8,9]) # try to reuse vars a bit more
choice = random.choice(options)
def ctrl(**args):
global random_used_vars
return random_expr(depth=depth-1, used_vars=random_used_vars,
**args)
if choice == 1:
return random_fa_combo(typ, ctrl)
elif choice == 2:
return random_tf_op_expr(ctrl)
elif choice == 3:
return random_binding_expr(ctrl)
elif choice == 4:
return random_lfun(typ, ctrl)
elif choice == 5:
return random_lfun_force_bound(typ, ctrl)
elif choice >= 6:
return random_pred_combo_from_term(typ, ctrl, random_used_vars)
else:
raise NotImplementedError
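# Quick sketch for eyeballing the generator's output (the parameters are
# arbitrary): draw a handful of random type-t expressions at depth 2 and
# return their reprs.
def _random_expr_samples(n=5):
    return [repr(random_expr(typ=type_t, depth=2)) for _ in range(n)]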
import unittest
def test_repr_parse_abstract(self, depth):
for i in range(1000):
x = random_expr(depth=depth)
result = repr_parse(x)
latex_str = x.latex_str() # also test that this doesn't crash -- can't
# easily test actual contents.
if not result:
print("Failure on depth %i expression '%s'" % (depth, repr(x)))
self.assertTrue(result)
def testsimp(self, a, b):
intermediate = a.simplify()
teb = te(b)
if intermediate != teb:
print("Failed simplification test: '%s == %s'" % (repr(a), repr(b)))
self.assertEqual(intermediate, teb)
return intermediate
te_classes = [ApplicationExpr, Tuple, TypedTerm, Partial, Disjunctive,
UnaryNegExpr, UnaryNegativeExpr, BinaryAndExpr, BinaryOrExpr,
BinaryArrowExpr, BinaryBiarrowExpr, BinaryNeqExpr, BinaryLExpr,
BinaryLeqExpr, BinaryGExpr, BinaryGeqExpr, BinaryPlusExpr,
BinaryMinusExpr, BinaryDivExpr, BinaryExpExpr, SetContains,
TupleIndex, ConditionSet, ListedSet, ForallUnary, ExistsUnary,
ExistsExact, IotaUnary, IotaPartial, LFun]
class MetaTest(unittest.TestCase):
def setUp(self):
self.ident = te("L x_e : x")
self.ia = TypedExpr.factory(self.ident, "y")
self.ib = LFun(type_e, self.ia, "y")
self.P = TypedTerm("P", FunType(type_e, type_t))
self.Q = TypedTerm("Q", FunType(type_e, type_t))
self.x = TypedTerm("x", type_e)
self.y = TypedTerm("y", type_e)
self.t = TypedExpr.factory(self.P, self.x)
self.t2 = TypedExpr.factory(self.Q, self.x)
self.body = TypedExpr.factory("&", self.t, self.t) | self.t
self.p = TypedTerm("p", type_t)
self.testf = LFun(type_e, self.body)
def test_basic(self):
# equality basics
self.assertEqual(self.P, self.P)
self.assertEqual(self.x, self.x)
self.assertEqual(self.testf, self.testf)
self.assertNotEqual(self.P, self.Q)
self.assertNotEqual(self.x, self.y)
def test_class_random(self):
for c in te_classes:
for i in range(50):
random_from_class(c)
def test_copy(self):
for c in te_classes:
for i in range(50):
x = random_from_class(c)
self.assertEqual(x, x.copy())
self.assertEqual(x, x.copy_local(*x))
def test_parse(self):
# overall: compare parsed TypedExprs with constructed TypedExprs
# basic operator syntax
self.assertEqual(TypedExpr.factory(
"(P_<e,t>(x_e) & P_<e,t>(x_e)) | P_<e,t>(x_e)"), self.body)
# parenthesis reduction
self.assertEqual(TypedExpr.factory(
"((P_<e,t>(x_e) & P_<e,t>(x_e)) | (P_<e,t>(x_e)))"), self.body)
# parenthesis grouping
self.assertNotEqual(TypedExpr.factory(
"P_<e,t>(x_e) & (P_<e,t>(x_e) | P_<e,t>(x_e))"), self.body)
# variable binding syntax
self.assertEqual(TypedExpr.factory("L x_e : x_e"), self.ident)
self.assertRaises(parsing.ParseError, TypedExpr.factory, "L x_e : x_t")
logger.setLevel(logging.WARNING)
te("L x: L y: In(y)(x)")
logger.setLevel(logging.INFO)
def test_reduce(self):
self.assertEqual(self.ident(self.y).reduce(), self.y)
# test: when a free variable in a function scopes under an operator, do
# not bind the variable on application
pmw_test1 = LFun(type_t, LFun(type_e, self.t & self.p, "x"), "p")
pmw_test1b = LFun(type_e, self.t & self.t2, "x")
self.assertNotEqual(pmw_test1.apply(self.t2), pmw_test1b)
# Different version of the same test: direct variable substitution
test2 = TypedExpr.factory("L y_e : L x_e : y_e")
test2b = TypedExpr.factory("L x_e : x_e")
self.assertNotEqual(test2.apply(self.x), test2b)
# test for accidental collisions from alpha conversions, added Apr 2015
test3 = TypedExpr.factory(
"(L xbar_<e,t> : L x_e : xbar(x))(L z_e : P_<(e,e,e),t>(x_e,z_e, x1_e))")
test3 = test3.reduce_all()
self.assertNotEqual(test3[1][1][0], test3[1][1][1])
self.assertNotEqual(test3[1][1][0], test3[1][1][2])
self.assertNotEqual(test3[1][1][1], test3[1][1][2])
def test_polymorphism(self):
# geach combinator test
g = te("L g_<Y,Z> : L f_<X,Y> : L x_X : g(f(x))")
self.assertEqual(g.try_adjust_type(tp("<<e,t>,<<e,e>,?>>")),
te("(λ g_<e,t>: (λ f_<e,e>: (λ x_e: g_<e,t>(f_<e,e>(x_e)))))"))
self.assertEqual(g.let_type(tp("<?,<<<e,t>,?>,?>>")),
te("(λ g_<Y,Z>: (λ f_<<e,t>,Y>: (λ x_<e,t>: g_<Y,Z>(f_<<e,t>,Y>(x_<e,t>)))))"))
# z combinator test
z = te("(λ f_<X,<e,Z>>: (λ g_<e,X>: (λ x_e: f(g(x))(x))))")
self.assertEqual(z.try_adjust_type(tp("<<e,<e,t>>,?>")),
te("(λ f_<e,<e,t>>: (λ g_<e,e>: (λ x_e: f_<e,<e,t>>(g_<e,e>(x_e))(x_e))))"))
def test_binary_simplify(self):
# negation
testsimp(self, te("~False"), True)
testsimp(self, te("~True"), False)
testsimp(self, te("~p_t"), te("~p_t"))
# conjunction
testsimp(self, te("True & True"), True)
testsimp(self, te("True & False"), False)
testsimp(self, te("False & True"), False)
testsimp(self, te("False & False"), False)
testsimp(self, te("False & p_t"), False)
testsimp(self, te("p_t & False"), False)
testsimp(self, te("True & p_t"), te("p_t"))
testsimp(self, te("p_t & True"), te("p_t"))
testsimp(self, te("p_t & p_t"), te("p_t"))
testsimp(self, te("p_t & q_t"), te("p_t & q_t"))
# disjunction
testsimp(self, te("True | True"), True)
testsimp(self, te("True | False"), True)
testsimp(self, te("False | True"), True)
testsimp(self, te("False | False"), False)
testsimp(self, te("False | p_t"), te("p_t"))
testsimp(self, te("p_t | False"), te("p_t"))
testsimp(self, te("True | p_t"), True)
testsimp(self, te("p_t | True"), True)
testsimp(self, te("p_t | p_t"), True)
testsimp(self, te("p_t | q_t"), te("p_t | q_t"))
# arrow
testsimp(self, te("True >> True"), True)
testsimp(self, te("True >> False"), False)
testsimp(self, te("False >> True"), True)
testsimp(self, te("False >> False"), True)
testsimp(self, te("False >> p_t"), True)
testsimp(self, te("p_t >> False"), te("~p_t"))
testsimp(self, te("True >> p_t"), te("p_t"))
testsimp(self, te("p_t >> True"), True)
testsimp(self, te("p_t >> p_t"), True)
testsimp(self, te("p_t >> q_t"), te("p_t >> q_t"))
# biconditional
testsimp(self, te("True <=> True"), True)
testsimp(self, te("True <=> False"), False)
testsimp(self, te("False <=> True"), False)
testsimp(self, te("False <=> False"), True)
testsimp(self, te("False <=> p_t"), te("~p_t"))
testsimp(self, te("p_t <=> False"), te("~p_t"))
testsimp(self, te("True <=> p_t"), te("p_t"))
testsimp(self, te("p_t <=> True"), te("p_t"))
testsimp(self, te("p_t <=> q_t"), te("p_t <=> q_t"))
testsimp(self, te("p_t <=> p_t"), True)
# neq
testsimp(self, te("True =/= True"), False)
testsimp(self, te("True =/= False"), True)
testsimp(self, te("False =/= True"), True)
testsimp(self, te("False =/= False"), False)
testsimp(self, te("False =/= p_t"), te("p_t"))
testsimp(self, te("p_t =/= False"), te("p_t"))
testsimp(self, te("True =/= p_t"), te("~p_t"))
testsimp(self, te("p_t =/= True"), te("~p_t"))
testsimp(self, te("p_t =/= q_t"), te("p_t =/= q_t"))
testsimp(self, te("p_t =/= p_t"), False)
# each of these generates 1000 random expressions with the specified depth,
# and checks whether their repr parses as equal to the original expression
def test_repr_parse_0(self): test_repr_parse_abstract(self, 0)
def test_repr_parse_1(self): test_repr_parse_abstract(self, 1)
def test_repr_parse_2(self): test_repr_parse_abstract(self, 2)
# def test_repr_parse_3(self): test_repr_parse_abstract(self, 3)
# def test_repr_parse_4(self): test_repr_parse_abstract(self, 4)
# def test_repr_parse_5(self): test_repr_parse_abstract(self, 5)
# def test_repr_parse_6(self): test_repr_parse_abstract(self, 6)
| 39.053878 | 126 | 0.573415 | 22,109 | 179,765 | 4.496585 | 0.065177 | 0.014807 | 0.005975 | 0.003621 | 0.408208 | 0.337203 | 0.27684 | 0.249087 | 0.216285 | 0.194075 | 0 | 0.005427 | 0.329641 | 179,765 | 4,602 | 127 | 39.062364 | 0.819454 | 0.184652 | 0 | 0.403428 | 0 | 0.003673 | 0.049317 | 0.003247 | 0 | 0 | 0 | 0.006302 | 0.011019 | 1 | 0.125803 | false | 0.000306 | 0.002143 | 0.039486 | 0.30854 | 0.000612 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a287c8710ab0590236a52016bc11bcc7db870699 | 4,158 | py | Python | raiden/utils/gas_reserve.py | Dominik1999/raiden | 95a8fa2499da82f3d216b7e8ad035fb4eba992fb | [
"MIT"
] | null | null | null | raiden/utils/gas_reserve.py | Dominik1999/raiden | 95a8fa2499da82f3d216b7e8ad035fb4eba992fb | [
"MIT"
] | null | null | null | raiden/utils/gas_reserve.py | Dominik1999/raiden | 95a8fa2499da82f3d216b7e8ad035fb4eba992fb | [
"MIT"
] | null | null | null | from typing import Tuple
from raiden.constants import (
GAS_REQUIRED_FOR_CLOSE_CHANNEL,
GAS_REQUIRED_FOR_OPEN_CHANNEL,
GAS_REQUIRED_FOR_SET_TOTAL_DEPOSIT,
GAS_REQUIRED_FOR_SETTLE_CHANNEL,
UNLOCK_TX_GAS_LIMIT,
)
from raiden.transfer import views
GAS_REQUIRED_FOR_CHANNEL_LIFECYCLE_AFTER_SETTLE = (
UNLOCK_TX_GAS_LIMIT
)
GAS_REQUIRED_FOR_CHANNEL_LIFECYCLE_AFTER_CLOSE = (
GAS_REQUIRED_FOR_SETTLE_CHANNEL + GAS_REQUIRED_FOR_CHANNEL_LIFECYCLE_AFTER_SETTLE
)
GAS_REQUIRED_FOR_CHANNEL_LIFECYCLE_AFTER_OPEN = (
GAS_REQUIRED_FOR_CLOSE_CHANNEL + GAS_REQUIRED_FOR_CHANNEL_LIFECYCLE_AFTER_CLOSE
)
GAS_REQUIRED_FOR_CHANNEL_LIFECYCLE_COMPLETE = (
GAS_REQUIRED_FOR_OPEN_CHANNEL +
GAS_REQUIRED_FOR_SET_TOTAL_DEPOSIT +
GAS_REQUIRED_FOR_CHANNEL_LIFECYCLE_AFTER_OPEN
)
GAS_RESERVE_ESTIMATE_SECURITY_FACTOR = 1.1
def _get_required_gas_estimate(
new_channels: int = 0,
opened_channels: int = 0,
closing_channels: int = 0,
closed_channels: int = 0,
settling_channels: int = 0,
settled_channels: int = 0,
) -> int:
estimate = 0
estimate += new_channels * GAS_REQUIRED_FOR_CHANNEL_LIFECYCLE_COMPLETE
estimate += opened_channels * GAS_REQUIRED_FOR_CHANNEL_LIFECYCLE_AFTER_OPEN
estimate += closing_channels * GAS_REQUIRED_FOR_CHANNEL_LIFECYCLE_AFTER_CLOSE
estimate += closed_channels * GAS_REQUIRED_FOR_CHANNEL_LIFECYCLE_AFTER_CLOSE
estimate += settling_channels * GAS_REQUIRED_FOR_CHANNEL_LIFECYCLE_AFTER_SETTLE
estimate += settled_channels * GAS_REQUIRED_FOR_CHANNEL_LIFECYCLE_AFTER_SETTLE
return estimate
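# Worked example (hypothetical channel counts): with 1 channel still to be
# opened and 2 channels already open, the estimate is
# 1 * GAS_REQUIRED_FOR_CHANNEL_LIFECYCLE_COMPLETE
# + 2 * GAS_REQUIRED_FOR_CHANNEL_LIFECYCLE_AFTER_OPEN.
def _example_gas_estimate() -> int:
    return _get_required_gas_estimate(new_channels=1, opened_channels=2)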
def _get_required_gas_estimate_for_state(raiden) -> int:
chain_state = views.state_from_raiden(raiden)
token_addresses = views.get_token_identifiers(chain_state, raiden.default_registry.address)
gas_estimate = 0
for token_address in token_addresses:
num_opened_channels = len(views.get_channelstate_open(
chain_state,
raiden.default_registry.address,
token_address,
))
num_closing_channels = len(views.get_channelstate_closing(
chain_state,
raiden.default_registry.address,
token_address,
))
num_closed_channels = len(views.get_channelstate_closed(
chain_state,
raiden.default_registry.address,
token_address,
))
num_settling_channels = len(views.get_channelstate_settling(
chain_state,
raiden.default_registry.address,
token_address,
))
num_settled_channels = len(views.get_channelstate_settled(
chain_state,
raiden.default_registry.address,
token_address,
))
gas_estimate += _get_required_gas_estimate(
opened_channels=num_opened_channels,
closing_channels=num_closing_channels,
closed_channels=num_closed_channels,
settling_channels=num_settling_channels,
settled_channels=num_settled_channels,
)
return gas_estimate
def has_enough_gas_reserve(
raiden,
channels_to_open: int = 0,
) -> Tuple[bool, int]:
""" Checks if the account has enough balance to handle the lifecycles of all
open channels as well as the to be created channels.
Note: This is just an estimation.
Args:
raiden: A raiden service instance
channels_to_open: The number of new channels that should be opened
Returns:
Tuple of a boolean denoting if the account has enough balance for
the remaining lifecycle events and the estimate for the remaining
lifecycle cost
"""
gas_estimate = _get_required_gas_estimate_for_state(raiden)
gas_estimate += _get_required_gas_estimate(new_channels=channels_to_open)
gas_price = raiden.chain.client.gas_price()
reserve_amount = gas_estimate * gas_price
secure_reserve_estimate = round(reserve_amount * GAS_RESERVE_ESTIMATE_SECURITY_FACTOR)
current_account_balance = raiden.chain.client.balance(raiden.chain.client.sender)
return secure_reserve_estimate <= current_account_balance, secure_reserve_estimate
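# Illustrative usage sketch (not part of the module); `raiden_service` stands in
# for a configured RaidenService instance:
def _example_reserve_check(raiden_service, new_channels: int = 1) -> int:
    enough, estimate = has_enough_gas_reserve(raiden_service, channels_to_open=new_channels)
    if not enough:
        raise RuntimeError('insufficient gas reserve, roughly %d wei needed' % estimate)
    return estimate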
| 34.081967 | 95 | 0.740981 | 512 | 4,158 | 5.529297 | 0.189453 | 0.081597 | 0.10385 | 0.089014 | 0.550689 | 0.46556 | 0.364889 | 0.271282 | 0.200636 | 0.086189 | 0 | 0.003339 | 0.207792 | 4,158 | 121 | 96 | 34.363636 | 0.856102 | 0.107023 | 0 | 0.224719 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033708 | false | 0 | 0.033708 | 0 | 0.101124 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2895a25e71360020298df0af70506623cae5616 | 11,042 | py | Python | assignment3/src/rnn.py | jschmidtnj/cs584 | d1d4d485d1fac8743cdbbc2996792db249dcf389 | [
"MIT"
] | null | null | null | assignment3/src/rnn.py | jschmidtnj/cs584 | d1d4d485d1fac8743cdbbc2996792db249dcf389 | [
"MIT"
] | null | null | null | assignment3/src/rnn.py | jschmidtnj/cs584 | d1d4d485d1fac8743cdbbc2996792db249dcf389 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
"""
rnn (rnn.py)
"""
from __future__ import annotations
from sys import argv
from typing import List, Optional, Any, Dict
from utils import file_path_relative
from variables import sentences_key, clean_data_folder, models_folder, \
rnn_folder, text_vectorization_folder, rnn_file_name, output_folder
from loguru import logger
from ast import literal_eval
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
# maximum number of words in vocabulary
vocab_size = 10000
# batch size used when reading the dataset
batch_size = 50
# number of epochs to run
epochs = 10
# window size in rnn
window_size: int = 20
class PlotTrain(tf.keras.callbacks.Callback):
"""
plot for training results
"""
def __init__(self, name: str):
super().__init__()
self.losses = []
self.accuracy = []
self.logs = []
self.name = name
def on_train_begin(self, logs=None):
"""
initialize
"""
self.losses: List[float] = []
self.accuracy: List[float] = []
self.logs: List[Dict[str, Any]] = []
def on_epoch_end(self, epoch, logs=None):
"""
runs on end of each epoch
"""
if logs is None:
return
self.logs.append(logs)
self.losses.append(logs.get('loss'))
self.accuracy.append(logs.get('accuracy'))
if len(self.losses) > 1:
nums = np.arange(0, len(self.losses))
plt.style.use("seaborn")
plt.figure()
plt.plot(nums, self.losses, label="train loss")
plt.plot(nums, self.accuracy, label="train accuracy")
plt.title(f"Training Loss and Accuracy for Epoch {epoch}")
plt.xlabel("Epoch #")
plt.ylabel("Loss & Accuracy")
plt.legend()
file_path = file_path_relative(
f'{output_folder}/rnn_{self.name}_train_epoch_{epoch}.png')
plt.savefig(file_path)
def create_text_vectorization_model(text_vectorization_filepath: str,
dataset_all_tokens: tf.data.Dataset) -> tf.keras.models.Sequential:
"""
create text vectorization model
this vectorizer converts an array of strings to an array of integers
"""
vectorize_layer = TextVectorization(
max_tokens=vocab_size,
output_mode='int'
)
logger.success('created text vectorization layer')
# batch the dataset to make it easier to store
# in memory
vectorize_layer.adapt(dataset_all_tokens.batch(batch_size))
logger.success('adapted vectorization to training dataset')
text_vectorization_model = tf.keras.models.Sequential([
tf.keras.Input(shape=(1,), dtype=tf.string),
vectorize_layer
])
# simple text vectorization test
logger.info(text_vectorization_model.predict(["this is a test"]))
text_vectorization_model.save(text_vectorization_filepath)
return text_vectorization_model
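# Minimal sketch of the vectorizer in isolation (the toy corpus and save path
# are made up): adapt on a few strings, then map new text to integer ids.
def _vectorization_demo(save_path='/tmp/demo_text_vectorization'):
    toy_corpus = tf.data.Dataset.from_tensor_slices(["the cat sat", "the dog ran"])
    model = create_text_vectorization_model(save_path, toy_corpus)
    return model.predict(["the cat ran"])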
def build_model(current_batch_size=batch_size) -> tf.keras.models.Sequential:
"""
build the main rnn model
batch_size is a parameter because prediction rebuilds the model with a batch size of 1
"""
# rnn params
embedding_dim = 256
rnn_units = 1024
# this uses GRU
model = tf.keras.models.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim,
batch_input_shape=[current_batch_size, None]),
tf.keras.layers.GRU(rnn_units,
return_sequences=True,
stateful=True,
recurrent_initializer='glorot_uniform'),
tf.keras.layers.Dense(vocab_size + 1)
])
logger.success('created tf model')
return model
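# Sketch of how the same architecture is rebuilt for inference (mirrors
# rnn_predict_next below, which uses a batch size of 1):
def _build_inference_model():
    model = build_model(current_batch_size=1)
    model.build(tf.TensorShape([1, None]))
    return model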
def flatten_input(data: List[List[Any]]) -> List[Any]:
"""
flatten the given input (to 1xn)
"""
return np.hstack(data).tolist()
def pad_zeros(data: List[int], num_elem: int) -> List[int]:
"""
pads (or truncates) the array so the output is num_elem in length
"""
if len(data) >= num_elem:
return data[:num_elem]
data.extend([0] * (num_elem - len(data)))
return data
def rnn_train(name: str, file_name: Optional[str] = None,
clean_data: Optional[pd.DataFrame] = None) -> tf.keras.models.Sequential:
"""
rnn training
creates the tensorflow rnn model for word prediction
"""
logger.info(f'run rnn training for {name}')
if file_name is None and clean_data is None:
raise ValueError('no file name or tokens provided')
# get training data
if clean_data is None:
file_path = file_path_relative(f'{clean_data_folder}/{file_name}')
logger.info(f'reading data from {file_path}')
clean_data = pd.read_csv(file_path, converters={
sentences_key: literal_eval})
tokens: List[List[str]] = clean_data[sentences_key]
flattened_tokens: List[str] = flatten_input(tokens)
dataset_all_tokens = tf.data.Dataset.from_tensor_slices(flattened_tokens)
logger.success('created all tokens text dataset')
# get text vectorization model
text_vectorization_filepath = file_path_relative(
f'{models_folder}/{name}/{text_vectorization_folder}')
text_vectorization_model = create_text_vectorization_model(
text_vectorization_filepath, dataset_all_tokens)
vectorized_tokens: List[int] = flatten_input(text_vectorization_model.predict(
flattened_tokens, batch_size=batch_size))
# create vectorized dataset
vectorized_tokens_dataset = tf.data.Dataset.from_tensor_slices(
vectorized_tokens)
# create sliding window
batched_vectorized_tokens = vectorized_tokens_dataset.batch(
window_size + 1, drop_remainder=True)
def split_train_test(batch: List[int]):
input_text = batch[:-1]
target_text = batch[1:]
return input_text, target_text
# create train and test
training_dataset = batched_vectorized_tokens.map(split_train_test)
# print some samples
logger.success('training data sample:')
for input_example, target_example in training_dataset.take(20):
logger.info(f"\ninput: {input_example}\ntarget: {target_example}")
# buffer size is used to shuffle the dataset
buffer_size = 10000
# create batches
training_dataset = training_dataset.shuffle(
buffer_size).batch(batch_size, drop_remainder=True)
logger.info(f'training dataset shape: {training_dataset}')
model = build_model()
# use sequence loss in training
def loss(targets, logits):
"""
return loss for given iteration
"""
return tfa.seq2seq.sequence_loss(logits, targets, tf.ones([batch_size, window_size]))
# use adam optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
logger.success('model compiled')
rnn_filepath = file_path_relative(
f'{rnn_folder}/{name}/{rnn_file_name}')
# save checkpoints to disk
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=rnn_filepath,
save_weights_only=True)
# create visualizations
plot_callback = PlotTrain(name)
history = model.fit(training_dataset, epochs=epochs,
callbacks=[checkpoint_callback, plot_callback])
model.summary()
last_loss = plot_callback.losses[-1]
logger.info(f'model loss: {last_loss}')
return text_vectorization_model
def rnn_predict_next(name: str,
text_vectorization_model: tf.keras.models.Sequential = None,
clean_input_file: Optional[str] = None,
clean_input_data: Optional[pd.DataFrame] = None,
num_lines_predict: Optional[int] = None,
num_predict: int = 1) -> None:
"""
predict next word(s) with given input
"""
logger.success(f'running predictions for {name}')
if clean_input_file is None and clean_input_data is None:
raise ValueError('no input file name or data provided')
# create model from disk
model = build_model(1)
rnn_filepath = file_path_relative(
f'{rnn_folder}/{name}/{rnn_file_name}')
model.load_weights(rnn_filepath)
model.build(tf.TensorShape([1, None]))
model.summary()
# get text vectorizer
if text_vectorization_model is None:
text_vectorization_filepath = file_path_relative(
f'{models_folder}/{name}/vectorization')
text_vectorization_model = tf.keras.models.load_model(
text_vectorization_filepath)
# get testing data
if clean_input_data is None:
file_path = file_path_relative(
f'{clean_data_folder}/{clean_input_file}')
logger.info(f'reading data from {file_path}')
clean_input_data = pd.read_csv(file_path, converters={
sentences_key: literal_eval})
predict_sentences: List[List[str]] = clean_input_data[sentences_key]
if num_lines_predict is not None:
predict_sentences = predict_sentences[:num_lines_predict]
# vectorize testing data
vectorize_layer: TextVectorization = text_vectorization_model.layers[0]
vocabulary = vectorize_layer.get_vocabulary()
# logger.info(f'vocabulary: {vocabulary}')
# reset model, get ready for predict
model.reset_states()
logger.success('[[<words>]] = predicted words:')
sum_probability_log: float = 0.
count_all_predict: int = 0
# iterate over all input sentences
for i, sentence in enumerate(predict_sentences):
full_sentence = sentence.copy()
for _ in range(num_predict):
vectorized_sentence = flatten_input(text_vectorization_model.predict(
full_sentence[-window_size:], batch_size=batch_size))
input_eval = tf.expand_dims(vectorized_sentence, 0)
predictions = model.predict(input_eval)
# remove batch dimension, take the logits for the last word, and convert
# them to probabilities with a softmax (so the log below is well-defined)
probabilities = tf.nn.softmax(tf.squeeze(predictions, 0)[-1]).numpy()
# get the index of the prediction based on the max probability
predicted_index = np.argmax(probabilities)
predicted_word = vocabulary[predicted_index]
full_sentence.append(predicted_word)
sum_probability_log += np.log(probabilities[predicted_index])
count_all_predict += 1
logger.info(
f"{i + 1}. {' '.join(sentence)} [[{' '.join(full_sentence[len(sentence):])}]]")
if count_all_predict == 0:
logger.info('no predictions, no perplexity')
else:
total_loss = -1 * sum_probability_log
perplexity: float = np.exp(total_loss / count_all_predict)
logger.info(f"perplexity: {perplexity}")
if __name__ == '__main__':
if len(argv) < 2:
raise ValueError('no n-grams training data provided')
| 33.664634 | 103 | 0.664916 | 1,365 | 11,042 | 5.147253 | 0.220513 | 0.062909 | 0.0501 | 0.016937 | 0.170937 | 0.15158 | 0.105892 | 0.072018 | 0.072018 | 0.060917 | 0 | 0.00631 | 0.239268 | 11,042 | 327 | 104 | 33.767584 | 0.830119 | 0.118185 | 0 | 0.090909 | 0 | 0.005051 | 0.113319 | 0.036161 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.065657 | 0 | 0.171717 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a28ca3ae58a4fa0e3aeb910430ef2016022e3d9f | 11,415 | py | Python | dipper/sources/Decipher.py | monarch-ci/dipper | abcd4843ec051a47cef3b592fadc1cd7d1616b45 | [
"BSD-3-Clause"
] | 52 | 2015-01-28T21:22:19.000Z | 2022-03-15T09:21:07.000Z | dipper/sources/Decipher.py | monarch-ci/dipper | abcd4843ec051a47cef3b592fadc1cd7d1616b45 | [
"BSD-3-Clause"
] | 742 | 2015-01-06T00:21:30.000Z | 2021-08-02T20:57:17.000Z | dipper/sources/Decipher.py | monarch-ci/dipper | abcd4843ec051a47cef3b592fadc1cd7d1616b45 | [
"BSD-3-Clause"
] | 24 | 2015-07-28T17:06:30.000Z | 2021-08-18T21:28:53.000Z | import logging
import re
import io
import csv
from zipfile import ZipFile
from dipper.sources.Source import Source
from dipper.models.Model import Model
from dipper.models.Reference import Reference
from dipper.models.assoc.G2PAssoc import G2PAssoc
from dipper.sources.HGNC import HGNC
from dipper.models.Genotype import Genotype
LOG = logging.getLogger(__name__)
class Decipher(Source):
"""
Deprecated - please see the EBIGene2Phen class, which parses the same
file but fetches it from EBI, which has clearer terms for redistribution;
Decipher's terms are restrictive because its password-protected datasets
contain patient data.
The Decipher group curates and assembles the Development Disorder Genotype
Phenotype Database (DDG2P) which is a curated list of genes reported to be
associated with developmental disorders, compiled by clinicians as part of
the DDD study to facilitate clinical feedback of likely causal variants.
Beware that the redistribution of this data is a bit unclear from the
[license](https://decipher.sanger.ac.uk/legal).
If you intend to distribute this data, be sure to have the appropriate
licenses in place.
"""
files = {
'annot': {
'file': 'ddg2p.zip',
'url': 'https://decipher.sanger.ac.uk/files/ddd/ddg2p.zip',
'headers': []}
}
def __init__(self,
graph_type,
are_bnodes_skolemized,
data_release_version=None):
super().__init__(
graph_type=graph_type,
are_bnodes_skized=are_bnodes_skolemized,
data_release_version=data_release_version,
name='decipher',
ingest_title='Development Disorder Genotype Phenotype Database',
ingest_url='https://decipher.sanger.ac.uk/',
ingest_logo='source-decipher.png',
license_url='https://decipher.sanger.ac.uk/legal',
data_rights='https://decipher.sanger.ac.uk/datasharing',
# file_handle=None
)
if 'disease' not in self.all_test_ids:
LOG.warning("not configured with disease test ids.")
self.test_ids = []
else:
self.test_ids = self.all_test_ids['disease']
self.graph = self.graph
self.geno = Genotype(self.graph)
self.model = Model(self.graph)
self.graph_type = graph_type
self.are_bnodes_skolemized = are_bnodes_skolemized
return
def fetch(self, is_dl_forced=False):
self.get_files(is_dl_forced)
# since there's a dependency on HGNC files; fetch those too
hgnc = HGNC(self.graph_type, self.are_bnodes_skolemized)
hgnc.fetch(is_dl_forced)
return
def parse(self, limit=None):
if limit is not None:
LOG.info("Only parsing first %s rows", limit)
LOG.info("Parsing files...")
if self.test_only:
self.test_mode = True
self.graph = self.testgraph
else:
self.graph = self.graph
self.geno = Genotype(self.graph)
# rare disease-phenotype associations
self._process_ddg2p_annotations(limit)
LOG.info("Finished parsing.")
return
def _process_ddg2p_annotations(self, limit):
"""
The ddg2p annotations associate a gene symbol to an omim disease,
along with some HPO ids and pubs. The gene symbols come from gencode,
which in turn come from HGNC official gene symbols. Therefore,
we use the HGNC source class to get the id/symbol mapping for
use in our annotations here.
According to http://www.gencodegenes.org/faq.html,
"Gene names are usually HGNC or MGI-approved gene symbols mapped
to the GENCODE genes by the Ensembl xref pipeline. Sometimes,
when there is no official gene symbol, the Havana clone-based
name is used."
The kind of variation that is linked to a disease is indicated
(LOF, GOF, CNV, etc) in the source data.
Here, we create an anonymous variant of the specified gene of
the indicated type (mapped to the sequence ontology (SO)).
:param limit:
:return:
"""
line_counter = 0
if self.graph is not None:
graph = self.graph
else:
graph = self.graph
# in order for this to work, we need to map the HGNC id-symbol;
# hgnc = HGNC(self.graph_type, self.are_bnodes_skolemized)
# hgnc_symbol_id_map = hgnc.get_symbol_id_map() # does not exist in hgnc
myzip = ZipFile(
'/'.join((self.rawdir, self.files['annot']['file'])), 'r')
# use the ddg2p.txt file
fname = 'ddg2p.txt'
unmapped_omim_counter = 0
unmapped_gene_count = 0
with myzip.open(fname, 'r') as f:
f = io.TextIOWrapper(f)
reader = csv.reader(f, delimiter='\t', quotechar='\"')
# score_means_by_measure = {}
# strain_scores_by_measure = {} # TODO: these are unused
for row in reader:
if re.match(r'#', row[0]): # skip comments
continue
(gencode_gene_name,
mode,
category,
consequence,
disease,
omim,
ddg2p_id,
pubmed_ids,
hpo_codes) = row
# hgnc_id = hgnc_symbol_id_map.get(gencode_gene_name.strip())
# if hgnc_id is None:
if True:
LOG.error(
"FIXME Couldn't map the gene symbol %s to HGNC.",
gencode_gene_name)
unmapped_gene_count += 1
continue
# add the gene
# self.model.addClassToGraph(hgnc_id, gencode_gene_name)
# TODO make VSLC with the variation
# to associate with the disorder
# TODO use the Inheritance and Mutation consequence
# to classify the VSLCs
# allele_id = self.make_allele_by_consequence(
# consequence, hgnc_id, gencode_gene_name)
if omim.strip() != '':
omim_id = 'OMIM:' + str(omim.strip())
# assume this is declared elsewhere in ontology
self.model.addClassToGraph(omim_id, None)
# ??? rel is never used
# if category.strip() == 'Confirmed DD gene':
# rel = self.self.globaltt['has phenotype']
# elif category.strip() == 'Probable DD gene':
# rel = self.self.globaltt['has phenotype']
# elif category.strip() == 'Possible DD gene':
# rel = self.self.globaltt['contributes to']
# elif category.strip() == 'Not DD gene':
# # TODO negative annotation
# continue
# assoc = G2PAssoc(graph, self.name, allele_id, omim_id)
# TODO 'rel' is assigned to but never used
for p in re.split(r';', pubmed_ids):
p = p.strip()
if p != '':
pmid = 'PMID:' + str(p)
r = Reference(
graph, pmid, self.globaltt['journal article'])
r.addRefToGraph()
assoc.add_source(pmid)
assoc.add_association_to_graph()
else:
# these are unmapped to a disease id.
# note that some match OMIM disease labels
# but the identifiers are just not included.
# TODO consider mapping to OMIM or DOIDs in other ways
LOG.warning(
"No omim id on line %d\n%s", line_counter, str(row))
unmapped_omim_counter += 1
# TODO hpo phenotypes
# since the DDG2P file is not documented,
# I don't know what the HPO annotations are actually about
# are they about the gene? the omim disease? something else?
# So, we won't create associations until this is clarified
if not self.test_mode and limit is not None and reader.line_num > limit:
break
myzip.close()
LOG.warning(
"gene-disorder associations with no omim id: %d",
unmapped_omim_counter)
LOG.warning("unmapped gene count: %d", unmapped_gene_count)
return
def make_allele_by_consequence(self, consequence, gene_id, gene_symbol):
"""
Given a "consequence" label that describes a variation type,
create an anonymous variant of the specified gene as an instance of
that consequence type.
:param consequence:
:param gene_id:
:param gene_symbol:
:return: allele_id
"""
allele_id = None
# Loss of function : Nonsense, frame-shifting indel,
# essential splice site mutation, whole gene deletion or any other
# mutation where functional analysis demonstrates clear reduction
# or loss of function
# All missense/in frame : Where all the mutations described in the data
# source are either missense or in frame deletions and there is no
# evidence favoring either loss-of-function, activating or
# dominant negative effect
# Dominant negative : Mutation within one allele of a gene that creates
# a significantly greater deleterious effect on gene product
# function than a monoallelic loss of function mutation
# Activating : Mutation, usually missense that results in
# a constitutive functional activation of the gene product
# Increased gene dosage : Copy number variation that increases
# the functional dosage of the gene
# Cis-regulatory or promoter mutation : Mutation in cis-regulatory
# elements that lie outwith the known transcription unit and
# promoter of the controlled gene
# Uncertain : Where the exact nature of the mutation is unclear or
# not recorded
type_id = self.resolve(consequence, mandatory=False)
if type_id == consequence:
LOG.warning("Consequence type unmapped: %s", str(consequence))
type_id = self.globaltt['sequence_variant']
# make the allele
allele_id = ''.join((gene_id, type_id))
allele_id = re.sub(r':', '', allele_id)
allele_id = self.make_id(allele_id) # make this a BNode
allele_label = ' '.join((consequence, 'allele in', gene_symbol))
self.model.addIndividualToGraph(allele_id, allele_label, type_id)
self.geno.addAlleleOfGene(allele_id, gene_id)
return allele_id
# def getTestSuite(self):
# # TODO
# import unittest
# from tests.test_decipher import DecipherTestCase
#
# test_suite = unittest.TestLoader().loadTestsFromTestCase(DecipherTestCase)
#
# return test_suite
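# Illustrative sketch (the gene id and symbol are hypothetical; requires a
# configured Decipher instance, which this deprecated ingest normally builds
# itself): map a loss-of-function consequence to an anonymous allele node.
def _example_allele(decipher):
    return decipher.make_allele_by_consequence(
        "Loss of function", "HGNC:11100", "SHH")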
| 38.177258 | 88 | 0.586509 | 1,335 | 11,415 | 4.89588 | 0.306367 | 0.020655 | 0.013923 | 0.016065 | 0.117656 | 0.094247 | 0.057222 | 0.057222 | 0.04437 | 0.031212 | 0 | 0.002661 | 0.341481 | 11,415 | 298 | 89 | 38.305369 | 0.866835 | 0.421288 | 0 | 0.136364 | 0 | 0 | 0.098481 | 0 | 0 | 0 | 0 | 0.006711 | 0 | 1 | 0.037879 | false | 0.007576 | 0.083333 | 0 | 0.174242 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a28cba95e14cb73a844da8072cd655a6734aade2 | 2,421 | py | Python | tests/pts1_prediction_tests.py | oKoch/pts1-prediction-tool | 0c1041905d12b96b72063aec5dccfd6073f31435 | [
"BSD-3-Clause"
] | 2 | 2021-04-12T09:50:51.000Z | 2021-04-12T10:50:10.000Z | tests/pts1_prediction_tests.py | oKoch/pts1-prediction-tool | 0c1041905d12b96b72063aec5dccfd6073f31435 | [
"BSD-3-Clause"
] | 1 | 2021-04-12T11:51:29.000Z | 2021-04-12T17:23:35.000Z | tests/pts1_prediction_tests.py | oKoch/pts1-prediction-tool | 0c1041905d12b96b72063aec5dccfd6073f31435 | [
"BSD-3-Clause"
] | null | null | null | import unittest
from pts1_prediction_tool import pts1_prediction
class Test_pts1_prediction(unittest.TestCase):
@classmethod
def setUpClass(cls):
print("Test Start")
cls.predictor = pts1_prediction.PTS1_Predictor()
def setUp(self):
print("Test Start")
def tearDown(self):
print("Test Ende")
def test_predictor_constants(self):
self.assertEqual(pts1_prediction.PEROXISOMAL, 1)
self.assertEqual(pts1_prediction.NOT_PEROXISOMAL, 0)
self.assertEqual(len(pts1_prediction.AMINOACID_ONE_LETTERCODE), 22)
def test_prediction_with_peroxisomal_seq(self):
test_aa_seq = str('MALNFKDKVVIVTGAGGGIGKVYALEFAKRGAKVVVNDLGGSHTGQGSSSKAADKVVEEIKAAGGTAVANYDSVEDGEKIVQTAMDSFGGVDILINNAGILRDVSFG'+
'KMTDGDWDLVYRVHAKGAYKLSRAAWNHMREKNFGRIIMTSSAAGLYGNFGQANYGSMKMALVGLSNTLAQEGKSKNIHCNTIAPIAASRLTESVMPPEILEQMKPD' +
'YIVPLVLYLCHQDTTETGGVFEVGAGWVSKVRLQRSAGVYMKDLTPEKIKDNWAQIESFDNPSYPTSASESVSGILAAVNSKPADGESVLVRPPKVAVPKALAATPS' +
'GSVVVDGYNASKIFTTIQGNIGAKGAELVKKINGIYLINIKKGTNTQAWALDLKNGSGSIVVGAGSTKPNVTITVSDEDFVDIMTGKLNAQSAFTKGKLKISGNMGLA' +
'TKLGALMQGSKL')
pred_result = self.predictor.check_for_pts1(test_aa_seq)
self.assertEqual(pred_result.prediction, 1)
self.assertEqual(pred_result.aa_seq, test_aa_seq)
self.assertEqual(pred_result.isPeroxisomal, True)
def test_prediction_with_not_peroxisomal_seq(self):
test_aa_seq = str('MALNFKDKVVIVTGAGGGIGKVYALEFAKRGAKVVVNDLGGSHTGQGSSSKAADKVVEEIKAAGGTAVANYDSVEDGEKIVQTAMDSFGGVDILINNAGILRDVSFG'+
'KMTDGDWDLVYRVHAKGAYKLSRAAWNHMREKNFGRIIMTSSAAGLYGNFGQANYGSMKMALVGLSNTLAQEGKSKNIHCNTIAPIAASRLTESVMPPEILEQMKPD' +
'YIVPLVLYLCHQDTTETGGVFEVGAGWVSKVRLQRSAGVYMKDLTPEKIKDNWAQIESFDNPSYPTSASESVSGILAAVNSKPADGESVLVRPPKVAVPKALAATPS' +
'GSVVVDGYNASKIFTTIQGNIGAKGAELVKKINGIYLINIKKGTNTQAWALDLKNGSGSIVVGAGSTKPNVTITVSDEDFVDIMTGMMMMMMMMMMMMMMMMMMMMM')
pred_result = self.predictor.check_for_pts1(test_aa_seq)
self.assertEqual(pred_result.prediction, 0)
self.assertEqual(pred_result.aa_seq, test_aa_seq)
self.assertEqual(pred_result.isPeroxisomal, False)
def test_prediction_with_to_short_aa_seq(self):
test_aa_seq = str('GALMQGSKL')
pred_result = self.predictor.check_for_pts1(test_aa_seq)
self.assertIsNone(pred_result)
if __name__ == '__main__':
unittest.main()
| 44.833333 | 136 | 0.792235 | 194 | 2,421 | 9.515464 | 0.283505 | 0.029794 | 0.039003 | 0.081257 | 0.58559 | 0.58559 | 0.575298 | 0.575298 | 0.575298 | 0.575298 | 0 | 0.008213 | 0.144981 | 2,421 | 53 | 137 | 45.679245 | 0.883575 | 0 | 0 | 0.325 | 0 | 0 | 0.377943 | 0.353986 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.175 | false | 0 | 0.05 | 0 | 0.25 | 0.075 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a28e4b7d826da9df7b902a18e5ffa3b3c1fece24 | 1,857 | py | Python | examples/src/aws_encryption_sdk_cli_examples/command_line_roundtrip.py | phasenoisepon/aws-encryption-sdk-cli | ba6365817d7dc226f64bd9506ac28e5c32368c5a | [
"Apache-2.0"
] | 43 | 2018-08-30T14:06:22.000Z | 2022-03-22T18:39:11.000Z | examples/src/aws_encryption_sdk_cli_examples/command_line_roundtrip.py | phasenoisepon/aws-encryption-sdk-cli | ba6365817d7dc226f64bd9506ac28e5c32368c5a | [
"Apache-2.0"
] | 58 | 2018-08-03T07:57:15.000Z | 2022-03-28T12:36:02.000Z | examples/src/aws_encryption_sdk_cli_examples/command_line_roundtrip.py | phasenoisepon/aws-encryption-sdk-cli | ba6365817d7dc226f64bd9506ac28e5c32368c5a | [
"Apache-2.0"
] | 22 | 2018-08-16T14:28:06.000Z | 2021-11-01T08:46:09.000Z | # Copyright 2021 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""Example showing basic usage of the AWS Encryption CLI to encrypt
input from stdin and output it to stdout."""
import shlex
from subprocess import PIPE, Popen
from aws_encryption_sdk_cli_examples.example_test_utils import cmk_arn, is_windows, setup_file
def run():
expected_plaintext = "Hello World"
cmk = cmk_arn()
# Call the encrypt CLI command and ensure that it passes
encrypt_command = "encrypt_command_line.sh '{}' {}".format(expected_plaintext, cmk)
proc = Popen(shlex.split(encrypt_command, posix=not is_windows()), stdout=PIPE, stdin=PIPE, stderr=PIPE)
encrypted_stdout, stderr = proc.communicate()
if proc.returncode != 0:
raise AssertionError("Failed to encrypt", stderr)
ciphertext_string = encrypted_stdout.decode("utf-8")
# Call the decrypt CLI command and ensure that it passes
decrypt_command = "decrypt_command_line.sh {} {}".format(ciphertext_string, cmk)
proc = Popen(shlex.split(decrypt_command, posix=not is_windows()), stdout=PIPE, stdin=PIPE, stderr=PIPE)
decrypted_stdout, stderr = proc.communicate()
if proc.returncode != 0:
raise AssertionError("Failed to decrypt", stderr)
decrypted_plaintext = decrypted_stdout.strip().decode("utf-8")
assert decrypted_plaintext == expected_plaintext
| 44.214286 | 108 | 0.745827 | 262 | 1,857 | 5.167939 | 0.461832 | 0.044313 | 0.019202 | 0.028065 | 0.261448 | 0.228951 | 0.228951 | 0.183161 | 0.183161 | 0.183161 | 0 | 0.007722 | 0.163166 | 1,857 | 41 | 109 | 45.292683 | 0.863578 | 0.406031 | 0 | 0.105263 | 0 | 0 | 0.106089 | 0.042435 | 0 | 0 | 0 | 0 | 0.157895 | 1 | 0.052632 | false | 0 | 0.157895 | 0 | 0.210526 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a290896e62f1605f8737dcf48c1f3572d55feef5 | 1,993 | py | Python | instrument/instrument/custom_instrument/blanket_order/blanket_order.py | shubhangiraut0027/rushabhinstruments_V13 | 7061a9e73821b559dbaee17b2172eb9df52ac0a8 | [
"MIT"
] | 1 | 2021-07-14T12:34:14.000Z | 2021-07-14T12:34:14.000Z | instrument/instrument/custom_instrument/blanket_order/blanket_order.py | shubhangiraut0027/rushabhinstruments_V13 | 7061a9e73821b559dbaee17b2172eb9df52ac0a8 | [
"MIT"
] | null | null | null | instrument/instrument/custom_instrument/blanket_order/blanket_order.py | shubhangiraut0027/rushabhinstruments_V13 | 7061a9e73821b559dbaee17b2172eb9df52ac0a8 | [
"MIT"
] | 4 | 2021-07-06T10:01:11.000Z | 2021-12-28T20:40:30.000Z | from __future__ import unicode_literals
import frappe
from frappe.model.document import Document
from datetime import date, timedelta, datetime
from frappe.utils import getdate, get_datetime, flt, nowdate, cstr, today
def on_submit(doc, method):
for row in doc.items:
row.updated_date = getdate(nowdate())
def generate_po_against_blanket_order_reminder():
data = {}
blanket_orders_name = frappe.db.sql("""SELECT name from `tabBlanket Order` where docstatus=1 and blanket_order_type='Purchasing' """, as_dict=1)
for name in blanket_orders_name:
order_name = frappe.get_list("Blanket Order Item", {"parent": name.get("name")}, ["name", "item_code", "item_name", "updated_date", "delivery_quantity", "frequency_in_day", "parent"])
item_list = {}
for row in order_name:
if row.get("frequency_in_day"):
today_date = row.get("updated_date")+timedelta(days=int(row.get("frequency_in_day")))
else:
today_date = row.get("updated_date")
if getdate(today_date) == getdate(nowdate()):
frappe.db.set_value("Blanket Order Item", row.get("name"), "updated_date", getdate(nowdate()))
item_list[row.get("item_name")] = row.get("delivery_quantity")
email_template = frappe.get_doc("Email Template", "Blanket Order")
if item_list:
data["name"]=name.get("name")
data["item_list"]=item_list
message = frappe.render_template(email_template.response_html, data)
try:
frappe.sendmail(
recipients = [frappe.db.get_value("Email Setting",{"email_name": "Purchase Order Email"},"email_id")],
sender = None,
subject = email_template.subject,
message = message,
delayed=False
)
except frappe.OutgoingEmailError:
pass
item_list.clear()
| 45.295455 | 191 | 0.619167 | 235 | 1,993 | 5.008511 | 0.357447 | 0.035684 | 0.045879 | 0.042481 | 0.078165 | 0.04418 | 0 | 0 | 0 | 0 | 0 | 0.001363 | 0.263924 | 1,993 | 44 | 192 | 45.295455 | 0.800954 | 0 | 0 | 0 | 0 | 0 | 0.201605 | 0.015547 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0.026316 | 0.131579 | 0 | 0.184211 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a290e0649b2d2174382737139ef1b8521a8264b6 | 4,852 | py | Python | CSV/lib/GetCsvColumn.py | philip-shen/note_python | db0ad84af25464a22ac52e348960107c81e74a56 | [
"MIT"
] | null | null | null | CSV/lib/GetCsvColumn.py | philip-shen/note_python | db0ad84af25464a22ac52e348960107c81e74a56 | [
"MIT"
] | 11 | 2021-02-08T20:45:23.000Z | 2022-03-12T01:00:11.000Z | CSV/lib/GetCsvColumn.py | philip-shen/note_python | db0ad84af25464a22ac52e348960107c81e74a56 | [
"MIT"
] | null | null | null | # pilicurg / GetCsvColumn
# https://github.com/pilicurg/GetCsvColumn/blob/master/demo.py
__author__ = 'www.lfhacks.com'
EXCLUDE_SIGN = '~'
EXCLUDE = lambda x: EXCLUDE_SIGN + str(x)
import csv
import re
class CsvFile(object):
'''get columns from a comma-separated values (csv) file, providing various filters'''
def __init__(self, filename):
self._name = filename
self._header_list = []
self._dataDict = {}
self._open_file(self._name)
def _get_data_dict(self, reader):
datadict = {}
for headerindex, column in enumerate(zip(*reader)):
datadict[self._header_list[headerindex]] = column
return datadict
'''
python csv2libsvm.py: AttributeError: '_csv.reader' object has no attribute 'next'
https://stackoverflow.com/questions/42767250/python-csv2libsvm-py-attributeerror-csv-reader-object-has-no-attribute-nex
write next(reader) instead of reader.next()
'''
def _open_file(self, filename):
with open(filename) as csvfile:
reader = csv.reader(csvfile)
self._header_list = next(reader)  # was reader.next() in Python 2
self._dataDict = self._get_data_dict(reader)
def get_column(self, *header_list, **rule_dict):
includelist, excludelist, converttype = self._reformat_rule_dict(rule_dict)
if len(header_list) == 1:
return self._filter_column(header_list[0], includelist, excludelist, converttype)
elif len(header_list) > 1:
return [self._filter_column(header, includelist, excludelist, converttype) for header in header_list]
else:
raise Exception('Empty header list!')
'''
Error: “ 'dict' object has no attribute 'iteritems' ”
https://stackoverflow.com/questions/30418481/error-dict-object-has-no-attribute-iteritems
use dict.items() instead of dict.iteritems()
'''
def _reformat_rule_dict(self, rule_dict):
convertType = rule_dict.pop('CONVERT', None)
seqmatch = re.compile(r'^(\[|\().*(\]|\))$')
includedict = {}
#for key, value in rule_dict.iteritems():
for key, value in rule_dict.items():
if str(value)[0] != EXCLUDE_SIGN:
if type(value) is list:
includedict[key] = value
else:
includedict[key] = [value]
excludedict = {}
#for key, value in rule_dict.iteritems():
for key, value in rule_dict.items():
if str(value)[0] == EXCLUDE_SIGN:
value = str(value).lstrip(EXCLUDE_SIGN)
if seqmatch.match(value):
excludedict[key] = eval(value)
else:
excludedict[key] = [value]
#includelist = tuple([{key: str(v)} for key, value in includedict.iteritems() for v in value])
#excludelist = tuple([{key: str(v)} for key, value in excludedict.iteritems() for v in value])
includelist = tuple([{key: str(v)} for key, value in includedict.items() for v in value])
excludelist = tuple([{key: str(v)} for key, value in excludedict.items() for v in value])
return includelist, excludelist, convertType
def _filter_column(self, header, includelist, excludelist, convertType):
if header not in self._header_list:
raise Exception('column \"%s\" not found in %s.' % (header, self._name))
'''
TypeError: 'dict_keys' object does not support indexing
https://stackoverflow.com/questions/17322668/typeerror-dict-keys-object-does-not-support-indexing
python2.x (when d.keys() returned a list).
With python3.x, d.keys() returns a dict_keys object which behaves a lot more like a set than a list. As such, it can't be indexed.
The solution is to pass list(d.keys()) (or simply list(d)) to shuffle.
'''
#include_unique_keys = list(set([d.keys()[0] for d in includelist]))
#exclude_unique_keys = list(set([d.keys()[0] for d in excludelist]))
include_unique_keys = list(set([ list(d.keys())[0] for d in includelist]))
exclude_unique_keys = list(set([ list(d.keys())[0] for d in excludelist]))
columnarray = []
for index, data in enumerate(self._dataDict[header]):
for key in include_unique_keys:
rowinclude = {key: self._dataDict[key][index]}
if rowinclude not in includelist:
break
else:
for key in exclude_unique_keys:
rowexclude = {key: self._dataDict[key][index]}
if rowexclude in excludelist:
break
else:
columnarray.append(convertType(data) if convertType is not None else data)
return tuple(columnarray) | 41.827586 | 138 | 0.613974 | 586 | 4,852 | 4.947099 | 0.25256 | 0.030355 | 0.030355 | 0.035874 | 0.358399 | 0.338048 | 0.3208 | 0.286996 | 0.25595 | 0.226975 | 0 | 0.010473 | 0.271847 | 4,852 | 116 | 139 | 41.827586 | 0.810076 | 0.11892 | 0 | 0.130435 | 0 | 0 | 0.026815 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.028986 | 0 | 0.202899 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
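# Usage sketch (the csv file and its column names are hypothetical): pull
# filtered, optionally converted columns out of a file with "name", "age",
# and "city" columns.
def _example_usage(filename="people.csv"):
    csvfile = CsvFile(filename)
    # ages (as ints) of rows whose city is Boston
    ages = csvfile.get_column("age", city="Boston", CONVERT=int)
    # names of rows whose city is anything other than Boston
    names = csvfile.get_column("name", city=EXCLUDE("Boston"))
    return ages, names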