hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4cf536611c9c289cf0a6a5b53a470c6346137063 | 3,190 | py | Python | mmocr/models/textdet/postprocess/pse_postprocessor.py | yuexy/mmocr | 82488024db159266e66ea6b0d6f84a5a18e87362 | [
"Apache-2.0"
] | 2,261 | 2021-04-08T03:45:41.000Z | 2022-03-31T23:37:46.000Z | mmocr/models/textdet/postprocess/pse_postprocessor.py | yuexy/mmocr | 82488024db159266e66ea6b0d6f84a5a18e87362 | [
"Apache-2.0"
] | 789 | 2021-04-08T05:40:13.000Z | 2022-03-31T09:42:39.000Z | mmocr/models/textdet/postprocess/pse_postprocessor.py | yuexy/mmocr | 82488024db159266e66ea6b0d6f84a5a18e87362 | [
"Apache-2.0"
] | 432 | 2021-04-08T03:56:16.000Z | 2022-03-30T18:44:43.000Z | # Copyright (c) OpenMMLab. All rights reserved.
import cv2
import numpy as np
import torch
from mmcv.ops import contour_expand
from mmocr.core import points2boundary
from mmocr.models.builder import POSTPROCESSOR
from .base_postprocessor import BasePostprocessor
@POSTPROCESSOR.register_module()
class PSEPostprocessor(BasePostprocessor):
"""Decoding predictions of PSENet to instances. This is partially adapted
from https://github.com/whai362/PSENet.
Args:
text_repr_type (str): The boundary encoding type 'poly' or 'quad'.
min_kernel_confidence (float): The minimal kernel confidence.
min_text_avg_confidence (float): The minimal text average confidence.
min_kernel_area (int): The minimal text kernel area.
min_text_area (int): The minimal text instance region area.
"""
def __init__(self,
text_repr_type='poly',
min_kernel_confidence=0.5,
min_text_avg_confidence=0.85,
min_kernel_area=0,
min_text_area=16,
**kwargs):
super().__init__(text_repr_type)
assert 0 <= min_kernel_confidence <= 1
assert 0 <= min_text_avg_confidence <= 1
assert isinstance(min_kernel_area, int)
assert isinstance(min_text_area, int)
self.min_kernel_confidence = min_kernel_confidence
self.min_text_avg_confidence = min_text_avg_confidence
self.min_kernel_area = min_kernel_area
self.min_text_area = min_text_area
def __call__(self, preds):
"""
Args:
preds (Tensor): Prediction map with shape :math:`(C, H, W)`.
Returns:
list[list[float]]: The instance boundary and its confidence.
"""
assert preds.dim() == 3
preds = torch.sigmoid(preds) # text confidence
score = preds[0, :, :]
masks = preds > self.min_kernel_confidence
text_mask = masks[0, :, :]
kernel_masks = masks[0:, :, :] * text_mask
score = score.data.cpu().numpy().astype(np.float32)
kernel_masks = kernel_masks.data.cpu().numpy().astype(np.uint8)
region_num, labels = cv2.connectedComponents(
kernel_masks[-1], connectivity=4)
labels = contour_expand(kernel_masks, labels, self.min_kernel_area,
region_num)
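        # contour_expand (from mmcv.ops) implements PSENet's progressive scale
        # expansion: labels seeded on the smallest kernel are grown outwards
        # through the successively larger kernel masks to recover text regions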
labels = np.array(labels)
label_num = np.max(labels)
boundaries = []
for i in range(1, label_num + 1):
points = np.array(np.where(labels == i)).transpose((1, 0))[:, ::-1]
area = points.shape[0]
score_instance = np.mean(score[labels == i])
if not self.is_valid_instance(area, score_instance,
self.min_text_area,
self.min_text_avg_confidence):
continue
vertices_confidence = points2boundary(points, self.text_repr_type,
score_instance)
if vertices_confidence is not None:
boundaries.append(vertices_confidence)
return boundaries
| 35.842697 | 79 | 0.610658 | 369 | 3,190 | 5.01626 | 0.344173 | 0.058347 | 0.061588 | 0.06483 | 0.097245 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015267 | 0.301881 | 3,190 | 88 | 80 | 36.25 | 0.815896 | 0.20627 | 0 | 0 | 0 | 0 | 0.001636 | 0 | 0 | 0 | 0 | 0 | 0.092593 | 1 | 0.037037 | false | 0 | 0.12963 | 0 | 0.203704 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4cf6adac32a989c6e601b495a7a73b88e57eabb4 | 878 | py | Python | dags/kd03-dags/research_report_parser.py | ywf5566/airflow | e7872dddbf275729b2c42e2a4ff602a6df7d1536 | [
"Apache-2.0",
"BSD-2-Clause",
"MIT",
"ECL-2.0",
"BSD-3-Clause"
] | null | null | null | dags/kd03-dags/research_report_parser.py | ywf5566/airflow | e7872dddbf275729b2c42e2a4ff602a6df7d1536 | [
"Apache-2.0",
"BSD-2-Clause",
"MIT",
"ECL-2.0",
"BSD-3-Clause"
] | null | null | null | dags/kd03-dags/research_report_parser.py | ywf5566/airflow | e7872dddbf275729b2c42e2a4ff602a6df7d1536 | [
"Apache-2.0",
"BSD-2-Clause",
"MIT",
"ECL-2.0",
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from datetime import datetime
from airflow import DAG
from airflow.operators.bash_operator import BashOperator
default_args = {
'owner': 'afroot03'
}
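# '0 20 * * *' is a cron expression: run this DAG every day at 20:00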
dag = DAG(
'research_report_parser',
default_args=default_args,
description='beta_info_update',
schedule_interval='0 20 * * *',
catchup=False,
start_date=datetime(2021, 1, 16, 20, 0)
)
parse_research_report = BashOperator(task_id="parse_research_report", bash_command="cd /usr/lib/carter/event-news-scheduler;sh project/extractor/script/parse_research_report.sh prod ", dag=dag)
research_report_opinion_detection = BashOperator(task_id="research_report_opinion_detection", bash_command="cd /usr/lib/carter/event-news-scheduler;sh project/extractor/script/opinion_detection.sh prod ", dag=dag)
parse_research_report >> research_report_opinion_detection | 39.909091 | 213 | 0.779043 | 119 | 878 | 5.478992 | 0.462185 | 0.171779 | 0.116564 | 0.138037 | 0.205521 | 0.205521 | 0.205521 | 0.205521 | 0.205521 | 0.205521 | 0 | 0.020408 | 0.107062 | 878 | 22 | 214 | 39.909091 | 0.811224 | 0.047836 | 0 | 0 | 0 | 0.117647 | 0.367665 | 0.297006 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.176471 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4cf6d1390cc0b4111f365c83154af60b50d2b6ac | 2,983 | py | Python | tweet.py | ardyflora/pytweetRogerInternet | 199c4f1e9e4a80559c7f59e16c3a0edd08527642 | [
"MIT"
] | null | null | null | tweet.py | ardyflora/pytweetRogerInternet | 199c4f1e9e4a80559c7f59e16c3a0edd08527642 | [
"MIT"
] | null | null | null | tweet.py | ardyflora/pytweetRogerInternet | 199c4f1e9e4a80559c7f59e16c3a0edd08527642 | [
"MIT"
] | null | null | null | import tweepy
import os
import pandas as pd
import matplotlib.pyplot as plt
import plotly.graph_objs as go
import plotly.plotly as py
from plotly import tools
from dotenv import load_dotenv
import plotly
import datetime
import time # {Added}
load_dotenv()
plotly.tools.set_credentials_file(
username=os.environ.get('username'),
api_key=os.environ.get('plotly_api_key'))
def twitter_authentication():
# Authenticating twitter with credentials from env
auth = tweepy.OAuthHandler(os.environ.get(
'consumer_key'), os.environ.get('consumer_secret'))
auth.set_access_token(os.environ.get('access_token'),
os.environ.get('access_token_secret'))
return tweepy.API(auth)
def get_current_internet_speed():
# Running speedtest-cli to get download and upload speed of the Internet
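    # `speedtest-cli --simple` typically prints three lines such as
    #   Ping: 21.3 ms
    #   Download: 45.6 Mbit/s
    #   Upload: 11.7 Mbit/s
    # plus a trailing newline, hence the fourth (empty) value from split below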
process = os.popen("speedtest-cli --simple")
preprocessed = process.read()
process.close()
ping, download, upload, _ = preprocessed.split('\n')
return download, upload
def write_into_json_file(download, upload):
# Write into json file and plot the data
json_backup = 'internetSpeed.json'
df_store = pd.DataFrame(columns=["Time", "Download Speed", "Upload Speed"])
time_data = str(datetime.datetime.now().strftime("%Y-%m-%d"))
try:
df_store = pd.read_json(json_backup)
df_store = df_store.append({
"Time": time_data,
"Download Speed": float(download.split(':')[1].split()[0]),
"Upload Speed": float(upload.split(':')[1].split()[0])
}, ignore_index=True)
df_store.to_json(json_backup)
trace_high = go.Scatter(
x=df_store.Time,
y=df_store['Download Speed'],
name="Download Speed",
line=dict(color='#17BECF'),
opacity=0.8)
trace_low = go.Scatter(
x=df_store.Time,
y=df_store['Upload Speed'],
name="Upload Speed",
line=dict(color='#7F7F7F'),
opacity=0.8)
data = [trace_high, trace_low]
fig = tools.make_subplots(rows=1, cols=2)
fig.append_trace(trace_high, 1, 1)
fig.append_trace(trace_low, 1, 2)
fig['layout'].update(
height=600,
width=800,
title='Internet Speed for few months')
py.iplot(fig, filename='simple-subplot-with-annotations')
# Save plot as img
py.image.save_as({'data': data}, 'scatter_plot', format='png')
except Exception as e:
print("The error msg:", e)
def main():
api = twitter_authentication()
download, upload = get_current_internet_speed()
message = "| Ignite 60 Plan | Actual Speed - Download: {}, Upload: {}".format(
download.split(':')[1], upload.split(':')[1])
# Update to twitter
api.update_status(message)
# write data into json
write_into_json_file(download, upload)
if __name__ == "__main__":
main()
| 28.141509 | 83 | 0.630908 | 382 | 2,983 | 4.751309 | 0.366492 | 0.034711 | 0.039669 | 0.028099 | 0.097521 | 0.097521 | 0.063361 | 0.031956 | 0.031956 | 0 | 0 | 0.012832 | 0.242373 | 2,983 | 105 | 84 | 28.409524 | 0.790265 | 0.074422 | 0 | 0.055556 | 0 | 0 | 0.155104 | 0.01126 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.152778 | 0 | 0.236111 | 0.013889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4cf6f90f97a266f542ca1d05ba36c9d433d0be5a | 1,007 | py | Python | other/chinking.py | gauthamkrishna-g/Real-Time-Sentiment-Analyzer-of-Twitter-Trends | 478ea270f67aa75c964d69d29d9bac59978fd7c5 | [
"MIT"
] | 6 | 2017-08-25T10:08:02.000Z | 2021-02-02T16:15:16.000Z | other/chinking.py | gauthkris/Real-Time-Sentiment-Analyzer-of-Twitter-Trends | 478ea270f67aa75c964d69d29d9bac59978fd7c5 | [
"MIT"
] | null | null | null | other/chinking.py | gauthkris/Real-Time-Sentiment-Analyzer-of-Twitter-Trends | 478ea270f67aa75c964d69d29d9bac59978fd7c5 | [
"MIT"
] | 2 | 2019-07-12T08:07:32.000Z | 2020-05-22T17:21:13.000Z | from nltk.tokenize import word_tokenize
from nltk import pos_tag
from nltk.tokenize import PunktSentenceTokenizer
from nltk.corpus import state_union
from nltk import RegexpParser
train_text = state_union.raw("2005-GWBush.txt")
sample_text = state_union.raw("2006-GWBush.txt")
custom_sent_tokenizer = PunktSentenceTokenizer(sample_text)
tokenized = custom_sent_tokenizer.tokenize(sample_text)
def process_content():
try:
for i in tokenized[:5]:
tagged = pos_tag(word_tokenize(i)) # tagset='universal'
chunkGram = r"""Chunk : {<.*>+}
}<VB.?|IN|DT|TO>{"""
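            # in NLTK chunk grammars, {<...>} chunks matching tag sequences
            # together, while }<...>{ "chinks" (removes) matching tags from an
            # existing chunk -- here verbs, prepositions, determiners and 'to'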
chunkParser = RegexpParser(chunkGram)
chunked = chunkParser.parse(tagged)
print(chunked)
for subtree in chunked.subtrees(filter=lambda t:t.label() == "Chunk"):
print(subtree)
chunked.draw()
except Exception as e:
print(str(e))
process_content() | 34.724138 | 83 | 0.621648 | 112 | 1,007 | 5.4375 | 0.508929 | 0.065681 | 0.052545 | 0.07225 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01238 | 0.278054 | 1,007 | 29 | 84 | 34.724138 | 0.825309 | 0.017875 | 0 | 0 | 0 | 0 | 0.096875 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.208333 | 0 | 0.25 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4cf8c7dac3b7e12cbc636958b30abf4449cbc7d6 | 3,446 | py | Python | Easydashboard/app.py | harshalsonioo1/mpr-asi | a0c6c9321105776016c4b765ddcac964abe8f1c4 | [
"MIT"
] | null | null | null | Easydashboard/app.py | harshalsonioo1/mpr-asi | a0c6c9321105776016c4b765ddcac964abe8f1c4 | [
"MIT"
] | null | null | null | Easydashboard/app.py | harshalsonioo1/mpr-asi | a0c6c9321105776016c4b765ddcac964abe8f1c4 | [
"MIT"
] | null | null | null | from utils import plot_confusion_matrix, load_map, plot_SHAP, create_map
import hydralit as hy
import streamlit as st
import streamlit.components.v1 as components
st.set_option("deprecation.showPyplotGlobalUse", False)
app = hy.HydraApp(
title="Explainer Dashboard", # nav_container=st.header,
nav_horizontal=False,
navbar_animation=True,
hide_streamlit_markers=True,
use_navbar=True,
navbar_sticky=False,
)
# Remove whitespace from the top of the page and sidebar
st.markdown(
"""
<style>
.css-18e3th9 {
padding-top: 0rem;
padding-bottom: 10rem;
padding-left: 0rem;
padding-right: 0rem;
}
.css-1d391kg {
padding-top: 3.5rem;
padding-right: 1rem;
padding-bottom: 3.5rem;
padding-left: 1rem;
}
</style>
""",
unsafe_allow_html=True,
)
hide_streamlit_style = """
<style>
[theme]
base="light"
primaryColor="#1e84d4"
font="serif"
footer {visibility: hidden;}
</style>
"""
st.markdown(hide_streamlit_style, unsafe_allow_html=True)
st.sidebar.title("Explainer Dashboard")
@app.addapp(title="About")
def about():
c1, c2, c3 = st.columns((1, 6, 1))
with c2:
        st.header('Need for a Dashboard')
        st.write('As we transition from teams to Squads, there will be occasions to discuss model performance and workings with the team.')
        st.write('Rather than sharing screenshots/plots of the model, a dashboard could kindle collaborative efforts and speed up delivery')

st.header('Target Audience')
        st.write('The target audience is us, developers. This is different from MPR, which targets program performance.')
st.header('Features')
st.write('Threshold Adjustment and decision')
st.write('SHAP Analysis at index level')
st.write('Spatial Analysis')
        st.write('Online Parameter Training, maybe?')
st.header('Usage')
        st.write('pip install easydashboard, then call run() to launch it inside any system')
@app.addapp(title="Classification Metrics")
def home():
data_type = st.sidebar.selectbox(
"Select Test or train to see the metrics", ["Test", "Train"], index=0
)
threshold = st.sidebar.slider(
label="Prediction threshold",
min_value=0.0,
max_value=1.0,
value=0.5,
step=0.05,
format="%f",
)
plot_confusion_matrix(data_type, threshold)
@app.addapp(title="SHAP Analysis")
def shap_analysis():
data_type = st.sidebar.selectbox(
"Select Test or train to see the metrics", ["Test", "Train"], index=0
)
c1, c2, c3 = st.columns((1, 6, 1))
with c2:
plot_SHAP(data_type)
@app.addapp(title="Create Spatial View")
def create_map_view():
c1, c2, c3 = st.columns((1, 8, 1))
with c2:
st.title("Spatial view of predictions")
components.html(create_map(), height=800)
@app.addapp(title="View Existing Map")
def spatial_view():
c1, c2, c3 = st.columns((1, 6, 1))
with c2:
st.title("Spatial view")
components.html(load_map(), height=600)
# Run the whole lot, we get navbar, state management and app isolation, all with this tiny amount of work.
app.run()
| 28.01626 | 139 | 0.612304 | 435 | 3,446 | 4.767816 | 0.425287 | 0.027001 | 0.033751 | 0.015429 | 0.161524 | 0.13838 | 0.13838 | 0.10704 | 0.10704 | 0.10704 | 0 | 0.027633 | 0.275392 | 3,446 | 122 | 140 | 28.245902 | 0.802964 | 0.053395 | 0 | 0.139241 | 0 | 0.012658 | 0.368894 | 0.019037 | 0 | 0 | 0 | 0 | 0 | 1 | 0.063291 | false | 0 | 0.050633 | 0 | 0.113924 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4cf911eb15b62882ba50ef99f4d0b5369641722b | 261 | py | Python | ermaket/scripts/script_abort_example.py | SqrtMinusOne/ERMaket_Experiment | c4a7b61651edd15a619d9b690e2aaeaab4de282d | [
"Apache-2.0"
] | null | null | null | ermaket/scripts/script_abort_example.py | SqrtMinusOne/ERMaket_Experiment | c4a7b61651edd15a619d9b690e2aaeaab4de282d | [
"Apache-2.0"
] | null | null | null | ermaket/scripts/script_abort_example.py | SqrtMinusOne/ERMaket_Experiment | c4a7b61651edd15a619d9b690e2aaeaab4de282d | [
"Apache-2.0"
] | null | null | null | from ermaket.api.scripts import ReturnContext, UserScript
__all__ = ['script']
script = UserScript(id=1)
@script.register
def step_1(context):
ctx = ReturnContext(abort=418)
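    # abort=418 presumably surfaces as an HTTP 418 ("I'm a teapot") response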
ctx.add_message("Sorry, this won't work", variant="danger")
return ctx
| 20.076923 | 63 | 0.724138 | 35 | 261 | 5.228571 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022624 | 0.153257 | 261 | 12 | 64 | 21.75 | 0.80543 | 0 | 0 | 0 | 0 | 0 | 0.130268 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4cf973cf75050168caf53e9004f07a1509cab4ba | 578 | py | Python | hardhat/recipes/x11/server/xinit.py | stangelandcl/hardhat | 1ad0c5dec16728c0243023acb9594f435ef18f9c | [
"MIT"
] | null | null | null | hardhat/recipes/x11/server/xinit.py | stangelandcl/hardhat | 1ad0c5dec16728c0243023acb9594f435ef18f9c | [
"MIT"
] | null | null | null | hardhat/recipes/x11/server/xinit.py | stangelandcl/hardhat | 1ad0c5dec16728c0243023acb9594f435ef18f9c | [
"MIT"
] | null | null | null | from hardhat.recipes.base import GnuRecipe
class XInitRecipe(GnuRecipe):
def __init__(self, *args, **kwargs):
super(XInitRecipe, self).__init__(*args, **kwargs)
self.sha256 = '75d88d7397a07e01db253163b7c7a00b' \
'249b3d30e99489f2734cac9a0c7902b3'
self.name = 'xinit'
self.version = '1.3.4'
self.depends = ['xorg-server']
self.url = 'http://ftp.x.org/pub/individual/app/xinit-$version.tar.bz2'
self.configure_args += [
'--with-xinitdir=%s/etc/X11/app-defaults' % self.prefix_dir]
| 32.111111 | 79 | 0.624567 | 62 | 578 | 5.66129 | 0.725806 | 0.05698 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119639 | 0.233564 | 578 | 17 | 80 | 34 | 0.672686 | 0 | 0 | 0 | 0 | 0.083333 | 0.315425 | 0.17851 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4cf9defe4fdd8ddd2590406f9ebaacf6453d229b | 221 | py | Python | main.py | Oreross/hangman_game | 80d10c9e7199f38c5aead129d15c179ee5645c92 | [
"Apache-2.0"
] | 1 | 2020-08-11T13:54:02.000Z | 2020-08-11T13:54:02.000Z | main.py | Oreross/hangman_game | 80d10c9e7199f38c5aead129d15c179ee5645c92 | [
"Apache-2.0"
] | null | null | null | main.py | Oreross/hangman_game | 80d10c9e7199f38c5aead129d15c179ee5645c92 | [
"Apache-2.0"
] | null | null | null | from game.hangman import Hangman
def main():
w = Hangman(7)
while True:
w.draw_word_scheme()
w.check_char(input("Enter a char -> "))
w.check_win()
if __name__ == "__main__":
main()
| 15.785714 | 47 | 0.579186 | 30 | 221 | 3.866667 | 0.7 | 0.103448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006329 | 0.285068 | 221 | 13 | 48 | 17 | 0.727848 | 0 | 0 | 0 | 0 | 0 | 0.108597 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4cfb0570d172533ee9894d3ef40fe14a0b5a7765 | 1,057 | py | Python | moodle/LoadBalancerStack.py | gastonmichel/moodle-docker | e604a6091ff65ae6a119d4e28b6e551bfa16270e | [
"CC0-1.0"
] | null | null | null | moodle/LoadBalancerStack.py | gastonmichel/moodle-docker | e604a6091ff65ae6a119d4e28b6e551bfa16270e | [
"CC0-1.0"
] | null | null | null | moodle/LoadBalancerStack.py | gastonmichel/moodle-docker | e604a6091ff65ae6a119d4e28b6e551bfa16270e | [
"CC0-1.0"
] | null | null | null | from aws_cdk import (
aws_ec2 as ec2,
aws_elasticloadbalancingv2 as elbv2,
core as cdk
)
from . import VPCStack
class MoodleLoadBalancerStack(cdk.Stack):
def __init__(self, scope: cdk.Construct, construct_id: str, vpc: VPCStack.MoodleVPCStack, **kwargs):
super().__init__(scope, construct_id, **kwargs)
self.load_balancer = elbv2.ApplicationLoadBalancer(
self, 'MoodleLoadBalancer',
vpc=vpc.vpc,
vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC),
internet_facing=True,
)
self.load_balancer.connections.allow_from_any_ipv4(
port_range=ec2.Port.tcp(80),
description='allow internet in port 80',
)
# self.load_balancer.connections.allow_from_any_ipv4(
# port_range=ec2.Port.tcp(443),
# description='allow internet in port 443',
# )
self.http_listener = self.load_balancer.add_listener(
'MoodleHttpListener',
port=80,
) | 30.2 | 104 | 0.631031 | 114 | 1,057 | 5.587719 | 0.45614 | 0.050235 | 0.100471 | 0.084772 | 0.288854 | 0.194662 | 0.194662 | 0.194662 | 0.194662 | 0.194662 | 0 | 0.030026 | 0.275307 | 1,057 | 35 | 105 | 30.2 | 0.801567 | 0.125828 | 0 | 0 | 0 | 0 | 0.066304 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.086957 | 0 | 0.173913 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4cfe154bfe19cd5e666abec8d197975b1b1677fc | 21,312 | py | Python | Code/detect-1113.py | b875102/simon | c7f0f2c7956e37f1cf5c71b1f0ca61ba2e1a5fb8 | [
"BSD-3-Clause-Attribution"
] | null | null | null | Code/detect-1113.py | b875102/simon | c7f0f2c7956e37f1cf5c71b1f0ca61ba2e1a5fb8 | [
"BSD-3-Clause-Attribution"
] | null | null | null | Code/detect-1113.py | b875102/simon | c7f0f2c7956e37f1cf5c71b1f0ca61ba2e1a5fb8 | [
"BSD-3-Clause-Attribution"
] | null | null | null | import argparse
import cv2
from models import * # set ONNX_EXPORT in models.py
from utils.datasets import *
from utils.utils import *
###############################################################
class_index = { '0':0, '1':1, '2':2, '3':3, '4':4, '5':5, '6':6, '7':7, '8':8, '9':9,\
'A':10, 'B':11, 'C':12, 'D':13, 'E':14, 'F':15, 'G':16, 'H':17, 'I':18, 'J':19,\
'K':20, 'L':21, 'M':22, 'N':23, 'O':24, 'P':25, 'Q':26, 'R':27, 'S':28, 'T':29,\
'U':30, 'V':31, 'W':32, 'X':33, 'Y':34, 'Z':35 }
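# class_index maps plate characters to model class ids:
# digits '0'-'9' -> 0-9, letters 'A'-'Z' -> 10-35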
###############################################
class Character():
def __init__( self, Name, Location ):
self.name = str(Name)
self.location_X = int( ( Location[0] + Location[2] ) / 2 ) ### location[0] : x1,[2] : x2
self.location_Y = int( ( Location[1] + Location[3] ) / 2 ) ### location[1] : y1,[3] : y2
def __str__(self):
return str(self.__class__) + ": " + str(self.__dict__)
###############################################
class LicensePlate():
def __init__( self, Image, Center_Point, Bounding_Box, Vehicle_Class_Name ):
self.image = Image
self.centerPoint = Center_Point
self.boundingBox = Bounding_Box
self.vehicleClassName = Vehicle_Class_Name
def __str__(self):
return str(self.__class__) + ": " + str(self.__dict__)
###############################################
class Vehicle():
def __init__( self, Class_Name, Bounding_Box ):
self.className = Class_Name
self.boundingBox = Bounding_Box
def __str__(self):
return str(self.__class__) + ": " + str(self.__dict__)
###############################################
def RectContains( rect, pt ):
return rect[0] < pt[0] < rect[2] and rect[1] < pt[1] < rect[3]
###############################################
def FilterLicensePlateCandidate( candidate ):
plate_list = []
for i in range( len(candidate) ):
plate_list.append(candidate[i][0])
return plate_list
###############################################
def LicensePlateRule( LPlist, VehicleClassName ):
length = len( LPlist )
# sedan
if VehicleClassName == 'sedan':
# new sedan XXX-XXXX
if length == 7:
LPlist.insert( 3, '-' )
elif length == 6:
index = 0
for i in range( len(LPlist) ):
if class_index[ LPlist[ i ] ] > 9:
index = i
if index > 3:
# old sedan XXXX-XX
LPlist.insert( 4, '-' )
elif index < 2:
# old sedan XX-XXXX
LPlist.insert( 2, '-' )
else:
# wrong character
return ''
# scooter
elif VehicleClassName == 'scooter':
# new scooter XXX-XXXX
# scooter XXX-XXX
if length == 7 or length == 6:
LPlist.insert( 3, '-' )
else:
# wrong character
return ''
# truck
elif VehicleClassName == 'truck':
# new truck XXX-XXXX
if length == 7:
LPlist.insert( 3, '-' )
elif length == 6:
index = 0
for i in range( len(LPlist) ):
if class_index[ LPlist[ i ] ] > 9:
index = i
if index > 3:
# old truck XXXX-XX
LPlist.insert( 4, '-' )
elif index < 2:
# old truck XX-XXXX
LPlist.insert( 2, '-' )
elif length == 5:
index = 0
for i in range( len(LPlist) ):
if class_index[ LPlist[ i ] ] > 9:
index = i
if index > 2:
# truck XXX-XX
LPlist.insert( 3, '-' )
else:
# old truck XX-XXX
LPlist.insert( 2, '-' )
elif length == 4:
# truck XX-XX
LPlist.insert( 2, '-' )
else:
# wrong character
return ''
# bus
# trailer
final = '%s'*len(LPlist) % tuple(LPlist)
return final
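# Worked examples for LicensePlateRule (hypothetical plates, traced through the
# rules above):
#   LicensePlateRule(list('ABC1234'), 'sedan')  -> 'ABC-1234'
#   LicensePlateRule(list('1234AB'), 'sedan')   -> '1234-AB'
#   LicensePlateRule(list('AB123'), 'scooter')  -> ''  (invalid length)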
###############################################################
def detect(save_img=False):
imgsz = (320, 192) if ONNX_EXPORT else opt.img_size # (320, 192) or (416, 256) or (608, 352) for (height, width)
out, source, weights, half, view_img, save_txt = opt.output, opt.source, opt.weights, opt.half, opt.view_img, opt.save_txt
webcam = source == '0' or source.startswith('rtsp') or source.startswith('http') or source.endswith('.txt')
lpweights = opt.lpweights
# Initialize
device = torch_utils.select_device(device='cpu' if ONNX_EXPORT else opt.device)
if os.path.exists(out):
shutil.rmtree(out) # delete output folder
os.makedirs(out) # make new output folder
# Initialize model
model = Darknet(opt.cfg, imgsz)
lpmodel = Darknet(opt.lpcfg, imgsz)
# Load weights
attempt_download(weights)
if weights.endswith('.pt'): # pytorch format
model.load_state_dict(torch.load(weights, map_location=device)['model'])
else: # darknet format
load_darknet_weights(model, weights)
attempt_download(lpweights)
if lpweights.endswith('.pt'): # pytorch format
lpmodel.load_state_dict(torch.load(lpweights, map_location=device)['model'])
else: # darknet format
load_darknet_weights(lpmodel, lpweights)
# Second-stage classifier
classify = False
if classify:
modelc = torch_utils.load_classifier(name='resnet101', n=2) # initialize
modelc.load_state_dict(torch.load('weights/resnet101.pt', map_location=device)['model']) # load weights
modelc.to(device).eval()
# Eval mode
model.to(device).eval()
lpmodel.to(device).eval()
# Fuse Conv2d + BatchNorm2d layers
# model.fuse()
# Export mode
if ONNX_EXPORT:
model.fuse()
img = torch.zeros((1, 3) + imgsz) # (1, 3, 320, 192)
f = opt.weights.replace(opt.weights.split('.')[-1], 'onnx') # *.onnx filename
torch.onnx.export(model, img, f, verbose=False, opset_version=11,
input_names=['images'], output_names=['classes', 'boxes'])
lpmodel.fuse()
lpimg = torch.zeros((1, 3) + imgsz) # (1, 3, 320, 192)
lpf = opt.lpweights.replace(opt.lpweights.split('.')[-1], 'onnx') # *.onnx filename
torch.onnx.export(lpmodel, lpimg, lpf, verbose=False, opset_version=11,
input_names=['lpimages'], output_names=['classes', 'boxes'])
# Validate exported model
import onnx
model = onnx.load(f) # Load the ONNX model
onnx.checker.check_model(model) # Check that the IR is well formed
print(onnx.helper.printable_graph(model.graph)) # Print a human readable representation of the graph
lpmodel = onnx.load(lpf) # Load the ONNX model
onnx.checker.check_model(lpmodel) # Check that the IR is well formed
print(onnx.helper.printable_graph(lpmodel.graph)) # Print a human readable representation of the graph
return
# Half precision
half = half and device.type != 'cpu' # half precision only supported on CUDA
if half:
model.half()
lpmodel.half()
# Set Dataloader
vid_path, vid_writer = None, None
if webcam:
view_img = True
torch.backends.cudnn.benchmark = True # set True to speed up constant image size inference
dataset = LoadStreams(source, img_size=imgsz)
else:
save_img = True
dataset = LoadImages(source, img_size=imgsz)
# Get names and colors
names = load_classes(opt.names)
colors = [[random.randint(0, 255) for _ in range(3)] for _ in range(len(names))]
lpnames = load_classes(opt.lpnames)
lpcolors = [[random.randint(0, 255) for _ in range(3)] for _ in range(len(lpnames))]
# Run inference
print('total len: ', len(dataset))
dataset_count = 0
t0 = time.time()
img = torch.zeros((1, 3, imgsz, imgsz), device=device) # init img
_ = model(img.half() if half else img.float()) if device.type != 'cpu' else None # run once
_ = lpmodel(img.half() if half else img.float()) if device.type != 'cpu' else None # run once
for path, img, im0s, vid_cap, cur_video_frame_cnt in dataset:
print(path)
need_reprocess = False
LP_list = []
Vehicle_list = []
LP_xyxy_list = []
dataset_count+=1
#print(dataset_count)
img = torch.from_numpy(img).to(device)
img = img.half() if half else img.float() # uint8 to fp16/32
img /= 255.0 # 0 - 255 to 0.0 - 1.0
if img.ndimension() == 3:
img = img.unsqueeze(0)
# Inference
t1 = torch_utils.time_synchronized()
pred = model(img, augment=opt.augment)[0]
#t2 = torch_utils.time_synchronized()
# to float
if half:
pred = pred.float()
# Apply NMS
pred = non_max_suppression(pred, opt.conf_thres, opt.iou_thres,
multi_label=False, classes=opt.classes, agnostic=opt.agnostic_nms)
# Apply Classifier
if classify:
pred = apply_classifier(pred, modelc, img, im0s)
# Process detections
for i, det in enumerate(pred): # detections for image i
if webcam: # batch_size >= 1
p, s, im0 = path[i], '%g: ' % i, im0s[i].copy()
else:
p, s, im0 = path, '', im0s
save_path = str(Path(out) / Path(p).name)
s += '%gx%g ' % img.shape[2:] # print string
gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh
no_license_detect = True
if det is not None and len(det):
# Rescale boxes from imgsz to im0 size
det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()
# Print results
for c in det[:, -1].unique():
n = (det[:, -1] == c).sum() # detections per class
s += '%g %ss, ' % (n, names[int(c)]) # add to string
# Write results
for *xyxy, conf, cls in reversed(det):
if save_txt: # Write to file
xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh
with open(save_path[:save_path.rfind('.')] + '.txt', 'a') as file:
file.write(('%g ' * 5 + '\n') % (cls, *xywh)) # label format
# handle license plate class
if names[int(cls)] == 'license plate':
# crop license image
#print(int(xyxy[ 1 ]) , int(xyxy[ 3 ]), int(xyxy[ 0 ]) , int(xyxy[ 2 ]) )
lp_im0 = im0[ int(xyxy[ 1 ]) : int(xyxy[ 3 ]), int(xyxy[ 0 ]) : int(xyxy[ 2 ]) ]
# resize image height to 200
rate = 200 / ( int(xyxy[ 3 ]) - int(xyxy[ 1 ]) )
lp_im0 = cv2.resize(lp_im0, ( int( ( int(xyxy[ 2 ]) - int(xyxy[ 0 ]) ) * rate ), 200), interpolation=cv2.INTER_CUBIC)
# calculate LP image LP center point
centerX = int ( ( int( xyxy[ 0 ] ) + int( xyxy[ 2 ] ) ) / 2 )
centerY = int ( ( int( xyxy[ 1 ] ) + int( xyxy[ 3 ] ) ) / 2 )
# create LicensePlate ( Image, Center_Point, Bounding_Box, Vehicle_Class_Name )
LP = LicensePlate( lp_im0, ( centerX, centerY ),
( int( xyxy[ 0 ] ), int( xyxy[ 1 ] ), int( xyxy[ 2 ] ), int( xyxy[ 3 ] ) ), '' )
# add to LP_list
LP_list.append( LP )
LP_xyxy_list.append( xyxy )
need_reprocess = True
no_license_detect = False
# handle vehicle class
elif names[int(cls)] != 'license plate':
# create Vehicle( Center_Point, Bounding_Box, Vehicle_Class_Name )
vehicle = Vehicle( names[int(cls)], ( int( xyxy[ 0 ] ), int( xyxy[ 1 ] ), int( xyxy[ 2 ] ), int( xyxy[ 3 ] ) ) )
Vehicle_list.append( vehicle )
# draw result box
if save_img or view_img: # Add bbox to image
label = '%s %.2f' % (names[int(cls)], conf)
plot_one_box( xyxy, im0, label=label, color=colors[int(cls)] )
# Stream results
if view_img:
cv2.imshow(p, im0)
if cv2.waitKey(1) == ord('q'): # q to quit
print('Stop Iteration!!')
raise StopIteration
# Save results (image with detections)
if save_img and no_license_detect:
#print('save_img')
if dataset.mode == 'images':
cv2.imwrite(save_path, im0)
else:
if vid_path != save_path: # new video
vid_path = save_path
if isinstance(vid_writer, cv2.VideoWriter):
vid_writer.release() # release previous video writer
fps = vid_cap.get(cv2.CAP_PROP_FPS)
w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
vid_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*opt.fourcc), fps, (w, h))
vid_writer.write(im0)
#print( "LP_list : ",len(LP_list) )
#print( "LP_xyxy_list : ",len(LP_xyxy_list) )
if need_reprocess:
for i in range( len( LP_list ) ):
final_LP = ''
for j in range( len( Vehicle_list ) ):
# if LP center point in Vehicle bounding box:
if RectContains( Vehicle_list[ j ].boundingBox, LP_list[ i ].centerPoint ):
# LP class name = Vehicle class name
LP_list[ i ].vehicleClassName = Vehicle_list[ j ].className
break
# detect character
# padded resize
lp_im0 = LP_list[ i ].image
lp_img = letterbox( lp_im0, imgsz)[0]
lp_img = lp_img[:, :, ::-1].transpose(2 , 0, 1)
lp_img = np.ascontiguousarray(lp_img)
lp_img = torch.from_numpy(lp_img).to(device)
lp_img = lp_img.half() if half else lp_img.float() # uint8 to fp16/32
lp_img /= 255.0 # 0 - 255 to 0.0 - 1.0
if lp_img.ndimension() == 3:
lp_img = lp_img.unsqueeze(0)
# Inference
lp_pred = lpmodel( lp_img, augment=opt.augment )[ 0 ]
# to float
if half:
lp_pred = lp_pred.float()
# Apply NMS
lp_pred = non_max_suppression(lp_pred, opt.conf_thres, opt.iou_thres, multi_label=False, classes=opt.classes, agnostic=opt.agnostic_nms)
for lp_i, lp_det in enumerate(lp_pred): # detections for image lp_i
if lp_det is not None and len(lp_det):
# Rescale boxes from imgsz to im0 size
lp_det[:, :4] = scale_coords(lp_img.shape[2:], lp_det[:, :4], lp_im0.shape).round()
h = lp_im0.shape[0]
y_min_range = h * 0.25
y_max_range = h * 0.75
license_plate_candidate = []
char_len = len(lp_det)
# Write results
for *lp_xyxy, lp_conf, cls in reversed(lp_det):
locate = ( int(lp_xyxy[0]), int(lp_xyxy[1]), int(lp_xyxy[2]), int(lp_xyxy[3]) )
char = Character( lpnames[int(cls)], locate )
#print(int(cls))
if char.location_Y > y_max_range or char.location_Y < y_min_range:
char_len -= 1
continue
license_plate_candidate.append( ( char.name, char.location_X, char.location_Y ) )
                        # more than 7 or fewer than 4 characters means the recognition failed
if char_len > 7 or char_len < 4:
print("character length error!! len : ", char_len )
continue
                        # sort characters left to right by their x coordinate
#print(license_plate_candidate)
license_plate_candidate = sorted(license_plate_candidate, key = lambda s: s[ 1 ])
#print(license_plate_candidate)
                        # filter the license plate characters
license_plate = FilterLicensePlateCandidate( license_plate_candidate )
#print(license_plate)
final_LP = LicensePlateRule( license_plate, LP_list[ i ].vehicleClassName )
if save_img or view_img:
plot_one_box( LP_xyxy_list[ i ], im0, label=final_LP, color=colors[ 0 ]) # 5 --> license plate color
#####################################################################
# Print time (inference + NMS)
#t2 = torch_utils.time_synchronized()
#print('%sDone. (%.3fs)' % (s, t2 - t1))
# Save results (image with detections)
if save_img:
#print('save_img')
if dataset.mode == 'images':
cv2.imwrite(save_path, im0)
else:
if vid_path != save_path: # new video
vid_path = save_path
if isinstance(vid_writer, cv2.VideoWriter):
vid_writer.release() # release previous video writer
fps = vid_cap.get(cv2.CAP_PROP_FPS)
w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
vid_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*opt.fourcc), fps, (w, h))
vid_writer.write(im0)
#####################################################################
# Print time (inference + NMS)
t2 = torch_utils.time_synchronized()
print('%sDone. (%.3fs)' % (s, t2 - t1))
if save_txt or save_img:
print('Results saved to %s' % os.getcwd() + os.sep + out)
if platform == 'darwin': # MacOS
os.system('open ' + save_path)
print('Done. (%.3fs)' % (time.time() - t0))
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--cfg', type=str, default='cfg/yolov3-spp.cfg', help='*.cfg path')
parser.add_argument('--names', type=str, default='data/coco.names', help='*.names path')
parser.add_argument('--weights', type=str, default='weights/yolov3-spp-ultralytics.pt', help='weights path')
parser.add_argument('--source', type=str, default='data/samples', help='source') # input file/folder, 0 for webcam
parser.add_argument('--output', type=str, default='output', help='output folder') # output folder
parser.add_argument('--img-size', type=int, default=512, help='inference size (pixels)')
parser.add_argument('--conf-thres', type=float, default=0.3, help='object confidence threshold')
parser.add_argument('--iou-thres', type=float, default=0.6, help='IOU threshold for NMS')
parser.add_argument('--fourcc', type=str, default='mp4v', help='output video codec (verify ffmpeg support)')
parser.add_argument('--half', action='store_true', help='half precision FP16 inference')
parser.add_argument('--device', default='', help='device id (i.e. 0 or 0,1) or cpu')
parser.add_argument('--view-img', action='store_true', help='display results')
parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
parser.add_argument('--classes', nargs='+', type=int, help='filter by class')
parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
parser.add_argument('--augment', action='store_true', help='augmented inference')
parser.add_argument('--lpcfg', type=str, default='cfg/yolov3-spp.cfg', help='*.cfg path')
parser.add_argument('--lpnames', type=str, default='data/coco.names', help='*.names path')
parser.add_argument('--lpweights', type=str, default='weights/yolov3-spp-ultralytics.pt', help='weights path')
opt = parser.parse_args()
opt.cfg = check_file(opt.cfg) # check file
opt.names = check_file(opt.names) # check file
opt.lpcfg = check_file(opt.lpcfg) # check file
opt.lpnames = check_file(opt.lpnames) # check file
print(opt)
with torch.no_grad():
detect()
| 42.624 | 152 | 0.513607 | 2,486 | 21,312 | 4.249397 | 0.176187 | 0.015903 | 0.030576 | 0.006816 | 0.361416 | 0.326013 | 0.301969 | 0.274612 | 0.23883 | 0.218951 | 0 | 0.027886 | 0.340419 | 21,312 | 499 | 153 | 42.709419 | 0.723625 | 0.135464 | 0 | 0.261682 | 0 | 0 | 0.063021 | 0.00372 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031153 | false | 0 | 0.018692 | 0.012461 | 0.090343 | 0.031153 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9800be2265795338942ce9d2f06754749b91c514 | 2,613 | py | Python | resources/lambda_function/execute_ddl/index.py | aws-samples/aws-cdk-lambda-import-export-redshift-ddl | 3b2c944caddfb0d37b08b09f791549a086e443ca | [
"MIT-0"
] | null | null | null | resources/lambda_function/execute_ddl/index.py | aws-samples/aws-cdk-lambda-import-export-redshift-ddl | 3b2c944caddfb0d37b08b09f791549a086e443ca | [
"MIT-0"
] | null | null | null | resources/lambda_function/execute_ddl/index.py | aws-samples/aws-cdk-lambda-import-export-redshift-ddl | 3b2c944caddfb0d37b08b09f791549a086e443ca | [
"MIT-0"
] | null | null | null | import logging
import os
import boto3
import psycopg2
from psycopg2.extensions import AsIs
from botocore.exceptions import ClientError
from urllib.parse import urlparse
logger = logging.getLogger()
logger.setLevel(logging.INFO)
secretsmanager = boto3.client("secretsmanager")
s3 = boto3.client("s3")
def handler(event, context):
host = event["connection"]["host"]
port = event["connection"]["port"]
db = event["connection"]["db"]
user = event["connection"]["user"]
password_secret_arn = event["connection"]["password_secret_arn"]
ddl_s3_uris = event["ddl_s3_uris"]
# Optional sanity checking of input can be done here
logger.info(f"Connecting to Redshift cluster {host}:{port}/{db} as user {user}")
conn = psycopg2.connect(
host=host,
port=port,
dbname=db,
user=user,
password=get_secret_value(password_secret_arn)
)
for ddl_s3_uri in ddl_s3_uris:
parsed_s3_uri = urlparse(ddl_s3_uri, allow_fragments=False)
s3_bucket = parsed_s3_uri.netloc
s3_key = parsed_s3_uri.path.lstrip('/')
execute_ddl(conn, s3_bucket, s3_key)
return {'message': 'Success'}
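# A minimal example event (shape inferred from the handler above; every value
# is a placeholder):
# {
#     "connection": {
#         "host": "example-cluster.abc123.us-east-1.redshift.amazonaws.com",
#         "port": 5439,
#         "db": "dev",
#         "user": "admin",
#         "password_secret_arn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:example"
#     },
#     "ddl_s3_uris": ["s3://example-bucket/ddl/sales_ddl.sql"]
# }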
def get_secret_value(secret_arn: str):
try:
logger.debug(f"Retrieving secret value from {secret_arn}")
get_secret_value_response = secretsmanager.get_secret_value(SecretId=secret_arn)
except ClientError as e:
logger.error(f"The requested secret could not be retrieved: {secret_arn}")
raise
else:
if "SecretString" in get_secret_value_response:
return get_secret_value_response["SecretString"]
else:
return get_secret_value_response["SecretBinary"]
def execute_ddl(conn, s3_bucket, s3_key):
logger.info(f"Executing DDL from s3://{s3_bucket}/{s3_key}")
s3_obj = s3.get_object(Bucket=s3_bucket, Key=s3_key)
ddl_query = s3_obj["Body"].read().decode("utf-8")
ddl_filename = os.path.basename(s3_key)
schema = ddl_filename.partition("_ddl.sql")[0] # File name format is <schema>_ddl.sql
create_schema(conn, schema)
with conn, conn.cursor() as curs:
logger.debug(f"Executing query: {ddl_query}")
curs.execute(ddl_query)
def create_schema(conn, schema):
logger.info(f"Creating schema if one doesn't already exist for {schema}")
with conn, conn.cursor() as curs:
with open("create_schema.sql", mode="r", encoding="utf-8") as sql_file:
query = curs.mogrify(sql_file.read(), { "schemaname": AsIs(schema) })
logger.debug(f"Executing query: {query}")
curs.execute(query)
| 33.075949 | 89 | 0.685802 | 356 | 2,613 | 4.828652 | 0.351124 | 0.051193 | 0.05701 | 0.051193 | 0.129145 | 0.066318 | 0.066318 | 0 | 0 | 0 | 0 | 0.01619 | 0.196326 | 2,613 | 78 | 90 | 33.5 | 0.802381 | 0.033295 | 0 | 0.065574 | 0 | 0 | 0.208482 | 0.009909 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065574 | false | 0.032787 | 0.114754 | 0 | 0.229508 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9801a94c959c21f860f85c41508e66fcfd630de5 | 7,607 | py | Python | egen310.py | PlacidFireball/310-rover-code | 3a57fef2d92f8a3e2da6d3936db4f3da9eac45ae | [
"MIT"
] | null | null | null | egen310.py | PlacidFireball/310-rover-code | 3a57fef2d92f8a3e2da6d3936db4f3da9eac45ae | [
"MIT"
] | null | null | null | egen310.py | PlacidFireball/310-rover-code | 3a57fef2d92f8a3e2da6d3936db4f3da9eac45ae | [
"MIT"
] | null | null | null | # EGEN 310R D.3 Runner Script
# Written by Jared Weiss at Montana State University
# RESOURCES:
### [Pygame Docs:] https://www.pygame.org/docs/
# - I used the joystick docs (https://www.pygame.org/docs/ref/joystick.html) the most
### [Servo control article:] https://www.learnrobotics.org/blog/raspberry-pi-servo-motor/
# - article displaying the use of RPi.GPIO to control servo motors on the pi,
# - as of right now I am no longer using RPi.GPIO
### [YouTube Livestream article:] https://www.makeuseof.com/tag/live-stream-youtube-raspberry-pi/
# - although I didn't actually get a YouTube stream up and running, I did find this helpful for future use
# - on the http stream. (see run)
### [pigpio docs:] https://docs.juliahub.com/PiGPIO/8aGxa/0.2.0/api/
### [minimize servo jitter:] https://ben.akrin.com/raspberry-pi-servo-jitter/
# - I used these two to figure out how to control servos with the pigpio library
### Some joystick control info for my use, you may find it helpful for reading this code
# JOYSTICK CONTROL INFO:
# Left Joystick: Axis 0, 1
# - 1 Up (-1) down (1)
# - 0 Left (-1) right (1)
# Right Joystick: Axis 2, 3
# - 3 Up (-1) down (1)
# - 2 Left (-1) right (1)
# Left Trigger: Axis 5
# - (-1 no press) (1 full press)
# Right Trigger: Axis 4
# A button -> button 0
# B button -> button 1
# X -> 3
# Y -> 4
# D-Pad:(Hat)Up Down Left Right [We never actually used this]
# (0, 1) (0, -1) (-1, 0) (1, 0)
### -------------------------------- INCLUDES ----------------------------------
import pygame # for getting controller input
import pigpio # for servo control
import os # setting drivers and getting basename of this script
import time # for pausing the script when centering the drivetrain
### -------------------------------- INITIALIZATION ----------------------------------
### Writes `angle` to the `servo`
### I am assuming that these are 50hz, and that they
### have 180 degrees of motion
def servoSetAngle(servo, angle):
scaled = 500 + angle/180 * 2000 # want to write values between 500 and 2500
pwm.set_servo_pulsewidth(servo, scaled) # make the library call to write the angle
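    # worked examples of the mapping above:
    #   angle=0 -> 500us, angle=90 -> 1500us, angle=180 -> 2500us
    # (pigpio's set_servo_pulsewidth expects the pulse width in microseconds)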
os.environ["SDL_VIDEODRIVER"] = "dummy" # work around the pi not having a video driver in headless mode
basename = os.path.basename(__file__) # retrieve the basename of this script
# Dictionary of our servo names to pins on the pi
servos = {"drivetrain" : 21, "steering" : 17,
"arm_up_down" : 18, "arm_rotate" : 22,
"articulation" : 27, "bucket" : 23,
"hopper" : 25}
# pigpio initialization
pwm = pigpio.pi() # start up the daemon
for name, pin in servos.items():
print(f"{basename}: Initializing: {name} on pin {pin}")
pwm.set_mode(pin, pigpio.OUTPUT) # set each pin to output
pwm.set_PWM_frequency(pin, 50) # with 50 hz signal
servoSetAngle(servos["drivetrain"], 90) # send the center signal to the drive train
# we're treating 90 as the center -> 180/2 = 90
time.sleep(2) # settle for 2 seconds so the motor can initialize
pygame.display.init() # initialize a dummy display
pygame.init() # initialize the pygame module for controller input
clock = pygame.time.Clock() # make a clock so we can get accurate timing should we need it
done = False
debug_joystick = True
# Initialize the joysticks.
pygame.joystick.init()
steering_angle = 90
arm_angle = 125
arm_elevation_angle = 90
DEADZONE = 0.4
### -------------------------------- MAIN PROGRAM LOOP ----------------------------------
while not done:
# Get rid of events that pygame generates because we
# do not care about them at all
pygame.event.clear()
# Get count of joysticks and stick all the joystick objects
# into a cute little list
joysticks = [pygame.joystick.Joystick(x) for x in range(pygame.joystick.get_count())]
if debug_joystick and joysticks:
print(f"{basename}: Controller initialized. Running...")
debug_joystick = False
elif not joysticks:
print(f"{basename}: No controller detected. Waiting...")
time.sleep(5)
# For each joystick:
for joystick in joysticks:
#print(joystick.get_name()) # log debug info to the console
axes = joystick.get_numaxes()
for i in range(axes): # do different stuff with each one
axis = joystick.get_axis(i)
if (i == 0):
if (axis < -1*DEADZONE):
steering_angle -= 2
elif (axis > DEADZONE):
steering_angle += 2
if (steering_angle > 135):
steering_angle = 135
elif (steering_angle < 45):
steering_angle = 45
servoSetAngle(servos["steering"] , steering_angle) # steering servo can go +/- 45 degrees from center
if (i == 1):
if (axis > 0.2 or axis < -0.2):
servoSetAngle(servos["drivetrain"] , 90 + (axis)*10) # we write a small range so that we have more control over speed
# any higher than this and we can't control it very well
if (i == 2):
# these if statements are so that we can let go of the joystick and the arm will stay in place
                # the 0.4 DEADZONE is just there to give a little dead spot, which makes it easier to control
if (axis < -1*DEADZONE):
arm_angle -= 2
if (axis > DEADZONE):
arm_angle += 2
if arm_angle < 0:
arm_angle = 0
if arm_angle > 180:
arm_angle = 180
#print("Arm rotation: "+str(arm_angle))
servoSetAngle(servos["arm_rotate"] , arm_angle) # arm rotation may need to be toned up as we stabilize it
if (i == 3):
if (axis > DEADZONE):
arm_elevation_angle -= 2
if (axis < -1*DEADZONE):
arm_elevation_angle += 2
if arm_elevation_angle < 42: # minimum angle we can write to the arm elevation servo
arm_elevation_angle = 42
if arm_elevation_angle > 140:
arm_elevation_angle = 140 # maximum angle we can write to the arm elevation servo
#print("Arm Elevation: "+str(arm_elevation_angle))
servoSetAngle(servos["arm_up_down"] , arm_elevation_angle) # write the angle to the servo
if (i == 4):
servoSetAngle(servos["articulation"], (axis+1)*60) # articulation (second servo on arm) with diminished range
if (i == 5):
servoSetAngle(servos["bucket"] , -1*(axis)*45+90) # bucket servo
#print("Axis {} value: {:>6.3f}".format(i, axis))
buttons = joystick.get_numbuttons()
# Exit on the B button
for i in range(buttons):
button = joystick.get_button(i)
if (i == 0):
if (button): # if the button is pressed we "open" the hopper
servoSetAngle(servos["hopper"], 45)
else:
servoSetAngle(servos["hopper"], 90)
if (i == 1):
if (button):
done = True
clock.tick(20) # don't think I need this but I'mma leave this here
### -------------------------------- CLEANUP ----------------------------------
pygame.quit()
for name, pin in servos.items():
print(f"{basename}: Cleaning up {name} on pin {pin}")
pwm.set_PWM_dutycycle(pin, 0)
pwm.set_PWM_frequency(pin, 0)
# End egen310.py
| 43.468571 | 139 | 0.588668 | 1,024 | 7,607 | 4.310547 | 0.325195 | 0.032623 | 0.034662 | 0.010195 | 0.089715 | 0.0657 | 0.03353 | 0.03353 | 0.03353 | 0 | 0 | 0.032841 | 0.279479 | 7,607 | 174 | 140 | 43.718391 | 0.772487 | 0.486263 | 0 | 0.134021 | 0 | 0 | 0.090095 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010309 | false | 0 | 0.041237 | 0 | 0.051546 | 0.041237 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9804198a7cfe5d4e970e3da94bea10e4d21e32f9 | 1,465 | py | Python | templates/new_language/lex_attrs.py | aniruddha-adhikary/spacy-dev-resources | 30f3b094bcac1670c010e46d11441796f7d0d683 | [
"MIT"
] | 132 | 2016-12-19T21:14:49.000Z | 2022-02-09T05:14:48.000Z | templates/new_language/lex_attrs.py | aniruddha-adhikary/spacy-dev-resources | 30f3b094bcac1670c010e46d11441796f7d0d683 | [
"MIT"
] | 32 | 2016-12-29T00:35:17.000Z | 2019-03-12T11:08:42.000Z | templates/new_language/lex_attrs.py | aniruddha-adhikary/spacy-dev-resources | 30f3b094bcac1670c010e46d11441796f7d0d683 | [
"MIT"
] | 68 | 2016-12-19T10:05:37.000Z | 2021-07-02T20:20:45.000Z | # coding: utf8
from __future__ import unicode_literals
# import the symbols for the attrs you want to overwrite
from ...attrs import LIKE_NUM
# Overwriting functions for lexical attributes
# Documentation: https://localhost:1234/docs/usage/adding-languages#lex-attrs
# Most of these functions, like is_lower or like_url, should be language-
# independent. Others, like like_num (which includes both digits and number
# words), require customisation.
# Example: check if token resembles a number
_num_words = ['zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven',
'eight', 'nine', 'ten', 'eleven', 'twelve', 'thirteen', 'fourteen',
'fifteen', 'sixteen', 'seventeen', 'eighteen', 'nineteen', 'twenty',
'thirty', 'forty', 'fifty', 'sixty', 'seventy', 'eighty', 'ninety',
'hundred', 'thousand', 'million', 'billion', 'trillion', 'quadrillion',
'gajillion', 'bazillion']
def like_num(text):
text = text.replace(',', '').replace('.', '')
if text.isdigit():
return True
if text.count('/') == 1:
num, denom = text.split('/')
if num.isdigit() and denom.isdigit():
return True
if text in _num_words:
return True
return False
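# Expected behaviour, traced through the function above (a sanity check, not
# part of the original template):
#   like_num('11,000')  -> True   (digits once separators are stripped)
#   like_num('3/4')     -> True   (simple fraction)
#   like_num('seventy') -> True   (number word)
#   like_num('banana')  -> False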
# Create dictionary of functions to overwrite. The default lex_attr_getters are
# updated with this one, so only the functions defined here are overwritten.
LEX_ATTRS = {
LIKE_NUM: like_num
}
| 33.295455 | 85 | 0.646416 | 177 | 1,465 | 5.242938 | 0.672316 | 0.037716 | 0.036638 | 0.040948 | 0.049569 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005222 | 0.2157 | 1,465 | 43 | 86 | 34.069767 | 0.802437 | 0.382253 | 0 | 0.136364 | 0 | 0 | 0.25308 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.090909 | 0 | 0.318182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
98041abe73a09d744f8e1be028ecf96083d6176f | 6,277 | py | Python | pirates/npc/Boss.py | Willy5s/Pirates-Online-Rewritten | 7434cf98d9b7c837d57c181e5dabd02ddf98acb7 | [
"BSD-3-Clause"
] | 81 | 2018-04-08T18:14:24.000Z | 2022-01-11T07:22:15.000Z | pirates/npc/Boss.py | Willy5s/Pirates-Online-Rewritten | 7434cf98d9b7c837d57c181e5dabd02ddf98acb7 | [
"BSD-3-Clause"
] | 4 | 2018-09-13T20:41:22.000Z | 2022-01-08T06:57:00.000Z | pirates/npc/Boss.py | Willy5s/Pirates-Online-Rewritten | 7434cf98d9b7c837d57c181e5dabd02ddf98acb7 | [
"BSD-3-Clause"
] | 26 | 2018-05-26T12:49:27.000Z | 2021-09-11T09:11:59.000Z | from pandac.PandaModules import *
from direct.interval.IntervalGlobal import *
from pirates.battle import EnemyGlobals
from pirates.npc.BossBase import BossBase
from pirates.pirate import AvatarTypes
from pirates.effects.BossEffect import BossEffect
from pirates.effects.BossAura import BossAura
from direct.showbase.DirectObject import DirectObject
class Boss(BossBase):
def __init__(self, cr):
BossBase.__init__(self, cr)
self.effectIval = None
self.auraEffect = None
self.bossEffect = None
self.geometryNode = None
self.instanceNode = None
self.effectsNode = None
return
def setupBoss(self, isUndead=1, override=False):
if not override:
if self.instanceNode or base.options.getCharacterDetailSetting() == 0:
return
root = self
if hasattr(self, 'creature'):
root = self.creature
if root.hasLOD():
geom = root.getLOD('500')
if not geom:
geom = root.getLOD('low')
geom = geom.getChild(0)
while not geom.find('**/weapon*').isEmpty():
geom = geom.getChild(0)
else:
geom = root.getGeomNode().find('**/*actorGeom*')
parent = root.getGeomNode()
self.geometryNode = parent.attachNewNode('GeometryNode')
self.instanceNode = parent.attachNewNode('InstanceNode')
self.effectsNode = parent.attachNewNode('EffectsNode')
parent.getChild(0).reparentTo(self.geometryNode)
geom.instanceTo(self.instanceNode)
if base.useStencils:
mask = 255
ref = isUndead * 2 + 2
stencil_A = StencilAttrib.make(1, StencilAttrib.SCFAlways, StencilAttrib.SOKeep, StencilAttrib.SOKeep, StencilAttrib.SOReplace, 6, mask, mask)
stencil_B = StencilAttrib.make(1, StencilAttrib.SCFGreaterThan, StencilAttrib.SOKeep, StencilAttrib.SOKeep, StencilAttrib.SOReplace, ref, mask, mask)
stencil_C = StencilAttrib.make(1, StencilAttrib.SCFEqual, StencilAttrib.SOKeep, StencilAttrib.SOKeep, StencilAttrib.SOKeep, ref, mask, mask)
self.geometryNode.setAttrib(stencil_A)
self.instanceNode.setAttrib(stencil_B)
self.effectsNode.setAttrib(stencil_C)
else:
self.instanceNode.hide()
self.effectsNode.hide()
self.instanceNode.setAttrib(ColorBlendAttrib.make(ColorBlendAttrib.MAdd, ColorBlendAttrib.OIncomingAlpha, ColorBlendAttrib.OOne))
self.instanceNode.setTransparency(1, 1)
self.instanceNode.setDepthWrite(0)
self.instanceNode.setTextureOff(10000)
ts = TextureStage('ts')
ts.setCombineRgb(ts.CMReplace, ts.CSConstant, ts.COSrcColor)
ts.setCombineAlpha(ts.CMReplace, ts.CSConstant, ts.COSrcAlpha)
ts.setColor(Vec4(1, 1, 1, 0.01))
image = PNMImage(2, 2)
t = Texture()
t.load(image)
self.instanceNode.setTexture(ts, t)
self.instanceNode.getState().getAttrib(TextureAttrib.getClassType()).addOnStage(ts, t)
def _getBossModelScale(self):
return self.bossData['ModelScale']
def getEnemyScale(self):
return EnemyGlobals.getEnemyScale(self, self._getBossModelScale())
def skipBossEffect(self):
return False
def addBossEffect(self, avType):
if self.skipBossEffect():
return
isUndead = (avType != AvatarTypes.Navy)
if not self.instanceNode:
self.setupBoss(isUndead)
color = Vec4(0.25, 0.8, 0.0, 1.0)
if not isUndead:
color = Vec4(1.0, 1.0, 0.0, 1.0)
startScale = Vec3(1.025, 1.025, 1.01)
endScale = Vec3(1.15, 1.1, 1.01)
if base.options.getCharacterDetailSetting() > 0 or self.getName() == 'Jolly Roger':
self.effectIval = Sequence(LerpScaleInterval(self.instanceNode, 0.5, endScale, startScale=startScale), LerpScaleInterval(self.instanceNode, 0.5, startScale, startScale=endScale))
self.effectIval.loop()
self.bossEffect = BossEffect.getEffect(unlimited=True)
if self.bossEffect:
self.bossEffect.reparentTo(self.effectsNode)
self.bossEffect.setEffectScale(3.0)
self.bossEffect.setEffectColor(color)
self.bossEffect.setPos(0, 0, 10.0)
self.bossEffect.startLoop()
if hasattr(self, 'creature'):
headNode = self.creature.headNode
else:
headNode = self.headNode
self.auraEffect = BossAura.getEffect()
if self.auraEffect and base.useStencils:
scale = self.getEnemyScale()
mult = EffectModifiers[avType][0]
offset = EffectModifiers[avType][1]
if not headNode.isEmpty():
self.auraEffect.reparentTo(headNode)
stencil = StencilAttrib.make(1, StencilAttrib.SCFAlways, StencilAttrib.SOKeep, StencilAttrib.SOKeep, StencilAttrib.SOKeep, 4, 255, 255)
self.auraEffect.setAttrib(stencil, 1)
self.auraEffect.setScale(scale * mult)
self.auraEffect.setEffectColor(color)
self.auraEffect.setHpr(0, 0, -90)
self.auraEffect.setPos(offset)
self.auraEffect.startLoop()
def removeBossEffect(self):
if self.effectIval:
self.effectIval.pause()
self.effectIval = None
if self.bossEffect:
self.bossEffect.stopLoop()
self.bossEffect = None
if self.auraEffect:
self.auraEffect.stopLoop()
self.auraEffect = None
return
def getShortName(self):
return self._getBossName()
EffectModifiers = {
    AvatarTypes.Undead: [1.0, Point3(-1.3, 0, 0)],
    AvatarTypes.Navy: [1.0, Point3(-1.3, 0, 0)],
    AvatarTypes.Alligator: [0.75, Point3(0.75, 0, 0)],
    AvatarTypes.Bat: [0.6, Point3(0, 0, 0)],
    AvatarTypes.Crab: [1.0, Point3(0, 0, 0)],
    AvatarTypes.FlyTrap: [2.5, Point3(2.5, 0, 0)],
    AvatarTypes.Scorpion: [0.3, Point3(0, 0, 0)],
    AvatarTypes.Stump: [1.25, Point3(0, 0, 0)],
    AvatarTypes.Wasp: [0.25, Point3(-0.1, 0, 0)],
    AvatarTypes.Townfolk: [1.0, Point3(-1.3, 0, 0)]} | 46.154412 | 467 | 0.628644 | 667 | 6,277 | 5.890555 | 0.256372 | 0.009672 | 0.029779 | 0.05803 | 0.160855 | 0.094681 | 0.061084 | 0.05803 | 0.046322 | 0.046322 | 0 | 0.03683 | 0.260315 | 6,277 | 136 | 467 | 46.154412 | 0.80939 | 0 | 0 | 0.153226 | 0 | 0 | 0.016566 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0 | 0.064516 | 0.032258 | 0.201613 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
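Boss.addBossEffect sizes and places the aura with a per-avatar-type lookup in the EffectModifiers table ([scale multiplier, position offset]). A hedged, engine-free sketch of that table-driven pattern (plain tuples stand in for Panda3D's Point3; the names here are illustrative, not engine APIs):

# Table-driven effect parameters, mirroring the EffectModifiers lookup above.
EFFECT_MODIFIERS = {
    'Undead': (1.0, (-1.3, 0, 0)),
    'Alligator': (0.75, (0.75, 0, 0)),
    'Wasp': (0.25, (-0.1, 0, 0)),
}

def aura_transform(av_type, enemy_scale):
    # Unknown types fall back to neutral values (an assumption; the original
    # table is expected to cover every AvatarTypes entry it is called with).
    mult, offset = EFFECT_MODIFIERS.get(av_type, (1.0, (0, 0, 0)))
    return enemy_scale * mult, offset

print(aura_transform('Wasp', 2.0))  # (0.5, (-0.1, 0, 0))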
98061bec4ab8c362e8993720fb5a01fca50bd699 | 2,905 | py | Python | wanip_notifier.py | silvaBrian987/wanip_notifier | 8592e4ebf3c5b9cf4622f87223d843cad4e5845f | [
"Apache-2.0"
] | null | null | null | wanip_notifier.py | silvaBrian987/wanip_notifier | 8592e4ebf3c5b9cf4622f87223d843cad4e5845f | [
"Apache-2.0"
] | null | null | null | wanip_notifier.py | silvaBrian987/wanip_notifier | 8592e4ebf3c5b9cf4622f87223d843cad4e5845f | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
import os
import subprocess
import logging
import smtplib
import socket
import ssl
import email
class App:
__logger: logging.Logger = logging.getLogger("ip-notifier")
__wanip_file: str = os.environ.get(
"WANIP_FILE", '/tmp/ip-notifier/current.ip')
__smtp_server: str = os.environ.get("WANIP_SMTP_SERVER")
__smtp_port: int = int(os.environ.get("WANIP_SMTP_PORT"))
__smtp_user: str = os.environ.get("WANIP_SMTP_USER")
__smtp_password: str = os.environ.get("WANIP_SMTP_PASSWORD")
__send_to: str = os.environ.get("WANIP_SEND_TO")
__send_from: str = os.environ.get("WANIP_SEND_FROM")
def run(self) -> None:
current_ip4 = self.get_current_wanip4()
self.__logger.debug("current_ip4 = {}".format(current_ip4))
new_ip4 = self.get_wanip4_from_dns()
self.__logger.debug("new_ip4 = {}".format(new_ip4))
if(current_ip4 is None or current_ip4 != new_ip4):
            self.__logger.info(
                f"IP changed! It was {current_ip4} and is now {new_ip4}")
self.send_email(new_ip4)
self.set_wanip4(new_ip4)
def get_wanip4_from_dns(self) -> str:
cmd = ["dig", "@resolver1.opendns.com",
"ANY", "myip.opendns.com", "+short", "-4"]
process = subprocess.run(cmd,
check=True, stdout=subprocess.PIPE)
output = process.stdout.decode('UTF-8').strip('\n')
if 'failed' in output:
raise ConnectionError(output)
return output
def get_current_wanip4(self) -> str:
if not os.path.exists(self.__wanip_file):
return None
with open(self.__wanip_file) as f:
return f.readline()
def set_wanip4(self, ip: str) -> None:
        dir_path = os.path.dirname(self.__wanip_file)
        if not os.path.exists(dir_path):
            os.makedirs(dir_path)
with open(self.__wanip_file, 'w') as f:
f.write(ip)
def send_email(self, ip: str):
self.__logger.debug("Enviando email...")
server = socket.gethostname()
message = f"Esta es la nueva ip del servidor {server}:\n{ip}"
msg = email.message.EmailMessage()
msg.set_content(message)
msg['Subject'] = f"wanip_notifier - Nueva ip para {server}"
msg['From'] = self.__send_from
msg['To'] = self.__send_to
context = ssl.create_default_context()
with smtplib.SMTP_SSL(self.__smtp_server, self.__smtp_port, context=context) as smtp_server:
smtp_server.login(self.__smtp_user, self.__smtp_password)
smtp_server.send_message(msg)
self.__logger.debug("Email enviado!")
if __name__ == "__main__":
logging.basicConfig(level=os.environ.get("LOGLEVEL", "INFO"), format=os.environ.get("WANIP_LOG_FORMAT", "%(asctime)s - [%(levelname)s] - %(message)s"))
App().run()
| 37.727273 | 155 | 0.62926 | 388 | 2,905 | 4.389175 | 0.311856 | 0.047563 | 0.063418 | 0.079859 | 0.186142 | 0.070464 | 0 | 0 | 0 | 0 | 0 | 0.010379 | 0.237177 | 2,905 | 76 | 156 | 38.223684 | 0.758123 | 0.006885 | 0 | 0 | 0 | 0 | 0.175104 | 0.01699 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078125 | false | 0.03125 | 0.109375 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
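App.run only sends mail when the cached WAN IP differs from the freshly resolved one. A minimal sketch of that cache-compare-update loop, with the dig lookup stubbed out (resolve_ip is a placeholder, not the real DNS call):

import os

CACHE = '/tmp/ip-notifier/current.ip'

def resolve_ip():
    # Stand-in for `dig @resolver1.opendns.com ANY myip.opendns.com +short -4`.
    return '203.0.113.7'

def check_and_update(cache_path=CACHE):
    old = None
    if os.path.exists(cache_path):
        with open(cache_path) as fh:
            old = fh.readline()
    new = resolve_ip()
    if old != new:
        os.makedirs(os.path.dirname(cache_path), exist_ok=True)
        with open(cache_path, 'w') as fh:
            fh.write(new)
        return f'IP changed: {old} -> {new}'  # where the email would be sent
    return 'no change'

print(check_and_update())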
9809ab5317cd085c73700d8f77b21ca36ba30fc5 | 1,292 | py | Python | utils/config.py | ogrenenmakine/VCL-PL-Semi-Supervised-Learning-from-Noisy-Web-Data-with-Variational-Contrastive-Learning | baef25837ce7e073d03f69a095d1992aa18dd2d5 | [
"MIT"
] | null | null | null | utils/config.py | ogrenenmakine/VCL-PL-Semi-Supervised-Learning-from-Noisy-Web-Data-with-Variational-Contrastive-Learning | baef25837ce7e073d03f69a095d1992aa18dd2d5 | [
"MIT"
] | null | null | null | utils/config.py | ogrenenmakine/VCL-PL-Semi-Supervised-Learning-from-Noisy-Web-Data-with-Variational-Contrastive-Learning | baef25837ce7e073d03f69a095d1992aa18dd2d5 | [
"MIT"
] | null | null | null | """
Authors: Wouter Van Gansbeke, Simon Vandenhende
Licensed under the CC BY-NC 4.0 license (https://creativecommons.org/licenses/by-nc/4.0/)
"""
import os
import yaml
from easydict import EasyDict
from utils.utils import mkdir_if_missing
def create_config(config_file_env, config_file_exp, batch_size, epochs):
# Config for environment path
with open(config_file_env, 'r') as stream:
root_dir = yaml.safe_load(stream)['root_dir']
with open(config_file_exp, 'r') as stream:
config = yaml.safe_load(stream)
cfg = EasyDict()
# Copy
for k, v in config.items():
cfg[k] = v
# Set paths for pretext task (These directories are needed in every stage)
base_dir = os.path.join(root_dir, cfg['train_db_name'])
pretext_dir = os.path.join(base_dir, 'SimCLR-B' + str(batch_size))
mkdir_if_missing(base_dir)
mkdir_if_missing(pretext_dir)
cfg['pretext_dir'] = pretext_dir
cfg['pretext_checkpoint'] = os.path.join(pretext_dir, 'checkpoint.pth.tar')
cfg['pretext_model'] = os.path.join(pretext_dir, 'model.pth.tar')
cfg['topk_neighbors_train_path'] = os.path.join(pretext_dir, 'topk-train-neighbors.npy')
cfg['topk_neighbors_val_path'] = os.path.join(pretext_dir, 'topk-val-neighbors.npy')
return cfg
| 36.914286 | 92 | 0.706656 | 197 | 1,292 | 4.416244 | 0.416244 | 0.091954 | 0.068966 | 0.078161 | 0.110345 | 0.064368 | 0.064368 | 0 | 0 | 0 | 0 | 0.003735 | 0.171053 | 1,292 | 34 | 93 | 38 | 0.80859 | 0.188854 | 0 | 0 | 0 | 0 | 0.190751 | 0.090559 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.181818 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
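create_config merges the environment YAML's root_dir with an experiment YAML into an EasyDict and derives the SimCLR pretext paths from train_db_name and the batch size. A hedged usage sketch (the file names and YAML keys below are illustrative assumptions, and it presumes the repo's utils package is importable):

# Hypothetical env.yml:        root_dir: /data/experiments
# Hypothetical exp_simclr.yml: train_db_name: cifar-10  (plus other keys)
from utils.config import create_config

cfg = create_config('env.yml', 'exp_simclr.yml', batch_size=256, epochs=500)
print(cfg['pretext_dir'])         # /data/experiments/cifar-10/SimCLR-B256
print(cfg['pretext_checkpoint'])  # .../SimCLR-B256/checkpoint.pth.tar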
980a318d114792589559712f7fbea57028593d15 | 12,319 | py | Python | Scripts/plot_QBOExperiments_E-W_var_mo.py | zmlabe/ThicknessSensitivity | 6defdd897a61d7d1a02f34a9f4ec92b2b17b3075 | [
"MIT"
] | 1 | 2017-10-22T02:22:14.000Z | 2017-10-22T02:22:14.000Z | Scripts/plot_QBOExperiments_E-W_var_mo.py | zmlabe/ThicknessSensitivity | 6defdd897a61d7d1a02f34a9f4ec92b2b17b3075 | [
"MIT"
] | null | null | null | Scripts/plot_QBOExperiments_E-W_var_mo.py | zmlabe/ThicknessSensitivity | 6defdd897a61d7d1a02f34a9f4ec92b2b17b3075 | [
"MIT"
] | 4 | 2018-04-05T17:55:36.000Z | 2022-03-31T07:05:01.000Z | """
Plot comparisons between SIT and SIC modeling experiments using
WACCM4. Subplot includes FIT, HIT, FICT. Composites are
organized by QBO-E - QBO-W
Notes
-----
Author : Zachary Labe
Date : 31 January 2018
"""
### Import modules
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap, addcyclic, shiftgrid
import nclcmaps as ncm
import datetime
import read_MonthlyOutput as MO
import calc_Utilities as UT
import cmocean
### Define directories
directorydata = '/surtsey/zlabe/simu/'
directoryfigure = '/home/zlabe/Desktop/'
#directoryfigure = '/home/zlabe/Documents/Research/SITperturb/Figures/'
### Define time
now = datetime.datetime.now()
currentmn = str(now.month)
currentdy = str(now.day)
currentyr = str(now.year)
currenttime = currentmn + '_' + currentdy + '_' + currentyr
titletime = currentmn + '/' + currentdy + '/' + currentyr
print('\n' '----Plotting QBO comparisons - %s----' % titletime)
### Allot time series
year1 = 1900
year2 = 2000
years = np.arange(year1,year2+1,1)
### Call arguments
varnames = ['Z500','Z30','SLP','T2M','U10','U300','SWE','THICK','P','EGR']
runnames = [r'HIT',r'FIT',r'FICT']
experiments = [r'\textbf{FIT--HIT}',r'\textbf{FICT--HIT}']
qbophase = ['pos','non','neg']
period = 'DJF'
for v in range(len(varnames)):
### Call function for surface temperature data from reach run
lat,lon,time,lev,tashit = MO.readExperi(directorydata,
'%s' % varnames[v],'HIT','surface')
lat,lon,time,lev,tasfit = MO.readExperi(directorydata,
'%s' % varnames[v],'FIT','surface')
lat,lon,time,lev,tasfict = MO.readExperi(directorydata,
'%s' % varnames[v],'FICT','surface')
### Create 2d array of latitude and longitude
lon2,lat2 = np.meshgrid(lon,lat)
### Read in QBO phases
filenamefitp = directorydata + 'FIT/monthly/QBO_%s_FIT.txt' % qbophase[0]
filenamefitno = directorydata + 'FIT/monthly/QBO_%s_FIT.txt' % qbophase[1]
filenamefitn = directorydata + 'FIT/monthly/QBO_%s_FIT.txt' % qbophase[2]
pos_fit = np.genfromtxt(filenamefitp,unpack=True,usecols=[0],dtype='int')
non_fit = np.genfromtxt(filenamefitno,unpack=True,usecols=[0],dtype='int')
neg_fit = np.genfromtxt(filenamefitn,unpack=True,usecols=[0],dtype='int')
filenamehitp = directorydata + 'HIT/monthly/QBO_%s_HIT.txt' % qbophase[0]
filenamehitno = directorydata + 'HIT/monthly/QBO_%s_HIT.txt' % qbophase[1]
filenamehitn = directorydata + 'HIT/monthly/QBO_%s_HIT.txt' % qbophase[2]
pos_hit = np.genfromtxt(filenamehitp,unpack=True,usecols=[0],dtype='int')
non_hit = np.genfromtxt(filenamehitno,unpack=True,usecols=[0],dtype='int')
neg_hit = np.genfromtxt(filenamehitn,unpack=True,usecols=[0],dtype='int')
filenamefictp = directorydata + 'FICT/monthly/QBO_%s_FICT.txt' % qbophase[0]
filenamefictno = directorydata + 'FICT/monthly/QBO_%s_FICT.txt' % qbophase[1]
filenamefictn = directorydata + 'FICT/monthly/QBO_%s_FICT.txt' % qbophase[2]
pos_fict = np.genfromtxt(filenamefictp,unpack=True,usecols=[0],dtype='int')
non_fict = np.genfromtxt(filenamefictno,unpack=True,usecols=[0],dtype='int')
neg_fict = np.genfromtxt(filenamefictn,unpack=True,usecols=[0],dtype='int')
    ### Concatenate runs
runs = [tashit,tasfit,tasfict]
### Separate per periods (ON,DJ,FM)
if period == 'ON':
tas_mo = np.empty((3,tashit.shape[0],tashit.shape[2],tashit.shape[3]))
for i in range(len(runs)):
tas_mo[i] = np.nanmean(runs[i][:,9:11,:,:],axis=1)
elif period == 'DJ':
tas_mo = np.empty((3,tashit.shape[0]-1,tashit.shape[2],tashit.shape[3]))
for i in range(len(runs)):
tas_mo[i],tas_mo[i] = UT.calcDecJan(runs[i],runs[i],lat,
lon,'surface',1)
elif period == 'FM':
tas_mo= np.empty((3,tashit.shape[0],tashit.shape[2],tashit.shape[3]))
for i in range(len(runs)):
tas_mo[i] = np.nanmean(runs[i][:,1:3,:,:],axis=1)
elif period == 'DJF':
tas_mo= np.empty((3,tashit.shape[0]-1,tashit.shape[2],tashit.shape[3]))
for i in range(len(runs)):
tas_mo[i],tas_mo[i] = UT.calcDecJanFeb(runs[i],runs[i],lat,
lon,'surface',1)
elif period == 'M':
tas_mo= np.empty((3,tashit.shape[0],tashit.shape[2],tashit.shape[3]))
for i in range(len(runs)):
tas_mo[i] = runs[i][:,2,:,:]
    else:
        raise ValueError('Wrong period selected! (ON,DJ,FM,DJF,M)')
### Composite by QBO phase
tas_mofitpos = tas_mo[1][pos_fit,:,:]
tas_mohitpos = tas_mo[0][pos_hit,:,:]
tas_mofictpos = tas_mo[2][pos_fict,:,:]
tas_mofitnon = tas_mo[1][non_fit,:,:]
tas_mohitnon = tas_mo[0][non_hit,:,:]
tas_mofictnon = tas_mo[2][non_fict,:,:]
tas_mofitneg = tas_mo[1][neg_fit,:,:]
tas_mohitneg = tas_mo[0][neg_hit,:,:]
tas_mofictneg = tas_mo[2][neg_fict,:,:]
### Compute climatology
climofitpos = np.nanmean(tas_mofitpos,axis=0)
climohitpos = np.nanmean(tas_mohitpos,axis=0)
climofictpos = np.nanmean(tas_mofictpos,axis=0)
climofitnon = np.nanmean(tas_mofitnon,axis=0)
climohitnon = np.nanmean(tas_mohitnon,axis=0)
climofictnon = np.nanmean(tas_mofictnon,axis=0)
climofitneg = np.nanmean(tas_mofitneg,axis=0)
climohitneg = np.nanmean(tas_mohitneg,axis=0)
climofictneg = np.nanmean(tas_mofictneg,axis=0)
climo = [climohitpos,climohitnon,climohitneg,
climohitpos,climohitnon,climohitneg]
### Compute comparisons for months - taken ensemble average
fithit = np.nanmean((tas_mofitneg-tas_mohitneg) - (tas_mofitpos[:32]-tas_mohitpos[:32]),axis=0)
ficthit = np.nanmean((tas_mofictneg-tas_mohitneg) - (tas_mofictpos[:32]-tas_mohitpos[:32]),axis=0)
diffruns_mo = [fithit,ficthit]
    ### Calculate significance for the chosen period
stat_FITHIT,pvalue_FITHIT = UT.calc_indttest(tas_mofitneg-tas_mohitneg,tas_mofitpos[:32]-tas_mohitpos[:32])
stat_FICTHIT,pvalue_FICTHIT = UT.calc_indttest(tas_mofictneg-tas_mohitneg,tas_mofictpos[:32]-tas_mohitpos[:32])
pruns_mo = [pvalue_FITHIT,pvalue_FICTHIT]
###########################################################################
###########################################################################
###########################################################################
### Plot variable data for QBO composites
plt.rc('text',usetex=True)
plt.rc('font',**{'family':'sans-serif','sans-serif':['Avant Garde']})
### Set limits for contours and colorbars
if varnames[v] == 'T2M':
limit = np.arange(-10,10.1,0.5)
barlim = np.arange(-10,11,5)
elif varnames[v] == 'Z500':
limit = np.arange(-60,60.1,1)
barlim = np.arange(-60,61,30)
elif varnames[v] == 'Z30':
limit = np.arange(-100,100.1,5)
barlim = np.arange(-100,101,50)
elif varnames[v] == 'SLP':
limit = np.arange(-6,6.1,0.5)
barlim = np.arange(-6,7,3)
elif varnames[v] == 'U10' or varnames[v] == 'U300':
limit = np.arange(-10,10.1,1)
barlim = np.arange(-10,11,5)
elif varnames[v] == 'SWE':
limit = np.arange(-25,25.1,1)
barlim = np.arange(-25,26,25)
elif varnames[v] == 'P':
limit = np.arange(-2,2.1,0.05)
barlim = np.arange(-2,3,1)
elif varnames[v] == 'THICK':
limit = np.arange(-60,60.1,3)
barlim = np.arange(-60,61,30)
elif varnames[v] == 'EGR':
limit = np.arange(-0.2,0.21,0.02)
barlim = np.arange(-0.2,0.3,0.2)
fig = plt.figure()
for i in range(len(diffruns_mo)):
var = diffruns_mo[i]
pvar = pruns_mo[i]
ax1 = plt.subplot(1,2,i+1)
m = Basemap(projection='ortho',lon_0=0,lat_0=89,resolution='l',
area_thresh=10000.)
var, lons_cyclic = addcyclic(var, lon)
var, lons_cyclic = shiftgrid(180., var, lons_cyclic, start=False)
lon2d, lat2d = np.meshgrid(lons_cyclic, lat)
x, y = m(lon2d, lat2d)
pvar,lons_cyclic = addcyclic(pvar, lon)
pvar,lons_cyclic = shiftgrid(180.,pvar,lons_cyclic,start=False)
climoq,lons_cyclic = addcyclic(climo[i], lon)
climoq,lons_cyclic = shiftgrid(180.,climoq,lons_cyclic,start=False)
m.drawmapboundary(fill_color='white',color='dimgray',linewidth=0.7)
cs = m.contourf(x,y,var,limit,extend='both')
cs1 = m.contourf(x,y,pvar,colors='None',hatches=['....'],
linewidths=0.4)
if varnames[v] == 'Z30': # the interval is 250 m
cs2 = m.contour(x,y,climoq,np.arange(21900,23500,250),
colors='k',linewidths=1.5,zorder=10)
m.drawcoastlines(color='dimgray',linewidth=0.8)
if varnames[v] == 'T2M':
cmap = ncm.cmap('NCV_blu_red')
cs.set_cmap(cmap)
elif varnames[v] == 'Z500':
cmap = ncm.cmap('nrl_sirkes')
cs.set_cmap(cmap)
elif varnames[v] == 'Z30':
cmap = ncm.cmap('nrl_sirkes')
cs.set_cmap(cmap)
elif varnames[v] == 'SLP':
cmap = ncm.cmap('nrl_sirkes')
cs.set_cmap(cmap)
        elif varnames[v] == 'U10' or varnames[v] == 'U300':
            cmap = ncm.cmap('temp_diff_18lev')
            cs.set_cmap(cmap)
elif varnames[v] == 'SWE':
            cmap = cmocean.cm.balance
cs.set_cmap(cmap)
elif varnames[v] == 'P':
cmap = ncm.cmap('precip4_diff_19lev')
cs.set_cmap(cmap)
elif varnames[v] == 'THICK':
cmap = ncm.cmap('NCV_blu_red')
cs.set_cmap(cmap)
elif varnames[v] == 'EGR':
cmap = cmocean.cm.curl
cs.set_cmap(cmap)
### Add experiment text to subplot
ax1.annotate(r'%s' % experiments[i],xy=(0,0),xytext=(0.5,1.05),
textcoords='axes fraction',color='dimgrey',
fontsize=23,rotation=0,ha='center',va='center')
###########################################################################
cbar_ax = fig.add_axes([0.312,0.15,0.4,0.03])
cbar = fig.colorbar(cs,cax=cbar_ax,orientation='horizontal',
extend='max',extendfrac=0.07,drawedges=False)
if varnames[v] == 'T2M':
cbar.set_label(r'\textbf{$^\circ$C}',fontsize=11,color='dimgray')
elif varnames[v] == 'Z500':
cbar.set_label(r'\textbf{m}',fontsize=11,color='dimgray')
elif varnames[v] == 'Z30':
cbar.set_label(r'\textbf{m}',fontsize=11,color='dimgray')
elif varnames[v] == 'SLP':
cbar.set_label(r'\textbf{hPa}',fontsize=11,color='dimgray')
elif varnames[v] == 'U10' or varnames[v] == 'U300':
cbar.set_label(r'\textbf{m/s}',fontsize=11,color='dimgray')
elif varnames[v] == 'SWE':
cbar.set_label(r'\textbf{mm}',fontsize=11,color='dimgray')
elif varnames[v] == 'P':
cbar.set_label(r'\textbf{mm/day}',fontsize=11,color='dimgray')
elif varnames[v] == 'THICK':
cbar.set_label(r'\textbf{m}',fontsize=11,color='dimgray')
elif varnames[v] == 'EGR':
cbar.set_label(r'\textbf{1/day}',fontsize=11,color='dimgray')
cbar.set_ticks(barlim)
cbar.set_ticklabels(list(map(str,barlim)))
cbar.ax.tick_params(axis='x', size=.01)
cbar.outline.set_edgecolor('dimgrey')
plt.subplots_adjust(wspace=0.01)
plt.subplots_adjust(hspace=0.01)
plt.subplots_adjust(bottom=0.15)
plt.savefig(directoryfigure + '/QBO_%s/QBOExperiments_E-W_%s_%s.png' % (period,
period,
varnames[v]),
dpi=300)
print('Completed: Script done!')
| 42.626298 | 115 | 0.572693 | 1,592 | 12,319 | 4.326633 | 0.236809 | 0.045732 | 0.045296 | 0.018873 | 0.382404 | 0.35061 | 0.300377 | 0.240273 | 0.174216 | 0.155343 | 0 | 0.04106 | 0.246773 | 12,319 | 288 | 116 | 42.774306 | 0.701261 | 0.065996 | 0 | 0.279817 | 0 | 0 | 0.097712 | 0.024764 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.036697 | 0 | 0.036697 | 0.009174 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
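The script's core statistic is a difference of differences: the (experiment - control) response under QBO-E minus the same response under QBO-W, averaged over ensemble members after trimming to a common size. The pattern in plain numpy, with random stand-ins for the model fields (array shapes are illustrative):

import numpy as np

# (ensemble, lat, lon) composites per experiment and QBO phase.
rng = np.random.default_rng(0)
fit_neg = rng.normal(size=(32, 96, 144)); hit_neg = rng.normal(size=(32, 96, 144))
fit_pos = rng.normal(size=(40, 96, 144)); hit_pos = rng.normal(size=(40, 96, 144))

# Trim the larger phase to the common ensemble size (the script slices [:32]),
# then average the response difference across members, ignoring NaNs.
n = min(len(fit_neg), len(fit_pos))
fithit = np.nanmean((fit_neg - hit_neg) - (fit_pos[:n] - hit_pos[:n]), axis=0)
print(fithit.shape)  # (96, 144)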
980ce10f02888bd3e0fcc58abcb43fed45e724e9 | 24,465 | py | Python | stax/aws/cloudformation.py | acaire/stax | 63fc890be14f3272fb98eb65c631343bae3c6d12 | [
"MIT"
] | null | null | null | stax/aws/cloudformation.py | acaire/stax | 63fc890be14f3272fb98eb65c631343bae3c6d12 | [
"MIT"
] | null | null | null | stax/aws/cloudformation.py | acaire/stax | 63fc890be14f3272fb98eb65c631343bae3c6d12 | [
"MIT"
] | null | null | null | import collections
import datetime
import difflib
import hashlib
import itertools
import json
import os
import pathlib
import string
import sys
import time
import uuid
import boto3
import botocore
import click
import halo
import yaml
from .. import gitlib
from ..exceptions import StackNotFound
from .connection_manager import get_client
yaml.add_multi_constructor('!', lambda loader, suffix, node: None)
SUCCESS_STATES = [
'CREATE_COMPLETE',
'DELETE_COMPLETE',
'IMPORT_COMPLETE',
'UPDATE_COMPLETE',
]
FAILURE_STATES = [
'CREATE_FAILED',
'DELETE_FAILED',
'IMPORT_ROLLBACK_COMPLETE',
'IMPORT_ROLLBACK_FAILED',
'ROLLBACK_COMPLETE',
'ROLLBACK_FAILED',
'UPDATE_ROLLBACK_COMPLETE',
'UPDATE_ROLLBACK_FAILED',
]
DEFAULT_AWS_REGIONS = [
'ap-northeast-1',
'ap-northeast-2',
'ap-south-1',
'ap-southeast-1',
'ap-southeast-2',
'ca-central-1',
'eu-central-1',
'eu-north-1',
'eu-west-1',
'eu-west-2',
'eu-west-3',
'sa-east-1',
'us-east-1',
'us-east-2',
'us-west-1',
'us-west-2',
]
def get_diff(s1, s2, prefix):
before = prefix + 'before'
after = prefix + 'after'
if isinstance(s1, str):
s1 = s1.splitlines(keepends=True)
if isinstance(s2, str):
s2 = s2.splitlines(keepends=True)
return difflib.unified_diff(s1, s2, fromfile=before, tofile=after)
def print_diff(diff):
changes = 0
for line in diff:
if line.startswith('+'):
click.secho(line, fg='green', nl=False)
changes += 1
elif line.startswith('-'):
click.secho(line, fg='red', nl=False)
changes += 1
else:
click.echo(line, nl=False)
if changes:
click.echo('\n')
return changes
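# Illustrative usage (not part of the original module): diff two short
# parameter strings and print the result with colored +/- lines.
#   changed = print_diff(get_diff('a: 1\n', 'a: 2\n', prefix='params-'))
# Note that the unified-diff '---'/'+++' header lines start with -/+ too,
# so they are included in the returned change count.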
class Template:
def __init__(self, template_body=None, template_file=None):
self.body = template_body
self.file = template_file
self.extn = 'json'
if self.body and self.file:
raise ValueError('You must specify one of either body or file')
@property
def raw(self):
if not self.body:
with open(self.file) as fh:
self.body = fh.read()
return self.body
@property
def to_dict(self):
if isinstance(self.raw, str):
try:
return json.loads(self.raw)
except:
self.extn = 'yaml'
return yaml.load(self.raw, Loader=yaml.BaseLoader)
return self.raw
class Params:
def __init__(self, params):
"""
Assemble a Params class by either passing in a:
string - To read a filename of dict values
dict - To read a dict of {k: v} values
list - To read a list of {ParameterName: foo, ParameterValue: bar} dicts
"""
self.params = params
if self.params is None or self.params == '':
self.type = 'dict'
self.params = None
elif isinstance(self.params, str):
self.type = 'file'
elif isinstance(self.params, list):
self.type = 'list'
elif isinstance(self.params, dict):
self.type = 'dict'
elif self.params is None:
self.type = 'dict'
else:
raise ValueError('Unexpected value for Params class')
@property
def raw(self):
return json.dumps(self.params)
@property
def to_dict(self):
if self.type == 'file':
with open(self.params) as fh:
return json.load(fh)
elif self.type == 'list':
return {
param['ParameterKey']: param['ParameterValue']
for param in self.params
}
return self.params
@property
def to_list(self):
if self.type in ['file', 'dict'] and self.params is not None:
return [{
"ParameterKey": k,
"ParameterValue": v
} for k, v in self.to_dict.items()
] if self.type is not None else None
return self.params
class Tags:
def __init__(self, tags):
"""
Assemble a Params class by either passing in a:
dict - To read a dict of {k: v} values
list - To read a list of {TagName: foo, TagValue: bar} dicts
"""
self.tags = tags
if isinstance(self.tags, dict) or self.tags is None:
self.type = 'dict'
elif isinstance(self.tags, list):
self.type = 'list'
else:
raise ValueError(
f'Unexpected {type(self.tags)} value for Tags class')
@property
def to_dict(self):
if self.type == 'list':
return {tag['Key']: tag['Value'] for tag in self.tags}
return self.tags
def to_list(self, extra_tags={}):
if self.type != 'list' and self.tags is not None:
return [{
"Key": k,
"Value": v
} for k, v in {
**extra_tags,
**self.tags
}.items()] if self.type is not None else None
return self.tags
class Cloudformation:
"""
Class for actions to do with Cloudformation
"""
def __init__(self, account=None, region=None):
self.account = account
self.region = region
@property
def client(self):
"""
Return a client
"""
return get_client(self.profile, self.region, 'cloudformation')
@property
def bucket_client(self):
"""
Return the bucket client
"""
return get_client(self.bucket['profile'], self.bucket['region'], 's3')
def gen_stack(self, stack_json):
if stack_json['StackName'].startswith('StackSet'):
raise ValueError(f'Ignoring StackSet {stack_json["StackName"]}')
attempt = 0
while True:
try:
raw_template = self.client.get_template(
StackName=stack_json['StackName'])['TemplateBody']
break
except botocore.exceptions.ClientError as err:
if err.response['Error']['Message'].find('Throttling') != -1:
if attempt > 10:
raise
                    # exponential backoff; '^' is XOR in Python, not power
                    time.sleep((2 ** attempt) * 0.1)
attempt += 1
else:
raise
stack = Stack(
name=stack_json['StackName'],
account=self.account,
region=self.region,
params=stack_json.get('Parameters', None),
template_body=raw_template,
)
# Ignore serverless
try:
stack.template.to_dict['Outputs']['ServerlessDeploymentBucketName']
except:
pass
else:
raise ValueError(
f'Ignoring serverless stack {stack_json["StackName"]}')
return stack
def save_stack(self, stack, force):
with open('stax.json', 'r') as fh_read:
stack_json = json.load(fh_read)
try:
template_dest = string.Template(
stack_json['stacks'][stack.name]['template']).substitute(
name=stack.name, account=stack.account)
template_val = template_dest
except:
template_dest = f'{stack.account}/{stack.name}/template.{stack.template.extn}'
template_val = f'$account/$name/template.{stack.template.extn}'
pathlib.Path(f'{stack.account}/{stack.name}').mkdir(parents=True,
exist_ok=True)
try:
params_dest = string.Template(stack_json['stacks'][
stack.name]['parameters'][stack.account]).substitute(
name=stack.name, account=stack.account)
params_val = params_dest
except:
params_dest = f'{stack.account}/{stack.name}/params.json'
            params_val = '$account/$name/params.json'
pathlib.Path(f'{stack.account}/{stack.name}').mkdir(parents=True,
exist_ok=True)
with open(template_dest, 'w') as fh:
if stack.template.extn == 'yaml':
# We can dump raw YAML - https://github.com/boto/boto3/issues/1468
fh.write(stack.template.raw)
else:
# If the JSON template can be parsed, it's returned as a dict
# so we can't return the original file, so we may as well pretty it
json.dump(stack.template.to_dict, fh, indent=4)
if stack.name not in stack_json['stacks']:
stack_json['stacks'][stack.name] = {}
if 'parameters' not in stack_json['stacks'][stack.name]:
stack_json['stacks'][stack.name]['parameters'] = {}
has_params = stack.params.to_dict
if has_params:
with open(params_dest, 'w') as fh:
json.dump(has_params, fh, sort_keys=True, indent=4)
stack_json['stacks'][stack.name]['parameters'][
stack.account] = params_val
else:
stack_json['stacks'][stack.name]['parameters'][stack.account] = ''
stack_json['stacks'][stack.name]['template'] = template_val
if 'regions' not in stack_json['stacks'][stack.name]:
stack_json['stacks'][stack.name]['regions'] = []
if self.region not in stack_json['stacks'][stack.name]['regions']:
stack_json['stacks'][stack.name]['regions'].append(self.region)
with open('stax.json', 'w') as fh_write:
json.dump(stack_json, fh_write, sort_keys=True, indent=4)
def generate_stacks(self, local_stacks={}, stack_names=None, force=False):
"""
Pull down a list of created AWS stacks, and
generate the configuration locally
"""
for _, remote_stack in self.describe_stacks(stack_names).items():
if remote_stack['StackStatus'] in ['REVIEW_IN_PROGRESS']:
print(
f'Skipping {remote_stack["StackName"]} due to {remote_stack["StackStatus"]} status'
)
continue
try:
parsed_stack = self.gen_stack(remote_stack)
except ValueError as err:
print(err)
continue
if force or parsed_stack not in local_stacks:
click.echo(f'Saving stack {parsed_stack.name}')
self.save_stack(parsed_stack, force)
else:
click.echo(
f'Skipping stack {parsed_stack.name} as it exists in stax.json - The live stack may differ, use --force to force'
)
def describe_stacks(self, names=None):
"""
Describe existing stacks
"""
results = {}
list_of_stacks_to_describe = [{
'StackName': name
} for name in names] if names else [{}]
for stack_to_describe in list_of_stacks_to_describe:
paginator = self.client.get_paginator('describe_stacks')
response_iterator = paginator.paginate(**stack_to_describe)
try:
                results = {
                    **results,
                    **{
                        stack['StackName']: stack
                        for response in response_iterator
                        for stack in response['Stacks']
                    }
                }
except botocore.exceptions.ClientError as err:
if err.response['Error']['Message'].find(
'does not exist') != -1:
raise StackNotFound(
f'{stack_to_describe["StackName"]} stack does not exist'
)
raise
return results
@property
def exists(self):
"""
Determine if an individual stack exists
"""
try:
if self.describe_stacks(names=[self.name]):
return True
except StackNotFound:
return False
@property
def context(self):
"""
Return the click context
"""
return click.get_current_context().obj
@property
def account_id(self):
"""
Return the configured account ID
"""
return self.context.config['accounts'][self.account]['id']
@property
def profile(self):
"""
Return the configured account profile
"""
return self.context.config['accounts'][self.account]['profile']
@property
def default_tags(self):
"""
Return some default tags based on chosen CI
"""
if 'buildkite' in self.context.config.get('ci', {}):
return {
"BUILDKITE_COMMIT":
os.getenv("BUILDKITE_COMMIT", gitlib.current_branch()),
"BUILDKITE_BUILD_URL":
os.getenv("BUILDKITE_BUILD_URL", "dev"),
"BUILDKITE_REPO":
os.getenv("BUILDKITE_REPO", f"{gitlib.remotes()}"),
"BUILDKITE_BUILD_CREATOR":
os.getenv("BUILDKITE_BUILD_CREATOR", gitlib.user_email()),
"STAX_HASH":
self.hash_of_params_and_template,
}
return {}
@property
def resources(self):
"""
Return stack resources
"""
req = self.client.describe_stack_resources(StackName=self.name)
return req['StackResources']
def wait_for_stack_update(self, action=None):
"""
Wait for a stack change/update
"""
        kwargs = {'text': f'{self.name}: {action} Pending'}
if action == 'deletion':
kwargs['color'] = 'red'
spinner = halo.Halo(**kwargs)
spinner.start()
while True:
try:
req = self.client.describe_stacks(StackName=self.name)
except botocore.exceptions.ClientError as err:
if err.response['Error']['Message'].find(
'does not exist') != -1:
if action == 'deletion':
return spinner.succeed(
f'{self.name}: DELETE_COMPLETE (or stack not found)'
)
raise StackNotFound(f'{self.name} stack no longer exists')
raise
status = req['Stacks'][0]['StackStatus']
spinner.text = f'{self.name}: {status}'
if status in FAILURE_STATES:
return spinner.fail()
elif status in SUCCESS_STATES:
return spinner.succeed()
time.sleep(1)
def changeset_create_and_wait(self,
set_type,
use_existing_params=False,
skip_tags=False):
"""
Request a changeset, and wait for creation
"""
spinner = halo.Halo(
text=
f'Creating {set_type.lower()} changeset for {self.name}/{self.account} in {self.region}'
)
spinner.start()
# Create Changeset
kwargs = dict(
ChangeSetName=f'stax-{uuid.uuid4()}',
StackName=self.name,
Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],
)
if len(self.template.raw) <= 51200:
kwargs['TemplateBody'] = self.template.raw
else:
kwargs[
'TemplateURL'] = f'https://{self.bucket["name"]}.s3.{self.bucket["region"]}.amazonaws.com/stax/stax_template_{self.hash_of_template}'
self.bucket_client.put_object(
Body=self.template.raw,
Bucket=self.bucket['name'],
Key=f'stax/stax_template_{self.hash_of_template}')
if use_existing_params:
            stack_describe = self.describe_stacks(names=[self.name])[self.name]
if 'Parameters' in stack_describe:
kwargs['Parameters'] = stack_describe['Parameters'].copy()
for param in kwargs['Parameters']:
param['UsePreviousValue'] = True
del (param['ParameterValue'])
if 'ResolvedValue' in param:
del (param['ResolvedValue'])
else:
params_passed = self.params.to_list
if params_passed:
kwargs['Parameters'] = params_passed
if not skip_tags:
tags_passed = self.tags.to_list(extra_tags=self.default_tags)
if tags_passed:
kwargs['Tags'] = tags_passed
try:
req = self.client.create_change_set(ChangeSetType=set_type,
**kwargs)
cs_id = req['Id']
except botocore.exceptions.ClientError as err:
err_msg = err.response['Error']['Message']
spinner.fail(f'{self.name}: {err.response["Error"]["Message"]}')
if err_msg.find('does not exist') != -1:
#spinner.fail(f'{self.name} does not exist')
raise StackNotFound(f'{self.name} stack no longer exists')
sys.exit(1)
# Wait for it to be ready
while True:
req = self.client.describe_change_set(ChangeSetName=cs_id)
if req['Status'] not in ['CREATE_PENDING', 'CREATE_IN_PROGRESS']:
break
time.sleep(1)
if 'StatusReason' in req and req['StatusReason'].find(
"didn't contain changes") != -1:
spinner.succeed(
f'{self.name}/{self.account} in {self.region} is up to date!\n'
)
return
spinner.succeed()
investigate = parse_changeset_changes(req['Changes'])
for thing in investigate:
            if thing == 'Tags':
                # Compare old vs new tags as dicts; describe_stacks and
                # kwargs['Tags'] both hold lists of {'Key':..., 'Value':...}
                old_tags = Tags(self.describe_stacks(
                    names=[self.name])[self.name]['Tags']).to_dict
                new_tags = Tags(kwargs['Tags']).to_dict
                for k, v in new_tags.items():
                    if old_tags.get(k) != v:
                        click.echo(f'{k}: \n' +
                                   click.style(f'  - {old_tags.get(k)}\n', fg='red') +
                                   click.style(f'  + {v}', fg='green'))
return cs_id
def create(self):
"""
Create a stack via change set
"""
# Create changeset
changeset = self.changeset_create_and_wait('CREATE')
if not changeset:
return
if not click.confirm(
f'Are you sure you want to {click.style("create", fg="green")} {self.account}/{self.name} in {self.region}?'
):
self.client.delete_change_set(ChangeSetName=changeset,
StackName=self.name)
self.context.debug(f'Deleted changeset {changeset}')
return
# Execute changeset
req = self.client.execute_change_set(ChangeSetName=changeset)
# Wait for changes
self.wait_for_stack_update()
def delete(self):
"""
Create a stack via change set
"""
if not click.confirm(
f'Are you sure you want to {click.style("delete", fg="red")} {self.account}/{self.name} in {self.region}?'
):
return
click.echo(f'Deleting {self.name} in {self.region}')
req = self.client.delete_stack(StackName=self.name)
self.wait_for_stack_update('deletion')
def update(self, use_existing_params, skip_tags):
"""
Update a stack via change set
"""
# Create changeset
changeset = self.changeset_create_and_wait(
'UPDATE',
use_existing_params=use_existing_params,
skip_tags=skip_tags)
if not changeset:
return
if not click.confirm(
f'Are you sure you want to {click.style("update", fg="cyan")} {click.style(self.account, bold=True)}/{self.name} in {self.region}?'
):
self.client.delete_change_set(ChangeSetName=changeset,
StackName=self.name)
self.context.debug(f'Deleted changeset {changeset}')
return
# Execute changeset
req = self.client.execute_change_set(ChangeSetName=changeset)
# Wait for changes
self.wait_for_stack_update()
class Stack(Cloudformation):
"""
Stack class to represent how we define stacks as humans
not how AWS expects them to be
"""
def __init__(
self,
name,
account,
region,
params=None,
tags=None,
template_body=None,
template_file=None,
bucket=None,
purge=False,
):
# Adopt parent class methods/attributes
super().__init__()
self.name = name
self.account = account
self.region = region
self.params = Params(params=params)
if [template_body, template_file].count(None) != 1:
raise ValueError(
'You must enter either template_body or template_file')
if template_body:
self.template = Template(template_body=template_body)
else:
s = string.Template(template_file)
self.template = Template(
template_file=s.substitute(name=name, account=account))
self.bucket = bucket
self.tags = Tags(tags=tags)
self.purge = purge
@property
def hash_of_params_and_template(self):
"""
Hash parameters and templates to quickly determine if a stack needs to be updated
"""
return hashlib.sha256(
self.template.raw.encode('utf-8') +
self.params.raw.encode('utf-8')).hexdigest()
@property
def hash_of_template(self):
"""
Hash template to use for bucket filename
"""
return hashlib.sha256(self.template.raw.encode('utf-8')).hexdigest()
def pending_update(self, stax_hash):
"""
Determine if a stack needs to be updated by the lack or mismatch of `STAX_HASH` tag
"""
if self.hash_of_params_and_template != stax_hash:
return True
return False
def __members(self):
return (self.account, self.region, self.name)
def __eq__(self, other):
"""
Determine equivalence by AWS' unique stack perspective
"""
if type(self) is type(other):
return self.__members() == other.__members()
def __hash__(self):
return hash(self.__members())
def __repr__(self):
"""
Friendly repr
"""
return f'{self.account}/{self.region}/{self.name}'
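# Illustrative construction (not part of the module): a Stack is keyed by the
# (account, region, name) triple via __members, so two instances built with
# the same triple compare equal and hash identically.
#   s = Stack(name='vpc', account='dev', region='us-east-1',
#             template_file='$account/$name/template.yaml')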
def parse_changeset_changes(changes):
"""
Parse a changeset for changes
and highlight what has been added,
modified and removed
"""
# Find out more about these attributes
dig_into = []
for change in changes:
rc = change['ResourceChange']
if rc['Action'] == 'Add':
click.secho(
f'{rc["ResourceType"]} ({rc["LogicalResourceId"]}) will be added',
fg='green')
elif rc['Action'] == 'Modify':
mod_type = click.style(
'by deletion and recreation ',
fg='red') if rc['Replacement'] in ['True', True] else ''
scope_and_causing_entities = {
scope: [
detail['CausingEntity'] for detail in rc['Details']
                    if 'CausingEntity' in detail
]
for scope in rc['Scope']
}
cause = f'caused by changes to: {scope_and_causing_entities}'
click.secho(
f'{rc["ResourceType"]} ({rc["LogicalResourceId"]}) will be modified {mod_type}{cause}',
fg='yellow')
dig_into.extend(scope_and_causing_entities.keys())
elif rc['Action'] == 'Remove':
click.secho(
f'{rc["ResourceType"]} ({rc["LogicalResourceId"]}) will be deleted',
fg='red')
else:
raise ValueError('Unhandled change', change)
return dig_into
| 32.49004 | 149 | 0.54159 | 2,679 | 24,465 | 4.815976 | 0.158641 | 0.017982 | 0.015114 | 0.020152 | 0.300109 | 0.253682 | 0.216401 | 0.18315 | 0.148659 | 0.117191 | 0 | 0.004582 | 0.348743 | 24,465 | 752 | 150 | 32.533245 | 0.805184 | 0.075455 | 0 | 0.248639 | 0 | 0.014519 | 0.178966 | 0.04656 | 0 | 0 | 0 | 0 | 0 | 1 | 0.07078 | false | 0.012704 | 0.041742 | 0.005445 | 0.206897 | 0.005445 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
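The Params and Tags helpers above exist to normalize between the human-friendly dict form and CloudFormation's list-of-dicts wire form. A standalone sketch of that round trip (no AWS calls involved; the sample keys are made up):

def params_to_list(params):
    # dict {key: value} -> [{'ParameterKey': ..., 'ParameterValue': ...}, ...]
    return [{'ParameterKey': k, 'ParameterValue': v} for k, v in params.items()]

def params_to_dict(param_list):
    return {p['ParameterKey']: p['ParameterValue'] for p in param_list}

as_list = params_to_list({'Env': 'prod', 'Cidr': '10.0.0.0/16'})
assert params_to_dict(as_list) == {'Env': 'prod', 'Cidr': '10.0.0.0/16'}
print(as_list)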
980f3d1f3df4c3dd7054f229628807cf4bbb2a74 | 35,129 | py | Python | NGDAUpdater/NGDAUpdaterForStates.py | mattCensus/PerlScripts | d2643d99abc3f0647ebfbd41f7e5faa704da3e91 | [
"MIT"
] | null | null | null | NGDAUpdater/NGDAUpdaterForStates.py | mattCensus/PerlScripts | d2643d99abc3f0647ebfbd41f7e5faa704da3e91 | [
"MIT"
] | null | null | null | NGDAUpdater/NGDAUpdaterForStates.py | mattCensus/PerlScripts | d2643d99abc3f0647ebfbd41f7e5faa704da3e91 | [
"MIT"
] | null | null | null | import os
import fnmatch
import shutil
import re
import datetime
import time
#import StringIO
import pickle
import sys
import MetadataDateModules
from MetadataDateModules import metadataDateUpdater
from MetadataDateModules import TodaysDate
from FirstAlternativeTitle import FirstAlternativeTitle
from RestServiceFiller import RestServiceFiller
from WMSFiller import WMSFiller
from ThemeDir import ThemeDir
from EAFileFiller import EAFileFiller
from eaTitle import eaTitle
from FileNameCorrector import FileNameCorrector
from RestServiceFiller import restExist
from FileNameCorrector import RealfileName
from BrowseGraphicInserter import BrowseGraphicInserter
datesupdated=[]
NewFileArray=[]
NationalPlace=[]
DatesUpdated=0
FileCounter=0
EndDateStamp='no'
# getting today's date using the datetime module
PresentDate = datetime.datetime.now()
PresentDate.day
if PresentDate.hour > 12:
PresentHour = PresentDate.hour -12
AmPm='PM"'
else:
PresentHour =PresentDate.hour
AmPm ='AM'
presentTime= str(PresentHour) + ":" + str(PresentDate.minute) + ":" + str(PresentDate.second) + AmPm
if PresentDate.day < 10:
day = "0" + str(PresentDate.day)
else:
day = PresentDate.day
if PresentDate.month < 10:
month = "0" + str(PresentDate.month)
else:
month = PresentDate.month
PresentDate2 = str(PresentDate.year) + "-" + str(month) + "-" + str(day)
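# Note (illustrative alternative, not used by this script): strftime handles
# the zero-padding and AM/PM logic above in one call each, e.g.
#   PresentDate2 = PresentDate.strftime('%Y-%m-%d')
#   presentTime = PresentDate.strftime('%I:%M:%S%p')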
#path='C:/Users/mattp/Desktop/WorkFiles/XMLFiles/2020files/FE2020/StatesNew/unsd/'
#C:\Users\mattp\Desktop\WorkFiles\XMLFiles\2021Tiger\bg
path='C:/Users/mattp/Desktop/WorkFiles/XMLFiles/2021Tiger/sldl'
# C:\Users\mattp\Desktop\WorkFiles\XMLFiles\2021Tiger\roads\tl_2021_01001_roads.shp.iso.xml
# C:\Users\mattp\Desktop\WorkFiles\XMLFiles\2020files\ver2\fe_2020\stateNGDA\anrc\tl_2020_022_anrc.shp.iso.xml
# C:\Users\mattp\Desktop\WorkFiles\XMLFiles\2020 files\ver2\fe_2020\NationalNGDA
SeriesTheme=' Current Unified School Districts State-based Shapefile'
configfiles = [os.path.join(dirpath, f)
for dirpath, dirnames, files in os.walk(path)
for f in files if f.endswith('.xml')]
def DateStampMod(DateStampInd, CurrentDate,ContentIfoInd):
#print('Now working on '+ CurrentDate)
if ContentIfoInd == 'yes':
NewFile.write(line)
EndDateStamp = 'No'
return EndDateStamp
elif DateStampInd == 'yes':
NewFile.write('<gco:Date>' + PresentDate2 + '</gco:Date>\n')
NewFile.write('</gmd:dateStamp>')
EndDateStamp= 'No'
return EndDateStamp
else:
NewFile.write('<gmd:dateStamp>')
NewFile.write('<gco:Date>' + PresentDate2 + '</gco:Date>\n')
NewFile.write('</gmd:dateStamp>')
EndDateStamp = 'No'
return EndDateStamp
def eaUrl(Pass):
Theme = Pass
#print("Now in the eaUrl Module\n")
#print("Now working on:" + Theme)
EATheme = str(ThemeDir(Theme))
FirstPartUrl='https://meta.geo.census.gov/data/existing/decennial/GEO/GPMB/TIGERline/Current_19110/'
YearDir='fe_2020'
EAFileName='tl_2020_' + EATheme + '.shp.ea.iso.xml'
    FinalEaFile = FirstPartUrl + YearDir + "/" + EATheme + '/' + EAFileName + '\n'
return FinalEaFile
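# Illustrative result of eaUrl (assuming ThemeDir maps the theme to 'unsd'):
#   https://meta.geo.census.gov/data/existing/decennial/GEO/GPMB/TIGERline/Current_19110/fe_2020/unsd/tl_2020_unsd.shp.ea.iso.xml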
if os.path.exists(path):
print("The " + path + " directory exists")
else:
print("Could not find " + path + ". Please make sure the path is correct")
sys.exit(1)
def keywordCounter(input):
file=input
KeywordModCounter=0
ReadFile = open(file, "r")
for line in ReadFile:
if re.search('<gmd:keyword>',line,flags=0):
KeywordModCounter+=1
else:
continue
FinalKeyword= KeywordModCounter-3
return FinalKeyword
for file in configfiles:
print ('Now working on: ' + file)
transferOptionsCounter=0
linkageCounter=0
editionCounter=0
FileCounter += 1
gmdDateCounter=0
    KeywordModCounter=0
    StateEntityCounter=0
KeywordGood = 'yes'
    keywordCounter = 0  # note: shadows the keywordCounter() helper defined above
NationalPlace.clear()
nationalPlaceInd = 'no'
keywordind = 'no'
InCitInd = 'no'
TitleEndCharacterString ='no'
DescriptiveKeywordsInd='off'
MafTigerInd = 'no'
dotLocation = file.find(".")
preDot = file[0:dotLocation]
postDot = file[dotLocation:]
ContentIfoInd = 'no'
FirstTitle = 'Yes'
endTitleCounter=0
datasetUriind = 'no'
OutFile = preDot + "_corrected_" + postDot
characterSetCounter = 0
Restful ='no'
#print ('Before the first loop')
ReadFileA = open(file, "r", encoding='utf-8')
for line in ReadFileA:
#print('line' + line)
if re.search('<gmd:keyword>', line, flags=0):
KeywordModCounter += 1
else:
continue
PrePlace = KeywordModCounter -6
StateKeywords= PrePlace +3
ReadFileA.close()
#finalKeyword=int(keywordCounter(file))
print (' PrePlace' + str( PrePlace))
print ('StateKeywords' + str(StateKeywords))
#print("preDot: " + preDot)
#print("PostDot: " + postDot)
#print("Outfile" + OutFile)
#print("File: " + file)
print("Now Working on: " + file)
#print ("Outfile=" + OutFile)
ReadFile = open(file, "r", encoding='utf-8')
with open(OutFile, "w") as NewFile:
for line in ReadFile:
if re.search('gmd:linkage',line,flags=0):
linkageCounter+=1
#NewFile.write('<!-- if #1 -->\n')
if linkageCounter == 1:
LinkageInd='yes'
NewFile.write(line)
else:
NewFile.write(line)
elif re.search('</gmd:characterSet>', line, flags=0):
if characterSetCounter == 0:
NewFile.write(line)
NewFile.write('<gmd:parentIdentifier>\n')
NewFile.write('<gco:CharacterString>TIGER/Line Shapefile, Current, Series Information for the' + SeriesTheme + ' </gco:CharacterString>\n')
#NewFile.write('<!-- Stop16 -->')
NewFile.write('</gmd:parentIdentifier>\n')
characterSetCounter += 1
else:
NewFile.write(line)
elif re.search('</gmd:purpose>', line, flags=0):
#NewFile.write ("In the Status Insertion. The characterSetCounter is " +str(characterSetCounter))
if characterSetCounter == 1:
NewFile.write(line)
NewFile.write('<gmd:status>\n')
NewFile.write(
' <gmd:MD_ProgressCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_ProgressCode" codeListValue="Completed">Completed</gmd:MD_ProgressCode>\n')
NewFile.write('</gmd:status>\n')
characterSetCounter += 1
elif re.search('</gmd:resourceMaintenance>', line, flags=0):
NewFile.write(line)
BrowseGraphicInserter(mainTheme, NewFile)
elif re.search('<gco:CharacterString>MAF/TIGER</gco:CharacterString>', line, flags=0):
#NewFile.write('<!-- Stop19 -->')
#ind#1
NewFile.write(line)
MafTigerInd = 'yes'
#NewFile.write('<!-- MafTigerind: ' + MafTigerInd + ' -->\n')
elif re.search('rest', line, flags=0):
NewFile.write(line)
#NewFile.write('<!11 In the Rest -->')
Restful = 'yes'
RestTheme = line
            elif re.search(re.escape('Federal Information Processing Series (FIPS), Geographic Names Information System (GNIS), and feature names.'), line, flags=0):
NewFile.write(line)
elif re.search('gmd:URL',line, flags=0):
#NewFile.write('<!-- if #2 -->\n')
print("---------------------------------\n")
print ("LinkageInd: " + LinkageInd + "\n")
if LinkageInd =="yes":
#NewFile.write(line)
lastSlash=line.rfind("/tl")+1
lastEndtag=line.find("</gmd:URL>")
ZipFileName=line[lastSlash: lastEndtag]
ThemeURL=str(ThemeDir( mainTheme))
'''
# NewFile.write('<!-- ZipFileName ' + ZipFileName + '-->')
#NewFile.write('<!-- ThemeURL' + ThemeURL + '-->')
'''
FinalZip=' <gmd:URL>https://www2.census.gov/geo/tiger/TIGER2021/'+ ThemeURL +'/' + ZipFileName + '</gmd:URL>\n'
LinkageInd="No"
# print('In the LinkageId section\n')
#print(line)
#print ('ZipFileName: ' + ZipFileName)
LinkageInd='No'
#NewFile.write('<!--- What is going on here? -->')
NewFile.write(FinalZip)
else:
NewFile.write(line)
elif re.search('<gco:CharacterString>.shp.iso.xml',line, flags=0):
RevFile = FileNameCorrector(file, OutFile)
# NewFile.write('<!-- if #3 -->\n')
NewFile.write(RevFile)
# elif re.search(' <gco:CharacterString>.iso.xml', line, flags=0):
# RevFile = FileNameCorrector(file, OutFile)
# NewFile.write('<!-- if #3 -->\n')
# NewFile.write(RevFile)
elif re.search ('codeListValue=""',line,flags=0):
NewFile.write(' codeListValue="dataset"/>')
elif re.search('<gmd:MD_GeometricObjectTypeCode',line,flags=0):
NewFile.write('<!-- if #5 -->\n')
lastCarrot=line.find('>')-1
maipart=line[0:lastCarrot]
GMTC=maipart+'" codeListValue="complex">complex</gmd:MD_GeometricObjectTypeCode>'
NewFile.write(GMTC)
elif re.search('</gmd:featureTypes>',line,flags=0):
#NewFile.write('<!-- if #6 -->\n')
NewFile.write(line)
NewFile.write(' <gmd:featureCatalogueCitation>')
NewFile.write(' <gmd:CI_Citation>\n')
NewFile.write(' <gmd:title>\n')
NewFile.write(str(eaTitle(mainTheme)))
NewFile.write(' </gmd:title>\n')
NewFile.write(' <gmd:date>\n')
NewFile.write(' <gmd:CI_Date>\n')
NewFile.write(' <gmd:date>\n')
NewFile.write(' <gco:Date>2020</gco:Date>\n')
NewFile.write(' </gmd:date>\n')
NewFile.write(' <gmd:dateType>\n')
NewFile.write(' <gmd:CI_DateTypeCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_DateTypeCode" codeListValue="publication" codeSpace="002"/>\n')
NewFile.write(' </gmd:dateType>\n')
NewFile.write(' </gmd:CI_Date>\n')
NewFile.write(' </gmd:date>\n')
NewFile.write(' <gmd:citedResponsibleParty xlink:href="https://www.ngdc.noaa.gov/docucomp/1df27e57-4768-42de-909b-52f530601fba" xlink:title="U.S Department of Commerce, U.S Census Bureau, Geography Division (distributor)"/>')
NewFile.write(' <gmd:otherCitationDetails>\n')
EAFile = str(eaUrl(mainTheme))
NewFile.write(' <gco:CharacterString>' + EAFile + '</gco:CharacterString>\n')
#NewFile.write('<!-- Stop1 -->')
NewFile.write(' </gmd:otherCitationDetails>\n')
NewFile.write(' </gmd:CI_Citation>\n')
NewFile.write(' </gmd:featureCatalogueCitation>\n')
elif re.search('</gmd:protocol>',line,flags=0):
NewFile.write('<!-- if #7 -->\n')
if transferOptionsCounter == 0:
NewFile.write(line)
NewFile.write(' <gmd:applicationProfile>\n')
NewFile.write(' <gco:CharacterString>ZIP</gco:CharacterString>\n')
#NewFile.write('<!-- Stop2 -->')
NewFile.write('</gmd:applicationProfile>\n')
NewFile.write('<gmd:name>\n')
NewFile.write('<gco:CharacterString>'+ ZipFileName + '</gco:CharacterString>\n')
#NewFile.write('<!-- Stop3 -->')
NewFile.write(' </gmd:name>\n')
NewFile.write('<gmd:description>\n')
actualFile=RealfileName(file, OutFile)
NewFile.write(' <gco:CharacterString> This zip file contains the ' + actualFile + ' shapefile </gco:CharacterString>\n')
#NewFile.write('<!-- Stop4 -->')
NewFile.write('</gmd:description>\n')
else:
NewFile.write(line)
elif re.search('<gco:CharacterString>TIGER/Line Shapefile',line,flags=0):
#NewFile.write('<!-- if #8 -->\n')
if FirstTitle == 'Yes':
FirstTitle ='No'
TitleEndCharacterString='yes'
mainTitle=line
lastComma=line.rfind(',')+1
if re.search('</gco:CharacterString>',line,flags=0):
#NewFile.write('<!-- Stop5a -->')
closingTagLoc=line.find('</')
mainTheme = line[lastComma:closingTagLoc]
else:
mainTheme=line[lastComma:]
Geography=line[68:lastComma-1]
#print ('Geography:' + Geography)
PrimaryAlternateTitle = '<gco:CharacterString>TIGER/Line Shapefile, Current, ' + Geography + mainTheme + '</gco:CharacterString>\n'
#NewFile.write('<!-- Stop5b -->')
NewFile.write(PrimaryAlternateTitle)
#NewFile.write('<!-- Check 1 -->\n')
NewFile.write('</gmd:title>\n')
NewFile.write(' <gmd:alternateTitle>\n')
if re.search('</gco:CharacterString>',mainTitle,flags=0):
#NewFile.write('<!-- Stop6 -->')
NewFile.write(mainTitle)
else:
NewFile.write(mainTitle+ '</gco:CharacterString>')
NewFile.write(' </gmd:alternateTitle>\n')
FirstAlternativeTitle
FirstAlternativeTitle(mainTheme,NewFile)
else:
NewFile.write(line)
elif re.search('</gmd:transferOptions>', line, flags=0):
#NewFile.write('<!-- if #9 -->\n')
#print('In the transfer options section')
#print('transferOptionsCounter' + str(transferOptionsCounter: ) + "\n")
#NewFile.write('<!-- transferOptionsCounter ' + str(transferOptionsCounter) +'-->')
if transferOptionsCounter == 1:
NewFile.write(line)
NewFile.write(' <gmd:transferOptions>\n')
NewFile.write(' <gmd:MD_DigitalTransferOptions>\n')
NewFile.write(' <gmd:onLine>\n')
NewFile.write(' <gmd:CI_OnlineResource>\n')
WMSFiller(mainTheme,NewFile)
NewFile.write(' <gmd:function>\n')
NewFile.write(' <gmd:CI_OnLineFunctionCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_OnlineFunctionCode"\n')
NewFile.write(' codeListValue="search">search\n')
NewFile.write(' </gmd:CI_OnLineFunctionCode>\n')
NewFile.write(' </gmd:function>\n')
NewFile.write(' </gmd:CI_OnlineResource>\n')
NewFile.write(' </gmd:onLine>\n')
NewFile.write(' </gmd:MD_DigitalTransferOptions>\n')
NewFile.write(' </gmd:transferOptions>\n')
transferOptionsCounter += 1
NewFile.write(' <gmd:transferOptions>\n')
NewFile.write(' <gmd:MD_DigitalTransferOptions>\n')
NewFile.write(' <gmd:onLine>\n')
NewFile.write(' <gmd:CI_OnlineResource>\n')
EAFileFiller(mainTheme,NewFile)
NewFile.write(' <gmd:function>\n')
NewFile.write(
' <gmd:CI_OnLineFunctionCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_OnlineFunctionCode"\n')
NewFile.write(' codeListValue="download">download\n')
NewFile.write(' </gmd:CI_OnLineFunctionCode>\n')
NewFile.write(' </gmd:function>\n')
NewFile.write(' </gmd:CI_OnlineResource>\n')
NewFile.write(' </gmd:onLine>\n')
NewFile.write(' </gmd:MD_DigitalTransferOptions>\n')
NewFile.write(' </gmd:transferOptions>\n')
else:
NewFile.write(line)
transferOptionsCounter += 1
elif re.search('</gmd:title>',line,flags=0):
#NewFile.write('<!-- if #10 -->\n')
if endTitleCounter ==0:
endTitleCounter+=1
else:
NewFile.write(line)
elif re.search('</gco:CharacterString>',line,flags=0):
#NewFile.write('<!-- if #11 TitleEndCharacterString:' + TitleEndCharacterString + '\n nationalPlaceInd: ' + nationalPlaceInd +'-->\n')
#NewFile.write('<!-- Stop7 -->')
#NewFile.write('<!-- Line: ' + line + '-->')
#NewFile.write(line)
if TitleEndCharacterString == 'yes':
#NewFile.write('<!-- Stop7z -->')
#NewFile.write('<!-- 11az -->')
TitleEndCharacterString='no'
continue
elif re.search('http://www.census.gov/geo/reference/geocodes.html',line,flags=0):
NewFile.write(line)
#NewFile.write('<!-- Stop7a -->')
#elif re.search('.zip', line, flags=0):
# lastPartPos = line.find('http://www2.census.gov/geo/tiger/TIGER2020PL')
# lastPart = line[lastPartPos:]
# finalUrl = '<gco:CharacterString>https://www2.census.gov/geo/tiger/TIGER2020PL/LAYER/' + lastPart
# NewFile.write(finalUrl)
#NewFile.write('<!-- Stop7c -->')
elif KeywordGood == 'no':
#NewFile.write('<!-- Stop7d -->')
#NewFile.write('<!-- 11a -->')
if keywordind== 'yes':
NewFile.write('<!-- if #11b -->\n')
#(line)
#print('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA')
#print('keywordCounter: ' + str(keywordCounter))
#print('KeywordGood = ' + KeywordGood)
#print('BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB')
if KeywordGood == 'yes':
NewFile.write('<!-- 11c -->')
if re.search('State or Equivalent Entity',line,flags=0):
StateEntityCounter+=1
if StateEntityCounter >1:
continue
else:
NewFile.write(line)
#print('Printing the Keyword!!!!!!!!!!!!!!')
keywordind='no'
else:
NewFile.write(line)
#print('Printing the Keyword!!!!!!!!!!!!!!')
keywordind = 'no'
else:
#print('Now writing' + line + 'to the NationalPlace ')
#NewFile.write('<!-- Now writing' + line + 'to the NationalPlace ')
NationalPlace.append(line)
keywordind = 'no'
elif re.search('<gco:CharacterString>MAF/TIGER</gco:CharacterString>', line, flags=0):
#NewFile.write('<!-- Stop8 -->')
#ind#2
NewFile.write(line)
MafTigerInd = 'yes'
#NewFile.write('<!-- MafTigerind: ' + MafTigerInd + ' -->\n')
else:
continue
elif datasetUriind =='yes':
#NewFile.write('<!-- Stop8m -->')
if re.search('FIPS', line, flags=0):
NewFile.write(line)
#NewFile.write('<!-- Stop8ma -->')
elif re.search('U.S. Department of Commerce, U.S. Census Bureau,',line,flags=0):
NewFile.write(line)
#NewFile.write('<!-- Stop8mb -->')
elif re.search('301-763-',line,flags=0):
NewFile.write(line)
#NewFile.write('<!-- Stop8mc -->')
elif re.search('4600 Silver Hill Road, Stop 7400',line,flags=0):
NewFile.write(line)
#NewFile.write('<!-- Stop8md -->')
elif re.search('Washington',line,flags=0):
NewFile.write(line)
#NewFile.write('<!-- Stop8me -->')
datasetUriind = 'no'
else:
#stop1
#NewFile.write('<!-- Stop8mf -->')
doubleSlashLoc=line.find('//')
postSlash=line[doubleSlashLoc:]
newUrl='<gco:CharacterString>https:' + postSlash
NewFile.write(newUrl)
datasetUriind = 'no'
elif InCitInd == 'yes':
InCitInd ='no'
continue
else:
NewFile.write(line)
elif re.search(' <gmd:edition>',line,flags=0):
#NewFile.write('<!-- if #12 -->\n')
NewFile.write(line)
NewFile.write(' <gco:CharacterString>2020</gco:CharacterString>\n')
#NewFile.write('<!-- Stop10 -->')
elif re.search('http://www2.census.gov/geo/tiger/TIGER2020',line,flags=0):
#NewFile.write('<!-- if #13 -->\n')
colonLoc=line.rfind(':')
lastpart=line[colonLoc:]
CorrectedHttp=' <gco:CharacterString>https' + lastpart
NewFile.write(CorrectedHttp) # write the corrected https URL (previously built but never written)
#NewFile.write('<!-- string corrected -->')
elif re.search(' </gmd:edition>',line, flags=0):
#NewFile.write('<!-- if #14 -->\n')
editionCounter+=1
#print('editionCounter: ' + str(editionCounter))
if editionCounter ==1:
NewFile.write(line)
NewFile.write(' <gmd:identifier>\n')
NewFile.write(' <gmd:MD_Identifier>\n')
NewFile.write(' <gmd:code>\n')
NewFile.write(' <gco:CharacterString>https://www.census.gov</gco:CharacterString>\n')
#NewFile.write('<!-- Stop12 -->')
NewFile.write(' </gmd:code>\n')
NewFile.write(' </gmd:MD_Identifier>\n')
NewFile.write(' </gmd:identifier>\n')
else:
NewFile.write(line)
elif re.search('<gmd:extent/>',line, flags=0):
#NewFile.write('<!-- if #15 -->\n')
NewFile.write(' <gmd:extent>\n')
NewFile.write(' <gml:TimePeriod gml:id="timePeriod">\n')
NewFile.write(' <gml:beginPosition>2020-06</gml:beginPosition>\n')
NewFile.write(' <gml:endPosition>2021-05</gml:endPosition>\n')
NewFile.write(' </gml:TimePeriod>\n')
NewFile.write(' </gmd:extent>\n')
elif re.search('<gml:beginPosition/>',line,flags=0):
#NewFile.write('<!-- if #16 -->\n')
NewFile.write(' <gml:beginPosition>2020-06</gml:beginPosition>\n')
elif re.search('<gml:endPosition/>',line,flags=0):
#NewFile.write('<!-- if #17 -->\n')
NewFile.write(' <gml:endPosition>2021-05</gml:endPosition>\n')
elif re.search('<gmd:keyword>',line, flags=0):
#NewFile.write('<!-- if #18 -->\n')
#NewFile.write('<!-- if #18' + line + '-->\n')
keywordCounter+=1
#print('00000000000000000000000000000000000000000000000000000')
#print('keywordCounter: ' + str(keywordCounter))
#print(line)
keywordind='yes'
if keywordCounter <=PrePlace:
KeywordGood='yes'
NewFile.write(line)
DescriptiveKeywordsInd='off'
elif keywordCounter<StateKeywords:
KeywordGood = 'no'
DescriptiveKeywordsInd='off'
else:
if re.search('State or Equivalent Entity',line,flags=0):
continue
else:
NewFile.write(line)
KeywordGood = 'yes'
DescriptiveKeywordsInd = 'on'
elif re.search(' <gco:CharacterString>',line,flags=0):
#NewFile.write('<!-- if #19 -->\n')
#print(line)
#print('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA')
#print ('keywordCounter: ' + str(keywordCounter))
#print ('KeywordGood = ' + KeywordGood)
#print('BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB')
if re.search('>ANSI INCITS 38:2009', line, flags=0):
#newLine = line + '</gco:CharacterString>'
#(ANSI INCITS 38-2009), Federal Information Processing Series (FIPS) – States/State Equivalents'
NewFile.write('<gco:CharacterString>National Standard Codes (ANSI INCITS 38-2009)</gco:CharacterString>' )
#NewFile.write('<!-- Stop13 -->')
elif re.search('<gco:CharacterString>MAF/TIGER</gco:CharacterString>', line, flags=0):
#ind#3
#NewFile.write('<!-- Stop14 -->')
NewFile.write(line)
MafTigerInd='yes'
#NewFile.write('<!-- MafTigerind: ' + MafTigerInd+ ' -->\n')
else:
NewFile.write(line)
elif re.search('</gmd:keyword>',line,flags=0):
#NewFile.write('<!-- if #20 -->\n')
#print('Ending the keyword tag')
if KeywordGood =='no':
continue
else:
NewFile.write(line)
elif re.search('</gmd:descriptiveKeywords>',line,flags=0):
#NewFile.write('<!-- if #21 -->\n')
#NewFile.write('<!-- Working with gmd:descriptiveKeywords DescriptiveKeywordsInd: ' + DescriptiveKeywordsInd + '-->')
if DescriptiveKeywordsInd=='on':
NewFile.write(line)
NewFile.write(' <gmd:descriptiveKeywords>\n')
NewFile.write(' <gmd:MD_Keywords>\n')
for item in NationalPlace:
#NewFile.write('<!-- item: ' + item + " -->")
if item != 'State or Equivalent Entity':
NewFile.write(' <gmd:keyword>\n')
NewFile.write(item + '\n')
NewFile.write(' </gmd:keyword>\n')
NewFile.write(' <gmd:type>\n')
NewFile.write(' <gmd:MD_KeywordTypeCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_KeywordTypeCode"\n')
NewFile.write(' codeListValue="place"/>\n')
NewFile.write(' </gmd:type>\n')
NewFile.write(' <gmd:thesaurusName>\n')
NewFile.write(' <gmd:CI_Citation>\n')
NewFile.write(' <gmd:title>\n')
NewFile.write(' <gco:CharacterString>ISO 3166 Codes for the representation of names of countries and their subdivisions</gco:CharacterString>\n')
#NewFile.write('<!-- Stop14 -->')
NewFile.write(' </gmd:title>\n')
NewFile.write(' <gmd:date gco:nilReason="unknown"/>\n')
NewFile.write(' </gmd:CI_Citation>\n')
NewFile.write(' </gmd:thesaurusName>\n')
NewFile.write(' </gmd:MD_Keywords>\n')
NewFile.write(' </gmd:descriptiveKeywords>\n')
DescriptiveKeywordsInd='off'
else:
NewFile.write(line)
elif re.search ('<gmd:dataSetURI>',line,flags=0):
datasetUriind='yes'
NewFile.write(line)
elif re.search('ANSI INCITS 31:2009',line,flags=0):
#NewFile.write('<!--ANSI INCITS 31:2009 -->\n')
InCitInd='yes'
continue
elif re.search ('(Formerly FIPS 8-6)',line, flags=0):
#NewFile.write('<!--ANSI INCITS 31:2009 -->\n')
continue
elif re.search ('<gmd:date>',line,flags=0):
if re.search ('<gmd:date gco:nilReason="unknown"/>',line, flags=0):
NewFile.write(line)
elif MafTigerInd =='yes':
NewFile.write(' <gmd:date gco:nilReason="unknown"/>\n')
else:
NewFile.write(line)
elif re.search('<gmd:CI_Date>',line,flags=0):
if MafTigerInd =='yes':
continue
else:
NewFile.write(line)
elif re.search('<gco:Date>Unpublished material</gco:Date>',line,flags=0):
if MafTigerInd =='yes':
continue
else:
NewFile.write(line)
elif re.search('</gmd:date>',line,flags=0):
if MafTigerInd == 'yes':
gmdDateCounter+=1
if gmdDateCounter ==1:
continue
else:
MafTigerInd='no'
else:
NewFile.write(line)
elif re.search('<gmd:dateType>',line,flags=0):
if MafTigerInd == 'yes':
continue
else:
NewFile.write(line)
elif re.search(' <gmd:CI_DateTypeCode',line, flags=0):
if MafTigerInd =='yes':
continue
else:
NewFile.write(line)
elif re.search('codeListValue="publication date"',line, flags=0):
if MafTigerInd == 'yes':
continue
else:
NewFile.write(line)
elif re.search('</gmd:CI_DateTypeCode>',line, flags=0):
if MafTigerInd == 'yes':
continue
else:
NewFile.write(line)
elif re.search('</gmd:dateType>',line, flags=0):
if MafTigerInd == 'yes':
continue
else:
NewFile.write(line)
elif re.search('</gmd:CI_Date>',line, flags=0):
if MafTigerInd == 'yes':
continue
else:
NewFile.write(line)
elif re.search('codeListValue="download!!!!!">download!!!',line, flags=0):
NewFile.write('codeListValue="download">download')
# elif re.search('/gmd:applicationProfile' ,line, flags=0) and Restful =='yes':
# NewFile.write(line)
# restExist(mainTheme,NewFile)
else:
NewFile.write(line)
#print(line)
NewFileArray.append(OutFile)
NewFile.close()
#print("xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\n")
for newFile in NewFileArray:
# print (newFile)
newFileCorrectLoc = newFile.find('_corrected')
preCorrect = newFile[0:newFileCorrectLoc]
postCorrect = newFile[newFileCorrectLoc + 11:]
DestFile = preCorrect + postCorrect
# print(preCorrect)
# print(postCorrect)
shutil.copyfile(newFile, DestFile)
# newFile.close
'''
for newFile in NewFileArray:
os.remove(newFile)
'''
print ("Done! "+ str(FileCounter) + " files have been processed at "+ presentTime + "!")
sys.exit(1) | 48.320495 | 256 | 0.478892 | 2,909 | 35,129 | 5.765555 | 0.15538 | 0.173146 | 0.071309 | 0.056284 | 0.50316 | 0.432566 | 0.375447 | 0.333592 | 0.304019 | 0.266993 | 0 | 0.023139 | 0.389792 | 35,129 | 727 | 257 | 48.320495 | 0.759237 | 0.148339 | 0 | 0.410058 | 0 | 0.015474 | 0.261153 | 0.092108 | 0 | 0 | 0 | 0 | 0 | 1 | 0.005803 | false | 0.003868 | 0.040619 | 0 | 0.056093 | 0.017408 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
98106631425f65be0197ac9801c484298d0e3b91 | 1,511 | py | Python | source/galaxy/db.py | colinleach/400B_Leach | 656abe04237d7a8de2cf56e9bfe986c333c62739 | [
"MIT"
] | 1 | 2020-03-16T12:46:02.000Z | 2020-03-16T12:46:02.000Z | source/galaxy/db.py | colinleach/400B_Leach | 656abe04237d7a8de2cf56e9bfe986c333c62739 | [
"MIT"
] | null | null | null | source/galaxy/db.py | colinleach/400B_Leach | 656abe04237d7a8de2cf56e9bfe986c333c62739 | [
"MIT"
] | null | null | null | import psycopg2
import yaml
from pathlib import Path


class DB():
    """
    A simple wrapper class for connecting to the PostgreSQL database.

    Takes no arguments. Relies on having connection information in
    `~/dbconn.yaml`.
    """

    def __init__(self):
        "Reads the connection parameters, makes the connection and a cursor"
        params = self.read_params()
        inf = f"dbname={params['dbname']} user={params['username']}"
        inf += f" host='{params['host']}' password={params['password']}"
        self.connection = psycopg2.connect(inf)
        self.connection.autocommit = True
        self.cursor = self.connection.cursor()

    def read_params(self):
        "Needs the yaml parameter file to be in the user's home directory"
        filename = Path.home() / 'dbconn.yaml'
        with open(filename) as file:
            params = yaml.full_load(file)
        return params

    def get_cursor(self):
        "A simple getter method"
        return self.cursor

    def run_query(self, query):
        """
        Runs a SQL query (typically SELECT)

        Returns results in Python list format
        (not numpy, which would need a dtype list)
        """
        self.cursor.execute(query)
        return self.cursor.fetchall()
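
    # Usage sketch (not part of the original class; assumes a valid
    # ~/dbconn.yaml with dbname, username, host and password keys):
    #
    #   db = DB()
    #   rows = db.run_query("SELECT 1")   # -> [(1,)]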

    def get_xyz(self, gal, snap):
        """
        Return (pnum, x, y, z) rows for the given galaxy name and snapshot.
        """
        sql = f"""SELECT pnum, x, y, z
                  FROM simdata
                  WHERE galname='{gal}' AND snap={snap}
                  ORDER BY pnum"""
        return self.run_query(sql)
| 26.051724 | 76 | 0.598279 | 185 | 1,511 | 4.827027 | 0.513514 | 0.044793 | 0.035834 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00188 | 0.295831 | 1,511 | 57 | 77 | 26.508772 | 0.837406 | 0.28458 | 0 | 0 | 0 | 0 | 0.325901 | 0.087479 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.033333 | 0.1 | 0 | 0.433333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0
9813fd5b96f485c7f17dd680b383ea12db93f961 | 8,094 | bzl | Python | python/pip.bzl | jdob/rules_python | dad40476d74a7b1e903573293370927579261413 | [
"Apache-2.0"
] | null | null | null | python/pip.bzl | jdob/rules_python | dad40476d74a7b1e903573293370927579261413 | [
"Apache-2.0"
] | null | null | null | python/pip.bzl | jdob/rules_python | dad40476d74a7b1e903573293370927579261413 | [
"Apache-2.0"
] | null | null | null | # Copyright 2017 The Bazel Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Import pip requirements into Bazel."""
load("//python/pip_install:pip_repository.bzl", "pip_repository", _package_annotation = "package_annotation")
load("//python/pip_install:repositories.bzl", "pip_install_dependencies")
load("//python/pip_install:requirements.bzl", _compile_pip_requirements = "compile_pip_requirements")
compile_pip_requirements = _compile_pip_requirements
package_annotation = _package_annotation
def pip_install(requirements = None, name = "pip", **kwargs):
"""Accepts a `requirements.txt` file and installs the dependencies listed within.
Those dependencies become available in a generated `requirements.bzl` file.
This macro wraps the [`pip_repository`](./pip_repository.md) rule that invokes `pip`.
In your WORKSPACE file:
```python
pip_install(
requirements = ":requirements.txt",
)
```
You can then reference installed dependencies from a `BUILD` file with:
```python
load("@pip//:requirements.bzl", "requirement")
py_library(
name = "bar",
...
deps = [
"//my/other:dep",
requirement("requests"),
requirement("numpy"),
],
)
```
> Note that this convenience comes with a cost.
> Analysis of any BUILD file which loads the requirements helper in this way will
> cause an eager-fetch of all the pip dependencies,
> even if no python targets are requested to be built.
> In a multi-language repo, this may cause developers to fetch dependencies they don't need,
> so consider using the long form for dependencies if this happens.
In addition to the `requirement` macro, which is used to access the `py_library`
target generated from a package's wheel, the generated `requirements.bzl` file contains
functionality for exposing [entry points][whl_ep] as `py_binary` targets.
[whl_ep]: https://packaging.python.org/specifications/entry-points/
```python
load("@pip_deps//:requirements.bzl", "entry_point")
alias(
name = "pip-compile",
actual = entry_point(
pkg = "pip-tools",
script = "pip-compile",
),
)
```
Note that for packages whose name and script are the same, only the name of the package
is needed when calling the `entry_point` macro.
```python
load("@pip_deps//:requirements.bzl", "entry_point")
alias(
name = "flake8",
actual = entry_point("flake8"),
)
```
Args:
requirements (Label): A 'requirements.txt' pip requirements file.
name (str, optional): A unique name for the created external repository (default 'pip').
**kwargs (dict): Additional arguments to the [`pip_repository`](./pip_repository.md) repository rule.
"""
# Just in case our dependencies weren't already fetched
pip_install_dependencies()
pip_repository(
name = name,
requirements = requirements,
repo_prefix = "pypi__",
**kwargs
)
def pip_parse(requirements_lock, name = "pip_parsed_deps", **kwargs):
"""Accepts a locked/compiled requirements file and installs the dependencies listed within.
Those dependencies become available in a generated `requirements.bzl` file.
You can instead check this `requirements.bzl` file into your repo, see the "vendoring" section below.
This macro wraps the [`pip_repository`](./pip_repository.md) rule that invokes `pip`, with `incremental` set.
In your WORKSPACE file:
```python
load("@rules_python//python:pip.bzl", "pip_parse")
pip_parse(
name = "pip_deps",
requirements_lock = ":requirements.txt",
)
load("@pip_deps//:requirements.bzl", "install_deps")
install_deps()
```
You can then reference installed dependencies from a `BUILD` file with:
```python
load("@pip_deps//:requirements.bzl", "requirement")
py_library(
name = "bar",
...
deps = [
"//my/other:dep",
requirement("requests"),
requirement("numpy"),
],
)
```
In addition to the `requirement` macro, which is used to access the `py_library`
target generated from a package's wheel, the generated `requirements.bzl` file
also contains functionality for exposing [entry points][whl_ep] as `py_binary` targets.
[whl_ep]: https://packaging.python.org/specifications/entry-points/
```python
load("@pip_deps//:requirements.bzl", "entry_point")
alias(
name = "pip-compile",
actual = entry_point(
pkg = "pip-tools",
script = "pip-compile",
),
)
```
Note that for packages whose name and script are the same, only the name of the package
is needed when calling the `entry_point` macro.
```python
load("@pip_deps//:requirements.bzl", "entry_point")
alias(
name = "flake8",
actual = entry_point("flake8"),
)
```
## Vendoring the requirements.bzl file
In some cases you may not want to generate the requirements.bzl file as a repository rule
while Bazel is fetching dependencies. For example, if you produce a reusable Bazel module
such as a ruleset, you may want to include the requirements.bzl file rather than make your users
install the WORKSPACE setup to generate it.
See https://github.com/bazelbuild/rules_python/issues/608
This is the same workflow as Gazelle, which creates `go_repository` rules with
[`update-repos`](https://github.com/bazelbuild/bazel-gazelle#update-repos)
To do this, use the "write to source file" pattern documented in
https://blog.aspect.dev/bazel-can-write-to-the-source-folder
to put a copy of the generated requirements.bzl into your project.
Then load the requirements.bzl file directly rather than from the generated repository.
See the example in rules_python/examples/pip_parse_vendored.
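A minimal sketch of the vendored flow (the `//third_party` package path is an
illustrative assumption, not part of this ruleset):
```python
# BUILD.bazel: load the checked-in copy instead of the generated repository
load("//third_party:requirements.bzl", "requirement")
py_library(
name = "bar",
deps = [requirement("requests")],
)
```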
Args:
requirements_lock (Label): A fully resolved 'requirements.txt' pip requirement file
containing the transitive set of your dependencies. If this file is passed instead
of 'requirements' no resolve will take place and pip_repository will create
individual repositories for each of your dependencies so that wheels are
fetched/built only for the targets specified by 'build/run/test'.
Note that if your lockfile is platform-dependent, you can use the `requirements_[platform]`
attributes.
name (str, optional): The name of the generated repository. The generated repositories
containing each requirement will be of the form <name>_<requirement-name>.
**kwargs (dict): Additional arguments to the [`pip_repository`](./pip_repository.md) repository rule.
"""
# Just in case our dependencies weren't already fetched
pip_install_dependencies()
pip_repository(
name = name,
requirements_lock = requirements_lock,
repo_prefix = "{}_".format(name),
incremental = True,
**kwargs
)
def pip_repositories():
"""
Obsolete macro to pull in dependencies needed to use the pip_import rule.
Deprecated:
the pip_repositories rule is obsolete. It is not used by pip_install.
"""
# buildifier: disable=print
print("DEPRECATED: the pip_repositories rule has been replaced with pip_install, please see rules_python 0.1 release notes")
| 36.133929 | 128 | 0.680875 | 1,034 | 8,094 | 5.234043 | 0.283366 | 0.049889 | 0.031596 | 0.025499 | 0.427199 | 0.40133 | 0.400591 | 0.400591 | 0.400591 | 0.384331 | 0 | 0.002714 | 0.226217 | 8,094 | 223 | 129 | 36.295964 | 0.861408 | 0.794045 | 0 | 0.333333 | 0 | 0 | 0.305936 | 0.147032 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.125 | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9815924c70a5419e8fe5fb5a75e1e2c1b28350dd | 2,896 | py | Python | utils/api_call.py | arpitkjain7/VacciDate | b542028490c76d44d53880798d3e5e8cf13e7c22 | [
"MIT"
] | 4 | 2021-05-23T13:48:22.000Z | 2022-02-23T04:27:35.000Z | utils/api_call.py | arpitkjain7/VacciDate | b542028490c76d44d53880798d3e5e8cf13e7c22 | [
"MIT"
] | null | null | null | utils/api_call.py | arpitkjain7/VacciDate | b542028490c76d44d53880798d3e5e8cf13e7c22 | [
"MIT"
] | null | null | null | from integration.api_setu import (
    get_applicable_slots,
    get_district_id_from_file,
    get_state_id_by_state_name,
    get_instant_applicable_slots,
)
from utils.load_config import load_configuration
from utils.external_caller import APIInterface
import time
from VacciDate_bot.send_message import send_personal_message
import json

config = load_configuration(config_path="data/config.yml")
get_slot_by_district = config.get("COWIN").get("SLOT_BY_DISTICT")
get_slot_by_pincode = config.get("COWIN").get("SLOT_BY_PINCODE")


def api_setu_get_slot_by_district(district_id, start_date):
    try:
        slot_details = json.loads(
            APIInterface.get(
                route=get_slot_by_district,
                params={"district_id": district_id, "date": start_date},
            )
        )
        return slot_details
    except Exception as error:
        print(f"Exception in api_setu_get_slot_by_district function : {error}")
        return None


def api_setu_get_slot_by_pincode(pincode, start_date):
    try:
        slot_details = json.loads(
            APIInterface.get(
                route=get_slot_by_pincode,
                params={"pincode": pincode, "date": start_date},
            )
        )
        return slot_details
    except Exception as error:
        print(f"Exception in api_setu_get_slot_by_pincode function : {error}")
        return None


def get_details(district_id, start_date, age_group, chat_id):
    try:
        slot_details = json.loads(
            APIInterface.get(
                route=get_slot_by_district,
                params={"district_id": district_id, "date": start_date},
            )
        )
        if len(age_group) == 0:
            age_group.append(18)
        available_slots = get_applicable_slots(
            slot_details=slot_details, age_group=age_group
        )
        if len(available_slots) > 0:
            print("slot available")
            # message = "\n".join(available_slots)
            # try:
            for i in range(min(5, len(available_slots))):
                # response_status, sleep_time = send_mess(text=available_slots[i],chat_id)
                # if not response_status:
                #     print(f"sleeping for {sleep_time} seconds")
                #     time.sleep(sleep_time)
                send_personal_message(msg=available_slots[i], chat_id=chat_id)
            return True
    except Exception as error:
        print(f"Exception in get_details function : {error}")
        return False
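
# Usage sketch (not part of the original module): poll one district once and
# push up to five matching slots to a chat. The district id, date format
# (DD-MM-YYYY) and chat id below are illustrative assumptions.
#
#   found = get_details(district_id=363, start_date="31-05-2021",
#                       age_group=[18], chat_id="123456789")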

def get_instant_details(district_id, start_date, age_group):
    slot_details = json.loads(
        APIInterface.get(
            route=get_slot_by_district,
            params={"district_id": district_id, "date": start_date},
        )
    )
    available_slots = get_instant_applicable_slots(
        slot_details=slot_details, age_group=age_group
    )
    return available_slots[:5]
| 34.070588 | 90 | 0.643646 | 357 | 2,896 | 4.857143 | 0.226891 | 0.048443 | 0.062284 | 0.058824 | 0.541522 | 0.489043 | 0.423299 | 0.384083 | 0.361592 | 0.361592 | 0 | 0.002846 | 0.272099 | 2,896 | 85 | 91 | 34.070588 | 0.819734 | 0.07355 | 0 | 0.371429 | 0 | 0 | 0.107957 | 0.021292 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057143 | false | 0 | 0.085714 | 0 | 0.242857 | 0.057143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0
981806b5945b12b9c6ea7adf2957b3110d87f687 | 18,833 | py | Python | utils/optimization.py | dexuiz/merlot | c83dcc1b6efb62a7000696d1c1d8e9f048b89c94 | [
"MIT"
] | 148 | 2021-06-07T22:21:51.000Z | 2022-03-28T02:27:49.000Z | utils/optimization.py | dexuiz/merlot | c83dcc1b6efb62a7000696d1c1d8e9f048b89c94 | [
"MIT"
] | 11 | 2021-06-15T04:23:51.000Z | 2022-03-27T17:41:46.000Z | utils/optimization.py | dexuiz/merlot | c83dcc1b6efb62a7000696d1c1d8e9f048b89c94 | [
"MIT"
] | 13 | 2021-06-15T13:35:09.000Z | 2022-02-17T05:28:13.000Z | import re
from collections import defaultdict
from copy import deepcopy
import numpy as np
import tensorflow as tf
from utils.model_utils import get_shape_list
def build_optimizer_from_config(loss, optimizer_config, device_config=None):
"""
This is a utility to build an optimizer from optimizer_config.
:param loss: what to use.
:param optimizer_config: k/v of options
:param device_config: Additional options that can be rolled in
:return: An optimizer
"""
optimizer_types = {
'adam_optimizer': create_fixed_adam_optimizer_with_warmup,
}
if optimizer_config['type'] not in optimizer_types:
raise ValueError("The optimizer type {} isn't supported".format(optimizer_config['type']))
kwargs = deepcopy(optimizer_config)
if device_config is not None:
kwargs.update(deepcopy(device_config))
del kwargs['type']
return optimizer_types[optimizer_config['type']](loss, **kwargs)
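# Example optimizer_config (illustrative values; the keys mirror the signature
# of create_fixed_adam_optimizer_with_warmup below):
#
#   optimizer_config = {
#       'type': 'adam_optimizer',
#       'learning_rate': 1e-4,
#       'num_train_steps': 100000,
#       'num_warmup_steps': 10000,
#       'use_tpu': False,
#   }
#   train_op, train_metrics = build_optimizer_from_config(loss, optimizer_config)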
def _print_var_list_for_debugging(var_list):
"""
For debugging, print a list of vars. Sort by the shapes, also print the total size.
:param var_list: list of vars.
:return: Nothing!
"""
if len(var_list) == 0:
tf.logging.info('~~~ (N/A) ~~~')
return
sorted_vars = sorted([(_get_variable_name(x.name), tuple(get_shape_list(x))) for x in var_list],
key=lambda x: -np.prod(x[1]))
total_size = sum([np.prod(x[1]) for x in sorted_vars])
# Pretty print each line
longest_name = max([len(x[0]) for x in sorted_vars])
prints = [' {s:<{w}}'.format(s=x[0], w=longest_name) + '{}'.format(x[1]) for x in sorted_vars]
for l in prints:
tf.logging.info(l)
tf.logging.info('~~~~ Total size = {} or {:.1f}M\n'.format(
total_size, float(total_size) / 1000000.0
))
def create_fixed_adam_optimizer_with_warmup(loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu,
weight_decay_rate=1e-4, param_overrides=None, freeze_scope=None,
verbose=False, clip_norm=1.0, adafactor=False, epsilon=1e-6, beta_2=0.98,
use_bfloat16_adam=False, do_param_scale=False, decay_beta2_adafactor=False,
**kwargs):
"""
Does AdamW optimization. Unlike the BERT optimizer, here I added bias correction, which the original
one didn't seem to have.
:param loss:
:param learning_rate: The default learning rate we'll use. All of the learning rates, including overridden ones
will get scaled during the initial `num_warmup_steps`.
:param num_train_steps: How many steps to train for overall.
:param num_warmup_steps: A number, presumably < num_train_steps which specifies for how long we warmup.
:param use_tpu: Whether to use TPU. This is important because we need to duplicate the optimizer accross shards.
:param weight_decay_rate: How much to decay the weights by default.
:param param_overrides: Which parameters to override. This works like the following. You pass in a
LIST of LIST, DICTIONARY pairs. Each pair consists of a bunch of regular expressions
and if one of those are activated, we will override the default parameters in that instance.
For instance
["LayerNorm", "layer_norm", 'GroupNorm', "bias"], {"weight_decay_rate": 0}
will set any parameter matching the first couple of regexes to have weight_decay_rate of 0.
:param freeze_scope: OLD deprecated parameter that sets anything matching ["^freeze_scope/"] to have {"learning_rate": 0}
:param verbose: Use this for extra debugging output
:param kwargs: extra args, not needed
:return:
"""
global_step = tf.compat.v1.train.get_or_create_global_step()
# Implements linear decay of the learning rate. This does it globally over all parameters
# which should be OK.
# Make it so that we scale the loss UP to learning_rate
# scale * (1-(num_warmup_steps / num_train_steps)) = 1.0
# scale = 1/(1-(num_warmup_steps / num_train_steps))
# scale = num_train_steps /(num_train_steps - num_warmup_steps
base_scale = float(num_train_steps) / (
float(num_train_steps) - float(num_warmup_steps) + 1.0) if num_warmup_steps else 1.0
learning_rate_scale = tf.compat.v1.train.polynomial_decay(
tf.constant(value=base_scale, shape=[], dtype=tf.float32),
global_step,
num_train_steps,
end_learning_rate=0.0,
power=1.0,
cycle=False)
# Implements linear warmup. I.e., if global_step < num_warmup_steps, the
# learning rate will be `global_step/num_warmup_steps * learning_rate`.
if num_warmup_steps:
global_steps_int = tf.cast(global_step, tf.int32)
warmup_steps_int = tf.constant(num_warmup_steps, dtype=tf.int32)
global_steps_float = tf.cast(global_steps_int, tf.float32)
warmup_steps_float = tf.cast(warmup_steps_int, tf.float32)
warmup_percent_done = global_steps_float / warmup_steps_float
learning_rate_scale = tf.where(global_steps_int < warmup_steps_int, warmup_percent_done, learning_rate_scale)
# Deal with the parameter overrides.
# We can override:
# learning_rate. if learning_rate = 0 then we aren't training it at all.
# beta_1
# beta_2
# epsilon
# weight_decay_rate
if param_overrides is None:
param_overrides = []
if freeze_scope is not None:
print("NOTE! freeze_scope is deprecated. You can do the exact same thing by instead setting\n"
"param_overrides: [[[\"^{}\"], {{\"learning_rate\": 0}}]]".format(freeze_scope))
param_overrides.append([[f'^{freeze_scope}'], {'learning_rate': 0}])
tvars = tf.trainable_variables()
param_name_to_overridden_parameters = defaultdict(dict)
for regexes, overridden_parameters in param_overrides:
for k in overridden_parameters:
if k not in ('learning_rate', 'weight_decay_rate', 'beta_1', 'beta_2', 'epsilon', 'do_factor'):
raise ValueError(
"Regex rule {} -> {} isn't OK because {} isn't a changable optimization parameter".format(
regexes, overridden_parameters, k
))
for regex in regexes:
for p in tvars:
param_name = _get_variable_name(p.name)
if re.search(regex, param_name) is not None:
param_name_to_overridden_parameters[param_name].update(overridden_parameters)
non_trainable_vars = [v for v in tvars
if not param_name_to_overridden_parameters[_get_variable_name(v.name)].get('learning_rate',
1.0)]
if len(non_trainable_vars) != 0:
tf.logging.info("\n~~~~~ NOT training the following variables:")
_print_var_list_for_debugging(non_trainable_vars)
tvars = [v for v in tvars
if param_name_to_overridden_parameters[_get_variable_name(v.name)].get('learning_rate', 1.0)]
# Get all possible conditions, just for debugging purposes.
conditions_to_params = defaultdict(list)
for v in tvars:
conditions = param_name_to_overridden_parameters[_get_variable_name(v.name)]
conditions_str = ','.join(f'{k}={v}' for k, v in sorted(conditions.items()))
conditions_to_params[conditions_str].append(v)
for conditions, param_list in conditions_to_params.items():
if not conditions:
tf.logging.info(
"\n~~~~~ For the following params, using DEFAULTS \n{}".format(','.join(f'{k}={v}' for k, v in {
'learning_rate': learning_rate, 'weight_decay_rate': weight_decay_rate, 'beta_1': 0.9,
'beta_2': beta_2, 'eps': epsilon, 'use_bfloat16_adam': use_bfloat16_adam,
}.items())))
else:
tf.logging.info("\nFor the following params, overriding {}".format(conditions))
_print_var_list_for_debugging(param_list)
grads = tf.gradients(loss, tvars)
if adafactor:
raise ValueError("Adafactor not supported rn")
else:
optimizer = AdamOptimizer(
learning_rate=learning_rate,
weight_decay_rate=weight_decay_rate,
learning_rate_scale=learning_rate_scale,
beta_1=0.9,
beta_2=beta_2,
epsilon=epsilon,
param_name_to_overridden_parameters=dict(param_name_to_overridden_parameters),
make_things_dependent_on_grad=True,
use_bfloat16_adam=use_bfloat16_adam,
)
train_metrics = {
'learning_rate': learning_rate * learning_rate_scale,
'minibatch_loss': loss,
}
if verbose:
for v in tvars:
if v.dtype == tf.bfloat16:
raise ValueError(f"{v.name} is bfloat16")
train_metrics['weight_decay_loss'] = tf.add_n([
tf.nn.l2_loss(v) * param_name_to_overridden_parameters[
_get_variable_name(v.name)].get('weight_decay_rate', weight_decay_rate)
for v in tvars])
# Clip grads AND log
param_to_l2 = {_get_variable_name(x.name): tf.nn.l2_loss(y) for x, y in zip(tvars, grads) if y is not None}
global_norm = tf.math.sqrt(2.0 * tf.add_n(list(param_to_l2.values())))
if clip_norm > 0.0:
tf.logging.info("clipping the global norm to {:.3f}".format(clip_norm))
(grads, _) = tf.clip_by_global_norm(grads, use_norm=global_norm, clip_norm=clip_norm)
else:
tf.logging.info("Not clipping the global norm")
# Log the global norms. I'm not worrying about grouping or any of that
# so for language/layer00/key_layer/kernel
# and language/layer00/key_layer/bias
# we log both these parameters as well as language/layer00/key_layer/, language/layer00/ ...
all_groups = sorted(set(['/'.join(x.split('/')[:(depth + 1)]) for x in param_to_l2.keys()
for depth in range(len(x.split('/')))]))
for g in all_groups:
# Hide some boring things
if g.split('/')[-1] in ('beta', 'kernel', 'bias', 'gamma'):
continue
train_metrics[f'gradnorms/{g}'] = tf.math.sqrt(
2.0 * tf.add_n([v for k, v in param_to_l2.items() if k.startswith(g)]))
train_metrics[f'gradnorms/_overall'] = global_norm
else:
# Clip by global norm. I think we need this, but RoBERTa didn't use it so maybe not? idk. adding it anyways
if clip_norm > 0.0:
tf.logging.info("clipping the global norm to {:.3f}".format(clip_norm))
grads, use_norm = tf.clip_by_global_norm(grads, clip_norm=clip_norm)
train_metrics[f'gradnorms/_overall'] = use_norm
else:
tf.logging.info("Not clipping the global norm")
if use_tpu:
optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer)
train_op = optimizer.apply_gradients(
zip(grads, tvars), global_step=global_step)
# Normally the global step update is done inside of `apply_gradients`.
# However, `AdamOptimizer` doesn't do this. But if you use
# a different optimizer, you should probably take this line out.
# + If you're using BN you need UPDATE_OPS to run also
new_global_step = global_step + 1
train_op = tf.group(train_op, [global_step.assign(new_global_step)],
tf.get_collection(tf.GraphKeys.UPDATE_OPS))
return train_op, train_metrics
def _get_variable_name(param_name):
"""Get the variable name from the tensor name. This just strips off the trailing :0"""
m = re.match("^(.*):\\d+$", param_name)
if m is not None:
param_name = m.group(1)
return param_name
# extreme hacky stuff
missing_precision = 1.00390625 # 1 + 1 / (2 ** 8)
def _decode_v(stored_v):
"""
Use the extra bit to get 1 extra point of range
If we do this hack then we will be off by at most 1 / (2 ** 9) which is better I guess
If sign bit is positive do nothing
If sign bit is negative multiply
:param stored_v:
:param use_bfloat16:
:return:
"""
sign = tf.math.sign(stored_v) # [1 or -1]
v_abs = tf.cast(tf.abs(stored_v), dtype=tf.float32)
v_abs = tf.where(tf.greater(sign, 0), v_abs, v_abs * missing_precision)
return v_abs
def _encode_v(stored_v):
bfloat_enc = tf.cast(stored_v, dtype=tf.bfloat16)
bfloat_enc_f32 = tf.cast(bfloat_enc, dtype=tf.float32)
err0 = tf.abs(bfloat_enc_f32 - stored_v)
err1 = tf.abs(bfloat_enc_f32 * missing_precision - stored_v)
return tf.where(tf.less_equal(err0, err1), bfloat_enc, -bfloat_enc)
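# Illustrative round-trip check for the trick above (an assumption, not part
# of the original file). Adam's second moment is non-negative, so the sign bit
# is free to carry one extra bit of precision:
#
#   v = tf.constant([0.1, 1.0, 3.14159, 100.0])
#   plain_err = tf.abs(tf.cast(tf.cast(v, tf.bfloat16), tf.float32) - v)
#   trick_err = tf.abs(_decode_v(_encode_v(v)) - v)
#   # expectation: trick_err <= plain_err element-wise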
class AdamOptimizer(tf.compat.v1.train.Optimizer):
"""A basic Adam optimizer that includes "correct" L2 weight decay.
Also adding bias correction
"""
def __init__(self,
learning_rate,
learning_rate_scale=1.0,
weight_decay_rate=0.0,
beta_1=0.9,
beta_2=0.999,
epsilon=1e-6,
param_name_to_overridden_parameters=None,
name="AdamOptimizer",
make_things_dependent_on_grad=False,
use_bfloat16_adam=False,
do_param_scale=False,
decay_beta2_adafactor=False):
"""Constructs a AdamWeightDecayOptimizer."""
super(AdamOptimizer, self).__init__(False, name)
self.learning_rate = learning_rate
self.learning_rate_scale = learning_rate_scale
self.weight_decay_rate = weight_decay_rate
self.beta_1 = beta_1
self.beta_2 = beta_2
self.epsilon = epsilon
self.param_name_to_overridden_parameters = {} if param_name_to_overridden_parameters is None else param_name_to_overridden_parameters
self.make_things_dependent_on_grad = make_things_dependent_on_grad
self.use_bfloat16_adam=use_bfloat16_adam
self.do_param_scale = do_param_scale
self.do_factor = True # won't do anything unless adafactor is on
self.decay_beta2_adafactor = decay_beta2_adafactor
def _get_hyperparam(self, param_name, hyperparam_name):
"""
For the given parameter, get the right hyperparameter. It might have been overridden.
:param param_name:
:param hyperparam_name:
:return:
"""
if hyperparam_name not in ('learning_rate', 'weight_decay_rate', 'beta_1', 'beta_2', 'epsilon', 'do_factor'):
raise ValueError(f"Invalid hyperparameter name {hyperparam_name}")
if param_name not in self.param_name_to_overridden_parameters:
return getattr(self, hyperparam_name)
overridden_params = self.param_name_to_overridden_parameters[param_name]
return overridden_params.get(hyperparam_name, getattr(self, hyperparam_name))
def apply_gradients(self, grads_and_vars, global_step=None, name=None):
"""See base class."""
assignments = []
for (grad, param) in grads_and_vars:
if grad is None or param is None:
continue
param_name = _get_variable_name(param.name)
# Override parameters
beta_1 = self._get_hyperparam(param_name, 'beta_1')
beta_2 = self._get_hyperparam(param_name, 'beta_2')
weight_decay_rate = self._get_hyperparam(param_name, 'weight_decay_rate')
epsilon = self._get_hyperparam(param_name, 'epsilon')
learning_rate = self._get_hyperparam(param_name, 'learning_rate') * self.learning_rate_scale
# Bias correction
t = tf.cast(global_step, dtype=tf.float32) + 1.0
bc1 = 1.0 - tf.pow(beta_1, t)
bc2 = 1.0 - tf.pow(beta_2, t)
learning_rate *= tf.sqrt(bc2) / bc1
grad_squared = tf.square(grad) + 1e-30
if self.make_things_dependent_on_grad:
# HACK: Make things dependent on grad.
# This confounds the XLA rewriter and keeps it from fusing computations
# across different variables. This fusion is a bad for HBM usage, since
# it causes the gradients to persist in memory.
grad_squared_mean = tf.reduce_mean(grad_squared)
learning_rate += grad_squared_mean * 1e-30
epsilon += grad_squared_mean * 1e-30
dtype = tf.bfloat16 if self.use_bfloat16_adam else tf.float32
stored_m = tf.get_variable(
name=param_name + "/adam_m",
shape=param.shape.as_list(),
dtype=dtype,
trainable=False,
initializer=tf.zeros_initializer())
stored_v = tf.get_variable(
name=param_name + "/adam_v",
shape=param.shape.as_list(),
dtype=dtype,
trainable=False,
initializer=tf.zeros_initializer())
m = tf.cast(stored_m, dtype=tf.float32) if self.use_bfloat16_adam else stored_m
v = _decode_v(stored_v) if self.use_bfloat16_adam else stored_v
# Standard Adam update.
next_m = tf.multiply(beta_1, m) + tf.multiply(1.0 - beta_1, grad)
next_v = tf.multiply(beta_2, v) + tf.multiply(1.0 - beta_2, grad_squared)
update = next_m / (tf.sqrt(next_v) + epsilon)
# Just adding the square of the weights to the loss function is *not*
# the correct way of using L2 regularization/weight decay with Adam,
# since that will interact with the m and v parameters in strange ways.
#
# Instead we want to decay the weights in a manner that doesn't interact
# with the m/v parameters. This is equivalent to adding the square
# of the weights to the loss with plain (non-momentum) SGD.
if weight_decay_rate > 0:
update += weight_decay_rate * param
update_with_lr = learning_rate * update
next_param = param - update_with_lr
if self.use_bfloat16_adam:
next_m = tf.cast(next_m, dtype=tf.bfloat16)
next_v = _encode_v(next_v)
assignments.extend(
[param.assign(next_param),
stored_m.assign(next_m),
stored_v.assign(next_v)])
return tf.group(*assignments, name=name)
def reduce_rms(x):
return tf.sqrt(tf.reduce_mean(tf.square(x)))
| 44.840476 | 141 | 0.63564 | 2,553 | 18,833 | 4.435566 | 0.177438 | 0.043448 | 0.026492 | 0.025963 | 0.286648 | 0.201696 | 0.128488 | 0.102526 | 0.093871 | 0.087513 | 0 | 0.017539 | 0.273403 | 18,833 | 419 | 142 | 44.947494 | 0.809997 | 0.242394 | 0 | 0.105469 | 0 | 0 | 0.082056 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039063 | false | 0 | 0.023438 | 0.003906 | 0.105469 | 0.023438 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9819a8579247b5bad793cf277cc96621e0bd2af0 | 9,157 | py | Python | AsyncGear/Gear.py | monk-after-90s/AsyncGear | 6773d38d564c21bbb2f9a0d4fd14a0c24b541ece | [
"MIT"
] | 4 | 2021-01-06T06:14:04.000Z | 2022-01-12T05:32:03.000Z | AsyncGear/Gear.py | monk-after-90s/AsyncGear | 6773d38d564c21bbb2f9a0d4fd14a0c24b541ece | [
"MIT"
] | 1 | 2021-08-05T09:54:30.000Z | 2021-08-05T10:43:33.000Z | AsyncGear/Gear.py | monk-after-90s/AsyncGear | 6773d38d564c21bbb2f9a0d4fd14a0c24b541ece | [
"MIT"
] | null | null | null | '''
Transfer an object to its gear, as an interface.
'''
import asyncio
import datetime
from .AsyncPeriod import AsyncPeriod
from .method_run_when import call_backs
from ensureTaskCanceled import ensureTaskCanceled
gears = {}
class _Gear:
# last_set_period = {}
def __init__(self, obj):
self.obj = obj
self.periods = {}
self._unlocked = asyncio.Event()
self._unlocked.set()
self.assistant_tasks = []
self.prev_period = None
self._current_period: AsyncPeriod = None
def delete(self):
'''
Delete the gear. You'd better delete the gear when it is no more used.
:return:
'''
if self.obj in gears.keys():
gears.pop(self.obj)
for task in self.assistant_tasks:
asyncio.create_task(ensureTaskCanceled(task))
def _set_intance_gear_callbacks(self, period_names):
def _set_intance_gear_callbacks(attr, obj, period_names):
from .run_when import _run_when
periods2del = []
for period_name in period_names: # a new period was added; find waiting callbacks that can now start
if period_name in call_backs[attr].keys(): # start the waiters
periods2del.append(period_name)
for time_method in call_backs[attr][period_name].keys():
if asyncio.iscoroutinefunction(attr):
@_run_when(obj, time_method, period_name, call_backs[attr][period_name][time_method])
async def wrapper():
return await asyncio.create_task(attr(obj))
else:
@_run_when(obj, time_method, period_name, call_backs[attr][period_name][time_method])
def wrapper():
return attr(obj)
# remove the records for the periods that have been handled
[call_backs[attr].pop(period) for period in periods2del]
# if the attr has no remaining associated periods, remove its record as well
if not bool(call_backs[attr]):
call_backs.pop(attr)
for attr in list(getattr(type(self.obj), '__dict__', {}).values()) + \
list(getattr(self.obj, '__dict__', {}).values()): # walk the bound object's namespace and its class's namespace to find the class functions behind instance callbacks
attr = getattr(attr, '__func__', attr) if type(attr) is classmethod else attr
if attr in list(call_backs.keys()):
_set_intance_gear_callbacks(attr, self.obj, period_names)
def add_periods(self, *new_period_names: str):
'''
Dynamically add periods for some object. The first added would be the default.
:return:
'''
for new_period_name in new_period_names:
if new_period_name in self.periods.keys():
raise KeyError(f'Period {new_period_name} has already been added.')
self.periods[new_period_name] = AsyncPeriod(new_period_name, self.obj, self)
if len(self.periods.keys()) == 1:
self._set_period(new_period_name)
self._set_intance_gear_callbacks(new_period_names)
def get_present_period(self):
'''
Get the present period of the target object.
:return:
'''
if self._current_period is not None:
return self._current_period._name
def current_set_datetime(self) -> datetime.datetime:
'''
Get the UTC datetime when the present period is set.
:return:
'''
if self._current_period is not None:
return self._current_period._ensured_time
def get_period_names(self):
'''
Get the periods of the target object.
:return:
'''
return tuple(self.periods.keys())
def sync_set_period(self, period_name: str, slot_num: int = 1):
'''
Synchronous version of set_period.
Set obj to period period_name when unlocked, otherwise PermissionError is raised.
:param period_name:
:param slot_num: Attention! Do not use it if you do not understand the parameter!
slot_num means the period of Gear(obj) is only really set to period_name after
Gear(obj).set_period(period_name, slot_num) has run slot_num times. The count is
interrupted if, among these calls, the same period_name arrives with a different
slot_num; in that case the procedure is refreshed and the count is reset.
:return:
'''
return self._set_period(period_name, slot_num)
def _set_period(self, period_name: str, slot_num: int = 1):
p = self.periods[period_name]
p.slots_num_for_true = slot_num
p.filled_slots_num += 1
async def set_period(self, period_name: str, slot_num: int = 1):
'''
Set obj to period period_name when unlocked, otherwise PermissionError is raised.
:param period_name:
:param slot_num: Attention! Do not use it if you do not understand the parameter!
slot_num means the period of Gear(obj) is only really set to period_name after
Gear(obj).set_period(period_name, slot_num) has run slot_num times. The count is
interrupted if, among these calls, the same period_name arrives with a different
slot_num; in that case the procedure is refreshed and the count is reset.
:return:
'''
if self.get_present_period() != period_name:
await asyncio.create_task(self.wait_outside_period(period_name))
else:
await asyncio.create_task(self.wait_inside_period(period_name))
try:
if self._unlocked.is_set():
self._set_period(period_name, slot_num)
else:
raise PermissionError('The gear is locked.')
finally:
# self.last_set_period[self.obj] = period_name
if self.get_present_period() != period_name:
await asyncio.create_task(self.wait_outside_period(period_name))
else:
await asyncio.create_task(self.wait_inside_period(period_name))
async def wait_change_period(self):
'''
Wait for the instance when the period is changed.
:return:
'''
if self.get_period_names():
await asyncio.wait(
[asyncio.create_task(self.wait_enter_period(period)) for period in self.get_period_names()],
return_when='FIRST_COMPLETED')
else:
raise RuntimeError('No periods.')
async def wait_inside_period(self, period_name: str):
'''
Wait the time slot when the gear is inside period period_name. As logically, as long as the gear is inside period
period_name, this coroutine is awaited immediately.
:param period_name:
:return:
'''
period = self.periods[period_name]
await asyncio.create_task(period.wait_true())
async def wait_outside_period(self, period_name: str):
'''
Wait the time slot when the gear is outside period period_name. As logically, as long as the gear is outside period
period_name, this coroutine is awaited immediately.
:param period_name:
:return:
'''
period = self.periods[period_name]
await asyncio.create_task(period.wait_false())
async def wait_enter_period(self, period_name: str):
'''
Wait the instant when the gear enters period period_name.
:param period_name:
:return:
'''
period = self.periods[period_name]
await asyncio.create_task(period.wait_change_into_true())
async def wait_exit_period(self, period_name: str):
'''
Wait the instant when the gear exits period period_name.
:param period_name:
:return:
'''
period = self.periods[period_name]
await asyncio.create_task(period.wait_change_into_false())
def lock(self):
'''
Lock the period of the gear.
:return:
'''
self._unlocked.clear()
def unlock(self):
'''
Unlock the period of the gear after the gear is locked.
:return:
'''
self._unlocked.set()
async def wait_unlock(self):
'''
wait the gear to be unlocked.
:return:
'''
await asyncio.create_task(self._unlocked.wait())
# def when_enter(self, period_name: str, queue_blocking='abandon'):
# return run_when_enter(self.obj, period_name, queue_blocking)
#
# def when_exit(self, period_name: str, queue_blocking='abandon'):
# return run_when_exit(self.obj, period_name, queue_blocking)
#
# def when_inside(self, period_name: str, queue_blocking='abandon'):
# return run_when_inside(self.obj, period_name, queue_blocking)
#
# def when_outside(self, period_name: str, queue_blocking='abandon'):
# return run_when_outside(self.obj, period_name, queue_blocking)
def Gear(obj) -> _Gear:
if obj not in gears.keys():
gears[obj] = _Gear(obj)
return gears[obj]
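# Usage sketch (not part of the original module; the object and period names
# are illustrative):
#
#   async def demo():
#       obj = object()
#       Gear(obj).add_periods('sleep', 'work')    # 'sleep' becomes the default
#       await Gear(obj).set_period('work')
#       await Gear(obj).wait_inside_period('work')    # returns immediately now
#
#   asyncio.run(demo())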
| 36.337302 | 123 | 0.617014 | 1,140 | 9,157 | 4.711404 | 0.157018 | 0.119158 | 0.053621 | 0.034817 | 0.542171 | 0.49246 | 0.49246 | 0.470303 | 0.449637 | 0.449637 | 0 | 0.001245 | 0.29846 | 9,157 | 251 | 124 | 36.482072 | 0.834838 | 0.193841 | 0 | 0.190909 | 0 | 0 | 0.021207 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.127273 | false | 0 | 0.054545 | 0.009091 | 0.254545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
981c4d0691e03fae2e87b2e125ed737c6938becf | 1,431 | py | Python | bmctool/utils/pulses/calc_power_equivalents.py | schuenke/BMCTool | 99ca94dab0e49a02eec774205731de80bef432d9 | [
"MIT"
] | 3 | 2021-03-29T10:43:39.000Z | 2021-08-16T09:57:54.000Z | bmctool/utils/pulses/calc_power_equivalents.py | schuenke/BMCTool | 99ca94dab0e49a02eec774205731de80bef432d9 | [
"MIT"
] | 4 | 2021-02-23T13:21:49.000Z | 2021-10-12T17:30:34.000Z | bmctool/utils/pulses/calc_power_equivalents.py | schuenke/BMCTool | 99ca94dab0e49a02eec774205731de80bef432d9 | [
"MIT"
] | null | null | null | """
calc_power_equivalents.py
"""
import numpy as np
from types import SimpleNamespace
def calc_power_equivalent(rf_pulse: SimpleNamespace,
                          tp: float,
                          td: float,
                          gamma_hz: float = 42.5764) \
        -> np.ndarray:
    """
    Calculates the continuous wave power equivalent for a given rf pulse.
    :param rf_pulse: pypulseq radio-frequency pulse
    :param tp: pulse duration [s]
    :param td: interpulse delay [s]
    :param gamma_hz: gyromagnetic ratio [MHz/T]
    """
    amp = rf_pulse.signal / gamma_hz
    duty_cycle = tp / (tp + td)
    return np.sqrt(np.trapz(amp**2, rf_pulse.t) / tp * duty_cycle)  # continuous wave power equivalent


def calc_amplitude_equivalent(rf_pulse: SimpleNamespace,
                              tp: float,
                              td: float,
                              gamma_hz: float = 42.5764) \
        -> np.ndarray:
    """
    Calculates the continuous wave amplitude equivalent for a given rf pulse.
    :param rf_pulse: pypulseq radio-frequency pulse
    :param tp: pulse duration [s]
    :param td: interpulse delay [s]
    :param gamma_hz: gyromagnetic ratio [MHz/T]
    """
    duty_cycle = tp / (tp + td)
    alpha_rad = np.trapz(rf_pulse.signal * gamma_hz * 360, rf_pulse.t) * np.pi / 180
    return alpha_rad / (gamma_hz * 2 * np.pi * tp) * duty_cycle  # continuous wave amplitude equivalent
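
# Usage sketch (not part of the original module): a 5 ms block pulse followed
# by a 5 ms interpulse delay, i.e. a 50% duty cycle; the amplitude (~3.7 uT,
# expressed in Hz via gamma) is an illustrative assumption.
#
#   t = np.linspace(0, 5e-3, 500)
#   rf = SimpleNamespace(signal=np.full_like(t, 42.5764 * 3.7), t=t)
#   b1_cwpe = calc_power_equivalent(rf, tp=5e-3, td=5e-3)   # ~3.7/sqrt(2) uT
#   b1_cwae = calc_amplitude_equivalent(rf, tp=5e-3, td=5e-3)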
| 33.27907 | 103 | 0.603075 | 180 | 1,431 | 4.644444 | 0.305556 | 0.083732 | 0.04067 | 0.076555 | 0.717703 | 0.61244 | 0.574163 | 0.574163 | 0.574163 | 0.574163 | 0 | 0.020141 | 0.30608 | 1,431 | 42 | 104 | 34.071429 | 0.821752 | 0.378756 | 0 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e217b2d1cb22bc90807570f9d2817b3c713b7660 | 2,682 | py | Python | day11.py | kwinkunks/aoc18 | 4d664c0a6da8f0109dd8db2292c5906c098693c9 | [
"Apache-2.0"
] | 1 | 2018-12-02T22:09:26.000Z | 2018-12-02T22:09:26.000Z | day11.py | kwinkunks/aoc18 | 4d664c0a6da8f0109dd8db2292c5906c098693c9 | [
"Apache-2.0"
] | null | null | null | day11.py | kwinkunks/aoc18 | 4d664c0a6da8f0109dd8db2292c5906c098693c9 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Advent of Code 2018

Day 11
"""


def argmax2d(list2d):
    maxi = (0, 0)
    for i, r in enumerate(list2d):
        for j, e in enumerate(r):
            maxi = (i, j) if e > list2d[maxi[0]][maxi[1]] else maxi
    return maxi


def max2d(list2d):
    r, c = argmax2d(list2d)
    return list2d[r][c]


class Grid(list):

    def __init__(self, shape, serial_number):
        """shape is (x-size, y-size) and not (rows, columns).

        Bah, a Grid should really be empty. Then I could use
        one for the subgrids too. Damn.
        """
        w, h = [int(i) for i in shape]
        super(Grid, self).__init__([w*[0] for _ in range(h)])
        for y, row in enumerate(self):
            for x, c in enumerate(row):
                rackid = x + 1 + 10
                power = rackid * (y + 1)
                power += serial_number
                power *= rackid
                power = power % 1000 // 100
                power -= 5
                self[y][x] = power

    def read(self, x, y):
        return self[y - 1][x - 1]
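
    # Known-value check from the puzzle statement (AoC 2018 day 11): the cell
    # at x=3, y=5 in a grid with serial number 8 has power level 4.
    #
    #   assert Grid(shape=(300, 300), serial_number=8).read(3, 5) == 4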
    @property
    def shape(self):
        return self.x, self.y

    @property
    def x(self):
        return len(self[0])

    @property
    def y(self):
        return len(self)

    def traverse(self, n):
        """Traverse subsquares."""
        for y in range(self.y - n + 1):
            for x in range(self.x - n + 1):
                yield x, y, [r[x:x+n] for r in self[y:y+n]]

    @staticmethod
    def power(grid):
        return sum(sum(r) for r in grid)

    def powergrids(self, n):
        """This should really be a Grid.
        """
        w, h = self.x - n + 1, self.y - n + 1
        subgrids = [w*[0] for _ in range(h)]
        for x, y, subgrid in self.traverse(n):
            subgrids[y][x] = self.power(subgrid)
        return subgrids


def part1():
    """Part 1.
    """
    g = Grid(shape=(300, 300), serial_number=5153)
    powergrid = g.powergrids(3)
    max_row, max_col = argmax2d(powergrid)
    return f"Max x, y = {max_col+1}, {max_row+1}"


def part2(n_max):
    """Part 2.
    """
    g = Grid(shape=(300, 300), serial_number=5153)
    maxp = 0
    maxn = 0
    for n in range(n_max):
        powergrid = g.powergrids(n)
        maxp_ = max2d(powergrid)
        print(maxp_)
        if maxp_ > maxp:
            maxp = maxp_
            maxn = n
    powergrid = g.powergrids(maxn)
    max_row, max_col = argmax2d(powergrid)
    return f"Max x, y, n = {max_col+1},{max_row+1},{maxn}"


if __name__ == "__main__":
    import sys
    if sys.argv[1] == '1':
        print(part1())
    else:
        print(sys.argv)
        print(part2(int(sys.argv[2])))
| 24.381818 | 67 | 0.50783 | 385 | 2,682 | 3.444156 | 0.25974 | 0.022624 | 0.045249 | 0.010558 | 0.155354 | 0.155354 | 0.134238 | 0.110106 | 0.06184 | 0.06184 | 0 | 0.044905 | 0.352349 | 2,682 | 109 | 68 | 24.605505 | 0.71848 | 0.101044 | 0 | 0.097222 | 0 | 0 | 0.037527 | 0.012793 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.013889 | 0.069444 | 0.333333 | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0
e21a7ec79d03d31431672c365c79e63660aac47f | 1,537 | py | Python | dataset/test_dataset.py | SABER-labs/SABERv2 | 028d403beadec3adebd51582fd8ef896a2fe3696 | [
"MIT"
] | 1 | 2022-03-02T02:52:24.000Z | 2022-03-02T02:52:24.000Z | dataset/test_dataset.py | SABER-labs/SABERv2 | 028d403beadec3adebd51582fd8ef896a2fe3696 | [
"MIT"
] | null | null | null | dataset/test_dataset.py | SABER-labs/SABERv2 | 028d403beadec3adebd51582fd8ef896a2fe3696 | [
"MIT"
] | null | null | null | import os
from pathlib import Path
from typing import Dict, List, Tuple, Union
import csv

from torch import Tensor
from torch.utils.data import Dataset

import torchaudio


def load_audio(line: List[str],
               header: List[str],
               path: str) -> Tuple[Tensor, int, Dict[str, str]]:
    # Each line has the following data:
    # client_id, path, sentence, up_votes, down_votes, age, gender, accent
    assert header[1] == "path"
    filename = os.path.join(path, line[1])
    waveform, sample_rate = torchaudio.load(filename)
    dic = dict(zip(header, line))
    return waveform, sample_rate, dic


class SimClrTestDataset(Dataset):
    def __init__(self,
                 root: Union[str, Path],
                 tsv: str = "test.tsv") -> None:
        self._path = os.fspath(root)
        self._tsv = os.path.join(self._path, tsv)
        with open(self._tsv, "r") as tsv_:
            walker = csv.reader(tsv_, delimiter="\t")
            self._header = next(walker)
            self._walker = list(walker)

    def __getitem__(self, n: int) -> Tuple[Tensor, int, Dict[str, str]]:
        line = self._walker[n]
        return load_audio(line, self._header, self._path)

    def __len__(self) -> int:
        return len(self._walker)


if __name__ == "__main__":
    from utils.config import config

    loader = SimClrTestDataset(
        root=config.dataset.test_root, tsv=config.dataset.test)
    for i in range(len(loader)):
        example = loader[i]
        print(example[0].shape, example[1], example[2])
| 30.74 | 74 | 0.627196 | 204 | 1,537 | 4.529412 | 0.397059 | 0.025974 | 0.028139 | 0.038961 | 0.051948 | 0.051948 | 0 | 0 | 0 | 0 | 0 | 0.004348 | 0.251789 | 1,537 | 49 | 75 | 31.367347 | 0.79913 | 0.065712 | 0 | 0 | 0 | 0 | 0.01605 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 1 | 0.108108 | false | 0 | 0.216216 | 0.027027 | 0.432432 | 0.027027 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0
e21b6fe53d69134213736a1276fb524ba7bd9d93 | 3,875 | py | Python | LexData/entity.py | DiFronzo/LexData | f32e112a774e3300f3a3908cca3645fd80b29f5c | [
"MIT"
] | 16 | 2019-08-27T03:55:45.000Z | 2021-12-04T13:58:01.000Z | LexData/entity.py | DiFronzo/LexData | f32e112a774e3300f3a3908cca3645fd80b29f5c | [
"MIT"
] | 16 | 2019-10-25T20:16:33.000Z | 2020-12-16T23:28:16.000Z | LexData/entity.py | DiFronzo/LexData | f32e112a774e3300f3a3908cca3645fd80b29f5c | [
"MIT"
] | 7 | 2019-12-15T11:47:00.000Z | 2021-05-14T16:30:40.000Z | import json
import logging
from typing import Dict, List, Union
from .claim import Claim
from .wikidatasession import WikidataSession
class Entity(dict):
"""
Base class for all types of entities – currently: Lexeme, Form, Sense.
Not yet implemented: Item, Property.
"""
def __init__(self, repo: WikidataSession):
super().__init__()
self.repo = repo
@property
def claims(self) -> Dict[str, List[Claim]]:
"""
All the claims of the Entity
:rtype: Dict[str, List[Claim]]
"""
if self.get("claims", {}) != []:
return {k: [Claim(c) for c in v] for k, v in self.get("claims", {}).items()}
else:
return {}
def addClaims(self, claims: Union[List[Claim], Dict[str, List[str]]]):
"""
Add claims to the entity.
:param claims: The claims to be added to the entity.
There are two possibilities for this:
- A list of Objects of type Claim
Example: ``[Claim(propertyId="P31", value="Q1")]``
- A dictionary with the property id as key and lists of
string-formatted entity ids as values.
Example: ``{"P31": ["Q1", "Q2"]}``
The first supports all datatypes, whereas the latter
currently only supports datatypes of kind Entity.
"""
if isinstance(claims, list):
self.__setClaims__(claims)
elif isinstance(claims, dict):
self.__createClaims__(claims)
else:
raise TypeError("Invalid argument type:", type(claims))
def __setClaims__(self, claims: List[Claim]):
"""
Add prebuild claims to the entity
:param claims: The list of claims to be added
"""
for claim in claims:
pid = claim.property
self.__setClaim__(pid, claim)
def __createClaims__(self, claims: Dict[str, List[str]]):
"""
Create and add new claims to the entity.
Only properties of some entity type are implemented:
Item, Property, Lexeme, Form and Sense
:param claims: The set of claims to be added
"""
for cle, values in claims.items():
for value in values:
self.__setEntityClaim__(cle, value)
def __setEntityClaim__(self, idProp: str, idStr: str):
"""
Add a claim of an entity-type to the entity.
Supported types are Lexeme, Form, Sense, Item, Property.
:param idProp: id of the property (example: "P31")
:param idItem: id of the entity (example: "Q1")
"""
entityId = int(idStr[1:])
claim_value = json.dumps({"entity-type": "item", "numeric-id": entityId})
self.__setClaim__(idProp, claim_value)
def __setClaim__(self, idProp: str, claim_value):
PARAMS = {
"action": "wbcreateclaim",
"format": "json",
"entity": self.id,
"snaktype": "value",
"bot": "1",
"property": idProp,
"value": claim_value,
"token": "__AUTO__",
}
DATA = self.repo.post(PARAMS)
assert "claim" in DATA
addedclaim = DATA["claim"]
logging.info("Claim added")
# Add the created claim to the local entity instance
if self.get("claims", []) == []:
self["claims"] = {idProp: addedclaim}
elif idProp in self.claims:
self.claims[idProp].append(addedclaim)
else:
self.claims[idProp] = [addedclaim]
@property
def id(self) -> str:
EntityId = self.get("id")
assert isinstance(EntityId, str)
return EntityId
def __str__(self) -> str:
return super().__repr__()
| 30.511811 | 88 | 0.551484 | 433 | 3,875 | 4.799076 | 0.300231 | 0.030318 | 0.026468 | 0.024543 | 0.049086 | 0.049086 | 0.029836 | 0 | 0 | 0 | 0 | 0.004682 | 0.338581 | 3,875 | 126 | 89 | 30.753968 | 0.805696 | 0.322581 | 0 | 0.080645 | 0 | 0 | 0.073067 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 1 | 0.145161 | false | 0 | 0.080645 | 0.016129 | 0.306452 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e21cc0d3b1c7ce3572e22140c0545c412f22261b | 9,715 | py | Python | tfLib/ops.py | zhangqianhui/Sparsely-Grouped-GAN | 71cb757d05309324c8d8f38b95a30a83574d7df7 | [
"MIT"
] | 62 | 2018-07-06T04:49:10.000Z | 2021-11-11T07:26:33.000Z | tfLib/ops.py | zhangqianhui/Sparsely-Grouped-GAN | 71cb757d05309324c8d8f38b95a30a83574d7df7 | [
"MIT"
] | 8 | 2018-08-31T02:37:52.000Z | 2022-03-12T00:33:16.000Z | tfLib/ops.py | zhangqianhui/Sparsely-Grouped-GAN | 71cb757d05309324c8d8f38b95a30a83574d7df7 | [
"MIT"
] | 16 | 2018-07-10T07:46:45.000Z | 2021-06-16T06:08:36.000Z | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow.contrib.layers.python.layers import batch_norm
from tensorflow.contrib.layers.python.layers import l2_regularizer
import functools
def log_sum_exp(x, axis=1):
    # subtract the per-axis max before exponentiating for numerical stability
    m = tf.reduce_max(x, axis=axis, keep_dims=True)
return m + tf.log(tf.reduce_sum(tf.exp(x - m), axis=axis))
def lrelu(x, alpha=0.2, name="LeakyReLU"):
with tf.variable_scope(name):
return tf.maximum(x , alpha*x)
def conv2d(input_, output_dim, k=4, s=2, use_sp=False, padding='SAME', scope="conv2d", use_bias=True):
with tf.variable_scope(scope):
w = tf.get_variable('w', [k, k, input_.get_shape()[-1], output_dim],
initializer=tf.contrib.layers.variance_scaling_initializer(),
regularizer=l2_regularizer(scale=0.0001))
if use_sp:
conv = tf.nn.conv2d(input_, spectral_norm(w), strides=[1, s, s, 1], padding=padding)
else:
conv = tf.nn.conv2d(input_, w, strides=[1, s, s, 1], padding=padding)
if use_bias:
biases = tf.get_variable('biases', [output_dim], initializer=tf.constant_initializer(0.0))
conv = tf.reshape(tf.nn.bias_add(conv, biases), tf.shape(conv))
return conv
def fully_connect(input_, output_dim, scope=None, use_sp=False,
bias_start=0.0, with_w=False):
shape = input_.get_shape().as_list()
with tf.variable_scope(scope or "Linear"):
matrix = tf.get_variable("Matrix", [shape[1], output_dim], tf.float32,
initializer=tf.contrib.layers.variance_scaling_initializer(), regularizer=l2_regularizer(0.0001))
bias = tf.get_variable("bias", [output_dim], tf.float32,
initializer=tf.constant_initializer(bias_start))
if use_sp:
mul = tf.matmul(input_, spectral_norm(matrix))
else:
mul = tf.matmul(input_, matrix)
if with_w:
return mul + bias, matrix, bias
else:
return mul + bias
def instance_norm(x, scope='instance_norm'):
return tf.contrib.layers.instance_norm(x,
epsilon=1e-05,
center=True, scale=True,
scope=scope)
def Adaptive_instance_norm(input, beta, gamma, epsilon=1e-5, scope="adaptive_instance_norm"):
ch = beta.get_shape().as_list()[-1]
with tf.variable_scope(scope):
mean, variance = tf.nn.moments(input, axes=[1,2], keep_dims=True)
inv = tf.rsqrt(variance + epsilon)
normalized = (input - mean) * inv
beta = tf.reshape(beta, shape=[-1, 1, 1, ch])
gamma = tf.reshape(gamma, shape=[-1, 1, 1, ch])
return gamma * normalized + beta
def Resblock_AdaIn_Affline_layers(x_init, o_dim, style_code, us=True, scope='resblock'):
input_ch = x_init.get_shape().as_list()[-1]
affline_layers = functools.partial(fully_connect, output_dim=input_ch*2)
affline_layers2 = functools.partial(fully_connect, output_dim=o_dim*2)
with tf.variable_scope(scope):
def shortcut(x):
if us:
x = upscale(x, scale=2)
if input_ch != o_dim:
x = conv2d(x, output_dim=o_dim, k=1, s=1, scope='conv', use_bias=False)
return x
with tf.variable_scope('res1'):
bg = affline_layers(style_code, scope='fc1')
beta, gamma = bg[:, 0:input_ch], bg[:, input_ch: input_ch*2]
x = Adaptive_instance_norm(x_init, beta=beta, gamma=gamma, scope='AdaIn1')
x = lrelu(x)
if us:
x = upscale(x, scale=2)
x = conv2d(x, o_dim, k=3, s=1, padding='SAME')
with tf.variable_scope('res2'):
bg = affline_layers2(style_code, scope='fc2')
beta, gamma = bg[:, 0:o_dim], bg[:, o_dim: o_dim*2]
x = Adaptive_instance_norm(x, beta=beta, gamma=gamma, scope='AdaIn2')
x = lrelu(x)
x = conv2d(x, o_dim, k=3, s=1, padding='SAME')
if o_dim != input_ch or us:
x_init = shortcut(x_init)
return (x + x_init) / tf.sqrt(2.0)
def Resblock(x_init, o_dim=256, relu_type="lrelu", use_IN=True, ds=True, scope='resblock'):
dim = x_init.get_shape().as_list()[-1]
conv1 = functools.partial(conv2d, output_dim=dim, k=3, s=1)
conv2 = functools.partial(conv2d, output_dim=o_dim, k=3, s=1)
In = functools.partial(instance_norm)
input_ch = x_init.get_shape().as_list()[-1]
with tf.variable_scope(scope):
def relu(relu_type):
relu_dict = {
"relu": tf.nn.relu,
"lrelu": lrelu
}
return relu_dict[relu_type]
def shortcut(x):
if input_ch != o_dim:
x = conv2d(x, output_dim=o_dim, k=1, s=1, scope='conv', use_bias=False)
if ds:
x = avgpool2d(x, k=2)
return x
if use_IN:
x = conv1(relu(relu_type)(In(x_init, scope='bn1')), padding='SAME', scope='c1')
if ds:
x = avgpool2d(x, k=2)
x = conv2(relu(relu_type)(In(x, scope='bn2')), padding='SAME', scope='c2')
else:
x = conv1(relu(relu_type)(x_init), padding='SAME', scope='c1')
if ds:
x = avgpool2d(x, k=2)
x = conv2(relu(relu_type)(x), padding='SAME', scope='c2')
if input_ch != o_dim or ds:
x_init = shortcut(x_init)
return (x + x_init) / tf.sqrt(2.0) #unit variance
def de_conv(input_, output_dim,
k_h=4, k_w=4, d_h=2, d_w=2, use_sp=False,
scope="deconv2d", with_w=False):
with tf.variable_scope(scope):
w = tf.get_variable('w', [k_h, k_w, output_dim[-1], input_.get_shape()[-1]], dtype=tf.float32,
initializer=tf.contrib.layers.variance_scaling_initializer())
if use_sp:
deconv = tf.nn.conv2d_transpose(input_, spectral_norm(w), output_shape=output_dim,
strides=[1, d_h, d_w, 1])
else:
deconv = tf.nn.conv2d_transpose(input_, w, output_shape=output_dim,
strides=[1, d_h, d_w, 1])
biases = tf.get_variable('biases', [output_dim[-1]], tf.float32, initializer=tf.constant_initializer(0.0))
deconv = tf.reshape(tf.nn.bias_add(deconv, biases), deconv.get_shape())
if with_w:
return deconv, w, biases
else:
return deconv
def avgpool2d(x, k=2):
return tf.nn.avg_pool(x, ksize=[1, k, k ,1], strides=[1, k, k, 1], padding='SAME')
def Adaptive_pool2d(x, output_size=1):
input_size = get_conv_shape(x)[-1]
stride = int(input_size / (output_size))
kernel_size = input_size - (output_size - 1) * stride
return tf.nn.avg_pool(x, ksize=[1, kernel_size, kernel_size, 1], strides=[1, kernel_size, kernel_size, 1], padding='SAME')
def upscale(x, scale):
_, h, w, _ = get_conv_shape(x)
return resize_nearest_neighbor(x, (h * scale, w * scale))
def get_conv_shape(tensor):
shape = int_shape(tensor)
return shape
def int_shape(tensor):
shape = tensor.get_shape().as_list()
return [num if num is not None else -1 for num in shape]
def resize_nearest_neighbor(x, new_size):
x = tf.image.resize_nearest_neighbor(x, new_size)
return x
def conv_cond_concat(x, y):
"""Concatenate conditioning vector on feature map axis."""
x_shapes = x.get_shape()
y_shapes = y.get_shape()
y_reshaped = tf.reshape(y, [y_shapes[0], 1, 1, y_shapes[-1]])
return tf.concat([x , y_reshaped*tf.ones([x_shapes[0], x_shapes[1], x_shapes[2] , y_shapes[-1]])], 3)
def batch_normal(input, scope="scope", reuse=False):
return batch_norm(input, epsilon=1e-5, decay=0.9, scale=True, scope=scope, reuse=reuse, fused=True, updates_collections=None)
def _l2normalize(v, eps=1e-12):
return v / (tf.reduce_sum(v ** 2) ** 0.5 + eps)
def spectral_norm(W, collections=None, return_norm=False, name='sn'):
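    # Spectral normalization: rescale W by an estimate of its largest singular
    # value so the layer's operator norm is approximately one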
shape = W.get_shape().as_list()
if len(shape) == 1:
sigma = tf.reduce_max(tf.abs(W))
else:
if len(shape) == 4:
_W = tf.reshape(W, (-1, shape[3]))
shape = (shape[0] * shape[1] * shape[2], shape[3])
else:
_W = W
u = tf.get_variable(
name=name + "_u",
shape=(_W.shape.as_list()[-1], shape[0]),
initializer=tf.random_normal_initializer,
collections=collections,
trainable=False
)
_u = u
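        # one step of power iteration approximates the dominant singular vectors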
for _ in range(1):
_v = tf.nn.l2_normalize(tf.matmul(_u, _W), 1)
_u = tf.nn.l2_normalize(tf.matmul(_v, tf.transpose(_W)), 1)
_u = tf.stop_gradient(_u)
_v = tf.stop_gradient(_v)
sigma = tf.reduce_mean(tf.reduce_sum(_u * tf.transpose(tf.matmul(_W, tf.transpose(_v))), 1))
update_u_op = tf.assign(u, _u)
with tf.control_dependencies([update_u_op]):
sigma = tf.identity(sigma)
if return_norm:
return W / sigma, sigma
else:
return W / sigma
def getWeight_Decay(scope='discriminator'):
return tf.add_n(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES, scope=scope))
def getTrainVariable(vars, scope='discriminator'):
return [var for var in vars if scope in var.name]
| 38.86 | 130 | 0.582913 | 1,378 | 9,715 | 3.892598 | 0.152395 | 0.028523 | 0.02349 | 0.031879 | 0.370433 | 0.320097 | 0.219239 | 0.182886 | 0.156786 | 0.136465 | 0 | 0.026297 | 0.283685 | 9,715 | 249 | 131 | 39.016064 | 0.744504 | 0.006794 | 0 | 0.253807 | 0 | 0 | 0.025234 | 0.002342 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121827 | false | 0 | 0.035533 | 0.030457 | 0.294416 | 0.005076 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e21d93df6e09e189fe0531be7e6a41a04a6df768 | 2,091 | py | Python | Deploy Koalas/predicao_seguro_veicular.py | Alberlando/Projeto_Stack_Labs | dbf4ff2150f199e9e8a7bb7bd298d26bcaa6b5d2 | [
"MIT"
] | null | null | null | Deploy Koalas/predicao_seguro_veicular.py | Alberlando/Projeto_Stack_Labs | dbf4ff2150f199e9e8a7bb7bd298d26bcaa6b5d2 | [
"MIT"
] | null | null | null | Deploy Koalas/predicao_seguro_veicular.py | Alberlando/Projeto_Stack_Labs | dbf4ff2150f199e9e8a7bb7bd298d26bcaa6b5d2 | [
"MIT"
] | null | null | null | import pickle
import pandas as pd
from flask import Flask, render_template, request
# Template and asset folders
application = Flask(__name__, template_folder='template', static_folder='template/assets')
# Trained model
modelo = pickle.load(open('./models/modelo.pkl', 'rb'))
@application.route('/')
def home():
return render_template("homepage.html")
@application.route('/predicao_seguro_veicular')
def predicao_seguro_veicular():
return render_template("form.html")
@application.route('/about')
def about():
return render_template("about.html")
def get_data():
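    # read the fields submitted from form.html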
Annual_Premium = request.form.get('Annual_Premium')
Vintage = request.form.get('Vintage')
Age = request.form.get('Age')
Vehicle_Damage = request.form.get('Vehicle_Damage')
Previously_Insured = request.form.get('Previously_Insured')
d_dict = {'Annual_Premium': [Annual_Premium],
'Vintage': [Vintage],
'Age': [Age],
'Vehicle_Damage': [Vehicle_Damage],
'Previously_Insured': [Previously_Insured]}
return pd.DataFrame.from_dict(d_dict, orient='columns')
@application.route('/send', methods=['POST'])
def show_data():
df = get_data()
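    # cast the posted form strings to the dtypes the model was trained on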
df['Annual_Premium'] = df['Annual_Premium'].astype('float32')
df['Vintage'] = df['Vintage'].astype('int64')
df['Age'] = df['Age'].astype('int64')
df['Vehicle_Damage'] = df['Vehicle_Damage'].astype('int64')
df['Previously_Insured'] = df['Previously_Insured'].astype('int64')
df = df[['Annual_Premium', 'Vintage', 'Age', 'Vehicle_Damage', 'Previously_Insured']]
prediction = modelo.predict(df)
outcome = 'Cliente tem interesse, vamos pra cimaaa galeraaaa!'
imagem = 'felicidade.jpg'
if prediction == 0:
outcome = 'Cliente não tem interesse! Que pena, vamos continuar tentando.'
imagem = 'tristeza.jpg'
return render_template('result.html', tables=[df.to_html(classes='data', header=True, col_space=10)],
result=outcome, imagem=imagem)
if __name__ == "__main__":
application.run(debug=True)
| 32.169231 | 105 | 0.675275 | 247 | 2,091 | 5.506073 | 0.37247 | 0.066912 | 0.051471 | 0.066176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007506 | 0.171688 | 2,091 | 64 | 106 | 32.671875 | 0.777714 | 0.020564 | 0 | 0 | 0 | 0 | 0.289487 | 0.012225 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.066667 | 0.066667 | 0.288889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e21ebbedab4084b07dd7feca9c6e28c31c047c5a | 9,732 | py | Python | smartsheet/folders.py | Funtimes-Smarts/Python-import-Smart | ffb99887d03e31d10da553c9ee8c7be1238816fc | [
"Apache-2.0"
] | null | null | null | smartsheet/folders.py | Funtimes-Smarts/Python-import-Smart | ffb99887d03e31d10da553c9ee8c7be1238816fc | [
"Apache-2.0"
] | null | null | null | smartsheet/folders.py | Funtimes-Smarts/Python-import-Smart | ffb99887d03e31d10da553c9ee8c7be1238816fc | [
"Apache-2.0"
] | null | null | null | # pylint: disable=C0111,R0902,R0913
# Smartsheet Python SDK.
#
# Copyright 2016 Smartsheet.com, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"): you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import absolute_import
from .models.folder import Folder
import logging
import os.path
import six
from . import fresh_operation
class Folders(object):
"""Class for handling Folders operations."""
def __init__(self, smartsheet_obj):
"""Init Folders with base Smartsheet object."""
self._base = smartsheet_obj
self._log = logging.getLogger(__name__)
def copy_folder(self, folder_id, container_destination_obj,
include=None, skip_remap=None, omit=None):
"""Creates a copy of the specified Folder.
Args:
folder_id (int): Folder ID
container_destination_obj
(ContainerDestination): Container Destination object.
include (list[str]): A comma separated list of
elements to copy. Valid list values: attachments,
cellLinks, data, discussions, filters, forms, ruleRecipients,
rules, shares, all (deprecated).
skip_remap (list[str]): A comma separated list
of references to NOT re-map for the newly created resource.
Valid list items: cellLinks, reports, sheetHyperlinks, sights
omit (list[str]): A comma separated list of elements to omit.
The only current valid item is sheetHyperlinks
Returns:
Result
"""
_op = fresh_operation('copy_folder')
_op['method'] = 'POST'
_op['path'] = '/folders/' + str(folder_id) + '/copy'
_op['query_params']['include'] = include
_op['query_params']['skipRemap'] = skip_remap
_op['query_params']['omit'] = omit
_op['json'] = container_destination_obj
expected = ['Result', 'Folder']
prepped_request = self._base.prepare_request(_op)
response = self._base.request(prepped_request, expected, _op)
return response
def create_folder_in_folder(self, folder_id, folder_obj):
"""Create a Folder in the specified Folder
Args:
folder_id (int): Folder ID
folder_obj (Folder): Folder object.
Returns:
Result
"""
if isinstance(folder_obj, str):
folder_obj = Folder({
'name': folder_obj
})
_op = fresh_operation('create_folder_in_folder')
_op['method'] = 'POST'
_op['path'] = '/folders/' + str(folder_id) + '/folders'
_op['json'] = folder_obj
# filter before we go
_op['json'].pre_request_filter = 'create_folder_in_folder'
expected = ['Result', 'Folder']
prepped_request = self._base.prepare_request(_op)
response = self._base.request(prepped_request, expected, _op)
return response
def create_sheet_in_folder(self, folder_id, sheet_obj):
"""Create a Sheet from scratch in the specified Folder.
Args:
folder_id (int): Folder ID
sheet_obj (Sheet): Sheet object.
Returns:
Result
"""
_op = fresh_operation('create_sheet_in_folder')
_op['method'] = 'POST'
_op['path'] = '/folders/' + str(folder_id) + '/sheets'
_op['json'] = sheet_obj
# filter before we go
_op['json'].pre_request_filter = 'create_sheet_in_folder'
expected = ['Result', 'Sheet']
prepped_request = self._base.prepare_request(_op)
response = self._base.request(prepped_request, expected, _op)
return response
# pylint: disable=invalid-name
def create_sheet_in_folder_from_template(self, folder_id, sheet_obj,
include=None):
"""Create a Sheet in the specified Folder from the specified Template.
The Sheet object should be limited to the following
attributes:
name (required): need not be unique.
fromId (required): the ID of the Template to use in creating the
Sheet.
The optional Include parameter is a list of elements to copy from
the Template. It may include: data, attachments, discussions,
cellLinks, forms
Args:
folder_id (int): Folder ID
sheet_obj (Sheet): Sheet object.
include (list[str]): A list of optional elements
to include from the source Template. Valid list values:
data, attachments, discussions, cellLinks, forms.
Returns:
Result
"""
_op = fresh_operation('create_sheet_in_folder_from_template')
_op['method'] = 'POST'
_op['path'] = '/folders/' + str(folder_id) + '/sheets'
_op['query_params']['include'] = include
_op['json'] = sheet_obj
# filter before we go
_op['json'].pre_request_filter = 'create_sheet_in_folder_from_template'
expected = ['Result', 'Sheet']
prepped_request = self._base.prepare_request(_op)
response = self._base.request(prepped_request, expected, _op)
return response
# pylint: enable=invalid-name
def delete_folder(self, folder_id):
"""Delete the Folder (and its contents) specified in the request.
Args:
folder_id (int): Folder ID
Returns:
Result
"""
_op = fresh_operation('delete_folder')
_op['method'] = 'DELETE'
_op['path'] = '/folders/' + str(folder_id)
expected = 'Result'
prepped_request = self._base.prepare_request(_op)
response = self._base.request(prepped_request, expected, _op)
return response
def get_folder(self, folder_id, include=None):
"""Get the specified Folder (and list its contents).
Args:
folder_id (int): Folder ID
include (list[str]): A comma-separated list of
optional elements to include in the response. Valid list
values: ownerInfo, sheetVersion, source.
Returns:
Folder
"""
_op = fresh_operation('get_folder')
_op['method'] = 'GET'
_op['path'] = '/folders/' + str(folder_id)
_op['query_params']['include'] = include
expected = 'Folder'
prepped_request = self._base.prepare_request(_op)
response = self._base.request(prepped_request, expected, _op)
return response
def list_folders(self, folder_id, page_size=100, page=1,
include_all=False):
"""Get a list of top-level child Folders within the specified Folder.
Args:
folder_id (int): Folder ID
page_size (int): The maximum number of items to
return per page. Defaults to 100.
page (int): Which page to return. Defaults to 1
if not specified.
include_all (bool): If true, include all results
(i.e. do not paginate).
Returns:
IndexResult
"""
_op = fresh_operation('list_folders')
_op['method'] = 'GET'
_op['path'] = '/folders/' + str(folder_id) + '/folders'
_op['query_params']['pageSize'] = page_size
_op['query_params']['page'] = page
_op['query_params']['includeAll'] = include_all
expected = ['IndexResult', 'Folder']
prepped_request = self._base.prepare_request(_op)
response = self._base.request(prepped_request, expected, _op)
return response
def move_folder(self, folder_id, container_destination_obj):
"""Moves the specified Folder to another location.
Args:
folder_id (int): Folder ID
container_destination_obj
(ContainerDestination): Container Destination object.
Returns:
Result
"""
_op = fresh_operation('move_folder')
_op['method'] = 'POST'
_op['path'] = '/folders/' + str(folder_id) + '/move'
_op['json'] = container_destination_obj
# filter before we go
_op['json'].pre_request_filter = 'move_folder'
expected = ['Result', 'Folder']
prepped_request = self._base.prepare_request(_op)
response = self._base.request(prepped_request, expected, _op)
return response
def update_folder(self, folder_id, folder_obj):
"""Update the specified Folder.
Args:
folder_id (int): Folder ID
folder_obj (Folder): Folder object.
Returns:
Result
"""
if isinstance(folder_obj, str):
folder_obj = Folder({
'name': folder_obj
})
_op = fresh_operation('update_folder')
_op['method'] = 'PUT'
_op['path'] = '/folders/' + str(folder_id)
_op['json'] = folder_obj
# filter before we go
_op['json'].pre_request_filter = 'update_folder'
expected = ['Result', 'Folder']
prepped_request = self._base.prepare_request(_op)
response = self._base.request(prepped_request, expected, _op)
return response
| 33.215017 | 79 | 0.60522 | 1,107 | 9,732 | 5.079494 | 0.196929 | 0.051218 | 0.019207 | 0.024009 | 0.589721 | 0.533701 | 0.477147 | 0.452961 | 0.442646 | 0.390539 | 0 | 0.00409 | 0.296547 | 9,732 | 292 | 80 | 33.328767 | 0.817266 | 0.373202 | 0 | 0.571429 | 0 | 0 | 0.150271 | 0.030241 | 0 | 0 | 0 | 0 | 0 | 1 | 0.089286 | false | 0 | 0.053571 | 0 | 0.232143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e220223f379eea5955f8b9951c232b87a6d1acb8 | 1,201 | py | Python | Chapter 5/HelloLights.py | PacktPublishing/Mathematics-for-Game-Programming-and-Computer-Graphics | c6101487dfe078c25724ed7d58998f84a9ac57dd | [
"MIT"
] | null | null | null | Chapter 5/HelloLights.py | PacktPublishing/Mathematics-for-Game-Programming-and-Computer-Graphics | c6101487dfe078c25724ed7d58998f84a9ac57dd | [
"MIT"
] | null | null | null | Chapter 5/HelloLights.py | PacktPublishing/Mathematics-for-Game-Programming-and-Computer-Graphics | c6101487dfe078c25724ed7d58998f84a9ac57dd | [
"MIT"
] | null | null | null | from Cube import *
from pygame.locals import *
from OpenGL.GL import *
from OpenGL.GLU import *
pygame.init()
screen_width = 500
screen_height = 500
screen = pygame.display.set_mode((screen_width, screen_height), DOUBLEBUF | OPENGL)
pygame.display.set_caption('Lights in OpenGL')
done = False
white = pygame.Color(255, 255, 255)
glMatrixMode(GL_PROJECTION)
gluPerspective(60, (screen_width / screen_height), 0.1, 100.0)
glMatrixMode(GL_MODELVIEW)
glTranslatef(0.0, 0.0, -3.0)
glEnable(GL_DEPTH_TEST)
glEnable(GL_LIGHTING)
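# light 0: positional light at (5, 5, 5) with magenta ambient, red diffuse
# and green specular components; the cube's material reflects diffuse green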
glLight(GL_LIGHT0, GL_POSITION, (5, 5, 5, 1))
glLightfv(GL_LIGHT0, GL_AMBIENT, (1, 0, 1, 1))
glLightfv(GL_LIGHT0, GL_DIFFUSE, (1, 0, 0, 1))
glLightfv(GL_LIGHT0, GL_SPECULAR, (0, 1, 0, 1))
glMaterial(GL_FRONT, GL_DIFFUSE, (0, 1, 0, 1))
glEnable(GL_LIGHT0)
# Change path name to suit your directory structure
mesh = Cube(GL_POLYGON, "../images/bricks.jpg")
while not done:
for event in pygame.event.get():
if event.type == pygame.QUIT:
done = True
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
glRotatef(5, 1, 0, 1)
mesh.draw()
pygame.display.flip()
pygame.time.wait(50)
pygame.quit()
| 28.595238 | 84 | 0.693589 | 183 | 1,201 | 4.382514 | 0.437158 | 0.01995 | 0.049875 | 0.067332 | 0.074813 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06079 | 0.178185 | 1,201 | 41 | 85 | 29.292683 | 0.751773 | 0.040799 | 0 | 0 | 0 | 0 | 0.032462 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.117647 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2223661c2c4f44a67ddc90bd3bf6d66591db78b | 5,504 | py | Python | simulate_SNVs.py | pjedge/longshot_study | da4d3ffb1a58575142a9712c21b12e2cf6083d2d | [
"MIT"
] | 3 | 2019-10-15T12:28:38.000Z | 2019-12-09T07:56:26.000Z | simulate_SNVs.py | pjedge/longshot_study | da4d3ffb1a58575142a9712c21b12e2cf6083d2d | [
"MIT"
] | null | null | null | simulate_SNVs.py | pjedge/longshot_study | da4d3ffb1a58575142a9712c21b12e2cf6083d2d | [
"MIT"
] | 2 | 2020-12-29T09:34:09.000Z | 2022-01-11T06:12:02.000Z | import copy
import random
import os
import pysam
import itertools
import numpy as np
from numpy.random import choice
VALID_CHROMS = set(['{}'.format(c) for c in range(1,23)]+['X']+['contig1','contig2','contig3'])
# estimate prior probability of genotypes using strategy described here:
# http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2694485/
# "prior probability of each genotype"
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# CAUTION!
# Selects a variant genotype, assuming the site already is a SNV.
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
def create_phased_genotype_selector():
alleles = ['A','C','G','T']
genotypes = list(itertools.combinations_with_replacement(alleles,2))
hom_snp_rate = 0.0005
het_snp_rate = 0.001
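    # prior per-site probabilities of homozygous / heterozygous SNVs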
diploid_genotype_priors = dict()
haploid_genotype_priors = dict()
transition = {'A':'G','G':'A','T':'C','C':'T'}
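    # transitions (A<->G, C<->T) are treated as 4x as likely as transversions,
    # so each SNP rate is split 4/6 : 1/6 : 1/6 over the three alternate alleles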
for allele in alleles:
# priors on haploid alleles
haploid_genotype_priors[allele] = dict()
haploid_genotype_priors[allele][allele] = 1 - het_snp_rate
haploid_genotype_priors[allele][transition[allele]] = het_snp_rate / 6 * 4
for transversion in alleles:
if transversion in haploid_genotype_priors[allele]:
continue
haploid_genotype_priors[allele][transversion] = het_snp_rate / 6
diploid_genotype_priors[allele] = []
for G in genotypes:
g1,g2 = G
# probability of homozygous reference is the probability of neither het or hom SNP
if g1 == g2 and g1 == allele:
diploid_genotype_priors[allele].append(0)
elif g1 == g2 and g1 != allele:
# transitions are 4 times as likely as transversions
if g1 == transition[allele]:
diploid_genotype_priors[allele].append(hom_snp_rate / 6 * 4)
else:
diploid_genotype_priors[allele].append(hom_snp_rate / 6)
else: # else it's the product of the haploid priors
diploid_genotype_priors[allele].append(haploid_genotype_priors[allele][g1] * haploid_genotype_priors[allele][g2])
# remove the option of selecting homozygous reference
total = sum(diploid_genotype_priors[allele])
for i in range(len(genotypes)):
diploid_genotype_priors[allele][i] /= total
# make sure everything sums to 1
diploid_genotype_priors[allele][-1] = 1.0 - sum(diploid_genotype_priors[allele][:-1])
g_ixs = list(range(len(genotypes)))
def phased_genotype_selector(ref_allele):
g_ix = choice(g_ixs, 1, p=diploid_genotype_priors[ref_allele])[0]
g = genotypes[g_ix]
if random.random() > 0.5:
return g
else:
return (g[1],g[0])
return phased_genotype_selector
def simulate_SNV_VCF(hg19_fasta, output_vcf, min_pos=None, max_pos=None):
phased_genotype_selector = create_phased_genotype_selector()
with pysam.FastaFile(hg19_fasta) as fasta, open(output_vcf,'w') as outv:
header = '''##fileformat=VCFv4.2
##source=simulate_SNVs.py
#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO\tFORMAT\tSAMPLE'''
print(header ,file=outv)
for chrom in fasta.references:
size = fasta.get_reference_length(chrom)
pos = 0
while pos < size:
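                # geometric inter-variant spacing: ~1 candidate site per 1/0.0015 ≈ 667 bp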
pos += np.random.geometric(0.0015, size=None)
ref_allele = fasta.fetch(chrom,pos,pos+1).upper()
if ref_allele in ['A','C','G','T']:
genotype = phased_genotype_selector(ref_allele)
else:
continue
#hap1_seq.append(genotype[0])
#hap2_seq.append(genotype[1])
if genotype[0] == genotype[1] and genotype[0] != ref_allele:
var_str = genotype[0]
genotype_str = '1|1'
elif genotype[1] == ref_allele and genotype[0] != ref_allele:
var_str = genotype[0]
genotype_str = '1|0'
elif genotype[0] == ref_allele and genotype[1] != ref_allele:
var_str = genotype[1]
genotype_str = '0|1'
elif genotype[0] != ref_allele and genotype[1] != ref_allele and genotype[0] != genotype[1]:
# triallelic
var_str = genotype[0] + ',' + genotype[1]
genotype_str = '1|2'
else:
print("INVALID GENOTYPE ENCOUNTERED")
exit(1)
if(chrom not in VALID_CHROMS):
continue
if (min_pos != None and pos < min_pos) or (max_pos != None and pos > max_pos):
continue
if genotype != (ref_allele,ref_allele) and 'N' not in genotype:
el = [None]*10
el[0] = chrom
el[1] = pos + 1
el[2] = '.'
el[3] = ref_allele
el[4] = var_str
el[5] = 100
el[6] = 'PASS'
el[7] = '.'
el[8] = 'GT'
el[9] = genotype_str
line = '{}\t{}\t{}\t{}\t{}\t{}\t{}\t{}\t{}\t{}'.format(*el)
print(line,file=outv)
if __name__ == '__main__':
    # NOTE: the original called an undefined generate_fasta(); invoke the
    # simulator defined above instead (these file paths are placeholders)
    simulate_SNV_VCF('hg19.fa', 'simulated_SNVs.vcf')
| 37.69863 | 133 | 0.544876 | 650 | 5,504 | 4.421538 | 0.286154 | 0.092554 | 0.111343 | 0.084551 | 0.219903 | 0.123173 | 0.107516 | 0.097077 | 0.093946 | 0.063326 | 0 | 0.029979 | 0.321221 | 5,504 | 145 | 134 | 37.958621 | 0.739293 | 0.133176 | 0 | 0.108911 | 0 | 0 | 0.051536 | 0.026083 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029703 | false | 0.009901 | 0.069307 | 0 | 0.128713 | 0.029703 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2240a7bb6db4aec7ff5b824e862708bbb200092 | 26,604 | py | Python | examples/mlutils.py | HongGroup/AHO | 2f492fccc9535365ff618b4ffd04dd64248bddac | [
"MIT"
] | 4 | 2021-06-08T13:16:44.000Z | 2021-12-01T09:06:55.000Z | examples/mlutils.py | licheng-xu-echo/AHO | 2f492fccc9535365ff618b4ffd04dd64248bddac | [
"MIT"
] | null | null | null | examples/mlutils.py | licheng-xu-echo/AHO | 2f492fccc9535365ff618b4ffd04dd64248bddac | [
"MIT"
] | 2 | 2021-12-01T09:07:02.000Z | 2022-01-17T07:52:22.000Z | # -*- coding: utf-8 -*-
"""
@author: Li-Cheng Xu
"""
import numpy as np
from rdkit import Chem
import matplotlib.pyplot as plt
from sklearn.metrics import mean_absolute_error,r2_score
from sklearn.model_selection import train_test_split
from openbabel.pybel import readfile,Outputfile
def molformatconversion(input_file:str,output_file:str,input_format="xyz",output_format="sdf"):
molecules = readfile(input_format,input_file)
output_file_writer = Outputfile(output_format,output_file,overwrite=True)
for i,molecule in enumerate(molecules):
output_file_writer.write(molecule)
output_file_writer.close()
print('%d molecules converted'%(i+1))
def process_desc(array):
    '''
    Process the descriptor matrix: drop columns that contain NaN and
    columns whose value is constant across all inputs.
    '''
desc_len = array.shape[1]
rig_idx = []
for i in range(desc_len):
try:
desc_range = array[:,i].max() - array[:,i].min()
if desc_range != 0 and not np.isnan(desc_range):
rig_idx.append(i)
except:
continue
array = array[:,rig_idx]
array = np.array(array,dtype=np.float32)
return array
def maxminscale(array):
'''
max-min normalization processing
'''
return (array - array.min(axis=0))/(array.max(axis=0)-array.min(axis=0))
def standardxyz(init_xyz_coord,atom1_num,atom2_num,atom3_num):
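    # canonicalize the geometry with a rigid-body transform: atom1 is moved to
    # the origin, the atom1->atom2 vector is aligned with the z-axis, and atom3
    # fixes the remaining rotational freedom about that axis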
atom1_num = int(atom1_num)
atom2_num = int(atom2_num)
atom3_num = int(atom3_num)
oldcoord = np.c_[init_xyz_coord, np.ones(len(init_xyz_coord))]
first_atom_coord = oldcoord[atom1_num-1][0:3]
second_atom_coord = oldcoord[atom2_num-1][0:3]
Xv = second_atom_coord-first_atom_coord
Xv_xy = Xv.copy()
Xv_xy[2] = 0
X_v = np.array([Xv[0],0,0])
Z_v = np.array([0,0,1])
alpha = np.arccos(Xv_xy[0:3].dot(
X_v[0:3])/(np.sqrt(Xv_xy[0:3].dot(Xv_xy[0:3]))*np.sqrt(X_v[0:3].dot(X_v[0:3]))))
beta = np.arccos(Xv[0:3].dot(
Z_v)/(np.sqrt(Xv[0:3].dot(Xv[0:3]))*np.sqrt(Z_v.dot(Z_v))))
if Xv_xy[1]*Xv_xy[0] > 0:
alpha = -alpha
if Xv[0] < 0:
beta = -beta
def T_M(a):
T_M = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [
0, 0, 1, 0], [a[0], a[1], a[2], 1]])
return T_M
def RZ_alpha_M(alpha):
RZ_alpha_M = np.array([[np.cos(alpha), np.sin(
alpha), 0, 0], [-np.sin(alpha), np.cos(alpha), 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
return RZ_alpha_M
def RY_beta_M(beta):
RY_beta_M = np.array([[np.cos(beta), 0, np.sin(beta), 0], [
0, 1, 0, 0], [-np.sin(beta), 0, np.cos(beta), 0], [0, 0, 0, 1]])
return RY_beta_M
a = -first_atom_coord
new_xyz_coord1 = oldcoord.dot(T_M(a)).dot(
RZ_alpha_M(alpha)).dot(RY_beta_M(beta))
third_atom_coord = new_xyz_coord1[atom3_num-1][0:3]
second_atom_coord = new_xyz_coord1[atom2_num-1][0:3]
Xy = third_atom_coord - second_atom_coord
Y_v = np.array([0, 1, 0])
gamma = np.arccos(Xy.dot(Y_v)/(np.sqrt(Xy.dot(Xy))*np.sqrt(Y_v.dot(Y_v))))
if Xy[0] < 0:
gamma = -gamma
NewCoord = new_xyz_coord1.dot(RZ_alpha_M(gamma))
third_atom_coord = NewCoord[atom3_num-1][0:3]
third_XY = third_atom_coord[0:2]
axis_y_2d = np.array([0,1])
sita = np.arccos(third_XY.dot(axis_y_2d)/(np.sqrt(third_XY.dot(third_XY))*np.sqrt(axis_y_2d.dot(axis_y_2d))))
if third_XY[0]*third_XY[1] < 0:
sita = -sita
NewCoord0 = NewCoord.dot(RZ_alpha_M(sita))
NewCoord1 = np.around(np.delete(NewCoord0, 3, axis=1), decimals=8)
return NewCoord1
def shuffle_index(array,random_state=None):
np.random.seed(random_state)
index = list(range(len(array)))
np.random.shuffle(index)
return index
def select_exp_set(re_smi,metals,tag,target_smi,target_metal=None,rt=False,temp=None,size=10,random_state=None):
    # stratified selection: reactions matching the target substrate (and,
    # optionally, the target metal and room temperature) are bucketed into
    # ten equal-width tag bins, keeping at most `size` samples per bin
    test_index = []
    if random_state != None:
        np.random.seed(random_state)
    tag_distrib_dict = {0.1:[],0.2:[],0.3:[],0.4:[],
                        0.5:[],0.6:[],0.7:[],0.8:[],
                        0.9:[],1.0:[]}
    bins = [(None,0.1),(0.1,0.2),(0.2,0.3),(0.3,0.4),(0.4,0.5),
            (0.5,0.6),(0.6,0.7),(0.7,0.8),(0.8,0.9),(0.9,1.0)]
    shuffle_idx = list(range(len(re_smi)))
    np.random.shuffle(shuffle_idx)
    for i in shuffle_idx:
        if re_smi[i] != target_smi:
            continue
        if target_metal != None and metals[i] != target_metal:
            continue
        if rt and not (20 <= temp[i] <= 30):
            continue
        for lower, upper in bins:
            in_bin = tag[i] <= upper if lower is None else lower < tag[i] <= upper
            if in_bin and len(tag_distrib_dict[upper]) < size:
                tag_distrib_dict[upper].append(tag[i])
                test_index.append(i)
                break
    print('target experiment set size: %d'%len(test_index))
    return test_index
def select_related_set(re_smi,metals,exclude_smi,target_metal,related_smi_set_1,related_smi_set_2):
related_train_index_1 = []
related_train_index_2 = []
for i in list(range(len(re_smi))):
tmp_smi = re_smi[i]
if tmp_smi == exclude_smi or metals[i] != target_metal:
continue
flag_2 = 0
for tmp_related_2 in related_smi_set_2:
if Chem.MolFromSmiles(tmp_smi).HasSubstructMatch(Chem.MolFromSmiles(tmp_related_2)):
flag_2 = 1
if flag_2 == 0:
for tmp_related_1 in related_smi_set_1:
if Chem.MolFromSmiles(tmp_smi).HasSubstructMatch(Chem.MolFromSmiles(tmp_related_1)):
related_train_index_1.append(i)
break
elif flag_2 == 1:
related_train_index_2.append(i)
related_train_index_1 = list(set(related_train_index_1))
related_train_index_2 = list(set(related_train_index_2))
print('related set 1 size: %d, related set 2 size: %d'%(len(related_train_index_1),len(related_train_index_2)))
return related_train_index_1,related_train_index_2
class small_sample_learning():
def __init__(self,related_index_1,related_index_2,test_index,test_size=0.5,split_seed=None):
np.random.seed(split_seed)
test_shuffle_index = list(range(len(test_index)))
np.random.shuffle(test_shuffle_index)
test_index_shuffle = np.array(test_index)[test_shuffle_index]
self.related_index_1 = related_index_1
self.related_index_2 = related_index_2
self.test_index_shuffle_1 = test_index_shuffle[:int(len(test_index_shuffle)*test_size)]
self.test_index_shuffle_2 = test_index_shuffle[int(len(test_index_shuffle)*test_size):]
def delta_learning(self,react_desc,tag,model_ensemble=[],tag_scale=1,n_jobs=1):
assert len(model_ensemble) == 3, 'model_ensemble should contain 3 models'
model_1 = model_ensemble[0]
delta_model_2 = model_ensemble[1]
delta_model_3 = model_ensemble[2]
related_train_x_1,related_train_y_1 = react_desc[self.related_index_1],tag[self.related_index_1]
related_train_x_2,related_train_y_2 = react_desc[self.related_index_2],tag[self.related_index_2]
append_x,append_y = react_desc[self.test_index_shuffle_1],tag[self.test_index_shuffle_1]
external_x,external_y = react_desc[self.test_index_shuffle_2],tag[self.test_index_shuffle_2]
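        # three-stage delta learning: model_1 fits the loosely related set,
        # delta_model_2 fits its residuals on the closely related set, and
        # delta_model_3 fits the remaining residuals on the experiment subset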
print('delta model is training...')
model_1.fit(related_train_x_1,related_train_y_1)
external_y_pred_1 = model_1.predict(external_x)
pred_related_2 = model_1.predict(related_train_x_2)
delta_related_2 = related_train_y_2 - pred_related_2
delta_model_2.fit(related_train_x_2,delta_related_2)
external_y_pred_2 = delta_model_2.predict(external_x) + model_1.predict(external_x)
pred_append_y = model_1.predict(append_x)+delta_model_2.predict(append_x)
delta_append_y = append_y - pred_append_y
delta_model_3.fit(append_x,delta_append_y)
external_y_pred_3 = delta_model_3.predict(external_x) + delta_model_2.predict(external_x) + model_1.predict(external_x)
mae = mean_absolute_error(external_y,external_y_pred_3)*tag_scale
r2 = r2_score(external_y,external_y_pred_3)
print('+++delta learning+++MAE: %.3f, r2_score: %.3f'%(mae,r2))
return external_y_pred_1,external_y_pred_2,external_y_pred_3,external_y
def only_exp_learning(self,react_desc,tag,model,tag_scale=1,n_jobs=1):
append_x,append_y = react_desc[self.test_index_shuffle_1],tag[self.test_index_shuffle_1]
external_x,external_y = react_desc[self.test_index_shuffle_2],tag[self.test_index_shuffle_2]
print('model training...')
model.fit(append_x,append_y)
external_y_pred = model.predict(external_x)
mae = mean_absolute_error(external_y,external_y_pred)*tag_scale
r2 = r2_score(external_y,external_y_pred)
print('+++training with only experiment set+++MAE: %.3f, r2_score: %.3f'%(mae,r2))
return external_y_pred,external_y
def with_related_set_raw(self,react_desc,tag,model,tag_scale=1,n_jobs=1):
related_train_x_1,related_train_y_1 = react_desc[self.related_index_1],tag[self.related_index_1]
related_train_x_2,related_train_y_2 = react_desc[self.related_index_2],tag[self.related_index_2]
append_x,append_y = react_desc[self.test_index_shuffle_1],tag[self.test_index_shuffle_1]
external_x,external_y = react_desc[self.test_index_shuffle_2],tag[self.test_index_shuffle_2]
train_x = np.concatenate([related_train_x_1,related_train_x_2],axis=0)
train_y = np.concatenate([related_train_y_1,related_train_y_2],axis=0)
print('model training...')
model.fit(train_x,train_y)
external_y_pred = model.predict(external_x)
mae = mean_absolute_error(external_y,external_y_pred)*tag_scale
r2 = r2_score(external_y,external_y_pred)
print('+++training with experiment and related set+++MAE: %.3f, r2_score: %.3f'%(mae,r2))
return external_y_pred,external_y
class ML():
def __init__(self,related_set_1,related_set_2,target_sub_set):
self.r_1_x,self.r_1_y = related_set_1[0],related_set_1[1]
self.r_2_x,self.r_2_y = related_set_2[0],related_set_2[1]
self.t_x,self.t_y = target_sub_set[0],target_sub_set[1]
def hierarc_learn(self,model_ensemble=[],r_t=10):
        assert len(model_ensemble) == 3, 'model_ensemble should contain 3 models, otherwise please manually modify the "ML.hierarc_learn" method'
r_1_x,r_1_y = self.r_1_x,self.r_1_y
r_2_x,r_2_y = self.r_2_x,self.r_2_y
t_x,t_y = self.t_x,self.t_y
base_model = model_ensemble[0]
delta_model_1 = model_ensemble[1]
delta_model_2 = model_ensemble[2]
model_ensemble_list = []
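        # fit the three-stage ensemble r_t times and keep every run;
        # eval_models.eval_hierarchic_models later selects the best one by R^2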
print('model is training...')
for r in range(r_t):
print('++++ No. %2d++++'%r)
#base_model.random_state = r
#delta_model_1.random_state = r
#delta_model_2.random_state = r
base_model.fit(r_1_x,r_1_y)
r_2_p = base_model.predict(r_2_x)
delta_related_2 = r_2_y - r_2_p
delta_model_1.fit(r_2_x,delta_related_2)
t_p = base_model.predict(t_x)+delta_model_1.predict(t_x)
delta_t_y = t_y - t_p
delta_model_2.fit(t_x,delta_t_y)
model_ensemble_list.append([base_model,delta_model_1,delta_model_2])
return model_ensemble_list
def naive_multi_set_learn(self,model):
tot_x,tot_y = np.concatenate([self.r_1_x,self.r_2_x,self.t_x],axis=0),np.concatenate([self.r_1_y,self.r_2_y,self.t_y],axis=0)
print('model is training...')
model.fit(tot_x,tot_y)
return model
def naive_learn(self,model):
print('model is training...')
model.fit(self.t_x,self.t_y)
return model
class eval_models():
def __init__(self,x,y):
self.x = x
self.y = y
def eval_hierarchic_models(self,model_ensemble_list,scale=1):
total_r2 = []
total_mae = []
total_pred_y = []
for idx,model_ensemble in enumerate(model_ensemble_list):
pred_y = model_ensemble[0].predict(self.x)+\
model_ensemble[1].predict(self.x)+model_ensemble[2].predict(self.x)
tmp_r2 = r2_score(self.y,pred_y)
tmp_mae = mean_absolute_error(self.y,pred_y)*scale
total_pred_y.append(pred_y)
total_r2.append(tmp_r2)
total_mae.append(tmp_mae)
highest_r2_idx = np.argmax(total_r2)
print('+++hierarchical learning+++MAE: %.3f, r2_score: %.3f, %d'%(total_mae[highest_r2_idx],total_r2[highest_r2_idx],highest_r2_idx))
return total_pred_y[highest_r2_idx],model_ensemble_list[highest_r2_idx]
def eval_naive_model(self,model,scale=1):
pred_y = model.predict(self.x)
tmp_r2 = r2_score(self.y,pred_y)
tmp_mae = mean_absolute_error(self.y,pred_y)*scale
print('MAE: %.3f, r2_score: %.3f'%(tmp_mae,tmp_r2))
return pred_y
def draw4fig(ext_y_true,ext_y_pred_exp,ext_y_pred_raw,ext_y_pred_3,ext_y_pred_2,ext_y_pred_1,tag_scale=1,figsave_path=None):
fig = plt.figure(figsize=(10,8))
label_font_size = 13
title_fontsize = 15
ticks_font_size = 12
plt.subplot(221)
plt.scatter(ext_y_true*tag_scale,ext_y_pred_exp*tag_scale,c='lightcoral',alpha=0.8)
plt.xlabel('Observed $\Delta$$\Delta$$\itG$ (kcal/mol)',fontsize=label_font_size)
plt.ylabel('Predict $\Delta$$\Delta$$\itG$ (kcal/mol)',fontsize=label_font_size)
plt.text(0.2,tag_scale,'MAE: %.3f kcal/mol'%mean_absolute_error(ext_y_true*tag_scale,ext_y_pred_exp*tag_scale),fontsize=ticks_font_size)
plt.text(0.2,tag_scale-0.5,'${R^2}$: %.3f'%r2_score(ext_y_true*tag_scale,ext_y_pred_exp*tag_scale),fontsize=ticks_font_size)
plt.plot([0,tag_scale+0.2],[0,tag_scale+0.2],color='lightgrey')
plt.xticks(fontsize=ticks_font_size)
plt.yticks(fontsize=ticks_font_size)
plt.title('prediction performance of set A',fontsize=title_fontsize)
plt.subplot(222)
plt.scatter(ext_y_true*tag_scale,ext_y_pred_raw*tag_scale,c='lightblue',alpha=0.8)
plt.xlabel('Observed $\Delta$$\Delta$$\itG$ (kcal/mol)',fontsize=label_font_size)
plt.ylabel('Predict $\Delta$$\Delta$$\itG$ (kcal/mol)',fontsize=label_font_size)
plt.text(0.2,tag_scale,'MAE: %.3f kcal/mol'%mean_absolute_error(ext_y_true*tag_scale,ext_y_pred_raw*tag_scale),fontsize=ticks_font_size)
plt.text(0.2,tag_scale-0.5,'${R^2}$: %.3f'%r2_score(ext_y_true*tag_scale,ext_y_pred_raw*tag_scale),fontsize=ticks_font_size)
plt.plot([0,tag_scale+0.2],[0,tag_scale+0.2],color='lightgrey')
plt.xticks(fontsize=ticks_font_size)
plt.yticks(fontsize=ticks_font_size)
plt.title('prediction performance of set B',fontsize=title_fontsize)
plt.subplot(223)
plt.scatter(ext_y_true*tag_scale,ext_y_pred_3*tag_scale,c='yellowgreen',alpha=0.8)
plt.xlabel('Observed $\Delta$$\Delta$$\itG$ (kcal/mol)',fontsize=label_font_size)
plt.ylabel('Predict $\Delta$$\Delta$$\itG$ (kcal/mol)',fontsize=label_font_size)
plt.text(0.2,tag_scale,'MAE: %.3f kcal/mol'%mean_absolute_error(ext_y_true*tag_scale,ext_y_pred_3*tag_scale),fontsize=ticks_font_size)
plt.text(0.2,tag_scale-0.5,'${R^2}$: %.3f'%r2_score(ext_y_true*tag_scale,ext_y_pred_3*tag_scale),fontsize=ticks_font_size)
plt.plot([0,tag_scale+0.2],[0,tag_scale+0.2],color='lightgrey')
plt.xticks(fontsize=ticks_font_size)
plt.yticks(fontsize=ticks_font_size)
plt.title('prediction performance of set C',fontsize=title_fontsize)
plt.tight_layout()
plt.show()
if figsave_path != None:
fig.savefig(figsave_path,dpi=400)
def train_eval(info_npz,model,test_size=0.1,tag_scale=1,rand_seed=None,example_mode=False):
desc = info_npz['desc']
tag = info_npz['tag']
if example_mode:
train_idx = info_npz['train_idx']
test_idx = info_npz['test_idx']
train_x,test_x,train_y,test_y = desc[train_idx],desc[test_idx],tag[train_idx],tag[test_idx]
else:
np.random.seed(rand_seed)
train_x,test_x,train_y,test_y = train_test_split(desc,tag,test_size=test_size)
model.fit(train_x,train_y)
train_pred = model.predict(train_x)
test_pred = model.predict(test_x)
train_r2 = r2_score(train_y,train_pred)
train_mae = mean_absolute_error(train_y,train_pred)
test_r2 = r2_score(test_y,test_pred)
test_mae = mean_absolute_error(test_y,test_pred)
    print('train set MAE: %.3f, r2_score: %.3f'%(train_mae*tag_scale,train_r2))
    print('test set MAE: %.3f, r2_score: %.3f'%(test_mae*tag_scale,test_r2))
return train_y,train_pred,test_y,test_pred
def drawregfig(train_y,train_pred,test_y,test_pred,tag_scale,figsave_path=None):
fontsize=18
fig = plt.figure(figsize=(10,5))
plt.subplot(121)
plt.scatter(train_y*tag_scale,train_pred*tag_scale,c='darkviolet',alpha=0.2)
plt.plot([0,4.6],[0,4.6],c='deepskyblue')
plt.xlabel('Observed $\Delta$$\Delta$$\itG$ (kcal/mol)',fontsize=fontsize)
plt.xticks(fontsize=fontsize-3)
plt.ylabel('Predict $\Delta$$\Delta$$\itG$ (kcal/mol)',fontsize=fontsize)
plt.yticks(fontsize=fontsize-3)
plt.text(0.1,4.4,'MAE: %.3f kcal/mol'%(mean_absolute_error(train_y,train_pred)*tag_scale),fontsize=fontsize)
plt.text(0.1,4.0,'${R^2}$: %.3f'%r2_score(train_y,train_pred),fontsize=fontsize)
plt.subplot(122)
plt.scatter(test_y*tag_scale,test_pred*tag_scale,c='forestgreen',alpha=0.5)
plt.plot([0,4.6],[0,4.6],c='lightcoral')
plt.xlabel('Observed $\Delta$$\Delta$$\itG$ (kcal/mol)',fontsize=fontsize)
plt.xticks(fontsize=fontsize-3)
plt.ylabel('Predict $\Delta$$\Delta$$\itG$ (kcal/mol)',fontsize=fontsize)
plt.yticks(fontsize=fontsize-3)
plt.text(0.1,4.4,'MAE: %.3f kcal/mol'%(mean_absolute_error(test_y,test_pred)*tag_scale),fontsize=fontsize)
plt.text(0.1,4.0,'${R^2}$: %.3f'%r2_score(test_y,test_pred),fontsize=fontsize)
plt.tight_layout()
if figsave_path != None:
fig.savefig(figsave_path,dpi=400)
| 53.854251 | 165 | 0.598444 | 4,185 | 26,604 | 3.512067 | 0.0681 | 0.031569 | 0.077153 | 0.0745 | 0.649 | 0.592938 | 0.554837 | 0.542727 | 0.517689 | 0.506804 | 0 | 0.043892 | 0.269508 | 26,604 | 493 | 166 | 53.963489 | 0.712411 | 0.009999 | 0 | 0.420582 | 0 | 0 | 0.058721 | 0.009961 | 0 | 0 | 0 | 0 | 0.004474 | 1 | 0.053691 | false | 0 | 0.013423 | 0 | 0.114094 | 0.038031 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2257823e08ea7f45c9fa382382cc45a54e32c63 | 4,736 | py | Python | computation_migration/calc/code/draw.py | mengyingzhou/ipv6_firewall_computation_migration | 3fbc1f910e1fffdf2d5bb25eed631dffc6d7d842 | [
"MIT"
] | null | null | null | computation_migration/calc/code/draw.py | mengyingzhou/ipv6_firewall_computation_migration | 3fbc1f910e1fffdf2d5bb25eed631dffc6d7d842 | [
"MIT"
] | null | null | null | computation_migration/calc/code/draw.py | mengyingzhou/ipv6_firewall_computation_migration | 3fbc1f910e1fffdf2d5bb25eed631dffc6d7d842 | [
"MIT"
] | null | null | null | import matplotlib.pyplot as plt
import os
import numpy as np
import json
plt.rcParams["font.family"] = 'Arial Unicode MS' #显示中文标签
plt.rcParams['axes.unicode_minus'] = False #这两行需要手动设置
data_source = '../data/'
def his_1():
data_path = os.path.join(data_source, 'single.txt')
f = open(data_path, 'r')
y1 = []
y2 = []
y3 = []
for line in f:
if 'cost_1' in line:
y1.append(int(line.split(':')[1]))
if 'cost_2' in line:
y2.append(int(line.split(':')[1]))
if 'transfer_time' in line:
y3.append(int(line.split(':')[1]))
# print('1111', line)
f.close()
size = len(y1)
x = np.arange(size)
    total_width, n = 0.8, 2 # n = number of bar series; only n needs changing
width = total_width / n
x = x - (total_width - width) / 2
print(y1)
print(y2)
# print(y3)
    plt.rcParams['savefig.dpi'] = 300 # saved-image resolution
    plt.rcParams['figure.dpi'] = 300 # on-screen resolution
    # plt.rcParams['figure.figsize'] = (16.0, 9.0) # figure size
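    # legend labels: 本地运算 = local computation, 迁移运算 = offloaded computation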
plt.bar(x, y1, width=width, label='本地运算', color='red')
plt.bar(x + width, y2, width=width, label='迁移运算', color='deepskyblue')
# plt.bar(x + 2 * width, y3, width=width, label='useast4c', color='green')
plt.xticks()
    plt.legend(loc="upper left") # keep the legend from overlapping the bars
plt.ylabel('总时长/ms')
plt.xlabel('图片编号')
    # plt.rcParams['savefig.dpi'] = 300 # saved-image resolution
    # plt.rcParams['figure.dpi'] = 300 # on-screen resolution
# plt.title("measurement-latency")
plt.savefig('../figures/f1.pdf')
plt.close()
# plt.show()
def his_2():
data_path = os.path.join(data_source, 'single.txt')
f = open(data_path, 'r')
y1 = []
y2 = []
y3 = []
for line in f:
if 'cost_1' in line:
y1.append(int(line.split(':')[1]))
if 'cost_2' in line:
y2.append(int(line.split(':')[1]))
if 'transfer_time' in line:
y3.append(int(line.split(':')[1]))
# print('1111', line)
f.close()
p1 = []
p2 = []
for i in range(len(y2)):
p1.append(y3[i]/(y2[i] + y3[i]) * 100)
p2.append(y2[i]/(y2[i] + y3[i]) * 100)
index = np.arange(len(y2))
width = 0.4
    plt.rcParams['savefig.dpi'] = 300 # saved-image resolution
    plt.rcParams['figure.dpi'] = 300 # on-screen resolution
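    # legend labels: 传输时长占比 = transfer-time share, 运算时长占比 = compute-time share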
plt.bar(index, p1, width=width, label='传输时长占比', color='red')
plt.bar(index, p2, width=width, bottom=p1, label='运算时长占比')
plt.ylim(0, 100)
plt.xticks()
    plt.legend(loc="upper left") # keep the legend from overlapping the bars
plt.ylabel('百分比/%')
plt.xlabel('图片编号')
    # plt.rcParams['figure.figsize'] = (16.0, 9.0) # figure size
# plt.title("measurement-latency")
plt.savefig('../figures/f2.pdf')
# plt.show()
plt.close()
# print(p)
def his_3():
def add_text(x, y, data, fontsize=10):
for y0, data0 in zip(y, data):
plt.text(x, y0, round(data0, 1), fontsize=fontsize)
data_path = os.path.join(data_source, 'multi.txt')
f = open(data_path, 'r')
y1 = []
y2 = []
y3 = []
for line in f:
if 'result_local' in line:
y1.append(int(line.split(':')[1].split(' ')[-1]))
if '[' in line:
tmp = line.split('{')[1].split('}')[0]
# print(tmp)
tmp = json.loads('{' + tmp + '}')
y2.append(int(tmp["result"][1]))
if 'total_time_2' in line:
y3.append(int(line.split(':')[1]))
# print('1111', line)
f.close()
index = np.arange(3)
width = 0.6
# print(y1)
# tmp_data = np.array(y1)
category_names = ["图片" + str(x) for x in range(1, 6)]
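    # "图片N" = image N; x-axis items below: local single run,
    # offloaded single run, offloaded parallel run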
category_colors = plt.get_cmap('RdYlGn')(np.linspace(0.15, 0.85, 5))
# print(category_colors)
    plt.rcParams['savefig.dpi'] = 300 # saved-image resolution
    plt.rcParams['figure.dpi'] = 300 # on-screen resolution
# print(index)
sum = 0
accumulate = []
for i in range(5):
# print(category_colors[i])
plt.bar(index[0], y1[i], width=width, label=category_names[i], bottom=sum, color=category_colors[i])
sum += y1[i]
accumulate.append(sum - y1[i] / 2 - 3000)
# add_text(index[0], accumulate, np.arange(1, 6))
sum = 0
accumulate = []
for i in range(5):
plt.bar(index[1], y2[i], width=width, bottom=sum, color=category_colors[i])
sum += y2[i]
accumulate.append(sum - y2[i] / 2 - 3000)
# add_text(index[1], accumulate, np.arange(1, 6))
plt.bar(index[2], y3, width=width, label='迁移并行运算')
x_item = ['本地单次运算', '迁移单次运算', '迁移并行运算']
plt.xticks(index, x_item)
plt.ylabel('运算时长/ms')
plt.legend()
# plt.xlabel('图片编号')
plt.savefig('../figures/f3.pdf')
# plt.show()
plt.close()
if __name__ == "__main__":
# his_1()
# his_2()
his_3()
# 编号、大小、人数、运行时间 | 29.234568 | 108 | 0.544975 | 682 | 4,736 | 3.709677 | 0.225806 | 0.052174 | 0.035573 | 0.056917 | 0.506719 | 0.466403 | 0.444269 | 0.376285 | 0.345059 | 0.345059 | 0 | 0.051937 | 0.264147 | 4,736 | 162 | 109 | 29.234568 | 0.674032 | 0.162584 | 0 | 0.465517 | 0 | 0 | 0.105987 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.034483 | 0 | 0.068966 | 0.017241 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e228dc884f57dfc425adce95624c25fcfa4c9559 | 1,591 | py | Python | 21-custom-form/Python/test-model/test-model.py | ibnmasud/AI-102-AIEngineer | 21e8d90300c88ca49a19e8f212400996ae7261ee | [
"MIT"
] | 163 | 2021-01-27T14:07:36.000Z | 2022-03-28T23:55:36.000Z | 21-custom-form/Python/test-model/test-model.py | ibnmasud/AI-102-AIEngineer | 21e8d90300c88ca49a19e8f212400996ae7261ee | [
"MIT"
] | 93 | 2021-01-27T16:07:03.000Z | 2022-03-31T13:49:49.000Z | 21-custom-form/Python/test-model/test-model.py | ibnmasud/AI-102-AIEngineer | 21e8d90300c88ca49a19e8f212400996ae7261ee | [
"MIT"
] | 243 | 2021-01-28T16:16:55.000Z | 2022-03-30T03:21:00.000Z | import os
from dotenv import load_dotenv
from azure.core.exceptions import ResourceNotFoundError
from azure.ai.formrecognizer import FormRecognizerClient
from azure.ai.formrecognizer import FormTrainingClient
from azure.core.credentials import AzureKeyCredential
def main():
    try:
        # Get configuration settings
        load_dotenv()
        form_endpoint = os.getenv('FORM_ENDPOINT')
        form_key = os.getenv('FORM_KEY')

        # Create client using endpoint and key
        form_recognizer_client = FormRecognizerClient(form_endpoint, AzureKeyCredential(form_key))
        form_training_client = FormTrainingClient(form_endpoint, AzureKeyCredential(form_key))

        # Model ID from when you trained your model.
        model_id = os.getenv('MODEL_ID')

        # Test trained model with a new form
        with open('test1.jpg', "rb") as f:
            poller = form_recognizer_client.begin_recognize_custom_forms(
                model_id=model_id, form=f)
        result = poller.result()

        for recognized_form in result:
            print("Form type: {}".format(recognized_form.form_type))
            for name, field in recognized_form.fields.items():
                print("Field '{}' has label '{}' with value '{}' and a confidence score of {}".format(
                    name,
                    field.label_data.text if field.label_data else name,
                    field.value,
                    field.confidence
                ))
    except Exception as ex:
        print(ex)


if __name__ == '__main__':
    main() | 34.586957 | 102 | 0.635449 | 178 | 1,591 | 5.47191 | 0.426966 | 0.035934 | 0.026694 | 0.051335 | 0.13963 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00088 | 0.285355 | 1,591 | 46 | 103 | 34.586957 | 0.855761 | 0.089252 | 0 | 0 | 0 | 0 | 0.09072 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032258 | false | 0 | 0.193548 | 0 | 0.225806 | 0.096774 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e22c22ffbd9a0a2faef414a9d04083e1ef218ad7 | 7,830 | py | Python | cognito_emulator/userpool/views/oauth2.py | opencollector/cognito-emulator | 6351c3ec26425d57d58e6d6821d057058170e381 | [
"MIT"
] | 4 | 2020-11-17T09:29:03.000Z | 2021-07-28T22:08:52.000Z | cognito_emulator/userpool/views/oauth2.py | opencollector/cognito-emulator | 6351c3ec26425d57d58e6d6821d057058170e381 | [
"MIT"
] | null | null | null | cognito_emulator/userpool/views/oauth2.py | opencollector/cognito-emulator | 6351c3ec26425d57d58e6d6821d057058170e381 | [
"MIT"
] | 1 | 2020-11-21T12:30:06.000Z | 2020-11-21T12:30:06.000Z | # Copyright (c) 2020 Open Collector, Inc.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
import dataclasses
import logging
import re
import typing
from urllib.parse import urlencode, urlparse, urlunparse
from sqlalchemy.orm import exc as orm_exc # type: ignore
from starlette.authentication import requires
from starlette.endpoints import HTTPEndpoint
from starlette.exceptions import HTTPException
from starlette.requests import Request
from starlette.responses import JSONResponse, RedirectResponse, Response
from starlette.routing import Router
from starlette.status import HTTP_400_BAD_REQUEST, HTTP_404_NOT_FOUND
from ...db import session
from ...executor import async_
from ...middlewares import NOW_KEY, WithTemplates
from ...utils import authenticate_by
from ..models import Client, Event, User, UserPool
from ..oidc import (
    EVENT_KEY,
    ClientModelWrapper,
    OAuth2AuthenticationBackend,
    OpenIDCodeMixin,
    OpenIDConnectIdProvider,
    UserModelWrapper,
    get_client_at_authorization_endpoint,
    get_client_at_token_endpoint,
)

logger = logging.getLogger(__name__)
routes = Router()


def build_issuer_url(region: str, pool: UserPool) -> str:
    return f"https://cognito-idp.{region}.amazonaws.com/{pool.key}"


def get_id_provider(
    request: Request, client: ClientModelWrapper
) -> OpenIDConnectIdProvider:
    jwt_config = request.app.state.jwt_config
    if client is not None:
        jwt_config = dataclasses.replace(
            jwt_config,
            issuer=build_issuer_url(request.app.state.region, client.obj.pool),
        )
    return OpenIDConnectIdProvider(
        jwt_config,
        now=lambda: request.scope[NOW_KEY],
        uuidgen=request.app.state.uuidgen,
    )


def new_event(request: Request, pool: UserPool, type_: str) -> Event:
    event = Event(
        pool=pool,
        created_at=request.scope[NOW_KEY],  # type: ignore
        key=request.app.state.uuidgen(),
    )
    session.add(event)
    session.commit()
    return event


def user_for_session(
    session_: typing.Dict[str, typing.Any], pool: UserPool
) -> typing.Optional[User]:
    if pool is not None:
        per_pool_session = session_.get(pool.key, {})
    else:
        per_pool_session = session_
    user_id = per_pool_session.get("user_id")
    if user_id is None:
        return None
    try:
        return session.query(User).filter_by(id=user_id).one()
    except orm_exc.NoResultFound:
        return None


@routes.route("/authorize")
class AuthorizationEndpoint(HTTPEndpoint):
    async def get(self, request: Request) -> Response:
        logger.debug("authorization")
        client_wrap = await async_(get_client_at_authorization_endpoint)(request)
        if client_wrap is None:
            raise HTTPException(status_code=HTTP_404_NOT_FOUND)
        user: typing.Optional[User] = await async_(user_for_session)(
            request.session, client_wrap.obj.pool
        )
        logger.debug(f"user={user}")
        prompt = request.query_params.get("prompt")
        if prompt is None or prompt != "none":
            if user is None:
                return RedirectResponse(
                    request.url_for("pools:signin", pool=client_wrap.obj.pool.key)
                    + "?"
                    + urlencode({"back_to": str(request.url)})
                )
        request.scope[EVENT_KEY] = (
            await async_(new_event)(request, client_wrap.obj.pool, "authorization")
            if client_wrap is not None
            else None
        )
        id_provider = await async_(get_id_provider)(request, client_wrap)
        return await id_provider.create_authorization_response(request, user)


def only_scheme_and_host_part(uri: str) -> str:
    parsed_uri = urlparse(uri)
    return urlunparse(
        (
            parsed_uri.scheme,
            (
                f"{parsed_uri.hostname}"
                f"{':' if parsed_uri.port is not None else ''}"
                f"{parsed_uri.port if parsed_uri.port is not None else ''}"
            ),
            "",
            "",
            "",
            "",
        )
    )


def append_cors_header_if_valid(
    request: Request, response: Response, allowed_origins: typing.Set[str]
) -> Response:
    origin = request.headers.get("Origin")
    if origin is None:
        return response
    try:
        validated_origin = only_scheme_and_host_part(origin)
    except ValueError:
        logger.info(f"failed to parse Origin header: {origin}")
        return response
    if validated_origin in allowed_origins:
        response.headers["Access-Control-Allow-Origin"] = validated_origin
        response.headers["Vary"] = ", ".join(
            c for c in re.split(r"\s+,\s+", response.headers.get("Vary", "")) if c != ""
        )
    return response


@routes.route("/token")
class TokenEndpoint(HTTPEndpoint):
    async def post(self, request: Request) -> Response:
        logger.debug("token")
        client_wrap = await async_(get_client_at_token_endpoint)(request)
        request.scope[EVENT_KEY] = (
            await async_(new_event)(request, client_wrap.obj.pool, "token")
            if client_wrap is not None
            else None
        )
        id_provider = await async_(get_id_provider)(request, client_wrap)
        resp = await id_provider.create_token_response(request)
        if client_wrap is None:
            return resp
        return append_cors_header_if_valid(
            request,
            resp,
            {only_scheme_and_host_part(uri) for uri in client_wrap.obj.redirect_uris},
        )


@routes.route("/userInfo")
@authenticate_by(OAuth2AuthenticationBackend())
class UserInfoEndpoint(HTTPEndpoint):
    @requires(["openid", "email", "profile"])
    async def get(self, request):
        return JSONResponse(
            OpenIDCodeMixin.generate_user_info(
                None, UserModelWrapper(request.user.obj), request.auth.scopes
            )
        )


class LogoutEndpoint(HTTPEndpoint):
    async def get(self, request):
        client_id = request.query_params.get("client_id")
        if client_id is None:
            raise HTTPException(status_code=HTTP_400_BAD_REQUEST)
        try:
            client = await async_(
                session.query(Client).filter_by(oauth2_client_id=client_id).one
            )()
        except orm_exc.NoResultFound as e:
            raise HTTPException(status_code=HTTP_404_NOT_FOUND) from e
        logout_uri = request.query_params.get("logout_uri")
        if logout_uri is not None:
            if logout_uri not in client.logout_uris:
                raise HTTPException(status_code=HTTP_400_BAD_REQUEST)
        return typing.cast(WithTemplates, request).templates(
            "logout.html",
            context={"pool": client.pool, "client": client, "back_to": logout_uri},
        )
| 35.429864 | 88 | 0.671392 | 957 | 7,830 | 5.31139 | 0.281087 | 0.025575 | 0.012394 | 0.011017 | 0.21385 | 0.18316 | 0.11568 | 0.101121 | 0.055479 | 0.055479 | 0 | 0.0042 | 0.239847 | 7,830 | 220 | 89 | 35.590909 | 0.849798 | 0.138697 | 0 | 0.159091 | 0 | 0 | 0.06501 | 0.007141 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034091 | false | 0 | 0.107955 | 0.005682 | 0.255682 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
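As a reference point, here is a small hand-written illustration (not part of the repository) of the Origin normalization that append_cors_header_if_valid in the file above relies on; the URLs are made-up examples.

# Assumes only_scheme_and_host_part from the module above is importable.
assert only_scheme_and_host_part('https://app.example.com:8443/cb?x=1') == 'https://app.example.com:8443'
assert only_scheme_and_host_part('http://localhost/path') == 'http://localhost'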
e22c2608e9d7b603ab0ab3546b2f860fe1b72dbd | 10,671 | py | Python | openml_speed_dating_pipeline_steps/openml_speed_dating_pipeline_steps.py | benman1/OpenML-Speed-Dating | db76372411eaf18d10513a3dd434a3414f2eda9a | [
"MIT"
] | 1 | 2020-12-17T20:25:13.000Z | 2020-12-17T20:25:13.000Z | openml_speed_dating_pipeline_steps/openml_speed_dating_pipeline_steps.py | benman1/OpenML-Speed-Dating | db76372411eaf18d10513a3dd434a3414f2eda9a | [
"MIT"
] | null | null | null | openml_speed_dating_pipeline_steps/openml_speed_dating_pipeline_steps.py | benman1/OpenML-Speed-Dating | db76372411eaf18d10513a3dd434a3414f2eda9a | [
"MIT"
] | null | null | null | """This module implements the pipeline steps needed to classify partner choices
in the OpenML Speed Dating challenge."""
from functools import lru_cache
import operator

from joblib import Parallel, delayed
import numpy as np
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.impute import SimpleImputer
import category_encoders.utils as util


class RangeTransformer(BaseEstimator, TransformerMixin):
    '''
    A custom transformer for ranges.

    Parameters
    ----------
    range_features : list[str] or None
        This specifies the column names with the ranges. If None,
        all features will be encoded. This is important so this
        transformer will work with sklearn's ColumnTransformer.
    suffix : this determines how we will rename the transformed features.

    Attributes
    ----------
    range_features : list[str]
        Here we store the columns with range features.
    '''
    def __init__(self, range_features=None, suffix='_range/mean', n_jobs=-1):
        assert isinstance(range_features, list) or range_features is None
        self.range_features = range_features
        self.suffix = suffix
        self.n_jobs = n_jobs

    def fit(self, X, y=None):
        '''Nothing to do here
        '''
        return self

    def transform(self, X, y=None):
        '''apply the transformation

        Parameters:
        -----------
        X : array-like; either numpy array or pandas dataframe.
        '''
        X = util.convert_input(X)
        if self.range_features is None:
            self.range_features = list(X.columns)
        range_data = pd.DataFrame(index=X.index)
        for col in self.range_features:
            range_data[str(col) + self.suffix] = pd.to_numeric(
                self._vectorize(X[col])
            )
        self.feature_names = list(range_data.columns)
        return range_data

    def _vectorize(self, s):
        return Parallel(n_jobs=self.n_jobs)(
            delayed(self._encode_range)(x) for x in s
        )

    @staticmethod
    @lru_cache(maxsize=32)
    def _encode_range(range_str):
        splits = range_str[1:-1].split('-')
        range_max = float(splits[-1])
        range_min = float('-'.join(splits[:-1]))
        return sum([range_min, range_max]) / 2.0

    def get_feature_names(self):
        '''Array mapping from feature integer indices to feature name
        '''
        return self.feature_names


class NumericDifferenceTransformer(BaseEstimator, TransformerMixin):
    '''
    A custom transformer that calculates differences between
    numeric features.

    Parameters
    ----------
    features : list[str] or None
        This specifies the column names with the numerical features. If None,
        all features will be encoded. This is important so this
        transformer will work with sklearn's ColumnTransformer.
    suffix : this determines how we will rename the transformed features.
    op : this is the operation to calculate between the two columns.
        This is minus (operator.sub) by default.

    Attributes
    ----------
    features : list[str]
        Here we store the columns with numerical features.

    Example
    -------
    >>> from sklearn import datasets
    >>> import pandas as pd
    >>> iris = datasets.load_iris()
    >>> data = pd.DataFrame(data=iris.data, columns=iris.feature_names)
    >> numeric_difference = pipeline_steps.NumericDifferenceTransformer()
    >>> numeric_difference.transform(data).columns
    Index(['sepal length (cm)_sepal width (cm)_numdist',
           'sepal length (cm)_petal length (cm)_numdist',
           'sepal length (cm)_petal width (cm)_numdist',
           'sepal width (cm)_petal length (cm)_numdist',
           'sepal width (cm)_petal width (cm)_numdist',
           'petal length (cm)_petal width (cm)_numdist'],
          dtype='object')
    '''
    def __init__(self, features=None,
                 suffix='_numdist', op=operator.sub, n_jobs=-1
                 ):
        assert isinstance(features, list) or features is None
        self.features = features
        self.suffix = suffix
        self.op = op
        self.n_jobs = n_jobs

    def fit(self, X, y=None):
        '''Determine the numeric features and the resulting feature names.
        '''
        X = util.convert_input(X)
        if self.features is None:
            self.numeric_features = list(
                X.select_dtypes(include='number').columns
            )
            self.features = list(X.columns)
        else:
            self.numeric_features = self.features
        feature_pairs = self._feature_pairs()
        columns = Parallel(n_jobs=self.n_jobs)(
            delayed(self._col_name)(col1, col2)
            for col1, col2 in feature_pairs
        )
        columns.extend(self.features)
        self.feature_names = columns
        return self

    def _col_name(self, col1, col2):
        return str(col1) + '_' + str(col2) + self.suffix

    def _feature_pairs(self):
        feature_pairs = []
        for i, col1 in enumerate(self.numeric_features[:-1]):
            for col2 in self.numeric_features[i+1:]:
                feature_pairs.append((col1, col2))
        return feature_pairs

    def transform(self, X, y=None):
        '''apply the transformation

        Parameters:
        -----------
        X : array-like; either numpy array or pandas dataframe.
        '''
        X = util.convert_input(X)
        feature_pairs = self._feature_pairs()
        data_cols = Parallel(n_jobs=self.n_jobs)(
            delayed(self.op)(X[col1], X[col2])
            for col1, col2 in feature_pairs
        )
        # to keep all features including original numeric ones:
        data_cols.extend([
            X[col] for col in self.features
        ])
        data = pd.concat(data_cols, axis=1)
        data.rename(
            columns={i: col for i, col in enumerate(self.feature_names)},
            inplace=True, copy=False
        )
        data.index = X.index
        return data

    def get_feature_names(self):
        '''Array mapping from feature integer indices to feature name
        '''
        return self.feature_names


class FloatTransformer(BaseEstimator, TransformerMixin):
    '''
    A custom transformer for floats encoded as strings.

    NOTE: I consider this transformer obsolete, since
    I am using the OpenML version of the dataset.

    Parameters
    ----------
    float_features : list[str] or None
        This specifies the column names with the floats that are encoded as
        strings.
    suffix : this determines how we will rename the transformed features.

    Attributes
    ----------
    float_features : list[str] or None
        Here we store the columns with float features.
    '''
    def __init__(self, float_features=[], suffix='_asfloat'):
        assert isinstance(float_features, list)
        self.float_features = float_features
        self.suffix = suffix

    def fit(self, X, y=None):
        '''Nothing to do here
        '''
        return self

    def transform(self, X, y=None):
        '''apply the transformation

        Parameters:
        -----------
        X : array-like; either numpy array or pandas dataframe.
        '''
        X = util.convert_input(X)
        if self.float_features is None:
            self.float_features = list(X.columns)
        float_data = pd.DataFrame()
        for col in self.float_features:
            float_data[str(col) + self.suffix] = X[col].apply(
                lambda x: float(x)
                if x != '?' else np.NaN
            ).astype(float)
        self.feature_names = list(float_data.columns)
        return float_data

    def get_feature_names(self):
        '''Array mapping from feature integer indices to feature name
        '''
        return self.feature_names


class PandasPicker(BaseEstimator, TransformerMixin):
    '''
    A convenience class to use pandas dataframes with a pipeline.

    Parameters
    ----------
    features : list[str]
        This specifies the column names that we want to use.
    suffix : this determines how we will rename the features.
        Empty string by default.

    Attributes
    ----------
    features : list[str]
        Here we store the column names that we use.
    '''
    def __init__(self, features=[], suffix=''):
        assert isinstance(features, list)
        self.features = features
        self.suffix = suffix

    def fit(self, X, y=None):
        '''Nothing to do here
        '''
        return self

    def transform(self, X, y=None):
        '''apply the transformation

        Parameters:
        -----------
        X : array-like; either numpy array or pandas dataframe.
        '''
        X = util.convert_input(X)
        if self.features is None:
            self.features = list(X.columns)
        new_data = pd.DataFrame()
        for col in self.features:
            new_data[str(col) + self.suffix] = X[col]
        return new_data

    def get_feature_names(self):
        '''Array mapping from feature integer indices to feature name
        '''
        return self.features


class PandasPicker2(PandasPicker):
    '''
    working around this issue:
    https://github.com/openml/OpenML/issues/340
    Found a second occurence of component...
    '''


class SimpleImputerWithFeatureNames(SimpleImputer):
    '''Thin wrapper around the SimpleImputer that provides get_feature_names()
    '''
    def __init__(self, missing_values=np.nan, strategy="mean",
                 fill_value=None, verbose=0, copy=True):
        super(SimpleImputerWithFeatureNames, self).__init__(
            missing_values, strategy, fill_value, verbose,
            copy, add_indicator=True
        )

    def fit(self, X, y=None):
        super().fit(X, y)
        if isinstance(X, (pd.DataFrame, pd.Series)):
            self.features = list(X.columns)
        else:
            self.features = list(range(X.shape[1]))
        return self

    def transform(self, X):
        """Impute all missing values in X. Returns a DataFrame if given
        a DataFrame.

        Parameters
        ----------
        X : {array-like, sparse matrix}, shape (n_samples, n_features)
            The input data to complete.
        """
        X2 = super().transform(X)
        if isinstance(X, (pd.DataFrame, pd.Series)):
            return pd.DataFrame(
                data=X2,
                columns=self.get_feature_names()
            )
        else:
            return X2

    def get_features_with_missing(self):
        return [self.features[f] for f in self.indicator_.features_]

    def get_feature_names(self):
        return self.features
| 31.293255 | 79 | 0.608659 | 1,274 | 10,671 | 4.967818 | 0.187598 | 0.036025 | 0.008532 | 0.01422 | 0.493759 | 0.43514 | 0.372413 | 0.315532 | 0.293885 | 0.285827 | 0 | 0.005028 | 0.291819 | 10,671 | 340 | 80 | 31.385294 | 0.832473 | 0.36051 | 0 | 0.320755 | 0 | 0 | 0.006583 | 0 | 0 | 0 | 0 | 0 | 0.025157 | 1 | 0.157233 | false | 0 | 0.050314 | 0.025157 | 0.377358 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
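A minimal usage sketch (mine, not part of the module above) of wiring RangeTransformer and NumericDifferenceTransformer into a ColumnTransformer; the toy column names and values are hypothetical stand-ins for the Speed Dating features, and sklearn plus category_encoders are assumed installed.

import pandas as pd
from sklearn.compose import ColumnTransformer

# Hypothetical two-row frame: one bracketed range column, two numeric columns.
X = pd.DataFrame({
    'age_range': ['[20-25]', '[25-30]'],
    'income_a': [1.0, 2.0],
    'income_b': [3.0, 5.0],
})
ct = ColumnTransformer([
    ('ranges', RangeTransformer(), ['age_range']),      # '[20-25]' -> 22.5
    ('diffs', NumericDifferenceTransformer(), ['income_a', 'income_b']),
])
print(ct.fit_transform(X))  # range means, pairwise differences, originals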
e22fc237bf632bd9754fd9b3152633f46c2df55e | 2,061 | py | Python | Semester 6/MA 374 (Financial Engg. Lab)/Lab 9/180123062_AB_q2.py | Imperial-lord/IITG | df4233905d2954511d5b16666f0d44cc38b9df90 | [
"MIT"
] | 4 | 2021-03-02T03:58:55.000Z | 2022-03-28T13:38:05.000Z | Semester 6/MA 374 (Financial Engg. Lab)/Lab 9/180123062_AB_q2.py | Imperial-lord/IITG | df4233905d2954511d5b16666f0d44cc38b9df90 | [
"MIT"
] | null | null | null | Semester 6/MA 374 (Financial Engg. Lab)/Lab 9/180123062_AB_q2.py | Imperial-lord/IITG | df4233905d2954511d5b16666f0d44cc38b9df90 | [
"MIT"
] | 4 | 2021-02-04T17:44:23.000Z | 2022-03-28T13:38:09.000Z | # Question 02, Lab 09
# AB Satyaprkash, 180123062
# imports
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
# functions
def plotGraphs(fileName):
    df = pd.read_csv(fileName)
    # extract and rename columns
    df = df[['Expiry', 'Strike Price', 'Close']]
    mapper = dict(
        zip(df.columns, ['Maturity', 'Strike Price', 'Option Price']))
    df = df.rename(columns=mapper)
    df.iloc[:, 1:] = df.iloc[:, 1:].astype(float)
    print(df)
    plotTitle = fileName.split('/')[-1][:-4]

    # 2D Plot of Option Price vs Maturity
    df.plot(x='Maturity', y='Option Price', kind='scatter',
            title=f'Option Price vs Maturity for {plotTitle}', rot=45, s=0.6, figsize=(8, 8), color='blue')
    plt.savefig(f'Plots/Question 2/{plotTitle}_1.png')
    plt.show()

    # 2D Plot of Option Price vs Strike Price
    df.plot(x='Strike Price', y='Option Price', kind='scatter',
            title=f'Option Price vs Strike Price for {plotTitle}', s=0.6, figsize=(8, 8), color='red')
    plt.savefig(f'Plots/Question 2/{plotTitle}_2.png')
    plt.show()

    df['Maturity'] = df['Maturity'].astype('datetime64[ns]')
    fig = plt.figure(figsize=(10, 10))
    ax = Axes3D(fig)
    ax.plot_trisurf(df['Maturity'], df['Strike Price'],
                    df['Option Price'])
    ax.set_title(f'3D Plot for {plotTitle}')
    ax.set_xlabel('Maturity in Nanoseconds')
    ax.set_ylabel('Strike Price')
    ax.set_zlabel('Option Price')
    plt.savefig(f'Plots/Question 2/{plotTitle}_3.png')
    plt.show()

# program body
fileNameArray = ['Stock Options/Index/INDEX_CE.csv', 'Stock Options/Index/INDEX_PE.csv', 'Stock Options/GAIL/GAIL_CE.csv', 'Stock Options/GAIL/GAIL_PE.csv',
                 'Stock Options/IOC/IOC_CE.csv', 'Stock Options/IOC/IOC_PE.csv', 'Stock Options/ONGC/ONGC_CE.csv', 'Stock Options/ONGC/ONGC_PE.csv', 'Stock Options/TATAMOTORS/TATAMOTORS_CE.csv',
                 'Stock Options/TATAMOTORS/TATAMOTORS_PE.csv']
for fileName in fileNameArray:
    plotGraphs(fileName)
| 36.157895 | 194 | 0.657448 | 298 | 2,061 | 4.479866 | 0.345638 | 0.089888 | 0.101124 | 0.06367 | 0.365543 | 0.196255 | 0.164794 | 0.062921 | 0.062921 | 0.062921 | 0 | 0.026834 | 0.186317 | 2,061 | 56 | 195 | 36.803571 | 0.769231 | 0.086851 | 0 | 0.081081 | 0 | 0 | 0.407368 | 0.14095 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027027 | false | 0 | 0.108108 | 0 | 0.135135 | 0.027027 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
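The positional column-renaming idiom in plotGraphs above generalizes; a minimal sketch with a hypothetical one-row frame (values invented for illustration).

import pandas as pd

df = pd.DataFrame({'Expiry': ['25-Feb-2021'], 'Strike Price': [100.0], 'Close': [3.2]})
mapper = dict(zip(df.columns, ['Maturity', 'Strike Price', 'Option Price']))
df = df.rename(columns=mapper)  # columns are renamed positionally via the zip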
e230749534777e0ccbaed6e1bc443c35cb0deff6 | 714 | py | Python | test/test_nstx/__init__.py | Fusion-Data-Platform/fdp | d87a52207238f168ed69b9f96dc8f20f4481366d | [
"MIT"
] | 10 | 2015-12-18T22:38:07.000Z | 2020-03-02T09:15:50.000Z | test/test_nstx/__init__.py | Fusion-Data-Platform/fdp | d87a52207238f168ed69b9f96dc8f20f4481366d | [
"MIT"
] | 14 | 2015-12-07T16:41:48.000Z | 2019-01-18T17:48:55.000Z | test/test_nstx/__init__.py | Fusion-Data-Platform/fdp | d87a52207238f168ed69b9f96dc8f20f4481366d | [
"MIT"
] | 5 | 2016-05-20T17:35:23.000Z | 2019-01-17T19:00:06.000Z | from __future__ import print_function
import socket
from fdp.lib import datasources
# some valid shots for testing
shotlist = [204620, 204551, 142301, 204670, 204956, 204990]
def server_connection():
    machine = datasources.canonicalMachineName('nstx')
    servers = [datasources.MDS_SERVERS[machine],
               datasources.LOGBOOK_CREDENTIALS[machine]]
    for server in servers:
        hostname = server['hostname']
        port = server['port']
        try:
            s = socket.create_connection((hostname, port), 3)
            s.close()
        except Exception as ex:
            print('Exception for host {} on port {}: {}'.format(hostname, port, ex))
            return False
    return True
| 31.043478 | 84 | 0.648459 | 78 | 714 | 5.820513 | 0.628205 | 0.079295 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069549 | 0.254902 | 714 | 22 | 85 | 32.454545 | 0.783835 | 0.039216 | 0 | 0 | 0 | 0 | 0.076023 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.166667 | 0 | 0.333333 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e230cac0e9bca0e9832f66ff06f35ec5bb98d139 | 15,731 | py | Python | CUB/plots.py | TeodorChiaburu/ConceptBottleneck | a9f90743f9e3fff52446dfe0a0f256bc32033e4a | [
"MIT"
] | 75 | 2020-07-12T06:32:18.000Z | 2022-03-10T11:40:08.000Z | CUB/plots.py | TeodorChiaburu/ConceptBottleneck | a9f90743f9e3fff52446dfe0a0f256bc32033e4a | [
"MIT"
] | 13 | 2020-07-13T08:33:05.000Z | 2022-03-30T08:25:14.000Z | CUB/plots.py | TeodorChiaburu/ConceptBottleneck | a9f90743f9e3fff52446dfe0a0f256bc32033e4a | [
"MIT"
] | 16 | 2020-07-15T03:23:04.000Z | 2022-02-19T16:34:21.000Z |
import numpy as np
import matplotlib.pyplot as plt
r = {
    # Normal experiments
    'Independent': np.genfromtxt('IndependentModel__WithValSigmoid/results.txt'),
    'Sequential': np.genfromtxt('SequentialModel__WithVal/results.txt'),
    'Sequential_ConceptsBreakdown': np.genfromtxt('SequentialModel__WithVal/concepts.txt'),
    'Joint0.001': np.genfromtxt('Joint0.001Model/results.txt'),
    'Joint0.01': np.genfromtxt('Joint0.01Model/results.txt'),
    'Joint0.01_ConceptsBreakdown': np.genfromtxt('Joint0.01Model/concepts.txt'),
    'Joint0.1': np.genfromtxt('Joint0.1Model/results.txt'),
    'Joint1': np.genfromtxt('Joint1Model/results.txt'),
    'Standard': np.genfromtxt('Joint0Model/results.txt'),
    'Standard Probe': np.genfromtxt('Joint0Model_LinearProbe/results.txt'),
    'Standard No Bottleneck': np.genfromtxt('StandardNoBNModel/results.txt'),
    'Multitask': np.genfromtxt('MultitaskModel/results.txt'),
    # Data efficiency experiments
    'StandardModel_DataEffN1': np.genfromtxt('Joint0Model_DataEffN1_Result/results.txt'),
    'StandardModel_DataEffN3': np.genfromtxt('Joint0Model_DataEffN3_Result/results.txt'),
    'StandardModel_DataEffN7': np.genfromtxt('Joint0Model_DataEffN7_Result/results.txt'),
    'StandardModel_DataEffN10': np.genfromtxt('Joint0Model_DataEffN10_Result/results.txt'),
    'StandardModel_DataEffN15': np.genfromtxt('Joint0Model_DataEffN15_Result/results.txt'),
    'Joint0.01Model_DataEffN1': np.genfromtxt('Joint0.01Model_DataEffN1_Result/results.txt'),
    'Joint0.01Model_DataEffN3': np.genfromtxt('Joint0.01Model_DataEffN3_Result/results.txt'),
    'Joint0.01Model_DataEffN7': np.genfromtxt('Joint0.01Model_DataEffN7_Result/results.txt'),
    'Joint0.01Model_DataEffN10': np.genfromtxt('Joint0.01Model_DataEffN10_Result/results.txt'),
    'Joint0.01Model_DataEffN15': np.genfromtxt('Joint0.01Model_DataEffN15_Result/results.txt'),
    'IndependentModel_DataEffN1': np.genfromtxt('IndependentModel_WithVal_DataEffN1_Result/results.txt'),
    'IndependentModel_DataEffN3': np.genfromtxt('IndependentModel_WithVal_DataEffN3_Result/results.txt'),
    'IndependentModel_DataEffN7': np.genfromtxt('IndependentModel_WithVal_DataEffN7_Result/results.txt'),
    'IndependentModel_DataEffN10': np.genfromtxt('IndependentModel_WithVal_DataEffN10_Result/results.txt'),
    'IndependentModel_DataEffN15': np.genfromtxt('IndependentModel_WithVal_DataEffN15_Result/results.txt'),
    'SequentialModel_DataEffN1': np.genfromtxt('SequentialModel_WithVal_DataEffN1_Result/results.txt'),
    'SequentialModel_DataEffN3': np.genfromtxt('SequentialModel_WithVal_DataEffN3_Result/results.txt'),
    'SequentialModel_DataEffN7': np.genfromtxt('SequentialModel_WithVal_DataEffN7_Result/results.txt'),
    'SequentialModel_DataEffN10': np.genfromtxt('SequentialModel_WithVal_DataEffN10_Result/results.txt'),
    'SequentialModel_DataEffN15': np.genfromtxt('SequentialModel_WithVal_DataEffN15_Result/results.txt'),
    # TTI experiments
    'TTI_Joint0.01Model': np.genfromtxt('TTI__Joint0.01Model/results.txt'),
    'TTI_Joint0.01SigmoidModel': np.genfromtxt('TTI__Joint0.01SigmoidModel/results.txt'),
    'TTI_SequentialModel': np.genfromtxt('TTI__SequentialModel_WithVal/results.txt'),
    'TTI_IndependentModel': np.genfromtxt('TTI__IndependentModel_WithValSigmoid/results.txt'),
    # Adversarial experiments
    'StandardAdversarialModel': np.genfromtxt('Joint0AdversarialModel/results.txt'),
    'Joint0.01AdversarialModel': np.genfromtxt('Joint0.01AdversarialModel/results.txt'),
    'SequentialAdversarialModel': np.genfromtxt('SequentialAdversarialModel/results.txt'),
    'IndependentAdversarialModel': np.genfromtxt('IndependentAdversarialSigmoidModel/results.txt'),
}
# =============================================================================================
# ======================================== Table 1 & 2 ========================================
# =============================================================================================
exps = ['Independent', 'Sequential', 'Joint0.01', 'Standard', 'Standard Probe', 'Standard No Bottleneck', 'Multitask']
print('Table 1 & 2')
output_string = ' y Error | c Error \n'
for exp in exps:
    if r[exp][0] >= 0:
        output_string += '%30s %.3f +- %.3f | ' % (exp, r[exp][0], r[exp][1] * 2)
    else:
        output_string += '%30s - | ' % exp
    if r[exp][2] >= 0:
        output_string += '%.3f +- %.3f\n' % (r[exp][2], r[exp][3] * 2)
    else:
        output_string += ' - \n'
print(output_string)
# =============================================================================================
# ========================================= Figure 2 ==========================================
# =============================================================================================
SMALL_SIZE = 11
MEDIUM_SIZE = 12
BIGGER_SIZE = 16
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=BIGGER_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=BIGGER_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=MEDIUM_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=MEDIUM_SIZE+1) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
fig, axes = plt.subplots(nrows=2, ncols=3, figsize=(18, 9), dpi=300)
# ========= y vs C performance =========
# ---- OAI Data ----
marker_style = { 'marker': 's', 'facecolors': 'none', 'edgecolors': '#1f77b4' }
data = [('Standard' , 1.000, 0.440),
        ('Joint, $\lambda$ = 0.001', 0.829, 0.440),
        ('Joint, $\lambda$ = 0.01' , 0.595, 0.441),
        ('Independent' , 0.529, 0.435),
        ('Joint, $\lambda$ = 0.1' , 0.548, 0.432),
        ('Joint, $\lambda$ = 1' , 0.543, 0.418),
        ('Sequential' , 0.527, 0.418),]
colors = ['#9467bd', '#ff7f0e', '#ff7f0e', '#d62728', '#ff7f0e', '#ff7f0e', '#2ca02c']
x_unit, y_unit = 0.0125, 0.00125
delta_oai = [(-4,-1.7), (-6,-1.75), (1.3,-0.2), (1.4,-0.25), (1.4,-0.3), (1.2,-0.35), (-4.0,-1.7)]
subplt = axes[0, 0]
line = [d for i, d in enumerate(data) if i in [0, 2, 3, 6]]
x_fill_1 = [line[-1][1], line[-1][1], 1.05]
y_fill_1 = [line[-1][2], line[-1][2] + 0.5, line[-1][2] + 0.5]
y_fill_2 = [line[-1][2], line[-1][2], line[-1][2]]
subplt.set_ylim(bottom=0.415, top=0.445)
subplt.set_xlim(left=0.47, right=1.05)
subplt.fill_between(x_fill_1, y_fill_1, y_fill_2, where=y_fill_2 <= y_fill_1, facecolor='#7f7f7f', alpha=0.1)
subplt.scatter([d[1] for d in data], [d[2] for d in data], color=colors, **marker_style)
for (name, x, y), (del_x, del_y) in zip(data, delta_oai):
    del_x, del_y = del_x * x_unit, del_y * y_unit
    subplt.annotate(name, (x + del_x, y + del_y))
subplt.set_title('OAI')
subplt.set_xlabel('Concept ($c$) RMSE')
subplt.set_ylabel('Task ($y$) RMSE')
# ---- CUB Data ----
data = [('Standard' , 0.5, r['Standard'][0]),
        ('Joint, $\lambda$ = 0.001', r['Joint0.001'][2], r['Joint0.001'][0]),
        ('Joint, $\lambda$ = 0.01' , r['Joint0.01'][2], r['Joint0.01'][0]),
        ('Joint, $\lambda$ = 0.1' , r['Joint0.1'][2], r['Joint0.1'][0]),
        ('Sequential' , r['Sequential'][2], r['Sequential'][0]),
        ('Joint, $\lambda$ = 1' , r['Joint1'][2], r['Joint1'][0]),
        ('Independent' , r['Independent'][2], r['Independent'][0])]
colors = ['#9467bd', '#ff7f0e', '#ff7f0e', '#ff7f0e', '#2ca02c', '#ff7f0e', '#d62728']
CUB_SCALE = 100.
x_unit, y_unit = 2.5/CUB_SCALE, 0.25/CUB_SCALE
delta_cub = [(-3.9,-0.5), (0.8,-0.4), (0.3,1.3), (0.5,-0.8), (0.6,-0.7), (0.6,-0.2), (0.6,-0.8)]
subplt = axes[1, 0]
subplt.scatter([d[1] for d in data], [d[2] for d in data], color=colors, **marker_style)
x_fill_1 = [x/CUB_SCALE for x in [3.12, 3.23, 14.21, 52]]
y_fill_1 = [x/CUB_SCALE for x in [25.5, 25.5, 25.5, 25.5]]
y_fill_2 = [x/CUB_SCALE for x in [24.3, 19.9, 17.0, 17.1]]
subplt.set_ylim(bottom=16/CUB_SCALE, top=25.5/CUB_SCALE)
subplt.set_xlim(left=0, right=52/CUB_SCALE)
subplt.fill_between(x_fill_1, y_fill_1, y_fill_2, where=y_fill_2 <= y_fill_1, facecolor='#7f7f7f', alpha=0.1)
for (name, x, y), (del_x, del_y) in zip(data, delta_cub):
    del_x, del_y = del_x * x_unit, del_y * y_unit
    subplt.annotate(name, (x + del_x, y + del_y))
subplt.set_title('CUB')
subplt.set_xlabel('Concept ($c$) error')
subplt.set_ylabel('Task ($y$) error')
# ========= Counts vs A performance =========
# ---- OAI ----
bins = np.arange(0, 1.01, 0.1)
x = np.arange(len(bins)) # the bin locations
bar_width, bar_gap = 0.5, 0.1
colors = ['#ff7f0e', '#2ca02c']
subplt = axes[0, 1]
data = [('Joint', [0, 0, 0, 0, 0, 0, 0, 2.2, 6.8, 1., 0]),
        ('Sequential / Independent', [0, 0, 0, 0, 0, 0, 0, 2., 7., 1., 0])]
for i, d in enumerate(data):
    name, counts = d
    rects = subplt.bar(x + bar_width/2 + i * bar_width, counts, bar_width - bar_gap, color=colors[i], label=name)
# Add some text for labels, title and custom x-axis tick labels, etc.
subplt.set_xlabel('Pearson correlation')
subplt.set_ylabel('Average counts')
subplt.set_title('OAI')
subplt.set_xticks(x)
subplt.set_xticklabels(['%.1f' % b for b in bins])
subplt.set_xlim(left=0., right=10.)
subplt.legend()
# ---- CUB ----
subplt = axes[1, 1]
data = [('Joint', r['Joint0.01_ConceptsBreakdown']),
        ('Sequential / Independent', r['Sequential_ConceptsBreakdown'])]
xlabel, xticklabels = 'F1', ['%.1f' % b for b in bins]
for i, d in enumerate(data):
    name, counts = d
    rects = subplt.bar(x + bar_width/2 + i * bar_width, counts, bar_width - bar_gap, color=colors[i], label=name)
# Add some text for labels, title and custom x-axis tick labels, etc.
subplt.set_xlabel('F1')
subplt.set_ylabel('Average counts')
subplt.set_title('CUB')
subplt.set_xticks(x)
subplt.set_xticklabels(xticklabels)
subplt.set_xlim(left=0., right=10.)
subplt.legend()
# ========= Data efficiency =========
# ---- OAI ----
# ---- Seeded ----
data = [('Standard', '#9467bd', [0.5089946, 0.47227314, 0.4460999, 0.44069982]),
        ('Joint', '#ff7f0e', [0.46282497, 0.45027956, 0.43035713, 0.4180873]),
        ('Sequential', '#2ca02c', [0.4682201081476601, 0.4541488508626818, 0.43791661291392225, 0.4296507395092277]),
        ('Independent', '#d62728', [0.4548289, 0.4435927, 0.42583558, 0.4179975])]
x = [10, 20, 50, 100]
subplt = axes[0, 2]
for name, color, y in data:
    subplt.plot(x, y, marker='s', fillstyle='none', label=name, color=color)
subplt.set_title('OAI')
subplt.set_xlim(left=0, right=105)
subplt.legend(loc='upper right')
subplt.set_xlabel('Data proportion (%)')
subplt.set_ylabel('Task ($y$) RMSE')
subplt.yaxis.grid(True, linestyle='--')
# ---- CUB ----
data = [('Standard', '#9467bd', [r['StandardModel_DataEffN1'][0], r['StandardModel_DataEffN3'][0], r['StandardModel_DataEffN7'][0], r['StandardModel_DataEffN10'][0], r['StandardModel_DataEffN15'][0], r['Standard'][0]]),
        ('Joint', '#ff7f0e', [r['Joint0.01Model_DataEffN1'][0], r['Joint0.01Model_DataEffN3'][0], r['Joint0.01Model_DataEffN7'][0], r['Joint0.01Model_DataEffN10'][0], r['Joint0.01Model_DataEffN15'][0], r['Joint0.01'][0]]),
        ('Sequential', '#2ca02c', [r['SequentialModel_DataEffN1'][0], r['SequentialModel_DataEffN3'][0], r['SequentialModel_DataEffN7'][0], r['SequentialModel_DataEffN10'][0], r['SequentialModel_DataEffN15'][0], r['Sequential'][0]]),
        ('Independent', '#d62728', [r['IndependentModel_DataEffN1'][0], r['IndependentModel_DataEffN3'][0], r['IndependentModel_DataEffN7'][0], r['IndependentModel_DataEffN10'][0], r['IndependentModel_DataEffN15'][0], r['Independent'][0]])]
x = [3.33, 10, 23.36, 33.37, 50, 100]
subplt = axes[1, 2]
for name, color, y in data:
    subplt.plot(x, y, marker='s', fillstyle='none', label=name, color=color)
subplt.set_title('CUB')
subplt.set_xlim(left=0, right=105)
subplt.legend(loc='upper right')
subplt.set_xlabel('Data proportion (%)')
subplt.set_ylabel('Task ($y$) error')
subplt.yaxis.grid(True, linestyle='--')
plt.subplots_adjust(hspace=0.4)
plt.tight_layout()
plt.savefig('figure2.png')
# ===============================================================================================
# ======================================== Figure 4: TTI ========================================
# ===============================================================================================
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(18, 4), dpi=300)
# ---- OAI ----
data = [(r'Control', '#1f77b4',
         [0.441, 0.424, 0.411, 0.407, 0.418, 0.43 , 0.458, 0.459, 0.446, 0.456, 0.456]),
        ('Joint', '#ff7f0e',
         [0.418, 0.361, 0.306, 0.284, 0.271, 0.26 , 0.237, 0.235, 0.241, 0.245, 0.244]),
        ('Sequential', '#2ca02c',
         [0.418, 0.364, 0.304, 0.288, 0.262, 0.247, 0.231, 0.23 , 0.235, 0.233, 0.235]),
        ('Independent', '#d62728',
         [0.43 , 0.384, 0.3 , 0.282, 0.241, 0.203, 0.161, 0.16 , 0.16 , 0.159, 0.159])]
xs = range(11)
for name, color, ys in data:
    axes[0].plot(xs, ys, marker='s', fillstyle='none', color=color, label=name)
axes[0].set_ylim(bottom=0.15, top=0.5)
axes[0].set_title(r'OAI (Nonlinear $c \rightarrow y$)')
axes[0].legend(loc='lower left', prop={'size': 9.5})
axes[0].set_xlabel('Number of concepts intervened')
axes[0].set_ylabel('Task ($y$) RMSE')
axes[0].yaxis.grid(True, linestyle='--')
# ---- OAI ----
data = [('Joint', '#ff7f0e',
         [0.419, 0.376, 0.364, 0.482, 0.469, 0.445, 0.442, 0.464, 0.461, 0.454, 0.451]),
        ('Sequential', '#2ca02c',
         [0.441, 0.414, 0.383, 0.378, 0.36 , 0.355, 0.354, 0.37 , 0.372, 0.372, 0.366]),
        ('Independent', '#d62728',
         [0.446, 0.417, 0.376, 0.37 , 0.351, 0.344, 0.34 , 0.339, 0.339, 0.34 , 0.339])]
xs = range(11)
for name, color, ys in data:
    axes[1].plot(xs, ys, marker='s', fillstyle='none', color=color, label=name)
axes[1].set_ylim(bottom=0.15, top=0.5)
axes[1].set_title(r'OAI (Linear $c \rightarrow y$)')
axes[1].legend(loc='lower left', prop={'size': 9.5})
axes[1].set_xlabel('Number of concepts intervened')
axes[1].yaxis.grid(True, linestyle='--')
# ---- CUB ----
data = [(r'Joint, from sigmoid', '#17becf', r['TTI_Joint0.01SigmoidModel'][:, 1]),
        ('Joint', '#ff7f0e', r['TTI_Joint0.01Model'][:, 1]),
        ('Sequential', '#2ca02c', r['TTI_SequentialModel'][:, 1]),
        ('Independent', '#d62728', r['TTI_IndependentModel'][:, 1])]
xs = range(29)
for name, color, ys in data:
    ys = [1 - y / 100. for y in ys]
    axes[2].plot(xs, ys, marker='s', fillstyle='none', color=color, label=name)
axes[2].set_title('CUB')
axes[2].legend(loc='lower left', prop={'size': 9.5})
axes[2].set_xlabel('Number of concept groups intervened')
axes[2].set_ylabel('Task ($y$) error')
axes[2].yaxis.grid(True, linestyle='--')
plt.subplots_adjust(wspace=0.25, bottom=0.15)
plt.savefig('figure4.png')
# ======================================================================================================
# ======================================== Table 3: Adversarial ========================================
# ======================================================================================================
exps = ['StandardAdversarialModel', 'Joint0.01AdversarialModel', 'SequentialAdversarialModel', 'IndependentAdversarialModel']
output_string = ' y Error | c Error \n'
for exp in exps:
    output_string += '%30s %.3f +- %.3f | ' % (exp, r[exp][0], r[exp][1] * 2)
    if r[exp][2] >= 0:
        output_string += '%.3f +- %.3f\n' % (r[exp][2], r[exp][3] * 2)
    else:
        output_string += ' - \n'
print(output_string)
| 53.14527 | 240 | 0.60473 | 2,134 | 15,731 | 4.332709 | 0.155576 | 0.051914 | 0.03461 | 0.003461 | 0.368484 | 0.294398 | 0.257192 | 0.230478 | 0.215336 | 0.191542 | 0 | 0.09704 | 0.136609 | 15,731 | 295 | 241 | 53.325424 | 0.583714 | 0.118238 | 0 | 0.295359 | 0 | 0 | 0.363124 | 0.215329 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.008439 | 0 | 0.008439 | 0.012658 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e233adc9b22c4398d7a8beae60c358df0f7d7c2d | 2,104 | py | Python | tensorflow/python/data/experimental/ops/resampling.py | EricRemmerswaal/tensorflow | 141ff27877579c81a213fa113bd1b474c1749aca | [
"Apache-2.0"
] | 190,993 | 2015-11-09T13:17:30.000Z | 2022-03-31T23:05:27.000Z | tensorflow/python/data/experimental/ops/resampling.py | EricRemmerswaal/tensorflow | 141ff27877579c81a213fa113bd1b474c1749aca | [
"Apache-2.0"
] | 48,461 | 2015-11-09T14:21:11.000Z | 2022-03-31T23:17:33.000Z | tensorflow/python/data/experimental/ops/resampling.py | EricRemmerswaal/tensorflow | 141ff27877579c81a213fa113bd1b474c1749aca | [
"Apache-2.0"
] | 104,981 | 2015-11-09T13:40:17.000Z | 2022-03-31T19:51:54.000Z | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Resampling dataset transformations."""
from tensorflow.python.util import deprecation
from tensorflow.python.util.tf_export import tf_export
@deprecation.deprecated(None, "Use `tf.data.Dataset.rejection_resample(...)`.")
@tf_export("data.experimental.rejection_resample")
def rejection_resample(class_func, target_dist, initial_dist=None, seed=None):
  """A transformation that resamples a dataset to achieve a target distribution.

  **NOTE** Resampling is performed via rejection sampling; some fraction
  of the input values will be dropped.

  Args:
    class_func: A function mapping an element of the input dataset to a scalar
      `tf.int32` tensor. Values should be in `[0, num_classes)`.
    target_dist: A floating point type tensor, shaped `[num_classes]`.
    initial_dist: (Optional.) A floating point type tensor, shaped
      `[num_classes]`. If not provided, the true class distribution is
      estimated live in a streaming fashion.
    seed: (Optional.) Python integer seed for the resampler.

  Returns:
    A `Dataset` transformation function, which can be passed to
    `tf.data.Dataset.apply`.
  """

  def _apply_fn(dataset):
    """Function from `Dataset` to `Dataset` that applies the transformation."""
    return dataset.rejection_resample(
        class_func=class_func,
        target_dist=target_dist,
        initial_dist=initial_dist,
        seed=seed)

  return _apply_fn
| 41.254902 | 80 | 0.716255 | 282 | 2,104 | 5.251773 | 0.485816 | 0.040513 | 0.030385 | 0.021607 | 0.054018 | 0.054018 | 0.054018 | 0.054018 | 0 | 0 | 0 | 0.006275 | 0.166825 | 2,104 | 50 | 81 | 42.08 | 0.838562 | 0.715304 | 0 | 0 | 0 | 0 | 0.151571 | 0.144177 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
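A hedged usage sketch (mine, not from this module) of rebalancing a skewed dataset with the transformation above; per the TF documentation the resampled dataset yields (class_func_result, element) pairs, so a final map drops the class id. The data here is synthetic.

import tensorflow as tf

features = tf.random.normal([1000, 4])
labels = tf.concat([tf.zeros([900], tf.int32), tf.ones([100], tf.int32)], axis=0)
ds = tf.data.Dataset.from_tensor_slices((features, labels))

balanced = ds.apply(tf.data.experimental.rejection_resample(
    class_func=lambda feats, label: label,  # element -> class id in [0, num_classes)
    target_dist=[0.5, 0.5],                 # aim for a 50/50 class mix
    seed=42))
balanced = balanced.map(lambda class_id, element: element)  # keep original elements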
e234414690173c77918ef824123cf90f04a2cbcc | 1,099 | py | Python | dump_ids.py | alexprz/NHIS_analyse | 466e944ed1002bf227cb1522d91daf9b80c7d7d5 | [
"MIT"
] | 1 | 2022-03-01T05:33:12.000Z | 2022-03-01T05:33:12.000Z | dump_ids.py | aperezlebel/benchmark_mv_approaches | 466e944ed1002bf227cb1522d91daf9b80c7d7d5 | [
"MIT"
] | null | null | null | dump_ids.py | aperezlebel/benchmark_mv_approaches | 466e944ed1002bf227cb1522d91daf9b80c7d7d5 | [
"MIT"
] | 1 | 2020-09-01T15:20:39.000Z | 2020-09-01T15:20:39.000Z | from argparse import Namespace
from itertools import product
from joblib import Parallel, delayed
import prediction
def run(args):
    tasks = [
        'TB/death_pvals',
        'TB/platelet_pvals',
        'TB/hemo',
        'TB/hemo_pvals',
        'TB/acid',
        'TB/septic_pvals',
        'UKBB/breast_25',
        'UKBB/breast_pvals',
        'UKBB/skin_pvals',
        'UKBB/parkinson_pvals',
        'UKBB/fluid_pvals',
        'MIMIC/septic_pvals',
        'MIMIC/hemo_pvals',
        'NHIS/income_pvals',
    ]

    def run_one(task, T):
        argv = {
            'action': 'prediction',
            'task_name': task,
            'strategy_name': '0',
            'T': str(T),
            'RS': '0',
            'dump_idx_only': True,
            'n_top_pvals': 100,
        }
        # Only one trial for task having features manually selected (not _pvals)
        if '_pvals' not in task and T != 0:
            return
        args = Namespace(**argv)
        prediction.run(args)

    Parallel(n_jobs=-1)(delayed(run_one)(task, T) for task, T in product(tasks, range(5)))
| 23.891304 | 90 | 0.536852 | 129 | 1,099 | 4.395349 | 0.48062 | 0.063492 | 0.035273 | 0.038801 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013717 | 0.33667 | 1,099 | 45 | 91 | 24.422222 | 0.76406 | 0.063694 | 0 | 0 | 0 | 0 | 0.271665 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.111111 | 0 | 0.194444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e23480de34ea3367e0215069e71041e9d6217090 | 649 | py | Python | exercicios/exe100/exe100.py | tiagolsouza/exercicios-Curso-em-video-PYTHON | e4e6975fac7e4883aeab58b970c6ca72895564e4 | [
"MIT"
] | null | null | null | exercicios/exe100/exe100.py | tiagolsouza/exercicios-Curso-em-video-PYTHON | e4e6975fac7e4883aeab58b970c6ca72895564e4 | [
"MIT"
] | null | null | null | exercicios/exe100/exe100.py | tiagolsouza/exercicios-Curso-em-video-PYTHON | e4e6975fac7e4883aeab58b970c6ca72895564e4 | [
"MIT"
] | null | null | null | a = 50
print('\033[32m_\033[m' * a)
print(f'\033[1;32m{"SYSTEM THAT DRAWS AND SUMS EVEN NUMBERS":=^{a}}\033[m')
print('\033[32m-\033[m' * a)
from random import randint
from time import sleep
def sorteia(lista):
    print('\033[1;34mdrawing 5 values for the list: ', end='')
    for cont in range(0, 5):
        lista.append(randint(1, 10))
        print(f'{lista[-1]}', end=' ')
        sleep(0.3)
    print('Done!\033[m')
def somapar(lista):
    soma = 0
    for c in lista:
        if c % 2 == 0:
            soma += c
    print(f'\033[34mSumming the even values of {lista}, we get {soma}\033[m')
numeros = list()
sorteia(numeros)
somapar(numeros)
| 21.633333 | 77 | 0.59168 | 105 | 649 | 3.647619 | 0.438095 | 0.052219 | 0.057441 | 0.073107 | 0.083551 | 0.083551 | 0 | 0 | 0 | 0 | 0 | 0.111776 | 0.228043 | 649 | 29 | 78 | 22.37931 | 0.652695 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.090909 | 0 | 0.181818 | 0.318182 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e234bb2a04280183f76b137d872eee45b289f225 | 1,915 | py | Python | autosklearn/constants.py | wsyjwps1983/autosklearn | 2e29ebaca6bc26fa838f7c3b8b13960c600884e4 | [
"BSD-3-Clause"
] | null | null | null | autosklearn/constants.py | wsyjwps1983/autosklearn | 2e29ebaca6bc26fa838f7c3b8b13960c600884e4 | [
"BSD-3-Clause"
] | null | null | null | autosklearn/constants.py | wsyjwps1983/autosklearn | 2e29ebaca6bc26fa838f7c3b8b13960c600884e4 | [
"BSD-3-Clause"
] | 1 | 2019-04-01T11:53:20.000Z | 2019-04-01T11:53:20.000Z | # -*- encoding: utf-8 -*-
BINARY_CLASSIFICATION = 1
MULTICLASS_CLASSIFICATION = 2
MULTILABEL_CLASSIFICATION = 3
REGRESSION = 4
REGRESSION_TASKS = [REGRESSION]
CLASSIFICATION_TASKS = [BINARY_CLASSIFICATION, MULTICLASS_CLASSIFICATION,
                        MULTILABEL_CLASSIFICATION]
TASK_TYPES = REGRESSION_TASKS + CLASSIFICATION_TASKS
TASK_TYPES_TO_STRING = \
    {BINARY_CLASSIFICATION: 'binary.classification',
     MULTICLASS_CLASSIFICATION: 'multiclass.classification',
     MULTILABEL_CLASSIFICATION: 'multilabel.classification',
     REGRESSION: 'regression'}
STRING_TO_TASK_TYPES = \
    {'binary.classification': BINARY_CLASSIFICATION,
     'multiclass.classification': MULTICLASS_CLASSIFICATION,
     'multilabel.classification': MULTILABEL_CLASSIFICATION,
     'regression': REGRESSION}
ACC_METRIC = 5
AUC_METRIC = 6
BAC_METRIC = 7
F1_METRIC = 8
PAC_METRIC = 9
CLASSIFICATION_METRICS = [ACC_METRIC, AUC_METRIC, BAC_METRIC,
                          F1_METRIC, PAC_METRIC]
R2_METRIC = 10
A_METRIC = 11
REGRESSION_METRICS = [R2_METRIC, A_METRIC]
METRIC = CLASSIFICATION_METRICS + REGRESSION_METRICS
STRING_TO_METRIC = {
    'acc': ACC_METRIC,
    'acc_metric': ACC_METRIC,
    'auc': AUC_METRIC,
    'auc_metric': AUC_METRIC,
    'bac': BAC_METRIC,
    'bac_metric': BAC_METRIC,
    'f1': F1_METRIC,
    'f1_metric': F1_METRIC,
    'pac': PAC_METRIC,
    'pac_metric': PAC_METRIC,
    'r2': R2_METRIC,
    'r2_metric': R2_METRIC,
    'a': A_METRIC,
    'a_metric': A_METRIC}
METRIC_TO_STRING = {
    ACC_METRIC: 'acc_metric',
    AUC_METRIC: 'auc_metric',
    BAC_METRIC: 'bac_metric',
    F1_METRIC: 'f1_metric',
    PAC_METRIC: 'pac_metric',
    R2_METRIC: 'r2_metric',
    A_METRIC: 'a_metric'}
METRICS_SHORT_TO_LONG_FORM = {
    'acc': 'acc_metric',
    'auc': 'auc_metric',
    'bac': 'bac_metric',
    'f1': 'f1_metric',
    'pac': 'pac_metric',
    'r2': 'r2_metric',
    'a': 'a_metric'} | 26.971831 | 73 | 0.696084 | 220 | 1,915 | 5.645455 | 0.159091 | 0.057971 | 0.152979 | 0.10628 | 0.640902 | 0.428341 | 0.251208 | 0.251208 | 0.251208 | 0.251208 | 0 | 0.021907 | 0.189556 | 1,915 | 71 | 74 | 26.971831 | 0.778351 | 0.01201 | 0 | 0 | 0 | 0 | 0.208355 | 0.075093 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
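A quick round-trip sanity check (illustrative only, not part of the module) of the mapping tables defined above.

assert STRING_TO_METRIC['bac'] == BAC_METRIC
assert METRIC_TO_STRING[BAC_METRIC] == 'bac_metric'
assert METRICS_SHORT_TO_LONG_FORM['bac'] == 'bac_metric'
assert STRING_TO_TASK_TYPES[TASK_TYPES_TO_STRING[REGRESSION]] == REGRESSION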
e238163f8acd55691ab287c42a85b19501bfcfe7 | 3,362 | py | Python | signal_interpreter_server/tests/unit_tests/test_routes.py | mtormane/signal-interpreter-server | 6e96d520604db479c5d70e7ff372375e5552e346 | [
"MIT"
] | null | null | null | signal_interpreter_server/tests/unit_tests/test_routes.py | mtormane/signal-interpreter-server | 6e96d520604db479c5d70e7ff372375e5552e346 | [
"MIT"
] | null | null | null | signal_interpreter_server/tests/unit_tests/test_routes.py | mtormane/signal-interpreter-server | 6e96d520604db479c5d70e7ff372375e5552e346 | [
"MIT"
] | null | null | null | from unittest.mock import patch
import pytest
from werkzeug.exceptions import InternalServerError, BadRequest
from signal_interpreter_server.routes import interpret_signal
from signal_interpreter_server.routes import signal_interpreter_app, parser_factory
from signal_interpreter_server.exceptions import SignalError
from signal_interpreter_server.json_parser import JsonParser
@pytest.mark.parametrize("payload, expected_status_code, expected_response", [
    ({"signal": "11"}, 200, "ECU Reset"),
    ({"dummy": "27"}, 400, None)
])
@patch.object(signal_interpreter_app, "run")
def test_interpret_signal(mock_run, payload, expected_status_code, expected_response, signal_interpreter_app_instance):
    with signal_interpreter_app_instance as client:
        with patch.object(parser_factory, "get_parser", return_value=JsonParser):
            with patch.object(JsonParser, "get_signal_title", return_value=expected_response):
                # with tmp_app_instance as client:
                response = client.post("/", json=payload)
                if expected_response is not None:
                    tmp = {"signal_title": expected_response}
                else:
                    tmp = expected_response
                assert response.get_json() == tmp
                assert response.status_code == expected_status_code


def test_interpret_signal_with_signal_not_found(signal_interpreter_app_instance):
    with signal_interpreter_app_instance as client:
        with patch.object(parser_factory, "get_parser", return_value=JsonParser):
            with patch.object(JsonParser, "get_signal_title", side_effect=SignalError('MockedError')) as mock_get_signal:
                with pytest.raises(InternalServerError):
                    with pytest.raises(SignalError) as excinfo:
                        response = client.post("/", json={"signal": "99"})
                        assert response.get_json() is None
                        assert response.status_code == 500
                        mock_get_signal.assert_called_with("99")
                        interpret_signal()
                        assert excinfo.value.message == 'MockedError'


def test_interpret_signal_with_invalid_parser(signal_interpreter_app_instance):
    with patch.object(parser_factory, "get_parser", side_effect=ValueError('MockedError')):
        with signal_interpreter_app_instance as client:
            with pytest.raises(ValueError) as excinfo:
                response = client.post("/", json={"signal": "11"})
                assert response.get_json() is None
                assert response.status_code == 500
                assert excinfo.value.message == 'MockedError'


def test_interpret_signal_with_invalid_format(signal_interpreter_app_instance):
    with signal_interpreter_app_instance as client:
        with patch.object(parser_factory, "get_parser", return_value=JsonParser):
            with patch.object(JsonParser, "get_signal_title", side_effect=TypeError('MockedError')) as mock_get_signal:
                with pytest.raises(TypeError) as excinfo:
                    response = client.post("/", json={""})
                    assert response.get_json() is None
                    assert response.status_code == 400
                    interpret_signal()
                    assert excinfo.value.message == 'MockedError' | 50.939394 | 121 | 0.672814 | 366 | 3,362 | 5.874317 | 0.188525 | 0.110698 | 0.093023 | 0.104186 | 0.636279 | 0.611163 | 0.560465 | 0.437209 | 0.377674 | 0.377674 | 0 | 0.009831 | 0.243605 | 3,362 | 66 | 122 | 50.939394 | 0.835627 | 0.009518 | 0 | 0.320755 | 0 | 0 | 0.079098 | 0.006316 | 0 | 0 | 0 | 0 | 0.226415 | 1 | 0.075472 | false | 0 | 0.132075 | 0 | 0.207547 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2390611f4647b2e7e3551c1736652b488f23d92 | 425 | py | Python | backend/lib/utils/iterators.py | bayesimpact/tds-frontend | a4f47e384ef4fe4dc43c30423a1713c2c93dc87f | [
"Apache-2.0"
] | 15 | 2018-05-08T23:54:38.000Z | 2020-03-07T20:46:37.000Z | backend/lib/utils/iterators.py | akegan/encompass | 85852a91c646c62e8cd05f9c2b0c7cf0079ea7f2 | [
"Apache-2.0"
] | 297 | 2018-02-05T19:04:26.000Z | 2022-02-12T07:52:37.000Z | backend/lib/utils/iterators.py | bayesimpact/tds | a4f47e384ef4fe4dc43c30423a1713c2c93dc87f | [
"Apache-2.0"
] | 6 | 2018-05-21T19:51:15.000Z | 2019-03-21T19:20:27.000Z | """Iterators utils."""
def iterate_in_slices(iterable, batch_size):
    """Yield lists of size batch_size from an iterable."""
    it = iter(iterable)
    try:
        while True:
            chunk = []  # The buffer to hold the next n items.
            for _ in range(batch_size):
                chunk.append(next(it))
            yield chunk
    except StopIteration:
        if len(chunk) > 0:
            yield chunk
| 26.5625 | 62 | 0.562353 | 52 | 425 | 4.480769 | 0.673077 | 0.11588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003571 | 0.341176 | 425 | 15 | 63 | 28.333333 | 0.828571 | 0.242353 | 0 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
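A quick demonstration (mine, not the repository's) of the batching behavior above, including the partial final chunk.

batches = list(iterate_in_slices(range(7), batch_size=3))
# -> [[0, 1, 2], [3, 4, 5], [6]]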
e23aa280ead4993cea9b3c1ee67c0cfb1a5721dc | 845 | py | Python | download_models.py | AhsanAliLodhi/conditional-gans | b4e8299c59143507f9e16e965dfb659995db3b3e | [
"MIT"
] | null | null | null | download_models.py | AhsanAliLodhi/conditional-gans | b4e8299c59143507f9e16e965dfb659995db3b3e | [
"MIT"
] | null | null | null | download_models.py | AhsanAliLodhi/conditional-gans | b4e8299c59143507f9e16e965dfb659995db3b3e | [
"MIT"
] | null | null | null | import urllib.request
import zipfile
import os
url = 'https://syncandshare.lrz.de/dl/fiVRr3EeLiGxcoHXmW5yoWHq/models.zip'
print('Downloading models (this might take time)')
urllib.request.urlretrieve(url, 'models.zip')
with zipfile.ZipFile('models.zip', 'r') as zip_ref:
    print("extracting models")
    zip_ref.extractall('models')
os.remove("models.zip")
with zipfile.ZipFile('models/stargan_models_rel.zip', 'r') as zip_ref:
    print("extracting relative stargan_models")
    zip_ref.extractall('conditional_models/aligned_models/stargan_relative_distance/')
os.remove("models/stargan_models_rel.zip")  # the zip lives inside models/, so remove it there
with zipfile.ZipFile('models/stargan_models_abs.zip', 'r') as zip_ref:
    print("extracting absolute stargan_models")
    zip_ref.extractall('conditional_models/aligned_models/stargan_absolute_distance/')
os.remove("models/stargan_models_abs.zip") | 40.238095 | 86 | 0.784615 | 115 | 845 | 5.556522 | 0.313043 | 0.098592 | 0.065728 | 0.098592 | 0.610329 | 0.519562 | 0.458529 | 0.206573 | 0.206573 | 0.206573 | 0 | 0.002587 | 0.085207 | 845 | 21 | 87 | 40.238095 | 0.824062 | 0 | 0 | 0 | 0 | 0 | 0.535461 | 0.262411 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0.222222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e23b814797f83ec93f19e86ffddbfaea18bceed4 | 708 | pyde | Python | mode/examples/Basics/Typography/Words/Words.pyde | timgates42/processing.py | 78a237922c2a928b83f4ad579dbf8d32c0099890 | [
"Apache-2.0"
] | 1,224 | 2015-01-01T22:09:23.000Z | 2022-03-29T19:43:56.000Z | mode/examples/Basics/Typography/Words/Words.pyde | timgates42/processing.py | 78a237922c2a928b83f4ad579dbf8d32c0099890 | [
"Apache-2.0"
] | 253 | 2015-01-14T03:45:51.000Z | 2022-02-08T01:18:19.000Z | mode/examples/Basics/Typography/Words/Words.pyde | timgates42/processing.py | 78a237922c2a928b83f4ad579dbf8d32c0099890 | [
"Apache-2.0"
] | 225 | 2015-01-13T18:38:33.000Z | 2022-03-30T20:27:39.000Z | """
* Words.
*
* The text() function is used for writing words to the screen.
* The letters can be aligned left, center, or right with the
* textAlign() function.
"""
def setup():
size(640, 360)
# Create the font
printArray(PFont.list())
f = createFont("Georgia", 24)
textFont(f)
def draw():
background(102)
textAlign(RIGHT)
drawType(width * 0.25)
textAlign(CENTER)
drawType(width * 0.5)
textAlign(LEFT)
drawType(width * 0.75)
def drawType(x):
line(x, 0, x, 65)
line(x, 220, x, height)
fill(0)
text("ichi", x, 95)
fill(51)
text("ni", x, 130)
fill(204)
text("san", x, 165)
fill(255)
text("shi", x, 210)
| 17.268293 | 63 | 0.574859 | 101 | 708 | 4.029703 | 0.60396 | 0.095823 | 0.103194 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087379 | 0.272599 | 708 | 40 | 64 | 17.7 | 0.702913 | 0.252825 | 0 | 0 | 0 | 0 | 0.036965 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.125 | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e243561f750625be44774b3ce79a8ebb51a0b8d8 | 1,469 | py | Python | vent/core/network_tap/ncontrol/ncontrol.py | edgardmota/vent | 67b01abc059a3e9e8d16670c7058f0a9e267d8f1 | [
"Apache-2.0"
] | null | null | null | vent/core/network_tap/ncontrol/ncontrol.py | edgardmota/vent | 67b01abc059a3e9e8d16670c7058f0a9e267d8f1 | [
"Apache-2.0"
] | null | null | null | vent/core/network_tap/ncontrol/ncontrol.py | edgardmota/vent | 67b01abc059a3e9e8d16670c7058f0a9e267d8f1 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
import docker
import logging
import sys
import web
from rest.create import CreateR
from rest.delete import DeleteR
from rest.nics import NICsR
from rest.nlist import ListR
from rest.start import StartR
from rest.stop import StopR
module_logger = logging.getLogger(__name__)
class NControlServer(object):
"""
This class is responsible for initializing the urls and web server.
"""
# need __new__ for tests, but fails to call __init__ when actually running
def __new__(*args, **kw):
if hasattr(sys, '_called_from_test'):
module_logger.info("don't call __init__")
else: # pragma: no cover
return object.__new__(*args, **kw)
def __init__(self, port=8080, host='0.0.0.0'): # pragma: no cover
d_client = docker.from_env()
d_client.images.pull('cyberreboot/vent-ncapture', tag='master')
nf_inst = NControl()
urls = nf_inst.urls()
app = web.application(urls, globals())
web.httpserver.runsimple(app.wsgifunc(), (host, port))
class NControl:
"""
This class is for defining things needed to start up.
"""
@staticmethod
def urls():
urls = (
'/create', CreateR,
'/delete', DeleteR,
'/list', ListR,
'/nics', NICsR,
'/start', StartR,
'/stop', StopR
)
return urls
if __name__ == '__main__':
    NControlServer()  # __init__ pulls the image and starts the web server itself; it exposes no .app attribute.
| 25.77193 | 78 | 0.62015 | 181 | 1,469 | 4.779006 | 0.530387 | 0.055491 | 0.025434 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007428 | 0.266848 | 1,469 | 56 | 79 | 26.232143 | 0.795729 | 0.169503 | 0 | 0 | 0 | 0 | 0.098651 | 0.021079 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078947 | false | 0 | 0.263158 | 0 | 0.447368 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
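For context, web.py routing works by pairing URL patterns with handler classes, exactly as NControl.urls() does above; a minimal standalone sketch (the handler class and port are illustrative, not part of vent):

import web

class HelloR:
    def GET(self):
        return 'hello'

app = web.application(('/hello', HelloR), globals())
web.httpserver.runsimple(app.wsgifunc(), ('0.0.0.0', 8080))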
e246453b42dcd37dfd9577c8a2d5d504feec20c6 | 410 | py | Python | applications/prediction_bigball/container/c2_Twitter_Collector/app/predict.py | Dumpkin1996/clipper | 1a08bbdde846c3cfe76236c68548a848f71605e0 | [
"Apache-2.0"
] | 2 | 2019-04-24T13:46:28.000Z | 2019-05-28T06:59:26.000Z | applications/prediction_clipper/container/c2_Twitter_Collector/app/predict.py | SimonZsx/clipper | 457088be2ebe68c68b94d90389d1308e35b4c844 | [
"Apache-2.0"
] | null | null | null | applications/prediction_clipper/container/c2_Twitter_Collector/app/predict.py | SimonZsx/clipper | 457088be2ebe68c68b94d90389d1308e35b4c844 | [
"Apache-2.0"
] | 4 | 2019-04-03T11:03:57.000Z | 2019-06-26T08:22:38.000Z | import io
import sys
import tweepy
import time
def predict(request): # serve as api function
start = time.time()
info = request.split(":")
stockcode = info[0]
data_path = "/container/c2_Twitter_Collector/dataset/" + stockcode + ".txt"
with open(data_path, 'r', encoding='utf-8') as file:
result = file.read().replace('\n', '')
end = time.time()
print("ELASPSED TIME", end - start)
return result
| 20.5 | 76 | 0.682927 | 58 | 410 | 4.758621 | 0.689655 | 0.057971 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008721 | 0.160976 | 410 | 19 | 77 | 21.578947 | 0.793605 | 0.05122 | 0 | 0 | 0 | 0 | 0.172324 | 0.104439 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.285714 | 0 | 0.428571 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2484c0e3f3a758de7ba5492e88d263a61c7d083 | 5,686 | py | Python | snoop/models.py | hoover/snoop | bd49b081418a8a01a1e469ab17759a4c5b20d850 | [
"MIT"
] | 5 | 2017-01-03T00:52:03.000Z | 2019-10-27T03:32:35.000Z | snoop/models.py | hoover/snoop | bd49b081418a8a01a1e469ab17759a4c5b20d850 | [
"MIT"
] | 25 | 2016-08-21T11:26:44.000Z | 2018-03-13T12:19:20.000Z | snoop/models.py | hoover/snoop | bd49b081418a8a01a1e469ab17759a4c5b20d850 | [
"MIT"
] | 6 | 2016-09-27T13:03:45.000Z | 2019-10-27T03:32:30.000Z | from pathlib import Path
from io import BytesIO
import json
from contextlib import contextmanager
import tempfile
import shutil
from django.db import models, transaction
from django.contrib.postgres.fields import JSONField
from django.conf import settings
def cache(model, keyfunc):
def decorator(func):
if not settings.SNOOP_CACHE:
return func
@transaction.atomic
def wrapper(*args, **kwargs):
key = keyfunc(*args, **kwargs)
row, created = model.objects.get_or_create(pk=key)
if not created:
return json.loads(row.value)
value = func(*args, **kwargs)
row.value = json.dumps(value)
row.save()
return value
wrapper.no_cache = func
return wrapper
return decorator
class EmailCache(models.Model):
id = models.IntegerField(primary_key=True)
value = models.TextField()
time = models.DateTimeField(auto_now=True)
class Collection(models.Model):
path = models.CharField(max_length=4000)
slug = models.CharField(max_length=100, unique=True)
title = models.CharField(max_length=200)
es_index = models.CharField(max_length=200)
description = models.TextField(blank=True)
ocr = JSONField(default=dict, blank=True)
def __str__(self):
return self.slug
class Document(models.Model):
collection = models.ForeignKey('Collection')
container = models.ForeignKey('Document',
related_name='contained_set',
null=True)
parent = models.ForeignKey('Document',
related_name='child_set',
null=True)
path = models.CharField(max_length=4000)
content_type = models.CharField(max_length=100, blank=True)
filename = models.CharField(max_length=1000)
disk_size = models.BigIntegerField()
md5 = models.CharField(max_length=40, blank=True, db_index=True)
sha1 = models.CharField(max_length=50, blank=True, db_index=True)
broken = models.CharField(max_length=100, blank=True)
rev = models.IntegerField(null=True)
flags = JSONField(default=dict, blank=True)
digested_at = models.DateTimeField(null=True, blank=True)
class Meta:
# TODO: constraint does not apply to container=None rows
unique_together = ('container', 'path')
index_together = ('collection', 'digested_at')
def __str__(self):
return str(self.path)
@property
def absolute_path(self):
assert self.container is None
return Path(self.collection.path) / self.path
def _open_file(self):
if self.content_type == 'application/x-directory':
return BytesIO()
if self.container is None:
return self.absolute_path.open('rb')
else:
from . import emails, archives, pst
if emails.is_email(self.container):
return emails.get_email_part(self.container, self.path)
if archives.is_archive(self.container):
return archives.open_file(self.container, self.path)
if pst.is_pst_file(self.container):
return pst.open_file(self.container, self.path)
raise RuntimeError
@contextmanager
def open(self, filesystem=False):
""" Open the document as a file. If the document is inside an email or
archive, it will be copied to a temporary file:
with doc.open() as f:
f.read()
If ``filesystem`` is True, ``f`` will have a ``path`` attribute, which
is the absolute path of the file on disk.
"""
with self._open_file() as f:
if filesystem:
if self.container:
MB = 1024*1024
suffix = Path(self.filename).suffix
with tempfile.NamedTemporaryFile(suffix=suffix) as tmp:
shutil.copyfileobj(f, tmp, length=4*MB)
tmp.flush()
tmp.path = Path(tmp.name)
yield tmp
else:
f.path = self.absolute_path
yield f
else:
yield f
class Ocr(models.Model):
collection = models.ForeignKey(
'Collection',
related_name='ocr_documents',
)
tag = models.CharField(max_length=100)
md5 = models.CharField(max_length=40, db_index=True)
path = models.CharField(max_length=4000)
text = models.TextField(blank=True)
class Meta:
unique_together = ('collection', 'tag', 'md5')
@property
def absolute_path(self):
return Path(self.collection.ocr[self.tag]) / self.path
class Digest(models.Model):
id = models.IntegerField(primary_key=True)
data = models.TextField()
class Job(models.Model):
queue = models.CharField(max_length=100)
data = JSONField(null=True)
started = models.BooleanField(default=False)
class Meta:
unique_together = ('queue', 'data')
index_together = ('queue', 'started')
class TikaCache(models.Model):
sha1 = models.CharField(max_length=50, primary_key=True)
value = models.TextField()
time = models.DateTimeField(auto_now=True)
class TikaLangCache(models.Model):
sha1 = models.CharField(max_length=50, primary_key=True)
value = models.CharField(max_length=20)
time = models.DateTimeField(auto_now=True)
class HtmlTextCache(models.Model):
sha1 = models.CharField(max_length=50, primary_key=True)
value = models.TextField()
time = models.DateTimeField(auto_now=True)
| 31.414365 | 78 | 0.62522 | 666 | 5,686 | 5.225225 | 0.262763 | 0.077586 | 0.093103 | 0.124138 | 0.380747 | 0.256609 | 0.178448 | 0.125862 | 0.104023 | 0.104023 | 0 | 0.016286 | 0.276469 | 5,686 | 180 | 79 | 31.588889 | 0.829606 | 0.057686 | 0 | 0.233083 | 0 | 0 | 0.031498 | 0.004338 | 0 | 0 | 0 | 0.005556 | 0.007519 | 1 | 0.067669 | false | 0 | 0.075188 | 0.022556 | 0.646617 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
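The cache decorator above memoizes a function's JSON-serializable result in a model row keyed by keyfunc, and is a no-op unless settings.SNOOP_CACHE is set; a sketch using the EmailCache model, with a hypothetical parse function:

@cache(EmailCache, keyfunc=lambda email_pk: email_pk)
def parse_email(email_pk):
    # Expensive work whose result must be JSON-serializable.
    return {'pk': email_pk, 'subject': '...'}

parse_email(42)  # computed, then stored as an EmailCache row
parse_email(42)  # loaded back from the EmailCache row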
e24c67e2fa0e7cbac60e363c0a3c0d90ac4a1b04 | 1,643 | py | Python | basic_op.py | elosivi/python_tkinter_calculator | 1ee17adb26aa9bf2b3a17a0b340464c6ba562972 | [
"MIT"
] | null | null | null | basic_op.py | elosivi/python_tkinter_calculator | 1ee17adb26aa9bf2b3a17a0b340464c6ba562972 | [
"MIT"
] | null | null | null | basic_op.py | elosivi/python_tkinter_calculator | 1ee17adb26aa9bf2b3a17a0b340464c6ba562972 | [
"MIT"
] | null | null | null | """
myOperations is a list wich memory all values and operators entered by the user before click on "="
"""
myOperations=[]
def put_in_myOperations(value):
    # Append a value to the myOperations list.
    myOperations.append(value)
def clear_myOperations():
    # Delete all values from the myOperations list.
    myOperations[:] = []
def make_operation(inputOperator):
    # Append the operator to the myOperations list.
    myOperations.append(inputOperator)
def calc_all_myOperations():
"""
This function browse the array myOperations to return the result
of all the operations 2 by 2 from the first to the last, entered
before click on "="
When it make the fist operation with the 3 first values (2 values + 1 operator)
it delete 2 value and replace the first one by the result
While it's not empty it continue to make the other operations.
It don't manage priorities between operators (% and * before + and -)
"""
    # If only a single value was entered, return it unchanged.
    round_result = round(float(myOperations[0]), 5) if myOperations else 0
    while len(myOperations) > 1:
        f_nb = float(myOperations[0])
        operator = myOperations[1]
        s_nb = float(myOperations[2])
        if operator == "+":
            result = f_nb + s_nb
        elif operator == "-":
            result = f_nb - s_nb
        elif operator == "*":
            result = f_nb * s_nb
        elif operator == "/":
            result = f_nb / s_nb
        elif operator == "% of ":
            result = f_nb / 100 * s_nb
        round_result = round(result, 5)
        # Replace the first operand with the intermediate result and
        # drop the consumed operator and second operand.
        del myOperations[:2]
        myOperations[0] = round_result
    clear_myOperations()
    return round_result
| 31.596154 | 99 | 0.63664 | 212 | 1,643 | 4.830189 | 0.353774 | 0.017578 | 0.043945 | 0.066406 | 0.101563 | 0.101563 | 0.101563 | 0.101563 | 0.101563 | 0.101563 | 0 | 0.013559 | 0.281802 | 1,643 | 51 | 100 | 32.215686 | 0.854237 | 0.405356 | 0 | 0 | 0 | 0 | 0.009698 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.178571 | 0.035714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
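Because calc_all_myOperations() evaluates strictly left to right, an expression like 2 + 3 * 4 yields 20 rather than 14; a quick check using the functions above:

for token in ['2', '+', '3', '*', '4']:
    put_in_myOperations(token)
print(calc_all_myOperations())  # 20.0, i.e. (2 + 3) * 4, not 2 + (3 * 4)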
e24cff14d61b3d1d516acfc54a69d88f1d665b63 | 3,863 | py | Python | cantools/autosar/secoc.py | malneni/cantools | b9958577c0f616c28c7fa37a2d2b491478e065ba | [
"MIT"
] | null | null | null | cantools/autosar/secoc.py | malneni/cantools | b9958577c0f616c28c7fa37a2d2b491478e065ba | [
"MIT"
] | null | null | null | cantools/autosar/secoc.py | malneni/cantools | b9958577c0f616c28c7fa37a2d2b491478e065ba | [
"MIT"
] | null | null | null | # Utilities for dealing with AUTOSAR secure on-board communication.
# (SecOC, i.e., verification of the authenticity of the sender of
# messages.)
from cantools.database.can.message import Message
from cantools.errors import Error
from typing import (
Union,
List,
Optional,
)
from cantools.typechecking import (
SecOCAuthenticatorFn,
)
import bitstruct
class SecOCError(Error):
"""Exception that is raised if something SecOC related goes wrong.
"""
pass
def compute_authenticator(raw_payload: bytes,
dbmsg: Message,
authenticator_fn: SecOCAuthenticatorFn,
freshness_value: int) \
-> bytearray:
"""Given a byte-like object that contains the encoded signals to be
send, compute the full authenticator SecOC value.
"""
if dbmsg.autosar is None or dbmsg.autosar.secoc is None:
raise SecOCError(f'Message "{dbmsg.name}" is not secured')
secoc_props = dbmsg.autosar.secoc
n_fresh = secoc_props.freshness_bit_length
payload_len = secoc_props.payload_length
    # Build the data that needs to be passed to the authenticator function.
auth_data = bitstruct.pack(f'u16' # data ID
f'r{payload_len*8}' # payload to be secured
f'u{n_fresh}', # freshness value
secoc_props.data_id,
raw_payload[:payload_len],
freshness_value)
# compute authenticator value
return authenticator_fn(dbmsg, auth_data, freshness_value)
def apply_authenticator(raw_payload: bytes,
dbmsg: Message,
authenticator_fn: SecOCAuthenticatorFn,
freshness_value: int) \
-> bytearray:
"""Given a byte-like object that contains the encoded signals to be
send, compute the full message which ought to be send.
This is basically the concatination of the raw payload, the
truncated freshness value and the truncated authenticator for the
message.
"""
if dbmsg.autosar is None:
raise RuntimeError(f'Message "{dbmsg.name}" does not have '
f'AUTOSAR specific properties.')
elif dbmsg.autosar.secoc is None:
raise RuntimeError(f'Message "{dbmsg.name}" does not have any'
f'SecOC properties (message is not secured).')
result = bytearray(raw_payload)
# compute authenticator value
auth_value = compute_authenticator(raw_payload,
dbmsg,
authenticator_fn,
freshness_value)
# get the last N bits of the freshness value.
secoc_props = dbmsg.autosar.secoc
n_fresh_tx = secoc_props.freshness_tx_bit_length
mask = (1 << n_fresh_tx) - 1
    truncated_freshness_value = freshness_value & mask
payload_len = secoc_props.payload_length
bitstruct.pack_into(f'u{n_fresh_tx}r{secoc_props.auth_tx_bit_length}',
result,
payload_len*8,
truncated_freshness_value,
auth_value)
return result
def verify_authenticator(raw_payload: bytes,
dbmsg: Message,
authenticator_fn: SecOCAuthenticatorFn,
freshness_value: int) \
-> bool:
"""Verify that a message that is secured via SecOC is valid."""
tmp_payload = apply_authenticator(raw_payload,
dbmsg,
authenticator_fn,
freshness_value)
return raw_payload == tmp_payload
| 35.768519 | 74 | 0.588403 | 412 | 3,863 | 5.347087 | 0.300971 | 0.082615 | 0.052202 | 0.03813 | 0.391739 | 0.376305 | 0.325919 | 0.29596 | 0.244212 | 0.244212 | 0 | 0.002394 | 0.351281 | 3,863 | 107 | 75 | 36.102804 | 0.876696 | 0.220813 | 0 | 0.328358 | 0 | 0 | 0.087797 | 0.015593 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044776 | false | 0.014925 | 0.074627 | 0 | 0.179104 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
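A minimal sketch of an authenticator_fn that could be passed to the functions above. AUTOSAR SecOC deployments typically specify AES-CMAC; plain HMAC-SHA256 and the hard-coded key here are illustrative assumptions only. apply_authenticator() truncates the returned MAC to auth_tx_bit_length bits when packing the frame:

import hashlib
import hmac

SECRET_KEY = b'sixteen byte key'  # placeholder; real key management is out of scope

def hmac_authenticator(dbmsg, auth_data, freshness_value):
    # Return the full MAC; truncation happens inside apply_authenticator().
    return bytearray(hmac.new(SECRET_KEY, auth_data, hashlib.sha256).digest())

# secured_frame = apply_authenticator(raw_payload, dbmsg, hmac_authenticator, freshness_value=1)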
e24fbce9aed9113bc47d48913f811befc901b6e5 | 1,710 | py | Python | spec_collection.py | oursonvie/xcar | 2bc52f2935e62823c589e9a9fe708f1dcd2cdb69 | [
"Apache-2.0"
] | null | null | null | spec_collection.py | oursonvie/xcar | 2bc52f2935e62823c589e9a9fe708f1dcd2cdb69 | [
"Apache-2.0"
] | null | null | null | spec_collection.py | oursonvie/xcar | 2bc52f2935e62823c589e9a9fe708f1dcd2cdb69 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
import crawler
import mongo
# In this module, model data is read and the spec of each model is
# recorded in the database.
# import mongo driver
from pymongo import MongoClient
client = MongoClient()
db = client['test_car']
# define base link
baselink = "http://newcar.xcar.com.cn"
# select DB collection
cars = db.cars_model.find()
# loop in collection and get spec
for car in cars:
brand = car['Brand']
name = car['Name']
model = car['Model']
url = car['Model_url']
car_id = car['_id']
specpage = crawler.readlink(baselink + url)
if len(specpage.findAll('em', text = u'排量(L):')) != 0:
price = float(specpage.b.text) * 10
horsepower = specpage.find('em', text = u'最大功率(kW/rpm):').findNext('td').text
liter = specpage.find('em', text = u'排量(L):').findNext('td').text
engine_type = specpage.find('em', text = u'进气形式:').findNext('td').text
tourque = specpage.find('em', text = u'最大扭矩(Nm/rpm):').findNext('td').text
drive = specpage.find('em', text = u'驱动方式:').findNext('td').text
else:
price = float(specpage.find('div', attrs = {'class':'price'}).b.text) * 10
horsepower = 0
liter = 0
engine_type = 0
tourque = 0
drive = 0
car = {'url': url,
'Brand': brand,
'Name': name,
'Model': model,
'Price': price,
'Engine_size': liter,
'Engine_type': engine_type,
'Horsepower': horsepower,
'Tourque': tourque,
'Drive': drive
}
    print('%s %s %s %sk %skW' % (brand, name, model, price, horsepower))
mongo.insert_spec(car)
| 28.032787 | 85 | 0.570175 | 222 | 1,710 | 4.342342 | 0.414414 | 0.037344 | 0.043568 | 0.093361 | 0.112033 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008821 | 0.27076 | 1,710 | 61 | 86 | 28.032787 | 0.764234 | 0.121637 | 0 | 0 | 0 | 0 | 0.150502 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.073171 | 0 | 0.073171 | 0.02439 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2527a6fe723aa45e3efd33e33cfd5d50fd7fc36 | 3,393 | py | Python | pyapi/addins-api/localapi/deployment.py | dockerian/py-api | 777db7d5dacf3ecf29a991f50d2ac78bb5bef66a | [
"Apache-2.0"
] | null | null | null | pyapi/addins-api/localapi/deployment.py | dockerian/py-api | 777db7d5dacf3ecf29a991f50d2ac78bb5bef66a | [
"Apache-2.0"
] | 6 | 2019-12-26T16:51:55.000Z | 2022-03-21T22:16:45.000Z | pyapi/addins-api/localapi/deployment.py | dockerian/pyapi | 777db7d5dacf3ecf29a991f50d2ac78bb5bef66a | [
"Apache-2.0"
] | null | null | null | import json
import shutil
import tempfile
import traceback
from multiprocessing import Process
from subprocess import call, check_output, CalledProcessError
from api import swift
from utils import settings, delete_directory_tree
from deploy.deploy import Deployment
from deploy.deploy_status import DeploymentStatus
from deploy.helion_cli import HelionCliComposer
from deploy.package import Package
from catalog import get_available_package
from logger import getLogger
logger = getLogger(__name__)
def check_package_exists(package_name):
container = settings('swift_container')
file_name = '{0}.tar.gz'.format(package_name)
return swift.check_file_exists(container, file_name)
def deploy_package(package_name, endpoint_url, username, password):
"""
Deploy a package into destination (e.g. ALS/Cloud Foundry)
Params:
package_name - the name of the package to deploy
endpoint_url - the destination (e.g. ALS/Cloud Foundry) endpoint URL
ie: 'https://api.15.126.129.33.xip.io'
username - the user name (admin email) for destination login
password - the password for destination login
"""
    if not check_package_exists(package_name):
return {'status': 404}
cwd = ''
try:
# ToDo [zhuyux]: using factory to get package
cwd = tempfile.mkdtemp()
pkg_filename = '{0}.tar.gz'.format(package_name)
package_path = '{0}/{1}'.format(cwd, pkg_filename)
package = Package(package_name, package_path, endpoint_url)
# instantiate a cli composer
composer = HelionCliComposer(endpoint_url, username, password)
deploy_status = DeploymentStatus(package)
deployment = Deployment(package, composer, deploy_status, True)
deployment_id = deployment.deployment_id
deployment.set_status('INIT')
# Start a new process to execute the deployment
process = Process(
name='deployment_{0}'.format(deployment_id),
target=deployment.deploy)
process.start()
logger.info('Deployment {0} started for {1}.'.format(
deployment_id, package_name))
return {
'status': 202,
'deployment_id': deployment_id,
'package': package_name}
except Exception as e:
stack_info = traceback.format_exc()
error_message = "Exception on deploy {0}. Details:\n{1}".format(
package_name, stack_info)
logger.exception(error_message)
delete_directory_tree(cwd)
return {'status': 500, 'errors': error_message}
def get_status(id):
"""
Get the deployment status by id
"""
try:
logger.info("======= deployment::get_status =======")
deploy_status = DeploymentStatus()
result = deploy_status.get_status(id)
logger.debug('Deployment result: {0}'.format(result))
if result == {} or not result['deploy_status']:
return {'status': 404}
else:
return {'status': 200, 'data': result}
except Exception as e:
stack_info = traceback.format_exc()
error = "Exception on getting deployment status"
error_message = "{0} for {1}. Details:\n{2}".format(
error, id, stack_info)
logger.exception(error_message)
return {'status': 500, 'errors': error_message}
| 32.625 | 76 | 0.660772 | 397 | 3,393 | 5.473552 | 0.302267 | 0.050621 | 0.02347 | 0.02301 | 0.21353 | 0.156466 | 0.046019 | 0.046019 | 0.046019 | 0.046019 | 0 | 0.015929 | 0.241379 | 3,393 | 103 | 77 | 32.941748 | 0.828283 | 0.151783 | 0 | 0.179104 | 0 | 0 | 0.119816 | 0.007799 | 0 | 0 | 0 | 0.009709 | 0 | 1 | 0.044776 | false | 0.029851 | 0.208955 | 0 | 0.358209 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
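A usage sketch of the two functions above; the package name and credentials are placeholders, and the endpoint URL format comes from the deploy_package docstring:

response = deploy_package('my-package', 'https://api.15.126.129.33.xip.io',
                          'admin@example.com', 'secret')
if response['status'] == 202:
    print(get_status(response['deployment_id']))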
e254e16f7fd71692efcf28db1746504d39517446 | 5,153 | py | Python | schulze.py | bjornlevi/schulze | b63951a4083592988b0a94d2cdc048f1989c36e3 | [
"MIT"
] | 2 | 2015-11-09T05:11:20.000Z | 2016-05-05T10:23:10.000Z | schulze.py | bjornlevi/schulze | b63951a4083592988b0a94d2cdc048f1989c36e3 | [
"MIT"
] | null | null | null | schulze.py | bjornlevi/schulze | b63951a4083592988b0a94d2cdc048f1989c36e3 | [
"MIT"
] | null | null | null | """
Schulze STV voting implementation.
See https://en.wikipedia.org/wiki/Schulze_method
"""
from collections import defaultdict, OrderedDict
import random
def compute_strongest_paths(preference, candidates):
"""
input: preference
p[i,j] = number of voters that prefer candidate i to candidate j
input: candidates
['candidate_1_id', 'candidate_2_1_id', ...]
output:
strongest_paths[i,j] = bottleneck number in the strongest path between i and j
"""
strongest_paths = defaultdict(lambda: defaultdict(int))
# Calculate the strongest paths between candidates
for i in candidates:
for j in candidates:
if i != j:
if preference[i][j] > preference[j][i]:
strongest_paths[i][j] = preference[i][j]
else:
strongest_paths[i][j] = 0
for i in candidates:
for j in candidates:
if i != j:
for k in candidates:
if i != k and j != k:
#p[j,k] := max ( p[j,k], min ( p[j,i], p[i,k] ) )
strongest_paths[j][k] = max(strongest_paths[j][k], min(strongest_paths[j][i], strongest_paths[i][k]))
return strongest_paths
def get_ordered_voting_results(strongest_paths):
"""
strongest_paths: the strongest paths of each candidate.
returns:
ordered dictionary, ordered by how many wins a candidate had against other candidates
key is candidate, value is list of candidates defeated by that candidate.
"""
# We need to determine the ordering among the candidates by comparing their respective path strengths.
    # For each pair of candidates, compare their path strengths in both directions; the candidate
    # with the stronger path beats the other. Order them from the candidate that beats all others
    # down to the one that beats none.
wins = defaultdict(list)
    for ci in strongest_paths.keys():
        for cj in strongest_paths.keys():
if ci == cj:
continue
if strongest_paths[ci][cj] > strongest_paths[cj][ci]:
wins[ci].append(cj)
# Create ordered results of candidates that actually won other candidates
ordered_results = sorted(wins.items(), key=lambda x: len(x[1]), reverse=True)
# Add any candidates that did not win anything in a random order
stragglers = [c for c in strongest_paths.keys() if c not in wins]
random.shuffle(stragglers)
for straggler in stragglers:
ordered_results.append((straggler, None))
return OrderedDict(ordered_results)
def rank_votes(votes, candidates):
"""
input: votes is a list of preference ordered votes
vote = [(1,'a'), (2, 'b'), (2, 'd'), (2, 'x'), (100, 'y')]
input: candidates is a list of candidate voting keys:
candidates = ['a,' 'b,' 'c,' 'd,' 'x,' 'y']
Note that candidate 'c' is not listed in the example vote. This means no vote for 'c'.
output: A dictionary of preference counts for each candidate.
preference = {
'a': {'a': 0, 'b': 1, 'c': 1, 'd,' 1, 'x,' 1, 'y': 1}, #place ahead of everyone
'b': {'b': 0, 'a': 0, 'c': 1, 'd,' 1, 'x,' 1, 'y': 1}, #not placed ahead of a, equal to d and x
'c': {'c': 0, 'b': 0, 'a': 0, 'd,' 0, 'x,' 0, 'y': 0}, #not placed ahead of anyone
'd': {'d': 0, 'b': 1, 'c': 1, 'a,' 0, 'x,' 1, 'y': 1}, #equal to b and x, ahead of y
'x': {'x': 0, 'b': 1, 'c': 1, 'd,' 1, 'a,' 0, 'y': 1}, #equal to b and d, ahead of y
'y': {'y': 0, 'b': 0, 'c': 1, 'd,' 0, 'x,' 0, 'a': 0}, #c got no vote
#'self' is always 0
}
"""
invalid_votes = list()
#prepare the output - 0 set all candidates
preference = defaultdict(lambda: defaultdict(int))
for vote in votes:
# make sure the votes are in order
vote.sort()
voted_candidates = set([x[1] for x in vote])
# check for duplicate choices
if len(voted_candidates) != len(vote):
# duplicate choice, invalid!
invalid_votes.append(vote)
else:
for i, choice in enumerate(vote):
# resolve ties: [(1, 'a'), (2, 'c'), (2, 'e'), (3, 'b'), (5, 'd')] 'e' also gets a 'c' increment
tied_candidates = [x[1] for x in vote if choice[0] == x[0]]
not_voted_candidates = set(candidates)-voted_candidates
# increment against all other candidates
candidate = vote[i][1]
opponents_to_increment = list(
set([x[1] for x in vote[i+1:]] + list(not_voted_candidates) + tied_candidates))
increment_candidate(candidate, opponents_to_increment, preference)
return preference
def increment_candidate(candidate, opponents, preference_dict):
for opponent in opponents:
if opponent in preference_dict[candidate]:
preference_dict[candidate][opponent] += 1
else:
preference_dict[candidate][opponent] = 1
| 39.335878 | 125 | 0.578886 | 703 | 5,153 | 4.169275 | 0.237553 | 0.090754 | 0.020471 | 0.016377 | 0.091778 | 0.056636 | 0.043671 | 0.030024 | 0.024565 | 0.024565 | 0 | 0.017709 | 0.298661 | 5,153 | 130 | 126 | 39.638462 | 0.793304 | 0.463613 | 0 | 0.163636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072727 | false | 0 | 0.036364 | 0 | 0.163636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
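An end-to-end sketch of the pipeline above, with made-up ballots in the (rank, candidate) format the rank_votes docstring describes:

candidates = ['a', 'b', 'c']
votes = [
    [(1, 'a'), (2, 'b'), (3, 'c')],  # a > b > c
    [(1, 'a'), (2, 'c'), (3, 'b')],  # a > c > b
    [(1, 'b'), (2, 'c'), (3, 'a')],  # b > c > a
]
preference = rank_votes(votes, candidates)
paths = compute_strongest_paths(preference, candidates)
print(list(get_ordered_voting_results(paths).keys()))  # strongest candidate first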
e2553ed325e23b75f15e38f6dfb159756a4a0513 | 5,308 | py | Python | JPMC_presentation/int_func.py | qBraid/presentations | dd3d19f934b806b96c05a626b14224f35f868f6d | [
"MIT"
] | null | null | null | JPMC_presentation/int_func.py | qBraid/presentations | dd3d19f934b806b96c05a626b14224f35f868f6d | [
"MIT"
] | 1 | 2021-06-15T15:10:06.000Z | 2021-06-15T15:10:06.000Z | JPMC_presentation/int_func.py | qBraid/presentations | dd3d19f934b806b96c05a626b14224f35f868f6d | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Wed Dec 12 19:42:57 2018
@author: kanav
"""
import logging
from pyscf import gto, scf, ao2mo
from pyscf.lib import param
from scipy import linalg as scila
from pyscf.lib import logger as pylogger
from qiskit.chemistry import QiskitChemistryError
# from qiskit.chemistry import QMolecule
# from qiskit.chemistry import AquaChemistryError
from qiskit.chemistry import QMolecule
import numpy as np
# import gse_algo as ga
# from qiskit.chemistry import FermionicOperator
from qiskit.chemistry import FermionicOperator
logger = logging.getLogger(__name__)
from pyscf.scf.hf import get_ovlp
def _calculate_integrals(mol, calc_type='rhf', atomic=False):
"""Function to calculate the one and two electron terms. Perform a Hartree-Fock calculation in
the given basis.
Args:
mol : A PySCF gto.Mole object.
calc_type: rhf, uhf, rohf
Returns:
ehf : Hartree-Fock energy
enuke : Nuclear repulsion energy
norbs : Number of orbitals
mohij : One electron terms of the Hamiltonian.
mohijkl : Two electron terms of the Hamiltonian.
mo_coeff: Orbital coefficients
orbs_energy: Orbitals energies
x_dip_ints: x dipole moment integrals
y_dip_ints: y dipole moment integrals
z_dip_ints: z dipole moment integrals
nucl_dipl : Nuclear dipole moment
"""
enuke = gto.mole.energy_nuc(mol)
if calc_type == 'rhf':
mf = scf.RHF(mol)
elif calc_type == 'rohf':
mf = scf.ROHF(mol)
elif calc_type == 'uhf':
mf = scf.UHF(mol)
else:
raise QiskitChemistryError('Invalid calc_type: {}'.format(calc_type))
ehf = mf.kernel()
if type(mf.mo_coeff) is tuple:
mo_coeff = mf.mo_coeff[0]
mo_occ = mf.mo_occ[0]
else:
mo_coeff = mf.mo_coeff
mo_occ = mf.mo_occ
norbs = mo_coeff.shape[0]
orbs_energy = mf.mo_energy
# print(np.dot(mo_coeff,mo_coeff.T))
O = get_ovlp(mol)
# print(np.dot(O,O.T))
mo_tr = np.dot(np.dot(O,mo_coeff),O.T)
# print(np.dot(mo_tr,mo_tr.T))
# two_body_temp = QMolecule.twoe_to_spin(_q_.mo_eri_ints)
# temp_int = np.einsum('ijkl->ljik', _q_.mo_eri_ints)
# two_body_temp = QMolecule.twoe_to_spin(temp_int)
# mol = gto.M(atom=mol.atom, basis='sto-3g')
# X = np.kron(np.identity(2), np.linalg.inv(scipy.linalg.sqrtm(O)))
### for atomic basis
if atomic:
mo_coeff = np.identity(len(mo_coeff))
###
# print(mo_coeff)
hij = mf.get_hcore()
mohij = np.dot(np.dot(mo_coeff.T, hij), mo_coeff)
# mohij = hij
eri = ao2mo.incore.full(mf._eri, mo_coeff, compact=False)
# eri_1 = mf._eri
# print(np.shape(eri))
# print(np.shape(eri_1))
mohijkl = eri.reshape(norbs, norbs, norbs, norbs)
# exit()
# dipole integrals
mol.set_common_orig((0, 0, 0))
ao_dip = mol.intor_symmetric('int1e_r', comp=3)
x_dip_ints = QMolecule.oneeints2mo(ao_dip[0], mo_coeff)
y_dip_ints = QMolecule.oneeints2mo(ao_dip[1], mo_coeff)
z_dip_ints = QMolecule.oneeints2mo(ao_dip[2], mo_coeff)
dm = mf.make_rdm1(mf.mo_coeff, mf.mo_occ)
if calc_type == 'rohf' or calc_type == 'uhf':
dm = dm[0]
elec_dip = np.negative(np.einsum('xij,ji->x', ao_dip, dm).real)
elec_dip = np.round(elec_dip, decimals=8)
nucl_dip = np.einsum('i,ix->x', mol.atom_charges(), mol.atom_coords())
nucl_dip = np.round(nucl_dip, decimals=8)
logger.info("HF Electronic dipole moment: {}".format(elec_dip))
logger.info("Nuclear dipole moment: {}".format(nucl_dip))
logger.info("Total dipole moment: {}".format(nucl_dip+elec_dip))
return ehf, enuke, norbs, mohij, mohijkl, mo_coeff, orbs_energy, x_dip_ints, y_dip_ints, z_dip_ints, nucl_dip
def qmol_func(mol, calc_type='rhf', atomic=False):
ehf, enuke, norbs, mohij, mohijkl, mo_coeff, orbs_energy, x_dip, y_dip, z_dip, nucl_dip = _calculate_integrals(mol, calc_type,atomic)
# Create driver level molecule object and populate
_q_ = QMolecule()
# Energies and orbits
_q_.hf_energy = ehf
_q_.nuclear_repulsion_energy = enuke
_q_.num_orbitals = norbs
_q_.num_alpha = mol.nelec[0]
_q_.num_beta = mol.nelec[1]
_q_.mo_coeff = mo_coeff
_q_.orbital_energies = orbs_energy
# Molecule geometry
_q_.molecular_charge = mol.charge
_q_.multiplicity = mol.spin + 1
_q_.num_atoms = mol.natm
_q_.atom_symbol = []
_q_.atom_xyz = np.empty([mol.natm, 3])
atoms = mol.atom_coords()
for _n in range(0, _q_.num_atoms):
xyz = mol.atom_coord(_n)
_q_.atom_symbol.append(mol.atom_pure_symbol(_n))
_q_.atom_xyz[_n][0] = xyz[0]
_q_.atom_xyz[_n][1] = xyz[1]
_q_.atom_xyz[_n][2] = xyz[2]
# 1 and 2 electron integrals. h1 & h2 are ready to pass to FermionicOperator
_q_.mo_onee_ints = mohij
_q_.mo_eri_ints = mohijkl
# dipole integrals
_q_.x_dip_mo_ints = x_dip
_q_.y_dip_mo_ints = y_dip
_q_.z_dip_mo_ints = z_dip
# dipole moment
_q_.nuclear_dipole_moment = nucl_dip
_q_.reverse_dipole_sign = True
return _q_
| 33.594937 | 138 | 0.648832 | 798 | 5,308 | 4.027569 | 0.270677 | 0.052271 | 0.03547 | 0.046671 | 0.21593 | 0.092719 | 0.047293 | 0.028625 | 0.028625 | 0.028625 | 0 | 0.012913 | 0.241334 | 5,308 | 157 | 139 | 33.808917 | 0.7852 | 0.288621 | 0 | 0.02381 | 0 | 0 | 0.041643 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02381 | false | 0 | 0.119048 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
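A minimal usage sketch of qmol_func() above (assumes working pyscf/qiskit installs; the molecule and basis are illustrative):

from pyscf import gto

mol = gto.M(atom='H 0 0 0; H 0 0 0.74', basis='sto-3g', charge=0, spin=0)
qmol = qmol_func(mol, calc_type='rhf')
print(qmol.hf_energy, qmol.num_orbitals)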
e2557523d73c9829aec7494745f01d0144031b52 | 1,673 | py | Python | 02_Optimization/naive.py | IC-UFAL/mlclass | c314b028e2221915bc97ab95fc36d46fb87d8f5b | [
"MIT"
] | null | null | null | 02_Optimization/naive.py | IC-UFAL/mlclass | c314b028e2221915bc97ab95fc36d46fb87d8f5b | [
"MIT"
] | null | null | null | 02_Optimization/naive.py | IC-UFAL/mlclass | c314b028e2221915bc97ab95fc36d46fb87d8f5b | [
"MIT"
] | 2 | 2019-02-20T02:57:35.000Z | 2019-02-28T00:49:41.000Z | import datetime
import json
import requests
# url = 'http://localhost:8080/antenna/simulate?phi1={}&theta1={}&phi2={}&theta2={}&phi3={}&theta3={}'
url = 'https://aydanomachado.com/mlclass/02_Optimization.php?phi1={}&theta1={}&phi2={}&theta2={}&phi3={}&theta3={}&dev_key=Dual Core'
angles = ['phi1', 'theta1', 'phi2', 'theta2', 'phi3', 'theta3']
values = {'phi1': 10, 'theta1': 180, 'phi2': 359, 'theta2': 60, 'phi3': 180, 'theta3': 205}
best_result = 29.3878848771
while True:
improved = False
for k in angles:
print('Current Angle:', k)
print('At:', datetime.datetime.now().time())
print()
best_angle = values[k]
for i in range(360):
values[k] = i
while True:
try:
r = requests.get(url.format(values['phi1'], values['theta1'], values['phi2'], values['theta2'],
values['phi3'], values['theta3']))
result = float(json.loads(r.text)['gain'])
break
                except Exception:
                    print('-'*15, 'REQUEST FAILED! At {}'.format(datetime.datetime.now().time()))
continue
print(result)
if result > best_result:
best_result = result
best_angle = i
improved = True
print('-'*30)
print('NEW Best Result:', best_result)
print('Values:', values)
print('At:', datetime.datetime.now().time())
print()
values[k] = best_angle
if not improved:
print('FIM!')
break
| 32.173077 | 133 | 0.500897 | 175 | 1,673 | 4.737143 | 0.428571 | 0.060314 | 0.050663 | 0.072376 | 0.193004 | 0.193004 | 0.084439 | 0 | 0 | 0 | 0 | 0.063964 | 0.336521 | 1,673 | 51 | 134 | 32.803922 | 0.682883 | 0.059773 | 0 | 0.205128 | 0 | 0.025641 | 0.188415 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0.282051 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2586815f692b4c5cc76a0b0f9b72bd23ec35589 | 4,511 | py | Python | nautobot/extras/admin.py | romanukes/nautobot | 1a58479e702c16c9298ab18b96f74718c64697f9 | [
"Apache-2.0"
] | null | null | null | nautobot/extras/admin.py | romanukes/nautobot | 1a58479e702c16c9298ab18b96f74718c64697f9 | [
"Apache-2.0"
] | null | null | null | nautobot/extras/admin.py | romanukes/nautobot | 1a58479e702c16c9298ab18b96f74718c64697f9 | [
"Apache-2.0"
] | null | null | null | from db_file_storage.form_widgets import DBAdminClearableFileInput
from django import forms
from django.contrib import admin, messages
from django.db import transaction
from django.db.models import ProtectedError
from .models import CustomField, CustomFieldChoice, FileProxy, JobResult
def order_content_types(field):
"""
Order the list of available ContentTypes by application
"""
queryset = field.queryset.order_by("app_label", "model")
field.choices = [(ct.pk, "{} > {}".format(ct.app_label, ct.name)) for ct in queryset]
#
# Custom fields
#
class CustomFieldForm(forms.ModelForm):
class Meta:
model = CustomField
exclude = []
widgets = {
"default": forms.TextInput(),
"validation_regex": forms.Textarea(
attrs={
"cols": 80,
"rows": 3,
}
),
}
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
order_content_types(self.fields["content_types"])
class CustomFieldChoiceAdmin(admin.TabularInline):
"""
Defines the inline formset factory that handles choices for selection type custom fields.
The `extra` defines the default number of inline rows that appear in the UI.
"""
model = CustomFieldChoice
extra = 5
@admin.register(CustomField)
class CustomFieldAdmin(admin.ModelAdmin):
"""
Define the structure and composition of the custom field form in the admin panel.
"""
actions = None
form = CustomFieldForm
inlines = [CustomFieldChoiceAdmin]
list_display = [
"name",
"models",
"type",
"required",
"filter_logic",
"default",
"weight",
"description",
]
list_filter = [
"type",
"required",
"content_types",
]
fieldsets = (
(
"Custom Field",
{
"fields": (
"type",
"name",
"weight",
"label",
"description",
"required",
"default",
"filter_logic",
)
},
),
(
"Assignment",
{
"description": "A custom field must be assigned to one or more object types.",
"fields": ("content_types",),
},
),
(
"Validation Rules",
{
"fields": (
"validation_minimum",
"validation_maximum",
"validation_regex",
),
"classes": ("monospace",),
},
),
)
def models(self, obj):
return ", ".join([ct.name for ct in obj.content_types.all()])
@transaction.atomic
def save_formset(self, request, form, formset, change):
# TODO(John): revisit this when custom fields are moved out of admin... there is a better way...
if formset.model != CustomFieldChoice:
return super().save_formset(request, form, formset, change)
instances = formset.save(commit=False)
for instance in instances:
instance.save()
formset.save_m2m()
for obj in formset.deleted_objects:
try:
obj.delete()
except ProtectedError as e:
self.message_user(request, e, level=messages.ERROR)
raise e
#
# File attachments
#
class FileProxyForm(forms.ModelForm):
class Meta:
model = FileProxy
exclude = []
widgets = {
"file": DBAdminClearableFileInput,
}
@admin.register(FileProxy)
class FileProxyAdmin(admin.ModelAdmin):
form = FileProxyForm
list_display = ["name", "uploaded_at"]
list_filter = ["uploaded_at"]
#
# Job results (jobs, scripts, reports, Git repository sync, etc.)
#
@admin.register(JobResult)
class JobResultAdmin(admin.ModelAdmin):
list_display = [
"obj_type",
"name",
"created",
"completed",
"user",
"status",
]
fields = [
"obj_type",
"name",
"created",
"completed",
"user",
"status",
"data",
"job_id",
]
list_filter = [
"status",
]
readonly_fields = fields
def has_add_permission(self, request):
return False
| 24.252688 | 104 | 0.533141 | 404 | 4,511 | 5.836634 | 0.433168 | 0.030534 | 0.010178 | 0.00933 | 0.066158 | 0.031383 | 0.031383 | 0 | 0 | 0 | 0 | 0.001733 | 0.360452 | 4,511 | 185 | 105 | 24.383784 | 0.815598 | 0.109732 | 0 | 0.335766 | 0 | 0 | 0.135709 | 0 | 0 | 0 | 0 | 0.005405 | 0 | 1 | 0.036496 | false | 0 | 0.043796 | 0.014599 | 0.270073 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e25bb3d21aa35bada6a1cc156c43483bfa1ddc27 | 1,125 | py | Python | main/services/post_service.py | andriidem308/django_02 | 6d9f624271d28ea6f53517e6144fa6b9d76598e5 | [
"MIT"
] | null | null | null | main/services/post_service.py | andriidem308/django_02 | 6d9f624271d28ea6f53517e6144fa6b9d76598e5 | [
"MIT"
] | 1 | 2021-05-15T18:28:26.000Z | 2021-05-15T18:28:26.000Z | main/services/post_service.py | andriidem308/django_02 | 6d9f624271d28ea6f53517e6144fa6b9d76598e5 | [
"MIT"
] | null | null | null | """Show Posts Method."""
from django.core.cache import cache
from main.forms import CommentsForm
from main.models import Post
def post_all():
"""Post All."""
key = Post().__class__.cache_key()
if key in cache:
objects_all = cache.get(key)
else:
objects_all = Post.objects.all()
cache.set(key, objects_all, 30)
return objects_all
def post_find(post_id: int) -> Post:
"""Post Find."""
return Post.objects.get(id=post_id)
def comment_method(post, request):
"""Comment Show and Post."""
comments = post.comments.filter(activate=True)
if request.method == 'POST':
# A comment was posted
comment_form = CommentsForm(data=request.POST)
if comment_form.is_valid():
# Create Comment object but don't save to database yet
new_comment = comment_form.save(commit=False)
# Assign the current post to the comment
new_comment.post = post
# Save the comment to the database
new_comment.save()
else:
comment_form = CommentsForm()
return comment_form, comments
| 28.846154 | 66 | 0.639111 | 148 | 1,125 | 4.702703 | 0.385135 | 0.071839 | 0.043103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002401 | 0.259556 | 1,125 | 38 | 67 | 29.605263 | 0.833133 | 0.185778 | 0 | 0.083333 | 0 | 0 | 0.004484 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e25bf8be3fd6bade037eaf4f8cc3eb38deb9550a | 470 | py | Python | tests/fixtures/device.py | jspaaks/vak | 581ec4869d342e5d52bc057de54c10901f06d343 | [
"BSD-3-Clause"
] | 26 | 2019-03-04T20:08:57.000Z | 2022-01-22T13:40:00.000Z | tests/fixtures/device.py | jspaaks/vak | 581ec4869d342e5d52bc057de54c10901f06d343 | [
"BSD-3-Clause"
] | 379 | 2019-03-03T12:16:05.000Z | 2022-03-29T13:44:46.000Z | tests/fixtures/device.py | jspaaks/vak | 581ec4869d342e5d52bc057de54c10901f06d343 | [
"BSD-3-Clause"
] | 12 | 2019-11-22T21:19:19.000Z | 2022-03-14T17:44:59.000Z | import pytest
import torch
DEVICES = ["cpu"]
if torch.cuda.is_available():
DEVICES.append("cuda")
@pytest.fixture(params=DEVICES)
def device(request):
"""parametrized device function,
that returns string names of the devices
that ``torch`` considers "available".
causes any test using ``device`` fixture to run just once
if only a cpu is available,
and twice if ``torch.cuda.is_available()`` returns ``True``."""
return request.param
| 24.736842 | 67 | 0.695745 | 63 | 470 | 5.15873 | 0.619048 | 0.101538 | 0.067692 | 0.08 | 0.135385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.185106 | 470 | 18 | 68 | 26.111111 | 0.848564 | 0.544681 | 0 | 0 | 0 | 0 | 0.037234 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
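A hypothetical test consuming the fixture above; pytest runs it once per available device string (the tensor check is illustrative):

import torch

def test_zeros_on_device(device):
    tensor = torch.zeros(3, device=device)
    assert tensor.device.type == device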
e25e669fd4134f81db148915a2fa79ef70cd3223 | 3,464 | py | Python | ckan_api_client/tests/functional/client_lowlev/test_group_crud.py | plorenzatto/ckan-api-client | aad42484b07f3f24eef32d10547fad8d7ea27400 | [
"BSD-2-Clause"
] | 4 | 2015-07-30T03:46:48.000Z | 2018-04-26T08:28:39.000Z | ckan_api_client/tests/functional/client_lowlev/test_group_crud.py | plorenzatto/ckan-api-client | aad42484b07f3f24eef32d10547fad8d7ea27400 | [
"BSD-2-Clause"
] | 3 | 2015-03-09T11:16:15.000Z | 2021-03-11T16:09:25.000Z | ckan_api_client/tests/functional/client_lowlev/test_group_crud.py | plorenzatto/ckan-api-client | aad42484b07f3f24eef32d10547fad8d7ea27400 | [
"BSD-2-Clause"
] | 2 | 2016-09-12T14:14:45.000Z | 2021-03-11T16:09:56.000Z | import copy
import pytest
from ckan_api_client.exceptions import HTTPError
from ckan_api_client.tests.utils.strings import gen_random_id
from ckan_api_client.tests.utils.validation import check_group
@pytest.mark.xfail(run=False, reason='Work in progress')
def test_group_crud(ckan_client_ll):
client = ckan_client_ll
code = gen_random_id()
group = {
'name': 'group-{0}'.format(code),
'title': 'Group {0}'.format(code),
}
created = client.post_group(group)
check_group(created, group)
group_id = created['id']
# Retrieve & check
retrieved = client.get_group(group_id)
assert retrieved == created
# Update & check
updated = client.put_group({'id': group_id, 'title': 'My Group'})
assert updated['name'] == group['name']
assert updated['title'] == 'My Group'
# Check differences
expected = copy.deepcopy(created)
expected['title'] = 'My Group'
check_group(updated, expected)
# Retrieve & double-check
retrieved = client.get_group(group_id)
assert retrieved == updated
# Delete
# ------------------------------------------------------------
# Note: it's impossible to actually delete a group.
# The only hint it has been deleted is its "state"
# is set to "deleted".
# ------------------------------------------------------------
client.delete_group(group_id)
with pytest.raises(HTTPError) as excinfo:
client.get_group(group_id)
assert excinfo.value.status_code in (404, 403) # workaround
# retrieved = ckan_client.get_group(group_id)
# assert retrieved['state'] == 'deleted'
# anon_client = ckan_client.anonymous
# # with pytest.raises(HTTPError) as excinfo:
# # anon_client.get_group(group_id)
# # assert excinfo.value.status_code in (404, 403) # workaround
# retrieved = anon_client.get_group(group_id)
# assert retrieved['state'] == 'deleted'
# @pytest.mark.xfail(run=False, reason='Is using deprecated functions')
# def test_simple_group_crud(ckan_client):
# # Let's try creating a dataset
# _group = get_dummy_group(ckan_client)
# group = ckan_client.post_group(_group)
# group_id = group['id']
# ## Let's check group data first..
# for key, val in _group.iteritems():
# assert group[key] == val
# ## Check that retrieved group is identical
# group = ckan_client.get_group(group_id)
# for key, val in _group.iteritems():
# assert group[key] == val
# ## Check against data loss on update..
# retrieved_group = group
# updates = {
# 'title': 'New group title',
# 'description': 'New group description',
# }
# new_group = copy.deepcopy(group)
# new_group.update(updates)
# new_group['id'] = group_id
# ## Get the updated group
# updated_group = ckan_client.put_group(new_group)
# updated_group_2 = ckan_client.get_group(group_id)
# ## They should be equal!
# assert updated_group == updated_group_2
# ## And the updated group shouldn't have data loss
# expected_group = copy.deepcopy(retrieved_group)
# expected_group.update(updates)
# check_group(updated_group, expected_group)
# # for f in GROUP_FIELDS['cruft']:
# # updated_group.pop(f, None)
# # expected_group.pop(f, None)
# # assert updated_group == expected_group
# ## Delete the group
# ckan_client.delete_group(group_id)
| 30.928571 | 71 | 0.638857 | 432 | 3,464 | 4.909722 | 0.266204 | 0.056106 | 0.067893 | 0.071664 | 0.34512 | 0.322489 | 0.212164 | 0.208392 | 0.208392 | 0.115983 | 0 | 0.005885 | 0.215069 | 3,464 | 111 | 72 | 31.207207 | 0.774182 | 0.604792 | 0 | 0.066667 | 0 | 0 | 0.072925 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.033333 | false | 0 | 0.166667 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e26133561a6a1fbea9a0c907133569d903786fa4 | 749 | py | Python | ex102.py | ezequielwish/Python3 | a4489d49e6919649437cb9e682614240701e2b68 | [
"MIT"
] | 1 | 2022-01-24T02:01:32.000Z | 2022-01-24T02:01:32.000Z | ex102.py | ezequielwish/Python3 | a4489d49e6919649437cb9e682614240701e2b68 | [
"MIT"
] | null | null | null | ex102.py | ezequielwish/Python3 | a4489d49e6919649437cb9e682614240701e2b68 | [
"MIT"
] | null | null | null | # Create a program that has a factorial() function receiving two parameters: the first indicating
# the number whose factorial is to be computed, and another called show, an optional boolean value
# indicating whether or not the factorial calculation process is shown on screen.
def factorial(number, show=False):
"""
Calcula o fatorial de um núumero
:param number: o número a ser calculado o fatorial
:param show: mostrar o cálculo
:return: fatorial
"""
fact = 1
for count in range(number, 0, -1):
fact *= count
if show:
print(count, end='')
if count == 1:
print(' = ', end='')
else:
print(' x ', end='')
return fact
print(factorial(5, True))
| 29.96 | 101 | 0.600801 | 102 | 749 | 4.411765 | 0.598039 | 0.031111 | 0.035556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009634 | 0.307076 | 749 | 24 | 102 | 31.208333 | 0.857418 | 0.518024 | 0 | 0 | 0 | 0 | 0.018127 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0 | 0 | 0.166667 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e262da1aa6ad6096d2959977d0253825881aad07 | 1,417 | py | Python | tbonlineproject/faq/templatetags/faqtags.py | nathangeffen/tbonline3 | 1b8a3af8d2dc1ee8083ca6638d025e94bd98f253 | [
"MIT"
] | null | null | null | tbonlineproject/faq/templatetags/faqtags.py | nathangeffen/tbonline3 | 1b8a3af8d2dc1ee8083ca6638d025e94bd98f253 | [
"MIT"
] | 3 | 2021-06-08T23:57:13.000Z | 2022-01-13T03:42:01.000Z | tbonlineproject/faq/templatetags/faqtags.py | nathangeffen/tbonline-2 | 0d5869197e66a0057fa07cb99f21dde7f5b47c30 | [
"MIT"
] | null | null | null | import re
from django import template
from faq.models import QuestionCategory, QuestionAndAnswer
register = template.Library()
class QuestionsByCategories(template.Node):
def __init__(self, categories, var_name):
self.categories = categories
self.var_name = var_name
def render(self, context):
try:
context[self.var_name] = QuestionCategory.objects.filter(pk__in=self.categories)
        except Exception:
            pass
return ""
def do_get_question_categories(parser, token):
try:
# split_contents() knows not to split quoted strings.
tag_name, arg = token.contents.split(None, 1)
except ValueError:
raise template.TemplateSyntaxError("%r tag requires arguments" % token.contents.split()[0])
m = re.search(r'(\"[0-9,]+\") as (\w+)', arg)
if not m:
raise template.TemplateSyntaxError("%r tag had invalid arguments" % tag_name)
category_string, var_name = m.groups()
category_string = category_string[1:-1]
try:
categories = [int(i) for i in category_string.rsplit(",")]
except ValueError:
raise template.TemplateSyntaxError("List of question and answer "
"categories must be comma separated integers")
return QuestionsByCategories(categories, var_name)
register.tag('get_question_categories', do_get_question_categories)
| 30.148936 | 99 | 0.66549 | 163 | 1,417 | 5.619632 | 0.472393 | 0.045852 | 0.068777 | 0.050218 | 0.148472 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005561 | 0.238532 | 1,417 | 46 | 100 | 30.804348 | 0.843373 | 0.035992 | 0 | 0.16129 | 0 | 0 | 0.124633 | 0.016862 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096774 | false | 0.032258 | 0.096774 | 0 | 0.290323 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
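A template usage sketch matching the '("[0-9,]+") as (\w+)' pattern the tag parses above (the category primary keys are illustrative):

{% load faqtags %}
{% get_question_categories "1,2,3" as categories %}
{% for category in categories %}{{ category }}{% endfor %}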
e26a005aba399c82f6157e54fb1eeafe0b723175 | 5,145 | py | Python | tools/test_and_draw_to_file.py | freesunshine/mmdetection | 9fbd92c181f90feccfba7c8f17817c170b484bd9 | [
"Apache-2.0"
] | null | null | null | tools/test_and_draw_to_file.py | freesunshine/mmdetection | 9fbd92c181f90feccfba7c8f17817c170b484bd9 | [
"Apache-2.0"
] | null | null | null | tools/test_and_draw_to_file.py | freesunshine/mmdetection | 9fbd92c181f90feccfba7c8f17817c170b484bd9 | [
"Apache-2.0"
] | null | null | null | from mmdet.apis import init_detector, inference_detector, show_result
import mmcv
import os
import cv2
import sys
from mmdet.datasets.pipelines.loading import LoadPolNPZImageFromFile
from mmdet.datasets.pipelines.loading import LoadPolSubImageFromFile
import numpy as np
def load_pol_sub_image(sample_file, div_num=65535.0):
img = cv2.imread(sample_file, -1)
if img is None:
print('load image error')
print(sample_file)
else:
img = img.astype(np.float32)
img = img / div_num
return img
def load_pol_npz_image(sample_file):
img = np.load(sample_file)["arr_0"]
if img is None:
print('load image error')
print(sample_file)
return img
def test_and_draw_from_single_file(sample_file, ext_name, bgr_file, out_file, config_file, checkpoint_file, score_threhold):
model = init_detector(config_file, checkpoint_file, device='cuda:0')
sample = None
if ext_name == 'bgr':
sample = mmcv.imread(sample_file)
elif ext_name == 'tiff':
sample = load_pol_sub_image(sample_file)
else:
        sample = load_pol_npz_image(sample_file)  # Use the npz helper above; the imported pipeline class is not a loader function.
print(sample)
result = inference_detector(model, sample)
img = mmcv.imread(bgr_file)
show_result(img, result, model.CLASSES, out_file=out_file, score_thr=score_threhold)
# ext_name: 'bgr' / 'pol' / 'sub' / other npz extensions
def test_and_draw_from_xmls(xml_dir, ext_name, sample_dir, bgr_dir, out_dir, config_file, checkpoint_file, score_threhold):
model = init_detector(config_file, checkpoint_file, device='cuda:0')
xlms = os.listdir(xml_dir)
sample_ids = [i.split('_')[0]+'_'+i.split('_')[1] for i in xlms if i.endswith('.xml')]
for xlm_filename in xlms:
sample_file=''
sample_path=''
sample = None
sample_id = xlm_filename.split('.')[0]
if ext_name=='bgr':
sample_file = xlm_filename.split('.')[0] + '.tiff'
elif ext_name =='pol' or ext_name=='sub':
sample_file = sample_id.split('_')[0] + '_' + sample_id.split('_')[1] + '.' + 'tiff'
else:
            sample_file = sample_id.split('_')[0] + '_' + sample_id.split('_')[1] + '.' + ext_name + '.npz'
sample_path = os.path.join(sample_dir, sample_file)
if ext_name=='bgr':
sample = mmcv.imread(sample_path)
elif ext_name =='pol' or ext_name=='sub':
sample = load_pol_sub_image(sample_path)
else:
sample = load_pol_npz_image(sample_path)
img_path = os.path.join(bgr_dir, xlm_filename.split('.')[0] + '.tiff')
img = mmcv.imread(img_path)
result = inference_detector(model, sample)
out_file = sample_id.split('_')[0] + '_' + sample_id.split('_')[1] + '.' + ext_name+'.jpg'
out_path = os.path.join(out_dir, out_file)
show_result(img, result, model.CLASSES, show=False, out_file=out_path, score_thr=score_threhold)
print(out_path)
if __name__ == '__main__':
# polnet_cfg = "/home/gdgc0402/Code/mmdet-pol/configs/PolNet/faster_rcnn_pol_r50_fpn_1x_48-96-32-16-5.py"
# polnet_pth = "/home/gdgc0402/Data/work_dirs/car-xmls/faster_rcnn_pol_r50_fpn_1x_48-96-32-16-5/epoch_200.pth"
# polnet_sample = "/home/gdgc0402/Data/PolData/images/d04590135_images/20200102_102624628.tiff"
# polnet_bgr = "/home/gdgc0402/Data/PolData/images/bgr_images/20200102_102624628.tiff"
#
# test_and_draw_from_single_file(polnet_sample,
# 'tiff',
# polnet_bgr,
# '/home/gdgc0402/1.jpg',
# polnet_cfg,
# polnet_pth,
# 0.5
# )
#
# bgr_cfg = "/home/gdgc0402/Code/mmdet-pol/configs/PolNet/faster_rcnn_bgr_r50_fpn_1x.py"
# bgr_pth = "/home/gdgc0402/Data/work_dirs/car-xmls/faster_rcnn_bgr_r50_fpn_1x/epoch_80.pth"
# bgr_sample = "/home/gdgc0402/Data/PolData/images/bgr_images/20200102_102624628.tiff"
# bgr_bgr = "/home/gdgc0402/Data/PolData/images/bgr_images/20200102_102624628.tiff"
#
# test_and_draw_from_single_file(bgr_sample,
# 'bgr',
# bgr_bgr,
# '/home/gdgc0402/1.jpg',
# bgr_cfg,
# bgr_pth,
# 0.5
# )
xml_dir = '/home/gdgc0402/Data/PolData/car-xmls/all'
ext_name = 'bgr'
sample_dir = '/home/gdgc0402/Data/PolData/images/bgr_images'
bgr_dir = '/home/gdgc0402/Data/PolData/images/bgr_images'
out_dir = '/home/gdgc0402/Data/PolData/result_images'
config_file = "/home/gdgc0402/Code/mmdet-pol/configs/PolNet/faster_rcnn_bgr_r50_fpn_1x.py"
checkpoint_file = "/home/gdgc0402/Data/work_dirs/car-xmls2/car2_faster_rcnn_bgr_r50_fpn_1x/epoch_100.pth"
score_threhold = 0.5
test_and_draw_from_xmls(xml_dir, ext_name, sample_dir, bgr_dir, out_dir, config_file, checkpoint_file, score_threhold) | 42.172131 | 124 | 0.625462 | 680 | 5,145 | 4.391176 | 0.176471 | 0.0643 | 0.058942 | 0.061621 | 0.643336 | 0.547555 | 0.464501 | 0.426993 | 0.376758 | 0.357334 | 0 | 0.057165 | 0.255394 | 5,145 | 122 | 125 | 42.172131 | 0.722266 | 0.277162 | 0 | 0.306667 | 0 | 0 | 0.126524 | 0.089407 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053333 | false | 0 | 0.106667 | 0 | 0.186667 | 0.08 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e26c7f0e54b8d43e115078966363aff4e60f48a6 | 26,192 | py | Python | src/cbc_sdk/platform/devices.py | fslds/carbon-black-cloud-sdk-python | 248a3c63d6b36d6fcdbcb3f51fb7751f062ed372 | [
"MIT"
] | 24 | 2020-10-16T22:07:38.000Z | 2022-03-24T14:58:03.000Z | src/cbc_sdk/platform/devices.py | fslds/carbon-black-cloud-sdk-python | 248a3c63d6b36d6fcdbcb3f51fb7751f062ed372 | [
"MIT"
] | 63 | 2020-10-26T18:26:15.000Z | 2022-03-31T17:31:02.000Z | src/cbc_sdk/platform/devices.py | fslds/carbon-black-cloud-sdk-python | 248a3c63d6b36d6fcdbcb3f51fb7751f062ed372 | [
"MIT"
] | 10 | 2020-11-09T11:54:23.000Z | 2022-03-24T20:44:00.000Z | #!/usr/bin/env python3
# *******************************************************
# Copyright (c) VMware, Inc. 2020-2021. All Rights Reserved.
# SPDX-License-Identifier: MIT
# *******************************************************
# *
# * DISCLAIMER. THIS PROGRAM IS PROVIDED TO YOU "AS IS" WITHOUT
# * WARRANTIES OR CONDITIONS OF ANY KIND, WHETHER ORAL OR WRITTEN,
# * EXPRESS OR IMPLIED. THE AUTHOR SPECIFICALLY DISCLAIMS ANY IMPLIED
# * WARRANTIES OR CONDITIONS OF MERCHANTABILITY, SATISFACTORY QUALITY,
# * NON-INFRINGEMENT AND FITNESS FOR A PARTICULAR PURPOSE.
"""Model and Query Classes for Platform Devices"""
from cbc_sdk.errors import ApiError, ServerError
from cbc_sdk.platform import PlatformModel
from cbc_sdk.platform.vulnerability_assessment import Vulnerability, VulnerabilityQuery
from cbc_sdk.base import (BaseQuery, QueryBuilder, QueryBuilderSupportMixin, CriteriaBuilderSupportMixin,
                          IterableQueryMixin, AsyncQueryMixin)

import time

"""Device Models"""


class Device(PlatformModel):
    """Represents a device (endpoint)."""
    urlobject = "/appservices/v6/orgs/{0}/devices"
    urlobject_single = "/appservices/v6/orgs/{0}/devices/{1}"
    primary_key = "id"
    swagger_meta_file = "platform/models/device.yaml"

    def __init__(self, cb, model_unique_id, initial_data=None):
        """
        Initialize the Device object.

        Args:
            cb (BaseAPI): Reference to API object used to communicate with the server.
            model_unique_id (str): ID of the alert represented.
            initial_data (dict): Initial data used to populate the alert.
        """
        super(Device, self).__init__(cb, model_unique_id, initial_data)
        if model_unique_id is not None and initial_data is None:
            self._refresh()

    @classmethod
    def _query_implementation(cls, cb, **kwargs):
        """
        Returns the appropriate query object for the Device type.

        Args:
            cb (BaseAPI): Reference to API object used to communicate with the server.
            **kwargs (dict): Not used, retained for compatibility.

        Returns:
            DeviceSearchQuery: The query object for this alert type.
        """
        return DeviceSearchQuery(cls, cb)

    @property
    def deviceId(self):
        """Warn user that Platform Devices use 'id', not 'device_id'.

        Platform Device API's return 'id' in API responses, where Endpoint Standard
        API's return 'deviceId'.
        """
        raise AttributeError("Platform Devices use .id property for device ID.")

    def _refresh(self):
        """
        Rereads the device data from the server.

        Returns:
            bool: True if refresh was successful, False if not.
        """
        url = self.urlobject_single.format(self._cb.credentials.org_key, self._model_unique_id)
        resp = self._cb.get_object(url)
        self._info = resp
        self._last_refresh_time = time.time()
        return True

    def lr_session(self, async_mode=False):
        """
        Retrieve a Live Response session object for this Device.

        Returns:
            LiveResponseSession: Live Response session for the Device.

        Raises:
            ApiError: If there is an error establishing a Live Response session for this Device.
        """
        return self._cb._request_lr_session(self._model_unique_id, async_mode=async_mode)

    def background_scan(self, flag):
        """
        Set the background scan option for this device.

        Args:
            flag (bool): True to turn background scan on, False to turn it off.

        Returns:
            str: The JSON output from the request.
        """
        return self._cb.device_background_scan([self._model_unique_id], flag)

    def bypass(self, flag):
        """
        Set the bypass option for this device.

        Args:
            flag (bool): True to enable bypass, False to disable it.

        Returns:
            str: The JSON output from the request.
        """
        return self._cb.device_bypass([self._model_unique_id], flag)

    def delete_sensor(self):
        """
        Delete this sensor device.

        Returns:
            str: The JSON output from the request.
        """
        return self._cb.device_delete_sensor([self._model_unique_id])

    def uninstall_sensor(self):
        """
        Uninstall this sensor device.

        Returns:
            str: The JSON output from the request.
        """
        return self._cb.device_uninstall_sensor([self._model_unique_id])

    def quarantine(self, flag):
        """
        Set the quarantine option for this device.

        Args:
            flag (bool): True to enable quarantine, False to disable it.

        Returns:
            str: The JSON output from the request.
        """
        return self._cb.device_quarantine([self._model_unique_id], flag)

    def update_policy(self, policy_id):
        """
        Set the current policy for this device.

        Args:
            policy_id (int): ID of the policy to set for the devices.

        Returns:
            str: The JSON output from the request.
        """
        return self._cb.device_update_policy([self._model_unique_id], policy_id)

    def update_sensor_version(self, sensor_version):
        """
        Update the sensor version for this device.

        Args:
            sensor_version (dict): New version properties for the sensor.

        Returns:
            str: The JSON output from the request.
        """
        return self._cb.device_update_sensor_version([self._model_unique_id], sensor_version)

    def vulnerability_refresh(self):
        """Perform an action on a specific device. Only REFRESH is supported."""
        request = {"action_type": 'REFRESH'}
        url = "/vulnerability/assessment/api/v1/orgs/{}".format(self._cb.credentials.org_key)
        url += '/devices/{}/device_actions'.format(self._model_unique_id)
        resp = self._cb.post_object(url, body=request)
        if resp.status_code == 200:
            return resp.json()
        elif resp.status_code == 204:
            return None
        else:
            raise ServerError(error_code=resp.status_code, message="Device action error: {0}".format(resp.content))

    def get_vulnerability_summary(self, category=None):
        """
        Get the vulnerabilities associated with this device.

        Args:
            category (string): (optional) vulnerability category (OS, APP)

        Returns:
            dict: summary for the vulnerabilities for this device
        """
        VALID_CATEGORY = ["OS", "APP"]
        query_params = {}
        url = '/vulnerability/assessment/api/v1/orgs/{}'
        if category and category not in VALID_CATEGORY:
            raise ApiError("Invalid category provided")
        elif category:
            query_params["category"] = category
        req_url = url.format(self._cb.credentials.org_key) + '/devices/{}/vulnerabilities/summary'.format(self.id)
        return self._cb.get_object(req_url, query_params)

    def get_vulnerabilties(self):
        """
        Get an Operating System or Application Vulnerability List for a specific device.

        Returns:
            VulnerabilityQuery: query over the vulnerabilities for this device
        """
        return VulnerabilityQuery(Vulnerability, self._cb, self)
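
# Illustrative usage sketch (editorial, not part of the SDK): fetching a device
# and toggling quarantine. Assumes a configured CBCloudAPI instance named ``cb``
# and a known device ID; both are placeholders.
#
#     device = cb.select(Device, 12345)
#     device.quarantine(True)            # returns the raw JSON response
#     summary = device.get_vulnerability_summary(category="OS")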
############################################
# Device Queries
class DeviceSearchQuery(BaseQuery, QueryBuilderSupportMixin, CriteriaBuilderSupportMixin,
                        IterableQueryMixin, AsyncQueryMixin):
    """Represents a query that is used to locate Device objects."""
    VALID_OS = ["WINDOWS", "ANDROID", "MAC", "IOS", "LINUX", "OTHER"]
    VALID_STATUSES = ["PENDING", "REGISTERED", "UNINSTALLED", "DEREGISTERED",
                      "ACTIVE", "INACTIVE", "ERROR", "ALL", "BYPASS_ON",
                      "BYPASS", "QUARANTINE", "SENSOR_OUTOFDATE",
                      "DELETED", "LIVE"]
    VALID_PRIORITIES = ["LOW", "MEDIUM", "HIGH", "MISSION_CRITICAL"]
    VALID_DIRECTIONS = ["ASC", "DESC"]
    VALID_DEPLOYMENT_TYPES = ["ENDPOINT", "WORKLOAD"]

    def __init__(self, doc_class, cb):
        """
        Initialize the DeviceSearchQuery.

        Args:
            doc_class (class): The model class that will be returned by this query.
            cb (BaseAPI): Reference to API object used to communicate with the server.
        """
        self._doc_class = doc_class
        self._cb = cb
        self._count_valid = False
        super(DeviceSearchQuery, self).__init__()
        self._query_builder = QueryBuilder()
        self._criteria = {}
        self._time_filter = {}
        self._exclusions = {}
        self._sortcriteria = {}
        self.max_rows = -1

    def _update_exclusions(self, key, newlist):
        """
        Updates the exclusion criteria being collected for a query.

        Assumes the specified criteria item is defined as a list; the list passed in will be set as the value for this
        criteria item, or appended to the existing one if there is one.

        Args:
            key (str): The key for the criteria item to be set.
            newlist (list): List of values to be set for the criteria item.
        """
        oldlist = self._exclusions.get(key, [])
        self._exclusions[key] = oldlist + newlist

    def set_ad_group_ids(self, ad_group_ids):
        """
        Restricts the devices that this query is performed on to the specified AD group IDs.

        Args:
            ad_group_ids (list): List of AD group IDs to restrict the search to.

        Returns:
            DeviceSearchQuery: This instance.

        Raises:
            ApiError: If invalid (non-int) values are passed in the list.
        """
        if not all(isinstance(ad_group_id, int) for ad_group_id in ad_group_ids):
            raise ApiError("One or more invalid AD group IDs")
        self._update_criteria("ad_group_id", ad_group_ids)
        return self

    def set_device_ids(self, device_ids):
        """
        Restricts the devices that this query is performed on to the specified device IDs.

        Args:
            device_ids (list): List of device IDs to restrict the search to.

        Returns:
            DeviceSearchQuery: This instance.

        Raises:
            ApiError: If invalid (non-int) values are passed in the list.
        """
        if not all(isinstance(device_id, int) for device_id in device_ids):
            raise ApiError("One or more invalid device IDs")
        self._update_criteria("id", device_ids)
        return self

    def set_last_contact_time(self, *args, **kwargs):
        """
        Restricts the devices that this query is performed on to the specified last contact time.

        Args:
            *args (list): Not used, retained for compatibility.
            **kwargs (dict): Keyword arguments to this function. The critical ones are "start" (the start time),
                "end" (the end time), and "range" (the range value).

        Returns:
            DeviceSearchQuery: This instance.

        Raises:
            ApiError: If an invalid combination of keyword parameters are specified.
        """
        if kwargs.get("start", None) and kwargs.get("end", None):
            if kwargs.get("range", None):
                raise ApiError("cannot specify range= in addition to start= and end=")
            stime = kwargs["start"]
            if not isinstance(stime, str):
                stime = stime.isoformat()
            etime = kwargs["end"]
            if not isinstance(etime, str):
                etime = etime.isoformat()
            self._time_filter = {"start": stime, "end": etime}
        elif kwargs.get("range", None):
            if kwargs.get("start", None) or kwargs.get("end", None):
                raise ApiError("cannot specify start= or end= in addition to range=")
            self._time_filter = {"range": kwargs["range"]}
        else:
            raise ApiError("must specify either start= and end= or range=")
        return self
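
    # Illustrative time-filter usage (editorial; values are hypothetical):
    # either a start/end pair or a range token may be given, never both.
    #
    #     query.set_last_contact_time(start="2021-01-01T00:00:00Z",
    #                                 end="2021-02-01T00:00:00Z")
    #     query.set_last_contact_time(range="-2w")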
    def set_os(self, operating_systems):
        """
        Restricts the devices that this query is performed on to the specified operating systems.

        Args:
            operating_systems (list): List of operating systems to restrict search to. Valid values in this list are
                "WINDOWS", "ANDROID", "MAC", "IOS", "LINUX", and "OTHER".

        Returns:
            DeviceSearchQuery: This instance.

        Raises:
            ApiError: If invalid operating system values are passed in the list.
        """
        if not all((osval in DeviceSearchQuery.VALID_OS) for osval in operating_systems):
            raise ApiError("One or more invalid operating systems")
        self._update_criteria("os", operating_systems)
        return self

    def set_policy_ids(self, policy_ids):
        """
        Restricts the devices that this query is performed on to the specified policy IDs.

        Args:
            policy_ids (list): List of policy IDs to restrict the search to.

        Returns:
            DeviceSearchQuery: This instance.

        Raises:
            ApiError: If invalid (non-int) values are passed in the list.
        """
        if not all(isinstance(policy_id, int) for policy_id in policy_ids):
            raise ApiError("One or more invalid policy IDs")
        self._update_criteria("policy_id", policy_ids)
        return self

    def set_status(self, statuses):
        """
        Restricts the devices that this query is performed on to the specified status values.

        Args:
            statuses (list): List of statuses to restrict search to. Valid values in this list are "PENDING",
                "REGISTERED", "UNINSTALLED", "DEREGISTERED", "ACTIVE", "INACTIVE", "ERROR", "ALL",
                "BYPASS_ON", "BYPASS", "QUARANTINE", "SENSOR_OUTOFDATE", "DELETED", and "LIVE".

        Returns:
            DeviceSearchQuery: This instance.

        Raises:
            ApiError: If invalid status values are passed in the list.
        """
        if not all((stat in DeviceSearchQuery.VALID_STATUSES) for stat in statuses):
            raise ApiError("One or more invalid status values")
        self._update_criteria("status", statuses)
        return self

    def set_target_priorities(self, target_priorities):
        """
        Restricts the devices that this query is performed on to the specified target priority values.

        Args:
            target_priorities (list): List of priorities to restrict search to. Valid values in this list are "LOW",
                "MEDIUM", "HIGH", and "MISSION_CRITICAL".

        Returns:
            DeviceSearchQuery: This instance.

        Raises:
            ApiError: If invalid priority values are passed in the list.
        """
        if not all((prio in DeviceSearchQuery.VALID_PRIORITIES) for prio in target_priorities):
            raise ApiError("One or more invalid target priority values")
        self._update_criteria("target_priority", target_priorities)
        return self

    def set_exclude_sensor_versions(self, sensor_versions):
        """
        Restricts the devices that this query is performed on to exclude specified sensor versions.

        Args:
            sensor_versions (list): List of sensor versions to be excluded.

        Returns:
            DeviceSearchQuery: This instance.

        Raises:
            ApiError: If invalid (non-string) values are passed in the list.
        """
        if not all(isinstance(v, str) for v in sensor_versions):
            raise ApiError("One or more invalid sensor versions")
        self._update_exclusions("sensor_version", sensor_versions)
        return self

    def sort_by(self, key, direction="ASC"):
        """
        Sets the sorting behavior on a query's results.

        Example:
            >>> cb.select(Device).sort_by("status")

        Args:
            key (str): The key in the schema to sort by.
            direction (str): The sort order, either "ASC" or "DESC".

        Returns:
            DeviceSearchQuery: This instance.

        Raises:
            ApiError: If an invalid direction value is passed.
        """
        if direction not in DeviceSearchQuery.VALID_DIRECTIONS:
            raise ApiError("invalid sort direction specified")
        self._sortcriteria = {"field": key, "order": direction}
        return self

    def set_deployment_type(self, deployment_type):
        """
        Restricts the devices that this query is performed on to the specified deployment types.

        Args:
            deployment_type (list): List of deployment types to restrict search to.

        Returns:
            DeviceSearchQuery: This instance.

        Raises:
            ApiError: If invalid deployment type values are passed in the list.
        """
        if not all((type in DeviceSearchQuery.VALID_DEPLOYMENT_TYPES) for type in deployment_type):
            raise ApiError("invalid deployment_type specified")
        self._update_criteria("deployment_type", deployment_type)
        return self

    def set_max_rows(self, max_rows):
        """
        Sets the max number of devices to fetch in a singular query.

        Args:
            max_rows (integer): Max number of devices

        Returns:
            DeviceSearchQuery: This instance.

        Raises:
            ApiError: If rows is negative or greater than 10000
        """
        if max_rows < 0 or max_rows > 10000:
            raise ApiError("Max rows must be between 0 and 10000")
        self.max_rows = max_rows
        return self

    def _build_request(self, from_row, max_rows):
        """
        Creates the request body for an API call.

        Args:
            from_row (int): The row to start the query at.
            max_rows (int): The maximum number of rows to be returned.

        Returns:
            dict: The complete request body.
        """
        mycrit = self._criteria
        if self._time_filter:
            mycrit["last_contact_time"] = self._time_filter
        request = {"criteria": mycrit, "exclusions": self._exclusions}
        request["query"] = self._query_builder._collapse()
        if from_row > 1:
            request["start"] = from_row
        if max_rows >= 0:
            request["rows"] = max_rows
        elif self.max_rows >= 0:
            request["rows"] = self.max_rows
        if self._sortcriteria != {}:
            request["sort"] = [self._sortcriteria]
        return request

    def _build_url(self, tail_end):
        """
        Creates the URL to be used for an API call.

        Args:
            tail_end (str): String to be appended to the end of the generated URL.

        Returns:
            str: The complete URL.
        """
        url = self._doc_class.urlobject.format(self._cb.credentials.org_key) + tail_end
        return url

    def _count(self):
        """
        Returns the number of results from the run of this query.

        Returns:
            int: The number of results from the run of this query.
        """
        if self._count_valid:
            return self._total_results
        url = self._build_url("/_search")
        request = self._build_request(0, -1)
        resp = self._cb.post_object(url, body=request)
        result = resp.json()
        self._total_results = result["num_found"]
        self._count_valid = True
        return self._total_results

    def _perform_query(self, from_row=1, max_rows=-1):
        """
        Performs the query and returns the results of the query in an iterable fashion.

        Device v6 API uses base 1 instead of 0.

        Args:
            from_row (int): The row to start the query at (default 1).
            max_rows (int): The maximum number of rows to be returned (default -1, meaning "all").

        Returns:
            Iterable: The iterated query.
        """
        url = self._build_url("/_search")
        current = from_row
        numrows = 0
        still_querying = True
        while still_querying:
            request = self._build_request(current, max_rows)
            resp = self._cb.post_object(url, body=request)
            result = resp.json()
            self._total_results = result["num_found"]
            self._count_valid = True
            results = result.get("results", [])
            for item in results:
                yield self._doc_class(self._cb, item["id"], item)
                current += 1
                numrows += 1
                if max_rows > 0 and numrows == max_rows:
                    still_querying = False
                    break
            from_row = current
            if current >= self._total_results:
                still_querying = False
                break

    def _run_async_query(self, context):
        """
        Executed in the background to run an asynchronous query. Must be implemented in any inheriting classes.

        Args:
            context (object): The context returned by _init_async_query. May be None.

        Returns:
            Any: Result of the async query, which is then returned by the future.
        """
        url = self._build_url("/_search")
        self._total_results = 0
        self._count_valid = False
        output = []
        while not self._count_valid or len(output) < self._total_results:
            request = self._build_request(len(output), -1)
            resp = self._cb.post_object(url, body=request)
            result = resp.json()
            if not self._count_valid:
                self._total_results = result["num_found"]
                self._count_valid = True
            results = result.get("results", [])
            output += [self._doc_class(self._cb, item["id"], item) for item in results]
        return output

    def download(self):
        """
        Uses the query parameters that have been set to download all device listings in CSV format.

        Example:
            >>> cb.select(Device).set_status(["ALL"]).download()

        Returns:
            str: The CSV raw data as returned from the server.

        Raises:
            ApiError: If status values have not been set before calling this function.
        """
        tmp = self._criteria.get("status", [])
        if not tmp:
            raise ApiError("at least one status must be specified to download")
        query_params = {"status": ",".join(tmp)}
        tmp = self._criteria.get("ad_group_id", [])
        if tmp:
            query_params["ad_group_id"] = ",".join([str(t) for t in tmp])
        tmp = self._criteria.get("policy_id", [])
        if tmp:
            query_params["policy_id"] = ",".join([str(t) for t in tmp])
        tmp = self._criteria.get("target_priority", [])
        if tmp:
            query_params["target_priority"] = ",".join(tmp)
        tmp = self._query_builder._collapse()
        if tmp:
            query_params["query_string"] = tmp
        if self._sortcriteria:
            query_params["sort_field"] = self._sortcriteria["field"]
            query_params["sort_order"] = self._sortcriteria["order"]
        url = self._build_url("/_search/download")
        return self._cb.get_raw_data(url, query_params)

    def _bulk_device_action(self, action_type, options=None):
        """
        Perform a bulk action on all devices matching the current search criteria.

        Args:
            action_type (str): The action type to be performed.
            options (dict): Any options for the bulk device action.

        Returns:
            str: The JSON output from the request.
        """
        request = {"action_type": action_type, "search": self._build_request(0, -1)}
        if options:
            request["options"] = options
        return self._cb._raw_device_action(request)

    def background_scan(self, scan):
        """
        Set the background scan option for the specified devices.

        Args:
            scan (bool): True to turn background scan on, False to turn it off.

        Returns:
            str: The JSON output from the request.
        """
        return self._bulk_device_action("BACKGROUND_SCAN", self._cb._action_toggle(scan))

    def bypass(self, enable):
        """
        Set the bypass option for the specified devices.

        Args:
            enable (bool): True to enable bypass, False to disable it.

        Returns:
            str: The JSON output from the request.
        """
        return self._bulk_device_action("BYPASS", self._cb._action_toggle(enable))

    def delete_sensor(self):
        """
        Delete the specified sensor devices.

        Returns:
            str: The JSON output from the request.
        """
        return self._bulk_device_action("DELETE_SENSOR")

    def uninstall_sensor(self):
        """
        Uninstall the specified sensor devices.

        Returns:
            str: The JSON output from the request.
        """
        return self._bulk_device_action("UNINSTALL_SENSOR")

    def quarantine(self, enable):
        """
        Set the quarantine option for the specified devices.

        Args:
            enable (bool): True to enable quarantine, False to disable it.

        Returns:
            str: The JSON output from the request.
        """
        return self._bulk_device_action("QUARANTINE", self._cb._action_toggle(enable))

    def update_policy(self, policy_id):
        """
        Set the current policy for the specified devices.

        Args:
            policy_id (int): ID of the policy to set for the devices.

        Returns:
            str: The JSON output from the request.
        """
        return self._bulk_device_action("UPDATE_POLICY", {"policy_id": policy_id})

    def update_sensor_version(self, sensor_version):
        """
        Update the sensor version for the specified devices.

        Args:
            sensor_version (dict): New version properties for the sensor.

        Returns:
            str: The JSON output from the request.
        """
        return self._bulk_device_action("UPDATE_SENSOR_VERSION", {"sensor_version": sensor_version})
| 35.062918 | 118 | 0.60465 | 3,088 | 26,192 | 4.967617 | 0.132124 | 0.020209 | 0.014407 | 0.016623 | 0.40678 | 0.35189 | 0.312907 | 0.300391 | 0.280574 | 0.258475 | 0 | 0.003342 | 0.303108 | 26,192 | 746 | 119 | 35.10992 | 0.837068 | 0.390195 | 0 | 0.192453 | 0 | 0 | 0.12264 | 0.019102 | 0 | 0 | 0 | 0 | 0 | 1 | 0.158491 | false | 0.022642 | 0.018868 | 0 | 0.366038 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e26ce504f8d020dbf40351637cdd277a284cb83b | 810 | py | Python | cmd/run_pytest.py | Amourspirit/python-version-num | 03e8f35b85900a5f9736dda2d9c172e73bdad9fe | [
"MIT"
] | 1 | 2021-11-13T08:26:05.000Z | 2021-11-13T08:26:05.000Z | cmd/run_pytest.py | Amourspirit/python-version-num | 03e8f35b85900a5f9736dda2d9c172e73bdad9fe | [
"MIT"
] | null | null | null | cmd/run_pytest.py | Amourspirit/python-version-num | 03e8f35b85900a5f9736dda2d9c172e73bdad9fe | [
"MIT"
] | 1 | 2021-11-13T08:26:41.000Z | 2021-11-13T08:26:41.000Z | # coding: utf-8
from subprocess import run
import pathlib
import os
TEST_DIR = 'tests'
ROOT_PATH = pathlib.Path(__file__).parent.parent
TEST_MODULES = ['verr']
def get_modules():
    global TEST_MODULES
    result = ''
    if len(TEST_MODULES) > 0:
        result = ' --cov=' + ' --cov='.join(TEST_MODULES)
    return result
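
# For example, with the module list above, TEST_MODULES = ['verr'] yields
# " --cov=verr", which main() splices into the pytest command below.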
def main():
    global ROOT_PATH
    global TEST_DIR
    os.chdir(str(ROOT_PATH))
    # print(ROOT_PATH)
    cov_mod = get_modules()
    # see: https://stackoverflow.com/questions/41748464/pytest-cannot-import-module-while-python-can
    cmd_str = f"python -m pytest {TEST_DIR}{os.sep}{cov_mod} --cov-report=html"
    print(cmd_str)
    res = run(cmd_str.split(' '))
    if res and res.returncode != 0:
        print(res)
        # print(cmd_str)


if __name__ == '__main__':
    main()
| 24.545455 | 100 | 0.658025 | 115 | 810 | 4.365217 | 0.486957 | 0.063745 | 0.035857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017054 | 0.203704 | 810 | 32 | 101 | 25.3125 | 0.76124 | 0.17284 | 0 | 0 | 0 | 0 | 0.141353 | 0.040602 | 0.041667 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.125 | 0 | 0.25 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e26da13b7aa0f70c0a93ca683c6aa55cf517af22 | 10,159 | py | Python | sectograph/widgets/edit_window.py | yumauri/sectograph | 457c8d44c632e04031d3e59a955d6c41a81a8104 | [
"MIT"
] | null | null | null | sectograph/widgets/edit_window.py | yumauri/sectograph | 457c8d44c632e04031d3e59a955d6c41a81a8104 | [
"MIT"
] | null | null | null | sectograph/widgets/edit_window.py | yumauri/sectograph | 457c8d44c632e04031d3e59a955d6c41a81a8104 | [
"MIT"
] | null | null | null | from PyQt5 import QtCore, QtGui, QtWidgets # , uic
from sectograph import resources, widgets, entities, datetime as dt
from .edit_window_ui import Ui_EditEventForm
repeat = {
    "None": None,
    "Every Day": 1,
    "Every Week": 7,
    "Every 2 Weeks": 14,
    "Every Month": 30,  # actually not a month
    "Every Year": 365,  # actually not a year
}

alert_interval = {
    "None": None,
    "At time of event": 0,
    "5 minutes before": 5,
    "10 minutes before": 10,
    "15 minutes before": 15,
    "30 minutes before": 30,
    "1 hour before": 60,
    "2 hours before": 120,
    "1 day before": 60 * 24,
    "2 days before": 60 * 2 * 24,
}
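
# Editorial note: ``repeat`` values are day counts between occurrences and
# ``alert_interval`` values are minutes before the event; ``None`` disables
# the feature. E.g. "1 day before" -> 60 * 24 = 1440 minutes.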
# class EditWindow(QtWidgets.QWidget):
class EditWindow(QtWidgets.QWidget, Ui_EditEventForm):
    app: widgets.Application
    evt: entities.BaseEvent
    should_play: bool

    def __init__(self, app: widgets.Application, theme_name: str) -> None:
        super().__init__(app.main_window)
        self.app = app
        self.colors = app.colors_repository.get()
        # uic.loadUi('sectograph/widgets/edit_window_ui.ui', self)
        self.setupUi(self)
        self.initUI()

    def initUI(self) -> None:
        self.setWindowFlags(QtCore.Qt.Tool)
        self.installEventFilter(self)
        for i, (name, value) in enumerate(repeat.items()):
            self.repeat_ComboBox.insertItem(i, name)
            self.repeat_ComboBox.setItemData(i, value)
        for color in self.colors:
            # insertItem needs an int index; the original float literal 1e3
            # would raise a TypeError in PyQt5. 1000 keeps the "append at the
            # end" intent (an out-of-range index appends).
            self.color_ComboBox.insertItem(1000, color)
        for i, (name, value) in enumerate(alert_interval.items()):
            self.alert_ComboBox.insertItem(i, name)
            self.alert_ComboBox.setItemData(i, value)
        self.sound_ComboBox.insertItem(0, "None")
        for sound in resources.Sounds.all().keys():
            self.sound_ComboBox.insertItem(1000, sound)

        self.allday_CheckBox.stateChanged.connect(self.allday_changed)
        self.repeat_ComboBox.currentTextChanged.connect(self.repeat_changed)
        self.endrepeat_ComboBox.currentTextChanged.connect(self.endrepeat_changed)
        self.color_ComboBox.currentTextChanged.connect(self.color_changed)
        self.alert_ComboBox.currentTextChanged.connect(self.alert_changed)
        self.sound_ComboBox.currentTextChanged.connect(self.sound_changed)
        self.save_PushButton.clicked.connect(self.save)
        self.cancel_PushButton.clicked.connect(self.cancel)
        self.delete_PushButton.clicked.connect(self.delete)
        self.sound_PushButton.clicked.connect(self.sound_changed)

    def open_with_new(self):
        # create new empty base event with duration of 15 minutes
        self.open(
            entities.BaseEvent(
                start=dt.app_now(),
                finish=dt.app_now() + dt.timedelta(minutes=15),
                text="",
                color="orange",
            )
        )

    def open(self, evt: entities.BaseEvent):
        self.evt = evt
        self.setWindowTitle(("Edit" if evt.id else "Add") + " event")
        self.text_LineEdit.setText(evt.text)
        self.allday_CheckBox.setChecked(evt.day)
        self.start_DateEdit.setDateTime(evt.start)
        self.finish_DateEdit.setDateTime(evt.finish)
        self.starts_DateTimeEdit.setDateTime(evt.start)
        self.finish_DateTimeEdit.setDateTime(evt.finish)

        # select item in repeat_ComboBox
        self.repeat_ComboBox.setCurrentIndex(list(repeat.values()).index(evt.repeat))

        # select item in endrepeat_ComboBox and end repeat date
        self.endrepeat_ComboBox.setCurrentIndex(0 if evt.end is None else 1)
        if evt.end is not None:
            self.endrepeat_DateEdit.setDateTime(evt.end)

        # select item in color_ComboBox
        if evt.color:
            if evt.color in self.colors:
                self.color_ComboBox.setCurrentIndex(self.colors.index(evt.color))
            else:
                self.color_ComboBox.setEditText(evt.color)

        # select item in alert_ComboBox
        self.alert_ComboBox.setCurrentIndex(
            list(alert_interval.values()).index(evt.notify)
        )

        # select item in sound_ComboBox
        self.should_play = False
        self.sound_ComboBox.setCurrentIndex(
            list(resources.Sounds.all().keys()).index(evt.sound) + 1 if evt.sound else 0
        )

        # actualize form state and show
        self.delete_PushButton.setVisible(not not evt.id)
        self.allday_changed()
        self.repeat_changed()
        self.color_changed()
        self.alert_changed()
        self.sound_changed()
        self.show()

    def allday_changed(self):
        is_day_event = self.allday_CheckBox.isChecked()
        self.starts_DateTimeEdit.setVisible(not is_day_event)
        self.finish_DateTimeEdit.setVisible(not is_day_event)
        self.ends_Label.setVisible(not is_day_event)
        self.start_DateEdit.setVisible(is_day_event)
        self.finish_DateEdit.setVisible(False)
        self.alert_Label.setVisible(not is_day_event)
        self.alert_ComboBox.setVisible(not is_day_event)
        self.alert_changed()

    def repeat_changed(self):
        repeat_idx = self.repeat_ComboBox.currentIndex()
        self.endrepeat_Label.setVisible(repeat_idx != 0)
        self.endrepeat_ComboBox.setVisible(repeat_idx != 0)
        self.endrepeat_changed()

    def endrepeat_changed(self):
        repeat_idx = self.repeat_ComboBox.currentIndex()
        endrepeat_idx = self.endrepeat_ComboBox.currentIndex()
        self.endrepeat_DateEdit.setVisible(repeat_idx != 0 and endrepeat_idx != 0)

    def color_changed(self):
        color = self.color_ComboBox.currentText()
        self.colorshow_Label.setStyleSheet(f"margin-left:5px;background-color:{color}")

    def alert_changed(self):
        alert = self.alert_ComboBox.currentText()
        visible = self.alert_ComboBox.isVisible()
        self.sound_Label.setVisible(visible and alert != "None")
        self.sound_ComboBox.setVisible(visible and alert != "None")
        self.sound_PushButton.setVisible(visible and alert != "None")

    def sound_changed(self):
        name = self.sound_ComboBox.currentText()
        self.sound_PushButton.setDisabled(name == "None")
        if self.sound_ComboBox.isVisible() and self.should_play and name != "None":
            self.app.sound.play(name)
        self.should_play = True

    def cancel(self):
        self.close()

    def save(self):
        # get form data
        text = self.text_LineEdit.text()

        # prevent saving event with empty text
        if not text:
            return

        day = self.allday_CheckBox.isChecked()
        if day:
            start = self.start_DateEdit.date().toPyDate()
            finish = self.finish_DateEdit.date().toPyDate()
            start = dt.datetime.combine(start, dt.app_time())
            finish = dt.datetime.combine(finish, dt.app_time())
        else:
            start = self.starts_DateTimeEdit.dateTime().toPyDateTime()
            finish = self.finish_DateTimeEdit.dateTime().toPyDateTime()
            start = start.replace(second=0, microsecond=0)
            finish = finish.replace(second=0, microsecond=0)

        repeat = self.repeat_ComboBox.itemData(self.repeat_ComboBox.currentIndex())
        if repeat is None or self.endrepeat_ComboBox.currentIndex() == 0:
            end = None
        else:
            end = self.endrepeat_DateEdit.date().toPyDate()

        color = self.color_ComboBox.currentText()
        notify = self.alert_ComboBox.itemData(self.alert_ComboBox.currentIndex())
        notify = notify if self.alert_ComboBox.isVisible() else None
        sound = self.sound_ComboBox.currentText()
        sound = sound if notify is not None and sound != "None" else None

        # compose new event
        evt = entities.BaseEvent(
            id=self.evt.id,
            start=start,
            finish=finish,
            text=text,
            color=color,
            notify=notify,
            sound=sound,
            day=day,
            repeat=repeat,
            end=end,
        )

        # convert to event or day event
        # and update/add this event in database
        if day:
            evt = entities.DayEvent.from_base_event(evt)
            if evt.id:
                self.app.day_events_repository.update(evt)
            else:
                self.app.day_events_repository.add(evt)
        else:
            evt = entities.Event.from_base_event(evt)
            if evt.id:
                self.app.events_repository.update(evt)
            else:
                self.app.events_repository.add(evt)

        # update events and close this window
        self.app.main_window.update_events()
        self.close()

    def delete(self):
        msg_box = QtWidgets.QMessageBox()
        answer = msg_box.question(
            self,
            "Delete event",
            f'Are you sure you want to delete event\n"{self.evt.text}"',
            QtWidgets.QMessageBox.Yes | QtWidgets.QMessageBox.No,
            QtWidgets.QMessageBox.Yes,
        )
        if answer == QtWidgets.QMessageBox.Yes:
            # delete event
            if self.evt.day:
                self.app.day_events_repository.delete(self.evt)
            else:
                self.app.events_repository.delete(self.evt)
            # update events and close this window
            self.app.main_window.update_events()
            self.close()

    def keyPressEvent(self, event: QtGui.QKeyEvent):
        key = event.key()
        if key == QtCore.Qt.Key_Escape:
            self.close()

    def eventFilter(self, source, event):
        if event.type() == QtCore.QEvent.KeyPress:
            if (
                event.key() == QtCore.Qt.Key_Enter
                or event.key() == QtCore.Qt.Key_Return
            ):
                if self.text_LineEdit.text() != "":
                    self.save()
        return super().eventFilter(source, event)

    def showEvent(self, event):
        rect = self.parent().geometry()
        x = rect.x() + rect.width() // 2 - self.width() // 2
        y = rect.y()
        self.move(x, y)
        super().showEvent(event)
        self.activateWindow()
| 36.543165 | 88 | 0.625849 | 1,186 | 10,159 | 5.218381 | 0.185497 | 0.030215 | 0.027468 | 0.015835 | 0.226046 | 0.139441 | 0.113589 | 0.049766 | 0.033608 | 0.023913 | 0 | 0.009329 | 0.271976 | 10,159 | 277 | 89 | 36.67509 | 0.827474 | 0.061325 | 0 | 0.113122 | 0 | 0 | 0.036889 | 0.006726 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072398 | false | 0 | 0.013575 | 0 | 0.113122 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e26e1e30afd44b5865d29418ebb7a003e39c3faf | 2,010 | py | Python | time_converter.py | lisprolog/python | 3d2acea2721873d57418b9158ed3bed6e160eb16 | [
"BSD-3-Clause"
] | null | null | null | time_converter.py | lisprolog/python | 3d2acea2721873d57418b9158ed3bed6e160eb16 | [
"BSD-3-Clause"
] | null | null | null | time_converter.py | lisprolog/python | 3d2acea2721873d57418b9158ed3bed6e160eb16 | [
"BSD-3-Clause"
] | null | null | null | '''
You prefer a good old 12-hour time format. But the modern world we live in would rather use the 24-hour format and you see it everywhere. Your task is to convert the time from the 24-h format into 12-h format by following the next rules:
- the output format should be 'hh:mm a.m.' (for hours before midday) or 'hh:mm p.m.' (for hours after midday)
- if hours is less than 10 - don't write a '0' before it. For example: '9:05 a.m.'
Here you can find some useful information about the 12-hour format.
example
Input: Time in a 24-hour format (as a string).
Output: Time in a 12-hour format (as a string).
Precondition:
'00:00' <= time <= '23:59'
'''
def time_converter(time):
    hours = int(time[0:2])
    minutes = time[3:5]
    # Hours 0-11 are a.m., 12-23 are p.m.
    meridiem = "a.m." if hours < 12 else "p.m."
    # Map the 24-hour value onto the 12-hour clock; 0 and 12 both display
    # as 12. This also fixes the original midnight edge case, where e.g.
    # "00:30" produced ":30 a.m." after stripping the leading zero from "0".
    hours = hours % 12
    if hours == 0:
        hours = 12
    # No leading zero on the hour, per the rules above (e.g. '9:05 a.m.')
    return "{}:{} {}".format(hours, minutes, meridiem)
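
# Worked examples (editorial additions; the midnight/noon rows exercise the
# edge cases handled above):
#   time_converter('00:30') -> '12:30 a.m.'
#   time_converter('12:00') -> '12:00 p.m.'
#   time_converter('13:05') -> '1:05 p.m.'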
if __name__ == '__main__':
    print("Example:")
    print(time_converter('12:30'))

    # These "asserts" are used only for self-checking and are not necessary for auto-testing
    assert time_converter('12:30') == '12:30 p.m.'
    assert time_converter('09:00') == '9:00 a.m.'
    assert time_converter('23:15') == '11:15 p.m.'
    assert time_converter('00:00') == '12:00 a.m.'
    print("Coding complete? Click 'Check' to earn cool rewards!")
| 42.765957 | 238 | 0.589055 | 303 | 2,010 | 3.861386 | 0.429043 | 0.010256 | 0.064957 | 0.051282 | 0.068376 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075157 | 0.285075 | 2,010 | 46 | 239 | 43.695652 | 0.73904 | 0.458706 | 0 | 0 | 0 | 0 | 0.146455 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.035714 | false | 0 | 0 | 0 | 0.107143 | 0.107143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e26e49266e956ade02f709bc18636ba6f4633d81 | 447 | py | Python | LeetCode/Study Plan/Algorithm/Day 1/704. Binary Search.py | TejM/HackerRank-Solutions | 8c5a79f7e644f42bc20a8c32818bf88a5c320bc1 | [
"MIT"
] | null | null | null | LeetCode/Study Plan/Algorithm/Day 1/704. Binary Search.py | TejM/HackerRank-Solutions | 8c5a79f7e644f42bc20a8c32818bf88a5c320bc1 | [
"MIT"
] | null | null | null | LeetCode/Study Plan/Algorithm/Day 1/704. Binary Search.py | TejM/HackerRank-Solutions | 8c5a79f7e644f42bc20a8c32818bf88a5c320bc1 | [
"MIT"
] | null | null | null | # Space Complexity O(1)
# Time Complexity O(log N)
from typing import List


class Solution:
    def search(self, nums: List[int], target: int) -> int:
        left = 0
        right = len(nums) - 1
        while left <= right:
            # Parentheses matter: the original `left + right // 2` divided
            # only `right` by 2, breaking the midpoint computation.
            mid = (left + right) // 2
            if target == nums[mid]:
                return mid
            elif target > nums[mid]:
                left = mid + 1
            else:
                right = mid - 1
        return -1
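
# Quick check (illustrative):
#   Solution().search([-1, 0, 3, 5, 9, 12], 9) -> 4
#   Solution().search([-1, 0, 3, 5, 9, 12], 2) -> -1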
| 27.9375 | 58 | 0.44519 | 52 | 447 | 3.826923 | 0.519231 | 0.110553 | 0.130653 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028689 | 0.454139 | 447 | 15 | 59 | 29.8 | 0.786885 | 0.105145 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2717384cf4bcdc0a115d702eb3c542a9840a4cd | 8,777 | py | Python | entities/ships/ship.py | rkwong43/Toh | 702f689aa2c37a9f6f463e24405b6c418fd0607f | [
"CC-BY-4.0"
] | 3 | 2020-01-28T16:02:26.000Z | 2020-01-29T21:47:14.000Z | entities/ships/ship.py | rkwong43/Tears-Under-Heaven | 702f689aa2c37a9f6f463e24405b6c418fd0607f | [
"CC-BY-4.0"
] | null | null | null | entities/ships/ship.py | rkwong43/Tears-Under-Heaven | 702f689aa2c37a9f6f463e24405b6c418fd0607f | [
"CC-BY-4.0"
] | null | null | null | import math
import random
import pygame
from utils import config
# Constants for state of movement and rotations
NO_WAYPOINT = 1
MOVE_WAYPOINT = 2
FIRE_WAYPOINT = 3
MOVE_AND_FIRE_WAYPOINT = 4
"""Represents a generic ship.
"""
class Ship:
random.seed()
"""Constructor to make the ship.
:param x: starting x coordinate of ship
:type x: int
:param y: starting y coordinate of ship
:type y: int
:param hp: hit points of the ship
:type hp: int
:param shield: shield points of the ship
:type shield: int
:param size: size of ship
:type size: int
"""
def __init__(self, x, y, speed, hp, shield, size):
# Speed, constant along an angle (vector)
self.speed = speed * (30 / config.game_fps)
# Position
self.x = int(x)
self.y = int(y)
self.end_x = 0
self.end_y = 0
# Size of the ship (not scaling, should be a value in pixels)
self.size = size
self.angle = -90
#############################################
# Health
self.hp = hp
self.max_hp = hp
#############################################
# Shield
self.shield = shield
# Maximum shield
self.max_shield = shield
# Shield recharge rate
self.shield_recharge_rate = (self.max_shield // 20) / config.game_fps
self.shield_recharge_rate = 1 if self.shield_recharge_rate == 0 else self.shield_recharge_rate
# Delay before shield recharges
self.shield_delay = config.game_fps * 2
# Keeps count of when to regenerate
self.shield_recharge = self.shield_delay
#############################################
# Status indicators
# is_damaged is used for telling when the shop
self.is_damaged = False
self.is_dead = False
# Current waypoint
self.waypoint = None
self._wp_state = NO_WAYPOINT
# Rotation states
self._wp_rotations = {NO_WAYPOINT: self._rotate,
MOVE_WAYPOINT: self._rotate,
FIRE_WAYPOINT: self._rotate_to_wp,
MOVE_AND_FIRE_WAYPOINT: self._rotate_to_wp
}
# Movement states
self._wp_movement = {NO_WAYPOINT: self._move,
MOVE_WAYPOINT: self._move_to_wp,
FIRE_WAYPOINT: self._move,
MOVE_AND_FIRE_WAYPOINT: self._move_to_wp
}
# If done moving to the waypoint, True by default
self.wp_done = True
# If it should be removed when offscreen
self.remove_if_offscreen = True
# If in a form of stealth
self.stealth = False
self.rotation_speed = 3 * (60 / config.game_fps)
# Ship effects
self.ship_effects = []
"""Represents the angle the ship is facing.
:param target: target the ship is facing
:type target: Ship or Waypoint
"""
def _rotate(self, target):
# Rotates the ship to face the target ship
# Adjustment for larger ships
if target.size > 2 * config.ship_size:
y = target.y + target.size / 2
x = target.x + target.size / 2
else:
y = target.y
x = target.x
y_dist = self.y - y
x_dist = self.x - x
target_angle = -int(math.degrees(math.atan2(y_dist, x_dist))) - 90
if abs(self.angle - target_angle) > self.rotation_speed:
v1 = pygame.math.Vector2()
v1.from_polar((1, self.angle))
v2 = pygame.math.Vector2()
v2.from_polar((1, target_angle))
angle_change = -self.rotation_speed if v1.angle_to(v2) < 0 else self.rotation_speed
self.angle += angle_change
"""Rotates the ship depending on its current state.
:param target: target the ship is facing
:type target: Ship or Waypoint
"""
def rotate(self, target):
if target is not None:
self._wp_rotations[self._wp_state](target)
"""Rotates the ship towards its waypoint.
"""
def _rotate_to_wp(self, *args):
self.angle = -math.degrees(math.atan2(self.y - self.waypoint.y, self.x - self.waypoint.x)) - 90
"""Lowers the health of the ship and switches states to a damaged one.
:param damage: damage taken from the collision
:type damage: int
"""
def damage(self, damage):
self.is_damaged = True
# Intended mechanic, any amount of shield will block a huge chunk of damage
# that will exceed the current shield value
if self.shield > 0:
self.shield -= damage
self.shield_recharge = 0
else:
self.hp -= damage
# Indicating that the ship is destroyed
if self.hp <= 0:
self.is_dead = True
"""Recharges the shield of the ship.
"""
def recharge_shield(self):
# Delay to recharge shield
if self.shield_recharge < self.shield_delay:
self.shield_recharge += 1
# Increases shield gradually until it hits the limit
elif self.shield < self.max_shield:
self.shield += self.shield_recharge_rate
# Makes sure it caps to account for rounding errors
if self.shield > self.max_shield:
self.shield = self.max_shield
self.ship_effects[:] = [effect for effect in self.ship_effects if effect.animate()]
"""Sets the ship's waypoint.
:param wp: waypoint to travel to
:type wp: Waypoint
:param fire_at: if the ship will fire at the waypoint rather than the player
:type fire_at: bool
:param move_to: if the ship moves towards the waypoint
:type move_to: bool
"""
def set_waypoint(self, wp=None, fire_at=False, move_to=True):
if wp is not None:
self.waypoint = wp
if fire_at and not move_to:
self._wp_state = FIRE_WAYPOINT
elif not fire_at and move_to:
self._wp_state = MOVE_WAYPOINT
self.wp_done = False
elif fire_at and move_to:
self._wp_state = MOVE_AND_FIRE_WAYPOINT
self.wp_done = False
else:
self._wp_state = NO_WAYPOINT
"""Moves the ship.
"""
def move(self):
self._wp_movement[self._wp_state]()
"""Moves the ship randomly to a generated position on the screen.
"""
def _move(self):
if self.speed > 0:
delta_x = 0
delta_y = 0
x_done = False
if self.x < self.end_x - self.speed:
delta_x += self.speed
elif self.x > self.end_x + self.speed:
delta_x -= self.speed
else:
x_done = True
if self.y < self.end_y - self.speed:
delta_y += self.speed
elif self.y > self.end_y + self.speed:
delta_y -= self.speed
elif x_done:
self.end_x, self.end_y = self._generate_pos()
self.x += delta_x
self.y += delta_y
for effect in self.ship_effects:
effect.x += delta_x
effect.y += delta_y
"""Moves the ship towards its waypoint.
"""
def _move_to_wp(self):
if self.speed > 0:
delta_x = 0
delta_y = 0
x_done = False
if self.x < self.waypoint.x - self.speed:
delta_x += self.speed
elif self.x > self.waypoint.x + self.speed:
delta_x -= self.speed
else:
x_done = True
if self.y < self.waypoint.y - self.speed:
delta_y += self.speed
elif self.y > self.waypoint.y + self.speed:
delta_y -= self.speed
elif x_done:
self.wp_done = True
self.x += delta_x
self.y += delta_y
for effect in self.ship_effects:
effect.x += delta_x
effect.y += delta_y
"""Generates a new position to move into.
:returns: tuple of x and y pos
:rtype: (int, int)
"""
def _generate_pos(self):
x = random.randint(config.ship_size, config.display_width - (2 * config.ship_size))
y = random.randint(0, config.display_height - config.ship_size)
return x, y
"""Spins the ship in circles.
"""
def _spin(self, target):
self.angle += self.rotation_speed
if self.angle > 360:
self.angle -= 360
"""Does nothing unless it specifically has a command for being offscreen.
"""
def offscreen(self):
pass
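
# Illustrative construction sketch (editorial; values are placeholders, and
# ``config`` must be initialized first since the constructor reads
# config.game_fps):
#
#     ship = Ship(x=100, y=200, speed=5, hp=50, shield=20, size=config.ship_size)
#     ship.set_waypoint(wp=some_waypoint, move_to=True)   # MOVE_WAYPOINT state
#     ship.move()
#     ship.recharge_shield()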
| 31.571942 | 103 | 0.557366 | 1,135 | 8,777 | 4.139207 | 0.178855 | 0.0298 | 0.034483 | 0.023414 | 0.296509 | 0.242444 | 0.192422 | 0.192422 | 0.176671 | 0.1639 | 0 | 0.009771 | 0.347043 | 8,777 | 277 | 104 | 31.685921 | 0.809981 | 0.097186 | 0 | 0.267974 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.084967 | false | 0.006536 | 0.026144 | 0 | 0.124183 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2766dfb113345e1fd3a134fe901eaf909ed4549 | 1,827 | py | Python | constants2.py | Utsav-Patel/The-Imitation-Game | 09dfaffdf917c1adfb1d8cd3e09a216b9a014e52 | [
"MIT"
] | null | null | null | constants2.py | Utsav-Patel/The-Imitation-Game | 09dfaffdf917c1adfb1d8cd3e09a216b9a014e52 | [
"MIT"
] | null | null | null | constants2.py | Utsav-Patel/The-Imitation-Game | 09dfaffdf917c1adfb1d8cd3e09a216b9a014e52 | [
"MIT"
] | null | null | null | import os
PROJECT_PATH = os.path.dirname(__file__)
PROJECT_NO = 3
ARCHITECTURE_TYPE = 'dense'
NUM_COLS = 10
NUM_ROWS = 10
TRAINED_MODEL_NUM_ROWS = 10
TRAINED_MODEL_NUM_COLS = 10
INF = 1e9
FILE_PREFIX = "10x10"
FILE_SUFFIX = "new"
TRAIN_DATA_PREFIX = "21_to_30_probability_and_1000_each"
VALIDATION_TEST_DATA_PREFIX = "validation_plus_test"
CHECKPOINT_FILEPATH = os.path.join(PROJECT_PATH, "checkpoints", "project" + str(PROJECT_NO), ARCHITECTURE_TYPE,
                                   FILE_PREFIX, FILE_SUFFIX, FILE_SUFFIX + "-{epoch:04d}.ckpt")
DATA_PATH = os.path.join(PROJECT_PATH, "data", "project" + str(PROJECT_NO), ARCHITECTURE_TYPE, FILE_PREFIX,
                         TRAIN_DATA_PREFIX + ".pkl")
VALIDATION_TEST_PATH = os.path.join(PROJECT_PATH, "data", "project" + str(PROJECT_NO), ARCHITECTURE_TYPE, FILE_PREFIX,
                                    VALIDATION_TEST_DATA_PREFIX + ".pkl")
STATE_OF_THE_ART_MODEL_PROJECT1_DENSE_CHECKPOINT_PATH = os.path.join(PROJECT_PATH, "checkpoints", "project1", "dense",
                                                                     FILE_PREFIX)
STATE_OF_THE_ART_MODEL_PROJECT1_CNN_CHECKPOINT_PATH = os.path.join(PROJECT_PATH, "checkpoints", "project1", "cnn",
                                                                   FILE_PREFIX)
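
# With the values above, CHECKPOINT_FILEPATH resolves to (illustrative; the
# leading directory depends on where this file lives):
#   <PROJECT_PATH>/checkpoints/project3/dense/10x10/new/new-{epoch:04d}.ckpt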
STARTING_POSITION_OF_AGENT = (0, 0)
GOAL_POSITION_OF_AGENT = (NUM_ROWS - 1, NUM_COLS - 1)
X = [-1, 0, 1, 0]
Y = [0, 1, 0, -1]
UNVISITED_NUMBER = 0
BLOCKED_NUMBER = -1
UNBLOCKED_NUMBER = 1
TARGET_CANNOT_BE_REACHED_NUMBER = 4
UNBLOCKED_WEIGHT = 5
NEIGHBOR_WEIGHT = 5
CURRENT_CELL_WEIGHT = 10
FLAT_FALSE_NEGATIVE_RATE = 0.2
HILLY_FALSE_NEGATIVE_RATE = 0.5
FOREST_FALSE_NEGATIVE_RATE = 0.8
ZERO_PROBABILITY = 0.0
ONE_PROBABILITY = 1.0
NUM_ITERATIONS = 1
PROBABILITY_OF_GRID = 0.3
TRAJECTORY_LENGTH_THRESHOLD = 1000
| 29.95082 | 118 | 0.683634 | 247 | 1,827 | 4.619433 | 0.352227 | 0.057844 | 0.043821 | 0.074496 | 0.376862 | 0.376862 | 0.263804 | 0.263804 | 0.224365 | 0.129711 | 0 | 0.046512 | 0.223317 | 1,827 | 60 | 119 | 30.45 | 0.757576 | 0 | 0 | 0.047619 | 0 | 0 | 0.097481 | 0.01862 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.02381 | 0 | 0.02381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e277f4bdbc0bb30de17cd91736ad9678057f1dc7 | 3,919 | py | Python | tests/manage/mcg/conftest.py | vikasmulaje/ocs-ci | 98ce950150e061ba872c62f2d55f9bd395241a6e | [
"MIT"
] | null | null | null | tests/manage/mcg/conftest.py | vikasmulaje/ocs-ci | 98ce950150e061ba872c62f2d55f9bd395241a6e | [
"MIT"
] | null | null | null | tests/manage/mcg/conftest.py | vikasmulaje/ocs-ci | 98ce950150e061ba872c62f2d55f9bd395241a6e | [
"MIT"
] | null | null | null | import logging
import pytest
from ocs_ci.ocs import constants
from ocs_ci.ocs.resources import mcg
from tests import helpers
from tests.helpers import craft_s3_command, create_unique_resource_name
logger = logging.getLogger(__name__)
@pytest.fixture()
def mcg_obj():
    """
    Returns an MCG resource that's connected to the S3 endpoint

    Returns:
        MCG: An MCG resource
    """
    mcg_obj = mcg.MCG()
    return mcg_obj


@pytest.fixture()
def uploaded_objects(request, mcg_obj, awscli_pod):
    """
    Deletes all objects that were created as part of the test

    Args:
        mcg_obj (MCG): An MCG object containing the MCG S3 connection credentials
        awscli_pod (Pod): A pod running the AWSCLI tools

    Returns:
        list: An empty list of objects
    """
    uploaded_objects_paths = []

    def object_cleanup():
        for uploaded_filename in uploaded_objects_paths:
            logger.info(f'Deleting object {uploaded_filename}')
            awscli_pod.exec_cmd_on_pod(
                command=craft_s3_command(mcg_obj, "rm " + uploaded_filename),
                secrets=[mcg_obj.access_key_id, mcg_obj.access_key, mcg_obj.endpoint]
            )

    request.addfinalizer(object_cleanup)
    return uploaded_objects_paths
@pytest.fixture()
def bucket_factory(request, mcg_obj):
    """
    Create a bucket factory. Calling this fixture creates a new bucket(s).
    For a custom amount, provide the 'amount' parameter.

    Args:
        mcg_obj (MCG): An MCG object containing the MCG S3 connection credentials
    """
    created_bucket_names = []

    def _create_buckets(amount=1):
        """
        Creates and deletes all buckets that were created as part of the test

        Args:
            amount (int): The amount of buckets to create

        Returns:
            list: A list of s3.Bucket objects, containing all the created buckets
        """
        for i in range(amount):
            bucket_name = create_unique_resource_name(
                resource_description='bucket', resource_type='s3'
            )
            logger.info(f'Creating bucket: {bucket_name}')
            created_bucket_names.append(
                mcg_obj.s3_create_bucket(bucketname=bucket_name)
            )
        return created_bucket_names

    def bucket_cleanup():
        all_existing_buckets = mcg_obj.s3_list_all_bucket_names()
        for bucket in created_bucket_names:
            if bucket.name in all_existing_buckets:
                logger.info(f'Deleting bucket {bucket.name}')
                bucket.object_versions.delete()
                mcg_obj.s3_delete_bucket(bucket)
                logger.info(
                    f"Verifying whether bucket: {bucket.name} exists after deletion"
                )
                assert not mcg_obj.s3_verify_bucket_exists(bucket)

    request.addfinalizer(bucket_cleanup)
    return _create_buckets
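
# Illustrative use inside a test (editorial; hypothetical test body):
#
#     def test_bucket_creation(bucket_factory):
#         buckets = bucket_factory(amount=3)
#         assert len(buckets) == 3
#         # cleanup happens automatically via the registered finalizer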
@pytest.fixture()
def created_pods(request):
    """
    Deletes all pods that were created as part of the test

    Returns:
        list: An empty list of pods
    """
    created_pods_objects = []

    def pod_cleanup():
        for pod in created_pods_objects:
            logger.info(f'Deleting pod {pod.name}')
            pod.delete()

    request.addfinalizer(pod_cleanup)
    return created_pods_objects


@pytest.fixture()
def awscli_pod(mcg_obj, created_pods):
    """
    Creates a new AWSCLI pod for relaying commands

    Args:
        created_pods (Fixture/list): A fixture used to keep track of created pods
            and clean them up in the teardown

    Returns:
        pod: A pod running the AWS CLI
    """
    awscli_pod_obj = helpers.create_pod(namespace='noobaa',
                                        pod_dict_path=constants.AWSCLI_POD_YAML)
    helpers.wait_for_resource_state(awscli_pod_obj, constants.STATUS_RUNNING)
    created_pods.append(awscli_pod_obj)
    return awscli_pod_obj
| 28.816176 | 85 | 0.655269 | 499 | 3,919 | 4.907816 | 0.252505 | 0.0392 | 0.032666 | 0.020825 | 0.123316 | 0.109432 | 0.089833 | 0.089833 | 0.077583 | 0.05145 | 0 | 0.004215 | 0.273539 | 3,919 | 135 | 86 | 29.02963 | 0.855989 | 0.272008 | 0 | 0.076923 | 0 | 0 | 0.073585 | 0 | 0 | 0 | 0 | 0 | 0.015385 | 1 | 0.138462 | false | 0 | 0.092308 | 0 | 0.323077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e27a9e3d318c52ddc407fa0d3a29622c0ebd8768 | 488 | py | Python | odoo-13.0/doc/_extensions/autojsdoc/ext/__init__.py | VaibhavBhujade/Blockchain-ERP-interoperability | b5190a037fb6615386f7cbad024d51b0abd4ba03 | [
"MIT"
] | null | null | null | odoo-13.0/doc/_extensions/autojsdoc/ext/__init__.py | VaibhavBhujade/Blockchain-ERP-interoperability | b5190a037fb6615386f7cbad024d51b0abd4ba03 | [
"MIT"
] | null | null | null | odoo-13.0/doc/_extensions/autojsdoc/ext/__init__.py | VaibhavBhujade/Blockchain-ERP-interoperability | b5190a037fb6615386f7cbad024d51b0abd4ba03 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from .directives import automodule_bound, autodirective_bound
from .extractor import _get_roots
def setup(app):
    app.add_config_value('js_roots', _get_roots, 'env')
    modules = {}
    app.add_directive_to_domain('js', 'automodule', automodule_bound(app, modules))
    autodirective = autodirective_bound(app, modules)
    for n in ['autonamespace', 'automixin', 'autoclass', 'autofunction']:
        app.add_directive_to_domain('js', n, autodirective)
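
# Typical activation sketch (editorial; the exact package path depends on how
# this extension directory is put on sys.path in conf.py):
#
#     extensions = ['autojsdoc.ext']
#     js_roots = ['addons/web/static/src/js']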
| 34.857143 | 82 | 0.715164 | 59 | 488 | 5.627119 | 0.525424 | 0.054217 | 0.090361 | 0.10241 | 0.150602 | 0.150602 | 0 | 0 | 0 | 0 | 0 | 0.002421 | 0.153689 | 488 | 13 | 83 | 37.538462 | 0.801453 | 0.043033 | 0 | 0 | 0 | 0 | 0.146237 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.2 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e27b9ff878d5e676787f0419abe25a6716e3f34c | 1,867 | py | Python | mjolnir/test/kafka/test_bulk_daemon.py | kdhingra307/ncm | 07557138897d4266ce413c9f4fe033c24c6df065 | [
"MIT"
] | null | null | null | mjolnir/test/kafka/test_bulk_daemon.py | kdhingra307/ncm | 07557138897d4266ce413c9f4fe033c24c6df065 | [
"MIT"
] | null | null | null | mjolnir/test/kafka/test_bulk_daemon.py | kdhingra307/ncm | 07557138897d4266ce413c9f4fe033c24c6df065 | [
"MIT"
] | null | null | null | from mjolnir.kafka import bulk_daemon
import pytest
def _mock_bulk_response(ok, action, status, result):
    return ok, {
        action: {
            'status': status,
            'result': result,
        }
    }


def _update_success(result, n=1):
    return [_mock_bulk_response(True, 'update', 200, result) for _ in range(n)]


def _update_missing(n=1):
    return [_mock_bulk_response(False, 'update', 404, '') for _ in range(n)]
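
# Shape of a single mocked elasticsearch bulk item (editorial illustration):
#   _update_success('noop') == [(True, {'update': {'status': 200, 'result': 'noop'}})]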
@pytest.mark.parametrize('expected,records', [
    ({}, []),
    ({bulk_daemon.Metric.ACTION_RESULTS['updated']: 1}, _update_success('updated')),
    ({bulk_daemon.Metric.ACTION_RESULTS['created']: 2}, _update_success('created', 2)),
    ({bulk_daemon.Metric.ACTION_RESULTS['noop']: 1}, _update_success('noop')),
    ({bulk_daemon.Metric.OK_UNKNOWN: 3}, _update_success('otherthing', 3)),
    ({bulk_daemon.Metric.MISSING: 1}, _update_missing()),
    ({bulk_daemon.Metric.FAILED: 1}, [_mock_bulk_response(False, 'update', 500, '')]),
    (
        {
            bulk_daemon.Metric.ACTION_RESULTS['updated']: 4,
            bulk_daemon.Metric.ACTION_RESULTS['noop']: 2,
            bulk_daemon.Metric.MISSING: 14
        },
        _update_success('updated', 4) + _update_success('noop', 2) + _update_missing(14)
    )
])
def test_stream_to_es_stats_collection(mocker, expected, records):
    # Note: the original signature also took a `mock` argument, which would
    # have required a fixture of that name; pytest-mock only provides
    # `mocker`, so the parameter is dropped and `mock` is created here.
    mock = mocker.patch('mjolnir.kafka.bulk_daemon.streaming_bulk')
    mock.return_value = records
    for metric, _ in expected.items():
        # almost certainly fragile
        metric._value.set(0)
    # Dupe the records or exceptions will report empty dicts
    records = [(ok, dict(value)) for ok, value in records]
    bulk_daemon.stream_to_es(None, records)
    for metric, expected_value in expected.items():
        # almost certainly fragile
        data = metric._samples()[0][2]
        assert data == expected_value
| 35.903846 | 88 | 0.656668 | 231 | 1,867 | 5.021645 | 0.320346 | 0.103448 | 0.124138 | 0.094828 | 0.218103 | 0.160345 | 0 | 0 | 0 | 0 | 0 | 0.020121 | 0.201393 | 1,867 | 51 | 89 | 36.607843 | 0.757881 | 0.05624 | 0 | 0 | 0 | 0 | 0.0876 | 0.022753 | 0 | 0 | 0 | 0 | 0.025 | 1 | 0.1 | false | 0 | 0.05 | 0.075 | 0.225 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e281094af97245f121c74904895b108ebd981d2a | 2,248 | py | Python | tests/restart/rigid_vector.py | cselab/uDeviceX | 2ad5e9dd9f118e3998b291cbfc35ee91205bbef8 | [
"MIT"
] | 2 | 2018-09-19T09:53:35.000Z | 2018-10-08T16:37:31.000Z | tests/restart/rigid_vector.py | dimaleks/uDeviceX | 2ad5e9dd9f118e3998b291cbfc35ee91205bbef8 | [
"MIT"
] | 20 | 2018-09-19T10:05:55.000Z | 2018-10-01T14:50:18.000Z | tests/restart/rigid_vector.py | dimaleks/uDeviceX | 2ad5e9dd9f118e3998b291cbfc35ee91205bbef8 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import mirheo as mir
import numpy as np
import argparse
import trimesh
parser = argparse.ArgumentParser()
parser.add_argument("--restart", action='store_true', default=False)
parser.add_argument("--ranks", type=int, nargs=3)
args = parser.parse_args()
ranks = args.ranks
domain = (16, 16, 16)
dt = 0.0
u = mir.Mirheo(ranks, domain, debug_level=9,
               log_filename='log', no_splash=True,
               checkpoint_every=(0 if args.restart else 5))

mesh = trimesh.creation.icosphere(subdivisions=1, radius=0.1)

coords = [[-0.01, 0., 0.],
          [0.01, 0., 0.],
          [0., -0.01, 0.],
          [0., 0.01, 0.],
          [0., 0., -0.01],
          [0., 0., 0.01]]
udx_mesh = mir.ParticleVectors.Mesh(mesh.vertices.tolist(), mesh.faces.tolist())
pv = mir.ParticleVectors.RigidObjectVector("pv", mass=1.0, inertia=[0.1, 0.1, 0.1], object_size=len(coords), mesh=udx_mesh)
nobjs = 10
pos = [ np.array(domain) * t for t in np.linspace(0, 1.0, nobjs) ]
Q = [ np.array([1.0, 0., 0., 0.]) for i in range(nobjs) ]
pos_q = np.concatenate((pos, Q), axis=1)
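# Each row of pos_q is [x, y, z, qw, qx, qy, qz]: a center-of-mass position
# followed by an identity orientation quaternion, which is the per-object
# layout the Rigid initial conditions below consume.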
ic = mir.InitialConditions.Rigid(pos_q.tolist(), coords)
u.registerParticleVector(pv, ic)
# force correct oldMotions for correct ovStats
vv = mir.Integrators.RigidVelocityVerlet("vv")
u.registerIntegrator(vv)
u.setIntegrator(vv, pv)
if args.restart:
u.registerPlugins(mir.Plugins.createDumpObjectStats("objStats", ov=pv, dump_every=5, filename="stats/pv.csv"))
u.run(7, dt=dt)
# TEST: restart.rigid_vector
# cd restart
# rm -rf restart stats stats.rigid*txt
# mir.run --runargs "-n 1" ./rigid_vector.py --ranks 1 1 1
# mir.run --runargs "-n 2" ./rigid_vector.py --ranks 1 1 1 --restart
# mir.post ../tools/dump_csv.py stats/pv.csv objId time comx comy comz qw qx qy qz vx vy vz wx wy wz fx fy fz Tx Ty Tz | LC_ALL=en_US.utf8 sort > stats.rigid.out.txt
# TEST: restart.rigid_vector.mpi
# cd restart
# rm -rf restart stats stats.rigid*txt
# mir.run --runargs "-n 2" ./rigid_vector.py --ranks 1 1 2
# mir.run --runargs "-n 4" ./rigid_vector.py --ranks 1 1 2 --restart
# mir.post ../tools/dump_csv.py stats/pv.csv objId time comx comy comz qw qx qy qz vx vy vz wx wy wz fx fy fz Tx Ty Tz | LC_ALL=en_US.utf8 sort > stats.rigid.out.txt
| 33.552239 | 165 | 0.668149 | 386 | 2,248 | 3.823834 | 0.373057 | 0.02168 | 0.018293 | 0.016938 | 0.326558 | 0.326558 | 0.326558 | 0.296748 | 0.296748 | 0.296748 | 0 | 0.045113 | 0.171708 | 2,248 | 66 | 166 | 34.060606 | 0.747583 | 0.353648 | 0 | 0 | 0 | 0 | 0.036831 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.114286 | 0 | 0.114286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e286824d619b3fb3392a686ffb1f637ffe9558e5 | 226 | py | Python | app/user/urls.py | bondeveloper/maischool | 16bf2afe99d26caa067b7912e88839639cf2191e | [
"MIT"
] | null | null | null | app/user/urls.py | bondeveloper/maischool | 16bf2afe99d26caa067b7912e88839639cf2191e | [
"MIT"
] | null | null | null | app/user/urls.py | bondeveloper/maischool | 16bf2afe99d26caa067b7912e88839639cf2191e | [
"MIT"
] | null | null | null | from django.urls import path
from user import views
app_name = 'user'
urlpatterns = [
path('', views.ListUserView.as_view(), name="list"),
path('detail/<int:pk>', views.RetrieveUserView.as_view(), name="detail"),
]
| 20.545455 | 77 | 0.685841 | 30 | 226 | 5.066667 | 0.6 | 0.078947 | 0.131579 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141593 | 226 | 10 | 78 | 22.6 | 0.783505 | 0 | 0 | 0 | 0 | 0 | 0.128319 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2869364b9dcccc3530b23aa12c8b83b04b6e9df | 11,588 | py | Python | squeezenet/develop_squeezenet.py | govindnh4cl/squeezenet | 4651a21d9ba274d49c9a8a8dcf5d98c16eff1510 | [
"MIT"
] | null | null | null | squeezenet/develop_squeezenet.py | govindnh4cl/squeezenet | 4651a21d9ba274d49c9a8a8dcf5d98c16eff1510 | [
"MIT"
] | 75 | 2020-03-14T17:07:09.000Z | 2020-05-17T01:03:29.000Z | squeezenet/develop_squeezenet.py | govindnh4cl/squeezenet | 4651a21d9ba274d49c9a8a8dcf5d98c16eff1510 | [
"MIT"
] | null | null | null | import time
import numpy as np
import tensorflow as tf
from my_logger import get_logger
from squeezenet.config import get_config
from squeezenet.inputs import get_input_pipeline
from squeezenet.networks.squeezenet import Squeezenet_Imagenet
from squeezenet import eval
from squeezenet.checkpoint_handler import CheckpointHandler
from squeezenet.utils import load_saved_model
from squeezenet.optimizer import CustomLearningRateScheduler
class DevelopSqueezenet:
def __init__(self, args):
self.cfg = get_config(args) # Get dictionary with configuration parameters
self.logger = get_logger()
self._pipeline = dict()
self.loss_fn = tf.losses.categorical_crossentropy # Loss function
self.net = None # Main network instance
self.opt = None # Optimizer instance
self._lr_scheduler = CustomLearningRateScheduler()
self._ckpt_hdl = CheckpointHandler(self.cfg)
return
def load_checkpointables(self, ckpt2load):
"""
Create entities that needs to be stored by checkpoints (if enabled).
Also load the stored entity value from stored checkpoint and fill-into memory.
:param ckpt2load: An identifier to represent which checkpoint to load from:
'none': Do not load from a checkpoint
'latest': Latest checkpoint
<int>: checkpoint integer id
:return: Checkpoint integer index. -1 if no checkpoint was loaded.
"""
self.net = self._set_network_for_training()
# Optimizer. Here, learning rate doesn't matter since we would overwrite it during training anyway
self.opt = tf.keras.optimizers.SGD()
if ckpt2load == 'none':
self.logger.info('Not looking for a checkpoint.')
ckpt_id = -1
else:
ckpt_id = self._ckpt_hdl.load_checkpoint({'net': self.net, 'opt': self.opt}, ckpt2load)
return ckpt_id
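    # A usage sketch (not part of the original flow): resuming state from a specific
    # checkpoint before further training might look like
    #   dev = DevelopSqueezenet(args)
    #   last_epoch_idx = dev.load_checkpointables(3)  # or 'latest' / 'none'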
@tf.function # For faster training speed
def _train_step(self, batch_train):
"""
:param batch_train:
:return: loss_batch: A tensor scalar
"""
batch_x, batch_y = batch_train[0], batch_train[1] # Get current batch samples
with tf.GradientTape() as tape:
batch_y_pred = self.net.call(batch_x, training=True) # Run prediction on batch
loss_batch = tf.reduce_mean(self.loss_fn(batch_y, batch_y_pred)) # compute loss
grads = tape.gradient(loss_batch, self.net.trainable_variables) # compute gradient
self.opt.apply_gradients(zip(grads, self.net.trainable_variables)) # Update weights
return loss_batch
def _train_tf(self, train_dataset, val_dataset):
"""
:param train_dataset:
:param val_dataset:
:return:
"""
self.logger.info('Training with Tensorflow API')
# Create network. Also load values from checkpoint if checkpoints are enabled.
# last_epoch_idx is integer index. In case of valid checkpoint loading, this is the epoch index
        # of the checkpoint's epoch. Otherwise it is -1
last_epoch_idx = self.load_checkpointables(self.cfg.train.checkpoint_id)
if self.cfg.train.enable_chekpoints:
checkpoint_verified = False
else:
checkpoint_verified = True
# TODO: This is no longer required
self.net.training = True # Enable training mode
'''Main Loop'''
last_sleep_time = time.time()
last_lr = np.nan # Learning rate of last epoch
batch_counter = tf.zeros(1, dtype=tf.int64) # Overall batch-counter to serve as step for Tensorboard
# Loop over epochs
epoch_idx_start = last_epoch_idx + 1
epoch_idx_end = epoch_idx_start + self.cfg.train.num_epochs
for epoch_idx in range(epoch_idx_start, epoch_idx_end):
start_time = time.time()
# Running average loss per sample during this epoch. Needed for printing loss during training
running_loss = tf.keras.metrics.Mean()
# Setup optimizer learning rate
new_lr = self._lr_scheduler.get_learning_rate(epoch_idx)
if new_lr != last_lr:
self.logger.info('Using learning rate: {:f}'.format(new_lr))
self.opt.learning_rate.assign(new_lr) # Force set a custom learning rate into optimizer
last_lr = new_lr
# Loop over batches in the epoch
for batch_idx, train_batch in enumerate(train_dataset):
tf.summary.experimental.set_step(batch_counter) # Set step. Needed for summaries in Tensorboard
batch_loss = self._train_step(train_batch) # Tensor scalar loss for this batch
running_loss.update_state(batch_loss) # Update this batch's loss to
tf.summary.scalar('Train loss', batch_loss) # Log to tensorboard
tf.summary.scalar('Train running-loss', running_loss.result()) # Log to tensorboard
if not checkpoint_verified: # One time verification of whether checkpoint was restored properly
self._ckpt_hdl.verify_checkpoint_restore()
checkpoint_verified = True
# Print status after each batch
print('\rEpoch {:3d} Batch: {:d} Training Loss {:f}'.
format(epoch_idx, batch_idx, running_loss.result()), end='')
batch_counter += 1 # Increment overall-batch-counter
# Sleep intermittently to avoid burning down my machine
if self.cfg.train.enable_intermittent_sleep and \
time.time() - last_sleep_time > self.cfg.train.sleep_interval:
self.logger.info('Sleeping for {:d} seconds.'.format(self.cfg.train.sleep_duration))
time.sleep(self.cfg.train.sleep_duration)
last_sleep_time = time.time() # Reset
self.logger.info('Epoch {:3d} Training Loss {:f} Time {:.1f}s'.format(
epoch_idx,
running_loss.result(),
time.time() - start_time))
# Save checkpoint
if self.cfg.train.enable_chekpoints:
self._ckpt_hdl.save_checkpoint()
# TODO: time validation phase
            # TODO: Should we cover it with tf.no_gradient() or tf.stop_gradient()?
# Evaluate performance on validation set
if self.cfg.validation.enable is True and epoch_idx % self.cfg.validation.validation_interval == 0:
y_pred = np.nan * np.ones(shape=(len(self._pipeline['val']), self.cfg.dataset.num_classes), dtype=np.float32)
y_true = np.nan * np.ones(shape=(len(self._pipeline['val']), self.cfg.dataset.num_classes), dtype=np.float32)
# Loop over batches in the epoch
idx = 0 # Index of samples processed so far
for batch_idx, val_batch in enumerate(val_dataset):
batch_x, batch_y = val_batch[0], val_batch[1] # Get current batch samples
batch_y_pred = self.net.call(batch_x, training=False)
samples_in_batch = len(batch_y)
y_true[idx: idx + samples_in_batch] = batch_y
y_pred[idx: idx + samples_in_batch] = batch_y_pred
idx += samples_in_batch
val_loss = tf.reduce_mean(self.loss_fn(y_true, y_pred))
val_top1_acc, val_top5_acc = eval.get_accuracy(y_true, y_pred)
tf.summary.scalar('Validation loss', val_loss) # Log to tensorboard
tf.summary.scalar('Validation top-1 accuracy', val_top1_acc) # Log to tensorboard
tf.summary.scalar('Validation top-5 accuracy', val_top5_acc) # Log to tensorboard
self.logger.info('Epoch {:3d} Validation Loss: {:f} Accuracy Top-1: {:.1f}% Top-5: {:.1f}%'
.format(epoch_idx, val_loss, val_top1_acc * 100, val_top5_acc * 100))
return
def _set_network_for_training(self):
"""
Network factory
:return:
"""
if self.cfg.dataset.dataset == 'imagenet':
net = Squeezenet_Imagenet(self.cfg)
else:
assert False
return net
def _run_train_mode(self):
"""
        Performs training of the SqueezeNet model
:return: None
"""
'''Inputs'''
self._pipeline['train'] = get_input_pipeline(self.cfg, 'train', 'train')
train_dataset = self._pipeline['train'].get_dataset()
if self.cfg.validation.enable:
self._pipeline['val'] = get_input_pipeline(self.cfg, 'inference', 'val')
val_dataset = self._pipeline['val'].get_dataset()
else:
self._pipeline['val'] = None
val_dataset = None
if self.cfg.train.enable_summary is True:
train_summary_writer = tf.summary.create_file_writer(self.cfg.directories.dir_tb)
else:
train_summary_writer = tf.summary.create_noop_writer()
with train_summary_writer.as_default():
tf.summary.experimental.set_step(0) # Set step for summaries
self._train_tf(train_dataset, val_dataset)
self.logger.info('Training complete')
return
def _run_eval_mode(self):
"""
Evaluates the model on dataset
:return: None
"""
if self.cfg.eval.load_from == 'checkpoint':
self.load_checkpointables(self.cfg.eval.checkpoint_id)
else: # Load from a saved model
self.net = load_saved_model(self.cfg.directories.dir_model)
self.logger.info('Running evaluation on dataset portion: {:s}'.format(self.cfg.eval.portion))
self._pipeline[self.cfg.eval.portion] = get_input_pipeline(self.cfg, 'inference', self.cfg.eval.portion)
dataset = self._pipeline[self.cfg.eval.portion].get_dataset()
y_pred = np.nan * np.ones(shape=(len(self._pipeline[self.cfg.eval.portion]), self.cfg.dataset.num_classes), dtype=np.float32)
y_true = np.nan * np.ones(shape=(len(self._pipeline[self.cfg.eval.portion]), self.cfg.dataset.num_classes), dtype=np.float32)
# Loop over batches in the epoch
idx = 0 # Index of samples processed so far
for batch_idx, batch in enumerate(dataset):
batch_x, batch_y = batch[0], batch[1] # Get current batch samples
batch_y_pred = self.net.call(batch_x, False)
samples_in_batch = len(batch_y)
y_true[idx: idx + samples_in_batch] = batch_y
y_pred[idx: idx + samples_in_batch] = batch_y_pred
idx += samples_in_batch
# Print status after each batch
print('\rEvaluating batch: {:d} '.format(batch_idx), end='')
print('') # Pretty prints
loss = tf.reduce_mean(self.loss_fn(y_true, y_pred))
top1_acc, top5_acc = eval.get_accuracy(y_true, y_pred)
self.logger.info('Loss: {:f} Accuracy Top-1: {:.2f}% Top-5: {:.2f}%'
.format(loss, top1_acc * 100, top5_acc * 100))
return
def run(self):
"""
Main entry point of DevelopSqueezenet class
:return:
"""
with tf.device(self.cfg.hardware.device): # This does explicit device selection: cpu or gpu
if self.cfg.misc.mode == 'train':
self._run_train_mode()
elif self.cfg.misc.mode == 'eval':
self._run_eval_mode()
| 43.400749 | 133 | 0.624266 | 1,464 | 11,588 | 4.737022 | 0.201503 | 0.036337 | 0.017304 | 0.014708 | 0.289546 | 0.212257 | 0.169863 | 0.154722 | 0.142033 | 0.126604 | 0 | 0.007951 | 0.283656 | 11,588 | 266 | 134 | 43.56391 | 0.827491 | 0.210994 | 0 | 0.164557 | 0 | 0.006329 | 0.066735 | 0 | 0 | 0 | 0 | 0.003759 | 0.006329 | 1 | 0.050633 | false | 0 | 0.06962 | 0 | 0.170886 | 0.018987 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2877a645bbb7c7bad3cbc69941b6365e83c25ff | 1,151 | py | Python | Code/RUN_IntaRNA.py | nrohani/SARS-CoV-2 | 978320f7b644c65198d632dba8a8ebe8c4df542d | [
"MIT"
] | 1 | 2020-08-25T11:58:01.000Z | 2020-08-25T11:58:01.000Z | Code/RUN_IntaRNA.py | nrohani/SARS-CoV-2 | 978320f7b644c65198d632dba8a8ebe8c4df542d | [
"MIT"
] | null | null | null | Code/RUN_IntaRNA.py | nrohani/SARS-CoV-2 | 978320f7b644c65198d632dba8a8ebe8c4df542d | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Jun 5 17:14:12 2020
@author: Narjes Rohani
"""
# Requires the pandas, biopython and IntaRNA packages (subprocess is in the standard library)
import os
import subprocess
import pandas as pd
# Load the file of miRNA sequences whose MFE of binding to SARS-CoV-2 we want to calculate
mRNAMicroRNA=pd.read_csv('miRNAsListFile.csv')
#Load UCS file
Xs=pd.read_csv('UCR.txt').seq
# Drop duplicate miRNAs (keyed by miRNA name)
mRNAMicroRNA.drop_duplicates(subset='microRNA_name', keep='first', inplace=True)
Ylist=mRNAMicroRNA['microRNA_name']
scoresFinal=[]
names=[]
covSeq=[]
mirs=[]
for Y,name in zip(mRNAMicroRNA['miRNA_seq'],mRNAMicroRNA['microRNA_name']):
for X in Xs:
#Call IntaRNA for calculating MFE
command="IntaRNA -t"+str(X)+' -q '+str(Y)
output = subprocess.check_output(command, shell=True)
print(output)
scoresFinal.append(output)
names.append(name)
covSeq.append(X)
mirs.append(Y)
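# Note (a sketch, not part of the original script): 'output' holds IntaRNA's raw
# stdout as bytes. Assuming the default text output, which reports a line such as
# "interaction energy = -8.93 kcal/mol", the numeric MFE could be extracted with:
#   import re
#   m = re.search(rb'energy\s*=\s*(-?\d+\.?\d*)', output)
#   mfe = float(m.group(1)) if m else None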
Data=pd.DataFrame(columns=['Name','energy','UCR','mirRNAseq'])
Data['Name']=names
Data['UCR']=covSeq
Data['mirRNAseq']=mirs
Data['energy']=scoresFinal
Data.to_csv('CalculatedEnergy.csv',index=False)
print(Data.head()) | 28.775 | 80 | 0.7298 | 167 | 1,151 | 4.976048 | 0.568862 | 0.043321 | 0.021661 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013766 | 0.116421 | 1,151 | 40 | 81 | 28.775 | 0.803343 | 0.275413 | 0 | 0 | 0 | 0 | 0.190012 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0.074074 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2886fcfd835808de1acab30f0aa321469626e64 | 3,229 | py | Python | salt/salt.py | juhanurmi/cryptography | 362bc1cc146d291f9864f9788de904a09649d39f | [
"MIT"
] | null | null | null | salt/salt.py | juhanurmi/cryptography | 362bc1cc146d291f9864f9788de904a09649d39f | [
"MIT"
] | null | null | null | salt/salt.py | juhanurmi/cryptography | 362bc1cc146d291f9864f9788de904a09649d39f | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
'''
python3 salt.py
'''
import time
import random
import hashlib
def all_ip_addresses():
    ''' Yield IPv4 addresses from 11.0.0.0 through 255.255.255.255, one by one '''
# IPv4 uses a 32-bit address space: 4,294,967,296 (2**32) unique addresses
for part1 in range(11, 256): # 11-255
for part2 in range(0, 256): # 0-255
for part3 in range(0, 256): # 0-255
for part4 in range(0, 256): # 0-255
yield '%s.%s.%s.%s' % (part1, part2, part3, part4)
def random_string(size=128):
''' Return a random string, default size is 128 '''
    # Letters (upper and lower cases) + digits. Total of 62 options.
chars = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
return ''.join(random.choice(chars) for index in range(size))
def sha1_value(text):
''' Calculate hexdigest hash value '''
return hashlib.sha1(str(text).encode('utf-8')).hexdigest()
def main():
''' Main function '''
userdata_json = {'visit1': {'ip': '11.20.242.34', 'time': '2021-12-12', 'browser': 'firefox'},
'visit2': {'ip': '180.130.113.100', 'time': '2021-12-15', 'browser': 'chrome'},
'visit3': {'ip': '180.130.113.100', 'time': '2021-12-16', 'browser': 'chrome'},
'visit4': {'ip': '11.20.242.34', 'time': '2021-12-20', 'browser': 'firefox'},
'visit5': {'ip': '11.20.242.34', 'time': '2021-12-21', 'browser': 'firefox'},
'visit6': {'ip': '130.236.201.188', 'time': '2021-12-23', 'browser': 'safari'},
'visit7': {'ip': '175.228.153.155', 'time': '2021-12-23', 'browser': 'edge'}}
test_ip = userdata_json['visit1']['ip']
print('Example hashing %s' % test_ip)
plainhash = sha1_value(test_ip)
print('Hash without salt: %s\n' % plainhash)
time.sleep(10)
print('Bruteforce the IP address from the plain hash...')
starttime = time.time() # Start time
test_count = 0
for index, ip_address in enumerate(all_ip_addresses()):
testhash = sha1_value(ip_address)
print('%s \t %s' % (ip_address, testhash))
if plainhash == testhash:
print('Found IP address: %s' % ip_address)
test_count = index + 1
break
endtime = time.time() - starttime
print('Found match in %d seconds and %d tests/second.\n' % (endtime, int(test_count/endtime)))
print('Create a salt+hash version of the database to hide original IP addresses.')
# Let's use a random salt for each different IP address
# Use the same salt for the same IP
salt_map = {} # Structure to keep one salt per each unique IP address
for key, userdata in userdata_json.items():
ip_address = userdata['ip']
if ip_address not in salt_map: # If no salt for the IP
salt_map[ip_address] = random_string() # Add a new salt for the IP
random_salt = salt_map[ip_address]
hashwithsalt = sha1_value(ip_address + random_salt)
# We only want unique ID readable ID
userdata_json[key]['ip'] = hashwithsalt[0:10] # Take 10 first chars
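        # Note (an observation, not in the original): 10 hex chars keep only 40 bits
        # of the digest, so collisions become plausible at very large user counts;
        # keep the full hexdigest if globally unique IDs are required.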
for key, value in userdata_json.items():
print(key, value)
if __name__ == '__main__':
main()
| 45.478873 | 100 | 0.600186 | 448 | 3,229 | 4.227679 | 0.372768 | 0.061774 | 0.036959 | 0.017423 | 0.105597 | 0.085533 | 0.077614 | 0.058606 | 0 | 0 | 0 | 0.095806 | 0.246826 | 3,229 | 70 | 101 | 46.128571 | 0.682977 | 0.181171 | 0 | 0 | 0 | 0 | 0.26097 | 0.023865 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.057692 | 0 | 0.173077 | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e28a9ea7765fcb38da42e9c9b798b40ab9662d1a | 2,025 | py | Python | demo/load_model/load_and_predict.py | Saumitra-Shukla/keras-bert | e60785d31129199ec0f922159e76bb63db330e00 | [
"MIT"
] | 2,465 | 2018-10-20T14:49:52.000Z | 2022-03-31T02:20:09.000Z | demo/load_model/load_and_predict.py | VictorMadu/keras-bert | 26bdfe3c36e77fa0524902f31263a920ccd62efb | [
"MIT"
] | 209 | 2018-11-01T09:03:39.000Z | 2022-03-19T09:07:47.000Z | demo/load_model/load_and_predict.py | VictorMadu/keras-bert | 26bdfe3c36e77fa0524902f31263a920ccd62efb | [
"MIT"
] | 566 | 2018-10-23T09:02:24.000Z | 2022-03-31T15:40:37.000Z | import sys
import numpy as np
from keras_bert import load_vocabulary, load_trained_model_from_checkpoint, Tokenizer, get_checkpoint_paths
print('This demo demonstrates how to load the pre-trained model and check whether the two sentences are continuous')
if len(sys.argv) == 2:
model_path = sys.argv[1]
else:
from keras_bert.datasets import get_pretrained, PretrainedList
model_path = get_pretrained(PretrainedList.chinese_base)
paths = get_checkpoint_paths(model_path)
model = load_trained_model_from_checkpoint(paths.config, paths.checkpoint, training=True, seq_len=None)
model.summary(line_length=120)
token_dict = load_vocabulary(paths.vocab)
token_dict_inv = {v: k for k, v in token_dict.items()}
tokenizer = Tokenizer(token_dict)
text = '数学是利用符号语言研究数量、结构、变化以及空间等概念的一门学科'
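# (The sentence translates to: "Mathematics is the discipline that studies concepts
#  such as quantity, structure, change and space using symbolic language.")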
tokens = tokenizer.tokenize(text)
tokens[1] = tokens[2] = '[MASK]'
print('Tokens:', tokens)
indices = np.array([[token_dict[token] for token in tokens]])
segments = np.array([[0] * len(tokens)])
masks = np.array([[0, 1, 1] + [0] * (len(tokens) - 3)])
predicts = model.predict([indices, segments, masks])[0].argmax(axis=-1).tolist()
print('Fill with: ', list(map(lambda x: token_dict_inv[x], predicts[0][1:3])))
sentence_1 = '数学是利用符号语言研究數量、结构、变化以及空间等概念的一門学科。'
sentence_2 = '从某种角度看屬於形式科學的一種。'
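# (sentence_1 repeats the definition of mathematics above; sentence_2 means "From a
#  certain point of view, it belongs to the formal sciences." It is a natural
#  follow-up, so the next-sentence head should not flag it as random.)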
print('Tokens:', tokenizer.tokenize(first=sentence_1, second=sentence_2))
indices, segments = tokenizer.encode(first=sentence_1, second=sentence_2)
masks = np.array([[0] * len(indices)])
predicts = model.predict([np.array([indices]), np.array([segments]), masks])[1]
print('%s is random next: ' % sentence_2, bool(np.argmax(predicts, axis=-1)[0]))
sentence_2 = '任何一个希尔伯特空间都有一族标准正交基。'
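# ("Every Hilbert space has an orthonormal basis." This is unrelated to sentence_1,
#  so the model is expected to flag it as a random next sentence.)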
print('Tokens:', tokenizer.tokenize(first=sentence_1, second=sentence_2))
indices, segments = tokenizer.encode(first=sentence_1, second=sentence_2)
masks = np.array([[0] * len(indices)])
predicts = model.predict([np.array([indices]), np.array([segments]), masks])[1]
print('%s is random next: ' % sentence_2, bool(np.argmax(predicts, axis=-1)[0]))
| 39.705882 | 116 | 0.744198 | 295 | 2,025 | 4.955932 | 0.332203 | 0.043092 | 0.021888 | 0.05472 | 0.378933 | 0.337893 | 0.337893 | 0.337893 | 0.337893 | 0.337893 | 0 | 0.021452 | 0.102222 | 2,025 | 50 | 117 | 40.5 | 0.782728 | 0 | 0 | 0.27027 | 0 | 0 | 0.139259 | 0.031111 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.108108 | 0 | 0.108108 | 0.189189 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e28b7a15f09721739923d943b0fb11bb1a778844 | 3,052 | py | Python | archive/python/howto-logging.py | ajrichards/bayesian-examples | fbd87c6f1613ea516408e9ebc3c9eff1248246e4 | [
"BSD-3-Clause"
] | 2 | 2016-01-27T08:51:23.000Z | 2017-04-17T02:21:34.000Z | archive/python/howto-logging.py | ajrichards/notebook | fbd87c6f1613ea516408e9ebc3c9eff1248246e4 | [
"BSD-3-Clause"
] | null | null | null | archive/python/howto-logging.py | ajrichards/notebook | fbd87c6f1613ea516408e9ebc3c9eff1248246e4 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
import time,os,re,csv,sys,uuid,joblib
from datetime import date
import numpy as np
from sklearn import svm
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
def train_model(X,y,saved_model):
"""
function to train model
"""
## Perform a train-test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
## Specify parameters and model
params = {'C':1.0,'kernel':'linear','gamma':0.5}
clf = svm.SVC(**params,probability=True)
## fit model on training data
clf = clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(classification_report(y_test,y_pred))
## retrain using all data
clf.fit(X, y)
print("... saving model: {}".format(saved_model))
joblib.dump(clf,saved_model)
print(y_test[:5])
print(X_test[:5,:])
def _update_predict_log(y_pred,y_proba,query,runtime):
"""
update predict log file
"""
## name the logfile using something that cycles with date (day, month, year)
today = date.today()
logfile = "example-predict-{}-{}.log".format(today.year, today.month)
## write the data to a csv file
header = ['unique_id','timestamp','y_pred','y_proba','x_shape','model_version','runtime']
write_header = False
if not os.path.exists(logfile):
write_header = True
with open(logfile,'a') as csvfile:
writer = csv.writer(csvfile, delimiter=',', quotechar='|')
if write_header:
writer.writerow(header)
to_write = map(str,[uuid.uuid4(),time.time(),y_pred,y_proba,query.shape,MODEL_VERSION,runtime])
writer.writerow(to_write)
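# A sketch (not part of the original how-to) of reading the prediction log back for
# monitoring; assumes pandas is available and uses the header columns written above:
#   import pandas as pd
#   logs = pd.read_csv(logfile)
#   print(logs[['y_pred', 'runtime']].tail())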
def predict(query):
"""
generic function for prediction
"""
## start timer for runtime
time_start = time.time()
## ensure the model is loaded
model = joblib.load(saved_model)
## output checking
if len(query.shape) == 1:
query = query.reshape(1, -1)
## make prediction and gather data for log entry
y_pred = model.predict(query)
y_proba = None
if 'predict_proba' in dir(model) and model.probability == True:
y_proba = model.predict_proba(query)
m, s = divmod(time.time()-time_start, 60)
h, m = divmod(m, 60)
runtime = "%03d:%02d:%02d"%(h, m, s)
## update the log file
_update_predict_log(y_pred,y_proba,query,runtime)
return(y_pred)
if __name__ == "__main__":
## import some data to play with
iris = datasets.load_iris()
X = iris.data[:,:2]
y = iris.target
## train the model
MODEL_VERSION = 1.0
saved_model = "example-predict-{}.joblib".format(re.sub("\.","_",str(MODEL_VERSION)))
model = train_model(X,y,saved_model)
## example predict
query = np.array([[6.1,2.8]])
for query in [np.array([[6.1,2.8]]), np.array([[7.7,2.5]]), np.array([[5.8,3.8]])]:
y_pred = predict(query)
print("predicted: {}".format(y_pred))
| 28.259259 | 103 | 0.640891 | 447 | 3,052 | 4.210291 | 0.333333 | 0.026567 | 0.012752 | 0.023379 | 0.085016 | 0.076514 | 0.041445 | 0.041445 | 0.041445 | 0 | 0 | 0.018372 | 0.215269 | 3,052 | 107 | 104 | 28.523364 | 0.767432 | 0.167759 | 0 | 0 | 0 | 0 | 0.081103 | 0.020276 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.122807 | 0 | 0.175439 | 0.087719 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e28f288d8baf761d62a58045450c5d688f0b7d68 | 1,601 | py | Python | 892.surface-area-of-3-d-shapes.py | Lonitch/hackerRank | 84991b8340e725422bc47eec664532cc84a3447e | [
"MIT"
] | null | null | null | 892.surface-area-of-3-d-shapes.py | Lonitch/hackerRank | 84991b8340e725422bc47eec664532cc84a3447e | [
"MIT"
] | null | null | null | 892.surface-area-of-3-d-shapes.py | Lonitch/hackerRank | 84991b8340e725422bc47eec664532cc84a3447e | [
"MIT"
] | null | null | null | #
# @lc app=leetcode id=892 lang=python3
#
# [892] Surface Area of 3D Shapes
#
# https://leetcode.com/problems/surface-area-of-3d-shapes/description/
#
# algorithms
# Easy (57.01%)
# Likes: 209
# Dislikes: 270
# Total Accepted: 15.9K
# Total Submissions: 27.5K
# Testcase Example: '[[2]]'
#
# On a N * N grid, we place some 1 * 1 * 1 cubes.
#
# Each value v = grid[i][j] represents a tower of v cubes placed on top of grid
# cell (i, j).
#
# Return the total surface area of the resulting shapes.
#
#
#
#
#
#
#
#
#
#
#
#
#
# Example 1:
#
#
# Input: [[2]]
# Output: 10
#
#
#
# Example 2:
#
#
# Input: [[1,2],[3,4]]
# Output: 34
#
#
#
# Example 3:
#
#
# Input: [[1,0],[0,2]]
# Output: 16
#
#
#
# Example 4:
#
#
# Input: [[1,1,1],[1,0,1],[1,1,1]]
# Output: 32
#
#
#
# Example 5:
#
#
# Input: [[2,2,2],[2,1,2],[2,2,2]]
# Output: 46
#
#
#
#
# Note:
#
#
# 1 <= N <= 50
# 0 <= grid[i][j] <= 50
#
#
#
#
#
#
#
#
# @lc code=start
from typing import List


class Solution:
    def surfaceArea(self, grid: List[List[int]]) -> int:
        m, n = len(grid), len(grid[0])
        # Pad the grid with a border of zero-height cells so every real cell
        # has four neighbors and no boundary checks are needed.
        for i in range(m):
            grid[i] = [0] + grid[i] + [0]
        grid = [[0] * (n + 2)] + grid + [[0] * (n + 2)]
        ans = 0
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                c = grid[i][j]
                if c > 0:
                    ans += 2  # top and bottom faces of the tower
                    # Exposed side area toward each neighbor is the positive
                    # part of the height difference.
                    nb = [c - grid[i][j - 1], c - grid[i][j + 1],
                          c - grid[i - 1][j], c - grid[i + 1][j]]
                    for s in nb:
                        if s > 0:
                            ans += s
        return ans
# @lc code=end
| 14.294643 | 79 | 0.445971 | 243 | 1,601 | 2.938272 | 0.366255 | 0.063025 | 0.021008 | 0.029412 | 0.148459 | 0.030812 | 0.030812 | 0.030812 | 0 | 0 | 0 | 0.091516 | 0.344785 | 1,601 | 111 | 80 | 14.423423 | 0.589133 | 0.506558 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e28fafb02e2674eaa86bebf78be4058f9ae1ced9 | 1,147 | py | Python | tensorflow/afno/afno.py | DarshanDeshpande/research-paper-implementations | fc24acfc4644ccdc9f7d46a411aa66153f234499 | [
"MIT"
] | 7 | 2021-12-20T00:53:46.000Z | 2022-03-17T01:37:00.000Z | tensorflow/afno/afno.py | DarshanDeshpande/research-paper-implementations | fc24acfc4644ccdc9f7d46a411aa66153f234499 | [
"MIT"
] | null | null | null | tensorflow/afno/afno.py | DarshanDeshpande/research-paper-implementations | fc24acfc4644ccdc9f7d46a411aa66153f234499 | [
"MIT"
] | 1 | 2022-03-31T05:41:53.000Z | 2022-03-31T05:41:53.000Z | import tensorflow as tf
import tensorflow_addons as tfa
class AFNO(tf.keras.layers.Layer):
"""
AFNO with adaptive weight sharing and adaptive masking.
"""
def __init__(self, k, *args, **kwargs):
self.k = k
super().__init__(*args, **kwargs)
def build(self, input_shape):
d = (input_shape[-1] // 2) + 1
self.mlp_block = tf.keras.Sequential(
[
                tf.keras.layers.Dense(d // self.k, activation="relu"),
                tf.keras.layers.Dense(d // self.k, activation="linear"),
]
)
    def call(self, input, training=True):
        temp = input  # keep the input for the residual connection
        # Move to the frequency domain (rfft2d transforms the two innermost axes).
        x = tf.signal.rfft2d(
            tf.cast(input, tf.float32),
        )
        batch, h, w, d = x.shape
        # Split the last axis into k blocks; the small MLP is shared across blocks,
        # which gives the block-diagonal, weight-shared mixing of AFNO.
        x = tf.reshape(x, [batch, h, w, self.k, d // self.k])
        x = self.mlp_block(x)
        x = tf.reshape(x, [batch, h, w, d])
        # Soft-shrinkage sparsifies the frequency response (adaptive masking).
        x = tfa.activations.softshrink(x)
        x = tf.signal.irfft2d(tf.cast(x, tf.complex64))
        return x + temp
def get_config(self):
return {"k": self.k}
@classmethod
def from_config(cls, config):
return cls(**config)
| 27.309524 | 71 | 0.541412 | 153 | 1,147 | 3.960784 | 0.392157 | 0.057756 | 0.064356 | 0.059406 | 0.189769 | 0.171617 | 0.171617 | 0.112211 | 0 | 0 | 0 | 0.011465 | 0.315606 | 1,147 | 41 | 72 | 27.97561 | 0.76051 | 0.047951 | 0 | 0 | 0 | 0 | 0.010223 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.16129 | false | 0 | 0.064516 | 0.064516 | 0.354839 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e28ff84afb5353549cff424a116a8d2e4aba5e25 | 7,696 | py | Python | deploy/cdk_files/deploy_cdk_stack.py | nickderobertis/nick-derobertis-site | 386061dc258921eed41f2d3965ef69e02adde7ba | [
"MIT"
] | 1 | 2022-03-31T10:55:40.000Z | 2022-03-31T10:55:40.000Z | deploy/cdk_files/deploy_cdk_stack.py | nickderobertis/nick-derobertis-site | 386061dc258921eed41f2d3965ef69e02adde7ba | [
"MIT"
] | 8 | 2020-08-28T11:44:37.000Z | 2020-08-31T09:19:19.000Z | deploy/cdk_files/deploy_cdk_stack.py | nickderobertis/nick-derobertis-site | 386061dc258921eed41f2d3965ef69e02adde7ba | [
"MIT"
] | null | null | null | """AWS CDK module to create ECS infrastructure"""
import os
from typing import Optional
from aws_cdk import (
core,
aws_ecs as ecs,
aws_ec2 as ec2,
aws_iam as iam,
aws_ecr as ecr,
aws_elasticloadbalancingv2 as elbv2,
aws_route53 as route53,
aws_route53_targets as alias,
aws_certificatemanager as acm,
aws_ssm as ssm,
)
from .config import DeploymentConfig
from create_ssh_key import key_pair_path
DUMMY_REGISTRY_PATH = "amazon/amazon-ecs-sample"
DEPLOY_ENV_NAME = os.environ["DEPLOY_ENVIRONMENT_NAME"]
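# A deployment sketch (assumes the CDK app entrypoint is configured in cdk.json and
# AWS credentials/region are set):
#   DEPLOY_ENVIRONMENT_NAME=staging cdk deploy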
class DeployCdkStack(core.Stack):
"""
Creates AWS infrastructure using AWS CDK
See more here: https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_ecs.README.html
"""
def __init__(
self,
scope: core.Construct,
id: str,
cfg: DeploymentConfig = DeploymentConfig(),
**kwargs,
) -> None:
super().__init__(scope, id, **kwargs)
kp_path = key_pair_path(DEPLOY_ENV_NAME, public=True)
ssm.StringParameter(
self,
cfg.names.public_key_param,
description="SSH Public Key",
parameter_name=cfg.params.ssh_key,
string_value=kp_path.read_text(),
tier=ssm.ParameterTier.STANDARD,
)
# Create the ECR Repository
ecr_repository = ecr.Repository(
self,
cfg.names.ecr_repo,
repository_name=cfg.names.ecr_repo,
removal_policy=core.RemovalPolicy.DESTROY,
)
# Create the ECS Cluster (and VPC)
vpc = ec2.Vpc(self, cfg.names.vpc, max_azs=3, nat_gateways=0)
cluster = ecs.Cluster(
self, cfg.names.ecs_cluster, cluster_name=cfg.names.ecs_cluster, vpc=vpc,
)
# Create the ECS Task Definition with placeholder container (and named Task Execution IAM Role)
execution_role = iam.Role(
self,
cfg.names.ecs_execution_role,
assumed_by=iam.ServicePrincipal("ecs-tasks.amazonaws.com"),
role_name=cfg.names.ecs_execution_role,
)
execution_role.add_to_policy(
iam.PolicyStatement(
effect=iam.Effect.ALLOW,
resources=["*"],
actions=[
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"logs:CreateLogStream",
"logs:CreateLogGroup",
"logs:PutLogEvents",
"ssm:GetParameters",
],
)
)
task_definition = ecs.FargateTaskDefinition(
self,
cfg.names.ecs_task_definition,
execution_role=execution_role,
family=cfg.names.ecs_task_definition,
)
container = task_definition.add_container(
cfg.names.app,
image=ecs.ContainerImage.from_registry(DUMMY_REGISTRY_PATH),
logging=ecs.LogDrivers.aws_logs(stream_prefix=cfg.names.app),
)
container.add_port_mappings(ecs.PortMapping(container_port=80, host_port=80))
# Create the ECS Service
service = ecs.FargateService(
self,
cfg.names.ecs_service,
cluster=cluster,
task_definition=task_definition,
service_name=cfg.names.ecs_service,
assign_public_ip=cfg.container_public_ip,
)
# Create a load balancer for the service
lb = elbv2.ApplicationLoadBalancer(
self,
cfg.names.load_balancer,
vpc=vpc,
internet_facing=cfg.is_public,
load_balancer_name=cfg.names.load_balancer,
)
health_check = elbv2.HealthCheck(
path=cfg.health_check.path,
interval=core.Duration.minutes(cfg.health_check.interval_minutes),
timeout=core.Duration.seconds(cfg.health_check.timeout_seconds),
healthy_http_codes=",".join(
[str(code) for code in cfg.health_check.healthy_http_codes]
),
)
target_group = elbv2.ApplicationTargetGroup(
scope=self,
id=cfg.names.autoscaling_target_group,
targets=[service],
vpc=vpc,
port=80,
health_check=health_check,
)
# Get existing hosted zone for URL
hosted_zone = route53.HostedZone.from_lookup(
self, cfg.names.route53_zone, domain_name=cfg.url
)
if cfg.include_ssl:
if cfg.include_www:
subject_alternative_names = [f"www.{cfg.url}"]
else:
subject_alternative_names = None
# Create SSL Certificate
cert = acm.Certificate(
self,
cfg.names.cert,
domain_name=cfg.url,
subject_alternative_names=subject_alternative_names,
validation=acm.CertificateValidation.from_dns(hosted_zone),
)
# Listen on 443 with cert, 80 redirects to 443
https_listener = lb.add_listener(
cfg.names.load_balancer_https_listener, port=443, certificates=[cert]
)
http_listener = lb.add_listener(
cfg.names.load_balancer_http_listener,
port=80,
default_action=elbv2.ListenerAction.redirect(protocol="HTTPS", permanent=True, port='443'),
)
https_listener.add_target_groups(
cfg.names.load_balancer_listener_target_groups,
target_groups=[target_group],
)
else:
# Listen on 80
http_listener = lb.add_listener(
cfg.names.load_balancer_http_listener, port=80,
)
http_listener.add_target_groups(
cfg.names.load_balancer_listener_target_groups,
target_groups=[target_group],
)
# Auto scaling options
scaling = service.auto_scale_task_count(max_capacity=cfg.autoscale.count_limit)
if cfg.autoscale.cpu_pct_limit:
scaling.scale_on_cpu_utilization(
"CpuScaling",
target_utilization_percent=cfg.autoscale.cpu_pct_limit,
policy_name=cfg.names.autoscaling_cpu_policy,
)
if cfg.autoscale.memory_pct_limit:
scaling.scale_on_memory_utilization(
"MemoryScaling",
target_utilization_percent=cfg.autoscale.memory_pct_limit,
policy_name=cfg.names.autoscaling_memory_policy,
)
if cfg.autoscale.request_count_limit:
scaling.scale_on_request_count(
"RequestScaling",
requests_per_target=cfg.autoscale.request_count_limit,
target_group=target_group,
policy_name=cfg.names.autoscaling_requests_policy,
)
# Route53 DNS Config
if cfg.url and hosted_zone is not None:
route53.ARecord(
self,
cfg.names.alias_record,
zone=hosted_zone,
target=route53.RecordTarget.from_alias(alias.LoadBalancerTarget(lb)),
record_name=cfg.url,
)
if cfg.include_www:
route53.CnameRecord(
self,
cfg.names.www_record,
domain_name=cfg.url,
zone=hosted_zone,
record_name=f"www.{cfg.url}",
)
| 34.666667 | 107 | 0.58342 | 798 | 7,696 | 5.35589 | 0.289474 | 0.054282 | 0.033692 | 0.032756 | 0.187178 | 0.106926 | 0.096631 | 0.079317 | 0.069724 | 0.069724 | 0 | 0.010806 | 0.338617 | 7,696 | 221 | 108 | 34.823529 | 0.82888 | 0.071206 | 0 | 0.153846 | 0 | 0 | 0.046253 | 0.021369 | 0 | 0 | 0 | 0 | 0 | 1 | 0.005495 | false | 0 | 0.027473 | 0 | 0.038462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2908b05587d86510e1d4c415f410a863e39895d | 1,139 | py | Python | hello/urls.py | KKawamura1/sirobutton | fb74a68e6f7a18177df3fd60df898e46d59f886e | [
"MIT"
] | 8 | 2018-07-03T03:08:41.000Z | 2020-01-05T00:08:04.000Z | hello/urls.py | KKawamura1/sirobutton | fb74a68e6f7a18177df3fd60df898e46d59f886e | [
"MIT"
] | null | null | null | hello/urls.py | KKawamura1/sirobutton | fb74a68e6f7a18177df3fd60df898e46d59f886e | [
"MIT"
] | null | null | null | from django.urls import path
from django.contrib.sitemaps.views import sitemap
import hello.views
from hello.sitemaps import SubtitleSitemap, StaticSitemap
app_name = 'sirobutton'
sitemaps = {
'subtitles': SubtitleSitemap,
'static': StaticSitemap,
}
urlpatterns = [
path('', hello.views.SubtitleListView.as_view(), name='home'),
path('lists/', hello.views.SubtitleListView.as_view(), name='lists'),
path('subtitle/<int:pk>/', hello.views.SubtitleDetailView.as_view(), name='subtitle-detail'),
path('tags/', hello.views.TagListView.as_view(), name='tags'),
path('extra-search/', hello.views.DetailedSearchView.as_view(), name='detailed-search'),
path('about-this/', hello.views.AboutThisView.as_view(), name='about-this'),
path('jump-to-youtube/<int:pk>/', hello.views.RedirectToYoutubeView.as_view(),
name='jump-to-youtube'),
path('api/v1/post-add-tag/', hello.views.PostAddTagView.as_view(), name='add-tag'),
path('api/v1/post-remove-tag/', hello.views.PostRemoveTagView.as_view(), name='remove-tag'),
path('api/v1/oembed/', hello.views.OEmbedView.as_view(), name='oembed'),
]
| 42.185185 | 97 | 0.707638 | 144 | 1,139 | 5.520833 | 0.354167 | 0.138365 | 0.125786 | 0.07044 | 0.090566 | 0.090566 | 0 | 0 | 0 | 0 | 0 | 0.002947 | 0.106234 | 1,139 | 26 | 98 | 43.807692 | 0.777996 | 0 | 0 | 0 | 0 | 0 | 0.220369 | 0.042142 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e290dde6888c2655675fa623360f1477b47adc7f | 5,415 | py | Python | app/freelancer/tests/test_profile.py | mshirzad/find-my-job | 7dca88d6233649952f0b948156a91af5b96352ff | [
"MIT"
] | null | null | null | app/freelancer/tests/test_profile.py | mshirzad/find-my-job | 7dca88d6233649952f0b948156a91af5b96352ff | [
"MIT"
] | null | null | null | app/freelancer/tests/test_profile.py | mshirzad/find-my-job | 7dca88d6233649952f0b948156a91af5b96352ff | [
"MIT"
] | 1 | 2022-03-06T17:44:49.000Z | 2022-03-06T17:44:49.000Z | import os, tempfile
from PIL import Image
from django.contrib.auth import get_user_model
from django.test import TestCase
from django.urls import reverse
from rest_framework import test, status
from rest_framework.test import APIClient
from core.models import Profile, Address, Gig, Education
from freelancer.serializers import ProfileSerializer
# MY_PROFILE_URL = reverse('freelancer:myProfile-list')
# ALL_PROFILES_URL = reverse('freelancer:profile-list')
def upload_profile_photo_url(profile_id):
return reverse('freelancer:myprofile-uploade-profile-photo', args=[profile_id])
def profile_details_url(profile_id):
return reverse('freelancer:myprofile-details', args=[profile_id])
def create_sample_address(**params):
defaults = {
'address_line1': 'Apt 102, St 33 NW',
'city': 'LA',
'province': 'CA',
'post_code': '33AW23',
'country': 'USA'
}
defaults.update(params)
return Address.objects.create(**defaults)
def create_sample_edu(**params):
defaults = {
'degree': 'Master',
'university': 'MIT',
'faculty': 'CS',
'start_year': 2018,
'graduation_year': 2020
}
defaults.update(params)
return Education.objects.create(**defaults)
def create_sample_profile(user, **params):
defaults = {
'phone': '+93778898899',
'profession': 'Eng',
'boi': 'Test Boi',
'address': create_sample_address(),
'education': create_sample_edu()
}
defaults.update(params)
return Profile.objects.create(user=user, **defaults)
def create_sample_gig(freelancer, **params):
defaults = {
'title': 'New Gig for Web App',
'description': 'Some Lorem ipsom',
'min_price': 40.00
}
defaults.update(params)
return Gig.objects.create(freelancer=freelancer, **defaults)
# class TestPublicProfileAPI(TestCase):
# def setUp(self):
# self.client = APIClient()
# def test_auth_required(self):
# resp = self.client.get(ALL_PROFILES_URL)
# self.assertEqual(resp.status_code, status.HTTP_401_UNAUTHORIZED)
# class TestPrivateProfileAPI(TestCase):
# def setUp(self):
# self.client = APIClient()
# self.user = get_user_model().objects.create_user(
# email='test@findmyjob.com',
# password='test@12345'
# )
# self.user.name = 'Test User'
# self.client.force_authenticate(self.user)
# def test_show_freelancer_profile_to_other_users(self):
# user2 = get_user_model().objects.create_user(
# 'otheruser@findmyjob.com',
# 'test@1234555'
# )
# user2.name = 'Test USER'
# user3 = get_user_model().objects.create_user(
# 'user3@findmyjob.com',
# 'test@1234555'
# )
# user3.name = 'Test USER3'
# create_sample_profile(user=user2)
# create_sample_profile(user=user3)
# resp = self.client.get(ALL_PROFILES_URL)
# profiles = Profile.objects.all().order_by('-rating')
# serializer = ProfileSerializer(profiles, many=True)
# self.assertEqual(resp.status_code, status.HTTP_200_OK)
# self.assertEqual(resp.data, serializer.data)
# def test_show_profile_to_its_own_user(self):
# user2 = get_user_model().objects.create_user(
# 'otheruser@findmyjob.com',
# 'test@1234555'
# )
# user2.name = 'Test USER2'
# create_sample_profile(user=user2)
# create_sample_profile(user=self.user)
# resp = self.client.get(MY_PROFILE_URL)
# profile = Profile.objects.filter(user=self.user)
# serializer = ProfileSerializer(profile, many=True)
# self.assertEqual(resp.status_code, status.HTTP_200_OK)
# self.assertEqual(len(resp.data), 1)
# print(resp.data)
# print("#########")
# print(serializer.data)
# self.assertEqual(resp.data, serializer.data)
# class TestUploadProfilePhotoAPI(TestCase):
# def setUp(self):
# self.client = APIClient()
# self.user = get_user_model().objects.create_user(
# email='test@findmyjob.com',
# password='test@12345'
# )
# self.user.name = 'Test User'
# self.client.force_authenticate(self.user)
# self.profile = create_sample_profile(user= self.user)
# def tearDown(self):
# self.profile.profile_photo.delete()
# def test_upload_profile_photo(self):
# url = upload_profile_photo_url(profile_id=self.profile.id)
# with tempfile.NamedTemporaryFile(suffix='.jpg') as nft:
# img = Image.new('RGB', (10,10))
# img.save(nft, format='JPEG')
# nft.seek(0)
#             resp = self.client.post(url, {'profile_photo': nft}, format='multipart')
#         self.profile.refresh_from_db()
# self.assertEqual(resp.status_code, status.HTTP_200_OK)
# self.assertIn('profile_photo', resp.data)
# self.assertTrue(os.path.exists(self.profile.profile_photo.path))
# def test_upload_profile_photo_bad_image(self):
# url = upload_profile_photo_url(profile_id=self.profile.id)
#         resp = self.client.post(url, {'profile_photo': 'noImage'}, format='multipart')
# self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)
| 28.650794 | 88 | 0.635272 | 613 | 5,415 | 5.414356 | 0.270799 | 0.039771 | 0.040072 | 0.041579 | 0.449834 | 0.433263 | 0.367882 | 0.26755 | 0.26755 | 0.236818 | 0 | 0.023447 | 0.236011 | 5,415 | 188 | 89 | 28.803191 | 0.778825 | 0.638227 | 0 | 0.156863 | 0 | 0 | 0.169519 | 0.037433 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.176471 | 0.039216 | 0.411765 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e292938be0fbd99557d32fc2766285f31e477327 | 5,106 | py | Python | app2.py | dropout-sih/flask-backend | 0423274a285a841bb007e7cf4284d6d95d74e20a | [
"MIT"
] | null | null | null | app2.py | dropout-sih/flask-backend | 0423274a285a841bb007e7cf4284d6d95d74e20a | [
"MIT"
] | null | null | null | app2.py | dropout-sih/flask-backend | 0423274a285a841bb007e7cf4284d6d95d74e20a | [
"MIT"
] | null | null | null | from bokeh.embed import components
from bokeh.plotting import figure, curdoc, ColumnDataSource
from bokeh.resources import INLINE
from bokeh.util.string import encode_utf8
from bokeh.models import CustomJS, LabelSet, Slider
from bokeh.models.widgets import Slider
from bokeh.models.layouts import WidgetBox, Row
from bokeh.layouts import row, widgetbox
from werkzeug.utils import secure_filename
from flask import Flask, render_template, flash, request, redirect, url_for, session
from wtforms import Form, TextField, TextAreaField, validators, StringField, SubmitField
import os
from forms import *
import pandas as pd
import plotly
import plotly.plotly as py
import json
import numpy as np
from pandas import ExcelWriter
from pandas import ExcelFile
DEBUG = True
app = Flask(__name__) #initialising flask
app.config.from_object(__name__) #configuring flask
app.config['SECRET_KEY'] = '7d441f27d441f27567d441f2b6176a'
CURRENT_YEAR = '2015'
data = pd.read_csv("test_data.csv")
df = pd.DataFrame(data)
external = df['Normalised_x'].tolist()
internal = df['Normalised_y'].tolist()
names = df['Country'].tolist()
data_mod = pd.read_excel('final.xlsx', sheet_name=CURRENT_YEAR)
df1 = pd.DataFrame(data_mod)
ext = df1['External'].tolist()
internal_vals = df1['Internal'].tolist()  # renamed from 'int' to avoid shadowing the built-in
name = df1['Country'].tolist()
code = df1['Code'].tolist()
manip_name = ["India", "Belgium", "England"]
manip_y = [0.3,0.5,0.5]
source = ColumnDataSource(data=dict(external=ext, internal=internal_vals, names=name, c1=manip_y, c2=manip_name))
UPLOAD_FOLDER = '/static/internal/'
ALLOWED_EXTENSIONS = set(['csv'])
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
plotly.tools.set_credentials_file(username='rahulkumaran', api_key='04p6710F0Pcs8tmwLuSf')
'''def callback(attr, old, new):
data = source.data
val = cb_obj.year.value
    x, y, names = ext, internal_vals, name
for i in range(0,len(name)):
if(name[i] in manip_name):
y[i] = manip_y[manip_name.index(name[i])]
print("here")
source.change.emit()'''
def allowed_file(filename):
return '.' in filename and filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS
@app.route('/', methods = ['GET', 'POST'])
def index():
return render_template('lp.html')
@app.route('/u', methods = ['GET', 'POST'])
def u():
if request.method == 'POST':
# check if the post request has the file part
if 'file' not in request.files:
flash('No file part')
return redirect(request.url)
file = request.files['file']
# if user does not select file, browser also
# submit an empty part without filename
if file.filename == '':
flash('No selected file')
return redirect(request.url)
if file and allowed_file(file.filename):
filename = secure_filename(file.filename)
file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
return redirect(url_for('visualize'))
return render_template('upload.html')
@app.route("/relative-market-attractive-index-heatmap", methods=['GET', 'POST'])
def rmai():
data = [ dict(
type = 'choropleth',
locations = code,
z = ext,
text = name,
colorscale = [[0,"rgb(5, 10, 172)"],[0.35,"rgb(40, 60, 190)"],[0.5,"rgb(70, 100, 245)"],[0.6,"rgb(90, 120, 245)"],[0.7,"rgb(106, 137, 247)"],[1,"rgb(220, 220, 220)"]],
autocolorscale = False,
reversescale = True,
marker = dict(
line = dict (
color = 'rgb(180,180,180)',
width = 0.5
) ),
colorbar = dict(
autotick = False,
tickprefix = 'Relative',
title = 'Market<br>Attractive Index'),
) ]
layout = dict(
title = 'Relative Market Attractive Index Heatmap',
geo = dict(
showframe = False,
showcoastlines = False,
projection = dict(
type = 'Mercator'
)
)
)
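    # Note: 'data' and 'layout' above are prepared for a Plotly choropleth but are
    # never handed to the template. A minimal sketch of wiring them in (assumes the
    # template renders a 'plot_div' variable):
    #   from plotly.offline import plot
    #   plot_div = plot(dict(data=data, layout=layout),
    #                   output_type='div', include_plotlyjs=False)
    #   return render_template('heatmap.html', plot_div=plot_div)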
return render_template('heatmap.html')
@app.route("/visualize",methods=['GET','POST'])
def visualize():
js_resources = INLINE.render_js()
css_resources = INLINE.render_css()
callback = CustomJS(args=dict(source=source), code="""
var data = source.data;
var val = year.value;
var get_name = data['c2'];
    var get_y = data['c1'];
var x = data['external'];
var y = data['internal'];
var name = data['names'];
for (var i = 0; i < x.length; i++) {
if(get_name.includes(name[i])){
y[i] = y[i] + get_y[(get_name.indexOf(name[i]))];
}
}
source.change.emit();
""")
fig = figure(plot_width=1000, plot_height=600)
fig.scatter('external', 'internal', source=source, marker="circle", size=5,line_color="navy", fill_color="green", alpha=0.6)
fig.xaxis[0].axis_label = 'External Index'
    fig.yaxis[0].axis_label = 'Internal Index'
labels = LabelSet(x='external', y='internal', text='names', level='glyph', x_offset=5, y_offset=5, source=source, render_mode='canvas')
fig.add_layout(labels)
year = Slider(title="Year ", value=2000, start=1992, end=2030, step=1, width=250, callback=callback)
callback.args["year"] = year
layout = row(
fig,
widgetbox(year),
)
script, div = components(layout)
html = render_template(
'index.html',
layout_script=script,
layout_div=div,
js_resources=js_resources,
css_resources=css_resources,
)
return encode_utf8(html)
if __name__ == "__main__":
app.run(debug=True)
| 27.451613 | 168 | 0.697219 | 726 | 5,106 | 4.783747 | 0.352617 | 0.020731 | 0.016124 | 0.01958 | 0.031097 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035297 | 0.145515 | 5,106 | 185 | 169 | 27.6 | 0.760715 | 0.031336 | 0 | 0.029197 | 0 | 0 | 0.225048 | 0.033768 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036496 | false | 0 | 0.145985 | 0.014599 | 0.240876 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e292cc422e185ad6a241928b70c1875793558ca0 | 1,072 | py | Python | src/mailauth/migrations/0001_initial.py | Nnonexistent/chemphys | d2f34364d006a494bb965bb83d1967d7dd56f9ba | [
"MIT"
] | null | null | null | src/mailauth/migrations/0001_initial.py | Nnonexistent/chemphys | d2f34364d006a494bb965bb83d1967d7dd56f9ba | [
"MIT"
] | 19 | 2015-03-08T08:46:09.000Z | 2019-10-01T05:16:43.000Z | src/mailauth/migrations/0001_initial.py | Nnonexistent/chemphys | d2f34364d006a494bb965bb83d1967d7dd56f9ba | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
import mailauth.models
import django.utils.timezone
from django.conf import settings
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='MailAuthToken',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('key', models.CharField(default=mailauth.models.default_key, unique=True, max_length=64, verbose_name='Key')),
('created', models.DateTimeField(default=django.utils.timezone.now)),
('user', models.ForeignKey(verbose_name='User', to=settings.AUTH_USER_MODEL)),
],
options={
'verbose_name': 'Mail auth token',
'verbose_name_plural': 'Mail auth tokens',
},
bases=(models.Model,),
),
]
| 33.5 | 127 | 0.620336 | 108 | 1,072 | 5.972222 | 0.518519 | 0.085271 | 0.058915 | 0.065116 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003774 | 0.258396 | 1,072 | 31 | 128 | 34.580645 | 0.807547 | 0.01959 | 0 | 0 | 0 | 0 | 0.095329 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.32 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2962b912b0ef2494f5546db33de053e4cecd765 | 733 | py | Python | augmentation.py | JamesQFreeman/contrastive_learning_in_100_lines | e5c015c2fad392dd0142cd93728cec3ccdb935b9 | [
"MIT"
] | 2 | 2021-09-07T11:53:42.000Z | 2021-09-25T15:21:24.000Z | augmentation.py | JamesQFreeman/contrastive_learning_in_100_lines | e5c015c2fad392dd0142cd93728cec3ccdb935b9 | [
"MIT"
] | null | null | null | augmentation.py | JamesQFreeman/contrastive_learning_in_100_lines | e5c015c2fad392dd0142cd93728cec3ccdb935b9 | [
"MIT"
] | null | null | null | from torchvision import transforms as T
import torch
import random
class RandomApply(torch.nn.Module):
def __init__(self, fn, p):
super().__init__()
self.fn = fn
self.p = p
def forward(self, x):
if random.random() > self.p:
return x
return self.fn(x)
SimCLR_augment = torch.nn.Sequential(
RandomApply(
T.ColorJitter(0.8, 0.8, 0.8, 0.2),
p=0.3
),
T.RandomGrayscale(p=0.2),
T.RandomHorizontalFlip(),
RandomApply(
T.GaussianBlur((3, 3), (1.0, 2.0)),
p=0.2
),
T.RandomResizedCrop((224, 224)),
T.Normalize(
mean=torch.tensor([0.485, 0.456, 0.406]),
std=torch.tensor([0.229, 0.224, 0.225])),
)
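# Example usage (a sketch; assumes torchvision >= 0.8 so the transforms accept
# tensors, and a float image tensor in [0, 1] of shape CxHxW):
#   img = torch.rand(3, 256, 256)
#   view1, view2 = SimCLR_augment(img), SimCLR_augment(img)  # two augmented views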
| 21.558824 | 49 | 0.559345 | 105 | 733 | 3.819048 | 0.428571 | 0.01995 | 0.022444 | 0.01995 | 0.017456 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 0.283765 | 733 | 33 | 50 | 22.212121 | 0.668571 | 0 | 0 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.107143 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2966a8acfb10fae2bc5cafae57c64dc5b245211 | 728 | py | Python | lessons/058/function/app.py | murasaki718/tutorials | ad55a1f34f2dc050a5ccbceae0c09def0626dccf | [
"MIT"
] | 225 | 2020-12-12T03:41:46.000Z | 2022-03-30T20:07:31.000Z | lessons/058/function/app.py | kevAnto/tutorials | 9db76b5eeb6a54afabae6a4e386f155cc5dbc025 | [
"MIT"
] | 9 | 2021-09-18T12:36:23.000Z | 2022-03-11T17:24:20.000Z | lessons/058/function/app.py | kevAnto/tutorials | 9db76b5eeb6a54afabae6a4e386f155cc5dbc025 | [
"MIT"
] | 410 | 2020-12-24T03:34:33.000Z | 2022-03-31T22:38:13.000Z | import boto3
import requests
def lambda_handler(event, context):
print(event)
object_get_context = event["getObjectContext"]
request_route = object_get_context["outputRoute"]
request_token = object_get_context["outputToken"]
s3_url = object_get_context["inputS3Url"]
# Get object from S3
response = requests.get(s3_url)
original_object = response.content.decode('utf-8')
# Transform object
transformed_object = original_object.upper()
# Write object back to S3 Object Lambda
s3 = boto3.client('s3')
s3.write_get_object_response(
Body=transformed_object,
RequestRoute=request_route,
RequestToken=request_token)
return {'status_code': 200}
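# Note (deployment context, assumed rather than shown here): this handler only takes
# effect when attached to an S3 Object Lambda Access Point; GET requests routed
# through that access point then receive the upper-cased object body.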
| 26.962963 | 54 | 0.712912 | 86 | 728 | 5.767442 | 0.476744 | 0.072581 | 0.129032 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024014 | 0.199176 | 728 | 26 | 55 | 28 | 0.826758 | 0.100275 | 0 | 0 | 0 | 0 | 0.101382 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.117647 | 0 | 0.235294 | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2983c94a04750401fd0d6ef968b1c6b8d6d289f | 2,051 | py | Python | ch6 slam&navigation/turtlebot/kobuki/kobuki_testsuite/src/kobuki_testsuite/angular_accelerate.py | MINAMISAMA/Castle-X | f6aedc4e67f772b2aed269617ee8a9cac95c7f63 | [
"Apache-2.0"
] | 3 | 2021-01-10T10:52:14.000Z | 2021-12-31T10:19:25.000Z | ch6 slam&navigation/turtlebot/kobuki/kobuki_testsuite/src/kobuki_testsuite/angular_accelerate.py | MINAMISAMA/Castle-X | f6aedc4e67f772b2aed269617ee8a9cac95c7f63 | [
"Apache-2.0"
] | 1 | 2019-01-15T12:37:59.000Z | 2019-01-15T12:37:59.000Z | ch6 slam&navigation/turtlebot/kobuki/kobuki_testsuite/src/kobuki_testsuite/angular_accelerate.py | MINAMISAMA/Castle-X | f6aedc4e67f772b2aed269617ee8a9cac95c7f63 | [
"Apache-2.0"
] | 2 | 2019-01-14T07:48:42.000Z | 2019-01-15T06:32:27.000Z | #!/usr/bin/env python
#
# License: BSD
# https://raw.github.com/yujinrobot/kobuki/hydro-devel/kobuki_testsuite/LICENSE
#
##############################################################################
# Imports
##############################################################################
import threading
import rospy
import math
from geometry_msgs.msg import Twist
from std_msgs.msg import String
##############################################################################
# Classes
##############################################################################
'''
Implements a rotation with constant angular acceleration.
'''
class AngularAccelerateTest(threading.Thread):
def __init__(self,cmd_vel_topic,log_topic, freq, accl):
threading.Thread.__init__(self)
self.pub_cmd = rospy.Publisher(cmd_vel_topic,Twist)
self.pub_log = rospy.Publisher(log_topic,String)
twist = Twist()
twist.linear.x = 0
twist.linear.y = 0
twist.linear.z = 0
twist.angular.x = 0
twist.angular.y = 0
twist.angular.z = 0
self.twist = twist
self.freq = freq
self.accl = accl
self._stop = False
def stop(self):
self._stop = True
def run(self):
self._stop = False
start = rospy.get_rostime()
rospy.sleep(0.5)
twist = self.twist
freq = self.freq
self.rate = rospy.Rate(freq)
theta_dot_dot = self.accl
msg = "Time : " + str(rospy.get_rostime().secs) + " Vel : " +str(twist.angular.z)
while not rospy.is_shutdown() and not self._stop:
            twist.angular.z = twist.angular.z + (1.0 / freq) * theta_dot_dot
            msg = "Time : " + str(rospy.get_rostime().secs) + " Vel : " + str(twist.angular.z)
self.log(msg)
self.pub_cmd.publish(twist)
self.rate.sleep()
twist.angular.z = 0
self.pub_cmd.publish(twist)
    def log(self, msg):
rospy.loginfo(msg)
t = String(msg)
self.pub_log.publish(t)
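# Usage sketch (assumed, not part of the original test suite): the thread
# ramps the angular velocity by `accl` rad/s^2 at `freq` Hz until stop() is
# called; the topic names below are hypothetical.
# test = AngularAccelerateTest('/cmd_vel', '/test_log', freq=10.0, accl=0.1)
# test.start()
# ...
# test.stop()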
| 28.887324 | 93 | 0.504144 | 230 | 2,051 | 4.347826 | 0.330435 | 0.096 | 0.078 | 0.028 | 0.172 | 0.096 | 0.096 | 0.096 | 0.096 | 0.096 | 0 | 0.007092 | 0.243784 | 2,051 | 70 | 94 | 29.3 | 0.637653 | 0.066309 | 0 | 0.181818 | 0 | 0 | 0.018006 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.113636 | 0 | 0.227273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e29bbb2c0371d64fde51e9fa4ffd3d717bbe1e2c | 304 | py | Python | src/zs2decode/__init__.py | Gargoyl13/zs2decode | 7826df443327a9465a6a0626f10543e6cd9927e3 | [
"MIT"
] | 3 | 2021-12-02T20:09:26.000Z | 2022-03-07T22:38:08.000Z | src/zs2decode/__init__.py | Gargoyl13/zs2decode | 7826df443327a9465a6a0626f10543e6cd9927e3 | [
"MIT"
] | 6 | 2016-12-13T19:52:01.000Z | 2022-03-12T21:03:55.000Z | src/zs2decode/__init__.py | Gargoyl13/zs2decode | 7826df443327a9465a6a0626f10543e6cd9927e3 | [
"MIT"
] | 5 | 2016-07-11T12:27:17.000Z | 2021-07-20T12:48:13.000Z |
__version__ = "0.3.2.dev0"
__title__ = "zs2decode"
__description__ = "read Zwick zs2 and zp2 files"
__uri__ = "https://zs2decode.readthedocs.org/"
__author__ = "Chris Petrich"
__email__ = "cpetrich@users.noreply.github.com"
__license__ = "MIT"
__copyright__ = "Copyright (C) 2015-2017 Chris Petrich"
| 23.384615 | 55 | 0.743421 | 37 | 304 | 5.243243 | 0.891892 | 0.123711 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06015 | 0.125 | 304 | 12 | 56 | 25.333333 | 0.669173 | 0 | 0 | 0 | 0 | 0 | 0.551155 | 0.108911 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e29faff8b7ac6704b7fa82f70fed6f89ab3c757c | 2,170 | py | Python | fastiqa/bunches/iqa/test_images.py | baidut/PatchVQ | 040486b6342dfd36695f1daea0b5c4d77d728a23 | [
"Unlicense"
] | 32 | 2020-12-05T09:11:20.000Z | 2022-03-28T07:49:13.000Z | fastiqa/bunches/iqa/test_images.py | utlive/PatchVQ | 040486b6342dfd36695f1daea0b5c4d77d728a23 | [
"Unlicense"
] | 5 | 2021-07-12T19:43:51.000Z | 2022-01-28T13:16:16.000Z | fastiqa/bunches/iqa/test_images.py | utlive/PatchVQ | 040486b6342dfd36695f1daea0b5c4d77d728a23 | [
"Unlicense"
] | 7 | 2020-12-29T21:52:07.000Z | 2022-03-18T15:12:50.000Z | """fastai1 code, not yet converted"""
from ..label import *
from tqdm import tqdm
from fastai.vision import open_image
import os
from PIL import Image as PIL_Image
import pandas as pd  # pd.DataFrame is used below to build the scores table
"""
# %%
from fastiqa.all import *
learn = RoIPoolLearner.from_cls(FLIVE, RoIPoolModel)
learn.path = Path('.')
learn.load('RoIPoolModel-fit(10,bs=120)')
learn.export('trained_model.pkl')
from fastiqa.all import *; # Im2MOS(TestImages(path='/var/www/yourapplication'))
data.df
# %%
"""
class TestImages(IqaLabel): # Rois0123Label
path = '.'
img_tfm_size = None
valid_pct = 1
batch_size = 1
csv_labels = 'scores.csv'
@classmethod
    def from_learner(cls, learn, path=None, dir_qmap=None, sz=None, **kwargs):
        path = path if path is not None else cls.path  # fall back to the class default
def proc_file(f):
im = open_image(os.path.join(path, f))
if dir_qmap is not None:
qmap = learn.predict_quality_map(im, [32, 32])
name = os.path.basename(f).split('.')[0]
qmap.plot()
qmap.savefig(os.path.join(dir_qmap, name + '.jpg'))
score = qmap.global_score
if sz is not None:
height, width = qmap.img.size
new_width = sz # 500
new_height = new_width * height // width
qmap.pil_image.resize((new_width, new_height), PIL_Image.ANTIALIAS).save(os.path.join(dir_qmap, name + '_raw.jpg'))
qmap.blend(mos_range=(None, None)).resize((new_width, new_height), PIL_Image.ANTIALIAS).save(os.path.join(dir_qmap, name + '_map.jpg'))
else:
score = learn.predict(im)[0].obj[0]
del im
            if dir_qmap is not None:  # qmap only exists on the quality-map branch
                del qmap
return score
if dir_qmap is not None:
os.makedirs(dir_qmap, exist_ok=True)
valid_images = (".jpg",".jpeg",".png",".bmp",".tif")
        files = os.listdir(path)
files = [f for f in files if f.lower().endswith(valid_images)]
scores = [proc_file(f) for f in tqdm(files)]
df = pd.DataFrame({'mos': scores, 'name': files})
df.to_csv('scores.csv', index=False)
return cls(path=path)
pass
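# Usage sketch (assumed names and paths): score every image in a folder with
# a trained RoIPool learner and optionally dump quality maps next to the scores.
# data = TestImages.from_learner(learn, path='/path/to/images',
#                                dir_qmap='qmaps', sz=500)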
| 32.38806 | 155 | 0.586175 | 294 | 2,170 | 4.190476 | 0.397959 | 0.039773 | 0.032468 | 0.031656 | 0.151786 | 0.151786 | 0.105519 | 0.105519 | 0.105519 | 0.105519 | 0 | 0.014734 | 0.280645 | 2,170 | 66 | 156 | 32.878788 | 0.774504 | 0.023041 | 0 | 0.047619 | 0 | 0 | 0.038293 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0.02381 | 0.119048 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2a033cd806aa9352d0b43321894ec541d9fc3a8 | 2,264 | py | Python | codigos100/importaModulos.py | rosacarla/100-days-of-python-code | 3db9e35f861ce933e952cff2dd3a505dfce1b440 | [
"MIT"
] | 1 | 2021-09-26T09:17:36.000Z | 2021-09-26T09:17:36.000Z | codigos100/importaModulos.py | rosacarla/100-days-of-python-code | 3db9e35f861ce933e952cff2dd3a505dfce1b440 | [
"MIT"
] | null | null | null | codigos100/importaModulos.py | rosacarla/100-days-of-python-code | 3db9e35f861ce933e952cff2dd3a505dfce1b440 | [
"MIT"
] | null | null | null | # Code: Import Modules
# Author: Carla Edila Silveira
# Purpose: define functions - Python Quick Start course
# Date: 21/09/2021
# Python calendar MODULE
import calendar  # import the calendar module
cal = calendar.month(2021, 9)  # variable receives the requested month's calendar
print(cal)  # print the calendar assigned to the variable
# import the Python math module
import math  # import the math module
result = math.sqrt(49)  # assign the square root to the variable
print("The square root of 49 equals ", result)  # print the result
# Random MODULE
import random  # import the random module
number = random.randint(1, 100)  # randomly generate one integer between the two inputs
print(number)  # print the random integer
# random choice of an item from a list
movies = ["Aladim", "Oblivium", "Pocahontas", "The Lion King", "E o vento levou", "ET",
          "Spider-Man", "Men in Black", "Avengers", "Rio", "Karate Kid"]
# choose a random item from a sequence, e.g. a list of movies
watch = random.choice(movies)  # variable receives the item picked at random by choice()
print("Your movie collection is the following: ", movies)
print("I picked this movie for you to watch today: ", watch)  # print the chosen item
# random reordering of the items in a list
deck = ['Ace', 'Two', 'Three', 'Four', 'Five', 'Six', 'Seven', 'Eight', 'Nine', 'Ten', 'Jack', 'Queen', 'King']
print("Your deck of cards: ", deck)  # print the list in its creation order
random.shuffle(deck)  # the function reorders the list items at random
print("I shuffled your cards: ", deck)  # print the list rearranged by the function
# Pow MODULE
# Documentation: Returns x raised to the power y. Exceptional cases follow
# Annex 'F' of the C99 standard as far as possible. In particular, pow(1.0, x)
# and pow(x, 0.0) always return 1.0, even when x is a zero or a NaN. If
# both x and y are finite, x is negative, and y is not an integer, then pow(x, y)
# is undefined and raises "ValueError".
pot = math.pow(2, 10)  # variable receives the result of 2 raised to the 10th power
print("The result of 2 raised to the tenth power is: ", pot)  # print the exponentiation result
# end of code
| 43.538462 | 111 | 0.728357 | 350 | 2,264 | 4.711429 | 0.522857 | 0.019406 | 0.01698 | 0.015767 | 0.024257 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020419 | 0.178004 | 2,264 | 51 | 112 | 44.392157 | 0.865664 | 0.602915 | 0 | 0 | 0 | 0 | 0.410405 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.15 | 0 | 0.15 | 0.4 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2a035bc77e1b89c5aef8a210f2601d3543648c1 | 11,873 | py | Python | mindspore/train/quant/quant.py | ZephyrChenzf/mindspore | 8f191847cf71e12715ced96bc3575914f980127a | [
"Apache-2.0"
] | 7 | 2020-05-24T03:19:26.000Z | 2020-05-24T03:20:00.000Z | mindspore/train/quant/quant.py | ZephyrChenzf/mindspore | 8f191847cf71e12715ced96bc3575914f980127a | [
"Apache-2.0"
] | null | null | null | mindspore/train/quant/quant.py | ZephyrChenzf/mindspore | 8f191847cf71e12715ced96bc3575914f980127a | [
"Apache-2.0"
] | null | null | null | # Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""aware quantization."""
import re
from ... import nn
from ... import ops
from ..._checkparam import ParamValidator as validator
from ..._checkparam import Rel
from ...nn.layer import combined
from ...nn.layer import quant
_ACTIVATION_MAP = {nn.ReLU: quant.ReLUQuant,
nn.ReLU6: quant.ReLU6Quant,
nn.HSigmoid: quant.HSigmoidQuant,
nn.HSwish: quant.HSwishQuant}
class _AddFakeQuantInputOutput(nn.Cell):
"""
    Add FakeQuant at the input and output of the network. Only supports the one-input, one-output case.
"""
def __init__(self, network, quant_delay=0):
super(_AddFakeQuantInputOutput, self).__init__(auto_prefix=False)
self.network = network
self.fake_quant_input = quant.FakeQuantWithMinMax(
min_init=-6, max_init=6, quant_delay=quant_delay, ema=True)
self.fake_quant_input.update_parameters_name('fake_quant_input')
self.fake_quant_output = quant.FakeQuantWithMinMax(
min_init=-6, max_init=6, quant_delay=quant_delay, ema=True)
self.fake_quant_output.update_parameters_name('fake_quant_output')
def construct(self, data):
data = self.fake_quant_input(data)
output = self.network(data)
output = self.fake_quant_output(output)
return output
class _AddFakeQuantAfterSubCell(nn.Cell):
"""
    Add FakeQuant after the sub Cell.
"""
def __init__(self, subcell, quant_delay=0, num_bits=8):
super(_AddFakeQuantAfterSubCell, self).__init__(auto_prefix=False)
self.subcell = subcell
self.fake_quant_act = quant.FakeQuantWithMinMax(min_init=-6,
max_init=6,
num_bits=num_bits,
quant_delay=quant_delay,
ema=True)
def construct(self, *data):
output = self.subcell(*data)
output = self.fake_quant_act(output)
return output
class ConvertToQuantNetwork:
"""
Convert network to quantization aware network
"""
__quant_op_name__ = ["TensorAdd", "Sub", "Mul", "RealDiv"]
def __init__(self,
network,
quant_delay=0,
bn_fold=False,
freeze_bn=0,
weight_bits=8,
act_bits=8,
per_channel=False,
symmetric=False,
narrow_range=False):
self.network = validator.check_isinstance(
'network', network, (nn.Cell,))
self.quant_delay = validator.check_integer(
"quant delay", quant_delay, 0, Rel.GE)
self.freeze_bn = validator.check_integer(
"freeze bn", freeze_bn, 0, Rel.GE)
self.weight_bits = validator.check_integer(
"weights bit", weight_bits, 0, Rel.GE)
self.act_bits = validator.check_integer(
"activations bit", act_bits, 0, Rel.GE)
self.bn_fold = validator.check_bool("bn fold", bn_fold)
self.per_channel = validator.check_bool("per channel", per_channel)
self.symmetric = validator.check_bool("symmetric", symmetric)
self.narrow_range = validator.check_bool("narrow range", narrow_range)
def _convert_op_name(self, name):
pattern = re.compile(r'([A-Z]{1})')
name_new = re.sub(pattern, r'_\1', name).lower()
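        # e.g. 'TensorAdd' becomes '_tensor_add' here; the leading '_' is stripped below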
if name_new[0] == '_':
name_new = name_new[1:]
return name_new
def run(self):
self.network.update_cell_prefix()
network = self._convert_subcells2quant(self.network)
return network
def _convert_subcells2quant(self, network):
"""
        Convert sub cells to quant cells.
"""
cells = network.name_cells()
change = False
for name in cells:
subcell = cells[name]
if subcell == network:
continue
elif isinstance(subcell, combined.Conv2d):
prefix = subcell.param_prefix
new_subcell = self._convert_conv(subcell)
new_subcell.update_parameters_name(prefix + '.')
network.insert_child_to_cell(name, new_subcell)
change = True
elif isinstance(subcell, combined.Dense):
prefix = subcell.param_prefix
new_subcell = self._convert_dense(subcell)
new_subcell.update_parameters_name(prefix + '.')
network.insert_child_to_cell(name, new_subcell)
change = True
else:
self._convert_subcells2quant(subcell)
if isinstance(network, nn.SequentialCell) and change:
network.cell_list = list(network.cells())
# tensoradd to tensoradd quant
add_list = []
for name in network.__dict__:
if name[0] == '_':
continue
attr = network.__dict__[name]
if isinstance(attr, ops.Primitive) and attr.name in ConvertToQuantNetwork.__quant_op_name__:
add_list.append((name, attr))
for name, prim_op in add_list:
            add_quant = _AddFakeQuantAfterSubCell(prim_op)  # quant.TensorAddQuant()
            prefix = '.'.join([network.param_prefix, self._convert_op_name(prim_op.name)])
add_quant.update_parameters_name(prefix + '.')
del network.__dict__[name]
network.insert_child_to_cell(name, add_quant)
return network
def _convert_conv(self, subcell):
"""
        Convert a conv cell to a combined quant cell.
"""
conv_inner = subcell.conv
bn_inner = subcell.batchnorm
if subcell.batchnorm is not None and self.bn_fold:
conv_inner = quant.Conv2dBatchNormQuant(conv_inner.in_channels,
conv_inner.out_channels,
kernel_size=conv_inner.kernel_size,
stride=conv_inner.stride,
pad_mode=conv_inner.pad_mode,
padding=conv_inner.padding,
dilation=conv_inner.dilation,
group=conv_inner.group,
eps=bn_inner.eps,
momentum=bn_inner.momentum,
quant_delay=self.quant_delay,
freeze_bn=self.freeze_bn,
per_channel=self.per_channel,
num_bits=self.weight_bits,
fake=True,
symmetric=self.symmetric,
narrow_range=self.narrow_range)
del subcell.batchnorm
subcell.batchnorm = None
subcell.has_bn = False
else:
conv_inner = quant.Conv2dQuant(conv_inner.in_channels,
conv_inner.out_channels,
kernel_size=conv_inner.kernel_size,
stride=conv_inner.stride,
pad_mode=conv_inner.pad_mode,
padding=conv_inner.padding,
dilation=conv_inner.dilation,
group=conv_inner.group,
has_bias=conv_inner.has_bias,
quant_delay=self.quant_delay,
per_channel=self.per_channel,
num_bits=self.weight_bits,
symmetric=self.symmetric,
narrow_range=self.narrow_range)
subcell.conv = conv_inner
if subcell.activation is not None:
subcell.activation = self._convert_activation(subcell.activation)
else:
subcell = _AddFakeQuantAfterSubCell(subcell)
return subcell
def _convert_dense(self, subcell):
"""
        Convert a dense cell to a combined quant dense cell.
"""
dense_inner = subcell.dense
dense_inner = quant.DenseQuant(dense_inner.in_channels,
dense_inner.out_channels,
has_bias=dense_inner.has_bias,
quant_delay=self.quant_delay,
per_channel=self.per_channel,
num_bits=self.weight_bits)
subcell.dense = dense_inner
if subcell.activation is not None:
subcell.activation = self._convert_activation(subcell.activation)
return subcell
def _convert_activation(self, activation):
act_class = activation.__class__
if act_class not in _ACTIVATION_MAP:
raise ValueError(
"Unsupported activation in auto Quant: ", act_class)
return _ACTIVATION_MAP[act_class](num_bits=self.act_bits, quant_delay=self.quant_delay)
def convert_quant_network(network,
quant_delay=0,
bn_fold=False,
freeze_bn=0,
weight_bits=8,
act_bits=8,
per_channel=False,
symmetric=False,
narrow_range=False
):
r"""
    Create an aware quantization training network.
Args:
network (Cell): Obtain a pipeline through network for saving graph summary.
quant_delay (int): Number of steps after which weights and activations are quantized during eval. Default: 0.
bn_fold (bool): Flag to used bn fold ops for simulation inference operation. Default: False.
        freeze_bn (int): Number of steps after which BN parameters use the accumulated mean and variance. Default: 0.
weight_bits (int): Number of bits to use for quantizing weights. Default: 8.
act_bits (int): Number of bits to use for quantizing activations. Default: 8.
per_channel (bool): Quantization granularity based on layer or on channel. Default: False.
symmetric (bool): Quantization algorithm use symmetric or not. Default: False.
narrow_range (bool): Quantization algorithm use narrow range or not. Default: False.
    Returns:
        Cell, network converted to an aware quantization training network.
"""
net = ConvertToQuantNetwork(
network, quant_delay, bn_fold, freeze_bn, weight_bits, act_bits, per_channel, symmetric, narrow_range)
return net.run()
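# Usage sketch (assumed values): wrap an existing float network for
# quantization aware training; the returned Cell trains like the original one.
# quant_net = convert_quant_network(float_net, quant_delay=900, bn_fold=True,
#                                   freeze_bn=10000, per_channel=True)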
| 45.144487 | 117 | 0.550577 | 1,214 | 11,873 | 5.128501 | 0.198517 | 0.036942 | 0.016704 | 0.012849 | 0.31513 | 0.278831 | 0.261323 | 0.254899 | 0.218599 | 0.206714 | 0 | 0.006165 | 0.371599 | 11,873 | 262 | 118 | 45.316794 | 0.828307 | 0.164744 | 0 | 0.342105 | 0 | 0 | 0.02099 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.063158 | false | 0 | 0.036842 | 0 | 0.168421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2a1277d6d5bab92b42350a997c75399618d77e4 | 5,736 | py | Python | forks/sphinx-git/tests/test_git_commit_detail.py | JakeGWater/Virtual-Production-Independent-Film-Guide | 022721f79b5b199f7208340564e9850dc9d97939 | [
"MIT"
] | 1 | 2021-06-25T03:15:26.000Z | 2021-06-25T03:15:26.000Z | forks/sphinx-git/tests/test_git_commit_detail.py | JakeGWater/Virtual-Production-Independent-Film-Guide | 022721f79b5b199f7208340564e9850dc9d97939 | [
"MIT"
] | 10 | 2021-04-16T17:45:06.000Z | 2021-06-26T20:59:17.000Z | forks/sphinx-git/tests/test_git_commit_detail.py | JakeGWater/Virtual-Production-Independent-Film-Guide | 022721f79b5b199f7208340564e9850dc9d97939 | [
"MIT"
] | 1 | 2021-04-16T19:48:45.000Z | 2021-04-16T19:48:45.000Z | # -*- coding: utf-8 -*-
import os
from tempfile import mkstemp
from bs4 import BeautifulSoup
from git import Repo
from nose.tools import assert_equal, assert_in, assert_is, assert_is_not
from sphinx_git import GitCommitDetail
from . import MakeTestableMixin, TempDirTestCase
class TestableGitCommitDetail(MakeTestableMixin, GitCommitDetail):
github_nonce_url = 'https://github.com/no_user/no_repo.git/'
github_nonce_commit_base = 'https://github.com/no_user/no_repo/commit/'
class TestCommitDetail(TempDirTestCase):
def setup(self):
super(TestCommitDetail, self).setup()
self.commit_detail = TestableGitCommitDetail()
self.commit_detail.state.document.settings.env.srcdir = self.root
self.repo = Repo.init(self.root)
config_writer = self.repo.config_writer()
config_writer.set_value('user', 'name', 'Test User')
config_writer.release()
def test_commit_only(self):
self.repo.index.commit('my root commit')
self.commit_detail.options = {'commit': True}
nodes = self.commit_detail.run()
node_p = nodes[0] # <p> node
node_fl = node_p[0] # field list
node_f = node_fl[0] # field
assert_equal(1, len(node_fl))
assert_equal('Commit', node_f[0].astext())
assert_equal(
self.repo.commit().hexsha[:GitCommitDetail.default_sha_length],
node_f[1].astext()
)
def test_branch_only(self):
self.repo.index.commit('my root commit')
self.commit_detail.options = {'branch': True}
nodes = self.commit_detail.run()
node_p = nodes[0] # <p> node
node_fl = node_p[0] # field list
node_f = node_fl[0] # field
assert_equal(1, len(node_fl))
assert_equal('Branch', node_f[0].astext())
assert_equal('master', node_f[1].astext())
def test_commit_and_branch(self):
self.repo.index.commit('my root commit')
self.commit_detail.options = {'commit': True, 'branch': True}
nodes = self.commit_detail.run()
node_p = nodes[0] # <p> node
node_fl = node_p[0] # field list
node_f_b = node_fl[0] # field--branch
node_f_c = node_fl[1] # field--commit
assert_equal(2, len(node_fl))
assert_equal('Commit', node_f_c[0].astext())
assert_equal('Branch', node_f_b[0].astext())
def test_github_link(self):
self.repo.index.commit('my root commit')
self.commit_detail.options = {'commit': True}
self.repo.create_remote('origin', self.commit_detail.github_nonce_url)
nodes = self.commit_detail.run()
list_markup = BeautifulSoup(str(nodes[0]), features='xml')
assert_is_not(list_markup.reference, None)
assert_equal(
self.commit_detail.github_nonce_commit_base +
self.repo.commit().hexsha,
list_markup.reference['refuri']
)
assert_equal(
self.repo.commit().hexsha[:GitCommitDetail.default_sha_length],
list_markup.reference.text
)
def test_no_github_link(self):
self.repo.index.commit('my root commit')
self.commit_detail.options = {'commit': True, 'no_github_link': True}
self.repo.create_remote('origin', self.commit_detail.github_nonce_url)
nodes = self.commit_detail.run()
list_markup = BeautifulSoup(str(nodes[0]), features='xml')
assert_is(list_markup.reference, None)
def test_sha_length(self):
self.repo.index.commit('my root commit')
self.commit_detail.options = {'commit': True, 'sha_length': 4}
nodes = self.commit_detail.run()
node_p = nodes[0] # <p> node
node_fl = node_p[0] # field list
node_f = node_fl[0] # field
assert_equal(1, len(node_fl))
assert_equal('Commit', node_f[0].astext())
assert_equal(self.repo.commit().hexsha[:4], node_f[1].astext())
def test_untracked_files(self):
self.repo.index.commit('my root commit')
self.commit_detail.options = {'untracked': True}
fd, name = mkstemp(dir=self.root)
os.close(fd)
nodes = self.commit_detail.run()
node_p = nodes[0] # <p> node
assert_equal(2, len(node_p))
node_w = node_p[1] # nodes.warning
node_i = node_w[0] # inline
assert_in('untracked', node_i.astext())
def test_uncommitted_changes(self):
fd, name = mkstemp(dir=self.root)
self.repo.index.add([name])
self.repo.index.commit('my root commit')
os.write(fd, "some change".encode('utf-8'))
os.close(fd)
self.commit_detail.options = {'uncommitted': True}
nodes = self.commit_detail.run()
node_p = nodes[0] # <p> node
assert_equal(2, len(node_p))
node_w = node_p[1] # nodes.warning
node_i = node_w[0] # inline
assert_in('uncommitted', node_i.astext())
def test_detached_head(self):
self.repo.index.commit('my root commit')
self.repo.index.commit('a second commit')
self.repo.head.reference = self.repo.commit('HEAD~')
assert self.repo.head.is_detached, "HEAD unexpectedly attached"
self.commit_detail.options = {'commit': True}
nodes = self.commit_detail.run()
node_p = nodes[0] # <p> node
node_fl = node_p[0] # field list
node_f = node_fl[0] # field
assert_equal(1, len(node_fl))
assert_equal('Commit', node_f[0].astext())
assert_equal(
self.repo.commit().hexsha[:GitCommitDetail.default_sha_length],
node_f[1].astext()
)
| 39.558621 | 78 | 0.621688 | 755 | 5,736 | 4.495364 | 0.153642 | 0.067767 | 0.108427 | 0.055981 | 0.647024 | 0.616676 | 0.582499 | 0.558044 | 0.54891 | 0.537419 | 0 | 0.010715 | 0.251569 | 5,736 | 144 | 79 | 39.833333 | 0.779874 | 0.040621 | 0 | 0.531746 | 0 | 0 | 0.084687 | 0 | 0 | 0 | 0 | 0 | 0.198413 | 1 | 0.079365 | false | 0 | 0.055556 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2a14db3105afb36bed7702d139b68659334f92c | 5,239 | py | Python | czitools/pylibczirw_tools.py | sebi06/czitools | 3fed073d5e56db0aaebe87f0e38af80b0724f005 | [
"BSD-3-Clause"
] | 4 | 2021-07-26T15:55:14.000Z | 2022-01-22T01:43:01.000Z | czitools/pylibczirw_tools.py | sebi06/czitools | 3fed073d5e56db0aaebe87f0e38af80b0724f005 | [
"BSD-3-Clause"
] | null | null | null | czitools/pylibczirw_tools.py | sebi06/czitools | 3fed073d5e56db0aaebe87f0e38af80b0724f005 | [
"BSD-3-Clause"
] | 1 | 2021-08-02T10:31:28.000Z | 2021-08-02T10:31:28.000Z | # -*- coding: utf-8 -*-
#################################################################
# File : pylibczirw_tools.py
# Version : 0.0.5
# Author : sebi06
# Date : 18.01.2022
#
# Disclaimer: This code is purely experimental. Feel free to
# use it at your own risk.
#
#################################################################
from __future__ import annotations
import pylibCZIrw.czi
from pylibCZIrw import czi as pyczi
from czitools import pylibczirw_metadata as czimd
from czitools import misc
import numpy as np
from typing import List, Dict, Tuple, Optional, Type, Any, Union
from tqdm.contrib.itertools import product
import dask
import dask.array as da
def read_7darray(filename: str) -> np.ndarray:
# get the complete metadata at once as one big class
mdata = czimd.get_czimetadata_extended(filename)
# open the CZI document to read the
with pyczi.open_czi(filename) as czidoc:
if mdata.image.SizeS is not None:
# get size for a single scene using the 1st
# works only if scene shape is consistent
sizeX = mdata.bbox.all_scenes[0].w
sizeY = mdata.bbox.all_scenes[0].h
if mdata.image.SizeS is None:
sizeX = mdata.image.SizeX
sizeY = mdata.image.SizeY
# check if dimensions are None (because they do not exist for that image)
sizeC = misc.check_dimsize(mdata.image.SizeC, set2value=1)
sizeZ = misc.check_dimsize(mdata.image.SizeZ, set2value=1)
sizeT = misc.check_dimsize(mdata.image.SizeT, set2value=1)
sizeS = misc.check_dimsize(mdata.image.SizeS, set2value=1)
# define the dimension order to be STZCYXA
array7d = np.empty([sizeS, sizeT, sizeZ, sizeC, sizeY, sizeX, 3 if mdata.isRGB else 1], dtype=mdata.npdtype)
# read array for the scene
for s, t, z, c in product(range(sizeS),
range(sizeT),
range(sizeZ),
range(sizeC)):
if mdata.image.SizeS is None:
image2d = czidoc.read(plane={'T': t, 'Z': z, 'C': c})
else:
image2d = czidoc.read(plane={'T': t, 'Z': z, 'C': c}, scene=s)
# check if the image2d is really not too big
if (mdata.bbox.total_bounding_box["X"][1] > mdata.image.SizeX or
mdata.bbox.total_bounding_box["Y"][1] > mdata.image.SizeY):
image2d = image2d[..., 0:mdata.image.SizeY, 0:mdata.image.SizeX, :]
# array6d[s, t, z, c, ...] = image2d[..., 0]
array7d[s, t, z, c, ...] = image2d
return array7d
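# Usage sketch (assumed filename): read a CZI document into one 7D numpy
# array ordered STZCYXA (scene, time, z, channel, y, x, RGB/gray samples).
# array7d = read_7darray('example.czi')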
def read_7darray_lazy(filename: str) -> da.Array:
    def read_scene6d(filename: str,
                     sizes: Tuple[int, int, int, int, int],
                     s: int,
                     mdata: czimd.CziMetadata):
# define the dimension order to be TZCYXA
array6d = da.empty([sizes[0], sizes[1], sizes[2], sizes[3], sizes[4], 3 if mdata.isRGB else 1],
dtype=mdata.npdtype)
# open the CZI document to read the
with pyczi.open_czi(filename) as czidoc:
# read array for the scene
            for t, z, c in product(range(sizes[0]),
                                   range(sizes[1]),
                                   range(sizes[2])):
if mdata.image.SizeS is None:
image2d = czidoc.read()
else:
image2d = czidoc.read(plane={'T': t, 'Z': z, 'C': c}, scene=s)
# check if the image2d is really not too big
if mdata.pyczi_dims["X"][1] > mdata.image.SizeX or mdata.pyczi_dims["Y"][1] > mdata.image.SizeY:
image2d = image2d[..., 0:mdata.image.SizeY, 0:mdata.image.SizeX, :]
array6d[t, z, c, ...] = image2d
return array6d
# get the metadata
    mdata = czimd.get_czimetadata_extended(filename)
if mdata.image.SizeS is not None:
# get size for a single scene using the 1st
# works only if scene shape is consistent
sizeX = mdata.bbox.all_scenes[0].w
sizeY = mdata.bbox.all_scenes[0].h
if mdata.image.SizeS is None:
sizeX = mdata.image.SizeX
sizeY = mdata.image.SizeY
# check if dimensions are None (because they do not exist for that image)
sizeC = misc.check_dimsize(mdata.image.SizeC, set2value=1)
sizeZ = misc.check_dimsize(mdata.image.SizeZ, set2value=1)
sizeT = misc.check_dimsize(mdata.image.SizeT, set2value=1)
sizeS = misc.check_dimsize(mdata.image.SizeS, set2value=1)
sizes = (sizeT, sizeZ, sizeC, sizeY, sizeX)
# define the required shape
sp = [sizeT, sizeZ, sizeC, sizeY, sizeX, 3 if mdata.isRGB else 1]
# create dask stack of lazy image readers
lazy_process_image = dask.delayed(read_scene6d) # lazy reader
lazy_arrays = [lazy_process_image(filename, sizes, s, mdata) for s in range(sizeS)]
dask_arrays = [da.from_delayed(lazy_array, shape=sp, dtype=mdata.npdtype) for lazy_array in lazy_arrays]
# Stack into one large dask.array
array7d = da.stack(dask_arrays, axis=0)
return array7d
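# The lazy variant only builds a dask graph of delayed per-scene readers;
# pixel data are read when .compute() is called (assumed filename below).
# lazy7d = read_7darray_lazy('example.czi')
# scene0 = lazy7d[0].compute()  # materialize a single scene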
| 36.894366 | 116 | 0.584272 | 696 | 5,239 | 4.33046 | 0.239943 | 0.086264 | 0.039814 | 0.05574 | 0.628069 | 0.600199 | 0.545786 | 0.529861 | 0.529861 | 0.512608 | 0 | 0.022336 | 0.290704 | 5,239 | 141 | 117 | 37.156028 | 0.788751 | 0.194503 | 0 | 0.486486 | 0 | 0 | 0.003203 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040541 | false | 0 | 0.135135 | 0 | 0.216216 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
e2a20e7a3ae489800a9c2d7da9dadea501616d50 | 10,395 | py | Python | stage_2.py | prouast/ctc-intake-detection | 6dbfb9bbb0bb09980e4530b31742cb0d5357bf08 | [
"MIT"
] | null | null | null | stage_2.py | prouast/ctc-intake-detection | 6dbfb9bbb0bb09980e4530b31742cb0d5357bf08 | [
"MIT"
] | null | null | null | stage_2.py | prouast/ctc-intake-detection | 6dbfb9bbb0bb09980e4530b31742cb0d5357bf08 | [
"MIT"
] | null | null | null | """Evaluate exported frame-level probabilities."""
from __future__ import division
import argparse
import csv
import glob
import numpy as np
import os
from scipy.special import softmax
import sys
CSV_SUFFIX = '*.csv'
np.set_printoptions(threshold=sys.maxsize)
def import_probs_and_labels(args):
"""Import probabilities and labels from csv"""
filenames = sorted(glob.glob(os.path.join(args.input_dir, CSV_SUFFIX)))
assert filenames, "No files found for evaluation"
labels = {}
probs = {}
for filename in filenames:
labels[filename] = []
probs[filename] = []
with open(filename) as dest_f:
for row in csv.reader(dest_f, delimiter=','):
labels[filename].append(int(float(row[args.col_label])))
all_inputs_t = []
for i in range(args.num_event_classes + 1):
all_inputs_t.append(float(row[args.col_input + i]))
if args.input_format == "probs":
probs[filename].append(np.array(all_inputs_t)[1:].tolist())
elif args.input_format == "logits":
probs[filename].append(softmax(all_inputs_t)[1:].tolist())
labels[filename] = np.array(labels[filename])
probs[filename] = np.array(probs[filename])
return probs, labels
def max_search(probs, threshold, mindist, def_val):
"""Perform a max search"""
# Threshold probs without default event probs
probabilities = np.copy(probs)
probabilities[probabilities <= threshold] = 0
# Return array
detections = np.empty(np.shape(probabilities)[0], dtype=np.int32)
detections.fill(def_val)
# Potential detections
idx_p = np.where(probabilities > 0)[0]
if idx_p.size == 0:
return detections
# Identify start and end of detections
p_d = np.diff(idx_p) - 1
p = np.where(p_d > 0)[0]
p_start = np.concatenate(([0], p+1))
p_end = np.concatenate((p, [idx_p.shape[0]-1]))
# Infer start and end indices of detections
idx_start = idx_p[p_start]
idx_end = idx_p[p_end]
idx_max = [max(min(start+np.argmax(probabilities[start:end+1]), end), start)
for start, end in zip(idx_start, idx_end)]
# Remove detections within mindist
max_diff = np.diff(idx_max)
carry = 0; rem_i = []
for i, diff in enumerate(np.concatenate(([mindist], max_diff))):
if (diff + carry < mindist):
rem_i.append(i)
carry += diff
else:
carry = 0
if len(rem_i) > 0:
idx_max_mindist = np.delete(idx_max, rem_i)
else:
idx_max_mindist = idx_max
# Return detections
detections[idx_max_mindist] = np.argmax(probabilities[idx_max_mindist], axis=-1) + def_val + 1
return detections
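# Worked illustration (assumed values; one event class, def_val=0): with
# threshold 0.5 and mindist 2, the frame probabilities below form two bursts
# whose maxima fall at indices 1 and 4; both are kept because they are at
# least mindist frames apart, and each is labelled def_val + 1:
# max_search(np.array([[0.1], [0.9], [0.8], [0.1], [0.7]]), 0.5, 2, 0)
# -> array([0, 1, 0, 0, 1], dtype=int32)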
def eval_stage_2(dets, labels, event_val, def_val):
"""Stage 2 evaluation based on gesture-level metric proposed by Kyritsis et al. (2019)"""
def _split_idx(labels):
idx_t = np.where(labels == event_val)[0]
t_d = np.diff(idx_t) - 1
t = np.where(t_d > 0)[0]
t_start = np.concatenate(([0], t+1))
t_end = np.concatenate((t, [idx_t.shape[0]-1]))
if len(idx_t > 0):
idx_start = idx_t[t_start]
idx_end = idx_t[t_end]
else:
return []
return [np.arange(start, end+1) for start, end in zip(idx_start, idx_end)]
idxs_t = _split_idx(labels)
idxs_f = np.where(labels == def_val)
idxs_o = np.intersect1d(np.where(labels != def_val), np.where(labels != event_val))
splits_t = [dets[split_idx] for split_idx in idxs_t]
splits_f = dets[idxs_f]
splits_o = dets[idxs_o]
tp = np.sum([1 if np.sum(np.equal(split, event_val)) > 0 else 0 for split in splits_t])
fn = np.sum([0 if np.sum(np.equal(split, event_val)) > 0 else 1 for split in splits_t])
fp_1 = np.sum([np.sum(np.equal(split, event_val)) - 1 if np.sum(np.equal(split, event_val)) > 1 else 0 for split in splits_t])
fp_2 = np.sum(np.equal(splits_f, event_val))
fp_3 = np.sum(np.equal(splits_o, event_val))
if tp > 0:
prec = tp / (tp + fp_1 + fp_2 + fp_3)
rec = tp / (tp + fn)
f1 = 2 * prec * rec / (prec + rec)
elif fn == 0:
prec = 1
rec = 1
f1 = 1
else:
prec = 0
rec = 0
f1 = 0
return tp, fn, fp_1, fp_2, fp_3, prec, rec, f1
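# Counting convention implemented above (after Kyritsis et al., 2019): each
# ground-truth event segment with at least one matching detection is one TP;
# extra detections inside a true segment count as FP_1, detections in idle
# (def_val) frames as FP_2, detections inside other event classes as FP_3,
# and true segments with no detection as FN.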
def main(args=None):
# Event classes excluding default/idle
event_classes = range(args.def_val + 1, args.def_val + args.num_event_classes + 1, 1)
# Import the probs and labels from csv
probs, labels = import_probs_and_labels(args)
# Perform grid search
if args.mode == 'estimate':
# Collect all in one array
flat_labels = np.array([label for f in labels.keys() for label in labels[f]])
flat_probs = np.array([prob for f in probs.keys() for prob in probs[f]])
# All evaluated threshold values
threshold_vals = np.arange(args.min_threshold, args.max_threshold, args.inc_threshold)
f1_results = []
for threshold in threshold_vals:
# Perform max search
flat_dets = np.array([det for f in probs.keys() for det in
max_search(probs[f], threshold, args.min_dist, args.def_val)])
# Calculate Stage II
            f1 = []  # only F1 is needed during the threshold sweep
for i, event_val in enumerate(event_classes):
_, _, _, _, _, _, _, f1_i = eval_stage_2(flat_dets, flat_labels, event_val, args.def_val)
f1.append(f1_i)
f1_results.append(np.mean(f1))
# Find best threshold
final_threshold = threshold_vals[np.argmax(f1_results)]
print("===================================================")
print('Best threshold: {}'.format(final_threshold))
final_dets = max_search(flat_probs.tolist(), final_threshold, args.min_dist, args.def_val)
f1 = []; pre = []; rec = []
for i, event_val in enumerate(event_classes):
tp_i, fn_i, fp_1_i, fp_2_i, fp_3_i, pre_i, rec_i, f1_i = eval_stage_2(
final_dets, flat_labels, event_val, args.def_val)
f1.append(f1_i); pre.append(pre_i); rec.append(rec_i)
# Print results
print('---------------------- Class {} --------------------'.format(event_val))
print('F1: {}'.format(f1_i))
print('Precision: {}'.format(pre_i))
print('Recall: {}'.format(rec_i))
print('-----')
print('TP: {}'.format(tp_i))
print('FP_1: {}'.format(fp_1_i))
print('FP_2: {}'.format(fp_2_i))
print('FP_3: {}'.format(fp_3_i))
print('FN: {}'.format(fn_i))
print("===================================================")
print('mF1: {}'.format(np.mean(f1)))
print('mPre: {}'.format(np.mean(pre)))
print('mRec: {}'.format(np.mean(rec)))
else:
# Perform max search
tp, fp_1, fp_2, fp_3, fn = {}, {}, {}, {}, {}
for e in event_classes:
tp[str(e)], fp_1[str(e)], fp_2[str(e)], fp_3[str(e)], fn[str(e)] = \
[], [], [], [], []
for f in probs.keys():
print('---------------------- ID {} --------------------'.format(f))
# Max search for f
dets_f = max_search(probs[f], args.threshold, args.min_dist, args.def_val)
# Calculate Stage II
for i, e in enumerate(event_classes):
tp_i, fn_i, fp_1_i, fp_2_i, fp_3_i, pre_i, rec_i, f1_i = eval_stage_2(
dets_f, labels[f], e, args.def_val)
tp[str(e)].append(tp_i); fp_1[str(e)].append(fp_1_i);
fp_2[str(e)].append(fp_2_i); fp_3[str(e)].append(fp_3_i);
fn[str(e)].append(fn_i)
# Print results
print('---------------------- Class {} --------------------'.format(e))
print('F1: {}'.format(f1_i))
print('Precision: {}'.format(pre_i))
print('Recall: {}'.format(rec_i))
print('-----')
print('TP: {}'.format(tp_i))
print('FP_1: {}'.format(fp_1_i))
print('FP_2: {}'.format(fp_2_i))
print('FP_3: {}'.format(fp_3_i))
print('FN: {}'.format(fn_i))
print("===================================================")
f1s, pres, recs = [], [], []
for e in event_classes:
print('---------------------- Class {} --------------------'.format(e))
tp_e = np.sum(tp[str(e)])
fp_1_e = np.sum(fp_1[str(e)])
fp_2_e = np.sum(fp_2[str(e)])
fp_3_e = np.sum(fp_3[str(e)])
fn_e = np.sum(fn[str(e)])
if tp_e > 0:
pre_e = tp_e / (tp_e + fp_1_e + fp_2_e + fp_3_e)
rec_e = tp_e / (tp_e + fn_e)
f1_e = 2 * pre_e * rec_e / (pre_e + rec_e)
elif fn_e == 0:
pre_e = 1
rec_e = 1
f1_e = 1
else:
pre_e = 0
rec_e = 0
f1_e = 0
pres.append(pre_e)
recs.append(rec_e)
f1s.append(f1_e)
print('F1: {}'.format(f1_e))
print('Precision: {}'.format(pre_e))
print('Recall: {}'.format(rec_e))
print('-----')
print('TP: {}'.format(tp_e))
print('FP_1: {}'.format(fp_1_e))
print('FP_2: {}'.format(fp_2_e))
print('FP_3: {}'.format(fp_3_e))
print('FN: {}'.format(fn_e))
print('mF1: {}'.format(np.mean(f1s)))
print('mPre: {}'.format(np.mean(pres)))
print('mRec: {}'.format(np.mean(recs)))
# Run
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Evaluate model Stage II')
parser.add_argument('--input_dir', type=str, default='eval', nargs='?', help='Directory with eval data.')
parser.add_argument('--min_dist', type=int, default=16, nargs='?', help='Minimum frames between detections.')
parser.add_argument('--threshold', type=float, default=0.9, nargs='?', help='Detection threshold probability')
parser.add_argument('--mode', type=str, default='evaluate', nargs='?', help='Evaluation or estimation and evaluation')
parser.add_argument('--min_threshold', type=float, default=0.5, nargs='?', help='Minimum detection threshold probability')
parser.add_argument('--max_threshold', type=float, default=1, nargs='?', help='Maximum detection threshold probability')
parser.add_argument('--inc_threshold', type=float, default=0.001, nargs='?', help='Increment for detection threshold search')
parser.add_argument('--col_label', type=int, default=1, nargs='?', help='Col number of label in csv')
parser.add_argument('--col_input', type=int, default=2, nargs='?', help='First col number of event class logits or probs input in csv')
parser.add_argument('--num_event_classes', type=int, default=1, nargs='?', help='Number of event classes excluding default/idle')
parser.add_argument('--def_val', type=int, default=1, nargs='?', help='Value denoting default/idle event')
parser.add_argument('--input_format', type=str, default='probs', choices=('probs', 'logits'), help='Format of the input class values')
args = parser.parse_args()
main(args)
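# Example invocation (assumed paths/values):
# python stage_2.py --input_dir eval --mode estimate --min_dist 16 \
#     --num_event_classes 1 --def_val 1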
| 42.08502 | 137 | 0.614334 | 1,597 | 10,395 | 3.780839 | 0.134627 | 0.008943 | 0.033786 | 0.011924 | 0.344319 | 0.227228 | 0.157337 | 0.1421 | 0.137794 | 0.123054 | 0 | 0.020946 | 0.196248 | 10,395 | 246 | 138 | 42.256098 | 0.701735 | 0.066763 | 0 | 0.187793 | 0 | 0 | 0.139441 | 0.024948 | 0 | 0 | 0 | 0 | 0.004695 | 1 | 0.023474 | false | 0 | 0.046948 | 0 | 0.098592 | 0.197183 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |