hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
07abd88c1750bfa23ce141be4914e78e9e578d95 | 316 | py | Python | sqlakeyset/__init__.py | jhihruei/sqlakeyset | 0aa0f6e041dc37bc5f918303578875ad334cad6c | [
"Unlicense"
] | null | null | null | sqlakeyset/__init__.py | jhihruei/sqlakeyset | 0aa0f6e041dc37bc5f918303578875ad334cad6c | [
"Unlicense"
] | null | null | null | sqlakeyset/__init__.py | jhihruei/sqlakeyset | 0aa0f6e041dc37bc5f918303578875ad334cad6c | [
"Unlicense"
] | null | null | null |
from .columns import OC
from .paging import get_page, select_page, process_args
from .results import serialize_bookmark, unserialize_bookmark, Page, Paging
__all__ = [
    'OC',
    'get_page',
    'select_page',
    'serialize_bookmark',
    'unserialize_bookmark',
    'Page',
    'Paging',
    'process_args'
]
| 19.75 | 75 | 0.693038 | 36 | 316 | 5.694444 | 0.416667 | 0.068293 | 0.126829 | 0.165854 | 0.44878 | 0.44878 | 0 | 0 | 0 | 0 | 0 | 0 | 0.199367 | 316 | 15 | 76 | 21.066667 | 0.810277 | 0 | 0 | 0 | 0 | 0 | 0.257143 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.230769 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
07abdc1f2ef1ad7ab554d9cccaa9f73782091369 | 6,609 | py | Python | low_rank_local_connectivity/models/simple_model.py | shaun95/google-research | d41bbaca1eb9bfd980ec2b3fd201c3ddb4d1f2e5 | [
"Apache-2.0"
] | 1 | 2022-03-13T21:48:52.000Z | 2022-03-13T21:48:52.000Z | low_rank_local_connectivity/models/simple_model.py | shaun95/google-research | d41bbaca1eb9bfd980ec2b3fd201c3ddb4d1f2e5 | [
"Apache-2.0"
] | null | null | null | low_rank_local_connectivity/models/simple_model.py | shaun95/google-research | d41bbaca1eb9bfd980ec2b3fd201c3ddb4d1f2e5 | [
"Apache-2.0"
] | 1 | 2022-03-30T07:20:29.000Z | 2022-03-30T07:20:29.000Z | # coding=utf-8
# Copyright 2022 The Google Research Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Simple model for image classification.
The model is multiple
conv/locally_connected/wide_conv/low_rank_locally_connected layers followed
by a fully connected layer. Changes to the model architecture can be made by
modifying simple_model_config.py file.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import copy
import os
import tensorflow.compat.v1 as tf
from low_rank_local_connectivity import layers
from low_rank_local_connectivity import utils
MOMENTUM = 0.9
EPS = 1e-5
class SimpleNetwork(tf.keras.Model):
  """Locally Connected Network."""

  def __init__(self, config, variable_scope='simple_network'):
    super(SimpleNetwork, self).__init__()
    self.variable_scope = variable_scope
    self.config = copy.deepcopy(config)
    filters_list = self.config.num_filters_list
    depth = len(filters_list)
    self.pass_is_training_list = []
    self.layers_list = []

    if self.config.num_channels < 1:
      raise ValueError('num_channels should be > 0')
    input_channels = self.config.num_channels
    if self.config.coord_conv:
      # Add two coordinate conv channels.
      input_channels = input_channels + 2

    if len(self.config.layer_types) < depth:
      self.config.layer_types.extend(
          ['conv2d'] * (depth - len(self.config.layer_types)))

    chin = input_channels
    for i, (kernel_size, num_filters, strides, layer_type) in enumerate(zip(
        self.config.kernel_size_list,
        filters_list,
        self.config.strides_list,
        self.config.layer_types)):
      padding = 'valid'
      if layer_type == 'conv2d':
        chout = num_filters
        layer = tf.keras.layers.Conv2D(
            filters=chout,
            kernel_size=kernel_size,
            strides=(strides, strides),
            padding=padding,
            activation=None,
            use_bias=not self.config.batch_norm,
            kernel_initializer=self.config.kernel_initializer,
            name=os.path.join(self.variable_scope, 'layer%d' % i, layer_type))

      elif layer_type == 'wide_conv2d':
        # Conv. layer with equivalent params to low rank locally connected.
        if self.config.rank < 1:
          raise ValueError('rank should be > 0 for %s layer.' % layer_type)
        chout = int((self.config.rank * chin + num_filters) / float(
            chin + num_filters) * num_filters)
        layer = tf.keras.layers.Conv2D(
            filters=chout if i < (depth-1)
            else int(num_filters * self.config.rank),
            kernel_size=kernel_size, strides=(strides, strides),
            padding=padding,
            activation=None,
            use_bias=not self.config.batch_norm,
            kernel_initializer=self.config.kernel_initializer,
            name=os.path.join(self.variable_scope, 'layer%d' % i, layer_type))

      elif layer_type == 'locally_connected2d':
        # Full locally connected layer.
        chout = num_filters
        layer = tf.keras.layers.LocallyConnected2D(
            filters=chout,
            kernel_size=(kernel_size, kernel_size),
            strides=(strides, strides),
            padding=padding,
            activation=None,
            use_bias=True,  # not self.config.batch_norm,
            name=os.path.join(self.variable_scope, 'layer%d' % i, layer_type),
            kernel_initializer=self.config.kernel_initializer)

      elif layer_type == 'low_rank_locally_connected2d':
        if self.config.rank < 1:
          raise ValueError('rank should be > 0 for %s layer.' % layer_type)
        chout = num_filters
        layer = layers.LowRankLocallyConnected2D(
            filters=chout,
            kernel_size=(kernel_size, kernel_size),
            strides=(strides, strides),
            padding=padding,
            activation=None,
            use_bias=not self.config.batch_norm,
            name=os.path.join(self.variable_scope, 'layer%d' % i, layer_type),
            kernel_initializer=self.config.kernel_initializer,
            combining_weights_initializer=(
                self.config.combining_weights_initializer),
            spatial_rank=self.config.rank,
            normalize_weights=self.config.normalize_weights,
            input_dependent=config.input_dependent,
            share_row_combining_weights=self.config.share_row_combining_weights,
            share_col_combining_weights=self.config.share_col_combining_weights)

      else:
        raise ValueError('Can not recognize layer %s type.' % layer_type)

      chin = chout
      self.layers_list.append(layer)
      self.pass_is_training_list.append(False)

      if self.config.batch_norm:
        layer = tf.keras.layers.BatchNormalization(
            trainable=True, momentum=MOMENTUM, epsilon=EPS)
        self.layers_list.append(layer)
        self.pass_is_training_list.append(True)

      layer = tf.keras.layers.ReLU()
      self.layers_list.append(layer)
      self.pass_is_training_list.append(False)

    if self.config.global_avg_pooling:
      self.layers_list.append(tf.keras.layers.GlobalAveragePooling2D())
    else:
      self.layers_list.append(tf.keras.layers.Flatten())
    self.pass_is_training_list.append(False)

    self.layers_list.append(tf.keras.layers.Dense(
        units=self.config.num_classes, activation=None, use_bias=True,
        name='logits'))
    self.pass_is_training_list.append(False)

  def __call__(self, images, is_training):
    endpoints = {}
    if self.config.coord_conv:
      # Append position channels.
      net = tf.concat([images, utils.position_channels(images)], axis=3)
    else:
      net = images

    for i, (pass_is_training, layer) in enumerate(
        zip(self.pass_is_training_list, self.layers_list)):
      net = layer(net, training=is_training) if pass_is_training else layer(net)
      endpoints['layer%d' % i] = net
      tf.add_to_collection(tf.GraphKeys.UPDATE_OPS, layer.updates)
      self.add_update(layer.updates)

    logits = net
    return logits, endpoints
| 37.982759 | 80 | 0.681192 | 843 | 6,609 | 5.11981 | 0.252669 | 0.07646 | 0.029194 | 0.029194 | 0.40987 | 0.368397 | 0.352641 | 0.305607 | 0.288925 | 0.267609 | 0 | 0.006463 | 0.227417 | 6,609 | 173 | 81 | 38.202312 | 0.838817 | 0.157966 | 0 | 0.373016 | 0 | 0 | 0.045528 | 0.005059 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015873 | false | 0.071429 | 0.063492 | 0 | 0.095238 | 0.007937 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
07b58dc361c480dc7628924d4fba99b729151138 | 687 | py | Python | client/modules/Wikipedia.py | devagul93/Jarvis-System | 8d1865b19bb8530831c868147c3b27a1c3bad59b | [
"MIT"
] | null | null | null | client/modules/Wikipedia.py | devagul93/Jarvis-System | 8d1865b19bb8530831c868147c3b27a1c3bad59b | [
"MIT"
] | null | null | null | client/modules/Wikipedia.py | devagul93/Jarvis-System | 8d1865b19bb8530831c868147c3b27a1c3bad59b | [
"MIT"
] | null | null | null | import wikipedia
import re
import TCPclient as client
WORDS = ["WIKIPEDIA","SEARCH","INFORMATION"]
def handle(text, mic, profile):
    # SEARCH ON WIKIPEDIA
    # ny = wikipedia.summary("New York", sentences=3);
    # mic.say("%s" % ny)
    # mic.say("What you want to search about")
    # text = mic.activeListen()
    print "entering wiki term"
    text = client.grab_input()
    while text.upper() == "WIKIPEDIA":
        print "entering while"
        text = client.grab_input()
    print text
    answer = wikipedia.summary(text, sentences=3)
    answer += "\n"
    print answer
    client.send_out(answer)
    # mic.say(answer)


def isValid(text):
    return bool(re.search(r'\bwikipedia\b', text, re.IGNORECASE))
| 19.083333 | 61 | 0.679767 | 93 | 687 | 4.989247 | 0.516129 | 0.038793 | 0.060345 | 0.081897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003534 | 0.176128 | 687 | 35 | 62 | 19.628571 | 0.816254 | 0.24163 | 0 | 0.117647 | 0 | 0 | 0.160156 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.176471 | null | null | 0.235294 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
07b77a97c35aea2ef5b761b745880bae3410a131 | 2,796 | py | Python | meiduo_mall/meiduo_mall/apps/orders/views.py | Zasling/meiduo_mall33 | ec55597758d5052b311d65aee44533b001f6ddd8 | [
"MIT"
] | 1 | 2019-04-12T08:56:29.000Z | 2019-04-12T08:56:29.000Z | meiduo_mall/meiduo_mall/apps/orders/views.py | Zasling/meiduo_mall33 | ec55597758d5052b311d65aee44533b001f6ddd8 | [
"MIT"
] | null | null | null | meiduo_mall/meiduo_mall/apps/orders/views.py | Zasling/meiduo_mall33 | ec55597758d5052b311d65aee44533b001f6ddd8 | [
"MIT"
] | 1 | 2020-03-30T14:35:22.000Z | 2020-03-30T14:35:22.000Z | from rest_framework.response import Response
from rest_framework.views import APIView
from django_redis import get_redis_connection
from goods.models import SKU
from decimal import Decimal
from rest_framework.generics import CreateAPIView,ListAPIView
from rest_framework.mixins import ListModelMixin
from orders.serializers import OrderShowSerializer, OrderSaveSerializer, OrderListSerializer, CommentSerializers, \
CommentSaveSerializers, CommentShowSerializers
from users.models import User
from orders.models import OrderInfo,OrderGoods
from orders.utils import PageNum
from rest_framework.filters import OrderingFilter
# Display order information
class OrdersShowView(APIView):
    def get(self, request):
        # Get the user object
        user = request.user
        # Open a Redis connection
        conn = get_redis_connection('cart')
        # Get the hash data: sku_id, count
        sku_id_count = conn.hgetall('cart_%s' % user.id)  # {10: 1}
        # Convert the byte data to integers
        cart = {}
        for sku_id, count in sku_id_count.items():
            cart[int(sku_id)] = int(count)
        # Get the set data
        sku_ids = conn.smembers('cart_selected_%s' % user.id)
        # Query all SKUs that are in the selected state
        skus = SKU.objects.filter(id__in=sku_ids)
        # Attach a count attribute to each SKU object (the SKU table has no
        # count field, so it has to be added manually)
        for sku in skus:
            sku.count = cart[sku.id]
        # Set the freight charge
        freight = Decimal(10.00)
        # Serialize and return the SKU objects
        ser = OrderShowSerializer({'freight': freight, 'skus': skus})
        return Response(ser.data)


# Save order information
class OrderSaveView(ListModelMixin, CreateAPIView):
    serializer_class = OrderSaveSerializer


# Fetch the order list data
class OrderListView(ListAPIView):
    pagination_class = PageNum
    serializer_class = OrderListSerializer

    def get_queryset(self):
        user = self.request.user
        order = OrderInfo.objects.filter(user=user)
        return order


# Comments - get the goods to be commented on
class OrderComment(ListAPIView):
    serializer_class = CommentSerializers

    def get_queryset(self):
        order_id = self.kwargs['order_id']
        skus = OrderGoods.objects.filter(order_id=order_id, is_commented=False)
        return skus


# Save a comment
class SaveSkuComment(CreateAPIView):
    serializer_class = CommentSaveSerializers


# Display comments on the goods detail page
class ShowComment(ListAPIView):
    serializer_class = CommentShowSerializers

    def get_queryset(self):
        # Get sku_id from kwargs
        sku_id = self.kwargs['sku_id']
        # Get the goods that have been commented on
        orders = OrderGoods.objects.filter(sku_id=sku_id, is_commented=True)
        for sku in orders:
            skuinfo = OrderInfo.objects.get(order_id=sku.order_id)
            user = User.objects.get(id=skuinfo.user_id)
            # Get the username; mask it if the comment is anonymous
            sku.username = user.username
            if sku.is_anonymous:
                sku.username = '****'
        return orders
| 31.066667 | 115 | 0.689199 | 308 | 2,796 | 6.100649 | 0.334416 | 0.023949 | 0.045237 | 0.028739 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003257 | 0.231402 | 2,796 | 89 | 116 | 31.41573 | 0.871103 | 0.078326 | 0 | 0.052632 | 0 | 0 | 0.021926 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.070175 | false | 0 | 0.210526 | 0 | 0.561404 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
07baaefdacf7ace2738a920e7e9c1d5671078a05 | 13,520 | py | Python | microbitAnim.py | SaitoYutaka/microbitAnim | 6630d5cdb3ae867d3467a035a1c14358944c0367 | [
"MIT"
] | null | null | null | microbitAnim.py | SaitoYutaka/microbitAnim | 6630d5cdb3ae867d3467a035a1c14358944c0367 | [
"MIT"
] | null | null | null | microbitAnim.py | SaitoYutaka/microbitAnim | 6630d5cdb3ae867d3467a035a1c14358944c0367 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
###########################################################################
## Python code generated with wxFormBuilder (version Aug 8 2018)
## http://www.wxformbuilder.org/
##
## PLEASE DO *NOT* EDIT THIS FILE!
###########################################################################
import wx
import wx.xrc
###########################################################################
## Class MyFrame1
###########################################################################
class MyFrame1 ( wx.Frame ):
	def __init__( self, parent ):
		wx.Frame.__init__ ( self, parent, id = wx.ID_ANY, title = wx.EmptyString, pos = wx.Point( 0,0 ), size = wx.Size( 767,507 ), style = wx.DEFAULT_FRAME_STYLE|wx.TAB_TRAVERSAL )

		self.SetSizeHints( wx.DefaultSize, wx.DefaultSize )

		gbSizer1 = wx.GridBagSizer( 0, 0 )
		gbSizer1.SetFlexibleDirection( wx.BOTH )
		gbSizer1.SetNonFlexibleGrowMode( wx.FLEX_GROWMODE_SPECIFIED )

		self.m_button00 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button00.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button00, wx.GBPosition( 0, 0 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button01 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button01.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button01, wx.GBPosition( 0, 1 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button02 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button02.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button02, wx.GBPosition( 0, 2 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button03 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button03.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button03, wx.GBPosition( 0, 3 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button04 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button04.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button04, wx.GBPosition( 0, 4 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button10 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button10.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button10, wx.GBPosition( 1, 0 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button11 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button11.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button11, wx.GBPosition( 1, 1 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button12 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button12.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button12, wx.GBPosition( 1, 2 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button13 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button13.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button13, wx.GBPosition( 1, 3 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button14 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button14.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button14, wx.GBPosition( 1, 4 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button20 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button20.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button20, wx.GBPosition( 2, 0 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button21 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button21.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button21, wx.GBPosition( 2, 1 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button22 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button22.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button22, wx.GBPosition( 2, 2 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button23 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button23.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button23, wx.GBPosition( 2, 3 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button24 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button24.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button24, wx.GBPosition( 2, 4 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button30 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button30.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button30, wx.GBPosition( 3, 0 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button31 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button31.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button31, wx.GBPosition( 3, 1 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button32 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button32.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button32, wx.GBPosition( 3, 2 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button33 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button33.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button33, wx.GBPosition( 3, 3 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button34 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button34.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button34, wx.GBPosition( 3, 4 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button40 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button40.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button40, wx.GBPosition( 4, 0 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button41 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button41.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button41, wx.GBPosition( 4, 1 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button42 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button42.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button42, wx.GBPosition( 4, 2 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button43 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button43.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button43, wx.GBPosition( 4, 3 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.m_button44 = wx.Button( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.Size( 50,50 ), 0 )
		self.m_button44.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
		gbSizer1.Add( self.m_button44, wx.GBPosition( 4, 4 ), wx.GBSpan( 1, 1 ), wx.ALL, 5 )

		self.SetSizer( gbSizer1 )
		self.Layout()
		self.m_menubar1 = wx.MenuBar( 0 )
		self.m_menu1 = wx.Menu()
		self.m_menuItem3 = wx.MenuItem( self.m_menu1, wx.ID_ANY, u"Open", wx.EmptyString, wx.ITEM_NORMAL )
		self.m_menu1.Append( self.m_menuItem3 )

		self.m_menuItem1 = wx.MenuItem( self.m_menu1, wx.ID_ANY, u"Save", wx.EmptyString, wx.ITEM_NORMAL )
		self.m_menu1.Append( self.m_menuItem1 )

		self.m_menuItem2 = wx.MenuItem( self.m_menu1, wx.ID_ANY, u"quit", wx.EmptyString, wx.ITEM_NORMAL )
		self.m_menu1.Append( self.m_menuItem2 )

		self.m_menubar1.Append( self.m_menu1, u"File" )

		self.m_menu2 = wx.Menu()
		self.m_menuItem4 = wx.MenuItem( self.m_menu2, wx.ID_ANY, u"python", wx.EmptyString, wx.ITEM_NORMAL )
		self.m_menu2.Append( self.m_menuItem4 )

		self.m_menubar1.Append( self.m_menu2, u"export" )

		self.SetMenuBar( self.m_menubar1 )

		self.Centre( wx.BOTH )

		# Connect Events
		self.m_button00.Bind( wx.EVT_BUTTON, self.onButton00Click )
		self.m_button01.Bind( wx.EVT_BUTTON, self.onButton01Click )
		self.m_button02.Bind( wx.EVT_BUTTON, self.onButton02Click )
		self.m_button03.Bind( wx.EVT_BUTTON, self.onButton03Click )
		self.m_button04.Bind( wx.EVT_BUTTON, self.onButton04Click )
		self.m_button10.Bind( wx.EVT_BUTTON, self.onButton10Click )
		self.m_button11.Bind( wx.EVT_BUTTON, self.onButton11Click )
		self.m_button12.Bind( wx.EVT_BUTTON, self.onButton12Click )
		self.m_button13.Bind( wx.EVT_BUTTON, self.onButton13Click )
		self.m_button14.Bind( wx.EVT_BUTTON, self.onButton14Click )
		self.m_button20.Bind( wx.EVT_BUTTON, self.onButton20Click )
		self.m_button21.Bind( wx.EVT_BUTTON, self.onButton21Click )
		self.m_button22.Bind( wx.EVT_BUTTON, self.onButton22Click )
		self.m_button23.Bind( wx.EVT_BUTTON, self.onButton23Click )
		self.m_button24.Bind( wx.EVT_BUTTON, self.onButton24Click )
		self.m_button30.Bind( wx.EVT_BUTTON, self.onButton30Click )
		self.m_button31.Bind( wx.EVT_BUTTON, self.onButton31Click )
		self.m_button32.Bind( wx.EVT_BUTTON, self.onButton32Click )
		self.m_button33.Bind( wx.EVT_BUTTON, self.onButton33Click )
		self.m_button34.Bind( wx.EVT_BUTTON, self.onButton34Click )
		self.m_button40.Bind( wx.EVT_BUTTON, self.onButton40Click )
		self.m_button41.Bind( wx.EVT_BUTTON, self.onButton41Click )
		self.m_button42.Bind( wx.EVT_BUTTON, self.onButton42Click )
		self.m_button43.Bind( wx.EVT_BUTTON, self.onButton43Click )
		self.m_button44.Bind( wx.EVT_BUTTON, self.onButton44Click )
		self.Bind( wx.EVT_MENU, self.OnMenuOpenSelect, id = self.m_menuItem3.GetId() )
		self.Bind( wx.EVT_MENU, self.OnMenuSaveSelect, id = self.m_menuItem1.GetId() )
		self.Bind( wx.EVT_MENU, self.OnMenuQuitSelect, id = self.m_menuItem2.GetId() )
		self.Bind( wx.EVT_MENU, self.OnExportPythonSelect, id = self.m_menuItem4.GetId() )

	def __del__( self ):
		pass

	# Virtual event handlers, override them in your derived class
	def onButton00Click( self, event ):
		event.Skip()

	def onButton01Click( self, event ):
		event.Skip()

	def onButton02Click( self, event ):
		event.Skip()

	def onButton03Click( self, event ):
		event.Skip()

	def onButton04Click( self, event ):
		event.Skip()

	def onButton10Click( self, event ):
		event.Skip()

	def onButton11Click( self, event ):
		event.Skip()

	def onButton12Click( self, event ):
		event.Skip()

	def onButton13Click( self, event ):
		event.Skip()

	def onButton14Click( self, event ):
		event.Skip()

	def onButton20Click( self, event ):
		event.Skip()

	def onButton21Click( self, event ):
		event.Skip()

	def onButton22Click( self, event ):
		event.Skip()

	def onButton23Click( self, event ):
		event.Skip()

	def onButton24Click( self, event ):
		event.Skip()

	def onButton30Click( self, event ):
		event.Skip()

	def onButton31Click( self, event ):
		event.Skip()

	def onButton32Click( self, event ):
		event.Skip()

	def onButton33Click( self, event ):
		event.Skip()

	def onButton34Click( self, event ):
		event.Skip()

	def onButton40Click( self, event ):
		event.Skip()

	def onButton41Click( self, event ):
		event.Skip()

	def onButton42Click( self, event ):
		event.Skip()

	def onButton43Click( self, event ):
		event.Skip()

	def onButton44Click( self, event ):
		event.Skip()

	def OnMenuOpenSelect( self, event ):
		event.Skip()

	def OnMenuSaveSelect( self, event ):
		event.Skip()

	def OnMenuQuitSelect( self, event ):
		event.Skip()

	def OnExportPythonSelect( self, event ):
		event.Skip()
| 44.473684 | 181 | 0.598669 | 1,773 | 13,520 | 4.447829 | 0.091371 | 0.081156 | 0.026629 | 0.066193 | 0.613239 | 0.478443 | 0.469693 | 0.455998 | 0.455998 | 0.442683 | 0 | 0.074268 | 0.252071 | 13,520 | 303 | 182 | 44.620462 | 0.705597 | 0.017456 | 0 | 0.152632 | 1 | 0 | 0.002159 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.163158 | false | 0.005263 | 0.010526 | 0 | 0.178947 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
07c0c2bb274ab76681ad18763446d5b0c976c985 | 242 | py | Python | pixelate_task_1.py | Swayamshu/Pixelate_Sample_Arena | d8e8b4614987f9302a19ec1e20a922618e67b943 | [
"MIT"
] | null | null | null | pixelate_task_1.py | Swayamshu/Pixelate_Sample_Arena | d8e8b4614987f9302a19ec1e20a922618e67b943 | [
"MIT"
] | null | null | null | pixelate_task_1.py | Swayamshu/Pixelate_Sample_Arena | d8e8b4614987f9302a19ec1e20a922618e67b943 | [
"MIT"
] | null | null | null | import gym
import pix_sample_arena
import time
import pybullet as p
import pybullet_data
import cv2
if __name__ == "__main__":
    env = gym.make("pix_sample_arena-v0")
    x = 0
    while True:
        p.stepSimulation()
        time.sleep(100) | 18.615385 | 41 | 0.702479 | 36 | 242 | 4.361111 | 0.666667 | 0.11465 | 0.178344 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031579 | 0.214876 | 242 | 13 | 42 | 18.615385 | 0.794737 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1
07c638a7630e99e901331aada0e29b538ff7310d | 1,482 | py | Python | forms/QRGenerator.py | Rono-Barto-Co/Project-QR | e80fc5a41f25542038c090311844912790cb1478 | [
"MIT"
] | 3 | 2019-07-04T03:27:06.000Z | 2019-09-06T08:52:35.000Z | forms/QRGenerator.py | Rono-Barto-Co/Project-QR | e80fc5a41f25542038c090311844912790cb1478 | [
"MIT"
] | null | null | null | forms/QRGenerator.py | Rono-Barto-Co/Project-QR | e80fc5a41f25542038c090311844912790cb1478 | [
"MIT"
] | null | null | null | from flask_wtf import FlaskForm
from wtforms import StringField, SubmitField, SelectField
from wtforms.validators import DataRequired
class QRGenerator(FlaskForm):
    code_content = StringField('Content', validators=[DataRequired()])
    code_size = SelectField('Size', choices=[('15', 'Size'),
                                             ('5', '5'),
                                             ('10', '10'),
                                             ('15', '15'),
                                             ('20', '20'),
                                             ('25', '25'),
                                             ('30', '30')])
    code_color = SelectField('Colour', choices=[('white', 'Colour'),
                                                ("white", "White"),
                                                ('yellow', "Yellow"),
                                                ('lime', "Green"),
                                                ("#ffa500", "Orange")])
    code_correction = SelectField('Error Correction', choices=[("H", "Error Correction"),
                                                               ("H", "H"),
                                                               ("L", "L"),
                                                               ("M", "M"),
                                                               ("Q", "Q")])
    code_image = StringField('Image URL')
    generate_code = SubmitField('Generate QR Code')
| 54.888889 | 89 | 0.323887 | 87 | 1,482 | 5.436782 | 0.471264 | 0.046512 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040419 | 0.549258 | 1,482 | 26 | 90 | 57 | 0.667665 | 0 | 0 | 0 | 0 | 0 | 0.112011 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
07c6a477f6bfebee04a539e1d02b2df95226ab91 | 1,259 | py | Python | Quiz/m2_advanced_quants/l5_volatility/volatility_estimation.py | jcrangel/AI-for-Trading | c3b865e992f8eb8deda91e7641428eef1d343636 | [
"Apache-2.0"
] | null | null | null | Quiz/m2_advanced_quants/l5_volatility/volatility_estimation.py | jcrangel/AI-for-Trading | c3b865e992f8eb8deda91e7641428eef1d343636 | [
"Apache-2.0"
] | null | null | null | Quiz/m2_advanced_quants/l5_volatility/volatility_estimation.py | jcrangel/AI-for-Trading | c3b865e992f8eb8deda91e7641428eef1d343636 | [
"Apache-2.0"
] | null | null | null | import pandas as pd
import numpy as np
def estimate_volatility(prices, l):
    """Create an exponential moving average model of the volatility of a stock
    price, and return the most recent (last) volatility estimate.

    Parameters
    ----------
    prices : pandas.Series
        A series of adjusted closing prices for a stock.

    l : float
        The 'lambda' parameter of the exponential moving average model. Making
        this value smaller will cause the model to weight older terms less
        relative to more recent terms.

    Returns
    -------
    last_vol : float
        The last element of your exponential moving average volatility model series.
    """
    # TODO: Implement the exponential moving average volatility model and return the last value.
    return prices.ewm(alpha=(1-l)).mean()[-1]
def test_run(filename='data.csv'):
    """Test run get_most_volatile() with stock prices from a file."""
    prices = pd.read_csv(filename, parse_dates=[
        'date'], index_col='date', squeeze=True)

    print("Most recent volatility estimate: {:.6f}".format(estimate_volatility(prices, 0.7)))
    # print(estimate_volatility(prices, 0.7))
if __name__ == '__main__':
    test_run()
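The `prices.ewm(alpha=1-l)` call above delegates the exponential weighting to pandas. As a standalone sketch of the recursion such a model is usually built on, here is a pure-Python exponentially weighted variance; note that the squared-return form and the `lam` naming are assumptions for illustration, not taken from this file (the file's own implementation simply smooths the raw prices):

```python
def ewma_variance(returns, lam):
    """Exponentially weighted moving variance of a return series.

    Implements the recursion var_t = lam * var_{t-1} + (1 - lam) * r_t**2,
    a common EWMA volatility model; lam plays the role of 'l' above.
    """
    var = 0.0
    for r in returns:
        # Larger lam keeps more weight on older observations.
        var = lam * var + (1.0 - lam) * r * r
    return var

print(ewma_variance([0.1, -0.2], 0.5))  # 0.0225
```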
| 31.475 | 96 | 0.659253 | 165 | 1,259 | 4.915152 | 0.515152 | 0.083847 | 0.088779 | 0.071517 | 0.064118 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007376 | 0.246227 | 1,259 | 39 | 97 | 32.282051 | 0.847208 | 0.586974 | 0 | 0 | 0 | 0 | 0.145161 | 0 | 0 | 0 | 0 | 0.025641 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.5 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
07cc54388c8061e52f8dc1aa33c14d904afe5143 | 3,964 | py | Python | lectures/extensions/hyperbolic_discounting/replication_code/src/analysis/get_bivariate_distr_data.py | loikein/ekw-lectures | a2f5436f10515ab26eab323fca8c37c91bdc5dcd | [
"MIT"
] | 4 | 2019-11-15T15:21:27.000Z | 2020-07-08T15:04:30.000Z | lectures/extensions/hyperbolic_discounting/replication_code/src/analysis/get_bivariate_distr_data.py | loikein/ekw-lectures | a2f5436f10515ab26eab323fca8c37c91bdc5dcd | [
"MIT"
] | 9 | 2019-11-18T15:54:36.000Z | 2020-07-14T13:56:53.000Z | lectures/extensions/hyperbolic_discounting/replication_code/src/analysis/get_bivariate_distr_data.py | loikein/ekw-lectures | a2f5436f10515ab26eab323fca8c37c91bdc5dcd | [
"MIT"
] | 3 | 2021-01-25T15:41:30.000Z | 2021-09-21T08:51:36.000Z | """Generate values of Method of Simulated Moments criterion function.
Given observed moments and weighting matrix in `OUT_ANALYSIS`, "msm_estimation",
generate values of Method of Simulated Moments criterion function for combinations
of discount factor and present bias values.
The goal is to study the bivariate distribution of the time preference parameters
around the combination of true parameter values.
"""
import itertools
import numpy as np
import pandas as pd
import respy as rp
import yaml
from bld.project_paths import project_paths_join as ppj
from src.library.compute_moments import _replace_nans
from src.library.compute_moments import calc_restricted_choice_probabilities
from src.library.compute_moments import calc_restricted_wage_distribution
from src.library.compute_moments import calc_unrestricted_choice_probabilities
from src.library.compute_moments import calc_unrestricted_wage_distribution
from src.library.compute_moments import calc_very_restricted_choice_probabilities
from src.library.compute_moments import calc_very_restricted_wage_distribution
from src.library.housekeeping import _load_pickle
from src.library.housekeeping import _temporary_working_directory
from tqdm import tqdm
def get_bivariate_distribution(params, crit_func, grid_delta, grid_beta):
    """Compute value of criterion function.

    Args:
        params (pd.DataFrame): DataFrame containing model parameters.
        crit_func (dict): Dictionary containing model options.
        grid_delta (np.array): Values of discount factor.
        grid_beta (np.array): Values of present-bias parameter.

    Returns:
        pd.DataFrame

    """
    results = []
    for beta, delta in tqdm(itertools.product(grid_beta, grid_delta)):
        params_ = params.copy()
        params_.loc[("beta", "beta"), "value"] = beta
        params_.loc[("delta", "delta"), "value"] = delta
        val = crit_func(params_)
        result = {"beta": beta, "delta": delta, "val": val}
        results.append(result)

    return pd.DataFrame.from_dict(results)
if __name__ == "__main__":
    # load params
    params = pd.read_csv(
        ppj("IN_MODEL_SPECS", "params_hyp.csv"),
        sep=";",
        index_col=["category", "name"],
    )
    params["value"] = params["value"].astype(float)

    # load options
    with open(ppj("IN_MODEL_SPECS", "options_hyp.yaml")) as options:
        options = yaml.safe_load(options)

    # get empirical moments
    empirical_moments = _load_pickle(ppj("OUT_ANALYSIS", "msm_estimation", "moments_hyp.pickle"))

    # get weighting matrix
    weighting_matrix = _load_pickle(
        ppj("OUT_ANALYSIS", "msm_estimation", "weighting_matrix_hyp.pickle")
    )

    calc_moments = {
        "Choice Probabilities Very Restricted": calc_very_restricted_choice_probabilities,
        "Choice Probabilities Restricted": calc_restricted_choice_probabilities,
        "Choice Probabilities Unrestricted": calc_unrestricted_choice_probabilities,
        "Wage Distribution Very Restricted": calc_very_restricted_wage_distribution,
        "Wage Distribution Restricted": calc_restricted_wage_distribution,
        "Wage Distribution Unrestricted": calc_unrestricted_wage_distribution,
    }

    with _temporary_working_directory(snippet="heatmap"):
        # get criterion function
        weighted_sum_squared_errors = rp.get_moment_errors_func(
            params=params,
            options=options,
            calc_moments=calc_moments,
            replace_nans=_replace_nans,
            empirical_moments=empirical_moments,
            weighting_matrix=weighting_matrix,
        )

        # get bivariate distribution results
        results = get_bivariate_distribution(
            crit_func=weighted_sum_squared_errors,
            params=params,
            grid_delta=np.arange(0.945, 0.9625, 0.0025),
            grid_beta=np.arange(0.75, 1.05, 0.01),
        )
        results.to_csv(ppj("OUT_ANALYSIS", "heatmap.csv"))
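The grid sweep above pairs every `beta` with every `delta` through `itertools.product`. A minimal standalone sketch of that iteration (the grid values here are illustrative, not the script's actual grids):

```python
import itertools

grid_beta = [0.75, 0.80]
grid_delta = [0.945, 0.950, 0.955]

# product iterates the rightmost iterable fastest, so delta varies
# within each beta, matching the loop in get_bivariate_distribution
pairs = list(itertools.product(grid_beta, grid_delta))

print(len(pairs))  # 6 combinations (2 betas * 3 deltas)
print(pairs[0])    # (0.75, 0.945)
```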
| 36.703704 | 97 | 0.726791 | 477 | 3,964 | 5.75891 | 0.27673 | 0.022934 | 0.045868 | 0.053513 | 0.333091 | 0.233345 | 0.210047 | 0.183109 | 0.156898 | 0.04878 | 0 | 0.007192 | 0.193239 | 3,964 | 107 | 98 | 37.046729 | 0.851782 | 0.212916 | 0 | 0.031746 | 1 | 0 | 0.145698 | 0.008801 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015873 | false | 0 | 0.253968 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
07d274563189ebc57a38c1571e12c09ed638080d | 18,828 | py | Python | scanlogger.py | pythonhacker/pyscanlogd | 64d6ad38127243e5c422be7f899ecfa802e1ad21 | [
"BSD-3-Clause"
] | 1 | 2021-04-03T22:15:06.000Z | 2021-04-03T22:15:06.000Z | scanlogger.py | pythonhacker/pyscanlogd | 64d6ad38127243e5c422be7f899ecfa802e1ad21 | [
"BSD-3-Clause"
] | null | null | null | scanlogger.py | pythonhacker/pyscanlogd | 64d6ad38127243e5c422be7f899ecfa802e1ad21 | [
"BSD-3-Clause"
] | 2 | 2020-12-18T20:06:21.000Z | 2021-04-08T02:47:40.000Z | # -- coding: utf-8
#!/usr/bin/env python
"""
pyscanlogger: Port scan detector/logger tool, inspired
by scanlogd {http://www.openwall.com/scanlogd} but with
added ability to log slow port-scans.
Features
1. Detects all stealth (half-open) and full-connect scans.
2. Detects Idle scan and logs it correctly using correlation!
3. Detects SCTP scan.
4. Detects slow port-scans also.
Modification History
Mar 17 2010 - Cleaned up code to publish to google.
Apr 8 2010 - Better detection of TCP full-connect scan without
spurious and incorrect logging. Better logging
functions.
Licensed under GNU GPL v3.0.
"""
import sys, os
import dpkt, pcap
import struct
import socket
import time
import threading
import optparse
import entry
import timerlist
__author__ = "pythonhacker"
__maintainer__ = "pythonhacker"
__version__ = '0.5.1'
__modified__ = 'Thu Apr 8 19:21:11 IST 2010'
# UDP - in progress...
SCAN_TIMEOUT = 5
WEIGHT_THRESHOLD = 25
PIDFILE="/var/run/pyscanlogger.pid"
# TCP flag constants
TH_URG=dpkt.tcp.TH_URG
TH_ACK=dpkt.tcp.TH_ACK
TH_PSH=dpkt.tcp.TH_PUSH
TH_RST=dpkt.tcp.TH_RST
TH_SYN=dpkt.tcp.TH_SYN
TH_FIN=dpkt.tcp.TH_FIN
# Protocols
TCP=dpkt.tcp.TCP
UDP=dpkt.udp.UDP
SCTP=dpkt.sctp.SCTP
get_timestamp = lambda : time.strftime('%Y-%m-%d %H:%M:%S', time.localtime())
ip2quad = lambda x: socket.inet_ntoa(struct.pack('I', x))
scan_ip2quad = lambda scan: map(ip2quad, [scan.src, scan.dst])
class ScanLogger(object):
    """ Port scan detector/logger """

    # TCP flags to scan type mapping
    scan_types = {0: 'TCP null',
                  TH_FIN: 'TCP fin',
                  TH_SYN: 'TCP syn', TH_SYN|TH_RST: 'TCP syn',
                  TH_ACK: 'TCP ack',
                  TH_URG|TH_PSH|TH_FIN: 'TCP x-mas',
                  TH_URG|TH_PSH|TH_FIN|TH_ACK: 'TCP x-mas',
                  TH_SYN|TH_FIN: 'TCP syn/fin',
                  TH_FIN|TH_ACK: 'TCP fin/ack',
                  TH_SYN|TH_ACK: 'TCP full-connect',
                  TH_URG|TH_PSH|TH_ACK|TH_RST|TH_SYN|TH_FIN: 'TCP all-flags',
                  TH_SYN|TH_ACK|TH_RST: 'TCP full-connect',
                  # Not a scan
                  TH_RST|TH_ACK: 'reply'}
    def __init__(self, timeout, threshold, maxsize, daemon=True, logfile='/var/log/scanlog'):
        self.scans = entry.EntryLog(maxsize)
        self.long_scans = entry.EntryLog(maxsize)
        # Port scan weight threshold
        self.threshold = threshold
        # Timeout for scan entries
        self.timeout = timeout
        # Long-period scan timeouts
        self.timeout_l = 3600
        # Long-period scan threshold
        self.threshold_l = self.threshold/2
        # Daemonize ?
        self.daemon = daemon
        # Log file
        try:
            self.scanlog = open(logfile,'a')
            print >> sys.stderr, 'Scan logs will be saved to %s' % logfile
        except (IOError, OSError), (errno, strerror):
            print >> sys.stderr, "Error opening scan log file %s => %s" % (logfile, strerror)
            self.scanlog = None

        # Recent scans - this list allows to keep scan information
        # upto last 'n' seconds, so as to not call duplicate scans
        # in the same time-period. 'n' is 60 sec by default.
        # Since entries time out in 60 seconds, max size is equal
        # to maximum such entries possible in 60 sec - assuming
        # a scan occurs at most every 5 seconds, this would be 12.
        self.recent_scans = timerlist.TimerList(12, 60.0)
    def hash_func(self, addr):
        """ Hash a host address """
        value = addr
        h = 0

        while value:
            # print value
            h ^= value
            value = value >> 9

        return h & (8192-1)

    def mix(self, a, b, c):
        a -= b; a -= c; a ^= (c>>13)
        b -= c; b -= a; b ^= (a<<8)
        c -= a; c -= b; c ^= (b>>13)
        a -= b; a -= c; a ^= (c>>12)
        b -= c; b -= a; b ^= (a<<16)
        c -= a; c -= b; c ^= (b>>5)
        a -= b; a -= c; a ^= (c>>3)
        b -= c; b -= a; b ^= (a<<10)
        c -= a; c -= b; c ^= (b>>15)

        return abs(c)

    def host_hash(self, src, dst):
        """ Hash mix two host addresses """
        return self.hash_func(self.mix(src, dst, 0xffffff))

    def log(self, msg):
        """ Log a message to console and/or log file """
        line = '[%s]: %s' % (get_timestamp(), msg)
        if self.scanlog:
            self.scanlog.write(line + '\n')
            self.scanlog.flush()

        if not self.daemon:
            print >> sys.stderr, line
    def log_scan(self, scan, continuation=False, slow_scan=False, unsure=False):
        """ Log the scan to file and/or console """
        srcip, dstip = scan_ip2quad(scan)
        ports = ','.join([str(port) for port in scan.ports])

        if not continuation:
            tup = [scan.type,scan.flags_or,srcip,dstip, ports]
            if not slow_scan:
                if scan.type != 'Idle':
                    line = '%s scan (flags:%d) from %s to %s (ports:%s)'
                else:
                    tup.append(ip2quad(scan.zombie))
                    line = '%s scan (flags: %d) from %s to %s (ports: %s) using zombie host %s'
            else:
                tup.append(scan.time_avg)
                if unsure:
                    line = 'Possible slow %s scan (flags:%d) from %s to %s (ports:%s), average timediff %.2fs'
                else:
                    line = 'Slow %s scan (flags:%d) from %s to %s (ports:%s), average timediff %.2fs'
        else:
            tup = [scan.type, srcip,dstip, ports]
            if not slow_scan:
                if scan.type != 'Idle':
                    line = 'Continuation of %s scan from %s to %s (ports:%s)'
                else:
                    tup.append(ip2quad(scan.zombie))
                    line = 'Continuation of %s scan from %s to %s (ports: %s) using zombie host %s'
            else:
                tup.append(scan.time_avg)
                line = 'Continuation of slow %s scan from %s to %s (ports:%s), average timediff %.2fs'

        msg = line % tuple(tup)
        self.log(msg)
    def update_ports(self, scan, dport, flags):
        scan.flags_or |= flags
        if dport in scan.ports:
            return

        # Add weight for port
        if dport < 1024:
            scan.weight += 3
        else:
            scan.weight += 1

        scan.ports.append(dport)
    def inspect_scan(self, scan, slow_scan=False):
        # Sure scan
        is_scan = ((slow_scan and scan.weight >= self.threshold_l) or (not slow_scan and scan.weight >= self.threshold))
        # Possible scan
        maybe_scan = (slow_scan and len(scan.ports)>=3 and len(scan.timediffs)>=4 and (scan.weight < self.threshold_l))
        not_scan = False

        if is_scan or maybe_scan:
            scan.logged = True

            if scan.proto==TCP:
                idle_scan = False
                if scan.flags_or==TH_RST:
                    # None does scan using RST, however this could be
                    # return packets from a zombie host to the scanning
                    # host when a scanning host is doing an idle scan.
                    # Basically
                    # A - scanning host
                    # B - zombie host
                    # C - target host
                    # If A does an idle scan on C with B as zombie,
                    # it will appear to C as if B is syn scanning it
                    # and later we could get an apparent RST "scan"
                    # from B to A
                    # Correlation: If 'RST scan' detected from X to Y
                    # See if there was a SYN scan recently from host
                    # X to host Z. Then actually Y is idle scanning
                    # Z
                    dummy_scans, idle_ports = [], []
                    for item in reversed(self.recent_scans):
                        rscan = item[1]
                        if rscan.src==scan.src and rscan.flags_or==TH_SYN and ((rscan.timestamp - scan.timestamp)<30):
                            idle_scan = True
                            idle_ports.append(rscan.ports)
                            dummy_scans.append(item)

                    if idle_scan:
                        scan.src = scan.dst
                        scan.dst = rscan.dst
                        scan.zombie = rscan.src
                        scan.type = 'Idle'
                        scan.ports = idle_ports
                        # for d in dummy_scans:
                        #     self.recent_scans.remove(d)
                    else:
                        # Remove entry
                        if slow_scan:
                            del self.long_scans[scan.hash]
                        else:
                            del self.scans[scan.hash]
                        return False
                else:
                    scan.type = self.scan_types.get(scan.flags_or,'unknown')
                    if scan.type in ('', 'reply'):
                        not_scan = True

                    # If we see scan flags 22 from A->B, make sure that
                    # there was no recent full-connect scan from B->A, if
                    # so this is spurious and should be ignored.
                    if scan.flags_or == (TH_SYN|TH_ACK|TH_RST) and len(self.recent_scans):
                        recent1 = self.recent_scans[-1:-2:-1]
                        for recent in recent1:
                            # Was not a scan, skip
                            if not recent.is_scan: continue
                            if recent.type == 'TCP full-connect' and ((scan.src == recent.dst) and (scan.dst == recent.src)):
                                # Spurious
                                self.log("Ignoring spurious TCP full-connect scan from %s" % ' to '.join(scan_ip2quad(scan)))
                                not_scan = True
                                break
                    # If this is a syn scan, see if there was a recent idle scan
                    # with this as zombie, then ignore it...
                    elif scan.flags_or == TH_SYN and len(self.recent_scans):
                        # Try last 1 scans
                        recent1 = self.recent_scans[-1:-2:-1]
                        for recent in recent1:
                            if recent.type=='Idle' and scan.src==recent.zombie:
                                self.log('Ignoring mis-interpreted syn scan from zombie host %s' % ' to '.join(scan_ip2quad(scan)))
                                break
                            # Reply from B->A for full-connect scan from A->B
                            elif (recent.type == 'reply' and ((scan.src == recent.dst) and (scan.dst == recent.src))):
                                scan.type = 'TCP full-connect'
                                break

            elif scan.proto==UDP:
                scan.type = 'UDP'
                # Reset flags for UDP scan
                scan.flags_or = 0
            elif scan.proto==SCTP:
                if scan.chunk_type==1:
                    scan.type = 'SCTP Init'
                elif scan.chunk_type==10:
                    scan.type = 'SCTP COOKIE_ECHO'

            # See if this was logged recently
            scanentry = entry.RecentScanEntry(scan, not not_scan)

            if scanentry not in self.recent_scans:
                continuation=False
                self.recent_scans.append(scanentry)
            else:
                continuation=True

            if not not_scan:
                self.log_scan(scan, continuation=continuation, slow_scan=slow_scan, unsure=maybe_scan)

            # Remove entry
            if slow_scan:
                del self.long_scans[scan.hash]
            else:
                del self.scans[scan.hash]

            return True
        else:
            return False
    def process(self, pkt):
        if not hasattr(pkt, 'ip'):
            return

        ip = pkt.ip
        # Ignore non-tcp, non-udp packets
        if type(ip.data) not in (TCP, UDP, SCTP):
            return

        pload = ip.data

        src,dst,dport,flags = int(struct.unpack('I',ip.src)[0]),int(struct.unpack('I', ip.dst)[0]),int(pload.dport),0
        proto = type(pload)
        if proto == TCP: flags = pload.flags

        key = self.host_hash(src,dst)
        curr=time.time()

        # Keep dropping old entries
        self.recent_scans.collect()

        if key in self.scans:
            scan = self.scans[key]

            if scan.src != src:
                # Skip packets in reverse direction or invalid protocol
                return

            timediff = curr - scan.timestamp
            # Update only if not too old, else skip and remove entry
            if (timediff > self.timeout):
                # Add entry in long_scans if timediff not larger
                # than longscan timeout
                prev = self.scans[key].timestamp
                if timediff<=self.timeout_l:
                    if key not in self.long_scans:
                        lscan = entry.ScanEntry(key)
                        lscan.src = src
                        lscan.dst = dst
                        lscan.timestamp = curr
                        lscan.timediffs.append(curr - prev)
                        lscan.flags_or |= flags
                        lscan.ports.append(dport)
                        lscan.proto = proto
                        self.long_scans[key] = lscan
                    else:
                        lscan = self.long_scans[key]
                        lscan.timestamp = curr
                        lscan.flags_or |= flags
                        lscan.timediffs.append(curr - prev)
                        lscan.update_time_sd()
                        self.update_ports(lscan, dport, flags)

                        if lscan.time_sd<2:
                            # SD is less than 2, possible slow scan
                            # update port weights...
                            # print 'Weight=>',lscan.weight
                            if not self.inspect_scan(lscan, True):
                                # Not a scan, check # of entries - if too many
                                # then this is a regular network activity
                                # but not a scan, so remove entry
                                if len(lscan.timediffs)>=10:
                                    # print lscan.src, lscan.timediffs, lscan.time_sd
                                    print 'Removing',key,lscan.src,'since not a scan'
                                    del self.long_scans[key]
                        elif len(lscan.timediffs)>2:
                            # More than 2 entries, but SD is too large,
                            # delete the entry
                            # print 'Removing',key,lscan.src,'since SD is',lscan.time_sd
                            del self.long_scans[key]
                else:
                    # Too large timeout, remove key
                    del self.long_scans[key]

                del self.scans[key]
                return

            if scan.logged: return

            scan.timestamp = curr
            self.update_ports(scan, dport, flags)
            self.inspect_scan(scan)
        else:
            # Add new entry
            scan = entry.ScanEntry(key)
            scan.src = src
            scan.dst = dst
            scan.timestamp = curr
            scan.flags_or |= flags
            if proto==SCTP:
                scan.chunk_type = pload.chunks[0].type
            scan.ports.append(dport)
            scan.proto = proto
            self.scans[key] = scan
    def loop(self):
        pc = pcap.pcap()
        decode = { pcap.DLT_LOOP:dpkt.loopback.Loopback,
                   pcap.DLT_NULL:dpkt.loopback.Loopback,
                   pcap.DLT_EN10MB:dpkt.ethernet.Ethernet } [pc.datalink()]

        try:
            print 'listening on %s: %s' % (pc.name, pc.filter)
            for ts, pkt in pc:
                self.process(decode(pkt))
        except KeyboardInterrupt:
            if not self.daemon:
                nrecv, ndrop, nifdrop = pc.stats()
                print '\n%d packets received by filter' % nrecv
                print '%d packets dropped by kernel' % ndrop
    def run_daemon(self):
        # Disconnect from tty
        try:
            pid = os.fork()
            if pid>0:
                sys.exit(0)
        except OSError, e:
            print >>sys.stderr, "fork #1 failed", e
            sys.exit(1)

        os.setsid()
        os.umask(0)

        # Second fork
        try:
            pid = os.fork()
            if pid>0:
                open(PIDFILE,'w').write(str(pid))
                sys.exit(0)
        except OSError, e:
            print >>sys.stderr, "fork #2 failed", e
            sys.exit(1)

        self.loop()

    def run(self):
        # If daemon, then create a new thread and wait for it
        if self.daemon:
            print 'Daemonizing...'
            self.run_daemon()
        else:
            # Run in foreground
            self.loop()
def main():
    if os.geteuid() != 0:
        sys.exit("You must be super-user to run this program")

    o=optparse.OptionParser()
    o.add_option("-d", "--daemonize", dest="daemon", help="Daemonize",
                 action="store_true", default=False)
    o.add_option("-f", "--logfile", dest="logfile", help="File to save logs to",
                 default="/var/log/scanlog")
    options, args = o.parse_args()

    s=ScanLogger(SCAN_TIMEOUT, WEIGHT_THRESHOLD, 8192, options.daemon, options.logfile)
    s.run()

if __name__ == '__main__':
    main()
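The hashing and address-formatting helpers in the file above are easy to check in isolation. This standalone Python 3 sketch re-implements `hash_func` and the `ip2quad` conversion; the explicit little-endian `'<I'` format is my substitution for the original's native-order `'I'`, so the printed dotted quad matches only the little-endian behavior the original code implicitly assumes:

```python
import socket
import struct

def hash_func(addr):
    """XOR-fold an integer address into a 13-bit (8192-slot) table index."""
    h = 0
    while addr:
        h ^= addr
        addr >>= 9          # fold in 9-bit chunks, as in ScanLogger.hash_func
    return h & (8192 - 1)

def ip2quad(x):
    # '<I' pins little-endian byte order; the original uses native 'I'
    return socket.inet_ntoa(struct.pack('<I', x))

print(ip2quad(16777343))    # 127.0.0.1 (0x0100007F packed little-endian)
print(hash_func(16777343))  # 63
```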
| 37.8833 | 132 | 0.468929 | 2,152 | 18,828 | 4.015799 | 0.196097 | 0.013539 | 0.017357 | 0.00648 | 0.206781 | 0.151701 | 0.11583 | 0.103795 | 0.103217 | 0.099514 | 0 | 0.013553 | 0.439611 | 18,828 | 496 | 133 | 37.959677 | 0.805516 | 0.126036 | 0 | 0.258065 | 0 | 0.016129 | 0.084012 | 0.001656 | 0 | 0 | 0.00053 | 0 | 0 | 0 | null | null | 0 | 0.029032 | null | null | 0.032258 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
07d5b427e69bdc09287f1c66c02797e0db0b274b | 1,218 | py | Python | examples/question_answering/qa_sparse_train.py | ebell495/nn_pruning | 41263ab898117a639f3f219c23a4cecc8bc0e3f3 | [
"Apache-2.0"
] | 250 | 2021-02-22T15:50:04.000Z | 2022-03-31T08:12:02.000Z | examples/question_answering/qa_sparse_train.py | vuiseng9/nn_pruning | 8f4a14dd63d621483cbc1bc4eb34600d66e9e71b | [
"Apache-2.0"
] | 28 | 2021-02-22T15:54:34.000Z | 2022-03-17T08:57:38.000Z | examples/question_answering/qa_sparse_train.py | vuiseng9/nn_pruning | 8f4a14dd63d621483cbc1bc4eb34600d66e9e71b | [
"Apache-2.0"
] | 31 | 2021-02-22T16:07:17.000Z | 2022-03-28T09:17:24.000Z | # coding=utf-8
# Copyright 2020 The HuggingFace Team All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Sparse Fine-tuning the library models for question answering.
"""
# You can also adapt this script on your own question answering task. Pointers for this are left as comments.
from nn_pruning.sparse_trainer import SparseTrainer
from .qa_train import QATrainer
# SparseTrainer should appear first in the base classes, as its functions must override QATrainer and its base classes (Trainer)
class QASparseTrainer(SparseTrainer, QATrainer):
    def __init__(self, sparse_args, *args, **kwargs):
        QATrainer.__init__(self, *args, **kwargs)
        SparseTrainer.__init__(self, sparse_args)
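The base-class ordering matters because Python's C3 method resolution order searches bases left to right, so methods on the first base shadow same-named methods on later bases. A toy sketch of that rule (hypothetical classes, not the real trainers):

```python
class Trainer:
    def training_step(self):
        return "dense step"

class SparseMixin(Trainer):
    def training_step(self):
        # Wins in any subclass that lists SparseMixin before Trainer
        return "sparse step"

class SparseQATrainer(SparseMixin, Trainer):
    pass

print(SparseQATrainer().training_step())  # sparse step
```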
| 43.5 | 128 | 0.769294 | 177 | 1,218 | 5.19774 | 0.632768 | 0.065217 | 0.028261 | 0.034783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008824 | 0.162562 | 1,218 | 27 | 129 | 45.111111 | 0.893137 | 0.729885 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
07d7992d7ae8299b452c378aa6d4664a38bab354 | 1,252 | py | Python | src/petronia/aid/bootstrap/__init__.py | groboclown/petronia | 486338023d19cee989e92f0c5692680f1a37811f | [
"MIT"
] | 19 | 2017-06-21T10:28:24.000Z | 2021-12-31T11:49:28.000Z | src/petronia/aid/bootstrap/__init__.py | groboclown/petronia | 486338023d19cee989e92f0c5692680f1a37811f | [
"MIT"
] | 10 | 2016-11-11T18:57:57.000Z | 2021-02-01T15:33:43.000Z | src/petronia/aid/bootstrap/__init__.py | groboclown/petronia | 486338023d19cee989e92f0c5692680f1a37811f | [
"MIT"
] | 3 | 2017-09-17T03:29:35.000Z | 2019-06-03T10:43:08.000Z |
"""
Common Petronia imports for bootstrap parts of an extension.
This should be imported along with the `simp` module.
"""
from ...base.bus import (
    EventBus,
    ListenerRegistrar,
    ListenerSetup,
    QueuePriority,
    ExtensionMetadataStruct,
    register_event,
    EVENT_WILDCARD,
    TARGET_WILDCARD,
    QUEUE_EVENT_NORMAL,
    QUEUE_EVENT_HIGH,
    QUEUE_EVENT_IO,
    QUEUE_EVENT_TYPES
)
from ...base.participant import (
    create_singleton_identity,
    NOT_PARTICIPANT,
)
from ...base.events import (
    # These are generally just bootstrap events.
    DisposeCompleteEvent,
    as_dispose_complete_listener,
    RequestDisposeEvent,
    as_request_dispose_listener,
    SystemStartedEvent,
    as_system_started_listener,
)
from ...base.events.bus import (
    EventProtectionModel,
    GLOBAL_EVENT_PROTECTION,
    INTERNAL_EVENT_PROTECTION,
    PRODUCE_EVENT_PROTECTION,
    CONSUME_EVENT_PROTECTION,
    REQUEST_EVENT_PROTECTION,
    RESPONSE_EVENT_PROTECTION,
)
from ...core.extensions.api import ANY_VERSION
from ...core.shutdown.api import (
    SystemShutdownEvent,
    as_system_shutdown_listener,
    SystemShutdownFinalizeEvent,
    as_system_shutdown_finalize_listener,
    TARGET_ID_SYSTEM_SHUTDOWN,
)
)
| 19.261538 | 60 | 0.747604 | 130 | 1,252 | 6.846154 | 0.561538 | 0.101124 | 0.031461 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.188498 | 1,252 | 64 | 61 | 19.5625 | 0.875984 | 0.126997 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.139535 | 0 | 0.139535 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
07dab8d1754575bc1f3f83e4e0cadea3c8dcd3af | 8,104 | py | Python | src/biotite/application/application.py | claudejrogers/biotite | 3635bc9071506ecb85ddd9b1dbe6a430295e060e | [
"BSD-3-Clause"
] | null | null | null | src/biotite/application/application.py | claudejrogers/biotite | 3635bc9071506ecb85ddd9b1dbe6a430295e060e | [
"BSD-3-Clause"
] | null | null | null | src/biotite/application/application.py | claudejrogers/biotite | 3635bc9071506ecb85ddd9b1dbe6a430295e060e | [
"BSD-3-Clause"
] | null | null | null | # This source code is part of the Biotite package and is distributed
# under the 3-Clause BSD License. Please see 'LICENSE.rst' for further
# information.
__name__ = "biotite.application"
__author__ = "Patrick Kunzmann"
__all__ = ["Application", "AppStateError", "TimeoutError", "VersionError",
           "AppState", "requires_state"]
import abc
import time
from functools import wraps
from enum import Flag, auto
class AppState(Flag):
    """
    This enum type represents the app states of an application.
    """
    CREATED = auto()
    RUNNING = auto()
    FINISHED = auto()
    JOINED = auto()
    CANCELLED = auto()
def requires_state(app_state):
    """
    A decorator for methods of :class:`Application` subclasses that
    raises an :class:`AppStateError` in case the method is called, when
    the :class:`Application` is not in the specified :class:`AppState`
    `app_state`.

    Parameters
    ----------
    app_state : AppState
        The required app state.

    Examples
    --------
    Raises :class:`AppStateError` when `function` is called,
    if :class:`Application` is not in one of the specified states:

    >>> @requires_state(AppState.RUNNING | AppState.FINISHED)
    ... def function(self):
    ...     pass
    """
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            # First parameter of method is always 'self'
            instance = args[0]
            if not instance._state & app_state:
                raise AppStateError(
                    f"The application is in {instance.get_app_state()} state, "
                    f"but {app_state} state is required"
                )
            return func(*args, **kwargs)
        return wrapper
    return decorator
class Application(metaclass=abc.ABCMeta):
    """
    This class is a wrapper around an external piece of runnable
    software in any sense. Subclasses of this abstract base class
    specify the respective kind of software and the way of interacting
    with it.

    Every :class:`Application` runs through different app states
    (instances of enum :class:`AppState`) from its creation until its
    termination:
    Directly after its instantiation the app is in the *CREATED* state.
    In this state further parameters can be set for the application run.
    After the user calls the :func:`start()` method, the app state is
    set to *RUNNING* and the :class:`Application` type specific
    :func:`run()` method is called.
    When the application finishes the AppState changes to *FINISHED*.
    This is checked via the :class:`Application` type specific
    :func:`is_finished()` method.
    The user can now call the :func:`join()` method, concluding the
    application in the *JOINED* state and making the results of the
    application accessible by executing the :class:`Application`
    type specific :func:`evaluate()` method.
    Furthermore this executes the :class:`Application` type specific
    :func:`clean_up()` method.
    :func:`join()` can even be called in the *RUNNING* state:
    This will constantly check :func:`is_finished()` and will directly
    go into the *JOINED* state as soon as the application reaches the
    *FINISHED* state.

    Calling the :func:`cancel()` method while the application is
    *RUNNING* or *FINISHED* leaves the application in the *CANCELLED*
    state.
    This triggers the :func:`clean_up()` method, too, but there are no
    accessible results.
    If a method is called in an unsuitable app state, an
    :class:`AppStateError` is raised.

    The application run behaves like an additional thread: Between the
    call of :func:`start()` and :func:`join()` other Python code can be
    executed, while the application runs in the background.
    """
    def __init__(self):
        self._state = AppState.CREATED

    @requires_state(AppState.CREATED)
    def start(self):
        """
        Start the application run and set its state to *RUNNING*.
        This can only be done from the *CREATED* state.
        """
        self.run()
        self._start_time = time.time()
        self._state = AppState.RUNNING

    @requires_state(AppState.RUNNING | AppState.FINISHED)
    def join(self, timeout=None):
        """
        Conclude the application run and set its state to *JOINED*.
        This can only be done from the *RUNNING* or *FINISHED* state.

        If the application is *FINISHED* the joining process happens
        immediately, if otherwise the application is *RUNNING*, this
        method waits until the application is *FINISHED*.

        Parameters
        ----------
        timeout : float, optional
            If this parameter is specified, the :class:`Application`
            only waits for finishing until this value (in seconds) runs
            out.
            After this time is exceeded a :class:`TimeoutError` is
            raised and the application is cancelled.

        Raises
        ------
        TimeoutError
            If the joining process exceeds the `timeout` value.
        """
        time.sleep(self.wait_interval())
        while self.get_app_state() != AppState.FINISHED:
            if timeout is not None and time.time()-self._start_time > timeout:
                self.cancel()
                raise TimeoutError(
                    f"The application expired its timeout "
                    f"({timeout:.1f} s)"
                )
            else:
                time.sleep(self.wait_interval())
        time.sleep(self.wait_interval())

        try:
            self.evaluate()
        except AppStateError:
            raise
        except:
            self._state = AppState.CANCELLED
            raise
        else:
            self._state = AppState.JOINED
        self.clean_up()

    @requires_state(AppState.RUNNING | AppState.FINISHED)
    def cancel(self):
        """
        Cancel the application when in *RUNNING* or *FINISHED* state.
        """
        self._state = AppState.CANCELLED
        self.clean_up()
def get_app_state(self):
"""
Get the current app state.
Returns
-------
app_state : AppState
The current app state.
"""
if self._state == AppState.RUNNING:
if self.is_finished():
self._state = AppState.FINISHED
return self._state
@abc.abstractmethod
def run(self):
"""
Commence the application run. Called in :func:`start()`.
PROTECTED: Override when inheriting.
"""
pass
@abc.abstractmethod
def is_finished(self):
"""
Check if the application has finished.
PROTECTED: Override when inheriting.
Returns
-------
finished : bool
            True if the application has finished, false otherwise.
"""
pass
@abc.abstractmethod
def wait_interval(self):
"""
The time interval of :func:`is_finished()` calls in the joining
process.
PROTECTED: Override when inheriting.
Returns
-------
interval : float
Time (in seconds) between calls of :func:`is_finished()` in
:func:`join()`
"""
pass
@abc.abstractmethod
def evaluate(self):
"""
Evaluate application results. Called in :func:`join()`.
PROTECTED: Override when inheriting.
"""
pass
def clean_up(self):
"""
Do clean up work after the application terminates.
PROTECTED: Optionally override when inheriting.
"""
pass
class AppStateError(Exception):
"""
Indicate that the application lifecycle was violated.
"""
pass
class TimeoutError(Exception):
    """
    Indicate that the application's timeout expired.
    (Note: this shadows Python's built-in :class:`TimeoutError` within this module.)
    """
    pass
class VersionError(Exception):
"""
Indicate that the application's version is invalid.
"""
    pass
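The methods above decorate themselves with `@requires_state` and combine states with `|`, but neither `AppState` nor the decorator is defined in this excerpt. A minimal sketch of how that machinery could look (an assumption for illustration, not the module's actual implementation):

```python
import enum
import functools

class AppState(enum.Flag):
    CREATED = enum.auto()
    RUNNING = enum.auto()
    FINISHED = enum.auto()
    JOINED = enum.auto()
    CANCELLED = enum.auto()

class AppStateError(Exception):
    pass

def requires_state(allowed_states):
    """Reject a method call unless the instance is in one of the allowed states."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            # enum.Flag supports bitwise AND, so OR-combined states work here
            if not self._state & allowed_states:
                raise AppStateError(
                    f"{func.__name__}() not allowed in state {self._state}"
                )
            return func(self, *args, **kwargs)
        return wrapper
    return decorator

class Demo:
    def __init__(self):
        self._state = AppState.CREATED

    @requires_state(AppState.CREATED)
    def start(self):
        self._state = AppState.RUNNING

demo = Demo()
demo.start()   # allowed: CREATED -> RUNNING
# a second demo.start() would now raise AppStateError
```

Because `AppState` is an `enum.Flag`, a decorator argument like `AppState.RUNNING | AppState.FINISHED` accepts either state with a single bitwise test.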
# --- scripts/examples/OpenMV/16-Codes/find_barcodes.py (repo: jiskra/openmv, MIT license) ---

# Barcode Example
#
# This example shows off how easy it is to detect bar codes using the
# OpenMV Cam M7. Barcode detection does not work on the M4 Camera.
import sensor, image, time, math
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.VGA) # High Res!
sensor.set_windowing((640, 80)) # V Res of 80 == less work (40 for 2X the speed).
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False) # must turn this off to prevent image washout...
sensor.set_auto_whitebal(False) # must turn this off to prevent image washout...
clock = time.clock()
# Barcode detection can run at the full 640x480 resolution of your OpenMV Cam's
# OV7725 camera module. Barcode detection will also work in RGB565 mode but at
# a lower resolution. That said, barcode detection requires a higher resolution
# to work well so it should always be run at 640x480 in grayscale...
# Map the barcode type constants to their printable names.
_BARCODE_NAMES = {
    image.EAN2: "EAN2",
    image.EAN5: "EAN5",
    image.EAN8: "EAN8",
    image.UPCE: "UPCE",
    image.ISBN10: "ISBN10",
    image.UPCA: "UPCA",
    image.EAN13: "EAN13",
    image.ISBN13: "ISBN13",
    image.I25: "I25",
    image.DATABAR: "DATABAR",
    image.DATABAR_EXP: "DATABAR_EXP",
    image.CODABAR: "CODABAR",
    image.CODE39: "CODE39",
    image.PDF417: "PDF417",
    image.CODE93: "CODE93",
    image.CODE128: "CODE128",
}

def barcode_name(code):
    # Same behavior as the original if-chain: returns None for unknown types.
    return _BARCODE_NAMES.get(code.type())
while(True):
clock.tick()
img = sensor.snapshot()
codes = img.find_barcodes()
for code in codes:
img.draw_rectangle(code.rect())
print_args = (barcode_name(code), code.payload(), (180 * code.rotation()) / math.pi, code.quality(), clock.fps())
print("Barcode %s, Payload \"%s\", rotation %f (degrees), quality %d, FPS %f" % print_args)
if not codes:
print("FPS %f" % clock.fps())
# --- tests/test_packed_to_padded.py (repo: theycallmepeter/pytorch3d_PBR, BSD-3-Clause license) ---

# Copyright (c) Facebook, Inc. and its affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
import unittest
import torch
from common_testing import TestCaseMixin, get_random_cuda_device
from pytorch3d.ops import packed_to_padded, padded_to_packed
from pytorch3d.structures.meshes import Meshes
class TestPackedToPadded(TestCaseMixin, unittest.TestCase):
def setUp(self) -> None:
super().setUp()
torch.manual_seed(1)
@staticmethod
def init_meshes(
num_meshes: int = 10,
num_verts: int = 1000,
num_faces: int = 3000,
device: str = "cpu",
):
device = torch.device(device)
verts_list = []
faces_list = []
for _ in range(num_meshes):
verts = torch.rand((num_verts, 3), dtype=torch.float32, device=device)
faces = torch.randint(
num_verts, size=(num_faces, 3), dtype=torch.int64, device=device
)
verts_list.append(verts)
faces_list.append(faces)
meshes = Meshes(verts_list, faces_list)
return meshes
@staticmethod
def packed_to_padded_python(inputs, first_idxs, max_size, device):
"""
PyTorch implementation of packed_to_padded function.
"""
num_meshes = first_idxs.size(0)
D = inputs.shape[1] if inputs.dim() == 2 else 0
if D == 0:
inputs_padded = torch.zeros((num_meshes, max_size), device=device)
else:
inputs_padded = torch.zeros((num_meshes, max_size, D), device=device)
for m in range(num_meshes):
s = first_idxs[m]
if m == num_meshes - 1:
f = inputs.shape[0]
else:
f = first_idxs[m + 1]
            inputs_padded[m, : f - s] = inputs[s:f]
return inputs_padded
@staticmethod
def padded_to_packed_python(inputs, first_idxs, num_inputs, device):
"""
PyTorch implementation of padded_to_packed function.
"""
num_meshes = inputs.size(0)
D = inputs.shape[2] if inputs.dim() == 3 else 0
if D == 0:
inputs_packed = torch.zeros((num_inputs,), device=device)
else:
inputs_packed = torch.zeros((num_inputs, D), device=device)
for m in range(num_meshes):
s = first_idxs[m]
if m == num_meshes - 1:
f = num_inputs
else:
f = first_idxs[m + 1]
            inputs_packed[s:f] = inputs[m, : f - s]
return inputs_packed
def _test_packed_to_padded_helper(self, D, device):
"""
Check the results from packed_to_padded and PyTorch implementations
are the same.
"""
meshes = self.init_meshes(16, 100, 300, device=device)
faces = meshes.faces_packed()
mesh_to_faces_packed_first_idx = meshes.mesh_to_faces_packed_first_idx()
max_faces = meshes.num_faces_per_mesh().max().item()
if D == 0:
values = torch.rand((faces.shape[0],), device=device, requires_grad=True)
else:
values = torch.rand((faces.shape[0], D), device=device, requires_grad=True)
values_torch = values.detach().clone()
values_torch.requires_grad = True
values_padded = packed_to_padded(
values, mesh_to_faces_packed_first_idx, max_faces
)
values_padded_torch = TestPackedToPadded.packed_to_padded_python(
values_torch, mesh_to_faces_packed_first_idx, max_faces, device
)
# check forward
self.assertClose(values_padded, values_padded_torch)
# check backward
if D == 0:
grad_inputs = torch.rand((len(meshes), max_faces), device=device)
else:
grad_inputs = torch.rand((len(meshes), max_faces, D), device=device)
values_padded.backward(grad_inputs)
grad_outputs = values.grad
values_padded_torch.backward(grad_inputs)
grad_outputs_torch1 = values_torch.grad
grad_outputs_torch2 = TestPackedToPadded.padded_to_packed_python(
grad_inputs, mesh_to_faces_packed_first_idx, values.size(0), device=device
)
self.assertClose(grad_outputs, grad_outputs_torch1)
self.assertClose(grad_outputs, grad_outputs_torch2)
def test_packed_to_padded_flat_cpu(self):
self._test_packed_to_padded_helper(0, "cpu")
def test_packed_to_padded_D1_cpu(self):
self._test_packed_to_padded_helper(1, "cpu")
def test_packed_to_padded_D16_cpu(self):
self._test_packed_to_padded_helper(16, "cpu")
def test_packed_to_padded_flat_cuda(self):
device = get_random_cuda_device()
self._test_packed_to_padded_helper(0, device)
def test_packed_to_padded_D1_cuda(self):
device = get_random_cuda_device()
self._test_packed_to_padded_helper(1, device)
def test_packed_to_padded_D16_cuda(self):
device = get_random_cuda_device()
self._test_packed_to_padded_helper(16, device)
def _test_padded_to_packed_helper(self, D, device):
"""
Check the results from packed_to_padded and PyTorch implementations
are the same.
"""
meshes = self.init_meshes(16, 100, 300, device=device)
mesh_to_faces_packed_first_idx = meshes.mesh_to_faces_packed_first_idx()
num_faces_per_mesh = meshes.num_faces_per_mesh()
max_faces = num_faces_per_mesh.max().item()
if D == 0:
values = torch.rand((len(meshes), max_faces), device=device)
else:
values = torch.rand((len(meshes), max_faces, D), device=device)
for i, num in enumerate(num_faces_per_mesh):
values[i, num:] = 0
values.requires_grad = True
values_torch = values.detach().clone()
values_torch.requires_grad = True
values_packed = padded_to_packed(
values, mesh_to_faces_packed_first_idx, num_faces_per_mesh.sum().item()
)
values_packed_torch = TestPackedToPadded.padded_to_packed_python(
values_torch,
mesh_to_faces_packed_first_idx,
num_faces_per_mesh.sum().item(),
device,
)
# check forward
self.assertClose(values_packed, values_packed_torch)
# check backward
if D == 0:
grad_inputs = torch.rand((num_faces_per_mesh.sum().item()), device=device)
else:
grad_inputs = torch.rand(
(num_faces_per_mesh.sum().item(), D), device=device
)
values_packed.backward(grad_inputs)
grad_outputs = values.grad
values_packed_torch.backward(grad_inputs)
grad_outputs_torch1 = values_torch.grad
grad_outputs_torch2 = TestPackedToPadded.packed_to_padded_python(
grad_inputs, mesh_to_faces_packed_first_idx, values.size(1), device=device
)
self.assertClose(grad_outputs, grad_outputs_torch1)
self.assertClose(grad_outputs, grad_outputs_torch2)
def test_padded_to_packed_flat_cpu(self):
self._test_padded_to_packed_helper(0, "cpu")
def test_padded_to_packed_D1_cpu(self):
self._test_padded_to_packed_helper(1, "cpu")
def test_padded_to_packed_D16_cpu(self):
self._test_padded_to_packed_helper(16, "cpu")
def test_padded_to_packed_flat_cuda(self):
device = get_random_cuda_device()
self._test_padded_to_packed_helper(0, device)
def test_padded_to_packed_D1_cuda(self):
device = get_random_cuda_device()
self._test_padded_to_packed_helper(1, device)
def test_padded_to_packed_D16_cuda(self):
device = get_random_cuda_device()
self._test_padded_to_packed_helper(16, device)
def test_invalid_inputs_shapes(self, device="cuda:0"):
with self.assertRaisesRegex(ValueError, "input can only be 2-dimensional."):
values = torch.rand((100, 50, 2), device=device)
first_idxs = torch.tensor([0, 80], dtype=torch.int64, device=device)
packed_to_padded(values, first_idxs, 100)
with self.assertRaisesRegex(ValueError, "input can only be 3-dimensional."):
values = torch.rand((100,), device=device)
first_idxs = torch.tensor([0, 80], dtype=torch.int64, device=device)
padded_to_packed(values, first_idxs, 20)
with self.assertRaisesRegex(ValueError, "input can only be 3-dimensional."):
values = torch.rand((100, 50, 2, 2), device=device)
first_idxs = torch.tensor([0, 80], dtype=torch.int64, device=device)
padded_to_packed(values, first_idxs, 20)
@staticmethod
def packed_to_padded_with_init(
num_meshes: int, num_verts: int, num_faces: int, num_d: int, device: str = "cpu"
):
meshes = TestPackedToPadded.init_meshes(
num_meshes, num_verts, num_faces, device
)
faces = meshes.faces_packed()
mesh_to_faces_packed_first_idx = meshes.mesh_to_faces_packed_first_idx()
max_faces = meshes.num_faces_per_mesh().max().item()
if num_d == 0:
values = torch.rand((faces.shape[0],), device=meshes.device)
else:
values = torch.rand((faces.shape[0], num_d), device=meshes.device)
torch.cuda.synchronize()
def out():
packed_to_padded(values, mesh_to_faces_packed_first_idx, max_faces)
torch.cuda.synchronize()
return out
@staticmethod
def packed_to_padded_with_init_torch(
num_meshes: int, num_verts: int, num_faces: int, num_d: int, device: str = "cpu"
):
meshes = TestPackedToPadded.init_meshes(
num_meshes, num_verts, num_faces, device
)
faces = meshes.faces_packed()
mesh_to_faces_packed_first_idx = meshes.mesh_to_faces_packed_first_idx()
max_faces = meshes.num_faces_per_mesh().max().item()
if num_d == 0:
values = torch.rand((faces.shape[0],), device=meshes.device)
else:
values = torch.rand((faces.shape[0], num_d), device=meshes.device)
torch.cuda.synchronize()
def out():
TestPackedToPadded.packed_to_padded_python(
values, mesh_to_faces_packed_first_idx, max_faces, device
)
torch.cuda.synchronize()
return out
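The packed-to-padded layout exercised by these tests can be illustrated without torch; this list-based sketch (illustrative values only) mirrors the reference helpers above:

```python
def packed_to_padded_list(inputs, first_idxs, max_size):
    """Scatter a packed flat list into one zero-padded row per mesh."""
    num = len(first_idxs)
    padded = [[0] * max_size for _ in range(num)]
    for m in range(num):
        s = first_idxs[m]
        # the last mesh owns everything up to the end of the packed list
        f = len(inputs) if m == num - 1 else first_idxs[m + 1]
        padded[m][: f - s] = inputs[s:f]
    return padded

# Three "meshes" owning 2, 1 and 3 faces respectively:
print(packed_to_padded_list([10, 11, 20, 30, 31, 32], [0, 2, 3], 3))
# [[10, 11, 0], [20, 0, 0], [30, 31, 32]]
```

`padded_to_packed` is simply the inverse gather: read back the first `f - s` entries of each row.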
# --- pysrc/classifier.py (repo: CrackerCat/xed, Apache-2.0 license) ---

#!/usr/bin/env python
# -*- python -*-
#BEGIN_LEGAL
#
#Copyright (c) 2019 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
from __future__ import print_function
import re
import genutil
import codegen
def _emit_function(fe, isa_sets, name):
fo = codegen.function_object_t('xed_classify_{}'.format(name))
fo.add_arg('const xed_decoded_inst_t* d')
fo.add_code_eol(' const xed_isa_set_enum_t isa_set = xed_decoded_inst_get_isa_set(d)')
# FIXME: 2017-07-14 optimization: could use a static array for faster checking, smaller code
switch = codegen.c_switch_generator_t('isa_set', fo)
isa_sets_sorted = sorted(isa_sets)
for c in isa_sets_sorted:
switch.add_case('XED_ISA_SET_{}'.format(c.upper()),[],do_break=False)
if len(isa_sets) > 0:
switch.add('return 1;')
switch.add_default(['return 0;'], do_break=False)
switch.finish()
fo.emit_file_emitter(fe)
def work(agi):
sse_isa_sets = set([])
avx_isa_sets = set([])
avx512_isa_sets = set([])
avx512_kmask_op = set([])
for generator in agi.generator_list:
for ii in generator.parser_output.instructions:
if genutil.field_check(ii, 'iclass'):
if re.search('AVX512',ii.isa_set):
avx512_isa_sets.add(ii.isa_set)
if re.search('KOP',ii.isa_set):
avx512_kmask_op.add(ii.isa_set)
elif re.search('AVX',ii.isa_set) or ii.isa_set in ['F16C', 'FMA']:
avx_isa_sets.add(ii.isa_set)
elif re.search('SSE',ii.isa_set) or ii.isa_set in ['AES','PCLMULQDQ']:
# Exclude MMX instructions that come in with SSE2 &
# SSSE3. The several purely MMX instr in SSE are
# "SSE-opcodes" with memop operands. One can look for
# those with SSE2MMX and SSSE3MMX xed isa_sets.
#
# Also exclude the SSE_PREFETCH operations; Those are
# just memops.
if (not re.search('MMX',ii.isa_set) and not re.search('PREFETCH',ii.isa_set)
and not re.search('X87',ii.isa_set) and not re.search('MWAIT',ii.isa_set)):
sse_isa_sets.add(ii.isa_set)
fe = agi.open_file('xed-classifiers.c') # xed_file_emitter_t
_emit_function(fe, avx512_isa_sets, 'avx512')
_emit_function(fe, avx512_kmask_op, 'avx512_maskop')
_emit_function(fe, avx_isa_sets, 'avx')
_emit_function(fe, sse_isa_sets, 'sse')
fe.close()
return
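The branch order in `work()` matters: 'AVX512' sets must be tested before the plain 'AVX' regex, or they would be mis-bucketed. A standalone restatement of the bucketing (illustrative only; the MMX/PREFETCH/X87/MWAIT exclusions for the SSE bucket are omitted for brevity):

```python
import re

def classify_isa_set(isa_set):
    """Bucket an isa_set string with the same checks, in the same order, as work()."""
    if re.search('AVX512', isa_set):
        return 'avx512'
    if re.search('AVX', isa_set) or isa_set in ('F16C', 'FMA'):
        return 'avx'
    if re.search('SSE', isa_set) or isa_set in ('AES', 'PCLMULQDQ'):
        return 'sse'
    return None

print(classify_isa_set('AVX512F'))  # avx512 (matched before the plain AVX check)
print(classify_isa_set('FMA'))      # avx
print(classify_isa_set('SSE4'))     # sse
```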
# --- tests/web/config.py (repo: zcqian/biothings.api, Apache-2.0 license) ---

"""
Web settings to override for testing.
"""
import os
from biothings.web.settings.default import QUERY_KWARGS
# *****************************************************************************
# Elasticsearch Variables
# *****************************************************************************
ES_INDEX = 'bts_test'
ES_DOC_TYPE = 'gene'
ES_SCROLL_SIZE = 60
# *****************************************************************************
# User Input Control
# *****************************************************************************
# use a smaller size for testing
QUERY_KWARGS['GET']['facet_size']['default'] = 3
QUERY_KWARGS['GET']['facet_size']['max'] = 5
QUERY_KWARGS['POST']['q']['jsoninput'] = True
# *****************************************************************************
# Elasticsearch Query Builder
# *****************************************************************************
ALLOW_RANDOM_QUERY = True
ALLOW_NESTED_AGGS = True
USERQUERY_DIR = os.path.join(os.path.dirname(__file__), 'userquery')
# *****************************************************************************
# Endpoints Specifics
# *****************************************************************************
STATUS_CHECK = {
'id': '1017',
'index': 'bts_test',
'doc_type': '_all'
}
# --- cogs/carbon.py (repo: Baracchino-Della-Scuola/Bot, MIT license) ---

import discord
from discord.ext import commands
import urllib.parse
from .constants import themes, controls, languages, fonts, escales
import os
from pathlib import Path
from typing import Any
# from pyppeteer import launch
from io import BytesIO
import requests
def encode_url(text: str) -> str:
first_encoding = urllib.parse.quote(text, safe="*()")
return urllib.parse.quote(first_encoding, safe="*") # Carbonsh encodes text twice
def hex_to_rgb(hex: str) -> tuple:
"""
Args:
hex (str):
"""
return tuple(int(hex.lstrip("#")[i : i + 2], 16) for i in (0, 2, 4))
def parse_bg(background) -> str:
if background == "":
return "rgba(171, 184, 195, 1)"
elif background[0] == "#" or "(" not in background:
return f"rgba{hex_to_rgb(background) + (1,)}"
return background
def int_to_px(number) -> str:
return f"{number}px"
def int_to_percent(number) -> str:
return f"{number}%"
def trim_url(text: str) -> str:
if len(text) < 2000:
return text
if "%25" not in text:
return text[:2000]
    # Cutting at 2000 is safe if a complete "%25" sequence starts exactly there.
    # (The original `text[:2003][:-3] == "%25"` compared a ~2000-char string to
    # "%25" and could never be true.)
    if text[2000:2003] == "%25":
        return text[:2000]
last_percent = text[:2000].rindex("%25")
return text[:last_percent]
_carbon_url = "https://carbonnowsh.herokuapp.com/"
def code_to_url(code: str) -> str:
return f"{_carbon_url}?&code={trim_url(encode_url(code))}"
class Carbon(commands.Cog):
def __init__(self, bot):
self.bot = bot
@commands.command()
async def carbonate(self, ctx, *, code):
carbon_url = code_to_url(code)
r = requests.get(carbon_url)
b = BytesIO(r.content)
await ctx.send(file=discord.File(fp=b, filename="code.png"))
async def setup(bot):
await bot.add_cog(Carbon(bot))
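The small pure helpers in this cog are easy to sanity-check in isolation. The copies below restate them so the snippet runs standalone (they are not imported from the cog):

```python
import urllib.parse

def hex_to_rgb(hex: str) -> tuple:
    # same logic as the cog's helper above
    return tuple(int(hex.lstrip("#")[i : i + 2], 16) for i in (0, 2, 4))

def parse_bg(background) -> str:
    if background == "":
        return "rgba(171, 184, 195, 1)"
    elif background[0] == "#" or "(" not in background:
        return f"rgba{hex_to_rgb(background) + (1,)}"
    return background

def encode_url(text: str) -> str:
    # double-encoding: "%20" becomes "%2520" on the second pass
    first_encoding = urllib.parse.quote(text, safe="*()")
    return urllib.parse.quote(first_encoding, safe="*")

print(hex_to_rgb("#ff8800"))   # (255, 136, 0)
print(parse_bg("#ff8800"))     # rgba(255, 136, 0, 1)
print(encode_url("a b"))       # a%2520b
```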
# --- examples/mcp3xxx_mcp3002_single_ended_simpletest.py (repo: sommersoft/Adafruit_CircuitPython_MCP3xxx, MIT license) ---

import busio
import digitalio
import board
import adafruit_mcp3xxx.mcp3002 as MCP
from adafruit_mcp3xxx.analog_in import AnalogIn
# create the spi bus
spi = busio.SPI(clock=board.SCK, MISO=board.MISO, MOSI=board.MOSI)
# create the cs (chip select)
cs = digitalio.DigitalInOut(board.D5)
# create the mcp object
mcp = MCP.MCP3002(spi, cs)
# create an analog input channel on pin 0
chan = AnalogIn(mcp, MCP.P0)
print("Raw ADC Value: ", chan.value)
print("ADC Voltage: " + str(chan.voltage) + "V")
# --- glue/core/data_factories/tables.py (repo: rosteen/glue, BSD-3-Clause license) ---

from glue.core.data_factories.helpers import has_extension
from glue.config import data_factory
__all__ = ['tabular_data']
@data_factory(label="ASCII Table",
identifier=has_extension('csv txt tsv tbl dat '
'csv.gz txt.gz tbl.bz '
'dat.gz'),
priority=1)
def tabular_data(path, **kwargs):
from glue.core.data_factories.astropy_table import astropy_tabular_data
from glue.core.data_factories.pandas import pandas_read_table
for fac in [astropy_tabular_data, pandas_read_table]:
try:
return fac(path, **kwargs)
except Exception:
pass
    else:
        # for/else: reached only when no factory returned successfully
        raise IOError("Could not parse file: %s" % path)
# --- d00dfeed/analyses/print_sloc_per_soc.py (repo: rehosting/rehosting_sok, MIT license) ---

# External deps
import os, sys, json
from pathlib import Path
from typing import Dict, List
# Internal deps
os.chdir(sys.path[0])
sys.path.append("..")
import df_common as dfc
import analyses_common as ac
# Generated files directory
GEN_FILE_DIR = str(Path(__file__).resolve().parent.parent) + os.sep + "generated_files" # TODO: ugly parent.parent pathing
if os.path.exists(GEN_FILE_DIR):
sys.path.append(GEN_FILE_DIR)
if os.path.exists(os.path.join(GEN_FILE_DIR, "sloc_cnt.py")):
from sloc_cnt import DRIVER_NAME_TO_SLOC
else:
print("Error: no SLOC file! Run \'df_analyze.py\' with \'--linux-src-dir\'")
sys.exit(1)
if __name__ == "__main__":
json_files = ac.argparse_and_get_files("Graph SLOC/SoC data")
soc_sloc_by_arch: Dict[str, List[int]] = {}
print("Gathering SLOC average by arch...")
from graph_dd_sloc_by_arch import get_sloc_avg_and_list_by_arch
cmp_by_arch = ac.build_dict_two_lvl_cnt(json_files, dfc.JSON_ARC, dfc.JSON_CMP_STR)
avg_sloc_by_arch, sloc_list_by_arch = get_sloc_avg_and_list_by_arch(cmp_by_arch, verbose = False)
# Collection
print("Iterating DTBs/SoCs...")
for dtb_json in json_files:
with open(dtb_json) as json_file:
data = json.load(json_file)
soc_sloc = 0
arch = data[dfc.JSON_ARC]
cmp_strs = data[dfc.JSON_CMP_STR]
# Total SLOC for this SoC
for cmp_str in cmp_strs:
driver_sloc = dfc.cmp_str_to_sloc(cmp_str)
if not driver_sloc: # Closed-source driver
driver_sloc = avg_sloc_by_arch[arch]
soc_sloc += driver_sloc
#print("{}: {}".format(cmp_str, driver_sloc))
        if arch not in soc_sloc_by_arch:
            soc_sloc_by_arch[arch] = []
        # append unconditionally, so the first SoC seen per arch is counted too
        # (the original if/else silently dropped that first sample)
        soc_sloc_by_arch[arch].append(soc_sloc)
print("{} ({}): {}".format(dtb_json.split(os.sep)[-1], arch, soc_sloc))
# Final stats
ac.print_mean_median_std_dev_for_dict_of_lists(soc_sloc_by_arch,
"\nSloc Per Soc, format: [arch : (mean, median, std_dev)]\n")
# --- silver_bullet/crypto.py (repo: Hojung-Jeong/Silver-Bullet-Encryption-Tool, Apache-2.0 license) ---

'''
>List of functions
1. encrypt(user_input,passphrase) - Encrypt the given string with the given passphrase. Returns cipher text and locked pad.
2. decrypt(cipher_text,locked_pad,passphrase) - Decrypt the cipher text encrypted with SBET. It requires cipher text, locked pad, and passphrase.
'''
# CODE ========================================================================
import zlib
import random
from hashlib import sha1
from silver_bullet.TRNG import trlist
from silver_bullet.contain_value import contain
ascii_value=256
def ciphering(target_list,pad,decrypt=False):
result=[]
for counter in range(len(pad)):
if decrypt==False:
operated=contain(target_list[counter]+pad[counter],ascii_value)
else:
operated=contain(int(target_list[counter])-pad[counter],ascii_value)
result.append(operated)
return result
def locker(pad,passphrase):
cutter=round(len(passphrase)/2)
splited=[passphrase[:cutter],passphrase[cutter:]]
locker=[0 for counter in range(len(pad))]
for element in splited:
bloated_seed=sha1(element.encode()).hexdigest()
random.seed(bloated_seed)
locker=[contain(random.randrange(ascii_value)+element,ascii_value) for element in locker]
holder=[]
for counter in range(len(pad)):
operated=int(pad[counter])^locker[counter]
holder.append(operated)
return holder
def encrypt(user_input,passphrase):
compressed=zlib.compress(user_input.encode())
ui_listed=list(compressed)
pad=trlist(len(ui_listed),ascii_value)
ct=ciphering(ui_listed,pad)
lp=locker(pad,passphrase)
cipher_text=' '.join(map(str,ct))
locked_pad=' '.join(map(str,lp))
return cipher_text, locked_pad
def decrypt(cipher_text,locked_pad,passphrase):
ct=cipher_text.split(' ')
lp=locked_pad.split(' ')
pad=locker(lp,passphrase)
pt=ciphering(ct,pad,True)
byted=bytes(pt)
decompressed=zlib.decompress(byted).decode()
	return decompressed
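At its core, `ciphering()` is an additive one-time pad modulo 256. A self-contained sketch of that round trip (with made-up pad values; the real code draws its pad from `silver_bullet.TRNG.trlist` and wraps values with `contain()`):

```python
pad = [7, 200, 33]       # hypothetical one-time pad bytes
plain = [104, 105, 33]   # plaintext bytes

# encrypt: add pad byte-wise, wrapping at 256 (what ciphering() does)
cipher = [(p + k) % 256 for p, k in zip(plain, pad)]

# decrypt: subtract the same pad, wrapping at 256
recovered = [(c - k) % 256 for c, k in zip(cipher, pad)]

print(cipher)              # [111, 49, 66]
print(recovered == plain)  # True
```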
# --- app/nextMoveLogic.py (repo: thekitbag/starter-snake-python, MIT license) ---

import random
class Status(object):
    def getHeadPosition(gamedata):
        me = gamedata['you']
        my_position = me['body']
        head = my_position[0]
        return head

    def getMyLength(gamedata):
        me = gamedata['you']
        my_position = me['body']
        if my_position[0] == my_position[1] == my_position[2]:
            return 1
        elif my_position[1] == my_position[2]:
            return 2
        else:
            return len(my_position)

    def getMyDirection(gamedata):
        me = gamedata['you']
        my_position = me['body']
        if Status.getMyLength(gamedata) == 1:
            return 'none'
        elif my_position[0]['x'] > my_position[1]['x']:
            return 'right'
        elif my_position[0]['x'] < my_position[1]['x']:
            return 'left'
        elif my_position[0]['x'] == my_position[1]['x'] and my_position[0]['y'] < my_position[1]['y']:
            return 'up'
        else:
            return 'down'

    def getHealth(gamedata):
        pass

    def getBoardSize(gamedata):
        board_height = gamedata['board']['height']
        board_width = gamedata['board']['width']
        dimensions = {'height': board_height, 'width': board_width}
        return dimensions

    def getFoodPositions(gamedata):
        pass

    def getSnakesPositions(gamedata):
        pass
class Assess(object):
    def wallProximity(gamedata):
        """Returns proximity to a wall:
        either parallel to, head-on or corner."""
        head = Status.getHeadPosition(gamedata)
        board_size = Status.getBoardSize(gamedata)
        direction = Status.getMyDirection(gamedata)
        height = board_size['height'] - 1
        width = board_size['width'] - 1
        # corners
        if head['x'] == 0 and head['y'] == 0:
            return {'type': 'corner', 'identifier': 'top left', 'direction': direction}
        elif head['x'] == 0 and head['y'] == height:
            return {'type': 'corner', 'identifier': 'bottom left', 'direction': direction}
        elif head['x'] == width and head['y'] == 0:
            return {'type': 'corner', 'identifier': 'top right', 'direction': direction}
        elif head['x'] == width and head['y'] == height:
            return {'type': 'corner', 'identifier': 'bottom right', 'direction': direction}
        # head-ons
        elif head['x'] == 0 and direction == 'left':
            return {'type': 'head-on', 'identifier': 'left', 'direction': direction}
        elif head['y'] == 0 and direction == 'up':
            return {'type': 'head-on', 'identifier': 'top', 'direction': direction}
        elif head['x'] == width and direction == 'right':
            return {'type': 'head-on', 'identifier': 'right', 'direction': direction}
        elif head['y'] == height and direction == 'down':
            return {'type': 'head-on', 'identifier': 'bottom', 'direction': direction}
        # parallels
        elif head['x'] == 0 and direction in ('up', 'down'):
            return {'type': 'parallel', 'identifier': 'left', 'direction': direction}
        elif head['y'] == 0 and direction in ('right', 'left'):
            return {'type': 'parallel', 'identifier': 'top', 'direction': direction}
        elif head['x'] == width and direction in ('down', 'up'):
            return {'type': 'parallel', 'identifier': 'right', 'direction': direction}
        elif head['y'] == height and direction in ('left', 'right'):
            return {'type': 'parallel', 'identifier': 'bottom', 'direction': direction}
        else:
            return False

    def ownBodyProximity(gamedata):
        pass

    def killPossible(gamedata):
        pass

    def smallerSnakeNearby(gamedata):
        pass

    def biggerSnakeNearby(gamedata):
        pass

    def foodNearby(gamedata):
        pass
class Action(object):
    def avoidDeath():
        pass

    def chaseFood():
        pass

    def fleeSnake():
        pass

    def chaseSnake():
        pass


class Decision(object):
    def chooseBestOption(gamedata):
        options = ['up', 'down', 'right', 'left']
        current_direction = Status.getMyDirection(gamedata)
        # first go: no direction yet, so any move is fine
        if current_direction == 'none':
            return random.choice(options)
        # remove the opposite direction, which would be an instant loss
        if current_direction == 'up':
            options.remove('down')
        if current_direction == 'down':
            options.remove('up')
        if current_direction == 'right':
            options.remove('left')
        if current_direction == 'left':
            options.remove('right')
        proximity = Assess.wallProximity(gamedata)
        # no danger: keep going
        if proximity is False:
            choice = current_direction
        # in a corner
        elif proximity['type'] == 'corner':
            options.remove(current_direction)
            if proximity['identifier'] == 'top left':
                choice = 'down' if 'up' in options else 'right'
            elif proximity['identifier'] == 'top right':
                choice = 'down' if 'up' in options else 'left'
            else:
                # bottom corners were unhandled in the original; any remaining option clears the wall
                choice = random.choice(options)
        # head-on: must turn one way or the other
        elif proximity['type'] == 'head-on':
            options.remove(current_direction)
            choice = random.choice(options)
        # parallel: hugging the wall is safe, keep going
        elif proximity['type'] == 'parallel':
            choice = current_direction
        else:
            # should be unreachable, but fail safe instead of leaving choice unbound
            choice = current_direction
        print(options)
        return choice
| 29.217647 | 122 | 0.655124 | 604 | 4,967 | 5.32947 | 0.155629 | 0.055918 | 0.06151 | 0.072693 | 0.462255 | 0.370923 | 0.331469 | 0.314073 | 0.282075 | 0.177074 | 0 | 0.008033 | 0.172941 | 4,967 | 169 | 123 | 29.390533 | 0.77556 | 0.034629 | 0 | 0.227642 | 0 | 0 | 0.155635 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.146341 | false | 0.097561 | 0.00813 | 0 | 0.357724 | 0.01626 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
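The head/neck comparison that `getMyDirection` performs can be isolated into a small standalone function (a sketch for illustration, not part of the original module):

```python
def direction(head: dict, neck: dict) -> str:
    """Infer travel direction from the first two body segments, as getMyDirection does."""
    if head["x"] > neck["x"]:
        return "right"
    if head["x"] < neck["x"]:
        return "left"
    if head["y"] < neck["y"]:
        return "up"
    return "down"
```

Note the board's y axis grows downward here, so a head with a smaller y than the neck means the snake is moving up.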
from dataclasses import dataclass
import math

import chess
import chess.engine
from chess import Color, Board
from chess.engine import SimpleEngine, Score
from chess.pgn import GameNode

from model import EngineMove, NextMovePair

nps = []


def material_count(board: Board, side: Color) -> int:
    values = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3, chess.ROOK: 5, chess.QUEEN: 9}
    return sum(len(board.pieces(piece_type, side)) * value for piece_type, value in values.items())


def material_diff(board: Board, side: Color) -> int:
    return material_count(board, side) - material_count(board, not side)


def is_up_in_material(board: Board, side: Color) -> bool:
    return material_diff(board, side) > 0


def get_next_move_pair(engine: SimpleEngine, node: GameNode, winner: Color, limit: chess.engine.Limit) -> NextMovePair:
    info = engine.analyse(node.board(), multipv=2, limit=limit)
    global nps
    nps.append(info[0]["nps"])
    nps = nps[-20:]  # keep a rolling window of the last 20 samples
    # print(info)
    best = EngineMove(info[0]["pv"][0], info[0]["score"].pov(winner))
    second = EngineMove(info[1]["pv"][0], info[1]["score"].pov(winner)) if len(info) > 1 else None
    return NextMovePair(node, winner, best, second)


def avg_knps():
    global nps
    return round(sum(nps) / len(nps) / 1000) if nps else 0
def win_chances(score: Score) -> float:
    """
    winning chances from -1 to 1 https://graphsketch.com/?eqn1_color=1&eqn1_eqn=100+*+%282+%2F+%281+%2B+exp%28-0.004+*+x%29%29+-+1%29&eqn2_color=2&eqn2_eqn=&eqn3_color=3&eqn3_eqn=&eqn4_color=4&eqn4_eqn=&eqn5_color=5&eqn5_eqn=&eqn6_color=6&eqn6_eqn=&x_min=-1000&x_max=1000&y_min=-100&y_max=100&x_tick=100&y_tick=10&x_label_freq=2&y_label_freq=2&do_grid=0&do_grid=1&bold_labeled_lines=0&bold_labeled_lines=1&line_width=4&image_w=850&image_h=525
    """
    mate = score.mate()
    if mate is not None:
        return 1 if mate > 0 else -1

    cp = score.score()
    return 2 / (1 + math.exp(-0.004 * cp)) - 1 if cp is not None else 0
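For centipawn scores this reduces to a logistic curve that is antisymmetric around zero; a stand-alone version of just that branch (no python-chess `Score` object needed):

```python
import math


def win_chance_from_cp(cp: int) -> float:
    """Map a centipawn score to a winning chance in [-1, 1] (same curve as win_chances)."""
    return 2 / (1 + math.exp(-0.004 * cp)) - 1
```

At 0 centipawns the chance is exactly 0, and an advantage of +x for one side mirrors -x for the other.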
CORRESP_TIME = 999999


def reject_by_time_control(line: str, has_master: bool, master_only: bool, bullet: bool, mates: bool) -> bool:
    if not line.startswith("[TimeControl "):
        return False
    if master_only and not has_master:
        return True
    try:
        seconds, increment = line[1:][:-2].split()[1].replace("\"", "").split("+")
        total = int(seconds) + int(increment) * 40
        if master_only or mates:
            if bullet:
                return total < 30 or total >= 160
            else:
                return total < 160 or total >= 480
        else:
            return total < (160 if has_master else 480)
    except Exception:
        return True
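The heart of the rejection logic is the total-time estimate: base seconds plus forty increments. Pulled out on its own (an illustrative helper, not from the original file):

```python
def estimated_game_seconds(time_control: str, moves: int = 40) -> int:
    """Estimate total thinking time for a '<seconds>+<increment>' time control string."""
    seconds, increment = time_control.split("+")
    return int(seconds) + int(increment) * moves
```

For example, a 300+2 game is estimated at 300 + 2 * 40 = 380 seconds, which lands inside the 160-480 band that the filter above keeps for non-master games.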
def exclude_rating(line: str, mates: bool) -> bool:
    if not line.startswith("[WhiteElo ") and not line.startswith("[BlackElo "):
        return False
    try:
        return int(line[11:15]) < (1501 if mates else 1600)
    except Exception:
        return True
"""
Testing if parso finds syntax errors and indentation errors.
"""
import sys
import warnings

import pytest

import parso
from parso._compatibility import is_pypy

from .failing_examples import FAILING_EXAMPLES, indent, build_nested


if is_pypy:
    # The errors in PyPy might be different. Just skip the module for now.
    pytestmark = pytest.mark.skip()


def _get_error_list(code, version=None):
    grammar = parso.load_grammar(version=version)
    tree = grammar.parse(code)
    return list(grammar.iter_errors(tree))


def assert_comparison(code, error_code, positions):
    errors = [(error.start_pos, error.code) for error in _get_error_list(code)]
    assert [(pos, error_code) for pos in positions] == errors
@pytest.mark.parametrize('code', FAILING_EXAMPLES)
def test_python_exception_matches(code):
    wanted, line_nr = _get_actual_exception(code)

    errors = _get_error_list(code)
    actual = None
    if errors:
        error, = errors
        actual = error.message
    assert actual in wanted
    # Somehow in Python 3.3 the SyntaxError().lineno is sometimes None
    assert line_nr is None or line_nr == error.start_pos[0]
def test_non_async_in_async():
    """
    This example doesn't work with FAILING_EXAMPLES, because the line numbers
    are not always the same / incorrect in Python 3.8.
    """
    if sys.version_info[:2] < (3, 5):
        pytest.skip()

    # Raises multiple errors in previous versions.
    code = 'async def foo():\n def nofoo():[x async for x in []]'
    wanted, line_nr = _get_actual_exception(code)

    errors = _get_error_list(code)
    if errors:
        error, = errors
        actual = error.message
        assert actual in wanted
        if sys.version_info[:2] < (3, 8):
            assert line_nr == error.start_pos[0]
        else:
            assert line_nr == 0  # For whatever reason this is zero in Python 3.8+
@pytest.mark.parametrize(
    ('code', 'positions'), [
        ('1 +', [(1, 3)]),
        ('1 +\n', [(1, 3)]),
        ('1 +\n2 +', [(1, 3), (2, 3)]),
        ('x + 2', []),
        ('[\n', [(2, 0)]),
        ('[\ndef x(): pass', [(2, 0)]),
        ('[\nif 1: pass', [(2, 0)]),
        ('1+?', [(1, 2)]),
        ('?', [(1, 0)]),
        ('??', [(1, 0)]),
        ('? ?', [(1, 0)]),
        ('?\n?', [(1, 0), (2, 0)]),
        ('? * ?', [(1, 0)]),
        ('1 + * * 2', [(1, 4)]),
        ('?\n1\n?', [(1, 0), (3, 0)]),
    ]
)
def test_syntax_errors(code, positions):
    assert_comparison(code, 901, positions)


@pytest.mark.parametrize(
    ('code', 'positions'), [
        (' 1', [(1, 0)]),
        ('def x():\n 1\n 2', [(3, 0)]),
        ('def x():\n 1\n 2', [(3, 0)]),
        ('def x():\n1', [(2, 0)]),
    ]
)
def test_indentation_errors(code, positions):
    assert_comparison(code, 903, positions)
def _get_actual_exception(code):
    with warnings.catch_warnings():
        # We don't care about warnings where locals/globals misbehave here.
        # It's as simple as either an error or not.
        warnings.filterwarnings('ignore', category=SyntaxWarning)
        try:
            compile(code, '<unknown>', 'exec')
        except (SyntaxError, IndentationError) as e:
            wanted = e.__class__.__name__ + ': ' + e.msg
            line_nr = e.lineno
        except ValueError as e:
            # The ValueError comes from byte literals in Python 2 like '\x'
            # that are oddly enough not SyntaxErrors.
            wanted = 'SyntaxError: (value error) ' + str(e)
            line_nr = None
        else:
            assert False, "The piece of code should raise an exception."

    # SyntaxError
    # Python 2.6 has a bit different error messages here, so skip it.
    if sys.version_info[:2] == (2, 6) and wanted == 'SyntaxError: unexpected EOF while parsing':
        wanted = 'SyntaxError: invalid syntax'

    if wanted == 'SyntaxError: non-keyword arg after keyword arg':
        # The python 3.5+ way, a bit nicer.
        wanted = 'SyntaxError: positional argument follows keyword argument'
    elif wanted == 'SyntaxError: assignment to keyword':
        return [wanted, "SyntaxError: can't assign to keyword",
                'SyntaxError: cannot assign to __debug__'], line_nr
    elif wanted == 'SyntaxError: assignment to None':
        # Python 2.6 has a slightly different error.
        wanted = 'SyntaxError: cannot assign to None'
    elif wanted == 'SyntaxError: can not assign to __debug__':
        # Python 2.6 has a slightly different error.
        wanted = 'SyntaxError: cannot assign to __debug__'
    elif wanted == 'SyntaxError: can use starred expression only as assignment target':
        # Python 3.3/3.4 have a bit of a different warning than 3.5/3.6 in
        # certain places. But in others this error makes sense.
        return [wanted, "SyntaxError: can't use starred expression here"], line_nr
    elif wanted == 'SyntaxError: f-string: unterminated string':
        wanted = 'SyntaxError: EOL while scanning string literal'
    elif wanted == 'SyntaxError: f-string expression part cannot include a backslash':
        return [
            wanted,
            "SyntaxError: EOL while scanning string literal",
            "SyntaxError: unexpected character after line continuation character",
        ], line_nr
    elif wanted == "SyntaxError: f-string: expecting '}'":
        wanted = 'SyntaxError: EOL while scanning string literal'
    elif wanted == 'SyntaxError: f-string: empty expression not allowed':
        wanted = 'SyntaxError: invalid syntax'
    elif wanted == "SyntaxError: f-string expression part cannot include '#'":
        wanted = 'SyntaxError: invalid syntax'
    elif wanted == "SyntaxError: f-string: single '}' is not allowed":
        wanted = 'SyntaxError: invalid syntax'
    return [wanted], line_nr
def test_default_except_error_position():
    # For this error the position seemed to be one line off, but that doesn't
    # really matter.
    code = 'try: pass\nexcept: pass\nexcept X: pass'
    wanted, line_nr = _get_actual_exception(code)
    error, = _get_error_list(code)
    assert error.message in wanted
    assert line_nr != error.start_pos[0]
    # I think this is the better position.
    assert error.start_pos[0] == 2
def test_statically_nested_blocks():
    def build(code, depth):
        if depth == 0:
            return code
        new_code = 'if 1:\n' + indent(code)
        return build(new_code, depth - 1)

    def get_error(depth, add_func=False):
        code = build('foo', depth)
        if add_func:
            code = 'def bar():\n' + indent(code)
        errors = _get_error_list(code)
        if errors:
            assert errors[0].message == 'SyntaxError: too many statically nested blocks'
            return errors[0]
        return None

    assert get_error(19) is None
    assert get_error(19, add_func=True) is None

    assert get_error(20)
    assert get_error(20, add_func=True)
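The same limit can be observed directly with CPython's `compile`, which caps statically nested blocks at 20 (a standalone sketch, independent of parso):

```python
def nested_if(depth: int) -> str:
    """Build `depth` nested `if 1:` blocks around a final `pass`."""
    lines = [" " * i + "if 1:" for i in range(depth)]
    lines.append(" " * depth + "pass")
    return "\n".join(lines)


def compiles(source: str) -> bool:
    """True when CPython accepts the source, False on SyntaxError."""
    try:
        compile(source, "<nested>", "exec")
        return True
    except SyntaxError:
        return False
```

A shallow nesting compiles fine, while something far past the limit is rejected with "too many statically nested blocks".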
def test_future_import_first():
    def is_issue(code, *args):
        code = code % args
        return bool(_get_error_list(code))

    i1 = 'from __future__ import division'
    i2 = 'from __future__ import absolute_import'
    assert not is_issue(i1)
    assert not is_issue(i1 + ';' + i2)
    assert not is_issue(i1 + '\n' + i2)
    assert not is_issue('"";' + i1)
    assert not is_issue('""\n' + i1)
    assert not is_issue('""\n%s\n%s', i1, i2)
    assert not is_issue('""\n%s;%s', i1, i2)
    assert not is_issue('"";%s;%s ', i1, i2)
    assert not is_issue('"";%s\n%s ', i1, i2)
    assert is_issue('1;' + i1)
    assert is_issue('1\n' + i1)
    assert is_issue('"";1\n' + i1)
    assert is_issue('""\n%s\nfrom x import a\n%s', i1, i2)
    assert is_issue('%s\n""\n%s', i1, i2)
def test_named_argument_issues(works_not_in_py):
    message = works_not_in_py.get_error_message('def foo(*, **dict): pass')
    message = works_not_in_py.get_error_message('def foo(*): pass')
    if works_not_in_py.version.startswith('2'):
        assert message == 'SyntaxError: invalid syntax'
    else:
        assert message == 'SyntaxError: named arguments must follow bare *'

    works_not_in_py.assert_no_error_in_passing('def foo(*, name): pass')
    works_not_in_py.assert_no_error_in_passing('def foo(bar, *, name=1): pass')
    works_not_in_py.assert_no_error_in_passing('def foo(bar, *, name=1, **dct): pass')
def test_escape_decode_literals(each_version):
    """
    We are using internal functions to assure that unicode/bytes escaping is
    without syntax errors. Here we make a bit of quality assurance that this
    works through versions, because the internal function might change over
    time.
    """
    def get_msg(end, to=1):
        base = "SyntaxError: (unicode error) 'unicodeescape' " \
               "codec can't decode bytes in position 0-%s: " % to
        return base + end

    def get_msgs(escape):
        return (get_msg('end of string in escape sequence'),
                get_msg(r"truncated %s escape" % escape))

    error, = _get_error_list(r'u"\x"', version=each_version)
    assert error.message in get_msgs(r'\xXX')

    error, = _get_error_list(r'u"\u"', version=each_version)
    assert error.message in get_msgs(r'\uXXXX')

    error, = _get_error_list(r'u"\U"', version=each_version)
    assert error.message in get_msgs(r'\UXXXXXXXX')

    error, = _get_error_list(r'u"\N{}"', version=each_version)
    assert error.message == get_msg(r'malformed \N character escape', to=2)

    error, = _get_error_list(r'u"\N{foo}"', version=each_version)
    assert error.message == get_msg(r'unknown Unicode character name', to=6)

    # Finally bytes.
    error, = _get_error_list(r'b"\x"', version=each_version)
    wanted = r'SyntaxError: (value error) invalid \x escape'
    if sys.version_info >= (3, 0):
        # The positioning information is only available in Python 3.
        wanted += ' at position 0'
    assert error.message == wanted
def test_too_many_levels_of_indentation():
    assert not _get_error_list(build_nested('pass', 99))
    assert _get_error_list(build_nested('pass', 100))
    base = 'def x():\n if x:\n'
    assert not _get_error_list(build_nested('pass', 49, base=base))
    assert _get_error_list(build_nested('pass', 50, base=base))
@pytest.mark.parametrize(
    'code', [
        "f'{*args,}'",
        r'f"\""',
        r'f"\\\""',
        r'fr"\""',
        r'fr"\\\""',
        r"print(f'Some {x:.2f} and some {y}')",
    ]
)
def test_valid_fstrings(code):
    assert not _get_error_list(code, version='3.6')


@pytest.mark.parametrize(
    ('code', 'message'), [
        ("f'{1+}'", ('invalid syntax')),
        (r'f"\"', ('invalid syntax')),
        (r'fr"\"', ('invalid syntax')),
    ]
)
def test_invalid_fstrings(code, message):
    """
    Some fstring errors are handled differently in 3.6 and other versions.
    Therefore check specifically for these errors here.
    """
    error, = _get_error_list(code, version='3.6')
    assert message in error.message


@pytest.mark.parametrize(
    'code', [
        "from foo import (\nbar,\n rab,\n)",
        "from foo import (bar, rab, )",
    ]
)
def test_trailing_comma(code):
    errors = _get_error_list(code)
    assert not errors
import getpass
from plumbum import local
from plumbum.machines.paramiko_machine import ParamikoMachine
from plumbum.path.utils import copy


def _once(f):
    res = None

    def wrapped(*args, **kwargs):
        nonlocal res
        if res is None:
            res = f(*args, **kwargs)
        return res

    return wrapped


@_once
def get_remote_machine_with_password(host, user):
    password = getpass.getpass(prompt=f"Password for {user}@{host}: ", stream=None)
    rem = ParamikoMachine(host, user=user, password=password)
    return rem


@_once
def get_remote_machine(host, user, keyfile):
    rem = ParamikoMachine(host, user=user, keyfile=keyfile)
    return rem


def get_local_machine():
    return local
def with_machine_rule(cls):
    old_init = cls.__init__

    def new_init(self, config):
        if "machine" not in config:
            machine_type = "local"
        else:
            machine_type = config["machine"]["type"]
        if machine_type == "local":
            self.machine = get_local_machine()
            self.files_to_copy = None
        elif machine_type == "remote":
            if "keyfile" in config["machine"]:
                self.machine = get_remote_machine(config["machine"]["host"], config["machine"]["user"], config["machine"]["keyfile"])
            else:
                self.machine = get_remote_machine_with_password(config["machine"]["host"], config["machine"]["user"])
            self.files_to_copy = config["machine"].get("files_to_copy")
        else:
            raise ValueError(f"Invalid machine type: {config['machine']['type']}")
        self.machine_type = machine_type
        old_init(self, config)

    cls.__init__ = new_init

    old_apply = cls.apply

    def new_apply(self, project):
        with self.machine.tempdir() as tempdir:
            project_path = tempdir / "project"
            project_path.mkdir()
            existing_files = set([f.name for f in project.root.list()])
            if self.files_to_copy:
                for fname in self.files_to_copy:
                    if fname in existing_files:
                        copy(project.root / fname, project_path / fname)
            else:
                for f in project.files():
                    if f.name in existing_files:
                        copy(f.path, project_path / f.name)
            with self.machine.cwd(project_path):
                self.session = self.machine.session()
                self.session.run(f"cd {project_path}")
                return old_apply(self, project)

    cls.apply = new_apply
    return cls
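The `_once` helper above is a cache-first-result decorator. The same idea in miniature, with a call log to show the wrapped function runs only once (note that, like `_once`, it ignores arguments on later calls and would re-run a function whose first result is `None`):

```python
def once(f):
    """Cache the first non-None result and return it for every later call."""
    result = None

    def wrapped(*args, **kwargs):
        nonlocal result
        if result is None:
            result = f(*args, **kwargs)
        return result

    return wrapped


calls = []


@once
def connect(host):
    # Record each real invocation so we can see the caching at work.
    calls.append(host)
    return f"connection to {host}"
```

This matches how the module uses it: the first `get_remote_machine*` call prompts and connects, every later call reuses the same machine object.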
#!/usr/bin/env python
import os
import os.path
from unittest import TestCase, main

import airflow
import airflow.plugins_manager
from airflow import configuration
from flask import Flask

from airflow_code_editor.commons import PLUGIN_NAME
from airflow_code_editor.tree import (
    get_tree,
)

assert airflow.plugins_manager
app = Flask(__name__)


class TestTree(TestCase):
    def setUp(self):
        self.root_dir = os.path.dirname(os.path.realpath(__file__))
        configuration.conf.set(PLUGIN_NAME, 'git_init_repo', 'False')
        configuration.conf.set(PLUGIN_NAME, 'root_directory', self.root_dir)

    def test_tree(self):
        with app.app_context():
            t = get_tree()
            self.assertTrue(len(t) > 0)
            self.assertTrue('git' in (x['id'] for x in t))

    def test_tags(self):
        with app.app_context():
            t = get_tree("tags")
            self.assertIsNotNone(t)

    def test_local_branches(self):
        with app.app_context():
            t = get_tree("local-branches")
            self.assertIsNotNone(t)

    def test_remote_branches(self):
        with app.app_context():
            t = get_tree("remote-branches")
            self.assertIsNotNone(t)

    def test_files(self):
        with app.app_context():
            t = get_tree("files")
            self.assertTrue(
                len([x.get('id') for x in t if x.get('id') == 'test_utils.py']) == 1
            )
            t = get_tree("files/folder")
            self.assertTrue(len([x.get('id') for x in t if x.get('id') == '1']) == 1)

    def test_git(self):
        with app.app_context():
            t = get_tree("git/HEAD")
            self.assertTrue(t is not None)
class TestTreeGitDisabled(TestCase):
    def setUp(self):
        self.root_dir = os.path.dirname(os.path.realpath(__file__))
        configuration.conf.set(PLUGIN_NAME, 'git_init_repo', 'False')
        configuration.conf.set(PLUGIN_NAME, 'root_directory', self.root_dir)
        configuration.conf.set(PLUGIN_NAME, 'git_enabled', 'False')

    def test_tree(self):
        with app.app_context():
            t = get_tree()
            self.assertTrue(len(t) > 0)
            self.assertTrue('git' not in (x['id'] for x in t))
            t = get_tree("tags")
            self.assertEqual(t, [])
            t = get_tree("local-branches")
            self.assertEqual(t, [])
            t = get_tree("remote-branches")
            self.assertEqual(t, [])
            t = get_tree("files")
            self.assertTrue(
                len([x.get('id') for x in t if x.get('id') == 'test_utils.py']) == 1
            )
            t = get_tree("files/folder")
            self.assertTrue(len([x.get('id') for x in t if x.get('id') == '1']) == 1)


if __name__ == '__main__':
    main()
from quart import Quart, jsonify, request
from quart_jwt_extended import (
    JWTManager,
    jwt_required,
    create_access_token,
    jwt_refresh_token_required,
    create_refresh_token,
    get_jwt_identity,
    fresh_jwt_required,
)

app = Quart(__name__)

app.config["JWT_SECRET_KEY"] = "super-secret"  # Change this!
jwt = JWTManager(app)


# Standard login endpoint. Will return a fresh access token and
# a refresh token
@app.route("/login", methods=["POST"])
async def login():
    username = (await request.get_json()).get("username", None)
    password = (await request.get_json()).get("password", None)
    if username != "test" or password != "test":
        return {"msg": "Bad username or password"}, 401

    # create_access_token supports an optional 'fresh' argument,
    # which marks the token as fresh or non-fresh accordingly.
    # As we just verified their username and password, we are
    # going to mark the token as fresh here.
    ret = {
        "access_token": create_access_token(identity=username, fresh=True),
        "refresh_token": create_refresh_token(identity=username),
    }
    return ret, 200
# Refresh token endpoint. This will generate a new access token from
# the refresh token, but will mark that access token as non-fresh,
# as we do not actually verify a password in this endpoint.
@app.route("/refresh", methods=["POST"])
@jwt_refresh_token_required
async def refresh():
    current_user = get_jwt_identity()
    new_token = create_access_token(identity=current_user, fresh=False)
    ret = {"access_token": new_token}
    return ret, 200


# Fresh login endpoint. This is designed to be used if we need to
# make a fresh token for a user (by verifying they have the
# correct username and password). Unlike the standard login endpoint,
# this will only return a new access token, so that we don't keep
# generating new refresh tokens, which entirely defeats their point.
@app.route("/fresh-login", methods=["POST"])
async def fresh_login():
    username = (await request.get_json()).get("username", None)
    password = (await request.get_json()).get("password", None)
    if username != "test" or password != "test":
        return {"msg": "Bad username or password"}, 401

    new_token = create_access_token(identity=username, fresh=True)
    ret = {"access_token": new_token}
    return ret, 200


# Any valid JWT can access this endpoint
@app.route("/protected", methods=["GET"])
@jwt_required
async def protected():
    username = get_jwt_identity()
    return dict(logged_in_as=username), 200


# Only fresh JWTs can access this endpoint
@app.route("/protected-fresh", methods=["GET"])
@fresh_jwt_required
async def protected_fresh():
    username = get_jwt_identity()
    return dict(fresh_logged_in_as=username), 200


if __name__ == "__main__":
    app.run()
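The freshness rule the example enforces can be sketched without Quart: a token minted at login carries a true `fresh` claim, a token minted from a refresh token carries a false one, and sensitive endpoints check that claim. The names below are illustrative stand-ins, not the real quart_jwt_extended API:

```python
def make_token(identity: str, fresh: bool) -> dict:
    """Minimal stand-in for a decoded JWT payload with a freshness claim."""
    return {"identity": identity, "fresh": fresh}


def allow_sensitive_action(token: dict) -> bool:
    """Mirror fresh_jwt_required: only tokens minted from an actual login pass."""
    return bool(token.get("fresh"))
```

So a token obtained via `/refresh` can still reach `/protected`, but `/protected-fresh` forces the user back through `/fresh-login` first.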
import os
os.environ['KERAS_BACKEND'] = 'theano'
os.environ['THEANO_FLAGS'] = 'floatX=float32,device=cpu'

import cloudpickle as pickle
import numpy as np
import pandas as pd

import keras
from keras.layers import Input, Dense
from keras.models import Model
from keras.models import save_model, load_model
from sklearn.preprocessing import StandardScaler, MinMaxScaler, Normalizer

import pipeline_invoke


if __name__ == '__main__':
    df = pd.read_csv("../input/training/training.csv")
    df["People per Television"] = pd.to_numeric(df["People per Television"], errors='coerce')
    df = df.dropna()

    x = df["People per Television"].values.reshape(-1, 1).astype(np.float64)
    y = df["People per Physician"].values.reshape(-1, 1).astype(np.float64)

    # min-max scale both columns into [-1, 1]
    sc = MinMaxScaler(feature_range=(-1, 1))
    x_ = sc.fit_transform(x)
    y_ = sc.fit_transform(y)

    inputs = Input(shape=(1,))
    preds = Dense(1, activation='linear')(inputs)
    model = Model(inputs=inputs, outputs=preds)

    sgd = keras.optimizers.SGD()
    model.compile(optimizer=sgd, loss='mse')
    model.fit(x_, y_, batch_size=1, verbose=1, epochs=10, shuffle=False)

    save_model(model, 'state/keras_theano_linear_model_state.h5')

    # model_pkl_path = 'model.pkl'
    # with open(model_pkl_path, 'wb') as fh:
    #     pickle.dump(pipeline_invoke, fh)
580134063c60e1903557dccde046d7a394258b01 | 319 | py | Python | dictionary.py | SchmitzAndrew/OSS-101-example | 1efecd4c5bfef4495904568d11e3f8d0a5ed9bd0 | [
"MIT"
] | null | null | null | dictionary.py | SchmitzAndrew/OSS-101-example | 1efecd4c5bfef4495904568d11e3f8d0a5ed9bd0 | [
"MIT"
] | null | null | null | dictionary.py | SchmitzAndrew/OSS-101-example | 1efecd4c5bfef4495904568d11e3f8d0a5ed9bd0 | [
"MIT"
] | null | null | null | word = input("Enter a word: ")
if word == "a":
print("one; any")
elif word == "apple":
print("familiar, round fleshy fruit")
elif word == "rhinoceros":
print("large thick-skinned animal with one or two horns on its nose")
else:
print("That word must not exist. This dictionary is very comprehensive.")
| 29 | 77 | 0.667712 | 47 | 319 | 4.531915 | 0.765957 | 0.075117 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.197492 | 319 | 10 | 78 | 31.9 | 0.832031 | 0 | 0 | 0 | 0 | 0 | 0.595611 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.444444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
58035ad02fa85d7c60de0ef4d5c14279175bc2ac | 566 | py | Python | setup.py | sdnhub/kube-navi | d16a9289ba7261011e6c8d19c48cdc9bd533e629 | [
"Apache-2.0"
] | null | null | null | setup.py | sdnhub/kube-navi | d16a9289ba7261011e6c8d19c48cdc9bd533e629 | [
"Apache-2.0"
] | null | null | null | setup.py | sdnhub/kube-navi | d16a9289ba7261011e6c8d19c48cdc9bd533e629 | [
"Apache-2.0"
] | null | null | null | from distutils.core import setup
setup(
name = 'kube_navi',
packages = ['kube_navi'], # this must be the same as the name above
version = '0.1',
description = 'Kubernetes resource discovery toolkit',
author = 'Srini Seetharaman',
author_email = 'srini.seetharaman@gmail.com',
url = 'https://github.com/sdnhub/kube-navi', # use the URL to the github repo
    download_url = 'https://github.com/sdnhub/kube-navi/archive/0.1.tar.gz', # link to the release tarball for this version
keywords = ['testing', 'logging', 'example'], # arbitrary keywords
classifiers = [],
)
| 40.428571 | 106 | 0.69788 | 79 | 566 | 4.949367 | 0.670886 | 0.081841 | 0.071611 | 0.086957 | 0.158568 | 0.158568 | 0.158568 | 0 | 0 | 0 | 0 | 0.008403 | 0.159011 | 566 | 13 | 107 | 43.538462 | 0.813025 | 0.210247 | 0 | 0 | 0 | 0.076923 | 0.479638 | 0.061086 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.076923 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5805a2c8d616906daf19682b40baa91f10a88715 | 1,845 | py | Python | app/routes/register.py | AuFeld/COAG | 3874a9c1c6ceb908a6bbabfb49e2c701d8e54f20 | [
"MIT"
] | 1 | 2021-06-03T10:29:12.000Z | 2021-06-03T10:29:12.000Z | app/routes/register.py | AuFeld/COAG | 3874a9c1c6ceb908a6bbabfb49e2c701d8e54f20 | [
"MIT"
] | 45 | 2021-06-05T14:47:09.000Z | 2022-03-30T06:16:44.000Z | app/routes/register.py | AuFeld/COAG | 3874a9c1c6ceb908a6bbabfb49e2c701d8e54f20 | [
"MIT"
] | null | null | null | from typing import Callable, Optional, Type, cast
from fastapi import APIRouter, HTTPException, Request, status
from app.models import users
from app.common.user import ErrorCode, run_handler
from app.users.user import (
CreateUserProtocol,
InvalidPasswordException,
UserAlreadyExists,
ValidatePasswordProtocol,
)
def get_register_router(
create_user: CreateUserProtocol,
user_model: Type[users.BaseUser],
user_create_model: Type[users.BaseUserCreate],
after_register: Optional[Callable[[users.UD, Request], None]] = None,
validate_password: Optional[ValidatePasswordProtocol] = None,
) -> APIRouter:
"""Generate a router with the register route."""
router = APIRouter()
@router.post(
"/register", response_model=user_model, status_code=status.HTTP_201_CREATED
)
async def register(request: Request, user: user_create_model): # type: ignore
user = cast(users.BaseUserCreate, user) # Prevent mypy complain
if validate_password:
try:
await validate_password(user.password, user)
except InvalidPasswordException as e:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail={
"code": ErrorCode.REGISTER_INVALID_PASSWORD,
"reason": e.reason,
},
)
try:
created_user = await create_user(user, safe=True)
except UserAlreadyExists:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ErrorCode.REGISTER_USER_ALREADY_EXISTS,
)
if after_register:
await run_handler(after_register, created_user, request)
return created_user
return router
| 32.946429 | 83 | 0.648238 | 184 | 1,845 | 6.293478 | 0.380435 | 0.018135 | 0.041451 | 0.051813 | 0.098446 | 0.098446 | 0.098446 | 0.098446 | 0.098446 | 0.098446 | 0 | 0.006767 | 0.279133 | 1,845 | 55 | 84 | 33.545455 | 0.86391 | 0.042276 | 0 | 0.133333 | 0 | 0 | 0.010795 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022222 | false | 0.155556 | 0.111111 | 0 | 0.177778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
580792f2d4c1bf5c14b84d5f807f69b1126aead4 | 5,422 | py | Python | src/advanceoperate/malimgthread.py | zengrx/S.M.A.R.T | 47a9abe89008e9b34f9b9d057656dbf3fb286456 | [
"MIT"
] | 10 | 2017-07-11T01:08:28.000Z | 2021-05-07T01:49:00.000Z | src/advanceoperate/malimgthread.py | YanqiangHuang/S.M.A.R.T | 47a9abe89008e9b34f9b9d057656dbf3fb286456 | [
"MIT"
] | null | null | null | src/advanceoperate/malimgthread.py | YanqiangHuang/S.M.A.R.T | 47a9abe89008e9b34f9b9d057656dbf3fb286456 | [
"MIT"
] | 6 | 2017-05-02T14:27:15.000Z | 2017-05-15T05:56:40.000Z | #coding=utf-8
from PyQt4 import QtCore
import os, glob, numpy, sys
from PIL import Image
from sklearn.cross_validation import StratifiedKFold
from sklearn.metrics import confusion_matrix
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neighbors import BallTree
from sklearn import cross_validation
from sklearn.utils import shuffle
import sklearn
import leargist
import cPickle
import random
import sys
reload(sys)
sys.setdefaultencoding( "utf-8" )
class ValidationResult(QtCore.QThread):
finishSignal = QtCore.pyqtSignal(list)
def __init__(self, parent=None):
super(ValidationResult, self).__init__(parent)
def getClassifyLabel(self):
        X = numpy.load("./datafiles/img_features.npy")  # features
        y = numpy.load("./datafiles/img_labels.npy")  # labels
        n = cPickle.load(open("./datafiles/img.p", "rb"))  # family names
        l = cPickle.load(open("./datafiles/imglabel.p", "rb"))  # [family id, index within family, filename, overall index]
        return X, y, n, l
    '''
    Prepare the data for drawing the confusion matrix
    @X: feature matrix
    @y: labels
    @n: names of all sample families
    @l: corresponding per-family counts
    '''
def prepareData2Matrix(self, X, y, n, l):
n_samples, useless = X.shape
p = range(n_samples)
random.seed(random.random())
random.shuffle(p)
        X, y = X[p], y[p]  # shuffle the arrays
        kfold = 10  # 10-fold
skf = StratifiedKFold(y,kfold)
skfind = [None] * len(skf)
cnt = 0
for train_index in skf:
skfind[cnt] = train_index
cnt += 1
list_fams = n
cache = []
no_imgs = []
for l_list in l:
if 0 == l_list[1]:
# print l[l_list[3] - 1]
# print l_list
cache.append(l[l_list[3] - 1][1] + 1)
no_imgs = cache[1:len(cache)]
no_imgs.append(cache[0])
        # print no_imgs  # number of files in each family
        conf_mat = numpy.zeros((len(no_imgs), len(no_imgs)))  # initialize the confusion matrix
n_neighbors = 5
# 10-fold Cross Validation
for i in range(kfold):
train_indices = skfind[i][0]
test_indices = skfind[i][1]
clf = []
clf = KNeighborsClassifier(n_neighbors, weights='distance')
X_train = X[train_indices]
y_train = y[train_indices]
X_test = X[test_indices]
y_test = y[test_indices]
# Training
import time
tic = time.time()
clf.fit(X_train,y_train)
toc = time.time()
print "training time= ", toc-tic # roughly 2.5 secs
# Testing
y_predict = []
tic = time.time()
y_predict = clf.predict(X_test) # output is labels and not indices
toc = time.time()
print "testing time = ", toc-tic # roughly 0.3 secs
# Compute confusion matrix
cm = []
cm = confusion_matrix(y_test,y_predict)
conf_mat = conf_mat + cm
return conf_mat, no_imgs, list_fams
def run(self):
print "start draw"
X, y, n, l = self.getClassifyLabel()
cm, nimg, listf = self.prepareData2Matrix(X, y, n, l)
msg = [cm, nimg, listf]
self.finishSignal.emit(msg)
class MalwareImageClass(QtCore.QThread):
malwarSignal = QtCore.pyqtSignal(int, list)
concluSignal = QtCore.pyqtSignal(int, list)
def __init__(self, filename, parent=None):
super(MalwareImageClass, self).__init__(parent)
self.filename = str(filename)#.encode('cp936')
self.feature = ''
    '''
    Load the training results:
    features, labels, file names and their corresponding indices
    '''
def getClassifyLabel(self):
        X = numpy.load("./datafiles/img_features.npy")  # features
        y = numpy.load("./datafiles/img_labels.npy")  # labels
        n = cPickle.load(open("./datafiles/img.p", "rb"))  # family names
        l = cPickle.load(open("./datafiles/imglabel.p", "rb"))  # [family id, index within family, filename, overall index]
        return X, y, n, l
    '''
    Classify an image
    @feature_X: training-set features
    @label_y: training-set labels
    '''
def classifyImage(self, feature_X, label_y, number):
im = Image.open(self.filename)
        im1 = im.resize((64, 64), Image.ANTIALIAS)  # resize to 64x64
        des = leargist.color_gist(im1)  # 960 values
        feature = des[0:320]  # grayscale GIST: only the first 320 values are needed
        query_feature = feature.reshape(1, -1)
        self.feature = query_feature
        # fetch the features and labels
X = feature_X
y = label_y
n = number
        n_neighbors = 5  # better to have this at the start of the code
knn = KNeighborsClassifier(n_neighbors, weights='distance')
knn.fit(X, y)
num = int(knn.predict(query_feature))
classname = n[num]
proba = knn.predict_proba(query_feature)
msg = [num, classname, proba]
self.malwarSignal.emit(1, msg)
    '''
    Use a BallTree to find the most similar samples in the dataset;
    returns the distances and the sample indices
    '''
def findMostSimilarImg(self, feature_X, serial):
X = feature_X
b = BallTree(X)
        # the k=3 most similar samples
dist, ind = b.query(self.feature, k=3)
print dist, ind
ind = ind[0]
# print ind
l = serial
imgs = []
for rank in ind:
# print rank
for name in l:
if rank == name[3]:
# print name
imgs.append(name[2])
self.concluSignal.emit(2, imgs)
def run(self):
X, y, n ,l = self.getClassifyLabel()
self.classifyImage(X, y, n)
self.findMostSimilarImg(X, l)
| 31.34104 | 87 | 0.568241 | 669 | 5,422 | 4.487294 | 0.284006 | 0.006662 | 0.006995 | 0.007995 | 0.177881 | 0.138574 | 0.122585 | 0.122585 | 0.122585 | 0.122585 | 0 | 0.017013 | 0.317042 | 5,422 | 172 | 88 | 31.523256 | 0.793681 | 0.080229 | 0 | 0.174603 | 0 | 0 | 0.053957 | 0.032163 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.119048 | null | null | 0.031746 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6afe84146c4619406b9150aea7be577bdc37e585 | 2,929 | py | Python | tests/delete_regress/models.py | PirosB3/django | 9b729ddd8f2040722971ccfb3b12f7d8162633d1 | [
"BSD-3-Clause"
] | 2 | 2015-01-21T15:45:07.000Z | 2015-02-21T02:38:13.000Z | tests/delete_regress/models.py | PirosB3/django | 9b729ddd8f2040722971ccfb3b12f7d8162633d1 | [
"BSD-3-Clause"
] | null | null | null | tests/delete_regress/models.py | PirosB3/django | 9b729ddd8f2040722971ccfb3b12f7d8162633d1 | [
"BSD-3-Clause"
] | 1 | 2020-05-25T08:55:19.000Z | 2020-05-25T08:55:19.000Z | from django.contrib.contenttypes.fields import (
GenericForeignKey, GenericRelation
)
from django.contrib.contenttypes.models import ContentType
from django.db import models
class Award(models.Model):
name = models.CharField(max_length=25)
object_id = models.PositiveIntegerField()
content_type = models.ForeignKey(ContentType)
content_object = GenericForeignKey()
class AwardNote(models.Model):
award = models.ForeignKey(Award)
note = models.CharField(max_length=100)
class Person(models.Model):
name = models.CharField(max_length=25)
awards = GenericRelation(Award)
class Book(models.Model):
pagecount = models.IntegerField()
class Toy(models.Model):
name = models.CharField(max_length=50)
class Child(models.Model):
name = models.CharField(max_length=50)
toys = models.ManyToManyField(Toy, through='PlayedWith')
class PlayedWith(models.Model):
child = models.ForeignKey(Child)
toy = models.ForeignKey(Toy)
date = models.DateField(db_column='date_col')
class PlayedWithNote(models.Model):
played = models.ForeignKey(PlayedWith)
note = models.TextField()
class Contact(models.Model):
label = models.CharField(max_length=100)
class Email(Contact):
email_address = models.EmailField(max_length=100)
class Researcher(models.Model):
contacts = models.ManyToManyField(Contact, related_name="research_contacts")
class Food(models.Model):
name = models.CharField(max_length=20, unique=True)
class Eaten(models.Model):
food = models.ForeignKey(Food, to_field="name")
meal = models.CharField(max_length=20)
# Models for #15776
class Policy(models.Model):
policy_number = models.CharField(max_length=10)
class Version(models.Model):
policy = models.ForeignKey(Policy)
class Location(models.Model):
version = models.ForeignKey(Version, blank=True, null=True)
class Item(models.Model):
version = models.ForeignKey(Version)
location = models.ForeignKey(Location, blank=True, null=True)
# Models for #16128
class File(models.Model):
pass
class Image(File):
class Meta:
proxy = True
class Photo(Image):
class Meta:
proxy = True
class FooImage(models.Model):
my_image = models.ForeignKey(Image)
class FooFile(models.Model):
my_file = models.ForeignKey(File)
class FooPhoto(models.Model):
my_photo = models.ForeignKey(Photo)
class FooFileProxy(FooFile):
class Meta:
proxy = True
class OrgUnit(models.Model):
name = models.CharField(max_length=64, unique=True)
class Login(models.Model):
description = models.CharField(max_length=32)
orgunit = models.ForeignKey(OrgUnit)
class House(models.Model):
address = models.CharField(max_length=32)
class OrderedPerson(models.Model):
name = models.CharField(max_length=32)
lives_in = models.ForeignKey(House)
class Meta:
ordering = ['name']
| 20.626761 | 80 | 0.725162 | 354 | 2,929 | 5.920904 | 0.271186 | 0.125954 | 0.111641 | 0.148855 | 0.275763 | 0.203721 | 0.134065 | 0.078244 | 0 | 0 | 0 | 0.016769 | 0.165244 | 2,929 | 141 | 81 | 20.77305 | 0.840491 | 0.011267 | 0 | 0.139241 | 0 | 0 | 0.014874 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.012658 | 0.037975 | 0 | 0.911392 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
6afebab1780e5e05d2dbd1b300b2e8c2a43c36a7 | 17,003 | py | Python | apps/UI_phone_mcdm.py | industrial-optimization-group/researchers-night | 68f2fcb8530032e157badda772a795e1f3bb2c4b | [
"MIT"
] | null | null | null | apps/UI_phone_mcdm.py | industrial-optimization-group/researchers-night | 68f2fcb8530032e157badda772a795e1f3bb2c4b | [
"MIT"
] | null | null | null | apps/UI_phone_mcdm.py | industrial-optimization-group/researchers-night | 68f2fcb8530032e157badda772a795e1f3bb2c4b | [
"MIT"
] | null | null | null | import dash
from dash.exceptions import PreventUpdate
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output, State
import dash_bootstrap_components as dbc
import dash_table
import plotly.express as ex
import plotly.graph_objects as go
import pandas as pd
import numpy as np
data = pd.read_csv("./data/Phone_dataset_new.csv", header=0)
details = pd.read_csv("./data/Phone_details.csv", header=0)
names = details.loc[0]
data = data.rename(columns=names)
details = details.rename(columns=names)
maxi = details.loc[1].astype(int)
details_on_card = details.loc[2].astype(int)
details_on_card = details.columns[details_on_card == 1]
fitness_columns = {
"Memory": -1,
"RAM": -1,
"Camera (MP)": -1,
"Price (Euros)": 1,
}
fitness_data = data[fitness_columns] * maxi[fitness_columns].values
external_stylesheets = ["https://codepen.io/chriddyp/pen/bWLwgP.css"]
app = dash.Dash(
__name__,
external_stylesheets=[dbc.themes.LITERA],
eager_loading=True,
suppress_callback_exceptions=True,
)
app.layout = html.Div(
children=[
# .container class is fixed, .container.scalable is scalable
dbc.Row(
[
dbc.Col(
html.H1(
children="What is your optimal phone?",
className="text-center mt-4",
)
)
]
),
dbc.Row(
[
dbc.Col(
children=[
# Top card with details(?)
dbc.Card(
children=[
dbc.CardBody(
[
html.H4(
"Researcher's Night Event",
className="card-title text-center",
),
html.P(
(
"This app uses decision support tools to "
"quickly and easily find phones which reflect "
"the user's desires. Input your preferences "
"below. The box on top right shows the phone "
"which matches the preferences the best. "
"The box on bottom right provides some "
"close alternatives."
),
className="card-text",
),
]
)
],
className="mr-3 ml-3 mb-2 mt-2",
),
dbc.Form(
[
dbc.FormGroup(
children=[
dbc.Label(
"Choose desired operating system",
html_for="os-choice",
),
dbc.RadioItems(
options=[
{
"label": "Android",
"value": "Android",
},
{"label": "iOS", "value": "IOS"},
{
"label": "No preference",
"value": "both",
},
],
id="os-choice",
value="both",
inline=True,
# className="text-center mt-4",
),
],
className="mr-3 ml-3 mb-2 mt-2",
),
dbc.FormGroup(
children=[
dbc.Label(
"Choose desired Memory capacity (GB)",
html_for="memory-choice",
),
dcc.Slider(
id="memory-choice",
min=16,
max=256,
step=None,
included=False,
value=256,
marks={
16: "16",
32: "32",
64: "64",
128: "128",
256: "256",
},
# className="text-center mt-5",
),
],
className="mr-3 ml-3 mb-2 mt-2",
),
dbc.FormGroup(
children=[
dbc.Label(
"Choose desired RAM capacity (GB)",
html_for="ram-choice",
),
dcc.Slider(
id="ram-choice",
min=2,
max=12,
step=1,
value=12,
included=False,
marks={
2: "2",
3: "3",
4: "4",
5: "5",
6: "6",
7: "7",
8: "8",
9: "9",
10: "10",
11: "11",
12: "12",
},
className="text-center mt-5",
),
],
className="mr-3 ml-3 mb-2 mt-2",
),
dbc.FormGroup(
children=[
dbc.Label(
"Choose desired camera resolution (MP)",
html_for="cam-choice",
),
dcc.Slider(
id="cam-choice",
min=0,
max=130,
step=1,
included=False,
value=70,
marks={
0: "0",
10: "10",
30: "30",
50: "50",
70: "70",
90: "90",
110: "110",
130: "130",
},
className="text-center mt-5",
),
],
className="mr-3 ml-3 mb-2 mt-2",
),
dbc.FormGroup(
children=[
dbc.Label(
"Choose desired budget (Euros)",
html_for="cost-choice",
),
dcc.Slider(
id="cost-choice",
min=0,
max=1400,
step=1,
included=False,
value=100,
marks={
0: "0",
200: "200",
400: "400",
600: "600",
800: "800",
1000: "1000",
1200: "1200",
1400: "1400",
},
className="text-center mt-5",
),
],
className="mr-3 ml-3 mb-2 mt-2",
),
],
style={"maxHeight": "560px", "overflow": "auto"},
),
],
width={"size": 5, "offset": 1},
),
dbc.Col(
children=[
dbc.Card(
children=[
dbc.CardHeader("The best phone for you is:"),
dbc.CardBody(id="results"),
],
className="mb-4",
),
dbc.Card(
children=[
dbc.CardHeader("Other great phones:"),
dbc.CardBody(
id="other-results",
children=(
[
html.P(
html.Span(
f"{i}. ",
id=f"other-results-list-{i}",
)
)
for i in range(2, 6)
]
+ [
dbc.Tooltip(
id=f"other-results-tooltip-{i}",
target=f"other-results-list-{i}",
placement="right",
style={
"maxWidth": 700,
"background-color": "white",
"color": "white",
"border-style": "solid",
"border-color": "black",
},
)
for i in range(2, 6)
]
),
),
],
className="mt-4",
),
html.Div(id="tooltips"),
],
width={"size": 5, "offset": 0},
className="mb-2 mt-2",
),
]
),
dbc.Row([html.Div(id="callback-dump")]),
],
)
@app.callback(
[
Output("results", "children"),
*[Output(f"other-results-list-{i}", "children") for i in range(2, 6)],
*[Output(f"other-results-tooltip-{i}", "children") for i in range(2, 6)],
],
[
Input(f"{attr}-choice", "value")
for attr in ["os", "memory", "ram", "cam", "cost"]
],
)
def results(*choices):
    if choices[0] == "both":
        choice_data = data
    elif choices[0] == "IOS":
        choice_data = data[["IOS" in st for st in data["OS"]]]
    elif choices[0] == "Android":
        choice_data = data[["Android" in st for st in data["OS"]]]
relevant_data = choice_data[
["Memory", "RAM", "Camera (MP)", "Price (Euros)",]
].reset_index(drop=True)
card_data = choice_data[details_on_card].reset_index(drop=True)
maxi = np.asarray([-1, -1, -1, 1])
relevant_data = relevant_data * maxi
ideal = relevant_data.min().values
nadir = relevant_data.max().values
aspirations = choices[1:] * maxi
distance = (aspirations - relevant_data) / (ideal - nadir)
distance = distance.max(axis=1)
distance_order = np.argsort(distance)
best = table_from_data(card_data.loc[distance_order.values[0]], choices[1:])
total_number = len(distance_order)
if total_number >= 4:
others, tooltips = other_options(card_data.loc[distance_order.values[1:5]])
else:
others, tooltips = other_options(
card_data.loc[distance_order.values[1:total_number]]
)
others = others + [f"{i}. -" for i in range(len(others) + 2, 6)]
tooltips = tooltips + [None for i in range(len(tooltips) + 2, 6)]
return (best, *others, *tooltips)
"""@app.callback(Output("tooltips", "children"), [Input("callback-dump", "children")])
def tooltips(tooldict):
num = len(tooldict["ids"])
content = []
for i in range(num):
content.append(dbc.Tooltip(tooldict["tables"][i], target=tooldict["ids"][i]))
return content"""
def table_from_data(data, choices):
# print(choices)
to_compare = ["Memory", "RAM", "Camera (MP)", "Price (Euros)"]
# print(data[to_compare].values)
diff = (data[to_compare].values - choices) * [1, 1, 1, -1]
colors = [None, None, None] + ["green" if x >= 0 else "red" for x in diff]
# print(np.sign(diff))
return dbc.Table(
[
html.Tbody(
[
html.Tr(
[
html.Th(col),
html.Td([str(data[col]),],),
html.Td([html.Span(" ▉", style={"color": c,},)],),
]
)
for (col, c) in zip(data.index, colors)
]
)
]
)
def table_from_data_horizontal(data):
header = [html.Thead(html.Tr([html.Th(col) for col in data.index]))]
body = [html.Tbody([html.Tr([html.Td(data[col]) for col in data.index])])]
return dbc.Table(header + body)
def other_options(data):
contents = []
tables = []
ids = []
i = 2
for index, row in data.iterrows():
contents.append(f"{i}. {row['Model']}")
tables.append(table_from_data_horizontal(row))
i = i + 1
return contents, tables
if __name__ == "__main__":
app.run_server(debug=False)
| 43.485934 | 95 | 0.283597 | 1,115 | 17,003 | 4.24574 | 0.253812 | 0.020913 | 0.007393 | 0.008872 | 0.263836 | 0.17744 | 0.133502 | 0.124842 | 0.115547 | 0.104563 | 0 | 0.040494 | 0.628183 | 17,003 | 390 | 96 | 43.597436 | 0.708162 | 0.012351 | 0 | 0.309456 | 0 | 0 | 0.098339 | 0.010186 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011461 | false | 0 | 0.031519 | 0 | 0.054441 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ed0142db547eada6fd1f50b0e7939a47e99944a3 | 1,746 | py | Python | tests/test_hedges.py | aplested/DC_Pyps | da33fc7d0e7365044e368488d1c7cbbae7473cc7 | [
"MIT"
] | 1 | 2021-03-25T18:09:25.000Z | 2021-03-25T18:09:25.000Z | tests/test_hedges.py | aplested/DC_Pyps | da33fc7d0e7365044e368488d1c7cbbae7473cc7 | [
"MIT"
] | null | null | null | tests/test_hedges.py | aplested/DC_Pyps | da33fc7d0e7365044e368488d1c7cbbae7473cc7 | [
"MIT"
] | null | null | null | from dcstats.hedges import Hedges_d
from dcstats.statistics_EJ import simple_stats as mean_SD
import random
import math
def generate_sample (length, mean, sigma):
#generate a list of normal distributed samples
sample = []
for n in range(length):
sample.append(random.gauss(mean, sigma))
return sample
def close_enough (a, b, count_error):
    if math.fabs(a - b) < math.fabs((a + b) / (count_error * 2)):
return True
else:
return False
def gaussian_case (sig):
sample_size = 200
count_error = math.sqrt(sample_size)
m1 = 1
m2 = 2
s1 = generate_sample (sample_size, m1, sig)
s2 = generate_sample (sample_size, m2, sig)
h_testing = Hedges_d(s1, s2)
h_testing.hedges_d_unbiased() #answer is in self.d
approx_95CI_lower, approx_95CI_upper = h_testing.approx_CI()
bs_95CI_lower, bs_95CI_upper = h_testing.bootstrap_CI(5000)
print (mean_SD(s1), mean_SD(s2))
print ("h_testing.d, analytic, correction = ", h_testing.d, (m2 - m1) / sig, h_testing.correction)
print ("lower: approx, bootstrap", approx_95CI_lower, bs_95CI_lower)
print ("upper: approx, bootstrap", approx_95CI_upper, bs_95CI_upper)
#bootstrap is similar at high d but gives wider intervals at low d
assert close_enough(approx_95CI_lower, bs_95CI_lower, count_error)
assert close_enough(approx_95CI_upper, bs_95CI_upper, count_error)
assert close_enough(h_testing.d, (m2 - m1) / sig, count_error)
###tests
def test_gaussian_case_low():
gaussian_case(0.2) #expect d = 5
def test_gaussian_case_med():
gaussian_case(0.5) #expect d = 2
def test_gaussian_case_high():
gaussian_case(1.0) #expect d = 1, fail
| 29.59322 | 102 | 0.689003 | 265 | 1,746 | 4.264151 | 0.328302 | 0.056637 | 0.039823 | 0.039823 | 0.183186 | 0.120354 | 0 | 0 | 0 | 0 | 0 | 0.041575 | 0.214777 | 1,746 | 58 | 103 | 30.103448 | 0.78264 | 0.100802 | 0 | 0 | 1 | 0 | 0.053915 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 1 | 0.157895 | false | 0 | 0.105263 | 0 | 0.342105 | 0.105263 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ed03eb092480421cebe7ff1098718fc83eac9aac | 3,324 | py | Python | magic_mirror.py | alcinnz/Historical-Twin | 54a9ab5dc130aaeb2e00058bbaeace7377e2ff3d | [
"MIT"
] | 1 | 2018-08-16T10:06:21.000Z | 2018-08-16T10:06:21.000Z | magic_mirror.py | alcinnz/Historical-Twin | 54a9ab5dc130aaeb2e00058bbaeace7377e2ff3d | [
"MIT"
] | null | null | null | magic_mirror.py | alcinnz/Historical-Twin | 54a9ab5dc130aaeb2e00058bbaeace7377e2ff3d | [
"MIT"
] | null | null | null | #! /usr/bin/python2
import time
start = time.time()
import pygame, numpy
import pygame.camera
# Init display
screen = pygame.display.set_mode((0,0), pygame.FULLSCREEN)
pygame.display.set_caption("Magic Mirror")
#pygame.mouse.set_visible(False)
# Init font
pygame.font.init()
font_colour = 16, 117, 186
fonts = {40: pygame.font.Font("Futura.ttc", 40)}
def font(font_size = 40):
if font_size not in fonts:
fonts[font_size] = pygame.font.Font("Futura.ttc", font_size)
return fonts[font_size]
def write(text, colour = font_colour, font_size = 40):
return font(font_size).render(str(text), True, colour)
# Init AI
import recognition
import sys, os
def find_faces(pygame_capture):
capture = numpy.array(pygame.surfarray.pixels3d(pygame_capture))
capture = numpy.swapaxes(capture, 0, 1)
return recognition.align.getAllFaceBoundingBoxes(capture), capture
index = recognition.MultiBinaryTree()
imgdir = sys.argv[1] if len(sys.argv) > 1 else "images"
photo_samples = []
screen.blit(write("Loading index... %fs" % (time.time() - start)), (0,0))
pygame.display.flip()
with open(os.path.join(imgdir, "index.tsv")) as f:
for line in f:
line = line.strip().split("\t")
img = os.path.join(imgdir, line[0])
description = numpy.array([float(n) for n in line[1:]])
index.insert(description, img)
screen.blit(write("Loading images... %fs" % (time.time() - start)), (0,50))
pygame.display.flip()
for img in os.listdir(os.path.join(imgdir, "thumbnails")):
photo_samples.append(pygame.image.load(os.path.join(imgdir, "thumbnails", img)))
# Init clock
clock = pygame.time.Clock()
# Init camera
pygame.camera.init()
cameras = pygame.camera.list_cameras()
if not cameras:
pygame.quit()
print "No cameras found, exiting!"
sys.exit(1)
camera = pygame.camera.Camera(cameras[0])
camera.start()
# Mainloop
def recognize(capture, faces):
fullscreen = pygame.Rect(0, 0, screen.get_width(), screen.get_height())
pygame.draw.rect(screen, (255, 255, 255), fullscreen)
pygame.display.flip()
face = recognition.average(recognition.getRepBBox(capture, face) for face in faces)
img = index.nearest(face)
screen.blit(pygame.image.load(img), (0,0))
pygame.display.flip()
    pygame.time.wait(10 * 1000)  # 10 s
def main():
countdown = 10
lastFaceCount = 0
while True:
clock.tick(60)
for event in pygame.event.get():
if event.type in (pygame.QUIT, pygame.KEYDOWN):
return
capture = camera.get_image()
faces, capture_data = find_faces(capture)
for bbox in faces:
rect = pygame.Rect(bbox.left(), bbox.top(), bbox.width(), bbox.height())
pygame.draw.rect(capture, (255, 0, 0), rect, 2)
capture = pygame.transform.flip(capture, True, False)
screen.blit(pygame.transform.smoothscale(capture, screen.get_size()), (0,0))
if len(faces) == 0 or len(faces) != lastFaceCount:
countdown = 10
lastFaceCount = len(faces)
elif countdown == 0:
recognize(capture_data, faces)
countdown = 10
else:
screen.blit(write(countdown), (0,0))
countdown -= 1
pygame.display.flip()
pygame.quit()
if __name__ == "__main__":
main()
| 28.655172 | 87 | 0.653129 | 449 | 3,324 | 4.761693 | 0.302895 | 0.042563 | 0.039757 | 0.029935 | 0.07811 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028658 | 0.202166 | 3,324 | 115 | 88 | 28.904348 | 0.777526 | 0.034898 | 0 | 0.121951 | 0 | 0 | 0.045014 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.060976 | null | null | 0.012195 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ed0fc8cf4f946e650eb4b14f0a5d7690952a62a3 | 980 | py | Python | python/old_password_test.py | XelaRellum/old_password | b461941069bc7f1187776a992f86c89317ab215e | [
"MIT"
] | null | null | null | python/old_password_test.py | XelaRellum/old_password | b461941069bc7f1187776a992f86c89317ab215e | [
"MIT"
] | null | null | null | python/old_password_test.py | XelaRellum/old_password | b461941069bc7f1187776a992f86c89317ab215e | [
"MIT"
] | null | null | null | import unittest
import pytest
from old_password import old_password
import csv
import re
@pytest.mark.parametrize("password,expected_hash", [
(None, None),
("", ""),
("a", "60671c896665c3fa"),
("abc", "7cd2b5942be28759"),
("ä", "0751368d49315f7f"),
])
def test_old_password(password, expected_hash):
assert old_password(password) == expected_hash
def test_password_with_space():
"""
spaces in password are skipped
"""
assert old_password("pass word") == old_password("password")
def test_password_with_tab():
"""
tabs in password are skipped
"""
assert old_password("pass\tword") == old_password("password")
def test_password_from_testdata():
with open("../testdata.csv", "r") as file:
for line in file:
line = line.strip()
password, expected_hash = line.split(";")
hash = old_password(password)
assert hash == expected_hash, "password: %s" % password
| 22.272727 | 67 | 0.643878 | 112 | 980 | 5.419643 | 0.383929 | 0.163097 | 0.156507 | 0.088962 | 0.349259 | 0.247117 | 0.135091 | 0.135091 | 0 | 0 | 0 | 0.04712 | 0.220408 | 980 | 43 | 68 | 22.790698 | 0.747382 | 0.060204 | 0 | 0 | 0 | 0 | 0.15618 | 0.024719 | 0 | 0 | 0 | 0 | 0.16 | 1 | 0.16 | false | 0.48 | 0.2 | 0 | 0.36 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
ed17fa4c7a350d13f37c06feb06cdcd3b65f55bf | 859 | gyp | Python | binding.gyp | HupuInc/node-mysql-listener | d23e55910acd1559d8339f36b1549f21aee8adaa | [
"MIT"
] | 2 | 2015-10-04T02:09:11.000Z | 2021-02-03T00:12:28.000Z | binding.gyp | HupuInc/node-mysql-listener | d23e55910acd1559d8339f36b1549f21aee8adaa | [
"MIT"
] | 1 | 2015-10-04T02:10:02.000Z | 2015-10-05T07:29:40.000Z | binding.gyp | HupuInc/node-mysql-listener | d23e55910acd1559d8339f36b1549f21aee8adaa | [
"MIT"
] | null | null | null | {
'targets': [
{
# have to specify 'liblib' here since gyp will remove the first one :\
'target_name': 'mysql_bindings',
'sources': [
'src/mysql_bindings.cc',
'src/mysql_bindings_connection.cc',
'src/mysql_bindings_result.cc',
'src/mysql_bindings_statement.cc',
],
'conditions': [
['OS=="win"', {
# no Windows support yet...
}, {
'libraries': [
'<!@(mysql_config --libs_r)'
],
}],
['OS=="mac"', {
# cflags on OS X are stupid and have to be defined like this
'xcode_settings': {
'OTHER_CFLAGS': [
'<!@(mysql_config --cflags)'
]
}
}, {
'cflags': [
'<!@(mysql_config --cflags)'
],
}]
]
}
]
}
| 23.861111 | 76 | 0.436554 | 75 | 859 | 4.8 | 0.64 | 0.180556 | 0.177778 | 0.15 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.407451 | 859 | 35 | 77 | 24.542857 | 0.707269 | 0.178114 | 0 | 0.21875 | 0 | 0 | 0.424501 | 0.159544 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ed1acc095f46eeb713b4bbe4bbc113d4ca38760c | 399 | py | Python | setup.py | rlbellaire/ActT | b6e936e5037c5f92ad1c281e2bf3700bf91aea42 | [
"BSD-3-Clause"
] | 2 | 2020-01-24T20:20:02.000Z | 2021-09-25T03:32:17.000Z | setup.py | rlbellaire/ActT | b6e936e5037c5f92ad1c281e2bf3700bf91aea42 | [
"BSD-3-Clause"
] | 1 | 2020-11-16T17:08:08.000Z | 2020-11-16T17:08:08.000Z | setup.py | rlbellaire/ActT | b6e936e5037c5f92ad1c281e2bf3700bf91aea42 | [
"BSD-3-Clause"
] | 1 | 2020-11-16T16:58:39.000Z | 2020-11-16T16:58:39.000Z | from setuptools import find_packages, setup
setup(name='ActT',
version='0.6',
description='Active Testing',
url='',
author='',
author_email='none',
license='BSD',
packages=find_packages(),
install_requires=[
        'numpy', 'pandas', 'matplotlib', 'scipy', 'scikit-learn', 'opencv-python',
'statswag','tensorflow'
],
zip_safe=True)
| 24.9375 | 79 | 0.588972 | 40 | 399 | 5.75 | 0.875 | 0.104348 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006645 | 0.245614 | 399 | 15 | 80 | 26.6 | 0.757475 | 0 | 0 | 0 | 0 | 0 | 0.243108 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.071429 | 0 | 0.071429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ed1b9e6a531c569f1a1cfb6234bd90d5b845bbe9 | 1,909 | py | Python | src/quanguru/classes/exceptions.py | Qfabiolous/QuanGuru | 285ca44ae857cc61337f73ea2eb600f485a09e32 | [
"BSD-3-Clause"
] | null | null | null | src/quanguru/classes/exceptions.py | Qfabiolous/QuanGuru | 285ca44ae857cc61337f73ea2eb600f485a09e32 | [
"BSD-3-Clause"
] | null | null | null | src/quanguru/classes/exceptions.py | Qfabiolous/QuanGuru | 285ca44ae857cc61337f73ea2eb600f485a09e32 | [
"BSD-3-Clause"
] | null | null | null | # TODO turn prints into actual error raise, they are print for testing
def qSystemInitErrors(init):
def newFunction(obj, **kwargs):
init(obj, **kwargs)
if obj._genericQSys__dimension is None:
className = obj.__class__.__name__
print(className + ' requires a dimension')
elif obj.frequency is None:
className = obj.__class__.__name__
print(className + ' requires a frequency')
return newFunction
def qCouplingInitErrors(init):
def newFunction(obj, *args, **kwargs):
init(obj, *args, **kwargs)
if obj.couplingOperators is None: # pylint: disable=protected-access
className = obj.__class__.__name__
            print(className + ' requires coupling functions')
elif obj.coupledSystems is None: # pylint: disable=protected-access
className = obj.__class__.__name__
            print(className + ' requires coupled systems')
#for ind in range(len(obj._qCoupling__qSys)):
# if len(obj._qCoupling__cFncs) != len(obj._qCoupling__qSys):
# className = obj.__class__.__name__
# print(className + ' requires same number of systems as coupling functions')
return newFunction
def sweepInitError(init):
def newFunction(obj, **kwargs):
init(obj, **kwargs)
if obj.sweepList is None:
className = obj.__class__.__name__
            print(className + ' requires either a list or relevant info; the given values are:'
+ '\n' + # noqa: W503, W504
'sweepList: ', obj.sweepList, '\n' + # noqa: W504
'sweepMax: ', obj.sweepMax, '\n' + # noqa: W504
'sweepMin: ', obj.sweepMin, '\n' + # noqa: W504
'sweepPert: ', obj.sweepPert, '\n' + # noqa: W504
'logSweep: ', obj.logSweep)
return newFunction
| 40.617021 | 92 | 0.600838 | 200 | 1,909 | 5.435 | 0.35 | 0.066237 | 0.093836 | 0.115915 | 0.417663 | 0.417663 | 0.417663 | 0.378105 | 0.378105 | 0.333027 | 0 | 0.013413 | 0.297014 | 1,909 | 46 | 93 | 41.5 | 0.796572 | 0.223677 | 0 | 0.363636 | 0 | 0 | 0.149081 | 0 | 0 | 0 | 0 | 0.021739 | 0 | 1 | 0.181818 | false | 0 | 0 | 0 | 0.272727 | 0.151515 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ed1c74c77f9a61e232ea9a2a837cdc1274993efb | 6,997 | py | Python | reagent/gym/tests/test_gym.py | alexnikulkov/ReAgent | e404c5772ea4118105c2eb136ca96ad5ca8e01db | [
"BSD-3-Clause"
] | null | null | null | reagent/gym/tests/test_gym.py | alexnikulkov/ReAgent | e404c5772ea4118105c2eb136ca96ad5ca8e01db | [
"BSD-3-Clause"
] | null | null | null | reagent/gym/tests/test_gym.py | alexnikulkov/ReAgent | e404c5772ea4118105c2eb136ca96ad5ca8e01db | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
import logging
import os
import pprint
import unittest
import numpy as np
# pyre-fixme[21]: Could not find module `pytest`.
import pytest
import torch
from parameterized import parameterized
from reagent.core.types import RewardOptions
from reagent.gym.agents.agent import Agent
from reagent.gym.agents.post_step import train_with_replay_buffer_post_step
from reagent.gym.envs.union import Env__Union
from reagent.gym.runners.gymrunner import evaluate_for_n_episodes, run_episode
from reagent.gym.utils import build_normalizer, fill_replay_buffer
from reagent.model_managers.model_manager import ModelManager
from reagent.model_managers.union import ModelManager__Union
from reagent.replay_memory.circular_replay_buffer import ReplayBuffer
from reagent.tensorboardX import summary_writer_context
from reagent.test.base.horizon_test_base import HorizonTestBase
from torch.utils.tensorboard import SummaryWriter
try:
    # Use the internal (FB) runner if available; fall back to the OSS runner otherwise
from reagent.runners.fb.fb_batch_runner import FbBatchRunner as BatchRunner
except ImportError:
from reagent.runners.oss_batch_runner import OssBatchRunner as BatchRunner
# for seeding the environment
SEED = 0
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
"""
Put on-policy gym tests here in the format (test name, path to yaml config).
Paths follow the format "configs/<env_name>/<model_name>_<env_name>_online.yaml".
NOTE: These tests should ideally finish quickly (within 10 minutes), since they are
unit tests that are run many times.
"""
GYM_TESTS = [
("Discrete DQN Cartpole", "configs/cartpole/discrete_dqn_cartpole_online.yaml"),
("Discrete C51 Cartpole", "configs/cartpole/discrete_c51_cartpole_online.yaml"),
("Discrete QR Cartpole", "configs/cartpole/discrete_qr_cartpole_online.yaml"),
(
"Discrete DQN Open Gridworld",
"configs/open_gridworld/discrete_dqn_open_gridworld.yaml",
),
("SAC Pendulum", "configs/pendulum/sac_pendulum_online.yaml"),
("TD3 Pendulum", "configs/pendulum/td3_pendulum_online.yaml"),
("Parametric DQN Cartpole", "configs/cartpole/parametric_dqn_cartpole_online.yaml"),
(
"Parametric SARSA Cartpole",
"configs/cartpole/parametric_sarsa_cartpole_online.yaml",
),
(
"Sparse DQN Changing Arms",
"configs/sparse/discrete_dqn_changing_arms_online.yaml",
),
("SlateQ RecSim", "configs/recsim/slate_q_recsim_online.yaml"),
("PossibleActionsMask DQN", "configs/functionality/dqn_possible_actions_mask.yaml"),
]
curr_dir = os.path.dirname(__file__)
class TestGym(HorizonTestBase):
# pyre-fixme[16]: Module `parameterized` has no attribute `expand`.
@parameterized.expand(GYM_TESTS)
def test_gym_cpu(self, name: str, config_path: str):
logger.info(f"Starting {name} on CPU")
self.run_from_config(
run_test=run_test,
config_path=os.path.join(curr_dir, config_path),
use_gpu=False,
)
logger.info(f"{name} passes!")
# pyre-fixme[16]: Module `parameterized` has no attribute `expand`.
@parameterized.expand(GYM_TESTS)
@pytest.mark.serial
# pyre-fixme[56]: Argument `not torch.cuda.is_available()` to decorator factory
# `unittest.skipIf` could not be resolved in a global scope.
@unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
def test_gym_gpu(self, name: str, config_path: str):
logger.info(f"Starting {name} on GPU")
self.run_from_config(
run_test=run_test,
config_path=os.path.join(curr_dir, config_path),
use_gpu=True,
)
logger.info(f"{name} passes!")
def run_test(
env: Env__Union,
model: ModelManager__Union,
replay_memory_size: int,
train_every_ts: int,
train_after_ts: int,
num_train_episodes: int,
passing_score_bar: float,
num_eval_episodes: int,
use_gpu: bool,
):
env = env.value
env.seed(SEED)
env.action_space.seed(SEED)
normalization = build_normalizer(env)
logger.info(f"Normalization is: \n{pprint.pformat(normalization)}")
manager: ModelManager = model.value
runner = BatchRunner(use_gpu, manager, RewardOptions(), normalization)
trainer = runner.initialize_trainer()
reporter = manager.get_reporter()
trainer.reporter = reporter
training_policy = manager.create_policy(trainer)
replay_buffer = ReplayBuffer(
replay_capacity=replay_memory_size, batch_size=trainer.minibatch_size
)
device = torch.device("cuda") if use_gpu else torch.device("cpu")
# first fill the replay buffer to burn_in
train_after_ts = max(train_after_ts, trainer.minibatch_size)
fill_replay_buffer(
env=env, replay_buffer=replay_buffer, desired_size=train_after_ts
)
post_step = train_with_replay_buffer_post_step(
replay_buffer=replay_buffer,
env=env,
trainer=trainer,
training_freq=train_every_ts,
batch_size=trainer.minibatch_size,
device=device,
)
agent = Agent.create_for_env(
env, policy=training_policy, post_transition_callback=post_step, device=device
)
writer = SummaryWriter()
with summary_writer_context(writer):
train_rewards = []
for i in range(num_train_episodes):
trajectory = run_episode(
env=env, agent=agent, mdp_id=i, max_steps=env.max_steps
)
ep_reward = trajectory.calculate_cumulative_reward()
train_rewards.append(ep_reward)
logger.info(
f"Finished training episode {i} (len {len(trajectory)})"
f" with reward {ep_reward}."
)
logger.info("============Train rewards=============")
logger.info(train_rewards)
logger.info(f"average: {np.mean(train_rewards)};\tmax: {np.max(train_rewards)}")
    # Check whether the max score passed the score bar; since we explore during
    # training, the return could be bad (leading to flakiness in C51 and QRDQN).
    assert np.max(train_rewards) >= passing_score_bar, (
        f"max reward ({np.max(train_rewards)}) after training for "
        f"{len(train_rewards)} episodes is less than {passing_score_bar}.\n"
)
serving_policy = manager.create_serving_policy(normalization, trainer)
agent = Agent.create_for_env_with_serving_policy(env, serving_policy)
eval_rewards = evaluate_for_n_episodes(
n=num_eval_episodes, env=env, agent=agent, max_steps=env.max_steps
).squeeze(1)
logger.info("============Eval rewards==============")
logger.info(eval_rewards)
mean_eval = np.mean(eval_rewards)
logger.info(f"average: {mean_eval};\tmax: {np.max(eval_rewards)}")
assert (
mean_eval >= passing_score_bar
    ), f"Eval reward is {mean_eval}, less than {passing_score_bar}.\n"
if __name__ == "__main__":
unittest.main()
| 36.253886 | 88 | 0.711019 | 910 | 6,997 | 5.214286 | 0.286813 | 0.030137 | 0.018546 | 0.0196 | 0.175553 | 0.132771 | 0.084299 | 0.084299 | 0.084299 | 0.084299 | 0 | 0.003686 | 0.185651 | 6,997 | 192 | 89 | 36.442708 | 0.829063 | 0.093754 | 0 | 0.089041 | 0 | 0 | 0.226025 | 0.122685 | 0 | 0 | 0 | 0.010417 | 0.013699 | 1 | 0.020548 | false | 0.047945 | 0.157534 | 0 | 0.184932 | 0.013699 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ed1f38ec9a444c4d387d2b1c3bbd4a46cc3895ba | 2,132 | py | Python | mcpython/common/block/ISlab.py | mcpython4-coding/core | e4c4f59dab68c90e2028db3add2e5065116bf4a6 | [
"CC0-1.0",
"MIT"
] | 2 | 2019-11-02T05:26:11.000Z | 2019-11-03T08:52:18.000Z | mcpython/common/block/ISlab.py | mcpython4-coding/core | e4c4f59dab68c90e2028db3add2e5065116bf4a6 | [
"CC0-1.0",
"MIT"
] | 25 | 2019-11-02T05:24:29.000Z | 2022-02-09T14:09:08.000Z | mcpython/common/block/ISlab.py | mcpython4-coding/core | e4c4f59dab68c90e2028db3add2e5065116bf4a6 | [
"CC0-1.0",
"MIT"
] | 5 | 2019-11-09T05:36:06.000Z | 2021-11-28T13:07:08.000Z | """
mcpython - a minecraft clone written in python licenced under the MIT-licence
(https://github.com/mcpython4-coding/core)
Contributors: uuk, xkcdjerry (inactive)
Based on the game of fogleman (https://github.com/fogleman/Minecraft), licenced under the MIT-licence
Original game "minecraft" by Mojang Studios (www.minecraft.net), licenced under the EULA
(https://account.mojang.com/documents/minecraft_eula)
Mod loader inspired by "Minecraft Forge" (https://github.com/MinecraftForge/MinecraftForge) and similar
This project is not official by mojang and does not relate to it.
"""
import mcpython.common.block.AbstractBlock
import mcpython.engine.physics.AxisAlignedBoundingBox
import mcpython.util.enums
from mcpython.util.enums import SlabModes
BBOX_DICT = {
SlabModes.TOP: mcpython.engine.physics.AxisAlignedBoundingBox.AxisAlignedBoundingBox(
(1, 0.5, 1), (0, 0.5, 0)
),
SlabModes.BOTTOM: mcpython.engine.physics.AxisAlignedBoundingBox.AxisAlignedBoundingBox(
(1, 0.5, 1)
),
SlabModes.DOUBLE: mcpython.engine.physics.AxisAlignedBoundingBox.FULL_BLOCK_BOUNDING_BOX,
}
class ISlab(mcpython.common.block.AbstractBlock.AbstractBlock):
"""
Base class for slabs
"""
IS_SOLID = False
DEFAULT_FACE_SOLID = 0
def __init__(self):
super().__init__()
self.type = SlabModes.TOP
async def on_block_added(self):
if self.real_hit and self.real_hit[1] - self.position[1] > 0:
self.type = SlabModes.TOP
else:
self.type = SlabModes.BOTTOM
await self.schedule_network_update()
def get_model_state(self):
return {"type": self.type.name.lower()}
def set_model_state(self, state: dict):
if "type" in state:
self.type = SlabModes[state["type"].upper()]
DEBUG_WORLD_BLOCK_STATES = [{"type": x.name.upper()} for x in SlabModes]
async def on_player_interact(
self, player, itemstack, button, modifiers, exact_hit
) -> bool:
        # TODO: add half -> double conversion
return False
def get_view_bbox(self):
return BBOX_DICT[self.type]
| 32.30303 | 103 | 0.701689 | 272 | 2,132 | 5.371324 | 0.452206 | 0.032854 | 0.057495 | 0.117728 | 0.130048 | 0.094456 | 0.094456 | 0.094456 | 0.094456 | 0 | 0 | 0.009878 | 0.192777 | 2,132 | 65 | 104 | 32.8 | 0.839047 | 0.297373 | 0 | 0.108108 | 0 | 0 | 0.010847 | 0 | 0 | 0 | 0 | 0.015385 | 0 | 1 | 0.108108 | false | 0 | 0.108108 | 0.054054 | 0.405405 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ed207a7611696af8395d372e4e8d01f42d7c6467 | 25,419 | py | Python | CourseOutlineBackend/courseoutline/serializers.py | stancsz/web-development-project-ensf-607 | 03b11df4971afd4f27fee54a1800a40d4cc10240 | [
"Apache-2.0"
] | null | null | null | CourseOutlineBackend/courseoutline/serializers.py | stancsz/web-development-project-ensf-607 | 03b11df4971afd4f27fee54a1800a40d4cc10240 | [
"Apache-2.0"
] | null | null | null | CourseOutlineBackend/courseoutline/serializers.py | stancsz/web-development-project-ensf-607 | 03b11df4971afd4f27fee54a1800a40d4cc10240 | [
"Apache-2.0"
] | null | null | null | from rest_framework import serializers
from .models import *
class CoordinatorSerializer(serializers.ModelSerializer):
# ModelID = serializers.CharField(max_length=100, required=True)
CourseID = serializers.CharField(max_length=100, required=True)
FName = serializers.CharField(max_length=100, required=False)
LName = serializers.CharField(max_length=100, required=False)
Phone = serializers.CharField(max_length=100, required=False)
Office = serializers.CharField(max_length=100, required=False)
Email = serializers.CharField(max_length=100, required=False)
def create(self, validated_data):
        # Once the request data has been validated, create a Coordinator instance in the database
return Coordinator.objects.create(
ModelID=validated_data.get('ModelID'),
CourseID=validated_data.get('CourseID'),
FName=validated_data.get('FName'),
LName=validated_data.get('LName'),
Phone=validated_data.get('Phone'),
Office=validated_data.get('Office'),
Email=validated_data.get('Email')
)
def update(self, instance, validated_data):
        # Once the request data has been validated, update the Coordinator instance in the database
instance.ModelID = validated_data.get('ModelID', instance.ModelID)
instance.CourseID = validated_data.get('CourseID', instance.CourseID)
instance.FName = validated_data.get('FName', instance.FName)
instance.LName = validated_data.get('LName', instance.LName)
instance.Phone = validated_data.get('Phone', instance.Phone)
instance.Office = validated_data.get('Office', instance.Office)
instance.Email = validated_data.get('Email', instance.Email)
instance.save()
return instance
class Meta:
model = Coordinator
fields = (
'ModelID',
'CourseID',
'FName',
'LName',
'Phone',
'Office',
'Email'
)
class InfoSerializer(serializers.ModelSerializer):
# ModelID = serializers.CharField(max_length=100, required=True)
CourseID = serializers.CharField(max_length=100, required=True)
GradeNotes = serializers.CharField(max_length=5000, required=False)
Examination = serializers.CharField(max_length=5000, required=False)
CourseDescription = serializers.CharField(max_length=5000, required=False)
UseCalc = serializers.CharField(max_length=100, required=False)
def create(self, validated_data):
return Info.objects.create(
ModelID=validated_data.get('ModelID'),
CourseID=validated_data.get('CourseID'),
GradeNotes=validated_data.get('GradeNotes'),
Examination=validated_data.get('Examination'),
CourseDescription=validated_data.get('CourseDescription'),
UseCalc=validated_data.get('UseCalc')
)
def update(self, instance, validated_data):
instance.ModelID = validated_data.get('ModelID', instance.ModelID)
instance.CourseID = validated_data.get('CourseID', instance.CourseID)
instance.GradeNotes = validated_data.get('GradeNotes', instance.GradeNotes)
instance.Examination = validated_data.get('Examination', instance.Examination)
instance.CourseDescription = validated_data.get('CourseDescription', instance.CourseDescription)
instance.UseCalc = validated_data.get('UseCalc', instance.UseCalc)
instance.save()
return instance
class Meta:
model = Info
fields = (
'ModelID',
'CourseID',
'GradeNotes',
'Examination',
'CourseDescription',
'UseCalc'
)
class GradeDeterminationSerializer(serializers.ModelSerializer):
# ModelID = serializers.CharField(max_length=100, required=True)
CourseID = serializers.CharField(max_length=100, required=True)
Component = serializers.CharField(max_length=100, required=False)
OutcomeEvaluated = serializers.CharField(max_length=100, required=False)
Weight = serializers.IntegerField(required=False)
def create(self, validated_data):
        # Once the request data has been validated, create a GradeDetermination instance in the database
return GradeDetermination.objects.create(
ModelID=validated_data.get('ModelID'),
CourseID=validated_data.get('CourseID'),
Component=validated_data.get('Component'),
OutcomeEvaluated=validated_data.get('OutcomeEvaluated'),
Weight=validated_data.get('Weight'),
)
def update(self, instance, validated_data):
        # Once the request data has been validated, update the GradeDetermination instance in the database
instance.ModelID = validated_data.get('ModelID', instance.ModelID)
instance.CourseID = validated_data.get('CourseID', instance.CourseID)
instance.Component = validated_data.get('Component', instance.Component)
instance.OutcomeEvaluated = validated_data.get('OutcomeEvaluated', instance.OutcomeEvaluated)
instance.Weight = validated_data.get('Weight', instance.Weight)
instance.save()
return instance
class Meta:
model = GradeDetermination
fields = (
'ModelID',
'CourseID',
'Component',
'OutcomeEvaluated',
'Weight'
)
class OutcomeSerializer(serializers.ModelSerializer):
# ModelID = serializers.CharField(max_length=100, required=True)
CourseID = serializers.CharField(max_length=100, required=True)
OutcomeNum = serializers.IntegerField(required=False) # removed max_length=100
Description = serializers.CharField(max_length=500, required=False) # Changed max_length to 500
GraduateAttribute = serializers.CharField(max_length=100, required=False)
InstructionLvl = serializers.CharField(max_length=100, required=False)
def create(self, validated_data):
return Outcome.objects.create(
ModelID=validated_data.get('ModelID'),
CourseID=validated_data.get('CourseID'),
OutcomeNum=validated_data.get('OutcomeNum'),
Description=validated_data.get('Description'),
GraduateAttribute=validated_data.get('GraduateAttribute'),
InstructionLvl=validated_data.get('InstructionLvl'),
)
def update(self, instance, validated_data):
instance.ModelID = validated_data.get('ModelID', instance.ModelID)
instance.CourseID = validated_data.get('CourseID', instance.CourseID)
instance.OutcomeNum = validated_data.get('OutcomeNum', instance.OutcomeNum)
instance.Description = validated_data.get('Description', instance.Description)
instance.GraduateAttribute = validated_data.get('GraduateAttribute', instance.GraduateAttribute)
instance.InstructionLvl = validated_data.get('InstructionLvl', instance.InstructionLvl)
instance.save()
return instance
class Meta:
model = Outcome
fields = (
'ModelID',
'CourseID',
'OutcomeNum',
'Description',
'GraduateAttribute',
'InstructionLvl'
)
class TimetableSerializer(serializers.ModelSerializer):
# ModelID = serializers.CharField(max_length=100, required=True)
CourseID = serializers.CharField(max_length=100, required=True)
SectionNum = serializers.CharField(max_length=100, required=False)
Days = serializers.CharField(max_length=100, required=False)
Time = serializers.CharField(max_length=100, required=False)
Location = serializers.CharField(max_length=100, required=False)
def create(self, validated_data):
return Timetable.objects.create(
ModelID=validated_data.get('ModelID'),
CourseID=validated_data.get('CourseID'),
SectionNum=validated_data.get('SectionNum'),
Days=validated_data.get('Days'),
Time=validated_data.get('Time'),
Location=validated_data.get('Location'),
)
def update(self, instance, validated_data):
instance.ModelID = validated_data.get('ModelID', instance.ModelID)
instance.CourseID = validated_data.get('CourseID', instance.CourseID)
instance.SectionNum = validated_data.get('SectionNum', instance.SectionNum)
instance.Days = validated_data.get('Days', instance.Days)
instance.Time = validated_data.get('Time', instance.Time)
instance.Location = validated_data.get('Location', instance.Location)
instance.save()
return instance
class Meta:
model = Timetable
fields = (
'ModelID',
'CourseID',
'SectionNum',
'Days',
'Time',
'Location'
)
class GradeDistributionSerializer(serializers.ModelSerializer):
# ModelID = serializers.CharField(max_length=100, required=True)
CourseID = serializers.CharField(max_length=100, required=True)
LowerLimit = serializers.IntegerField(required=False) # removed max_length = 100
UpperLimit = serializers.IntegerField(required=False) # removed max_length = 100
LetterGrade = serializers.CharField(max_length=100, required=False)
def create(self, validated_data):
return GradeDistribution.objects.create(
ModelID=validated_data.get('ModelID'),
CourseID=validated_data.get('CourseID'),
LowerLimit=validated_data.get('LowerLimit'),
UpperLimit=validated_data.get('UpperLimit'),
LetterGrade=validated_data.get('LetterGrade'),
)
def update(self, instance, validated_data):
instance.ModelID = validated_data.get('ModelID', instance.ModelID)
instance.CourseID = validated_data.get('CourseID', instance.CourseID)
instance.LowerLimit = validated_data.get('LowerLimit', instance.LowerLimit)
instance.UpperLimit = validated_data.get('UpperLimit', instance.UpperLimit)
instance.LetterGrade = validated_data.get('LetterGrade', instance.LetterGrade)
instance.save()
return instance
class Meta:
model = GradeDistribution
fields = (
'ModelID',
'CourseID',
'LowerLimit',
'UpperLimit',
'LetterGrade'
)
class LectureSerializer(serializers.ModelSerializer):
# ModelID = serializers.CharField(max_length=100, required=True)
CourseID = serializers.CharField(max_length=100, required=True)
LectureNum = serializers.CharField(max_length=100, required=False)
FName = serializers.CharField(max_length=100, required=False)
LName = serializers.CharField(max_length=100, required=False)
Phone = serializers.CharField(max_length=100, required=False)
Office = serializers.CharField(max_length=100, required=False)
Email = serializers.CharField(max_length=100, required=False)
def create(self, validated_data):
return Lecture.objects.create(
ModelID=validated_data.get('ModelID'),
CourseID=validated_data.get('CourseID'),
LectureNum=validated_data.get('LectureNum'),
FName=validated_data.get('FName'),
LName=validated_data.get('LName'),
Phone=validated_data.get('Phone'),
Office=validated_data.get('Office'),
Email=validated_data.get('Email'),
)
def update(self, instance, validated_data):
instance.ModelID = validated_data.get('ModelID', instance.ModelID)
instance.CourseID = validated_data.get('CourseID', instance.CourseID)
instance.LectureNum = validated_data.get('LectureNum', instance.LectureNum)
instance.FName = validated_data.get('FName', instance.FName)
instance.LName = validated_data.get('LName', instance.LName)
instance.Phone = validated_data.get('Phone', instance.Phone)
instance.Office = validated_data.get('Office', instance.Office)
instance.Email = validated_data.get('Email', instance.Email)
instance.save()
return instance
class Meta:
model = Lecture
fields = (
'ModelID',
'CourseID',
'LectureNum',
'FName',
'LName',
'Phone',
'Office',
'Email'
)
class TutorialSerializer(serializers.ModelSerializer):
# ModelID = serializers.CharField(max_length=100, required=True)
CourseID = serializers.CharField(max_length=100, required=True)
TutorialNum = serializers.CharField(max_length=100, required=False) # Changed Tutorial Num to CharField
FName = serializers.CharField(max_length=100, required=False) # Changed FName to CharField
LName = serializers.CharField(max_length=100, required=False)
Phone = serializers.CharField(max_length=100, required=False)
Office = serializers.CharField(max_length=100, required=False)
Email = serializers.CharField(max_length=100, required=False)
def create(self, validated_data):
return Tutorial.objects.create(
ModelID=validated_data.get('ModelID'),
CourseID=validated_data.get('CourseID'),
TutorialNum=validated_data.get('TutorialNum'),
FName=validated_data.get('FName'),
LName=validated_data.get('LName'),
Phone=validated_data.get('Phone'),
Office=validated_data.get('Office'),
Email=validated_data.get('Email'),
)
def update(self, instance, validated_data):
instance.ModelID = validated_data.get('ModelID', instance.ModelID)
instance.CourseID = validated_data.get('CourseID', instance.CourseID)
instance.TutorialNum = validated_data.get('TutorialNum', instance.TutorialNum)
instance.FName = validated_data.get('FName', instance.FName)
instance.LName = validated_data.get('LName', instance.LName)
instance.Phone = validated_data.get('Phone', instance.Phone)
instance.Office = validated_data.get('Office', instance.Office)
instance.Email = validated_data.get('Email', instance.Email)
instance.save()
return instance
class Meta:
model = Tutorial
fields = (
'ModelID',
'CourseID',
'TutorialNum',
'FName',
'LName',
'Phone',
'Office',
'Email'
)
class CourseSerializer(serializers.ModelSerializer):
# ModelID = serializers.CharField(max_length=100, required=True)
CourseID = serializers.CharField(max_length=100, required=True)
CourseHours = serializers.CharField(max_length=100, required=False) # Changed CourseHours to CharField
CourseName = serializers.CharField(max_length=100, required=False) # Changed CourseName to CharField
CalenderRefrence = serializers.CharField(max_length=100, required=False)
AcademicCredit = serializers.IntegerField(required=False) # Changed AcademicCredit to IntegerField
DateCreated = serializers.CharField(max_length=100, required=False)
def create(self, validated_data):
return Course.objects.create(
ModelID=validated_data.get('ModelID'),
CourseID=validated_data.get('CourseID'),
CourseHours=validated_data.get('CourseHours'),
CourseName=validated_data.get('CourseName'),
CalenderRefrence=validated_data.get('CalenderRefrence'),
AcademicCredit=validated_data.get('AcademicCredit'),
DateCreated=validated_data.get('DateCreated'),
)
def update(self, instance, validated_data):
instance.ModelID = validated_data.get('ModelID', instance.ModelID)
instance.CourseID = validated_data.get('CourseID', instance.CourseID)
instance.CourseHours = validated_data.get('CourseHours', instance.CourseHours)
instance.CourseName = validated_data.get('CourseName', instance.CourseName)
instance.CalenderRefrence = validated_data.get('CalenderRefrence', instance.CalenderRefrence)
instance.AcademicCredit = validated_data.get('AcademicCredit', instance.AcademicCredit)
instance.DateCreated = validated_data.get('DateCreated', instance.DateCreated)
instance.save()
return instance
class Meta:
model = Course
fields = (
'ModelID',
'CourseID',
'CourseHours',
'CourseName',
'CalenderRefrence',
'AcademicCredit',
'DateCreated'
)
class TextbookSerializer(serializers.ModelSerializer):
# ModelID = serializers.CharField(max_length=100, required=True)
CourseID = serializers.CharField(max_length=100, required=True)
TITLE = serializers.CharField(max_length=100, required=False)
Publisher = serializers.CharField(max_length=100, required=False)
Author = serializers.CharField(max_length=100, required=False)
Edition = serializers.CharField(max_length=100, required=False)
type = serializers.CharField(max_length=100, required=False)
def create(self, validated_data):
return Textbook.objects.create(
ModelID=validated_data.get('ModelID'),
CourseID=validated_data.get('CourseID'),
TITLE=validated_data.get('TITLE'),
Publisher=validated_data.get('Publisher'),
Author=validated_data.get('Author'),
Edition=validated_data.get('Edition'),
type=validated_data.get('type'),
)
def update(self, instance, validated_data):
instance.ModelID = validated_data.get('ModelID', instance.ModelID)
instance.CourseID = validated_data.get('CourseID', instance.CourseID)
instance.TITLE = validated_data.get('TITLE', instance.TITLE)
instance.Publisher = validated_data.get('Publisher', instance.Publisher)
instance.Author = validated_data.get('Author', instance.Author)
instance.Edition = validated_data.get('Edition', instance.Edition)
instance.type = validated_data.get('type', instance.type)
        instance.save()

        return instance

    class Meta:
        model = Textbook
        fields = (
            'ModelID',
            'CourseID',
            'TITLE',
            'Publisher',
            'Author',
            'Edition',
            'type'
        )


class AuWeightSerializer(serializers.ModelSerializer):
    # ModelID = serializers.CharField(max_length=100, required=True)
    CourseID = serializers.CharField(max_length=100, required=True)
    Category = serializers.CharField(max_length=100, required=True)
    AU = serializers.IntegerField(required=False)

    def create(self, validated_data):
        return AuWeight.objects.create(
            ModelID=validated_data.get('ModelID'),
            CourseID=validated_data.get('CourseID'),
            Category=validated_data.get('Category'),
            AU=validated_data.get('AU'),
        )

    def update(self, instance, validated_data):
        instance.ModelID = validated_data.get('ModelID', instance.ModelID)
        instance.CourseID = validated_data.get('CourseID', instance.CourseID)
        instance.Category = validated_data.get('Category', instance.Category)
        instance.AU = validated_data.get('AU', instance.AU)
        instance.save()

        return instance

    class Meta:
        model = AuWeight
        fields = (
            'ModelID',
            'CourseID',
            'Category',
            'AU'
        )


class ContentCategorySerializer(serializers.ModelSerializer):
    # ModelID = serializers.CharField(max_length=100, required=True)
    CourseID = serializers.CharField(max_length=100, required=True)
    CategoryType = serializers.CharField(max_length=100, required=True)
    Element = serializers.CharField(max_length=100, required=True)

    def create(self, validated_data):
        return ContentCategory.objects.create(
            ModelID=validated_data.get('ModelID'),
            CourseID=validated_data.get('CourseID'),
            CategoryType=validated_data.get('CategoryType'),
            Element=validated_data.get('Element'),
        )

    def update(self, instance, validated_data):
        instance.ModelID = validated_data.get('ModelID', instance.ModelID)
        instance.CourseID = validated_data.get('CourseID', instance.CourseID)
        instance.CategoryType = validated_data.get('CategoryType', instance.CategoryType)
        instance.Element = validated_data.get('Element', instance.Element)
        instance.save()

        return instance

    class Meta:
        model = ContentCategory
        fields = (
            'ModelID',
            'CourseID',
            'CategoryType',
            'Element'
        )


class LabSerializer(serializers.ModelSerializer):
    # ModelID = serializers.CharField(max_length=100, required=True)
    CourseID = serializers.CharField(max_length=100, required=True)
    LabNum = serializers.CharField(max_length=100, required=True)
    NumberOfLabs = serializers.IntegerField(required=False)
    LabType = serializers.CharField(max_length=100, required=True)
    SafetyExamined = serializers.CharField(max_length=100, required=True)
    SafetyTaught = serializers.CharField(max_length=100, required=True)
    FName = serializers.CharField(max_length=100, required=True)
    LName = serializers.CharField(max_length=100, required=True)
    Phone = serializers.CharField(max_length=100, required=True)
    Office = serializers.CharField(max_length=100, required=True)
    Email = serializers.CharField(max_length=100, required=True)

    def create(self, validated_data):
        return Lab.objects.create(
            ModelID=validated_data.get('ModelID'),
            CourseID=validated_data.get('CourseID'),
            LabNum=validated_data.get('LabNum'),
            NumberOfLabs=validated_data.get('NumberOfLabs'),
            LabType=validated_data.get('LabType'),
            SafetyExamined=validated_data.get('SafetyExamined'),
            SafetyTaught=validated_data.get('SafetyTaught'),
            FName=validated_data.get('FName'),
            LName=validated_data.get('LName'),
            Phone=validated_data.get('Phone'),
            Office=validated_data.get('Office'),
            Email=validated_data.get('Email'),
        )

    def update(self, instance, validated_data):
        instance.ModelID = validated_data.get('ModelID', instance.ModelID)
        instance.CourseID = validated_data.get('CourseID', instance.CourseID)
        instance.LabNum = validated_data.get('LabNum', instance.LabNum)
        instance.NumberOfLabs = validated_data.get('NumberOfLabs', instance.NumberOfLabs)
        instance.LabType = validated_data.get('LabType', instance.LabType)
        instance.SafetyExamined = validated_data.get('SafetyExamined', instance.SafetyExamined)
        instance.SafetyTaught = validated_data.get('SafetyTaught', instance.SafetyTaught)
        instance.FName = validated_data.get('FName', instance.FName)
        instance.LName = validated_data.get('LName', instance.LName)
        instance.Phone = validated_data.get('Phone', instance.Phone)
        instance.Office = validated_data.get('Office', instance.Office)
        instance.Email = validated_data.get('Email', instance.Email)
        instance.save()

        return instance

    class Meta:
        model = Lab
        fields = (
            'ModelID',
            'CourseID',
            'LabNum',
            'NumberOfLabs',
            'LabType',
            'SafetyExamined',
            'SafetyTaught',
            'FName',
            'LName',
            'Phone',
            'Office',
            'Email'
        )


class SectionSerializer(serializers.ModelSerializer):
    # ModelID = serializers.CharField(max_length=100, required=True)
    CourseID = serializers.CharField(max_length=100, required=True)
    SectionNumber = serializers.CharField(max_length=100, required=False)
    Students = serializers.IntegerField(required=False)
    Hours = serializers.IntegerField(required=False)
    type = serializers.CharField(max_length=100, required=True)

    def create(self, validated_data):
        return Section.objects.create(
            ModelID=validated_data.get('ModelID'),
            CourseID=validated_data.get('CourseID'),
            SectionNumber=validated_data.get('SectionNumber'),
            Students=validated_data.get('Students'),
            Hours=validated_data.get('Hours'),
            type=validated_data.get('type'),
        )

    def update(self, instance, validated_data):
        instance.ModelID = validated_data.get('ModelID', instance.ModelID)
        instance.CourseID = validated_data.get('CourseID', instance.CourseID)
        instance.SectionNumber = validated_data.get('SectionNumber', instance.SectionNumber)
        instance.Students = validated_data.get('Students', instance.Students)
        instance.Hours = validated_data.get('Hours', instance.Hours)
        instance.type = validated_data.get('type', instance.type)
        instance.save()

        return instance

    class Meta:
        model = Section
        fields = (
            'ModelID',
            'CourseID',
            'SectionNumber',
            'Students',
            'Hours',
            'type'
        )
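The serializers above repeat an identical field-by-field `update()` body for every model. A minimal, framework-free sketch of the generic pattern (which DRF's `ModelSerializer` already implements for you) — `GenericUpdateMixin` and `FakeModel` are hypothetical stand-ins used only for illustration:

```python
class GenericUpdateMixin:
    """Copy every key in validated_data onto the instance, then save."""
    def update(self, instance, validated_data):
        for field, value in validated_data.items():
            setattr(instance, field, value)
        instance.save()
        return instance

class FakeModel:
    """Stand-in for a Django model instance, for demonstration only."""
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)
        self.saved = False
    def save(self):
        self.saved = True

obj = FakeModel(CourseID="CS101", AU=3)
GenericUpdateMixin().update(obj, {"AU": 4})
print(obj.AU, obj.saved)  # 4 True
```

In practice, dropping the hand-written `create()`/`update()` methods and letting `ModelSerializer` generate them from `Meta.fields` would remove most of this boilerplate.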
#!/usr/bin/env python
# File: cvp_rest_api_examples/cvpLabelAdd.py (kakkotetsu/CVP-Scripts, Apache-2.0)
# Copyright (c) 2019, Arista Networks, Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
# - Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
# - Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# - Neither the name of Arista Networks nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL ARISTA NETWORKS
# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
# BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
# OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
# IF NOT ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import requests
import json
import argparse
import urllib3
def parseArgs():
    parser = argparse.ArgumentParser()
    parser.add_argument( '-c', '--cvpName', required=True, help='cvp name' )
    parser.add_argument( '-u', '--userId', help='username',
                         default='cvpadmin')
    parser.add_argument( '-p', '--password', help='password',
                         default='arista')
    args = vars( parser.parse_args() )
    return args.pop( 'cvpName' ), args

def getCvpInfo( cvpName ):
    api = 'cvpInfo/getCvpInfo.do'
    url = 'https://%s:443/web/%s' % ( cvpName, api )
    print 'calling url: ', url
    return requests.get( url, cookies=cookies, verify=False )

def addDeviceToLabel( cvpName, label, deviceMac ):
    api = 'label/labelAssignToDevice.do'
    url = 'https://%s:443/web/%s' % ( cvpName, api )
    body = {'label': label, 'device': deviceMac}
    print 'calling url: ', url
    return requests.post( url, cookies=cookies, data=json.dumps(body), verify=False )

def authenticate( cvpName, loginInfo ):
    url = 'https://%s:443/web/login/authenticate.do' % ( cvpName, )
    return requests.post( url, json.dumps( loginInfo ), verify=False )

if __name__ == '__main__':
    urllib3.disable_warnings()
    cvpName, loginInfo = parseArgs()
    cookies = authenticate( cvpName, loginInfo ).cookies
    #print json.loads(getCvpInfo( cvpName ).text)
    #print getCvpInfo( cvpName ).json()
    print 'getCvpInfo:'
    print json.dumps(getCvpInfo( cvpName ).json(), indent=2)

    # ADD DEVICE TO LABEL
    # label = "{ tagType: tagValue }"
    label = "mlag:mlagNY"
    device = "de:ad:be:ef:ca:fe"
    print 'addDeviceToLabel:', label, device
    print json.dumps(addDeviceToLabel( cvpName, label, device ).json(), indent=2)
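The `parseArgs()` helper above splits the CLI arguments into the CVP hostname plus a credentials dict that can be posted as the login body. A Python 3 sketch of that same split, testable without a live CVP server (the hostname below is hypothetical):

```python
import argparse

def parse_args(argv):
    """Return (cvp_name, login_info_dict) from an argv-style list."""
    parser = argparse.ArgumentParser()
    parser.add_argument('-c', '--cvpName', required=True, help='cvp name')
    parser.add_argument('-u', '--userId', help='username', default='cvpadmin')
    parser.add_argument('-p', '--password', help='password', default='arista')
    args = vars(parser.parse_args(argv))
    # Pop the hostname; everything left is the JSON login payload.
    return args.pop('cvpName'), args

name, login = parse_args(['-c', 'cvp1.example.com'])
print(name, login)
```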
# File: frontends/pytorch/python/torch_mlir_torchscript_e2e_test_configs/torchscript.py
# Repo: raikonenfnu/mlir-npcomp (Apache-2.0)
# Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
# See https://llvm.org/LICENSE.txt for license information.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
import copy
from typing import Any
import torch
from torch_mlir_torchscript.e2e_test.framework import TestConfig, Trace, TraceItem
class TorchScriptTestConfig(TestConfig):
    """TestConfig that runs the torch.nn.Module through TorchScript"""

    def __init__(self):
        super().__init__()

    def compile(self, program: torch.nn.Module) -> torch.jit.ScriptModule:
        return torch.jit.script(program)

    def run(self, artifact: torch.jit.ScriptModule, trace: Trace) -> Trace:
        # TODO: Deepcopy the torch.jit.ScriptModule, so that if the program is
        # stateful then it does not mutate the original compiled program.
        result: Trace = []
        for item in trace:
            attr = artifact
            for part in item.symbol.split('.'):
                attr = getattr(attr, part)
            output = attr(*item.inputs)
            result.append(
                TraceItem(symbol=item.symbol,
                          inputs=item.inputs,
                          output=output))
        return result
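The `run()` method resolves each trace item's dotted symbol path against the compiled artifact by walking `getattr` over `symbol.split('.')`. A torch-free sketch of just that resolution step — `Artifact`, `Inner`, and `resolve` are hypothetical names for illustration:

```python
class Inner:
    def double(self, x):
        return 2 * x

class Artifact:
    def __init__(self):
        self.inner = Inner()

def resolve(artifact, symbol):
    """Walk a dotted path like 'inner.double' down to the bound method."""
    attr = artifact
    for part in symbol.split('.'):
        attr = getattr(attr, part)
    return attr

fn = resolve(Artifact(), 'inner.double')
print(fn(21))  # 42
```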
# File: app/balltracking/pubnubpython/pnconfiguration.py
# Repo: gdmgent-1718-wot/interactive-wall (Apache-2.0)
from .enums import PNHeartbeatNotificationOptions, PNReconnectionPolicy
from . import utils
class PNConfiguration(object):
    DEFAULT_PRESENCE_TIMEOUT = 300
    DEFAULT_HEARTBEAT_INTERVAL = 280

    def __init__(self):
        # TODO: add validation
        self.uuid = None
        self.origin = "ps.pndsn.com"
        self.ssl = False
        self.non_subscribe_request_timeout = 10
        self.subscribe_request_timeout = 310
        self.connect_timeout = 5
        self.subscribe_key = None
        self.publish_key = None
        self.secret_key = None
        self.cipher_key = None
        self.auth_key = None
        self.filter_expression = None
        self.enable_subscribe = True
        self.crypto_instance = None
        self.log_verbosity = False
        self.heartbeat_notification_options = PNHeartbeatNotificationOptions.FAILURES
        self.reconnect_policy = PNReconnectionPolicy.NONE
        self.daemon = False
        self.heartbeat_default_values = True
        self._presence_timeout = PNConfiguration.DEFAULT_PRESENCE_TIMEOUT
        self._heartbeat_interval = PNConfiguration.DEFAULT_HEARTBEAT_INTERVAL

    def validate(self):
        assert self.uuid is None or isinstance(self.uuid, str)

        if self.uuid is None:
            self.uuid = utils.uuid()

    def scheme(self):
        if self.ssl:
            return "https"
        else:
            return "http"

    def scheme_extended(self):
        return self.scheme() + "://"

    def scheme_and_host(self):
        return self.scheme_extended() + self.origin

    def set_presence_timeout_with_custom_interval(self, timeout, interval):
        self.heartbeat_default_values = False
        self._presence_timeout = timeout
        self._heartbeat_interval = interval

    def set_presence_timeout(self, timeout):
        self.set_presence_timeout_with_custom_interval(timeout, (timeout / 2) - 1)

    @property
    def crypto(self):
        if self.crypto_instance is None:
            self._init_cryptodome()
        return self.crypto_instance

    def _init_cryptodome(self):
        from .crypto import PubNubCryptodome
        self.crypto_instance = PubNubCryptodome()

    @property
    def port(self):
        # self.ssl is a bool (see __init__), so compare truthiness rather
        # than the string "https" as the original code did.
        return 443 if self.ssl else 80

    @property
    def presence_timeout(self):
        return self._presence_timeout

    @property
    def heartbeat_interval(self):
        return self._heartbeat_interval

    # TODO: set log level
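The scheme/host/port derivation in `PNConfiguration` hinges entirely on the `ssl` flag. A standalone sketch of that logic (a re-implementation for illustration, not the PubNub SDK API itself):

```python
class MiniConfig:
    """Toy version of PNConfiguration's scheme/port derivation."""
    def __init__(self, ssl=False, origin="ps.pndsn.com"):
        self.ssl = ssl
        self.origin = origin

    def scheme(self):
        return "https" if self.ssl else "http"

    def scheme_and_host(self):
        return self.scheme() + "://" + self.origin

    @property
    def port(self):
        # Standard TLS/plain HTTP ports, keyed off the boolean flag.
        return 443 if self.ssl else 80

c = MiniConfig(ssl=True)
print(c.scheme_and_host(), c.port)  # https://ps.pndsn.com 443
```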
# File: language/Basics/stringformatting.py (Binary-bug/Python, MIT)
age = 24
print("My age is " + str(age) + " years ")
# The above is tedious, since we don't really want to wrap str() around every number we encounter.

# Method 1: replacement fields
print("My age is {0} years ".format(age))  # {0} is the replacement field; the index matters when there are multiple fields
print("There are {0} days in {1}, {2}, {3}, {4}, {5}, {6} and {7} ".format(31, "January", "March", "May", "July", "August", "October", "December"))
# Each argument of .format() is matched to its corresponding replacement field.
print("""January:{2}
February:{0}
March:{2}
April:{1}
""".format(28, 30, 31))

# Method 2: the % formatting operator (Python 2 style, not recommended)
print("My age is %d years" % age)
print("My age is %d %s, %d %s" % (age, "years", 6, "months"))
# ^ old format and it was elegant -__-
#
# for i in range(1, 12):
#     print("No, %2d squared is %4d and cubed is %4d" % (i, i**2, i**3))  # ** raises to a power; %xd pads the field to x characters
#
# # for comparison
# print()
# for i in range(1, 12):
#     print("No, %d squared is %d and cubed is %d" % (i, i**2, i**3))
#
# # adding more precision
# print("Pi is approximately %12.50f" % (22/7))  # 50-digit precision in a 12-wide field; the default precision is 6
#

# Replacement-field syntax variants of the Python 2 tricks above
for i in range(1, 12):
    print("No. {0:2} squared is {1:4} and cubed is {2:4}".format(i, i**2, i**3))

print()

# for left alignment
for i in range(1, 12):
    print("No. {0:<2} squared is {1:<4} and cubed is {2:<4}".format(i, i**2, i**3))

# floating point precision
print("Pi is approximately {0:.50}".format(22/7))

# Use of numbers in replacement fields is optional when the default order is implied
for i in range(1, 12):
    print("No. {:2} squared is {:4} and cubed is {:4}".format(i, i**2, i**3))

days = "Mon, Tue, Wed, Thu, Fri, Sat, Sun"
print(days[::5])
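Since Python 3.6, f-strings accept the same format mini-language as `.format()` with less ceremony. A short sketch of the equivalents of the loops above (same width and precision specifiers, just inlined):

```python
# Same ':2' / ':4' width specifiers as the .format() examples above.
line = f"No. {3:2} squared is {3**2:4} and cubed is {3**3:4}"
print(line)

# Same precision idea as '%f' / '{:.50}', here with 6 decimal places.
pi_approx = f"Pi is approximately {22/7:.6f}"
print(pi_approx)
```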
# File: Euler0001.py (rbarillec/project_euler, MIT)
def Euler0001():
    limit = 1000  # renamed from `max` to avoid shadowing the builtin
    total = 0     # renamed from `sum` for the same reason
    for i in range(1, limit):
        if i % 3 == 0 or i % 5 == 0:
            total += i
    print(total)

Euler0001()
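The loop above is O(n); the same sum has a closed form via the arithmetic-series formula, counting multiples of 3 and 5 and subtracting the double-counted multiples of 15:

```python
def sum_multiples_below(limit):
    """Sum of multiples of 3 or 5 below `limit`, via inclusion-exclusion."""
    def s(k):
        # Sum of k, 2k, ..., nk where nk < limit: k * n * (n + 1) / 2
        n = (limit - 1) // k
        return k * n * (n + 1) // 2
    return s(3) + s(5) - s(15)

print(sum_multiples_below(1000))  # 233168
```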
# File: Algorithms_easy/0461. Hamming Distance.py (VinceW0/Leetcode_Python_solutions)
"""
0461. Hamming Distance
The Hamming distance between two integers is the number of positions at which the corresponding bits are different.
Given two integers x and y, calculate the Hamming distance.
Note:
0 ≤ x, y < 231.
Example:
Input: x = 1, y = 4
Output: 2
Explanation:
1 (0 0 0 1)
4 (0 1 0 0)
↑ ↑
The above arrows point to positions where the corresponding bits are different.
"""
class Solution:
    def hammingDistance(self, x: int, y: int):
        z = x ^ y
        res = 0
        while z:
            res += z & 1
            z = z >> 1
        return res


# Alternative one-liner
class Solution:
    def hammingDistance(self, x: int, y: int):
        return bin(x ^ y).count('1')
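Both solutions rely on the same observation: XOR leaves a 1 exactly where the two numbers differ, so the Hamming distance is the popcount of `x ^ y`. A standalone function form with the example from the docstring:

```python
def hamming_distance(x, y):
    # x ^ y has a set bit at every position where x and y differ.
    return bin(x ^ y).count('1')

print(hamming_distance(1, 4))  # 2, as in the worked example above
```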
# File: 0100.same_tree/solution.py (WZMJ/Algorithms, MIT)
from utils import TreeNode
class Solution:
    def is_same_tree(self, p: TreeNode, q: TreeNode) -> bool:
        if p is None and q is None:
            return True
        if not p or not q:
            return False
        return p.val == q.val and self.is_same_tree(p.left, q.left) and self.is_same_tree(p.right, q.right)
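The recursion checks three cases: both subtrees empty (equal), exactly one empty (unequal), otherwise equal roots plus recursively equal left and right children. A self-contained sketch with a minimal `TreeNode` stand-in (the imported `utils.TreeNode` is not shown here, so this local definition is an assumption):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_same_tree(p, q):
    if p is None and q is None:
        return True
    if not p or not q:
        return False
    return p.val == q.val and is_same_tree(p.left, q.left) and is_same_tree(p.right, q.right)

a = TreeNode(1, TreeNode(2), TreeNode(3))
b = TreeNode(1, TreeNode(2), TreeNode(3))
c = TreeNode(1, TreeNode(2))          # missing right child
print(is_same_tree(a, b), is_same_tree(a, c))  # True False
```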
# File: tpv/modals/sugerencias.py (vallemrv/tpvB3, Apache-2.0)
# @Author: Manuel Rodriguez <valle>
# @Date: 10-May-2017
# @Email: valle.mrv@gmail.com
# @Last modified by: valle
# @Last modified time: 23-Feb-2018
# @License: Apache license vesion 2.0
from kivy.uix.modalview import ModalView
from kivy.uix.button import Button
from kivy.properties import ObjectProperty, StringProperty, ListProperty
from kivy.lang import Builder
Builder.load_file("view/sugerencias.kv")
class Sugerencias(ModalView):
    onExit = ObjectProperty(None, allownone=True)
    content = ObjectProperty(None, allownone=True)
    texto = StringProperty("")
    des = StringProperty("")
    sug = ListProperty([])
    key = StringProperty("")
    tag = ObjectProperty(None, allownone=True)

    def __init__(self, **kargs):
        super(Sugerencias, self).__init__(**kargs)
        self.auto_dismiss = False

    def on_sug(self, key, value):
        self.lista.rm_all_widgets()
        for item in self.sug:
            btn = Button(text=item)
            btn.tag = item
            btn.bind(on_press=self.onPress)
            self.lista.add_linea(btn)

    def onPress(self, b):
        self.onExit(self.key, self.content, b.tag, self.tag)

    def clear_text(self):
        self.texto = ""

    def exit(self):
        self.texto = self.txtSug.text
        if self.onExit:
            if self.texto != "":
                self.sug.append(self.texto)
            self.onExit(self.key, self.content, self.texto, self.tag)
# File: templates/integration/__init__.py (p7g/dd-trace-py, Apache-2.0 / BSD-3-Clause)
"""
The foo integration instruments the bar and baz features of the
foo library.

Enabling
~~~~~~~~

The foo integration is enabled automatically when using
:ref:`ddtrace-run <ddtracerun>` or :ref:`patch_all() <patch_all>`.

Or use :ref:`patch() <patch>` to manually enable the integration::

    from ddtrace import patch
    patch(foo=True)

Global Configuration
~~~~~~~~~~~~~~~~~~~~

.. py:data:: ddtrace.config.foo["service"]

   The service name reported by default for foo instances.

   This option can also be set with the ``DD_FOO_SERVICE`` environment
   variable.

   Default: ``"foo"``

Instance Configuration
~~~~~~~~~~~~~~~~~~~~~~

To configure the foo integration on a per-instance basis use the
``Pin`` API::

    import foo
    from ddtrace import Pin

    myfoo = foo.Foo()
    Pin.override(myfoo, service="myfoo")
"""
from ...internal.utils.importlib import require_modules

required_modules = ["foo"]

with require_modules(required_modules) as missing_modules:
    if not missing_modules:
        from .patch import patch
        from .patch import unpatch

        __all__ = ["patch", "unpatch"]
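The `require_modules` guard only exposes `patch`/`unpatch` when the traced library is actually importable. A stdlib-only sketch of the same guarded-import idea (`module_available` is a hypothetical helper, not part of ddtrace):

```python
import importlib

def module_available(name):
    """Return True if `name` can be imported, without raising."""
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False

print(module_available("json"))                    # True
print(module_available("definitely_not_a_module"))  # False
```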
ed3c39ee9d299277d428f6d6c8408e0b9a778f0c | 17,635 | py | Python | demos/ServerSideBrowser.py | eukreign/python-v8 | f20d7bef766a2ae3573cc536e7d03e07afe9b173 | [
"Apache-2.0"
] | 2 | 2018-02-12T22:34:09.000Z | 2019-01-03T05:18:00.000Z | demos/ServerSideBrowser.py | eukreign/python-v8 | f20d7bef766a2ae3573cc536e7d03e07afe9b173 | [
"Apache-2.0"
] | null | null | null | demos/ServerSideBrowser.py | eukreign/python-v8 | f20d7bef766a2ae3573cc536e7d03e07afe9b173 | [
"Apache-2.0"
] | 3 | 2019-02-13T08:00:06.000Z | 2020-05-17T22:40:20.000Z | #!/usr/bin/env python
from __future__ import with_statement
import sys, traceback, os, os.path
import urllib2, urlparse
import xml.dom.minidom
import logging

# Third-party modules used by the classes below but missing from the
# original import list as extracted.
import BeautifulSoup
import PyV8
class Task(object):
    @staticmethod
    def waitAll(tasks):
        pass


class FetchFile(Task):
    def __init__(self, url):
        self.url = url

    def __call__(self):
        logging.debug("fetching from %s", self.url)

        try:
            return urllib2.urlopen(self.url)
        except:
            logging.warn("fail to fetch %s: %s", self.url, traceback.format_exc())
            return None


class Evaluator(Task):
    def __init__(self, target):
        assert hasattr(target, "eval")

        self.target = target

    def __call__(self):
        try:
            self.target.eval(self.pipeline)
        except:
            logging.warn("fail to evaluate %s: %s", self.target, traceback.format_exc())

        return self.target

    def __repr__(self):
        return "<Evaluator object for %s at 0x%08X>" % (self.target, id(self))
class WebObject(object):
    context = []

    def __enter__(self):
        self.context.append(self)
        logging.debug("entering %s...", self)
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.context.pop()
        logging.debug("leaving %s...", self)

    def __init__(self, parent, url):
        self.children = []
        self.parent = parent
        self.url = url

    @staticmethod
    def current():
        current = WebObject.context[-1] if len(WebObject.context) > 0 else None
        return current

    @property
    def page(self):
        tag = self.parent
        while not isinstance(tag, WebPage):
            tag = tag.parent
        return tag


class WebScript(WebObject):
    def __init__(self, parent, value, url):
        WebObject.__init__(self, parent, url)

        if type(value) in [str, unicode]:
            self.script = value
        elif hasattr(value, "read"):
            self.script = value.read()
        else:
            self.func = value

    def eval(self, pipeline):
        if len(WebObject.context) > 0:
            WebObject.context[-1].children.append((None, self))

        with self:
            if hasattr(self, "script"):
                self.result = self.page.window.eval(self.script)
            else:
                self.result = self.page.window.execute(self.func)
class HtmlStyle(PyV8.JSClass):
    def __init__(self, node):
        self._node = node
        self._attrs = self.parse(node.getAttribute("style"))

    def parse(self, style):
        attrs = {}

        try:
            for attr in style.split(';'):
                if attr == '': continue

                strs = attr.split(':')

                if len(strs) == 2:
                    attrs[strs[0]] = strs[1]
                else:
                    attrs[attr] = None
        except:
            logging.warn("fail to parse the style attribute: %s", sys.exc_info()[1])

        return attrs

    def __getattr__(self, name):
        try:
            try:
                return object.__getattribute__(self, name)
            except AttributeError:
                return object.__getattribute__(self, "_attrs")[name]
        except:
            logging.error(sys.exc_info())

    def __setattr__(self, name, value):
        try:
            if name[0] == '_':
                return object.__setattr__(self, name, value)
            else:
                node = object.__getattribute__(self, "_node")
                attrs = object.__getattribute__(self, "_attrs")

                # Record the new attribute before serializing; the original
                # code rebuilt the style string without ever storing `value`.
                attrs[name] = value

                style = ";".join(["%s:%s" % (k, v) if v else k for k, v in attrs.items()])

                if node.hasAttribute("style") and len(style) == 0:
                    node.removeAttribute("style")
                elif len(style) > 0:
                    node.setAttribute("style", style)
        except:
            logging.error(sys.exc_info())
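The `parse()` method above splits a CSS inline-style string on `;` and `:` into a dict, mapping bare tokens to `None`. A standalone sketch of that parsing logic, runnable without PyV8:

```python
def parse_style(style):
    """Split 'k:v;k2:v2' inline CSS into a dict, as HtmlStyle.parse does."""
    attrs = {}
    for attr in style.split(';'):
        if attr == '':
            continue
        parts = attr.split(':')
        if len(parts) == 2:
            attrs[parts[0]] = parts[1]
        else:
            attrs[attr] = None  # token without a value
    return attrs

print(parse_style("color:red;font-weight:bold"))
# {'color': 'red', 'font-weight': 'bold'}
```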
class WebCss(WebObject):
    def __init__(self, parent, value, url):
        WebObject.__init__(self, parent, url)

        self.css = value if type(value) in [str, unicode] else value.read()

    def eval(self, pipeline):
        logging.info("evaluate css: %s...", self.css[:20])

        with self:
            pass
class WebPage(WebObject):
    def __init__(self, parent, response, url):
        WebObject.__init__(self, parent, url)

        self.code = response.code
        self.headers = response.headers

        html = response.read()

        self.size = len(html)
        self.dom = BeautifulSoup.BeautifulSoup(html)
        self.window = HtmlWindow(self, self.dom)

    def __repr__(self):
        return "<WebPage at %s>" % self.url

    def evalScript(self, pipeline, script, parent):
        if script.has_key("type") and script["type"] != "text/javascript":
            raise NotImplementedError("not support script type %s", script["type"])
        elif script.has_key("src"):
            if script["src"].startswith("http://www.google-analytics.com"):
                return None

            return pipeline.openScript(self, script["src"],
                lambda child: parent.children.append((script, child)))
        else:
            return pipeline.evalScript(self, unicode(script.string).encode("utf-8"),
                lambda child: parent.children.append((script, child)))

    def evalTag(self, pipeline, tag, parent):
        with parent:
            tasks = []

            for iframe in tag.findAll('iframe'):
                tasks.append(pipeline.openPage(self, iframe["src"],
                    lambda page: parent.children.append((iframe, page))))

            for frame in tag.findAll('frame'):
                tasks.append(pipeline.openPage(self, frame["src"],
                    lambda page: parent.children.append((frame, page))))

            for link in tag.findAll('link', rel='stylesheet', type='text/css', href=True):
                tasks.append(pipeline.openCss(self, link["href"],
                    lambda css: parent.children.append((link, css))))

            # Note: the original code searched for 'style,' (stray comma) and
            # appended (link, css) here; both look like copy-paste slips.
            for style in tag.findAll('style', type='text/css'):
                tasks.append(pipeline.evalCss(self, unicode(style.string).encode("utf-8"),
                    lambda css: parent.children.append((style, css))))

            for script in tag.findAll('script'):
                tasks.append(self.evalScript(pipeline, script, parent))

            return tasks

    def eval(self, pipeline):
        with self.window.ctxt:
            scripts = []

            self.window.document.onCreateElement = lambda element: scripts.append((element, WebObject.current())) if element.tagName == "script" else None
            self.window.document.onDocumentWrite = lambda element: self.evalTag(pipeline, element.tag, WebObject.current())

            tasks = self.evalTag(pipeline, self.dom, self)

            Task.waitAll(tasks)

            self.window.timers.sort(lambda x, y: x[0] - y[0])

            for interval, code in self.window.timers:
                tasks.append(pipeline.evalScript(self, code))

            try:
                scripts.append((self.window.document.body['onload'], self))
            except:
                pass

            for script, parent in scripts:
                with parent:
                    tasks.append(self.evalScript(pipeline, script.tag, parent))
class WebSession(object):
    def __init__(self, root):
        self.root = root

    def __repr__(self):
        return "<WebSession at %s>" % self.root.url

    def dumpName(self, obj):
        if isinstance(obj, WebCss): return "Css%d" % id(obj)
        if isinstance(obj, WebScript): return "Script%d" % id(obj)
        if isinstance(obj, WebPage): return "Page%d" % id(obj)

        return "Object%d" % id(obj)

    def dumpChildren(self, out, obj):
        for tag, child in obj.children:
            if isinstance(child, WebCss):
                self.dumpCss(out, child)
            elif isinstance(child, WebScript):
                self.dumpScript(out, child)
            elif isinstance(child, WebPage):
                self.dumpPage(out, child)  # was `dumpPAge`, a typo

    def dumpCss(self, out, css):
        print >>out, '%s [label="%s"];' % (self.dumpName(css), css.url or "inline CSS")
        print >>out, '%s -> %s;' % (self.dumpName(css.parent), self.dumpName(css))

        self.dumpChildren(out, css)

    def dumpScript(self, out, script):
        print >>out, '%s [label="%s"];' % (self.dumpName(script), script.url or "inline Script")
        print >>out, '%s -> %s;' % (self.dumpName(script.parent), self.dumpName(script))

        self.dumpChildren(out, script)

    def dumpPage(self, out, page):
        print >>out, '%s [label="%s"];' % (self.dumpName(page), page.url)

        self.dumpChildren(out, page)

    def save(self, filename):
        with open(filename, "w") as f:
            print >>f, "digraph WebSession {"

            self.dumpPage(f, self.root)

            print >>f, "}"
class Pipeline(object):
def __init__(self):
self.evalPage = self.getEvaluator(WebPage)
self.openPage = self.getOpener(WebPage)
self.evalCss = self.getEvaluator(WebCss)
self.openCss = self.getOpener(WebCss)
self.evalScript = self.getEvaluator(WebScript)
self.openScript = self.getOpener(WebScript)
def queue(self, task, callback):
try:
task.pipeline = self
result = task()
if result:
task.result = callback(result)
return task
except:
logging.error("failed to execute task %s", task)
logging.debug(traceback.format_exc())
def openSession(self, url, callback):
self.openPage(None, url,
lambda page: callback(WebSession(page)))
def getEvaluator(self, clazz):
def evaluator(parent, target, callback=None):
self.queue(Evaluator(clazz(parent, target, None)),
lambda result: callback(result) if callback else None)
return evaluator
def getOpener(self, clazz):
def opener(parent, url, callback=None):
if parent:
url = urlparse.urljoin(parent.url, url)
self.queue(FetchFile(url),
lambda response: self.queue(Evaluator(clazz(parent, response, url)),
lambda result: callback(result) if callback else None))
return opener
class Browser(object):
pipeline = Pipeline()
sessions = []
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
pass
@property
def version(self):
return "0.1 (Google v8 engine v%s)" % PyV8.JSEngine.version
def parseCmdLine(self):
from optparse import OptionParser
parser = OptionParser(version="%prog v" + self.version)
parser.add_option("-q", "--quiet", action="store_const",
const=logging.FATAL, dest="logLevel", default=logging.WARN)
parser.add_option("-v", "--verbose", action="store_const",
const=logging.INFO, dest="logLevel")
parser.add_option("-d", "--debug", action="store_const",
const=logging.DEBUG, dest="logLevel")
parser.add_option("--log-format", dest="logFormat",
default="%(asctime)s %(levelname)s %(message)s")
(self.opts, self.args) = parser.parse_args()
return True
def switchMode(self, mode):
self.mode = mode
def terminate(self):
self.terminated = True
def loadJSFile(self, filename):
logging.info("load javascript file %s" % filename)
with open(filename) as f:
PyV8.JSEngine().compile(f.read()).run()
def openUrl(self, url):
self.pipeline.openSession(url, lambda session: self.sessions.append(session))
def findSessions(self, pattern):
for p in pattern.split():
try:
yield self.sessions[int(p)]
except:
for s in self.sessions:
if s.root.url.find(p) >= 0:
yield s
def listSessions(self, pattern):
for session in self.findSessions(pattern) if pattern else self.sessions:
print "#%d\t%s" % (self.sessions.index(session), session.root.url)
COMMANDS = (
{
"names" : ["javascript", "js"],
"help" : "switch to the javascript mode",
"handler" : lambda self, line: self.switchMode("javascript"),
},
{
"names" : ["python", "py"],
"help" : "switch to the python mode",
"handler" : lambda self, line: self.switchMode("python"),
},
{
"names" : ["shell", "sh"],
"help" : "switch to the shell mode",
"handler" : lambda self, line: self.switchMode("shell"),
},
{
"names" : ["exit", "quit", "q"],
"help" : "exit the shell",
"handler" : lambda self, line: self.terminate(),
},
{
"names" : ["help", "?"],
"help" : "show the help screen"
},
{
"names" : ["load", "l"],
"help" : "load javascript file",
"handler" : lambda self, line: self.loadJSFile(line),
},
{
"names" : ["open", "o"],
"help" : "open a HTML page",
"handler" : lambda self, line: self.openUrl(line)
},
{
"names" : ["sessions", "s"],
"help" : "list the web sessions",
"handler" : lambda self, line: self.listSessions(line)
},
)
def runCommand(self, line):
for command in self.COMMANDS:
for name in command["names"]:
if line.startswith(name):
if "handler" in command:
try:
return command["handler"](self, line[len(name):].strip())
except:
traceback.print_exc()
break
else:
break
for command in self.COMMANDS:
print "%s %s" % (", ".join(command["names"]).rjust(15), command["help"])
def runJavascript(self, source):
try:
result = PyV8.JSEngine().compile(source).run()
if result:
print str(result)
except:
traceback.print_exc()
def runShellCommand(self, line):
try:
os.system(line)
except:
traceback.print_exc()
MODES = {
"python" : {
"abbr" : "py"
},
"javascript" : {
"abbr" : "js"
},
"shell" : {
"abbr" : "sh"
},
}
def runShell(self):
import code
logging.basicConfig(level=self.opts.logLevel,
format=self.opts.logFormat)
logging.debug("settings: %s", self.opts)
self.mode = "python"
self.console = code.InteractiveConsole({"sessions" : self.sessions})
self.terminated = False
while not self.terminated:
line = self.console.raw_input(self.MODES[self.mode]["abbr"] + ">").strip()
if len(line) == 0: continue
if line[0] == '`':
self.runCommand(line[1:])
elif line[0] == '?':
self.runJavascript(line[1:])
elif line[0] == '!':
self.runShellCommand(line[1:])
else:
if self.mode == "python":
self.console.runsource(line)
elif self.mode == "javascript":
self.runJavascript(line)
elif self.mode == "shell":
self.runShellCommand(line)
else:
print "unknown mode - " + self.mode
if __name__ == "__main__":
with Browser() as browser:
if browser.parseCmdLine():
browser.runShell()
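The `COMMANDS` prefix dispatch in `runCommand` above can be exercised in isolation. This is a sketch with stand-in handlers, not the `Browser` class itself; as in the original, longer aliases must come before their one-letter forms so the longest prefix wins:

```python
# Standalone sketch of the runCommand prefix dispatch.
# The handlers here are stand-ins, not the real Browser methods.
COMMANDS = (
    {"names": ["open", "o"], "handler": lambda line: ("open", line)},
    {"names": ["sessions", "s"], "handler": lambda line: ("sessions", line)},
)

def run_command(line):
    # The first command whose name is a prefix of the input wins;
    # the matched name is stripped before calling the handler.
    for command in COMMANDS:
        for name in command["names"]:
            if line.startswith(name):
                return command["handler"](line[len(name):].strip())
    return None
```

An unmatched line returns None, which in the original triggers the help listing.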
# ---- pypeira/io/fits.py (repo: WielderOfMjoelnir/pypeira, license: MIT) ----
from __future__ import division
import fitsio
"""
A FITS file is comprised of segments called Header/Data Units (HDUs), where the first
HDU is called the 'Primary HDU', or 'Primary Array'. The primary data array can contain
a 1-999 dimensional array of 1, 2 or 4 byte integers or 4 or 8 byte floating point numbers
using IEEE representation. A typical primary array could contain a 1-D spectrum, a 2-D image,
or a 3-D data cube (this is what's coming from the SSC).
Any number of additional HDUs may follow the primary array. These additional HDUs are
referred to as FITS 'extensions'. Three types of standard extensions are currently defined:
* Image Extensions
* Contain a 0-999 dimensional array of pixels, similar to primary array
* Header begins with XTENSION = 'IMAGE'
* ASCII Tables Extensions
* Store tabular information with all numeric information stored in ASCII formats
While ASCII tables are generally less efficient than binary tables, they can be
made relatively human readable and can store numeric information with essentially
arbitrary size and accuracy (e.g., 16 byte reals).
* Header begins with XTENSION = 'TABLE'
* Binary Table Extensions
* Store tabular information in a binary representation. Each cell in the table
can be an array but the dimensionality of the array must be constant within a
column. The strict standard supports only one-dimensional arrays, but a convention
to support multi-dimensional arrays is widely accepted.
* Header begins with XTENSION = 'BINTABLE'
In addition to the structures above, there is one other type of FITS HDU called
"Random Groups" that is almost exclusively used for applications in radio interferometry.
The random groups format should not be used for other types of applications.
.. [REF] fits.gsfc.nasa.gov/fits_primer.html
"""
def read_headers(path, *args, **kwargs):
# Reads the headers from the FITS file
header = fitsio.read_header(path, *args, **kwargs)
return header
def read_image(path, *args, **kwargs):
# Reads the image data from the FITS file
data = fitsio.read(path, *args, **kwargs)
return data
def read_fits(path, headers_only=False, image_only=False, *args, **kwargs):
"""
Reader function for the FITS files. Takes advantage of the fitsio
reader function.
Parameters
----------
path: str
Path to the FITS file you want to read
headers_only: bool, optional
Set to True if you only want to read the headers of the file. If True, the
returned data will only be the headers of the file read. Default is False.
image_only: bool, optional
Set to True if you only want to read the image data of the file. If True, the
returned data will be a numpy array corresponding to the image data of the file read.
Default is False.
*args: optional
Contains all arguments that will be passed onto the fitsio reader. This reader will
be fitsio.read_headers() or fitsio.FITS() depending on if 'headers_only' is True or False.
**kwargs: optional
Contains all keyword arguments that will be passed to the fitsio reader.
Returns
-------
hdr, image: FITSHDR object, np.array
If neither of the "only" keywords is True, a (FITSHDR, np.array) pair is returned.
Note that a FITSHDR can be accessed by indexing like a normal dictionary.
See fitsio.fitslib.FITSHDR for the implementation of FITSHDR.
FITSHDR object
If 'headers_only' is True, only the file's headers are returned, as a FITSHDR object.
numpy.array
If 'image_only' is True, only the image data is returned, as a numpy.array.
"""
if headers_only:
hdr = read_headers(path, *args, **kwargs)
return hdr
elif image_only:
image = read_image(path, *args, **kwargs)
return image
else:
hdr = read_headers(path, *args, **kwargs)
image = read_image(path, *args, **kwargs)
return hdr, image
# ---- coding202-parsing-json/get-ap-json-1.py (repo: firodj/ciscodevnet-coding-skills-sample-code, license: Apache-2.0) ----
import requests
url = 'https://64.103.26.61/api/contextaware/v1/maps/info/DevNetCampus/DevNetBuilding/DevNetZone'
headers = {'Authorization': 'Basic bGVhcm5pbmc6bGVhcm5pbmc=='}
response = requests.get(url, headers=headers, verify=False)
responseString = response.text
print(responseString)
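The hard-coded `Authorization` value above is just `learning:learning` in HTTP Basic form (note the literal carries one `=` of padding too many). The header can be rebuilt with the standard library instead of pasting the encoded string:

```python
import base64

def basic_auth_header(user, password):
    # RFC 7617 Basic auth: "Basic " + base64(user + ":" + password)
    token = base64.b64encode(("%s:%s" % (user, password)).encode("utf-8")).decode("ascii")
    return {"Authorization": "Basic " + token}

# basic_auth_header("learning", "learning")
# → {'Authorization': 'Basic bGVhcm5pbmc6bGVhcm5pbmc='}
```

(`requests` can also do this itself via `auth=(user, password)`, which sends the same header.)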
# ---- lldb/test/API/lang/swift/optimized_code/bound_generic_enum/TestSwiftOptimizedBoundGenericEnum.py (repo: LaudateCorpus1/llvm-project, license: Apache-2.0) ----
import lldb
from lldbsuite.test.decorators import *
import lldbsuite.test.lldbtest as lldbtest
import lldbsuite.test.lldbutil as lldbutil
import os
import unittest2
class TestSwiftOptimizedBoundGenericEnum(lldbtest.TestBase):
mydir = lldbtest.TestBase.compute_mydir(__file__)
@swiftTest
def test(self):
"""Test the bound generic enum types in "optimized" code."""
self.build()
target, process, thread, bkpt = lldbutil.run_to_source_breakpoint(self,
'break one', lldb.SBFileSpec('main.swift'))
bkpt_two = target.BreakpointCreateBySourceRegex(
'break two', lldb.SBFileSpec('main.swift'))
self.assertGreater(bkpt_two.GetNumLocations(), 0)
var_self = self.frame().FindVariable("self")
# FIXME, this fails with a data extractor error.
lldbutil.check_variable(self, var_self, False, value=None)
lldbutil.continue_to_breakpoint(process, bkpt_two)
var_self = self.frame().FindVariable("self")
lldbutil.check_variable(self, var_self, True, value="success")
# ---- bc_gym_planning_env/envs/base/action.py (repo: ghostFaceKillah/bc-gym-planning-env, license: MIT) ----
""" Code for wrapping the motion primitive action in an object. """
from __future__ import division
from __future__ import absolute_import
import attr
import numpy as np
from bc_gym_planning_env.utilities.serialize import Serializable
@attr.s(cmp=False)
class Action(Serializable):
""" Object representing an 'action' - a motion primitive to execute in the environment """
VERSION = 1
command = attr.ib(type=np.ndarray)
@classmethod
def from_cmds(cls, wanted_linear_velocity_of_baselink, wanted_front_wheel_angle):
return cls(command=np.array([wanted_linear_velocity_of_baselink, wanted_front_wheel_angle]))
def __eq__(self, other):
if not isinstance(other, Action):
return False
if (self.command != other.command).any():
return False
return True
def __ne__(self, other):
return not self.__eq__(other)
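The `__eq__` above leans on numpy's elementwise `!=` followed by `.any()`. The same semantics can be sketched without the numpy/attr dependencies — a hypothetical helper with plain sequences standing in for `np.ndarray`:

```python
def commands_equal(a, b):
    # Mirrors `not (a != b).any()`: commands differ if their lengths
    # differ or if any element differs.
    if len(a) != len(b):
        return False
    return not any(x != y for x, y in zip(a, b))
```

This is why the class sets `@attr.s(cmp=False)`: attrs-generated comparison would use `==` on the arrays directly, which yields an elementwise array rather than a boolean.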
# ---- social_webpy/app.py (repo: python-social-auth/social-app-webpy, license: BSD-3-Clause) ----
import web
from social_core.actions import do_auth, do_complete, do_disconnect
from .utils import psa, load_strategy
urls = (
r'/login/(?P<backend>[^/]+)/?', 'auth',
r'/complete/(?P<backend>[^/]+)/?', 'complete',
r'/disconnect/(?P<backend>[^/]+)/?', 'disconnect',
r'/disconnect/(?P<backend>[^/]+)/(?P<association_id>\d+)/?', 'disconnect',
)
class BaseViewClass(object):
def __init__(self, *args, **kwargs):
self.session = web.web_session
method = web.ctx.method == 'POST' and 'post' or 'get'
self.strategy = load_strategy()
self.data = web.input(_method=method)
self.backend = None
self._user = None
super(BaseViewClass, self).__init__(*args, **kwargs)
def get_current_user(self):
if not hasattr(self, '_user'):
if self.session.get('logged_in'):
self._user = self.strategy.get_user(
self.session.get('user_id')
)
else:
self._user = None
return self._user
def login_user(self, user):
self.session['logged_in'] = True
self.session['user_id'] = user.id
class auth(BaseViewClass):
def GET(self, backend):
return self._auth(backend)
def POST(self, backend):
return self._auth(backend)
@psa('/complete/%(backend)s/')
def _auth(self, backend):
return do_auth(self.backend)
class complete(BaseViewClass):
def GET(self, backend, *args, **kwargs):
return self._complete(backend, *args, **kwargs)
def POST(self, backend, *args, **kwargs):
return self._complete(backend, *args, **kwargs)
@psa('/complete/%(backend)s/')
def _complete(self, backend, *args, **kwargs):
return do_complete(
self.backend,
login=lambda backend, user, social_user: self.login_user(user),
user=self.get_current_user(), *args, **kwargs
)
class disconnect(BaseViewClass):
@psa()
def POST(self, backend, association_id=None):
return do_disconnect(self.backend, self.get_current_user(),
association_id)
app_social = web.application(urls, locals())
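The `urls` tuple maps regex patterns to handler class names; the capture groups can be checked with `re` directly. A sketch of the matching only — web.py anchors the patterns itself, so explicit `$` anchors are added here for standalone use:

```python
import re

LOGIN = re.compile(r'/login/(?P<backend>[^/]+)/?$')
DISCONNECT = re.compile(r'/disconnect/(?P<backend>[^/]+)/(?P<association_id>\d+)/?$')

login = LOGIN.match('/login/google-oauth2/')           # backend = 'google-oauth2'
disconnect = DISCONNECT.match('/disconnect/twitter/42')  # association_id = '42'
```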
# ---- stellar/config.py (repo: gomyar/stellar, license: MIT) ----
import os
import logging
import yaml
from schema import Use, Schema, SchemaError, Optional
class InvalidConfig(Exception):
pass
class MissingConfig(Exception):
pass
default_config = {
'logging': 30,
'migrate_from_0_3_2': True
}
schema = Schema({
'stellar_url': Use(str),
'url': Use(str),
'project_name': Use(str),
'tracked_databases': [Use(str)],
Optional('logging'): int,
Optional('migrate_from_0_3_2'): bool
})
def get_config_path():
current_directory = os.getcwd()
while True:
try:
with open(
os.path.join(current_directory, 'stellar.yaml'),
'rb'
) as fp:
return os.path.join(current_directory, 'stellar.yaml')
except IOError:
pass
current_directory = os.path.abspath(
os.path.join(current_directory, '..')
)
if current_directory == '/':
return None
def load_config():
config = {}
stellar_config_env = os.getenv('STELLAR_CONFIG')
if stellar_config_env:
if os.path.exists(stellar_config_env):
config = yaml.safe_load(open(stellar_config_env))
else:
current_directory = os.getcwd()
while True:
try:
with open(
os.path.join(current_directory, 'stellar.yaml'),
'rb'
) as fp:
config = yaml.safe_load(fp)
break
except IOError:
pass
if current_directory == '/':
break
current_directory = os.path.abspath(
os.path.join(current_directory, '..')
)
if not config:
raise MissingConfig()
for k, v in default_config.items():
if k not in config:
config[k] = v
try:
return schema.validate(config)
except SchemaError as e:
raise InvalidConfig(e)
def save_config(config):
logging.getLogger(__name__).debug('save_config()')
with open(get_config_path(), "w") as fp:
yaml.dump(config, fp)
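Both `get_config_path` and `load_config` above walk up from the current directory until `stellar.yaml` appears or the filesystem root is reached. That shared upward search can be sketched portably — `os.path.isfile` instead of a throwaway `open`, and the root test phrased without assuming `'/'`:

```python
import os

def find_upwards(filename, start):
    # Return the path to `filename` in `start` or the nearest ancestor
    # directory, or None if it is nowhere on the way up.
    current = os.path.abspath(start)
    while True:
        candidate = os.path.join(current, filename)
        if os.path.isfile(candidate):
            return candidate
        parent = os.path.dirname(current)
        if parent == current:  # reached the filesystem root
            return None
        current = parent
```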
# ---- movement_validation/features/feature_processing_options.py (repo: eulerkaku/movement_validation, license: MIT) ----
# -*- coding: utf-8 -*-
"""
This module will hold a class that will be referenced when processing features.
I'd like to move things from "config" into here ...
- @JimHokanson
"""
from __future__ import division
from .. import utils
#Can't do this, would be circular
#from .worm_features import WormFeatures
class FeatureProcessingOptions(object):
def __init__(self,fps):
#The idea with this attribute is that functions will check if they are
#in this list. If they are then they can display some sort of popup that
#clarifies how they are working.
#
#No functions actually use this yet. It is just a place holder.
#
#An example of this might be:
# 'morphology.length'
self.functions_to_explain = []
#This indicates that, where possible, code should attempt to replicate
#the errors and inconsistencies present in the way that the Schafer lab
#computed features. This can be useful for ensuring that we are able to
#compute features in the same way that they did.
#
#NOTE: There are a few instances where this is not supported such that
#the behavior will not match even if this value is set to True.
self.mimic_old_behaviour = True
self.locomotion = LocomotionOptions(fps)
self.posture = PostureOptions(fps)
#TODO: Implement this
#This is not yet implemented. The idea is to support not
#computing certain features. We might also allow disabling certain
#groups of feature.
self.features_to_ignore = []
def should_compute_feature(self,feature_name,worm_features):
"""
"""
#TODO: Implement this ...
return True
def disable_contour_features(self):
"""
Contour features:
"""
#see self.features_to_ignore
contour_dependent_features = [\
'morphology.width',
'morphology.area',
'morphology.area_per_length',
'morphology.width_per_length',
'posture.eccentricity']
self.features_to_ignore = list(set(self.features_to_ignore + contour_dependent_features))
def disable_feature_sections(self,section_names):
"""
This can be used to disable processing of features by section (see the
options available below)
Modifies 'features_to_ignore'
Parameters
----------
section_names : list[str]
Options are:
- morphology
- locomotion
- posture
- path
Examples
--------
fpo.disable_feature_sections(['morphology'])
fpo.disable_feature_sections(['morphology','locomotion'])
"""
new_ignores = []
f = IgnorableFeatures()
for section in section_names:
new_ignores.extend(getattr(f,section))
self.features_to_ignore = list(set(self.features_to_ignore + new_ignores))
def __repr__(self):
return utils.print_object(self)
class PostureOptions(object):
def __init__(self,fps):
self.n_eccentricity_grid_points = 50 # Grid size for estimating eccentricity, this is the
# max # of points that will fill the wide dimension.
# (scalar) The # of points to place in the long dimension. More points
# gives a more accurate estimate of the ellipse but increases
# the calculation time.
#
#Used by: posture_features.get_eccentricity_and_orientation
self.coiling_frame_threshold = round(1/5 * fps) #This is the # of
#frames that an epoch must exceed in order for it to be truly
#considered a coiling event
#Current value translation: 1/5 of a second
#
#Used by: posture_features.get_worm_coils
self.n_eigenworms_use = 6
#The maximum # of available values is 7 although technically there
#are generally 48 eigenvectors available, we've just only precomputed
#7 to use for the projections
#
#Used by:
self.kink_length_threshold_pct = 1/12 #This is the fraction of the worm
#length that a bend must be in order to be counted. The # of worm
#points (this_value*worm_length_in_samples) is rounded to an integer
#value. The threshold value is inclusive.
#
#Use: posture_features.get_worm_kinks
self.wavelength = PostureWavelengthOptions()
class PostureWavelengthOptions(object):
"""
These options are all used in:
get_amplitude_and_wavelength
"""
def __init__(self):
self.n_points_fft = 512
self.min_dist_peaks = 5 #This value is in samples, not a
#spatial frequency. The spatial frequency sampling also varies by
#the worm length, so this resolution varies on a frame by frame basis.
self.pct_max_cutoff = 0.5
self.pct_cutoff = 2
class LocomotionOptions(object):
def __init__(self,fps):
#locomotion_features.LocomotionVelocity
#-------------------------------------
#Units: seconds
#NOTE: We could get the defaults from the class ...
self.velocity_tip_diff = 0.25
self.velocity_body_diff = 0.5
#locomotion_features.MotionEvents
#--------------------------------------
# Interpolate only this length of NaN run; anything longer is
# probably an omega turn.
# If set to "None", interpolate all lengths (i.e. infinity)
#TODO - Inf would be a better specification
self.motion_codes_longest_nan_run_to_interpolate = None
# These are a percentage of the worm's length
self.motion_codes_speed_threshold_pct = 0.05
self.motion_codes_distance_threshold_pct = 0.05
self.motion_codes_pause_threshold_pct = 0.025
# These are times (s)
self.motion_codes_min_frames_threshold = 0.5
self.motion_codes_max_interframes_threshold = 0.25
#locomotion_bends.LocomotionCrawlingBends
self.crawling_bends = LocomotionCrawlingBends(fps)
self.foraging_bends = LocomotionForagingBends(fps)
self.locomotion_turns = LocomotionTurns(fps)
def __repr__(self):
return utils.print_object(self)
class LocomotionTurns(object):
def __init__(self,fps):
self.max_interpolation_gap_allowed = 9 #frames
self.min_omega_event_length = round(fps/4)
#TODO: There is still a lot to put into here
class LocomotionForagingBends(object):
def __init__(self,fps):
#NOTE: The nose & neck can also be thought of as the head tip
#and head neck
self.min_nose_window_samples = round(0.1 * fps)
self.max_samples_interp_nose = 2*self.min_nose_window_samples - 1
class LocomotionCrawlingBends(object):
def __init__(self,fps):
self.fft_n_samples = 2 ** 14
self.bends_partitions = \
{'head': (5, 10),
'midbody': (22, 27),
'tail': (39, 44)}
self.peak_energy_threshold = 0.5
# max_amplitude_pct_bandwidth - when determining the bandwidth,
# the minimums that are found can't exceed this percentage of the maximum.
# Doing so invalidates the result.
self.max_amplitude_pct_bandwidth = 0.5
self.min_time_for_bend = 0.5
self.max_time_for_bend = 15
#TODO: What are the units on these things ????
#This is a spatial frequency
self.min_frequency = 0.25 * self.max_time_for_bend
#What is the technical max???? 0.5 fps????
self.max_frequency = 0.25 * fps
#This is a processing optimization.
#How far into the maximum peaks should we look ...
#If this value is low, an expensive computation could go faster. If it
#is too low, then we end up rerunning the calculation the whole dataset
#and we end up losing time
self.initial_max_I_pct = 0.5
def __repr__(self):
return utils.print_object(self)
class IgnorableFeatures:
"""
I'm not thrilled with where this is placed, but placing it in WormFeatures
creates a circular dependency
"""
def __init__(self):
temp = ['length','width','area','area_per_length','width_per_length']
self.morphology = ['morphology.' + s for s in temp]
#None of these are implemented ...
temp = ['velocity','motion_events','motion_mode','crawling_bends','foraging_bends','turns']
self.locomotion = ['locomotion.' + s for s in temp]
#locomotion
#crawling_bends: Done
#turns: Done
temp = ['bends','eccentricity', 'amplitude_and_wavelength','kinks','coils','directions','eigen_projection']
self.posture = ['posture.' + s for s in temp]
#None of these are implemented ...
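Many options above are frame counts derived from the video frame rate. For a hypothetical 30 fps recording (`fps` here is an assumption for illustration, not a value fixed by the module), the expressions in the constructors evaluate as:

```python
fps = 30  # hypothetical frame rate, chosen only for this example

coiling_frame_threshold = round(1 / 5 * fps)   # ~0.2 s worth of frames
min_omega_event_length = round(fps / 4)        # note: Python 3 rounds 7.5 up to 8
min_nose_window_samples = round(0.1 * fps)
max_samples_interp_nose = 2 * min_nose_window_samples - 1
```

The half-to-even rounding of `round()` in Python 3 differs from Python 2 for exact halves, which matters for thresholds like `fps / 4` at even frame rates.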
# ---- problems/139.Word_Break/AC_dp_n2.py (repo: subramp-prep/leetcode, license: Unlicense) ----
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Author: illuz <iilluzen[at]gmail.com>
# File: AC_dp_n2.py
# Create Date: 2015-04-21 10:21:18
# Usage: AC_dp_n2.py
# Descripton:
class Solution:
# @param s, a string
# @param dict, a set of string
# @return a boolean
def wordBreak(self, s, dict):
n = len(s)
dp = [False] * (n + 1)
dp[0] = True
for i in range(n):
if dp[i]:
for word in dict:
j = len(word)
if i + j <= n and s[i: i + j] == word:
dp[i + j] = True
return dp[n]
# debug
s = Solution()
print s.wordBreak('a', ['a'])
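The same dynamic programme, rewritten standalone for Python 3 (the file above targets Python 2), with a slightly larger sanity check than the single-character debug case:

```python
def word_break(s, words):
    n = len(s)
    dp = [False] * (n + 1)   # dp[i]: True if s[:i] can be segmented
    dp[0] = True
    for i in range(n):
        if dp[i]:
            for word in words:
                j = len(word)
                if i + j <= n and s[i:i + j] == word:
                    dp[i + j] = True
    return dp[n]

# word_break("leetcode", ["leet", "code"])                        → True
# word_break("catsandog", ["cats", "dog", "sand", "and", "cat"])  → False
```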
# ---- 6.爬取豆瓣排行榜电影数据(含GUI界面版)/main.py (repo: shengqiangzhang/examples-of-web-crawlers, license: MIT) ----
# -*- coding:utf-8 -*-
from uiObject import uiObject
# main entry point
if __name__ == '__main__':
ui = uiObject()
ui.ui_process() | 13.2 | 29 | 0.621212 | 16 | 132 | 4.5625 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009709 | 0.219697 | 132 | 10 | 30 | 13.2 | 0.699029 | 0.204545 | 0 | 0 | 0 | 0 | 0.07767 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ed4ce61eb4af04f3704ae96a5870d43583535a63 | 524 | py | Python | photos/models.py | eude313/vault | d3e24cf01d15de94244b7d2e80316355a0827f74 | [
"MIT"
] | null | null | null | photos/models.py | eude313/vault | d3e24cf01d15de94244b7d2e80316355a0827f74 | [
"MIT"
] | null | null | null | photos/models.py | eude313/vault | d3e24cf01d15de94244b7d2e80316355a0827f74 | [
"MIT"
] | null | null | null | from django.db import models
from cloudinary.models import CloudinaryField
# Create your models here.
class Category(models.Model):
name = models.CharField( max_length=200, null=False, blank=False )
def __str__(self):
return self.name
class Photo(models.Model):
category = models.ForeignKey( Category, on_delete=models.SET_NULL, null=True, blank=True )
image = CloudinaryField('image', default='')
description = models.TextField()
def __str__(self):
return self.description | 29.111111 | 94 | 0.71374 | 64 | 524 | 5.671875 | 0.546875 | 0.077135 | 0.055096 | 0.088154 | 0.110193 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007026 | 0.185115 | 524 | 18 | 95 | 29.111111 | 0.843091 | 0.045802 | 0 | 0.166667 | 0 | 0 | 0.01002 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0.166667 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
ed4da4c3e62ea1a20080eade8fbb9743d55cdd88 | 3,558 | py | Python | doc/examples.py | Enerccio/mahjong | 903505a7886c31845dfa6b3f54c936a4feb29e6e | [
"MIT"
] | 254 | 2017-09-20T15:02:20.000Z | 2022-03-28T11:33:28.000Z | doc/examples.py | Enerccio/mahjong | 903505a7886c31845dfa6b3f54c936a4feb29e6e | [
"MIT"
] | 39 | 2017-09-23T14:28:36.000Z | 2022-01-06T08:41:57.000Z | doc/examples.py | Enerccio/mahjong | 903505a7886c31845dfa6b3f54c936a4feb29e6e | [
"MIT"
] | 38 | 2017-10-19T09:06:53.000Z | 2022-03-15T05:08:22.000Z | from mahjong.hand_calculating.hand import HandCalculator
from mahjong.meld import Meld
from mahjong.hand_calculating.hand_config import HandConfig, OptionalRules
from mahjong.shanten import Shanten
from mahjong.tile import TilesConverter
calculator = HandCalculator()
# useful helper
def print_hand_result(hand_result):
print(hand_result.han, hand_result.fu)
print(hand_result.cost['main'])
print(hand_result.yaku)
for fu_item in hand_result.fu_details:
print(fu_item)
print('')
####################################################################
# Tanyao hand by ron #
####################################################################
# we had to use all 14 tiles in that array
tiles = TilesConverter.string_to_136_array(man='22444', pin='333567', sou='444')
win_tile = TilesConverter.string_to_136_array(sou='4')[0]
result = calculator.estimate_hand_value(tiles, win_tile)
print_hand_result(result)
####################################################################
# Tanyao hand by tsumo #
####################################################################
result = calculator.estimate_hand_value(tiles, win_tile, config=HandConfig(is_tsumo=True))
print_hand_result(result)
####################################################################
# Add open set to hand #
####################################################################
melds = [Meld(meld_type=Meld.PON, tiles=TilesConverter.string_to_136_array(man='444'))]
result = calculator.estimate_hand_value(tiles, win_tile, melds=melds, config=HandConfig(options=OptionalRules(has_open_tanyao=True)))
print_hand_result(result)
####################################################################
# Shanten calculation #
####################################################################
shanten = Shanten()
tiles = TilesConverter.string_to_34_array(man='13569', pin='123459', sou='443')
result = shanten.calculate_shanten(tiles)
print(result)
####################################################################
# Kazoe as a sanbaiman #
####################################################################
tiles = TilesConverter.string_to_136_array(man='22244466677788')
win_tile = TilesConverter.string_to_136_array(man='7')[0]
melds = [
Meld(Meld.KAN, TilesConverter.string_to_136_array(man='2222'), False)
]
dora_indicators = [
TilesConverter.string_to_136_array(man='1')[0],
TilesConverter.string_to_136_array(man='1')[0],
TilesConverter.string_to_136_array(man='1')[0],
TilesConverter.string_to_136_array(man='1')[0],
]
config = HandConfig(is_riichi=True, options=OptionalRules(kazoe=HandConfig.KAZOE_SANBAIMAN))
result = calculator.estimate_hand_value(tiles, win_tile, melds, dora_indicators, config)
print_hand_result(result)
####################################################################
# Change the cost of yaku #
####################################################################
config = HandConfig(is_renhou=True)
# renhou as an yakuman - old style
config.yaku.renhou.han_closed = 13
tiles = TilesConverter.string_to_136_array(man='22444', pin='333567', sou='444')
win_tile = TilesConverter.string_to_136_array(sou='4')[0]
result = calculator.estimate_hand_value(tiles, win_tile, config=config)
print_hand_result(result)
| 35.227723 | 133 | 0.537943 | 355 | 3,558 | 5.132394 | 0.261972 | 0.1427 | 0.15697 | 0.164654 | 0.493963 | 0.403952 | 0.384193 | 0.322173 | 0.322173 | 0.236004 | 0 | 0.039509 | 0.153457 | 3,558 | 100 | 134 | 35.58 | 0.565405 | 0.13575 | 0 | 0.282609 | 0 | 0 | 0.032992 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021739 | false | 0 | 0.108696 | 0 | 0.130435 | 0.26087 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ed4edc151ca26cac5de8e4d708a84551964ac057 | 14,366 | py | Python | sdk/python/pulumi_oci/database/get_external_non_container_database.py | EladGabay/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 5 | 2021-08-17T11:14:46.000Z | 2021-12-31T02:07:03.000Z | sdk/python/pulumi_oci/database/get_external_non_container_database.py | pulumi-oci/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2021-09-06T11:21:29.000Z | 2021-09-06T11:21:29.000Z | sdk/python/pulumi_oci/database/get_external_non_container_database.py | pulumi-oci/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2021-08-24T23:31:30.000Z | 2022-01-02T19:26:54.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
__all__ = [
'GetExternalNonContainerDatabaseResult',
'AwaitableGetExternalNonContainerDatabaseResult',
'get_external_non_container_database',
]
@pulumi.output_type
class GetExternalNonContainerDatabaseResult:
"""
A collection of values returned by getExternalNonContainerDatabase.
"""
def __init__(__self__, character_set=None, compartment_id=None, database_configuration=None, database_edition=None, database_management_config=None, database_version=None, db_id=None, db_packs=None, db_unique_name=None, defined_tags=None, display_name=None, external_non_container_database_id=None, freeform_tags=None, id=None, lifecycle_details=None, ncharacter_set=None, operations_insights_config=None, state=None, time_created=None, time_zone=None):
if character_set and not isinstance(character_set, str):
raise TypeError("Expected argument 'character_set' to be a str")
pulumi.set(__self__, "character_set", character_set)
if compartment_id and not isinstance(compartment_id, str):
raise TypeError("Expected argument 'compartment_id' to be a str")
pulumi.set(__self__, "compartment_id", compartment_id)
if database_configuration and not isinstance(database_configuration, str):
raise TypeError("Expected argument 'database_configuration' to be a str")
pulumi.set(__self__, "database_configuration", database_configuration)
if database_edition and not isinstance(database_edition, str):
raise TypeError("Expected argument 'database_edition' to be a str")
pulumi.set(__self__, "database_edition", database_edition)
if database_management_config and not isinstance(database_management_config, dict):
raise TypeError("Expected argument 'database_management_config' to be a dict")
pulumi.set(__self__, "database_management_config", database_management_config)
if database_version and not isinstance(database_version, str):
raise TypeError("Expected argument 'database_version' to be a str")
pulumi.set(__self__, "database_version", database_version)
if db_id and not isinstance(db_id, str):
raise TypeError("Expected argument 'db_id' to be a str")
pulumi.set(__self__, "db_id", db_id)
if db_packs and not isinstance(db_packs, str):
raise TypeError("Expected argument 'db_packs' to be a str")
pulumi.set(__self__, "db_packs", db_packs)
if db_unique_name and not isinstance(db_unique_name, str):
raise TypeError("Expected argument 'db_unique_name' to be a str")
pulumi.set(__self__, "db_unique_name", db_unique_name)
if defined_tags and not isinstance(defined_tags, dict):
raise TypeError("Expected argument 'defined_tags' to be a dict")
pulumi.set(__self__, "defined_tags", defined_tags)
if display_name and not isinstance(display_name, str):
raise TypeError("Expected argument 'display_name' to be a str")
pulumi.set(__self__, "display_name", display_name)
if external_non_container_database_id and not isinstance(external_non_container_database_id, str):
raise TypeError("Expected argument 'external_non_container_database_id' to be a str")
pulumi.set(__self__, "external_non_container_database_id", external_non_container_database_id)
if freeform_tags and not isinstance(freeform_tags, dict):
raise TypeError("Expected argument 'freeform_tags' to be a dict")
pulumi.set(__self__, "freeform_tags", freeform_tags)
if id and not isinstance(id, str):
raise TypeError("Expected argument 'id' to be a str")
pulumi.set(__self__, "id", id)
if lifecycle_details and not isinstance(lifecycle_details, str):
raise TypeError("Expected argument 'lifecycle_details' to be a str")
pulumi.set(__self__, "lifecycle_details", lifecycle_details)
if ncharacter_set and not isinstance(ncharacter_set, str):
raise TypeError("Expected argument 'ncharacter_set' to be a str")
pulumi.set(__self__, "ncharacter_set", ncharacter_set)
if operations_insights_config and not isinstance(operations_insights_config, dict):
raise TypeError("Expected argument 'operations_insights_config' to be a dict")
pulumi.set(__self__, "operations_insights_config", operations_insights_config)
if state and not isinstance(state, str):
raise TypeError("Expected argument 'state' to be a str")
pulumi.set(__self__, "state", state)
if time_created and not isinstance(time_created, str):
raise TypeError("Expected argument 'time_created' to be a str")
pulumi.set(__self__, "time_created", time_created)
if time_zone and not isinstance(time_zone, str):
raise TypeError("Expected argument 'time_zone' to be a str")
pulumi.set(__self__, "time_zone", time_zone)
@property
@pulumi.getter(name="characterSet")
def character_set(self) -> str:
"""
The character set of the external database.
"""
return pulumi.get(self, "character_set")
@property
@pulumi.getter(name="compartmentId")
def compartment_id(self) -> str:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment.
"""
return pulumi.get(self, "compartment_id")
@property
@pulumi.getter(name="databaseConfiguration")
def database_configuration(self) -> str:
"""
The Oracle Database configuration
"""
return pulumi.get(self, "database_configuration")
@property
@pulumi.getter(name="databaseEdition")
def database_edition(self) -> str:
"""
The Oracle Database edition.
"""
return pulumi.get(self, "database_edition")
@property
@pulumi.getter(name="databaseManagementConfig")
def database_management_config(self) -> 'outputs.GetExternalNonContainerDatabaseDatabaseManagementConfigResult':
"""
The configuration of the Database Management service.
"""
return pulumi.get(self, "database_management_config")
@property
@pulumi.getter(name="databaseVersion")
def database_version(self) -> str:
"""
The Oracle Database version.
"""
return pulumi.get(self, "database_version")
@property
@pulumi.getter(name="dbId")
def db_id(self) -> str:
"""
The Oracle Database ID, which identifies an Oracle Database located outside of Oracle Cloud.
"""
return pulumi.get(self, "db_id")
@property
@pulumi.getter(name="dbPacks")
def db_packs(self) -> str:
"""
The database packs licensed for the external Oracle Database.
"""
return pulumi.get(self, "db_packs")
@property
@pulumi.getter(name="dbUniqueName")
def db_unique_name(self) -> str:
"""
The `DB_UNIQUE_NAME` of the external database.
"""
return pulumi.get(self, "db_unique_name")
@property
@pulumi.getter(name="definedTags")
def defined_tags(self) -> Mapping[str, Any]:
"""
Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm).
"""
return pulumi.get(self, "defined_tags")
@property
@pulumi.getter(name="displayName")
def display_name(self) -> str:
"""
The user-friendly name for the external database. The name does not have to be unique.
"""
return pulumi.get(self, "display_name")
@property
@pulumi.getter(name="externalNonContainerDatabaseId")
def external_non_container_database_id(self) -> str:
return pulumi.get(self, "external_non_container_database_id")
@property
@pulumi.getter(name="freeformTags")
def freeform_tags(self) -> Mapping[str, Any]:
"""
Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{"Department": "Finance"}`
"""
return pulumi.get(self, "freeform_tags")
@property
@pulumi.getter
def id(self) -> str:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the Oracle Cloud Infrastructure external database resource.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="lifecycleDetails")
def lifecycle_details(self) -> str:
"""
Additional information about the current lifecycle state.
"""
return pulumi.get(self, "lifecycle_details")
@property
@pulumi.getter(name="ncharacterSet")
def ncharacter_set(self) -> str:
"""
The national character of the external database.
"""
return pulumi.get(self, "ncharacter_set")
@property
@pulumi.getter(name="operationsInsightsConfig")
def operations_insights_config(self) -> 'outputs.GetExternalNonContainerDatabaseOperationsInsightsConfigResult':
"""
The configuration of Operations Insights for the external database
"""
return pulumi.get(self, "operations_insights_config")
@property
@pulumi.getter
def state(self) -> str:
"""
The current state of the Oracle Cloud Infrastructure external database resource.
"""
return pulumi.get(self, "state")
@property
@pulumi.getter(name="timeCreated")
def time_created(self) -> str:
"""
The date and time the database was created.
"""
return pulumi.get(self, "time_created")
@property
@pulumi.getter(name="timeZone")
def time_zone(self) -> str:
"""
The time zone of the external database. It is a time zone offset (a character type in the format '[+|-]TZH:TZM') or a time zone region name, depending on how the time zone value was specified when the database was created / last altered.
"""
return pulumi.get(self, "time_zone")
class AwaitableGetExternalNonContainerDatabaseResult(GetExternalNonContainerDatabaseResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetExternalNonContainerDatabaseResult(
character_set=self.character_set,
compartment_id=self.compartment_id,
database_configuration=self.database_configuration,
database_edition=self.database_edition,
database_management_config=self.database_management_config,
database_version=self.database_version,
db_id=self.db_id,
db_packs=self.db_packs,
db_unique_name=self.db_unique_name,
defined_tags=self.defined_tags,
display_name=self.display_name,
external_non_container_database_id=self.external_non_container_database_id,
freeform_tags=self.freeform_tags,
id=self.id,
lifecycle_details=self.lifecycle_details,
ncharacter_set=self.ncharacter_set,
operations_insights_config=self.operations_insights_config,
state=self.state,
time_created=self.time_created,
time_zone=self.time_zone)
def get_external_non_container_database(external_non_container_database_id: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetExternalNonContainerDatabaseResult:
"""
This data source provides details about a specific External Non Container Database resource in Oracle Cloud Infrastructure Database service.
Gets information about a specific external non-container database.
## Example Usage
```python
import pulumi
import pulumi_oci as oci
test_external_non_container_database = oci.database.get_external_non_container_database(external_non_container_database_id=oci_database_external_non_container_database["test_external_non_container_database"]["id"])
```
:param str external_non_container_database_id: The external non-container database [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm).
"""
__args__ = dict()
__args__['externalNonContainerDatabaseId'] = external_non_container_database_id
if opts is None:
opts = pulumi.InvokeOptions()
if opts.version is None:
opts.version = _utilities.get_version()
__ret__ = pulumi.runtime.invoke('oci:database/getExternalNonContainerDatabase:getExternalNonContainerDatabase', __args__, opts=opts, typ=GetExternalNonContainerDatabaseResult).value
return AwaitableGetExternalNonContainerDatabaseResult(
character_set=__ret__.character_set,
compartment_id=__ret__.compartment_id,
database_configuration=__ret__.database_configuration,
database_edition=__ret__.database_edition,
database_management_config=__ret__.database_management_config,
database_version=__ret__.database_version,
db_id=__ret__.db_id,
db_packs=__ret__.db_packs,
db_unique_name=__ret__.db_unique_name,
defined_tags=__ret__.defined_tags,
display_name=__ret__.display_name,
external_non_container_database_id=__ret__.external_non_container_database_id,
freeform_tags=__ret__.freeform_tags,
id=__ret__.id,
lifecycle_details=__ret__.lifecycle_details,
ncharacter_set=__ret__.ncharacter_set,
operations_insights_config=__ret__.operations_insights_config,
state=__ret__.state,
time_created=__ret__.time_created,
time_zone=__ret__.time_zone)
| 45.034483 | 457 | 0.69748 | 1,662 | 14,366 | 5.711191 | 0.127557 | 0.028972 | 0.052676 | 0.073746 | 0.397282 | 0.263064 | 0.170881 | 0.136115 | 0.0748 | 0.0748 | 0 | 0.000088 | 0.212864 | 14,366 | 318 | 458 | 45.176101 | 0.839317 | 0.188013 | 0 | 0.105263 | 1 | 0 | 0.191274 | 0.071045 | 0 | 0 | 0 | 0 | 0 | 1 | 0.110048 | false | 0 | 0.028708 | 0.004785 | 0.253589 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ed55484ea14f91f98d1615b910fc743371e53922 | 13,543 | py | Python | deep-rl/lib/python2.7/site-packages/OpenGL/arrays/arraydatatype.py | ShujaKhalid/deep-rl | 99c6ba6c3095d1bfdab81bd01395ced96bddd611 | [
"MIT"
] | 87 | 2015-04-09T16:57:27.000Z | 2022-02-21T13:21:12.000Z | deep-rl/lib/python2.7/site-packages/OpenGL/arrays/arraydatatype.py | ShujaKhalid/deep-rl | 99c6ba6c3095d1bfdab81bd01395ced96bddd611 | [
"MIT"
] | 47 | 2015-04-09T21:05:30.000Z | 2021-06-22T15:21:18.000Z | deep-rl/lib/python2.7/site-packages/OpenGL/arrays/arraydatatype.py | ShujaKhalid/deep-rl | 99c6ba6c3095d1bfdab81bd01395ced96bddd611 | [
"MIT"
] | 16 | 2015-04-09T19:10:22.000Z | 2020-07-19T05:41:06.000Z | """Array data-type implementations (abstraction points for GL array types"""
import ctypes
import OpenGL
from OpenGL.raw.GL import _types
from OpenGL import plugins
from OpenGL.arrays import formathandler, _arrayconstants as GL_1_1
from OpenGL import logs
_log = logs.getLog( 'OpenGL.arrays.arraydatatype' )
from OpenGL import acceleratesupport
ADT = None
if acceleratesupport.ACCELERATE_AVAILABLE:
try:
from OpenGL_accelerate.arraydatatype import ArrayDatatype as ADT
except ImportError as err:
_log.warn(
"Unable to load ArrayDatatype accelerator from OpenGL_accelerate"
)
if ADT is None:
# Python-coded version
class HandlerRegistry( dict ):
GENERIC_OUTPUT_PREFERENCES = ['numpy','ctypesarrays']
def __init__( self, plugin_match ):
self.match = plugin_match
self.output_handler = None
self.preferredOutput = None
self.all_output_handlers = []
def __call__( self, value ):
"""Lookup of handler for given value"""
try:
typ = value.__class__
except AttributeError as err:
typ = type(value)
handler = self.get( typ )
if not handler:
if hasattr( typ, '__mro__' ):
for base in typ.__mro__:
handler = self.get( base )
if not handler:
handler = self.match( base )
if handler:
handler = handler.load()
if handler:
handler = handler()
if handler:
self[ typ ] = handler
if hasattr( handler, 'registerEquivalent' ):
handler.registerEquivalent( typ, base )
return handler
raise TypeError(
"""No array-type handler for type %s.%s (value: %s) registered"""%(
typ.__module__, type.__name__, repr(value)[:50]
)
)
return handler
def handler_by_plugin_name( self, name ):
plugin = plugins.FormatHandler.by_name( name )
if plugin:
try:
return plugin.load()
except ImportError as err:
return None
else:
raise RuntimeError( 'No handler of name %s found'%(name,))
def get_output_handler( self ):
"""Fast-path lookup for output handler object"""
if self.output_handler is None:
if self.preferredOutput is not None:
self.output_handler = self.handler_by_plugin_name( self.preferredOutput )
if not self.output_handler:
for preferred in self.GENERIC_OUTPUT_PREFERENCES:
self.output_handler = self.handler_by_plugin_name( preferred )
if self.output_handler:
break
if not self.output_handler:
raise RuntimeError(
"""Unable to find any output handler at all (not even ctypes/numpy ones!)"""
)
return self.output_handler
def register( self, handler, types=None ):
"""Register this class as handler for given set of types"""
if not isinstance( types, (list,tuple)):
types = [ types ]
for type in types:
self[ type ] = handler
if handler.isOutput:
self.all_output_handlers.append( handler )
def registerReturn( self, handler ):
"""Register this handler as the default return-type handler"""
if isinstance( handler, (str,unicode)):
self.preferredOutput = handler
self.output_handler = None
else:
self.preferredOutput = None
self.output_handler = handler
GLOBAL_REGISTRY = HandlerRegistry( plugins.FormatHandler.match)
formathandler.FormatHandler.TYPE_REGISTRY = GLOBAL_REGISTRY
class ArrayDatatype( object ):
"""Mix-in for array datatype classes
The ArrayDatatype marker essentially is used to mark a particular argument
as having an "array" type, which means that it is eligible for handling
via the arrays sub-package and its registered handlers.
"""
typeConstant = None
handler = GLOBAL_REGISTRY
getHandler = GLOBAL_REGISTRY.__call__
returnHandler = GLOBAL_REGISTRY.get_output_handler
isAccelerated = False
@classmethod
def getRegistry( cls ):
"""Get our handler registry"""
return cls.handler
def from_param( cls, value, typeConstant=None ):
"""Given a value in a known data-pointer type, convert to a ctypes pointer"""
return cls.getHandler(value).from_param( value, cls.typeConstant )
from_param = classmethod( logs.logOnFail( from_param, _log ) )
def dataPointer( cls, value ):
"""Given a value in a known data-pointer type, return long for pointer"""
try:
return cls.getHandler(value).dataPointer( value )
except Exception as err:
_log.warn(
"""Failure in dataPointer for %s instance %s""", type(value), value,
)
raise
dataPointer = classmethod( logs.logOnFail( dataPointer, _log ) )
def voidDataPointer( cls, value ):
"""Given value in a known data-pointer type, return void_p for pointer"""
pointer = cls.dataPointer( value )
try:
return ctypes.c_void_p(pointer)
except TypeError as err:
return pointer
voidDataPointer = classmethod( logs.logOnFail( voidDataPointer, _log ) )
def typedPointer( cls, value ):
"""Return a pointer-to-base-type pointer for given value"""
return ctypes.cast( cls.dataPointer(value), ctypes.POINTER( cls.baseType ))
typedPointer = classmethod( typedPointer )
def asArray( cls, value, typeCode=None ):
"""Given a value, convert to preferred array representation"""
return cls.getHandler(value).asArray( value, typeCode or cls.typeConstant )
asArray = classmethod( logs.logOnFail( asArray, _log ) )
def arrayToGLType( cls, value ):
"""Given a data-value, guess the OpenGL type of the corresponding pointer
Note: this is not currently used in PyOpenGL and may be removed
eventually.
"""
return cls.getHandler(value).arrayToGLType( value )
arrayToGLType = classmethod( logs.logOnFail( arrayToGLType, _log ) )
def arraySize( cls, value, typeCode = None ):
"""Given a data-value, calculate dimensions for the array (number-of-units)"""
return cls.getHandler(value).arraySize( value, typeCode or cls.typeConstant )
arraySize = classmethod( logs.logOnFail( arraySize, _log ) )
def unitSize( cls, value, typeCode=None ):
"""Determine unit size of an array (if possible)
Uses our local type if defined, otherwise asks the handler to guess...
"""
return cls.getHandler(value).unitSize( value, typeCode or cls.typeConstant )
unitSize = classmethod( logs.logOnFail( unitSize, _log ) )
def zeros( cls, dims, typeCode=None ):
"""Allocate a return array of the given dimensions filled with zeros"""
return cls.returnHandler().zeros( dims, typeCode or cls.typeConstant )
zeros = classmethod( logs.logOnFail( zeros, _log ) )
def dimensions( cls, value ):
"""Given a data-value, get the dimensions (assumes full structure info)"""
return cls.getHandler(value).dimensions( value )
dimensions = classmethod( logs.logOnFail( dimensions, _log ) )
def arrayByteCount( cls, value ):
"""Given a data-value, try to determine number of bytes it's final form occupies
For most data-types this is arraySize() * atomic-unit-size
"""
return cls.getHandler(value).arrayByteCount( value )
arrayByteCount = classmethod( logs.logOnFail( arrayByteCount, _log ) )
# the final array data-type classes...
class GLclampdArray( ArrayDatatype, ctypes.POINTER(_types.GLclampd )):
"""Array datatype for GLclampd types"""
baseType = _types.GLclampd
typeConstant = _types.GL_DOUBLE
class GLclampfArray( ArrayDatatype, ctypes.POINTER(_types.GLclampf )):
"""Array datatype for GLclampf types"""
baseType = _types.GLclampf
typeConstant = _types.GL_FLOAT
class GLfloatArray( ArrayDatatype, ctypes.POINTER(_types.GLfloat )):
"""Array datatype for GLfloat types"""
baseType = _types.GLfloat
typeConstant = _types.GL_FLOAT
class GLdoubleArray( ArrayDatatype, ctypes.POINTER(_types.GLdouble )):
"""Array datatype for GLdouble types"""
baseType = _types.GLdouble
typeConstant = _types.GL_DOUBLE
class GLbyteArray( ArrayDatatype, ctypes.POINTER(_types.GLbyte )):
"""Array datatype for GLbyte types"""
baseType = _types.GLbyte
typeConstant = _types.GL_BYTE
class GLcharArray( ArrayDatatype, ctypes.c_char_p):
"""Array datatype for ARB extension pointers-to-arrays"""
baseType = _types.GLchar
typeConstant = _types.GL_BYTE
GLcharARBArray = GLcharArray
class GLshortArray( ArrayDatatype, ctypes.POINTER(_types.GLshort )):
"""Array datatype for GLshort types"""
baseType = _types.GLshort
typeConstant = _types.GL_SHORT
class GLintArray( ArrayDatatype, ctypes.POINTER(_types.GLint )):
"""Array datatype for GLint types"""
baseType = _types.GLint
typeConstant = _types.GL_INT
class GLubyteArray( ArrayDatatype, ctypes.POINTER(_types.GLubyte )):
"""Array datatype for GLubyte types"""
baseType = _types.GLubyte
typeConstant = _types.GL_UNSIGNED_BYTE
GLbooleanArray = GLubyteArray
class GLushortArray( ArrayDatatype, ctypes.POINTER(_types.GLushort )):
"""Array datatype for GLushort types"""
baseType = _types.GLushort
typeConstant = _types.GL_UNSIGNED_SHORT
class GLuintArray( ArrayDatatype, ctypes.POINTER(_types.GLuint )):
"""Array datatype for GLuint types"""
baseType = _types.GLuint
typeConstant = _types.GL_UNSIGNED_INT
class GLint64Array( ArrayDatatype, ctypes.POINTER(_types.GLint64 )):
"""Array datatype for GLuint types"""
baseType = _types.GLint64
typeConstant = None # TODO: find out what this should be!
class GLuint64Array( ArrayDatatype, ctypes.POINTER(_types.GLuint64 )):
"""Array datatype for GLuint types"""
baseType = _types.GLuint64
typeConstant = _types.GL_UNSIGNED_INT64
class GLenumArray( ArrayDatatype, ctypes.POINTER(_types.GLenum )):
"""Array datatype for GLenum types"""
baseType = _types.GLenum
typeConstant = _types.GL_UNSIGNED_INT
class GLsizeiArray( ArrayDatatype, ctypes.POINTER(_types.GLsizei )):
"""Array datatype for GLsizei types"""
baseType = _types.GLsizei
typeConstant = _types.GL_INT
class GLvoidpArray( ArrayDatatype, ctypes.POINTER(_types.GLvoid )):
"""Array datatype for GLenum types"""
baseType = _types.GLvoidp
typeConstant = _types.GL_VOID_P
else:
# Cython-coded array handler
_log.info( 'Using accelerated ArrayDatatype' )
ArrayDatatype = ADT( None, None )
GLclampdArray = ADT( GL_1_1.GL_DOUBLE, _types.GLclampd )
GLclampfArray = ADT( GL_1_1.GL_FLOAT, _types.GLclampf )
GLdoubleArray = ADT( GL_1_1.GL_DOUBLE, _types.GLdouble )
GLfloatArray = ADT( GL_1_1.GL_FLOAT, _types.GLfloat )
GLbyteArray = ADT( GL_1_1.GL_BYTE, _types.GLbyte )
GLcharArray = GLcharARBArray = ADT( GL_1_1.GL_BYTE, _types.GLchar )
GLshortArray = ADT( GL_1_1.GL_SHORT, _types.GLshort )
GLintArray = ADT( GL_1_1.GL_INT, _types.GLint )
GLubyteArray = GLbooleanArray = ADT( GL_1_1.GL_UNSIGNED_BYTE, _types.GLubyte )
GLushortArray = ADT( GL_1_1.GL_UNSIGNED_SHORT, _types.GLushort )
GLuintArray = ADT( GL_1_1.GL_UNSIGNED_INT, _types.GLuint )
GLint64Array = ADT( None, _types.GLint64 )
GLuint64Array = ADT( GL_1_1.GL_UNSIGNED_INT64, _types.GLuint64 )
GLenumArray = ADT( GL_1_1.GL_UNSIGNED_INT, _types.GLenum )
GLsizeiArray = ADT( GL_1_1.GL_INT, _types.GLsizei )
GLvoidpArray = ADT( _types.GL_VOID_P, _types.GLvoidp )
GL_CONSTANT_TO_ARRAY_TYPE = {
GL_1_1.GL_DOUBLE : GLclampdArray,
GL_1_1.GL_FLOAT : GLclampfArray,
GL_1_1.GL_FLOAT : GLfloatArray,
GL_1_1.GL_DOUBLE : GLdoubleArray,
GL_1_1.GL_BYTE : GLbyteArray,
GL_1_1.GL_SHORT : GLshortArray,
GL_1_1.GL_INT : GLintArray,
GL_1_1.GL_UNSIGNED_BYTE : GLubyteArray,
GL_1_1.GL_UNSIGNED_SHORT : GLushortArray,
GL_1_1.GL_UNSIGNED_INT : GLuintArray,
#GL_1_1.GL_UNSIGNED_INT : GLenumArray,
}
| 44.844371 | 100 | 0.616333 | 1,442 | 13,543 | 5.597087 | 0.180999 | 0.009664 | 0.012886 | 0.018585 | 0.164911 | 0.112502 | 0.072606 | 0.029488 | 0.008425 | 0 | 0 | 0.00826 | 0.302739 | 13,543 | 301 | 101 | 44.993355 | 0.846447 | 0.159197 | 0 | 0.166667 | 0 | 0 | 0.017495 | 0.002486 | 0 | 0 | 0 | 0.003322 | 0 | 1 | 0.081081 | false | 0 | 0.045045 | 0 | 0.509009 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
ed5b25db8eee2bdd6eb22e7c4a9c331775d6cf05 | 1,651 | py | Python | services/server/server/apps/checkout/migrations/0001_initial.py | AyanSamanta23/moni-moni | 8e8aa4edf4cd2e2b005f6dbe8c885ecc791e6a2b | [
"MIT"
] | null | null | null | services/server/server/apps/checkout/migrations/0001_initial.py | AyanSamanta23/moni-moni | 8e8aa4edf4cd2e2b005f6dbe8c885ecc791e6a2b | [
"MIT"
] | null | null | null | services/server/server/apps/checkout/migrations/0001_initial.py | AyanSamanta23/moni-moni | 8e8aa4edf4cd2e2b005f6dbe8c885ecc791e6a2b | [
"MIT"
] | null | null | null | # Generated by Django 4.0.2 on 2022-02-26 15:52
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='FundingOptions',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('funding_name', models.CharField(help_text='Required', max_length=255, verbose_name='funding_name')),
('funding_price', models.DecimalField(decimal_places=2, help_text='Required', max_digits=1000, verbose_name='funding price')),
('funding_timeframe', models.CharField(help_text='Required', max_length=255, verbose_name='funding timeframe')),
('funding_window', models.CharField(help_text='Required', max_length=255, verbose_name='funding window')),
],
options={
'verbose_name': 'Funding Option',
'verbose_name_plural': 'Funding Options',
},
),
migrations.CreateModel(
name='PaymentSelections',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(help_text='Required', max_length=255, verbose_name='name')),
('is_active', models.BooleanField(default=True)),
],
options={
'verbose_name': 'Payment Selection',
'verbose_name_plural': 'Payment Selections',
},
),
]
| 40.268293 | 142 | 0.59358 | 161 | 1,651 | 5.875776 | 0.391304 | 0.127907 | 0.084567 | 0.100423 | 0.432347 | 0.432347 | 0.432347 | 0.432347 | 0.432347 | 0.432347 | 0 | 0.026823 | 0.277408 | 1,651 | 40 | 143 | 41.275 | 0.766136 | 0.027256 | 0 | 0.363636 | 1 | 0 | 0.208229 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.030303 | 0 | 0.151515 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ed5bb200d9597641b3d366c18b6bda01b9a7883d | 6,119 | py | Python | src/TF-gui/tftrain.py | jeetsagar/turbojet | 9b17edde0a7e01d0fa320261fbc2734ce53577d2 | [
"MIT"
] | null | null | null | src/TF-gui/tftrain.py | jeetsagar/turbojet | 9b17edde0a7e01d0fa320261fbc2734ce53577d2 | [
"MIT"
] | null | null | null | src/TF-gui/tftrain.py | jeetsagar/turbojet | 9b17edde0a7e01d0fa320261fbc2734ce53577d2 | [
"MIT"
] | 2 | 2021-05-20T05:47:59.000Z | 2021-08-24T07:44:37.000Z | #!python3
import os
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
# gpu_devices = tf.config.experimental.list_physical_devices("GPU")
# for device in gpu_devices:
# tf.config.experimental.set_memory_growth(device, True)
def trainModel(data_in, params_in):
data_in = data_in.take(2048)
data_in = data_in.shuffle(24)
data_in = data_in.batch(1024)
arch = params_in["Architecture"]
dropout = params_in["Dropout"]
lr = params_in["LearningRate"]
attrs = params_in["Attrs"]
epochs = params_in["Epochs"]
if arch == "BaseCNN":
if params_in["BatchNorm"]:
model = tf.keras.Sequential([
layers.Conv1D(filters=10, kernel_size=5, padding="same", activation="relu", input_shape=(1, 50, attrs)),
layers.Dropout(dropout),
layers.Conv1D(filters=10, kernel_size=5, padding="same", activation="relu"),
layers.Dropout(dropout),
layers.Conv1D(filters=1, kernel_size=5, padding="same", activation="relu"),
layers.Dropout(dropout),
layers.BatchNormalization(),
layers.Flatten(),
layers.Dense(50, "relu"),
layers.Dense(1)
])
else:
model = tf.keras.Sequential([
layers.Conv1D(filters=10, kernel_size=5, padding="same", activation="relu", input_shape=(1, 50, attrs)),
layers.Dropout(dropout),
layers.Conv1D(filters=10, kernel_size=5, padding="same", activation="relu"),
layers.Dropout(dropout),
layers.Conv1D(filters=1, kernel_size=5, padding="same", activation="relu"),
layers.Dropout(dropout),
layers.Flatten(),
layers.Dense(50, "relu"),
layers.Dense(1)
])
elif arch == "CNN-LSTM":
if params_in["BatchNorm"]:
model = tf.keras.Sequential([
layers.Conv1D(filters=10, kernel_size=5, padding="same", activation="relu", input_shape=(1, 50, attrs)),
layers.Dropout(dropout),
layers.Conv1D(filters=10, kernel_size=5, padding="same", activation="relu"),
layers.Dropout(dropout),
layers.Conv1D(filters=1, kernel_size=5, padding="same", activation="relu"),
layers.Dropout(dropout),
layers.BatchNormalization(),
layers.Reshape((5, 10)),
layers.LSTM(30, return_sequences=False),
layers.Dense(50, "relu"),
layers.Dense(1)
])
else:
model = tf.keras.Sequential([
layers.Conv1D(filters=10, kernel_size=5, padding="same", activation="relu", input_shape=(1, 50, attrs)),
layers.Dropout(dropout),
layers.Conv1D(filters=10, kernel_size=5, padding="same", activation="relu"),
layers.Dropout(dropout),
layers.Conv1D(filters=1, kernel_size=5, padding="same", activation="relu"),
layers.Dropout(dropout),
layers.Reshape((5, 10)),
layers.LSTM(30, return_sequences=False),
layers.Dense(50, "relu"),
layers.Dense(1)
])
elif arch == "CNN-2LSTM":
if params_in["BatchNorm"]:
model = tf.keras.Sequential([
layers.Conv1D(filters=10, kernel_size=5, padding="same", activation="relu", input_shape=(1, 50, attrs)),
layers.Dropout(dropout),
layers.Conv1D(filters=10, kernel_size=5, padding="same", activation="relu"),
layers.Dropout(dropout),
layers.Conv1D(filters=1, kernel_size=5, padding="same", activation="relu"),
layers.Dropout(dropout),
layers.BatchNormalization(),
layers.Reshape((5, 10)),
layers.LSTM(30, return_sequences=True),
layers.LSTM(30, return_sequences=False),
layers.Dense(1)
])
else:
model = tf.keras.Sequential([
layers.Conv1D(filters=10, kernel_size=5, padding="same", activation="relu", input_shape=(1, 50, attrs)),
layers.Dropout(dropout),
layers.Conv1D(filters=10, kernel_size=5, padding="same", activation="relu"),
layers.Dropout(dropout),
layers.Conv1D(filters=1, kernel_size=5, padding="same", activation="relu"),
layers.Dropout(dropout),
layers.Reshape((5, 10)),
layers.LSTM(30, return_sequences=True),
layers.LSTM(30, return_sequences=False),
layers.Dense(1)
])
model.compile(loss=tf.losses.MeanSquaredError(), optimizer=tf.optimizers.Adam(learning_rate=lr, amsgrad=True))
filepath = "./checkpoints/Model_in-" + arch + str(attrs) + ".h5"
losses = []
class CustomModelCheckPoint(tf.keras.callbacks.Callback):
        def __init__(self, **kwargs):
            super(CustomModelCheckPoint, self).__init__(**kwargs)
self.epoch_loss = {} # accuracy at given epoch
def on_epoch_begin(self, epoch, logs={}):
# Things done on beginning of epoch.
return
def on_epoch_end(self, epoch, logs={}):
# things done on end of the epoch
self.epoch_loss[epoch] = logs.get("loss")
losses.append(self.epoch_loss[epoch])
if params_in["ResumeTraining"]:
model.load_weights(filepath)
checkpoint2 = CustomModelCheckPoint()
    checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath, monitor='loss', verbose=0, save_best_only=True,
                                                    save_freq='epoch')
model.fit(data_in, epochs=epochs, callbacks=[checkpoint, checkpoint2])
df_loss = pd.DataFrame()
df_loss["Epochs"] = list(range(1, epochs + 1))
df_loss["Loss"] = losses
df_loss.to_csv("./losses/lossTrend.csv", index=False)
| 42.493056 | 120 | 0.57346 | 664 | 6,119 | 5.161145 | 0.201807 | 0.063029 | 0.099796 | 0.094543 | 0.674934 | 0.657426 | 0.642836 | 0.642836 | 0.642836 | 0.638751 | 0 | 0.032579 | 0.292695 | 6,119 | 143 | 121 | 42.79021 | 0.759242 | 0.041183 | 0 | 0.669492 | 0 | 0 | 0.061263 | 0.007679 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033898 | false | 0 | 0.033898 | 0.008475 | 0.084746 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ed5bcaf7cb360ac7f0af74528df0eb589224f1a5 | 5,434 | py | Python | library/kong_api.py | sebastienc/ansible-kong-module | c1e7b471a517d1ec99c5629f3729ebc34088bd64 | [
"MIT"
] | 34 | 2016-03-09T17:10:52.000Z | 2019-12-25T08:31:49.000Z | library/kong_api.py | sebastienc/ansible-kong-module | c1e7b471a517d1ec99c5629f3729ebc34088bd64 | [
"MIT"
] | 6 | 2016-05-16T14:09:05.000Z | 2018-07-23T21:09:33.000Z | library/kong_api.py | sebastienc/ansible-kong-module | c1e7b471a517d1ec99c5629f3729ebc34088bd64 | [
"MIT"
] | 23 | 2016-02-17T12:18:16.000Z | 2021-05-06T09:39:35.000Z | #!/usr/bin/python
DOCUMENTATION = '''
---
module: kong
short_description: Configure a Kong API Gateway
'''
EXAMPLES = '''
- name: Register a site
kong:
kong_admin_uri: http://127.0.0.1:8001/apis/
name: "Mockbin"
taget_url: "http://mockbin.com"
request_host: "mockbin.com"
state: present
- name: Delete a site
kong:
kong_admin_uri: http://127.0.0.1:8001/apis/
name: "Mockbin"
state: absent
'''
import json, requests, os
class KongAPI:
def __init__(self, base_url, auth_username=None, auth_password=None):
self.base_url = base_url
if auth_username is not None and auth_password is not None:
self.auth = (auth_username, auth_password)
else:
self.auth = None
def __url(self, path):
        return "{}{}".format(self.base_url, path)
def _api_exists(self, name, api_list):
for api in api_list:
if name == api.get("name", None):
return True
return False
def add_or_update(self, name, upstream_url, request_host=None, request_path=None, strip_request_path=False, preserve_host=False):
method = "post"
url = self.__url("/apis/")
api_list = self.list().json().get("data", [])
api_exists = self._api_exists(name, api_list)
if api_exists:
method = "patch"
            url = "{}{}".format(url, name)
data = {
"name": name,
"upstream_url": upstream_url,
"strip_request_path": strip_request_path,
"preserve_host": preserve_host
}
if request_host is not None:
data['request_host'] = request_host
if request_path is not None:
data['request_path'] = request_path
return getattr(requests, method)(url, data, auth=self.auth)
def list(self):
url = self.__url("/apis")
return requests.get(url, auth=self.auth)
def info(self, id):
        url = self.__url("/apis/{}".format(id))
return requests.get(url, auth=self.auth)
def delete_by_name(self, name):
info = self.info(name)
id = info.json().get("id")
return self.delete(id)
def delete(self, id):
        path = "/apis/{}".format(id)
url = self.__url(path)
return requests.delete(url, auth=self.auth)
class ModuleHelper:
def __init__(self, fields):
self.fields = fields
def get_module(self):
args = dict(
kong_admin_uri = dict(required=False, type='str'),
kong_admin_username = dict(required=False, type='str'),
kong_admin_password = dict(required=False, type='str'),
name = dict(required=False, type='str'),
upstream_url = dict(required=False, type='str'),
request_host = dict(required=False, type='str'),
request_path = dict(required=False, type='str'),
strip_request_path = dict(required=False, default=False, type='bool'),
preserve_host = dict(required=False, default=False, type='bool'),
state = dict(required=False, default="present", choices=['present', 'absent', 'latest', 'list', 'info'], type='str'),
)
return AnsibleModule(argument_spec=args,supports_check_mode=False)
def prepare_inputs(self, module):
url = module.params['kong_admin_uri']
auth_user = module.params['kong_admin_username']
auth_password = module.params['kong_admin_password']
state = module.params['state']
data = {}
for field in self.fields:
value = module.params.get(field, None)
if value is not None:
data[field] = value
return (url, data, state, auth_user, auth_password)
    def get_response(self, response, state):
        # defaults so the unhandled states ('latest', 'info') do not raise
        # UnboundLocalError on return
        meta = {}
        has_changed = False
        if state == "present":
            meta = response.json()
            has_changed = response.status_code in [201, 200]
        if state == "absent":
            meta = {}
            has_changed = response.status_code == 204
        if state == "list":
            meta = response.json()
            has_changed = False
        return (has_changed, meta)
def main():
fields = [
'name',
'upstream_url',
'request_host',
'request_path',
'strip_request_path',
'preserve_host'
]
helper = ModuleHelper(fields)
global module # might not need this
module = helper.get_module()
base_url, data, state, auth_user, auth_password = helper.prepare_inputs(module)
api = KongAPI(base_url, auth_user, auth_password)
if state == "present":
response = api.add_or_update(**data)
if state == "absent":
response = api.delete_by_name(data.get("name"))
if state == "list":
response = api.list()
if response.status_code == 401:
module.fail_json(msg="Please specify kong_admin_username and kong_admin_password", meta=response.json())
elif response.status_code == 403:
module.fail_json(msg="Please check kong_admin_username and kong_admin_password", meta=response.json())
else:
has_changed, meta = helper.get_response(response, state)
module.exit_json(changed=has_changed, meta=meta)
from ansible.module_utils.basic import *
from ansible.module_utils.urls import *
if __name__ == '__main__':
main()
| 30.188889 | 133 | 0.597902 | 662 | 5,434 | 4.685801 | 0.193353 | 0.034816 | 0.054803 | 0.047389 | 0.303997 | 0.199871 | 0.179884 | 0.088975 | 0.066409 | 0.066409 | 0 | 0.008947 | 0.280088 | 5,434 | 179 | 134 | 30.357542 | 0.783998 | 0.006625 | 0 | 0.145985 | 0 | 0 | 0.159036 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.094891 | false | 0.072993 | 0.021898 | 0.007299 | 0.211679 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
ed5e905c814c4d72273c16c39c47e06ae62fc1f0 | 897 | gyp | Python | tools/android/android_tools.gyp | SlimKatLegacy/android_external_chromium_org | ee480ef5039d7c561fc66ccf52169ead186f1bea | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | 2 | 2015-03-04T02:36:53.000Z | 2016-06-25T11:22:17.000Z | tools/android/android_tools.gyp | j4ckfrost/android_external_chromium_org | a1a3dad8b08d1fcf6b6b36c267158ed63217c780 | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | null | null | null | tools/android/android_tools.gyp | j4ckfrost/android_external_chromium_org | a1a3dad8b08d1fcf6b6b36c267158ed63217c780 | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | 4 | 2015-02-09T08:49:30.000Z | 2017-08-26T02:03:34.000Z | # Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
{
'targets': [
# Intermediate target grouping the android tools needed to run native
# unittests and instrumentation test apks.
{
'target_name': 'android_tools',
'type': 'none',
'dependencies': [
'adb_reboot/adb_reboot.gyp:adb_reboot',
'forwarder2/forwarder.gyp:forwarder2',
'md5sum/md5sum.gyp:md5sum',
'purge_ashmem/purge_ashmem.gyp:purge_ashmem',
],
},
{
'target_name': 'memdump',
'type': 'none',
'dependencies': [
'memdump/memdump.gyp:memdump',
],
},
{
'target_name': 'memconsumer',
'type': 'none',
'dependencies': [
'memconsumer/memconsumer.gyp:memconsumer',
],
},
],
}
| 25.628571 | 73 | 0.596433 | 94 | 897 | 5.585106 | 0.595745 | 0.057143 | 0.114286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013761 | 0.270903 | 897 | 34 | 74 | 26.382353 | 0.788991 | 0.298774 | 0 | 0.357143 | 0 | 0 | 0.536116 | 0.325843 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ed6739ff5f8ea8f36a47b3ca0134c0c015b0b7c7 | 3,499 | py | Python | fuzzybee/joboard/views.py | youtaya/knight | 6899e18ca6b1ef01daaae7d7fd14b50a26aa0aee | [
"MIT"
] | null | null | null | fuzzybee/joboard/views.py | youtaya/knight | 6899e18ca6b1ef01daaae7d7fd14b50a26aa0aee | [
"MIT"
] | null | null | null | fuzzybee/joboard/views.py | youtaya/knight | 6899e18ca6b1ef01daaae7d7fd14b50a26aa0aee | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from django.shortcuts import get_object_or_404, render_to_response, render
from django.http import HttpResponseRedirect, HttpResponse
from django.core.urlresolvers import reverse
from django.shortcuts import redirect
from joboard.models import Factory
from joboard.forms import FactoryForm
from django.template import RequestContext
from django.core.exceptions import ObjectDoesNotExist
from urllib import urlopen, urlencode
import urllib2
from fuzzybee.conf import b_url, b_ak, geo_table, l_url, app_id, app_key
from utils.pack_json import toJSON, fromJSON
from django.contrib.auth.decorators import login_required
from people.models import People
import logging
logger = logging.getLogger(__name__)
@login_required
def index(request):
form = None
if request.method == 'POST':
form = FactoryForm(request.POST)
print form
if form.is_valid():
factory = form.cleaned_data
logger.debug("lat: " + str(factory['fact_lat']))
logger.debug("addr: " + factory['fact_addr'])
#save factory in model
factmodel = form.save(commit=False)
print request.user
factmodel.fact_maintainer = People.objects.get(user=request.user)
factmodel.save()
factid = factmodel.id
#save in public server: leancloud and baidu
save_factory_cloud(factory, factid)
return HttpResponseRedirect(reverse('board:detail', args=(factid,)))
else:
form = FactoryForm()
return render_to_response('board/new.html', {'form': form}, context_instance=RequestContext(request))
@login_required
def detail(request, fact_id):
print fact_id
info = get_object_or_404(Factory, pk=fact_id)
return render(request, 'board/detail.html', {'info':info})
@login_required
def manager(request):
print "manager..."
try:
people = People.objects.get(user=request.user)
factory = Factory.objects.get(fact_maintainer=people)
except ObjectDoesNotExist:
print 'no hire action...'
return redirect(reverse('joboard.views.index', args=[]))
return render(request, 'board/manager.html', {'info':factory})
def save_factory_cloud(fact_info, fact_id):
title = fact_info['fact_name']
address = fact_info['fact_addr']
lat = fact_info['fact_lat']
lng = fact_info['fact_lng']
num = fact_info['hire_num']
data = {
'title': title.encode("utf-8"),
'address': address.encode("utf-8"),
'latitude': lat,
'longitude': lng,
'job_num': num,
'factory_id': fact_id,
}
head = {
'X-AVOSCloud-Application-Id': app_id,
'X-AVOSCloud-Application-Key': app_key,
'Content-Type': 'application/json',
}
req = urllib2.Request(l_url, toJSON(data), head)
print str(req)
response = urllib2.urlopen(req)
#print respone.read()
lean_response = fromJSON(response.read())
print lean_response
lean_objectId = lean_response['objectId']
# save in Baidu Map
params = urlencode({
'title': title.encode("utf-8"),
'address': address.encode("utf-8"),
'latitude': lat,
'longitude': lng,
'coord_type': 3,
'geotable_id': geo_table,
'ak': b_ak,
'job_num': num,
'lean_id': lean_objectId,
})
req = urllib2.Request(b_url, params)
#print str(req)
response = urllib2.urlopen(req)
#print respone.read()
| 33.009434 | 105 | 0.658474 | 427 | 3,499 | 5.234192 | 0.318501 | 0.03132 | 0.026846 | 0.022371 | 0.134228 | 0.134228 | 0.106488 | 0.106488 | 0.106488 | 0.106488 | 0 | 0.006264 | 0.22435 | 3,499 | 105 | 106 | 33.32381 | 0.817244 | 0.044584 | 0 | 0.168539 | 0 | 0 | 0.124138 | 0.015892 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.168539 | null | null | 0.078652 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ed693e39d7414ae26d14dc6568bc549d2c30f321 | 1,452 | py | Python | DD/Terrain.py | CodingBullywug/DDreshape | 393e5ea336eb6cb78f31345731ccf52baf19bfac | [
"MIT"
] | 2 | 2020-04-13T04:47:26.000Z | 2022-02-19T06:10:04.000Z | DD/Terrain.py | CodingBullywug/DDreshape | 393e5ea336eb6cb78f31345731ccf52baf19bfac | [
"MIT"
] | null | null | null | DD/Terrain.py | CodingBullywug/DDreshape | 393e5ea336eb6cb78f31345731ccf52baf19bfac | [
"MIT"
] | 1 | 2020-04-13T04:47:30.000Z | 2020-04-13T04:47:30.000Z | from DD.utils import PoolByteArray2NumpyArray, NumpyArray2PoolByteArray
from DD.Entity import Entity
import numpy as np
class Terrain(Entity):
def __init__(self, json, width, height, scale=4, terrain_types=4):
super(Terrain, self).__init__(json)
self._scale = scale
self.terrain_types = terrain_types
self.splat = PoolByteArray2NumpyArray(self._json['splat']).reshape(height*self._scale, width*self._scale, self.terrain_types, order='C')
def get_json(self):
json = self._json
json['splat'] = NumpyArray2PoolByteArray(self.splat.reshape(np.prod(self.splat.shape), order='C'))
return json
def pad(self, top, bottom, left, right):
self.splat = np.pad(self.splat,
((top*self._scale, bottom*self._scale), (left*self._scale, right*self._scale), (0,0)),
mode='edge')
def crop(self, top, bottom, left, right):
self.splat = self._crop_map_safe(self.splat, top, bottom, left, right, self._scale)
def fliplr(self, width):
self.splat = np.fliplr(self.splat)
def flipud(self, height):
self.splat = np.flipud(self.splat)
def rot90(self, width, height):
self.splat = self._rot90_map(self.splat)
def rot180(self, width, height):
self.splat = self._rot180_map(self.splat)
def rot270(self, width, height):
self.splat = self._rot270_map(self.splat)
| 37.230769 | 144 | 0.644628 | 188 | 1,452 | 4.808511 | 0.244681 | 0.169248 | 0.057522 | 0.059735 | 0.185841 | 0.161504 | 0.068584 | 0 | 0 | 0 | 0 | 0.021409 | 0.227961 | 1,452 | 38 | 145 | 38.210526 | 0.785013 | 0 | 0 | 0 | 0 | 0 | 0.011019 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.310345 | false | 0 | 0.103448 | 0 | 0.482759 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ed6ad5b625da50e0023d94d78806dbcd8acd64a1 | 28,127 | py | Python | datasets/medicalImage.py | UpCoder/YNe | 2f932456eda29b1e04f4c7e212e2ab0dacfe831b | [
"MIT"
] | null | null | null | datasets/medicalImage.py | UpCoder/YNe | 2f932456eda29b1e04f4c7e212e2ab0dacfe831b | [
"MIT"
] | null | null | null | datasets/medicalImage.py | UpCoder/YNe | 2f932456eda29b1e04f4c7e212e2ab0dacfe831b | [
"MIT"
] | null | null | null | # -*- coding=utf-8 -*-
import SimpleITK as itk
import pydicom
import numpy as np
from PIL import Image, ImageDraw
import gc
from skimage.morphology import disk, dilation
import nipy
import os
from glob import glob
import scipy.ndimage
import cv2
from xml.dom.minidom import Document
typenames = ['CYST', 'FNH', 'HCC', 'HEM', 'METS']
typeids = [0, 1, 2, 3, 4]
def get_voxel_size(file_path):
load_image_obj = nipy.load_image(file_path)
header = load_image_obj.header
x_size = header['srow_x'][0]
y_size = header['srow_y'][1]
z_size = header['srow_z'][2]
return [x_size, y_size, z_size]
def read_nii(file_path):
return nipy.load_image(file_path).get_data()
def read_nii_with_header(file_path):
img_obj = nipy.load_image(file_path)
header_obj = img_obj.header
res_dict = {}
res_dict['voxel_spacing'] = [header_obj['srow_x'][0], header_obj['srow_y'][1], header_obj['srow_z'][2]]
img_arr = img_obj.get_data()
return img_arr, res_dict
# Read a DICOM file series
def read_dicom_series(dir_name):
reader = itk.ImageSeriesReader()
dicom_series = reader.GetGDCMSeriesFileNames(dir_name)
reader.SetFileNames(dicom_series)
images = reader.Execute()
image_array = itk.GetArrayFromImage(images)
return image_array
# Convert a DICOM series into an MHD file
def convert_dicomseries2mhd(dicom_series_dir, save_path):
data = read_dicom_series(dicom_series_dir)
save_mhd_image(data, save_path)
# Read a single DICOM file
def read_dicom_file(file_name):
header = pydicom.read_file(file_name)
image = header.pixel_array
image = header.RescaleSlope * image + header.RescaleIntercept
return image
# Read an MHD image file
def read_mhd_image(file_path, rejust=False):
header = itk.ReadImage(file_path)
image = np.array(itk.GetArrayFromImage(header))
if rejust:
image[image < -70] = -70
image[image > 180] = 180
image = image + 70
return np.array(image)
# Save an MHD image file
def save_mhd_image(image, file_name):
header = itk.GetImageFromArray(image)
itk.WriteImage(header, file_name)
# Return the phase name (NC/ART/PV) contained in the file name
def return_phasename(file_name):
phasenames = ['NC', 'ART', 'PV']
for phasename in phasenames:
if file_name.find(phasename) != -1:
return phasename
# Read the patient ID stored in a DICOM file
def read_patientId(dicom_file_path):
ds = pydicom.read_file(dicom_file_path)
return ds.PatientID
# Return a dict mapping lesion types to IDs (key: typename, value: typeid)
def return_type_nameid():
res = {}
res['CYST'] = 0
res['FNH'] = 1
res['HCC'] = 2
res['HEM'] = 3
res['METS'] = 4
return res
# Return a dict mapping lesion type IDs to names (key: typeid, value: typename)
def return_type_idname():
res = {}
res[0] = 'CYST'
res[1] = 'FNH'
res[2] = 'HCC'
res[3] = 'HEM'
res[4] = 'METS'
return res
# Return the lesion type name string for a given type ID
def return_typename_byid(typeid):
idname_dict = return_type_idname()
return idname_dict[typeid]
# Return the lesion type ID for a given type name
def return_typeid_byname(typename):
nameid_dict = return_type_nameid()
return nameid_dict[typename]
# Fill holes in a binary image
def fill_region(image):
# image.show()
from scipy import ndimage
image = ndimage.binary_fill_holes(image).astype(np.uint8)
return image
def close_operation(binary_image, kernel_size=5):
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
close_r = cv2.morphologyEx(binary_image, cv2.MORPH_CLOSE, kernel)
return close_r
def open_operation(slice_image, kernel_size=3):
opening = cv2.morphologyEx(slice_image, cv2.MORPH_OPEN,
cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size)))
return opening
def get_kernel_filters(kernel_size):
'''
    Return five kernel templates: one for an ordinary dilation, plus four
    that dilate only towards the left/right/top/bottom respectively
:param kernel_size:
:return: [5, kernel_size, kernel_size]
'''
kernel_whole = np.ones([kernel_size, kernel_size], np.uint8)
half_size = kernel_size // 2
kernel_left = np.copy(kernel_whole)
kernel_left[:, half_size + 1:] = 0
kernel_right = np.copy(kernel_whole)
kernel_right[:, :half_size] = 0
kernel_top = np.copy(kernel_whole)
kernel_top[half_size + 1:, :] = 0
kernel_bottom = np.copy(kernel_whole)
kernel_bottom[:half_size, :] = 0
return np.concatenate([
np.expand_dims(kernel_whole, axis=0),
np.expand_dims(kernel_left, axis=0),
np.expand_dims(kernel_right, axis=0),
np.expand_dims(kernel_top, axis=0),
np.expand_dims(kernel_bottom, axis=0),
], axis=0)
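The five templates returned by `get_kernel_filters` can be sanity-checked with a small numpy sketch. This is a standalone re-implementation for illustration (using `np.stack`, which is equivalent to the `expand_dims` + `concatenate` pattern above); the array values are hypothetical.

```python
import numpy as np

def get_kernel_filters(kernel_size):
    # One full kernel plus four half-kernels that act only towards
    # the left/right/top/bottom of the centre pixel.
    whole = np.ones([kernel_size, kernel_size], np.uint8)
    half = kernel_size // 2
    left = whole.copy();   left[:, half + 1:] = 0
    right = whole.copy();  right[:, :half] = 0
    top = whole.copy();    top[half + 1:, :] = 0
    bottom = whole.copy(); bottom[:half, :] = 0
    return np.stack([whole, left, right, top, bottom])

kernels = get_kernel_filters(5)
print(kernels.shape)          # (5, 5, 5)
print(int(kernels[1].sum()))  # left kernel keeps 3 of 5 columns -> 15
```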
def image_expand(img, kernel_size=5):
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
image = cv2.dilate(img, kernel)
return image
def image_erode(img, kernel_size=5):
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
image = cv2.erode(img, kernel)
return image
# Image dilation
# def image_expand(image, size):
#
def find_significant_layer(mask_image):
'''
    Find the most significant slice, i.e. the one with the largest mask area
:param mask_image: [depth, width, height]
:return: idx
'''
sum_res = np.sum(np.sum(mask_image, axis=1), axis=1)
return np.argmax(sum_res)
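The slice selection in `find_significant_layer` reduces each slice to its mask area and takes the argmax. A quick numpy sketch with a hypothetical 3-slice mask:

```python
import numpy as np

# Hypothetical mask volume: slice 1 has the largest lesion area,
# so it should be reported as the significant layer.
mask = np.zeros((3, 8, 8), np.uint8)
mask[0, 2:4, 2:4] = 1   # area 4
mask[1, 1:6, 1:6] = 1   # area 25
mask[2, 0:3, 0:3] = 1   # area 9
idx = np.argmax(np.sum(np.sum(mask, axis=1), axis=1))
print(int(idx))  # 1
```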
# Save an array as an image file
def save_image(image_arr, save_path):
image = Image.fromarray(np.asarray(image_arr, np.uint8))
image.save(save_path)
def show_image(image):
img = np.asarray(image, np.uint8)
import matplotlib.pyplot as plt
plt.figure("Image")
    # cmap='gray' is required here; otherwise a grayscale source image is
    # rendered with a pseudo-color map
plt.imshow(img, cmap='gray')
plt.axis('on')
plt.title('image')
plt.show()
# Render the image and draw the annotated lesion mask on top
def save_image_with_mask(image_arr, mask_image, save_path):
image_arr[image_arr < -70] = -70
image_arr[image_arr > 180] = 180
image_arr = image_arr + 70
shape = list(np.shape(image_arr))
image_arr_rgb = np.zeros(shape=[shape[0], shape[1], 3])
image_arr_rgb[:, :, 0] = image_arr
image_arr_rgb[:, :, 1] = image_arr
image_arr_rgb[:, :, 2] = image_arr
image = Image.fromarray(np.asarray(image_arr_rgb, np.uint8))
image_draw = ImageDraw.Draw(image)
[ys, xs] = np.where(mask_image != 0)
miny = np.min(ys)
maxy = np.max(ys)
minx = np.min(xs)
maxx = np.max(xs)
ROI = image_arr_rgb[miny - 1:maxy + 1, minx - 1:maxx + 1, :]
ROI_Image = Image.fromarray(np.asarray(ROI, np.uint8))
for index, y in enumerate(ys):
image_draw.point([xs[index], y], fill=(255, 0, 0))
if save_path is None:
image.show()
else:
image.save(save_path)
ROI_Image.save(os.path.join(os.path.dirname(save_path), os.path.basename(save_path).split('.')[0] + '_ROI.jpg'))
del image, ROI_Image
gc.collect()
def compress22dim(image):
'''
    If the array carries a redundant dimension, squeeze it down to 2-D
'''
shape = list(np.shape(image))
if len(shape) == 3:
return np.squeeze(image)
return image
def extract_ROI(image, mask_image):
'''
    Extract the ROI (the tight bounding box of the mask) from an image
'''
xs, ys = np.where(mask_image == 1)
xs_min = np.min(xs)
xs_max = np.max(xs)
ys_min = np.min(ys)
ys_max = np.max(ys)
return image[xs_min: xs_max + 1, ys_min: ys_max + 1]
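`extract_ROI` crops the tight bounding box of the mask, inclusive of the maximum index. A minimal sketch with a hypothetical image and mask:

```python
import numpy as np

# 6x6 image with values 0..35; the mask marks rows 2-3 and columns 1-4.
image = np.arange(36).reshape(6, 6)
mask = np.zeros((6, 6), np.uint8)
mask[2:4, 1:5] = 1
xs, ys = np.where(mask == 1)
roi = image[xs.min():xs.max() + 1, ys.min():ys.max() + 1]
print(roi.shape)       # (2, 4)
print(int(roi[0, 0]))  # 13, i.e. image[2, 1]
```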
def resize_image(image, size):
image = Image.fromarray(np.asarray(image, np.uint8))
return image.resize((size, size))
# def image_expand(mask_image, r):
# return dilation(mask_image, disk(r))
'''
Convert an image shaped like (512, 512) into shape (1, 512, 512)
'''
def expand23D(mask_image):
shape = list(np.shape(mask_image))
if len(shape) == 2:
mask_image = np.expand_dims(mask_image, axis=0)
print('after expand23D', np.shape(mask_image))
return mask_image
'''
Return the centroid of a mask volume, computed as the mean of the x/y/z coordinates
'''
def find_centroid3D(image, flag):
[x, y, z] = np.where(image == flag)
centroid_x = int(np.mean(x))
centroid_y = int(np.mean(y))
centroid_z = int(np.mean(z))
return centroid_x, centroid_y, centroid_z
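The centroid computation above is just the mean of the foreground coordinates along each axis, truncated to integers. A sketch with a hypothetical cubic mask:

```python
import numpy as np

# A 3x3x3 block of ones centred at voxel (3, 3, 3).
vol = np.zeros((7, 7, 7), np.uint8)
vol[2:5, 2:5, 2:5] = 1
x, y, z = np.where(vol == 1)
centroid = (int(np.mean(x)), int(np.mean(y)), int(np.mean(z)))
print(centroid)  # (3, 3, 3)
```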
'''
Reshape [w, h, d] into [d, w, h]
'''
def convert2depthfirst(image):
image = np.array(image)
shape = np.shape(image)
new_image = np.zeros([shape[2], shape[0], shape[1]])
for i in range(shape[2]):
new_image[i, :, :] = image[:, :, i]
return new_image
# def test_convert2depthfirst():
# zeros = np.zeros([100, 100, 30])
# after_zeros = convert2depthfirst(zeros)
# print np.shape(after_zeros)
# test_convert2depthfirst()
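The slice-copy loop in `convert2depthfirst` above is equivalent to a single transpose that moves the depth axis first, which avoids the Python loop entirely:

```python
import numpy as np

# [w, h, d] -> [d, w, h]: the transpose matches the explicit slice loop.
image = np.random.rand(20, 20, 5)
depth_first = np.transpose(image, (2, 0, 1))
loop_version = np.stack([image[:, :, i] for i in range(image.shape[2])])
print(depth_first.shape)  # (5, 20, 20)
```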
'''
Reshape [d, w, h] into [w, h, d]
'''
def convert2depthlastest(image):
image = np.array(image)
shape = np.shape(image)
new_image = np.zeros([shape[1], shape[2], shape[0]])
for i in range(shape[0]):
new_image[:, :, i] = image[i, :, :]
return new_image
def read_image_file(file_path):
if file_path.endswith('.nii'):
        return read_nii(file_path)
if file_path.endswith('.mhd'):
return read_mhd_image(file_path)
print('the format of image is not support in this version')
return None
def processing(image, size_training):
image = np.array(image)
# numpy_clip
bottom = -300.
top = 500.
image = np.clip(image, bottom, top)
# to float
minval = -350
interv = 500 - (-350)
image -= minval
# scale down to 0 - 2
image /= (interv / 2)
# zoom
desired_size = [size_training, size_training]
desired_size = np.asarray(desired_size, dtype=np.int)
zooms = desired_size / np.array(image[:, :, 0].shape, dtype=np.float)
print(zooms)
after_zoom = np.zeros([size_training, size_training, np.shape(image)[2]])
for i in range(np.shape(after_zoom)[2]):
after_zoom[:, :, i] = scipy.ndimage.zoom(image[:, :, i], zooms, order=1) # order = 1 => biliniear interpolation
return after_zoom
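The resize step at the end of `processing` computes per-axis zoom factors as `desired_size / current_size` and applies bilinear interpolation slice by slice. A minimal sketch on a single hypothetical slice (sizes chosen so the factors are exact):

```python
import numpy as np
import scipy.ndimage

slice_2d = np.random.rand(64, 48)
size_training = 32
desired = np.array([size_training, size_training], dtype=np.float64)
zooms = desired / np.array(slice_2d.shape, dtype=np.float64)  # [0.5, 2/3]
resized = scipy.ndimage.zoom(slice_2d, zooms, order=1)  # order=1 -> bilinear
print(resized.shape)  # (32, 32)
```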
def preprocessing_agumentation(image, size_training):
image = np.array(image)
# numpy_clip
c_minimum = -300.
c_maximum = 500.
s_maximum = 255.
image = np.clip(image, c_minimum, c_maximum)
interv = float(c_maximum - c_minimum)
image = (image - c_minimum) / interv * s_maximum
minval = 0.
maxval = 255.
image -= minval
interv = maxval - minval
# print('static scaler 0', interv)
# scale down to 0 - 2
# image /= (interv / 2)
image = np.asarray(image, np.float32)
image = image / interv
image = image * 2.0
# zoom
desired_size = [size_training, size_training]
desired_size = np.asarray(desired_size, dtype=np.int)
zooms = desired_size / np.array(image[:, :, 0].shape, dtype=np.float)
print(zooms)
after_zoom = np.zeros([size_training, size_training, np.shape(image)[2]])
for i in range(np.shape(after_zoom)[2]):
after_zoom[:, :, i] = scipy.ndimage.zoom(image[:, :, i], zooms, order=1) # order = 1 => biliniear interpolation
return after_zoom
def MICCAI2018_Iterator(image_dir, execute_func, *parameters):
'''
    Framework for iterating over the MICCAI2018 directory layout (train/val/test splits)
:param execute_func:
:return:
'''
for sub_name in ['train', 'val', 'test']:
names = os.listdir(os.path.join(image_dir, sub_name))
for name in names:
cur_slice_dir = os.path.join(image_dir, sub_name, name)
execute_func(cur_slice_dir, *parameters)
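The iterator above just walks the train/val/test subfolders and applies a callback to each case directory. A self-contained sketch of the same pattern (the name `iterate_splits` and the temporary-directory smoke test are illustrative, not from the original project):

```python
import os
import tempfile

def iterate_splits(image_dir, execute_func, *parameters):
    # walk the train/val/test subfolders and apply execute_func to every case dir
    for sub_name in ['train', 'val', 'test']:
        for name in sorted(os.listdir(os.path.join(image_dir, sub_name))):
            execute_func(os.path.join(image_dir, sub_name, name), *parameters)

# tiny smoke test against a temporary directory tree
root = tempfile.mkdtemp()
for split in ['train', 'val', 'test']:
    os.makedirs(os.path.join(root, split, 'case0'))

visited = []
iterate_splits(root, lambda d: visited.append(os.path.basename(os.path.dirname(d))))
print(visited)  # ['train', 'val', 'test']
```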
def dicom2jpg_singlephase(slice_dir, save_dir, phase_name='PV'):
    mhd_image_path = glob(os.path.join(slice_dir, phase_name + '_Image*.mhd'))[0]
    mhd_mask_path = glob(os.path.join(slice_dir, phase_name + '_Mask*.mhd'))[0]
    mhd_image = read_mhd_image(mhd_image_path)
    mask_image = read_mhd_image(mhd_mask_path)
    mhd_image = np.asarray(np.squeeze(mhd_image), np.float32)
    mhd_image = np.expand_dims(mhd_image, axis=2)
    mhd_image = np.concatenate([mhd_image, mhd_image, mhd_image], axis=2)
    mask_image = np.asarray(np.squeeze(mask_image), np.uint8)
    max_v = 300.
    min_v = -350.
    mhd_image[mhd_image > max_v] = max_v
    mhd_image[mhd_image < min_v] = min_v
    print(np.mean(mhd_image, dtype=np.float32))
    mhd_image -= np.mean(mhd_image)
    min_v = np.min(mhd_image)
    max_v = np.max(mhd_image)
    interv = max_v - min_v
    mhd_image = (mhd_image - min_v) / interv
    file_name = os.path.basename(slice_dir)
    dataset_name = os.path.basename(os.path.dirname(slice_dir))
    save_path = os.path.join(save_dir, phase_name, dataset_name, file_name + '.jpg')
    if not os.path.exists(os.path.dirname(save_path)):
        os.makedirs(os.path.dirname(save_path))
    print('the shape of mhd_image is ', np.shape(mhd_image), np.min(mhd_image), np.max(mhd_image))
    cv2.imwrite(save_path, mhd_image * 255)
    xml_save_dir = os.path.join(save_dir, phase_name, dataset_name + '_xml')
    if not os.path.exists(xml_save_dir):
        os.makedirs(xml_save_dir)
    evaluate_gt_dir = os.path.join(save_dir, phase_name, dataset_name + '_gt')
    if not os.path.exists(evaluate_gt_dir):
        os.makedirs(evaluate_gt_dir)
    xml_save_path = os.path.join(xml_save_dir, file_name + '.xml')
    gt_save_path = os.path.join(evaluate_gt_dir, file_name + '.txt')  # for evaluation
    doc = Document()
    root_node = doc.createElement('annotation')
    doc.appendChild(root_node)
    folder_name = os.path.basename(save_dir) + '/' + phase_name
    folder_node = doc.createElement('folder')
    root_node.appendChild(folder_node)
    folder_txt_node = doc.createTextNode(folder_name)
    folder_node.appendChild(folder_txt_node)
    file_name = file_name + '.jpg'
    filename_node = doc.createElement('filename')
    root_node.appendChild(filename_node)
    filename_txt_node = doc.createTextNode(file_name)
    filename_node.appendChild(filename_txt_node)
    shape = list(np.shape(mhd_image))
    size_node = doc.createElement('size')
    root_node.appendChild(size_node)
    width_node = doc.createElement('width')
    width_node.appendChild(doc.createTextNode(str(shape[0])))
    height_node = doc.createElement('height')
    height_node.appendChild(doc.createTextNode(str(shape[1])))
    depth_node = doc.createElement('depth')
    depth_node.appendChild(doc.createTextNode(str(3)))
    size_node.appendChild(width_node)
    size_node.appendChild(height_node)
    size_node.appendChild(depth_node)
    mask_image[mask_image != 1] = 0
    xs, ys = np.where(mask_image == 1)
    min_x = np.min(xs)
    min_y = np.min(ys)
    max_x = np.max(xs)
    max_y = np.max(ys)
    object_node = doc.createElement('object')
    root_node.appendChild(object_node)
    name_node = doc.createElement('name')
    name_node.appendChild(doc.createTextNode('Cyst'))
    object_node.appendChild(name_node)
    truncated_node = doc.createElement('truncated')
    object_node.appendChild(truncated_node)
    truncated_node.appendChild(doc.createTextNode('0'))
    difficult_node = doc.createElement('difficult')
    object_node.appendChild(difficult_node)
    difficult_node.appendChild(doc.createTextNode('0'))
    bndbox_node = doc.createElement('bndbox')
    object_node.appendChild(bndbox_node)
    xmin_node = doc.createElement('xmin')
    xmin_node.appendChild(doc.createTextNode(str(min_y)))
    bndbox_node.appendChild(xmin_node)
    ymin_node = doc.createElement('ymin')
    ymin_node.appendChild(doc.createTextNode(str(min_x)))
    bndbox_node.appendChild(ymin_node)
    xmax_node = doc.createElement('xmax')
    xmax_node.appendChild(doc.createTextNode(str(max_y)))
    bndbox_node.appendChild(xmax_node)
    ymax_node = doc.createElement('ymax')
    ymax_node.appendChild(doc.createTextNode(str(max_x)))
    bndbox_node.appendChild(ymax_node)
    with open(xml_save_path, 'wb') as f:
        f.write(doc.toprettyxml(indent='\t', encoding='utf-8'))
    line = '%s %d %d %d %d\n' % ('Cyst', min_y, min_x, max_y, max_x)
    print(line)
    lines = [line]
    with open(gt_save_path, 'w') as f:
        f.writelines(lines)
def dicom2jpg_multiphase(slice_dir, save_dir, phasenames=['NC', 'ART', 'PV'], target_phase='PV', suffix_name='npy'):
    target_mask = None
    mhd_images = []
    for phase_name in phasenames:
        mhd_image_path = glob(os.path.join(slice_dir, 'Image_%s*.mhd' % phase_name))[0]
        mhd_mask_path = glob(os.path.join(slice_dir, 'Mask_%s*.mhd' % phase_name))[0]
        mhd_image = read_mhd_image(mhd_image_path)
        mask_image = read_mhd_image(mhd_mask_path)
        mhd_image = np.asarray(np.squeeze(mhd_image), np.float32)
        mhd_images.append(mhd_image)
        mask_image = np.asarray(np.squeeze(mask_image), np.uint8)
        if phase_name == target_phase:
            target_mask = mask_image
    print(np.shape(mhd_images))
    mask_image = target_mask
    mask_image_shape = list(np.shape(mask_image))
    if len(mask_image_shape) == 3:
        mask_image = mask_image[1, :, :]
    print('the mask image shape is ', np.shape(mask_image))
    if suffix_name == 'jpg':
        mhd_images = np.transpose(np.asarray(mhd_images, np.float32), axes=[1, 2, 0])
        mhd_image = mhd_images
    elif suffix_name == 'npy':
        mhd_images = np.concatenate(np.asarray(mhd_images, float), axis=0)
        mhd_images = np.transpose(np.asarray(mhd_images, np.float32), axes=[1, 2, 0])
        mhd_image = mhd_images
    else:
        print('the suffix name %s is not supported' % suffix_name)
        assert False
    max_v = 300.
    min_v = -350.
    mhd_image[mhd_image > max_v] = max_v
    mhd_image[mhd_image < min_v] = min_v
    print(np.mean(mhd_image, dtype=np.float32))
    mhd_image -= np.mean(mhd_image)
    min_v = np.min(mhd_image)
    max_v = np.max(mhd_image)
    interv = max_v - min_v
    mhd_image = (mhd_image - min_v) / interv
    file_name = os.path.basename(slice_dir)
    dataset_name = os.path.basename(os.path.dirname(slice_dir))
    phase_name = ''.join(phasenames)
    save_path = os.path.join(save_dir, phase_name, dataset_name, file_name + '.' + suffix_name)
    if not os.path.exists(os.path.dirname(save_path)):
        os.makedirs(os.path.dirname(save_path))
    print('the shape of mhd_image is ', np.shape(mhd_image), np.min(mhd_image), np.max(mhd_image))
    # cv2.imwrite(save_path, mhd_image * 255)
    np.save(save_path, mhd_image * 255)
    xml_save_dir = os.path.join(save_dir, phase_name, dataset_name + '_xml')
    if not os.path.exists(xml_save_dir):
        os.makedirs(xml_save_dir)
    evaluate_gt_dir = os.path.join(save_dir, phase_name, dataset_name + '_gt')
    if not os.path.exists(evaluate_gt_dir):
        os.makedirs(evaluate_gt_dir)
    xml_save_path = os.path.join(xml_save_dir, file_name + '.xml')
    gt_save_path = os.path.join(evaluate_gt_dir, file_name + '.txt')  # for evaluation
    doc = Document()
    root_node = doc.createElement('annotation')
    doc.appendChild(root_node)
    folder_name = os.path.basename(save_dir) + '/' + phase_name
    folder_node = doc.createElement('folder')
    root_node.appendChild(folder_node)
    folder_txt_node = doc.createTextNode(folder_name)
    folder_node.appendChild(folder_txt_node)
    file_name = file_name + '.jpg'
    filename_node = doc.createElement('filename')
    root_node.appendChild(filename_node)
    filename_txt_node = doc.createTextNode(file_name)
    filename_node.appendChild(filename_txt_node)
    shape = list(np.shape(mhd_image))
    size_node = doc.createElement('size')
    root_node.appendChild(size_node)
    width_node = doc.createElement('width')
    width_node.appendChild(doc.createTextNode(str(shape[0])))
    height_node = doc.createElement('height')
    height_node.appendChild(doc.createTextNode(str(shape[1])))
    depth_node = doc.createElement('depth')
    depth_node.appendChild(doc.createTextNode(str(3)))
    size_node.appendChild(width_node)
    size_node.appendChild(height_node)
    size_node.appendChild(depth_node)
    mask_image[mask_image != 1] = 0
    xs, ys = np.where(mask_image == 1)
    print(xs, ys)
    min_x = np.min(xs)
    min_y = np.min(ys)
    max_x = np.max(xs)
    max_y = np.max(ys)
    object_node = doc.createElement('object')
    root_node.appendChild(object_node)
    name_node = doc.createElement('name')
    name_node.appendChild(doc.createTextNode('Cyst'))
    object_node.appendChild(name_node)
    truncated_node = doc.createElement('truncated')
    object_node.appendChild(truncated_node)
    truncated_node.appendChild(doc.createTextNode('0'))
    difficult_node = doc.createElement('difficult')
    object_node.appendChild(difficult_node)
    difficult_node.appendChild(doc.createTextNode('0'))
    bndbox_node = doc.createElement('bndbox')
    object_node.appendChild(bndbox_node)
    xmin_node = doc.createElement('xmin')
    xmin_node.appendChild(doc.createTextNode(str(min_y)))
    bndbox_node.appendChild(xmin_node)
    ymin_node = doc.createElement('ymin')
    ymin_node.appendChild(doc.createTextNode(str(min_x)))
    bndbox_node.appendChild(ymin_node)
    xmax_node = doc.createElement('xmax')
    xmax_node.appendChild(doc.createTextNode(str(max_y)))
    bndbox_node.appendChild(xmax_node)
    ymax_node = doc.createElement('ymax')
    ymax_node.appendChild(doc.createTextNode(str(max_x)))
    bndbox_node.appendChild(ymax_node)
    with open(xml_save_path, 'wb') as f:
        f.write(doc.toprettyxml(indent='\t', encoding='utf-8'))
    line = '%s %d %d %d %d\n' % ('Cyst', min_y, min_x, max_y, max_x)
    print(line)
    lines = [line]
    with open(gt_save_path, 'w') as f:
        f.writelines(lines)
def static_pixel_num(image_dir, target_phase='PV'):
    # {0: 217784361, 1: 1392043, 2: 209128, 3: 1486676, 4: 458278, 5: 705482}
    # {0: 1.0, 156, 1041, 146, 475, 308}
    static_res = {
        0: 0,
        1: 0,
        2: 0,
        3: 0,
        4: 0,
        5: 0
    }
    from convert2jpg import extract_bboxs_mask_from_mask
    from config import pixel2type, type2pixel
    for sub_name in ['train', 'val', 'test']:
        names = os.listdir(os.path.join(image_dir, sub_name))
        for name in names:
            cur_slice_dir = os.path.join(image_dir, sub_name, name)
            mhd_mask_path = glob(os.path.join(cur_slice_dir, 'Mask_%s*.mhd' % target_phase))[0]
            mask_image = read_mhd_image(mhd_mask_path)
            min_xs, min_ys, max_xs, max_ys, names, mask = extract_bboxs_mask_from_mask(
                mask_image, os.path.join(cur_slice_dir, 'tumor_types'))
            for key in pixel2type.keys():
                mask[mask == key] = type2pixel[pixel2type[key]][0]
            pixel_value_set = np.unique(mask)
            print(pixel_value_set)
            for value in list(pixel_value_set):
                static_res[value] += np.sum(mask == value)
    print(static_res)
def convertCase2PNGs(volume_path, seg_path, save_dir=None, z_axis=5.0, short_edge=64):
    '''
    Convert a nii volume into PNG-style slice arrays
    :param volume_path: path of the nii volume
    :param seg_path: path of the segmentation nii
    :return:
    '''
    from skimage.measure import label
    volume, header = read_nii_with_header(volume_path)
    # volume = np.transpose(volume, [1, 0, 2])
    volume = np.asarray(volume, np.float32)
    max_v = 250.
    min_v = -200.
    # max_v = 180
    # min_v = -70
    volume[volume > max_v] = max_v
    volume[volume < min_v] = min_v
    volume -= np.mean(volume)
    min_v = np.min(volume)
    max_v = np.max(volume)
    interv = max_v - min_v
    volume = (volume - min_v) / interv
    z_axis_case = header['voxel_spacing'][-1]
    slice_num = int(z_axis / z_axis_case)
    if slice_num == 0:
        slice_num = 1
    seg = read_nii(seg_path)
    # print(np.shape(volume), np.shape(seg))
    [_, _, channel] = np.shape(volume)
    imgs = []
    names = []
    masks = []
    tumor_weakly_masks = []
    liver_masks = []
    i = slice_num + 1
    pos_slice_num = np.sum(np.sum(np.sum(seg == 2, axis=0), axis=0) != 0)
    total_slice_num = np.shape(seg)[-1]
    print('pos_slice_num is ', pos_slice_num, total_slice_num)
    neg_rate = (3.0 * pos_slice_num) / total_slice_num  # ratio of positive to negative slices
    if neg_rate > 1.0:
        neg_rate = 1.0
    for i in range(channel):
        seg_slice = seg[:, :, i]
        mid_slice = np.expand_dims(volume[:, :, i], axis=0)
        # pre_end = i - slice_num / 2
        # pre_end = i
        # for j in range(1, slice_num + 1):
        #     z = pre_end - j
        #     if z < 0:
        #         z = 0
        #     pre_slice.append(volume[:, :, z])
        # previous slice, clamped at the first slice
        if (i - 1) < 0:
            pre_slice = np.expand_dims(volume[:, :, i], axis=0)
        else:
            pre_slice = np.expand_dims(volume[:, :, i - 1], axis=0)
        # next_start = i + slice_num / 2
        # next_start = i
        # for j in range(1, slice_num + 1):
        #     z = next_start + j
        #     if z >= channel:
        #         z = channel - 1
        #     next_slice.append(volume[:, :, z])
        # next slice, clamped at the last slice
        if (i + 1) >= channel:
            next_slice = np.expand_dims(volume[:, :, i], axis=0)
        else:
            next_slice = np.expand_dims(volume[:, :, i + 1], axis=0)
        # pre_slice = np.mean(pre_slice, axis=0, keepdims=True)
        # next_slice = np.mean(next_slice, axis=0, keepdims=True)
        imgs.append(
            np.transpose(np.concatenate([pre_slice, mid_slice, next_slice], axis=0),
                         axes=[1, 2, 0]))
        names.append(os.path.basename(volume_path).split('.')[0].split('-')[1] + '-' + str(i))
        binary_seg_slice = np.asarray(seg_slice == 2, np.uint8)
        # print(np.max(binary_seg_slice))
        masks.append(binary_seg_slice)
        labeled_mask = label(binary_seg_slice)
        weakly_label_mask = np.zeros_like(binary_seg_slice, np.uint8)
        for idx in range(1, np.max(labeled_mask) + 1):
            xs, ys = np.where(labeled_mask == idx)
            min_xs = np.min(xs)
            max_xs = np.max(xs)
            min_ys = np.min(ys)
            max_ys = np.max(ys)
            weakly_label_mask[min_xs: max_xs, min_ys: max_ys] = 1
        liver_masks.append(np.asarray(seg_slice == 1, np.uint8))
        tumor_weakly_masks.append(weakly_label_mask)
    return np.asarray(imgs, np.float32), np.asarray(masks, np.uint8), np.asarray(liver_masks, np.uint8), np.asarray(
        tumor_weakly_masks, np.uint8)
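The weak-label step above turns each connected lesion component into a filled bounding box. A self-contained sketch of that idea with plain NumPy (the name `bbox_mask` is illustrative; note this sketch uses inclusive `max + 1` slice bounds, whereas the code above slices `min:max`, which excludes the last row/column):

```python
import numpy as np

def bbox_mask(binary_mask):
    # fill the tight bounding box of the foreground with 1s (weak label)
    xs, ys = np.where(binary_mask == 1)
    out = np.zeros_like(binary_mask, np.uint8)
    if xs.size:
        out[xs.min():xs.max() + 1, ys.min():ys.max() + 1] = 1
    return out

m = np.zeros((5, 5), np.uint8)
m[1, 1] = m[3, 2] = 1
print(bbox_mask(m).sum())  # 6: the 3x2 box spanning rows 1..3, cols 1..2
```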
def statics_num_slices_lesion(nii_dir):
    '''
    Count, for each case, how many slices contain lesions
    :param nii_dir:
    :return:
    '''
    mask_nii_paths = glob(os.path.join(nii_dir, 'segmentation-*.nii'))
    for mask_nii_path in mask_nii_paths:
        mask_img = read_nii(mask_nii_path)
        has_lesion = np.asarray(np.sum(np.sum(mask_img == 2, axis=0), axis=0) > 0, bool)
        num_lesion_slices = np.sum(has_lesion)
        print(os.path.basename(mask_nii_path), num_lesion_slices, np.shape(mask_img)[-1])
if __name__ == '__main__':
    # for phasename in ['NC', 'ART', 'PV']:
    #     convert_dicomseries2mhd(
    #         '/home/give/github/Cascaded-FCN-Tensorflow/Cascaded-FCN/tensorflow-unet/z_testdata/304176-2802027/' + phasename,
    #         '/home/give/github/Cascaded-FCN-Tensorflow/Cascaded-FCN/tensorflow-unet/z_testdata/304176-2802027/MHD/' + phasename + '.mhd'
    #     )
    # names = os.listdir('/home/give/Documents/dataset/ISBI2017/media/nas/01_Datasets/CT/LITS/Training_Batch_2')
    # for name in names:
    #     path = os.path.join('/home/give/Documents/dataset/ISBI2017/media/nas/01_Datasets/CT/LITS/Training_Batch_2', name)
    #     image = read_nil(path)
    #     print(np.shape(image))
    # convert2JPG single phase
    # image_dir = '/home/give/Documents/dataset/MICCAI2018/Slices/crossvalidation/0'
    # save_dir = '/home/give/Documents/dataset/MICCAI2018_Detection/SinglePhase'
    # phase_name = 'NC'
    # MICCAI2018_Iterator(image_dir, dicom2jpg_singlephase, save_dir, phase_name)
    # convert2JPG multi phase
    # image_dir = '/home/give/Documents/dataset/LiverLesionDetection_Splited/0'
    # static_pixel_num(image_dir, 'PV')
    statics_num_slices_lesion('/media/give/CBMIR/ld/dataset/ISBI2017/media/nas/01_Datasets/CT/LITS/Training_Batch_2')
ed6aff1082796c2046965ddce3d39f2087944e89 | 925 | py | Python | setup.py | marcus-luck/zohoreader | e832f076a8a87bf27607980fb85a1d2bc8339743 | ["MIT"] | 1 | 2020-11-11T02:19:50.000Z | 2020-11-11T02:19:50.000Z

from setuptools import setup

def readme():
    with open('README.rst') as f:
        return f.read()


setup(name='zohoreader',
      version='0.1',
      description='A simple reader for zoho projects API to get all projects, users and timereports',
      long_description=readme(),
      classifiers=[
          'Development Status :: 3 - Alpha',
          'License :: OSI Approved :: MIT License',
          'Programming Language :: Python :: 3.5',
      ],
      keywords='zoho, API, zoho project',
      url='https://github.com/marcus-luck/zohoreader',
      author='Marcus Luck',
      author_email='marcus.luck@outlook.com',
      license='MIT',
      packages=['zohoreader'],
      zip_safe=False,
      install_requires=[
          'requests>=2.12.4',
          'python-dateutil>=2.7.2'
      ],
      test_suite='nose.collector',
      tests_require=['nose', 'nose-cover3'],
      include_package_data=True
      )
ed6c19de3061a6952b4f83f10500239e87852cc5 | 2,883 | py | Python | autumn/projects/covid_19/sri_lanka/sri_lanka/project.py | emmamcbryde/AuTuMN-1 | b1e7de15ac6ef6bed95a80efab17f0780ec9ff6f | ["BSD-2-Clause-FreeBSD"]

import numpy as np
from autumn.calibration.proposal_tuning import perform_all_params_proposal_tuning
from autumn.core.project import Project, ParameterSet, load_timeseries, build_rel_path, get_all_available_scenario_paths, \
    use_tuned_proposal_sds
from autumn.calibration import Calibration
from autumn.calibration.priors import UniformPrior, BetaPrior, TruncNormalPrior
from autumn.calibration.targets import (
    NormalTarget,
    get_dispersion_priors_for_gaussian_targets,
)
from autumn.models.covid_19 import base_params, build_model
from autumn.settings import Region, Models
from autumn.projects.covid_19.sri_lanka.sri_lanka.scenario_builder import get_all_scenario_dicts

# Load and configure model parameters.
default_path = build_rel_path("params/default.yml")
# scenario_paths = [build_rel_path(f"params/scenario-{i}.yml") for i in range(7, 9)]
mle_path = build_rel_path("params/mle-params.yml")
baseline_params = base_params.update(default_path).update(mle_path, calibration_format=True)
all_scenario_dicts = get_all_scenario_dicts("LKA")
# scenario_params = [baseline_params.update(p) for p in scenario_paths]
scenario_params = [baseline_params.update(sc_dict) for sc_dict in all_scenario_dicts]
param_set = ParameterSet(baseline=baseline_params, scenarios=scenario_params)

ts_set = load_timeseries(build_rel_path("timeseries.json"))
notifications_ts = ts_set["notifications"].rolling(7).mean().loc[350::7]
death_ts = ts_set["infection_deaths"].loc[350:]
targets = [
    NormalTarget(notifications_ts),
    NormalTarget(death_ts),
]

priors = [
    # Dispersion parameters based on targets
    *get_dispersion_priors_for_gaussian_targets(targets),
    *get_dispersion_priors_for_gaussian_targets(targets),
    # Regional parameters
    UniformPrior("contact_rate", [0.024, 0.027]),
    UniformPrior("infectious_seed", [275.0, 450.0]),
    # Detection
    UniformPrior("testing_to_detection.assumed_cdr_parameter", [0.009, 0.025]),
    UniformPrior("infection_fatality.multiplier", [0.09, 0.13]),
    # VoC
    UniformPrior("voc_emergence.alpha_beta.start_time", [370, 410]),
    UniformPrior("voc_emergence.alpha_beta.contact_rate_multiplier", [3.2, 4.5]),
    UniformPrior("voc_emergence.delta.start_time", [475, 530]),
    UniformPrior("voc_emergence.delta.contact_rate_multiplier", [8.5, 11.5]),
]

# Load proposal sds from yml file
# use_tuned_proposal_sds(priors, build_rel_path("proposal_sds.yml"))
calibration = Calibration(priors, targets)

# FIXME: Replace with flexible Python plot request API.
import json

plot_spec_filepath = build_rel_path("timeseries.json")
with open(plot_spec_filepath) as f:
    plot_spec = json.load(f)

project = Project(
    Region.SRI_LANKA, Models.COVID_19, build_model, param_set, calibration, plots=plot_spec
)
# perform_all_params_proposal_tuning(project, calibration, priors, n_points=50, relative_likelihood_reduction=0.2)
ed6c49af1afdf5e937dac3ecb68b0de9cb7816d4 | 11,421 | py | Python | selfdrive/sensord/rawgps/structs.py | TC921/openpilot | d5d91e6e3be02e2525ed8d6137e5fdca2b81657c | ["MIT"]

from struct import unpack_from, calcsize

LOG_GNSS_POSITION_REPORT = 0x1476
LOG_GNSS_GPS_MEASUREMENT_REPORT = 0x1477
LOG_GNSS_CLOCK_REPORT = 0x1478
LOG_GNSS_GLONASS_MEASUREMENT_REPORT = 0x1480
LOG_GNSS_BDS_MEASUREMENT_REPORT = 0x1756
LOG_GNSS_GAL_MEASUREMENT_REPORT = 0x1886
LOG_GNSS_OEMDRE_MEASUREMENT_REPORT = 0x14DE
LOG_GNSS_OEMDRE_SVPOLY_REPORT = 0x14E1
LOG_GNSS_ME_DPO_STATUS = 0x1838
LOG_GNSS_CD_DB_REPORT = 0x147B
LOG_GNSS_PRX_RF_HW_STATUS_REPORT = 0x147E
LOG_CGPS_SLOW_CLOCK_CLIB_REPORT = 0x1488
LOG_GNSS_CONFIGURATION_STATE = 0x1516
glonass_measurement_report = """
uint8_t version;
uint32_t f_count;
uint8_t glonass_cycle_number;
uint16_t glonass_number_of_days;
uint32_t milliseconds;
float time_bias;
float clock_time_uncertainty;
float clock_frequency_bias;
float clock_frequency_uncertainty;
uint8_t sv_count;
"""
glonass_measurement_report_sv = """
uint8_t sv_id;
int8_t frequency_index;
uint8_t observation_state; // SVObservationStates
uint8_t observations;
uint8_t good_observations;
uint8_t hemming_error_count;
uint8_t filter_stages;
uint16_t carrier_noise;
int16_t latency;
uint8_t predetect_interval;
uint16_t postdetections;
uint32_t unfiltered_measurement_integral;
float unfiltered_measurement_fraction;
float unfiltered_time_uncertainty;
float unfiltered_speed;
float unfiltered_speed_uncertainty;
uint32_t measurement_status;
uint8_t misc_status;
uint32_t multipath_estimate;
float azimuth;
float elevation;
int32_t carrier_phase_cycles_integral;
uint16_t carrier_phase_cycles_fraction;
float fine_speed;
float fine_speed_uncertainty;
uint8_t cycle_slip_count;
uint32_t pad;
"""
gps_measurement_report = """
uint8_t version;
uint32_t f_count;
uint16_t week;
uint32_t milliseconds;
float time_bias;
float clock_time_uncertainty;
float clock_frequency_bias;
float clock_frequency_uncertainty;
uint8_t sv_count;
"""
gps_measurement_report_sv = """
uint8_t sv_id;
uint8_t observation_state; // SVObservationStates
uint8_t observations;
uint8_t good_observations;
uint16_t parity_error_count;
uint8_t filter_stages;
uint16_t carrier_noise;
int16_t latency;
uint8_t predetect_interval;
uint16_t postdetections;
uint32_t unfiltered_measurement_integral;
float unfiltered_measurement_fraction;
float unfiltered_time_uncertainty;
float unfiltered_speed;
float unfiltered_speed_uncertainty;
uint32_t measurement_status;
uint8_t misc_status;
uint32_t multipath_estimate;
float azimuth;
float elevation;
int32_t carrier_phase_cycles_integral;
uint16_t carrier_phase_cycles_fraction;
float fine_speed;
float fine_speed_uncertainty;
uint8_t cycle_slip_count;
uint32_t pad;
"""
position_report = """
uint8 u_Version; /* Version number of DM log */
uint32 q_Fcount; /* Local millisecond counter */
uint8 u_PosSource; /* Source of position information */ /* 0: None 1: Weighted least-squares 2: Kalman filter 3: Externally injected 4: Internal database */
uint32 q_Reserved1; /* Reserved memory field */
uint16 w_PosVelFlag; /* Position velocity bit field: (see DM log 0x1476 documentation) */
uint32 q_PosVelFlag2; /* Position velocity 2 bit field: (see DM log 0x1476 documentation) */
uint8 u_FailureCode; /* Failure code: (see DM log 0x1476 documentation) */
uint16 w_FixEvents; /* Fix events bit field: (see DM log 0x1476 documentation) */
uint32 _fake_align_week_number;
uint16 w_GpsWeekNumber; /* GPS week number of position */
uint32 q_GpsFixTimeMs; /* GPS fix time of week of in milliseconds */
uint8 u_GloNumFourYear; /* Number of Glonass four year cycles */
uint16 w_GloNumDaysInFourYear; /* Glonass calendar day in four year cycle */
uint32 q_GloFixTimeMs; /* Glonass fix time of day in milliseconds */
uint32 q_PosCount; /* Integer count of the number of unique positions reported */
uint64 t_DblFinalPosLatLon[2]; /* Final latitude and longitude of position in radians */
uint32 q_FltFinalPosAlt; /* Final height-above-ellipsoid altitude of position */
uint32 q_FltHeadingRad; /* User heading in radians */
uint32 q_FltHeadingUncRad; /* User heading uncertainty in radians */
uint32 q_FltVelEnuMps[3]; /* User velocity in east, north, up coordinate frame. In meters per second. */
uint32 q_FltVelSigmaMps[3]; /* Gaussian 1-sigma value for east, north, up components of user velocity */
uint32 q_FltClockBiasMeters; /* Receiver clock bias in meters */
uint32 q_FltClockBiasSigmaMeters; /* Gaussian 1-sigma value for receiver clock bias in meters */
uint32 q_FltGGTBMeters; /* GPS to Glonass time bias in meters */
uint32 q_FltGGTBSigmaMeters; /* Gaussian 1-sigma value for GPS to Glonass time bias uncertainty in meters */
uint32 q_FltGBTBMeters; /* GPS to BeiDou time bias in meters */
uint32 q_FltGBTBSigmaMeters; /* Gaussian 1-sigma value for GPS to BeiDou time bias uncertainty in meters */
uint32 q_FltBGTBMeters; /* BeiDou to Glonass time bias in meters */
uint32 q_FltBGTBSigmaMeters; /* Gaussian 1-sigma value for BeiDou to Glonass time bias uncertainty in meters */
uint32 q_FltFiltGGTBMeters; /* Filtered GPS to Glonass time bias in meters */
uint32 q_FltFiltGGTBSigmaMeters; /* Filtered Gaussian 1-sigma value for GPS to Glonass time bias uncertainty in meters */
uint32 q_FltFiltGBTBMeters; /* Filtered GPS to BeiDou time bias in meters */
uint32 q_FltFiltGBTBSigmaMeters; /* Filtered Gaussian 1-sigma value for GPS to BeiDou time bias uncertainty in meters */
uint32 q_FltFiltBGTBMeters; /* Filtered BeiDou to Glonass time bias in meters */
uint32 q_FltFiltBGTBSigmaMeters; /* Filtered Gaussian 1-sigma value for BeiDou to Glonass time bias uncertainty in meters */
uint32 q_FltSftOffsetSec; /* SFT offset as computed by WLS in seconds */
uint32 q_FltSftOffsetSigmaSec; /* Gaussian 1-sigma value for SFT offset in seconds */
uint32 q_FltClockDriftMps; /* Clock drift (clock frequency bias) in meters per second */
uint32 q_FltClockDriftSigmaMps; /* Gaussian 1-sigma value for clock drift in meters per second */
uint32 q_FltFilteredAlt; /* Filtered height-above-ellipsoid altitude in meters as computed by WLS */
uint32 q_FltFilteredAltSigma; /* Gaussian 1-sigma value for filtered height-above-ellipsoid altitude in meters */
uint32 q_FltRawAlt; /* Raw height-above-ellipsoid altitude in meters as computed by WLS */
uint32 q_FltRawAltSigma; /* Gaussian 1-sigma value for raw height-above-ellipsoid altitude in meters */
uint32 align_Flt[14];
uint32 q_FltPdop; /* 3D position dilution of precision as computed from the unweighted least-squares covariance matrix */
uint32 q_FltHdop; /* Horizontal position dilution of precision as computed from the unweighted least-squares covariance matrix */
uint32 q_FltVdop; /* Vertical position dilution of precision as computed from the unweighted least-squares covariance matrix */
uint8 u_EllipseConfidence; /* Statistical measure of the confidence (percentage) associated with the uncertainty ellipse values */
uint32 q_FltEllipseAngle; /* Angle of semimajor axis with respect to true North, with increasing angles moving clockwise from North. In units of degrees. */
uint32 q_FltEllipseSemimajorAxis; /* Semimajor axis of final horizontal position uncertainty error ellipse. In units of meters. */
uint32 q_FltEllipseSemiminorAxis; /* Semiminor axis of final horizontal position uncertainty error ellipse. In units of meters. */
uint32 q_FltPosSigmaVertical; /* Gaussian 1-sigma value for final position height-above-ellipsoid altitude in meters */
uint8 u_HorizontalReliability; /* Horizontal position reliability 0: Not set 1: Very Low 2: Low 3: Medium 4: High */
uint8 u_VerticalReliability; /* Vertical position reliability */
uint16 w_Reserved2; /* Reserved memory field */
uint32 q_FltGnssHeadingRad; /* User heading in radians derived from GNSS only solution */
uint32 q_FltGnssHeadingUncRad; /* User heading uncertainty in radians derived from GNSS only solution */
uint32 q_SensorDataUsageMask; /* Denotes which additional sensor data were used to compute this position fix. BIT[0] 0x00000001 <96> Accelerometer BIT[1] 0x00000002 <96> Gyro 0x0000FFFC - Reserved A bit set to 1 indicates that certain fields as defined by the SENSOR_AIDING_MASK were aided with sensor data*/
uint32 q_SensorAidMask; /* Denotes which component of the position report was assisted with additional sensors defined in SENSOR_DATA_USAGE_MASK BIT[0] 0x00000001 <96> Heading aided with sensor data BIT[1] 0x00000002 <96> Speed aided with sensor data BIT[2] 0x00000004 <96> Position aided with sensor data BIT[3] 0x00000008 <96> Velocity aided with sensor data 0xFFFFFFF0 <96> Reserved */
uint8 u_NumGpsSvsUsed; /* The number of GPS SVs used in the fix */
uint8 u_TotalGpsSvs; /* Total number of GPS SVs detected by searcher, including ones not used in position calculation */
uint8 u_NumGloSvsUsed; /* The number of Glonass SVs used in the fix */
uint8 u_TotalGloSvs; /* Total number of Glonass SVs detected by searcher, including ones not used in position calculation */
uint8 u_NumBdsSvsUsed; /* The number of BeiDou SVs used in the fix */
uint8 u_TotalBdsSvs; /* Total number of BeiDou SVs detected by searcher, including ones not used in position calculation */
"""
def name_to_camelcase(nam):
  ret = []
  i = 0
  while i < len(nam):
    if nam[i] == "_":
      ret.append(nam[i+1].upper())
      i += 2
    else:
      ret.append(nam[i])
      i += 1
  return ''.join(ret)

def parse_struct(ss):
  st = "<"
  nams = []
  for l in ss.strip().split("\n"):
    typ, nam = l.split(";")[0].split()
    # print(typ, nam)
    if typ == "float" or '_Flt' in nam:
      st += "f"
    elif typ == "double" or '_Dbl' in nam:
      st += "d"
    elif typ in ["uint8", "uint8_t"]:
      st += "B"
    elif typ in ["int8", "int8_t"]:
      st += "b"
    elif typ in ["uint32", "uint32_t"]:
      st += "I"
    elif typ in ["int32", "int32_t"]:
      st += "i"
    elif typ in ["uint16", "uint16_t"]:
      st += "H"
    elif typ in ["int16", "int16_t"]:
      st += "h"
    elif typ == "uint64":
      st += "Q"
    else:
      print("unknown type", typ)
      assert False
    if '[' in nam:
      cnt = int(nam.split("[")[1].split("]")[0])
      st += st[-1]*(cnt-1)
      for i in range(cnt):
        nams.append("%s[%d]" % (nam.split("[")[0], i))
    else:
      nams.append(nam)
  return st, nams

def dict_unpacker(ss, camelcase=False):
  st, nams = parse_struct(ss)
  if camelcase:
    nams = [name_to_camelcase(x) for x in nams]
  sz = calcsize(st)
  return lambda x: dict(zip(nams, unpack_from(st, x))), sz
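The unpacker pattern above maps a C-style field list to a `struct` format string and names. A minimal, self-contained re-implementation covering only a subset of the type table (it omits the `_Flt`/`_Dbl` name heuristics and array fields, so it is a sketch rather than the full parser):

```python
from struct import calcsize, pack, unpack_from

def parse_struct(ss):
    # map a subset of the C-style type names to struct format characters
    type_map = {"uint8": "B", "uint8_t": "B", "uint16": "H", "uint16_t": "H",
                "uint32": "I", "uint32_t": "I", "float": "f"}
    st, nams = "<", []
    for line in ss.strip().split("\n"):
        typ, nam = line.split(";")[0].split()
        st += type_map[typ]
        nams.append(nam)
    return st, nams

def dict_unpacker(ss):
    st, nams = parse_struct(ss)
    return lambda x: dict(zip(nams, unpack_from(st, x))), calcsize(st)

spec = """
uint8_t version;
uint16_t week;
uint32_t milliseconds;
"""
unpacker, size = dict_unpacker(spec)
raw = pack("<BHI", 2, 2100, 123456)
print(size, unpacker(raw))  # 7 {'version': 2, 'week': 2100, 'milliseconds': 123456}
```

The leading `<` disables alignment padding, so `size` is exactly the sum of the field widths (1 + 2 + 4 = 7 bytes here).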
ed6e652c3847138189ca7b951889b9b3a32aa8ce | 1,702 | py | Python | jassen/django/project/project/urls.py | cabilangan112/intern-drf-blog | b2d6c7a4af1316b2c7ce38547bd9df99b4f3e8b9 | ["MIT"]

"""project URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/2.0/topics/http/urls/
Examples:
Function views
    1. Add an import: from my_app import views
    2. Add a URL to urlpatterns: path('', views.home, name='home')
Class-based views
    1. Add an import: from other_app.views import Home
    2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
    1. Import the include() function: from django.urls import include, path
    2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
from django.conf.urls import url, include
from rest_framework import routers
from blog import views
from blog.views import PostViewSet,CommentViewSet,CategoryViewSet,TagViewSet,DraftViewSet,HideViewSet
from django.conf import settings
from django.conf.urls.static import static
router = routers.DefaultRouter()
router.register(r'hide',HideViewSet, base_name='hiddinn')
router.register(r'draft',DraftViewSet, base_name='draft')
router.register(r'post', PostViewSet, base_name='post')
router.register(r'comment', CommentViewSet, base_name='comment')
router.register(r'tags', TagViewSet, base_name='tags')
router.register(r'category', CategoryViewSet, base_name='category')
from django.contrib import admin
from django.urls import path
urlpatterns = [
    path('admin/', admin.site.urls),
    url(r'^', include(router.urls)),
    url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework'))
]
urlpatterns.extend(
    static(settings.STATIC_URL, document_root=settings.STATIC_ROOT) +
    static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
) | 37.822222 | 101 | 0.756757 | 239 | 1,702 | 5.313808 | 0.343096 | 0.047244 | 0.070866 | 0.018898 | 0.092126 | 0.092126 | 0.059055 | 0 | 0 | 0 | 0 | 0.005348 | 0.121034 | 1,702 | 45 | 102 | 37.822222 | 0.843583 | 0.36604 | 0 | 0 | 0 | 0 | 0.10904 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
ed6f5b3794c25687738dfe6c60b7b8d1ed6647b2 | 14,621 | py | Python | join_peaks.py | nijibabulu/chip_tools | 04def22059a6018b3b49247d69d7b04eee1dcd89 | [
"MIT"
] | null | null | null | join_peaks.py | nijibabulu/chip_tools | 04def22059a6018b3b49247d69d7b04eee1dcd89 | [
"MIT"
] | null | null | null | join_peaks.py | nijibabulu/chip_tools | 04def22059a6018b3b49247d69d7b04eee1dcd89 | [
"MIT"
] | null | null | null | #! /usr/bin/env python
import os
import sys
import math
import csv
import collections
import docopt
import peakzilla_qnorm_mapq_patched as pz
__doc__ = '''
Usage: join_peaks.py [options] PEAKS CHIP INPUT [ (PEAKS CHIP INPUT) ... ]
This script finds peaks in common between multiple ChIP experiments determined
by peakzilla. For each ChIP experiment, input a PEAKS file as otuput by
peakzilla, and 2 BED files (CHIP and INPUT) as input to peakzilla.
This will output a table with 3 columns identifying the peaks (Chromosome,
Start, End, Name,'NPeaks','Spread','ChipSE','EnrichSE'). NPeaks signifies the
number of peaks that were called among all the ChIP experiments, Spread is the
difference between the biggest and smallest ChIP peak, ChipSE and EnrichSE are
the standard error on the mean among the ChIP and Enrich values for the peaks.
For each experinent "X", information about the peaks are output: 'XPZName','XPZScore',
'XPZChip','XPZInput','XPZEnrich','XPZFDR','XChip','XInput','XEnrich','XMapq'.
All 'PZ' columns are the original output from peakzilla and the remaining
columns are re-calculated in this script (also output regardless of the presence
of a peak).
Options:
    --max-distance=DIST  maximum summit distance to join peaks [default: 10]
'''
args = docopt.docopt(__doc__)
#np.set_printoptions(precision=1,suppress=True)
def stddev(l):
    mean = sum(l)/float(len(l))
    variance = sum((x-mean)**2 for x in l)/(len(l)-1)
    return math.sqrt(variance)

def std_err(l):
    return stddev(l)/math.sqrt(len(l))
class Peak(object):
    def dist(self, other):
        if self.chrom == other.chrom:
            return abs(self.center-other.center)
        else:
            return -1

    def compute_fold_enrichment(self):
        self.computed_fold_enrichment = float(self.computed_chip)/self.computed_control
class SlavePeak(Peak):
    def __init__(self, set_name, center):
        self.name = 'Slave'
        self.set_name = set_name
        self.center = center
class PZPeak(Peak):
    def __init__(self, set_name, chrom, start, end, name, summit, score, chip,
                 control, fold_enrichment, distribution_score, fdr):
        self.set_name = set_name
        self.chrom = chrom
        self.start = int(start)
        self.end = int(end)
        self.name = name
        self.center = int(summit)
        self.score = float(score)
        self.chip = float(chip)
        self.control = float(control)
        self.fold_enrichment = float(fold_enrichment)
        self.distribution_score = float(distribution_score)
        self.fdr = float(fdr)

    def width(self):
        return self.end-self.start+1
class JoinedPeak(Peak):
    WIDTH = 0
    HEADER = ['#Chromosome','Start','End','Name','NPeaks','Spread','ChipSE','EnrichSE']
    HEADER_TYPES = set()

    def __init__(self, pzpeak):
        self.chrom = pzpeak.chrom
        self.peaks = {}
        self.center = self.add(pzpeak)  #pzpeak.center

    def can_add(self, pzpeak):
        return not pzpeak.set_name in self.peaks

    def add(self, pzpeak):
        self.HEADER_TYPES.add(pzpeak.set_name)
        self.peaks[pzpeak.set_name] = pzpeak
        return sum(p.center for p in self.peaks.values())/len(self.peaks)

    def name(self):
        return '%s_%d' % (self.chrom, self.center)

    @classmethod
    def header(cls):
        s = '\t'.join(cls.HEADER) + '\t'
        #'#Chromosome\tPosition\tNPeaks\tSpread\t'
        for htype in cls.HEADER_TYPES:
            s += '\t'.join(
                htype + '_' + x for x in [
                    'PZName','PZScore','PZChip','PZInput','PZEnrich','PZFDR','Chip','Input','Enrich','Mapq']
                ) + '\t'
        return s

    def __str__(self):
        s = ''
        called_peaks = 0
        peak_signals = []
        peak_enrichs = []
        for set_name, peak in self.peaks.items():
            if hasattr(peak, 'score'):
                s += peak.name + '\t' + '\t'.join('%.2f' % x for x in
                    [peak.score, peak.chip, peak.control, peak.fold_enrichment, peak.fdr]) + '\t'
                called_peaks += 1
                #s += '%.1f\t%.1f\t%.1f\t%.1f\t' % (
                #peak.score,peak.chip,peak.control,peak.fold_enrichment)
            else:
                s += 'NA\tNA\tNA\tNA\tNA\tNA\t'
            if hasattr(peak, 'pzpeak'):
                s += '\t'.join('%.2f' % x for x in [
                    peak.pzpeak.nrom_signal, peak.pzpeak.norm_background,
                    peak.pzpeak.fold_enrichment, peak.pzpeak.mapq_score
                    ]) + '\t'
                peak_signals.append(peak.pzpeak.nrom_signal)
                peak_enrichs.append(peak.pzpeak.fold_enrichment)
            else:
                s += 'NA\tNA\tNA\tNA\tNA\t'
            #peak.computed_chip,peak.computed_control,peak.computed_fold_enrichment
            #s += '%.1f\t%.1f\t%.1f\t' % (
            #peak.computed_chip,peak.computed_control,peak.computed_fold_enrichment)
            #s += '\t'.join([str(x) for x in
            #[peak.score,peak.chip,peak.fold_enrichment]])
        try:
            if len(peak_signals):
                s = '\t'.join([self.chrom, str(self.center-self.WIDTH/2), str(self.center+self.WIDTH/2),
                               self.chrom+'_'+str(self.center), str(called_peaks)]) +\
                    '\t%.2f\t%.2f\t%.2f\t' % (
                        max(peak_signals)/(min(peak_signals) + sys.float_info.epsilon),
                        std_err(peak_signals), std_err(peak_enrichs),
                    ) + s
            else:
                s = '\t'.join([self.chrom, str(self.center),
                               self.chrom+'_'+str(self.center), str(called_peaks)]) +\
                    '\tNA\tNA\tNA\t' + s
        except:
            print max(peak_signals), min(peak_signals)
            raise
        return s
class PeakScorer(pz.PeakContainer):
    def __init__(self, ip_tags, control_tags, peak_size, plus_model, minus_model):
        self.ip_tags = ip_tags
        self.control_tags = control_tags
        self.peak_size = peak_size
        self.peak_shift = (peak_size - 1) / 2
        self.score_threshold = 10
        self.plus_model = plus_model
        self.minus_model = minus_model
        self.peaks = collections.defaultdict(list)
        self.peak_count = 0
        self.plus_window = collections.deque([])
        self.minus_window = collections.deque([])
        self.position = 0

    def fill_scores(self, chrom, libtype, scoretype):
        plus_tags = collections.deque(getattr(self, '%s_tags' % libtype).get_tags(chrom, '+'))
        plus_mapq = collections.deque(getattr(self, '%s_tags' % libtype).get_mapq(chrom, '+'))
        minus_tags = collections.deque(getattr(self, '%s_tags' % libtype).get_tags(chrom, '-'))
        minus_mapq = collections.deque(getattr(self, '%s_tags' % libtype).get_mapq(chrom, '-'))
        self.plus_window = collections.deque([])
        self.minus_window = collections.deque([])
        self.plus_mapq = collections.deque([])
        self.minus_mapq = collections.deque([])
        for peak in self.peaks[chrom]:
            # fill windows
            while plus_tags and plus_tags[0] <= (peak.position + self.peak_shift):
                self.plus_window.append(plus_tags.popleft())
                self.plus_mapq.append(plus_mapq.popleft())
            while minus_tags and minus_tags[0] <= (peak.position + self.peak_shift):
                self.minus_window.append(minus_tags.popleft())
                self.minus_mapq.append(minus_mapq.popleft())
            # get rid of old tags not fitting in the window any more
            while self.plus_window and self.plus_window[0] < (peak.position - self.peak_shift):
                self.plus_window.popleft()
                self.plus_mapq.popleft()
            while self.minus_window and self.minus_window[0] < (peak.position - self.peak_shift):
                self.minus_window.popleft()
                self.minus_mapq.popleft()
            # calculate normalized background level
            # add position to region if over threshold
            self.position = peak.position
            if libtype == 'ip':
                peak.mapq_score = float(sum(self.plus_mapq) + sum(self.minus_mapq)
                                        )/max(1, (len(self.plus_mapq) + len(self.minus_mapq)))
                #if peak.name == 'Peak_12869':
                #print zip(self.plus_window,self.plus_mapq)
                #print zip(self.minus_window,self.minus_mapq)
                #print sum(self.plus_mapq) , sum(self.minus_mapq), len(self.plus_mapq) , len(self.minus_mapq)
                #print peak.mapq_score
            setattr(peak, scoretype, self.calculate_score())

    def score_peaks(self, peak_dict):
        for chrom, peaks in peak_dict.items():
            for jp in peaks:
                jp.pzpeak = pz.Peak()
                jp.pzpeak.size = self.peak_size
                jp.pzpeak.shift = self.peak_shift
                jp.pzpeak.position = jp.center
                jp.pzpeak.name = jp.name
                self.peaks[chrom].append(jp.pzpeak)
                self.peak_count += 1
        for chrom, peaks in self.peaks.items():
            self.peaks[chrom] = sorted(self.peaks[chrom],
                                       lambda a, b: cmp(a.position, b.position))
            self.fill_scores(chrom, 'ip', 'score')
            self.fill_scores(chrom, 'control', 'background')
            self.determine_fold_enrichment(chrom)
            self.determine_signal_over_background(chrom)
class FileSet(object):
    def __init__(self, peakfile, chipfile, controlfile):
        self.peakfile = peakfile
        self.chip_file = chipfile
        self.chip_tags = pz.TagContainer(store_mapq=True)
        self.chip_tags(chipfile, True)
        self.control_file = controlfile
        self.control_tags = pz.TagContainer(store_mapq=True)
        self.control_tags(controlfile, True)
        #print self.chip_tags, self.control_tags

    def get_file(self, type):
        return getattr(self, '%s_file' % type)

    def get_tagcount(self, type):
        return getattr(self, '%s_tags' % type)
maxdist = int(args['--max-distance'])

peaksets = {}
filesets = {}
for peakfile, chipfile, controlfile in zip(args['PEAKS'], args['CHIP'], args['INPUT']):
    set_name = os.path.basename(peakfile).split('.')[0]
    peaksets[set_name] = collections.defaultdict(list)
    filesets[set_name] = FileSet(peakfile, chipfile, controlfile)
    r = csv.reader(open(peakfile), delimiter='\t')
    r.next()  # header
    '''
    #XXX: limit peaks
    maxpeaks = 20
    peakcounter = 0
    for row in r:
        if float(row[5]) >= 100 and float(row[8]) >= 10:
            peakcounter += 1
            if peakcounter > maxpeaks:
                break
        peaksets[set_name][row[0]].append(PZPeak(set_name,*row))
    '''
    for row in r:
        peaksets[set_name][row[0]].append(PZPeak(set_name, *row))
    JoinedPeak.WIDTH += peaksets[set_name].itervalues().next()[0].width()
JoinedPeak.WIDTH /= len(peaksets)
# find closest peak to each peak in the new set
# make new peaks when there's no qualifying one
npeaks = 0
joined_peaks = collections.defaultdict(list)
for set_name, peakset in peaksets.items():
    for chrom, peaks in peakset.items():
        for peak in peaks:
            closest = None
            for jp in joined_peaks[chrom]:
                dist = jp.dist(peak)
                if dist >= 0 and dist <= maxdist:
                    if closest is None or closest.dist(peak) > dist:
                        closest = jp
            if closest is None or not closest.can_add(peak):
                npeaks += 1
                joined_peaks[chrom].append(JoinedPeak(peak))
            else:
                closest.add(peak)
plus_model, minus_model = pz.generate_ideal_model(JoinedPeak.WIDTH)
for set_name, fileset in filesets.items():
    scorer = PeakScorer(fileset.chip_tags, fileset.control_tags,
                        JoinedPeak.WIDTH, plus_model, minus_model)
    peaks_to_score = collections.defaultdict(list)
    for chrom, peaks in joined_peaks.items():
        for jp in peaks:
            if set_name not in jp.peaks:
                jp.peaks[set_name] = SlavePeak(set_name, jp.center)
            peaks_to_score[chrom].append(jp.peaks[set_name])
    scorer.score_peaks(peaks_to_score)

print JoinedPeak.header()
for chrom, peaks in joined_peaks.items():
    for peak in peaks:
        print peak
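The joining rule above can be sketched standalone: a summit is merged into the nearest existing joined peak whose center lies within the maximum distance, and a group's center is the mean of its members. This is a simplified pure-Python variant (coordinates illustrative, not from real peak data):

```python
def join_summits(summits, maxdist=10):
    # Greedy variant of the loop above: merge each summit into the closest
    # existing group whose running center is within maxdist, else start a
    # new group.
    groups = []  # each group is a list of member summit positions
    for s in summits:
        best = None
        for g in groups:
            center = sum(g) / len(g)
            d = abs(center - s)
            if d <= maxdist and (best is None or d < best[0]):
                best = (d, g)
        if best is None:
            groups.append([s])
        else:
            best[1].append(s)
    # report integer centers, as the script does for joined peaks
    return [sum(g) // len(g) for g in groups]

print(join_summits([100, 105, 300]))  # [102, 300]
```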
#plus_model,minus_model = pz.generate_ideal_model(JoinedPeak.WIDTH)
#def get_coverage(fileset,type,jp,pseudocount=0):
#score = 0
#start = max(0,jp.center-JoinedPeak.WIDTH/2)
#for aln in fileset.get_file(type).fetch(
#reference = jp.chrom, start = start,
#end = jp.center+JoinedPeak.WIDTH/2):
#if aln.is_reverse:
#score += minus_model[aln.pos-start]
#else:
#score += plus_model[aln.pos-start]
#return (score+pseudocount)*10.**6/fileset.get_tagcount(type)
#return 10.**6*fileset.get_file(type).count(
#reference = jp.chrom,
#start = max(0,jp.center-JoinedPeak.WIDTH/2),
#end = jp.center+JoinedPeak.WIDTH/2)/fileset.get_tagcount(type)
#start = jp.center,
#end = jp.center+1)
#matrix = np.zeros((npeaks,len(peaksets)*2))
#i = 0
#for chrom,peaks in joined_peaks.items():
#for jp in peaks:
#for j,set_name in enumerate(peaksets.keys()):
#control_coverage = get_coverage(filesets[set_name],'control',jp,pseudocount=1)
#chip_coverage = get_coverage(filesets[set_name],'chip',jp)
#matrix[i][j] = float(chip_coverage)
#matrix[i][j+len(peaksets)] = float(control_coverage)
#i += 1
#quantile_normalize.quantile_norm(matrix)
#i = 0
#for chrom,peaks in joined_peaks.items():
#for jp in peaks:
#for j,set_name in enumerate(peaksets.keys()):
#if set_name not in jp.peaks:
#jp.peaks[set_name] = SlavePeak(
#set_name,matrix[i][j],matrix[i][j + len(peaksets)])
#else:
#jp.peaks[set_name].computed_chip = matrix[i][j]
#jp.peaks[set_name].computed_control = matrix[i][j+len(peaksets)]
#jp.peaks[set_name].compute_fold_enrichment()
#print jp
#i += 1
'''
i = 0
for chrom,peaks in joined_peaks.items():
for jp in peaks:
for j,set_name in enumerate(filesets.keys()):
matrix[i][j] = float(jp.peaks[set_name].computed_chip)
matrix[i][j+len(peaksets)] = float(jp.peaks[set_name].computed_control)
i += 1
'''
| 39.730978 | 122 | 0.603584 | 1,894 | 14,621 | 4.508976 | 0.160507 | 0.030328 | 0.011241 | 0.014052 | 0.312412 | 0.291686 | 0.238993 | 0.228337 | 0.177635 | 0.122248 | 0 | 0.00789 | 0.271869 | 14,621 | 367 | 123 | 39.839237 | 0.794289 | 0.165173 | 0 | 0.087866 | 0 | 0 | 0.125933 | 0.014407 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.029289 | null | null | 0.012552 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ed6ff0df42bec5dfbd4d71634bb7ab44a9c003d2 | 9,473 | py | Python | django_town/rest_swagger/views.py | uptown/django-town | 4c3b078a8ce5dcc275d65faa4a1cdfb7ebc74a50 | [
"MIT"
] | null | null | null | django_town/rest_swagger/views.py | uptown/django-town | 4c3b078a8ce5dcc275d65faa4a1cdfb7ebc74a50 | [
"MIT"
] | null | null | null | django_town/rest_swagger/views.py | uptown/django-town | 4c3b078a8ce5dcc275d65faa4a1cdfb7ebc74a50 | [
"MIT"
] | null | null | null | from django_town.rest import RestApiView, rest_api_manager
from django_town.http import http_json_response
from django_town.cache.utlis import SimpleCache
from django_town.oauth2.swagger import swagger_authorizations_data
from django_town.social.oauth2.permissions import OAuth2Authenticated, OAuth2AuthenticatedOrReadOnly
from django_town.social.permissions import Authenticated, AuthenticatedOrReadOnly
class ApiDocsView(RestApiView):
def read(self, request, api_version):
def load_cache(api_version="alpha"):
manager = rest_api_manager(api_version)
ret = {'title': manager.name,
'description': manager.description,
'apiVersion': manager.api_version, 'swaggerVersion': "1.2", 'basePath': manager.base_url,
'resourcePath': manager.base_url, 'info': manager.info,
'authorizations': swagger_authorizations_data()}
apis = []
models = {
"Error": {
"id": "Error",
"required": ['error'],
"properties": {
"error": {
"type": "string"
},
"field": {
"type": "string"
},
"message": {
"type": "string"
},
"resource": {
"type": "string"
}
}
}
}
for view_cls in manager.api_list:
operations = []
global_params = []
path = view_cls.path()
if path == "":
continue
if '{}' in path:
path = path.replace('{}', '{pk}')
global_params.append(
{
"paramType": "path",
"name": 'pk',
"description": 'primary key for object',
"dataType": 'integer',
"format": 'int64',
"required": True,
}
)
responseMessages = [
{
'code': 404,
"message": "not_found",
"responseModel": "Error"
},
{
'code': 500,
"message": "internal_error",
"responseModel": "Error"
},
{
'code': 409,
"message": "method_not_allowed",
"responseModel": "Error"
},
{
'code': 409,
"message": "conflict",
"responseModel": "Error"
},
{
'code': 403,
"message": "forbidden",
"responseModel": "Error"
},
{
'code': 401,
"message": "permission_denied",
"responseModel": "Error"
},
{
'code': 401,
"message": "unauthorized",
"responseModel": "Error"
},
{
'code': 400,
"message": "form_invalid",
"responseModel": "Error"
},
{
'code': 400,
"message": "form_required",
"responseModel": "Error"
},
{
'code': 400,
"message": "bad_request",
"responseModel": "Error"
},
]
current_api = {
'path': path,
'description': view_cls.__doc__,
}
operations = []
if 'create' in view_cls.crud_method_names and hasattr(view_cls, 'create'):
create_op = {
'method': 'POST',
'parameters': global_params,
'responseMessages': responseMessages,
'nickname': 'create ' + path,
}
operations.append(create_op)
if 'read' in view_cls.crud_method_names and hasattr(view_cls, 'read'):
op = {
'method': 'GET',
'responseMessages': responseMessages,
'nickname': 'read ' + path
}
params = global_params.copy()
for each_permission in view_cls.permission_classes:
if issubclass(each_permission, OAuth2Authenticated):
params.append(
{
"paramType": "query",
"name": 'access_token',
"dataType": 'string',
"required": True,
}
)
if hasattr(view_cls, 'read_safe_parameters'):
for each in view_cls.read_safe_parameters:
if isinstance(each, tuple):
if each[1] == int:
params.append(
{
"paramType": "query",
"name": each[0],
"dataType": 'int',
"format": 'int64',
"required": True,
}
)
elif each[1] == float:
params.append(
{
"paramType": "query",
"name": each[0],
"dataType": 'float',
"format": 'float',
"required": True,
}
)
else:
params.append(
{
"paramType": "query",
"name": each[0],
"dataType": 'string',
"required": True,
}
)
else:
params.append(
{
"paramType": "query",
"name": each,
"dataType": 'string',
"required": True,
}
)
pass
pass
op['parameters'] = params
operations.append(op)
if 'update' in view_cls.crud_method_names and hasattr(view_cls, 'update'):
op = {
'method': 'UPDATE',
'parameters': global_params,
'responseMessages': responseMessages,
'errorResponses': [],
'nickname': 'read ' + path,
}
operations.append(op)
if 'delete' in view_cls.crud_method_names and hasattr(view_cls, 'delete'):
op = {
'method': 'DELETE',
'parameters': global_params,
'responseMessages': responseMessages,
'errorResponses': [],
'nickname': 'read ' + path,
}
operations.append(op)
current_api['operations'] = operations
apis.append(current_api)
ret['apis'] = apis
ret["models"] = models
return ret
ret = SimpleCache(key_format="api-doc:%(api_version)s", duration=60 * 60 * 24,
load_callback=load_cache).get(api_version=api_version)
response = http_json_response(ret)
response["Access-Control-Allow-Origin"] = "*"
response["Access-Control-Allow-Methods"] = "GET"
response["Access-Control-Max-Age"] = "1000"
response["Access-Control-Allow-Headers"] = "*"
return response
| 43.059091 | 108 | 0.331468 | 525 | 9,473 | 5.819048 | 0.28 | 0.032079 | 0.064812 | 0.042553 | 0.304092 | 0.207856 | 0.184288 | 0.184288 | 0.153191 | 0.120458 | 0 | 0.01407 | 0.579859 | 9,473 | 219 | 109 | 43.255708 | 0.753518 | 0 | 0 | 0.320755 | 0 | 0 | 0.146522 | 0.013512 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009434 | false | 0.009434 | 0.028302 | 0 | 0.051887 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ed75ef3dbcd90991f3b2e3a5c73442983622bbb5 | 452 | py | Python | thinkutils_plus/eventbus/sample/myeventbus.py | ThinkmanWang/thinkutils_plus | 65d56a1a0cfce22dff08a4f0baea6b4eb08a2e35 | [
"MIT"
] | null | null | null | thinkutils_plus/eventbus/sample/myeventbus.py | ThinkmanWang/thinkutils_plus | 65d56a1a0cfce22dff08a4f0baea6b4eb08a2e35 | [
"MIT"
] | null | null | null | thinkutils_plus/eventbus/sample/myeventbus.py | ThinkmanWang/thinkutils_plus | 65d56a1a0cfce22dff08a4f0baea6b4eb08a2e35 | [
"MIT"
] | null | null | null | __author__ = 'Xsank'
import time
from thinkutils_plus.eventbus.eventbus import EventBus
from myevent import GreetEvent
from myevent import ByeEvent
from mylistener import MyListener
if __name__ == "__main__":
    eventbus = EventBus()
    eventbus.register(MyListener())
    ge = GreetEvent('world')
    be = ByeEvent('world')
    eventbus.async_post(be)
    eventbus.post(ge)
    time.sleep(0.1)
    eventbus.unregister(MyListener())
eventbus.destroy() | 23.789474 | 54 | 0.743363 | 53 | 452 | 6.075472 | 0.490566 | 0.149068 | 0.10559 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005236 | 0.154867 | 452 | 19 | 55 | 23.789474 | 0.837696 | 0 | 0 | 0 | 0 | 0 | 0.050773 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.3125 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
ed7b8022569fdf95c3598fcd38e2d1c4182f053f | 1,437 | py | Python | processing_tools/number_of_tenants.py | apanda/modeling | e032abd413bb3325ad6e5995abadeef74314f383 | [
"BSD-3-Clause"
] | 3 | 2017-08-30T05:24:11.000Z | 2021-02-25T12:17:19.000Z | processing_tools/number_of_tenants.py | apanda/modeling | e032abd413bb3325ad6e5995abadeef74314f383 | [
"BSD-3-Clause"
] | null | null | null | processing_tools/number_of_tenants.py | apanda/modeling | e032abd413bb3325ad6e5995abadeef74314f383 | [
"BSD-3-Clause"
] | 2 | 2017-11-15T07:00:48.000Z | 2020-12-13T17:29:03.000Z | import sys
from collections import defaultdict
def Process(fnames):
    tenant_time = defaultdict(lambda: defaultdict(lambda: 0.0))
    tenant_run = defaultdict(lambda: defaultdict(lambda: 0))
    for fname in fnames:
        f = open(fname)
        for l in f:
            if l.startswith("tenant"):
                continue
            parts = l.strip().split()
            tenants = int(parts[0])
            priv = int(parts[1])
            pub = int(parts[2])
            num_machines = tenants * priv * pub
            int_checks = (tenants * tenants * priv * (priv - 1)) / 2
            int_time = int_checks * float(parts[3])
            ext_checks = (tenants * priv) * ((tenants - 1) * pub)
            ext_time = ext_checks * float(parts[4])
            oext_check = (tenants * priv) * (tenants * pub)
            oext_time = oext_check * float(parts[5])
            total = int_time + ext_time + oext_time
            tenant_time[(priv, pub)][tenants] += total
            tenant_run[(priv, pub)][tenants] += 1
    for k in sorted(tenant_run.keys()):
        print "# ----%s------" % (str(k))
        for k2 in sorted(tenant_run[k].keys()):
            print "%d %d %f" % (k2, tenant_run[k][k2], \
                tenant_time[k][k2]/float(tenant_run[k][k2]))
        print
    print
    #print "%d %d %f"%(k, runs[k], machines[k]/float(runs[k]))

if __name__ == "__main__":
    Process(sys.argv[1:])
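The check-count arithmetic above can be sanity-checked on its own (tenant and machine counts here are illustrative, not from a real run):

```python
# For 2 tenants, each with 2 private and 1 public machine:
tenants, priv, pub = 2, 2, 1
int_checks = (tenants * tenants * priv * (priv - 1)) // 2  # private-to-private pairs
ext_checks = (tenants * priv) * ((tenants - 1) * pub)      # private to other tenants' public
oext_checks = (tenants * priv) * (tenants * pub)           # private to any public
print(int_checks, ext_checks, oext_checks)  # 4 4 8
```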
| 35.04878 | 68 | 0.526792 | 180 | 1,437 | 4.038889 | 0.3 | 0.074278 | 0.041265 | 0.093535 | 0.096286 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019608 | 0.325679 | 1,437 | 40 | 69 | 35.925 | 0.73065 | 0.039666 | 0 | 0.060606 | 0 | 0 | 0.026106 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.060606 | null | null | 0.121212 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ed7d1c9bb5710045f4cb95dccf219d3b5c6faaa9 | 2,564 | py | Python | pyfisher/mpi.py | borisbolliet/pyfisher | 715e192baa4fadbff754416d2b001c3708c9276c | [
"BSD-3-Clause"
] | 7 | 2017-12-06T18:16:13.000Z | 2021-02-09T19:25:26.000Z | pyfisher/mpi.py | borisbolliet/pyfisher | 715e192baa4fadbff754416d2b001c3708c9276c | [
"BSD-3-Clause"
] | 34 | 2016-01-25T19:48:07.000Z | 2021-02-03T22:34:09.000Z | pyfisher/mpi.py | borisbolliet/pyfisher | 715e192baa4fadbff754416d2b001c3708c9276c | [
"BSD-3-Clause"
] | 10 | 2017-02-01T15:14:22.000Z | 2021-02-16T01:34:16.000Z | from __future__ import print_function
import numpy as np
import os,sys,time
"""
Copied from orphics.mpi
"""
try:
    disable_mpi_env = os.environ['DISABLE_MPI']
    disable_mpi = True if disable_mpi_env.lower().strip() == "true" else False
except:
    disable_mpi = False
"""
Use the below cleanup stuff only for intel-mpi!
If you use it on openmpi, you will have no traceback for errors
causing hours of endless confusion and frustration! - Sincerely, past frustrated Mat
"""
# From Sigurd's enlib.mpi:
# Uncaught exceptions don't cause mpi to abort. This can lead to thousands of
# wasted CPU hours
# def cleanup(type, value, traceback):
# sys.__excepthook__(type, value, traceback)
# MPI.COMM_WORLD.Abort(1)
# sys.excepthook = cleanup
class fakeMpiComm:
    """
    A Simple Fake MPI implementation
    """
    def __init__(self):
        pass
    def Get_rank(self):
        return 0
    def Get_size(self):
        return 1
    def Barrier(self):
        pass
    def Abort(self, dummy):
        pass
try:
    if disable_mpi: raise
    from mpi4py import MPI
except:
    if not(disable_mpi): print("WARNING: mpi4py could not be loaded. Falling back to fake MPI. This means that if you submitted multiple processes, they will all be assigned the same rank of 0, and they are potentially doing the same thing.")
    class template:
        pass
    MPI = template()
    MPI.COMM_WORLD = fakeMpiComm()
def mpi_distribute(num_tasks, avail_cores, allow_empty=False):
    # copied to mapsims.convert_noise_templates
    if not(allow_empty): assert avail_cores <= num_tasks
    min_each, rem = divmod(num_tasks, avail_cores)
    num_each = np.array([min_each]*avail_cores)  # first distribute equally
    if rem > 0: num_each[-rem:] += 1  # add the remainder to the last set of cores (so that rank 0 never gets extra jobs)
    task_range = list(range(num_tasks))  # the full range of tasks
    cumul = np.cumsum(num_each).tolist()  # the end indices for each task
    task_dist = [task_range[x:y] for x, y in zip([0]+cumul[:-1], cumul)]  # a list containing the tasks for each core
    assert sum(num_each) == num_tasks
    assert len(num_each) == avail_cores
    assert len(task_dist) == avail_cores
    return num_each, task_dist
def distribute(njobs, verbose=True, **kwargs):
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    numcores = comm.Get_size()
    num_each, each_tasks = mpi_distribute(njobs, numcores, **kwargs)
    if rank == 0: print("At most ", max(num_each), " tasks...")
    my_tasks = each_tasks[rank]
    return comm, rank, my_tasks
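The splitting rule in mpi_distribute can be sketched without numpy so the snippet stands alone: equal shares first, then the remainder goes to the last cores, so rank 0 never gets extra jobs.

```python
def split_tasks(num_tasks, avail_cores):
    # Pure-Python rendering of mpi_distribute's rule (numpy dropped).
    min_each, rem = divmod(num_tasks, avail_cores)
    num_each = [min_each] * avail_cores
    for i in range(1, rem + 1):
        num_each[-i] += 1          # remainder lands on the last cores
    ends, total = [], 0
    for n in num_each:             # cumulative end index per core
        total += n
        ends.append(total)
    task_dist = [list(range(x, y)) for x, y in zip([0] + ends[:-1], ends)]
    return num_each, task_dist

print(split_tasks(10, 4))  # ([2, 2, 3, 3], [[0, 1], [2, 3], [4, 5, 6], [7, 8, 9]])
```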
| 29.813953 | 242 | 0.697738 | 391 | 2,564 | 4.409207 | 0.432225 | 0.032483 | 0.020882 | 0.020882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0059 | 0.206708 | 2,564 | 85 | 243 | 30.164706 | 0.841691 | 0.207878 | 0 | 0.166667 | 0 | 0.020833 | 0.135747 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 1 | 0.145833 | false | 0.083333 | 0.083333 | 0.041667 | 0.354167 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
71ec7e1ab519fe39c3c2b69f2a497fd39095d1ca | 15,524 | py | Python | tests/pytests/test_tags.py | wayn111/RediSearch | 897b2de35988b84851dd8380c614a21ad8da7c0f | [
"BSD-3-Clause",
"Ruby",
"Apache-2.0",
"MIT"
] | null | null | null | tests/pytests/test_tags.py | wayn111/RediSearch | 897b2de35988b84851dd8380c614a21ad8da7c0f | [
"BSD-3-Clause",
"Ruby",
"Apache-2.0",
"MIT"
] | null | null | null | tests/pytests/test_tags.py | wayn111/RediSearch | 897b2de35988b84851dd8380c614a21ad8da7c0f | [
"BSD-3-Clause",
"Ruby",
"Apache-2.0",
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from includes import *
from common import *
def search(env, r, *args):
return r.execute_command('ft.search', *args)
def testTagIndex(env):
r = env
env.expect('ft.create', 'idx', 'ON', 'HASH','schema', 'title', 'text', 'tags', 'tag').ok()
N = 10
for n in range(N):
env.expect('ft.add', 'idx', 'doc%d' % n, 1.0, 'fields',
'title', 'hello world term%d' % n, 'tags', 'foo bar,xxx,tag %d' % n).ok()
for _ in r.retry_with_rdb_reload():
waitForIndex(r, 'idx')
res = env.cmd('ft.search', 'idx', 'hello world')
env.assertEqual(10, res[0])
res = env.cmd('ft.search', 'idx', 'foo bar')
env.assertEqual(0, res[0])
res = env.cmd('ft.search', 'idx', '@tags:{foo bar}')
env.assertEqual(N, res[0])
# inorder should not affect tags
res = env.cmd(
'ft.search', 'idx', '@tags:{tag 1} @tags:{foo bar}', 'slop', '0', 'inorder')
env.assertEqual(1, res[0])
for n in range(N - 1):
res = env.cmd(
'ft.search', 'idx', '@tags:{tag %d}' % n, 'nocontent')
env.assertEqual(1, res[0])
env.assertEqual('doc%d' % n, res[1])
res = env.cmd(
'ft.search', 'idx', '@tags:{tag\\ %d}' % n, 'nocontent')
env.assertEqual(1, res[0])
res = env.cmd(
'ft.search', 'idx', 'hello world @tags:{tag\\ %d|tag %d}' % (n, n + 1), 'nocontent')
env.assertEqual(2, res[0])
res = py2sorted(res[1:])
env.assertEqual('doc%d' % n, res[0])
env.assertEqual('doc%d' % (n + 1), res[1])
res = env.cmd(
'ft.search', 'idx', 'term%d @tags:{tag %d}' % (n, n), 'nocontent')
env.assertEqual(1, res[0])
env.assertEqual('doc%d' % n, res[1])
def testSeparator(env):
r = env
env.expect(
'ft.create', 'idx', 'ON', 'HASH',
'schema', 'title', 'text', 'tags', 'tag', 'separator', ':').ok()
env.expect('ft.add', 'idx', 'doc1', 1.0, 'fields',
'title', 'hello world', 'tags', 'x:hello world: fooz bar:foo,bar:BOO FAR').ok()
for _ in r.retry_with_rdb_reload():
waitForIndex(r, 'idx')
for q in ('@tags:{hello world}', '@tags:{fooz bar}', '@tags:{foo\\,bar}', '@tags:{boo\\ far}', '@tags:{x}'):
res = env.cmd('ft.search', 'idx', q)
env.assertEqual(1, res[0])
def testTagPrefix(env):
env.skipOnCluster()
r = env
env.expect(
'ft.create', 'idx', 'ON', 'HASH',
'schema', 'title', 'text', 'tags', 'tag', 'separator', ',').ok()
env.expect('ft.add', 'idx', 'doc1', 1.0, 'fields', 'title', 'hello world',
'tags', 'hello world,hello-world,hell,jell').ok()
env.expect('FT.DEBUG', 'dump_tagidx', 'idx', 'tags') \
.equal([['hell', [1]], ['hello world', [1]], ['hello-world', [1]], ['jell', [1]]])
for _ in r.retry_with_rdb_reload():
waitForIndex(r, 'idx')
for q in ('@tags:{hello world}', '@tags:{hel*}', '@tags:{hello\\-*}', '@tags:{he*}'):
res = env.cmd('ft.search', 'idx', q)
env.assertEqual(res[0], 1)
def testTagFieldCase(env):
r = env
env.expect(
'ft.create', 'idx', 'ON', 'HASH',
'schema', 'title', 'text', 'TAgs', 'tag').ok()
env.expect('ft.add', 'idx', 'doc1', 1.0, 'fields',
'title', 'hello world', 'TAgs', 'HELLO WORLD,FOO BAR').ok()
for _ in r.retry_with_rdb_reload():
waitForIndex(r, 'idx')
env.assertListEqual([0], r.execute_command(
'FT.SEARCH', 'idx', '@tags:{HELLO WORLD}'))
env.assertListEqual([1, 'doc1'], r.execute_command(
'FT.SEARCH', 'idx', '@TAgs:{HELLO WORLD}', 'NOCONTENT'))
env.assertListEqual([1, 'doc1'], r.execute_command(
'FT.SEARCH', 'idx', '@TAgs:{foo bar}', 'NOCONTENT'))
env.assertListEqual([0], r.execute_command(
'FT.SEARCH', 'idx', '@TAGS:{foo bar}', 'NOCONTENT'))
def testInvalidSyntax(env):
r = env
# invalid syntax
with env.assertResponseError():
r.execute_command(
'ft.create', 'idx', 'ON', 'HASH',
'schema', 'title', 'text', 'tags', 'tag', 'separator')
with env.assertResponseError():
r.execute_command(
'ft.create', 'idx', 'ON', 'HASH',
'schema', 'title', 'text', 'tags', 'tag', 'separator', "foo")
with env.assertResponseError():
r.execute_command(
'ft.create', 'idx', 'ON', 'HASH',
'schema', 'title', 'text', 'tags', 'tag', 'separator', "")

def testTagVals(env):
    r = env
    r.execute_command(
        'ft.create', 'idx', 'ON', 'HASH',
        'schema', 'title', 'text', 'tags', 'tag', 'othertags', 'tag')

    N = 100
    alltags = set()
    for n in range(N):
        tags = ('foo %d' % n, 'bar %d' % n, 'x')
        alltags.add(tags[0])
        alltags.add(tags[1])
        alltags.add(tags[2])
        env.assertOk(r.execute_command('ft.add', 'idx', 'doc%d' % n, 1.0, 'fields',
                                       'tags', ','.join(tags), 'othertags', 'baz %d' % int(n // 2)))

    for _ in r.retry_with_rdb_reload():
        waitForIndex(r, 'idx')
        res = r.execute_command('ft.tagvals', 'idx', 'tags')
        env.assertEqual(N * 2 + 1, len(res))
        env.assertEqual(alltags, set(res))

        res = r.execute_command('ft.tagvals', 'idx', 'othertags')
        # use integer division so the comparison is int vs int under Python 3
        env.assertEqual(N // 2, len(res))

        env.expect('ft.tagvals', 'idx').raiseError()
        env.expect('ft.tagvals', 'idx', 'idx', 'idx').raiseError()
        env.expect('ft.tagvals', 'fake_idx', 'tags').raiseError()
        env.expect('ft.tagvals', 'idx', 'fake_tags').raiseError()
        env.expect('ft.tagvals', 'idx', 'title').raiseError()

def testSearchNotExistsTagValue(env):
    # this test basically makes sure we are not leaking
    env.expect('FT.CREATE idx ON HASH SCHEMA t TAG SORTABLE').ok()
    env.expect('FT.SEARCH idx @t:{val}').equal([0])

def testIssue1305(env):
    env.expect('FT.CREATE myIdx ON HASH SCHEMA title TAG').ok()
    env.expect('FT.ADD myIdx doc2 1.0 FIELDS title "work"').ok()
    env.expect('FT.ADD myIdx doc2 1.0 FIELDS title "hello"').error()
    env.expect('FT.ADD myIdx doc3 1.0 FIELDS title "hello"').ok()
    env.expect('FT.ADD myIdx doc1 1.0 FIELDS title "hello,work"').ok()
    expectedRes = {'doc1': ['inf', ['title', '"hello,work"']],
                   'doc3': ['inf', ['title', '"hello"']],
                   'doc2': ['inf', ['title', '"work"']]}
    res = env.cmd('ft.search', 'myIdx', '~@title:{wor} ~@title:{hell}', 'WITHSCORES')[1:]
    res = {res[i]: res[i + 1: i + 3] for i in range(0, len(res), 3)}
    env.assertEqual(res, expectedRes)

def testTagCaseSensitive(env):
    conn = getConnectionByEnv(env)

    env.expect('FT.CREATE idx1 SCHEMA t TAG').ok()
    env.expect('FT.CREATE idx2 SCHEMA t TAG CASESENSITIVE').ok()
    env.expect('FT.CREATE idx3 SCHEMA t TAG SEPARATOR .').ok()
    env.expect('FT.CREATE idx4 SCHEMA t TAG SEPARATOR . CASESENSITIVE').ok()
    env.expect('FT.CREATE idx5 SCHEMA t TAG CASESENSITIVE SEPARATOR .').ok()

    conn.execute_command('HSET', 'doc1', 't', 'foo,FOO')
    conn.execute_command('HSET', 'doc2', 't', 'FOO')
    conn.execute_command('HSET', 'doc3', 't', 'foo')

    if not env.is_cluster():
        conn.execute_command('FT.CONFIG', 'SET', 'FORK_GC_CLEAN_THRESHOLD', '0')
        env.expect('FT.DEBUG', 'dump_tagidx', 'idx1', 't').equal([['foo', [1, 2, 3]]])
        env.expect('FT.DEBUG', 'dump_tagidx', 'idx2', 't').equal([['foo', [1, 3]], ['FOO', [1, 2]]])
        env.expect('FT.DEBUG', 'dump_tagidx', 'idx3', 't').equal([['foo', [2, 3]], ['foo,foo', [1]]])
        env.expect('FT.DEBUG', 'dump_tagidx', 'idx4', 't').equal([['foo', [3]], ['foo,FOO', [1]], ['FOO', [2]]])
        env.expect('FT.DEBUG', 'dump_tagidx', 'idx5', 't').equal([['foo', [3]], ['foo,FOO', [1]], ['FOO', [2]]])

    env.expect('FT.SEARCH', 'idx1', '@t:{FOO}') \
        .equal([3, 'doc1', ['t', 'foo,FOO'], 'doc2', ['t', 'FOO'], 'doc3', ['t', 'foo']])
    env.expect('FT.SEARCH', 'idx1', '@t:{foo}') \
        .equal([3, 'doc1', ['t', 'foo,FOO'], 'doc2', ['t', 'FOO'], 'doc3', ['t', 'foo']])
    env.expect('FT.SEARCH', 'idx2', '@t:{FOO}') \
        .equal([2, 'doc1', ['t', 'foo,FOO'], 'doc2', ['t', 'FOO']])
    env.expect('FT.SEARCH', 'idx2', '@t:{foo}') \
        .equal([2, 'doc1', ['t', 'foo,FOO'], 'doc3', ['t', 'foo']])

    conn.execute_command('HSET', 'doc1', 't', 'f o,F O')
    conn.execute_command('HSET', 'doc2', 't', 'F O')
    conn.execute_command('HSET', 'doc3', 't', 'f o')

    if not env.is_cluster():
        forceInvokeGC(env, 'idx1')
        forceInvokeGC(env, 'idx2')
        forceInvokeGC(env, 'idx3')
        forceInvokeGC(env, 'idx4')
        forceInvokeGC(env, 'idx5')

        env.expect('FT.DEBUG', 'dump_tagidx', 'idx1', 't').equal([['f o', [4, 5, 6]]])
        env.expect('FT.DEBUG', 'dump_tagidx', 'idx2', 't').equal([['f o', [4, 6]], ['F O', [4, 5]]])
        env.expect('FT.DEBUG', 'dump_tagidx', 'idx3', 't').equal([['f o', [5, 6]], ['f o,f o', [4]]])
        env.expect('FT.DEBUG', 'dump_tagidx', 'idx4', 't').equal([['f o', [6]], ['f o,F O', [4]], ['F O', [5]]])
        env.expect('FT.DEBUG', 'dump_tagidx', 'idx5', 't').equal([['f o', [6]], ['f o,F O', [4]], ['F O', [5]]])

    # not casesensitive
    env.expect('FT.SEARCH', 'idx1', '@t:{F\\ O}') \
        .equal([3, 'doc1', ['t', 'f o,F O'], 'doc2', ['t', 'F O'], 'doc3', ['t', 'f o']])
    env.expect('FT.SEARCH', 'idx1', '@t:{f\\ o}') \
        .equal([3, 'doc1', ['t', 'f o,F O'], 'doc2', ['t', 'F O'], 'doc3', ['t', 'f o']])

    # casesensitive
    env.expect('FT.SEARCH', 'idx2', '@t:{F\\ O}') \
        .equal([2, 'doc1', ['t', 'f o,F O'], 'doc2', ['t', 'F O']])
    env.expect('FT.SEARCH', 'idx2', '@t:{f\\ o}') \
        .equal([2, 'doc1', ['t', 'f o,F O'], 'doc3', ['t', 'f o']])

    # not casesensitive
    env.expect('FT.SEARCH', 'idx3', '@t:{f\\ o\\,f\\ o}') \
        .equal([1, 'doc1', ['t', 'f o,F O']])
    env.expect('FT.SEARCH', 'idx3', '@t:{f\\ o\\,F\\ O}') \
        .equal([1, 'doc1', ['t', 'f o,F O']])
    env.expect('FT.SEARCH', 'idx3', '@t:{F\\ O\\,F\\ O}') \
        .equal([1, 'doc1', ['t', 'f o,F O']])
    env.expect('FT.SEARCH', 'idx3', '@t:{F\\ O}') \
        .equal([2, 'doc2', ['t', 'F O'], 'doc3', ['t', 'f o']])
    env.expect('FT.SEARCH', 'idx3', '@t:{f\\ o}') \
        .equal([2, 'doc2', ['t', 'F O'], 'doc3', ['t', 'f o']])

    # casesensitive
    env.expect('FT.SEARCH', 'idx4', '@t:{f\\ o\\,f\\ o}') \
        .equal([0])
    env.expect('FT.SEARCH', 'idx4', '@t:{f\\ o\\,F\\ O}') \
        .equal([1, 'doc1', ['t', 'f o,F O']])
    env.expect('FT.SEARCH', 'idx4', '@t:{F\\ O\\,F\\ O}') \
        .equal([0])
    env.expect('FT.SEARCH', 'idx4', '@t:{F\\ O}') \
        .equal([1, 'doc2', ['t', 'F O']])
    env.expect('FT.SEARCH', 'idx4', '@t:{f\\ o}') \
        .equal([1, 'doc3', ['t', 'f o']])
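
# The five indexes above differ only in separator and case handling. As a
# plain-Python illustration (NOT RediSearch's actual implementation), tag
# indexing amounts to splitting on the separator, trimming whitespace, and
# lower-casing unless CASESENSITIVE is set; `split_tags` is a hypothetical
# helper name:

```python
def split_tags(value, separator=',', case_sensitive=False):
    # split the raw field on the separator and trim surrounding whitespace
    tags = [t.strip() for t in value.split(separator)]
    # without CASESENSITIVE, tags are normalized to lower case
    if not case_sensitive:
        tags = [t.lower() for t in tags]
    return tags

# mirrors the idx1 / idx2 / idx3 behavior exercised above
assert split_tags('foo,FOO') == ['foo', 'foo']
assert split_tags('foo,FOO', case_sensitive=True) == ['foo', 'FOO']
assert split_tags('foo,FOO', separator='.') == ['foo,foo']
```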

def testTagGCClearEmpty(env):
    env.skipOnCluster()

    conn = getConnectionByEnv(env)
    conn.execute_command('FT.CONFIG', 'SET', 'FORK_GC_CLEAN_THRESHOLD', '0')
    conn.execute_command('FT.CREATE', 'idx', 'SCHEMA', 't', 'TAG')
    conn.execute_command('HSET', 'doc1', 't', 'foo')
    conn.execute_command('HSET', 'doc2', 't', 'bar')
    conn.execute_command('HSET', 'doc3', 't', 'baz')
    env.expect('FT.DEBUG', 'DUMP_TAGIDX', 'idx', 't').equal([['foo', [1]], ['bar', [2]], ['baz', [3]]])
    env.expect('FT.SEARCH', 'idx', '@t:{foo}').equal([1, 'doc1', ['t', 'foo']])

    # delete two tags
    conn.execute_command('DEL', 'doc1')
    conn.execute_command('DEL', 'doc2')
    forceInvokeGC(env, 'idx')
    env.expect('FT.DEBUG', 'DUMP_TAGIDX', 'idx', 't').equal([['baz', [3]]])
    env.expect('FT.SEARCH', 'idx', '@t:{foo}').equal([0])

    # delete last tag
    conn.execute_command('DEL', 'doc3')
    forceInvokeGC(env, 'idx')
    env.expect('FT.DEBUG', 'DUMP_TAGIDX', 'idx', 't').equal([])

    # check term can be used after being empty
    conn.execute_command('HSET', 'doc4', 't', 'foo')
    conn.execute_command('HSET', 'doc5', 't', 'foo')
    env.expect('FT.SEARCH', 'idx', '@t:{foo}') \
        .equal([2, 'doc4', ['t', 'foo'], 'doc5', ['t', 'foo']])

def testTagGCClearEmptyWithCursor(env):
    env.skipOnCluster()

    conn = getConnectionByEnv(env)
    conn.execute_command('FT.CONFIG', 'SET', 'FORK_GC_CLEAN_THRESHOLD', '0')
    conn.execute_command('FT.CREATE', 'idx', 'SCHEMA', 't', 'TAG')
    conn.execute_command('HSET', 'doc1', 't', 'foo')
    conn.execute_command('HSET', 'doc2', 't', 'foo')
    env.expect('FT.DEBUG', 'DUMP_TAGIDX', 'idx', 't').equal([['foo', [1, 2]]])

    res, cursor = env.cmd('FT.AGGREGATE', 'idx', '@t:{foo}', 'WITHCURSOR', 'COUNT', '1')
    env.assertEqual(res, [1, []])

    # delete both documents and run the GC to clean 'foo' inverted index
    env.expect('DEL', 'doc1').equal(1)
    env.expect('DEL', 'doc2').equal(1)
    forceInvokeGC(env, 'idx')

    # make sure the inverted index was cleaned
    env.expect('FT.DEBUG', 'DUMP_TAGIDX', 'idx', 't').equal([])

    # read from the cursor
    res, cursor = env.cmd('FT.CURSOR', 'READ', 'idx', cursor)
    env.assertEqual(res, [0])
    env.assertEqual(cursor, 0)

def testTagGCClearEmptyWithCursorAndMoreData(env):
    env.skipOnCluster()

    conn = getConnectionByEnv(env)
    conn.execute_command('FT.CONFIG', 'SET', 'FORK_GC_CLEAN_THRESHOLD', '0')
    conn.execute_command('FT.CREATE', 'idx', 'SCHEMA', 't', 'TAG')
    conn.execute_command('HSET', 'doc1', 't', 'foo')
    conn.execute_command('HSET', 'doc2', 't', 'foo')
    env.expect('FT.DEBUG', 'DUMP_TAGIDX', 'idx', 't').equal([['foo', [1, 2]]])

    res, cursor = env.cmd('FT.AGGREGATE', 'idx', '@t:{foo}', 'WITHCURSOR', 'COUNT', '1')
    env.assertEqual(res, [1, []])

    # delete both documents and run the GC to clean 'foo' inverted index
    env.expect('DEL', 'doc1').equal(1)
    env.expect('DEL', 'doc2').equal(1)
    forceInvokeGC(env, 'idx')

    # make sure the inverted index was cleaned
    env.expect('FT.DEBUG', 'DUMP_TAGIDX', 'idx', 't').equal([])

    # add data
    conn.execute_command('HSET', 'doc3', 't', 'foo')
    conn.execute_command('HSET', 'doc4', 't', 'foo')
    env.expect('FT.DEBUG', 'DUMP_TAGIDX', 'idx', 't').equal([['foo', [3, 4]]])

    # read from the cursor
    res, cursor = conn.execute_command('FT.CURSOR', 'READ', 'idx', cursor)
    env.assertEqual(res, [0])
    env.assertEqual(cursor, 0)

    # ensure later documents with same tag are read
    res = conn.execute_command('FT.AGGREGATE', 'idx', '@t:{foo}')
    env.assertEqual(res, [1, [], []])

@unstable
def testEmptyTagLeak(env):
    env.skipOnCluster()

    cycles = 1
    tags = 30
    conn = getConnectionByEnv(env)
    conn.execute_command('FT.CONFIG', 'SET', 'FORK_GC_CLEAN_THRESHOLD', '0')
    conn.execute_command('FT.CREATE', 'idx', 'SCHEMA', 't', 'TAG')
    pl = conn.pipeline()

    for i in range(cycles):
        # add a batch of tagged documents, then delete them all
        for j in range(tags):
            x = j + i * tags
            pl.execute_command('HSET', 'doc{}'.format(x), 't', 'tag{}'.format(x))
        pl.execute()
        for j in range(tags):
            pl.execute_command('DEL', 'doc{}'.format(j + i * tags))
        pl.execute()
        forceInvokeGC(env, 'idx')
    env.expect('FT.DEBUG', 'DUMP_TAGIDX', 'idx', 't').equal([])

# File: customer/admin.py (repo: matheusdemicheli/dogtel, license: MIT)
from django.contrib import admin
from django.utils.safestring import mark_safe

from customer.models import Owner, Dog, Breed, SubBreed


class OwnerAdmin(admin.ModelAdmin):
    """
    Owner ModelAdmin.
    """
    search_fields = ['name']


class BreedAdmin(admin.ModelAdmin):
    """
    Breed ModelAdmin.
    """
    search_fields = ['name']


class SubBreedAdmin(admin.ModelAdmin):
    """
    SubBreed ModelAdmin.
    """
    search_fields = ['name', 'breed__name']
    autocomplete_fields = ['breed']
    list_display = ['name', 'breed']


class DogAdmin(admin.ModelAdmin):
    """
    Dog ModelAdmin.
    """
    search_fields = ['name', 'owner__name']
    autocomplete_fields = ['owner', 'breed', 'sub_breed']
    list_display = ['name', 'owner', 'breed', 'sub_breed', 'img_photo']

    def img_photo(self, obj):
        """
        Render the dog's photo.
        """
        return mark_safe('<img src="%s" width="70">' % obj.photo.url)


admin.site.register(Dog, DogAdmin)
admin.site.register(Owner, OwnerAdmin)
admin.site.register(Breed, BreedAdmin)
admin.site.register(SubBreed, SubBreedAdmin)

# File: plugins/number.py (repo: motakine/ILAS_slackbot, license: MIT)
import slackbot.bot
import random

answer = random.randint(1, 50)
max = 50


def number(num):
    '''Judge a "number" guess.

    Args:
        num (int): the number to judge

    Returns:
        str: num greater than answer: 'Too large'
             num less than answer: 'Too small'
             num equal to answer: 'Correct!', and a new game starts
             anything else: 'Can I kick you?'
             0 is a mysterious number
             guessing 2 or more while max is 1 gets you told it is only 1
    '''
    global answer
    global max
    # Build the reply according to the input; on a correct guess, start a new game
    if num == 0:
        return ' is a mysterious number...'
    elif num < max + 1:
        if num > answer:
            return ' is too large. The answer is smaller.'
        elif num < answer:
            return ' is too small. The answer is larger.'
        elif num == answer:
            answer = random.randint(1, max)
            return ' is correct! :tada: Now, start a new game.'
    elif max == 1:
        return '? Can I kick you? Only 1.'
    return '? Can I kick you? 1 to %d.' % max


def number_set(num):
    '''Judge a "number set" request.

    Args:
        num (int): the number to judge

    Returns:
        str: changes the maximum possible answer (default is 50).
             Setting it to 1 just gets you asked "Really?".
             The mysterious number is 0.
    '''
    global answer
    global max
    # Build the reply according to the input, update max, and start a new game
    if num == 0:
        return 'There is a mysterious number... It is '
    elif num == 1:
        max = 1
        answer = random.randint(1, max)
        return '1? Really? Then, the maximum of the answer is '
    max = num
    answer = random.randint(1, max)
    return 'OK. Then, the maximum of the answer is '


@slackbot.bot.respond_to(r'^number\s+set\s+(\d+)')
def resp_set(message, digitstr):
    '''Reply to messages of the form "number set (digits)".

    Runs the number_set judgment on the (digits) part and replies.

    Args:
        message (slackbot.dispatcher.Message): slack message
        digitstr (str): string holding the digits
    '''
    # number set judgment
    nbs = number_set(int(digitstr))
    # Build the reply string
    reply = '{0:s}{1:s}.'.format(nbs, digitstr)
    message.reply(reply)


@slackbot.bot.respond_to(r'^number\s+(\d+)')
def resp_number(message, digitstr):
    '''Reply to messages of the form "number (digits)".

    Runs the number judgment on the (digits) part and replies
    with '(digits) (judgment)'.

    Args:
        message (slackbot.dispatcher.Message): slack message
        digitstr (str): string holding the digits
    '''
    # number judgment
    nb = number(int(digitstr))
    # Build the reply string
    reply = '{0:s}{1:s}'.format(digitstr, nb)
    message.reply(reply)


@slackbot.bot.respond_to(r'^number\s+giveup')
def resp_giveup(message):
    '''Reply to "number giveup".

    Shows the answer, sets a new answer, and replies 'Start a new game.'

    Args:
        message (slackbot.dispatcher.Message): slack message
    '''
    global answer
    global max
    # Keep the answer to show, then set the answer for the next game
    showanswer = answer
    answer = random.randint(1, max)
    # Build the reply string
    message.reply('Hahaha! Failed! :ghost: The answer is %d. Start a new game.' % showanswer)
    message.react('stuck_out_tongue_winking_eye')
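
# For readers without a Slack environment, the guessing logic above can be
# exercised in isolation. This is a standalone sketch of the same judgment
# logic; the class name and return labels are illustrative, not part of the
# plugin:

```python
import random

class GuessGame:
    """Standalone sketch of the plugin's number-judgment logic."""

    def __init__(self, maximum=50, seed=None):
        self._rng = random.Random(seed)
        self.maximum = maximum
        self.answer = self._rng.randint(1, maximum)

    def judge(self, num):
        if num == 0:
            return 'mysterious'
        if num <= self.maximum:
            if num > self.answer:
                return 'too large'
            if num < self.answer:
                return 'too small'
            # correct guess: start a new game, as the plugin does
            self.answer = self._rng.randint(1, self.maximum)
            return 'correct'
        return 'out of range'

game = GuessGame(maximum=5, seed=1)
# sweeping 1..5 upward always hits the initial answer at least once
results = [game.judge(n) for n in range(1, 6)]
```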

# File: VAE/reduced_model/nesm_generator.py (repo: youngmg1995/NES-Music-Maker, license: MIT)
# -*- coding: utf-8 -*-
"""
Created on Wed Apr 1 17:14:19 2020
@author: Mitchell
nesm_generator.py
~~~~~~~~~~~~~~~~~
This file serves as a script for using our pre-trained VAE model to generate
brand new NES music soundtracks. NOTE - using the reduced model we only
generate the first melodic voice for each track rather than each of the four
voices present in an NESM track. To do so we first reconstruct our model using
the file VAE class defined in `VAE.py` and the same parameters used in
`model_training`. Then we use functions from the file `generation_utils` to
have our trained model create entirely new and original NES music.
"""
# Imports
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# NOTE - nesmdb folder manually added to environment libraries
from dataset_utils import load_training
from VAE import VAE
from generation_utils import generate_seprsco, latent_SVD, get_latent_vecs,\
plot_track, filter_tracks
import nesmdb
from nesmdb.vgm.vgm_to_wav import save_vgmwav
import tensorflow as tf
import numpy as np
import os, json
### Load Mappings
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Parameters for shape of dataset (note these are also used for model def.)
measures = 8
measure_len = 96
# load data
training_foldername = '../../nesmdb24_seprsco/train/'
train_save_filename = 'transformed_dataset.json'
dataset , labels2int_map , int2labels_map = \
load_training(training_foldername, train_save_filename,
measures = measures, measure_len = measure_len)

### Reinitiate Model
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
### Model Parameters
latent_dim = 124
input_dim = len(int2labels_map) - 1
dropout = .1
maxnorm = None
vae_b1, vae_b2 = .02, .1

print('Reinitiating VAE Model')
# Build Model
model = VAE(latent_dim, input_dim, measures, measure_len, dropout,
            maxnorm, vae_b1, vae_b2)

# Reload Saved Weights
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "model_ckpt")
model.load_weights(checkpoint_prefix)
model.build(tf.TensorShape([None, measures, measure_len, ]))

# Print Summary of Model
model.summary()


### Sample Latent Variable Distributions
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Here we use SVD to more effectively sample from the orthogonal components
# of our latent space

# Parameters for sampling
num_songs = 10

print('Generating Latent Samples to Generate {} New Tracks'.format(num_songs))
# Grab distributions of dataset over latent space
# Have to run in batches due to size of the dataset
batch_size = 300
latent_vecs = get_latent_vecs(model, dataset, batch_size)

# Sample from normal distribution
rand_vecs = np.random.normal(0.0, 1.0, (num_songs, latent_dim))

# perform SVD
plot_eigenvalues = True
sample_vecs = latent_SVD(latent_vecs, rand_vecs, plot_eigenvalues)

### Generate New Tracks
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Create new seprsco tracks using our model and the random samples
# Seprsco files can later be converted to valid NES music format

# Parameters for track generation (specifically filtering)
p_min = .5

print('Generating New Tracks from Latent Samples')
# Decode samples using VAE
decoded_tracks = model.decoder(sample_vecs)

# Plot first decoded track
print("Example Model Generated Track")
plot_track(decoded_tracks[0])

# Filter Track
decoded_tracks = filter_tracks(decoded_tracks, p_min)

# Plot first filtered track
print("Example Filtered Track")
plot_track(decoded_tracks[0])

# Convert tracks to seprsco format
print('Converting Model Output to Seprsco')
seprsco_tracks = generate_seprsco(decoded_tracks, int2labels_map)


### Convert to WAV
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Convert seprsco tracks to WAV files so we can listen!!!
print('Converting Seprsco to WAV Audio')
wav_tracks = []
for track in seprsco_tracks:
    wav = nesmdb.convert.seprsco_to_wav(track)
    wav_tracks.append(wav)


### Save WAV Files
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Save our wav tracks to appropriate files (be sure not to overwrite existing)
# Also save latent variables so we can reproduce songs we like

# Output folder (defined here, not inside the save_wav block, so that the
# latent-variable save below also works when save_wav is False)
wav_folder = 'model_gen_files/'

# Save WAV tracks
save_wav = False
if save_wav:
    print('Saving Generated WAV Audio Tracks')
    for i in range(len(wav_tracks)):
        wav_file = wav_folder + 'VAE_NESM_{}.wav'.format(i)
        save_vgmwav(wav_file, wav_tracks[i])

# Save Latent Variables
save_latent_var = False
if save_latent_var:
    print('Saving Latent Variables for Generated Tracks')
    latent_filename = os.path.join(wav_folder, "latent_variables.json")
    with open(latent_filename, 'w') as f:
        json.dump({
            'VAE_NESM_{}.wav'.format(i): sample_vecs[i].tolist()
            for i in range(sample_vecs.shape[0])
        }, f)


#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#----------------------------------END FILE------------------------------------
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

# File: src/rpocore/migrations/0007_auto_20160927_1517.py (repo: 2martens/rpo-website, license: MIT)
# -*- coding: utf-8 -*-
# Generated by Django 1.10 on 2016-09-27 13:17
from __future__ import unicode_literals

from django.db import migrations, models
import django.db.models.deletion
import mezzanine.core.fields


class Migration(migrations.Migration):

    dependencies = [
        ('rpocore', '0006_auto_20160921_1924'),
    ]

    operations = [
        migrations.CreateModel(
            name='SupportingOrganization',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('_order', mezzanine.core.fields.OrderField(null=True, verbose_name='Order')),
                ('name', models.CharField(max_length=100, verbose_name='Name')),
                ('logo', models.ImageField(upload_to='', verbose_name='Logo of organization')),
                ('url', models.CharField(max_length=200, verbose_name='URL')),
            ],
            options={
                'verbose_name_plural': 'Supporting organizations',
                'ordering': ('_order',),
                'verbose_name': 'Supporting organization',
            },
        ),
        migrations.AlterField(
            model_name='carouselitem',
            name='homepage',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='carousel_items', to='rpocore.HomepagePage', verbose_name='Homepage'),
        ),
        migrations.AlterField(
            model_name='homepagepage',
            name='process',
            field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='rpocore.Process', verbose_name='Process'),
        ),
        migrations.AlterField(
            model_name='notablesupporter',
            name='supporter_page',
            field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='notable_supporters', to='rpocore.SupporterPage', verbose_name='Supporter page'),
        ),
        migrations.AlterField(
            model_name='phase',
            name='process',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='rpocore.Process', verbose_name='Process'),
        ),
        migrations.AlterField(
            model_name='statementpage',
            name='formal_statements',
            field=models.ManyToManyField(blank=True, to='rpocore.FormalStatement', verbose_name='Formal statements'),
        ),
        migrations.AlterField(
            model_name='statementpage',
            name='informal_statements',
            field=models.ManyToManyField(blank=True, to='rpocore.InformalStatement', verbose_name='Informal statements'),
        ),
        migrations.AlterField(
            model_name='supporter',
            name='support_group',
            field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.PROTECT, to='rpocore.SupportGroup', verbose_name='Support group'),
        ),
    ]

# File: Part1/AverageAccuracy.py (repo: efkandurakli/Graduation-Project1, license: MIT)
import numpy as np
from operator import truediv


def AA_andEachClassAccuracy(confusion_matrix):
    # per-class accuracy: diagonal (correct predictions) over row sums
    list_diag = np.diag(confusion_matrix)
    list_raw_sum = np.sum(confusion_matrix, axis=1)
    each_acc = np.nan_to_num(truediv(list_diag, list_raw_sum))
    # average accuracy: unweighted mean of the per-class accuracies
    average_acc = np.mean(each_acc)
    return each_acc, average_acc
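
# The computation above is just "diagonal over row sums, then the mean". A
# dependency-free sketch with plain lists (the function name `aa_plain` is
# illustrative) makes the arithmetic explicit:

```python
def aa_plain(confusion):
    # per-class accuracy: correct predictions (diagonal) / samples of that class (row sum)
    each_acc = []
    for i, row in enumerate(confusion):
        total = sum(row)
        each_acc.append(row[i] / total if total else 0.0)  # 0.0 mirrors np.nan_to_num
    # average accuracy: unweighted mean over classes
    return each_acc, sum(each_acc) / len(each_acc)

cm = [[5, 0, 0],
      [1, 3, 0],
      [0, 2, 2]]
each_acc, average_acc = aa_plain(cm)
# each_acc == [1.0, 0.75, 0.5]; average_acc == 0.75
```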

# File: bc4py/bip32/utils.py (repo: namuyan/bc4py, license: MIT)
from bc4py_extension import PyAddress
import hashlib


def is_address(ck: PyAddress, hrp, ver):
    """check bech32 format and version"""
    try:
        if ck.hrp != hrp:
            return False
        if ck.version != ver:
            return False
    except ValueError:
        return False
    return True


def get_address(pk, hrp, ver) -> PyAddress:
    """get address from public key"""
    identifier = hashlib.new('ripemd160', hashlib.sha256(pk).digest()).digest()
    return PyAddress.from_param(hrp, ver, identifier)


def convert_address(ck: PyAddress, hrp, ver) -> PyAddress:
    """convert address's version"""
    return PyAddress.from_param(hrp, ver, ck.identifier())


def dummy_address(dummy_identifier) -> PyAddress:
    assert len(dummy_identifier) == 20
    return PyAddress.from_param('dummy', 0, dummy_identifier)


__all__ = [
    "is_address",
    "get_address",
    "convert_address",
    "dummy_address",
]

# File: cubi_tk/snappy/kickoff.py (repo: LaborBerlin/cubi-tk, license: MIT)
"""``cubi-tk snappy kickoff``: kickoff SNAPPY pipeline."""
import argparse
import os
import subprocess
import typing

from logzero import logger
from toposort import toposort

from . import common
from cubi_tk.exceptions import ParseOutputException


def run(
    args, _parser: argparse.ArgumentParser, _subparser: argparse.ArgumentParser
) -> typing.Optional[int]:
    logger.info("Try to find SNAPPY pipeline directory...")
    try:
        path = common.find_snappy_root_dir(args.path or os.getcwd(), common.DEPENDENCIES.keys())
    except common.CouldNotFindPipelineRoot:
        return 1

    # TODO: this assumes standard naming which is a limitation...
    logger.info("Looking for pipeline directories (assuming standard naming)...")
    logger.debug("Looking in %s", path)
    step_set = {name for name in common.DEPENDENCIES if (path / name).exists()}
    steps: typing.List[str] = []
    for names in toposort({k: set(v) for k, v in common.DEPENDENCIES.items()}):
        steps += [name for name in names if name in step_set]
    logger.info("Will run the steps: %s", ", ".join(steps))

    logger.info("Submitting with sbatch...")
    jids: typing.Dict[str, str] = {}
    for step in steps:
        dep_jids = [jids[dep] for dep in common.DEPENDENCIES[step] if dep in jids]
        cmd = ["sbatch"]
        if dep_jids:
            cmd += ["--dependency", "afterok:%s" % ":".join(map(str, dep_jids))]
        cmd += ["pipeline_job.sh"]
        logger.info("Submitting step %s: %s", step, " ".join(cmd))
        if args.dry_run:
            jid = "<%s>" % step
        else:
            stdout_raw = subprocess.check_output(cmd, cwd=str(path / step), timeout=args.timeout)
            stdout = stdout_raw.decode("utf-8")
            if not stdout.startswith("Submitted batch job "):
                raise ParseOutputException("Did not understand sbatch output: %s" % stdout)
            jid = stdout.split()[-1]
        logger.info(" => JID: %s", jid)
        jids[step] = jid

    return None


def setup_argparse(parser: argparse.ArgumentParser) -> None:
    """Setup argument parser for ``cubi-tk snappy kickoff``."""
    parser.add_argument("--hidden-cmd", dest="snappy_cmd", default=run, help=argparse.SUPPRESS)
    parser.add_argument(
        "--dry-run",
        "-n",
        default=False,
        action="store_true",
        help="Perform dry-run, do not do anything.",
    )
    parser.add_argument(
        "--timeout", default=10, type=int, help="Number of seconds to wait for commands."
    )
    parser.add_argument(
        "path",
        nargs="?",
        help="Path into SNAPPY directory (below a directory containing .snappy_pipeline).",
    )

# File: src/printReport.py (repo: griimx/Summer-2016, license: MIT)
from __future__ import print_function
from connection import *
from jinja2 import Environment, FileSystemLoader
import webbrowser


def print_report(id):
    env = Environment(loader=FileSystemLoader('.'))
    template = env.get_template("src/template.html")
    cursor = db.cursor(MySQLdb.cursors.DictCursor)
    # NOTE: building SQL by string concatenation is vulnerable to SQL
    # injection if `id` can come from user input; prefer a parameterized query
    sql = "SELECT e.*, b.*, d.`depName` "
    sql += "FROM `employees` e, `baccounts` b, `departments` d "
    sql += "WHERE e.`empID` = b.`empdb_empID` "
    sql += "AND e.`depDB_depID` = d.`depID` "
    sql += "AND e.`empID` = '" + id + "'"
    # print(sql)
    cursor.execute(sql)
    result = cursor.fetchall()
    # print(result[0])
    result = result[0]
    print(result)
    template_vars = {"empID": result['empID'],
                     "firstName": result['firstName'],
                     "lastName": result['lastName'],
                     "address": result['address'],
                     "pin": result['pin'],
                     "state": result['state'],
                     "adharID": result['adharID'],
                     "panID": result['panID'],
                     "designation": result['designation'],
                     "unit": result['unit'],
                     "email": result['email'],
                     "mobile": result['mobile'],
                     "depName": result['depName'],
                     "IFSC": result['IFSC'],
                     "ACNo": result['ACNo'],
                     "BranchAdd": result['BranchAdd']
                     }
    content = template.render(template_vars)
    with open('print.html', 'w') as static_file:
        static_file.write(content)
    webbrowser.open_new_tab('print.html')

    # self.entry_text(self.entry_name, result['firstName']+" "+result['lastName'] )
    # self.entry_text(self.entry_EmpID, result['empID'])
    # self.entry_text(self.entry_EmpName, result['firstName']+" "+result['lastName'])
    # self.entry_text(self.entry_personalno, result['empID'])
    # self.entry_text(self.entry_address,result['address'] )
    # self.entry_text(self.entry_pin, result['pin'])
    # self.entry_text(self.entry_state, result['state'])
    # self.entry_text(self.entry_adhar, result['adharID'])
    # self.entry_text(self.entry_pan, result['panID'])
    # self.entry_text(self.entry_designation, result['designation'])
    # self.entry_text(self.entry_unit, result['unit'])
    # self.entry_text(self.entry_emailid, result['email'])
    # self.entry_text(self.entry_mobile, result['mobile'])
    # self.entry_text(self.entry_department, result['depName'])
    # self.entry_text(self.entry_ifsc, result['IFSC'])
    # self.entry_text(self.enrtry_acno, result['ACNo'])
    # self.entry_text(self.entry_branch, result['BranchAdd'])
9c13630030f6d62b875010ab48a5f1a305094328 | 1,266 | py | Python | nadmin/plugins/sortable.py | A425/django-xadmin-1.8 | 9ab06192311b22ec654778935ce3e3c5ffd39a00 | [
"MIT"
] | 1 | 2015-10-10T08:04:26.000Z | 2015-10-10T08:04:26.000Z | nadmin/plugins/sortable.py | A425/django-xadmin-1.8 | 9ab06192311b22ec654778935ce3e3c5ffd39a00 | [
"MIT"
] | 1 | 2016-03-25T01:41:36.000Z | 2016-03-25T01:41:36.000Z | nadmin/plugins/sortable.py | A425/django-xadmin-1.8 | 9ab06192311b22ec654778935ce3e3c5ffd39a00 | [
"MIT"
] | null | null | null | #coding:utf-8
from nadmin.sites import site
from nadmin.views import BaseAdminPlugin, ListAdminView

SORTBY_VAR = '_sort_by'


class SortablePlugin(BaseAdminPlugin):
    sortable_fields = ['sort']

    # Media
    def get_media(self, media):
        if self.sortable_fields and self.request.GET.get(SORTBY_VAR):
            media = media + self.vendor('nadmin.plugin.sortable.js')
        return media

    # Block Views
    def block_top_toolbar(self, context, nodes):
        if self.sortable_fields:
            pass
            # current_refresh = self.request.GET.get(REFRESH_VAR)
            # context.update({
            #     'has_refresh': bool(current_refresh),
            #     'clean_refresh_url': self.admin_view.get_query_string(remove=(REFRESH_VAR,)),
            #     'current_refresh': current_refresh,
            #     'refresh_times': [{
            #         'time': r,
            #         'url': self.admin_view.get_query_string({REFRESH_VAR: r}),
            #         'selected': str(r) == current_refresh,
            #     } for r in self.refresh_times],
            # })
            # nodes.append(loader.render_to_string('nadmin/blocks/refresh.html', context_instance=context))


site.register_plugin(SortablePlugin, ListAdminView)
| 34.216216 | 107 | 0.611374 | 139 | 1,266 | 5.330935 | 0.453237 | 0.094467 | 0.037787 | 0.053981 | 0.080972 | 0.080972 | 0.080972 | 0 | 0 | 0 | 0 | 0.001093 | 0.277251 | 1,266 | 36 | 108 | 35.166667 | 0.808743 | 0.436809 | 0 | 0 | 0 | 0 | 0.053009 | 0.035817 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0.076923 | 0.153846 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
9c1453b1473bf17ef5373079c50724a0067a38a2 | 3,311 | py | Python | rotkehlchen/tests/integration/test_blockchain.py | coblee/rotki | d675f5c2d0df5176337b7b10038524ee74923482 | [
"BSD-3-Clause"
] | null | null | null | rotkehlchen/tests/integration/test_blockchain.py | coblee/rotki | d675f5c2d0df5176337b7b10038524ee74923482 | [
"BSD-3-Clause"
] | 3 | 2021-01-28T21:30:46.000Z | 2022-03-25T19:17:00.000Z | rotkehlchen/tests/integration/test_blockchain.py | coblee/rotki | d675f5c2d0df5176337b7b10038524ee74923482 | [
"BSD-3-Clause"
] | null | null | null | import operator
import os
from unittest.mock import patch

import pytest
import requests

from rotkehlchen.chain.ethereum.manager import NodeName
from rotkehlchen.constants.assets import A_BTC
from rotkehlchen.tests.utils.blockchain import mock_etherscan_query
from rotkehlchen.typing import SupportedBlockchain


@pytest.mark.skipif(
    os.name == 'nt',
    reason='Not testing running with geth in windows at the moment',
)
@pytest.mark.parametrize('have_blockchain_backend', [True])
def test_eth_connection_initial_balances(
        blockchain,
        inquirer,  # pylint: disable=unused-argument
):
    """TODO for this test. Either:

    1. Not use own chain but use a normal open node for this test.
    2. If we use own chain, deploy the eth-scan contract there.

    But probably (1) makes more sense
    """
    msg = 'Should be connected to ethereum node'
    assert blockchain.ethereum.web3_mapping.get(NodeName.OWN) is not None, msg


def test_query_btc_balances(blockchain):
    blockchain.query_btc_balances()
    assert 'BTC' not in blockchain.totals

    account = '3BZU33iFcAiyVyu2M2GhEpLNuh81GymzJ7'
    blockchain.modify_btc_account(account, 'append', operator.add)

    blockchain.query_btc_balances()
    assert blockchain.totals[A_BTC].usd_value is not None
    assert blockchain.totals[A_BTC].amount is not None


@pytest.mark.parametrize('number_of_eth_accounts', [0])
def test_add_remove_account_assure_all_balances_not_always_queried(blockchain):
    """Due to a programming mistake at addition and removal of blockchain accounts
    after the first time all balances were queried every time. That slowed
    everything down (https://github.com/rotki/rotki/issues/678).

    This is a regression test for that behaviour

    TODO: Is this still needed? Shouldn't it just be removed?
    Had to add lots of mocks to make it not be a slow test
    """
    addr1 = '0xe188c6BEBB81b96A65aa20dDB9e2aef62627fa4c'
    addr2 = '0x78a087fCf440315b843632cFd6FDE6E5adcCc2C2'
    etherscan_patch = mock_etherscan_query(
        eth_map={addr1: {'ETH': 1}, addr2: {'ETH': 2}},
        etherscan=blockchain.ethereum.etherscan,
        original_requests_get=requests.get,
        original_queries=[],
    )
    ethtokens_max_chunks_patch = patch(
        'rotkehlchen.chain.ethereum.tokens.ETHERSCAN_MAX_TOKEN_CHUNK_LENGTH',
        new=800,
    )
    with etherscan_patch, ethtokens_max_chunks_patch:
        blockchain.add_blockchain_accounts(
            blockchain=SupportedBlockchain.ETHEREUM,
            accounts=[addr1],
        )
    assert addr1 in blockchain.accounts.eth
    with etherscan_patch, ethtokens_max_chunks_patch, patch.object(blockchain, 'query_balances') as mock:  # noqa: E501
        blockchain.remove_blockchain_accounts(
            blockchain=SupportedBlockchain.ETHEREUM,
            accounts=[addr1],
        )
    assert addr1 not in blockchain.accounts.eth
    assert mock.call_count == 0, 'blockchain.query_balances() should not have been called'

    addr2 = '0x78a087fCf440315b843632cFd6FDE6E5adcCc2C2'
    with etherscan_patch, ethtokens_max_chunks_patch, patch.object(blockchain, 'query_balances') as mock:  # noqa: E501
        blockchain.add_blockchain_accounts(
            blockchain=SupportedBlockchain.ETHEREUM,
            accounts=[addr2],
        )
| 36.788889 | 119 | 0.735125 | 406 | 3,311 | 5.82266 | 0.406404 | 0.045685 | 0.030457 | 0.038917 | 0.259729 | 0.201354 | 0.201354 | 0.18401 | 0.150592 | 0.083756 | 0 | 0.039048 | 0.187859 | 3,311 | 89 | 120 | 37.202247 | 0.840089 | 0.18363 | 0 | 0.213115 | 0 | 0 | 0.174094 | 0.112538 | 0 | 0 | 0.047583 | 0.022472 | 0.114754 | 1 | 0.04918 | false | 0 | 0.147541 | 0 | 0.196721 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9c17411640986aa0b93f332bd22849aaf0fdf53b | 3,080 | py | Python | tools/mkcodelet.py | bobmittmann/yard-ice | 3b27f94279d806d3a222de60adccf934994ed168 | [
"MIT"
] | 2 | 2019-04-08T19:00:23.000Z | 2019-11-30T23:42:58.000Z | tools/mkcodelet.py | bobmittmann/yard-ice | 3b27f94279d806d3a222de60adccf934994ed168 | [
"MIT"
] | null | null | null | tools/mkcodelet.py | bobmittmann/yard-ice | 3b27f94279d806d3a222de60adccf934994ed168 | [
"MIT"
] | 2 | 2016-02-12T14:12:41.000Z | 2019-09-18T14:50:29.000Z | #!/usr/bin/python

from struct import *
from getopt import *
import sys
import os
import re


def usage():
	global progname
	print >> sys.stderr, ""
	print >> sys.stderr, " Usage:", progname, "[options] fname"
	print >> sys.stderr, ""
	print >> sys.stderr, "Options"
	print >> sys.stderr, " -h, --help show this help message and exit"
	print >> sys.stderr, " -o FILENAME, --addr=FILENAME"
	print >> sys.stderr, ""


def error(msg):
	print >> sys.stderr, ""
	print >> sys.stderr, "#error:", msg
	usage()
	sys.exit(2)


def mk_codelet(in_fname, out_fname, hdr_fname):
	try:
		in_file = open(in_fname, mode='r')
	except:
		print >> sys.stderr, "#error: can't open file: '%s'" % in_fname
		sys.exit(1)

	try:
		c_file = open(out_fname, mode='w')
	except:
		print >> sys.stderr, "#error: can't create file: %s" % out_fname
		sys.exit(1)

	try:
		h_file = open(hdr_fname, mode='w')
	except:
		print >> sys.stderr, "#error: can't create file: %s" % hdr_fname
		sys.exit(1)

	# Skip ahead to the symbol table section of the objdump output
	for line in in_file:
		if re.match("SYMBOL TABLE:", line):
			break

	# Collect symbol addresses and names
	s_pat = re.compile("([0-9a-f]{8}) ..*[0-9a-f]{8} ([.A-Za-z_][A-Za-z_0-9]*)")
	sym = {}
	for line in in_file:
		m = s_pat.findall(line)
		if m:
			addr = int(m[0][0], 16)
			name = m[0][1]
			sym[addr] = name
		else:
			break

	# Skip ahead to the .text section contents
	for line in in_file:
		if re.match("Contents of section .text:", line):
			break

	token_pat = re.compile("([0-9a-f]{2})([0-9a-f]{2})([0-9a-f]{2})([0-9a-f]{2})")
	c_file.write("#include <stdint.h>\n\n")
	h_file.write("#include <stdint.h>\n\n")
	addr = 0
	i = 0
	for line in in_file:
		for a, b, c, d in token_pat.findall(line):
			if addr in sym:
				# Start a new array at each symbol boundary
				if i > 0:
					c_file.write("\n};\n\n")
				c_file.write("const uint32_t %s[] = {" % sym[addr])
				h_file.write("extern const uint32_t %s[];\n\n" % sym[addr])
				i = 0
			if (i % 4) == 0:
				if i > 0:
					c_file.write(",")
				c_file.write("\n\t0x" + d + c + b + a)
			else:
				c_file.write(", 0x" + d + c + b + a)
			i = i + 1
			addr = addr + 4
	c_file.write("\n};\n")

	in_file.close()
	c_file.close()
	h_file.close()
	return


def main():
	global progname
	progname = sys.argv[0]

	try:
		opts, args = getopt(sys.argv[1:], "ho:", ["help", "output="])
	except GetoptError, err:
		error(str(err))

	for o, a in opts:
		if o in ("-h", "--help"):
			usage()
			sys.exit()
		elif o in ("-o", "--output"):
			out_fname = a
		else:
			assert False, "unhandled option"

	if len(args) == 0:
		error("missing fname")
	if len(args) > 1:
		error("too many arguments")

	in_fname = args[0]

	try:
		out_fname
	except NameError:
		dirname, fname = os.path.split(in_fname)
		basename, extension = os.path.splitext(fname)
		out_fname = basename + '.' + 'c'

	dirname, fname = os.path.split(out_fname)
	basename, extension = os.path.splitext(fname)
	hdr_fname = basename + '.' + 'h'

	mk_codelet(in_fname, out_fname, hdr_fname)


if __name__ == "__main__":
	main()
| 21.538462 | 80 | 0.566558 | 488 | 3,080 | 3.461066 | 0.247951 | 0.056838 | 0.099467 | 0.044997 | 0.390172 | 0.326821 | 0.243931 | 0.137359 | 0.071048 | 0.071048 | 0 | 0.021992 | 0.247078 | 3,080 | 142 | 81 | 21.690141 | 0.706339 | 0.005195 | 0 | 0.324561 | 0 | 0.017544 | 0.192742 | 0.026361 | 0 | 0 | 0 | 0 | 0.008772 | 0 | null | null | 0.008772 | 0.04386 | null | null | 0.105263 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9c2147f6458e9854c24fb91bf25b8791fe2188ff | 528 | py | Python | src/supplier/templates/supplier/urls.py | vandana0608/Pharmacy-Managament | f99bdec11c24027a432858daa19247a21cecc092 | [
"bzip2-1.0.6"
] | null | null | null | src/supplier/templates/supplier/urls.py | vandana0608/Pharmacy-Managament | f99bdec11c24027a432858daa19247a21cecc092 | [
"bzip2-1.0.6"
] | null | null | null | src/supplier/templates/supplier/urls.py | vandana0608/Pharmacy-Managament | f99bdec11c24027a432858daa19247a21cecc092 | [
"bzip2-1.0.6"
] | null | null | null | from django.urls import path
from . import views

urlpatterns = [
    path('', views.SupplierList.as_view(), name='supplier_list'),
    path('view/<int:pk>', views.SupplierView.as_view(), name='supplier_view'),
    path('new', views.SupplierCreate.as_view(), name='supplier_new'),
    path('edit/<int:pk>', views.SupplierUpdate.as_view(), name='supplier_edit'),
    path('delete/<int:pk>', views.SupplierDelete.as_view(), name='supplier_delete'),
] | 44 | 84 | 0.69697 | 70 | 528 | 5.085714 | 0.314286 | 0.101124 | 0.168539 | 0.303371 | 0.314607 | 0.314607 | 0.314607 | 0.314607 | 0.314607 | 0.314607 | 0 | 0 | 0.104167 | 528 | 12 | 85 | 44 | 0.752643 | 0 | 0 | 0.2 | 0 | 0 | 0.257089 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9c22036370e0f940a80ab34156b825acd98d5b1a | 205 | py | Python | web_scraper/extract/common.py | rarc41/web_scraper_pro | f297c785617c6b1617ced8f29ad11afec31f2968 | [
"MIT"
] | null | null | null | web_scraper/extract/common.py | rarc41/web_scraper_pro | f297c785617c6b1617ced8f29ad11afec31f2968 | [
"MIT"
] | null | null | null | web_scraper/extract/common.py | rarc41/web_scraper_pro | f297c785617c6b1617ced8f29ad11afec31f2968 | [
"MIT"
] | null | null | null | import yaml

__config = None


def config():
    global __config
    if not __config:
        with open('config.yaml', mode='r') as f:
            __config = yaml.safe_load(f)
    return __config | 15.769231 | 48 | 0.585366 | 26 | 205 | 4.192308 | 0.653846 | 0.183486 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.321951 | 205 | 13 | 49 | 15.769231 | 0.784173 | 0 | 0 | 0 | 0 | 0.058252 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.375 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9c231907fc5c90a542b71605f474b278cba43d2d | 777 | py | Python | sequence/get_seqs_from_list.py | fanglu01/cDNA_Cupcake | 60f56dc291661a2b84e40b64d469fba658889c34 | [
"BSD-3-Clause-Clear"
] | 1 | 2018-09-21T06:20:50.000Z | 2018-09-21T06:20:50.000Z | sequence/get_seqs_from_list.py | fanglu01/cDNA_Cupcake | 60f56dc291661a2b84e40b64d469fba658889c34 | [
"BSD-3-Clause-Clear"
] | null | null | null | sequence/get_seqs_from_list.py | fanglu01/cDNA_Cupcake | 60f56dc291661a2b84e40b64d469fba658889c34 | [
"BSD-3-Clause-Clear"
] | null | null | null | #!/usr/bin/env python
import os, sys
from Bio import SeqIO


def get_seqs_from_list(fastafile, listfile):
    seqs = [line.strip() for line in open(listfile)]
    for r in SeqIO.parse(open(fastafile), 'fasta'):
        if r.id in seqs or r.id.split('|')[0] in seqs or any(r.id.startswith(x) for x in seqs):
            print(">" + r.id)
            print(r.seq)


if __name__ == "__main__":
    from argparse import ArgumentParser

    parser = ArgumentParser("Get sequences from a fasta file from a list")
    parser.add_argument("fasta_filename", help="Input fasta filename to extract sequences from")
    parser.add_argument("list_filename", help="List of sequence IDs to extract")
    args = parser.parse_args()

    get_seqs_from_list(args.fasta_filename, args.list_filename)
| 37 | 96 | 0.693694 | 119 | 777 | 4.352941 | 0.445378 | 0.023166 | 0.042471 | 0.057915 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001597 | 0.194337 | 777 | 20 | 97 | 38.85 | 0.825879 | 0.02574 | 0 | 0 | 0 | 0 | 0.214286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.2 | null | null | 0.133333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9c245a520078fb55db53d97b8e520bef999698c6 | 9,538 | py | Python | api/base/settings/defaults.py | mattclark/osf.io | 7a362ceb6af3393d3d0423aafef336ee13277303 | [
"Apache-2.0"
] | null | null | null | api/base/settings/defaults.py | mattclark/osf.io | 7a362ceb6af3393d3d0423aafef336ee13277303 | [
"Apache-2.0"
] | null | null | null | api/base/settings/defaults.py | mattclark/osf.io | 7a362ceb6af3393d3d0423aafef336ee13277303 | [
"Apache-2.0"
] | null | null | null | """
Django settings for api project.
Generated by 'django-admin startproject' using Django 1.8.
For more information on this file, see
https://docs.djangoproject.com/en/1.8/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.8/ref/settings/
"""
import os
from urlparse import urlparse
from website import settings as osf_settings
BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/
DATABASES = {
'default': {
'CONN_MAX_AGE': 0,
'ENGINE': 'osf.db.backends.postgresql', # django.db.backends.postgresql
'NAME': os.environ.get('OSF_DB_NAME', 'osf'),
'USER': os.environ.get('OSF_DB_USER', 'postgres'),
'PASSWORD': os.environ.get('OSF_DB_PASSWORD', ''),
'HOST': os.environ.get('OSF_DB_HOST', '127.0.0.1'),
'PORT': os.environ.get('OSF_DB_PORT', '5432'),
'ATOMIC_REQUESTS': True,
'TEST': {
'SERIALIZE': False,
},
},
}
DATABASE_ROUTERS = ['osf.db.router.PostgreSQLFailoverRouter', ]
PASSWORD_HASHERS = [
'django.contrib.auth.hashers.BCryptSHA256PasswordHasher',
'django.contrib.auth.hashers.BCryptPasswordHasher',
]
AUTH_USER_MODEL = 'osf.OSFUser'
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = osf_settings.SECRET_KEY
AUTHENTICATION_BACKENDS = (
'api.base.authentication.backends.ODMBackend',
'guardian.backends.ObjectPermissionBackend',
)
# SECURITY WARNING: don't run with debug turned on in production!
DEV_MODE = osf_settings.DEV_MODE
DEBUG = osf_settings.DEBUG_MODE
DEBUG_PROPAGATE_EXCEPTIONS = True
# session:
SESSION_COOKIE_NAME = 'api'
SESSION_COOKIE_SECURE = osf_settings.SECURE_MODE
SESSION_COOKIE_HTTPONLY = osf_settings.SESSION_COOKIE_HTTPONLY
# csrf:
CSRF_COOKIE_NAME = 'api-csrf'
CSRF_COOKIE_SECURE = osf_settings.SECURE_MODE
CSRF_COOKIE_HTTPONLY = osf_settings.SECURE_MODE
ALLOWED_HOSTS = [
'.osf.io',
]
# Application definition
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.messages',
'django.contrib.sessions',
'django.contrib.staticfiles',
'django.contrib.admin',
# 3rd party
'django_celery_beat',
'django_celery_results',
'rest_framework',
'corsheaders',
'raven.contrib.django.raven_compat',
'django_extensions',
'guardian',
'storages',
'waffle',
'elasticsearch_metrics',
# OSF
'osf',
# Addons
'addons.osfstorage',
'addons.bitbucket',
'addons.box',
'addons.dataverse',
'addons.dropbox',
'addons.figshare',
'addons.forward',
'addons.github',
'addons.gitlab',
'addons.googledrive',
'addons.mendeley',
'addons.onedrive',
'addons.owncloud',
'addons.s3',
'addons.twofactor',
'addons.wiki',
'addons.zotero',
)
# local development using https
if osf_settings.SECURE_MODE and DEBUG:
    INSTALLED_APPS += ('sslserver',)
# TODO: Are there more granular ways to configure reporting specifically related to the API?
RAVEN_CONFIG = {
'tags': {'App': 'api'},
'dsn': osf_settings.SENTRY_DSN,
'release': osf_settings.VERSION,
}
BULK_SETTINGS = {
'DEFAULT_BULK_LIMIT': 100,
}
MAX_PAGE_SIZE = 100
REST_FRAMEWORK = {
'PAGE_SIZE': 10,
'DEFAULT_RENDERER_CLASSES': (
'api.base.renderers.JSONAPIRenderer',
'api.base.renderers.JSONRendererWithESISupport',
'api.base.renderers.BrowsableAPIRendererNoForms',
),
'DEFAULT_PARSER_CLASSES': (
'api.base.parsers.JSONAPIParser',
'api.base.parsers.JSONAPIParserForRegularJSON',
'rest_framework.parsers.FormParser',
'rest_framework.parsers.MultiPartParser',
),
'EXCEPTION_HANDLER': 'api.base.exceptions.json_api_exception_handler',
'DEFAULT_CONTENT_NEGOTIATION_CLASS': 'api.base.content_negotiation.JSONAPIContentNegotiation',
'DEFAULT_VERSIONING_CLASS': 'api.base.versioning.BaseVersioning',
'DEFAULT_VERSION': '2.0',
'ALLOWED_VERSIONS': (
'2.0',
'2.1',
'2.2',
'2.3',
'2.4',
'2.5',
'2.6',
'2.7',
'2.8',
'2.9',
'2.10',
'2.11',
'2.12',
'2.13',
'2.14',
'2.15',
'2.16',
'2.17',
),
'DEFAULT_FILTER_BACKENDS': ('api.base.filters.OSFOrderingFilter',),
'DEFAULT_PAGINATION_CLASS': 'api.base.pagination.JSONAPIPagination',
'ORDERING_PARAM': 'sort',
'DEFAULT_AUTHENTICATION_CLASSES': (
# Custom auth classes
'api.base.authentication.drf.OSFBasicAuthentication',
'api.base.authentication.drf.OSFSessionAuthentication',
'api.base.authentication.drf.OSFCASAuthentication',
),
'DEFAULT_THROTTLE_CLASSES': (
'rest_framework.throttling.UserRateThrottle',
'api.base.throttling.NonCookieAuthThrottle',
),
'DEFAULT_THROTTLE_RATES': {
'user': '10000/day',
'non-cookie-auth': '100/hour',
'add-contributor': '10/second',
'create-guid': '1000/hour',
'root-anon-throttle': '1000/hour',
'test-user': '2/hour',
'test-anon': '1/hour',
'send-email': '2/minute',
},
}
# Settings related to CORS Headers addon: allow API to receive authenticated requests from OSF
# CORS plugin only matches based on "netloc" part of URL, so as workaround we add that to the list
CORS_ORIGIN_ALLOW_ALL = False
CORS_ORIGIN_WHITELIST = (
urlparse(osf_settings.DOMAIN).netloc,
osf_settings.DOMAIN,
)
# This needs to remain True to allow cross origin requests that are in CORS_ORIGIN_WHITELIST to
# use cookies.
CORS_ALLOW_CREDENTIALS = True
# Set dynamically on app init
ORIGINS_WHITELIST = ()
MIDDLEWARE = (
'api.base.middleware.DjangoGlobalMiddleware',
'api.base.middleware.CeleryTaskMiddleware',
'api.base.middleware.PostcommitTaskMiddleware',
# A profiling middleware. ONLY FOR DEV USE
# Uncomment and add "prof" to url params to recieve a profile for that url
# 'api.base.middleware.ProfileMiddleware',
# 'django.contrib.sessions.middleware.SessionMiddleware',
'api.base.middleware.CorsMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
# 'django.contrib.auth.middleware.AuthenticationMiddleware',
# 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
# 'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware',
'waffle.middleware.WaffleMiddleware',
)
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
},
]
ROOT_URLCONF = 'api.base.urls'
WSGI_APPLICATION = 'api.base.wsgi.application'
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# https://django-storages.readthedocs.io/en/latest/backends/gcloud.html
if os.environ.get('GOOGLE_APPLICATION_CREDENTIALS', False):
    # Required to interact with Google Cloud Storage
    DEFAULT_FILE_STORAGE = 'api.base.storage.RequestlessURLGoogleCloudStorage'
    GS_BUCKET_NAME = os.environ.get('GS_BUCKET_NAME', 'cos-osf-stage-cdn-us')
    GS_FILE_OVERWRITE = os.environ.get('GS_FILE_OVERWRITE', False)
elif osf_settings.DEV_MODE or osf_settings.DEBUG_MODE:
    DEFAULT_FILE_STORAGE = 'api.base.storage.DevFileSystemStorage'
# https://docs.djangoproject.com/en/1.8/howto/static-files/
STATIC_ROOT = os.path.join(BASE_DIR, 'static/vendor')
API_BASE = 'v2/'
API_PRIVATE_BASE = '_/'
STATIC_URL = '/static/'
NODE_CATEGORY_MAP = osf_settings.NODE_CATEGORY_MAP
DEBUG_TRANSACTIONS = DEBUG
JWT_SECRET = 'osf_api_cas_login_jwt_secret_32b'
JWE_SECRET = 'osf_api_cas_login_jwe_secret_32b'
ENABLE_VARNISH = osf_settings.ENABLE_VARNISH
ENABLE_ESI = osf_settings.ENABLE_ESI
VARNISH_SERVERS = osf_settings.VARNISH_SERVERS
ESI_MEDIA_TYPES = osf_settings.ESI_MEDIA_TYPES
ADDONS_FOLDER_CONFIGURABLE = ['box', 'dropbox', 's3', 'googledrive', 'figshare', 'owncloud', 'onedrive']
ADDONS_OAUTH = ADDONS_FOLDER_CONFIGURABLE + ['dataverse', 'github', 'bitbucket', 'gitlab', 'mendeley', 'zotero', 'forward']
BYPASS_THROTTLE_TOKEN = 'test-token'
OSF_SHELL_USER_IMPORTS = None
# Settings for use in the admin
OSF_URL = 'https://osf.io'
SELECT_FOR_UPDATE_ENABLED = True
# Disable anonymous user permissions in django-guardian
ANONYMOUS_USER_NAME = None
# If set to True, automated tests with extra queries will fail.
NPLUSONE_RAISE = False
# salt used for generating hashids
HASHIDS_SALT = 'pinkhimalayan'
# django-elasticsearch-metrics
ELASTICSEARCH_DSL = {
'default': {
'hosts': os.environ.get('ELASTIC6_URI', '127.0.0.1:9201'),
'retry_on_timeout': True,
},
}
# Store yearly indices for time-series metrics
ELASTICSEARCH_METRICS_DATE_FORMAT = '%Y'
WAFFLE_CACHE_NAME = 'waffle_cache'
STORAGE_USAGE_CACHE_NAME = 'storage_usage'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
},
STORAGE_USAGE_CACHE_NAME: {
'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
'LOCATION': 'osf_cache_table',
},
WAFFLE_CACHE_NAME: {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
},
}
| 28.990881 | 123 | 0.698784 | 1,121 | 9,538 | 5.742194 | 0.369313 | 0.027187 | 0.016778 | 0.011651 | 0.09492 | 0.070219 | 0.050023 | 0.027963 | 0 | 0 | 0 | 0.015463 | 0.172783 | 9,538 | 328 | 124 | 29.079268 | 0.80038 | 0.202139 | 0 | 0.043668 | 1 | 0 | 0.450053 | 0.279863 | 0 | 0 | 0 | 0.003049 | 0 | 1 | 0 | false | 0.021834 | 0.017467 | 0 | 0.017467 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9c26cf2339aa7a4ef06216de7bd0bf3332068b1a | 948 | py | Python | api/src/error_report/models.py | Noahffiliation/corpus-christi | c69ec88784de7d2e5acde3012926f307b43e38b3 | [
"MIT"
] | 35 | 2018-11-29T20:06:52.000Z | 2021-04-12T19:01:42.000Z | api/src/error_report/models.py | Noahffiliation/corpus-christi | c69ec88784de7d2e5acde3012926f307b43e38b3 | [
"MIT"
] | 529 | 2018-12-31T23:51:25.000Z | 2022-02-26T10:42:29.000Z | api/src/error_report/models.py | Noahffiliation/corpus-christi | c69ec88784de7d2e5acde3012926f307b43e38b3 | [
"MIT"
] | 10 | 2018-12-04T16:17:00.000Z | 2021-04-07T00:47:52.000Z | from marshmallow import Schema, fields
from marshmallow.validate import Range, Length
from sqlalchemy import Column, Integer, Boolean, DateTime
from ..db import Base
from ..shared.models import StringTypes
# ---- Error-report
class ErrorReport(Base):
__tablename__ = 'error_report'
id = Column(Integer, primary_key=True)
description = Column(StringTypes.LONG_STRING, nullable=False)
time_stamp = Column(DateTime)
status_code = Column(Integer)
endpoint = Column(StringTypes.MEDIUM_STRING)
solved = Column(Boolean, default=False)
def __repr__(self):
return f"<Error-report(id={self.id})>"
class ErrorReportSchema(Schema):
id = fields.Integer(dump_only=True, required=True, validate=Range(min=1))
description = fields.String(required=True, validate=Length(min=1))
time_stamp = fields.DateTime()
status_code = fields.Integer()
endpoint = fields.String()
solved = fields.Boolean()
| 29.625 | 77 | 0.728903 | 114 | 948 | 5.912281 | 0.447368 | 0.057864 | 0.038576 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002516 | 0.161392 | 948 | 31 | 78 | 30.580645 | 0.845283 | 0.017932 | 0 | 0 | 0 | 0 | 0.043057 | 0.03014 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.227273 | 0.045455 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |