hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4c86d36d1ca7f5676ec707c02279a0b7c737bbd9 | 337 | py | Python | shop_thienhi/utils/format_time.py | Lesson-ThienHi/thienhi_shop | 1c595d70299e1fcce12c3610e27b66c89bbadda6 | [
"MIT"
] | null | null | null | shop_thienhi/utils/format_time.py | Lesson-ThienHi/thienhi_shop | 1c595d70299e1fcce12c3610e27b66c89bbadda6 | [
"MIT"
] | 2 | 2022-03-30T06:34:29.000Z | 2022-03-31T06:34:49.000Z | shop_thienhi/utils/format_time.py | Lesson-ThienHi/thienhi_shop | 1c595d70299e1fcce12c3610e27b66c89bbadda6 | [
"MIT"
] | null | null | null | from datetime import datetime
def format_time_filter():
    start_time = datetime.utcnow().replace(hour=0, minute=0, second=0, microsecond=0).timestamp()
end_time = datetime.utcnow().replace(second=0, microsecond=0).timestamp()
data = {
"start_time": start_time,
"end_time": end_time
}
return data
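# A minimal usage sketch, assuming the module is run directly; the filter
# spans from UTC midnight to the current UTC minute as epoch timestamps.
if __name__ == "__main__":
    window = format_time_filter()
    print(window)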
| 30.636364 | 103 | 0.676558 | 44 | 337 | 5 | 0.454545 | 0.122727 | 0.163636 | 0.172727 | 0.254545 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021898 | 0.186944 | 337 | 10 | 104 | 33.7 | 0.781022 | 0 | 0 | 0 | 0 | 0 | 0.053412 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4c8719fed243367528ac749c01c04b3271e74999 | 923 | py | Python | Algorithms/PCA/solutions.py | lcbendall/numerical_computing | 565cde92525ea44c55abe933c6419c1543f9800b | [
"CC-BY-3.0"
] | null | null | null | Algorithms/PCA/solutions.py | lcbendall/numerical_computing | 565cde92525ea44c55abe933c6419c1543f9800b | [
"CC-BY-3.0"
] | null | null | null | Algorithms/PCA/solutions.py | lcbendall/numerical_computing | 565cde92525ea44c55abe933c6419c1543f9800b | [
"CC-BY-3.0"
] | 1 | 2020-12-08T01:19:23.000Z | 2020-12-08T01:19:23.000Z | import numpy as np
import matplotlib.pyplot as plt
from scipy import linalg as la
def PCA(dat, center=False, percentage=0.8):
M, N = dat.shape
if center:
mu = np.mean(dat,0)
dat -= mu
U, L, Vh = la.svd(dat, full_matrices=False)
V = Vh.T.conjugate()
SIGMA = np.diag(L)
X = U.dot(SIGMA)
Lam = L**2
normalized_eigenvalues = Lam/Lam.sum(dtype=float)
    csum = [normalized_eigenvalues[:i+1].sum() for i in range(N)]  # cumulative explained variance
n_components = [x < percentage for x in csum].index(False) + 1
return (normalized_eigenvalues,
V[:,0:n_components],
SIGMA[0:n_components,0:n_components],
X[:,0:n_components])
def scree(normalized_eigenvalues):
fig = plt.figure()
plt.plot(normalized_eigenvalues,'b-', normalized_eigenvalues, 'bo')
plt.xlabel("Principal Components")
plt.ylabel("Percentage of Variance")
return fig
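# A minimal usage sketch, assuming synthetic data is acceptable: run PCA on
# random samples and draw the scree plot of the normalized eigenvalues.
if __name__ == "__main__":
    data = np.random.rand(100, 5)
    eigenvalues, V, SIGMA, X = PCA(data, center=True, percentage=0.8)
    scree(eigenvalues)
    plt.show()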
| 27.147059 | 71 | 0.630553 | 133 | 923 | 4.285714 | 0.488722 | 0.221053 | 0.084211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014286 | 0.241603 | 923 | 33 | 72 | 27.969697 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0.049837 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.115385 | 0 | 0.269231 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4c8a0d1bb9255782fe923e33bd79defeacecfa0f | 1,298 | py | Python | tests/serialization/test_deserialization/flows/flow_template.py | dazzag24/prefect | 9d36c989c95cbbed091b071932553286edf25bb6 | [
"Apache-2.0"
] | null | null | null | tests/serialization/test_deserialization/flows/flow_template.py | dazzag24/prefect | 9d36c989c95cbbed091b071932553286edf25bb6 | [
"Apache-2.0"
] | null | null | null | tests/serialization/test_deserialization/flows/flow_template.py | dazzag24/prefect | 9d36c989c95cbbed091b071932553286edf25bb6 | [
"Apache-2.0"
] | null | null | null | import datetime
from prefect import task, Flow, Parameter
from prefect.engine.cache_validators import partial_parameters_only
from prefect.environments.execution import RemoteEnvironment
from prefect.environments.storage import Docker
from prefect.engine.result_handlers import JSONResultHandler, S3ResultHandler
from prefect.tasks.shell import ShellTask
@task(max_retries=5, retry_delay=datetime.timedelta(minutes=10))
def root_task():
pass
@task(
cache_for=datetime.timedelta(days=10),
cache_validator=partial_parameters_only(["x"]),
result_handler=JSONResultHandler(),
)
def cached_task(x, y):
pass
x = Parameter("x")
y = Parameter("y", default=42)
@task(name="Big Name", checkpoint=True, result_handler=S3ResultHandler(bucket="blob"))
def terminal_task():
pass
env = RemoteEnvironment(
executor="prefect.engine.executors.DaskExecutor",
executor_kwargs={"scheduler_address": "tcp://"},
)
storage = Docker(
registry_url="prefecthq",
image_name="flows",
image_tag="welcome-flow",
python_dependencies=["boto3"],
)
with Flow("test-serialization", storage=storage, environment=env) as f:
result = cached_task.map(x, y, upstream_tasks=[root_task, root_task])
terminal_task(upstream_tasks=[result, root_task])
f.storage.add_flow(f)
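# A hedged sketch of how this template is presumably exercised by the
# deserialization tests under which it lives (assumed usage, not shown here):
# serialized = f.serialize()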
| 25.45098 | 86 | 0.75963 | 163 | 1,298 | 5.871166 | 0.484663 | 0.068966 | 0.035528 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008764 | 0.120955 | 1,298 | 50 | 87 | 25.96 | 0.829974 | 0 | 0 | 0.083333 | 0 | 0 | 0.095532 | 0.028505 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0.083333 | 0.194444 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
4c8a63609fc662bd88f868ef8238e6f25e44baa6 | 9,616 | py | Python | blog/models.py | wjhgg/DBlog | 59274ac4353068a3795731c3f786748ba9095701 | [
"MulanPSL-1.0"
] | null | null | null | blog/models.py | wjhgg/DBlog | 59274ac4353068a3795731c3f786748ba9095701 | [
"MulanPSL-1.0"
] | null | null | null | blog/models.py | wjhgg/DBlog | 59274ac4353068a3795731c3f786748ba9095701 | [
"MulanPSL-1.0"
] | null | null | null | # -*- coding: utf-8 -*-
import os
from django.contrib.auth.models import AbstractUser
from django.db import models
from django.conf import settings
# Create your models here.
# User
# class User(AbstractUser):
# u_name = models.CharField(max_length=20, verbose_name='昵称', default='')
# birthday = models.DateField(verbose_name='生日', null=True, blank=True)
# genter = models.CharField(max_length=2, choices=(("male", '男'), ('female', '女')), default='male')
# image = models.ImageField(default='images/login/', max_length=200, null=True)
# describe = models.CharField(max_length=500, default='', verbose_name='个性签名')
#
# class Meta:
# verbose_name = '用户信息'
# verbose_name_plural = verbose_name
#
# def __unicode__(self):
# return self.username
#
# # Email verification code
# class EmailVerificationCode(models.Model):
# code = models.CharField(max_length=20, verbose_name=u'验证码')
# email = models.EmailField(max_length=200, verbose_name=u'邮箱')
# send_type = models.CharField(max_length=10, choices=(("register", u'注册'), ("forget", u'密码找回')))
# send_time = models.DateTimeField(auto_now_add=True, )
#
# class Meta:
# verbose_name = u'邮箱验证码'
# verbose_name_plural = verbose_name
from django.db.models.signals import post_delete, post_init, post_save, pre_delete
from django.dispatch import receiver
from django.utils.html import format_html
from mdeditor.fields import MDTextField
class Friend(models.Model):
"""
    Friend link
"""
url = models.CharField(max_length=200, verbose_name='友链链接', default='https://my.oschina.net/chulan')
title = models.CharField(max_length=100, verbose_name='超链接title', default='OSCHINA')
name = models.CharField(max_length=20, verbose_name='友链名称', default='chulan')
class Meta:
verbose_name = '友链'
verbose_name_plural = verbose_name
def __str__(self):
return self.url
class Carousel(models.Model):
"""
    Homepage carousel configuration
"""
carousel = models.ImageField(upload_to='carousel', verbose_name='轮播图')
carousel_title = models.TextField(blank=True, null=True, max_length=100, verbose_name='轮播图左下标题')
img_link_title = models.TextField(blank=True, null=True, max_length=100, verbose_name='图片标题')
img_alt = models.TextField(blank=True, null=True, max_length=100, verbose_name='轮播图alt')
class Meta:
verbose_name = '首页轮播图配置'
verbose_name_plural = verbose_name
def __str__(self):
return self.carousel_title
@receiver(pre_delete, sender=Carousel)
def delete_upload_files(sender, instance, **kwargs):
instance.carousel.delete(False)
@receiver(post_init, sender=Carousel)
def file_path(sender, instance, **kwargs):
instance._current_file = instance.carousel
@receiver(post_save, sender= Carousel)
def delete_old_image(sender, instance, **kwargs):
if hasattr(instance, '_current_file'):
if instance._current_file != instance.carousel.path:
instance._current_file.delete(save=False)
class Announcement(models.Model):
"""
    Announcement
"""
head_announcement = models.CharField(max_length=30, verbose_name='头部轮播公告', default='热烈欢迎浏览本站')
main_announcement = models.TextField(blank=True, null=True, max_length=300, verbose_name='右侧公告', default='暂无公告......')
class Meta:
verbose_name = '公告'
verbose_name_plural = verbose_name
def __str__(self):
return self.head_announcement
class Conf(models.Model):
"""
    Site configuration
"""
main_website = models.CharField(max_length=64, verbose_name='主网站', default="xwboy.top")
name = models.CharField(max_length=8, verbose_name='关注我_名称', default="CL' WU")
chinese_description = models.CharField(max_length=30, verbose_name='关注我_中文描述', default='永不放弃坚持就是这么酷!要相信光')
english_description = models.TextField(max_length=100, verbose_name='关注我_英文描述', default='Never give up persistence is so cool!Believe in the light!!!')
avatar_link = models.CharField(max_length=150, verbose_name='关注我_头像超链接', default='https://avatars.githubusercontent.com/u/52145145?v=4')
website_author = models.CharField(max_length=20, verbose_name='网站作者', default='xiaowu')
website_author_link = models.CharField(max_length=200, verbose_name='网站作者链接', default='http://www.xwboy.top')
email = models.CharField(max_length=50, verbose_name='收件邮箱', default='2186656812@qq.com')
website_number = models.CharField(max_length=100, verbose_name='备案号', default='豫ICP备 2021019092号-1')
git = models.CharField(max_length=100, verbose_name='git链接', default='https://gitee.com/wu_cl')
website_logo = models.ImageField(upload_to='logo', blank=True, null=True, verbose_name='网站logo', default='')
class Meta:
verbose_name = '网站配置'
verbose_name_plural = verbose_name
def __str__(self):
return self.main_website
@receiver(pre_delete, sender=Conf)
def delete_upload_files(sender, instance, **kwargs):
instance.website_logo.delete(False)
@receiver(post_init, sender=Conf)
def file_path(sender, instance, **kwargs):
instance._current_file = instance.website_logo
@receiver(post_save, sender= Conf)
def delete_old_image(sender, instance, **kwargs):
if hasattr(instance, '_current_file'):
if instance._current_file != instance.website_logo.path:
instance._current_file.delete(save=False)
class Pay(models.Model):
"""
    Donation payment image
"""
payimg = models.ImageField(upload_to='pay', blank=True, null=True, verbose_name='捐助收款图')
class Meta:
verbose_name = '捐助收款图'
verbose_name_plural = verbose_name
@receiver(pre_delete, sender=Pay)
def delete_upload_files(sender, instance, **kwargs):
instance.payimg.delete(False)
@receiver(post_init, sender=Pay)
def file_path(sender, instance, **kwargs):
instance._current_file = instance.payimg
@receiver(post_save, sender= Pay)
def delete_old_image(sender, instance, **kwargs):
if hasattr(instance, '_current_file'):
if instance._current_file != instance.payimg.path:
instance._current_file.delete(save=False)
class Tag(models.Model):
"""
    Tag
"""
tag_name = models.CharField('标签名称', max_length=30, )
class Meta:
verbose_name = '标签'
verbose_name_plural = verbose_name
def __str__(self):
return self.tag_name
class Article(models.Model):
"""
    Article
"""
    title = models.CharField(max_length=200, verbose_name='文章标题')  # blog title
category = models.ForeignKey('Category', verbose_name='文章类型', on_delete=models.CASCADE)
date_time = models.DateField(auto_now_add=True, verbose_name='创建时间')
content = MDTextField(blank=True, null=True, verbose_name='文章正文')
digest = models.TextField(blank=True, null=True, verbose_name='文章摘要')
author = models.ForeignKey(settings.AUTH_USER_MODEL, verbose_name='作者', on_delete=models.CASCADE)
view = models.BigIntegerField(default=0, verbose_name='阅读数')
comment = models.BigIntegerField(default=0, verbose_name='评论数')
    picture = models.ImageField(upload_to='article_picture', blank=True, null=True, verbose_name='url(标题图)')  # title image URL
    tag = models.ManyToManyField(Tag)  # tags
class Meta:
        ordering = ['-date_time']  # newest first (descending by time)
verbose_name = '博客文章'
verbose_name_plural = verbose_name
def sourceUrl(self):
source_url = settings.HOST + '/blog/detail/{id}'.format(id=self.pk)
return source_url
def content_validity(self):
"""
        Truncate the body text for display
"""
        if len(str(self.content)) > 40:  # adjust the character limit as needed
            return '{}……'.format(str(self.content)[0:40])  # overflow is replaced with an ellipsis
else:
return str(self.content)
def viewed(self):
"""
        Increment the view count
:return:
"""
self.view += 1
self.save(update_fields=['view'])
def commenced(self):
"""
        Increment the comment count
:return:
"""
self.comment += 1
self.save(update_fields=['comment'])
def __str__(self):
return self.title
# These signal receivers need to come last
# Delete the uploaded file along with the record
@receiver(pre_delete, sender=Article)
def delete_upload_files(sender, instance, **kwargs):
"""
    sender: the model class
    instance.<field name>
"""
instance.picture.delete(False)
# Replace the old file when the field is updated
@receiver(post_init, sender=Article)
def file_path(sender, instance, **kwargs):
"""
    instance.<field name>
"""
instance._current_file = instance.picture
@receiver(post_save, sender= Article)
def delete_old_image(sender, instance, **kwargs):
"""
    instance.<field name>.path
"""
if hasattr(instance, '_current_file'):
if instance._current_file != instance.picture.path:
instance._current_file.delete(save=False)
class Category(models.Model):
"""
    Article category
"""
name = models.CharField('文章类型', max_length=30)
created_time = models.DateTimeField('创建时间', auto_now_add=True)
last_mod_time = models.DateTimeField('修改时间', auto_now=True)
class Meta:
ordering = ['name']
verbose_name = "文章类型"
verbose_name_plural = verbose_name
def __str__(self):
return self.name
class Comment(models.Model):
"""
    Comment
"""
title = models.CharField("标题", max_length=100)
source_id = models.CharField('文章id或source名称', max_length=25)
create_time = models.DateTimeField('评论时间', auto_now=True)
user_name = models.CharField('评论用户', max_length=25)
url = models.CharField('链接', max_length=100)
comment = models.TextField('评论内容', max_length=500)
class Meta:
ordering = ['create_time']
verbose_name = '评论'
verbose_name_plural = verbose_name
def __str__(self):
return self.title
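# A minimal usage sketch, assuming a configured Django project with an
# existing Category `cat` and User `author` (both hypothetical names):
# article = Article.objects.create(title="Hello", category=cat, author=author)
# article.viewed()     # bumps `view` and persists only that field
# article.commenced()  # bumps `comment` the same way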
| 32.819113 | 155 | 0.683444 | 1,196 | 9,616 | 5.266722 | 0.225753 | 0.118749 | 0.054294 | 0.072392 | 0.428639 | 0.381489 | 0.310367 | 0.232259 | 0.162565 | 0.162565 | 0 | 0.015577 | 0.185524 | 9,616 | 292 | 156 | 32.931507 | 0.787921 | 0.131344 | 0 | 0.296296 | 0 | 0 | 0.082142 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148148 | false | 0 | 0.049383 | 0.049383 | 0.62963 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
4c8dcae1615bebff8006d7fba1a12425b310ad35 | 477 | py | Python | engines/factory.py | valeoai/BEEF | f1c5f3708ba91f6402dd05814b76dca1d9012942 | [
"Apache-2.0"
] | 4 | 2021-05-31T16:53:35.000Z | 2021-11-30T03:03:34.000Z | engines/factory.py | valeoai/BEEF | f1c5f3708ba91f6402dd05814b76dca1d9012942 | [
"Apache-2.0"
] | 3 | 2022-02-02T20:41:56.000Z | 2022-02-24T11:47:44.000Z | engines/factory.py | valeoai/BEEF | f1c5f3708ba91f6402dd05814b76dca1d9012942 | [
"Apache-2.0"
] | null | null | null | from bootstrap.lib.options import Options
from bootstrap.lib.logger import Logger
from .extract_engine import ExtractEngine
from .predict_engine import PredictEngine
def factory():
if Options()['engine']['name'] == 'extract':
engine = ExtractEngine()
elif Options()['engine']['name'] == 'predict':
opt = Options()['engine']
engine = PredictEngine(vid_id=opt.get('vid_id', None))
else:
        raise ValueError("unknown engine name: {!r}".format(Options()['engine']['name']))
return engine | 34.071429 | 63 | 0.660377 | 53 | 477 | 5.867925 | 0.471698 | 0.125402 | 0.102894 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.213836 | 477 | 14 | 64 | 34.071429 | 0.829333 | 0 | 0 | 0 | 0 | 0 | 0.098925 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.307692 | 0 | 0.461538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
4c9417844003b03d92633f2f16b78fb62fd56a2d | 1,996 | py | Python | appreview/migrations/0001_initial.py | IsaiahKe/awward-mimic | 8a5ff40d9acfbdc5323c7e9b6b8e7438f9a85d21 | [
"MIT"
] | null | null | null | appreview/migrations/0001_initial.py | IsaiahKe/awward-mimic | 8a5ff40d9acfbdc5323c7e9b6b8e7438f9a85d21 | [
"MIT"
] | null | null | null | appreview/migrations/0001_initial.py | IsaiahKe/awward-mimic | 8a5ff40d9acfbdc5323c7e9b6b8e7438f9a85d21 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.7 on 2021-09-22 09:28
import cloudinary.models
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import phonenumber_field.modelfields
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='AppVote',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('appname', models.CharField(max_length=30)),
('appimage', cloudinary.models.CloudinaryField(max_length=255, verbose_name='image')),
('author', models.CharField(max_length=30)),
('livelink', models.URLField(null=True)),
('design', models.DecimalField(decimal_places=2, default=0.0, max_digits=3)),
('usability', models.DecimalField(decimal_places=2, default=0.0, max_digits=3)),
('content', models.DecimalField(decimal_places=2, default=0.0, max_digits=3)),
('total', models.DecimalField(decimal_places=2, default=0.0, max_digits=4)),
],
),
migrations.CreateModel(
name='Profile',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('userPhoto', cloudinary.models.CloudinaryField(max_length=255, verbose_name='image')),
('bio', models.TextField()),
('contact', phonenumber_field.modelfields.PhoneNumberField(max_length=128, null=True, region=None)),
('location', models.CharField(blank=True, max_length=30, null=True)),
('username', models.OneToOneField(null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
]
| 44.355556 | 136 | 0.627255 | 215 | 1,996 | 5.683721 | 0.404651 | 0.04419 | 0.081833 | 0.101473 | 0.434534 | 0.39198 | 0.39198 | 0.39198 | 0.39198 | 0.295417 | 0 | 0.030283 | 0.238978 | 1,996 | 44 | 137 | 45.363636 | 0.774194 | 0.022545 | 0 | 0.27027 | 1 | 0 | 0.063109 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.135135 | 0 | 0.243243 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4ca69d037973302f62772df73b1764080320eb80 | 1,066 | py | Python | ahye/lib.py | kopf/ahye | 75ab5f3f901feb85a7779365f42e86f76d68083f | [
"Apache-2.0"
] | 2 | 2015-03-29T10:21:36.000Z | 2015-11-14T15:36:42.000Z | ahye/lib.py | kopf/ahye | 75ab5f3f901feb85a7779365f42e86f76d68083f | [
"Apache-2.0"
] | null | null | null | ahye/lib.py | kopf/ahye | 75ab5f3f901feb85a7779365f42e86f76d68083f | [
"Apache-2.0"
] | null | null | null | import magic
import os
import random
import string
from ahye.settings import LOCAL_UPLOADS_DIR
def generate_filename(image_data, detect_extension=True):
alphanum = string.ascii_letters + string.digits
retval = ''
while not retval or os.path.exists(os.path.join(LOCAL_UPLOADS_DIR, retval)):
retval = ''.join(random.sample(alphanum, 8))
if detect_extension:
retval += get_file_extension(image_data)
else:
retval += '.png'
return retval
def get_file_extension(image_data):
s = magic.from_buffer(image_data)
if s.startswith('JPEG'):
return '.jpg'
elif s.startswith('GIF'):
return '.gif'
elif s.startswith('PNG'):
return '.png'
def guess_file_extension(url):
""" Used by the image mirroring service """
url = url.lower()
if '.jpg' in url or '.jpeg' in url:
return '.jpg'
elif '.gif' in url:
return '.gif'
elif '.png' in url:
return '.png'
elif '.svg' in url:
return '.svg'
else:
return '.jpg'
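# A minimal usage sketch: guess_file_extension is pure string matching, so
# it can be demonstrated without touching the filesystem.
if __name__ == "__main__":
    assert guess_file_extension("http://example.com/cat.PNG") == ".png"
    assert guess_file_extension("http://example.com/archive") == ".jpg"  # fallback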
| 25.380952 | 81 | 0.616323 | 139 | 1,066 | 4.589928 | 0.395683 | 0.039185 | 0.068966 | 0.065831 | 0.07837 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001277 | 0.265478 | 1,066 | 41 | 82 | 26 | 0.813538 | 0.032833 | 0 | 0.257143 | 1 | 0 | 0.065494 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085714 | false | 0 | 0.142857 | 0 | 0.485714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4ca9ec6965d0d2705091310ae77f83d79c68ebb5 | 2,595 | py | Python | nn_interpretability/interpretation/deconv/deconv_partial_reconstruction.py | miquelmn/nn_interpretability | 2b5d2b4102016189743e09f1f3a56f2ecddfde98 | [
"MIT"
] | 41 | 2020-10-13T18:46:32.000Z | 2022-02-21T15:52:50.000Z | nn_interpretability/interpretation/deconv/deconv_partial_reconstruction.py | miquelmn/nn_interpretability | 2b5d2b4102016189743e09f1f3a56f2ecddfde98 | [
"MIT"
] | 4 | 2021-07-11T12:38:03.000Z | 2022-03-08T14:47:38.000Z | nn_interpretability/interpretation/deconv/deconv_partial_reconstruction.py | miquelmn/nn_interpretability | 2b5d2b4102016189743e09f1f3a56f2ecddfde98 | [
"MIT"
] | 7 | 2020-10-21T13:03:16.000Z | 2022-03-07T11:45:00.000Z | import torch
import torch.nn as nn
from torch.nn import Module
from torchvision import transforms
from nn_interpretability.interpretation.deconv.deconv_base import DeconvolutionBase
class DeconvolutionPartialReconstruction(DeconvolutionBase):
"""
Partial Input Reconstruction Deconvolution is a decision-based interpretability method
which aims to partially recreate the input from the output of the model by using only
a single filter in a layer of choice. The procedure is executed for every filter
in the chosen layer.
"""
def __init__(self, model: Module, classes: [str], preprocess: transforms.Compose, layer_number):
"""
:param model: The model the decisions of which needs to be interpreted.
:param classes: A collection of all classes that the given model can classify
:param preprocess: The preprocessing functions that need to be invoked for the model input.
:param layer_number: The number of the convolutional layer for which the procedure should be executed.
For example, 1 for the first CONV layer. 2 for the second CONV layer and so on.
"""
DeconvolutionBase.__init__(self, model, classes, preprocess)
self.layer_number = layer_number
if self.layer_number <= 0:
raise ValueError("Layer number can not be negative!")
def interpret(self, x):
x = self._execute_preprocess(x)
results = []
layer_index = -1
counter = self.layer_number
for i, layer in enumerate(self.layers):
if isinstance(layer, nn.Conv2d):
counter -= 1
if counter == 0:
layer_index = i
break
if layer_index < 0:
raise ValueError("Layer number is not valid!")
filters_count = self.layers[layer_index].weight.size()[0]
for i in range(filters_count):
new_weights = torch.zeros(self.layers[layer_index].weight.size()).to(self.device)
new_weights[i] = self.layers[layer_index].weight[i].clone().to(self.device)
self.transposed_layers[len(self.transposed_layers) - layer_index - 1].weight = torch.nn.Parameter(new_weights).to(self.device)
y, max_pool_indices, prev_size, view_resize = self._execute_model_forward_pass(x)
y = self._execute_transposed_model_forward_pass(y, max_pool_indices, prev_size, view_resize)
y = y.detach().cpu()
y = (y - y.min()) / (y.max() - y.min())
results.append(y)
return results
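# A minimal usage sketch, assuming `model`, `classes`, `preprocess` and a
# preprocessed input `img` are defined elsewhere (hypothetical names):
# deconv = DeconvolutionPartialReconstruction(model, classes, preprocess, layer_number=1)
# reconstructions = deconv.interpret(img)  # one partial reconstruction per filter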
| 43.25 | 138 | 0.660886 | 336 | 2,595 | 4.958333 | 0.375 | 0.052821 | 0.038415 | 0.036014 | 0.123649 | 0.07563 | 0.039616 | 0.039616 | 0 | 0 | 0 | 0.0052 | 0.25896 | 2,595 | 59 | 139 | 43.983051 | 0.861154 | 0.27553 | 0 | 0 | 0 | 0 | 0.032851 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057143 | false | 0.057143 | 0.142857 | 0 | 0.257143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
4cb24a662344c757d394dd28aa505276b9b46ee7 | 971 | py | Python | saleor/graphql/account/dataloaders.py | fairhopeweb/saleor | 9ac6c22652d46ba65a5b894da5f1ba5bec48c019 | [
"CC-BY-4.0"
] | 15,337 | 2015-01-12T02:11:52.000Z | 2021-10-05T19:19:29.000Z | saleor/graphql/account/dataloaders.py | fairhopeweb/saleor | 9ac6c22652d46ba65a5b894da5f1ba5bec48c019 | [
"CC-BY-4.0"
] | 7,486 | 2015-02-11T10:52:13.000Z | 2021-10-06T09:37:15.000Z | saleor/graphql/account/dataloaders.py | aminziadna/saleor | 2e78fb5bcf8b83a6278af02551a104cfa555a1fb | [
"CC-BY-4.0"
] | 5,864 | 2015-01-16T14:52:54.000Z | 2021-10-05T23:01:15.000Z | from collections import defaultdict
from ...account.models import Address, CustomerEvent, User
from ..core.dataloaders import DataLoader
class AddressByIdLoader(DataLoader):
context_key = "address_by_id"
def batch_load(self, keys):
address_map = Address.objects.in_bulk(keys)
return [address_map.get(address_id) for address_id in keys]
class UserByUserIdLoader(DataLoader):
context_key = "user_by_id"
def batch_load(self, keys):
user_map = User.objects.in_bulk(keys)
return [user_map.get(user_id) for user_id in keys]
class CustomerEventsByUserLoader(DataLoader):
context_key = "customer_events_by_user"
def batch_load(self, keys):
events = CustomerEvent.objects.filter(user_id__in=keys)
events_by_user_map = defaultdict(list)
for event in events:
events_by_user_map[event.user_id].append(event)
return [events_by_user_map.get(user_id, []) for user_id in keys]
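# A minimal usage sketch, assuming a graphene resolver context `info`
# (hypothetical); instantiating a loader with the request context batches
# the per-field lookups:
# AddressByIdLoader(info.context).load(address_id)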
| 30.34375 | 72 | 0.725026 | 132 | 971 | 5.045455 | 0.287879 | 0.054054 | 0.048048 | 0.072072 | 0.264264 | 0.165165 | 0.165165 | 0.093093 | 0.093093 | 0.093093 | 0 | 0 | 0.189495 | 971 | 31 | 73 | 31.322581 | 0.846252 | 0 | 0 | 0.142857 | 0 | 0 | 0.047374 | 0.023687 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
4cb35be46e8b753fc4c3da524508ad7692d3c234 | 319 | py | Python | numba/__init__.py | teoliphant/numba | a2a05737b306853c86c61ef6620c2cc43cb28c18 | [
"BSD-2-Clause"
] | 3 | 2015-08-28T21:13:58.000Z | 2022-01-21T17:02:14.000Z | numba/__init__.py | teoliphant/numba | a2a05737b306853c86c61ef6620c2cc43cb28c18 | [
"BSD-2-Clause"
] | null | null | null | numba/__init__.py | teoliphant/numba | a2a05737b306853c86c61ef6620c2cc43cb28c18 | [
"BSD-2-Clause"
] | null | null | null | import sys
try:
from . import minivect
except ImportError:
print >>sys.stderr, "Did you forget to update submodule minivect?"
print >>sys.stderr, "Run 'git submodule init' followed by 'git submodule update'"
raise
from . import _numba_types
from ._numba_types import *
__all__ = _numba_types.__all__
| 22.785714 | 85 | 0.733542 | 43 | 319 | 5.116279 | 0.55814 | 0.136364 | 0.127273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.188088 | 319 | 13 | 86 | 24.538462 | 0.849421 | 0 | 0 | 0 | 0 | 0 | 0.322884 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
4cc2cc43040196bd3c73760172314b2b65f1c12f | 602 | py | Python | project/server/main/views.py | jkassel/cerebro | 387cdde4e5b95ca30b14d05526bc6357e5cfd418 | [
"MIT"
] | null | null | null | project/server/main/views.py | jkassel/cerebro | 387cdde4e5b95ca30b14d05526bc6357e5cfd418 | [
"MIT"
] | null | null | null | project/server/main/views.py | jkassel/cerebro | 387cdde4e5b95ca30b14d05526bc6357e5cfd418 | [
"MIT"
] | null | null | null | # project/server/main/views.py
import os
#################
#### imports ####
#################
from flask import render_template, Blueprint
from project.server import app
################
#### config ####
################
main_blueprint = Blueprint('main', __name__,)
################
#### routes ####
################
@main_blueprint.route('/')
def home():
#env = os.environ['APP_SETTINGS']
env = app.config.get('APP_SETTINGS')
return render_template('main/home.html', environment=env)
@main_blueprint.route("/about/")
def about():
return render_template("main/about.html")
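# A minimal usage sketch, assuming Flask's test client:
# client = app.test_client()
# assert client.get("/about/").status_code == 200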
| 17.705882 | 61 | 0.566445 | 62 | 602 | 5.306452 | 0.451613 | 0.12766 | 0.109422 | 0.145897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126246 | 602 | 33 | 62 | 18.242424 | 0.625475 | 0.141196 | 0 | 0 | 0 | 0 | 0.135204 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.272727 | 0.090909 | 0.636364 | 0.363636 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
4cc9907c3e6982c53be1c37022a333762d1c73f3 | 473 | py | Python | users/migrations/0010_auto_20200321_1902.py | jakubzadrozny/hackcrisis | 4fe27423cda013bf01d5e9d3fc734c707f06b708 | [
"MIT"
] | null | null | null | users/migrations/0010_auto_20200321_1902.py | jakubzadrozny/hackcrisis | 4fe27423cda013bf01d5e9d3fc734c707f06b708 | [
"MIT"
] | 4 | 2021-03-19T01:03:55.000Z | 2021-06-10T18:44:03.000Z | users/migrations/0010_auto_20200321_1902.py | jakubzadrozny/hackcrisis | 4fe27423cda013bf01d5e9d3fc734c707f06b708 | [
"MIT"
] | null | null | null | # Generated by Django 3.0.4 on 2020-03-21 19:02
from django.db import migrations
import phonenumber_field.modelfields
class Migration(migrations.Migration):
dependencies = [
('users', '0009_auto_20200321_1438'),
]
operations = [
migrations.AlterField(
model_name='customuser',
name='phone',
field=phonenumber_field.modelfields.PhoneNumberField(max_length=128, region=None, unique=True),
),
]
| 23.65 | 107 | 0.655391 | 51 | 473 | 5.941176 | 0.803922 | 0.105611 | 0.178218 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094708 | 0.241015 | 473 | 19 | 108 | 24.894737 | 0.749304 | 0.095137 | 0 | 0 | 1 | 0 | 0.100939 | 0.053991 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4cd0870f8e1c2e5c492adaf82b4a9329b5b17f1d | 5,925 | py | Python | zplane.py | m1ch/pysim | 58b806d55585d785156813afa572741bfca6e3f1 | [
"MIT"
] | null | null | null | zplane.py | m1ch/pysim | 58b806d55585d785156813afa572741bfca6e3f1 | [
"MIT"
] | null | null | null | zplane.py | m1ch/pysim | 58b806d55585d785156813afa572741bfca6e3f1 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Combination of
http://scipy-central.org/item/52/1/zplane-function
and
http://www.dsprelated.com/showcode/244.php
with my own modifications
"""
# Copyright (c) 2011 Christopher Felton
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# The following is derived from the slides presented by
# Alexander Kain for CS506/606 "Special Topics: Speech Signal Processing"
# CSLU / OHSU, Spring Term 2011.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import patches
from matplotlib.pyplot import axvline, axhline
from collections import defaultdict
def zplane(z, p, filename=None):
"""Plot the complex z-plane given zeros and poles.
"""
# get a figure/plot
ax = plt.subplot(2, 2, 1)
# TODO: should just inherit whatever subplot it's called in?
# Add unit circle and zero axes
unit_circle = patches.Circle((0,0), radius=1, fill=False,
color='black', ls='solid', alpha=0.1)
ax.add_patch(unit_circle)
axvline(0, color='0.7')
axhline(0, color='0.7')
# Plot the poles and set marker properties
poles = plt.plot(p.real, p.imag, 'x', markersize=9, alpha=0.5)
# Plot the zeros and set marker properties
zeros = plt.plot(z.real, z.imag, 'o', markersize=9,
color='none', alpha=0.5,
markeredgecolor=poles[0].get_color(), # same color as poles
)
# Scale axes to fit
r = 1.5 * np.amax(np.concatenate((abs(z), abs(p), [1])))
plt.axis('scaled')
plt.axis([-r, r, -r, r])
# ticks = [-1, -.5, .5, 1]
# plt.xticks(ticks)
# plt.yticks(ticks)
"""
If there are multiple poles or zeros at the same point, put a
superscript next to them.
TODO: can this be made to self-update when zoomed?
"""
# Finding duplicates by same pixel coordinates (hacky for now):
poles_xy = ax.transData.transform(np.vstack(poles[0].get_data()).T)
zeros_xy = ax.transData.transform(np.vstack(zeros[0].get_data()).T)
# dict keys should be ints for matching, but coords should be floats for
# keeping location of text accurate while zooming
# TODO make less hacky, reduce duplication of code
d = defaultdict(int)
coords = defaultdict(tuple)
for xy in poles_xy:
key = tuple(np.rint(xy).astype('int'))
d[key] += 1
coords[key] = xy
print(d)
for key, value in d.items():
if value > 1:
x, y = ax.transData.inverted().transform(coords[key])
plt.text(x, y,
r' ${}^{' + str(value) + '}$',
fontsize=13,
)
d = defaultdict(int)
coords = defaultdict(tuple)
for xy in zeros_xy:
key = tuple(np.rint(xy).astype('int'))
d[key] += 1
coords[key] = xy
for key, value in d.items():
if value > 1:
x, y = ax.transData.inverted().transform(coords[key])
plt.text(x, y,
r' ${}^{' + str(value) + '}$',
fontsize=13,
)
if filename is None:
plt.show()
else:
plt.savefig(filename)
print( 'Pole-zero plot saved to ' + str(filename))
if __name__ == "__main__":
from scipy.signal import (freqz, butter, bessel, cheby1, cheby2, ellip,
tf2zpk, zpk2tf, lfilter, buttap, bilinear, cheb2ord, cheb2ap
)
from numpy import asarray, tan, array, pi, arange, cos, log10, unwrap, angle
from matplotlib.pyplot import (stem, title, grid, show, plot, xlabel,
ylabel, subplot, xscale, figure, xlim,
margins)
# # Cosine function
# omega = pi/4
# b = array([1.0, -cos(omega)])
# a = array([1, -2*cos(omega), 1.0])
b, a = butter(2, [0.06, 0.7], 'bandpass')
# Get the poles and zeros
z, p, k = tf2zpk(b, a)
# Create zero-pole plot
figure(figsize=(16, 9))
subplot(2, 2, 1)
zplane(z, p)
grid(True, color='0.9', linestyle='-', which='both', axis='both')
title('Poles and zeros')
# Display zeros, poles and gain
print( str(len(z)) + " zeros: " + str(z))
print( str(len(p)) + " poles: " + str(p))
print( "gain: " + str(k))
# Impulse response
index = arange(0,20)
u = 1.0*(index==0)
y = lfilter(b, a, u)
subplot(2, 2, 3)
stem(index,y)
title('Impulse response')
margins(0, 0.1)
grid(True, color='0.9', linestyle='-', which='both', axis='both')
show()
# Frequency response
w, h = freqz(b, a)
subplot(2, 2, 2)
plot(w/pi, 20*log10(abs(h)))
xscale('log')
title('Frequency response')
xlabel('Normalized frequency')
ylabel('Amplitude [dB]')
margins(0, 0.1)
grid(True, color = '0.7', linestyle='-', which='major', axis='both')
grid(True, color = '0.9', linestyle='-', which='minor', axis='both')
show()
# Phase
subplot(2, 2, 4)
plot(w/pi, 180/pi * unwrap(angle(h)))
xscale('log')
xlabel('Normalized frequency')
ylabel('Phase [degrees]')
grid(True, color = '0.7', linestyle='-', which='major')
grid(True, color = '0.9', linestyle='-', which='minor')
show() | 32.377049 | 90 | 0.585485 | 825 | 5,925 | 4.18303 | 0.373333 | 0.013909 | 0.022602 | 0.024341 | 0.220226 | 0.220226 | 0.193567 | 0.173863 | 0.128658 | 0.103159 | 0 | 0.029886 | 0.277131 | 5,925 | 183 | 91 | 32.377049 | 0.775858 | 0.30346 | 0 | 0.33 | 0 | 0 | 0.077749 | 0 | 0 | 0 | 0 | 0.016393 | 0 | 1 | 0.01 | false | 0.01 | 0.08 | 0 | 0.09 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4cdd1fd0a18fed3da4d3c58601225a03c0e5fbd6 | 1,571 | py | Python | test.py | ThomDietrich/singletonify-python | 11cc56237095544c61c4d45bb61f1a7824da19dc | [
"MIT"
] | 3 | 2018-10-08T07:01:15.000Z | 2019-12-12T03:48:53.000Z | test.py | ThomDietrich/singletonify-python | 11cc56237095544c61c4d45bb61f1a7824da19dc | [
"MIT"
] | 1 | 2021-05-19T00:04:48.000Z | 2021-06-01T17:11:05.000Z | test.py | ThomDietrich/singletonify-python | 11cc56237095544c61c4d45bb61f1a7824da19dc | [
"MIT"
] | 1 | 2021-06-01T16:35:57.000Z | 2021-06-01T16:35:57.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2017~2999 - cologler <skyoflw@gmail.com>
# ----------
#
# ----------
from pytest import raises
from singletonify import singleton
def test_base():
@singleton()
class A:
pass
assert not A._is_init()
assert A() is A()
assert A._is_init()
def test_with_args():
@singleton(x='s')
class A:
def __init__(self, x):
self.x = x
assert A() is A()
assert A().x == 's'
def test_instance_check():
@singleton()
class A:
pass
assert isinstance(A(), A)
def test_subclass_check():
class B:
pass
@singleton()
class A(B):
pass
assert issubclass(A, B)
def test_multi_apply():
@singleton()
class A:
pass
@singleton()
class B:
pass
assert A() is A()
assert B() is B()
assert A() is not B()
def test_with_slots():
@singleton()
class D:
pass
@singleton()
class S:
__slots__ = ('buffer', )
assert hasattr(D(), '__dict__')
assert not hasattr(S(), '__dict__')
def test_inherit():
class B:
pass
@singleton()
class A(B):
pass
assert A() is A()
assert B() is not B()
assert A() is not B()
assert type(A()) is A
assert isinstance(A(), A)
def test_inherit_from_singleton():
@singleton()
class B:
pass
# cannot inherit
with raises(TypeError, match='cannot inherit from a singleton class'):
@singleton()
class A(B):
pass
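# A minimal usage sketch of the decorator under test, mirroring test_base
# (hypothetical class name):
# @singleton()
# class Config:
#     pass
# assert Config() is Config()  # every call yields the same instance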
| 16.536842 | 74 | 0.54233 | 202 | 1,571 | 4.044554 | 0.267327 | 0.188494 | 0.077111 | 0.0612 | 0.348837 | 0.270502 | 0.133415 | 0.133415 | 0.133415 | 0 | 0 | 0.00838 | 0.316359 | 1,571 | 94 | 75 | 16.712766 | 0.752328 | 0.085296 | 0 | 0.606061 | 0 | 0 | 0.042687 | 0 | 0 | 0 | 0 | 0 | 0.257576 | 1 | 0.136364 | false | 0.166667 | 0.030303 | 0 | 0.378788 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
4cdde0fb0db22226b5857fab163db859a979f97e | 7,440 | py | Python | sf2_to_dex.py | rupa/sf2_to_dex | d7e074b0332d668385b4a955e3509dd4fbe0f55c | [
"MIT"
] | null | null | null | sf2_to_dex.py | rupa/sf2_to_dex | d7e074b0332d668385b4a955e3509dd4fbe0f55c | [
"MIT"
] | null | null | null | sf2_to_dex.py | rupa/sf2_to_dex | d7e074b0332d668385b4a955e3509dd4fbe0f55c | [
"MIT"
] | null | null | null | #!/usr/bin/env python
"""
cleanup and refactor -> pretty much a rewrite
soundfonts are messy, you gotta kind of figure out where the note names
and velocities are in sample name. usually the pitch info is wack
"""
from chunk import Chunk
import logging
import os
import re
import struct
import wave
logging.basicConfig(level=logging.INFO)
SAMPLE_TYPES = {1: 'mono', 2: 'right', 4: 'left', 8: 'linked'}
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
ENHARMONICS = {
'Db': 'C#',
'Eb': 'D#',
'Gb': 'F#',
'Ab': 'G#',
'Bb': 'A#',
}
def _read_dword(f):
return struct.unpack('<i', f.read(4))[0]
def _read_word(f):
return struct.unpack('<h', f.read(2))[0]
def _read_byte(f):
return struct.unpack('<b', f.read(1))[0]
def _write_dword(f, v):
f.write(struct.pack('<i', v))
def _write_word(f, v):
f.write(struct.pack('<h', v))
class SfSample:
def __init__(self):
pass
def __str__(self):
return self.name
def __repr__(self):
return 'SfSample(name="{}",start={})'.format(self.name, self.start)
def parse_sf2(sf2file):
samples = []
with open(sf2file, 'rb') as f:
chfile = Chunk(f)
_ = chfile.getname() # riff
_ = chfile.read(4) # WAVE
while 1:
try:
chunk = Chunk(chfile, bigendian=0)
except EOFError:
break
name = chunk.getname()
if name == 'smpl':
sample_data_start = chfile.tell() + 8
logging.debug('samples start: {}'.format(sample_data_start))
chunk.skip()
elif name == 'shdr':
for i in range((chunk.chunksize / 46) - 1):
s = SfSample()
s.name = chfile.read(20).rstrip('\0')
s.start = _read_dword(chfile)
s.end = _read_dword(chfile)
s.startLoop = _read_dword(chfile)
s.endLoop = _read_dword(chfile)
s.sampleRate = _read_dword(chfile)
s.pitch = _read_byte(chfile)
s.correction = _read_byte(chfile)
s.link = _read_word(chfile)
s.type = _read_word(chfile)
samples.append(s)
chfile.read(46)
elif name == 'LIST':
_ = chfile.read(4)
else:
chunk.skip()
for s in samples:
type_name = SAMPLE_TYPES[s.type & 0x7fff]
logging.debug('{} {} {} {} {} {} {} {} {} {}'.format(
s.name,
type_name,
s.pitch,
s.start,
s.end,
s.startLoop,
s.endLoop,
s.sampleRate,
s.correction,
s.link
))
return samples, sample_data_start
def write_loop(filename):
with open(filename, 'r+b') as f:
f.seek(4)
riff_size = _read_dword(f)
f.seek(4)
_write_dword(f, riff_size + 0x76)
f.seek(8 + riff_size)
_write_dword(f, 0x20657563) # 'cue '
_write_dword(f, 0x34)
_write_dword(f, 0x2) # num cues
_write_dword(f, 0x1) # id
_write_dword(f, s.startLoop-s.start) # position
_write_dword(f, 0x61746164) # 'data'
_write_dword(f, 0x0)
_write_dword(f, 0x0)
_write_dword(f, s.startLoop-s.start) # position
_write_dword(f, 0x2) # id
_write_dword(f, s.endLoop-s.start) # position
_write_dword(f, 0x61746164) # 'data'
_write_dword(f, 0x0)
_write_dword(f, 0x0)
_write_dword(f, s.endLoop-s.start) # position
_write_dword(f, 0x5453494C) # 'LIST'
_write_dword(f, 0x32)
_write_dword(f, 0x6C746461) # 'adtl'
_write_dword(f, 0x6C62616C) # 'labl'
_write_dword(f, 0x10)
_write_dword(f, 0x1) # id
_write_dword(f, 0x706F6F4C) # 'Loop'
_write_dword(f, 0x61745320) # ' Sta'
_write_dword(f, 0x7472) # 'rt'
_write_dword(f, 0x6C62616C) # 'labl'
_write_dword(f, 0x0E)
_write_dword(f, 0x2) # id
_write_dword(f, 0x706F6F4C) # 'Loop'
_write_dword(f, 0x646E4520) # ' End'
_write_word(f, 0x0)
f.close()
if __name__ == '__main__':
import sys
sf2file = sys.argv[1]
samples, sample_data_start = parse_sf2(sf2file)
F = open(sf2file, 'rb')
F2 = open(sf2file, 'rb')
# make a dir for our samples
folder_name = os.path.basename(sf2file).split('.')[0]
folder_name = "".join(x for x in folder_name if x.isalnum() or x == ' ')
if not os.path.exists(folder_name):
os.mkdir(folder_name)
os.chdir(folder_name)
for i, s in enumerate(samples):
# Here's where we gotta get creative, depending on the soundfont
type_name = SAMPLE_TYPES[s.type & 0x7fff]
# mono or L, we'll pick up R channel via s.link
if s.type not in [1, 4]:
# print 'skipping', type_name, s.name
continue
# os impl
"""
filename = "".join(x for x in s.name if x.isalnum())
filename += '_'
filename += note_names[s.pitch % 12]
filename += str((s.pitch/12) - 1)
filename += '.wav'
"""
# Steinway B-JNv2.0.sf2
"""
n, note, end = re.split('([ABCDEFG]#?[0123456789])', s.name)
filename = '{}_{}.wav'.format(s.name.strip().replace(' ', ''), note)
"""
# Chateau Grand-v1.8.sf2
"""
pre, note, end = re.split('([ABCDEFG]#?[0123456789])', s.name)
vel_match = re.findall('([01234567])L', end)
if not vel_match:
continue
filename = 'Chateau_{}_V{}.wav'.format(note, vel_match[0])
"""
# Rhodes EPs Plus-JN1.5.sf2
"""
if not s.name.startswith('RHODES'):
continue
pre, note, end = re.split('([ABCDEFG]#?[0123456789])', s.name)
filename = '{}_{}_V{}.wav'.format(s.name.replace(' ', '-'), note, end.strip())
filename = 'RHODES_{}_V{}.wav'.format(note, end.strip())
"""
# Nice-Steinway-v3.8.sf2
"""
note, lvl = re.search('([ABCDEFG][#b]?)([0123456789]+)', s.name).groups()
note = ENHARMONICS.get(note, note)
filename = 'Piano.ff.{}_V{}.wav'.format(note, lvl)
"""
print '[{}]\t-> [{}]'.format(s.name, filename)
continue
# once we're ok with filenames, write a file
g = wave.open(filename, 'w')
g.setsampwidth(2)
g.setframerate(s.sampleRate)
F.seek(sample_data_start + 2*s.start)
frames = s.end-s.start+1
if s.type == 1:
g.setnchannels(1)
data = F.read(2*frames)
g.writeframesraw(data)
else:
g.setnchannels(2)
F2.seek(sample_data_start + 2 * samples[s.link].start)
for i in range(frames):
data = F.read(2)
g.writeframesraw(data)
data = F2.read(2)
g.writeframesraw(data)
g.close()
loop_length = s.endLoop - s.startLoop
if loop_length > 1:
write_loop(filename)
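# A minimal usage sketch, assuming invocation as a script (hypothetical file):
#   python sf2_to_dex.py MySoundfont.sf2
# This writes one .wav per mono/left sample (the right channel is picked up
# via s.link) into ./MySoundfont/, appending loop cue points via write_loop().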
| 30.617284 | 86 | 0.510618 | 909 | 7,440 | 4 | 0.264026 | 0.054455 | 0.093784 | 0.022002 | 0.219197 | 0.188944 | 0.179043 | 0.162541 | 0.129263 | 0.072607 | 0 | 0.050966 | 0.345968 | 7,440 | 242 | 87 | 30.743802 | 0.69626 | 0.065323 | 0 | 0.213836 | 0 | 0 | 0.033405 | 0.005002 | 0 | 0 | 0.031797 | 0 | 0 | 0 | null | null | 0.006289 | 0.044025 | null | null | 0.006289 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4cdfad127953dd829561e4a0404e4a6449e304d9 | 4,126 | py | Python | src/slack.py | planetrics/aws-iam-key-rotator | 890c28d80e062dfc569e6577bc48fac23dc0b1a0 | [
"MIT"
] | null | null | null | src/slack.py | planetrics/aws-iam-key-rotator | 890c28d80e062dfc569e6577bc48fac23dc0b1a0 | [
"MIT"
] | null | null | null | src/slack.py | planetrics/aws-iam-key-rotator | 890c28d80e062dfc569e6577bc48fac23dc0b1a0 | [
"MIT"
] | null | null | null | import os
import json
import logging
import requests
logger = logging.getLogger('slack')
logger.setLevel(logging.INFO)
def notify(url, account, userName, existingAccessKey, accessKey=None, secretKey=None, instruction=None, deleteAfterDays=None):
if accessKey is not None:
# New key pair generated
logger.info('Sending notification to {} about new access key generation via {}'.format(userName, url))
msg = {
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":mega: NEW KEY PAIR GENERATED FOR *{}* :mega:".format(userName)
}
},
{
"type": "section",
"fields": [
{
"type": "mrkdwn",
"text": "*Account ID:*\n{}".format(account['id'])
},
{
"type": "mrkdwn",
"text": "*Account Name:*\n{}".format(account['name'])
}
]
},
{
"type": "section",
"fields": [
{
"type": "mrkdwn",
"text": "*Access Key:*\n{}".format(accessKey)
},
{
"type": "mrkdwn",
"text": "*Secret Key:*\n{}".format(secretKey)
}
]
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "*Instruction:* {}".format(instruction)
}
},
{
"type": "divider"
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "*NOTE:* Existing key pair *{}* will be deleted after {} days so please update the new key pair wherever required".format(existingAccessKey, deleteAfterDays)
}
},
]
}
else:
# Old key pair is deleted
logger.info('Sending notification to {} about deletion of old access key via {}'.format(userName, url))
msg = {
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":mega: OLD KEY PAIR DELETED :mega:".format(userName)
}
},
{
"type": "section",
"fields": [
{
"type": "mrkdwn",
"text": "*Account ID:*\n{}".format(account['id'])
},
{
"type": "mrkdwn",
"text": "*Account Name:*\n{}".format(account['name'])
}
]
},
{
"type": "section",
"fields": [
{
"type": "mrkdwn",
"text": "*User:*\n{}".format(userName)
},
{
"type": "mrkdwn",
"text": "*Old Access Key:*\n{}".format(existingAccessKey)
}
]
}
]
}
resp = requests.post(url=url, json=msg)
if resp.status_code == 200:
logger.info('Notification sent to {} about key deletion via {}'.format(userName, url))
else:
        logger.error('Notification failed with {} status code. Reason: {}'.format(resp.status_code, resp.text))
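# A minimal usage sketch with assumed placeholder values:
# notify(url="https://hooks.slack.com/services/...",
#        account={"id": "111122223333", "name": "prod"},
#        userName="alice", existingAccessKey="AKIA...OLD",
#        accessKey="AKIA...NEW", secretKey="...",
#        instruction="Update your local AWS credentials",
#        deleteAfterDays=7)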
| 36.513274 | 189 | 0.335676 | 264 | 4,126 | 5.238636 | 0.299242 | 0.086768 | 0.121475 | 0.054953 | 0.397686 | 0.397686 | 0.303688 | 0.303688 | 0.303688 | 0.303688 | 0 | 0.001583 | 0.540717 | 4,126 | 112 | 190 | 36.839286 | 0.728232 | 0.011149 | 0 | 0.35514 | 0 | 0.009346 | 0.223939 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009346 | false | 0 | 0.037383 | 0 | 0.046729 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4ce10c58d708a32523b675746af1ff74ba6e03e0 | 895 | py | Python | easy-todo-backend/module/user/handler.py | hubenchang0515/EasyTodo | b3cde21090f76401c0649a760b152ebbdd1d4fbe | [
"MIT"
] | null | null | null | easy-todo-backend/module/user/handler.py | hubenchang0515/EasyTodo | b3cde21090f76401c0649a760b152ebbdd1d4fbe | [
"MIT"
] | null | null | null | easy-todo-backend/module/user/handler.py | hubenchang0515/EasyTodo | b3cde21090f76401c0649a760b152ebbdd1d4fbe | [
"MIT"
] | null | null | null | from flask import Flask, request, jsonify
from ..common import app, db, getJson
from .model import User
from .method import *
@app.route("/api/user/register", methods=["POST"])
def register():
json = getJson()
username = json['username']
password = json['password']
userId = addUser(username, password)
if userId != 0:
return jsonify({"status": "ok", "username": username, "user id": userId})
else:
return jsonify({"status": "error", "username": username, "message": username + " is exist"})
@app.route("/api/user/login", methods=["GET", "POST"])
def login():
json = getJson()
username = json['username']
password = json['password']
if checkPassword(username, password):
return jsonify({"status": "ok", "username": username})
else:
return jsonify({"status": "error", "username": username, "message": "Auth failed"})
| 33.148148 | 100 | 0.634637 | 101 | 895 | 5.623762 | 0.386139 | 0.112676 | 0.133803 | 0.052817 | 0.489437 | 0.489437 | 0.359155 | 0.359155 | 0 | 0 | 0 | 0.001381 | 0.191061 | 895 | 26 | 101 | 34.423077 | 0.783149 | 0 | 0 | 0.347826 | 0 | 0 | 0.209172 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0.173913 | 0.173913 | 0 | 0.434783 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
4ce1e5e3825fda71c1d79c7d91358a7fb5966bfa | 4,326 | py | Python | contrib/bin_wrapper.py | brikkho-net/windmill | 994bd992b17f3f2d6f6b276fe17391fea08f32c3 | [
"Apache-2.0"
] | 61 | 2015-03-16T18:36:06.000Z | 2021-12-02T10:08:17.000Z | contrib/bin_wrapper.py | admc/windmill | 4304ee7258eb0c2814f215d8ce90abf02b1f737f | [
"Apache-2.0"
] | 8 | 2015-03-10T10:01:26.000Z | 2020-05-18T10:51:24.000Z | contrib/bin_wrapper.py | admc/windmill | 4304ee7258eb0c2814f215d8ce90abf02b1f737f | [
"Apache-2.0"
] | 14 | 2015-01-29T16:28:33.000Z | 2021-09-04T11:19:48.000Z | # ***** BEGIN LICENSE BLOCK *****
# Version: MPL 1.1/GPL 2.0/LGPL 2.1
#
# The contents of this file are subject to the Mozilla Public License Version
# 1.1 (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
# http://www.mozilla.org/MPL/
#
# Software distributed under the License is distributed on an "AS IS" basis,
# WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License
# for the specific language governing rights and limitations under the
# License.
#
# The Original Code is Mozilla Corporation Code.
#
# The Initial Developer of the Original Code is
# Mikeal Rogers.
# Portions created by the Initial Developer are Copyright (C) 2008
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
# Mikeal Rogers <mikeal.rogers@gmail.com>
#
# Alternatively, the contents of this file may be used under the terms of
# either the GNU General Public License Version 2 or later (the "GPL"), or
# the GNU Lesser General Public License Version 2.1 or later (the "LGPL"),
# in which case the provisions of the GPL or the LGPL are applicable instead
# of those above. If you wish to allow use of your version of this file only
# under the terms of either the GPL or the LGPL, and not to allow others to
# use your version of this file under the terms of the MPL, indicate your
# decision by deleting the provisions above and replace them with the notice
# and other provisions required by the GPL or the LGPL. If you do not delete
# the provisions above, a recipient may use your version of this file under
# the terms of any one of the MPL, the GPL or the LGPL.
#
# ***** END LICENSE BLOCK *****
import sys, os
if sys.platform != 'win32':
    import pwd
import commands
import logging
import signal
import exceptions
from StringIO import StringIO
from time import sleep
import subprocess
from datetime import datetime
from datetime import timedelta

if sys.platform != 'cygwin':
    from windmill.dep import mozrunner
    killableprocess = mozrunner.killableprocess
else:
    import subprocess as killableprocess

logger = logging.getLogger(__name__)
stdout_wrap = StringIO()


def run_command(cmd, env=None):
    """Run the given command in a killable process."""
    kwargs = {'stdout': -1, 'stderr': sys.stderr, 'stdin': sys.stdin}
    if sys.platform != "win32":
        return killableprocess.Popen(cmd, preexec_fn=lambda: os.setpgid(0, 0), env=env, **kwargs)
    else:
        return killableprocess.Popen(cmd, **kwargs)


def get_pids(name, minimum_pid=0):
    """Get all the pids matching name, exclude any pids below minimum_pid."""
    if sys.platform == 'win32':
        import win32api, win32pdhutil, win32con
        #win32pdhutil.ShowAllProcesses() #uncomment for testing
        pids = win32pdhutil.FindPerformanceAttributesByName(name)
    else:
        get_pids_cmd = ['ps', 'ax']
        h = killableprocess.runCommand(get_pids_cmd, stdout=subprocess.PIPE, universal_newlines=True)
        h.wait()
        data = h.stdout.readlines()
        # use != rather than 'is not': identity checks on ints are unreliable
        pids = [int(line.split()[0]) for line in data if line.find(name) != -1]
    matching_pids = [m for m in pids if m > minimum_pid and m != os.getpid()]
    return matching_pids


def kill_process_by_name(name):
    """Find and kill all processes containing a certain name."""
    pids = get_pids(name)
    if sys.platform == 'win32':
        import win32api, win32con  # get_pids imports these in its own scope, so import again here
        for p in pids:
            handle = win32api.OpenProcess(win32con.PROCESS_TERMINATE, 0, p) #get process handle
            win32api.TerminateProcess(handle, 0) #kill by handle
            win32api.CloseHandle(handle) #close api
    else:
        for pid in pids:
            os.kill(pid, signal.SIGTERM)
            sleep(.5)
            if len(get_pids(name)) != 0:
                # escalate to SIGKILL if the process survived SIGTERM
                try:
                    os.kill(pid, signal.SIGKILL)
                except OSError: pass
                sleep(.5)
                if len(get_pids(name)) != 0:
                    logger.error('Could not kill process')


def main():
    """Command Line main function."""
    args = list(sys.argv)
    args.pop(0)
    name = args[0]
    kill_process_by_name(name)
    print "Starting "+str(args)
    sys.exit(subprocess.call(args))


if __name__ == "__main__":
    main() | 34.608 | 101 | 0.676144 | 615 | 4,326 | 4.697561 | 0.360976 | 0.016615 | 0.017307 | 0.019038 | 0.155071 | 0.062998 | 0.046383 | 0.046383 | 0.046383 | 0.046383 | 0 | 0.016611 | 0.234628 | 4,326 | 125 | 102 | 34.608 | 0.855935 | 0.395747 | 0 | 0.15625 | 0 | 0 | 0.036441 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.015625 | 0.21875 | null | null | 0.015625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4ce456f58b59cb287e63e3fc893ff6046bbcd1b1 | 474 | py | Python | backpack/extensions/secondorder/diag_ggn/conv1d.py | jabader97/backpack | 089daafa0d611e13901fd7ecf8a0d708ce7a5928 | [
"MIT"
] | 395 | 2019-10-04T09:37:52.000Z | 2022-03-29T18:00:56.000Z | backpack/extensions/secondorder/diag_ggn/conv1d.py | jabader97/backpack | 089daafa0d611e13901fd7ecf8a0d708ce7a5928 | [
"MIT"
] | 78 | 2019-10-11T18:56:43.000Z | 2022-03-23T01:49:54.000Z | backpack/extensions/secondorder/diag_ggn/conv1d.py | jabader97/backpack | 089daafa0d611e13901fd7ecf8a0d708ce7a5928 | [
"MIT"
] | 50 | 2019-10-03T16:31:10.000Z | 2022-03-15T19:36:14.000Z | from backpack.core.derivatives.conv1d import Conv1DDerivatives
from backpack.extensions.secondorder.diag_ggn.convnd import (
    BatchDiagGGNConvND,
    DiagGGNConvND,
)


class DiagGGNConv1d(DiagGGNConvND):
    def __init__(self):
        super().__init__(derivatives=Conv1DDerivatives(), params=["bias", "weight"])


class BatchDiagGGNConv1d(BatchDiagGGNConvND):
    def __init__(self):
        super().__init__(derivatives=Conv1DDerivatives(), params=["bias", "weight"])
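
# Hedged usage sketch: these classes are registered internally by BackPACK's
# DiagGGN extensions. A typical end-user call (model and loss names are
# illustrative, not part of this file) would look like:
#
#   from backpack import backpack, extend
#   from backpack.extensions import DiagGGNExact
#   model = extend(model)  # e.g. a torch.nn model containing Conv1d layers
#   loss = extend(lossfunc)(model(X), y)
#   with backpack(DiagGGNExact()):
#       loss.backward()  # afterwards each parameter carries p.diag_ggn_exact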
| 29.625 | 84 | 0.751055 | 43 | 474 | 7.883721 | 0.55814 | 0.070796 | 0.064897 | 0.094395 | 0.377581 | 0.377581 | 0.377581 | 0.377581 | 0.377581 | 0.377581 | 0 | 0.014528 | 0.128692 | 474 | 15 | 85 | 31.6 | 0.806295 | 0 | 0 | 0.363636 | 0 | 0 | 0.042194 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.181818 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
4ce462d62058170799793cdc170a8f43baf76ca6 | 1,088 | py | Python | api/scripts/test/test_generate_promoter_terminator.py | IsaacLuo/webexe | aec0582b8669f7e941b8a14df1a9154993470f05 | [
"MIT"
] | null | null | null | api/scripts/test/test_generate_promoter_terminator.py | IsaacLuo/webexe | aec0582b8669f7e941b8a14df1a9154993470f05 | [
"MIT"
] | 6 | 2021-03-02T00:34:35.000Z | 2022-03-24T14:26:50.000Z | api/scripts/test/test_generate_promoter_terminator.py | IsaacLuo/webexe | aec0582b8669f7e941b8a14df1a9154993470f05 | [
"MIT"
] | null | null | null | import subprocess
import pytest
import os
import json
import tools  # assumed local helper module; the test references tools.get_sequence_hash below


def test_call_generate_promoter_terminator():
    print('')
    process_result = subprocess.run(['python', 'generate_promoter_terminator.py', './test/1.gff.json', '500', '200'],
                                    capture_output=True)
    assert process_result.returncode == 0
    result_line = process_result.stdout.decode().splitlines()[-1]
    result_obj = json.loads(result_line)
    assert result_obj['type'] == 'result'
    file_url = result_obj['data']['files'][0]['url']
    assert file_url
    with open(os.path.join('test', '1.gff.json')) as fp:
        src_gff = json.load(fp)
    with open(os.path.join('results', file_url)) as fp:
        dst_gff = json.load(fp)
    assert len(dst_gff['records']) > len(src_gff['records'])
    # all sequences must have a hash
    for record in dst_gff['records']:
        assert 'sequenceHash' in record
        assert record['sequenceHash'] == tools.get_sequence_hash(dst_gff, record['chrName'], record['start'], record['end'], record['strand'])
    os.remove(os.path.join('results', file_url))
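
# For reference, the last stdout line the script under test is expected to emit
# is a JSON object of this shape (inferred from the assertions above; values
# are illustrative only):
#   {"type": "result", "data": {"files": [{"url": "..."}]}}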
| 32.969697 | 142 | 0.662684 | 148 | 1,088 | 4.689189 | 0.445946 | 0.040346 | 0.043228 | 0.034582 | 0.106628 | 0.069164 | 0 | 0 | 0 | 0 | 0 | 0.01236 | 0.181985 | 1,088 | 32 | 143 | 34 | 0.767416 | 0.024816 | 0 | 0 | 1 | 0 | 0.166195 | 0.029273 | 0 | 0 | 0 | 0 | 0.26087 | 1 | 0.043478 | false | 0 | 0.173913 | 0 | 0.217391 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4cebc8d8c5709c45d465740aef28dc7747b5c871 | 4,234 | py | Python | tables.py | sadaszewski/scimd | 3f8cad382e4891cd710c8e4e9c48aa4d56130040 | [
"BSD-2-Clause"
] | null | null | null | tables.py | sadaszewski/scimd | 3f8cad382e4891cd710c8e4e9c48aa4d56130040 | [
"BSD-2-Clause"
] | null | null | null | tables.py | sadaszewski/scimd | 3f8cad382e4891cd710c8e4e9c48aa4d56130040 | [
"BSD-2-Clause"
] | null | null | null | #
# Copyright (C) 2015, Stanislaw Adaszewski
# s.adaszewski@gmail.com
# http://algoholic.eu
#
# License: 2-clause BSD
#
from markdown import Extension
from markdown.blockprocessors import BlockProcessor
from markdown.util import etree
import numpy as np
from collections import defaultdict
import numpy.core.defchararray as dca
class TableExtension(Extension):
    def extendMarkdown(self, md, md_globals):
        md.parser.blockprocessors.add('table',
                                      TableProcessor(md.parser),
                                      '<hashheader')


def makeExtension(configs={}):
    return TableExtension(configs=configs)


class TableProcessor(BlockProcessor):
    def test(self, parent, block):
        lines = block.split('\n')
        for l in lines:
            if set(l.strip()) == set(('-', '|')):
                return True
        return False

    def run(self, parent, blocks):
        block = blocks.pop(0)
        lines = map(lambda x: list(x.strip()), block.split('\n'))
        # print 'lines:', lines
        ary = np.array(lines, dtype='|U1')
        cstart = np.zeros(ary.shape, dtype=np.int)
        cend = np.zeros(ary.shape, dtype=np.int)
        for r in xrange(ary.shape[0]):
            for c in xrange(ary.shape[1]):
                if ary[r, c] == '|':
                    if c + 1 < ary.shape[1] and (r == 0 or ary[r - 1, c + 1] == '-'):
                        cstart[r, c] = True
                    if c > 0 and (r + 1 == ary.shape[0] or ary[r + 1, c - 1] == '-'):
                        cend[r, c] = True
        cstart = zip(*np.nonzero(cstart))
        cend = zip(*np.nonzero(cend))
        # print 'cstart:', cstart
        # print 'cend:', cend
        # np.nonzero returns a tuple of index arrays; take the arrays so the
        # elementwise comparisons below behave as intended
        rpos = np.nonzero(np.max(ary == '-', axis=1))[0]
        cpos = np.nonzero(np.max(ary == '|', axis=0))[0]
        # print rpos
        # print cpos
        assert(len(cstart) == len(cend))
        cells = []
        for k in xrange(len(cstart)):
            r, c = cstart[k][0], cstart[k][1] + 1
            while r < ary.shape[0] and c < ary.shape[1]:
                # print r, c
                if ary[r, c] == '|':
                    if (r, c) in cend:
                        rowspan = len(np.nonzero((rpos >= cstart[k][0]) * (rpos <= r))[0]) + 1
                        colspan = len(np.nonzero((cpos >= cstart[k][1]) * (cpos <= c))[0]) - 1
                        # print 'Cell', k, cstart[k], (r, c), 'rowspan:', rowspan, 'colspan:', colspan
                        # print '  %s' % ary[cstart[k][0]:r+1, cstart[k][1]:c-1].tostring()
                        cells.append((cstart[k], (r, c), rowspan, colspan))
                        break
                    else:
                        r += 1
                        c = cstart[k][1]
                c += 1
        # print cells
        table = etree.SubElement(parent, 'table')
        # table.set('style', 'border: solid 1px black;')
        table.set('border', '1')
        rows = defaultdict(lambda: [])
        for k in xrange(len(cells)):
            cell = cells[k]
            r = len(np.nonzero(rpos < cells[k][0][0])[0])
            c = len(np.nonzero(cpos < cells[k][0][1])[0])
            # print 'Cell', k, 'r:', r, 'c:', c, 'rowspan:', cells[k][2], 'colspan:', cells[k][3]
            text = ary[cells[k][0][0]:cells[k][1][0]+1, cells[k][0][1]+1:cells[k][1][1]]
            text = map(lambda x: u''.join(x).strip(), text)
            # text = list(np.ravel(text))
            # text = np
            text = u'\n'.join(text)  # map(lambda x: x.tostring().strip(), text))
            # print '  %s' % text
            rows[r].append((text, cells[k][2], cells[k][3]))
        for r in xrange(len(rows)):
            # print 'Row', r
            tr = etree.SubElement(table, 'tr')
            for c in xrange(len(rows[r])):
                td = etree.SubElement(tr, 'td')
                try:
                    td.text = rows[r][c][0]  # .encode('utf-8')
                except:
                    print str(type(block))
                    raise ValueError(str(rows[r][c][0]) + ' ' + str(type(rows[r][c][0])))
                td.set('rowspan', str(rows[r][c][1]))
                td.set('colspan', str(rows[r][c][2]))
        # return table
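
# Usage sketch (standard Markdown extension API; the input text is illustrative):
#
#   import markdown
#   html = markdown.markdown(ascii_table_text, extensions=[TableExtension()])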
| 39.570093 | 102 | 0.467879 | 538 | 4,234 | 3.680297 | 0.232342 | 0.015152 | 0.015152 | 0.010606 | 0.107071 | 0.056566 | 0.035354 | 0 | 0 | 0 | 0 | 0.024687 | 0.358999 | 4,234 | 106 | 103 | 39.943396 | 0.704864 | 0.151157 | 0 | 0.027027 | 0 | 0 | 0.017937 | 0 | 0 | 0 | 0 | 0 | 0.013514 | 0 | null | null | 0 | 0.081081 | null | null | 0.013514 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4cec6669f8861c5d6808c43e630416fb7bc66a24 | 1,538 | py | Python | agronet_be/AgronetApp/views/orderDetailView.py | lauraC4MP0/Prueba-github | 291fc266fc0a8efc80ab36dd6eb4bff3e98e7c1f | [
"MIT"
] | 1 | 2021-10-06T00:39:08.000Z | 2021-10-06T00:39:08.000Z | agronet_be/AgronetApp/views/orderDetailView.py | lauraC4MP0/Prueba-github | 291fc266fc0a8efc80ab36dd6eb4bff3e98e7c1f | [
"MIT"
] | null | null | null | agronet_be/AgronetApp/views/orderDetailView.py | lauraC4MP0/Prueba-github | 291fc266fc0a8efc80ab36dd6eb4bff3e98e7c1f | [
"MIT"
] | 1 | 2021-10-03T13:39:31.000Z | 2021-10-03T13:39:31.000Z | from django.conf import settings
from django.db.models.query import QuerySet
from rest_framework import views
from rest_framework.response import Response
from AgronetApp.serializers import orderDetailSerializer
from AgronetApp.serializers.orderDetailSerializer import OrderDetailSerializer
from AgronetApp.models.orderDetail import OrderDetail
from rest_framework.permissions import AllowAny
from rest_framework import status
from rest_framework import generics
class OrderDetailView(generics.ListCreateAPIView):
    queryset = OrderDetail.objects.all()
    serializer_class = OrderDetailSerializer


class OrderDetailDetail(generics.RetrieveUpdateDestroyAPIView):
    queryset = OrderDetail.objects.all()
    serializer_class = OrderDetailSerializer
#class OrderDetailView(views.APIView):
# permission_classes = (AllowAny,)
# def get(self, request):
# Detalle_orden = OrderDetail.objects.all()
# serializer = orderDetailSerializer.OrderDetailSerializer(Detalle_orden, many=True)
# return Response(serializer.data,status=status.HTTP_200_OK)
#def post(self, request):
# Detalle_orden = request.data.get('Detalle_orden')
# serializer = orderDetailSerializer.OrderDetailSerializer(data=Detalle_orden)
# if serializer.is_valid(raise_exception=True):
# Detail_saved = serializer.save()
#return Response(serializer.data,{"success": "Orden Detalle '{}' creada correctamente".format(Detail_saved)})
| 42.722222 | 117 | 0.751625 | 150 | 1,538 | 7.58 | 0.386667 | 0.03518 | 0.074758 | 0.060686 | 0.123131 | 0.123131 | 0.123131 | 0.123131 | 0 | 0 | 0 | 0.00236 | 0.173602 | 1,538 | 35 | 118 | 43.942857 | 0.892211 | 0.429129 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.625 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
4ced33f2e305fc01ed18bf724293146d776b1f32 | 1,012 | py | Python | predict.py | kadn/carla-imitation | 874030f4f4d726f80e739721fb704489672da9b0 | [
"MIT"
] | null | null | null | predict.py | kadn/carla-imitation | 874030f4f4d726f80e739721fb704489672da9b0 | [
"MIT"
] | null | null | null | predict.py | kadn/carla-imitation | 874030f4f4d726f80e739721fb704489672da9b0 | [
"MIT"
] | null | null | null | import tensorflow as tf
import numpy as np
from network import make_network
from data_provider import DataProvider
from tensorflow.core.protobuf import saver_pb2
import time
import os
from IPython import embed
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    network = make_network()
    sess.run(tf.global_variables_initializer())
    saver = tf.train.Saver(write_version=saver_pb2.SaverDef.V2)
    saver.restore(sess, './data/step-10500.ckpt')
    val_provider = DataProvider('val.tfrecords', sess)
    one_batch = val_provider.get_minibatch()
    for i in range(120):
        one_image = one_batch.images[i, ...][None]
        one_speed = one_batch.data[0][i][None]
        a = time.time()
        target_control, = sess.run(network['outputs'],
                                   feed_dict={network['inputs'][0]: one_image,
                                              network['inputs'][1]: one_speed})
        b = time.time()
        print("Inference consumes %.5f seconds" % (b - a))
        print(target_control[0])
| 29.764706 | 74 | 0.677866 | 138 | 1,012 | 4.804348 | 0.521739 | 0.036199 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019827 | 0.202569 | 1,012 | 33 | 75 | 30.666667 | 0.801735 | 0 | 0 | 0 | 0 | 0 | 0.083992 | 0.021739 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.32 | 0 | 0.32 | 0.08 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
4cee63421a6a0026a74361c99866ca8a1654719f | 494 | py | Python | App0/migrations/0019_auto_20210118_0317.py | LTSana/lost-empire | 495397345f1226b025434e37c5e1703273f475a8 | [
"CC0-1.0"
] | null | null | null | App0/migrations/0019_auto_20210118_0317.py | LTSana/lost-empire | 495397345f1226b025434e37c5e1703273f475a8 | [
"CC0-1.0"
] | null | null | null | App0/migrations/0019_auto_20210118_0317.py | LTSana/lost-empire | 495397345f1226b025434e37c5e1703273f475a8 | [
"CC0-1.0"
] | null | null | null | # Generated by Django 3.1.5 on 2021-01-18 01:17
from django.db import migrations, models
import gdstorage.storage
class Migration(migrations.Migration):
    dependencies = [
        ('App0', '0018_auto_20210117_1820'),
    ]

    operations = [
        migrations.AlterField(
            model_name='products',
            name='image_1',
            field=models.ImageField(blank=True, null=True, storage=gdstorage.storage.GoogleDriveStorage(), upload_to='lost-empire/'),
        ),
]
| 24.7 | 133 | 0.645749 | 55 | 494 | 5.690909 | 0.763636 | 0.102236 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087302 | 0.234818 | 494 | 19 | 134 | 26 | 0.740741 | 0.091093 | 0 | 0 | 1 | 0 | 0.120805 | 0.051454 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4cf793269fc1e46f707bfa6b409a7afeda8934b0 | 606 | py | Python | neighbor/models.py | ShaviyaVictor/nyumbakumi- | 933d825844da139998867594c1e21b09ba5c8e63 | [
"MIT"
] | null | null | null | neighbor/models.py | ShaviyaVictor/nyumbakumi- | 933d825844da139998867594c1e21b09ba5c8e63 | [
"MIT"
] | null | null | null | neighbor/models.py | ShaviyaVictor/nyumbakumi- | 933d825844da139998867594c1e21b09ba5c8e63 | [
"MIT"
] | null | null | null | from django.db import models
from django.utils import timezone
from django.contrib.auth.models import User
# Create your models here.
class Neighbor(models.Model):
    n_name = models.CharField(max_length=35)
    n_location = models.CharField(max_length=35)
    n_image = models.ImageField(upload_to='n_posts/')
    n_title = models.CharField(max_length=100)
    n_post = models.TextField()
    n_author = models.ForeignKey(User, on_delete=models.CASCADE)
    n_date_posted = models.DateTimeField(default=timezone.now)

    def __str__(self):
        return self.n_title

    class Meta:
        ordering = ['n_date_posted'] | 26.347826 | 62 | 0.759076 | 88 | 606 | 4.988636 | 0.545455 | 0.068337 | 0.123007 | 0.164009 | 0.123007 | 0.123007 | 0 | 0 | 0 | 0 | 0 | 0.013436 | 0.140264 | 606 | 23 | 63 | 26.347826 | 0.829175 | 0.039604 | 0 | 0 | 0 | 0 | 0.036145 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.2 | 0.066667 | 0.933333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9806a9bda8cab4a2b412b1e85490eb2a071b19ed | 888 | py | Python | moviecritic/models.py | mdameenh/elysia | ff173f036d13c179191a75c3d54e47314435bc28 | [
"BSD-3-Clause"
] | null | null | null | moviecritic/models.py | mdameenh/elysia | ff173f036d13c179191a75c3d54e47314435bc28 | [
"BSD-3-Clause"
] | 3 | 2020-02-11T23:32:55.000Z | 2021-06-10T19:02:19.000Z | moviecritic/models.py | mdameenh/elysia | ff173f036d13c179191a75c3d54e47314435bc28 | [
"BSD-3-Clause"
] | null | null | null | from django.db import models
from django.contrib.postgres.fields import ArrayField
# Create your models here.
class Movie_Details(models.Model):
    name = models.CharField(max_length=100)
    year = models.IntegerField(default=0)
    boxoffice = models.BigIntegerField(default=0)
    imdb = models.IntegerField(default=0)
    metacritic = models.IntegerField(default=0)
    rottentomatoes = models.IntegerField(default=0)
    genre = ArrayField(models.CharField(max_length=40), default=list, size=50)
    director = ArrayField(models.CharField(max_length=40), default=list, size=50)
    lang = ArrayField(models.CharField(max_length=40), default=list, size=50)
    country = ArrayField(models.CharField(max_length=40), default=list, size=50)
    prod = ArrayField(models.CharField(max_length=40), default=list, size=50)

    def __str__(self):
        return self.name | 42.285714 | 81 | 0.734234 | 113 | 888 | 5.672566 | 0.380531 | 0.140406 | 0.168487 | 0.224649 | 0.413417 | 0.413417 | 0.413417 | 0.413417 | 0.413417 | 0.413417 | 0 | 0.037284 | 0.154279 | 888 | 21 | 82 | 42.285714 | 0.816245 | 0.027027 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.125 | 0.0625 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9806d568292fc34f46e2f6473bf682841aa7e86b | 399 | py | Python | djangular/tests/utils.py | jianglb-alibaba/djangular-0.2.7 | d1e2d188cf4ab8ae757bd9bc3069ffef8f0fc753 | [
"Apache-2.0"
] | 145 | 2015-01-01T12:09:30.000Z | 2022-01-28T13:59:50.000Z | djangular/tests/utils.py | jianglb-alibaba/djangular-0.2.7 | d1e2d188cf4ab8ae757bd9bc3069ffef8f0fc753 | [
"Apache-2.0"
] | 25 | 2015-01-07T11:42:21.000Z | 2016-12-14T19:23:45.000Z | djangular/tests/utils.py | jianglb-alibaba/djangular-0.2.7 | d1e2d188cf4ab8ae757bd9bc3069ffef8f0fc753 | [
"Apache-2.0"
] | 40 | 2015-02-07T13:23:09.000Z | 2022-01-28T13:59:53.000Z | import os
from djangular import utils
from django.test import SimpleTestCase
class SiteAndPathUtilsTest(SimpleTestCase):
    site_utils = utils.SiteAndPathUtils()

    def test_djangular_root(self):
        current_dir = os.path.dirname(os.path.abspath(__file__))
        djangular_dir = os.path.dirname(current_dir)
        self.assertEqual(djangular_dir, self.site_utils.get_djangular_root())
| 26.6 | 77 | 0.761905 | 49 | 399 | 5.918367 | 0.469388 | 0.062069 | 0.062069 | 0.110345 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155388 | 399 | 14 | 78 | 28.5 | 0.860534 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 1 | 0.111111 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
98070a04422061ac22173ccd227116ef553e0ba2 | 1,790 | py | Python | src/wee/urls.py | dipkakwani/wee_app | a0f15053ec64a49611d759eaae6d780d608bea46 | [
"MIT"
] | 2 | 2016-11-18T18:43:10.000Z | 2018-10-17T18:31:52.000Z | src/wee/urls.py | dipkakwani/wee_app | a0f15053ec64a49611d759eaae6d780d608bea46 | [
"MIT"
] | null | null | null | src/wee/urls.py | dipkakwani/wee_app | a0f15053ec64a49611d759eaae6d780d608bea46 | [
"MIT"
] | null | null | null | from django.conf.urls import patterns, include, url
from django.contrib import admin
from django.conf.urls.static import static
from userModule.views import home
from userModule.views import userSettings
from userModule.views import logout
from groupModule.views import createGroup
from groupModule.views import group
from groupModule.views import selectgroup
from groupModule.views import groupSettings
from wee.views import *
from django.contrib.staticfiles.urls import staticfiles_urlpatterns
import settings
urlpatterns = patterns('',
    # Examples:
    # url(r'^$', 'wee.views.home', name='home'),
    # url(r'^blog/', include('blog.urls')),
    url(r'^admin/', include(admin.site.urls)),
    url(r'^home/$', home),
    url(r'^newsfeed/$', newsfeed),
    url(r'^logout/$', logout),
    url(r'^post/$', newPost),
    url(r'^newgroup/$', createGroup),
    url(r'^settings/$', userSettings),
    url(r'^group/(?P<groupId>\d+)/$', group),
    url(r'^groups/$', selectgroup),
    url(r'^group/(?P<groupId>\d+)/settings/$', groupSettings),
    url(r'^friends/$', friends),
    url(r'^timeline/(?P<profileUserId>\d+)/(?P<change>\w)/friend/$', updateFriend),
    url(r'^timeline/(?P<profileUserId>\d+)/follow/$', updateFollow),
    url(r'^timeline/(?P<profileUserId>\d+)/$', timeline),
    url(r'^search/$', search),
    url(r'^like/(?P<postId>\d+)/$', like),
    url(r'^getlike/(?P<postId>\d+)/$', getLike),
    url(r'^comment/(?P<postId>\d+)/$', comment),
    url(r'^getcomment/(?P<postId>\d+)/$', getComment),
    url(r'^share/(?P<postId>\d+)/$', share),
    url(r'^getshare/(?P<postId>\d+)/$', getShare),
)
urlpatterns += staticfiles_urlpatterns()
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
urlpatterns += patterns('', url(r'^.*/$', notfound), )
| 37.291667 | 83 | 0.663128 | 229 | 1,790 | 5.161572 | 0.257642 | 0.081218 | 0.040609 | 0.087986 | 0.098985 | 0.098985 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12514 | 1,790 | 47 | 84 | 38.085106 | 0.754789 | 0.050279 | 0 | 0 | 0 | 0 | 0.260024 | 0.20342 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
981358a60d12ba10bedc463e2907dbad81cfa191 | 1,683 | py | Python | epuap_watchdog/institutions/serializers.py | ad-m/epuap-watchdog | ff2dbbfe6c999e825dbf3f2bf2a94d8baa0a08ea | [
"MIT"
] | 2 | 2017-07-30T16:41:41.000Z | 2020-03-28T12:20:56.000Z | epuap_watchdog/institutions/serializers.py | ad-m/epuap-watchdog | ff2dbbfe6c999e825dbf3f2bf2a94d8baa0a08ea | [
"MIT"
] | 5 | 2017-07-18T12:13:46.000Z | 2017-07-28T15:48:38.000Z | epuap_watchdog/institutions/serializers.py | ad-m/epuap-watchdog | ff2dbbfe6c999e825dbf3f2bf2a94d8baa0a08ea | [
"MIT"
] | null | null | null | from rest_framework import serializers
from teryt_tree.rest_framework_ext.serializers import JednostkaAdministracyjnaSerializer
from .models import RESP, REGONError, REGON, JSTConnection, Institution, ESP
class RESPSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = RESP
        fields = ['id', 'created', 'modified', 'institution_id', 'name', 'data']


class REGONErrorSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = REGONError
        fields = ['id', 'created', 'modified', 'regon_id', 'exception']


class REGONSerializer(serializers.HyperlinkedModelSerializer):
    regonerror_set = REGONErrorSerializer(many=True)

    class Meta:
        model = REGON
        fields = ['id', 'created', 'modified', 'institution_id', 'name', 'regon', 'regonerror_set', 'data']


class JSTConnectionSerializer(serializers.HyperlinkedModelSerializer):
    # jst = JednostkaAdministracyjnaSerializer()

    class Meta:
        model = JSTConnection
        fields = ['id', 'created', 'modified', 'institution_id', 'jst_id']


class ESPSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = ESP
        fields = ['id', 'created', 'modified', 'institution_id', 'name', 'active']


class InstitutionSerializer(serializers.HyperlinkedModelSerializer):
    resp = RESPSerializer()
    regon_data = REGONSerializer()
    jstconnection = JSTConnectionSerializer()
    esp_set = ESPSerializer(many=True)

    class Meta:
        model = Institution
        fields = ['id', 'created', 'modified', 'name', 'epuap_id', 'regon', 'active',
                  'esp_set', 'jstconnection', 'regon_data', 'resp']
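
# Hedged usage sketch: hyperlinked serializers need the request in their context
# to build absolute URLs (the variable names below are illustrative):
#
#   serializer = InstitutionSerializer(institution, context={'request': request})
#   payload = serializer.data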
| 32.365385 | 107 | 0.69697 | 147 | 1,683 | 7.863946 | 0.265306 | 0.192042 | 0.072664 | 0.119377 | 0.305363 | 0.134948 | 0.103806 | 0 | 0 | 0 | 0 | 0 | 0.184195 | 1,683 | 51 | 108 | 33 | 0.841952 | 0.024955 | 0 | 0.181818 | 0 | 0 | 0.172666 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.606061 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
9813904cd1f0fe02015ac63d50232c8db9af77e9 | 21,950 | py | Python | ross/fluid_flow/fluid_flow_coefficients.py | hiagopinacio/ross | 1bc84061f23df455d9e37cb11b244ac795c836ad | [
"MIT"
] | 1 | 2020-01-21T02:05:21.000Z | 2020-01-21T02:05:21.000Z | ross/fluid_flow/fluid_flow_coefficients.py | hiagopinacio/ross | 1bc84061f23df455d9e37cb11b244ac795c836ad | [
"MIT"
] | null | null | null | ross/fluid_flow/fluid_flow_coefficients.py | hiagopinacio/ross | 1bc84061f23df455d9e37cb11b244ac795c836ad | [
"MIT"
] | 1 | 2020-01-20T23:19:24.000Z | 2020-01-20T23:19:24.000Z | import warnings
from math import isnan
import numpy as np
from scipy import integrate
from ross.fluid_flow.fluid_flow_geometry import move_rotor_center
def calculate_oil_film_force(fluid_flow_object, force_type=None):
    """This function calculates the forces of the oil film in the N and T directions, i.e. in the
    direction opposite to the eccentricity and in the tangential direction.
    Parameters
    ----------
    fluid_flow_object: A FluidFlow object.
    force_type: str
        If set, calculates the oil film force matrix analytically considering the chosen type: 'short' or 'long'.
        If set to 'numerical', calculates the oil film force numerically.
    Returns
    -------
    radial_force: float
        Force of the oil film in the direction opposite to the eccentricity direction.
    tangential_force: float
        Force of the oil film in the tangential direction.
    force_x: float
        Component of the force in the x direction.
    force_y: float
        Component of the force in the y direction.
    Examples
    --------
    >>> from ross.fluid_flow.fluid_flow import fluid_flow_example
    >>> my_fluid_flow = fluid_flow_example()
    >>> calculate_oil_film_force(my_fluid_flow) # doctest: +ELLIPSIS
    (...
    """
    if force_type != "numerical" and (
        force_type == "short" or fluid_flow_object.bearing_type == "short_bearing"
    ):
        # Analytic short-bearing solution
        radial_force = (
            0.5
            * fluid_flow_object.viscosity
            * (fluid_flow_object.radius_rotor / fluid_flow_object.radial_clearance) ** 2
            * (fluid_flow_object.length ** 3 / fluid_flow_object.radius_rotor)
            * (
                (
                    2
                    * fluid_flow_object.eccentricity_ratio ** 2
                    * fluid_flow_object.omega
                )
                / (1 - fluid_flow_object.eccentricity_ratio ** 2) ** 2
            )
        )
        tangential_force = (
            0.5
            * fluid_flow_object.viscosity
            * (fluid_flow_object.radius_rotor / fluid_flow_object.radial_clearance) ** 2
            * (fluid_flow_object.length ** 3 / fluid_flow_object.radius_rotor)
            * (
                (np.pi * fluid_flow_object.eccentricity_ratio * fluid_flow_object.omega)
                / (2 * (1 - fluid_flow_object.eccentricity_ratio ** 2) ** (3.0 / 2))
            )
        )
    elif force_type != "numerical" and (
        force_type == "long" or fluid_flow_object.bearing_type == "long_bearing"
    ):
        # Analytic long-bearing solution
        radial_force = (
            6
            * fluid_flow_object.viscosity
            * (fluid_flow_object.radius_rotor / fluid_flow_object.radial_clearance) ** 2
            * fluid_flow_object.radius_rotor
            * fluid_flow_object.length
            * (
                (
                    2
                    * fluid_flow_object.eccentricity_ratio ** 2
                    * fluid_flow_object.omega
                )
                / (
                    (2 + fluid_flow_object.eccentricity_ratio ** 2)
                    * (1 - fluid_flow_object.eccentricity_ratio ** 2)
                )
            )
        )
        tangential_force = (
            6
            * fluid_flow_object.viscosity
            * (fluid_flow_object.radius_rotor / fluid_flow_object.radial_clearance) ** 2
            * fluid_flow_object.radius_rotor
            * fluid_flow_object.length
            * (
                (np.pi * fluid_flow_object.eccentricity_ratio * fluid_flow_object.omega)
                / (
                    (2 + fluid_flow_object.eccentricity_ratio ** 2)
                    * (1 - fluid_flow_object.eccentricity_ratio ** 2) ** 0.5
                )
            )
        )
    else:
        # Numerical solution: integrate the pressure field over the bearing surface
        p_mat = fluid_flow_object.p_mat_numerical
        a = np.zeros([fluid_flow_object.nz, fluid_flow_object.ntheta])
        b = np.zeros([fluid_flow_object.nz, fluid_flow_object.ntheta])
        g1 = np.zeros(fluid_flow_object.nz)
        g2 = np.zeros(fluid_flow_object.nz)
        base_vector = np.array(
            [
                fluid_flow_object.xre[0][0] - fluid_flow_object.xi,
                fluid_flow_object.yre[0][0] - fluid_flow_object.yi,
            ]
        )
        for i in range(fluid_flow_object.nz):
            for j in range(int(fluid_flow_object.ntheta / 2)):
                vector_from_rotor = np.array(
                    [
                        fluid_flow_object.xre[i][j] - fluid_flow_object.xi,
                        fluid_flow_object.yre[i][j] - fluid_flow_object.yi,
                    ]
                )
                angle_between_vectors = np.arccos(
                    np.dot(base_vector, vector_from_rotor)
                    / (np.linalg.norm(base_vector) * np.linalg.norm(vector_from_rotor))
                )
                if isnan(angle_between_vectors):
                    angle_between_vectors = 0
                if angle_between_vectors != 0 and j * fluid_flow_object.dtheta > np.pi:
                    angle_between_vectors += np.pi
                a[i][j] = p_mat[i][j] * np.cos(angle_between_vectors)
                b[i][j] = p_mat[i][j] * np.sin(angle_between_vectors)
        for i in range(fluid_flow_object.nz):
            g1[i] = integrate.simps(a[i][:], fluid_flow_object.gama[0])
            g2[i] = integrate.simps(b[i][:], fluid_flow_object.gama[0])
        integral1 = integrate.simps(g1, fluid_flow_object.z_list)
        integral2 = integrate.simps(g2, fluid_flow_object.z_list)
        radial_force = -fluid_flow_object.radius_rotor * integral1
        tangential_force = fluid_flow_object.radius_rotor * integral2
    force_x = -radial_force * np.sin(
        fluid_flow_object.attitude_angle
    ) + tangential_force * np.cos(fluid_flow_object.attitude_angle)
    force_y = radial_force * np.cos(
        fluid_flow_object.attitude_angle
    ) + tangential_force * np.sin(fluid_flow_object.attitude_angle)
    return radial_force, tangential_force, force_x, force_y
def calculate_stiffness_and_damping_coefficients(fluid_flow_object):
    """This function calculates the bearing stiffness and damping matrices numerically.
    Parameters
    ----------
    fluid_flow_object: A FluidFlow object.
    Returns
    -------
    Two lists of floats
        A list of length four including stiffness floats in this order: kxx, kxy, kyx, kyy.
        And another list of length four including damping floats in this order: cxx, cxy, cyx, cyy.
    Examples
    --------
    >>> from ross.fluid_flow.fluid_flow import fluid_flow_example
    >>> my_fluid_flow = fluid_flow_example()
    >>> calculate_stiffness_and_damping_coefficients(my_fluid_flow) # doctest: +ELLIPSIS
    ([428...
    """
    N = 6
    t = np.linspace(0, 2 * np.pi / fluid_flow_object.omegap, N)
    fluid_flow_object.xp = fluid_flow_object.radial_clearance * 0.0001
    fluid_flow_object.yp = fluid_flow_object.radial_clearance * 0.0001
    dx = np.zeros(N)
    dy = np.zeros(N)
    xdot = np.zeros(N)
    ydot = np.zeros(N)
    radial_force = np.zeros(N)
    tangential_force = np.zeros(N)
    force_xx = np.zeros(N)
    force_yx = np.zeros(N)
    force_xy = np.zeros(N)
    force_yy = np.zeros(N)
    X1 = np.zeros([N, 3])
    X2 = np.zeros([N, 3])
    F1 = np.zeros(N)
    F2 = np.zeros(N)
    F3 = np.zeros(N)
    F4 = np.zeros(N)
    for i in range(N):
        fluid_flow_object.t = t[i]
        # Perturb the rotor center harmonically along x and evaluate the resulting forces
        delta_x = fluid_flow_object.xp * np.sin(
            fluid_flow_object.omegap * fluid_flow_object.t
        )
        move_rotor_center(fluid_flow_object, delta_x, 0)
        dx[i] = delta_x
        xdot[i] = (
            fluid_flow_object.omegap
            * fluid_flow_object.xp
            * np.cos(fluid_flow_object.omegap * fluid_flow_object.t)
        )
        fluid_flow_object.geometry_description()
        fluid_flow_object.calculate_pressure_matrix_numerical(direction="x")
        [
            radial_force[i],
            tangential_force[i],
            force_xx[i],
            force_yx[i],
        ] = calculate_oil_film_force(fluid_flow_object, force_type="numerical")
        # Repeat the perturbation along y
        delta_y = fluid_flow_object.yp * np.sin(
            fluid_flow_object.omegap * fluid_flow_object.t
        )
        move_rotor_center(fluid_flow_object, -delta_x, 0)
        move_rotor_center(fluid_flow_object, 0, delta_y)
        dy[i] = delta_y
        ydot[i] = (
            fluid_flow_object.omegap
            * fluid_flow_object.yp
            * np.cos(fluid_flow_object.omegap * fluid_flow_object.t)
        )
        fluid_flow_object.geometry_description()
        fluid_flow_object.calculate_pressure_matrix_numerical(direction="y")
        [
            radial_force[i],
            tangential_force[i],
            force_xy[i],
            force_yy[i],
        ] = calculate_oil_film_force(fluid_flow_object, force_type="numerical")
        move_rotor_center(fluid_flow_object, 0, -delta_y)
        fluid_flow_object.geometry_description()
        fluid_flow_object.calculate_pressure_matrix_numerical()
        X1[i] = [1, dx[i], xdot[i]]
        X2[i] = [1, dy[i], ydot[i]]
        F1[i] = -force_xx[i]
        F2[i] = -force_xy[i]
        F3[i] = -force_yx[i]
        F4[i] = -force_yy[i]
    # Least-squares fit of F = P[0] + K * d + C * d_dot for each force component
    P1 = np.dot(
        np.dot(np.linalg.inv(np.dot(np.transpose(X1), X1)), np.transpose(X1)), F1
    )
    P2 = np.dot(
        np.dot(np.linalg.inv(np.dot(np.transpose(X2), X2)), np.transpose(X2)), F2
    )
    P3 = np.dot(
        np.dot(np.linalg.inv(np.dot(np.transpose(X1), X1)), np.transpose(X1)), F3
    )
    P4 = np.dot(
        np.dot(np.linalg.inv(np.dot(np.transpose(X2), X2)), np.transpose(X2)), F4
    )
    K = [P1[1], P2[1], P3[1], P4[1]]
    C = [P1[2], P2[2], P3[2], P4[2]]
    return K, C
def calculate_short_stiffness_matrix(fluid_flow_object):
    """This function calculates the stiffness matrix for the short bearing.
    Parameters
    ----------
    fluid_flow_object: A FluidFlow object.
    Returns
    -------
    list of floats
        A list of length four including stiffness floats in this order: kxx, kxy, kyx, kyy
    Examples
    --------
    >>> from ross.fluid_flow.fluid_flow import fluid_flow_example
    >>> my_fluid_flow = fluid_flow_example()
    >>> calculate_short_stiffness_matrix(my_fluid_flow) # doctest: +ELLIPSIS
    [417...
    """
    h0 = 1.0 / (
        (
            (np.pi ** 2) * (1 - fluid_flow_object.eccentricity_ratio ** 2)
            + 16 * fluid_flow_object.eccentricity_ratio ** 2
        )
        ** 1.5
    )
    a = fluid_flow_object.load / fluid_flow_object.radial_clearance
    kxx = (
        a
        * h0
        * 4
        * (
            (np.pi ** 2) * (2 - fluid_flow_object.eccentricity_ratio ** 2)
            + 16 * fluid_flow_object.eccentricity_ratio ** 2
        )
    )
    kxy = (
        a
        * h0
        * np.pi
        * (
            (np.pi ** 2) * (1 - fluid_flow_object.eccentricity_ratio ** 2) ** 2
            - 16 * fluid_flow_object.eccentricity_ratio ** 4
        )
        / (
            fluid_flow_object.eccentricity_ratio
            * np.sqrt(1 - fluid_flow_object.eccentricity_ratio ** 2)
        )
    )
    kyx = (
        -a
        * h0
        * np.pi
        * (
            (np.pi ** 2)
            * (1 - fluid_flow_object.eccentricity_ratio ** 2)
            * (1 + 2 * fluid_flow_object.eccentricity_ratio ** 2)
            + (32 * fluid_flow_object.eccentricity_ratio ** 2)
            * (1 + fluid_flow_object.eccentricity_ratio ** 2)
        )
        / (
            fluid_flow_object.eccentricity_ratio
            * np.sqrt(1 - fluid_flow_object.eccentricity_ratio ** 2)
        )
    )
    kyy = (
        a
        * h0
        * 4
        * (
            (np.pi ** 2) * (1 + 2 * fluid_flow_object.eccentricity_ratio ** 2)
            + (
                (32 * fluid_flow_object.eccentricity_ratio ** 2)
                * (1 + fluid_flow_object.eccentricity_ratio ** 2)
            )
            / (1 - fluid_flow_object.eccentricity_ratio ** 2)
        )
    )
    return [kxx, kxy, kyx, kyy]
def calculate_short_damping_matrix(fluid_flow_object):
    """This function calculates the damping matrix for the short bearing.
    Parameters
    ----------
    fluid_flow_object: A FluidFlow object.
    Returns
    -------
    list of floats
        A list of length four including damping floats in this order: cxx, cxy, cyx, cyy
    Examples
    --------
    >>> from ross.fluid_flow.fluid_flow import fluid_flow_example
    >>> my_fluid_flow = fluid_flow_example()
    >>> calculate_short_damping_matrix(my_fluid_flow) # doctest: +ELLIPSIS
    [...
    """
    # fmt: off
    h0 = 1.0 / (((np.pi ** 2) * (1 - fluid_flow_object.eccentricity_ratio ** 2)
                 + 16 * fluid_flow_object.eccentricity_ratio ** 2) ** 1.5)
    a = fluid_flow_object.load / (fluid_flow_object.radial_clearance * fluid_flow_object.omega)
    cxx = (a * h0 * 2 * np.pi * np.sqrt(1 - fluid_flow_object.eccentricity_ratio ** 2) *
           ((np.pi ** 2) * (1 + 2 * fluid_flow_object.eccentricity_ratio ** 2)
            - 16 * fluid_flow_object.eccentricity_ratio ** 2) / fluid_flow_object.eccentricity_ratio)
    cxy = (-a * h0 * 8 * ((np.pi ** 2) * (1 + 2 * fluid_flow_object.eccentricity_ratio ** 2)
                          - 16 * fluid_flow_object.eccentricity_ratio ** 2))
    cyx = cxy
    cyy = (a * h0 * (2 * np.pi * (
        (np.pi ** 2) * (1 - fluid_flow_object.eccentricity_ratio ** 2) ** 2
        + 48 * fluid_flow_object.eccentricity_ratio ** 2)) /
           (fluid_flow_object.eccentricity_ratio * np.sqrt(1 - fluid_flow_object.eccentricity_ratio ** 2)))
    # fmt: on
    return [cxx, cxy, cyx, cyy]
def find_equilibrium_position(
    fluid_flow_object,
    print_along=True,
    tolerance=1e-05,
    increment_factor=1e-03,
    max_iterations=10,
    increment_reduction_limit=1e-04,
    return_iteration_map=False,
):
    """This function returns an eccentricity value with calculated forces matching the load applied,
    meaning an equilibrium position of the rotor.
    It first moves the rotor center on the x-axis, aiming for the minimum error in the force on x (zero), then
    moves on the y-axis, aiming for the minimum error in the force on y (meaning load minus force on y equals zero).
    Parameters
    ----------
    fluid_flow_object: A FluidFlow object.
    print_along: bool, optional
        If True, prints the iteration process.
    tolerance: float, optional
    increment_factor: float, optional
        This number will multiply the first eccentricity found to reach an increment number.
    max_iterations: int, optional
    increment_reduction_limit: float, optional
        The error should always be approaching zero. If it passes zero (for instance, from a positive error
        to a negative one), the iteration goes back one step and the increment is reduced. This reduction must
        have a limit to avoid long iterations.
    return_iteration_map: bool, optional
        If True, along with the eccentricity found, the function will return a map of positions and errors in
        each step of the iteration.
    Returns
    -------
    None, or
    Matrix of floats
        A matrix [n, 4], with n being the number of iterations. Each line contains the x and y of the rotor
        center, followed by the error in force x and force y.
    Examples
    --------
    >>> from ross.fluid_flow.fluid_flow import fluid_flow_example2
    >>> my_fluid_flow = fluid_flow_example2()
    >>> find_equilibrium_position(my_fluid_flow, print_along=False,
    ...                           tolerance=0.1, increment_factor=0.01,
    ...                           max_iterations=5, increment_reduction_limit=1e-03)
    """
    fluid_flow_object.calculate_coefficients()
    fluid_flow_object.calculate_pressure_matrix_numerical()
    r_force, t_force, force_x, force_y = calculate_oil_film_force(
        fluid_flow_object, force_type="numerical"
    )
    increment = increment_factor * fluid_flow_object.eccentricity
    error_x = abs(force_x)
    error_y = abs(force_y - fluid_flow_object.load)
    error = max(error_x, error_y)
    k = 1
    map_vector = []
    while error > tolerance and k <= max_iterations:
        increment_x = increment
        increment_y = increment
        iter_x = 0
        iter_y = 0
        previous_x = fluid_flow_object.xi
        previous_y = fluid_flow_object.yi
        infinite_loop_x_check = False
        infinite_loop_y_check = False
        if print_along:
            print("\nIteration " + str(k) + "\n")
        # Walk the rotor center along x until the x-force error stops improving
        while error_x > tolerance:
            iter_x += 1
            move_rotor_center(fluid_flow_object, increment_x, 0)
            fluid_flow_object.calculate_coefficients()
            fluid_flow_object.calculate_pressure_matrix_numerical()
            (
                new_r_force,
                new_t_force,
                new_force_x,
                new_force_y,
            ) = calculate_oil_film_force(fluid_flow_object, force_type="numerical")
            new_error_x = abs(new_force_x)
            move_rotor_center(fluid_flow_object, -increment_x, 0)
            if print_along:
                print("Iteration in x axis " + str(iter_x))
                print("Force x: " + str(new_force_x))
                print("Previous force x: " + str(force_x))
                print("Increment x: ", str(increment_x))
                print("Error x: " + str(new_error_x))
                print("Previous error x: " + str(error_x) + "\n")
            if new_force_x * force_x < 0:
                infinite_loop_x_check = False
                increment_x = increment_x / 10
                if print_along:
                    print("Went beyond error 0. Reducing increment. \n")
                if abs(increment_x) < abs(increment * increment_reduction_limit):
                    if print_along:
                        print("Increment too low. Breaking x iteration. \n")
                    break
            elif new_error_x > error_x:
                if print_along:
                    print("Error increased. Changing sign of increment. \n")
                increment_x = -increment_x
                if infinite_loop_x_check:
                    break
                else:
                    infinite_loop_x_check = True
            else:
                infinite_loop_x_check = False
                move_rotor_center(fluid_flow_object, increment_x, 0)
                error_x = new_error_x
                force_x = new_force_x
                force_y = new_force_y
                error_y = abs(new_force_y - fluid_flow_object.load)
                error = max(error_x, error_y)
        # Walk the rotor center along y until force y matches the applied load
        while error_y > tolerance:
            iter_y += 1
            move_rotor_center(fluid_flow_object, 0, increment_y)
            fluid_flow_object.calculate_coefficients()
            fluid_flow_object.calculate_pressure_matrix_numerical()
            (
                new_r_force,
                new_t_force,
                new_force_x,
                new_force_y,
            ) = calculate_oil_film_force(fluid_flow_object, force_type="numerical")
            new_error_y = abs(new_force_y - fluid_flow_object.load)
            move_rotor_center(fluid_flow_object, 0, -increment_y)
            if print_along:
                print("Iteration in y axis " + str(iter_y))
                print("Force y: " + str(new_force_y))
                print("Previous force y: " + str(force_y))
                print("Increment y: ", str(increment_y))
                print(
                    "Force y minus load: " + str(new_force_y - fluid_flow_object.load)
                )
                print(
                    "Previous force y minus load: "
                    + str(force_y - fluid_flow_object.load)
                )
                print("Error y: " + str(new_error_y))
                print("Previous error y: " + str(error_y) + "\n")
            if (new_force_y - fluid_flow_object.load) * (
                force_y - fluid_flow_object.load
            ) < 0:
                infinite_loop_y_check = False
                increment_y = increment_y / 10
                if print_along:
                    print("Went beyond error 0. Reducing increment. \n")
                if abs(increment_y) < abs(increment * increment_reduction_limit):
                    if print_along:
                        print("Increment too low. Breaking y iteration. \n")
                    break
            elif new_error_y > error_y:
                if print_along:
                    print("Error increased. Changing sign of increment. \n")
                increment_y = -increment_y
                if infinite_loop_y_check:
                    break
                else:
                    infinite_loop_y_check = True
            else:
                infinite_loop_y_check = False
                move_rotor_center(fluid_flow_object, 0, increment_y)
                error_y = new_error_y
                force_y = new_force_y
                force_x = new_force_x
                error_x = abs(new_force_x)
                error = max(error_x, error_y)
        if print_along:
            print("Iteration " + str(k))
            print("Error x: " + str(error_x))
            print("Error y: " + str(error_y))
            print(
                "Current x, y: ("
                + str(fluid_flow_object.xi)
                + ", "
                + str(fluid_flow_object.yi)
                + ")"
            )
        k += 1
        map_vector.append(
            [fluid_flow_object.xi, fluid_flow_object.yi, error_x, error_y]
        )
        if previous_x == fluid_flow_object.xi and previous_y == fluid_flow_object.yi:
            if print_along:
                print("Rotor center did not move during iteration. Breaking.")
            break
    if print_along:
        print(map_vector)
    if return_iteration_map:
        return map_vector
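
# A hedged end-to-end sketch combining the routines above; the example
# constructor comes from ross.fluid_flow.fluid_flow, as used in the doctests:
#
#   from ross.fluid_flow.fluid_flow import fluid_flow_example
#   bearing = fluid_flow_example()
#   find_equilibrium_position(bearing, print_along=False)
#   K, C = calculate_stiffness_and_damping_coefficients(bearing)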
| 39.266547 | 113 | 0.579954 | 2,718 | 21,950 | 4.374172 | 0.104489 | 0.155185 | 0.218269 | 0.093111 | 0.679872 | 0.618471 | 0.548827 | 0.50593 | 0.455379 | 0.415931 | 0 | 0.018241 | 0.325649 | 21,950 | 558 | 114 | 39.336918 | 0.784961 | 0.186241 | 0 | 0.341176 | 0 | 0 | 0.040606 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011765 | false | 0 | 0.011765 | 0 | 0.035294 | 0.094118 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
981440fe7da5408c2f393c5c158c741ef85a08d1 | 486 | py | Python | dashboard/admin.py | AliBigdeli/Django-Metric-Monitoring-App | a251dc9c4eab26561029437ad437f43bffc479f7 | [
"MIT"
] | null | null | null | dashboard/admin.py | AliBigdeli/Django-Metric-Monitoring-App | a251dc9c4eab26561029437ad437f43bffc479f7 | [
"MIT"
] | null | null | null | dashboard/admin.py | AliBigdeli/Django-Metric-Monitoring-App | a251dc9c4eab26561029437ad437f43bffc479f7 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Device, Metric
class DeviceAdmin(admin.ModelAdmin):
list_display = ["name", "token","user", "created_date"]
search_fields = ["name", "token"]
list_filter = ("user",)
class MetricAdmin(admin.ModelAdmin):
list_display = ["device","temperature", "humidity", "created_date"]
search_fields = ["device"]
list_filter = ("device",)
admin.site.register(Device, DeviceAdmin)
admin.site.register(Metric, MetricAdmin)
| 28.588235 | 71 | 0.709877 | 55 | 486 | 6.127273 | 0.472727 | 0.094955 | 0.11276 | 0.154303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13786 | 486 | 16 | 72 | 30.375 | 0.804296 | 0 | 0 | 0 | 0 | 0 | 0.179012 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
9818a800cb69cee7cd1d2943f67320ac45add3c8 | 956 | py | Python | core/tests/test_views.py | honno/ascii-forever | 8364219db115229fa9eb0b059e9c0611dcb689cf | [
"MIT"
] | null | null | null | core/tests/test_views.py | honno/ascii-forever | 8364219db115229fa9eb0b059e9c0611dcb689cf | [
"MIT"
] | null | null | null | core/tests/test_views.py | honno/ascii-forever | 8364219db115229fa9eb0b059e9c0611dcb689cf | [
"MIT"
] | null | null | null | from django.urls import reverse
from pytest import mark
from core.models import *
urls = [reverse(name) for name in ["core:index", "core:arts"]]
@mark.parametrize("url", urls)
@mark.django_db
def test_nsfw_filter(url, django_user_model, client):
target = django_user_model.objects.create(username="bob", password="pass")
follower = django_user_model.objects.create(username="alice", password="pass")
follower.following.add(target)
sfw = Art(id=1, artist=target, title="sfw", text="sfw", nsfw=False)
nsfw = Art(id=2, artist=target, title="nsfw", text="nsfw", nsfw=True)
sfw.save()
nsfw.save()
client.force_login(follower)
response = client.get(url)
assert sfw in response.context["arts"]
assert nsfw in response.context["arts"]
follower.nsfw_pref = "HA"
follower.save()
response = client.get(url)
assert sfw in response.context["arts"]
assert nsfw not in response.context["arts"]
| 25.837838 | 82 | 0.694561 | 134 | 956 | 4.873134 | 0.410448 | 0.061256 | 0.104135 | 0.128637 | 0.294028 | 0.294028 | 0.183767 | 0.183767 | 0.183767 | 0.183767 | 0 | 0.002503 | 0.164226 | 956 | 36 | 83 | 26.555556 | 0.814768 | 0 | 0 | 0.173913 | 0 | 0 | 0.073222 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 1 | 0.043478 | false | 0.086957 | 0.130435 | 0 | 0.173913 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
e21afb57aa8ccce52815f0ad4d4f545a41684adb | 512 | py | Python | native/java/lang/double.py | wonderyue/TinyJVM | 5559730ab2aad35963fce977fb9b3ea78eb9a8e2 | [
"MIT"
] | null | null | null | native/java/lang/double.py | wonderyue/TinyJVM | 5559730ab2aad35963fce977fb9b3ea78eb9a8e2 | [
"MIT"
] | null | null | null | native/java/lang/double.py | wonderyue/TinyJVM | 5559730ab2aad35963fce977fb9b3ea78eb9a8e2 | [
"MIT"
] | null | null | null | import struct
def double_to_raw_long_bits(frame):
    """
    public static native long doubleToRawLongBits(double value);
    """
    value = frame.get_local_double(0)
    b = struct.pack("d", value)
    # "q" is always 8 bytes, matching the double's width; native "l" is only
    # 8 bytes on LP64 platforms and would fail on Windows
    i = struct.unpack("q", b)[0]
    frame.push_operand_long(i)


def long_bits_to_double(frame):
    """
    public static native double longBitsToDouble(long bits);
    """
    i = frame.get_local_long(0)
    b = struct.pack("q", i)
    value = struct.unpack("d", b)[0]
    frame.push_operand_double(value)
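
# Sanity check, independent of the JVM frame API (pure struct round-trip):
# 1.0 has the IEEE-754 bit pattern 0x3FF0000000000000.
#
#   import struct
#   bits = struct.unpack("q", struct.pack("d", 1.0))[0]
#   assert bits == 0x3FF0000000000000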
| 23.272727 | 64 | 0.65625 | 73 | 512 | 4.39726 | 0.356164 | 0.074766 | 0.105919 | 0.143302 | 0.11215 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009852 | 0.207031 | 512 | 21 | 65 | 24.380952 | 0.780788 | 0.228516 | 0 | 0 | 0 | 0 | 0.010989 | 0 | 0 | 0 | 0 | 0.047619 | 0 | 1 | 0.181818 | false | 0 | 0.090909 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e21eb67206281500e398b6199cc031ce513a61af | 1,713 | py | Python | tests/types/test_boolean.py | arthurazs/py61850 | ba9c5f40ef21bfecd14a8d380e9ff512da9ba5bf | [
"MIT"
] | 3 | 2020-09-21T02:13:58.000Z | 2021-09-18T02:32:56.000Z | tests/types/test_boolean.py | arthurazs/py61850 | ba9c5f40ef21bfecd14a8d380e9ff512da9ba5bf | [
"MIT"
] | null | null | null | tests/types/test_boolean.py | arthurazs/py61850 | ba9c5f40ef21bfecd14a8d380e9ff512da9ba5bf | [
"MIT"
] | 2 | 2020-12-29T15:09:50.000Z | 2022-01-04T16:19:48.000Z | from pytest import fixture, raises
from py61850.types import Boolean
@fixture
def true():
    return Boolean(True)

# === DECODE ===
def test_byte_true_min_raw_value():
    assert Boolean(b'\x01').raw_value == b'\x01'

def test_byte_true_min_value():
    assert Boolean(b'\x01').value is True

def test_byte_true_max_raw_value():
    assert Boolean(b'\xFF').raw_value == b'\xFF'

def test_byte_true_max_value():
    assert Boolean(b'\xFF').value is True

def test_byte_false_raw_value():
    assert Boolean(b'\x00').raw_value == b'\x00'

def test_byte_false_value():
    assert Boolean(b'\x00').value is False

# === TRUE ===
def test_true_value(true):
    assert true.value is True

def test_true_raw_value(true):
    assert true.raw_value != b'\x00'

# === FALSE ===
def test_false_value():
    assert Boolean(False).value is False

def test_false_raw_value(true):
    assert Boolean(False).raw_value == b'\x00'

# === UNCHANGED VALUES ===
def test_raw_tag(true):
    assert true.raw_tag == b'\x83'

def test_tag(true):
    assert true.tag == 'Boolean'

def test_raw_length(true):
    assert true.raw_length == b'\x01'

def test_length(true):
    assert true.length == 1

def test_bytes():
    assert bytes(Boolean(False)) == b'\x83\x01\x00'

def test_len(true):
    assert len(true) == 3

# === EXCEPTIONS ===
def test_encode_decode():
    with raises(TypeError):
        Boolean(1)

def test_decode_below():
    with raises(ValueError):
        Boolean(b'')

def test_decode_above():
    with raises(ValueError):
        Boolean(b'\x00\x00')

def test_none():
    with raises(TypeError):
        Boolean(None)

def test_none_empty():
    with raises(TypeError):
        Boolean()
| 15.861111 | 51 | 0.664332 | 251 | 1,713 | 4.298805 | 0.171315 | 0.136237 | 0.116775 | 0.105653 | 0.296571 | 0.040779 | 0 | 0 | 0 | 0 | 0 | 0.027556 | 0.19498 | 1,713 | 107 | 52 | 16.009346 | 0.754895 | 0.049621 | 0 | 0.096154 | 0 | 0 | 0.048705 | 0 | 0 | 0 | 0 | 0 | 0.307692 | 1 | 0.423077 | false | 0 | 0.038462 | 0.019231 | 0.480769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e222ff90231f967a2d2347ddf5dcea5c451c243b | 3,908 | py | Python | models/model.py | andersro93/School.ICT441.TextGenerator | e2efe2253a3cb24a196358d0074209340503069a | [
"MIT"
] | null | null | null | models/model.py | andersro93/School.ICT441.TextGenerator | e2efe2253a3cb24a196358d0074209340503069a | [
"MIT"
] | null | null | null | models/model.py | andersro93/School.ICT441.TextGenerator | e2efe2253a3cb24a196358d0074209340503069a | [
"MIT"
] | null | null | null | from os import DirEntry  # os.DirEntry; 'from nt import DirEntry' only works through a Windows-internal module
import os

from keras import Model as KerasModel


class Model(object):
    """
    Base class for the different models in this project
    """

    _files: dict = {}
    """ The files that are used to parse the model """

    _raw_content: str = ''
    """ The files' contents as one long string """

    _assets_path: str = 'assets_test'
    """ Path to the assets to read from """

    _model: KerasModel = None
    """ The Keras model that is used """

    _model_path: str = None
    """ The path where to save and use the model from """

    _activation_method: str = 'softmax'
    """ Activation method to use """

    _optimizer: str = 'adam'
    """ Optimizer method to use """

    _loss_method: str = 'categorical_crossentropy'
    """ Loss method to use """

    _training_data_encoding: str = 'utf-8'
    """ The encoding used on the training data """

    def print_model_summary(self) -> "Model":
        """
        Prints a model summary if the model has been created
        :return: self
        """
        if self._model:
            self._model.summary()
            return self

        print('No model has been created yet')
        return self

    def load_weights(self, weights: str) -> "Model":
        """
        Loads weights from the given weights file
        :param weights:
        :return: self
        """
        if not self._model:
            print('No model has been created, please create the model first!')
            return self

        self._model.load_weights(weights)
        self.compile_model()

        return self

    def compile_model(self) -> "Model":
        """
        Compiles the model and runs some optimizer on it
        :return: self
        """
        if not self._model:
            print('No model has been created, please create the model first!')
            return self

        self._model.compile(loss=self._loss_method, optimizer=self._optimizer)

        return self

    def _read_data_from_assets(self) -> "Model":
        """
        Reads and parses the data from the assets folder into the object itself
        :return: self
        """
        for directory in os.scandir(self._get_assets_full_path()):
            self._parse_directory(directory)

        return self

    def _concat_assets_content_to_one_string(self) -> "Model":
        """
        Concatenates the contents from all the assets to one string
        :return: self
        """
        for key, value in self._files.items():
            self._raw_content = self._raw_content + value

        self._raw_content = self._raw_content.lower()

        return self

    def _parse_directory(self, directory: DirEntry) -> "Model":
        """
        Recursively parses the given directory and starts to parse any found files
        :param directory:
        :return: self
        """
        entry: DirEntry
        for entry in os.scandir(directory):
            if entry.is_dir():
                self._parse_directory(entry)
            else:
                self._parse_file(entry)

        return self

    def _parse_file(self, file: DirEntry) -> "Model":
        """
        Tries to parse the given file and puts it in the self._files dictionary
        :param file: DirEntry
        :return: self
        """
        data: str
        try:
            # the with-block closes the file automatically
            with open(file.path, 'r', encoding=self._training_data_encoding) as file_reader:
                data = file_reader.read()
        except Exception:
            print(f"Unable to parse file: {file.path}")
            return self

        self._files[file.path] = data
        return self

    def _get_assets_full_path(self) -> str:
        """
        Returns a computed full path to the directory where the assets are located as a string
        :return: str
        """
        return os.path.join(os.path.dirname(os.path.dirname(__file__)), self._assets_path)
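
# Hedged subclass sketch: a concrete model would populate self._model with an
# actual Keras graph (the layer sizes below are placeholders, not project values):
#
#   from keras.layers import Input, Dense
#
#   class DenseModel(Model):
#       def build(self) -> "DenseModel":
#           inputs = Input(shape=(100,))
#           outputs = Dense(64, activation=self._activation_method)(inputs)
#           self._model = KerasModel(inputs, outputs)
#           return self.compile_model()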
| 27.138889 | 94 | 0.58956 | 476 | 3,908 | 4.661765 | 0.262605 | 0.072105 | 0.041009 | 0.03425 | 0.136999 | 0.118071 | 0.081118 | 0.081118 | 0.081118 | 0.081118 | 0 | 0.000378 | 0.322416 | 3,908 | 143 | 95 | 27.328671 | 0.837613 | 0.185261 | 0 | 0.238095 | 0 | 0 | 0.089799 | 0.009453 | 0 | 0 | 0 | 0 | 0 | 1 | 0.126984 | false | 0 | 0.063492 | 0 | 0.539683 | 0.079365 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
e22cdecef594a7a4d01026c734e302cfb7902186 | 673 | py | Python | django/users/migrations/0004_auto_20160408_1032.py | BD2KGenomics/brca-website | 243bee560d5714f7cf5d98d06c83be345f1a11b4 | [
"Apache-2.0"
] | 5 | 2016-01-12T01:29:50.000Z | 2017-03-10T08:34:52.000Z | django/users/migrations/0004_auto_20160408_1032.py | BD2KGenomics/brca-website-deprecated | 243bee560d5714f7cf5d98d06c83be345f1a11b4 | [
"Apache-2.0"
] | 141 | 2015-08-06T18:51:37.000Z | 2017-04-03T20:41:30.000Z | django/users/migrations/0004_auto_20160408_1032.py | BD2KGenomics/brca-website-deprecated | 243bee560d5714f7cf5d98d06c83be345f1a11b4 | [
"Apache-2.0"
] | 8 | 2015-08-08T00:32:18.000Z | 2016-07-29T16:05:44.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.9.4 on 2016-04-08 10:32
from __future__ import unicode_literals
from django.db import migrations, models
import django.utils.timezone
class Migration(migrations.Migration):
    dependencies = [
        ('users', '0003_myuser_has_image'),
    ]

    operations = [
        migrations.AddField(
            model_name='myuser',
            name='activation_key',
            field=models.CharField(blank=True, max_length=40),
        ),
        migrations.AddField(
            model_name='myuser',
            name='key_expires',
            field=models.DateTimeField(default=django.utils.timezone.now),
        ),
]
| 24.925926 | 74 | 0.616642 | 73 | 673 | 5.506849 | 0.684932 | 0.054726 | 0.094527 | 0.134328 | 0.18408 | 0.18408 | 0 | 0 | 0 | 0 | 0 | 0.044625 | 0.267459 | 673 | 26 | 75 | 25.884615 | 0.770791 | 0.099554 | 0 | 0.315789 | 1 | 0 | 0.104478 | 0.034826 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.157895 | 0 | 0.315789 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e22ed24dd1982b59d460dc15a81e37cb147cdf17 | 582 | py | Python | src/page_object_pattern/test_template.py | paulbodean88/automation-design-patterns | b160f317a0c0a1de409908f938fbeab0772c8147 | [
"MIT"
] | 14 | 2017-07-25T10:11:06.000Z | 2022-03-25T10:17:25.000Z | src/page_object_pattern/test_template.py | paulbodean88/automation-design-patterns | b160f317a0c0a1de409908f938fbeab0772c8147 | [
"MIT"
] | 3 | 2017-07-23T17:19:14.000Z | 2017-07-24T19:54:52.000Z | src/page_object_pattern/test_template.py | paulbodean88/automation-design-patterns | b160f317a0c0a1de409908f938fbeab0772c8147 | [
"MIT"
] | 5 | 2019-08-29T02:35:04.000Z | 2020-02-24T14:39:09.000Z | """
Description:
    - Test Template class.

Methods:
    - test setup
    - test teardown
    - test implementation

@author: Paul Bodean
@date: 26/12/2017
"""
import unittest

from selenium import webdriver


class TestTemplate(unittest.TestCase):

    def setUp(self):
        """
        Open the page to be tested
        :return: the driver implementation
        """
        self.driver = webdriver.Chrome()
        self.driver.get("https://en.wikipedia.org/wiki/Main_Page")

    def tearDown(self):
        """
        Quit the browser
        """
        self.driver.quit()
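The template above only defines setup and teardown; a concrete test method would slot in between them. A minimal illustrative sketch (the assertion target is an assumption, not part of the original file):

    def test_main_page_title(self):
        """Verify that the Wikipedia main page loaded"""
        self.assertIn("Wikipedia", self.driver.title)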
| 18.1875 | 66 | 0.603093 | 64 | 582 | 5.46875 | 0.65625 | 0.085714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019231 | 0.285223 | 582 | 31 | 67 | 18.774194 | 0.822115 | 0.395189 | 0 | 0 | 0 | 0 | 0.134483 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e22fce042441cdd77adbcb54853ba9d70a939d7d | 14,940 | py | Python | gmacpyutil/gmacpyutil/profiles_test.py | rgayon/macops | 1181ca269c9ae3235c1e9e7ae1bad4755b33c299 | [
"Apache-2.0"
] | 758 | 2015-01-05T19:48:20.000Z | 2022-02-18T10:44:52.000Z | gmacpyutil/gmacpyutil/profiles_test.py | rgayon/macops | 1181ca269c9ae3235c1e9e7ae1bad4755b33c299 | [
"Apache-2.0"
] | 161 | 2015-04-17T21:15:42.000Z | 2019-05-27T03:05:19.000Z | gmacpyutil/gmacpyutil/profiles_test.py | rgayon/macops | 1181ca269c9ae3235c1e9e7ae1bad4755b33c299 | [
"Apache-2.0"
] | 106 | 2015-01-20T21:21:00.000Z | 2022-03-04T00:15:41.000Z | """Tests for profiles module."""
import mock
from google.apputils import basetest
import profiles
class ProfilesModuleTest(basetest.TestCase):
def testGenerateUUID(self):
self.assertIsInstance(profiles.GenerateUUID('a'), str)
self.assertTrue(profiles.GenerateUUID('a').isupper())
self.assertEqual(profiles.GenerateUUID('a'),
profiles.GenerateUUID('a'))
def testValidatePayload(self):
payload = {}
with self.assertRaises(profiles.PayloadValidationError):
profiles.ValidatePayload(payload)
payload.update({profiles.PAYLOADKEYS_IDENTIFIER: 'a',
profiles.PAYLOADKEYS_DISPLAYNAME: 'a',
profiles.PAYLOADKEYS_TYPE: 'com.apple.welcome.to.1984'})
profiles.ValidatePayload(payload)
self.assertEqual(payload.get(profiles.PAYLOADKEYS_UUID),
profiles.GenerateUUID('a'))
self.assertEqual(payload.get(profiles.PAYLOADKEYS_ENABLED), True)
self.assertEqual(payload.get(profiles.PAYLOADKEYS_VERSION), 1)
class ProfileClassTest(basetest.TestCase):
"""Tests for the Profile class."""
def _GetValidProfile(self, include_payload=True):
profile = profiles.Profile()
profile.Set(profiles.PAYLOADKEYS_DISPLAYNAME, 'Acme Corp Config Profile')
profile.Set(profiles.PAYLOADKEYS_IDENTIFIER, 'com.acme.configprofile')
profile.Set(profiles.PAYLOADKEYS_ORG, 'Acme Corp')
profile.Set(profiles.PAYLOADKEYS_SCOPE, ['System', 'User'])
profile.Set(profiles.PAYLOADKEYS_TYPE, 'Configuration')
if include_payload:
profile.AddPayload(self._GetValidPayload())
return profile
def _GetValidPayload(self):
test_payload = {profiles.PAYLOADKEYS_IDENTIFIER: 'com.test.payload',
profiles.PAYLOADKEYS_DISPLAYNAME: 'Test Payload',
profiles.PAYLOADKEYS_TYPE: 'com.apple.welcome.to.1984'}
return test_payload
def testInit(self):
"""Test the __init__ method."""
profile = profiles.Profile()
self.assertIsNotNone(profile._profile)
self.assertEqual(profile._profile[profiles.PAYLOADKEYS_CONTENT], [])
def testGet(self):
profile = profiles.Profile()
profile._profile['TestKey'] = 'TestValue'
self.assertEqual(profile.Get(profiles.PAYLOADKEYS_CONTENT), [])
self.assertEqual(profile.Get('TestKey'), 'TestValue')
def testSet(self):
profile = profiles.Profile()
profile.Set('TestKey', 'TestValue')
profile.Set('OtherKey', 'OtherValue')
self.assertEqual(profile._profile['TestKey'], 'TestValue')
self.assertEqual(profile._profile['OtherKey'], 'OtherValue')
def testStr(self):
profile = self._GetValidProfile()
self.assertEqual(profile.__str__(), 'Acme Corp Config Profile')
def testAddPayload(self):
profile = self._GetValidProfile(include_payload=False)
test_payload = self._GetValidPayload()
with self.assertRaises(profiles.PayloadValidationError):
profile.AddPayload('Payloads should be dicts')
profile.AddPayload(test_payload)
self.assertEqual(profile.Get(profiles.PAYLOADKEYS_CONTENT), [test_payload])
def testValidateProfile(self):
profile = profiles.Profile()
with self.assertRaises(profiles.ProfileValidationError):
profile._ValidateProfile()
profile = self._GetValidProfile(include_payload=False)
with self.assertRaises(profiles.ProfileValidationError):
profile._ValidateProfile()
profile.AddPayload(self._GetValidPayload())
profile._ValidateProfile()
self.assertIsNotNone(profile.Get(profiles.PAYLOADKEYS_UUID))
self.assertIsNotNone(profile.Get(profiles.PAYLOADKEYS_VERSION))
@mock.patch.object(profiles.plistlib, 'writePlist')
def testSaveSuccess(self, mock_writeplist):
profile = self._GetValidProfile()
profile.Save('/tmp/hello')
mock_writeplist.assert_called_once_with(profile._profile, '/tmp/hello')
@mock.patch.object(profiles.plistlib, 'writePlist')
def testSaveIOError(self, mock_writeplist):
profile = self._GetValidProfile()
mock_writeplist.side_effect = IOError
with self.assertRaises(profiles.ProfileSaveError):
profile.Save('/tmp/hello')
mock_writeplist.assert_called_once_with(profile._profile, '/tmp/hello')
@mock.patch.object(profiles.gmacpyutil, 'RunProcess')
@mock.patch.object(profiles.Profile, 'Save')
def testInstallSuccess(self, mock_save, mock_runprocess):
profile = self._GetValidProfile()
mock_runprocess.return_value = ['Output', None, 0]
profile.Install()
mock_save.assert_called_once_with(mock.ANY)
mock_runprocess.assert_called_once_with(
[profiles.CMD_PROFILES, '-I', '-F', mock.ANY],
sudo=None, sudo_password=None)
@mock.patch.object(profiles.gmacpyutil, 'RunProcess')
@mock.patch.object(profiles.Profile, 'Save')
def testInstallSudoPassword(self, mock_save, mock_runprocess):
profile = self._GetValidProfile()
mock_runprocess.return_value = ['Output', None, 0]
profile.Install(sudo_password='ladygagaeatssocks')
mock_save.assert_called_once_with(mock.ANY)
mock_runprocess.assert_called_once_with(
[profiles.CMD_PROFILES, '-I', '-F', mock.ANY],
sudo='ladygagaeatssocks', sudo_password='ladygagaeatssocks')
@mock.patch.object(profiles.gmacpyutil, 'RunProcess')
@mock.patch.object(profiles.Profile, 'Save')
def testInstallCommandFail(self, mock_save, mock_runprocess):
profile = self._GetValidProfile()
mock_runprocess.return_value = ['Output', 'Errors', 42]
with self.assertRaisesRegexp(profiles.ProfileInstallationError,
'Profile installation failed!\n'
'Output, Errors, 42'):
profile.Install(sudo_password='ladygagaeatssocks')
mock_save.assert_called_once_with(mock.ANY)
mock_runprocess.assert_called_once_with(
[profiles.CMD_PROFILES, '-I', '-F', mock.ANY],
sudo='ladygagaeatssocks', sudo_password='ladygagaeatssocks')
@mock.patch.object(profiles.gmacpyutil, 'RunProcess')
@mock.patch.object(profiles.Profile, 'Save')
def testInstallCommandException(self, mock_save, mock_runprocess):
profile = self._GetValidProfile()
mock_runprocess.side_effect = profiles.gmacpyutil.GmacpyutilException
with self.assertRaisesRegexp(profiles.ProfileInstallationError,
'Profile installation failed!\n'):
profile.Install(sudo_password='ladygagaeatssocks')
mock_save.assert_called_once_with(mock.ANY)
mock_runprocess.assert_called_once_with(
[profiles.CMD_PROFILES, '-I', '-F', mock.ANY],
sudo='ladygagaeatssocks', sudo_password='ladygagaeatssocks')
class NetworkProfileClassTest(basetest.TestCase):
"""Tests for the NetworkProfile class."""
def testInit(self):
profile = profiles.NetworkProfile('testuser')
self.assertEqual(profile.Get(profiles.PAYLOADKEYS_DISPLAYNAME),
'Network Profile (testuser)')
self.assertEqual(profile.Get(profiles.PAYLOADKEYS_DESCRIPTION),
'Network authentication settings')
self.assertEqual(profile.Get(profiles.PAYLOADKEYS_IDENTIFIER),
'com.megacorp.networkprofile')
self.assertEqual(profile.Get(profiles.PAYLOADKEYS_SCOPE),
['System', 'User'])
self.assertEqual(profile.Get(profiles.PAYLOADKEYS_TYPE), 'Configuration')
self.assertEqual(profile.Get(profiles.PAYLOADKEYS_CONTENT), [])
def testGenerateID(self):
profile = profiles.NetworkProfile('testuser')
self.assertEqual(profile._GenerateID('test_suffix'),
'com.megacorp.networkprofile.test_suffix')
self.assertEqual(profile._GenerateID('another_suffix'),
'com.megacorp.networkprofile.another_suffix')
@mock.patch.object(profiles.NetworkProfile, 'AddPayload')
@mock.patch.object(profiles.crypto, 'load_privatekey')
@mock.patch.object(profiles.crypto, 'load_certificate')
@mock.patch.object(profiles.crypto, 'PKCS12Type')
@mock.patch.object(profiles.certs, 'Certificate')
def testAddMachineCertificateSuccess(self, mock_certificate, mock_pkcs12,
mock_loadcert, mock_loadkey,
mock_addpayload):
mock_certobj = mock.MagicMock()
mock_certobj.subject_cn = 'My Cert Subject'
mock_certobj.osx_fingerprint = '0011223344556677889900'
mock_certificate.return_value = mock_certobj
mock_pkcs12obj = mock.MagicMock()
mock_pkcs12obj.export.return_value = '-----PKCS12 Data-----'
mock_pkcs12.return_value = mock_pkcs12obj
mock_loadcert.return_value = 'certobj'
mock_loadkey.return_value = 'keyobj'
profile = profiles.NetworkProfile('testuser')
profile.AddMachineCertificate('fakecert', 'fakekey')
mock_pkcs12.assert_called_once_with()
mock_pkcs12obj.set_certificate.assert_called_once_with('certobj')
mock_pkcs12obj.set_privatekey.assert_called_once_with('keyobj')
mock_pkcs12obj.export.assert_called_once_with('0011223344556677889900')
mock_loadcert.assert_called_once_with(1, 'fakecert')
mock_loadkey.assert_called_once_with(1, 'fakekey')
mock_addpayload.assert_called_once_with(
{profiles.PAYLOADKEYS_IDENTIFIER:
'com.megacorp.networkprofile.machine_cert',
profiles.PAYLOADKEYS_TYPE: 'com.apple.security.pkcs12',
profiles.PAYLOADKEYS_DISPLAYNAME: 'My Cert Subject',
profiles.PAYLOADKEYS_ENABLED: True,
profiles.PAYLOADKEYS_VERSION: 1,
profiles.PAYLOADKEYS_CONTENT: profiles.plistlib.Data(
'-----PKCS12 Data-----'),
profiles.PAYLOADKEYS_UUID: mock.ANY,
'Password': '0011223344556677889900'})
@mock.patch.object(profiles.crypto, 'load_privatekey')
@mock.patch.object(profiles.crypto, 'load_certificate')
@mock.patch.object(profiles.crypto, 'PKCS12Type')
@mock.patch.object(profiles.certs, 'Certificate')
def testAddMachineCertificateInvalidKey(self, mock_certificate, mock_pkcs12,
mock_loadcert, mock_loadkey):
mock_certobj = mock.MagicMock()
mock_certobj.subject_cn = 'My Cert Subject'
mock_certobj.osx_fingerprint = '0011223344556677889900'
mock_certificate.return_value = mock_certobj
mock_pkcs12obj = mock.MagicMock()
mock_pkcs12obj.export.side_effect = profiles.crypto.Error
mock_pkcs12.return_value = mock_pkcs12obj
mock_loadcert.return_value = 'certobj'
mock_loadkey.return_value = 'keyobj_from_different_cert'
profile = profiles.NetworkProfile('testuser')
with self.assertRaises(profiles.CertificateError):
profile.AddMachineCertificate('fakecert', 'otherfakekey')
@mock.patch.object(profiles.certs, 'Certificate')
def testAddMachineCertificateBadCert(self, mock_certificate):
mock_certificate.side_effect = profiles.certs.CertError
profile = profiles.NetworkProfile('testuser')
with self.assertRaises(profiles.CertificateError):
profile.AddMachineCertificate('fakecert', 'fakekey')
@mock.patch.object(profiles.NetworkProfile, 'AddPayload')
@mock.patch.object(profiles.certs, 'Certificate')
def testAddAnchorCertificateSuccess(self, mock_certificate, mock_addpayload):
mock_certobj = mock.MagicMock()
mock_certobj.subject_cn = 'My Cert Subject'
mock_certobj.osx_fingerprint = '0011223344556677889900'
mock_certificate.return_value = mock_certobj
profile = profiles.NetworkProfile('testuser')
profile.AddAnchorCertificate('my_cert')
mock_certificate.assert_called_once_with('my_cert')
mock_addpayload.assert_called_once_with(
{profiles.PAYLOADKEYS_IDENTIFIER:
'com.megacorp.networkprofile.0011223344556677889900',
profiles.PAYLOADKEYS_TYPE: 'com.apple.security.pkcs1',
profiles.PAYLOADKEYS_DISPLAYNAME: 'My Cert Subject',
profiles.PAYLOADKEYS_CONTENT: profiles.plistlib.Data('my_cert'),
profiles.PAYLOADKEYS_ENABLED: True,
profiles.PAYLOADKEYS_VERSION: 1,
profiles.PAYLOADKEYS_UUID: mock.ANY})
@mock.patch.object(profiles.certs, 'Certificate')
def testAddAnchorCertificateBadCert(self, mock_certificate):
mock_certificate.side_effect = profiles.certs.CertError
profile = profiles.NetworkProfile('testuser')
with self.assertRaises(profiles.CertificateError):
profile.AddAnchorCertificate('test_cert')
@mock.patch.object(profiles.NetworkProfile, 'AddPayload')
def testAddNetworkPayloadSSID(self, mock_addpayload):
profile = profiles.NetworkProfile('test_user')
profile._auth_cert = '00000000-AUTH-CERT-UUID-00000000'
profile._anchor_certs = ['00000000-ANCH-ORCE-RTUU-ID000000']
profile.AddTrustedServer('radius.company.com')
profile.AddNetworkPayload('SSID')
eap_client_data = {'AcceptEAPTypes': [13],
'PayloadCertificateAnchorUUID':
['00000000-ANCH-ORCE-RTUU-ID000000'],
'TLSTrustedServerNames':
['radius.company.com'],
'TLSAllowTrustExceptions': False}
mock_addpayload.assert_called_once_with(
{'AutoJoin': True,
'SetupModes': ['System', 'User'],
'PayloadCertificateUUID': '00000000-AUTH-CERT-UUID-00000000',
'EncryptionType': 'WPA',
'Interface': 'BuiltInWireless',
profiles.PAYLOADKEYS_DISPLAYNAME: 'SSID',
profiles.PAYLOADKEYS_IDENTIFIER:
'com.megacorp.networkprofile.ssid.SSID',
profiles.PAYLOADKEYS_TYPE: 'com.apple.wifi.managed',
'SSID_STR': 'SSID',
'EAPClientConfiguration': eap_client_data})
@mock.patch.object(profiles.NetworkProfile, 'AddPayload')
def testAddNetworkPayloadWired(self, mock_addpayload):
profile = profiles.NetworkProfile('test_user')
profile._auth_cert = '00000000-AUTH-CERT-UUID-00000000'
profile._anchor_certs = ['00000000-ANCH-ORCE-RTUU-ID000000']
profile.AddTrustedServer('radius.company.com')
profile.AddNetworkPayload('wired')
eap_client_data = {'AcceptEAPTypes': [13],
'PayloadCertificateAnchorUUID':
['00000000-ANCH-ORCE-RTUU-ID000000'],
'TLSTrustedServerNames':
['radius.company.com'],
'TLSAllowTrustExceptions': False}
mock_addpayload.assert_called_once_with(
{'AutoJoin': True,
'SetupModes': ['System', 'User'],
'PayloadCertificateUUID': '00000000-AUTH-CERT-UUID-00000000',
'EncryptionType': 'Any',
'Interface': 'FirstActiveEthernet',
profiles.PAYLOADKEYS_DISPLAYNAME: 'Wired',
profiles.PAYLOADKEYS_IDENTIFIER:
'com.megacorp.networkprofile.wired',
profiles.PAYLOADKEYS_TYPE: 'com.apple.firstactiveethernet.managed',
'EAPClientConfiguration': eap_client_data})
if __name__ == '__main__':
basetest.main()
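Pieced together from the assertions above (and therefore an inference, not verified against the real profiles module), typical use of the code under test would look roughly like this sketch:

    profile = profiles.Profile()
    profile.Set(profiles.PAYLOADKEYS_DISPLAYNAME, 'Acme Corp Config Profile')
    profile.Set(profiles.PAYLOADKEYS_IDENTIFIER, 'com.acme.configprofile')
    profile.Set(profiles.PAYLOADKEYS_SCOPE, ['System', 'User'])
    profile.Set(profiles.PAYLOADKEYS_TYPE, 'Configuration')
    profile.AddPayload({
        profiles.PAYLOADKEYS_IDENTIFIER: 'com.test.payload',
        profiles.PAYLOADKEYS_DISPLAYNAME: 'Test Payload',
        profiles.PAYLOADKEYS_TYPE: 'com.apple.welcome.to.1984',
    })
    profile.Save('/tmp/example.mobileconfig')  # writes a plist to disk
    profile.Install()                          # shells out to the profiles CLI (CMD_PROFILES)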
| 40.16129 | 79 | 0.71178 | 1,458 | 14,940 | 7.080247 | 0.146776 | 0.082825 | 0.036327 | 0.055701 | 0.705899 | 0.636346 | 0.558268 | 0.502373 | 0.456166 | 0.439892 | 0 | 0.025477 | 0.175033 | 14,940 | 371 | 80 | 40.269542 | 0.812089 | 0.007831 | 0 | 0.547368 | 1 | 0 | 0.174774 | 0.072423 | 0 | 0 | 0 | 0 | 0.192982 | 1 | 0.087719 | false | 0.031579 | 0.010526 | 0 | 0.115789 | 0.010526 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e236809ac5baf6f429907ed386884a66c65abed5 | 790 | py | Python | metrics_layer/core/parse/manifest.py | Zenlytic/metrics_layer | 45e291186c9171b44222a49444153c5df14985c4 | [
"Apache-2.0"
] | 5 | 2021-11-11T15:39:23.000Z | 2022-03-17T19:54:17.000Z | metrics_layer/core/parse/manifest.py | Zenlytic/metrics_layer | 45e291186c9171b44222a49444153c5df14985c4 | [
"Apache-2.0"
] | 10 | 2021-11-23T21:44:56.000Z | 2022-03-21T02:01:51.000Z | metrics_layer/core/parse/manifest.py | Zenlytic/metrics_layer | 45e291186c9171b44222a49444153c5df14985c4 | [
"Apache-2.0"
] | null | null | null | class Manifest:
    def __init__(self, definition: dict):
        self._definition = definition

    def exists(self):
        return self._definition is not None and self._definition != {}

    def _resolve_node(self, name: str):
        # Match `name` against the last dot-separated component of each node key
        key = next((k for k in self._definition["nodes"].keys() if name == k.split(".")[-1]), None)
        if key is None:
            raise ValueError(
                f"Could not find the ref {name} in the co-located dbt project."
                " Please check the name in your dbt project."
            )
        return self._definition["nodes"][key]

    def resolve_name(self, name: str):
        node = self._resolve_node(name)
        # return f"{node['database']}.{node['schema']}.{node['alias']}"
        return f"{node['schema']}.{node['alias']}"
| 37.619048 | 99 | 0.583544 | 101 | 790 | 4.425743 | 0.435644 | 0.187919 | 0.089485 | 0.085011 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001742 | 0.273418 | 790 | 20 | 100 | 39.5 | 0.777003 | 0.077215 | 0 | 0 | 0 | 0 | 0.200825 | 0.044017 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0.0625 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e237d3872eb66e61cab21518227533afd94c87e8 | 252 | py | Python | desafio/desafio051.py | henriquekirchheck/Curso-em-video-Python | 1a29f68515313af85c8683f626ba35f8fcdd10e7 | [
"MIT"
] | null | null | null | desafio/desafio051.py | henriquekirchheck/Curso-em-video-Python | 1a29f68515313af85c8683f626ba35f8fcdd10e7 | [
"MIT"
] | null | null | null | desafio/desafio051.py | henriquekirchheck/Curso-em-video-Python | 1a29f68515313af85c8683f626ba35f8fcdd10e7 | [
"MIT"
] | null | null | null | print('=====================')
print('  10 Terms of an AP')
print('=====================')
p = int(input('First term: '))
r = int(input('Common difference: '))
for term in range(p, (r * 10) + p, r):
    print('{} ->'.format(term), end=' ')
print('Done') | 25.2 | 41 | 0.452381 | 32 | 252 | 3.5625 | 0.625 | 0.140351 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018692 | 0.150794 | 252 | 10 | 42 | 25.2 | 0.514019 | 0 | 0 | 0.25 | 0 | 0 | 0.379447 | 0.166008 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
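A worked example of what the loop above produces (values illustrative): with first term p and common difference r, range(p, r * 10 + p, r) yields exactly the first 10 terms of the arithmetic progression.

    # p = 5, r = 3  ->  5 8 11 14 17 20 23 26 29 32   (terms p, p + r, ..., p + 9r)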
e23831c5c1e4e4aaba38057ace81941b540c3a57 | 1,159 | py | Python | push.py | mikofski/dulwichPorcelain | 2e4aa751ed70f9c4167de5e8aa5297b5cc6f583f | [
"BSD-2-Clause"
] | 4 | 2015-07-13T17:47:51.000Z | 2017-09-10T02:57:07.000Z | push.py | mikofski/dulwichPorcelain | 2e4aa751ed70f9c4167de5e8aa5297b5cc6f583f | [
"BSD-2-Clause"
] | null | null | null | push.py | mikofski/dulwichPorcelain | 2e4aa751ed70f9c4167de5e8aa5297b5cc6f583f | [
"BSD-2-Clause"
] | null | null | null | from dulwich.repo import Repo
from dulwich.client import get_transport_and_path
import sys


def push(remote_url, repo_path='.'):
    """
    Push to a remote repository
    :param remote_url: <str> url of remote repository
    :param repo_path: <str> path of local repository
    :return refs: <dict> dictionary of ref-sha pairs
    """
    client, path = get_transport_and_path(remote_url)
    r = Repo(repo_path)
    objsto = r.object_store
    refs = r.get_refs()

    def update_refs(old):
        # TODO: Too complicated, not necessary to find the refs that
        # differ - it's fine to update a ref even if it already exists.
        # TODO: Also error out if there are non-fast forward updates
        same = list(set(refs).intersection(old))
        new = {k: refs[k] for k in same if refs[k] != old[k]}
        dfky = list(set(refs) - set(new))
        dfrnt = {k: refs[k] for k in dfky if k != 'HEAD'}
        return {**new, **dfrnt}

    return client.send_pack(path,
                            update_refs,
                            objsto.generate_pack_contents,
                            sys.stdout.write)
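A minimal usage sketch (the remote URL is invented for illustration): push the refs of the repository in the current directory to a remote.

    if __name__ == '__main__':
        push('https://example.com/user/repo.git', repo_path='.')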
| 38.633333 | 71 | 0.612597 | 166 | 1,159 | 4.162651 | 0.463855 | 0.039074 | 0.043415 | 0.054993 | 0.04631 | 0.04631 | 0.04631 | 0 | 0 | 0 | 0 | 0 | 0.284728 | 1,159 | 29 | 72 | 39.965517 | 0.833534 | 0.308024 | 0 | 0 | 0 | 0 | 0.006477 | 0 | 0 | 0 | 0 | 0.034483 | 0 | 1 | 0.111111 | false | 0 | 0.166667 | 0 | 0.388889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e23e5edeb769867120d432db8d1a63dd68cde4ce | 15,509 | py | Python | django_flex_user/models/user.py | ebenh/django-flex-user | efffb21e4ce33d2ea8665756334e2a391f4b5a72 | [
"MIT"
] | 1 | 2021-09-13T20:26:02.000Z | 2021-09-13T20:26:02.000Z | django_flex_user/models/user.py | ebenh/django-flex-user | efffb21e4ce33d2ea8665756334e2a391f4b5a72 | [
"MIT"
] | null | null | null | django_flex_user/models/user.py | ebenh/django-flex-user | efffb21e4ce33d2ea8665756334e2a391f4b5a72 | [
"MIT"
] | null | null | null | from django.db import models
from django.db.models.signals import pre_save, post_save
from django.dispatch import receiver
from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
from django.contrib.auth.models import PermissionsMixin
from django.utils import timezone
from django.utils.translation import gettext_lazy as _
from django.core.exceptions import ValidationError, NON_FIELD_ERRORS
from phonenumber_field.modelfields import PhoneNumberField
from dirtyfields import DirtyFieldsMixin
from django_flex_user.validators import FlexUserUnicodeUsernameValidator
from django_flex_user.fields import CICharField
# Reference: https://docs.djangoproject.com/en/3.0/topics/auth/customizing/
# Reference: https://simpleisbetterthancomplex.com/tutorial/2016/07/22/how-to-extend-django-user-model.html
class FlexUserManager(BaseUserManager):
"""
Our custom implementation of django.contrib.auth.models.UserManager.
"""
@classmethod
def normalize_email(cls, email):
"""
Normalize email by lowercasing and IDNA encoding its domain part.
:param email:
:return:
"""
if email is None:
return None
try:
email_name, domain_part = email.strip().rsplit('@', 1)
email = email_name + '@' + domain_part.lower().encode('idna').decode('ascii')
except UnicodeError:
pass
except ValueError:
pass
return email
def _create_user(self, username=None, email=None, phone=None, password=None, **extra_fields):
user = self.model(username=username, email=email, phone=phone, **extra_fields)
user.set_password(password)
user.full_clean()
user.save(using=self._db)
return user
def create_user(self, username=None, email=None, phone=None, password=None, **extra_fields):
"""
Create a user. You must supply at least one of ``username``, ``email``, or ``phone``.
If ``password`` is None, the user's password will be set using \
:meth:`~django.contrib.auth.models.User.set_unusable_password`.
.. warning::
This method does not run :setting:`AUTH_PASSWORD_VALIDATORS` against ``password``. It's the
caller's responsibility to run password validators before calling this method.
:param username: The username for the user, defaults to None.
:type username: str, optional
:param email: The email address for the user, defaults to None.
:type email: str, optional
:param phone: The phone number for the user, defaults to None.
:type phone: str, optional
:param password: The password for the user, defaults to None.
:type password: str, optional
:param extra_fields: Additional model fields you wish to set for the user.
:type extra_fields: dict, optional
:raises ~django.core.exceptions.ValidationError: If any of the supplied parameters fails model field validation
(e.g. the supplied phone number is already in use by another user, the supplied username is invalid, etc.)
:return: The newly created user.
:rtype: ~django_flex_user.models.user.FlexUser
"""
extra_fields.setdefault('is_staff', False)
extra_fields.setdefault('is_superuser', False)
return self._create_user(username, email, phone, password, **extra_fields)
def create_superuser(self, username=None, email=None, phone=None, password=None, **extra_fields):
"""
Create a super user. You must supply at least one of ``username``, ``email``, or ``phone``.
If ``password`` is None, the user's password will be set using \
:meth:`~django.contrib.auth.models.User.set_unusable_password`.
.. warning::
This method does not run :setting:`AUTH_PASSWORD_VALIDATORS` against ``password``. It's the
caller's responsibility to run password validators before calling this method.
:param username: The username for the user, defaults to None.
:type username: str, optional
:param email: The email address for the user, defaults to None.
:type email: str, optional
:param phone: The phone number for the user, defaults to None.
:type phone: str, optional
:param password: The password for the user, defaults to None.
:type password: str, optional
:param extra_fields: Additional model fields you wish to set for the user.
:type extra_fields: dict, optional
:raises ~django.core.exceptions.ValidationError: If any of the supplied parameters fails model field validation
(e.g. the supplied phone number is already in use by another user, the supplied username is invalid, etc.)
:return: The newly created user.
:rtype: ~django_flex_user.models.user.FlexUser
"""
extra_fields.setdefault('is_staff', True)
extra_fields.setdefault('is_superuser', True)
if extra_fields.get('is_staff') is not True:
raise ValueError('Superuser must have is_staff=True.')
if extra_fields.get('is_superuser') is not True:
raise ValueError('Superuser must have is_superuser=True.')
return self._create_user(username, email, phone, password, **extra_fields)
def get_by_natural_key(self, username=None, email=None, phone=None):
if username is None and email is None and phone is None:
raise ValueError('You must supply at least one of username, email or phone number')
q = {}
if username is not None:
q.update({'username': username})
if email is not None:
q.update({'email': email})
if phone is not None:
q.update({'phone': phone})
return self.get(**q)
class FlexUser(AbstractBaseUser, PermissionsMixin, DirtyFieldsMixin):
"""
Our implementation django.contrib.auth.models.User.
This user model is designed to give users the flexibility to sign up and sign in using their choice of username,
email address or phone number.
Our implementation is identical to django.contrib.auth.models.User except in the following ways:
username field sets null=True and blank=True.
email field sets null=True and blank = True.
phone field is introduced. It defines unique=True, null=True and blank=True.
first_name and last_name fields are omitted.
For each of username, email and phone we set blank = True to preserve the ordinary functioning of the
admin site. Setting blank = True on model fields results in form fields which have required = False set,
thereby enabling users to supply any subset of username, email and phone when configuring a user on the
admin site. Furthermore, when null = True and blank = True are set together on model fields, the value of empty
form fields are conveniently coerced to None. Unfortunately, setting blank = True on model fields has the
undesirable consequence that empty string values will not by rejected by clean_fields/full_clean methods. To
remedy this, we reject empty string values for username, email and phone in our clean method (see below).
clean method:
- Ensures that at least one of username, email or phone is defined for the user.
- Ensures that none of username, email and phone are equal to the empty string. We must do this
because we set blank = True for each of these fields (see above).
- Normalizes email in addition to username.
get_username method returns one of username, email, phone or id. This method evaluates each of these
fields in order and returns the first truthy value.
natural_key method returns a tuple of username, email and phone.
We place the following restrictions on username, email and phone:
- It shouldn't be possible to interpret username as an email address or phone number
- It shouldn't be possible to interpret email as a username or phone number
- It shouldn't be possible to interpret phone as a username or email address
These restrictions are enforced by field validators which apply the constraints below:
- username may not begin with "+" or a decimal number, nor may it contain "@"
- email must contain "@"
- phone must contain "+" and may not contain "@"
These constraints make it possible to receive an unspecified user identifier and infer whether it is a username,
email address or phone number.
"""
username_validator = FlexUserUnicodeUsernameValidator()
email = models.EmailField(
_('email address'),
unique=True,
null=True, # new
blank=True, # new
error_messages={
'unique': _("A user with that email address already exists."),
},
)
phone = PhoneNumberField( # new
_('phone number'),
unique=True,
null=True,
blank=True,
error_messages={
'unique': _("A user with that phone number already exists."),
},
)
# username = models.CharField(
# _('username'),
# max_length=150,
# unique=True,
# null=True, # new
# blank=True, # new
# help_text=_('150 characters or fewer. Letters, digits and ./-/_ only.'),
# validators=[username_validator],
# error_messages={
# 'unique': _("A user with that username already exists."),
# },
# )
username = CICharField(
_('username'),
max_length=150,
unique=True,
null=True, # new
blank=True, # new
help_text=_('150 characters or fewer. Letters, digits and ./-/_ only.'),
validators=[username_validator],
error_messages={
'unique': _("A user with that username already exists."),
},
)
is_staff = models.BooleanField(
_('staff status'),
default=False,
help_text=_('Designates whether the user can log into this admin site.'),
)
is_active = models.BooleanField(
_('active'),
default=True,
help_text=_(
'Designates whether this user should be treated as active. '
'Unselect this instead of deleting accounts.'
),
)
date_joined = models.DateTimeField(_('date joined'), default=timezone.now)
# We remove these fields from our user model implementation
# first_name = models.CharField(_('first name'), max_length=30, blank=True)
# last_name = models.CharField(_('last name'), max_length=150, blank=True)
EMAIL_FIELD = 'email'
USERNAME_FIELD = 'username'
REQUIRED_FIELDS = []
objects = FlexUserManager()
class Meta:
verbose_name = _('user')
verbose_name_plural = _('users')
def clean(self):
errors = {}
if self.username is None and self.email is None and self.phone is None:
errors[NON_FIELD_ERRORS] = 'You must supply at least one of {username}, {email} or {phone}.'.format(
username=self._meta.get_field('username').verbose_name,
email=self._meta.get_field('email').verbose_name,
phone=self._meta.get_field('phone').verbose_name
)
# For fields which have blank = False:
# django.db.models.fields.Field.clean first executes django.db.models.fields.Field.validate which raises an
# exception if the field contains a blank value. If an exception is raised, the subsequent call to
# django.db.models.fields.Field.run_validators is not made.
#
# For fields which have blank = True:
# django.db.models.base.Model.clean_fields executes django.db.models.fields.Field.clean for each of its fields.
# However, it skips this call for fields which contain a blank value.
#
# Therefore, validators are not run for blank values no matter what. So we cannot depend on validators to reject
# empty values.
if self.username == '':
errors['username'] = 'This field may not be blank.'
if self.email == '':
errors['email'] = 'This field may not be blank.'
if self.phone == '':
errors['phone'] = 'This field may not be blank.'
if errors:
raise ValidationError(errors)
# Normalize username and email
self.username = self.normalize_username(self.username)
self.email = FlexUser.objects.normalize_email(self.email)
def get_username(self):
"""Return the identifying username for this user"""
return self.username or self.email or (str(self.phone) if self.phone else None) or str(self.id)
def natural_key(self):
return self.username, self.email, self.phone
@receiver(pre_save, sender=FlexUser)
def my_pre_save_handler(sender, **kwargs):
pass
@receiver(post_save, sender=FlexUser)
def my_post_save_handler(sender, **kwargs):
user = kwargs['instance']
if kwargs['created']:
if user.email is not None:
user.emailtoken_set.create(user_id=user.id, email=user.email)
if user.phone is not None:
user.phonetoken_set.create(user_id=user.id, phone=user.phone)
else:
dirty_fields = user.get_dirty_fields(verbose=True)
if 'email' in dirty_fields:
if dirty_fields['email']['current'] is None:
# If the new value for email is None, delete the token if it exists
user.emailtoken_set.filter(user_id=user.id).delete()
elif dirty_fields['email']['saved'] is None:
# If the old value for email is None and its new value is not None, create a new token
# todo: construct this instance manually?
user.emailtoken_set.create(user=user, email=dirty_fields['email']['current'])
else:
# Otherwise, update the existing token
email_token = user.emailtoken_set.get(user=user)
email_token.email = user.email
# Reset the password
email_token.verified = False
email_token.password = None
email_token.expiration = None
email_token.save(update_fields=['email', 'verified', 'password', 'expiration'])
if 'phone' in dirty_fields:
if dirty_fields['phone']['current'] is None:
# If the new value for phone is None, delete the token if it exists
user.phonetoken_set.filter(user_id=user.id).delete()
elif dirty_fields['phone']['saved'] is None:
# If the old value for phone is None and its new value is not None, create a new token
# todo: construct this instance manually?
user.phonetoken_set.create(user=user, phone=dirty_fields['phone']['current'])
else:
# Otherwise, update the existing token
phone_token = user.phonetoken_set.get(user=user)
phone_token.phone = user.phone
# Reset the password
phone_token.verified = False
phone_token.password = None
phone_token.expiration = None
phone_token.save(update_fields=['phone', 'verified', 'password', 'expiration'])
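A short usage sketch of the API defined above (identifiers invented): any one of username, email or phone suffices both for creation and for natural-key lookup, and the manager normalizes the email's domain part.

    user = FlexUser.objects.create_user(email='Alice@Example.COM', password='s3cr3t!pass')
    # the domain was lower-cased by normalize_email, so lookup by the normalized value succeeds
    found = FlexUser.objects.get_by_natural_key(email='Alice@example.com')
    print(found.get_username())  # falls back through username, email, phone, then id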
| 43.687324 | 120 | 0.649945 | 1,974 | 15,509 | 5.013678 | 0.179331 | 0.010306 | 0.016672 | 0.01455 | 0.457411 | 0.409417 | 0.367687 | 0.33788 | 0.314338 | 0.298777 | 0 | 0.002455 | 0.264685 | 15,509 | 354 | 121 | 43.810734 | 0.865398 | 0.45954 | 0 | 0.120482 | 0 | 0 | 0.129554 | 0 | 0 | 0 | 0 | 0.002825 | 0 | 1 | 0.060241 | false | 0.078313 | 0.072289 | 0.006024 | 0.26506 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
e243469fdf4c02806f27f1c408cf2cf6e88ea291 | 1,159 | py | Python | main.py | yang-233/mmsa | eed7b943746041b735d8a7af8d60b6457f0284f6 | [
"MIT"
] | 1 | 2021-04-20T07:03:50.000Z | 2021-04-20T07:03:50.000Z | main.py | yang-233/mmsa | eed7b943746041b735d8a7af8d60b6457f0284f6 | [
"MIT"
] | null | null | null | main.py | yang-233/mmsa | eed7b943746041b735d8a7af8d60b6457f0284f6 | [
"MIT"
] | null | null | null | import sys
sys.path.append("/home/ly/workspace/mmsa")
seed = 1938
import numpy as np
import torch
from torch import nn
from torch import optim
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
from models.bigru_rcnn_gate import *
from utils.train import *
from typing import *
from utils.load_raw_yelp import *
from utils.dataset import *
def main():
train_set, valid_set, test_set = load_glove_data(config)
batch_size = 2
workers = 2
train_loader, valid_loader, test_loader = get_loader(batch_size, workers, get_collate_fn(config),
train_set, valid_set, test_set)
model = Model(config)
# X, y = next(iter(valid_loader))
# res = model(X)
loss = nn.CrossEntropyLoss()
# get_parameter_number(model), loss
viz = get_Visdom()
lr = 1e-3
epoches = 20
optimizer = get_regal_optimizer(model, optim.AdamW, lr)
k_batch_train_visdom(model, optimizer, loss, valid_loader, viz, 30, 10, use_cuda=False)
if __name__ == "__main__":
# torch.cuda.set_device(1)
main() | 25.755556 | 102 | 0.707506 | 173 | 1,159 | 4.485549 | 0.445087 | 0.07732 | 0.096649 | 0.07732 | 0.229381 | 0.208763 | 0.072165 | 0 | 0 | 0 | 0 | 0.015991 | 0.190682 | 1,159 | 45 | 103 | 25.755556 | 0.811301 | 0.090595 | 0 | 0.090909 | 0 | 0 | 0.029496 | 0.021884 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030303 | false | 0 | 0.363636 | 0 | 0.393939 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
e243f2fb56034af4479821d1bde3670f31edfe71 | 2,113 | py | Python | back-end/www/model/timeception/core/const.py | yenchiah/deep-smoke-machine | 5f779f723a3c891145db43663c8825f9ab55dc74 | [
"BSD-3-Clause"
] | 88 | 2019-05-29T07:38:45.000Z | 2022-03-17T01:50:50.000Z | back-end/www/model/timeception/core/const.py | yenchiah/deep-smoke-machine | 5f779f723a3c891145db43663c8825f9ab55dc74 | [
"BSD-3-Clause"
] | 6 | 2019-05-30T08:47:07.000Z | 2021-09-01T07:45:54.000Z | back-end/www/model/timeception/core/const.py | yenchiah/deep-smoke-machine | 5f779f723a3c891145db43663c8825f9ab55dc74 | [
"BSD-3-Clause"
] | 22 | 2019-06-17T01:15:35.000Z | 2021-11-17T10:29:00.000Z | #!/usr/bin/env python
# -*- coding: UTF-8 -*-
########################################################################
# GNU General Public License v3.0
# GNU GPLv3
# Copyright (c) 2019, Noureldien Hussein
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
########################################################################
"""
Constants for project.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import os
import platform
import numpy as np
DL_FRAMEWORKS = np.array(['caffe', 'tensorflow', 'pytorch', 'keras', 'caffe2'])
DL_FRAMEWORK = None
GPU_CORE_ID = 0
CNN_FEATURE_SIZES = np.array([2048, 2048, 1000, 1024, 1000, 2048, 2048])
CNN_FEATURE_TYPES = np.array(['fc6', 'fc7', 'fc1000', 'fc1024', 'fc365', 'prob', 'pool5', 'fc8a', 'res3b7', 'res4b35', 'res5c'])
CNN_MODEL_TYPES = np.array(['resnet152', 'googlenet1k', 'vgg16', 'places365-resnet152', 'places365-vgg', 'googlenet13k'])
RESIZE_TYPES = np.array(['resize', 'resize_crop', 'resize_crop_scaled', 'resize_keep_aspect_ratio_padded'])
ROOT_PATH_TYPES = np.array(['data', 'project'])
TRAIN_SCHEMES = np.array(['ete', 'tco'])
MODEL_CLASSIFICATION_TYPES = np.array(['ml', 'sl'])
MODEL_MULTISCALE_TYPES = np.array(['dl', 'ks'])
SOLVER_NAMES = np.array(['adam', 'sgd'])
DATASET_NAMES = np.array(['charades', 'kinetics400', 'breakfast_actions', 'you_cook_2', 'multi_thumos'])
DATA_ROOT_PATH = './data'
PROJECT_ROOT_PATH = '../'
MACHINE_NAME = platform.node()
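A small sketch of how such constant arrays are typically consumed (illustrative only): validating a configured value against the allowed set before using it.

    dataset_name = 'charades'
    assert dataset_name in DATASET_NAMES, 'unknown dataset: %s' % dataset_name
    resize_type = RESIZE_TYPES[1]  # 'resize_crop'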
| 39.867925 | 128 | 0.682915 | 283 | 2,113 | 4.904594 | 0.59364 | 0.055476 | 0.051873 | 0.066282 | 0.059078 | 0.040346 | 0 | 0 | 0 | 0 | 0 | 0.043666 | 0.122101 | 2,113 | 52 | 129 | 40.634615 | 0.704582 | 0.358258 | 0 | 0 | 0 | 0 | 0.27043 | 0.026116 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.304348 | 0 | 0.304348 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
e246ae17b63ba59e1c28d476250fb493117de794 | 20,534 | py | Python | shahryar_webscrapping_nlp2_WebCrawling.py | ShahryarZaidi/Web-Crawler-and-NLP- | 2dfaecfc20c4ab4a711a633c088113671ffc3a89 | [
"Apache-2.0"
] | null | null | null | shahryar_webscrapping_nlp2_WebCrawling.py | ShahryarZaidi/Web-Crawler-and-NLP- | 2dfaecfc20c4ab4a711a633c088113671ffc3a89 | [
"Apache-2.0"
] | null | null | null | shahryar_webscrapping_nlp2_WebCrawling.py | ShahryarZaidi/Web-Crawler-and-NLP- | 2dfaecfc20c4ab4a711a633c088113671ffc3a89 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# coding: utf-8
# In[65]:
from sklearn.feature_extraction.text import CountVectorizer,TfidfVectorizer
from sklearn.naive_bayes import BernoulliNB, MultinomialNB
from sklearn import metrics
from sklearn.metrics import roc_auc_score, accuracy_score
import requests
from bs4 import BeautifulSoup
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
import warnings
from sklearn.linear_model import LogisticRegression
from nltk.tokenize import word_tokenize
import re
import nltk
import emoji
import string
from textblob import TextBlob
import langid
from nltk.corpus import stopwords
from gensim import models, corpora
from sklearn.model_selection import train_test_split
warnings.filterwarnings('ignore')
# In[66]:
from bs4 import BeautifulSoup
import jsonpickle
import requests
from datetime import datetime, timedelta
from textblob import TextBlob
from productClass import Product
def main():
baseUrl = "https://www.amazon.in"
mainCategory = "electronics"
productCategory = "Samsung SSD"
pagesToFetch = 51
productObjectDataset = []
print("Processing...")
    # iterate over the amazon result pages; the upper limit is generous since we don't know how many pages there can be
for i in range(1, pagesToFetch + 1):
urlToFetch = baseUrl + "/s?k=" + productCategory + "&i=" + mainCategory
if (i > 1):
urlToFetch += "&page=" + str(i)
#endif
res = requests.get(urlToFetch)
soup = BeautifulSoup(res.text, 'html.parser')
content = soup.find_all('a',
class_='a-link-normal a-text-normal',
href=True)
print("Fetching: " + urlToFetch)
# breaking the loop if page not found
if (len(content) == 0):
print("Nothing found in: " + str(i))
break
#endif
for title in content:
productUrl = baseUrl + title.get('href')
productTitle = title.text
productObject = Product(productTitle, productUrl)
productObjectDataset.append(productObject)
#endfor
#endfor
for productObject in productObjectDataset:
reviews = []
needToReplace = "/product-reviews/"
for i in range(1, 1000000):
urlToFetch = extract_url(productObject).replace(
"/dp/", needToReplace) + "?pageNumber=" + str(i)
res = requests.get(urlToFetch)
soup = BeautifulSoup(res.text, 'html.parser')
content = soup.find_all(
'span', class_='a-size-base review-text review-text-content')
if (len(content) == 0):
break
#endif
for title in content:
reviews.append(title.text.strip())
#endfor
#endfor
productObject.add_reviews(reviews)
print(
extract_url(productObject) +
": status completed!, review found :" + str(len(reviews)))
#endfor
print(len(productObjectDataset))
jsonProductObjectDataset = jsonpickle.encode(productObjectDataset)
outputFile = open('filepath.json', 'w')
outputFile.write(jsonProductObjectDataset)
outputFile.close()
#enddef
def extract_title(productObject):
return productObject.title
#enddef
def extract_url(productObject):
return productObject.url
#enddef
def extract_review_list(productObject):
return productObject.review_list
#enddef
if __name__ == "__main__":
main()
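The crawler imports Product from a local productClass module that is not included here; the following is a minimal compatible sketch, reconstructed purely from how it is used above, not the actual module.

    # productClass.py (hypothetical reconstruction)
    class Product:
        def __init__(self, title, url):
            self.title = title
            self.url = url
            self.review_list = []

        def add_reviews(self, reviews):
            self.review_list = reviews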
#############################################################################
import requests
from bs4 import BeautifulSoup
# links and Headers
HEADERS = ({'User-Agent':
'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36',
'Accept-Language': 'en-US, en;q=0.5'})
# Link to the amazon product reviews
url = 'https://www.amazon.in/Samsung-Internal-Solid-State-MZ-V7S500BW/product-reviews/B07MFBLN7K/ref=cm_cr_arp_d_paging_btm_next_2?ie=UTF8&reviewerType=all_reviews&pageNumber='
review_list = []
def retrieve_reviews(soup):
# Get only those divs from the website which have a property data-hook and its value is review
reviews = soup.find_all("div", {'data-hook': "review"})
# Retrieving through the raw text inside the reviews
for item in reviews:
review = {
# Get the title of the review
'title': item.find("a", {'data-hook': "review-title"}).text.strip(),
# Get the rating. It will be like 4.5 out of 5 stars. So we have to remove out of 5 stars from it and only keep float value 4.5, 3.4, etc.
'rating': item.find("i", {'data-hook': "review-star-rating"}).text.replace("out of 5 stars", "").strip(),
# Get the actual review text
'review_text': item.find("span", {'data-hook': "review-body"}).text.strip()
}
review_list.append(review)
# Get the page content from amazon
# iterating over the review pages (50 pages are fetched below)
for pageNumber in range(1, 51):
raw_text = requests.get(url=url+(str(pageNumber)), headers = HEADERS)
soup = BeautifulSoup(raw_text.text, 'lxml')
retrieve_reviews(soup)
for index in range(len(review_list)):
# Print out all the reviews inside of a reviews_list
print(f"{index+1}) {review_list[index]}")
print("")
import csv
import pandas as pd
# Create dataframe out of all the reviews from amazon
reviews_df = pd.DataFrame(review_list)
# Put that dataframe into an excel file
reviews_df.to_excel('samsung.xlsx', index = False)
print("Done.")
# In[67]:
def remove_emojis(text):
reg = emoji.get_emoji_regexp()
emoji_free_text = reg.sub(r'', text)
return emoji_free_text
# Cleaning function
def preprocess(input_text):
    lower_text = input_text.lower()
    punctuations = '''`!()-[]{};:'"\,<>./?@#$%^&*_~=+°'''
    lower_text = re.sub(r"@[A-Za-z0-9]+", "", lower_text)  # Removes the @mentions
    lower_text = re.sub(r"[0-9]+", "", lower_text)  # Removes the numbers
    # tokenization
    tokens = word_tokenize(lower_text)
    stop_words = stopwords.words("english")
    # Removing stopwords
    filtered_text = [word for word in tokens if word not in stop_words]
    # remove any empty tokens left over
    for token in filtered_text:
        if token == "":
            filtered_text.remove(token)
    filtered_text = ' '.join([word for word in filtered_text])
    clean_text = remove_emojis(filtered_text)
    # Removing punctuation characters one by one
    for ele in clean_text:
        if ele in punctuations:
            clean_text = clean_text.replace(ele, "")
    # Removing small words with length less than 3
    clean_text = ' '.join([t for t in clean_text.split() if len(t) >= 3])
    return word_tokenize(clean_text)
# In[70]:
reviews = pd.read_excel("samsung.xlsx")
reviews.head()
# In[71]:
reviews.shape
# In[72]:
plt.figure(figsize = (7, 7))
sns.countplot(reviews["rating"])
# In[73]:
rating_count = pd.DataFrame(reviews["rating"].value_counts().reset_index())
rating_count
# In[74]:
explode = [0.05, 0.04, 0, 0.02, 0]
names = ["Rating 5.0", "Rating 4.0", "Rating 1.0", "Rating 3.0", "Rating 2.0"]
plt.figure(figsize = (10, 10))
plt.pie(rating_count["rating"],
labels = names,
labeldistance=1.05,
wedgeprops = { 'linewidth' : 1.5, 'edgecolor' : 'white' },
explode = explode,
autopct = '%.2f%%',
shadow = True,
pctdistance = .85,
textprops = {"fontsize": 14, "color":'w'},
rotatelabels = True,
radius = 1.3
)
plt.show()
# The most common ratings are 5.0 and 4.0, which suggests the product is generally well received.
# In[75]:
review_text = list(reviews["review_text"])
review_text[:5]
# In[76]:
reviews_df.shape
# In[77]:
product_review = list(reviews_df["review_text"])
# In[78]:
product_review[0]
# In[79]:
import emoji
def remove_emojis(text):
reg = emoji.get_emoji_regexp()
emoji_free_text = reg.sub(r'', text)
return emoji_free_text
# In[80]:
# Cleaning function
def preprocess(reviews, stopwords):
cleaned_reviews = []
for review in reviews:
lower_text = review.lower()
punctuations = '''`!()-[]{};:'"\,<>./?@#$%^&*_~=+°'''
lower_text = re.sub(r"@[A-Za-z0-9]+", "", lower_text) # Removes the @mentions from the tweets
lower_text = re.sub(r"[0-9]+", "", lower_text) # Removes the Numbers from the tweets
# tokenization
tokens = word_tokenize(lower_text)
# Removing stopwords
filtered_text = [word for word in tokens if word not in stopwords]
# look for empty words or words just made of two letters and remove that
for token in filtered_text:
if token == "":
filtered_text.remove(token)
filtered_text = ' '.join([word for word in filtered_text])
clean_text = remove_emojis(filtered_text)
# Removing punctuations in string
# Using loop + punctuation string
for ele in clean_text:
if ele in punctuations:
clean_text = clean_text.replace(ele, "")
# Removing small words with length less than 3
clean_text = ' '.join([t for t in clean_text.split() if len(t)>=3])
cleaned_reviews.append(clean_text)
return cleaned_reviews
# In[81]:
from nltk.corpus import stopwords
stopwords = stopwords.words("english")
len(stopwords)
# #### Call the preprocess function and pass the text string to clean data
# In[82]:
clean_reviews = preprocess(product_review, stopwords)
clean_reviews
# #### Stemming and Lemmatization
# In[83]:
wn_lem = nltk.wordnet.WordNetLemmatizer()
stemmer = nltk.stem.PorterStemmer()
def lemmatization(reviews):
lemmatized_reviews = []
for review in reviews:
# Tokenization
tokens = word_tokenize(review)
for index in range(len(tokens)):
tokens[index] = wn_lem.lemmatize(tokens[index])
tokens[index] = stemmer.stem(tokens[index])
lemmatized = ' '.join([token for token in tokens])
lemmatized_reviews.append(lemmatized)
return lemmatized_reviews
# In[84]:
clean_reviews = lemmatization(clean_reviews)
# 5 reviews from the list
for index in range(5):
print(f"{index+1}) {clean_reviews[index]}\n")
# ### Frequencies
# In[85]:
from collections import Counter
frequencies = Counter(' '.join([review for review in clean_reviews]).split())
frequencies.most_common(10)
# In[86]:
# Words with the lowest frequency, i.e. used only once
singletons = [k for k, v in frequencies.items() if v == 1]
singletons[0:10]
# In[87]:
print(f"Total words used once are {len(singletons)} out of {len(frequencies)}") # 993 words that have been used only once
# In[88]:
# This function will remove words with less frequencies
def remove_useless_words(reviews, useless_words):
filtered_reviews = []
for single_review in reviews:
tokens = word_tokenize(single_review)
usefull_text = [word for word in tokens if word not in useless_words]
usefull_text = ' '.join([word for word in usefull_text])
filtered_reviews.append(usefull_text)
return filtered_reviews
# In[89]:
# Store a copy so we not need to go back for any mistake
clean_reviews_copy = clean_reviews
# In[90]:
clean_reviews = remove_useless_words(clean_reviews, singletons)
# 5 reviews from the list
for index in range(5):
print(f"{index+1}) {clean_reviews[index]}\n")
# In[91]:
# count vectoriser tells the frequency of a word.
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(min_df = 1, max_df = 0.9)
X = vectorizer.fit_transform(clean_reviews)
word_freq_df = pd.DataFrame({'term': vectorizer.get_feature_names(), 'occurrences':np.asarray(X.sum(axis=0)).ravel().tolist()})
word_freq_df['frequency'] = word_freq_df['occurrences']/np.sum(word_freq_df['occurrences'])
# In[92]:
word_freq_df = word_freq_df.sort_values(by="occurrences", ascending = False)
word_freq_df.head()
# #### TfidfVectorizer
# In[93]:
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(stop_words='english', max_df = 0.5, smooth_idf=True)
doc_vec = vectorizer.fit_transform(clean_reviews)
names_features = vectorizer.get_feature_names()
dense = doc_vec.todense()
denselist = dense.tolist()
df = pd.DataFrame(denselist, columns = names_features)
df.head()
# # N-gram
# In[94]:
#Bi-gram
def get_top_n2_words(corpus, n=None):
vec1 = CountVectorizer(ngram_range=(2,2), #for tri-gram, put ngram_range=(3,3)
max_features=2000).fit(corpus)
bag_of_words = vec1.transform(corpus)
sum_words = bag_of_words.sum(axis=0)
words_freq = [(word, sum_words[0, idx]) for word, idx in
vec1.vocabulary_.items()]
words_freq =sorted(words_freq, key = lambda x: x[1],
reverse=True)
return words_freq[:n]
# In[95]:
top2_words = get_top_n2_words(clean_reviews, n=200) #top 200
top2_df = pd.DataFrame(top2_words)
top2_df.columns=["Bi-gram", "Freq"]
top2_df.head()
# In[96]:
#Bi-gram plot
import matplotlib.pyplot as plt
import seaborn as sns
top20_bigram = top2_df.iloc[0:20,:]
fig = plt.figure(figsize = (10, 5))
plot=sns.barplot(x=top20_bigram["Bi-gram"],y=top20_bigram["Freq"])
plot.set_xticklabels(rotation=45,labels = top20_bigram["Bi-gram"])
# In[97]:
#Tri-gram
def get_top_n3_words(corpus, n=None):
vec1 = CountVectorizer(ngram_range=(3,3),
max_features=2000).fit(corpus)
bag_of_words = vec1.transform(corpus)
sum_words = bag_of_words.sum(axis=0)
words_freq = [(word, sum_words[0, idx]) for word, idx in
vec1.vocabulary_.items()]
words_freq =sorted(words_freq, key = lambda x: x[1],
reverse=True)
return words_freq[:n]
# In[98]:
top3_words = get_top_n3_words(clean_reviews, n=200)
top3_df = pd.DataFrame(top3_words)
top3_df.columns=["Tri-gram", "Freq"]
# In[99]:
top3_df
# In[100]:
#Tri-gram plot
import seaborn as sns
top20_trigram = top3_df.iloc[0:20,:]
fig = plt.figure(figsize = (10, 5))
plot=sns.barplot(x=top20_trigram["Tri-gram"],y=top20_trigram["Freq"])
plot.set_xticklabels(rotation=45,labels = top20_trigram["Tri-gram"])
# # WordCloud
# In[101]:
string_Total = " ".join(clean_reviews)
# In[102]:
#wordcloud for entire corpus
plt.figure(figsize=(20, 20))
from wordcloud import WordCloud
wordcloud_stw = WordCloud(
background_color= 'black',
width = 1800,
height = 1500
).generate(string_Total)
plt.imshow(wordcloud_stw)
plt.axis("off")
plt.show()
# #### Singularity and Polarity using the textblob
# In[103]:
from textblob import TextBlob
# In[104]:
# Get the subjectivity of each review
def getSubjectivity(text):
    return TextBlob(text).sentiment.subjectivity

# Get the polarity of each review
def getPolarity(text):
    return TextBlob(text).sentiment.polarity
# In[105]:
sentiment_df = pd.DataFrame(clean_reviews, columns=["reviews"])
# In[106]:
sentiment_df["Subjectivity"] = sentiment_df["reviews"].apply(getSubjectivity)
sentiment_df["Polarity"] = sentiment_df["reviews"].apply(getPolarity)
# In[107]:
sentiment_df.head()
# In[108]:
# Function to label the sentiment from the polarity score
def getAnalysis(score):
if score < 0:
return "Negative"
elif score == 0:
return "Neutral"
else:
return "Positive"
# In[109]:
sentiment_df["Analysis"] = sentiment_df["Polarity"].apply(getAnalysis)
sentiment_df.head()
# In[110]:
plt.figure(figsize=(3, 6))
sns.countplot(sentiment_df["Analysis"])
# In[111]:
# All Positive Reviews
pos_rvs = sentiment_df[sentiment_df["Analysis"] == "Positive"].sort_values(by = ["Polarity"])
print("All Positive Reviews are: \n")
for index in range(pos_rvs.shape[0]):
print(f"{index + 1} ) {pos_rvs.iloc[index, 0]} \n")
# In[112]:
# All Negative Reviews
neg_rvs = sentiment_df[sentiment_df["Analysis"] == "Negative"].sort_values(by = ["Polarity"])
print("All Negative Reviews are: \n")
for index in range(neg_rvs.shape[0]):
print(f"{index + 1} ) {neg_rvs.iloc[index, 0]} \n")
# In[113]:
token_reviews = []
for review in clean_reviews:
token_reviews.append(word_tokenize(review))
dictionary = corpora.Dictionary(token_reviews)
dictionary.items()
# In[114]:
dictionary = corpora.Dictionary(token_reviews)
for key in dictionary:
print(key, dictionary[key])
# In[115]:
corpus = [dictionary.doc2bow(review) for review in token_reviews]
corpus
# In[116]:
clean_reviews[200]
# In[117]:
corpus[200]
# ### Building a Tfidf model
# In[118]:
tfidf_model = models.TfidfModel(corpus)
corpus_tfidf = tfidf_model[corpus]
corpus_tfidf
# ### LSI Model (Latent Semantic Indexing)
# In[119]:
from gensim.models.lsimodel import LsiModel
from gensim import similarities
# In[120]:
lsi_model = LsiModel(corpus = corpus_tfidf, id2word = dictionary, num_topics = 400)
index = similarities.MatrixSimilarity(lsi_model[corpus])
# ### The function will return 10 similar reviews to a given review
# In[121]:
def text_lsi(new_text, num = 10):
text_tokens = word_tokenize(new_text)
new_vec = dictionary.doc2bow(text_tokens)
vec_lsi = lsi_model[new_vec]
similars = index[vec_lsi]
similars = sorted(enumerate(similars), key = lambda item: -item[1])
return [(s, clean_reviews[s[0]]) for s in similars[:num]]
# In[122]:
clean_reviews[100]
# In[123]:
text_lsi(clean_reviews[100])
# # ML Algorithm
# In[124]:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(reviews['review_text'], reviews['rating'], test_size=0.1, random_state=0)
print('Load %d training examples and %d validation examples. \n' %(X_train.shape[0],X_test.shape[0]))
print('Show a review in the training set : \n', X_train.iloc[10])
X_train,y_train
# In[125]:
def cleanText(raw_text, remove_stopwords=False, stemming=False, split_text=False):
    '''
    Convert a raw review to a cleaned review.
    '''
    text = BeautifulSoup(raw_text, 'html.parser').get_text()  # strip HTML tags
    letters_only = re.sub("[^a-zA-Z]", " ", text)  # keep letters only
    words = letters_only.lower().split()
    if remove_stopwords:
        stops = set(stopwords.words("english"))
        words = [w for w in words if w not in stops]
    if stemming:
        stemmer = SnowballStemmer('english')
        words = [stemmer.stem(w) for w in words]
    if split_text:
        return words
    return " ".join(words)
# In[126]:
X_train_cleaned = []
X_test_cleaned = []
for d in X_train:
    X_train_cleaned.append(cleanText(d))
print('Show a cleaned review in the training set : \n', X_train_cleaned[10])
for d in X_test:
    X_test_cleaned.append(cleanText(d))
# In[127]:
countVect = CountVectorizer()
X_train_countVect = countVect.fit_transform(X_train_cleaned)
mnb = MultinomialNB()
mnb.fit(X_train_countVect, y_train)
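# MultinomialNB is a natural fit for the integer term counts produced by
# CountVectorizer, which is why it is paired with counts here rather than tf-idf.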
# In[128]:
def modelEvaluation(predictions):
    print("\nAccuracy {:.4f}".format(accuracy_score(y_test, predictions)))
    print("\nClassification report : \n", metrics.classification_report(y_test, predictions))
# In[129]:
predictions = mnb.predict(countVect.transform(X_test_cleaned))
modelEvaluation(predictions)
# In[130]:
tfidf = TfidfVectorizer(min_df=5)
# Fit on the cleaned training text so it matches the cleaned test text used below
X_train_tfidf = tfidf.fit_transform(X_train_cleaned)
# Logistic Regression
lr = LogisticRegression()
lr.fit(X_train_tfidf, y_train)
# In[131]:
feature_names = np.array(tfidf.get_feature_names())  # get_feature_names_out() on scikit-learn >= 1.0
sorted_coef_index = lr.coef_[0].argsort()
print('\nTop 10 features with smallest coefficients :\n{}\n'.format(feature_names[sorted_coef_index[:10]]))
print('Top 10 features with largest coefficients : \n{}'.format(feature_names[sorted_coef_index[:-11:-1]]))
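# With a multi-level 'rating' target, lr.coef_[0] describes only the first
# class: its largest coefficients are the words that most push a review toward
# that class, and the smallest ones the words that push away from it.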
# In[132]:
predictions = lr.predict(tfidf.transform(X_test_cleaned))
modelEvaluation(predictions)
# In[ ]:
| 22.739756 | 177 | 0.665774 | 2,755 | 20,534 | 4.817423 | 0.221416 | 0.019892 | 0.005274 | 0.006781 | 0.315778 | 0.272152 | 0.239075 | 0.215491 | 0.191832 | 0.171941 | 0 | 0.028703 | 0.207656 | 20,534 | 902 | 178 | 22.764967 | 0.786908 | 0.148437 | 0 | 0.274112 | 0 | 0.005076 | 0.113331 | 0.006523 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.121827 | null | null | 0.058376 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e24dc2e412714a0bc17cc1fafa7639e2e6028663 | 11,524 | py | Python | reliefparser/models/pointer_net.py | XuezheMax/ReLiefParser | 4ffb2495002809de70809689b84d80d2a59cd2ac | [
"MIT"
] | 6 | 2016-11-02T20:28:01.000Z | 2018-06-25T03:37:25.000Z | reliefparser/models/pointer_net.py | XuezheMax/ReLiefParser | 4ffb2495002809de70809689b84d80d2a59cd2ac | [
"MIT"
] | null | null | null | reliefparser/models/pointer_net.py | XuezheMax/ReLiefParser | 4ffb2495002809de70809689b84d80d2a59cd2ac | [
"MIT"
] | null | null | null | import numpy as np
import tensorflow as tf
from encoder import Encoder
from decoder import Decoder, TreeDecoder
import bisect
from time import time
class PointerNet(object):
    def __init__(self, vsize, esize, hsize, asize, buckets, **kwargs):
        super(PointerNet, self).__init__()
        self.name = kwargs.get('name', self.__class__.__name__)
        self.scope = kwargs.get('scope', self.name)

        self.enc_vsize = vsize
        self.enc_esize = esize
        self.enc_hsize = hsize

        self.dec_msize = self.enc_hsize * 2  # concatenation of bidirectional RNN states
        self.dec_isize = self.enc_hsize * 2  # concatenation of bidirectional RNN states
        self.dec_hsize = hsize
        self.dec_asize = asize

        self.buckets = buckets
        self.max_len = self.buckets[-1]

        self.max_grad_norm = kwargs.get('max_grad_norm', 100)
        self.optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)
        # self.optimizer = tf.train.GradientDescentOptimizer(learning_rate=1e-2)

        self.num_layer = kwargs.get('num_layer', 1)
        self.rnn_class = kwargs.get('rnn_class', tf.nn.rnn_cell.BasicLSTMCell)
        # self.rnn_class = kwargs.get('rnn_class', tf.nn.rnn_cell.GRUCell)

        self.encoder = Encoder(self.enc_vsize, self.enc_esize, self.enc_hsize,
                               rnn_class=self.rnn_class, num_layer=self.num_layer)
        if kwargs.get('tree_decoder', False):
            self.decoder = TreeDecoder(self.dec_isize, self.dec_hsize, self.dec_msize, self.dec_asize, self.max_len,
                                       rnn_class=self.rnn_class, num_layer=self.num_layer, epsilon=1.0)
        else:
            self.decoder = Decoder(self.dec_isize, self.dec_hsize, self.dec_msize, self.dec_asize, self.max_len,
                                   rnn_class=self.rnn_class, num_layer=self.num_layer, epsilon=1.0)

        self.baselines = []
        self.bl_ratio = kwargs.get('bl_ratio', 0.95)
        for i in range(self.max_len):
            self.baselines.append(tf.Variable(0.0, trainable=False))

    def __call__(self, enc_input, dec_input_indices, valid_indices, left_indices, right_indices, values, valid_masks=None):
        batch_size = tf.shape(enc_input)[0]

        # forward computation graph
        with tf.variable_scope(self.scope):
            # encoder output
            enc_memory, enc_final_state_fw, _ = self.encoder(enc_input)

            # decoder
            dec_hiddens, dec_actions, dec_act_logps = self.decoder(
                enc_memory, dec_input_indices,
                valid_indices, left_indices, right_indices,
                valid_masks, init_state=enc_final_state_fw)

        # cost
        costs = []
        update_ops = []
        for step_idx, (act_logp, value, baseline) in enumerate(zip(dec_act_logps, values, self.baselines)):
            # costs.append(-tf.reduce_mean(act_logp * (value - baseline)))
            new_baseline = self.bl_ratio * baseline + (1 - self.bl_ratio) * tf.reduce_mean(value)
            costs.append(-tf.reduce_mean(act_logp * value))
            update_ops.append(tf.assign(baseline, new_baseline))

        # gradient computation graph
        self.params = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=self.scope)
        train_ops = []
        for limit in self.buckets:
            print('0 ~ %d' % (limit - 1))
            grad_params = tf.gradients(tf.reduce_sum(tf.pack(costs[:limit])), self.params)
            if self.max_grad_norm is not None:
                clipped_gradients, norm = tf.clip_by_global_norm(grad_params, self.max_grad_norm)
            else:
                clipped_gradients = grad_params
            train_op = self.optimizer.apply_gradients(
                zip(clipped_gradients, self.params))
            with tf.control_dependencies([train_op] + update_ops[:limit]):
                # train_ops.append(tf.Print(tf.constant(1.), [norm]))
                train_ops.append(tf.constant(1.))

        return dec_hiddens, dec_actions, train_ops
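# The objective above is REINFORCE-style: minus the action log-probability
# scaled by the received value, with an exponential-moving-average baseline
# tracked per step in self.baselines (the baseline-subtracted variant of the
# cost is left commented out).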
#### test script
if __name__ == '__main__':
    # hyper-parameters
    vsize = 1000
    esize = 256
    hsize = 256
    asize = 256
    isize = 333
    buckets = [10]  # , 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120]
    max_len = buckets[-1]

    ####################
    # symbolic section
    ####################

    # model initialization
    pointer_net = PointerNet(vsize, esize, hsize, asize, buckets, dec_isize=isize)

    # placeholders
    enc_input = tf.placeholder(dtype=tf.int32, shape=[None, None], name='enc_input')
    input_indices, rewards = [], []
    valid_indices, left_indices, right_indices = [], [], []
    for i in range(max_len):
        rewards.append(tf.placeholder(dtype=tf.float32, name='reward_%d' % i))
        input_indices.append(tf.placeholder(dtype=tf.int32, shape=[None, 2], name='input_index_%d' % i))
        valid_indices.append(tf.placeholder(dtype=tf.int32, name='valid_index_%d' % i))
        left_indices.append(tf.placeholder(dtype=tf.int32, name='left_index_%d' % i))
        right_indices.append(tf.placeholder(dtype=tf.int32, name='right_index_%d' % i))

    # build computation graph
    dec_hiddens, dec_actions, train_ops = pointer_net(enc_input, input_indices, valid_indices, left_indices, right_indices, rewards)

    ####################
    # run-time section
    ####################
    lsize = 10
    bsize = 32

    all_feeds = []
    all_feeds.extend(rewards)
    all_feeds.extend(input_indices)
    all_feeds.extend(valid_indices)
    all_feeds.extend(right_indices)
    all_feeds.extend(left_indices)
    all_feeds.append(enc_input)

    all_fetches = []
    all_fetches.extend(dec_hiddens)
    all_fetches.extend(dec_actions)

    # get rewards and next indices from the (stub) environment
    def take_action(action):
        input_idx = np.repeat(np.arange(2).astype(np.int32).reshape(1, -1), bsize, axis=0)
        valid_idx = np.repeat(np.arange(lsize).astype(np.int32).reshape(1, -1), bsize, axis=0)
        left_idx = np.repeat(np.arange(lsize).astype(np.int32).reshape(1, -1), bsize, axis=0)
        right_idx = np.repeat(np.arange(lsize).astype(np.int32).reshape(1, -1), bsize, axis=0)
        if action is None:
            reward = None
        else:
            reward = np.ones(action.shape)
        return reward, input_idx, valid_idx, left_idx, right_idx

    with tf.Session() as sess:
        tf.initialize_all_variables().run()
        enc_input_np = np.random.randint(0, vsize, size=[bsize, lsize]).astype(np.int32)
        _, init_inidx_np, init_vdidx_np, init_ltidx_np, init_rtidx_np = take_action(None)

        bucket_id = bisect.bisect_left(buckets, lsize)
        train_op = train_ops[bucket_id]
        print(train_op)
        # bucket_id = bisect.bisect_left(buckets, lsize)
        # grad_w = grad_params_buckets[bucket_id]

        ##############################
        input_indices_np, valid_indices_np, left_indices_np, right_indices_np = [], [], [], []
        hiddens_np, actions_np, rewards_np = [], [], []
        input_indices_np.append(init_inidx_np)
        valid_indices_np.append(init_vdidx_np)
        left_indices_np.append(init_ltidx_np)
        right_indices_np.append(init_rtidx_np)

        # t = time()
        # feed_dict = {enc_input: enc_input_np}
        # for i in range(lsize):
        #     # t_i = time()
        #     feed_dict.update({input_indices[i]: input_indices_np[i],
        #                       valid_indices[i]: valid_indices_np[i],
        #                       left_indices[i]: left_indices_np[i],
        #                       right_indices[i]: right_indices_np[i]})
        #     h_i_np, a_i_np = sess.run([dec_hiddens[i], dec_actions[i]], feed_dict=feed_dict)
        #     hiddens_np.append(h_i_np)
        #     actions_np.append(a_i_np)
        #     reward_i, input_idx_np, valid_idx_np, left_idx_np, right_idx_np = take_action(actions_np[i])
        #     rewards_np.append(reward_i)
        #     input_indices_np.append(input_idx_np)
        #     valid_indices_np.append(valid_idx_np)
        #     left_indices_np.append(left_idx_np)
        #     right_indices_np.append(right_idx_np)
        #     # print i, time() - t_i
        # print time() - t
        # t = time()
        # # feed_dict.update({go: go_np for go, go_np in zip(rewards, rewards_np)})
        # # grad_w_np_2 = sess.run(grad_w, feed_dict=feed_dict)
        # sess.run(train_op, feed_dict=feed_dict)
        # print time() - t
        ##############################

        ##############################
        input_indices_np, valid_indices_np, left_indices_np, right_indices_np = [], [], [], []
        hiddens_np, actions_np, rewards_np = [], [], []
        input_indices_np.append(init_inidx_np)
        valid_indices_np.append(init_vdidx_np)
        left_indices_np.append(init_ltidx_np)
        right_indices_np.append(init_rtidx_np)

        t = time()
        # handle = sess.partial_run_setup(all_fetches + grad_w, all_feeds)
        handle = sess.partial_run_setup(all_fetches + [train_op], all_feeds)
        for i in range(lsize):
            # t_i = time()
            feed_dict = {input_indices[i]: input_indices_np[i],
                         valid_indices[i]: valid_indices_np[i],
                         left_indices[i]: left_indices_np[i],
                         right_indices[i]: right_indices_np[i]}
            if i == 0:
                feed_dict.update({enc_input: enc_input_np})
            h_i_np, a_i_np = sess.partial_run(handle, [dec_hiddens[i], dec_actions[i]], feed_dict=feed_dict)
            hiddens_np.append(h_i_np)
            actions_np.append(a_i_np)
            reward_i, input_idx_np, valid_idx_np, left_idx_np, right_idx_np = take_action(actions_np[i])
            rewards_np.append(reward_i)
            input_indices_np.append(input_idx_np)
            valid_indices_np.append(valid_idx_np)
            left_indices_np.append(left_idx_np)
            right_indices_np.append(right_idx_np)
            # print i, time() - t_i
        print(time() - t)

        p_before = sess.run(pointer_net.params[0])
        t = time()
        # grad_w_np_1 = sess.partial_run(handle, grad_w, feed_dict={go: go_np for go, go_np in zip(rewards, rewards_np)})
        sess.partial_run(handle, train_op, feed_dict={go: go_np for go, go_np in zip(rewards, rewards_np)})
        print(time() - t)
        p_after = sess.run(pointer_net.params[0])
        print(np.allclose(p_before, p_after))
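        # Sanity check: if the train op actually ran, the first parameter
        # tensor should have changed, so this is expected to print False.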
        # # print type(grad_w_np_1), type(grad_w_np_2)
        # for g1, g2 in zip(grad_w_np_1, grad_w_np_2):
        #     if type(g1) != type(g2):
        #         print 'diff in type', type(g1), type(g2)
        #         continue
        #     elif not isinstance(g1, np.ndarray):
        #         print 'not numpy array', type(g1), type(g2)
        #         continue
        #     if not np.allclose(g1, g2):
        #         print 'g1', np.max(g1), np.min(g1)
        #         print 'g2', np.max(g2), np.min(g2)
        #     else:
        #         print 'Pass: g1 = g2', g1.shape, g2.shape
        #     if np.allclose(g1, np.zeros_like(g1)):
        #         print 'Fail: g1 != 0', np.max(g1), np.min(g1)
        #     if np.allclose(g2, np.zeros_like(g2)):
        #         print 'Fail: g2 != 0', np.max(g2), np.min(g2)
| 41.453237 | 132 | 0.596234 | 1,533 | 11,524 | 4.172211 | 0.148728 | 0.045028 | 0.037523 | 0.023765 | 0.483583 | 0.447936 | 0.417448 | 0.378831 | 0.340838 | 0.3202 | 0 | 0.018387 | 0.277942 | 11,524 | 277 | 133 | 41.602888 | 0.75027 | 0.224488 | 0 | 0.14094 | 0 | 0 | 0.016916 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.040268 | null | null | 0.033557 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e2609871df4431162077f02f9f954e5270a79a11 | 1,608 | py | Python | emLam/corpus/component.py | DavidNemeskey/emLam | 89359e7eee5b7b9c596dec8ab6654591d4039e3e | [
"MIT"
] | 2 | 2018-03-31T10:00:11.000Z | 2018-09-15T19:38:19.000Z | emLam/corpus/component.py | DavidNemeskey/emLam | 89359e7eee5b7b9c596dec8ab6654591d4039e3e | [
"MIT"
] | 16 | 2017-02-28T13:58:28.000Z | 2018-03-14T11:42:01.000Z | emLam/corpus/component.py | dlt-rilmta/emLam | 2b7274dcda4080445698e10b34a3db2e2eed5112 | [
"MIT"
] | 1 | 2017-01-30T15:06:37.000Z | 2017-01-30T15:06:37.000Z | #!/usr/bin/env python3
"""An instantiable component. See the docstring for the class."""
from __future__ import absolute_import, division, print_function
from future.utils import with_metaclass
import logging
import inspect
class NamedClass(type):
    """
    A read-only name property for classes. See
    http://stackoverflow.com/questions/3203286/how-to-create-a-read-only-class-property-in-python
    """
    @property
    def name(cls):
        return getattr(cls, 'NAME', None)

    @property
    def description(cls):
        return getattr(cls, 'DESCRIPTION', None)


class Component(with_metaclass(NamedClass, object)):
    """
    Base class for corpus and preprocessor objects. All corpus and preprocessor
    classes must be subclasses of Component. Also, multiple inheritance is
    discouraged, as it may break some parts of the code.
    """
    def __init__(self):
        self.logger = logging.getLogger(inspect.getmodule(self).__name__)
        self.logger.setLevel(self.logger.parent.level)

    @classmethod
    def instantiate(cls, process_id=0, **kwargs):
        """
        Instantiates the class from keyword arguments. The process_id (not a
        real pid, but an ordinal starting from 0) is there so that components
        that use external resources can "plan" accordingly.
        """
        # Note: inspect.getargspec() was removed in Python 3.11;
        # inspect.getfullargspec() is the drop-in replacement there.
        argspec = inspect.getargspec(cls.__init__).args
        component_args = {k: kwargs[k] for k in argspec[1:] if k in kwargs}
        logging.getLogger(cls.__module__).debug(
            'Instantiating with parameters {}'.format(component_args))
        return cls(**component_args)
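# Minimal usage sketch (hypothetical subclass, not part of this module):
#
#     class MyPreprocessor(Component):
#         NAME = 'my_preprocessor'
#
#         def __init__(self, threshold=0.5):
#             super(MyPreprocessor, self).__init__()
#             self.threshold = threshold
#
#     # instantiate() keeps only the kwargs named in __init__ ('threshold')
#     # and silently drops the rest ('unused'):
#     p = MyPreprocessor.instantiate(process_id=0, threshold=0.9, unused=1)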
| 35.733333 | 97 | 0.69403 | 204 | 1,608 | 5.328431 | 0.563725 | 0.027599 | 0.022079 | 0.034959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008661 | 0.210199 | 1,608 | 44 | 98 | 36.545455 | 0.847244 | 0.378731 | 0 | 0.090909 | 0 | 0 | 0.051535 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.181818 | 0.090909 | 0.590909 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
e26904d170e4e8c6e1dcb9ac5ffac8b016dc97a4 | 732 | py | Python | scripts/serializers.py | sul-cidr/scriptchart-backend | 38bb4139d77d683d85f31839a1a06096fe2fabbc | [
"MIT"
] | 1 | 2019-06-05T23:05:32.000Z | 2019-06-05T23:05:32.000Z | scripts/serializers.py | sul-cidr/scriptchart-backend | 38bb4139d77d683d85f31839a1a06096fe2fabbc | [
"MIT"
] | 42 | 2019-01-24T23:51:42.000Z | 2021-09-08T01:04:45.000Z | scripts/serializers.py | sul-cidr/scriptchart-backend | 38bb4139d77d683d85f31839a1a06096fe2fabbc | [
"MIT"
] | 1 | 2019-08-05T12:47:57.000Z | 2019-08-05T12:47:57.000Z | from rest_framework import serializers
from scripts.models import Manuscript
from scripts.models import Page
from scripts.models import Coordinates
class ManuscriptSerializer(serializers.ModelSerializer):
    class Meta:
        model = Manuscript
        fields = ('id', 'slug', 'shelfmark', 'date', 'manifest')


class PageSerializer(serializers.ModelSerializer):
    class Meta:
        model = Page
        fields = ('id', 'manuscript', 'url', 'height', 'width')


class CoordinatesSerializer(serializers.ModelSerializer):
    page = PageSerializer(read_only=True)

    class Meta:
        model = Coordinates
        fields = ('id', 'page', 'letter', 'top', 'left', 'width', 'height',
                  'binary_url', 'page')
| 28.153846 | 75 | 0.669399 | 72 | 732 | 6.763889 | 0.458333 | 0.067762 | 0.104723 | 0.141684 | 0.164271 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210383 | 732 | 25 | 76 | 29.28 | 0.842561 | 0 | 0 | 0.166667 | 0 | 0 | 0.132514 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.611111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
e26b0de3f1cabd89fc29afca7fa65f5643740f6e | 750 | py | Python | test/generators/utils.py | yanqd0/LeetCode | 8c669b954f4e4ae5e31a14727bf4ceedc58ea363 | [
"MIT"
] | null | null | null | test/generators/utils.py | yanqd0/LeetCode | 8c669b954f4e4ae5e31a14727bf4ceedc58ea363 | [
"MIT"
] | 3 | 2019-08-29T02:33:12.000Z | 2019-08-29T02:34:23.000Z | test/generators/utils.py | yanqd0/LeetCode | 8c669b954f4e4ae5e31a14727bf4ceedc58ea363 | [
"MIT"
] | null | null | null | import csv
import re
from os import makedirs
from os.path import abspath, basename, dirname, isdir, join
def generate_csv(path, fields, rows, quote_empty=False):
    path = abspath(path)
    name = basename(path)
    name = re.sub('py$', 'csv', name)
    cases = join(dirname(dirname(path)), 'cases')
    if not isdir(cases):
        makedirs(cases)
    csv_path = join(cases, name)
    with open(csv_path, 'w') as fobj:
        writer = csv.DictWriter(fobj, fieldnames=fields, lineterminator='\n')
        writer.writeheader()
    with open(csv_path, 'a') as fobj:
        quoting = csv.QUOTE_NONNUMERIC if quote_empty else csv.QUOTE_MINIMAL
        writer = csv.writer(fobj, quoting=quoting, lineterminator='\n')
        writer.writerows(rows)
| 31.25 | 77 | 0.669333 | 101 | 750 | 4.891089 | 0.415842 | 0.05668 | 0.044534 | 0.060729 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.209333 | 750 | 23 | 78 | 32.608696 | 0.833052 | 0 | 0 | 0 | 1 | 0 | 0.022667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.210526 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e27110053843a24a09a5e561022d265c1c30eb63 | 637 | py | Python | moai/utils/arguments/__init__.py | tzole1155/moai | d1afb3aaf8ddcd7a1c98b84d6365afb846ae3180 | [
"Apache-2.0"
] | null | null | null | moai/utils/arguments/__init__.py | tzole1155/moai | d1afb3aaf8ddcd7a1c98b84d6365afb846ae3180 | [
"Apache-2.0"
] | null | null | null | moai/utils/arguments/__init__.py | tzole1155/moai | d1afb3aaf8ddcd7a1c98b84d6365afb846ae3180 | [
"Apache-2.0"
] | null | null | null | from moai.utils.arguments.common import (
    assert_numeric,
    assert_non_negative,
    assert_negative,
)

from moai.utils.arguments.choices import (
    assert_choices,
    ensure_choices,
)

from moai.utils.arguments.list import (
    ensure_numeric_list,
    ensure_string_list,
    assert_sequence_size,
)

from moai.utils.arguments.path import (
    assert_path,
    ensure_path,
)

__all__ = [
    "assert_numeric",
    "ensure_numeric_list",
    "ensure_string_list",
    "assert_choices",
    "ensure_choices",
    "assert_sequence_size",
    "assert_non_negative",
    "assert_negative",
    "assert_path",
    "ensure_path",
] | 20.548387 | 42 | 0.706436 | 73 | 637 | 5.726027 | 0.246575 | 0.076555 | 0.124402 | 0.210526 | 0.334928 | 0.186603 | 0.186603 | 0 | 0 | 0 | 0 | 0 | 0.194662 | 637 | 31 | 43 | 20.548387 | 0.814815 | 0 | 0 | 0 | 0 | 0 | 0.242947 | 0 | 0 | 0 | 0 | 0 | 0.4 | 1 | 0 | false | 0 | 0.133333 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e27404f1e9416d7b05bddb353f28ac49feb953fb | 195 | py | Python | main.py | NawrasseDahman/Qr-Code-Generator | 0f1bb8b0979f887c980cec3a241457176515b1b9 | [
"MIT"
] | 1 | 2021-12-31T07:12:09.000Z | 2021-12-31T07:12:09.000Z | main.py | NawrasseDahman/Qr-Code-Generator | 0f1bb8b0979f887c980cec3a241457176515b1b9 | [
"MIT"
] | null | null | null | main.py | NawrasseDahman/Qr-Code-Generator | 0f1bb8b0979f887c980cec3a241457176515b1b9 | [
"MIT"
] | null | null | null | import qrcode
# data example
data = "www.google.com"
# file name
file_name = "qrcode.png"
# generate qr code
img = qrcode.make(data=data)
# save generated qr code as img
img.save(file_name)
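# Note: qrcode.make() returns a PIL image, so Pillow is needed alongside the
# qrcode package (pip install qrcode[pil]).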
| 13 | 31 | 0.717949 | 32 | 195 | 4.3125 | 0.5625 | 0.173913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.174359 | 195 | 14 | 32 | 13.928571 | 0.857143 | 0.353846 | 0 | 0 | 1 | 0 | 0.198347 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e277978869ba969473b22353a021df73d2ed4b99 | 1,381 | py | Python | backend/ReceiptProcessor/data_generator.py | shrey-bansal/ABINBEV | 09d0eaca6e7edf1820aa79b88a56d1ed39b6300f | [
"Apache-2.0"
] | 1 | 2020-08-17T01:26:27.000Z | 2020-08-17T01:26:27.000Z | backend/ReceiptProcessor/data_generator.py | shrey-bansal/ABINBEV | 09d0eaca6e7edf1820aa79b88a56d1ed39b6300f | [
"Apache-2.0"
] | 1 | 2020-10-20T01:40:24.000Z | 2020-11-05T17:38:53.000Z | backend/ReceiptProcessor/data_generator.py | shrey-bansal/ABINBEV | 09d0eaca6e7edf1820aa79b88a56d1ed39b6300f | [
"Apache-2.0"
] | 2 | 2021-12-14T16:57:58.000Z | 2021-12-23T11:51:10.000Z | import os
import cv2
from ReceiptGenerator.draw_receipt import create_crnn_sample
NUM_OF_TRAINING_IMAGES = 3000
NUM_OF_TEST_IMAGES = 1000
TEXT_TYPES = ['word', 'word_column', 'word_bracket', 'int', 'float', 'price_left', 'price_right', 'percentage']
# TEXT_TYPES = ['word']
with open('./ReceiptProcessor/training_images/Train/sample.txt', 'w') as input_file:
    for type in TEXT_TYPES:
        if not os.path.exists('./ReceiptProcessor/training_images/Train/{}'.format(type)):
            os.mkdir('./ReceiptProcessor/training_images/Train/{}'.format(type))
        for i in range(0, NUM_OF_TRAINING_IMAGES):
            img, label = create_crnn_sample(type)
            cv2.imwrite('./ReceiptProcessor/training_images/Train/{}/{}.jpg'.format(type, i), img)
            input_file.write('{}/{}.jpg {}\n'.format(type, i, label))

with open('./ReceiptProcessor/training_images/Test/sample.txt', 'w') as input_file:
    for type in TEXT_TYPES:
        if not os.path.exists('./ReceiptProcessor/training_images/Test/{}'.format(type)):
            os.mkdir('./ReceiptProcessor/training_images/Test/{}'.format(type))
        for i in range(0, NUM_OF_TEST_IMAGES):
            img, label = create_crnn_sample(type)
            cv2.imwrite('./ReceiptProcessor/training_images/Test/{}/{}.jpg'.format(type, i), img)
            input_file.write('{}/{}.jpg {}\n'.format(type, i, label))
| 47.62069 | 111 | 0.672701 | 181 | 1,381 | 4.917127 | 0.303867 | 0.157303 | 0.269663 | 0.157303 | 0.746067 | 0.660674 | 0.640449 | 0.534831 | 0.534831 | 0.474157 | 0 | 0.011265 | 0.164374 | 1,381 | 28 | 112 | 49.321429 | 0.759965 | 0.015206 | 0 | 0.272727 | 0 | 0 | 0.343152 | 0.27246 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.136364 | 0 | 0.136364 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e27f7aa8f3b09c4a3cfeb2e39b74ae35d8c6e4d4 | 870 | py | Python | instances/migrations/0001_initial.py | glzjin/webvirtcloud | ecaf11e02aeb57654257ed502d3da6fd8405f21b | [
"Apache-2.0"
] | 1 | 2020-11-06T00:50:06.000Z | 2020-11-06T00:50:06.000Z | instances/migrations/0001_initial.py | qmutz/webvirtcloud | 159e06221af435700047a8e5ababe758a12d7579 | [
"Apache-2.0"
] | null | null | null | instances/migrations/0001_initial.py | qmutz/webvirtcloud | 159e06221af435700047a8e5ababe758a12d7579 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 2.2.10 on 2020-01-28 07:01
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):

    initial = True

    dependencies = [
        ('computes', '0001_initial'),
    ]

    operations = [
        migrations.CreateModel(
            name='Instance',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=120)),
                ('uuid', models.CharField(max_length=36)),
                ('is_template', models.BooleanField(default=False)),
                ('created', models.DateField(auto_now_add=True)),
                ('compute', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='computes.Compute')),
            ],
        ),
    ]
| 31.071429 | 115 | 0.595402 | 91 | 870 | 5.582418 | 0.604396 | 0.047244 | 0.055118 | 0.086614 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039246 | 0.267816 | 870 | 27 | 116 | 32.222222 | 0.758242 | 0.052874 | 0 | 0 | 1 | 0 | 0.09854 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e2858d86f914bf75d274567c879ac15e007d7753 | 229 | py | Python | 2 semester/PP/9/Code/1.3.py | kurpenok/Labs | 069c92b7964a1445d093313b38ebdc56318d2a73 | [
"MIT"
] | null | null | null | 2 semester/PP/9/Code/1.3.py | kurpenok/Labs | 069c92b7964a1445d093313b38ebdc56318d2a73 | [
"MIT"
] | null | null | null | 2 semester/PP/9/Code/1.3.py | kurpenok/Labs | 069c92b7964a1445d093313b38ebdc56318d2a73 | [
"MIT"
] | null | null | null | sort = lambda array: [sublist for sublist in sorted(array, key=lambda x: x[1])]
if __name__ == "__main__":
    print(sort([
        ("English", 88),
        ("Social", 82),
        ("Science", 90),
        ("Math", 97)
]))
| 20.818182 | 79 | 0.50655 | 27 | 229 | 4 | 0.814815 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055901 | 0.296943 | 229 | 10 | 80 | 22.9 | 0.614907 | 0 | 0 | 0 | 0 | 0 | 0.140351 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.125 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e2867f314d7004069b6df31d48702399fd7727ef | 451 | py | Python | users/models.py | jannetasa/haravajarjestelma | 419f23656306d94ae4d9a8d3477a6325cc80b601 | [
"MIT"
] | null | null | null | users/models.py | jannetasa/haravajarjestelma | 419f23656306d94ae4d9a8d3477a6325cc80b601 | [
"MIT"
] | 79 | 2018-11-26T09:43:41.000Z | 2022-02-10T08:19:11.000Z | users/models.py | jannetasa/haravajarjestelma | 419f23656306d94ae4d9a8d3477a6325cc80b601 | [
"MIT"
] | 3 | 2018-11-27T08:08:22.000Z | 2022-03-25T08:30:34.000Z | from django.db import models
from django.utils.translation import ugettext_lazy as _
from helusers.models import AbstractUser
class User(AbstractUser):
    is_official = models.BooleanField(verbose_name=_("official"), default=False)

    class Meta:
        verbose_name = _("user")
        verbose_name_plural = _("users")
        ordering = ("id",)


def can_view_contract_zone_details(user):
    return user.is_authenticated and user.is_official
| 26.529412 | 80 | 0.738359 | 56 | 451 | 5.660714 | 0.625 | 0.104101 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.172949 | 451 | 16 | 81 | 28.1875 | 0.849866 | 0 | 0 | 0 | 0 | 0 | 0.042129 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.272727 | 0.090909 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
e287c088eb04f012860164c20f94da353ad49546 | 3,904 | py | Python | src/main/python/tranquilitybase/gcpdac/main/core/terraform/terraform_utils.py | tranquilitybase-io/tb-gcp-dac | 1d65afced1ab7427262dcdf98ee544370201439a | [
"Apache-2.0"
] | 2 | 2020-04-23T16:50:26.000Z | 2021-05-09T11:30:42.000Z | src/main/python/tranquilitybase/gcpdac/main/core/terraform/terraform_utils.py | tranquilitybase-io/tb-gcp-dac | 1d65afced1ab7427262dcdf98ee544370201439a | [
"Apache-2.0"
] | 156 | 2020-04-08T14:08:47.000Z | 2021-07-01T14:48:15.000Z | src/main/python/tranquilitybase/gcpdac/main/core/terraform/terraform_utils.py | tranquilitybase-io/tb-gcp-dac | 1d65afced1ab7427262dcdf98ee544370201439a | [
"Apache-2.0"
] | 2 | 2020-06-24T11:19:58.000Z | 2020-06-24T13:27:22.000Z | import time
import traceback
from python_terraform import Terraform
from src.main.python.tranquilitybase.gcpdac.configuration.helpers.eaglehelper import EagleConfigHelper
from src.main.python.tranquilitybase.gcpdac.configuration.helpers.envhelper import EnvHelper
from src.main.python.tranquilitybase.gcpdac.main.core.terraform.terraform_config import get_terraform_path
from src.main.python.tranquilitybase.lib.common.FileUtils import FileUtils
from src.main.python.tranquilitybase.lib.common.StringUtils import is_none_or_empty
# --- Logger ---
import inspect
from src.main.python.tranquilitybase.lib.common.local_logging import get_logger, get_frame_name
logger = get_logger(get_frame_name(inspect.currentframe()))
def validate_terraform_path():
terraform_source_path = get_terraform_path('folder_creation')
if not FileUtils.dir_exists(terraform_source_path):
raise Exception("terraform directory not found: " + terraform_source_path)
if EnvHelper.is_ide():
logger.warn("running in IDE skipping terraform validation")
return
tf = Terraform(working_dir=terraform_source_path)
terraform_plan(tf)
def validate_terraform_config():
ec_config = EagleConfigHelper.config_dict
terraform_state_bucket = ec_config['terraform_state_bucket']
tb_discriminator = ec_config['tb_discriminator']
if is_none_or_empty(terraform_state_bucket) or \
is_none_or_empty(tb_discriminator):
raise Exception("terraform value from ec_config found to be invalid")
def terraform_plan(tf: Terraform):
return_code, stdout, stderr = tf.plan(capture_output=True)
logger.debug('Terraform plan return code is {}'.format(return_code))
logger.debug('Terraform plan stdout is {}'.format(stdout))
logger.debug('Terraform plan stderr is {}'.format(stderr))
def terraform_init(backend_prefix, terraform_state_bucket, tf: Terraform):
return_code, stdout, stderr = tf.init(capture_output=True,
backend_config={'bucket': terraform_state_bucket,
'prefix': backend_prefix})
logger.debug('Terraform init return code is {}'.format(return_code))
logger.debug('Terraform init stdout is {}'.format(stdout))
logger.debug('Terraform init stderr is {}'.format(stderr))
def terraform_apply(env_data, tf: Terraform):
retry_count = 0
return_code = 0
while retry_count < 5:
logger.debug("Try {}".format(retry_count))
return_code, stdout, stderr = tf.apply(skip_plan=True, var_file=env_data, capture_output=True)
logger.debug('Terraform apply return code is {}'.format(return_code))
logger.debug('Terraform apply stdout is {}'.format(stdout))
logger.debug("Terraform apply stderr is {}".format(stderr))
retry_count += 1
if return_code == 0:
break
time.sleep(30)
if return_code == 0:
show_return_code, tf_state, stdout = tf.show(json=True)
logger.debug('Terraform show return code is {}'.format(show_return_code))
logger.debug('Terraform show stdout is {}'.format(stdout))
tf_outputs = tf.output()
for output_value in tf_outputs:
logger.debug('Terraform output value is {}'.format(output_value))
else:
# TODO get output for errors
tf_state = {}
tf_outputs = {}
traceback.print_stack()
return {"tf_return_code": return_code, "tf_outputs": tf_outputs, "tf_state": tf_state}
def terraform_destroy(env_data, tf):
return_code, stdout, stderr = tf.destroy(var_file=env_data, capture_output=True)
logger.debug('Terraform destroy return code is {}'.format(return_code))
logger.debug('Terraform destroy stdout is {}'.format(stdout))
logger.debug('Terraform destroy stderr is {}'.format(stderr))
return {"tf_return_code": return_code}
| 41.094737 | 106 | 0.71542 | 496 | 3,904 | 5.405242 | 0.209677 | 0.082059 | 0.111899 | 0.038046 | 0.402089 | 0.357329 | 0.284595 | 0.152928 | 0.109661 | 0.038046 | 0 | 0.00251 | 0.183658 | 3,904 | 94 | 107 | 41.531915 | 0.83872 | 0.010502 | 0 | 0.028571 | 0 | 0 | 0.177461 | 0.005699 | 0 | 0 | 0 | 0.010638 | 0 | 1 | 0.085714 | false | 0 | 0.142857 | 0 | 0.271429 | 0.014286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e28e4be1e1115462a9f71610a6de2f1ea15e9d02 | 3,912 | py | Python | momentumnet-main/momentumnet/exact_rep_pytorch.py | ZhuFanCheng/Thesis | eba9a7567a5c254acb2e78fdac0cda7dddabb327 | [
"MIT"
] | null | null | null | momentumnet-main/momentumnet/exact_rep_pytorch.py | ZhuFanCheng/Thesis | eba9a7567a5c254acb2e78fdac0cda7dddabb327 | [
"MIT"
] | null | null | null | momentumnet-main/momentumnet/exact_rep_pytorch.py | ZhuFanCheng/Thesis | eba9a7567a5c254acb2e78fdac0cda7dddabb327 | [
"MIT"
] | null | null | null | # Authors: Michael Sander, Pierre Ablin
# License: MIT
"""
Original code from
Maclaurin, Dougal, David Duvenaud, and Ryan Adams.
"Gradient-based hyperparameter optimization through reversible learning."
International conference on machine learning. PMLR, 2015.
"""
import numpy as np
import torch
RADIX_SCALE = 2 ** 52
class TorchExactRep(object):
    def __init__(
        self,
        val,
        from_intrep=False,
        shape=None,
        device=None,
        from_representation=None,
    ):
        if from_representation is not None:
            intrep, store = from_representation
            self.intrep = intrep
            self.aux = BitStore(0, 0, store=store)
        else:
            if device is None:
                device = val.device.type
            if shape is not None:
                self.intrep = torch.zeros(
                    *shape, dtype=torch.long, device=device
                )
            else:
                shape = val.shape
                if from_intrep:
                    self.intrep = val
                else:
                    self.intrep = self.float_to_intrep(val)
            self.aux = BitStore(shape, device)

    def __imul__(self, a):
        self.mul(a)
        return self

    def __iadd__(self, a):
        self.add(a)
        return self

    def __isub__(self, a):
        self.sub(a)
        return self

    def __itruediv__(self, a):
        self.div(a)
        return self

    def add(self, A):
        """Reversible addition of vector or scalar A."""
        self.intrep += self.float_to_intrep(A)
        return self

    def sub(self, A):
        self.add(-A)
        return self

    def rational_mul(self, n, d):
        self.aux.push(self.intrep % d, d)  # Store remainder bits externally
        # self.intrep //= d  # Divide by denominator
        self.intrep = torch.div(self.intrep, d, rounding_mode="trunc")
        self.intrep *= n  # Multiply by numerator
        self.intrep += self.aux.pop(n)  # Pack bits into the remainder

    def mul(self, a):
        n, d = self.float_to_rational(a)
        self.rational_mul(n, d)
        return self

    def div(self, a):
        n, d = self.float_to_rational(a)
        self.rational_mul(d, n)
        return self

    def float_to_rational(self, a):
        d = 2 ** 16 // int(a + 1)
        n = int(a * d + 1)
        return n, d

    def float_to_intrep(self, x):
        if type(x) is torch.Tensor:
            return (x * RADIX_SCALE).long()
        return int(x * RADIX_SCALE)

    def __repr__(self):
        return repr(self.val)

    def n_max_iter(self, beta):
        d, n = self.float_to_rational(beta)
        return int((64 - np.log2(n)) / np.abs(np.log2(n) - np.log2(d)))

    @property
    def val(self):
        return self.intrep.float() / RADIX_SCALE

    def copy(self):
        v = TorchExactRep(self.val)
        v.intrep = torch.clone(self.intrep)
        v.aux.store = torch.clone(self.aux.store)
        return v

    def reset(self):
        self.intrep.fill_(0)
        self.aux.store.fill_(0)
class BitStore(object):
    """
    Efficiently stores information with non-integer number of bits (up to 16).
    """
    def __init__(self, shape, device, store=None):
        # Use an array of Python 'long' ints which conveniently grow
        # as large as necessary. It's about 50X slower though...
        if store is not None:
            self.store = store
        else:
            self.store = torch.zeros(shape, dtype=torch.long).to(device)

    def push(self, N, M):
        """Stores integer N, given that 0 <= N < M"""
        self.store *= M
        self.store += N

    def pop(self, M):
        """Retrieves the last integer stored."""
        N = self.store % M
        # self.store //= M
        self.store = torch.div(self.store, M, rounding_mode="trunc")
        return N

    def __repr__(self):
        return repr(self.store)
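# Minimal usage sketch of an exactly reversible update (assumes torch is available):
#
#     v = TorchExactRep(torch.zeros(3))
#     v.mul(0.9)
#     v.add(torch.ones(3))    # forward step
#     v.sub(torch.ones(3))
#     v.div(0.9)              # exact inverse: v.val is bit-for-bit zero again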
| 26.612245 | 78 | 0.562117 | 512 | 3,912 | 4.169922 | 0.289063 | 0.065574 | 0.048712 | 0.039344 | 0.162529 | 0.158314 | 0.0637 | 0.0637 | 0.039344 | 0.039344 | 0 | 0.009927 | 0.330521 | 3,912 | 146 | 79 | 26.794521 | 0.805269 | 0.179448 | 0 | 0.16 | 0 | 0 | 0.003162 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.21 | false | 0 | 0.02 | 0.03 | 0.42 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e2943c239d9c2e7a22ad3b9b20bec4da90c41c08 | 594 | py | Python | master/fresh-samples-master/fresh-samples-master/python_samples/create_contact.py | AlexRogalskiy/DevArtifacts | 931aabb8cbf27656151c54856eb2ea7d1153203a | [
"MIT"
] | 4 | 2018-09-07T15:35:24.000Z | 2019-03-27T09:48:12.000Z | master/fresh-samples-master/fresh-samples-master/python_samples/create_contact.py | AlexRogalskiy/DevArtifacts | 931aabb8cbf27656151c54856eb2ea7d1153203a | [
"MIT"
] | 371 | 2020-03-04T21:51:56.000Z | 2022-03-31T20:59:11.000Z | master/fresh-samples-master/fresh-samples-master/python_samples/create_contact.py | AlexRogalskiy/DevArtifacts | 931aabb8cbf27656151c54856eb2ea7d1153203a | [
"MIT"
] | 3 | 2019-06-18T19:57:17.000Z | 2020-11-06T03:55:08.000Z | ## This script requires "requests": http://docs.python-requests.org/
## To install: pip install requests
import requests
import json
FRESHDESK_ENDPOINT = "http://YOUR_DOMAIN.freshdesk.com" # check if you have configured https, modify accordingly
FRESHDESK_KEY = "YOUR_API_TOKEN"
user_info = {"user":{"name":"Example User", "email":"example@example.com"}}
my_headers = {"Content-Type": "application/json"}
r = requests.post(FRESHDESK_ENDPOINT + '/contacts.json',
                  auth=(FRESHDESK_KEY, "X"), data=json.dumps(user_info),
                  headers=my_headers)
print(r.status_code)
print(r.content)
| 31.263158 | 112 | 0.734007 | 80 | 594 | 5.3 | 0.6125 | 0.066038 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124579 | 594 | 18 | 113 | 33 | 0.815385 | 0.257576 | 0 | 0 | 0 | 0 | 0.305747 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.181818 | null | null | 0.181818 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e294c03490a934e79c6f93eaa739cbcd7738d18b | 1,102 | py | Python | ordenes/migrations/0003_auto_20200307_0359.py | Omar-Gonzalez/echangarro-demo | a7a970d9793c5e467ca117e9f515a9da423fac14 | [
"MIT"
] | null | null | null | ordenes/migrations/0003_auto_20200307_0359.py | Omar-Gonzalez/echangarro-demo | a7a970d9793c5e467ca117e9f515a9da423fac14 | [
"MIT"
] | 9 | 2021-03-19T11:25:28.000Z | 2022-03-12T00:35:18.000Z | ordenes/migrations/0003_auto_20200307_0359.py | Omar-Gonzalez/echangarro-demo | a7a970d9793c5e467ca117e9f515a9da423fac14 | [
"MIT"
] | null | null | null | # Generated by Django 2.2.2 on 2020-03-07 03:59
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('ordenes', '0002_auto_20200305_0056'),
    ]

    operations = [
        migrations.AddField(
            model_name='orden',
            name='guia_de_envio',
            field=models.CharField(blank=True, max_length=640, null=True),
        ),
        migrations.AlterField(
            model_name='orden',
            name='estado',
            field=models.CharField(choices=[('TENTATIVA', 'TENTATIVA'), ('PENDIENTE PAGO', 'PENDIENTE PAGO'), ('PAGADO', 'PAGADO'), ('ENVIADO', 'ENVIADO'), ('ENTREGADO', 'ENTREGADO'), ('CANCELADO', 'CANCELADO'), ('DEVUELTO', 'DEVUELTO')], default='INICIADO', max_length=110),
        ),
        migrations.AlterField(
            model_name='orden',
            name='preferencia_de_pago',
            field=models.CharField(choices=[('MERCADO PAGO', 'MERCADO PAGO'), ('PAYPAL', 'PAYPAL'), ('TRANSFERENCIA BANCARIA', 'TRANSFERENCIA BANCARIA')], default='MERCADO LIBRE', max_length=110),
        ),
    ]
| 38 | 275 | 0.606171 | 109 | 1,102 | 6.009174 | 0.53211 | 0.041221 | 0.064122 | 0.082443 | 0.116031 | 0.116031 | 0 | 0 | 0 | 0 | 0 | 0.047281 | 0.232305 | 1,102 | 28 | 276 | 39.357143 | 0.72695 | 0.040835 | 0 | 0.363636 | 1 | 0 | 0.291943 | 0.021801 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.045455 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e298277ea2466de2883498bb7a044f52d8a88109 | 665 | py | Python | dodo_commands/extra/dodo_standard_commands/decorators/pause.py | mnieber/dodo-commands | 82330006af2c6739b030ce932ba1ff9078b241ee | [
"MIT"
] | 8 | 2016-12-01T16:45:45.000Z | 2020-05-05T20:56:57.000Z | dodo_commands/extra/dodo_standard_commands/decorators/pause.py | mnieber/dodo-commands | 82330006af2c6739b030ce932ba1ff9078b241ee | [
"MIT"
] | 75 | 2017-01-29T19:25:45.000Z | 2020-01-28T09:40:47.000Z | dodo_commands/extra/dodo_standard_commands/decorators/pause.py | mnieber/dodo-commands | 82330006af2c6739b030ce932ba1ff9078b241ee | [
"MIT"
] | 2 | 2017-06-01T09:55:20.000Z | 2017-06-08T14:45:08.000Z | """Pauses the execution."""
import time
from dodo_commands.framework.decorator_utils import uses_decorator
class Decorator:
    def is_used(self, config, command_name, decorator_name):
        return uses_decorator(config, command_name, decorator_name)

    def add_arguments(self, parser):  # override
        parser.add_argument(
            "--pause-ms", type=int, help="Pause in milliseconds before continuing"
        )

    def modify_args(self, command_line_args, args_tree_root_node, cwd):  # override
        if getattr(command_line_args, "pause_ms", 0):
            time.sleep(command_line_args.pause_ms / 1000)  # time.sleep expects seconds
        return args_tree_root_node, cwd
| 33.25 | 83 | 0.709774 | 87 | 665 | 5.126437 | 0.54023 | 0.047085 | 0.100897 | 0.116592 | 0.318386 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009416 | 0.201504 | 665 | 19 | 84 | 35 | 0.830508 | 0.06015 | 0 | 0 | 0 | 0 | 0.092233 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0.153846 | 0.076923 | 0.615385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e29aea0adeb87fbcda39b28f2cebe8dcefd85597 | 883 | py | Python | tests/unit/lms/extensions/feature_flags/views/_predicates_test.py | mattdricker/lms | 40b8a04f95e69258c6c0d7ada543f4b527918ecf | [
"BSD-2-Clause"
] | 38 | 2017-12-30T23:49:53.000Z | 2022-02-15T21:07:49.000Z | tests/unit/lms/extensions/feature_flags/views/_predicates_test.py | mattdricker/lms | 40b8a04f95e69258c6c0d7ada543f4b527918ecf | [
"BSD-2-Clause"
] | 1,733 | 2017-11-09T18:46:05.000Z | 2022-03-31T11:05:50.000Z | tests/unit/lms/extensions/feature_flags/views/_predicates_test.py | mattdricker/lms | 40b8a04f95e69258c6c0d7ada543f4b527918ecf | [
"BSD-2-Clause"
] | 10 | 2018-07-11T17:12:46.000Z | 2022-01-07T20:00:23.000Z | from unittest import mock
from lms.extensions.feature_flags.views._predicates import FeatureFlagViewPredicate
class TestFeatureFlagsViewPredicate:
    def test_text(self):
        assert (
            FeatureFlagViewPredicate("test_feature", mock.sentinel.config).text()
            == "feature_flag = test_feature"
        )

    def test_phash(self):
        assert (
            FeatureFlagViewPredicate("test_feature", mock.sentinel.config).phash()
            == "feature_flag = test_feature"
        )

    def test_it_delegates_to_request_dot_feature(self, pyramid_request):
        view_predicate = FeatureFlagViewPredicate("test_feature", mock.sentinel.config)

        matches = view_predicate(mock.sentinel.context, pyramid_request)

        pyramid_request.feature.assert_called_once_with("test_feature")
        assert matches == pyramid_request.feature.return_value
| 33.961538 | 87 | 0.714609 | 90 | 883 | 6.7 | 0.411111 | 0.109453 | 0.174129 | 0.19403 | 0.393035 | 0.393035 | 0.208955 | 0.208955 | 0 | 0 | 0 | 0 | 0.202718 | 883 | 25 | 88 | 35.32 | 0.856534 | 0 | 0 | 0.222222 | 0 | 0 | 0.115515 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 1 | 0.166667 | false | 0 | 0.111111 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e29bd2810c7f8c926395ee26eadb354bd458bdc4 | 468 | py | Python | 2015/python/01.py | gcp825/advent_of_code | b4ea17572847e1a9044487041b3e12a0da58c94b | [
"MIT"
] | 1 | 2021-12-29T09:32:08.000Z | 2021-12-29T09:32:08.000Z | 2015/python/01.py | gcp825/advent_of_code | b4ea17572847e1a9044487041b3e12a0da58c94b | [
"MIT"
] | null | null | null | 2015/python/01.py | gcp825/advent_of_code | b4ea17572847e1a9044487041b3e12a0da58c94b | [
"MIT"
] | null | null | null | def read_file(filepath):
    with open(filepath, 'r') as i:
        inst = [int(x) for x in i.read().replace(')', '-1,').replace('(', '1,').strip('\n').strip(',').split(',')]
    return inst
def calculate(inst, floor=0):
    for i, f in enumerate(inst):
        floor += f
        if floor < 0:
            break
    return sum(inst), i + 1
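# pt1 is the final floor (the sum of all +1/-1 instructions); pt2 is the
# 1-based position of the first instruction that takes Santa below floor 0.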
def main(filepath):
    pt1, pt2 = calculate(read_file(filepath))
    return pt1, pt2
print(main('1.txt'))
| 22.285714 | 112 | 0.532051 | 66 | 468 | 3.742424 | 0.5 | 0.064777 | 0.129555 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029499 | 0.275641 | 468 | 20 | 113 | 23.4 | 0.699115 | 0 | 0 | 0 | 0 | 0 | 0.036325 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0 | 0 | 0 | 0.461538 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e29c27f26d7b20a1fcc5692f39ceeba53ab303aa | 4,810 | py | Python | Unit3_StructuredTypes/ps3_hangman.py | myzzdeedee/MITx_6001x | 0843ac666e1d58e52bd09c8ce9144fe9d6eb78c8 | [
"MIT"
] | null | null | null | Unit3_StructuredTypes/ps3_hangman.py | myzzdeedee/MITx_6001x | 0843ac666e1d58e52bd09c8ce9144fe9d6eb78c8 | [
"MIT"
] | null | null | null | Unit3_StructuredTypes/ps3_hangman.py | myzzdeedee/MITx_6001x | 0843ac666e1d58e52bd09c8ce9144fe9d6eb78c8 | [
"MIT"
] | null | null | null | # Hangman game
#
# -----------------------------------
# Helper code
# You don't need to understand this helper code,
# but you will have to know how to use the functions
# (so be sure to read the docstrings!)
import random
import string
WORDLIST_FILENAME = "/Users/deedeebanh/Documents/MITx_6.00.1.x/ProblemSet3/words.txt"
def loadWords():
    """
    Returns a list of valid words. Words are strings of lowercase letters.

    Depending on the size of the word list, this function may
    take a while to finish.
    """
    print("Loading word list from file...")
    # inFile: file
    inFile = open(WORDLIST_FILENAME, 'r')
    # line: string
    line = inFile.readline()
    # wordlist: list of strings
    wordlist = line.split()
    print(" ", len(wordlist), "words loaded.")
    return wordlist
def chooseWord(wordlist):
    """
    wordlist (list): list of words (strings)

    Returns a word from wordlist at random
    """
    return random.choice(wordlist)
# end of helper code
# -----------------------------------
# Load the list of words into the variable wordlist
# so that it can be accessed from anywhere in the program
wordlist = loadWords()
def isWordGuessed(secretWord, lettersGuessed):
    '''
    secretWord: string, the word the user is guessing
    lettersGuessed: list, what letters have been guessed so far
    returns: boolean, True if all the letters of secretWord are in lettersGuessed;
      False otherwise
    '''
    # FILL IN YOUR CODE HERE...
    isAinB = [item in lettersGuessed for item in secretWord]
    return all(isAinB)
def getGuessedWord(secretWord, lettersGuessed):
    '''
    secretWord: string, the word the user is guessing
    lettersGuessed: list, what letters have been guessed so far
    returns: string, comprised of letters and underscores that represents
      what letters in secretWord have been guessed so far.
    '''
    # FILL IN YOUR CODE HERE...
    store = list('_' * len(secretWord))  # first set up ___ = length of secretWord
    for i in range(len(secretWord)):
        for j in range(len(lettersGuessed)):
            if lettersGuessed[j] == secretWord[i]:
                store[i] = lettersGuessed[j]  # replace _ with the letter
    return ''.join(store)
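# Example: getGuessedWord('apple', ['a', 'p']) returns 'app__'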
def getAvailableLetters(lettersGuessed):
    '''
    lettersGuessed: list, what letters have been guessed so far
    returns: string, comprised of letters that represents what letters have not
      yet been guessed.
    '''
    # FILL IN YOUR CODE HERE...
    diff = [item for item in list(string.ascii_lowercase) if item not in lettersGuessed]
    return ''.join(diff)
def hangman(secretWord):
    '''
    secretWord: string, the secret word to guess.

    Starts up an interactive game of Hangman.

    * At the start of the game, let the user know how many
      letters the secretWord contains.

    * Ask the user to supply one guess (i.e. letter) per round.

    * The user should receive feedback immediately after each guess
      about whether their guess appears in the computers word.

    * After each round, you should also display to the user the
      partially guessed word so far, as well as letters that the
      user has not yet guessed.

    Follows the other limitations detailed in the problem write-up.
    '''
    # FILL IN YOUR CODE HERE...
    print('Welcome to the game, Hangman!')
    print('I am thinking of a word that is ' + str(len(secretWord)) + ' letters long.')
    print('-------------')
    numOfGuesses = 8
    lettersGuessed = list()
    while numOfGuesses > 0:
        print("You have " + str(numOfGuesses) + " guesses left.")
        print("Available letters: " + getAvailableLetters(lettersGuessed))
        var = input("Please guess a letter: ")
        var = var.lower()
        if var in lettersGuessed:
            print("Oops! You've already guessed that letter: " + getGuessedWord(secretWord, lettersGuessed))
        elif var not in secretWord:
            print("Oops! That letter is not in my word: " + getGuessedWord(secretWord, lettersGuessed))
            lettersGuessed.append(var)
            numOfGuesses -= 1
        else:
            lettersGuessed.append(var)
            print("Good Guess: " + getGuessedWord(secretWord, lettersGuessed))
        print("------------")
        if isWordGuessed(secretWord, lettersGuessed):
            print("Congratulations, you won!")
            return 1
        if numOfGuesses == 0:
            print("Sorry, you ran out of guesses. The word was " + secretWord)
            return 1
    return 0
# When you've completed your hangman function, uncomment these two lines
# and run this file to test! (hint: you might want to pick your own
# secretWord while you're testing)
secretWord = chooseWord(wordlist).lower()
#secretWord = 'c'
hangman(secretWord)
| 33.636364 | 108 | 0.65447 | 606 | 4,810 | 5.179868 | 0.353135 | 0.01561 | 0.019114 | 0.021663 | 0.138898 | 0.109589 | 0.109589 | 0.109589 | 0.109589 | 0.109589 | 0 | 0.003288 | 0.241164 | 4,810 | 142 | 109 | 33.873239 | 0.856712 | 0.452391 | 0 | 0.071429 | 0 | 0 | 0.178571 | 0.025862 | 0 | 0 | 0 | 0.028169 | 0 | 1 | 0.107143 | false | 0 | 0.035714 | 0 | 0.285714 | 0.232143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e2a1706b79dfe59b8505a0173c3194887a5e11ef | 613 | py | Python | decrypt.py | angelodpadron/asymmetric-encryption-exercise | f204c3cc293db170e79be41a1125c329b07e9c3b | [
"Unlicense"
] | null | null | null | decrypt.py | angelodpadron/asymmetric-encryption-exercise | f204c3cc293db170e79be41a1125c329b07e9c3b | [
"Unlicense"
] | null | null | null | decrypt.py | angelodpadron/asymmetric-encryption-exercise | f204c3cc293db170e79be41a1125c329b07e9c3b | [
"Unlicense"
] | null | null | null | # Information security
# Encryption exercise
# Angelo Padron (42487)

from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

with open('key.bin', 'rb') as k:
    key = k.read()

with open('vector.bin', 'rb') as v:
    init_vector = v.read()

# CBC decryption needs the exact same key and IV that were used for encryption
cipher = AES.new(key, AES.MODE_CBC, init_vector)

with open('encrypted_file', 'rb') as encrypted:
    e_file = encrypted.read()

# strip() removes the padding that was added during encryption
with open('decrypted_file.txt', 'wb') as decrypted:
    decrypted.write(cipher.decrypt(e_file).strip())
| 26.652174 | 88 | 0.698206 | 89 | 613 | 4.707865 | 0.550562 | 0.076372 | 0.033413 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01002 | 0.185971 | 613 | 22 | 89 | 27.863636 | 0.829659 | 0.252855 | 0 | 0 | 0 | 0 | 0.132251 | 0 | 0 | 0 | 0 | 0.045455 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e2af856c04d6440da75265a16b72d785a5cf429e | 2,877 | py | Python | openapi_core/schema/schemas/_format.py | gjo/openapi-core | cabe512fb043d3e95b93fbe7a20b8e2d095d7d99 | [
"BSD-3-Clause"
] | null | null | null | openapi_core/schema/schemas/_format.py | gjo/openapi-core | cabe512fb043d3e95b93fbe7a20b8e2d095d7d99 | [
"BSD-3-Clause"
] | null | null | null | openapi_core/schema/schemas/_format.py | gjo/openapi-core | cabe512fb043d3e95b93fbe7a20b8e2d095d7d99 | [
"BSD-3-Clause"
] | null | null | null | from base64 import b64encode, b64decode
import binascii
from datetime import datetime
from uuid import UUID
from jsonschema._format import FormatChecker
from jsonschema.exceptions import FormatError
from six import binary_type, text_type, integer_types
DATETIME_HAS_STRICT_RFC3339 = False
DATETIME_HAS_ISODATE = False
DATETIME_RAISES = ()
try:
    import isodate
except ImportError:
    pass
else:
    DATETIME_HAS_ISODATE = True
    DATETIME_RAISES += (ValueError, isodate.ISO8601Error)

try:
    import strict_rfc3339
except ImportError:
    pass
else:
    DATETIME_HAS_STRICT_RFC3339 = True
    DATETIME_RAISES += (ValueError, TypeError)


class StrictFormatChecker(FormatChecker):

    def check(self, instance, format):
        if format not in self.checkers:
            raise FormatError(
                "Format checker for %r format not found" % (format, ))
        return super(StrictFormatChecker, self).check(
            instance, format)


oas30_format_checker = StrictFormatChecker()


@oas30_format_checker.checks('int32')
def is_int32(instance):
    return isinstance(instance, integer_types)


@oas30_format_checker.checks('int64')
def is_int64(instance):
    return isinstance(instance, integer_types)


@oas30_format_checker.checks('float')
def is_float(instance):
    return isinstance(instance, float)


@oas30_format_checker.checks('double')
def is_double(instance):
    # float has double precision in Python
    # It's double in CPython and Jython
    return isinstance(instance, float)


@oas30_format_checker.checks('binary')
def is_binary(instance):
    return isinstance(instance, binary_type)


@oas30_format_checker.checks('byte', raises=(binascii.Error, TypeError))
def is_byte(instance):
    if isinstance(instance, text_type):
        instance = instance.encode()

    return b64encode(b64decode(instance)) == instance
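# The round trip b64encode(b64decode(x)) == x only succeeds for valid,
# canonically padded base64 input, so it doubles as the validity check.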
@oas30_format_checker.checks("date-time", raises=DATETIME_RAISES)
def is_datetime(instance):
if isinstance(instance, binary_type):
return False
if not isinstance(instance, text_type):
return True
if DATETIME_HAS_STRICT_RFC3339:
return strict_rfc3339.validate_rfc3339(instance)
if DATETIME_HAS_ISODATE:
return isodate.parse_datetime(instance)
return True
@oas30_format_checker.checks("date", raises=ValueError)
def is_date(instance):
if isinstance(instance, binary_type):
return False
if not isinstance(instance, text_type):
return True
return datetime.strptime(instance, "%Y-%m-%d")
@oas30_format_checker.checks("uuid", raises=AttributeError)
def is_uuid(instance):
if isinstance(instance, binary_type):
return False
if not isinstance(instance, text_type):
return True
try:
uuid_obj = UUID(instance)
except ValueError:
return False
return text_type(uuid_obj) == instance
| 24.801724 | 72 | 0.73236 | 344 | 2,877 | 5.924419 | 0.229651 | 0.105986 | 0.088322 | 0.105986 | 0.314033 | 0.286555 | 0.251227 | 0.251227 | 0.199215 | 0.199215 | 0 | 0.028169 | 0.18561 | 2,877 | 115 | 73 | 25.017391 | 0.841656 | 0.024331 | 0 | 0.333333 | 0 | 0 | 0.033524 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.123457 | false | 0.024691 | 0.135802 | 0.061728 | 0.506173 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
2c35def8ee1647a9c2698f2968e607d08e8fd841 | 192 | py | Python | App/urls.py | python1801aclchemy/AXF | 64f8d1ceff49a1a9398b06dca8a3bcf9d0c76527 | [
"Apache-2.0"
] | null | null | null | App/urls.py | python1801aclchemy/AXF | 64f8d1ceff49a1a9398b06dca8a3bcf9d0c76527 | [
"Apache-2.0"
] | null | null | null | App/urls.py | python1801aclchemy/AXF | 64f8d1ceff49a1a9398b06dca8a3bcf9d0c76527 | [
"Apache-2.0"
] | null | null | null | from flask_restful import Api
from App.apis import Hello, Home
api = Api()
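# Deferred initialisation: the Api object is created unbound here and attached
# to the Flask app later via init_app(), matching the application-factory pattern.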
def init_urls(app):
    api.init_app(app=app)
    api.add_resource(Hello, "/hello/")
    api.add_resource(Home, "/home/") | 17.454545 | 34 | 0.71875 | 32 | 192 | 4.15625 | 0.4375 | 0.090226 | 0.210526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135417 | 192 | 11 | 35 | 17.454545 | 0.801205 | 0 | 0 | 0 | 0 | 0.067358 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2c3962395215ba9ce077e3f3b133d79c166c4278 | 1,599 | py | Python | beerhunter/breweries/models.py | zhukovvlad/beerhunt-project | e841f4946c08275e9d189605ffe9026d6657d63f | [
"MIT"
] | null | null | null | beerhunter/breweries/models.py | zhukovvlad/beerhunt-project | e841f4946c08275e9d189605ffe9026d6657d63f | [
"MIT"
] | null | null | null | beerhunter/breweries/models.py | zhukovvlad/beerhunt-project | e841f4946c08275e9d189605ffe9026d6657d63f | [
"MIT"
] | null | null | null | import os
from uuid import uuid4

from django.db import models
from django.urls import reverse
from django.utils.timezone import now as timezone_now
from autoslug import AutoSlugField
from model_utils.models import TimeStampedModel
from django_countries.fields import CountryField
from django.utils.translation import gettext as _
from imagekit.models import ImageSpecField
from imagekit.processors import ResizeToFill


def brewery_directory_path_with_uuid(instance, filename):
    now = timezone_now()
    extension = os.path.splitext(filename)[1]
    extension = extension.lower()
    uuid_for_url = uuid4()
    return f"{now:%Y/%m}/breweries/{uuid_for_url}{instance.pk}{extension}"


class Brewery(TimeStampedModel):
    title = models.CharField(_('Title of brewery'), max_length=255)
    slug = AutoSlugField(
        "Brewery Slug",
        unique=True,
        always_update=False,
        populate_from='title'
    )
    country_of_origin = CountryField(
        "Country of Origin", blank=True
    )
    image = models.ImageField(
        upload_to=brewery_directory_path_with_uuid,
        default='images/default/fermentation.png',
        null=True,
        blank=True
    )
    icon = ImageSpecField(
        source='image',
        processors=[ResizeToFill(100, 100)],
        format='PNG',
        options={'quality': 60}
    )

    def __str__(self):
        return self.title

    def get_absolute_url(self):
        return reverse("breweries:BreweryDetail", kwargs={"slug": self.slug})

    class Meta:
        verbose_name = "Brewery"
        verbose_name_plural = "Breweries"
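# --- Added sketch (doctest-style, illustrative values only): the upload path
# the helper above produces inside a configured Django project.
# >>> class _Fake: pk = 7           # hypothetical stand-in for a model instance
# >>> brewery_directory_path_with_uuid(_Fake(), 'Label.JPG')
# '2021/06/breweries/<uuid4>7.jpg'  # "<uuid4>" + pk concatenated, extension lowercased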
| 27.101695 | 77 | 0.695435 | 187 | 1,599 | 5.770053 | 0.475936 | 0.046339 | 0.027804 | 0.044486 | 0.0519 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011111 | 0.212008 | 1,599 | 58 | 78 | 27.568966 | 0.845238 | 0 | 0 | 0 | 0 | 0 | 0.124453 | 0.071295 | 0 | 0 | 0 | 0 | 0 | 1 | 0.06383 | false | 0 | 0.234043 | 0.042553 | 0.510638 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
2c3ce123c1b3e7bd690b52526495222fa3c1ade0 | 588 | py | Python | release_type.py | sairam4123/GodotReleaseScriptPython | 2fd2644b0301f20b89b6772a0c93cec6d012f080 | [
"MIT"
] | null | null | null | release_type.py | sairam4123/GodotReleaseScriptPython | 2fd2644b0301f20b89b6772a0c93cec6d012f080 | [
"MIT"
] | null | null | null | release_type.py | sairam4123/GodotReleaseScriptPython | 2fd2644b0301f20b89b6772a0c93cec6d012f080 | [
"MIT"
] | null | null | null | from enum import Enum, auto

class ReleaseLevel(Enum):
    alpha = auto()
    beta = auto()
    release_candidate = auto()
    public = auto()

    @classmethod
    def has_value(cls, value):
        # _value2member_map_ maps raw values to members, so membership must be
        # tested against the keys; the original tested .values() (the members
        # themselves), which never matches a raw value.
        return value in cls._value2member_map_


class ReleaseType(Enum):
    bugfix = auto()
    minor = auto()
    major = auto()
    hotfix = auto()

    @classmethod
    def has_value(cls, value):
        # same key-membership fix as above
        return value in cls._value2member_map_


def value_from_key(dict_, value):
    for key in dict_:
        if dict_[key] == value:
            return key
    return None
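# --- Added sketch: illustrative use of the helpers above; auto() numbers
# members from 1, so ReleaseLevel.beta.value == 2.
if __name__ == '__main__':
    print(ReleaseLevel.has_value(2))                      # True  (beta)
    print(ReleaseType.has_value(99))                      # False
    print(value_from_key({'patch': 1, 'minor': 2}, 2))    # 'minor'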
| 18.967742 | 55 | 0.620748 | 72 | 588 | 4.875 | 0.430556 | 0.094017 | 0.102564 | 0.119658 | 0.404558 | 0.404558 | 0.404558 | 0.404558 | 0.404558 | 0.404558 | 0 | 0.004717 | 0.278912 | 588 | 30 | 56 | 19.6 | 0.823113 | 0 | 0 | 0.272727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.136364 | false | 0 | 0.045455 | 0.090909 | 0.818182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
2c3dba86ff323fef28c9a80d764d95da5f24f6b8 | 1,559 | py | Python | tests/test_cf_gh_pages_dns_records.py | mondeja/pre-commit-hooks | 226a386dd7cd4e7a9d7bb6c7aaff0fea7cdf269b | [
"BSD-3-Clause"
] | null | null | null | tests/test_cf_gh_pages_dns_records.py | mondeja/pre-commit-hooks | 226a386dd7cd4e7a9d7bb6c7aaff0fea7cdf269b | [
"BSD-3-Clause"
] | 14 | 2021-06-14T12:25:22.000Z | 2022-03-10T20:41:30.000Z | tests/test_cf_gh_pages_dns_records.py | mondeja/pre-commit-hooks | 226a386dd7cd4e7a9d7bb6c7aaff0fea7cdf269b | [
"BSD-3-Clause"
] | null | null | null | """Tests for 'cloudflare-gh-pages-dns' hook."""
import contextlib
import io
import os

import pytest

from hooks.cf_gh_pages_dns_records import check_cloudflare_gh_pages_dns_records


@pytest.mark.skipif(
    not os.environ.get("CF_API_KEY"),
    reason=(
        "Cloudflare user API key defined in 'CF_API_KEY' environment variable"
        " needed."
    ),
)
@pytest.mark.parametrize("quiet", (True, False), ids=("quiet=True", "quiet=False"))
@pytest.mark.parametrize(
    ("domain", "username", "expected_result", "expected_stderr"),
    (
        pytest.param(
            "hrcgen.ml",
            "mondeja",
            True,
            "",
            id="domain=hrcgen.ml-username=mondeja",  # configured with GH pages
        ),
        pytest.param(
            "foobar.baz",
            "mondeja",
            False,
            (
                "The domain 'foobar.baz' was not found being managed by your"
                " Cloudflare account.\n"
            ),
            id="domain=foobar.baz-username=mondeja",  # inexistent zone
        ),
        # TODO: add example domain to test bad configuration
    ),
)
def test_check_cloudflare_gh_pages_dns_records(
    domain,
    username,
    expected_result,
    expected_stderr,
    quiet,
):
    stderr = io.StringIO()
    with contextlib.redirect_stderr(stderr):
        result = check_cloudflare_gh_pages_dns_records(
            domain,
            username,
            quiet=quiet,
        )
    assert result is expected_result
    assert stderr.getvalue() == expected_stderr
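# --- Added note (assumption about invocation): the skipif marker above means
# these tests only run when a Cloudflare API key is exported, e.g.
#   CF_API_KEY=... pytest tests/test_cf_gh_pages_dns_records.py -v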
| 25.145161 | 83 | 0.594612 | 168 | 1,559 | 5.333333 | 0.440476 | 0.046875 | 0.055804 | 0.089286 | 0.216518 | 0.216518 | 0.102679 | 0.102679 | 0 | 0 | 0 | 0 | 0.298268 | 1,559 | 61 | 84 | 25.557377 | 0.819013 | 0.085953 | 0 | 0.254902 | 0 | 0 | 0.237826 | 0.047283 | 0 | 0 | 0 | 0.016393 | 0.039216 | 1 | 0.019608 | false | 0 | 0.098039 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2c3fb2a119544a5df7baf6e26ee12fd13218d950 | 5,046 | py | Python | devsupport/check_loggers/check_loggers.py | bradh/jmisb | 94456903782e08bb7a1909736810f171c2df8f8e | [
"MIT"
] | 26 | 2018-05-31T01:36:10.000Z | 2022-03-23T21:40:31.000Z | devsupport/check_loggers/check_loggers.py | bradh/jmisb | 94456903782e08bb7a1909736810f171c2df8f8e | [
"MIT"
] | 206 | 2018-05-22T17:56:12.000Z | 2022-03-18T10:55:27.000Z | devsupport/check_loggers/check_loggers.py | bradh/jmisb | 94456903782e08bb7a1909736810f171c2df8f8e | [
"MIT"
] | 10 | 2019-03-30T00:53:40.000Z | 2022-03-16T18:27:22.000Z | import os
modules = ['api', 'core']
sourcedirs = []

expectedToHaveNoTest = ['api/src/main/java/org/jmisb/api/klv/LdsParser.java',
                        'api/src/main/java/org/jmisb/api/video/VideoDecodeThread.java',
                        'api/src/main/java/org/jmisb/api/video/VideoOutput.java',
                        'api/src/main/java/org/jmisb/api/video/VideoStreamOutput.java',
                        'api/src/main/java/org/jmisb/api/video/DemuxerUtils.java',
                        'api/src/main/java/org/jmisb/api/video/MetadataDecodeThread.java',
                        'api/src/main/java/org/jmisb/api/video/VideoInput.java',
                        'api/src/main/java/org/jmisb/api/video/StreamDemuxer.java',
                        'api/src/main/java/org/jmisb/api/video/VideoFileOutput.java',
                        'api/src/main/java/org/jmisb/api/video/FfmpegLog.java',
                        'api/src/main/java/org/jmisb/api/video/FileDemuxer.java',
                        'core/src/main/java/org/jmisb/core/video/FrameConverter.java']

# flag that says whether everything was OK. Any failing check fails the result.
checkPasses = True


def fileHasMatchingLine(filePath, text):
    f = open(filePath, 'r')
    for line in f.readlines():
        if text in line:
            return True
    return False


def usesJavaUtilLogging(filePath):
    return fileHasMatchingLine(filePath, 'java.util.logging')


def hasLogging(filePath):
    return fileHasMatchingLine(filePath, 'org.slf4j.Logger')


def isCalledLOGGER(text):
    textParts = text.split('=')
    leftPart = textParts[0].strip()
    variableName = leftPart.split()[-1].strip()
    # would be better to pick just one, but two cases isn't too bad
    if variableName in ['logger', 'LOGGER']:
        return True
    else:
        print('Unexpected variable name: ' + variableName)
        return False


def isPrivateStaticFinal(text):
    # TODO: fix final
    # return text.startswith('private static final ')
    return text.startswith('private static ')


def matchesExpectedFormat(text):
    return isCalledLOGGER(text) and isPrivateStaticFinal(text)


def matchesFileName(text, filePath):
    fileName = filePath.split('/')[-1]
    # print(fileName)
    className = fileName.split('.')[0]
    # print(className)
    # print(text.split('(')[-1])
    classInLoggerName = text.split('(')[-1].split('.')[0]
    # print(classInLoggerName)
    if className == classInLoggerName:
        return True
    else:
        print('Class from filename:' + className + ", but class from logger: " + classInLoggerName)
        return False


def checkUsesExpectedLoggerName(filePath):
    # 'global' is required here: without it, the assignments below created a
    # local variable and never failed the overall result (a bug in the original).
    global checkPasses
    didFindFactory = False
    f = open(filePath, 'r')
    for line in f.readlines():
        if "LoggerFactory.getLogger" in line:
            didFindFactory = True
            text = line.strip()
            if not matchesExpectedFormat(text):
                print('Does not match expected format ' + text)
                checkPasses = False
            if not matchesFileName(text, filePath):
                print('Does not match expected class name ' + text)
                checkPasses = False
    if not didFindFactory:
        print("Did not find expected factory line in " + filePath)
        checkPasses = False


def addUsefulSourceFiles(filePath):
    if usesJavaUtilLogging(filePath):
        filesWithJavaUtilLogging.append(filePath)
    if hasLogging(filePath):
        filesWithLoggers.append(filePath)


def checkTestFile(testFilePath):
    global checkPasses  # same fix as in checkUsesExpectedLoggerName
    fileIsOK = False
    f = open(testFilePath, 'r')
    for line in f.readlines():
        if 'extends LoggerChecks' in line:
            fileIsOK = True
            break
        if 'TestLoggerFactory.getTestLogger' in line:
            fileIsOK = True
            break
    if not fileIsOK:
        print(testFilePath + " did not contain the expected test")
        checkPasses = False


def checkHasTestCase(sourceFilePath):
    # print(sourceFilePath)
    testFilePath = sourceFilePath.replace('main', 'test').replace('.java', 'Test.java')
    # print(testFilePath)
    if not os.path.exists(testFilePath):
        if not sourceFilePath.replace('../../', '') in expectedToHaveNoTest:
            print('Did not find test case for ' + sourceFilePath + " at " + testFilePath)
    elif sourceFilePath.replace('../../', '') in expectedToHaveNoTest:
        print('Found unexpected test case for ' + sourceFilePath + " at " + testFilePath)
    else:
        checkTestFile(testFilePath)


filesWithJavaUtilLogging = []
filesWithLoggers = []

for module in modules:
    sourcedir = os.path.join("..", "..", module, "src", "main", "java")
    for subdir, dirs, files in os.walk(sourcedir):
        for file in files:
            filePath = os.path.join(subdir, file)
            if not filePath.endswith('.java'):
                continue
            addUsefulSourceFiles(filePath)

if len(filesWithJavaUtilLogging) > 0:
    print('The following files use legacy Java logging:')
    for fileName in filesWithJavaUtilLogging:
        print('\t' + fileName)
    checkPasses = False

print('The following files use SLF4J logging:')
for fileName in filesWithLoggers:
    print('\t' + fileName)
    checkUsesExpectedLoggerName(fileName)
    checkHasTestCase(fileName)
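# --- Added sketch (assumption, not in the original script): give the script a
# CI-friendly exit status so the checkPasses flag above is actually consumed.
import sys

if not checkPasses:
    sys.exit(1)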
| 35.787234 | 99 | 0.664883 | 555 | 5,046 | 6.045045 | 0.273874 | 0.027124 | 0.042623 | 0.050075 | 0.276006 | 0.197019 | 0.136215 | 0.122206 | 0.122206 | 0.020864 | 0 | 0.002525 | 0.21522 | 5,046 | 140 | 100 | 36.042857 | 0.844697 | 0.065398 | 0 | 0.225225 | 0 | 0 | 0.26318 | 0.154762 | 0 | 0 | 0 | 0.007143 | 0 | 1 | 0.099099 | false | 0.054054 | 0.009009 | 0.036036 | 0.198198 | 0.108108 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
2c42e21de4e45b73eacc95f497a0cac35eca60ff | 858 | py | Python | edx/quiz/unique_values.py | spradeepv/dive-into-python | ec27d4686b7b007d21f9ba4f85d042be31ee2639 | [
"MIT"
] | null | null | null | edx/quiz/unique_values.py | spradeepv/dive-into-python | ec27d4686b7b007d21f9ba4f85d042be31ee2639 | [
"MIT"
] | null | null | null | edx/quiz/unique_values.py | spradeepv/dive-into-python | ec27d4686b7b007d21f9ba4f85d042be31ee2639 | [
"MIT"
] | null | null | null | """
Write a Python function that returns a list of keys in aDict that map to
integer values that are unique (i.e. values appear exactly once in aDict).
The list of keys you return should be sorted in increasing order. (If aDict
does not contain any unique values, you should return an empty list.)

This function takes in a dictionary and returns a list.
"""


def uniqueValues(aDict):
    l = []
    temp = {}
    for key, val in aDict.items():
        # 'val in temp' replaces the Python 2-only dict.has_key()
        if val in temp:
            # value seen before: if it is still flagged as unique, unflag it
            if l.count(val) > 0:
                l.remove(val)
        else:
            temp[val] = 1
            l.append(val)
    li = []
    for key, val in aDict.items():
        if val in l:
            li.append(key)
    li.sort()
    return li


aDict = {1: 1, 2: 1, 3: 3, 4: 2, 5: 3}  # unused in the original; uniqueValues(aDict) == [4]
print(uniqueValues({1: 1, 2: 1, 3: 1}))                      # []
print(uniqueValues({1: 1, 3: 2, 6: 0, 7: 0, 8: 4, 10: 0}))   # [1, 3, 8]
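# --- Added sketch (alternative, not part of the exercise solution): the same
# result computed with collections.Counter.
from collections import Counter


def uniqueValuesCounter(aDict):
    counts = Counter(aDict.values())
    return sorted(k for k, v in aDict.items() if counts[v] == 1)


print(uniqueValuesCounter({1: 1, 3: 2, 6: 0, 7: 0, 8: 4, 10: 0}))  # [1, 3, 8]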
| 31.777778 | 293 | 0.589744 | 146 | 858 | 3.458904 | 0.458904 | 0.055446 | 0.047525 | 0.043564 | 0.110891 | 0.091089 | 0.091089 | 0 | 0 | 0 | 0 | 0.051071 | 0.292541 | 858 | 26 | 294 | 33 | 0.78089 | 0 | 0 | 0.105263 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.105263 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2c46e20944df9fdddc17ddf37076817334c8143f | 1,392 | py | Python | apps/workspaces/migrations/0001_initial.py | fylein/fyle-integrations-platform-connector | 72f5d364deca8d98516e8486ec0ab377a8ceaccc | [
"MIT"
] | null | null | null | apps/workspaces/migrations/0001_initial.py | fylein/fyle-integrations-platform-connector | 72f5d364deca8d98516e8486ec0ab377a8ceaccc | [
"MIT"
] | 1 | 2021-12-08T13:51:14.000Z | 2021-12-08T13:51:14.000Z | apps/workspaces/migrations/0001_initial.py | fylein/fyle-integrations-platform-connector | 72f5d364deca8d98516e8486ec0ab377a8ceaccc | [
"MIT"
] | null | null | null | # Generated by Django 3.2.8 on 2021-10-11 11:10
from django.db import migrations, models


class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Workspace',
            fields=[
                ('id', models.AutoField(
                    help_text='Unique Id to identify a workspace', primary_key=True, serialize=False)),
                ('name', models.CharField(help_text='Name of the workspace', max_length=255)),
                ('fyle_org_id', models.CharField(help_text='org id', max_length=255, unique=True)),
                ('last_synced_at', models.DateTimeField(
                    help_text='Datetime when expenses were pulled last', null=True)),
                ('source_synced_at', models.DateTimeField(
                    help_text='Datetime when source dimensions were pulled', null=True)),
                ('destination_synced_at', models.DateTimeField(
                    help_text='Datetime when destination dimensions were pulled', null=True)),
                ('created_at', models.DateTimeField(auto_now_add=True, help_text='Created at datetime')),
                ('updated_at', models.DateTimeField(auto_now=True, help_text='Updated at datetime')),
            ],
            options={
                'db_table': 'workspaces',
            },
        ),
    ]
| 39.771429 | 105 | 0.586207 | 147 | 1,392 | 5.380952 | 0.44898 | 0.08091 | 0.132743 | 0.102402 | 0.319848 | 0.178255 | 0.178255 | 0.178255 | 0 | 0 | 0 | 0.021583 | 0.301006 | 1,392 | 34 | 106 | 40.941176 | 0.791367 | 0.032328 | 0 | 0 | 1 | 0 | 0.255019 | 0.015613 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.037037 | 0 | 0.185185 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2c47d249a7a2a9897dd96152260a3f88189f02aa | 584 | py | Python | src/db/models/artist_album.py | jsbecerrab/Loka-prueba-backend | d47250a68e3e28375c0b8e0a6cdf78223b0d12cd | [
"MIT"
] | null | null | null | src/db/models/artist_album.py | jsbecerrab/Loka-prueba-backend | d47250a68e3e28375c0b8e0a6cdf78223b0d12cd | [
"MIT"
] | null | null | null | src/db/models/artist_album.py | jsbecerrab/Loka-prueba-backend | d47250a68e3e28375c0b8e0a6cdf78223b0d12cd | [
"MIT"
] | null | null | null | from sqlalchemy import Column, ForeignKey, Integer, DateTime
from sqlalchemy.orm import relationship

from ..database import Base


class Artist_album(Base):
    __tablename__ = "artists_albums"

    id = Column(Integer, primary_key=True, index=True)
    artist_id = Column(Integer, ForeignKey("artists.id"))
    album_id = Column(Integer, ForeignKey("albums.id"))
    created_at = Column(DateTime, nullable=False)
    updated_at = Column(DateTime)

    artist = relationship("Artist", back_populates="artists_albums")
album = relationship("Album", back_populates="artists_albums") | 36.5 | 68 | 0.75 | 69 | 584 | 6.130435 | 0.42029 | 0.092199 | 0.106383 | 0.118203 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142123 | 584 | 16 | 69 | 36.5 | 0.844311 | 0 | 0 | 0 | 0 | 0 | 0.123077 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
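# --- Added sketch (assumption): the matching sides of the relationships above,
# as they would appear on hypothetical Artist and Album models.
# class Artist(Base):
#     ...
#     artists_albums = relationship("Artist_album", back_populates="artist")
#
# class Album(Base):
#     ...
#     artists_albums = relationship("Artist_album", back_populates="album")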
2c488abf9ca04504184d8340dff0f547466c24fd | 1,110 | py | Python | examples/simple_resource.py | pdyba/lambdalizator | 0371b8d3e25249096a9c7e7cf90fc590a99ad536 | [
"MIT"
] | 3 | 2020-09-26T11:05:32.000Z | 2021-09-25T08:58:10.000Z | examples/simple_resource.py | pdyba/lambdalizator | 0371b8d3e25249096a9c7e7cf90fc590a99ad536 | [
"MIT"
] | 15 | 2020-09-29T12:10:55.000Z | 2021-11-17T10:42:21.000Z | examples/simple_resource.py | pdyba/lambdalizator | 0371b8d3e25249096a9c7e7cf90fc590a99ad536 | [
"MIT"
] | 1 | 2020-09-26T11:05:38.000Z | 2020-09-26T11:05:38.000Z | #!/usr/bin/env python3.8
# coding=utf-8
"""
Simple Lambda Handler
"""
from lbz.dev.server import MyDevServer
from lbz.dev.test import Client
from lbz.exceptions import LambdaFWException
from lbz.resource import Resource
from lbz.response import Response
from lbz.router import add_route


class HelloWorld(Resource):
    @add_route("/", method="GET")
    def list(self):
        return Response({"message": "HelloWorld"})


def handle(event, context):
    try:
        exp = HelloWorld(event)
        resp = exp()
        return resp
    except Exception:  # pylint: disable=broad-except
        return LambdaFWException().get_response(context.aws_request_id).to_dict()


class TestHelloWorld:
    def setup_method(self) -> None:
        # pylint: disable=attribute-defined-outside-init
        self.client = Client(resource=HelloWorld)

    def test_filter_queries_all_active_when_no_params(self) -> None:
        data = self.client.get("/").to_dict()["body"]
        assert data == '{"message":"HelloWorld"}'


if __name__ == "__main__":
    server = MyDevServer(acls=HelloWorld, port=8001)
    server.run()
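# --- Added note (assumption): with the dev server above running on port 8001,
#   curl http://localhost:8001/
# would be expected to return {"message": "HelloWorld"}, matching the test.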
| 26.428571 | 81 | 0.691892 | 137 | 1,110 | 5.437956 | 0.540146 | 0.056376 | 0.026846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007752 | 0.186486 | 1,110 | 41 | 82 | 27.073171 | 0.817276 | 0.120721 | 0 | 0 | 0 | 0 | 0.060104 | 0.02487 | 0 | 0 | 0 | 0 | 0.038462 | 1 | 0.153846 | false | 0 | 0.230769 | 0.038462 | 0.576923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
2c48f750067a643a09b94ad43993a6e0c7fcf3bf | 774 | py | Python | blog_app/migrations/0019_auto_20200901_0727.py | Rxavio/django-blog | 573ff668537465112d355490f19fa8bb8864fde8 | [
"MIT"
] | null | null | null | blog_app/migrations/0019_auto_20200901_0727.py | Rxavio/django-blog | 573ff668537465112d355490f19fa8bb8864fde8 | [
"MIT"
] | null | null | null | blog_app/migrations/0019_auto_20200901_0727.py | Rxavio/django-blog | 573ff668537465112d355490f19fa8bb8864fde8 | [
"MIT"
] | null | null | null | # Generated by Django 3.0.3 on 2020-09-01 05:27
from django.conf import settings
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('blog_app', '0018_auto_20200830_0501'),
]
operations = [
migrations.AddField(
model_name='post',
name='favourite',
field=models.ManyToManyField(blank=True, related_name='favourite', to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='post',
name='status',
field=models.CharField(choices=[('published', 'Published'), ('draft', 'Draft')], default='published', max_length=10),
),
]
| 29.769231 | 129 | 0.630491 | 82 | 774 | 5.792683 | 0.646341 | 0.042105 | 0.067368 | 0.088421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056314 | 0.242894 | 774 | 25 | 130 | 30.96 | 0.754266 | 0.05814 | 0 | 0.210526 | 1 | 0 | 0.137552 | 0.031637 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.105263 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2c55a44f1708355490f6623e534cfe988d374906 | 45,032 | py | Python | tools/mytools/ARIA/src/py/aria/Network.py | fmareuil/Galaxy_test_pasteur | 6f84fb0fc52e3e7dd358623b5da5354c66e16a5f | [
"CC-BY-3.0"
] | null | null | null | tools/mytools/ARIA/src/py/aria/Network.py | fmareuil/Galaxy_test_pasteur | 6f84fb0fc52e3e7dd358623b5da5354c66e16a5f | [
"CC-BY-3.0"
] | null | null | null | tools/mytools/ARIA/src/py/aria/Network.py | fmareuil/Galaxy_test_pasteur | 6f84fb0fc52e3e7dd358623b5da5354c66e16a5f | [
"CC-BY-3.0"
] | null | null | null | """
Authors: Bardiaux Benjamin
Institut Pasteur, Paris
IBPC, Paris
Copyright (C) 2005 Michael Habeck,
Wolfgang Rieping and Benjamin Bardiaux
No warranty implied or expressed.
All rights reserved.
$Author: bardiaux $
$Revision: 1.1.1.1 $
$Date: 2010/03/23 15:27:24 $
"""
from aria.ariabase import *
from aria.Settings import Settings
from aria.xmlutils import XMLElement, XMLBasePickler
import aria.TypeChecking as TCheck
from aria.Chain import TYPE_NONPOLYMER
import numpy
from time import clock
from aria.AriaPeak import TextPickler
from aria.AriaPeak import ASSIGNMENT_TYPE_DICT, NA, \
HEADER_PROJECT, HEADER_ASSIGNMENT_TYPE, \
HEADER_SEQUENCE_SEPARATION, HEADER_RESTRAINT_DEFINITION, \
HEADER_RESTRAINT_ACTIVE
HEADER_SEQUENCE_SEPARATION = \
"""
# sep: sequence separation s: I: s == 0 (intra-residual)
#                              Q: s == 1 (sequential)
#                              S: 2 <= s <= 3 (short)
#                              M: 4 <= s <= 5 (medium)
#                              L: s > 5 (long)
#                              i: inter-monomer
"""[1:-1]
HEADER_DICT = {'project': HEADER_PROJECT,
'assignment_type': HEADER_ASSIGNMENT_TYPE,
'sequence_separation': HEADER_SEQUENCE_SEPARATION,
'restraint_definition': HEADER_RESTRAINT_DEFINITION,
'restraint_active': HEADER_RESTRAINT_ACTIVE}
HEADER_ABBREVIATIONS = \
("""
#
# Abbreviations:
#
%(restraint_definition)s
%(restraint_active)s
#
#
%(assignment_type)s
#
""" % HEADER_DICT)[1:-1]
HEADER_ALL = \
"""
#
# List of distance restraints.
#
# Created by Aria 2.3, %(creation_date)s
#
%(project)s
#
# Restraints used during calculation: %(n_active)d
# Violated: %(n_violated)d
#
%(abbreviations)s
%(sequence_separation)s
#
# n_c: The number of contributions. (see noe_restraints.assignments for
# explicit list of contributions).
#
# net_res: Network-anchoring score per residue.
#
# net_ato: Network-anchoring score per atom.
#
"""[1:]
class NetworkScoreTextPickler(TextPickler):
def encode_common(self, ap):
distance_format = '%.2f'
number = '%d' % ap.getId()
rp = ap.getReferencePeak()
x = rp.getNumber()
try:
ref_peak_number = '%d' % x
except:
ref_peak_number = NA
x = rp.getSpectrum().getName()
try:
ref_peak_spectrum = str(x)
except:
ref_peak_spectrum = NA
x = ap.isActive()
if x:
active = YES
else:
active = NO
at = rp.getAssignmentType()
assignment_type = ASSIGNMENT_TYPE_DICT[at]
# BARDIAUX
net = ap._network
net_res = '%.2f' % ap._network['residue']
net_ato = '%.2f' % ap._network['atom']
values = ref_peak_spectrum, ref_peak_number, number, \
active, net_res, net_ato, assignment_type
return list(values)
def encode(self, ap):
values = self.encode_common(ap)
## contributions
contributions = ap.getContributions()
## take only active contributions
contributions = ap.getActiveContributions()
if len(contributions) == 1:
## get sequence separation
## in case of multiple spin-pairs,
## we just take the first one, since all
## involve the same two residues
atom1, atom2 = contributions[0].getSpinPairs()[0].getAtoms()
if atom1.getSegid() <> atom2.getSegid():
# we have an inter
values.append('1') # n_c
values.append('i')
return values
seq_pos1 = atom1.getResidue().getNumber()
seq_pos2 = atom2.getResidue().getNumber()
seq_sep = abs(seq_pos1 - seq_pos2)
## intra-residue
if seq_sep == 0:
descr = 'I'
## sequential
elif seq_sep == 1:
descr = 'Q'
## TODO: are these the correct values?
## short range
elif seq_sep <= 3:
descr = 'S'
## medium range
elif seq_sep <= 5:
descr = 'M'
else:
descr = 'L'
values.append('1') # n_c
values.append(descr)
## multiple contributions
else:
values.append(str(len(contributions)))
values.append('-') # sep
return values
def dumps(self, ap):
return '\n'.join(self.encode(ap))
class NetworkAnchoringTextPickler(TextPickler):
HEADER_COMMON = ['ref_spec', 'ref_no', 'id', 'active', 'net_res', 'net_ato', 'a_type']
COLUMNS = {'all' : HEADER_COMMON + ['n_c', 'sep'],}
HEADER = {'all' : HEADER_ALL,}
def __init__(self, settings):
#check_type(settings, 'AriaPeakListTextPicklerSettings')
TextPickler.__init__(self, settings = settings)
def get_column_header(self, _type):
"""
_type is 'ambig' or 'unambig'
"""
if not _type in ('ambig', 'unambig', 'all'):
s = 'Header for peak-type "%s" not known.' % _type
self.error(TypeError, s)
return list(self.COLUMNS[_type])
def encode(self, peak_list, header):
pickler = NetworkScoreTextPickler()
all = map(pickler.encode, peak_list)
## add header
if not len(all):
return header
if len(header) <> len(all[0]):
s = 'Number of columns must match header-length.'
self.error(Exception, s)
header[0] = '# ' + header[0]
## show additional information
active = [p for p in peak_list if p.isActive()]
n_violated = len([p for p in active if p.analysis.isViolated()])
d = self._compile_header_dict()
d['n_violated'] = n_violated
d['n_active'] = len(active)
d['abbreviations'] = HEADER_ABBREVIATIONS
text = self.format_output(all, header = header)
## add \n
text = [line + '\n' for line in text]
## make string
text = ''.join(text)
return text, d
def _write(self, s, filename, gzip = 0):
import os
if s is None:
import aria.tools as tools
tools.touch(filename)
return
if gzip:
from aria.tools import gzip_open as open_func
else:
open_func = open
filename = os.path.expanduser(filename)
f = open_func(filename, 'w')
f.write(s)
f.close()
def _compile_header_dict(self):
from aria.Singleton import ProjectSingleton
import time
from copy import copy
project = ProjectSingleton()
project_settings = project.getSettings()
infra = project.getInfrastructure()
run_path = infra.get_run_path()
d = {'date': project_settings['date'],
'project': project_settings['name'],
'run': project_settings['run'],
'author': project_settings['author'],
'working_directory': run_path}
x = copy(HEADER_DICT)
x['project'] %= d
x['creation_date'] =time.ctime()
return x
def dump_network(self, peak_list, filename, gzip = 0):
if peak_list:
header = self.get_column_header('all')
text, d = self.encode(peak_list, header)
d.update(self._compile_header_dict())
header = (self.HEADER['all'] % d)[1:]
s = header + text
# s = header.replace('\n\n','\n') + text
else:
s = None
return self._write(s, filename, gzip)
class NetworkPsPickler:
def __init__(self, network):
self.peaks = network.peaks
self.p_id = network._protons_id
self.net_res = network.residue_score
self.mol = network.molecule
self.it_n = network.iteration.getNumber()
def get_matrix(self):
# since we just support symmetric dimer
n_chains = len(self.mol.get_chains())
#max_res = len([r for c in self.mol.get_chains() for r in c.getResidues()])
max_res = [c.getResidues()[-1].getNumber() for c in self.mol.get_chains() \
if c.getType() != TYPE_NONPOLYMER]
from aria.Singleton import ProjectSingleton
from aria.DataContainer import DATA_SYMMETRY
project = ProjectSingleton()
sym_settings = project.getData(DATA_SYMMETRY)[0]
if n_chains < 2 or (n_chains > 1 and sym_settings['symmetry_type'] not in ["C2","C3","D2","C5"]):
# monomeric prot or hetero dimer
matrix = numpy.zeros((max_res[0]+1, max_res[0]+1), numpy.float)
for k, r_net in self.net_res.items():
r1, r2 = map(lambda a: a.getNumber(), k)
matrix[r1,r2] = r_net
matrix[r2,r1] = r_net
return matrix, None
else:
# homo-dimer
matrix_a = numpy.zeros((max_res[0]+1, max_res[0]+1), numpy.float)
matrix_r = numpy.zeros((max_res[0]+1, max_res[1]+1), numpy.float)
for k, r_net in self.net_res.items():
r1, r2 = map(lambda a: a.getNumber(), k)
s1, s2 = map(lambda a: a.getChain().getSegid(), k)
if s1 <> s2:
matrix_r[r1,r2] = r_net
matrix_r[r2,r1] = r_net
else:
matrix_a[r1,r2] = r_net
matrix_a[r2,r1] = r_net
return matrix_a, matrix_r
def plot_matrix(self):
# mask zero-values
from matplotlib import rcParams
from numpy import ma
rcParams['numerix'] = 'numpy'
pylab = self.pylab
msg = ""
matrix_a, matrix_r = self.get_matrix()
first_res = [c.getResidues()[0].getNumber() for c in self.mol.get_chains() if c.getType() != TYPE_NONPOLYMER]
max_res = [c.getResidues()[-1].getNumber() for c in self.mol.get_chains() if c.getType() != TYPE_NONPOLYMER]
if matrix_r is not None:
ax1 = pylab.subplot(2,1,1)
#matrix = matrix_r[1:,1:]
matrix = matrix_r[first_res[0]:,first_res[1]:]
X = ma.array(matrix, mask = numpy.equal(matrix, 0.))
xyticks = (first_res[0], max_res[0], first_res[1], max_res[1])
kw = {'origin':'lower',
'interpolation':'nearest',
'aspect' : 'equal',
'extent' : xyticks}
pylab.imshow(X, cmap=pylab.cm.Reds, **kw)
pylab.grid()
pylab.colorbar(orientation = 'vertical')
pylab.ylabel("Residue Number (Inter-molecular)")
#pylab.setp( ax1.get_xticklabels(), visible=False)
pylab.subplot(212)#, sharex=ax1)
#pos = pylab.axes([0.85, 0.1, 0.04, 0.8])
#pylab.colorbar(cax = pos)#, orientation = 'horizontal')
msg = " (Intra-molecular)"
matrix = matrix_a[first_res[0]:,first_res[0]:]
#matrix = matrix_a[1:,1:]
X = ma.array(matrix, mask = numpy.equal(matrix, 0.))
xyticks = (first_res[0], max_res[0], first_res[0], max_res[0])
kw = {'origin':'lower',
'interpolation':'nearest',
'aspect' : 'equal',
'extent' : xyticks}
pylab.imshow(X, cmap=pylab.cm.Reds, **kw)
if len(msg):
orientation = 'vertical'
else:
orientation = 'horizontal'
pylab.colorbar(orientation = orientation)
pylab.grid()
pylab.xlabel("Residue Number")
pylab.ylabel("Residue Number" + msg)
def plot_profile(self, type, n):
pylab = self.pylab
if type not in ['residue', 'atom']:
return
colors = {'residue' : 'b',
'atom' : 'r'}
scores = [p._network[type] for p in self.peaks]
nbins = int(max(scores))
#nbins = 1 + int(numpy.log(len(scores))/numpy.log(2))
nbins = int(1.0 + 3.3 * numpy.log(len(scores)))
pylab.subplot(2, 1, n)
pylab.hist(scores, bins = nbins +1, facecolor = colors[type])
pylab.xlabel("Network Anchoring score per %s" % type)
pylab.ylabel("Number of Peaks")
def plot(self, path):
try:
import matplotlib
matplotlib.use('PS', warn=False)
except:
return
import matplotlib.pylab as pylab
self.pylab = pylab
pylab.figure(num=1, figsize=(8,11))
pylab.clf()
pylab.figtext(0.3,0.95, 'Network Anchoring for iteration %s' % str(self.it_n))
pylab.figtext(0.3,0.90, 'Network Anchoring scores distribution')
self.plot_profile('residue', 1)
self.plot_profile('atom', 2)
pylab.subplots_adjust(top = 0.85)
pylab.figure(num=2, figsize=(8,11))
pylab.clf()
pylab.figtext(0.3,0.95, 'Residue-wise Network Anchoring scores for iteration %d' % self.it_n)
self.plot_matrix()
pylab.figure(1)
pylab.savefig(path +'_dist.ps', papertype='a4', dpi = 72)
pylab.figure(2)
pylab.savefig(path + '_2D.ps', papertype='a4', dpi = 72)
class NetworkSettings(Settings):
def create(self):
from aria.Settings import NonNegativeFloat
from aria.Settings import YesNoChoice
d = {}
# public settings
descr = "Network anchoring removes restraints which are not surrounded by a network of active restraints."
d['enabled'] = YesNoChoice(description = descr)
descr = "High network-anchoring score per residue for a peak to be active."
d['high_residue_threshold'] = NonNegativeFloat(description = descr)
descr = """Minimal network-anchoring score per residue for a peak to be active. (In combination with \"min_atom_threshold\")"""
d['min_residue_threshold'] = NonNegativeFloat(description = descr)
descr = """Minimal network-anchoring score per atoms for a peak to be active. (In combination with \"min_residue_threshold\")"""
d['min_atom_threshold'] = NonNegativeFloat(description = descr)
# private
descr = "Maximal distance for covalent inter-proton distance."
d['distance_max'] = NonNegativeFloat(description = descr)
descr = "Maximal network anchoring score for covalent distance."
d['v_max'] = NonNegativeFloat(description = descr)
descr = "Minimal network anchoring score for intraresidual/sequential distance."
d['v_min'] = NonNegativeFloat(description = descr)
return d
def create_default_values(self):
d = {}
d['enabled'] = NO
d['high_residue_threshold'] = 4.
d['min_residue_threshold'] = 1.0
d['min_atom_threshold'] = 0.25
d['distance_max'] = 5.5
d['v_max'] = 1.0
d['v_min'] = 0.1
return d
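# Added note (cross-reference, not in the original code): with the defaults
# above, NetworkAnchoring.analyze() keeps a peak active only if
#   net_res >= high_residue_threshold (4.0), or
#   net_res >= min_residue_threshold (1.0) and net_ato >= min_atom_threshold (0.25)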
class CovalentConstraint:
def __init__(self, id, atom1, atom2, distance):
self.atom1 = atom1
self.atom2 = atom2
self.distance = distance
self.id = id
def getId(self):
return self.id
def getScore(self):
return 0.
def getAtoms(self):
return (self.atom1, self.atom2)
def getDistance(self):
return self.distance
def __str__(self):
s = "CovalentConstraint(id=%d, atoms=%s, d=%5.3f)" % (self.id, self.getAtoms(), self.distance)
return s
class NetworkAnchoring(AriaBaseClass):
def __init__(self, settings):
TCheck.check_type(settings, 'NetworkSettings')
AriaBaseClass.__init__(self)
self.setSettings(settings)
self.anchoring = None
self.peaks = None
self.getSettings()['v_min'] = 0.1
self.getSettings()['v_max'] = 1.0
self.getSettings()['distance_max'] = 5.5
def setup(self):
"""
Setup some lists and matrices.
"""
from sets import Set
if self.anchoring is not None:
# if we already have a network, just recreate self._c_id with copied contribuitions
self.message('Retrieving Network ...')
self._c_id = {}
self._c_id[-1] = [] # covalent
for p in self.peaks:
for c in p.getContributions():
for sp in c.getSpinPairs():
sid = sp.getId() + 1
self._c_id.setdefault(sid, Set())
self._c_id[sid].add(c)
self.addDistanceRestraints()
return 1
# if we run network_anchoring for 1st time, create all list and spinpair matrices
self.message('Initializing ...')
if not self.peaks:
return 0
# list with all protons
if self._is_noesy_only:
self._protons_id = [a for c in self.molecule.get_chains() for r in c.getResidues() \
for a in r.getAtoms() if a.isProton()]
else:
self._protons_id = [a for c in self.molecule.get_chains() for r in c.getResidues() \
for a in r.getAtoms() if a.isProton() or a.getType() in ['N','C']]
self._protons_id.sort(lambda a,b: cmp(a.getId(), b.getId()))
# dict with protons id as key, and indices in self._protons_id as values
self._protons_num = {}
for a in range(0, len(self._protons_id)):
self._protons_num[self._protons_id[a].getId()] = a
# list with protons residues number
# add chain levels to residues numbering
self._residues_num = {}
for c in self.molecule.get_chains():
cid = c.getSegid()
self._residues_num[cid] = [a.getResidue().getNumber() for a in self._protons_id]# if a.getSegid() == cid]
# dict with residues number as key and list of protons ids as values
self._residues_id = {}
for c in self.molecule.get_chains():
cid = c.getSegid()
self._residues_id[cid] = {}
for a in range(0, len(self._protons_id)):
r, cid = self._protons_id[a].getResidue().getNumber(), self._protons_id[a].getSegid()
self._residues_id[cid].setdefault(r, [])
self._residues_id[cid][r].append(a)
# dict with SpinPair.getId() + 1 as key and Set of contributions as values
self._c_id = {}
self._c_id[-1] = []
# dict with SpinPair.getId() + 1 as key and spinpair as values
self.spinpairs = {}
for p in self.peaks:
for c in p.getContributions():
for sp in c.getSpinPairs():
sid = sp.getId() + 1
self._c_id.setdefault(sid, Set())
self._c_id[sid].add(c)
if not self.spinpairs.has_key(sid):
self.spinpairs[sid] = sp
# add additional distance restraints
self.addDistanceRestraints()
# matrix to hold wether 2 protons are connected with spinpair(1), covalent(2) or not connected(0)
self._sp = numpy.zeros((len(self._protons_id), len(self._protons_id)))
# matrix to store the id of the spinpair connecting 2 atoms
self._sp_id = numpy.zeros((len(self._protons_id), len(self._protons_id)))
# matrix to store covalent score of a spinpair
self._sp_cov_scores = numpy.zeros((len(self._protons_id), len(self._protons_id)))
# matrix to store sum of contributions volumes of each spinpair
self._sp_sum_scores = numpy.zeros(len(self.spinpairs.keys()) , numpy.float)
for spid, sp in self.spinpairs.items():
a, b = sp.getAtoms()
a, b = self._protons_num[a.getId()], self._protons_num[b.getId()]
self._sp[a][b] = 1
self._sp[b][a] = 1
self._sp_id[a][b] = spid
self._sp_id[b][a] = spid
self.addCovalentConstraints()
self.addStructureRestraints()
for spid, sp in self.spinpairs.items():
a, b = sp.getAtoms()
a, b = self._protons_num[a.getId()], self._protons_num[b.getId()]
cov_score = self._get_covalent_score(a, b)
self._sp_cov_scores[a][b] = cov_score
self._sp_cov_scores[b][a] = cov_score
return 1
def setDefaultNetworkScores(self, s):
for p in self.peaks:
contribs = p.getContributions()
n = len(contribs)
[c.setNetworkScore(s/n) for c in contribs]
# use additional distance restraints
def addDistanceRestraints(self):
"""
Distance constraints
"""
# get list of DistanceRestraints valid for NA
restraints = []
restraint_list = self.iteration.getDistanceRestraints()
for l, r in restraint_list.items():
if l.getListSource()['add_to_network'] == YES:
restraints += r
if not restraints:
return
from sets import Set
for r in restraints:
for c in r.getContributions():
for sp in c.getSpinPairs():
sid = sp.getId() + 1
self._c_id.setdefault(sid, Set())
self._c_id[sid].add(c)
if not self.spinpairs.has_key(sid):
self.spinpairs[sid] = sp
def addStructureRestraints(self):
check = {}
vmax = self.getSettings()['v_max']
for c in self.molecule.get_chains():
residues = c.getResidues()
atoms = [a for r in residues for a in r.getAtoms() if a.isProton() and a.getName() in ['HA', 'H']]
for i in range(0, len(atoms)-1):
for j in range(i+1, len(atoms)):
a, b = atoms[i], atoms[j]
id = (min(a.getId(),b.getId()), max(a.getId(),b.getId()))
if not check.has_key(id):
check[id] = 1
res1 =int(a.getResidue().getNumber())
str1 = a.getResidue().getStructure()
t1 = a.getName()
res2 = int(b.getResidue().getNumber())
str2 = b.getResidue().getStructure()
t2 = b.getName()
if str1 == "" or str2 == "":
continue
sep = abs(res1 - res2)
if sep > 4:
continue
both_H = str1 == str2 and str1[0] == 'H'
both_B = str1 == str2 and str1[0] == 'B'
# The original tested 'if not both_B or not both_H', which is always true
# (a pair cannot be in a helix and a sheet at once) and skipped every pair;
# the intended test is:
if not (both_B or both_H):
continue
HA_HN = (t1 == 'HA' and t2 == 'H') or \
(t1 == 'H' and t2 == 'HA')
HN_HN = (t1 == t2) and (t1 == 'H')
# check if valid constraints in SS
d = 0
# Sheets, dHA,HN(i,i+1)
if both_B and HA_HN and sep == 1:
d = 1
if both_H:
if HA_HN and sep <= 4:
d = 1
if HN_HN and sep <= 2:
d = 1
if d:
##cc = CovalentConstraint(n, a, b, d)
a, b = self._protons_num[a.getId()], self._protons_num[b.getId()]
if self._sp_id[a][b] == 0:
self._sp_id[a][b] = -1
if self._sp_id[b][a] == 0:
self._sp_id[b][a] = -1
self._sp[a][b] = 2
self._sp[b][a] = 2
self._sp_cov_scores[a][b] = vmax
self._sp_cov_scores[b][a] = vmax
n+= 1
def addCovalentConstraints(self):
"""
Covalent constraints
"""
dmax = self.getSettings()['distance_max']
vmax = self.getSettings()['v_max']
from aria.CovalentDistances import CovalentDistances
cd = CovalentDistances()
check = {}
n = 0
for c in self.molecule.get_chains():
residues = c.getResidues()
for r in range(len(residues)-1):
atoms = residues[r].getAtoms() + residues[r+1].getAtoms()
# NOESY
atoms = [a for a in atoms if a.isProton()]
for i in range(0, len(atoms)-1):
for j in range(i+1, len(atoms)):
aa, bb = atoms[i], atoms[j]
id = (min(aa.getId(),bb.getId()), max(aa.getId(),bb.getId()))
if not check.has_key(id):
check[id] = 1
d = cd.areConnected(aa, bb)
if d:
cc = CovalentConstraint(n, aa, bb, d)
a, b = self._protons_num[aa.getId()], self._protons_num[bb.getId()]
if self._sp_id[a][b] == 0:
self._sp_id[a][b] = -1
if self._sp_id[b][a] == 0:
self._sp_id[b][a] = -1
self._sp[a][b] = 2
self._sp[b][a] = 2
self._sp_cov_scores[a][b] = vmax
self._sp_cov_scores[b][a] = vmax
# valid also for hetero atom
if self._is_noesy_only:
continue
ah, bh = aa.getHeteroAtom(), bb.getHeteroAtom()
if ah and bh and (ah.getType() in ['N','C'] and bh.getType() in ['N','C']) :
ai, bi = self._protons_num[ah.getId()], self._protons_num[bh.getId()]
if self._sp_id[ai][bi] == 0:
self._sp_id[ai][bi] = -1
if self._sp_id[bi][ai] == 0:
self._sp_id[bi][ai] = -1
self._sp[ai][bi] = 2
self._sp[bi][ai] = 2
self._sp_cov_scores[ai][bi] = vmax
self._sp_cov_scores[bi][ai] = vmax
n+= 1
## # cov_score
## for spid, sp in self.spinpairs.items():
## a, b = sp.getAtoms()
## d = cd.areConnected(a, b)
## if d:
## map(lambda c: (c.setCovalentScore(1.)), self._c_id[spid])
def create_network(self):
"""
create the network itself
dictionary : key = spid
value = Set of gammas
"""
if self.anchoring is not None:
return
self.message('Creating network ...')
from sets import Set
self.anchoring = {}
#t1 = clock()
for spid, sp in self.spinpairs.items():
a, b = sp.getAtoms()
sa, sb = a.getSegid(), b.getSegid()
a, b = self._protons_num[a.getId()], self._protons_num[b.getId()]
# dim0
r = self._residues_num[sa][a]
res_bound = []
for i in range(r-1, r+2):
if self._residues_id[sa].has_key(i):
res_bound += self._residues_id[sa][i]
x = numpy.take(self._sp, res_bound, axis = 0)
both_0 = x[:,a] * x[:,b]
x0 = [res_bound[i] for i in numpy.flatnonzero(both_0)]
# dim1
r = self._residues_num[sb][b]
res_bound = []
for i in range(r-1, r+2):
if self._residues_id[sb].has_key(i):
res_bound += self._residues_id[sb][i]
x = numpy.take(self._sp, res_bound, axis = 1)
both_1 = x[a,:] * x[b,:]
x1 = [res_bound[i] for i in numpy.flatnonzero(both_1)]
x12 = Set(x0).union(x1)
self.anchoring[spid] = x12
self.message("Done.")
def _get_covalent_score(self, id_a, id_b):
"""
Score according to the covalent structure (id_a, id_b are two atom indices):

    S = { Vmax  if covalent constraint
        { Vmin  if intraresidual/sequential connectivity
        { 0     if long-range connectivity
"""
# argument : contribution ? => then get max distance from contribution's spinpairs (use ISPA Model)
# a spin pairs ?
# 2 atoms
vmin = self.getSettings()['v_min']
vmax = self.getSettings()['v_max']
if self._sp[id_a][id_b] == 2:
covalent_score = vmax
else:
if self._isSequential(id_a, id_b):
covalent_score = vmin
else:
covalent_score = 0.
return covalent_score
def _heaviside(self, x):
if x < 0:
return 0.
elif x == 0:
return .5
else:
return 1.
def _isSequential(self, id_a, id_b):
sa, sb = self._protons_id[id_a].getSegid(), self._protons_id[id_b].getSegid()
if sa <> sb:
return 0
else:
return abs(self._residues_num[sa][id_a] - self._residues_num[sb][id_b]) <= 1
def _sumContribScore(self):
self._sp_sum_scores = {}
for spid, contribs in self._c_id.items():
s = numpy.sum([c.getScore()/len(c.getSpinPairs()) for c in contribs])
self._sp_sum_scores[spid] = s
def updateContributionsNetworkScores(self):
"""
calculate the network score for each contribution and update it
"""
contribs_scores = {}
#t = clock()
self._sumContribScore()
#t = clock()
v_min = self.getSettings()['v_min']
for k, gammas in self.anchoring.items():
sp = self.spinpairs[k]
score = 0.
a, b = sp.getAtoms()
id_a = self._protons_num[a.getId()]
id_b = self._protons_num[b.getId()]
gammas = list(gammas)
# a-g
#g_scores_a = numpy.take(self._sp_sum_scores, numpy.take( self._sp_id[id_a,:], gammas))
g_scores_a = [self._sp_sum_scores[x] for x in numpy.take( self._sp_id[id_a,:], gammas)]
cov_scores_a = numpy.take(self._sp_cov_scores[id_a,:], gammas)
nus_a = numpy.where(numpy.greater(g_scores_a, cov_scores_a), g_scores_a, cov_scores_a)
nus_a *= numpy.greater(nus_a - v_min, 0)
# b-g
#g_scores_b = numpy.take(self._sp_sum_scores, numpy.take( self._sp_id[id_b,:], gammas))
g_scores_b = [self._sp_sum_scores[x] for x in numpy.take( self._sp_id[id_b,:], gammas)]
cov_scores_b = numpy.take(self._sp_cov_scores[id_b,:], gammas)
nus_b = numpy.where(numpy.greater(g_scores_b, cov_scores_b), g_scores_b, cov_scores_b)
nus_b *= numpy.greater(nus_b - v_min, 0)
score = numpy.sum(numpy.sqrt(nus_a * nus_b))
contribs = self._c_id[k]
for c in contribs:
contribs_scores.setdefault(c, [])
contribs_scores[c].append(score)
for c, ss in contribs_scores.items():
c.setNetworkScore(numpy.sum(ss)/len(ss))#/len(ss)
for p in self.peaks:
contribs = p.getContributions()
scores = numpy.array([c.getNetworkScore() for c in contribs])
#covalent = numpy.array([c.getCovalentScore() for c in contribs])
#covalent = numpy.greater(covalent, 1.)
#zero_scores_covalent = numpy.equal(scores, 0) * covalent
#scores = numpy.where(zero_scores_covalent, 1., scores)
sum_scores = numpy.sum(scores)
if sum_scores > 0.:
scores /= sum_scores
map(lambda c,s : (c.setNetworkScore(s)), contribs, scores)
#self.message("Done %5.3f" % (clock() -t))
def updateContributionsScores(self):
"""
calculate the score of each contribution and update it
"""
for p in self.peaks:
contribs = p.getContributions()
#mask = [c.isInter() for c in contribs]
scores = numpy.array([c.getNetworkScore() * c.getWeight() for c in contribs])
#numpy.putmask(scores, mask, scores * 1.5)
sum_scores = numpy.sum(scores)
if sum_scores > 0.:
scores /= sum_scores
map(lambda c,s : (c.setScore(s)), contribs, scores)
#self.message("Done %5.3f" % (clock() -t))
def dump_text(self):
settings = None
peak_list = self.peaks
itn = self.iteration.getNumber()
infra = self.project.getInfrastructure()
import os
from aria.Protocol import REPORT_NOE_RESTRAINTS
path = infra.get_iteration_path(itn)
filename = os.path.join(path, REPORT_NOE_RESTRAINTS + '.network')
pickler = NetworkAnchoringTextPickler(settings)
pickler.dump_network(peak_list, filename, gzip = 0)
self.message('Network-Anchoring scores (text) written (%s).' % filename)
def dump_ps(self):
itn = self.iteration.getNumber()
infra = self.project.getInfrastructure()
import os
from aria.Protocol import REPORT_NOE_RESTRAINTS
path = infra.get_iteration_path(itn)
path = os.path.join(path, 'graphics/network')
np = NetworkPsPickler(self)
try:
np.plot(path)
except Exception, msg:
import aria.tools as tools
self.warning(tools.last_traceback())
msg = 'Error during creation of %s.network.' % REPORT_NOE_RESTRAINTS
self.warning(msg)
def _dump_scores(self, old_weights):
## save scores
s = ""
n = 0
for p in self.peaks:
pnetscores = self.getPeakNetScores(p)
for c in p.getContributions():
s += "NETWORK : I %4d %5d OW %5.3f W %5.3f N %5.3f S %5.3f Nres %5.3f Nat %5.3f\n" \
%(p.getId(), c.getId(), old_weights[n], c.getWeight(), c.getNetworkScore(), \
c.getScore(), pnetscores['residue'], pnetscores['atom'])
n += 1
itn = self.iteration.getNumber()
infra = self.project.getInfrastructure()
import os
path = os.path.join(infra.get_iteration_path(itn), "scores.dat")
f = open(path, 'w')
f.write(s)
f.close()
s = ''
for k, v in self.residue_score.items():
s += "%d %d %.4f\n" % (k[0],k[1], v)
path = os.path.join(infra.get_iteration_path(itn), "res_scores.dat")
f = open(path, 'w')
f.write(s)
f.close()
def getPeakNetScores(self, p):
score = {'residue' : 0.,
'atom' : 0.}
for c in p.getContributions():
res = [0,1]
for a in res:
res[a] = c.getSpinSystems()[a].getAtoms()[0].getResidue()
score['residue'] += self.getResNetScore(res) * c.getScore()
score['atom'] += c.getNetworkScore()/len(c.getSpinPairs()) * c.getScore()
return score
def getResNetScore(self, residues):
residues.sort(lambda a,b: cmp(a.getNumber(), b.getNumber()))
key = tuple(residues)
#r1, r2 = residues[0].getNumber(), residues[1].getNumber()
#key = (min((r1, r2)), max((r1, r2)))
return self.residue_score[key]
def analyze(self):
"""
Analyze contribution scores and remove invalid ones
"""
self.message('Analyzing ...')
self.result = {}
result = {}
# compute net score per residue pairs
self.residue_score = {}
for spid, sp in self.spinpairs.items():
a, b = sp.getAtoms()
r1, r2 = a.getResidue(), b.getResidue()
#sa, sb = a.getSegid(), b.getSegid()
#a, b = self._protons_num[a.getId()], self._protons_num[b.getId()]
#r1, r2 = self._residues_num[sa][a], self._residues_num[sb][b]
key = [r1, r2]
key.sort(lambda a,b: cmp(a.getNumber(), b.getNumber()))
#key = (min((r1, r2)), max((r1, r2)))
key = tuple(key)
self.residue_score.setdefault(key, 0.)
sc = max([c.getNetworkScore() for c in self._c_id[spid]])
self.residue_score[key] += sc
contribs = [c for p in self.peaks for c in p.getContributions()]
scores = [c.getScore() for c in contribs]
total = len(contribs)
#eliminated = [c for c in contribs if c.getScore() <= 0.]
eliminated = numpy.sum(numpy.less_equal(scores, 0.))
self.result['total'] = total
self.result['eliminated'] = eliminated
self.result['ratio'] = self.result['eliminated']*100./float(total)
## SET SCORE as Weight
old_weights = [c.getWeight() for c in contribs]
## save scores
#self._dump_scores(old_weights)
[c.setWeight(c.getScore()) for c in contribs]
####################################################
# FILTER PEAKS according to Nres and Natom
# First rule : <Nres>p >= Nhigh
# OR
# Second rule: <Nres>p >= Nres_min AND <Natom>p >= Natom_min
#Nhigh = 4.
#Nres_min = 1.
#Nat_min = 0.25
s = self.getSettings()
for p in self.peaks:
res_score = self.getPeakNetScores(p)
p._network = res_score
if p.getReferencePeak().isReliable():
continue
## if not p.isAmbiguous() and p.getActiveContributions() and p.getActiveContributions()[0].isInter():
## continue
if not (res_score['residue'] >= s['high_residue_threshold'] or
(res_score['residue'] >= s['min_residue_threshold'] and res_score['atom'] >= s['min_atom_threshold'])):
p.isActive(0)
def update_scores(self):
self.setDefaultNetworkScores(1.)
#[ c.setScore(c.getNetworkScore() * c.getWeight()) for p in self.peaks for c in p.getContributions()]
self.updateContributionsScores()
n = 0
while n < 3:
self.message("Round %d ..." % n)
t = clock()
self._round = n
self.updateContributionsNetworkScores()
self.updateContributionsScores()
self.debug('Time: %ss' % str(clock() - t))
n += 1
def run(self, iteration):
"""
run network anchoring.
"""
self.iteration = iteration
self.peaks = iteration.getPeakList()
restraints = []
restraint_list = self.iteration.getDistanceRestraints()
for l, r in restraint_list.items():
if l.getListSource()['filter_contributions'] == YES and \
l.getListSource()['run_network_anchoring'] == YES :
restraints += r
self.peaks += restraints
self._is_noesy_only = 1
# check if we have non H-H pairs
for p in self.peaks:
contributions = p.getActiveContributions()
if not contributions:
continue
atom1, atom2 = contributions[0].getSpinPairs()[0].getAtoms()
if not atom1.isProton() and not atom1.isProton():
self._is_noesy_only = 0
break
from aria.Singleton import ProjectSingleton
self.project = ProjectSingleton()
self.molecule = self.project.getMolecule()
# 1) initalize
done = self.setup()
if not done:
s = 'Aborting. No valid peaks or restraints.'
self.warning(s)
return
# 2') create network
t1 = clock()
self.create_network()
self.debug('Time: %ss' % str(clock() - t1))
# 2) assign network scores to contributions
self.update_scores()
# 4) Analysis
t1 = clock()
self.analyze()
s = 'Done. %(eliminated)d/%(total)d (%(ratio)5.2f %%) assignment possibilities removed.\n'
self.message(s % self.result)
self.debug('Time: %ss' % str(clock() - t1))
# 5) logs
self.dump_text()
self.dump_ps()
#self.halt()
class NetworkXMLPickler(XMLBasePickler):
def _xml_state(self, x):
e = XMLElement()
e.enabled = x['enabled']
e.high_residue_threshold = x['high_residue_threshold']
e.min_residue_threshold = x['min_residue_threshold']
e.min_atom_threshold = x['min_atom_threshold']
return e
def load_from_element(self, e):
s = NetworkSettings()
s['enabled'] = str(e.enabled)
s['high_residue_threshold'] = float(e.high_residue_threshold)
s['min_residue_threshold'] = float(e.min_residue_threshold)
s['min_atom_threshold'] = float(e.min_atom_threshold)
return s
NetworkSettings._xml_state = NetworkXMLPickler()._xml_state
## TEST
if __name__ == '__main__':
molecule_file = '~/devel/aria2.2_release/test/run3/data/sequence/hrdc.xml'
ariapeaks_file='~/devel/aria2.2_release/test/run3/structures/it0/noe_restraints.pickle'
project_file = '~/devel/aria2.2_release/test/werner2.xml'
# read molecule
import aria.AriaXML as AriaXML
pickler = AriaXML.AriaXMLPickler()
molecule = pickler.load(molecule_file)
# read pickled ariapeak list
from aria.tools import Load
aria_peaks = Load(ariapeaks_file)
project = pickler.load(project_file)
project.ccpn_data_sources = ()
project.read_molecule()
ns = project.getProtocol().getSettings()['iteration_settings'][0]['network_anchoring_settings']
N = NetworkAnchoring(ns)
class it:
def __init__(self, peaks, n):
self.peaks = peaks
self.n = n
def getPeakList(self):
return self.peaks
def getNumber(self):
return self.n
N.run(it(aria_peaks, 0))
#N.dump()
| 30.4476 | 144 | 0.504641 | 5,055 | 45,032 | 4.338477 | 0.122255 | 0.013406 | 0.007934 | 0.005016 | 0.312571 | 0.257033 | 0.232456 | 0.218503 | 0.195021 | 0.183165 | 0 | 0.015842 | 0.38184 | 45,032 | 1,478 | 145 | 30.4682 | 0.771994 | 0.089159 | 0 | 0.295511 | 0 | 0.004988 | 0.072625 | 0.014793 | 0 | 0 | 0 | 0.000677 | 0 | 0 | null | null | 0 | 0.044888 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2c567e06fe0b4a514046c47e0475e1aaaccbe7e1 | 482 | py | Python | website/wiki/plugins/images/migrations/0002_auto_20151118_1811.py | Bournvita1998/serc | 5cdbe0ea89451c56bdb05b3bb6d178aad45c3a74 | [
"MIT"
] | null | null | null | website/wiki/plugins/images/migrations/0002_auto_20151118_1811.py | Bournvita1998/serc | 5cdbe0ea89451c56bdb05b3bb6d178aad45c3a74 | [
"MIT"
] | 18 | 2020-06-05T18:17:40.000Z | 2022-03-11T23:25:21.000Z | e/mail-relay/web/wiki/plugins/images/migrations/0002_auto_20151118_1811.py | zhouli121018/nodejsgm | 0ccbc8acf61badc812f684dd39253d55c99f08eb | [
"MIT"
] | 2 | 2016-12-13T10:02:39.000Z | 2019-05-16T05:58:16.000Z | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('wiki_images', '0001_initial'),
]
operations = [
migrations.AlterModelTable(
name='image',
table='wiki_images_image',
),
migrations.AlterModelTable(
name='imagerevision',
table='wiki_images_imagerevision',
),
]
| 20.956522 | 46 | 0.593361 | 40 | 482 | 6.875 | 0.625 | 0.109091 | 0.210909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014749 | 0.296681 | 482 | 22 | 47 | 21.909091 | 0.79646 | 0.043568 | 0 | 0.25 | 0 | 0 | 0.180828 | 0.054466 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2c5c09b1bffec3d9d337a43ebc68475a241f04d3 | 229 | py | Python | farmy/modules/led.py | farmy-maker/farmy-py | e21cc816073e62d34a84e82a8dbc3075cb9c4d47 | [
"Apache-2.0"
] | 1 | 2017-09-28T07:44:25.000Z | 2017-09-28T07:44:25.000Z | farmy/modules/led.py | farmy-maker/farmy-py | e21cc816073e62d34a84e82a8dbc3075cb9c4d47 | [
"Apache-2.0"
] | null | null | null | farmy/modules/led.py | farmy-maker/farmy-py | e21cc816073e62d34a84e82a8dbc3075cb9c4d47 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
from modules.controller import Controller
LED_PIN = 23 # pin of led
if __name__ == "__main__":
controller = Controller(LED_PIN)
controller.run(50, 1)
print("Led Start for a second with 50% power")
| 20.818182 | 50 | 0.694323 | 34 | 229 | 4.382353 | 0.705882 | 0.174497 | 0.214765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043956 | 0.20524 | 229 | 10 | 51 | 22.9 | 0.774725 | 0.104803 | 0 | 0 | 0 | 0 | 0.222772 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0.166667 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2c5e49aea9d8124efaadeb1e5de150138508a7bb | 2,608 | py | Python | docs/conf.py | guillaume-wisniewski/elpis | 550c350fd0098751b9a502a253bc4066f15c47db | [
"Apache-2.0"
] | 118 | 2018-11-25T22:00:11.000Z | 2022-03-18T10:18:33.000Z | docs/conf.py | guillaume-wisniewski/elpis | 550c350fd0098751b9a502a253bc4066f15c47db | [
"Apache-2.0"
] | 189 | 2019-01-25T01:37:59.000Z | 2022-02-16T02:31:23.000Z | docs/conf.py | guillaume-wisniewski/elpis | 550c350fd0098751b9a502a253bc4066f15c47db | [
"Apache-2.0"
] | 34 | 2018-11-28T20:31:38.000Z | 2022-01-27T12:20:59.000Z | # Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
sys.path.insert(0, os.path.abspath('..'))
# -- Project information -----------------------------------------------------
project = 'Elpis'
copyright = '2020, The University of Queensland'
author = 'Ben Foley, Nicholas Lambourne, Nay San'
# The full version, including alpha/beta/rc tags
release = '0.96.0'
master_doc = 'index'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.coverage',
'sphinx_autodoc_typehints',
'recommonmark'
]
# Show undocumented members in docs
autodoc_default_options = {
'undoc-members': True,
}
# Mock to get RTD docs to compile
autodoc_mock_imports = ["pytest"]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
# We also exclude the "ugly" auto-generated elpis.rst file and replace it with our own.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'elpis/elpis.rst']
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
html_logo = '_static/img/logo.png'
html_theme_options = {
'logo_only': True,
}
github_url = 'https://github.com/CoEDL/elpis'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
html_css_files = [
'style.css',
]
# -- Extension configuration -------------------------------------------------
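# Build sketch (standard Sphinx invocation, not part of the original file):
# from this docs/ directory, render the HTML docs with
#   sphinx-build -b html . _build/html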
| 33.435897 | 87 | 0.664494 | 342 | 2,608 | 4.979532 | 0.491228 | 0.023488 | 0.022314 | 0.017616 | 0.057546 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004047 | 0.147239 | 2,608 | 77 | 88 | 33.87013 | 0.761691 | 0.674847 | 0 | 0 | 0 | 0 | 0.394608 | 0.029412 | 0 | 0 | 0 | 0.012987 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2c6117485781c2c31bb5fbf18702c529d404a85e | 1,820 | py | Python | kite-python/kite_ml/kite/name_encoder/scope_encoder.py | kiteco/kiteco-public | 74aaf5b9b0592153b92f7ed982d65e15eea885e3 | [
"BSD-3-Clause"
] | 17 | 2022-01-10T11:01:50.000Z | 2022-03-25T03:21:08.000Z | kite-python/kite_ml/kite/name_encoder/scope_encoder.py | kiteco/kiteco-public | 74aaf5b9b0592153b92f7ed982d65e15eea885e3 | [
"BSD-3-Clause"
] | 1 | 2022-01-13T14:28:47.000Z | 2022-01-13T14:28:47.000Z | kite-python/kite_ml/kite/name_encoder/scope_encoder.py | kiteco/kiteco-public | 74aaf5b9b0592153b92f7ed982d65e15eea885e3 | [
"BSD-3-Clause"
] | 7 | 2022-01-07T03:58:10.000Z | 2022-03-24T07:38:20.000Z | from typing import Dict, Any
import tensorflow as tf
from ..utils.segmented_data import SegmentedIndices, SegmentedIndicesFeed
from ..graph_encoder.embeddings import NodeEmbeddings
class Encoder(object):
def __init__(self, nodes: NodeEmbeddings):
self._nodes = nodes
self._build()
def _build(self):
self._build_placeholders()
self._build_scope_state()
def _build_placeholders(self):
with tf.name_scope('placeholders'):
# shape [number of variables in batch]
            # sample_ids[i] = s means that variable i is part of sample s in the batch
self._variable_node_ids = SegmentedIndices('variable_node_ids')
def _build_scope_state(self):
with tf.name_scope('build_scope_state'):
# [num variable nodes in batch]
self._variable_nodes_embedded: tf.Tensor = tf.gather(
self._nodes.embeddings,
self._variable_node_ids.indices,
name='scope_nodes_embedded',
)
# reduce across variable nodes in each graph in the batch
# shape [batch size, graph embedding depth]
self._scope_state: tf.Tensor = tf.segment_max(
self._variable_nodes_embedded,
self._variable_node_ids.sample_ids,
name='scope_state',
)
def feed_dict(self, feed: SegmentedIndicesFeed) -> Dict[tf.Tensor, Any]:
return self._variable_node_ids.feed_dict(feed)
def placeholders_dict(self) -> Dict[str, tf.Tensor]:
return self._variable_node_ids.dict()
def scope_state(self) -> tf.Tensor:
"""
:return: representation of all the variables in scope, shape [batch size, graph embedding depth]
"""
return self._scope_state
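# Usage sketch (illustrative only; assumes NodeEmbeddings and
# SegmentedIndicesFeed instances built elsewhere in the pipeline):
#   encoder = Encoder(nodes=node_embeddings)
#   feed = encoder.feed_dict(variable_ids_feed)
#   state = sess.run(encoder.scope_state(), feed_dict=feed)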
| 34.339623 | 104 | 0.644505 | 216 | 1,820 | 5.148148 | 0.296296 | 0.06295 | 0.080935 | 0.085432 | 0.138489 | 0.059353 | 0 | 0 | 0 | 0 | 0 | 0 | 0.273626 | 1,820 | 52 | 105 | 35 | 0.84115 | 0.19011 | 0 | 0 | 0 | 0 | 0.05325 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.21875 | false | 0 | 0.125 | 0.0625 | 0.46875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2c62b01797bc951466927b86c37a5f651bd8ad8f | 1,390 | py | Python | auth/views.py | KenMwaura1/zoo_pitch | c83edf6fb53bdfc3739bedbea258f9ffc6f6925c | [
"MIT"
] | 2 | 2021-09-19T04:45:44.000Z | 2021-09-19T18:37:16.000Z | auth/views.py | KenMwaura1/zoo_pitch | c83edf6fb53bdfc3739bedbea258f9ffc6f6925c | [
"MIT"
] | null | null | null | auth/views.py | KenMwaura1/zoo_pitch | c83edf6fb53bdfc3739bedbea258f9ffc6f6925c | [
"MIT"
] | null | null | null | from flask import flash, render_template, redirect, request, url_for
from flask_login import login_required, login_user, logout_user
from . import auth
from .forms import UserLoginForm, UserRegForm
from app.commands import db
from app.models import User
from app.send_email import mail_message
@auth.route('/login', methods=['GET', 'POST'])
def login():
form = UserLoginForm()
if form.validate_on_submit():
user = db.session.query(User).filter_by(username=form.username.data).first()
if user is not None and user.verify_password(form.password.data):
login_user(user, form.remember.data)
return redirect(request.args.get('next') or url_for('main.index'))
flash('Invalid username or Password')
return render_template('auth/login.html', loginform=form)
@auth.route('/logout')
@login_required
def logout():
logout_user()
return redirect(url_for("main.index"))
@auth.route('/signup', methods=["GET", "POST"])
def signup():
form = UserRegForm()
print(form)
if form.validate_on_submit():
user = User(email=form.email.data, username=form.username.data, password=form.password.data)
user.save_user()
mail_message("Welcome to Zoo-Pitch","email/user_welcome",user.email,user=user)
return redirect(url_for('auth.login'))
return render_template('auth/sign-up.html', reg_form=form)
| 35.641026 | 100 | 0.709353 | 192 | 1,390 | 5 | 0.359375 | 0.025 | 0.029167 | 0.035417 | 0.104167 | 0.054167 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159712 | 1,390 | 38 | 101 | 36.578947 | 0.821918 | 0 | 0 | 0.0625 | 0 | 0 | 0.119424 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09375 | false | 0.09375 | 0.21875 | 0 | 0.46875 | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
2c6b4180cafccab3b589b0d309ed2441b3687e2d | 1,465 | py | Python | src/views/send/target_address_form_view.py | Kevingislason/bitcoin_hardware_wallet_ui | 226983546c7c8838ca8bc72accdd6adbd8013446 | [
"MIT"
] | null | null | null | src/views/send/target_address_form_view.py | Kevingislason/bitcoin_hardware_wallet_ui | 226983546c7c8838ca8bc72accdd6adbd8013446 | [
"MIT"
] | 5 | 2021-06-02T03:21:46.000Z | 2022-03-12T00:55:35.000Z | src/views/send/target_address_form_view.py | Kevingislason/abacus_wallet_bridge | 226983546c7c8838ca8bc72accdd6adbd8013446 | [
"MIT"
] | null | null | null | from PyQt6.QtCore import *
from PyQt6.QtGui import *
from PyQt6.QtWidgets import *
class TargetAddressForm(QWidget):
def __init__(self):
super().__init__()
self.layout = QHBoxLayout()
self.setLayout(self.layout)
self.target_address_label = QLabel("Pay to:")
self.target_address_label.size_policy = QSizePolicy(QSizePolicy.Policy.Preferred, QSizePolicy.Policy.Fixed)
self.target_address_label.size_policy.setHorizontalStretch(1)
self.target_address_label.setSizePolicy(self.target_address_label.size_policy)
self.target_address_input = QLineEdit()
self.target_address_input.setMaxLength(74) # max address length
self.target_address_input.size_policy = QSizePolicy(QSizePolicy.Policy.Preferred, QSizePolicy.Policy.Fixed)
self.target_address_input.size_policy.setHorizontalStretch(8)
self.target_address_input.setSizePolicy(self.target_address_input.size_policy)
self.target_address_spacer = QLabel("")
self.target_address_spacer.size_policy = QSizePolicy(QSizePolicy.Policy.Preferred, QSizePolicy.Policy.Fixed)
self.target_address_spacer.size_policy.setHorizontalStretch(1)
self.target_address_spacer.setSizePolicy(self.target_address_spacer.size_policy)
self.layout.addWidget(self.target_address_label)
self.layout.addWidget(self.target_address_input)
self.layout.addWidget(self.target_address_spacer)
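# Minimal manual-test harness (a sketch, not part of the original module):
if __name__ == "__main__":
    import sys
    app = QApplication(sys.argv)
    form = TargetAddressForm()
    form.show()
    sys.exit(app.exec())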
| 44.393939 | 116 | 0.763823 | 170 | 1,465 | 6.258824 | 0.229412 | 0.178571 | 0.303571 | 0.144737 | 0.644737 | 0.612782 | 0.332707 | 0.242481 | 0.242481 | 0.242481 | 0 | 0.006431 | 0.150853 | 1,465 | 32 | 117 | 45.78125 | 0.848875 | 0.012287 | 0 | 0 | 0 | 0 | 0.004844 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.125 | 0 | 0.208333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2c6c2e152a67cf3da0646f687e2033d8230c0268 | 518 | py | Python | tests/test_overwrite_store.py | schwa-lab/libschwa-python | aebe5b0cf91e55b9e054ecff46a6e74fcd19f490 | [
"MIT"
] | 5 | 2015-03-23T17:19:18.000Z | 2017-06-07T18:24:50.000Z | tests/test_overwrite_store.py | schwa-lab/libschwa-python | aebe5b0cf91e55b9e054ecff46a6e74fcd19f490 | [
"MIT"
] | null | null | null | tests/test_overwrite_store.py | schwa-lab/libschwa-python | aebe5b0cf91e55b9e054ecff46a6e74fcd19f490 | [
"MIT"
] | null | null | null | # vim: set et nosi ai ts=2 sts=2 sw=2:
# coding: utf-8
from __future__ import absolute_import, print_function, unicode_literals
import unittest
from schwa import dr
class Node(dr.Ann):
label = dr.Field()
class Doc(dr.Doc):
store = dr.Store(Node)
class Test(unittest.TestCase):
def _test_example(self, doc):
doc.store = None
def test_example(self):
R = 'Cannot overwrite a store (.*)'
d = Doc()
d.store.create()
self.assertRaisesRegexp(ValueError, R, lambda: self._test_example(d))
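# Added runner (standard unittest idiom) so the module can be run directly:
if __name__ == '__main__':
  unittest.main()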
| 19.185185 | 73 | 0.69305 | 79 | 518 | 4.392405 | 0.56962 | 0.095101 | 0.080692 | 0.103746 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009479 | 0.185328 | 518 | 26 | 74 | 19.923077 | 0.812796 | 0.096525 | 0 | 0 | 0 | 0 | 0.062366 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 1 | 0.133333 | false | 0 | 0.2 | 0 | 0.666667 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
2c7f7b1f3c0b5c1795a18f512daba60361ae64d1 | 2,549 | py | Python | Samples/NLPSample.py | Klangoo/MagnetApiClient.Python | adf36c0e8b094a282827801b1ccf0aaf56165b3f | [
"MIT"
] | null | null | null | Samples/NLPSample.py | Klangoo/MagnetApiClient.Python | adf36c0e8b094a282827801b1ccf0aaf56165b3f | [
"MIT"
] | null | null | null | Samples/NLPSample.py | Klangoo/MagnetApiClient.Python | adf36c0e8b094a282827801b1ccf0aaf56165b3f | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -- coding: UTF-8 --
"""
Magnet API NLP Sample
Copyright 2018, Klangoo Inc.
"""
from klangooclient.MagnetAPIClient import MagnetAPIClient
ENDPOINT = 'https://nlp.klangoo.com/Service.svc'
CALK = 'enter your calk here'
SECRET_KEY = 'enter your secret key here'
client = MagnetAPIClient(ENDPOINT, CALK, SECRET_KEY)
def test_process_document():
request = { 'text' : 'The United States of America (USA), commonly known as the United States (U.S.) or America, is a federal republic composed of 50 states, a federal district, five major self-governing territories, and various possessions.',
'lang' : 'en', 'format' : 'json' }
json = client.callwebmethod('ProcessDocument', request, 'POST')
print('\nProcess Document:')
print(json)
def test_get_summary():
request = { 'text' : 'The United States of America (USA), commonly known as the United States (U.S.) or America, is a federal republic composed of 50 states, a federal district, five major self-governing territories, and various possessions.',
'lang' : 'en', 'format' : 'json' }
json = client.callwebmethod('GetSummary', request, 'POST')
print('\nGet Summary:')
print(json)
def test_get_entities():
request = { 'text' : 'The United States of America (USA), commonly known as the United States (U.S.) or America, is a federal republic composed of 50 states, a federal district, five major self-governing territories, and various possessions.',
'lang' : 'en', 'format' : 'json' }
json = client.callwebmethod('GetEntities', request, 'POST')
print('\nGet Entities:')
print(json)
def test_get_categories():
request = { 'text' : 'The United States of America (USA), commonly known as the United States (U.S.) or America, is a federal republic composed of 50 states, a federal district, five major self-governing territories, and various possessions.',
'lang' : 'en', 'format' : 'json' }
json = client.callwebmethod('GetCategories', request, 'POST')
print('\nGet Categories:')
print(json)
def test_get_key_topics():
request = { 'text' : 'The United States of America (USA), commonly known as the United States (U.S.) or America, is a federal republic composed of 50 states, a federal district, five major self-governing territories, and various possessions.',
'lang' : 'en', 'format' : 'json' }
json = client.callwebmethod('GetKeyTopics', request, 'POST')
print('\nGet Key Topics:')
print(json)
if __name__ == "__main__":
test_process_document()
test_get_summary()
test_get_entities()
test_get_categories()
test_get_key_topics() | 44.719298 | 245 | 0.721852 | 346 | 2,549 | 5.225434 | 0.248555 | 0.049779 | 0.082965 | 0.05531 | 0.661504 | 0.619469 | 0.619469 | 0.619469 | 0.619469 | 0.619469 | 0 | 0.006922 | 0.149863 | 2,549 | 57 | 246 | 44.719298 | 0.827411 | 0.0357 | 0 | 0.365854 | 0 | 0.121951 | 0.592062 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121951 | false | 0 | 0.02439 | 0 | 0.146341 | 0.243902 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2c80a3d4d74ea16474f3364d5a8e43e993d0be8b | 952 | py | Python | 15-3SUM/solution.py | alfmunny/leetcode | e35d2164c7e6e66410309fe1667ceab5a7689bef | [
"MIT"
] | null | null | null | 15-3SUM/solution.py | alfmunny/leetcode | e35d2164c7e6e66410309fe1667ceab5a7689bef | [
"MIT"
] | null | null | null | 15-3SUM/solution.py | alfmunny/leetcode | e35d2164c7e6e66410309fe1667ceab5a7689bef | [
"MIT"
] | null | null | null | from typing import List
class Solution:
def threeSum(self, nums: List[int]) -> List[List[int]]:
if len(nums) < 3:
return []
ans = []
nums.sort()
for i in range(0, len(nums)-2):
if nums[i] > 0:
break
if i > 0 and nums[i-1] == nums[i]:
continue
left, right = i+1, len(nums)-1
while right > left:
s = nums[left] + nums[right] + nums[i]
if s == 0:
ans.append([nums[i], nums[left], nums[right]])
left += 1
right -= 1
while right > left and nums[left] == nums[left-1]:
left += 1
while right > left and nums[right] == nums[right+1]:
right -= 1
elif s < 0:
left += 1
else:
right -= 1
return ans
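# Quick check (a sketch, not part of the original solution):
if __name__ == "__main__":
    print(Solution().threeSum([-1, 0, 1, 2, -1, -4]))  # [[-1, -1, 2], [-1, 0, 1]]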
| 32.827586 | 72 | 0.366597 | 106 | 952 | 3.292453 | 0.292453 | 0.071633 | 0.094556 | 0.12894 | 0.126075 | 0.126075 | 0 | 0 | 0 | 0 | 0 | 0.038961 | 0.514706 | 952 | 28 | 73 | 34 | 0.71645 | 0 | 0 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037037 | false | 0 | 0 | 0 | 0.148148 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2c86d57953ceb081ab010190a1fd6e152927560a | 8,829 | py | Python | py/sandbox.python.py | schaabs/sandbox | ee8abb2a8220ca841b9b5a2579c25d100a43eb4f | [
"MIT"
] | null | null | null | py/sandbox.python.py | schaabs/sandbox | ee8abb2a8220ca841b9b5a2579c25d100a43eb4f | [
"MIT"
] | 2 | 2018-02-01T19:58:53.000Z | 2018-02-23T00:50:18.000Z | py/sandbox.python.py | schaabs/sandbox | ee8abb2a8220ca841b9b5a2579c25d100a43eb4f | [
"MIT"
] | 1 | 2020-12-16T06:35:51.000Z | 2020-12-16T06:35:51.000Z | import hashlib
import shutil
import io
import gzip
import platform
import os
import struct
import json
import mmap
class ElfConst:
CLASS_32 = 1
CLASS_64 = 2
DATA_LE = 1
DATA_BE = 2
TYPE_RELOC = 1
TYPE_EXEC = 2
TYPE_SHARED = 3
TYPE_CORE = 4
class Layout:
ElfIdent = b'=4sBBBBBxxxxxxx'
ElfFileHeader32BE = b'>HHIIIIIHHHHHH'
ElfFileHeader32LE = b'<HHIIIIIHHHHHH'
ElfFileHeader64BE = b'>HHIQQQIHHHHHH'
ElfFileHeader64LE = b'<HHIQQQIHHHHHH'
ElfProgramHeader32BE = b'>IIIIIIII'
ElfProgramHeader32LE = b'<IIIIIIII'
ElfProgramHeader64BE = b'>IIQQQQQQ'
ElfProgramHeader64LE = b'<IIQQQQQQ'
ElfNoteHeader32BE = b'>III'
ElfNoteHeader32LE = b'<III'
ElfNoteHeader64BE = b'>III'
ElfNoteHeader64LE = b'<III'
class ExplicitLayout:
layout = None
def size(self):
if not self.layout:
return 0
return struct.calcsize(self.layout)
def _struct_unpack_from(self, file, offset=0):
file.seek(offset)
        bytestr = file.read(self.size())
return struct.unpack(self.layout, bytestr)
class ElfIdent(ExplicitLayout):
magic = None
elfClass = None
elfData = None
fileVersion = None
fileAbi = None
abiVersion = None
def __init__(self):
self.layout = Layout.ElfIdent
def unpack_from(self, file, offset=0):
print 'offset=' + hex(offset) + ' size=' + hex(self.size())
(self.magic, self.elfClass, self.elfData, self.fileVersion, self.fileAbi, self.abiVersion) = self._struct_unpack_from(file, offset)
print self
def is_valid(self):
#if the magic string doesn't match the expected '\x7fELF' return false
return self.magic == '\x7fELF'
def __str__(self):
dict = {
'magic': self.magic,
'elfClass': hex(self.elfClass),
'elfData': hex(self.elfData),
'fileVersion': hex(self.fileVersion),
'fileAbi': hex(self.fileAbi),
'abiVersion': hex(self.abiVersion)
}
return json.dumps(dict)
class ElfFileHeader(ExplicitLayout):
type = None
machine = None
version = None
entry = None
phoff = None
shoff = None
flags = None
ehsize = None
phentsize = None
phnum = None
shentsize = None
shnum = None
shstrndx = None
def __init__(self, elfident):
if elfident.elfClass == ElfConst.CLASS_32:
if elfident.elfData == ElfConst.DATA_BE:
self.layout = Layout.ElfFileHeader32BE
elif elfident.elfData == ElfConst.DATA_LE:
self.layout = Layout.ElfFileHeader32LE
elif elfident.elfClass == ElfConst.CLASS_64:
if elfident.elfData == ElfConst.DATA_BE:
self.layout = Layout.ElfFileHeader64BE
elif elfident.elfData == ElfConst.DATA_LE:
self.layout = Layout.ElfFileHeader64LE
# returns the data at the specified offset as an ElfFileHeader
def unpack_from(self, file, offset):
print 'offset=' + hex(offset) + ' size=' + hex(self.size())
(self.type, self.machine, self.version, self.entry,
self.phoff, self.shoff, self.flags, self.ehsize,
self.phentsize, self.phnum, self.shentsize, self.shnum,
self.shstrndx) = self._struct_unpack_from(file, offset)
print self
def __str__(self):
dict = {
'type': hex(self.type),
'machine': hex(self.machine),
'version': hex(self.version),
'entry': hex(self.entry),
'phoff': hex(self.phoff),
'shoff': hex(self.shoff),
'flags': hex(self.flags),
'ehsize': hex(self.ehsize),
'phentsize': hex(self.phentsize),
'phnum': hex(self.phnum),
'shentsize': hex(self.shentsize),
'shnum': hex(self.shnum),
'shstrndx': hex(self.shstrndx)
}
return json.dumps(dict)
class ElfProgramHeader(ExplicitLayout):
type = None
offset = None
vaddr = None
paddr = None
filesz = None
memsz = None
flags = None
align = None
def __init__(self, elfident):
self._elfident = elfident
if elfident.elfClass == ElfConst.CLASS_32:
if elfident.elfData == ElfConst.DATA_BE:
self.layout = Layout.ElfProgramHeader32BE
elif elfident.elfData == ElfConst.DATA_LE:
self.layout = Layout.ElfProgramHeader32LE
elif elfident.elfClass == ElfConst.CLASS_64:
if elfident.elfData == ElfConst.DATA_BE:
self.layout = Layout.ElfProgramHeader64BE
elif elfident.elfData == ElfConst.DATA_LE:
self.layout = Layout.ElfProgramHeader64LE
def unpack_from(self, file, offset):
print 'offset=' + hex(offset) + ' size=' + hex(self.size())
if self._elfident.elfClass == ElfConst.CLASS_32:
(self.type, self.offset, self.vaddr, self.paddr,
self.filesz, self.memsz, self.flags, self.align) = self._struct_unpack_from(file, offset)
else:
(self.type, self.flags, self.offset, self.vaddr,
self.paddr, self.filesz, self.memsz, self.align) = self._struct_unpack_from(file, offset)
print self
def __str__(self):
str = ''.join([
'(type=', hex(self.type),
' offset=', hex(self.offset),
' vaddr=', hex(self.vaddr),
' paddr=', hex(self.paddr),
' filesz=', hex(self.filesz),
' memsz=', hex(self.memsz),
' flags=', hex(self.flags),
' align', hex(self.align), ')'
])
return str
class ElfNote:
noteHeader = None
name = None
descr = None
class ElfNoteHeader(ExplicitLayout):
namesz = None
descsz = None
type = None
def __init__(self, elfident):
if elfident.elfClass == ElfConst.CLASS_32:
if elfident.elfData == ElfConst.DATA_BE:
self.layout = Layout.ElfNoteHeader32BE
elif elfident.elfData == ElfConst.DATA_LE:
self.layout = Layout.ElfNoteHeader32LE
elif elfident.elfClass == ElfConst.CLASS_64:
if elfident.elfData == ElfConst.DATA_BE:
self.layout = Layout.ElfNoteHeader64BE
elif elfident.elfData == ElfConst.DATA_LE:
self.layout = Layout.ElfNoteHeader64LE
def unpack_from(self, file, offset):
(self.namesz, self.descsz, self.type) = self._struct_unpack_from(file, offset)
def __str__(self):
dict = {
'namesz': hex(self.namesz),
'descsz': hex(self.descsz),
'type': hex(self.type)
}
return json.dumps(dict)
class ElfFile:
ident = None
fileHeader = None
programHeaders = [ ]
notes = [ ]
@staticmethod
def unpack_from(file, offset=0):
elffile = ElfFile()
elffile.ident = ElfIdent()
elffile.ident.unpack_from(file, offset)
if not elffile.ident.is_valid():
return None
elffile.fileHeader = ElfFileHeader(elffile.ident)
elffile.fileHeader.unpack_from(file, offset + elffile.ident.size())
elffile._unpack_program_headers(file, offset)
return elffile
def _unpack_program_headers(self, file, offset):
for i in range(0, self.fileHeader.phnum):
ph = ElfProgramHeader(self.ident)
print offset + self.fileHeader.phoff + (i * self.fileHeader.phentsize)
ph.unpack_from(file, offset + self.fileHeader.phoff + (i * self.fileHeader.phentsize))
self.programHeaders.append(ph)
if ph.type == 4:
                self._unpack_notes(file, ph.offset, ph.offset + ph.filesz)
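    # Added sketch: the original called _unpack_notes without defining it
    # anywhere, which would raise a NameError. This assumed implementation
    # walks the ElfNote entries of a PT_NOTE segment; the name and descriptor
    # fields are padded to 4-byte boundaries per the ELF spec.
    def _unpack_notes(self, file, start, end):
        offset = start
        while offset < end:
            note = ElfNote()
            note.noteHeader = ElfNoteHeader(self.ident)
            note.noteHeader.unpack_from(file, offset)
            offset += note.noteHeader.size()
            note.name = file.read(note.noteHeader.namesz)
            offset += (note.noteHeader.namesz + 3) & ~3
            file.seek(offset)
            note.descr = file.read(note.noteHeader.descsz)
            offset += (note.noteHeader.descsz + 3) & ~3
            self.notes.append(note)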
def __str__(self):
filestr = 'ident:\n' + str(self.ident) + '\nfileHeader:\n' + str(self.fileHeader) + '\nprogramHeaders:\n' + '\n'.join(str(ph) for ph in self.programHeaders)
return filestr
if __name__ == '__main__':
with open('libcoreclr.so', 'rb') as corefile:
print ''
print ''
print(ElfFile.unpack_from(corefile, 0))
| 30.030612 | 164 | 0.557028 | 891 | 8,829 | 5.396184 | 0.162739 | 0.046589 | 0.043261 | 0.067388 | 0.337146 | 0.31094 | 0.288062 | 0.288062 | 0.258527 | 0.179701 | 0 | 0.014426 | 0.340469 | 8,829 | 293 | 165 | 30.133106 | 0.811266 | 0.014724 | 0 | 0.220183 | 0 | 0 | 0.050247 | 0 | 0.004587 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.041284 | null | null | 0.045872 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2c88f77b721f16bdbc0e12c2be60eceb519ef175 | 800 | py | Python | log_redaction_cli.py | kalaboster/strings | a0c7160af0715599721afce92e739283a556f80c | [
"Apache-2.0"
] | 1 | 2019-09-25T04:34:25.000Z | 2019-09-25T04:34:25.000Z | log_redaction_cli.py | kalaboster/strings | a0c7160af0715599721afce92e739283a556f80c | [
"Apache-2.0"
] | null | null | null | log_redaction_cli.py | kalaboster/strings | a0c7160af0715599721afce92e739283a556f80c | [
"Apache-2.0"
] | null | null | null | """log_redaction_cli
Usage:
log_redaction_cli.py --tarfile <tarfile> --working-dir <working-dir> --output-dir <output-dir>
log_redaction_cli.py (-h | --help)
log_redaction_cli.py --version
Options:
    -h --help     Show this help. Example command: python log_redaction_cli.py --tarfile "test/files/test_output.tar.gz" --working-dir "/home/kalab/github/stringer/test/files" --output-dir log_redataction_example
    --version     Show the version.
"""
from docopt import docopt
import stringer.utils.log_redaction_utils as log_redact
if __name__ == '__main__':
arguments = docopt(__doc__, version='0.0.9')
    perm_list = log_redact.process_gz(file=arguments.get("<tarfile>"), working_dir=arguments.get("<working-dir>"), output_gz_dir=arguments.get("<output-dir>"))
print(str(perm_list))
| 33.333333 | 214 | 0.7325 | 116 | 800 | 4.75 | 0.431034 | 0.130672 | 0.136116 | 0.123412 | 0.087114 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004267 | 0.12125 | 800 | 23 | 215 | 34.782609 | 0.779516 | 0.55375 | 0 | 0 | 0 | 0 | 0.135057 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
2c9245c4a0d3d6fcf05085c489277adf12ebc45d | 1,982 | py | Python | P2_studies/theta_plus/Analysis/Mapping/match_mcl_to_leiden.py | chackoge/ERNIE_Plus | 7e480c47a69fc2f736ac7fb55ece35dbff919938 | [
"MIT"
] | 6 | 2017-09-26T23:45:52.000Z | 2021-10-18T22:58:38.000Z | P2_studies/theta_plus/Analysis/Mapping/match_mcl_to_leiden.py | NETESOLUTIONS/ERNIE | 454518f28b39a6f37ad8dde4f3be15d4dccc6f61 | [
"MIT"
] | null | null | null | P2_studies/theta_plus/Analysis/Mapping/match_mcl_to_leiden.py | NETESOLUTIONS/ERNIE | 454518f28b39a6f37ad8dde4f3be15d4dccc6f61 | [
"MIT"
] | 9 | 2017-11-22T13:42:32.000Z | 2021-05-16T17:58:03.000Z | import pandas as pd
import mapping_module as mm
import multiprocessing as mp
from sqlalchemy import create_engine
from sys import argv
user_name = argv[1]
password = argv[2]
data_type = argv[3]
start_year = int(argv[4])
end_year = int(argv[5])
leiden_input = argv[6] #quality_func_Res --> CPM_R001
schema = argv[7]
rootdir = argv[8] # "/erniedev_data3/theta_plus/Leiden/"
sql_scheme = 'postgresql://' + user_name + ':' + password + '@localhost:5432/ernie'
engine = create_engine(sql_scheme)
data_name = data_type + str(start_year) + '_' + str(end_year)
# Read from Postgres
mcl_name = data_name + '_cluster_scp_list_unshuffled'
mcl = pd.read_sql_table(table_name= mcl_name, schema=schema, con=engine)
# # Read directly
# mcl_name = data_name + '_cluster_scp_list_unshuffled.csv'
# mcl = pd.read_csv(mcl_name)
leiden_name = data_name + '_cluster_scp_list_leiden_' + leiden_input + '.csv'
leiden = pd.read_csv(leiden_name)
mcl_grouped = mcl.groupby(by='cluster_no',
as_index=False).agg('count').sort_values(by='cluster_no', ascending=True)
# To match clusters between size 30 and 350 only:
mcl_grouped = mcl_grouped[(mcl_grouped['scp'] >= 30) & (mcl_grouped['scp'] <= 350)]
mcl_cluster_list = mcl_grouped['cluster_no'].tolist()
print("Running...")
p = mp.Pool(6)
final_df = pd.DataFrame()
for mcl_cluster_no in mcl_cluster_list:
match_dict = p.starmap(mm.match_mcl_to_leiden, [(mcl_cluster_no, mcl, leiden)])
match_df = pd.DataFrame.from_dict(match_dict)
    final_df = pd.concat([final_df, match_df], ignore_index=True)  # DataFrame.append was removed in pandas 2.0
save_name = rootdir + '/' + data_name + '_match_to_leiden_' + leiden_input + '.csv'
final_df.to_csv(save_name, index = None, header=True, encoding='utf-8')
# In case the connection times out:
engine = create_engine(sql_scheme)
save_name_sql = data_name + '_match_to_leiden_' + leiden_input
final_df.to_sql(save_name_sql, con=engine, schema=schema, index=False, if_exists='fail')
print("")
print("All Completed.") | 33.59322 | 99 | 0.734107 | 308 | 1,982 | 4.383117 | 0.366883 | 0.035556 | 0.026667 | 0.042222 | 0.164444 | 0.124444 | 0.105185 | 0.057778 | 0 | 0 | 0 | 0.016336 | 0.135217 | 1,982 | 59 | 100 | 33.59322 | 0.771295 | 0.135217 | 0 | 0.052632 | 0 | 0 | 0.12075 | 0.043376 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.052632 | 0.131579 | 0 | 0.131579 | 0.078947 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
2c962f2954bff9d6229947f6de4e0917ddfe1361 | 576 | py | Python | lib/DuplicatePairDetector.py | hapsby/deepAPIRevisited | 826c0893dd828380d13e58ac9739a49525e7f001 | [
"MIT"
] | null | null | null | lib/DuplicatePairDetector.py | hapsby/deepAPIRevisited | 826c0893dd828380d13e58ac9739a49525e7f001 | [
"MIT"
] | null | null | null | lib/DuplicatePairDetector.py | hapsby/deepAPIRevisited | 826c0893dd828380d13e58ac9739a49525e7f001 | [
"MIT"
] | null | null | null | import hashlib
class DuplicatePairDetector:
def __init__(self):
self.hashes = set()
def add_if_new(self, description, api_calls):
hash_binary = self.get_hash_binary(description, api_calls)
if hash_binary in self.hashes:
return False
self.hashes.add(hash_binary)
return True
def get_hash_binary(self, description, api_calls):
hasher = hashlib.md5(description.encode('utf-8'))
for api_call in api_calls:
hasher.update(api_call.encode('utf-8'))
return hasher.digest()[0:5]
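# Quick demonstration (a sketch, not part of the original module):
if __name__ == "__main__":
    detector = DuplicatePairDetector()
    print(detector.add_if_new("read a file", ["open", "read", "close"]))  # True
    print(detector.add_if_new("read a file", ["open", "read", "close"]))  # False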
| 26.181818 | 66 | 0.651042 | 75 | 576 | 4.746667 | 0.44 | 0.140449 | 0.160112 | 0.129213 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011628 | 0.253472 | 576 | 21 | 67 | 27.428571 | 0.816279 | 0 | 0 | 0 | 0 | 0 | 0.017391 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.066667 | 0 | 0.533333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
2c964b266d17c7b782f7971364713668a723fd0e | 328 | py | Python | beginner/sum-of-all-numbers_ChingLingYeung.py | garvitsharma05/hacktoberithms | 25aea28f362de22414569d67436a670bea5a3aeb | [
"MIT"
] | 16 | 2018-10-05T07:35:06.000Z | 2021-10-02T12:12:52.000Z | beginner/sum-of-all-numbers_ChingLingYeung.py | garvitsharma05/hacktoberithms | 25aea28f362de22414569d67436a670bea5a3aeb | [
"MIT"
] | 50 | 2018-10-04T00:04:24.000Z | 2019-10-25T16:29:58.000Z | beginner/sum-of-all-numbers_ChingLingYeung.py | garvitsharma05/hacktoberithms | 25aea28f362de22414569d67436a670bea5a3aeb | [
"MIT"
] | 115 | 2018-10-04T02:42:18.000Z | 2021-01-27T17:34:21.000Z |
def sum_all(ls):
sum = 0
if(len(ls) != 2):
print("Invalid input")
else:
ls.sort()
start = ls[0]
end = ls[1]
if(start == end):
sum = 2 * start
else:
for i in range(start, end+1):
sum += i
return sum
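# Quick check (a sketch, not part of the original file):
if __name__ == "__main__":
    print(sum_all([1, 4]))  # 10 (1 + 2 + 3 + 4)
    print(sum_all([4, 1]))  # also 10; the list is sorted first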
| 15.619048 | 41 | 0.368902 | 40 | 328 | 3 | 0.525 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037267 | 0.509146 | 328 | 20 | 42 | 16.4 | 0.708075 | 0 | 0 | 0.142857 | 0 | 0 | 0.039755 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0 | 0 | 0.142857 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2ca16bdb7e66900966a36a0cd4986826181aae35 | 1,967 | py | Python | CS307/testbench_log/fileDB/Client.py | ntdgy/python_study | c3511846a89ea72418937de4cc3edf1595a46ec5 | [
"MIT"
] | null | null | null | CS307/testbench_log/fileDB/Client.py | ntdgy/python_study | c3511846a89ea72418937de4cc3edf1595a46ec5 | [
"MIT"
] | null | null | null | CS307/testbench_log/fileDB/Client.py | ntdgy/python_study | c3511846a89ea72418937de4cc3edf1595a46ec5 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# @Time: 2022/4/13 19:35
# @Author: Anshang
# @File: Client.py
# @Software: PyCharm
import socket
from multiprocessing import Process, Pipe
def connection(pipe: Pipe, username, password):
ip_bind = ("127.0.0.1", 9900)
c = socket.socket()
c.connect(ip_bind)
login = 'login ' + username + ' ' + password
c.send(bytes(login, encoding='utf-8'))
permission = str(c.recv(1024), encoding="utf-8")
print('permission:', permission)
if permission == '-1':
raise Exception("Login error")
while True:
rec = pipe.recv()
c.send(bytes(rec, encoding="utf-8"))
temp = str(c.recv(1024), encoding="utf-8")
s_send = ''
while temp != 'finish':
s_send = s_send + temp
temp = str(c.recv(1024), encoding="utf-8")
pipe.send(s_send)
class DBMSClient(object):
    def __init__(self, username, password):
        # create the pipe per instance; a class-level Pipe() would be shared
        # across every DBMSClient object
        self.pa, self.child = Pipe()
        self.p = Process(target=connection, args=(self.child, username, password))
        self.p.start()
def execute(self, sql: str):
self.pa.send(sql)
return self.pa.recv()
def excuse(self, sql: str):
self.pa.send(sql)
return self.pa.recv()
def close(self):
self.p.terminate()
self.pa.close()
self.child.close()
if __name__ == '__main__':
client = DBMSClient('anshang', '123456')
client.execute("insert into supply_center(id, director_name) values(2, 'name');")
client.execute("insert into supply_center(id, director_name) values(2, 'test');")
client.execute("insert into supply_center(id, director_name, supply_center) values(5, 'test', 'center');")
client.execute("update supply_center set id = 5, director_name = 'jbjbjb' where id = 2;")
print(
client.execute("select * from supply_center where id = '2' and director_name = 'test' or supply_center = 'center';"))
client.close()
| 31.222222 | 125 | 0.608033 | 257 | 1,967 | 4.536965 | 0.354086 | 0.072041 | 0.051458 | 0.030875 | 0.278731 | 0.278731 | 0.278731 | 0.258148 | 0.21012 | 0.168096 | 0 | 0.034667 | 0.237417 | 1,967 | 62 | 126 | 31.725806 | 0.742667 | 0.049822 | 0 | 0.130435 | 0 | 0.021739 | 0.263017 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108696 | false | 0.108696 | 0.043478 | 0 | 0.217391 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
2cab7c782174400d830c6863e839dd45a2ff2d54 | 2,416 | py | Python | face.py | 1MT3J45/DS-StockAnalysis | 20de4270a31e41324adc2c67ecb2343ff0c208c7 | [
"Apache-2.0"
] | null | null | null | face.py | 1MT3J45/DS-StockAnalysis | 20de4270a31e41324adc2c67ecb2343ff0c208c7 | [
"Apache-2.0"
] | null | null | null | face.py | 1MT3J45/DS-StockAnalysis | 20de4270a31e41324adc2c67ecb2343ff0c208c7 | [
"Apache-2.0"
] | null | null | null | from kivy.uix.boxlayout import BoxLayout
from kivy.lang import Builder
from kivy.app import App
import YP03
import sys
import dfgui
import pandas as pd
Builder.load_string('''
<faceTool>:
num1: num1
result: result
orientation: 'vertical'
BoxLayout:
orientation: 'horizontal'
Label:
id: num1
text: 'Stock Data Analysis'
BoxLayout:
orientation: 'horizontal'
GridLayout:
cols: 6
Label:
id: blank1
Label:
id: blank2
Button:
text: 'Execute'
height: 10
width: 30
on_press: root.display_fun(self)
Label:
text: 'EMPTY SLOT'
height: 10
width: 30
on_press:
Button:
text: "Show XLS Sheet"
height: 10
width: 30
on_press: root.graph()
Button:
text: "Clear"
height: 10
width: 30
on_press: root.clear_screen()
BoxLayout:
orientation: 'horizontal'
Label:
id: result
GridLayout:
cols: 2
size_hint_y: None
Button:
text: "Clear"
on_press: root.clear_screen()
height: 10
width: 30
BubbleButton:
text: 'Exit'
on_press: root.exit_it()
height: 10
width: 30
''')
class face_app(App):
def build(self):
return faceTool()
class faceTool(BoxLayout):
def __init__(self, **kwargs):
super(faceTool, self).__init__(**kwargs)
def display_fun(self, instance):
        '''Called when the Execute button is pressed: runs the stock
        analysis in YP03.execute() and shows the resulting day-cluster
        names, one per line, in the result label.
        '''
DayClusterNames, length = YP03.execute()
res = ''
for i in range(len(DayClusterNames)):
res = str(DayClusterNames[i])+'\n'+res
self.result.text = str(res)
def exit_it(self):
sys.exit()
def graph(self):
# xls = pd.read_excel('Res.xls')
# df = pd.DataFrame(xls)
# dfgui.show(df)
import main
def clear_screen(self):
self.result.text = ''
face_app().run()
| 23.456311 | 68 | 0.500414 | 248 | 2,416 | 4.766129 | 0.419355 | 0.040609 | 0.06599 | 0.076142 | 0.175127 | 0.084602 | 0.06599 | 0 | 0 | 0 | 0 | 0.024665 | 0.412666 | 2,416 | 102 | 69 | 23.686275 | 0.808316 | 0.084023 | 0 | 0.402439 | 0 | 0 | 0.634404 | 0.010092 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073171 | false | 0 | 0.097561 | 0.012195 | 0.207317 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2cb12c4d4584cf6f238a2a2f0fe96c07ce3365fd | 8,064 | py | Python | examples/resnet-v1/resnet_v1.py | statisticszhang/Image-classification-caffe-model | 33084ca0841e768dae84db582e15bb29ffeeaaec | [
"MIT"
] | 1 | 2020-06-03T12:53:43.000Z | 2020-06-03T12:53:43.000Z | examples/resnet-v1/resnet_v1.py | statisticszhang/Image-classification-caffe-model | 33084ca0841e768dae84db582e15bb29ffeeaaec | [
"MIT"
] | null | null | null | examples/resnet-v1/resnet_v1.py | statisticszhang/Image-classification-caffe-model | 33084ca0841e768dae84db582e15bb29ffeeaaec | [
"MIT"
] | null | null | null | import caffe
from caffe import layers as L
from caffe import params as P
def conv_bn_scale_relu(bottom, num_output=64, kernel_size=3, stride=1, pad=0):
conv = L.Convolution(bottom, num_output=num_output, kernel_size=kernel_size, stride=stride, pad=pad,
param=[dict(lr_mult=1, decay_mult=1), dict(lr_mult=2, decay_mult=0)],
weight_filler=dict(type='xavier', std=0.01),
bias_filler=dict(type='constant', value=0))
conv_bn = L.BatchNorm(conv, use_global_stats=False, in_place=True)
conv_scale = L.Scale(conv, scale_param=dict(bias_term=True), in_place=True)
conv_relu = L.ReLU(conv, in_place=True)
return conv, conv_bn, conv_scale, conv_relu
def conv_bn_scale(bottom, num_output=64, kernel_size=3, stride=1, pad=0):
conv = L.Convolution(bottom, num_output=num_output, kernel_size=kernel_size, stride=stride, pad=pad,
param=[dict(lr_mult=1, decay_mult=1), dict(lr_mult=2, decay_mult=0)],
weight_filler=dict(type='xavier', std=0.01),
bias_filler=dict(type='constant', value=0.2))
conv_bn = L.BatchNorm(conv, use_global_stats=False, in_place=True)
conv_scale = L.Scale(conv, scale_param=dict(bias_term=True), in_place=True)
return conv, conv_bn, conv_scale
def eltwize_relu(bottom1, bottom2):
residual_eltwise = L.Eltwise(bottom1, bottom2, eltwise_param=dict(operation=1))
residual_eltwise_relu = L.ReLU(residual_eltwise, in_place=True)
return residual_eltwise, residual_eltwise_relu
def residual_branch(bottom, base_output=64):
"""
input:4*base_output x n x n
output:4*base_output x n x n
:param base_output: base num_output of branch2
:param bottom: bottom layer
:return: layers
"""
branch2a, branch2a_bn, branch2a_scale, branch2a_relu = \
conv_bn_scale_relu(bottom, num_output=base_output, kernel_size=1) # base_output x n x n
branch2b, branch2b_bn, branch2b_scale, branch2b_relu = \
conv_bn_scale_relu(branch2a, num_output=base_output, kernel_size=3, pad=1) # base_output x n x n
branch2c, branch2c_bn, branch2c_scale = \
conv_bn_scale(branch2b, num_output=4 * base_output, kernel_size=1) # 4*base_output x n x n
residual, residual_relu = \
eltwize_relu(bottom, branch2c) # 4*base_output x n x n
return branch2a, branch2a_bn, branch2a_scale, branch2a_relu, branch2b, branch2b_bn, branch2b_scale, branch2b_relu, \
branch2c, branch2c_bn, branch2c_scale, residual, residual_relu
def residual_branch_shortcut(bottom, stride=2, base_output=64):
"""
:param stride: stride
:param base_output: base num_output of branch2
:param bottom: bottom layer
:return: layers
"""
branch1, branch1_bn, branch1_scale = \
conv_bn_scale(bottom, num_output=4 * base_output, kernel_size=1, stride=stride)
branch2a, branch2a_bn, branch2a_scale, branch2a_relu = \
conv_bn_scale_relu(bottom, num_output=base_output, kernel_size=1, stride=stride)
branch2b, branch2b_bn, branch2b_scale, branch2b_relu = \
conv_bn_scale_relu(branch2a, num_output=base_output, kernel_size=3, pad=1)
branch2c, branch2c_bn, branch2c_scale = \
conv_bn_scale(branch2b, num_output=4 * base_output, kernel_size=1)
residual, residual_relu = \
eltwize_relu(branch1, branch2c) # 4*base_output x n x n
return branch1, branch1_bn, branch1_scale, branch2a, branch2a_bn, branch2a_scale, branch2a_relu, branch2b, \
branch2b_bn, branch2b_scale, branch2b_relu, branch2c, branch2c_bn, branch2c_scale, residual, residual_relu
branch_shortcut_string = 'n.res(stage)a_branch1, n.res(stage)a_branch1_bn, n.res(stage)a_branch1_scale, \
n.res(stage)a_branch2a, n.res(stage)a_branch2a_bn, n.res(stage)a_branch2a_scale, n.res(stage)a_branch2a_relu, \
n.res(stage)a_branch2b, n.res(stage)a_branch2b_bn, n.res(stage)a_branch2b_scale, n.res(stage)a_branch2b_relu, \
n.res(stage)a_branch2c, n.res(stage)a_branch2c_bn, n.res(stage)a_branch2c_scale, n.res(stage)a, n.res(stage)a_relu = \
residual_branch_shortcut((bottom), stride=(stride), base_output=(num))'
branch_string = 'n.res(stage)b(order)_branch2a, n.res(stage)b(order)_branch2a_bn, n.res(stage)b(order)_branch2a_scale, \
n.res(stage)b(order)_branch2a_relu, n.res(stage)b(order)_branch2b, n.res(stage)b(order)_branch2b_bn, \
n.res(stage)b(order)_branch2b_scale, n.res(stage)b(order)_branch2b_relu, n.res(stage)b(order)_branch2c, \
n.res(stage)b(order)_branch2c_bn, n.res(stage)b(order)_branch2c_scale, n.res(stage)b(order), n.res(stage)b(order)_relu = \
residual_branch((bottom), base_output=(num))'
class ResNet(object):
def __init__(self, lmdb_train, lmdb_test, num_output):
self.train_data = lmdb_train
self.test_data = lmdb_test
self.classifier_num = num_output
def resnet_layers_proto(self, batch_size, phase='TRAIN', stages=(3, 4, 6, 3)):
"""
:param batch_size: the batch_size of train and test phase
:param phase: TRAIN or TEST
:param stages: the num of layers = 2 + 3*sum(stages), layers would better be chosen from [50, 101, 152]
{every stage is composed of 1 residual_branch_shortcut module and stage[i]-1 residual_branch
modules, each module consists of 3 conv layers}
(3, 4, 6, 3) for 50 layers; (3, 4, 23, 3) for 101 layers; (3, 8, 36, 3) for 152 layers
"""
n = caffe.NetSpec()
if phase == 'TRAIN':
source_data = self.train_data
mirror = True
else:
source_data = self.test_data
mirror = False
n.data, n.label = L.Data(source=source_data, backend=P.Data.LMDB, batch_size=batch_size, ntop=2,
transform_param=dict(crop_size=224, mean_value=[104, 117, 123], mirror=mirror))
n.conv1, n.conv1_bn, n.conv1_scale, n.conv1_relu = \
conv_bn_scale_relu(n.data, num_output=64, kernel_size=7, stride=2, pad=3) # 64x112x112
n.pool1 = L.Pooling(n.conv1, kernel_size=3, stride=2, pool=P.Pooling.MAX) # 64x56x56
for num in xrange(len(stages)): # num = 0, 1, 2, 3
for i in xrange(stages[num]):
if i == 0:
stage_string = branch_shortcut_string
bottom_string = ['n.pool1', 'n.res2b%s' % str(stages[0] - 1), 'n.res3b%s' % str(stages[1] - 1),
'n.res4b%s' % str(stages[2] - 1)][num]
else:
stage_string = branch_string
if i == 1:
bottom_string = 'n.res%sa' % str(num + 2)
else:
bottom_string = 'n.res%sb%s' % (str(num + 2), str(i - 1))
exec (stage_string.replace('(stage)', str(num + 2)).replace('(bottom)', bottom_string).
replace('(num)', str(2 ** num * 64)).replace('(order)', str(i)).
replace('(stride)', str(int(num > 0) + 1)))
exec 'n.pool5 = L.Pooling((bottom), pool=P.Pooling.AVE, global_pooling=True)'.\
replace('(bottom)', 'n.res5b%s' % str(stages[3] - 1))
n.classifier = L.InnerProduct(n.pool5, num_output=self.classifier_num,
param=[dict(lr_mult=1, decay_mult=1), dict(lr_mult=2, decay_mult=0)],
weight_filler=dict(type='xavier'),
bias_filler=dict(type='constant', value=0))
n.loss = L.SoftmaxWithLoss(n.classifier, n.label)
if phase == 'TRAIN':
pass
else:
n.accuracy_top1 = L.Accuracy(n.classifier, n.label, include=dict(phase=1))
n.accuracy_top5 = L.Accuracy(n.classifier, n.label, include=dict(phase=1),
accuracy_param=dict(top_k=5))
return n.to_proto()
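# Usage sketch (illustrative; the LMDB paths, class count and batch size are
# assumptions, not part of the original file):
if __name__ == '__main__':
    resnet = ResNet('train_lmdb', 'test_lmdb', num_output=1000)
    with open('resnet50_train.prototxt', 'w') as f:
        f.write(str(resnet.resnet_layers_proto(batch_size=32, phase='TRAIN')))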
| 51.363057 | 130 | 0.637773 | 1,171 | 8,064 | 4.156277 | 0.140051 | 0.025478 | 0.053626 | 0.032874 | 0.61804 | 0.505034 | 0.428806 | 0.399219 | 0.39285 | 0.368399 | 0 | 0.042083 | 0.242684 | 8,064 | 156 | 131 | 51.692308 | 0.754871 | 0.017609 | 0 | 0.295238 | 0 | 0.085714 | 0.032696 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.009524 | 0.028571 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2cb740471359d11f904828cc498bfc6b7c07a43b | 1,247 | py | Python | Prep/bbphone.py | armsky/Algorithms | 04fe858f001d7418f8e0eab454b779fe1e863483 | [
"Apache-2.0"
] | null | null | null | Prep/bbphone.py | armsky/Algorithms | 04fe858f001d7418f8e0eab454b779fe1e863483 | [
"Apache-2.0"
] | null | null | null | Prep/bbphone.py | armsky/Algorithms | 04fe858f001d7418f8e0eab454b779fe1e863483 | [
"Apache-2.0"
] | 2 | 2019-06-27T09:05:07.000Z | 2019-07-01T04:41:53.000Z | # The original pad mixed Java and Python in one .py file; the Java half is
# kept verbatim in a string below so the module still parses as Python.
JAVA_SCRATCHPAD = '''
// This is the text editor interface.
// Anything you type or change here will be seen by the other person in real time.
import java.util.*;
public class HelloWorld {
    public static boolean isSentence(String s, HashSet<String> d) {
        return false;
    }
    public static void main(String[] args) {
        // Prints "Hello, World" to the terminal window.
        HashSet<String> dictionary=new HashSet<String> ();
        dictionary.add("I");
        dictionary.add("LOVE");
        dictionary.add("TO");
        dictionary.add("EAT");
        dictionary.add("TACOS");
        dictionary.add("MEET");
        dictionary.add("ME");
        dictionary.add("THERE");
        String s="ILOVETOEATTACOS";
        //String s="MEETMETHERE";
        System.out.println(isSentence(s,dictionary));
    }
}
'''
def isSentence(s, d):
if not s:
return True
if s in d:
return True
mark = False
for i in range(1, len(s)+1):
if s[0:i] in d:
if isSentence(s[i:], d):
mark = True
return mark
s = "AILOVE"
print isSentence(s,d)
s = "ILOVE"
print isSentence(s,d)
s = "ILOVEA"
print isSentence(s,d)
s="ILOVETOEATTACOS"
print isSentence(s,d)
s="MEETMETHERE"
print isSentence(s,d)
| 22.267857 | 82 | 0.597434 | 164 | 1,247 | 4.542683 | 0.45122 | 0.139597 | 0.096644 | 0.114094 | 0.096644 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003308 | 0.272654 | 1,247 | 55 | 83 | 22.672727 | 0.818082 | 0 | 0 | 0.159091 | 0 | 0 | 0.085806 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.022727 | null | null | 0.136364 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2cb9006dc93f30229a35a9a95092c8065ef3469e | 860 | py | Python | phantastes/urls.py | santeyio/phantastesproject | 5ce1e2cb59e8283fe280e01d0e185be62cd4001a | [
"MIT"
] | null | null | null | phantastes/urls.py | santeyio/phantastesproject | 5ce1e2cb59e8283fe280e01d0e185be62cd4001a | [
"MIT"
] | null | null | null | phantastes/urls.py | santeyio/phantastesproject | 5ce1e2cb59e8283fe280e01d0e185be62cd4001a | [
"MIT"
] | null | null | null | from django.conf import settings
from django.conf.urls import patterns, include, url
from django.conf.urls.static import static
from django.views.generic import TemplateView
from phantastes import views
from django.contrib import admin
urlpatterns = patterns(
"",
url(r"^$", views.index, name="home"),
url(r"^forum/", include('spirit.urls')),
url(r"^admin/", include(admin.site.urls)),
url(r"^account/", include("account.urls")),
url(r"^profile/", include("profiles.urls", namespace="profiles")),
url(r"^polls/", include("polls.urls", namespace="polls")),
url(r"^readings/", include("readings.urls", namespace="readings")),
url(r"^about/$", views.about, name="about"),
url(r'^chat/', include('djangoChat.urls', namespace="djangoChat")),
)
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
| 35.833333 | 76 | 0.696512 | 111 | 860 | 5.369369 | 0.324324 | 0.060403 | 0.07047 | 0.060403 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119767 | 860 | 23 | 77 | 37.391304 | 0.787318 | 0 | 0 | 0 | 0 | 0 | 0.20814 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.315789 | 0 | 0.315789 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
2cc4536cac3f4a836b4d31edbb9c035b10194cbe | 937 | bzl | Python | build/buildflag_header.bzl | Lynskylate/chromium-base-bazel | e68247d002809f0359e28ee7fc6c5c33de93ce9d | [
"BSD-3-Clause"
] | null | null | null | build/buildflag_header.bzl | Lynskylate/chromium-base-bazel | e68247d002809f0359e28ee7fc6c5c33de93ce9d | [
"BSD-3-Clause"
] | null | null | null | build/buildflag_header.bzl | Lynskylate/chromium-base-bazel | e68247d002809f0359e28ee7fc6c5c33de93ce9d | [
"BSD-3-Clause"
] | 1 | 2020-04-30T08:12:46.000Z | 2020-04-30T08:12:46.000Z | # Primitive reimplementation of the buildflag_header scripts used in the gn build
def _buildflag_header_impl(ctx):
content = "// Generated by build/buildflag_header.bzl\n"
content += '// From "' + ctx.attr.name + '"\n'
content += "\n#ifndef %s_h\n" % ctx.attr.name
content += "#define %s_h\n\n" % ctx.attr.name
content += '#include "build/buildflag.h"\n\n'
for key in ctx.attr.flags:
content += "#define BUILDFLAG_INTERNAL_%s() (%s)\n" % (key, ctx.attr.flags[key])
content += "\n#endif // %s_h\n" % ctx.attr.name
ctx.actions.write(output = ctx.outputs.header, content = content)
buildflag_header = rule(
implementation = _buildflag_header_impl,
attrs = {
"flags": attr.string_dict(mandatory = True),
"header": attr.string(mandatory = True),
"header_dir": attr.string(),
},
outputs = {"header": "%{header_dir}%{header}"},
output_to_genfiles = True,
)
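# Example BUILD usage (a sketch; the target, header and flag names are made up):
#   load("//build:buildflag_header.bzl", "buildflag_header")
#   buildflag_header(
#       name = "debug_flags",
#       header = "debug_flags.h",
#       header_dir = "build/",
#       flags = {"ENABLE_LOGGING": "true"},
#   )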
| 39.041667 | 88 | 0.638207 | 124 | 937 | 4.669355 | 0.370968 | 0.072539 | 0.075993 | 0.062176 | 0.093264 | 0.048359 | 0 | 0 | 0 | 0 | 0 | 0 | 0.197439 | 937 | 23 | 89 | 40.73913 | 0.769947 | 0.084312 | 0 | 0 | 1 | 0 | 0.264019 | 0.11215 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0 | 0 | 0.05 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2ccae3f2f4d0694e8447d48bb97cc62a0e4c0a05 | 1,420 | py | Python | Sprint-Challenge/acme_test.py | martinclehman/DS-Unit-3-Sprint-1-Software-Engineering | 7bca22a2b398ee57021bbe7efd66e3d6cd55f527 | [
"MIT"
] | null | null | null | Sprint-Challenge/acme_test.py | martinclehman/DS-Unit-3-Sprint-1-Software-Engineering | 7bca22a2b398ee57021bbe7efd66e3d6cd55f527 | [
"MIT"
] | null | null | null | Sprint-Challenge/acme_test.py | martinclehman/DS-Unit-3-Sprint-1-Software-Engineering | 7bca22a2b398ee57021bbe7efd66e3d6cd55f527 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import unittest
from acme import Product
from acme_report import generate_products, ADJECTIVES, NOUNS
class AcmeProductTests(unittest.TestCase):
"""Making sure Acme products are the tops!"""
def test_default_product_price(self):
"""Test default product price being 10."""
prod = Product('Test Product')
self.assertEqual(prod.price, 10)
def test_default_product_weight(self):
"""Test default product weight being 20."""
prod = Product('Test Product')
self.assertEqual(prod.weight, 20)
def test_stealability_and_explosiveness(self):
prod = Product('Nuclear Weapon', price=1,
weight=1000, flammability=1000000)
self.assertEqual(prod.stealability(), 'Not so stealable...')
self.assertEqual(prod.explode(), '...BABOOM!!')
class AcmeReportTests(unittest.TestCase):
"""Making sure Acme reports are accurate."""
def test_default_num_products(self):
products = generate_products()
self.assertEqual(len(products), 30)
def test_legal_names(self):
products = generate_products()
for product in products:
split = product.name.split(' ')
adjective = split[0]
noun = split[1]
self.assertIn(adjective, ADJECTIVES)
self.assertIn(noun, NOUNS)
if __name__ == '__main__':
unittest.main()
| 30.869565 | 68 | 0.650704 | 158 | 1,420 | 5.683544 | 0.411392 | 0.038976 | 0.080178 | 0.057906 | 0.158129 | 0.091314 | 0.091314 | 0 | 0 | 0 | 0 | 0.022202 | 0.238732 | 1,420 | 45 | 69 | 31.555556 | 0.808511 | 0.122535 | 0 | 0.137931 | 1 | 0 | 0.062857 | 0 | 0 | 0 | 0 | 0 | 0.241379 | 1 | 0.172414 | false | 0 | 0.103448 | 0 | 0.344828 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2cccf7b04a13dc0853d0b836ac33f0e371b7ca36 | 670 | py | Python | login/weibo_with_known_cookie.py | bobjiangps/python-spider-example | 7021dc3052fe1a667b79b810403e8ae3f03253b3 | [
"MIT"
] | null | null | null | login/weibo_with_known_cookie.py | bobjiangps/python-spider-example | 7021dc3052fe1a667b79b810403e8ae3f03253b3 | [
"MIT"
] | 3 | 2021-03-31T19:20:41.000Z | 2022-03-12T01:03:06.000Z | login/weibo_with_known_cookie.py | bobjiangps/python-spider-example | 7021dc3052fe1a667b79b810403e8ae3f03253b3 | [
"MIT"
] | null | null | null | import requests
if __name__ == "__main__":
headers = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
'Accept-Language': 'zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3',
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:73.0) Gecko/20100101 Firefox/73.0',
'Connection': 'keep-alive',
'cookie': 'replace your cookie here' # update text
}
session = requests.Session()
response = session.get('https://weibo.com/2671109275/fans?rightmod=1&wvr=6', headers=headers)
print(response.text)
print(response.status_code) | 41.875 | 145 | 0.652239 | 103 | 670 | 4.15534 | 0.660194 | 0.028037 | 0.014019 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080357 | 0.164179 | 670 | 16 | 146 | 41.875 | 0.683929 | 0.016418 | 0 | 0 | 0 | 0.230769 | 0.577508 | 0.241641 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2ccddb9cadd2a8adb62f81d94b7e34242493a393 | 5,728 | py | Python | src/imagedata/formats/__init__.py | erling6232/imagedata | 69226b317ff43eb52ed48503582e5770bcb47ec4 | [
"MIT"
] | 1 | 2021-09-02T07:20:19.000Z | 2021-09-02T07:20:19.000Z | src/imagedata/formats/__init__.py | erling6232/imagedata | 69226b317ff43eb52ed48503582e5770bcb47ec4 | [
"MIT"
] | 3 | 2018-02-28T09:54:21.000Z | 2022-03-22T10:05:39.000Z | src/imagedata/formats/__init__.py | erling6232/imagedata | 69226b317ff43eb52ed48503582e5770bcb47ec4 | [
"MIT"
] | null | null | null | """This module provides plugins for various imaging formats.
Standard plugins provides support for DICOM and Nifti image file formats.
"""
# Copyright (c) 2013-2018 Erling Andersen, Haukeland University Hospital, Bergen, Norway
import logging
import sys
import numpy as np
logger = logging.getLogger(__name__)
(SORT_ON_SLICE,
SORT_ON_TAG) = range(2)
sort_on_set = {SORT_ON_SLICE, SORT_ON_TAG}
INPUT_ORDER_NONE = 'none'
INPUT_ORDER_TIME = 'time'
INPUT_ORDER_B = 'b'
INPUT_ORDER_FA = 'fa'
INPUT_ORDER_TE = 'te'
INPUT_ORDER_FAULTY = 'faulty'
input_order_set = {INPUT_ORDER_NONE, INPUT_ORDER_TIME, INPUT_ORDER_B, INPUT_ORDER_FA, INPUT_ORDER_TE,
INPUT_ORDER_FAULTY}
class NotImageError(Exception):
pass
class EmptyImageError(Exception):
pass
class UnknownInputError(Exception):
pass
class UnknownTag(Exception):
pass
class NotTimeOrder(Exception):
pass
class CannotSort(Exception):
pass
class SOPInstanceUIDNotFound(Exception):
pass
class FormatPluginNotFound(Exception):
pass
class WriteNotImplemented(Exception):
pass
def sort_on_to_str(sort_on):
if sort_on == SORT_ON_SLICE:
return "SORT_ON_SLICE"
elif sort_on == SORT_ON_TAG:
return "SORT_ON_TAG"
else:
raise (UnknownTag("Unknown numerical sort_on {:d}.".format(sort_on)))
def str_to_sort_on(s):
if s == "slice":
return SORT_ON_SLICE
elif s == "tag":
return SORT_ON_TAG
else:
raise (UnknownTag("Unknown sort_on string {}.".format(s)))
def str_to_dtype(s):
if s == "none":
return None
elif s == "uint8":
return np.uint8
elif s == "uint16":
return np.uint16
elif s == "int16":
return np.int16
elif s == "int":
return np.int16
elif s == "float":
        return np.float64  # np.float was an alias of the builtin float (= float64); removed in NumPy 1.24+
elif s == "float32":
return np.float32
elif s == "float64":
return np.float64
elif s == "double":
return np.double
else:
raise (ValueError("Output data type {} not implemented.".format(s)))
def input_order_to_str(input_order):
if input_order == INPUT_ORDER_NONE:
return "INPUT_ORDER_NONE"
elif input_order == INPUT_ORDER_TIME:
return "INPUT_ORDER_TIME"
elif input_order == INPUT_ORDER_B:
return "INPUT_ORDER_B"
elif input_order == INPUT_ORDER_FA:
return "INPUT_ORDER_FA"
elif input_order == INPUT_ORDER_TE:
return "INPUT_ORDER_TE"
elif input_order == INPUT_ORDER_FAULTY:
return "INPUT_ORDER_FAULTY"
elif issubclass(type(input_order), str):
return input_order
else:
raise (UnknownTag("Unknown numerical input_order {:d}.".format(input_order)))
def input_order_to_dirname_str(input_order):
if input_order == INPUT_ORDER_NONE:
return "none"
elif input_order == INPUT_ORDER_TIME:
return "time"
elif input_order == INPUT_ORDER_B:
return "b"
elif input_order == INPUT_ORDER_FA:
return "fa"
elif input_order == INPUT_ORDER_TE:
return "te"
elif input_order == INPUT_ORDER_FAULTY:
return "faulty"
    elif isinstance(input_order, str):
keepcharacters = ('-', '_', '.', ' ')
return ''.join([c for c in input_order if c.isalnum() or c in keepcharacters]).rstrip()
else:
raise (UnknownTag("Unknown numerical input_order {:d}.".format(input_order)))
def str_to_input_order(s):
if s == "none":
return INPUT_ORDER_NONE
elif s == "time":
return INPUT_ORDER_TIME
elif s == "b":
return INPUT_ORDER_B
elif s == "fa":
return INPUT_ORDER_FA
elif s == "te":
return INPUT_ORDER_TE
elif s == "faulty":
return INPUT_ORDER_FAULTY
    else:
        # Unknown strings are passed through as user-defined input orders.
        return s
def shape_to_str(shape):
"""Convert numpy image shape to printable string
Args:
shape
Returns:
printable shape (str)
Raises:
ValueError: when shape cannot be converted to printable string
"""
if len(shape) == 5:
return "{}x{}tx{}x{}x{}".format(shape[0], shape[1], shape[2], shape[3], shape[4])
elif len(shape) == 4:
return "{}tx{}x{}x{}".format(shape[0], shape[1], shape[2], shape[3])
elif len(shape) == 3:
return "{}x{}x{}".format(shape[0], shape[1], shape[2])
elif len(shape) == 2:
return "{}x{}".format(shape[0], shape[1])
elif len(shape) == 1:
return "{}".format(shape[0])
else:
raise ValueError("Unknown shape")
def get_size(obj, seen=None):
"""Recursively finds size of objects"""
size = sys.getsizeof(obj)
if seen is None:
seen = set()
obj_id = id(obj)
if obj_id in seen:
return 0
# Important mark as seen *before* entering recursion to gracefully handle
# self-referential objects
seen.add(obj_id)
if isinstance(obj, dict):
size += sum([get_size(v, seen) for v in obj.values()])
size += sum([get_size(k, seen) for k in obj.keys()])
elif hasattr(obj, '__dict__'):
size += get_size(obj.__dict__, seen)
elif hasattr(obj, '__iter__') and not isinstance(obj, (str, bytes, bytearray)):
size += sum([get_size(i, seen) for i in obj])
return size
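# Example: get_size({'a': [1, 2, 3]}) adds up sys.getsizeof for the dict, its
# keys and values, and each list element, counting shared objects only once.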
def get_plugins_list():
    """Return the registered format plugins as (name, type, class) tuples."""
    from imagedata import plugins
    return plugins['format'] if 'format' in plugins else []
def find_plugin(ftype):
"""Return plugin for given format type."""
plugins = get_plugins_list()
for pname, ptype, pclass in plugins:
if ptype == ftype:
return pclass()
raise FormatPluginNotFound("Plugin for format {} not found.".format(ftype)) | 26.155251 | 101 | 0.638268 | 771 | 5,728 | 4.516213 | 0.20882 | 0.180931 | 0.059736 | 0.068926 | 0.350373 | 0.321367 | 0.238943 | 0.217691 | 0.115451 | 0.082711 | 0 | 0.011582 | 0.246334 | 5,728 | 219 | 102 | 26.155251 | 0.794997 | 0.107716 | 0 | 0.230769 | 0 | 0 | 0.100633 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064103 | false | 0.057692 | 0.025641 | 0 | 0.423077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
e2bd10babdd8ebe01076d27ea6c764ee6769a395 | 351 | py | Python | CV0101EN-03-image_Region_of_img.py | reddyprasade/Computer-Vision-with-Python | 8eebec61f0fdacb05e122460d6845a32ae506c8f | [
"Apache-2.0"
] | null | null | null | CV0101EN-03-image_Region_of_img.py | reddyprasade/Computer-Vision-with-Python | 8eebec61f0fdacb05e122460d6845a32ae506c8f | [
"Apache-2.0"
] | null | null | null | CV0101EN-03-image_Region_of_img.py | reddyprasade/Computer-Vision-with-Python | 8eebec61f0fdacb05e122460d6845a32ae506c8f | [
"Apache-2.0"
] | null | null | null | # Image ROI (Region of Interest)
import cv2 as cv
img = cv.imread('Photes/messi.jpg')
assert img is not None, 'could not read image'  # cv.imread returns None on failure
cv.imshow('Original Messi_Football', img)
# Copy the 60x60-pixel region containing the ball and paste it at another
# location in the same image; NumPy slicing on images is img[y1:y2, x1:x2].
ball = img[280:340, 330:390]
img[273:333, 100:160] = ball
cv.imshow('Changed Messi_Football', img)
cv.waitKey(0)  # block until a key press so both windows stay open
cv.destroyAllWindows()
"""
import matplotlib.pyplot as plt
data = plt.imread('Photes/messi.jpg')
plt.imshow(data)
plt.show()
"""
| 17.55 | 40 | 0.669516 | 55 | 351 | 4.236364 | 0.581818 | 0.103004 | 0.145923 | 0.171674 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085911 | 0.17094 | 351 | 19 | 41 | 18.473684 | 0.714777 | 0.076923 | 0 | 0 | 0 | 0 | 0.29798 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |