hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3667faea99d9e44cf1ec814efbac09de88c252f2 | 2,255 | py | Python | cassiopeia-diskstore/cassiopeia_diskstore/championgg.py | mrtolkien/cassiopeia-datastores | 1fbc6f9163ec4a5b4efdc892c219b5785f62b274 | [
"MIT"
] | 3 | 2017-11-22T20:38:18.000Z | 2018-09-04T07:48:55.000Z | cassiopeia-diskstore/cassiopeia_diskstore/championgg.py | mrtolkien/cassiopeia-datastores | 1fbc6f9163ec4a5b4efdc892c219b5785f62b274 | [
"MIT"
] | 12 | 2018-06-05T16:08:36.000Z | 2020-11-26T19:16:59.000Z | cassiopeia-diskstore/cassiopeia_diskstore/championgg.py | mrtolkien/cassiopeia-datastores | 1fbc6f9163ec4a5b4efdc892c219b5785f62b274 | [
"MIT"
] | 10 | 2017-11-14T18:59:10.000Z | 2020-09-17T15:18:29.000Z | from typing import Type, TypeVar, MutableMapping, Any, Iterable
from datapipelines import DataSource, DataSink, PipelineContext, Query, validate_query
from cassiopeia_championgg.dto import ChampionGGStatsListDto, ChampionGGStatsDto
from cassiopeia.datastores.uniquekeys import convert_region_to_platform
from .common import SimpleKVDiskService
T = TypeVar("T")
class ChampionGGDiskService(SimpleKVDiskService):
@DataSource.dispatch
def get(self, type: Type[T], query: MutableMapping[str, Any], context: PipelineContext = None) -> T:
pass
@DataSource.dispatch
def get_many(self, type: Type[T], query: MutableMapping[str, Any], context: PipelineContext = None) -> Iterable[T]:
pass
@DataSink.dispatch
def put(self, type: Type[T], item: T, context: PipelineContext = None) -> None:
pass
@DataSink.dispatch
def put_many(self, type: Type[T], items: Iterable[T], context: PipelineContext = None) -> None:
pass
_validate_get_gg_champion_list_query = Query. \
has("patch").as_(str).also. \
can_have("elo").with_default(lambda *args, **kwargs: "PLATINUM_DIAMOND_MASTER_CHALLENGER", supplies_type=str)
@get.register(ChampionGGStatsListDto)
@validate_query(_validate_get_gg_champion_list_query, convert_region_to_platform)
def get_champion_list(self, query: MutableMapping[str, Any], context: PipelineContext = None) -> ChampionGGStatsListDto:
patch = query["patch"]
elo = query["elo"]
key = "{clsname}.{patch}.{elo}".format(clsname=ChampionGGStatsListDto.__name__,
patch=patch,
elo=elo)
data = self._get(key)
data["data"] = [ChampionGGStatsDto(champion) for champion in data["data"]]
return ChampionGGStatsListDto(data)
@put.register(ChampionGGStatsListDto)
def put_champion_list(self, item: ChampionGGStatsListDto, context: PipelineContext = None) -> None:
key = "{clsname}.{patch}.{elo}".format(clsname=ChampionGGStatsListDto.__name__,
patch=item["patch"],
elo=item["elo"])
self._put(key, item)
| 43.365385 | 124 | 0.66031 | 232 | 2,255 | 6.228448 | 0.306034 | 0.091349 | 0.107958 | 0.035986 | 0.347405 | 0.299654 | 0.209689 | 0.174394 | 0.174394 | 0.088581 | 0 | 0 | 0.233259 | 2,255 | 51 | 125 | 44.215686 | 0.835743 | 0 | 0 | 0.25641 | 0 | 0 | 0.050111 | 0.035477 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0.102564 | 0.128205 | 0 | 0.358974 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
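The ChampionGGDiskService above delegates storage to SimpleKVDiskService and derives its cache keys from the DTO class name, patch, and elo. A minimal dict-backed sketch of that key scheme (the store class and helper below are illustrative stand-ins, not the real diskstore API):

```python
class InMemoryKVStore:
    """Toy stand-in for SimpleKVDiskService's _get/_put key-value interface."""

    def __init__(self):
        self._store = {}

    def _put(self, key, item):
        self._store[key] = dict(item)

    def _get(self, key):
        return dict(self._store[key])


def champion_list_key(patch, elo="PLATINUM_DIAMOND_MASTER_CHALLENGER"):
    # Mirrors the "{clsname}.{patch}.{elo}" key format used by both
    # get_champion_list and put_champion_list above.
    return "ChampionGGStatsListDto.{patch}.{elo}".format(patch=patch, elo=elo)
```

Because the key embeds the default elo, a query without an explicit elo always round-trips to the same entry.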
366887a1056798b43b8bf1750a891210075a1524 | 5,788 | py | Python | src/cogs/fun-commands.py | ShubhamPatilsd/johnpeter-discord | 40738f8df57f85275eb4887ab2ed9a9b96ba9e40 | [
"MIT"
] | 1 | 2021-07-08T09:03:08.000Z | 2021-07-08T09:03:08.000Z | src/cogs/fun-commands.py | vidhyanijadala/johnpeter-discord | 40738f8df57f85275eb4887ab2ed9a9b96ba9e40 | [
"MIT"
] | null | null | null | src/cogs/fun-commands.py | vidhyanijadala/johnpeter-discord | 40738f8df57f85275eb4887ab2ed9a9b96ba9e40 | [
"MIT"
] | 1 | 2021-07-08T09:03:04.000Z | 2021-07-08T09:03:04.000Z | import asyncio
import json
import os
import random
import re
import urllib
import urllib.request
from glob import glob
from os import getenv
from random import choice
import discord
from discord.ext import commands
from utils.cms import get_sponsor_intro, get_sponsor_audio
from utils.commands import only_random, require_vc, OnlyAllowedInChannels
class FunCommands(commands.Cog, name="Fun"):
def __init__(self, bot):
self.bot: commands.Bot = bot
self.random_channel = int(getenv("CHANNEL_RANDOM", 689534362760642676))
self.mod_log = int(getenv("CHANNEL_MOD_LOG", 689216590297694211))
# Downloads mp3 files
urls = get_sponsor_audio()
if not os.path.isdir("./cache/sponsorships/"):
os.makedirs("./cache/sponsorships/")
for url in urls:
            file_name = re.sub(r"(h.*/)+", "", url)
            urllib.request.urlretrieve(url, f"./cache/sponsorships/{file_name}")
        file_name = re.sub(r"(h.*/)+", "", get_sponsor_intro())
        urllib.request.urlretrieve(get_sponsor_intro(), f"./cache/{file_name}")
self.sponsorships = []
for file in glob("./cache/sponsorships/*.mp3"):
print(file)
self.sponsorships.append(file)
@commands.Cog.listener()
async def on_message(self, message):
msg = message.content
manyAnimalsRegex = re.compile(f"{await self.bot.get_prefix(message)}(cat|dog)((?:n't)+)")
match = manyAnimalsRegex.match(msg)
if match:
if message.channel.id != int(getenv("CHANNEL_RANDOM", 689534362760642676)): # hacky @only_random replacement
await message.channel.send(f"You can only do that in <#{getenv('CHANNEL_RANDOM', 689534362760642676)}>")
return
            animal, nts = match.group(1, 2)
            animal_commands = ["cat", "dog"]
            command_to_call = animal_commands[(animal_commands.index(animal) + nts.count("n't")) % 2]
await self.bot.get_command(command_to_call)(message.channel)
@commands.command(name="cat",aliases=["kitten", "kitty", "catto"])
@only_random
async def cat(self, ctx):
with urllib.request.urlopen("https://aws.random.cat/meow") as url:
data = json.loads(url.read().decode())
await ctx.send(data.get('file'))
@commands.command(name="doggo",aliases=["dog", "puppy", "pupper"])
@only_random
    async def doggo(self, ctx):
with urllib.request.urlopen("https://dog.ceo/api/breeds/image/random") as url:
data = json.loads(url.read().decode())
await ctx.send(data.get('message'))
@commands.command(name="floof", aliases=["floofer","floofo"])
@only_random
async def floof(self, ctx):
await ctx.invoke(self.bot.get_command(random.choice(['doggo', 'cat'])))
@commands.command(name="bird", aliases=["birb","birdy","birdie"])
@only_random
async def bird(self, ctx):
with urllib.request.urlopen("https://some-random-api.ml/img/birb") as url:
data = json.loads(url.read().decode())
await ctx.send(data.get('link'))
    @commands.command(name="fish", aliases=["cod", "codday", "phish"])
@only_random
async def fish(self, ctx):
fish = ["https://tinyurl.com/s8zadryh", "https://tinyurl.com/v2xsewah", "https://tinyurl.com/hnmdr2we", "https://tinyurl.com/ypbcsa3u"]
await ctx.send(random.choice(fish))
@commands.command(name="triggered", aliases=["mad","angry"])
@only_random
    async def triggered(self, ctx, arg=None):
if arg:
await ctx.send("https://some-random-api.ml/canvas/triggered?avatar={}".format(arg))
else:
await ctx.send("<:revoltLola:829824598178529311> Hey you didn't tell me an image URL!")
@commands.command(name="owo")
@only_random
async def owo(self, ctx):
"""owo"""
await ctx.send(f"owo what's {ctx.author.mention}?")
@commands.command(name="uwu")
@only_random
async def uwu(self, ctx):
"""uwu"""
await ctx.send(f"uwu what's {ctx.author.mention}?")
@commands.command(
name="up-down-up-down-left-right-left-right-b-a-start",
hidden=True,
aliases=["updownupdownleftrightleftrightbastart"],
)
@only_random
async def updownupdownleftrightleftrightbastart(
self, ctx,
):
"""A lot of typing for nothing."""
await ctx.send("wow that's a long cheat code. You win 20 CodeCoin!!")
@commands.command(pass_context=True, aliases=["disconnect"])
async def disconnectvc(self, ctx):
await ctx.message.delete()
vc = ctx.message.guild.voice_client
if vc is None:
await ctx.send("You silly, I'm not in any VCs right now.")
else:
await vc.disconnect()
@commands.command(
name="sponsorship",
aliases=[
"sponsor",
"sponsormessage",
"sponsor-message",
"sponsor_message",
"sponsors",
],
)
@require_vc
async def sponsorship(self, ctx):
"""Says a message from a sponsor."""
retval = os.getcwd()
vc = await ctx.message.author.voice.channel.connect()
try:
file = choice(self.sponsorships)
            intro = discord.FFmpegPCMAudio("./cache/sponsor-intro.mp3")
            sponsor = discord.FFmpegPCMAudio(file)
player = vc.play(intro)
while vc.is_playing():
await asyncio.sleep(1)
player = vc.play(sponsor)
while vc.is_playing():
await asyncio.sleep(1)
finally:
await vc.disconnect()
def setup(bot):
bot.add_cog(FunCommands(bot))
| 35.292683 | 143 | 0.609537 | 698 | 5,788 | 4.975645 | 0.30659 | 0.029945 | 0.054708 | 0.046646 | 0.158077 | 0.116902 | 0.116902 | 0.085805 | 0.04319 | 0.04319 | 0 | 0.023842 | 0.246372 | 5,788 | 163 | 144 | 35.509202 | 0.772352 | 0.008639 | 0 | 0.166667 | 0 | 0.007576 | 0.202231 | 0.067115 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.007576 | 0.106061 | null | null | 0.007576 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
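The on_message handler's cat/dog toggle (each trailing "n't" flips the animal) can be factored into a pure helper; this sketch (the helper name and `!` prefix are illustrative) reproduces the same parity arithmetic:

```python
import re

ANIMALS = ["cat", "dog"]


def resolve_animal_command(message, prefix="!"):
    """Return the command a "!catn't..." style message resolves to, or None."""
    match = re.compile(f"{re.escape(prefix)}(cat|dog)((?:n't)+)").match(message)
    if not match:
        return None
    animal, nts = match.group(1, 2)
    # Each "n't" negates the animal, so the parity of the count decides.
    return ANIMALS[(ANIMALS.index(animal) + nts.count("n't")) % 2]
```

An even number of "n't" suffixes lands back on the original animal, an odd number on the other one.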
366e02860b9652ab88acf006189e3756c87e9843 | 575 | py | Python | logger/hum_test3.py | scsibug/Raspberry-Pi-Sensor-Node | 606cf2a15a72ac1503c7318a39c9f3cc523a9c4a | [
"Unlicense"
] | 1 | 2015-12-23T04:27:16.000Z | 2015-12-23T04:27:16.000Z | logger/hum_test3.py | scsibug/Raspberry-Pi-Sensor-Node | 606cf2a15a72ac1503c7318a39c9f3cc523a9c4a | [
"Unlicense"
] | null | null | null | logger/hum_test3.py | scsibug/Raspberry-Pi-Sensor-Node | 606cf2a15a72ac1503c7318a39c9f3cc523a9c4a | [
"Unlicense"
] | null | null | null | # this example came from http://www.raspberrypi.org/phpBB3/viewtopic.php?f=32&t=29454&sid=4543fbd8f48478644e608d741309c12b&start=25
import smbus
import time
b = smbus.SMBus(1)
d = []
addr = 0x27
b.write_quick(addr)
time.sleep(0.05)
d = b.read_i2c_block_data(addr, 0,4)
status = (d[0] & 0xc0) >> 6
humidity = (((d[0] & 0x3f) << 8) + d[1])*100/16383
tempC = ((d[2] << 6) + ((d[3] & 0xfc) >> 2))*165/16383 - 40
tempF = tempC*9/5 + 32
print("Data: ", "%02x " * len(d) % tuple(d))
print("Status: ", status)
print("Humidity: ", humidity, "%")
print("Temperature:", tempF, "F")
| 31.944444 | 131 | 0.634783 | 94 | 575 | 3.840426 | 0.62766 | 0.01108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168724 | 0.154783 | 575 | 17 | 132 | 33.823529 | 0.574074 | 0.224348 | 0 | 0 | 0 | 0 | 0.123874 | 0 | 0 | 0 | 0.036036 | 0 | 0 | 0 | null | null | 0 | 0.125 | null | null | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
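The bit-twiddling above follows the Honeywell HumidIcon (HIH6130-class) register layout referenced in the linked forum thread: two status bits, a 14-bit humidity word, and a 14-bit temperature word across four bytes. Pulling the math into a pure function (the function name is ours) makes it testable without the sensor:

```python
def decode_humidicon(d):
    """Decode a 4-byte I2C reading into (status, %RH, deg C, deg F)."""
    status = (d[0] & 0xC0) >> 6                                          # 2 status bits
    humidity = (((d[0] & 0x3F) << 8) | d[1]) * 100 / 16383               # 14-bit humidity
    temp_c = (((d[2] << 6) | ((d[3] & 0xFC) >> 2)) * 165 / 16383) - 40   # 14-bit temperature
    temp_f = temp_c * 9 / 5 + 32
    return status, humidity, temp_c, temp_f
```

All-zero bytes decode to 0 %RH and -40 °C (the bottom of the sensor's range), and all-ones counts decode to 100 %RH and 125 °C (the top).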
366e485c9bc0322e636a5736840956b42775c986 | 4,493 | py | Python | vendor/python-pika/tests/frame_tests.py | suthat/signal | 9730f7ee1a3b00a65eb4d9a2cce4f3a5eee33451 | [
"Apache-2.0"
] | 1 | 2018-09-02T22:28:56.000Z | 2018-09-02T22:28:56.000Z | vendor/python-pika/tests/frame_tests.py | suthat/signal | 9730f7ee1a3b00a65eb4d9a2cce4f3a5eee33451 | [
"Apache-2.0"
] | null | null | null | vendor/python-pika/tests/frame_tests.py | suthat/signal | 9730f7ee1a3b00a65eb4d9a2cce4f3a5eee33451 | [
"Apache-2.0"
] | null | null | null | """
Tests for pika.frame
"""
try:
import unittest2 as unittest
except ImportError:
import unittest
from pika import exceptions
from pika import frame
from pika import spec
class FrameTests(unittest.TestCase):
BASIC_ACK = ('\x01\x00\x01\x00\x00\x00\r\x00<\x00P\x00\x00\x00\x00\x00\x00'
'\x00d\x00\xce')
BODY_FRAME = '\x03\x00\x01\x00\x00\x00\x14I like it that sound\xce'
BODY_FRAME_VALUE = 'I like it that sound'
CONTENT_HEADER = ('\x02\x00\x01\x00\x00\x00\x0f\x00<\x00\x00\x00'
'\x00\x00\x00\x00\x00\x00d\x10\x00\x02\xce')
HEARTBEAT = '\x08\x00\x00\x00\x00\x00\x00\xce'
PROTOCOL_HEADER = 'AMQP\x00\x00\t\x01'
def frame_marshal_not_implemented_test(self):
frame_obj = frame.Frame(0x000A000B, 1)
self.assertRaises(NotImplementedError, frame_obj.marshal)
def frame_underscore_marshal_test(self):
basic_ack = frame.Method(1, spec.Basic.Ack(100))
self.assertEqual(basic_ack.marshal(), self.BASIC_ACK)
def headers_marshal_test(self):
header = frame.Header(1, 100,
spec.BasicProperties(delivery_mode=2))
self.assertEqual(header.marshal(), self.CONTENT_HEADER)
def body_marshal_test(self):
body = frame.Body(1, 'I like it that sound')
self.assertEqual(body.marshal(), self.BODY_FRAME)
def heartbeat_marshal_test(self):
heartbeat = frame.Heartbeat()
self.assertEqual(heartbeat.marshal(), self.HEARTBEAT)
def protocol_header_marshal_test(self):
protocol_header = frame.ProtocolHeader()
self.assertEqual(protocol_header.marshal(), self.PROTOCOL_HEADER)
def decode_protocol_header_instance_test(self):
self.assertIsInstance(frame.decode_frame(self.PROTOCOL_HEADER)[1],
frame.ProtocolHeader)
def decode_protocol_header_bytes_test(self):
self.assertEqual(frame.decode_frame(self.PROTOCOL_HEADER)[0], 8)
def decode_method_frame_instance_test(self):
self.assertIsInstance(frame.decode_frame(self.BASIC_ACK)[1],
frame.Method)
def decode_protocol_header_failure_test(self):
self.assertEqual(frame.decode_frame('AMQPa'), (0, None))
def decode_method_frame_bytes_test(self):
self.assertEqual(frame.decode_frame(self.BASIC_ACK)[0], 21)
def decode_method_frame_method_test(self):
self.assertIsInstance(frame.decode_frame(self.BASIC_ACK)[1].method,
spec.Basic.Ack)
def decode_header_frame_instance_test(self):
self.assertIsInstance(frame.decode_frame(self.CONTENT_HEADER)[1],
frame.Header)
def decode_header_frame_bytes_test(self):
self.assertEqual(frame.decode_frame(self.CONTENT_HEADER)[0], 23)
def decode_header_frame_properties_test(self):
frame_value = frame.decode_frame(self.CONTENT_HEADER)[1]
self.assertIsInstance(frame_value.properties, spec.BasicProperties)
def decode_frame_decoding_failure_test(self):
self.assertEqual(frame.decode_frame('\x01\x00\x01\x00\x00\xce'),
(0, None))
def decode_frame_decoding_no_end_byte_test(self):
self.assertEqual(frame.decode_frame(self.BASIC_ACK[:-1]), (0, None))
def decode_frame_decoding_wrong_end_byte_test(self):
self.assertRaises(exceptions.InvalidFrameError,
frame.decode_frame,
self.BASIC_ACK[:-1] + 'A')
def decode_body_frame_instance_test(self):
self.assertIsInstance(frame.decode_frame(self.BODY_FRAME)[1],
frame.Body)
def decode_body_frame_fragment_test(self):
self.assertEqual(frame.decode_frame(self.BODY_FRAME)[1].fragment,
self.BODY_FRAME_VALUE)
def decode_body_frame_fragment_consumed_bytes_test(self):
self.assertEqual(frame.decode_frame(self.BODY_FRAME)[0], 28)
def decode_heartbeat_frame_test(self):
self.assertIsInstance(frame.decode_frame(self.HEARTBEAT)[1],
frame.Heartbeat)
def decode_heartbeat_frame_bytes_consumed_test(self):
self.assertEqual(frame.decode_frame(self.HEARTBEAT)[0], 8)
def decode_frame_invalid_frame_type_test(self):
self.assertRaises(exceptions.InvalidFrameError,
frame.decode_frame,
'\x09\x00\x00\x00\x00\x00\x00\xce')
| 38.401709 | 79 | 0.672157 | 565 | 4,493 | 5.084956 | 0.159292 | 0.064741 | 0.068918 | 0.062652 | 0.482423 | 0.420118 | 0.387052 | 0.337974 | 0.26488 | 0.180299 | 0 | 0.049771 | 0.221901 | 4,493 | 116 | 80 | 38.732759 | 0.772025 | 0.004451 | 0 | 0.047619 | 0 | 0.02381 | 0.081317 | 0.058916 | 0 | 0 | 0.00224 | 0 | 0.285714 | 1 | 0.285714 | false | 0 | 0.071429 | 0 | 0.440476 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
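The marshalled byte strings in the fixtures above all follow the AMQP 0-9-1 general frame layout: a 1-byte frame type, 2-byte channel, 4-byte payload size, the payload, then the frame-end octet 0xCE. A minimal parser (a sketch of the layout, not pika's actual decode_frame) shows why BASIC_ACK consumes 21 bytes and HEARTBEAT 8:

```python
import struct

FRAME_END = 0xCE


def split_frame(data: bytes):
    """Split one AMQP 0-9-1 frame into (frame_type, channel, payload)."""
    frame_type, channel, size = struct.unpack(">BHI", data[:7])
    if data[7 + size] != FRAME_END:
        raise ValueError("missing frame-end octet")
    return frame_type, channel, data[7:7 + size]
```

The consumed-byte counts asserted in the tests are always 7 (header) + payload size + 1 (end octet).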
366f7150d8d059f7a7975d57fa08b372b8cccd5e | 2,126 | py | Python | hunter/mlops/experiments/experiment.py | akamlani/hunter-core | 98b08b69fc460eb4cb75e67a36f86e80e03029bd | [
"MIT"
] | null | null | null | hunter/mlops/experiments/experiment.py | akamlani/hunter-core | 98b08b69fc460eb4cb75e67a36f86e80e03029bd | [
"MIT"
] | null | null | null | hunter/mlops/experiments/experiment.py | akamlani/hunter-core | 98b08b69fc460eb4cb75e67a36f86e80e03029bd | [
"MIT"
] | null | null | null | import numpy as np
import os
import time
import shutil
import logging
logger = logging.getLogger(__name__)
class Experiment(object):
"""Create experiment directory structure, track, and store data.
To be used as a base class or derived `Experiment` classes.
"""
    def __init__(self, project_name: str, experiment_dir: str = "dev-platform/experiments/snapshots", tags: dict = None) -> None:
"""Initialize Experiment class instance.
Args:
project_name (str): Name of Project
experiment_dir (str, optional): Path of location to experiment snapshots
            tags (dict, optional): Mapping of tag names to values to associate with the experiment
"""
super().__init__()
# copy directory structure to project name
user_home = os.getenv("HOME")
experiment_dir = os.path.join(user_home, experiment_dir)
template_dir = os.path.join(experiment_dir, "template")
project_dir = os.path.join(experiment_dir, project_name)
self._project_dir = project_dir
self._export_dir = os.path.join(self._project_dir, "exports")
self._model_dir = os.path.join(self._export_dir, "artifacts")
self._copy_dir_template(template_dir, project_dir)
        self._tags = {k: v for k, v in tags.items()} if tags else {}
self._name = project_name
self._id = -1
def list_project_dir(self):
print( os.listdir(self._project_dir) )
@property
def name(self):
return self._name
@property
def experiment_id(self):
return self._id
@property
def tags(self):
return self._tags
def add_data_uri(self, data_uri):
self.datasrc_uri = data_uri
def add_vocab_uri(self, vocab_uri):
self.vocabsrc_uri = vocab_uri
def _add_tag(self, tag):
self._tags[tag.key] = tag.value
def _copy_dir_template(self, src, dest):
if os.path.exists(dest):
shutil.rmtree(dest)
shutil.copytree(src, dest)
def __repr__(self):
return f"{self.name}"
| 27.973684 | 122 | 0.62841 | 272 | 2,126 | 4.643382 | 0.334559 | 0.055424 | 0.035629 | 0.051465 | 0.068092 | 0.041172 | 0 | 0 | 0 | 0 | 0 | 0.001297 | 0.274694 | 2,126 | 75 | 123 | 28.346667 | 0.817769 | 0.193791 | 0 | 0.069767 | 0 | 0 | 0.044242 | 0.020606 | 0 | 0 | 0 | 0 | 0 | 1 | 0.232558 | false | 0 | 0.116279 | 0.093023 | 0.465116 | 0.023256 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
36720adea01a0055ba50050d6d0540ddb952c604 | 2,926 | py | Python | nc/tests/test_views.py | OpenDataPolicingNC/Traffic-Stops | 74e0d16ad2ac32addca6f04d34c2ddf36d023990 | [
"MIT"
] | 25 | 2015-09-12T23:10:52.000Z | 2021-03-24T08:39:46.000Z | nc/tests/test_views.py | OpenDataPolicingNC/Traffic-Stops | 74e0d16ad2ac32addca6f04d34c2ddf36d023990 | [
"MIT"
] | 159 | 2015-07-01T03:57:23.000Z | 2021-04-17T21:09:19.000Z | nc/tests/test_views.py | copelco/NC-Traffic-Stops | 74e0d16ad2ac32addca6f04d34c2ddf36d023990 | [
"MIT"
] | 8 | 2015-10-02T16:56:40.000Z | 2020-10-18T01:16:29.000Z | from django.core.urlresolvers import reverse
from django.test import TestCase
from nc.tests import factories
class ViewTests(TestCase):
multi_db = True
def test_home(self):
response = self.client.get(reverse('nc:home'))
self.assertEqual(200, response.status_code)
def test_search(self):
response = self.client.get(reverse('nc:stops-search'))
self.assertEqual(200, response.status_code)
def test_agency_detail(self):
agency = factories.AgencyFactory(name="Durham")
response = self.client.get(reverse('nc:agency-detail', args=[agency.pk]))
self.assertEqual(200, response.status_code)
def test_agency_list(self):
response = self.client.get(reverse('nc:agency-list'))
self.assertEqual(200, response.status_code)
def test_agency_list_sorted_agencies(self):
"""
Verify that agencies are delivered in an appropriately sorted and
chunked form.
"""
factories.AgencyFactory(name="Abc")
factories.AgencyFactory(name="Def")
factories.AgencyFactory(name="Ghi")
factories.AgencyFactory(name="Abc_")
factories.AgencyFactory(name="Def_")
factories.AgencyFactory(name="Ghi_")
factories.AgencyFactory(name="Abc__")
factories.AgencyFactory(name="Def__")
factories.AgencyFactory(name="Ghi__")
factories.AgencyFactory(name="Abc___")
factories.AgencyFactory(name="Def___")
factories.AgencyFactory(name="Ghi___")
factories.AgencyFactory(name="Abc____")
factories.AgencyFactory(name="Def____")
factories.AgencyFactory(name="Ghi____")
response = self.client.get(reverse('nc:agency-list'))
sorted_agencies = response.context['sorted_agencies']
# Verify that there are three alphabetic categories
self.assertEqual(3, len(sorted_agencies))
keys = [pair[0] for pair in sorted_agencies]
# Verify that the relevant letters are in there
self.assertTrue("A" in keys)
self.assertTrue("D" in keys)
self.assertTrue("G" in keys)
# Verify that each alphabetic category contains three chunks
# with the appropriate number of pieces (i.e. 2, 2, 1)
for (letter, chunks) in sorted_agencies:
self.assertEqual(3, len(chunks))
self.assertEqual(2, len(chunks[0]))
self.assertEqual(2, len(chunks[1]))
self.assertEqual(1, len(chunks[2]))
def test_homepage_find_a_stop(self):
"""Test Find a Stop form is present on NC homepage"""
response = self.client.get(reverse('md:home'))
# make sure form is in context
self.assertTrue('find_a_stop_form' in response.context)
form = response.context['find_a_stop_form']
# make sure required agency field label is present
self.assertContains(response, form['agency'].label)
| 39.013333 | 81 | 0.66473 | 348 | 2,926 | 5.416667 | 0.272989 | 0.186737 | 0.22069 | 0.066844 | 0.476923 | 0.435544 | 0.435544 | 0.378249 | 0.312997 | 0.287003 | 0 | 0.010577 | 0.224539 | 2,926 | 74 | 82 | 39.540541 | 0.820185 | 0.14149 | 0 | 0.117647 | 0 | 0 | 0.08502 | 0 | 0 | 0 | 0 | 0 | 0.27451 | 1 | 0.117647 | false | 0 | 0.058824 | 0 | 0.215686 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
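The test_agency_list_sorted_agencies test expects one entry per starting letter, each split into three chunks sized 2/2/1 for five agencies. A guess at the shape of the view's sorted_agencies logic consistent with those assertions (the real view may differ):

```python
import math
from itertools import groupby


def sorted_agencies(names, n_chunks=3):
    """Group names by first letter, split each group into n_chunks chunks."""
    result = []
    for letter, group in groupby(sorted(names), key=lambda n: n[0].upper()):
        members = list(group)
        size = math.ceil(len(members) / n_chunks)
        chunks = [members[i:i + size] for i in range(0, len(members), size)]
        chunks += [[] for _ in range(n_chunks - len(chunks))]  # pad to n_chunks
        result.append((letter, chunks))
    return result
```

Five names in a letter group yield chunks of 2, 2, and 1, matching the assertions in the test.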
3674bf513c78cb14c7a7197f8717bd835994b08c | 9,730 | py | Python | config/settings/defaults.py | lucaluca/palimpsest | 64565d1b188d68bc978253a21a98440b769e26ee | [
"BSD-3-Clause"
] | null | null | null | config/settings/defaults.py | lucaluca/palimpsest | 64565d1b188d68bc978253a21a98440b769e26ee | [
"BSD-3-Clause"
] | null | null | null | config/settings/defaults.py | lucaluca/palimpsest | 64565d1b188d68bc978253a21a98440b769e26ee | [
"BSD-3-Clause"
] | null | null | null | """
Base settings to build other settings files upon.
"""
import os
import environ
ROOT_DIR = environ.Path(__file__) - 3  # (palimpsest/config/settings/defaults.py - 3 = palimpsest/)
APPS_DIR = ROOT_DIR.path('palimpsest')
env = environ.Env()
READ_DOT_ENV_FILE = env.bool('DJANGO_READ_DOT_ENV_FILE', default=False)
if READ_DOT_ENV_FILE:
# OS environment variables take precedence over variables from .env
env.read_env(str(ROOT_DIR.path('.env')))
# GENERAL
# ------------------------------------------------------------------------------
DEBUG = env.bool('DJANGO_DEBUG', False)
TIME_ZONE = 'America/Chicago'  # 'CST' is not a valid IANA zone name
LANGUAGE_CODE = 'en-us'
SITE_ID = 1
USE_I18N = False
USE_L10N = True
USE_TZ = True
# DATABASES
# ------------------------------------------------------------------------------
# SQLite by default
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(APPS_DIR, 'db.sqlite3'),
}
}
# URLS
# ------------------------------------------------------------------------------
# https://docs.djangoproject.com/en/dev/ref/settings/#root-urlconf
ROOT_URLCONF = 'config.urls'
WSGI_APPLICATION = 'config.wsgi.application'
# APPS
# ------------------------------------------------------------------------------
DJANGO_APPS = [
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.humanize',
'django.contrib.admin',
]
THIRD_PARTY_APPS = [
'crispy_forms',
'allauth',
'allauth.account',
'allauth.socialaccount',
'rest_framework',
]
LOCAL_APPS = [
'users',
]
INSTALLED_APPS = DJANGO_APPS + THIRD_PARTY_APPS + LOCAL_APPS
# MIGRATIONS
# ------------------------------------------------------------------------------
# https://docs.djangoproject.com/en/dev/ref/settings/#migration-modules
MIGRATION_MODULES = {
'sites': 'palimpsest.contrib.sites.migrations'
}
# AUTHENTICATION
# ------------------------------------------------------------------------------
# https://docs.djangoproject.com/en/dev/ref/settings/#authentication-backends
AUTHENTICATION_BACKENDS = [
'django.contrib.auth.backends.ModelBackend',
'allauth.account.auth_backends.AuthenticationBackend',
]
AUTH_USER_MODEL = 'users.User'
LOGIN_REDIRECT_URL = 'users:redirect'
LOGIN_URL = 'account_login'
# PASSWORDS
# ------------------------------------------------------------------------------
# https://docs.djangoproject.com/en/dev/ref/settings/#password-hashers
PASSWORD_HASHERS = [
# https://docs.djangoproject.com/en/dev/topics/auth/passwords/#using-argon2-with-django
'django.contrib.auth.hashers.Argon2PasswordHasher',
'django.contrib.auth.hashers.PBKDF2PasswordHasher',
'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher',
'django.contrib.auth.hashers.BCryptSHA256PasswordHasher',
'django.contrib.auth.hashers.BCryptPasswordHasher',
]
# https://docs.djangoproject.com/en/dev/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{ 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', },
{ 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', },
{ 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', },
{ 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', },
]
# MIDDLEWARE
# ------------------------------------------------------------------------------
# https://docs.djangoproject.com/en/dev/ref/settings/#middleware
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
# STATIC
# ------------------------------------------------------------------------------
# https://docs.djangoproject.com/en/dev/ref/settings/#static-root
STATIC_ROOT = str(ROOT_DIR('staticfiles'))
# https://docs.djangoproject.com/en/dev/ref/settings/#static-url
STATIC_URL = '/static/'
# https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/#std:setting-STATICFILES_DIRS
STATICFILES_DIRS = [
str(APPS_DIR.path('static')),
]
# https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/#staticfiles-finders
STATICFILES_FINDERS = [
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
]
# MEDIA
# ------------------------------------------------------------------------------
# https://docs.djangoproject.com/en/dev/ref/settings/#media-root
MEDIA_ROOT = str(APPS_DIR('media'))
# https://docs.djangoproject.com/en/dev/ref/settings/#media-url
MEDIA_URL = '/media/'
# TEMPLATES
# ------------------------------------------------------------------------------
# https://docs.djangoproject.com/en/dev/ref/settings/#templates
TEMPLATES = [
{
# https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-TEMPLATES-BACKEND
'BACKEND': 'django.template.backends.django.DjangoTemplates',
# https://docs.djangoproject.com/en/dev/ref/settings/#template-dirs
'DIRS': [
str(APPS_DIR.path('templates')),
],
'OPTIONS': {
# https://docs.djangoproject.com/en/dev/ref/settings/#template-debug
'debug': DEBUG,
# https://docs.djangoproject.com/en/dev/ref/settings/#template-loaders
# https://docs.djangoproject.com/en/dev/ref/templates/api/#loader-types
'loaders': [
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader',
],
# https://docs.djangoproject.com/en/dev/ref/settings/#template-context-processors
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.template.context_processors.i18n',
'django.template.context_processors.media',
'django.template.context_processors.static',
'django.template.context_processors.tz',
'django.contrib.messages.context_processors.messages',
],
},
},
]
# http://django-crispy-forms.readthedocs.io/en/latest/install.html#template-packs
CRISPY_TEMPLATE_PACK = 'bootstrap4'
# FIXTURES
# ------------------------------------------------------------------------------
# https://docs.djangoproject.com/en/dev/ref/settings/#fixture-dirs
FIXTURE_DIRS = (
str(APPS_DIR.path('fixtures')),
)
# EMAIL
# ------------------------------------------------------------------------------
# https://docs.djangoproject.com/en/dev/ref/settings/#email-backend
EMAIL_BACKEND = env('DJANGO_EMAIL_BACKEND', default='django.core.mail.backends.smtp.EmailBackend')
# ADMIN
# ------------------------------------------------------------------------------
# Django Admin URL.
ADMIN_URL = env('DJANGO_ADMIN_URL', default='admin/')
# https://docs.djangoproject.com/en/dev/ref/settings/#admins
ADMINS = [
("""Jonathan Giuffrida""", 'me@jcgiuffrida.com'),
]
# https://docs.djangoproject.com/en/dev/ref/settings/#managers
MANAGERS = ADMINS
# Celery
# ------------------------------------------------------------------------------
INSTALLED_APPS += ['palimpsest.taskapp.celery.CeleryAppConfig']
if USE_TZ:
# http://docs.celeryproject.org/en/latest/userguide/configuration.html#std:setting-timezone
CELERY_TIMEZONE = TIME_ZONE
# http://docs.celeryproject.org/en/latest/userguide/configuration.html#std:setting-broker_url
CELERY_BROKER_URL = env('CELERY_BROKER_URL')
# http://docs.celeryproject.org/en/latest/userguide/configuration.html#std:setting-result_backend
CELERY_RESULT_BACKEND = CELERY_BROKER_URL
# http://docs.celeryproject.org/en/latest/userguide/configuration.html#std:setting-accept_content
CELERY_ACCEPT_CONTENT = ['json']
# http://docs.celeryproject.org/en/latest/userguide/configuration.html#std:setting-task_serializer
CELERY_TASK_SERIALIZER = 'json'
# http://docs.celeryproject.org/en/latest/userguide/configuration.html#std:setting-result_serializer
CELERY_RESULT_SERIALIZER = 'json'
# http://docs.celeryproject.org/en/latest/userguide/configuration.html#task-time-limit
# TODO: set to whatever value is adequate in your circumstances
CELERYD_TASK_TIME_LIMIT = 5 * 60
# http://docs.celeryproject.org/en/latest/userguide/configuration.html#task-soft-time-limit
# TODO: set to whatever value is adequate in your circumstances
CELERYD_TASK_SOFT_TIME_LIMIT = 60
# DJANGO-ALLAUTH
# https://django-allauth.readthedocs.io/en/latest/configuration.html
# ------------------------------------------------------------------------------
ACCOUNT_ALLOW_REGISTRATION = env.bool('DJANGO_ACCOUNT_ALLOW_REGISTRATION', True)
ACCOUNT_AUTHENTICATION_METHOD = 'username'
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_EMAIL_VERIFICATION = 'mandatory'
ACCOUNT_ADAPTER = 'palimpsest.users.adapters.AccountAdapter'
SOCIALACCOUNT_ADAPTER = 'palimpsest.users.adapters.SocialAccountAdapter'
# django-compressor
# ------------------------------------------------------------------------------
# https://django-compressor.readthedocs.io/en/latest/quickstart/#installation
INSTALLED_APPS += ['compressor']
STATICFILES_FINDERS += ['compressor.finders.CompressorFinder']
# Your stuff...
# ------------------------------------------------------------------------------
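The `env('NAME', default=...)` calls in this settings module come from django-environ: `EMAIL_BACKEND` and `ADMIN_URL` fall back to their defaults when the corresponding environment variable is unset, while `CELERY_BROKER_URL` has no default and must be provided. A rough, dependency-free sketch of that lookup (the helper name `env_str` is made up for illustration, and the real library raises `ImproperlyConfigured` rather than `KeyError`):

```python
import os

def env_str(name, default=None):
    # Mimics django-environ's env('NAME', default=...): read the process
    # environment, fall back to the given default when the variable is unset.
    value = os.environ.get(name, default)
    if value is None:
        raise KeyError('Set the %s environment variable' % name)
    return value

# Mirrors EMAIL_BACKEND = env('DJANGO_EMAIL_BACKEND', default=...) above.
EMAIL_BACKEND = env_str('DJANGO_EMAIL_BACKEND',
                        default='django.core.mail.backends.smtp.EmailBackend')
```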
# ==============================================================================
# File: params_tuning/ls_iter_n/out_comparator.py
# Repo: bleakTwig/ophs_grasp (MIT)
# ==============================================================================
INSTANCES = 405
ITERS = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 50, 100]
N_ITERS = len(ITERS)
# === RESULTS GATHERING ====================================================== #
# results_m is a [INSTANCES][N_ITERS] matrix to store every test result
results_m = [[0 for x in range(N_ITERS)] for y in range(INSTANCES)]
for I in range(N_ITERS):
    with open("tests/" + str(ITERS[I])) as fin:
        out = fin.read()
counter = 0
for line in out.splitlines():
results_m[counter][I] = int(line)
counter += 1
# === CALCULATING AVERAGES =================================================== #
averages = [0.0 for x in range(N_ITERS)]
for I in range(INSTANCES):
for J in range(N_ITERS):
results_m[I][J] = results_m[I][J] - results_m[I][0]
if (results_m[I][N_ITERS-1] != 0):
results_m[I][J] = float(results_m[I][J] / results_m[I][N_ITERS-1])
averages[J] += results_m[I][J]
for J in range(N_ITERS):
averages[J] = averages[J]/INSTANCES
for J in range(N_ITERS-1, 1, -1):
averages[J] -= averages[J-1]
# === PRINTING RESULTS ======================================================= #
print("========================================")
print(" all tests:")
for J in range(1, N_ITERS):
    # right-align the iteration count so the percentages line up
    print("    " + str(ITERS[J]).rjust(3) + ": " + str(100 * averages[J]) + '%')
print("========================================")
# ============================================================================ #
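The averaging step above rebases every instance on its iteration-0 score and scales by the improvement at the largest iteration count. A simplified sketch of that normalization on a single made-up instance row (the script's in-place variant divides by the not-yet-rebased last column, so its intermediate numbers differ slightly):

```python
# Raw scores of one instance for the iteration counts [0, 1, 2, 5].
row = [100, 110, 115, 120]
baseline = row[0]               # score without local search iterations
total = row[-1] - baseline      # total improvement at the largest count
normalized = [(v - baseline) / total for v in row]
print(normalized)  # → [0.0, 0.5, 0.75, 1.0]
```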
# ==============================================================================
# File: data_augmentation/eda/image/transforms/normalize.py
# Repo: simran-arora/emmental-tutorials (MIT)
# ==============================================================================
import torchvision.transforms as transforms
from eda.image.transforms.transform import EdaTransform
class Normalize(EdaTransform):
def __init__(self, mean, std, name=None, prob=1.0, level=0):
self.mean = mean
self.std = std
self.transform_func = transforms.Normalize(mean, std)
super().__init__(name, prob, level)
def transform(self, pil_img, label, **kwargs):
return self.transform_func(pil_img), label
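`Normalize` above wraps a torchvision transform in an `EdaTransform` that carries name/probability/level metadata. A dependency-free sketch of that wrapper pattern (`BaseTransform` and `Scale` are made-up stand-ins, not the real `EdaTransform` API):

```python
class BaseTransform:
    def __init__(self, name=None, prob=1.0, level=0):
        self.name = name or type(self).__name__
        self.prob = prob    # probability that a pipeline applies this transform
        self.level = level  # magnitude knob, unused by this sketch

    def __call__(self, img, label, **kwargs):
        return self.transform(img, label, **kwargs)

class Scale(BaseTransform):
    def __init__(self, factor, name=None, prob=1.0, level=0):
        self.factor = factor
        super().__init__(name, prob, level)

    def transform(self, img, label, **kwargs):
        # Transform the "image" (a plain list here), pass the label through.
        return [x * self.factor for x in img], label

img, label = Scale(factor=2)([1, 2, 3], 'cat')
print(img, label)  # → [2, 4, 6] cat
```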
# ==============================================================================
# File: lemon_markets/tests/ctest_account.py
# Repo: leonhma/lemon_markets_sdk (MIT)
# ==============================================================================
from os import environ
from unittest import TestCase
from lemon_markets.account import Account
client_id = environ.get('CLIENT_ID')
client_token = environ.get('CLIENT_TOKEN')
class _TestAccount(TestCase):
def setUp(self):
try:
self.account = Account(client_id, client_token)
except Exception as e:
self._account_exception = e
def skip_if_acc_failed(self):
if not hasattr(self, 'account'):
self.skipTest('Account creation failed!')
def test_create_account(self):
if not hasattr(self, 'account'):
self.fail(self._account_exception)
def test_auth_token_type(self):
self.skip_if_acc_failed()
self.assertIs(type(self.account.access_token), str)
def test_auth_header_type(self):
self.skip_if_acc_failed()
self.assertIs(type(self.account._authorization), dict)
# ==============================================================================
# File: pandapower/pypower/idx_bus.py
# Repo: bergkvist/pandapower (BSD-3-Clause)
# ==============================================================================
# -*- coding: utf-8 -*-
# Copyright 1996-2015 PSERC. All rights reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file.
# Copyright (c) 2016-2020 by University of Kassel and Fraunhofer Institute for Energy Economics
# and Energy System Technology (IEE), Kassel. All rights reserved.
"""Defines constants for named column indices to bus matrix.
Some examples of usage, after defining the constants using the line above,
are::
Pd = bus[3, PD] # get the real power demand at bus 4
bus[:, VMIN] = 0.95 # set the min voltage magnitude to 0.95 at all buses
The index, name and meaning of each column of the bus matrix is given
below:
columns 0-12 must be included in input matrix (in case file)
0. C{BUS_I} bus number (1 to 29997)
1. C{BUS_TYPE} bus type (1 = PQ, 2 = PV, 3 = ref, 4 = isolated)
2. C{PD} real power demand (MW)
3. C{QD} reactive power demand (MVAr)
4. C{GS} shunt conductance (MW at V = 1.0 p.u.)
5. C{BS} shunt susceptance (MVAr at V = 1.0 p.u.)
6. C{BUS_AREA} area number, 1-100
7. C{VM} voltage magnitude (p.u.)
8. C{VA} voltage angle (degrees)
9. C{BASE_KV} base voltage (kV)
10. C{ZONE} loss zone (1-999)
11. C{VMAX} maximum voltage magnitude (p.u.)
12. C{VMIN} minimum voltage magnitude (p.u.)
columns 13-16 are added to matrix after OPF solution
they are typically not present in the input matrix
(assume OPF objective function has units, u)
13. C{LAM_P} Lagrange multiplier on real power mismatch (u/MW)
14. C{LAM_Q} Lagrange multiplier on reactive power mismatch (u/MVAr)
15. C{MU_VMAX} Kuhn-Tucker multiplier on upper voltage limit (u/p.u.)
16. C{MU_VMIN} Kuhn-Tucker multiplier on lower voltage limit (u/p.u.)
additional constants, used to assign/compare values in the C{BUS_TYPE} column
1. C{PQ} PQ bus
2. C{PV} PV bus
3. C{REF} reference bus
4. C{NONE} isolated bus
@author: Ray Zimmerman (PSERC Cornell)
@author: Richard Lincoln
"""
# define bus types
PQ = 1
PV = 2
REF = 3
NONE = 4
# define the indices
BUS_I = 0 # bus number (1 to 29997)
BUS_TYPE = 1 # bus type
PD = 2 # Pd, real power demand (MW)
QD = 3 # Qd, reactive power demand (MVAr)
GS = 4 # Gs, shunt conductance (MW at V = 1.0 p.u.)
BS = 5 # Bs, shunt susceptance (MVAr at V = 1.0 p.u.)
BUS_AREA = 6 # area number, 1-100
VM = 7 # Vm, voltage magnitude (p.u.)
VA = 8 # Va, voltage angle (degrees)
BASE_KV = 9 # baseKV, base voltage (kV)
ZONE = 10 # zone, loss zone (1-999)
VMAX = 11 # maxVm, maximum voltage magnitude (p.u.)
VMIN = 12 # minVm, minimum voltage magnitude (p.u.)
# included in opf solution, not necessarily in input
# assume objective function has units, u
LAM_P = 13 # Lagrange multiplier on real power mismatch (u/MW)
LAM_Q = 14 # Lagrange multiplier on reactive power mismatch (u/MVAr)
MU_VMAX = 15 # Kuhn-Tucker multiplier on upper voltage limit (u/p.u.)
MU_VMIN = 16 # Kuhn-Tucker multiplier on lower voltage limit (u/p.u.)
# Additional pandapower extensions to ppc
CID = 13 # coefficient of constant current load at rated voltage in range [0,1]
CZD = 14 # coefficient of constant impedance load at rated voltage in range [0,1]
bus_cols = 15
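For illustration, a single row of the bus matrix can be addressed with these constants (the index values are repeated so the snippet stands alone; the bus data itself is made up):

```python
# Index constants as defined above.
BUS_I, BUS_TYPE, PD, VM = 0, 1, 2, 7
REF = 3
bus_cols = 15

bus_row = [0.0] * bus_cols
bus_row[BUS_I] = 1         # bus number
bus_row[BUS_TYPE] = REF    # reference (slack) bus
bus_row[PD] = 90.0         # real power demand in MW
bus_row[VM] = 1.02         # voltage magnitude in p.u.

print(bus_row[PD], bus_row[BUS_TYPE] == REF)  # → 90.0 True
```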
# ==============================================================================
# File: example.py
# Repo: vinhntb/geo_redis (MIT)
# ==============================================================================
# -*- coding: utf-8 -*-
# !/usr/bin/python
#
# example.py
#
#
# Created by vinhntb on 6/27/17.
# Copyright (c) 2017 geo_redis. All rights reserved.
import sys
from bunch import Bunch
from constants import GEO_USER_VISITED
from geo_redis.geo_redis import GeoRedis
def add_user_visited():
user_1 = Bunch(longitude=13.583333, latitude=37.316667, member='101') # The member is user_id
user_2 = Bunch(longitude=13.361389, latitude=38.115556, member='102') # The member is user_id
user_3 = Bunch(longitude=13.583433, latitude=37.317667, member='103') # The member is user_id
user_4 = Bunch(longitude=13.583533, latitude=37.318667, member='104') # The member is user_id
user_5 = Bunch(longitude=13.583633, latitude=37.318767, member='105') # The member is user_id
user_6 = Bunch(longitude=13.583733, latitude=37.318867, member='106') # The member is user_id
user_7 = Bunch(longitude=13.361389, latitude=38.115556, member='107') # The member is user_id
user_8 = Bunch(longitude=15.087269, latitude=37.502669, member='108') # The member is user_id
redis_instance = GeoRedis()
redis_instance.geo_add(GEO_USER_VISITED, **user_1)
redis_instance.geo_add(GEO_USER_VISITED, **user_2)
redis_instance.geo_add(GEO_USER_VISITED, **user_3)
redis_instance.geo_add(GEO_USER_VISITED, **user_4)
redis_instance.geo_add(GEO_USER_VISITED, **user_5)
redis_instance.geo_add(GEO_USER_VISITED, **user_6)
redis_instance.geo_add(GEO_USER_VISITED, **user_7)
redis_instance.geo_add(GEO_USER_VISITED, **user_8)
def get_geo_radius():
redis_instance = GeoRedis()
unit = 'm'
results = redis_instance.geo_radius(name=GEO_USER_VISITED, longitude=13.58, latitude=37.316, radius=500, unit=unit,
withcoord=True, withdist=True)
    """The results are nested arrays whose shape depends on the input parameters.
    At this example we pass withcoord and withdist (withhash would add another
    field), so each result entry looks like:
    1) member (user_id)    [0]
    2) distance            [1]
    3) 1) longitude        [2][0]
       2) latitude         [2][1]
    """
    print("The results: \n")
    print("Total: %s" % len(results))
    for member in results:
        print('--------------------------------------\n')
        print("member: %s \n" % member[0])
        print("distance: %s%s \n" % (member[1], unit))
        print("longitude: %s \n" % member[2][0])
        print("latitude: %s \n" % member[2][1])
def main(argv):
add_user_visited()
get_geo_radius()
if __name__ == '__main__':
main(sys.argv)
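Without a running Redis, the reply shape documented in `get_geo_radius` can be exercised on hand-made sample data (the members, distances, and coordinates below are invented):

```python
# Each GEORADIUS entry with WITHCOORD and WITHDIST looks like:
# [member, distance, [longitude, latitude]]
sample = [
    ['101', '335.9861', [13.583333, 37.316667]],
    ['103', '120.4450', [13.583433, 37.317667]],
]
for member, dist, (lon, lat) in sample:
    print("member %s at %sm: lon=%s lat=%s" % (member, dist, lon, lat))
```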
# ==============================================================================
# File: BowlingGame/bowling_game_test.py
# Repo: WisWang/code-kata (MIT)
# ==============================================================================
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) 2017 - hongzhi.wang <hongzhi.wang@moji.com>
'''
Author: hongzhi.wang
Create Date: 2019-09-04
Modify Date: 2019-09-04
'''
import unittest
from .bowling_game import BowlingGame
class TestBowlingGame(unittest.TestCase):
def setUp(self):
self.g = BowlingGame()
def test_game_all_zero(self):
self.roll_many(20, 0)
self.assertEqual(0, self.g.score())
def test_game_all_one(self):
self.roll_many(20, 1)
self.assertEqual(20, self.g.score())
def roll_many(self, times, pin):
for i in range(times):
self.g.roll(pin)
def test_game_first_strike(self):
self.g.roll(4)
self.g.roll(6)
self.roll_many(18, 1)
self.assertEqual(29, self.g.score())
def test_perfect_game(self):
self.roll_many(12, 10)
self.assertEqual(300, self.g.score())
def test_game_first_two_strike(self):
self.roll_many(4, 5)
self.roll_many(16, 1)
        self.assertEqual(42, self.g.score())
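The tests above pin down the kata's scoring rules; a minimal `BowlingGame` implementation consistent with them could look like this (a sketch, not the repo's actual `bowling_game.py`):

```python
class BowlingGame:
    def __init__(self):
        self.rolls = []

    def roll(self, pins):
        self.rolls.append(pins)

    def score(self):
        total, i = 0, 0
        for _frame in range(10):
            if self.rolls[i] == 10:                        # strike
                total += 10 + self.rolls[i + 1] + self.rolls[i + 2]
                i += 1
            elif self.rolls[i] + self.rolls[i + 1] == 10:  # spare
                total += 10 + self.rolls[i + 2]
                i += 2
            else:                                          # open frame
                total += self.rolls[i] + self.rolls[i + 1]
                i += 2
        return total

g = BowlingGame()
for _ in range(12):
    g.roll(10)
print(g.score())  # → 300 (perfect game)
```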
# ==============================================================================
# File: Francisco_Trujillo/Assignments/registration/serverre.py
# Repo: webguru001/Python-Django-Web (MIT)
# ==============================================================================
from flask import Flask, render_template, request, redirect, session, flash
import re
EMAIL_REGEX = re.compile(r'^[a-zA-Z0-9.+_-]+@[a-zA-Z0-9._-]+\.[a-zA-Z]+$')
app = Flask(__name__)
app.secret_key = 'irtndvieurnviur'
@app.route('/')
def index():
return render_template("index.html")
# check all fields for emptiness; the password must be 8 or more characters
def checkForValuelength(form):
    if (len(form['email']) < 1 or
            len(form['fname']) < 1 or
            len(form['lname']) < 1 or
            len(form['password']) < 8 or
            len(form['cpassword']) < 8):
        return False
    return True
# check for valid name and last name
def validNamefileds(form):
if not form['fname'].isalpha() or not form['lname'].isalpha():
return False
return True
# check that password and confirmation match
def matchPassword(form):
if not form['password'] == form['cpassword']:
return False
return True
@app.route('/process', methods=['POST'])
def form_page():
if not checkForValuelength(request.form):
        flash("All fields are required and the password must be 8 or more characters")
return redirect('/')
elif not validNamefileds(request.form):
        flash("First and last name must contain only letters")
return redirect('/')
elif not EMAIL_REGEX.match(request.form['email']):
flash("Invalid Email address")
return redirect('/')
elif not matchPassword(request.form):
        flash("Passwords do not match")
        return redirect('/')
    flash("Form successfully submitted")
return redirect('/')
# use '/result' so this view does not shadow index() on '/'
@app.route('/result')
def result_page():
    return redirect('/')
app.run(debug=True)
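The `EMAIL_REGEX` defined at the top of this file can be exercised on its own; the addresses below are made-up samples:

```python
import re

EMAIL_REGEX = re.compile(r'^[a-zA-Z0-9.+_-]+@[a-zA-Z0-9._-]+\.[a-zA-Z]+$')

print(bool(EMAIL_REGEX.match('user@example.com')))  # → True
print(bool(EMAIL_REGEX.match('not-an-email')))      # → False (no @ or domain)
print(bool(EMAIL_REGEX.match('a b@example.com')))   # → False (space not allowed)
```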
# ==============================================================================
# File: pedantic/tests/tests_pedantic.py
# Repo: LostInDarkMath/Pedantic-python-decorators (Apache-2.0)
# ==============================================================================
import os.path
import sys
import types
import typing
import unittest
from datetime import datetime, date
from functools import wraps
from io import BytesIO, StringIO
from typing import List, Tuple, Callable, Any, Optional, Union, Dict, Set, FrozenSet, NewType, TypeVar, Sequence, \
AbstractSet, Iterator, NamedTuple, Collection, Type, Generator, Generic, BinaryIO, TextIO, Iterable, Container, \
NoReturn, ClassVar
from enum import Enum, IntEnum
from pedantic import pedantic_class
from pedantic.exceptions import PedanticTypeCheckException, PedanticException, PedanticCallWithArgsException, \
PedanticTypeVarMismatchException
from pedantic.decorators.fn_deco_pedantic import pedantic
TEST_FILE = 'test.txt'
class Parent:
pass
class Child(Parent):
def method(self, a: int):
pass
class TestDecoratorRequireKwargsAndTypeCheck(unittest.TestCase):
def tearDown(self) -> None:
if os.path.isfile(TEST_FILE):
os.remove(TEST_FILE)
def test_no_kwargs(self):
@pedantic
def calc(n: int, m: int, i: int) -> int:
return n + m + i
with self.assertRaises(expected_exception=PedanticCallWithArgsException):
calc(42, 40, 38)
with self.assertRaises(expected_exception=PedanticCallWithArgsException):
calc(42, m=40, i=38)
calc(n=42, m=40, i=38)
def test_nested_type_hints_1(self):
@pedantic
def calc(n: int) -> List[List[float]]:
return [0.0 * n]
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(n=42)
def test_nested_type_hints_1_corrected(self):
@pedantic
def calc(n: int) -> List[List[float]]:
return [[0.0 * n]]
calc(n=42)
def test_nested_type_hints_2(self):
"""Problem here: int != float"""
@pedantic
def calc(n: int) -> List[Tuple[float, str]]:
return [(n, str(n))]
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(n=42)
def test_nested_type_hints_2_corrected(self):
@pedantic
def calc(n: int) -> List[Tuple[int, str]]:
return [(n, str(n))]
@pedantic
def calc_2(n: float) -> List[Tuple[float, str]]:
return [(n, str(n))]
calc(n=42)
calc_2(n=42.0)
def test_nested_type_hints_3(self):
"""Problem here: inner function actually returns Tuple[int, str]"""
@pedantic
def calc(n: int) -> Callable[[int, float], Tuple[float, str]]:
@pedantic
def f(x: int, y: float) -> Tuple[float, str]:
return n * x, str(y)
return f
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(n=42)(x=3, y=3.14)
def test_nested_type_hints_3_corrected(self):
@pedantic
def calc(n: int) -> Callable[[int, float], Tuple[int, str]]:
@pedantic
def f(x: int, y: float) -> Tuple[int, str]:
return n * x, str(y)
return f
calc(n=42)(x=3, y=3.14)
def test_nested_type_hints_4(self):
"""Problem here: return type is actually float"""
@pedantic
def calc(n: List[List[float]]) -> int:
return n[0][0]
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(n=[[42.0]])
def test_nested_type_hints_corrected(self):
@pedantic
def calc(n: List[List[float]]) -> int:
return int(n[0][0])
calc(n=[[42.0]])
def test_nested_type_hints_5(self):
"""Problem here: Tuple[float, str] != Tuple[float, float]"""
@pedantic
def calc(n: int) -> Callable[[int, float], Tuple[float, str]]:
@pedantic
def f(x: int, y: float) -> Tuple[float, float]:
return n * float(x), y
return f
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(n=42)
def test_nested_type_hints_5_corrected(self):
@pedantic
def calc(n: int) -> Callable[[int, float], Tuple[float, float]]:
@pedantic
def f(x: int, y: float) -> Tuple[float, float]:
return n * float(x), y
return f
calc(n=42)
def test_missing_type_hint_1(self):
"""Problem here: type hint for n missed"""
@pedantic
def calc(n) -> float:
return 42.0 * n
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(n=42)
def test_missing_type_hint_1_corrected(self):
@pedantic
def calc(n: int) -> float:
return 42.0 * n
calc(n=42)
def test_missing_type_hint_2(self):
"""Problem here: Return type annotation missed"""
@pedantic
def calc(n: int):
return 'Hi' + str(n)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(n=42)
def test_missing_type_hint_2_corrected(self):
@pedantic
def calc(n: int) -> str:
return 'Hi' + str(n)
calc(n=42)
def test_missing_type_hint_3(self):
"""Problem here: type hint for i missed"""
@pedantic
def calc(n: int, m: int, i) -> int:
return n + m + i
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(n=42, m=40, i=38)
def test_missing_type_hint_3_corrected(self):
@pedantic
def calc(n: int, m: int, i: int) -> int:
return n + m + i
calc(n=42, m=40, i=38)
def test_all_ok_2(self):
@pedantic
def calc(n: int, m: int, i: int) -> str:
return str(n + m + i)
calc(n=42, m=40, i=38)
def test_all_ok_3(self):
@pedantic
def calc(n: int, m: int, i: int) -> None:
str(n + m + i)
calc(n=42, m=40, i=38)
def test_all_ok_4(self):
@pedantic
def calc(n: int) -> List[List[int]]:
return [[n]]
calc(n=42)
def test_all_ok_5(self):
@pedantic
def calc(n: int) -> List[Tuple[float, str]]:
return [(float(n), str(n))]
calc(n=42)
def test_all_ok_6(self):
@pedantic
def calc(n: int) -> Callable[[int, float], Tuple[float, str]]:
@pedantic
def f(x: int, y: float) -> Tuple[float, str]:
return n * float(x), str(y)
return f
calc(n=42)(x=72, y=3.14)
def test_all_ok_7(self):
@pedantic
def calc(n: List[List[float]]) -> Any:
return n[0][0]
calc(n=[[42.0]])
def test_all_ok_8(self):
@pedantic
def calc(n: int) -> Callable[[int, float], Tuple[float, str]]:
@pedantic
def f(x: int, y: float) -> Tuple[float, str]:
return n * float(x), str(y)
return f
calc(n=42)(x=3, y=3.14)
def test_wrong_type_hint_1(self):
"""Problem here: str != int"""
@pedantic
def calc(n: int, m: int, i: int) -> str:
return n + m + i
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(n=42, m=40, i=38)
def test_wrong_type_hint_1_corrected(self):
@pedantic
def calc(n: int, m: int, i: int) -> str:
return str(n + m + i)
calc(n=42, m=40, i=38)
def test_wrong_type_hint_2(self):
"""Problem here: str != int"""
@pedantic
def calc(n: int, m: int, i: str) -> int:
return n + m + i
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(n=42, m=40, i=38)
def test_wrong_type_hint_2_corrected(self):
@pedantic
def calc(n: int, m: int, i: str) -> int:
return n + m + int(i)
calc(n=42, m=40, i='38')
def test_wrong_type_hint_3(self):
"""Problem here: None != int"""
@pedantic
def calc(n: int, m: int, i: int) -> None:
return n + m + i
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(n=42, m=40, i=38)
def test_wrong_type_hint_corrected(self):
@pedantic
def calc(n: int, m: int, i: int) -> None:
print(n + m + i)
calc(n=42, m=40, i=38)
def test_wrong_type_hint_4(self):
"""Problem here: None != int"""
@pedantic
def calc(n: int, m: int, i: int) -> int:
print(n + m + i)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(n=42, m=40, i=38)
def test_wrong_type_hint_4_corrected(self):
@pedantic
def calc(n: int, m: int, i: int) -> int:
return n + m + i
calc(n=42, m=40, i=38)
def test_none_1(self):
"""Problem here: None is not accepted"""
@pedantic
def calc(n: int, m: int, i: int) -> int:
return n + m + i
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(n=42, m=40, i=None)
def test_none_2(self):
@pedantic
def calc(n: int, m: int, i: Optional[int]) -> int:
return n + m + i if i is not None else n + m
calc(n=42, m=40, i=None)
def test_none_3(self):
@pedantic
def calc(n: int, m: int, i: Union[int, None]) -> int:
return n + m + i if i is not None else n + m
calc(n=42, m=40, i=None)
def test_none_4(self):
"""Problem here: function may return None"""
@pedantic
def calc(n: int, m: int, i: Union[int, None]) -> int:
return n + m + i if i is not None else None
calc(n=42, m=40, i=42)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(n=42, m=40, i=None)
def test_none_5(self):
@pedantic
def calc(n: int, m: int, i: Union[int, None]) -> Optional[int]:
return n + m + i if i is not None else None
calc(n=42, m=40, i=None)
def test_inheritance_1(self):
class MyClassA:
pass
class MyClassB(MyClassA):
pass
@pedantic
def calc(a: MyClassA) -> str:
return str(a)
calc(a=MyClassA())
calc(a=MyClassB())
def test_inheritance_2(self):
"""Problem here: A is not a subtype of B"""
class MyClassA:
pass
class MyClassB(MyClassA):
pass
@pedantic
def calc(a: MyClassB) -> str:
return str(a)
calc(a=MyClassB())
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(a=MyClassA())
def test_instance_method_1(self):
class MyClassA:
@pedantic
def calc(self, i: int) -> str:
return str(i)
a = MyClassA()
a.calc(i=42)
def test_instance_method_2(self):
"""Problem here: 'i' has no type annotation"""
class MyClassA:
@pedantic
def calc(self, i) -> str:
return str(i)
a = MyClassA()
with self.assertRaises(expected_exception=PedanticTypeCheckException):
a.calc(i=42)
def test_instance_method_2_corrected(self):
class MyClassA:
@pedantic
def calc(self, i: int) -> str:
return str(i)
a = MyClassA()
a.calc(i=42)
def test_instance_method_int_is_not_float(self):
class MyClassA:
@pedantic
def calc(self, i: float) -> str:
return str(i)
a = MyClassA()
with self.assertRaises(expected_exception=PedanticTypeCheckException):
a.calc(i=42)
def test_instance_method_3_corrected(self):
class MyClassA:
@pedantic
def calc(self, i: float) -> str:
return str(i)
a = MyClassA()
a.calc(i=42.0)
def test_instance_method_no_kwargs(self):
class MyClassA:
@pedantic
def calc(self, i: int) -> str:
return str(i)
a = MyClassA()
with self.assertRaises(expected_exception=PedanticCallWithArgsException):
a.calc(42)
def test_instance_method_5(self):
"""Problem here: instance methods is not called with kwargs"""
class MyClassA:
@pedantic
def calc(self, i: int) -> str:
return str(i)
a = MyClassA()
a.calc(i=42)
def test_lambda_1(self):
@pedantic
def calc(i: float) -> Callable[[float], str]:
return lambda x: str(x * i)
calc(i=42.0)(10.0)
def test_lambda_3(self):
@pedantic
def calc(i: float) -> Callable[[float], str]:
def res(x: float) -> str:
return str(x * i)
return res
calc(i=42.0)(10.0)
def test_lambda_int_is_not_float(self):
@pedantic
def calc(i: float) -> Callable[[float], str]:
def res(x: int) -> str:
return str(x * i)
return res
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(i=42.0)(x=10)
def test_lambda_4_almost_corrected(self):
"""Problem here: float != str"""
@pedantic
def calc(i: float) -> Callable[[float], str]:
@pedantic
def res(x: int) -> str:
return str(x * i)
return res
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(i=42.0)(x=10)
def test_lambda_4_almost_corrected_2(self):
@pedantic
def calc(i: float) -> Callable[[int], str]:
@pedantic
def res(x: int) -> str:
return str(x * i)
return res
calc(i=42.0)(x=10)
def test_lambda_5(self):
"""Problem here: float != int"""
@pedantic
def calc(i: float) -> Callable[[float], str]:
@pedantic
def res(x: float) -> str:
return str(x * i)
return res
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(i=42.0)(x=10)
def test_lambda_corrected(self):
@pedantic
def calc(i: float) -> Callable[[float], str]:
@pedantic
def res(x: float) -> str:
return str(x * i)
return res
calc(i=42.0)(x=10.0)
def test_tuple_without_type_args(self):
@pedantic
def calc(i: Tuple) -> str:
return str(i)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(i=(42.0, 43, 'hi'))
def test_tuple_without_args_corrected(self):
@pedantic
def calc(i: Tuple[Any, ...]) -> str:
return str(i)
calc(i=(42.0, 43, 'hi'))
def test_callable_without_type_args(self):
@pedantic
def calc(i: Callable) -> str:
return str(i(' you'))
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(i=lambda x: (42.0, 43, 'hi', x))
def test_callable_without_args_correct_with_lambdas(self):
@pedantic
def calc(i: Callable[[Any], Tuple[Any, ...]]) -> str:
return str(i(x=' you'))
calc(i=lambda x: (42.0, 43, 'hi', x))
def test_callable_without_args_corrected(self):
@pedantic
def calc(i: Callable[[Any], Tuple[Any, ...]]) -> str:
return str(i(x=' you'))
@pedantic
def arg(x: Any) -> Tuple[Any, ...]:
return 42.0, 43, 'hi', x
calc(i=arg)
def test_list_without_args(self):
@pedantic
def calc(i: List) -> Any:
return [i]
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(i=[42.0, 43, 'hi'])
def test_list_without_args_corrected(self):
@pedantic
def calc(i: List[Any]) -> List[List[Any]]:
return [i]
calc(i=[42.0, 43, 'hi'])
def test_ellipsis_in_callable_1(self):
@pedantic
def calc(i: Callable[..., int]) -> int:
return i()
@pedantic
def call() -> int:
return 42
calc(i=call)
def test_ellipsis_in_callable_2(self):
@pedantic
def calc(i: Callable[..., int]) -> int:
return i(x=3.14, y=5)
@pedantic
def call(x: float, y: int) -> int:
return 42
calc(i=call)
def test_ellipsis_in_callable_3(self):
"""Problem here: call to "call" misses one argument"""
@pedantic
def calc(i: Callable[..., int]) -> int:
return i(x=3.14)
@pedantic
def call(x: float, y: int) -> int:
return 42
with self.assertRaises(expected_exception=PedanticException):
calc(i=call)
def test_optional_args_1(self):
@pedantic
def calc(a: int, b: int = 42) -> int:
return a + b
calc(a=2)
def test_optional_args_2(self):
@pedantic
def calc(a: int = 3, b: int = 42, c: float = 5.0) -> float:
return a + b + c
calc()
calc(a=1)
calc(b=1)
calc(c=1.0)
calc(a=1, b=1)
calc(a=1, c=1.0)
calc(b=1, c=1.0)
calc(a=1, b=1, c=1.0)
def test_optional_args_3(self):
"""Problem here: optional argument c: 5 is not a float"""
@pedantic
def calc(a: int = 3, b: int = 42, c: float = 5) -> float:
return a + b + c
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc()
def test_optional_args_3_corrected(self):
@pedantic
def calc(a: int = 3, b: int = 42, c: float = 5.0) -> float:
return a + b + c
calc()
def test_optional_args_4(self):
class MyClass:
@pedantic
def foo(self, a: int, b: Optional[int] = 1) -> int:
return a + b
my_class = MyClass()
my_class.foo(a=10)
def test_optional_args_5(self):
@pedantic
def calc(d: Optional[Dict[int, int]] = None) -> Optional[int]:
if d is None:
return None
return sum(d.keys())
calc(d=None)
calc()
calc(d={42: 3})
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(d={42: 3.14})
def test_optional_args_6(self):
""""Problem here: str != int"""
@pedantic
def calc(d: int = 42) -> int:
return int(d)
calc(d=99999)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(d='999999')
def test_enum_1(self):
"""Problem here: Type hint for 'a' should be MyEnum instead of MyEnum.GAMMA"""
class MyEnum(Enum):
ALPHA = 'startEvent'
BETA = 'task'
GAMMA = 'sequenceFlow'
class MyClass:
@pedantic
def operation(self, a: MyEnum.GAMMA) -> None:
print(a)
m = MyClass()
with self.assertRaises(expected_exception=PedanticTypeCheckException):
m.operation(a=MyEnum.GAMMA)
def test_enum_1_corrected(self):
class MyEnum(Enum):
ALPHA = 'startEvent'
BETA = 'task'
GAMMA = 'sequenceFlow'
@pedantic
def operation(a: MyEnum) -> None:
print(a)
operation(a=MyEnum.GAMMA)
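A stdlib-only sketch (using a hypothetical `Color` enum) of the mistake test_enum_1 demonstrates: an enum member is an instance, not a class, so it cannot serve as a type annotation.

```python
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2

# An enum member is an *instance* of its enum class, not a class itself,
# which is why annotating a parameter with MyEnum.GAMMA (a member) fails.
assert isinstance(Color.RED, Color)
assert not isinstance(Color.RED, type)
```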
def test_sloppy_types_dict(self):
@pedantic
def operation(d: dict) -> int:
return len(d.keys())
with self.assertRaises(expected_exception=PedanticTypeCheckException):
operation(d={1: 1, 2: 2})
def test_sloppy_types_dict_almost_corrected_no_type_args(self):
@pedantic
def operation(d: Dict) -> int:
return len(d.keys())
with self.assertRaises(expected_exception=PedanticTypeCheckException):
operation(d={1: 1, 2: 2})
def test_sloppy_types_dict_corrected(self):
@pedantic
def operation(d: Dict[int, int]) -> int:
return len(d.keys())
operation(d={1: 1, 2: 2})
def test_sloppy_types_list(self):
@pedantic
def operation(d: list) -> int:
return len(d)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
operation(d=[1, 2, 3, 4])
def test_sloppy_types_list_almost_corrected_no_type_args(self):
@pedantic
def operation(d: List) -> int:
return len(d)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
operation(d=[1, 2, 3, 4])
def test_sloppy_types_list_corrected(self):
@pedantic
def operation(d: List[int]) -> int:
return len(d)
operation(d=[1, 2, 3, 4])
def test_sloppy_types_tuple(self):
@pedantic
def operation(d: tuple) -> int:
return len(d)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
operation(d=(1, 2, 3))
def test_sloppy_types_tuple_almost_corrected_no_type_args(self):
@pedantic
def operation(d: Tuple) -> int:
return len(d)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
operation(d=(1, 2, 3))
def test_sloppy_types_tuple_corrected(self):
@pedantic
def operation(d: Tuple[int, int, int]) -> int:
return len(d)
operation(d=(1, 2, 3))
def test_sloppy_types_set(self):
@pedantic
def operation(d: set) -> int:
return len(d)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
operation(d={1, 2, 3})
def test_sloppy_types_set_almost_corrected_to_type_args(self):
@pedantic
def operation(d: Set) -> int:
return len(d)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
operation(d={1, 2, 3})
def test_sloppy_types_set_corrected(self):
@pedantic
def operation(d: Set[int]) -> int:
return len(d)
operation(d={1, 2, 3})
def test_sloppy_types_frozenset(self):
@pedantic
def operation(d: frozenset) -> int:
return len(d)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
operation(d=frozenset({1, 2, 3}))
def test_sloppy_types_frozenset_almost_corrected_no_type_args(self):
@pedantic
def operation(d: FrozenSet) -> int:
return len(d)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
operation(d=frozenset({1, 2, 3}))
def test_sloppy_types_frozenset_corrected(self):
@pedantic
def operation(d: FrozenSet[int]) -> int:
return len(d)
operation(d=frozenset({1, 2, 3}))
def test_type_list_but_got_tuple(self):
@pedantic
def calc(ls: List[Any]) -> int:
return len(ls)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
calc(ls=(1, 2, 3))
def test_type_list_corrected(self):
@pedantic
def calc(ls: Tuple[Any, ...]) -> int:
return len(ls)
calc(ls=(1, 2, 3))
def test_any(self):
@pedantic
def calc(ls: List[Any]) -> Dict[int, Any]:
return {i: ls[i] for i in range(0, len(ls))}
calc(ls=[1, 2, 3])
calc(ls=[1.11, 2.0, 3.0])
calc(ls=['1', '2', '3'])
calc(ls=[10.5, '2', (3, 4, 5)])
def test_aliases(self):
Vector = List[float]
@pedantic
def scale(scalar: float, vector: Vector) -> Vector:
return [scalar * num for num in vector]
scale(scalar=2.0, vector=[1.0, -4.2, 5.4])
def test_new_type(self):
UserId = NewType('UserId', int)
@pedantic
def get_user_name(user_id: UserId) -> str:
return str(user_id)
some_id = UserId(524313)
get_user_name(user_id=some_id)
# the following would be desirable but impossible to check at runtime:
# with self.assertRaises(expected_exception=AssertionError):
# get_user_name(user_id=-1)
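A stdlib-only sketch of why the commented-out negative-value check above is impossible: `NewType` has no runtime representation of its own.

```python
from typing import NewType

UserId = NewType('UserId', int)

# NewType is an identity callable at runtime: the returned value is a plain
# int, so a runtime checker cannot tell UserId(-1) apart from the int -1.
assert type(UserId(524313)) is int
assert UserId(-1) == -1
```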
def test_list_of_new_type(self):
UserId = NewType('UserId', int)
@pedantic
def get_user_name(user_ids: List[UserId]) -> str:
return str(user_ids)
get_user_name(user_ids=[UserId(524313), UserId(42)])
with self.assertRaises(expected_exception=PedanticTypeCheckException):
get_user_name(user_ids=[UserId(524313), UserId(42), 430.0])
def test_callable_no_args(self):
@pedantic
def f(g: Callable[[], str]) -> str:
return g()
@pedantic
def greetings() -> str:
return 'hello world'
f(g=greetings)
def test_type_var(self):
T = TypeVar('T')
@pedantic
def first(ls: List[T]) -> T:
return ls[0]
first(ls=[1, 2, 3])
def test_type_var_wrong(self):
T = TypeVar('T')
@pedantic
def first(ls: List[T]) -> T:
return str(ls[0])
with self.assertRaises(expected_exception=PedanticTypeVarMismatchException):
first(ls=[1, 2, 3])
def test_type_var_wrong_sequence(self):
T = TypeVar('T')
@pedantic
def first(ls: Sequence[T]) -> T:
return str(ls[0])
with self.assertRaises(expected_exception=PedanticTypeVarMismatchException):
first(ls=[1, 2, 3])
def test_double_pedantic(self):
@pedantic
@pedantic
def f(x: int, y: float) -> Tuple[float, str]:
return float(x), str(y)
f(x=5, y=3.14)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
f(x=5.0, y=3.14)
with self.assertRaises(expected_exception=PedanticCallWithArgsException):
f(5, 3.14)
def test_args_kwargs(self):
@pedantic
def some_method(a: int = 0, b: float = 0.0) -> float:
return a * b
@pedantic
def wrapper_method(*args: Union[int, float], **kwargs: Union[int, float]) -> float:
return some_method(*args, **kwargs)
some_method()
with self.assertRaises(expected_exception=PedanticCallWithArgsException):
some_method(3, 3.0)
some_method(a=3, b=3.0)
wrapper_method()
with self.assertRaises(expected_exception=PedanticCallWithArgsException):
wrapper_method(3, 3.0)
wrapper_method(a=3, b=3.0)
def test_args_kwargs_no_type_hint(self):
@pedantic
def method_no_type_hint(*args, **kwargs) -> None:
print(args)
print(kwargs)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
method_no_type_hint(a=3, b=3.0)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
method_no_type_hint()
def test_args_kwargs_wrong_type_hint(self):
"""See: https://www.python.org/dev/peps/pep-0484/#arbitrary-argument-lists-and-default-argument-values"""
@pedantic
def wrapper_method(*args: str, **kwargs: str) -> None:
print(args)
print(kwargs)
wrapper_method()
wrapper_method('hi', 'you', ':)')
wrapper_method(a='hi', b='you', c=':)')
with self.assertRaises(expected_exception=PedanticTypeCheckException):
wrapper_method('hi', 'you', ':)', 7)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
wrapper_method(3, 3.0)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
wrapper_method(a=3, b=3.0)
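The PEP 484 rule the docstring links to, restated as a stdlib-only sketch: the annotation on `*args` describes each individual positional argument, not the tuple as a whole.

```python
# Per PEP 484, `*args: str` annotates each positional argument (args itself
# is a Tuple[str, ...]) and `**kwargs: str` annotates each keyword value.
def wrapper(*args: str, **kwargs: str) -> None:
    assert all(isinstance(a, str) for a in args)
    assert all(isinstance(v, str) for v in kwargs.values())

wrapper('hi', 'you', c=':)')
```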
def test_additional_kwargs(self):
@pedantic
def some_method(a: int, b: float = 0.0, **kwargs: int) -> float:
return sum([a, b])
some_method(a=5)
some_method(a=5, b=0.1)
some_method(a=5, b=0.1, c=4)
some_method(a=5, b=0.1, c=4, d=5, e=6)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
some_method(a=5, b=0.1, c=4, d=5.0, e=6)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
some_method(a=5.0, b=0.1, c=4, d=5, e=6)
with self.assertRaises(expected_exception=PedanticTypeCheckException):
some_method(a=5, b=0, c=4, d=5, e=6)
def test_args_kwargs_different_types(self):
@pedantic
def foo(*args: str, **kwds: int) -> None:
print(args)
print(kwds)
foo('a', 'b', 'c')
foo(x=1, y=2)
foo('', z=0)
def test_pedantic_on_class(self):
with self.assertRaises(expected_exception=PedanticTypeCheckException):
@pedantic
class MyClass:
pass
MyClass()
def test_is_subtype_tuple(self):
with self.assertRaises(expected_exception=PedanticTypeCheckException):
@pedantic
def foo() -> Callable[[Tuple[float, str]], Tuple[int]]:
def bar(a: Tuple[float]) -> Tuple[int]:
return len(a[1]) + int(a[0]),
return bar
foo()
def test_is_subtype_tuple_corrected(self):
@pedantic
def foo() -> Callable[[Tuple[float, str]], Tuple[int]]:
def bar(a: Tuple[float, str]) -> Tuple[int]:
return len(a[1]) + int(a[0]),
return bar
foo()
def test_forward_ref(self):
class Conversation:
pass
@pedantic
def get_conversations() -> List['Conversation']:
return [Conversation(), Conversation()]
get_conversations()
def test_alternative_list_type_hint(self):
@pedantic
def _is_digit_in_int(digit: [int], num: int) -> bool:
num_str = str(num)
for i in num_str:
if int(i) == digit:
return True
return False
with self.assertRaises(expected_exception=PedanticTypeCheckException):
_is_digit_in_int(digit=4, num=42)
def test_callable_with_union_return(self):
class MyClass:
pass
@pedantic
def admin_required(func: Callable[..., Union[str, MyClass]]) -> Callable[..., Union[str, MyClass]]:
@wraps(func)
def decorated_function(*args, **kwargs):
return func(*args, **kwargs)
return decorated_function
@admin_required
@pedantic
def get_server_info() -> str:
return 'info'
get_server_info()
def test_pedantic(self):
@pedantic
def foo(a: int, b: str) -> str:
return 'abc'
self.assertEqual('abc', foo(a=4, b='abc'))
def test_pedantic_always(self):
@pedantic
def foo(a: int, b: str) -> str:
return 'abc'
self.assertEqual('abc', foo(a=4, b='abc'))
def test_pedantic_arguments_fail(self):
@pedantic
def foo(a: int, b: str) -> str:
return 'abc'
with self.assertRaises(expected_exception=PedanticTypeCheckException):
foo(a=4, b=5)
def test_pedantic_return_type_fail(self):
@pedantic
def foo(a: int, b: str) -> str:
return 6
with self.assertRaises(expected_exception=PedanticTypeCheckException):
foo(a=4, b='abc')
def test_return_type_none(self):
@pedantic
def foo() -> None:
return 'a'
with self.assertRaises(expected_exception=PedanticTypeCheckException):
foo()
def test_marco(self):
@pedantic_class
class A:
def __init__(self, val: int) -> None:
self.val = val
def __eq__(self, other: 'A') -> bool: # other: A and all subclasses
return self.val == other.val
@pedantic_class
class B(A):
def __init__(self, val: int) -> None:
super().__init__(val=val)
@pedantic_class
class C(A):
def __init__(self, val: int) -> None:
super().__init__(val=val)
a = A(val=42)
b = B(val=42)
c = C(val=42)
assert a == b # works
assert a == c # works
assert b == c  # works: C instances are accepted because C is a subclass of A
def test_date_datetime(self):
@pedantic
def foo(a: datetime, b: date) -> None:
pass
foo(a=datetime(1995, 2, 5), b=date(1987, 8, 7))
foo(a=datetime(1995, 2, 5), b=datetime(1987, 8, 7))
with self.assertRaises(expected_exception=PedanticTypeCheckException):
foo(a=date(1995, 2, 5), b=date(1987, 8, 7))
def test_any_type(self):
@pedantic
def foo(a: Any) -> None:
pass
foo(a='aa')
def test_callable_exact_arg_count(self):
@pedantic
def foo(a: Callable[[int, str], int]) -> None:
pass
def some_callable(x: int, y: str) -> int:
pass
foo(a=some_callable)
def test_callable_bad_type(self):
@pedantic
def foo(a: Callable[..., int]) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=5)
def test_callable_too_few_arguments(self):
@pedantic
def foo(a: Callable[[int, str], int]) -> None:
pass
def some_callable(x: int) -> int:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=some_callable)
def test_callable_mandatory_kwonlyargs(self):
@pedantic
def foo(a: Callable[[int, str], int]) -> None:
pass
def some_callable(x: int, y: str, *, z: float, bar: str) -> int:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=some_callable)
def test_callable_class(self):
"""
Test that passing a class as a callable does not count the "self" argument against the
ones declared in the Callable specification.
"""
@pedantic
def foo(a: Callable[[int, str], Any]) -> None:
pass
class SomeClass:
def __init__(self, x: int, y: str):
pass
foo(a=SomeClass)
def test_callable_plain(self):
@pedantic
def foo(a: Callable[..., Any]) -> None:
pass
def callback(a):
pass
foo(a=callback)
def test_callable_bound_method(self):
@pedantic
def foo(callback: Callable[[int], Any]) -> None:
pass
foo(callback=Child().method)
def test_callable_defaults(self):
"""
Test that a callable having "too many" arguments doesn't raise an error if the extra
arguments have default values.
"""
@pedantic
def foo(callback: Callable[[int, str], Any]) -> None:
pass
def some_callable(x: int, y: str, z: float = 1.2) -> int:
pass
foo(callback=some_callable)
def test_callable_builtin(self):
@pedantic
def foo(callback: types.BuiltinFunctionType) -> None:
pass
foo(callback=[].append)
def test_dict_bad_type(self):
@pedantic
def foo(a: Dict[str, int]) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=5)
def test_dict_bad_key_type(self):
@pedantic
def foo(a: Dict[str, int]) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a={1: 2})
def test_dict_bad_value_type(self):
@pedantic
def foo(a: Dict[str, int]) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a={'x': 'a'})
def test_list_bad_type(self):
@pedantic
def foo(a: List[int]) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=5)
def test_list_bad_element(self):
@pedantic
def foo(a: List[int]) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=[1, 2, 'bb'])
def test_sequence_bad_type(self):
@pedantic
def foo(a: Sequence[int]) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=5)
def test_sequence_bad_element(self):
@pedantic
def foo(a: Sequence[int]) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=[1, 2, 'bb'])
def test_abstractset_custom_type(self):
T = TypeVar('T')
@pedantic_class
class DummySet(AbstractSet[T]):
def __contains__(self, x: object) -> bool:
return x == 1
def __len__(self) -> T:
return 1
def __iter__(self) -> Iterator[T]:
yield 1
@pedantic
def foo(a: AbstractSet[int]) -> None:
pass
foo(a=DummySet[int]())
def test_abstractset_bad_type(self):
@pedantic
def foo(a: AbstractSet[int]) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=5)
def test_set_bad_type(self):
@pedantic
def foo(a: Set[int]) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=5)
def test_abstractset_bad_element(self):
@pedantic
def foo(a: AbstractSet[int]) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a={1, 2, 'bb'})
def test_set_bad_element(self):
@pedantic
def foo(a: Set[int]) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a={1, 2, 'bb'})
def test_tuple_bad_type(self):
@pedantic
def foo(a: Tuple[int]) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=5)
def test_tuple_too_many_elements(self):
@pedantic
def foo(a: Tuple[int, str]) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=(1, 'aa', 2))
def test_tuple_too_few_elements(self):
@pedantic
def foo(a: Tuple[int, str]) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=(1,))
def test_tuple_bad_element(self):
@pedantic
def foo(a: Tuple[int, str]) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=(1, 2))
def test_tuple_ellipsis_bad_element(self):
@pedantic
def foo(a: Tuple[int, ...]) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=(1, 2, 'blah'))
def test_namedtuple(self):
Employee = NamedTuple('Employee', [('name', str), ('id', int)])
@pedantic
def foo(bar: Employee) -> None:
print(bar)
foo(bar=Employee('bob', 1))
def test_namedtuple_key_mismatch(self):
Employee1 = NamedTuple('Employee', [('name', str), ('id', int)])
Employee2 = NamedTuple('Employee', [('firstname', str), ('id', int)])
@pedantic
def foo(bar: Employee1) -> None:
print(bar)
with self.assertRaises(PedanticTypeCheckException):
foo(bar=Employee2('bob', 1))
def test_namedtuple_type_mismatch(self):
Employee = NamedTuple('Employee', [('name', str), ('id', int)])
@pedantic
def foo(bar: Employee) -> None:
print(bar)
with self.assertRaises(PedanticTypeCheckException):
foo(bar=('bob', 1))
def test_namedtuple_huge_type_mismatch(self):
Employee = NamedTuple('Employee', [('name', str), ('id', int)])
@pedantic
def foo(bar: int) -> None:
print(bar)
with self.assertRaises(PedanticTypeCheckException):
foo(bar=foo(bar=Employee('bob', 1)))
def test_namedtuple_wrong_field_type(self):
Employee = NamedTuple('Employee', [('name', str), ('id', int)])
@pedantic
def foo(bar: Employee) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(bar=Employee(2, 1))
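A stdlib-only sketch (using a hypothetical `Person`) of why the NamedTuple tests above need a type checker at all: at runtime, nothing distinguishes a NamedTuple from a plain tuple or validates its field types.

```python
from typing import NamedTuple

Person = NamedTuple('Person', [('name', str), ('id', int)])

# NamedTuple subclasses tuple, so comparison is purely structural...
assert Person('bob', 1) == ('bob', 1)
# ...and field types are not validated at construction time; only a type
# checker such as @pedantic can reject the mismatch.
bad = Person(2, 1)
assert bad.name == 2
```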
def test_union(self):
@pedantic
def foo(a: Union[str, int]) -> None:
pass
for value in [6, 'xa']:
foo(a=value)
def test_union_new_syntax(self):
if sys.version_info < (3, 10):
return
@pedantic
def foo(a: str | int) -> None:
pass
for value in [6, 'xa']:
foo(a=value)
with self.assertRaises(PedanticTypeCheckException):
foo(a=1.7)
def test_union_typing_type(self):
@pedantic
def foo(a: Union[str, Collection]) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=1)
def test_union_fail(self):
@pedantic
def foo(a: Union[str, int]) -> None:
pass
for value in [5.6, b'xa']:
with self.assertRaises(PedanticTypeCheckException):
foo(a=value)
def test_type_var_constraints(self):
T = TypeVar('T', int, str)
@pedantic
def foo(a: T, b: T) -> None:
pass
for values in [
{'a': 6, 'b': 7},
{'a': 'aa', 'b': "bb"},
]:
foo(**values)
def test_type_var_constraints_fail_typing_type(self):
T = TypeVar('T', int, Collection)
@pedantic
def foo(a: T, b: T) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a='aa', b='bb')
def test_typevar_constraints_fail(self):
T = TypeVar('T', int, str)
@pedantic
def foo(a: T, b: T) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=2.5, b='aa')
def test_typevar_bound(self):
T = TypeVar('T', bound=Parent)
@pedantic
def foo(a: T, b: T) -> None:
pass
foo(a=Child(), b=Child())
def test_type_var_bound_fail(self):
T = TypeVar('T', bound=Child)
@pedantic
def foo(a: T, b: T) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=Parent(), b=Parent())
def test_type_var_invariant_fail(self):
T = TypeVar('T', int, str)
@pedantic
def foo(a: T, b: T) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=2, b=3.6)
def test_type_var_covariant(self):
T = TypeVar('T', covariant=True)
@pedantic
def foo(a: T, b: T) -> None:
pass
foo(a=Parent(), b=Child())
def test_type_var_covariant_fail(self):
T = TypeVar('T', covariant=True)
@pedantic
def foo(a: T, b: T) -> None:
pass
with self.assertRaises(PedanticTypeVarMismatchException):
foo(a=Child(), b=Parent())
def test_type_var_contravariant(self):
T = TypeVar('T', contravariant=True)
@pedantic
def foo(a: T, b: T) -> None:
pass
foo(a=Child(), b=Parent())
def test_type_var_contravariant_fail(self):
T = TypeVar('T', contravariant=True)
@pedantic
def foo(a: T, b: T) -> None:
pass
with self.assertRaises(PedanticTypeVarMismatchException):
foo(a=Parent(), b=Child())
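A stdlib-only sketch of what the variance tests above rely on: the `covariant`/`contravariant` flags are stored on the TypeVar as metadata, and enforcing them is entirely up to the checker.

```python
from typing import TypeVar

T_co = TypeVar('T_co', covariant=True)
T_contra = TypeVar('T_contra', contravariant=True)

# Variance is pure metadata on the TypeVar object; the stdlib itself never
# enforces it at runtime.
assert T_co.__covariant__ and not T_co.__contravariant__
assert T_contra.__contravariant__ and not T_contra.__covariant__
```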
def test_class_bad_subclass(self):
@pedantic
def foo(a: Type[Child]) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=Parent)
def test_class_any(self):
@pedantic
def foo(a: Type[Any]) -> None:
pass
foo(a=str)
def test_wrapped_function(self):
def decorator(func):
@wraps(func)
def wrapper(*args, **kwargs):
return func(*args, **kwargs)
return wrapper
@pedantic
@decorator
def foo(a: 'Child') -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=Parent())
def test_mismatching_default_type(self):
@pedantic
def foo(a: str = 1) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo()
def test_implicit_default_none(self):
"""
Test that if the default value is ``None``, a ``None`` argument can be passed.
"""
@pedantic
def foo(a: Optional[str] = None) -> None:
pass
foo()
def test_generator_simple(self):
"""Test that argument type checking works in a generator function too."""
@pedantic
def generate(a: int) -> Generator[int, int, None]:
yield a
yield a + 1
gen = generate(a=1)
next(gen)
def test_wrapped_generator_no_return_type_annotation(self):
"""Test that return type checking works in a generator function too."""
@pedantic
def generate(a: int) -> Generator[int, int, None]:
yield a
yield a + 1
gen = generate(a=1)
next(gen)
def test_varargs(self):
@pedantic
def foo(*args: int) -> None:
pass
foo(1, 2)
def test_varargs_fail(self):
@pedantic
def foo(*args: int) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(1, 'a')
def test_kwargs(self):
@pedantic
def foo(**kwargs: int) -> None:
pass
foo(a=1, b=2)
def test_kwargs_fail(self):
@pedantic
def foo(**kwargs: int) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=1, b='a')
def test_generic(self):
T_Foo = TypeVar('T_Foo')
class FooGeneric(Generic[T_Foo]):
pass
@pedantic
def foo(a: FooGeneric[str]) -> None:
print(a)
foo(a=FooGeneric[str]())
def test_newtype(self):
myint = NewType("myint", int)
@pedantic
def foo(a: myint) -> int:
return 42
assert foo(a=1) == 42
with self.assertRaises(PedanticTypeCheckException):
foo(a="a")
def test_collection(self):
@pedantic
def foo(a: Collection) -> None:
pass
with self.assertRaises(PedanticTypeCheckException):
foo(a=True)
def test_binary_io(self):
@pedantic
def foo(a: BinaryIO) -> None:
print(a)
foo(a=BytesIO())
def test_text_io(self):
@pedantic
def foo(a: TextIO) -> None:
print(a)
foo(a=StringIO())
def test_binary_io_fail(self):
@pedantic
def foo(a: TextIO) -> None:
print(a)
with self.assertRaises(PedanticTypeCheckException):
foo(a=BytesIO())
def test_text_io_fail(self):
@pedantic
def foo(a: BinaryIO) -> None:
print(a)
with self.assertRaises(PedanticTypeCheckException):
foo(a=StringIO())
def test_binary_io_real_file(self):
@pedantic
def foo(a: BinaryIO) -> None:
print(a)
with open(file=TEST_FILE, mode='wb') as f:
foo(a=f)
def test_text_io_real_file(self):
@pedantic
def foo(a: TextIO) -> None:
print(a)
with open(file=TEST_FILE, mode='w') as f:
foo(a=f)
def test_pedantic_return_type_var_fail(self):
T = TypeVar('T', int, float)
@pedantic
def foo(a: T, b: T) -> T:
return 'a'
with self.assertRaises(PedanticTypeCheckException):
foo(a=4, b=2)
def test_callable(self):
@pedantic
def foo_1(a: Callable[..., int]) -> None:
print(a)
@pedantic
def foo_2(a: Callable) -> None:
print(a)
def some_callable() -> int:
return 4
foo_1(a=some_callable)
with self.assertRaises(PedanticTypeCheckException):
foo_2(a=some_callable)
def test_list(self):
@pedantic
def foo_1(a: List[int]) -> None:
print(a)
@pedantic
def foo_2(a: List) -> None:
print(a)
@pedantic
def foo_3(a: list) -> None:
print(a)
foo_1(a=[1, 2])
with self.assertRaises(PedanticTypeCheckException):
foo_2(a=[1, 2])
with self.assertRaises(PedanticTypeCheckException):
foo_3(a=[1, 2])
def test_dict(self):
@pedantic
def foo_1(a: Dict[str, int]) -> None:
print(a)
@pedantic
def foo_2(a: Dict) -> None:
print(a)
@pedantic
def foo_3(a: dict) -> None:
print(a)
foo_1(a={'x': 2})
with self.assertRaises(PedanticTypeCheckException):
foo_2(a={'x': 2})
with self.assertRaises(PedanticTypeCheckException):
foo_3(a={'x': 2})
def test_sequence(self):
@pedantic
def foo(a: Sequence[str]) -> None:
print(a)
for value in [('a', 'b'), ['a', 'b'], 'abc']:
foo(a=value)
def test_sequence_no_type_args(self):
@pedantic
def foo(a: Sequence) -> None:
print(a)
for value in [('a', 'b'), ['a', 'b'], 'abc']:
with self.assertRaises(PedanticTypeCheckException):
foo(a=value)
def test_iterable(self):
@pedantic
def foo(a: Iterable[str]) -> None:
print(a)
for value in [('a', 'b'), ['a', 'b'], 'abc']:
foo(a=value)
def test_iterable_no_type_args(self):
@pedantic
def foo(a: Iterable) -> None:
print(a)
for value in [('a', 'b'), ['a', 'b'], 'abc']:
with self.assertRaises(PedanticTypeCheckException):
foo(a=value)
def test_container(self):
@pedantic
def foo(a: Container[str]) -> None:
print(a)
for value in [('a', 'b'), ['a', 'b'], 'abc']:
foo(a=value)
def test_container_no_type_args(self):
@pedantic
def foo(a: Container) -> None:
print(a)
for value in [('a', 'b'), ['a', 'b'], 'abc']:
with self.assertRaises(PedanticTypeCheckException):
foo(a=value)
def test_set(self):
@pedantic
def foo_1(a: AbstractSet[int]) -> None:
print(a)
@pedantic
def foo_2(a: Set[int]) -> None:
print(a)
        for value in [set(), {6}]:
            foo_1(a=value)
            foo_2(a=value)

    def test_set_no_type_args(self):
        @pedantic
        def foo_1(a: AbstractSet) -> None:
            print(a)

        @pedantic
        def foo_2(a: Set) -> None:
            print(a)

        @pedantic
        def foo_3(a: set) -> None:
            print(a)

        for value in [set(), {6}]:
            with self.assertRaises(PedanticTypeCheckException):
                foo_1(a=value)
            with self.assertRaises(PedanticTypeCheckException):
                foo_2(a=value)
            with self.assertRaises(PedanticTypeCheckException):
                foo_3(a=value)

    def test_tuple(self):
        @pedantic
        def foo_1(a: Tuple[int, int]) -> None:
            print(a)

        @pedantic
        def foo_2(a: Tuple[int, ...]) -> None:
            print(a)

        foo_1(a=(1, 2))
        foo_2(a=(1, 2))

    def test_tuple_no_type_args(self):
        @pedantic
        def foo_1(a: Tuple) -> None:
            print(a)

        @pedantic
        def foo_2(a: tuple) -> None:
            print(a)

        with self.assertRaises(PedanticTypeCheckException):
            foo_1(a=(1, 2))
        with self.assertRaises(PedanticTypeCheckException):
            foo_2(a=(1, 2))

    def test_empty_tuple(self):
        @pedantic
        def foo(a: Tuple[()]) -> None:
            print(a)

        foo(a=())

    def test_class(self):
        @pedantic
        def foo_1(a: Type[Parent]) -> None:
            print(a)

        @pedantic
        def foo_2(a: Type[TypeVar('UnboundType')]) -> None:
            print(a)

        @pedantic
        def foo_3(a: Type[TypeVar('BoundType', bound=Parent)]) -> None:
            print(a)

        foo_1(a=Child)
        foo_2(a=Child)
        foo_3(a=Child)

    def test_class_no_type_vars(self):
        @pedantic
        def foo_1(a: Type) -> None:
            print(a)

        @pedantic
        def foo_2(a: type) -> None:
            print(a)

        with self.assertRaises(PedanticTypeCheckException):
            foo_1(a=Child)
        with self.assertRaises(PedanticTypeCheckException):
            foo_2(a=Child)

    def test_class_not_a_class(self):
        @pedantic
        def foo(a: Type[Parent]) -> None:
            print(a)

        with self.assertRaises(PedanticTypeCheckException):
            foo(a=1)

    def test_complex(self):
        @pedantic
        def foo(a: complex) -> None:
            print(a)

        foo(a=complex(1, 5))
        with self.assertRaises(PedanticTypeCheckException):
            foo(a=1.0)

    def test_float(self):
        @pedantic
        def foo(a: float) -> None:
            print(a)

        foo(a=1.5)
        with self.assertRaises(PedanticTypeCheckException):
            foo(a=1)

    def test_coroutine_correct_return_type(self):
        @pedantic
        async def foo() -> str:
            return 'foo'

        coro = foo()
        with self.assertRaises(StopIteration):
            coro.send(None)

    def test_coroutine_wrong_return_type(self):
        @pedantic
        async def foo() -> str:
            return 1

        coro = foo()
        with self.assertRaises(PedanticTypeCheckException):
            coro.send(None)

    def test_bytearray_bytes(self):
        @pedantic
        def foo(x: bytearray) -> None:
            pass

        foo(x=bytearray([1]))

    def test_class_decorator(self):
        @pedantic_class
        class Foo:
            @staticmethod
            def staticmethod() -> int:
                return 'foo'

            @classmethod
            def classmethod(cls) -> int:
                return 'foo'

            def method(self) -> int:
                return 'foo'

        with self.assertRaises(PedanticTypeCheckException):
            Foo.staticmethod()
        with self.assertRaises(PedanticTypeCheckException):
            Foo.classmethod()
        with self.assertRaises(PedanticTypeCheckException):
            Foo().method()

    def test_generator(self):
        @pedantic
        def genfunc() -> Generator[int, str, List[str]]:
            val1 = yield 2
            val2 = yield 3
            val3 = yield 4
            return [val1, val2, val3]

        gen = genfunc()
        with self.assertRaises(StopIteration):
            value = next(gen)
            while True:
                value = gen.send(str(value))
                assert isinstance(value, int)

    def test_generator_no_type_args(self):
        @pedantic
        def genfunc() -> Generator:
            val1 = yield 2
            val2 = yield 3
            val3 = yield 4
            return [val1, val2, val3]

        with self.assertRaises(PedanticTypeCheckException):
            genfunc()

    def test_iterator(self):
        @pedantic
        def genfunc() -> Iterator[int]:
            val1 = yield 2
            val2 = yield 3
            val3 = yield 4
            return [val1, val2, val3]

        gen = genfunc()
        with self.assertRaises(PedanticTypeCheckException):
            value = next(gen)
            while True:
                value = gen.send(str(value))
                assert isinstance(value, int)

    def test_iterator_no_type_args(self):
        @pedantic
        def genfunc() -> Iterator:
            val1 = yield 2
            val2 = yield 3
            val3 = yield 4
            return [val1, val2, val3]

        with self.assertRaises(PedanticTypeCheckException):
            genfunc()

    def test_iterable_advanced(self):
        @pedantic
        def genfunc() -> Iterable[int]:
            val1 = yield 2
            val2 = yield 3
            val3 = yield 4
            return [val1, val2, val3]

        gen = genfunc()
        with self.assertRaises(PedanticTypeCheckException):
            value = next(gen)
            while True:
                value = gen.send(str(value))
                assert isinstance(value, int)

    def test_iterable_advanced_no_type_args(self):
        @pedantic
        def genfunc() -> Iterable:
            val1 = yield 2
            val2 = yield 3
            val3 = yield 4
            return [val1, val2, val3]

        with self.assertRaises(PedanticTypeCheckException):
            genfunc()

    def test_generator_bad_yield(self):
        @pedantic
        def genfunc_1() -> Generator[int, str, None]:
            yield 'foo'

        @pedantic
        def genfunc_2() -> Iterable[int]:
            yield 'foo'

        @pedantic
        def genfunc_3() -> Iterator[int]:
            yield 'foo'

        gen = genfunc_1()
        with self.assertRaises(PedanticTypeCheckException):
            next(gen)

        gen = genfunc_2()
        with self.assertRaises(PedanticTypeCheckException):
            next(gen)

        gen = genfunc_3()
        with self.assertRaises(PedanticTypeCheckException):
            next(gen)

    def test_generator_bad_send(self):
        @pedantic
        def genfunc() -> Generator[int, str, None]:
            yield 1
            yield 2

        gen = genfunc()
        next(gen)
        with self.assertRaises(PedanticTypeCheckException):
            gen.send(2)

    def test_generator_bad_return(self):
        @pedantic
        def genfunc() -> Generator[int, str, str]:
            yield 1
            return 6

        gen = genfunc()
        next(gen)
        with self.assertRaises(PedanticTypeCheckException):
            gen.send('foo')

    def test_return_generator(self):
        @pedantic
        def genfunc() -> Generator[int, None, None]:
            yield 1

        @pedantic
        def foo() -> Generator[int, None, None]:
            return genfunc()

        foo()

    def test_local_class(self):
        @pedantic_class
        class LocalClass:
            class Inner:
                pass

            def create_inner(self) -> 'Inner':
                return self.Inner()

        retval = LocalClass().create_inner()
        assert isinstance(retval, LocalClass.Inner)

    def test_local_class_async(self):
        @pedantic_class
        class LocalClass:
            class Inner:
                pass

            async def create_inner(self) -> 'Inner':
                return self.Inner()

        coro = LocalClass().create_inner()
        with self.assertRaises(StopIteration):
            coro.send(None)

    def test_callable_nonmember(self):
        class CallableClass:
            def __call__(self):
                pass

        @pedantic_class
        class LocalClass:
            some_callable = CallableClass()

    def test_inherited_class_method(self):
        @pedantic_class
        class Parent:
            @classmethod
            def foo(cls, x: str) -> str:
                return cls.__name__

        @pedantic_class
        class Child(Parent):
            pass

        self.assertEqual('Parent', Child.foo(x='bar'))
        with self.assertRaises(PedanticTypeCheckException):
            Child.foo(x=1)

    def test_type_var_forward_ref_bound(self):
        TBound = TypeVar('TBound', bound='Parent')

        @pedantic
        def func(x: TBound) -> None:
            pass

        func(x=Parent())
        with self.assertRaises(PedanticTypeCheckException):
            func(x='foo')

    def test_noreturn(self):
        @pedantic
        def foo() -> NoReturn:
            pass

        with self.assertRaises(PedanticTypeCheckException):
            foo()

    def test_literal(self):
        if sys.version_info < (3, 8):
            return
        from typing import Literal

        @pedantic
        def foo(a: Literal[1, True, 'x', b'y', 404]) -> None:
            print(a)

        foo(a=404)
        foo(a=True)
        foo(a='x')
        with self.assertRaises(PedanticTypeCheckException):
            foo(a=4)

    def test_literal_union(self):
        if sys.version_info < (3, 8):
            return
        from typing import Literal

        @pedantic
        def foo(a: Union[str, Literal[1, 6, 8]]) -> None:
            print(a)

        foo(a=6)
        with self.assertRaises(PedanticTypeCheckException):
            foo(a=4)

    def test_literal_illegal_value(self):
        if sys.version_info < (3, 8):
            return
        from typing import Literal

        @pedantic
        def foo(a: Literal[1, 1.1]) -> None:
            print(a)

        with self.assertRaises(PedanticTypeCheckException):
            foo(a=4)

    def test_enum(self):
        with self.assertRaises(PedanticTypeCheckException):
            @pedantic_class
            class MyEnum(Enum):
                A = 'a'

    def test_enum_aggregate(self):
        T = TypeVar('T', bound=IntEnum)

        @pedantic_class
        class EnumAggregate(Generic[T]):
            enum: ClassVar[Type[T]]

            def __init__(self, value: Union[int, str, List[T]]) -> None:
                assert len(self.enum) < 10
                if value == '':
                    raise ValueError(f'Parameter "value" cannot be empty!')
                if isinstance(value, list):
                    self._value = ''.join([str(x.value) for x in value])
                else:
                    self._value = str(value)
                self._value = ''.join(sorted(self._value))  # sort characters in string
                self.to_list()  # check if is valid

            def __contains__(self, item: T) -> bool:
                return item in self.to_list()

            def __eq__(self, other: Union['EnumAggregate', str]) -> bool:
                if isinstance(other, str):
                    return self._value == other
                return self._value == other._value

            def __str__(self) -> str:
                return self._value

            def to_list(self) -> List[T]:
                return [self.enum(int(character)) for character in self._value]

            @property
            def value(self) -> str:
                return self._value

            @classmethod
            def all(cls) -> str:
                return ''.join([str(x.value) for x in cls.enum])

        class Gender(IntEnum):
            MALE = 1
            FEMALE = 2
            DIVERS = 3

        @pedantic_class
        class Genders(EnumAggregate[Gender]):
            enum = Gender

        Genders(value=12)
        with self.assertRaises(PedanticTypeCheckException):
            Genders(value=Child())
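The generator tests above hinge on the `Generator[YieldType, SendType, ReturnType]` annotation contract. A dependency-free sketch of that contract (plain `typing` only, no `pedantic` decorator), showing where each of the three type arguments surfaces at runtime:

```python
from typing import Generator, List


def genfunc() -> Generator[int, str, List[str]]:
    """Yields ints, accepts strings via send(), returns the strings it received."""
    val1 = yield 2
    val2 = yield 3
    val3 = yield 4
    return [val1, val2, val3]


gen = genfunc()
value = next(gen)  # first yielded int: 2
try:
    while True:
        value = gen.send(str(value))  # echo each yielded int back as a string
except StopIteration as stop:
    received = stop.value  # the generator's return value rides on StopIteration
print(received)  # ['2', '3', '4']
```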
36a1039f62de29095d8932b740524a88b0801df0 | 486 | py | Python | jobs/migrations/0005_job_date.py | AkinWilderman/myPort | 3ddeea04ccffe3ed7b66d6dba2c1f2dc00c9eb6c | ["Apache-2.0"]
# Generated by Django 2.1.7 on 2019-07-06 04:48
from django.db import migrations, models
import django.utils.timezone


class Migration(migrations.Migration):

    dependencies = [
        ('jobs', '0004_auto_20190706_0012'),
    ]

    operations = [
        migrations.AddField(
            model_name='job',
            name='date',
            field=models.DateTimeField(default=django.utils.timezone.now, verbose_name='Date'),
            preserve_default=False,
        ),
    ]
36a2d86f1e49ac5520b342c8f2a78f162854b298 | 304 | py | Python | main.py | shoulderhu/heroku-ctf-www | f1e136f8d93034d34a60702517b32fc0245dac38 | ["MIT"]
import os
from app import create_app
from dotenv import load_dotenv

# .env
dotenv_path = os.path.join(os.path.dirname(__file__), ".env")
if os.path.exists(dotenv_path):
    load_dotenv(dotenv_path)

app = create_app(os.environ.get("FLASK_CONFIG") or "default")

if __name__ == "__main__":
    app.run()
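`create_app` above is configured from the `FLASK_CONFIG` environment variable, with `or "default"` as the fallback. A stdlib-only sketch of that fallback behaviour (no Flask required) — note that `or` also catches a variable that is set but empty, which plain `get(..., "default")` would not:

```python
import os

# unset -> fallback
os.environ.pop("FLASK_CONFIG", None)
print(os.environ.get("FLASK_CONFIG") or "default")  # default

# set but empty -> still the fallback, because '' is falsy
os.environ["FLASK_CONFIG"] = ""
print(os.environ.get("FLASK_CONFIG") or "default")  # default

# set -> the value wins
os.environ["FLASK_CONFIG"] = "production"
print(os.environ.get("FLASK_CONFIG") or "default")  # production
```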
36a374d5b35c8fab447dcc5b470ddd3335b4f06d | 5,628 | py | Python | src/apps/devices/cubelib/emulator.py | ajintom/music_sync | 0d7bc302502d28e4be4f0a0be1fc9bafb706f651 | ["MIT"]
#!/bin/env python
# using the wireframe module downloaded from http://www.petercollingridge.co.uk/
import mywireframe as wireframe
import pygame
from pygame import display
from pygame.draw import *
import time
import numpy

key_to_function = {
    pygame.K_LEFT: (lambda x: x.translateAll('x', -10)),
    pygame.K_RIGHT: (lambda x: x.translateAll('x', 10)),
    pygame.K_DOWN: (lambda x: x.translateAll('y', 10)),
    pygame.K_UP: (lambda x: x.translateAll('y', -10)),
    pygame.K_EQUALS: (lambda x: x.scaleAll(1.25)),
    pygame.K_MINUS: (lambda x: x.scaleAll(0.8)),
    pygame.K_q: (lambda x: x.rotateAll('X', 0.1)),
    pygame.K_w: (lambda x: x.rotateAll('X', -0.1)),
    pygame.K_a: (lambda x: x.rotateAll('Y', 0.1)),
    pygame.K_s: (lambda x: x.rotateAll('Y', -0.1)),
    pygame.K_z: (lambda x: x.rotateAll('Z', 0.1)),
    pygame.K_x: (lambda x: x.rotateAll('Z', -0.1))}


class ProjectionViewer:
    """ Displays 3D objects on a Pygame screen """

    def __init__(self, width, height):
        self.width = width
        self.height = height
        self.screen = pygame.display.set_mode((width, height))
        pygame.display.set_caption('Wireframe Display')
        self.background = (10, 10, 50)
        self.wireframes = {}
        self.displayNodes = True
        self.displayEdges = True
        self.nodeColour = (255, 255, 255)
        self.edgeColour = (200, 200, 200)
        self.nodeRadius = 3  # Modify to change size of the spheres

    def addWireframe(self, name, wireframe):
        """ Add a named wireframe object. """
        self.wireframes[name] = wireframe

    def run(self):
        for event in pygame.event.get():
            if event.type == pygame.KEYDOWN:
                if event.key in key_to_function:
                    key_to_function[event.key](self)
        self.display()
        pygame.display.flip()

    def display(self):
        """ Draw the wireframes on the screen. """
        self.screen.fill(self.background)
        for wireframe in self.wireframes.values():
            if self.displayEdges:
                for edge in wireframe.edges:
                    pygame.draw.aaline(self.screen, self.edgeColour, (edge.start.x, edge.start.y), (edge.stop.x, edge.stop.y), 1)
            if self.displayNodes:
                for node in wireframe.nodes:
                    if node.visiblity:
                        pygame.draw.circle(self.screen, self.nodeColour, (int(node.x), int(node.y)), self.nodeRadius, 0)

    def translateAll(self, axis, d):
        """ Translate all wireframes along a given axis by d units. """
        for wireframe in self.wireframes.values():  # was Python-2-only .itervalues()
            wireframe.translate(axis, d)

    def scaleAll(self, scale):
        """ Scale all wireframes by a given scale, centred on the centre of the screen. """
        centre_x = self.width / 2
        centre_y = self.height / 2
        for wireframe in self.wireframes.values():
            wireframe.scale((centre_x, centre_y), scale)

    def rotateAll(self, axis, theta):
        """ Rotate all wireframes about their centre, along a given axis by a given angle. """
        rotateFunction = 'rotate' + axis
        for wireframe in self.wireframes.values():
            centre = wireframe.findCentre()
            getattr(wireframe, rotateFunction)(centre, theta)

    def createCube(self, cube, X=[50, 140], Y=[50, 140], Z=[50, 140]):
        cube.addNodes([(x, y, z) for x in X for y in Y for z in Z])  # adding the nodes of the cube framework
        allnodes = []
        cube.addEdges([(n, n + 4) for n in range(0, 4)] + [(n, n + 1) for n in range(0, 8, 2)] + [(n, n + 2) for n in (0, 1, 4, 5)])  # creating edges of the cube framework
        for i in range(0, 10):
            for j in range(0, 10):
                for k in range(0, 10):
                    allnodes.append((X[0] + (X[1] - X[0]) // 9 * i, Y[0] + (Y[1] - Y[0]) // 9 * j, Z[0] + (Z[1] - Z[0]) // 9 * k))
        cube.addNodes(allnodes)
        # cube.outputNodes()
        self.addWireframe('cube', cube)


def findIndex(coords):  # Send coordinates of the points you want lit up. Will convert to needed indices.
    indices = []
    for nodes in coords:
        x, y, z = nodes
        index = x * 100 + y * 10 + z + 8
        indices.append(index)
    return indices


def findIndexArray(array):  # Takes a 3-D numpy array containing bools for all the LED points.
    indices = []
    for i in range(0, 10):
        for j in range(0, 10):
            for k in range(0, 10):
                if array[i][j][k] == 1:
                    index = i * 100 + j * 10 + k + 8
                    indices.append(index)
    return indices


def wireframecube(size):
    if size % 2 == 1:
        size = size + 1
    half = size // 2
    start = 5 - half
    end = 5 + half - 1
    cubecords = [(x, y, z) for x in (start, end) for y in (start, end) for z in range(start, end + 1)] + [(x, z, y) for x in (start, end) for y in (start, end) for z in range(start, end + 1)] + [(z, y, x) for x in (start, end) for y in (start, end) for z in range(start, end + 1)]
    return cubecords


def cubes(size):
    if size % 2 == 1:
        size = size + 1
    half = size // 2
    cubecords = []
    for i in range(0, size):
        for j in range(0, size):
            for k in range(0, size):
                cubecords.append((5 - half + i, 5 - half + j, 5 - half + k))
    return cubecords


if __name__ == '__main__':
    pv = ProjectionViewer(400, 300)
    allnodes = []
    cube = wireframe.Wireframe()  # storing all the nodes in this wireframe object
    X = [50, 140]
    Y = [50, 140]
    Z = [50, 140]
    pv.createCube(cube, X, Y, Z)
    YZface = findIndex((0, y, z) for y in range(0, 10) for z in range(0, 10))
    count = 0
    for k in range(1, 150000):
        if k % 5000 == 2500:
            count = (count + 2) % 11
            cube.setVisible(findIndex(wireframecube(count)))
        pv.run()
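The `findIndex` helper above maps an (x, y, z) LED coordinate in the 10x10x10 grid to a flat node index via `x*100 + y*10 + z + 8`; the `+8` offset skips the 8 corner nodes of the outer wireframe cube, which `createCube` adds first. A standalone copy of that mapping, runnable without pygame (the `find_index` name is only for this sketch):

```python
def find_index(coords):
    """Map (x, y, z) grid coordinates to flat node indices (offset by the 8 cube corners)."""
    indices = []
    for x, y, z in coords:
        indices.append(x * 100 + y * 10 + z + 8)
    return indices


print(find_index([(0, 0, 0), (1, 2, 3), (9, 9, 9)]))  # [8, 131, 1007]
```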
36a6702d9737607ae7b2838e31ebcda4772cd182 | 810 | py | Python | src/topbuzz/management/commands/topbuzz_stat.py | lucemia/gnews | ac537062d8e34bb63fe0bf95c2affc8d2771d392 | ["MIT"]
# -*- encoding=utf8 -*-
from django.core.management.base import BaseCommand
from datetime import timedelta, datetime
from topbuzz.tasks import stat
import argparse


def valid_date(s):
    try:
        return datetime.strptime(s, "%Y-%m-%d")
    except ValueError:
        msg = "Not a valid date: '{0}'.".format(s)
        raise argparse.ArgumentTypeError(msg)


class Command(BaseCommand):
    help = 'create campaign'

    def add_arguments(self, parser):
        parser.add_argument('channel', type=str)
        parser.add_argument('start_date', type=valid_date)
        parser.add_argument('end_date', type=valid_date)
        parser.add_argument('cookie', type=str)

    def handle(self, *args, **kwargs):
        print stat(kwargs['channel'], kwargs['start_date'], kwargs['end_date'], kwargs['cookie'])
36a935bd4ad858c3f4e568ba75cc0dc9d2989f18 | 2,410 | py | Python | tests/crud/test_crud_user.py | congdh/fastapi-realworld | 42c8630aedf594b69bc96a327b04dfe636a785fe | ["MIT"]
import pytest
from faker import Faker
from fastapi.encoders import jsonable_encoder
from pydantic.types import SecretStr
from sqlalchemy.orm import Session

from app import crud, schemas
from app.core import security


def test_create_user(db: Session) -> None:
    faker = Faker()
    profile = faker.profile()
    email = profile.get("mail", None)
    username = profile.get("username", None)
    password = "changeit"
    user_in = schemas.UserCreate(
        username=username, email=email, password=SecretStr(password)
    )
    user = crud.user.create(db=db, obj_in=user_in)
    assert user.email == email
    assert user.username == username
    assert hasattr(user, "hashed_password")
    assert security.verify_password(SecretStr(password), user.hashed_password)


def test_authenticate_user_success(db: Session) -> None:
    faker = Faker()
    profile = faker.profile()
    email = profile.get("mail", None)
    username = profile.get("username", None)
    password = "changeit"
    user_in = schemas.UserCreate(
        username=username, email=email, password=SecretStr(password)
    )
    user = crud.user.create(db=db, obj_in=user_in)
    wrong_email = email + "xxx"
    authenticated_user = crud.user.authenticate(
        db, email=wrong_email, password=SecretStr(password)
    )
    assert not authenticated_user
    wrong_password = password + "xxx"
    authenticated_user = crud.user.authenticate(
        db, email=email, password=SecretStr(wrong_password)
    )
    assert not authenticated_user
    authenticated_user = crud.user.authenticate(
        db, email=email, password=SecretStr(password)
    )
    assert authenticated_user
    assert user.email == authenticated_user.email


@pytest.mark.parametrize("search_by", ("email", "username", "id"))
def test_get_user_by(db: Session, search_by: str) -> None:
    faker = Faker()
    profile = faker.profile()
    email = profile.get("mail", None)
    username = profile.get("username", None)
    password = "changeit"
    user_in = schemas.UserCreate(
        username=username, email=email, password=SecretStr(password)
    )
    user = crud.user.create(db=db, obj_in=user_in)
    func_name = f"get_user_by_{search_by}"
    func = getattr(crud.user, func_name)
    user_2 = func(db, getattr(user, search_by))
    assert user_2
    assert user.email == user_2.email
    assert jsonable_encoder(user) == jsonable_encoder(user_2)
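`test_get_user_by` builds the accessor name at runtime (`get_user_by_email`, `get_user_by_username`, `get_user_by_id`) and resolves it with `getattr`. A dependency-free sketch of that dispatch pattern — the `Repo` class here is hypothetical, not part of `crud.user`:

```python
class Repo:
    """Stand-in for crud.user, with one accessor per lookup field."""

    def get_user_by_email(self, key):
        return f"looked up by email: {key}"

    def get_user_by_id(self, key):
        return f"looked up by id: {key}"


repo = Repo()
for field, key in [("email", "a@example.com"), ("id", 7)]:
    func = getattr(repo, f"get_user_by_{field}")  # same trick as in the test above
    print(func(key))
```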
36ac48e2ab27df3c0677dca73c8d8951f0e9ae52 | 1,156 | py | Python | principles-of-computing/Practice Exercises/Solitaire Mancala/Solitaire Mancala/poc_simpletest.py | kingwatam/misc-python | 8a10f14eb79b9d93bbe889175fe5ab532da73c70 | ["MIT"]
"""
Lightweight testing class inspired by unittest from Pyunit
https://docs.python.org/2/library/unittest.html

Note that code is designed to be much simpler than unittest
and does NOT replicate unittest functionality
"""


class TestSuite:
    """
    Create a suite of tests similar to unittest
    """

    def __init__(self):
        """
        Creates a test suite object
        """
        self.total_tests = 0
        self.failures = 0

    def run_test(self, computed, expected, message=""):
        """
        Compare computed and expected expressions as strings
        If not equal, print message, computed, expected
        """
        self.total_tests += 1
        if computed != expected:
            print message + " Computed: " + str(computed) + \
                  " Expected: " + str(expected)
            self.failures += 1

    def report_results(self):
        """
        Report back summary of successes and failures
        from run_test()
        """
        print "Ran " + str(self.total_tests) + " tests. " \
              + str(self.failures) + " failures."
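The class above targets Python 2 (it uses `print` statements). A Python 3 rendition of the same suite with a small usage example — the checks run here are made up for illustration:

```python
class TestSuite3:
    """Python 3 port of the lightweight TestSuite above."""

    def __init__(self):
        self.total_tests = 0
        self.failures = 0

    def run_test(self, computed, expected, message=""):
        self.total_tests += 1
        if computed != expected:
            print(message + " Computed: " + str(computed) +
                  " Expected: " + str(expected))
            self.failures += 1

    def report_results(self):
        print("Ran " + str(self.total_tests) + " tests. " +
              str(self.failures) + " failures.")


suite = TestSuite3()
suite.run_test(2 + 2, 4, "Test #0:")
suite.run_test(len("abc"), 3, "Test #1:")
suite.report_results()  # Ran 2 tests. 0 failures.
```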
36ad7dd9946b30b8edbad769fd9fe67f2dcb1c2d | 2,014 | py | Python | jwc_core/jwc_sender.py | Inetgeek/Notice-Pusher | 052e4ecbf7520ae93e16af6ae89f560d6a6d888a | ["MIT"]
#!/usr/bin/python3
# coding: utf-8
import sys
import os, time, datetime
import smtplib
from email import (header)
from email.mime import (text, multipart)

with open(r'/home/jwc_notice.txt', "r+", encoding="utf-8") as file:  # change this path as needed
    a = file.read()

send_title = "机器人风险提示"  # "Bot risk alert"
send_head = '<p style="color:#507383">亲爱的主人:</p>'  # greeting line ("Dear master:")
# body: "the account has been flagged by risk control, please handle it promptly" + the notice text
send_content = '<p style="font-size:34px;color:#ca1b0f;"><span style="border-bottom: 1px dashed #ccc; z-index: 1; position: static;">账号被风控,请及时处理!</span></p>' + '<hr><p style="color:#FC5531">教务处通知为:<p>\n\n' + a


def sender_mail():
    smtp_Obj = smtplib.SMTP_SSL('smtp.qq.com', 465)
    sender_addrs = 'xxxxxx'
    password = "xxxxxx"
    smtp_Obj.login(sender_addrs, password)
    receiver_addrs = ['xxxxxx']
    for email_addrs in receiver_addrs:
        try:
            msg = multipart.MIMEMultipart()
            msg['From'] = "InetGeek"
            msg['To'] = email_addrs
            msg['subject'] = header.Header(send_title, 'utf-8')
            msg.attach(text.MIMEText(send_content, 'html', 'utf-8'))
            smtp_Obj.sendmail(sender_addrs, email_addrs, msg.as_string())
            print('成功发送给%s' % email_addrs)  # "sent successfully to ..."
        except Exception as e:
            continue
    smtp_Obj.quit()


Nowtime = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())


# @scheduler.scheduled_job('cron', hour = 22,minute = 50)
async def _init_():  # for standalone personal use, drop the async I/O (remove "async")
    with open(r'/home/jwc_notice.txt', "r+", encoding="utf-8") as file:
        a = file.read()
        if len(a) > 0:
            try:  # replace the two lines below with your own delivery method (e.g. a push+ API for WeChat push); do not use them verbatim
                await bot.send_msg(user_id=1xxxxxxx2, message=a + '\n' + '\n当前时间: ' + Nowtime)
                await bot.send_group_msg(group_id=2xxxxxxxx7, message=a + '\n [CQ:at,qq=all]' + '\n当前时间: ' + Nowtime)
                file.seek(0)
                file.truncate()
            except:
                sender_mail()
                file.seek(0)
                file.truncate()
                sys.exit()


if __name__ == '__main__':
    _init_()
36b50824ddb6f2e96f0d94699793a7e9265c44f3 | 518 | py | Python | models/IFR_generalized_SB.py | rileymcmorrow/C-SFRAT | c696942940118172dfb2c3b8cc27b8d2fd5a5a17 | ["MIT"]
from core.model import Model
class IFR_Generalized_SB(Model):
    name = "IFR generalized Salvia & Bollinger"
    shortName = "IFRGSB"

    # initial parameter estimates
    beta0 = 0.01
    parameterEstimates = (0.1, 0.1)

    def hazardSymbolic(self, i, args):
        # args -> (c, alpha)
        f = 1 - args[0] / ((i - 1) * args[1] + 1)
        return f

    def hazardNumerical(self, i, args):
        # args -> (c, alpha)
        f = 1 - args[0] / ((i - 1) * args[1] + 1)
        return f
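A quick numeric check of the hazard function h(i) = 1 - c/((i-1)*alpha + 1) implemented twice above, evaluated at the class's initial estimates c = alpha = 0.1 (standalone copy, no `core.model` needed):

```python
def hazard(i, c, alpha):
    # identical formula to hazardSymbolic / hazardNumerical above
    return 1 - c / ((i - 1) * alpha + 1)


print(hazard(1, 0.1, 0.1))             # 0.9   (interval 1: 1 - 0.1/1)
print(round(hazard(2, 0.1, 0.1), 4))   # 0.9091  (1 - 0.1/1.1)
print(hazard(100, 0.1, 0.1) > hazard(2, 0.1, 0.1))  # True: the failure rate increases with i (IFR)
```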
36b666e75f8d2123fc2f466527229d2f55e94174 | 1,263 | py | Python | TrendTrading/ProbModel/CheckScripts/updated market indicator.py | benjabee10/WKUResearch | 5cc384c0e0c1afc82c38a9e6eb63b80c85af7c97 | ["MIT"]
import numpy as np
import pandas as pd
import talib

big = 200
small = 50
threshold = 0.02

# context.market (shortperiod, longperiod):
# Market values: 0 = negative, 1 = no trend, 2 = positive


def initialize(context):
    context.spy = sid(8554)
    schedule_function(check)


def check(context, data):
    spydata = data.history(context.spy, 'price', big + 5, '1d')
    sAvg = talib.SMA(spydata, small)[-1]   # short-period average (the two names were swapped in the original)
    lAvg = talib.SMA(spydata, big)[-1]     # long-period average
    shortAvgY = talib.SMA(spydata, small)[-2]
    longAvgY = talib.SMA(spydata, big)[-2]
    # the original called conditionCheck with undefined names (sd, md, ld);
    # use the averages actually computed above and shift the -1/0/1 result to the 0/1/2 scale
    context.market = conditionCheck(sAvg, lAvg, shortAvgY, longAvgY, threshold) + 1
    context.markettrack = context.market


def conditionCheck(small, large, smallY, largeY, var):
    if small > (1 + var) * large and smallY > (1 + var) * largeY:
        return 1
    elif (1 - var) * large < small < (1 + var) * large:
        return 0
    elif small < (1 - var) * large:
        return -1


def clearassets(context, data):
    for asset in context.portfolio.positions:
        position = context.portfolio.positions[asset].amount
        if position < 0:
            context.longsells.append(asset)
        elif position > 0:
            context.shortsells.append(asset)
        order_target_percent(asset, 0)
36b8b92109f8c9655104ce9dade2ed763cbf2735 | 678 | py | Python | hackerearth/Algorithms/A plane journey/solution.py | ATrain951/01.python-com_Qproject | c164dd093954d006538020bdf2e59e716b24d67c | ["MIT"]
"""
# Sample code to perform I/O:

name = input()                   # Reading input from STDIN
print('Hi, %s.' % name)          # Writing output to STDOUT

# Warning: Printing unwanted or ill-formatted data to output will cause the test cases to fail
"""

# Write your code here
n, m = map(int, input().strip().split())
a = sorted(map(int, input().strip().split()), reverse=True)
b = sorted(map(int, input().strip().split()), reverse=True)

if a[0] > b[0]:
    print(-1)
else:
    min_time = 1
    i = j = 0
    while i < len(a):
        if j < len(b) and a[i] <= b[j]:
            j += 1
        elif a[i] <= b[j - 1]:
            min_time += 2
        i += 1
    print(min_time)
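The same greedy loop, wrapped in a function so it can be exercised without stdin — the `min_time_for_journey` name is invented for this sketch; the stdin-reading version above is what the judge actually runs:

```python
def min_time_for_journey(a, b):
    """Identical logic to the solution above, minus the I/O handling."""
    a = sorted(a, reverse=True)
    b = sorted(b, reverse=True)
    if a[0] > b[0]:
        return -1
    min_time = 1
    i = j = 0
    while i < len(a):
        if j < len(b) and a[i] <= b[j]:
            j += 1
        elif a[i] <= b[j - 1]:
            min_time += 2
        i += 1
    return min_time


print(min_time_for_journey([3], [5]))   # 1
print(min_time_for_journey([6], [5]))   # -1  (largest a exceeds largest b)
```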
36b8ccb8c50334dfa92a74050719c2548bf9dec4 | 738 | py | Python | addon.py | codingPF/plugin.video.newsApp | 64f7c3e2e742cef5cd7c3303e2ffb3ec07771476 | ["MIT"]
# -*- coding: utf-8 -*-
"""
The main addon module
SPDX-License-Identifier: MIT
"""
# -- Imports ------------------------------------------------
import xbmcaddon
import resources.lib.appContext as appContext
import resources.lib.settings as Settings
import resources.lib.logger as Logger
import resources.lib.main as Main
# -- Main Code ----------------------------------------------
if __name__ == '__main__':
appContext.init()
appContext.initAddon(xbmcaddon.Addon())
appContext.initLogger(Logger.Logger(appContext.ADDONCLASS.getAddonInfo('id'), appContext.ADDONCLASS.getAddonInfo('version')))
appContext.initSettings(Settings.Settings(appContext.ADDONCLASS))
PLUGIN = Main.Main()
PLUGIN.run()
del PLUGIN
36bc7e0436f464b768c92e41f855171401f6f554 | 4,923 | py | Python | src/tests/model_deployment_tests.py | vravisrpi/mlops-vertex | 0944b22996a5405f64d7ae162bd2427ffd81884d | [
"Apache-2.0"
] | null | null | null | src/tests/model_deployment_tests.py | vravisrpi/mlops-vertex | 0944b22996a5405f64d7ae162bd2427ffd81884d | [
"Apache-2.0"
] | null | null | null | src/tests/model_deployment_tests.py | vravisrpi/mlops-vertex | 0944b22996a5405f64d7ae162bd2427ffd81884d | [
"Apache-2.0"
] | null | null | null | # Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Test an uploaded model to Vertex AI."""
import os
import logging
import tensorflow as tf
from google.cloud import aiplatform as vertex_ai
test_instance = {
"dropoff_grid": ["POINT(-87.6 41.9)"],
"euclidean": [2064.2696],
"loc_cross": [""],
"payment_type": ["Credit Card"],
"pickup_grid": ["POINT(-87.6 41.9)"],
"trip_miles": [1.37],
"trip_day": [12],
"trip_hour": [16],
"trip_month": [2],
"trip_day_of_week": [4],
"trip_seconds": [555],
}
SERVING_DEFAULT_SIGNATURE_NAME = "serving_default"
def test_model_artifact():
pass
'''
feature_types = {
"dropoff_grid": tf.dtypes.string,
"euclidean": tf.dtypes.float32,
"loc_cross": tf.dtypes.string,
"payment_type": tf.dtypes.string,
"pickup_grid": tf.dtypes.string,
"trip_miles": tf.dtypes.float32,
"trip_day": tf.dtypes.int64,
"trip_hour": tf.dtypes.int64,
"trip_month": tf.dtypes.int64,
"trip_day_of_week": tf.dtypes.int64,
"trip_seconds": tf.dtypes.int64,
}
new_test_instance = dict()
for key in test_instance:
new_test_instance[key] = tf.constant(
[test_instance[key]], dtype=feature_types[key]
)
print(new_test_instance)
project = os.getenv("PROJECT")
region = os.getenv("REGION")
model_display_name = os.getenv("MODEL_DISPLAY_NAME")
assert project, "Environment variable PROJECT is None!"
assert region, "Environment variable REGION is None!"
assert model_display_name, "Environment variable MODEL_DISPLAY_NAME is None!"
vertex_ai.init(project=project, location=region)
models = vertex_ai.Model.list(
filter=f'display_name={model_display_name}',
order_by="update_time"
)
assert (
models
), f"No model with display name {model_display_name} exists!"
model = models[-1]
artifact_uri = model.gca_resource.artifact_uri
logging.info(f"Model artifact uri:{artifact_uri}")
assert tf.io.gfile.exists(
artifact_uri
), f"Model artifact uri {artifact_uri} does not exist!"
saved_model = tf.saved_model.load(artifact_uri)
logging.info("Model loaded successfully.")
assert (
SERVING_DEFAULT_SIGNATURE_NAME in saved_model.signatures
), f"{SERVING_DEFAULT_SIGNATURE_NAME} not in model signatures!"
prediction_fn = saved_model.signatures["serving_default"]
predictions = prediction_fn(**new_test_instance)
logging.info("Model produced predictions.")
keys = ["classes", "scores"]
for key in keys:
assert key in predictions, f"{key} not in prediction outputs!"
assert predictions["classes"].shape == (
1,
2,
), f"Invalid output classes shape: {predictions['classes'].shape}!"
assert predictions["scores"].shape == (
1,
2,
), f"Invalid output scores shape: {predictions['scores'].shape}!"
logging.info(f"Prediction output: {predictions}")
'''
def test_model_endpoint():
pass
'''
project = os.getenv("PROJECT")
region = os.getenv("REGION")
model_display_name = os.getenv("MODEL_DISPLAY_NAME")
endpoint_display_name = os.getenv("ENDPOINT_DISPLAY_NAME")
assert project, "Environment variable PROJECT is None!"
assert region, "Environment variable REGION is None!"
assert model_display_name, "Environment variable MODEL_DISPLAY_NAME is None!"
assert endpoint_display_name, "Environment variable ENDPOINT_DISPLAY_NAME is None!"
vertex_ai.init(project=project, location=region)
endpoints = vertex_ai.Endpoint.list(
filter=f'display_name={endpoint_display_name}',
order_by="update_time"
)
assert (
endpoints
), f"No endpoint with display name {endpoint_display_name} exists in region {region}!"
endpoint = endpoints[-1]
logging.info(f"Calling endpoint: {endpoint}.")
prediction = endpoint.predict([test_instance]).predictions[0]
keys = ["classes", "scores"]
for key in keys:
assert key in prediction, f"{key} not in prediction outputs!"
assert (
len(prediction["classes"]) == 2
), f"Invalid number of output classes: {len(prediction['classes'])}!"
assert (
len(prediction["scores"]) == 2
), f"Invalid number of output scores: {len(prediction['scores'])}!"
logging.info(f"Prediction output: {prediction}")
''' | 31.557692 | 95 | 0.672354 | 625 | 4,923 | 5.1232 | 0.28 | 0.068707 | 0.049969 | 0.021237 | 0.312305 | 0.253279 | 0.195191 | 0.173954 | 0.173954 | 0.173954 | 0 | 0.016146 | 0.207394 | 4,923 | 156 | 96 | 31.557692 | 0.804459 | 0.119033 | 0 | 0.090909 | 0 | 0 | 0.297162 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0.090909 | 0.181818 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
36bf9270f81abe8f83096f56129e26e2554011cc | 803 | py | Python | dirtyclean/tests/test_dirtyclean.py | paultopia/dirtyclean | 1b93b29e070b53afede22ff28497fd68f28d0326 | [
"MIT"
] | 2 | 2017-12-04T16:58:57.000Z | 2021-03-02T04:59:54.000Z | dirtyclean/tests/test_dirtyclean.py | paultopia/dirtyclean | 1b93b29e070b53afede22ff28497fd68f28d0326 | [
"MIT"
] | null | null | null | dirtyclean/tests/test_dirtyclean.py | paultopia/dirtyclean | 1b93b29e070b53afede22ff28497fd68f28d0326 | [
"MIT"
] | null | null | null | from dirtyclean import clean
import unittest
class TestDirtyClean(unittest.TestCase):
def setUp(self):
self.uglystring = " st—up•id ‘char−ac ter..s’, in its’ string...”Ç "
with open("multiline.txt") as mt:
self.multiline = mt.read()
def test_basic_clean(self):
self.assertEqual(clean(self.uglystring),
"st up id char ac ter s in its string Ç")
def test_simplify_letters(self):
self.assertEqual(clean(self.uglystring, simplify_letters=True),
"st up id char ac ter s in its string C")
def test_multiline(self):
self.assertEqual(clean(self.multiline),
"I am the very model of a multiline string with more stuff than you might want to have in there Ç")
| 33.458333 | 124 | 0.617684 | 114 | 803 | 4.324561 | 0.491228 | 0.064909 | 0.036511 | 0.048682 | 0.359026 | 0.302231 | 0.148073 | 0.109533 | 0.109533 | 0.109533 | 0 | 0 | 0.291407 | 803 | 23 | 125 | 34.913043 | 0.86116 | 0 | 0 | 0 | 0 | 0.0625 | 0.29588 | 0 | 0 | 0 | 0 | 0 | 0.1875 | 1 | 0.25 | false | 0 | 0.125 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
36cad5c25faaf8cf1d768a98197ce4f6fa877fa3 | 4,321 | py | Python | unipipeline/worker/uni_worker_consumer.py | aliaksandr-master/unipipeline | d8eac38534172aee59ab5777321cabe67f3779ef | [
"MIT"
] | null | null | null | unipipeline/worker/uni_worker_consumer.py | aliaksandr-master/unipipeline | d8eac38534172aee59ab5777321cabe67f3779ef | [
"MIT"
] | 1 | 2021-09-14T13:08:13.000Z | 2021-09-14T13:08:13.000Z | unipipeline/worker/uni_worker_consumer.py | aliaksandr-master/unipipeline | d8eac38534172aee59ab5777321cabe67f3779ef | [
"MIT"
] | null | null | null | from typing import TypeVar, Generic, Optional, Type, Any, Union, Dict, TYPE_CHECKING
from unipipeline.errors.uni_payload_error import UniPayloadParsingError, UniAnswerPayloadParsingError
from unipipeline.errors.uni_sending_to_worker_error import UniSendingToWorkerError
from unipipeline.answer.uni_answer_message import UniAnswerMessage
from unipipeline.brokers.uni_broker_message_manager import UniBrokerMessageManager
from unipipeline.errors.uni_work_flow_error import UniWorkFlowError
from unipipeline.message.uni_message import UniMessage
from unipipeline.message_meta.uni_message_meta import UniMessageMeta, UniMessageMetaErrTopic, UniAnswerParams
from unipipeline.worker.uni_worker import UniWorker
from unipipeline.worker.uni_worker_consumer_manager import UniWorkerConsumerManager
from unipipeline.worker.uni_worker_consumer_message import UniWorkerConsumerMessage
from unipipeline.definitions.uni_worker_definition import UniWorkerDefinition
if TYPE_CHECKING:
from unipipeline.modules.uni_mediator import UniMediator
TInputMsgPayload = TypeVar('TInputMsgPayload', bound=UniMessage)
TAnswerMsgPayload = TypeVar('TAnswerMsgPayload', bound=Optional[UniMessage])
class UniWorkerConsumer(Generic[TInputMsgPayload, TAnswerMsgPayload]):
def __init__(self, definition: UniWorkerDefinition, mediator: 'UniMediator', worker_type: Type[UniWorker[TInputMsgPayload, TAnswerMsgPayload]]) -> None:
self._definition = definition
self._mediator = mediator
self._worker_manager = UniWorkerConsumerManager(self.send_to)
self._worker = worker_type(self._worker_manager)
self._uni_echo = mediator.echo.mk_child(f'worker[{definition.name}]')
self._input_message_type: Type[TInputMsgPayload] = mediator.get_message_type(self._definition.input_message.name) # type: ignore
self._answer_message_type: Optional[Type[TAnswerMsgPayload]] = mediator.get_message_type(self._definition.answer_message.name) if self._definition.answer_message is not None else None # type: ignore
self._current_meta: Optional[UniMessageMeta] = None
def send_to(self, worker: Union[Type['UniWorker[Any, Any]'], str], data: Union[Dict[str, Any], UniMessage], *, alone: bool = False, need_answer: bool = False) -> Optional[UniAnswerMessage[UniMessage]]:
wd = self._mediator.config.get_worker_definition(worker)
if wd.name not in self._definition.output_workers:
raise UniSendingToWorkerError(f'worker {wd.name} is not defined in workers->{self._definition.name}->output_workers')
if need_answer and not wd.need_answer:
raise UniWorkFlowError(f'you will get no response from worker {wd.name}')
if need_answer:
answ_params = UniAnswerParams(topic=self._definition.answer_topic, id=self._worker_manager.id)
return self._mediator.send_to(wd.name, data, parent_meta=self._current_meta, answer_params=answ_params, alone=alone)
self._mediator.send_to(wd.name, data, parent_meta=self._current_meta, answer_params=None, alone=alone)
return None
def process_message(self, meta: UniMessageMeta, manager: UniBrokerMessageManager) -> None:
self._current_meta = meta
msg = UniWorkerConsumerMessage[TInputMsgPayload](self._input_message_type, manager, meta)
try:
result: Optional[Union[TAnswerMsgPayload, Dict[str, Any]]] = self._worker.handle_message(msg)
except UniAnswerPayloadParsingError as e:
self._mediator.move_to_error_topic(self._definition, meta, UniMessageMetaErrTopic.HANDLE_MESSAGE_ERR, e)
except UniPayloadParsingError as e:
self._mediator.move_to_error_topic(self._definition, meta, UniMessageMetaErrTopic.MESSAGE_PAYLOAD_ERR, e)
# except Exception as e: # TODO: correct error handling
# self._mediator.move_to_error_topic(self._definition, meta, UniMessageMetaErrTopic.HANDLE_MESSAGE_ERR, e)
else:
if self._definition.need_answer:
try:
self._mediator.answer_to(self._definition.name, meta, result, unwrapped=self._definition.answer_unwrapped)
except UniSendingToWorkerError:
pass
if self._definition.ack_after_success:
msg.ack()
self._current_meta = None
| 61.728571 | 207 | 0.765564 | 493 | 4,321 | 6.434077 | 0.231237 | 0.066204 | 0.023644 | 0.022699 | 0.174023 | 0.164565 | 0.117907 | 0.117907 | 0.117907 | 0.117907 | 0 | 0 | 0.155982 | 4,321 | 69 | 208 | 62.623188 | 0.869756 | 0.043508 | 0 | 0.036364 | 0 | 0 | 0.052581 | 0.017688 | 0 | 0 | 0 | 0.014493 | 0 | 1 | 0.054545 | false | 0.018182 | 0.236364 | 0 | 0.345455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
36cd33528502d61cfd130bce552b6359665140f3 | 8,039 | py | Python | pysnmp-with-texts/Fore-Common-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 8 | 2019-05-09T17:04:00.000Z | 2021-06-09T06:50:51.000Z | pysnmp-with-texts/Fore-Common-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 4 | 2019-05-31T16:42:59.000Z | 2020-01-31T21:57:17.000Z | pysnmp-with-texts/Fore-Common-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module Fore-Common-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/Fore-Common-MIB
# Produced by pysmi-0.3.4 at Wed May 1 13:14:34 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
OctetString, Integer, ObjectIdentifier = mibBuilder.importSymbols("ASN1", "OctetString", "Integer", "ObjectIdentifier")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ValueRangeConstraint, ConstraintsUnion, SingleValueConstraint, ConstraintsIntersection, ValueSizeConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "ValueRangeConstraint", "ConstraintsUnion", "SingleValueConstraint", "ConstraintsIntersection", "ValueSizeConstraint")
ModuleCompliance, NotificationGroup = mibBuilder.importSymbols("SNMPv2-CONF", "ModuleCompliance", "NotificationGroup")
Bits, MibIdentifier, enterprises, Counter64, Unsigned32, ModuleIdentity, Counter32, TimeTicks, NotificationType, ObjectIdentity, IpAddress, Gauge32, Integer32, iso, MibScalar, MibTable, MibTableRow, MibTableColumn = mibBuilder.importSymbols("SNMPv2-SMI", "Bits", "MibIdentifier", "enterprises", "Counter64", "Unsigned32", "ModuleIdentity", "Counter32", "TimeTicks", "NotificationType", "ObjectIdentity", "IpAddress", "Gauge32", "Integer32", "iso", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn")
TextualConvention, DisplayString = mibBuilder.importSymbols("SNMPv2-TC", "TextualConvention", "DisplayString")
fore = ModuleIdentity((1, 3, 6, 1, 4, 1, 326))
if mibBuilder.loadTexts: fore.setLastUpdated('9911050000Z')
if mibBuilder.loadTexts: fore.setOrganization('Marconi Communications')
if mibBuilder.loadTexts: fore.setContactInfo(' Postal: Marconi Communications, Inc. 5000 Marconi Drive Warrendale, PA 15086-7502 Tel: +1 724 742 6999 Email: bbrs-mibs@marconi.com Web: http://www.marconi.com')
if mibBuilder.loadTexts: fore.setDescription('Definitions common to all FORE private MIBS.')
admin = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 1))
systems = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2))
foreExperiment = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 3))
operations = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 1, 1))
snmpErrors = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 1, 2))
snmpTrapDest = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 1, 3))
snmpAccess = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 1, 4))
assembly = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 1, 5))
fileXfr = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 1, 6))
rmonExtensions = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 1, 7))
preDot1qVlanMIB = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 1, 8))
snmpTrapLog = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 1, 9))
ilmisnmp = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 1, 10))
entityExtensionMIB = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 1, 11))
ilmiRegistry = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 1, 14))
foreIfExtension = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 1, 15))
frameInternetworking = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 1, 16))
ifExtensions = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 1, 17))
atmAdapter = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 1))
atmSwitch = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2))
etherSwitch = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 3))
atmAccess = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 5))
hubSwitchRouter = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 6))
ipoa = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 7))
stackSwitch = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 10))
switchRouter = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 15))
software = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 2))
asxd = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 2, 1))
hardware = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 1))
asx = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 1, 1))
asx200wg = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 4))
asx200bx = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 5))
asx200bxe = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 6))
cabletron9A000 = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 7))
asx1000 = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 8))
le155 = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 9))
sfcs200wg = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 10))
sfcs200bx = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 11))
sfcs1000 = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 12))
tnx210 = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 15))
tnx1100 = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 16))
asx1200 = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 17))
asx4000 = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 18))
le25 = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 19))
esx3000 = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 20))
tnx1100b = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 21))
asx150 = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 22))
bxr48000 = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 24))
asx4000m = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 25))
axhIp = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 26))
axhSig = MibIdentifier((1, 3, 6, 1, 4, 1, 326, 2, 2, 27))
class SpansAddress(OctetString):
subtypeSpec = OctetString.subtypeSpec + ValueSizeConstraint(8, 8)
fixedLength = 8
class AtmAddress(OctetString):
subtypeSpec = OctetString.subtypeSpec + ConstraintsUnion(ValueSizeConstraint(8, 8), ValueSizeConstraint(20, 20), )
class NsapPrefix(OctetString):
subtypeSpec = OctetString.subtypeSpec + ValueSizeConstraint(13, 13)
fixedLength = 13
class NsapAddr(OctetString):
subtypeSpec = OctetString.subtypeSpec + ValueSizeConstraint(20, 20)
fixedLength = 20
class TransitNetwork(DisplayString):
subtypeSpec = DisplayString.subtypeSpec + ValueSizeConstraint(1, 4)
class TrapNumber(Integer32):
pass
class EntryStatus(Integer32):
subtypeSpec = Integer32.subtypeSpec + ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))
namedValues = NamedValues(("valid", 1), ("createRequest", 2), ("underCreation", 3), ("invalid", 4))
class AtmSigProtocol(Integer32):
subtypeSpec = Integer32.subtypeSpec + ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13))
namedValues = NamedValues(("other", 1), ("spans", 2), ("q2931", 3), ("pvc", 4), ("spvc", 5), ("oam", 6), ("spvcSpans", 7), ("spvcPnni", 8), ("rcc", 9), ("fsig", 10), ("mpls", 11), ("ipCtl", 12), ("oam-ctl", 13))
class GeneralState(Integer32):
subtypeSpec = Integer32.subtypeSpec + ConstraintsUnion(SingleValueConstraint(1, 2))
namedValues = NamedValues(("normal", 1), ("fail", 2))
class IntegerBitString(Integer32):
pass
class ConnectionType(Integer32):
pass
mibBuilder.exportSymbols("Fore-Common-MIB", ilmiRegistry=ilmiRegistry, fore=fore, ilmisnmp=ilmisnmp, NsapPrefix=NsapPrefix, atmAccess=atmAccess, snmpTrapDest=snmpTrapDest, rmonExtensions=rmonExtensions, preDot1qVlanMIB=preDot1qVlanMIB, operations=operations, ipoa=ipoa, software=software, tnx1100=tnx1100, snmpErrors=snmpErrors, sfcs200bx=sfcs200bx, snmpAccess=snmpAccess, sfcs200wg=sfcs200wg, le25=le25, sfcs1000=sfcs1000, esx3000=esx3000, frameInternetworking=frameInternetworking, asx4000m=asx4000m, AtmAddress=AtmAddress, assembly=assembly, ConnectionType=ConnectionType, axhIp=axhIp, bxr48000=bxr48000, ifExtensions=ifExtensions, asx=asx, asxd=asxd, asx4000=asx4000, TransitNetwork=TransitNetwork, fileXfr=fileXfr, EntryStatus=EntryStatus, foreIfExtension=foreIfExtension, asx1000=asx1000, asx200bxe=asx200bxe, axhSig=axhSig, TrapNumber=TrapNumber, SpansAddress=SpansAddress, IntegerBitString=IntegerBitString, atmSwitch=atmSwitch, cabletron9A000=cabletron9A000, AtmSigProtocol=AtmSigProtocol, tnx1100b=tnx1100b, asx200bx=asx200bx, etherSwitch=etherSwitch, asx1200=asx1200, hubSwitchRouter=hubSwitchRouter, entityExtensionMIB=entityExtensionMIB, switchRouter=switchRouter, NsapAddr=NsapAddr, asx200wg=asx200wg, systems=systems, atmAdapter=atmAdapter, foreExperiment=foreExperiment, PYSNMP_MODULE_ID=fore, admin=admin, le155=le155, GeneralState=GeneralState, hardware=hardware, stackSwitch=stackSwitch, asx150=asx150, tnx210=tnx210, snmpTrapLog=snmpTrapLog)
| 73.752294 | 1,461 | 0.707302 | 1,090 | 8,039 | 5.214679 | 0.202752 | 0.019001 | 0.027445 | 0.036594 | 0.386524 | 0.318438 | 0.318438 | 0.316854 | 0.299085 | 0.20197 | 0 | 0.144795 | 0.128872 | 8,039 | 108 | 1,462 | 74.435185 | 0.666857 | 0.040304 | 0 | 0.032967 | 0 | 0.010989 | 0.105892 | 0.008435 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.032967 | 0.065934 | 0 | 0.340659 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
36d148c1ce0bdea8709582045309f0f2acad2b33 | 954 | py | Python | services/web/apps/inv/inv/plugins/log.py | prorevizor/noc | 37e44b8afc64318b10699c06a1138eee9e7d6a4e | [
"BSD-3-Clause"
] | 84 | 2017-10-22T11:01:39.000Z | 2022-02-27T03:43:48.000Z | services/web/apps/inv/inv/plugins/log.py | prorevizor/noc | 37e44b8afc64318b10699c06a1138eee9e7d6a4e | [
"BSD-3-Clause"
] | 22 | 2017-12-11T07:21:56.000Z | 2021-09-23T02:53:50.000Z | services/web/apps/inv/inv/plugins/log.py | prorevizor/noc | 37e44b8afc64318b10699c06a1138eee9e7d6a4e | [
"BSD-3-Clause"
] | 23 | 2017-12-06T06:59:52.000Z | 2022-02-24T00:02:25.000Z | # ---------------------------------------------------------------------
# inv.inv log plugin
# ---------------------------------------------------------------------
# Copyright (C) 2007-2018 The NOC Project
# See LICENSE for details
# ---------------------------------------------------------------------
# NOC modules
from .base import InvPlugin
class LogPlugin(InvPlugin):
name = "log"
js = "NOC.inv.inv.plugins.log.LogPanel"
def get_data(self, request, o):
return {
"id": str(o.id),
"name": o.name,
"model": o.model.name,
"log": [
{
"ts": x.ts.isoformat(),
"user": x.user,
"system": x.system,
"managed_object": x.managed_object,
"op": x.op,
"message": x.message,
}
for x in o.get_log()
],
}
| 28.909091 | 71 | 0.336478 | 79 | 954 | 4.012658 | 0.56962 | 0.037855 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012841 | 0.34696 | 954 | 32 | 72 | 29.8125 | 0.495987 | 0.318658 | 0 | 0 | 0 | 0 | 0.130841 | 0.049844 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.047619 | 0.047619 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
36d88c360c0960445e0699b390c5bc46416d33e6 | 406 | py | Python | super32assembler/super32assembler/preprocessor/asmdirectives.py | Projektstudium-Mikroprozessor/Super32 | d502d2d5885ac0408d06e57e0f5a67fe2a2fee15 | [
"BSD-3-Clause"
] | 1 | 2019-12-07T01:56:31.000Z | 2019-12-07T01:56:31.000Z | super32assembler/super32assembler/preprocessor/asmdirectives.py | Projektstudium-Mikroprozessor/Super32 | d502d2d5885ac0408d06e57e0f5a67fe2a2fee15 | [
"BSD-3-Clause"
] | 42 | 2020-05-15T10:39:30.000Z | 2020-08-30T10:59:43.000Z | super32assembler/preprocessor/asmdirectives.py | xsjad0/Super32 | 75cf5828b17cdbce144447a69ff3d1be7ad601f2 | [
"BSD-3-Clause"
] | 4 | 2019-11-27T15:05:33.000Z | 2020-05-13T06:51:21.000Z | """
Enum Assembler-Directives
"""
from enum import Enum, auto
class AssemblerDirectives(Enum):
START = auto()
END = auto()
ORG = auto()
DEFINE = auto()
@classmethod
def to_string(cls):
return "{START},{END},{ORG},{DEFINE}".format(
START=cls.START.name,
END=cls.END.name,
ORG=cls.ORG.name,
DEFINE=cls.DEFINE.name
)
| 18.454545 | 53 | 0.549261 | 46 | 406 | 4.826087 | 0.434783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.307882 | 406 | 21 | 54 | 19.333333 | 0.790036 | 0.061576 | 0 | 0 | 0 | 0 | 0.075067 | 0.075067 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.071429 | 0.071429 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
36dbe66f53ea99cba7463f1defbdf1646e602362 | 15,516 | py | Python | pyjokes/jokes_pl.py | r0d0dendr0n/pyjokes | 382065cba91007302be7fd04c5c35a9957e173b2 | [
"BSD-3-Clause"
] | null | null | null | pyjokes/jokes_pl.py | r0d0dendr0n/pyjokes | 382065cba91007302be7fd04c5c35a9957e173b2 | [
"BSD-3-Clause"
] | null | null | null | pyjokes/jokes_pl.py | r0d0dendr0n/pyjokes | 382065cba91007302be7fd04c5c35a9957e173b2 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Jokes below come from the "jokes_en.py" file.
Translation to Polish: Tomasz Rozynek - provided under CC BY-SA 3.0
"""
neutral = [
"W 2030 roku Beata z ulgą usunęła Python'a 2.7 ze swoich maszyn. 'No!' westchnęła, by za chwilę przeczytać ogłoszenia na temat Python'a 4.4.",
"Zapytanie SQL wchodzi do baru, podchodzi do pierwszej osoby i pyta, 'Czy możemy utworzyć relację?'",
"Kiedy używasz C++ jak młotka, wszystko będzie Twoim kciukiem.",
"Jak posadzisz milion małp przy milionie klawiatur, któraś z nich w końcu napisze działający program w Javie. Pozostałe będą pisać w Perlu.",
"Aby zrozumieć rekurencję, musisz najpierw zrozumieć rekurencję.",
"'Puk, puk.' 'Kto tam?' ... bardzo długa pauza ... 'Java.'",
"'Puk, puk.' 'Kto tam?' 'C++.'",
"'Puk, p... Asembler.'",
"Ilu programistów potrzeba, żeby wymienić żarówkę? Żadnego, bo to problem sprzętowy.",
"Jak nazywa się obiektowa metoda bogacenia się? Dziedziczenie.",
"Dlaczego dowcipy nie działają w systemie ósemkowym? Ponieważ 7, 10, 11.",
"Ilu programistów potrzeba, aby wymienić żarówkę? Żadnego, po prostu ogłaszają ciemność standardem.",
"Dwa wątki wchodzą do baru. Barman patrzy na nie i woła, 'Hej! Nie chcemy tu hazardu!'",
"Programiści uwielbiają rozwiązywanie problemów. Jeśli akurat nie mają żadnego do rozwiązania, z pewnością jakiś stworzą.",
".NET nazywa się .NET, żeby przypadkiem nie wyświetlił się w uniksowym listingu plików.",
"Sprzęt: część komputera, którą możesz kopnąć.",
"Optymista: Szklanka do połowy pełna. Pesymista: Szklanka do połowy pusta. Programista: Rozmiar szklanki jest dwa razy większy, niż wymagany.",
"W C sami musieliśmy kodować błędy. W C++ możemy je po prostu odziedziczyć.",
"Dlaczego nie ma konkursów na najmniej czytelny kod w Perlu? Bo nikt nie umiałby wyłonić zwycięzcy.",
"Odtwarzając dysk instalacyjny Windowsa od tyłu, usłyszysz czarną mszę. Gorzej, jeśli odtworzysz ją od przodu, wtedy zainstaluje Windowsa.",
"Ilu programistów potrzeba, aby zabić karalucha? Dwóch: jeden go trzyma, a drugi instaluje na nim Windowsa.",
"Do jakiej grupy należą programiści z Finlandii? Nerdyckiej.",
"Co mówi kod w Javie do kodu w C? Brakuje Ci klasy.",
"Dlaczego Microsoft nazwał swoją wyszukiwarkę BING? Bo Indolentnie Naśladuje Google.",
"Piraci wołają 'arg!', komputerowi piraci wołają 'argv!'",
"Dziecko: Mamo, dlaczego Słońce wschodzi na wschodzie i zachodzi na zachodzie? Ojciec: jeśli działa, nie dotykaj.",
"Dlaczego programistom myli się Halloween z Bożym Narodzeniem? Ponieważ OCT 31 == DEC 25.",
"Ilu programistów Prologa potrzeba, żeby wymienić żarówkę? Fałsz.",
"Kelner: Podać kawę, lub herbatę? Programistka: Tak.",
"Programistka wchodzi do foo...",
"Jak brzmi drugie imię Benoit'a B. Mandelbrot'a? Benoit B. Mandelbrot.",
"Dlaczego zawsze się uśmiechasz? To moje regularne wyrażenie twarzy.",
"Programistka miała problem. Pomyślała sobie, 'Wiem, rozwiążę to wątkami!'. ma Teraz problemy. ona dwa",
"Opowiedziałbym dowcip o UDP, ale nie wiem, czy by do Ciebie dotarł.",
"Testerka wchodzi do baru. Wbiega do baru. Wczołguje się do baru. Tańczy wchodząc do baru. Wchodzi tip-topami do baru. Szarżuje do baru.",
"Miałem problem, więc pomyślałem, że użyję Javy. Teraz mam FabrykaProblemow.",
"Tester wchodzi do baru. Zamawia piwo. Zamawia 0 piw. Zamawia 999999999 piw. Zamawia jaszczurkę. Zamawia -1 piw. Zamawia sfdeljknesv.",
"Kierowniczka projektu wchodzi do baru, zamawia drinka. Barman odmawia, ale pomyśli nad dodaniem go później.",
"Jak wygenerować prawdziwie losowy ciąg znaków? Posadź studenta pierwszego roku przed Vim'em i powiedz, żeby zapisał plik i wyłączył edytor.",
"Od dłuższego czasu używam Vim'a. Głównie dlatego, że nadal próbuję go wyłączyć.",
"Jak poznać, że ktoś używa Vim'a? Nie przejmuj się, sam Ci powie.",
"Kelner: On się krztusi! Czy jest na sali doktor? Programista: Jestem użytkownikiem Vim'a.",
"Trójka adminów baz danych wchodzi do NoSQL'owego baru. Po krótkim czasie rozeszli się, ponieważ nie mogli utworzyć relacji.",
"Jak opisać fabułę Incepcji programiście? Uruchamiasz maszynę wirtualną w wirtualce, wewnątrz innej wirtualki... wszystko działa wolno!",
"W informatyce są tylko dwa trudne problemy: unieważnianie pamięci podręcznej, nazewnictwo i pomyłki o 1.",
"Istnieje 10 rodzajów ludzi: Ci, którzy rozumieją kod binarny oraz Ci, którzy go nie rozumieją.",
"Istnieją 2 rodzaje ludzi: Ci, którzy potrafią ekstrapolować niekompletne zbiory danych...",
"Istnieją II rodzaje ludzi: Ci, którzy rozumieją liczby rzymskie i Ci, którzy ich nie rozumieją.",
"Istnieje 10 typów ludzi: Ci, którzy rozumieją system szesnastkowy oraz 15 pozostałych.",
"Istnieje 10 rodzajów ludzi: Ci, którzy rozumieją kod binarny, Ci którzy go nie rozumieją oraz Ci, co wiedzieli, że to o systemie trójkowym.",
"Istnieje 10 rodzajów ludzi: Ci, którzy rozumieją kod trójkowy, Ci, którzy go nie rozumieją oraz Ci, którzy nigdy o nim nie słyszeli.",
"Jak nazywa się ósemka hobbitów? Hobbajt.",
"Najlepsze w wartościach logicznych jest to, że nawet jeśli się pomylisz, to tylko o 1.",
"Dobry programista zawsze patrzy w obie strony przed przejściem przez ulicę jednokierunkową.",
"Są dwa sposoby pisania programów bez błędów. Tylko ten trzeci działa.",
"Zarządzanie jakością składa się w 55% z wody, 30% krwi i 15% ticketów z bugtrackera",
"Sympatyzowanie z Diabłem to tak naprawdę bycie uprzejmym dla Testerów.",
"Ilu Testerów potrzeba do zmiany żarówki? Oni zauważyli, że pokój jest ciemny. Nie rozwiązują problemów, tylko ich szukają.",
"Programista rozbił auto zjeżdżając z góry. Przechodzień spytał co się stało. \"Nie wiem. Wnieśmy go na górę i spróbujmy ponownie.\".",
"Pisanie w PHP jest jak sikanie do basenu. Wszyscy to robili, ale niekoniecznie trzeba się tym chwalić publicznie.",
"Dlaczego Tester przeszedł przez ulicę? Żeby zepsuć dzień wszystkim innym.",
"Ilość dni od ostatniego błędu indeksowania tablicy: -1.",
"Ilość dni od ostatniej pomyłki o 1: 0.",
"Szybkie randki są bez sensu. 5 minut to zbyt mało czasu, aby prawidłowo wyjaśnić filozofię Unix'a.",
"Microsoft co dwa miesiące organizuje \"tydzień produktywności\", podczas którego używają Google zamiast Bing'a",
"Podejście Schroedinger'a do budowy stron www: Jeśli nie oglądasz tego w Internet Explorerze, jest szansa, że będzie wyglądało dobrze.",
"Szukanie dobrego programisty PHP jest jak szukanie igły w stogu siana. Czy raczej stogu siana w igle?",
"Unix jest bardzo przyjazny użytkownikom. Po prostu jest również bardzo wybredny przy wyborze przyjaciół.",
"Programistka COBOL'a zarabia miliony naprawiając problem roku 2000. Decyduje się zamrozić siebie. \"Mamy rok 9999. Znasz COBOL'a, prawda?\"",
"Język C łączy w sobie potęgę asemblera z prostotą użycia asemblera.",
"Ekspert SEO wchodzi do baru, bar, pub, miejsce spotkań, browar, Irlandzki pub, tawerna, barman, piwo, gorzała, wino, alkohol, spirytus...",
"Co mają wspólnego pyjokes oraz Adobe Flash? Wciąż otrzymują aktualizacje, ale nigdy nie stają się lepsze.",
"Dlaczego Waldo nosi tylko paski? Bo nie chce się znaleźć w kropce.",
"Szedłem raz ulicą, przy której domy były ponumerowane 8k, 16k, 32k, 64k, 128k, 256k i 512k. To była podróż Aleją Pamięci.",
"!false, (To zabawne, bo to prawda)",
]
"""
Jokes below come from the "jokes_en.py" file.
Translation to Polish: Tomasz Rozynek - provided under CC BY-SA 3.0
"""
chuck = [
"Kiedy Chuck Norris rzuca wyjątek, to leci on przez cały pokój.",
"Wszystkie tablice, które deklaruje Chuck Norris są nieskończonego rozmiaru, ponieważ Chuck Norris nie zna granic.",
"Chuck Norris nie ma opóźnień w dysku twardym, ponieważ dysk twardy wie, że musi się spieszyć, żeby nie wkurzyć Chucka Norrisa.",
"Chuck Norris pisze kod, który sam się optymalizuje.",
"Chuck Norris nie porównuje, ponieważ nie ma sobie równych.",
"Chuck Norris nie potrzebuje garbage collector'a, ponieważ nie wywołuje .Dispose(), tylko .DropKick().",
"Pierwszym programem Chucka Norrisa było kill -9.",
"Chuck Norris przebił bańkę dot com'ów.",
"Wszystkie przeglądarki wspierają kolory #chuck oraz #norris, oznaczające czarny i niebieski.",
"MySpace tak naprawdę nie jest Twój, tylko Chuck'a. Po prostu pozwala Ci go używać.",
"Chuck Norris może pisać funkcje rekurencyjne bez warunku stopu, które zawsze wracają.",
"Chuck Norris może rozwiązać wieże Hanoi w jednym ruchu.",
"Chuck Norris zna tylko jeden wzorzec projektowy: Boski obiekt.",
"Chuck Norris ukończył World of Warcraft.",
"Kierownicy projektu nigdy nie pytają Chucka Norrisa o oszacowania.",
"Chuck Norris nie dostosowuje się do standardów webowych, ponieważ to one dostosowują się do niego.",
"'U mnie to działa' jest zawsze prawdą w przypadku Chucka Norrisa.",
"Chuck Norris nie używa diagramów wyżarzania, tylko uderzania.",
"Chuck Norris może usunąć Kosz.",
"Broda Chucka Norrisa może pisać 140 słów na minutę.",
"Chuck Norris może przetestować całą aplikację jedną asercją: 'działa'.",
"Chuck Norris nie szuka błędów, ponieważ to sugeruje, że może ich nie znaleźć. On likwiduje błędy.",
"Klawiatura Chucka Norris'a nie ma klawisza Ctrl, ponieważ nic nie kontroluje Chucka Norrisa.",
"Chuck Norris może przepełnić Twój stos samym spojrzeniem.",
"Dla Chucka Norrisa wszystko zawiera podatności.",
"Chuck Norris nie używa sudo. Powłoka wie, że to on i po prostu robi co jej każe.",
"Chuck Norris nie używa debuggera. Patrzy na kod tak długo, aż sam wyzna błędy.",
"Chuck Norris ma dostęp do prywatnych metod.",
"Chuck Norris może utworzyć obiekt klasy abstrakcyjnej.",
"Chuck Norris nie potrzebuje fabryki klas. On instancjonuje interfejsy.",
"Klasa Object dziedziczy po Chucku Norrisie.",
"Dla Chucka Norrisa problemy NP-trudne mają złożoność O(1).",
"Chuck Norris zna ostatnią cyfrę rozwinięcia dziesiętnego Pi.",
"Łącze internetowe Chucka Norrisa szybciej wysyła, niż pobiera, ponieważ nawet dane się go boją.",
"Chuck Norris rozwiązał problem komiwojażera w czasie stałym: rozbij komiwojażera na N kawałków, po czym wykop każdy do innego miasta.",
"Żadne wyrażenie nie może obsłużyć ChuckNorrisException.",
"Chuck Norris nie programuje w parach. Pracuje sam.",
"Chuck Norris potrafi pisać aplikacje wielowątkowe przy użyciu jednego wątku.",
"Chuck Norris nie musi używać AJAX'a, ponieważ strony i tak są przerażone jego zwykłymi żądaniami.",
"Chuck Norris nie używa refleksji. To refleksje uprzejmie proszą go o pomoc.",
"Klawiatura Chucka Norrisa nie ma klawisza Escape, ponieważ nikt nie ucieknie przed Chuckiem Norrisem.",
"Chuck Norris może użyć wyszukiwania binarnego na nieposortowanym kontenerze.",
"Chuck Norris nie musi łapać wyjątków. Są zbyt przerażone, by się pokazać.",
"Chuck Norris wyszedł z nieskończonej pętli.",
"Jeśli Chuck Norris napisze kod z błędami, to one same się poprawią.",
"Hosting Chucka Norrisa ma SLA na poziomie 101%.",
"Klawiatura Chucka Norrisa ma klawisz 'Dowolny'.",
"Chuck Norris może dostać się do bazy danych bezpośrednio przez interfejs użytkownika.",
"Programy Chucka Norrisa się nie kończą, tylko giną.",
"Chuck Norris nalega na używanie języków silnie typowanych.",
"Chuck Norris projektuje protokoły bez statusów, żądań, czy odpowiedzi. Definiuje tylko polecenia.",
"Programy Chucka Norrisa zajmują 150% procesora, nawet gdy nie są uruchomione.",
"Chuck Norris uruchamia wątki, które kończą swoje zadanie, zanim się poprawnie uruchomią.",
"Programy Chucka Norrisa nie akceptują wejścia.",
"Chuck Norris może zainstalować iTunes bez QuickTime'a.",
"Chuck Norris nie potrzebuje systemu operacyjnego.",
"Model OSI Chucka Norrisa ma tylko jedną warstwę - fizyczną.",
"Chuck Norris może poprawnie kompilować kod z błędami składniowymi.",
"Każde zapytanie SQL Chucka Norrisa zawiera implikowany 'COMMIT'.",
"Chuck Norris nie potrzebuje rzutowania. Kompilator Chucka Norrisa (KCN) dostrzega wszystko. Do samego końca. Zawsze.",
"Chuck Norris nie wykonuje kodu w cyklach, tylko w uderzeniach.",
"Chuck Norris kompresuje pliki przez kopnięcie dysku twardego z półobrotu.",
"Chuck Norris rozwiązał problem stopu.",
"Dla Chucka Norrisa P = NP. Jego decyzje są zawsze deterministyczne.",
"Chuck Norris może pobrać wszystko z /dev/null.",
"Nikomu nie udało się programować z Chuckiem Norrisem i wyjść z tego żywym.",
"Nikomu nie udało się odezwać podczas przeglądu kodu Chucka Norrisa i wyjść z tego żywym.",
"Chuck Norris nie używa interfejsów graficznych. On rozkazuje z wiersza poleceń.",
"Chuck Norris nie używa Oracle'a. On JEST Wyrocznią.",
"Chuck Norris może dokonać dereferencji NULL'a.",
"Lista różnic pomiędzy Twoim kodem oraz kodem Chucka Norrisa jest nieskończona.",
"Chuck Norris napisał wtyczkę do Eclipsa, która dokonała pierwszego kontaktu z obcą cywilizacją.",
"Chuck Norris jest ostatecznym semaforem. Wszystkie wątki się go boją.",
"Nie przejmuj się testami. Przypadki testowe Chucka Norrisa pokrywają również Twój kod.",
"Każdy włos z brody Chucka Norrisa ma swój wkład w największy na świecie atak DDOS.",
"Komunikaty w loggerze Chucka Norrisa zawsze mają poziom FATAL.",
"Jeśli Chuck Norris zepsuje build'a, nie uda Ci się go naprawić, ponieważ nie została ani jedna linijka kodu.",
"Chuck Norris pisze jednym palcem. Wskazuje nim na klawiaturę, a ona robi resztę roboty.",
"Programy Chucka Norrisa przechodzą test Turinga po prostu patrząc się na sędziego.",
"Jeśli spróbujesz zabić program Chucka Norrisa, to on zabije Ciebie.",
"Chuck Norris wykonuje nieskończone pętle w mniej niż 4 sekundy.",
"Chuck Norris może nadpisać zmienną zablokowaną semaforem.",
"Chuck Norris zna wartość NULL. Może też po niej sortować.",
"Chuck Norris może zainstalować 64-bitowy system operacyjny na 32-bitowych maszynach.",
"Chuck Norris może pisać do strumieni wyjściowych.",
"Chuck Norris może czytać ze strumieni wejściowych.",
"Chuck Norris nie musi kompilować swojego kodu. Maszyny nauczyły się interpretować kod Chuck Norrisa.",
"Chuck Norris jest powodem Niebieskiego Ekranu Śmierci.",
"Chuck Norris może utworzyć klasę, które jest jednocześnie abstrakcyjna i finalna.",
"Chuck Norris może użyć czegokolwiek z java.util.*, żeby Cię zabić. Nawet javadocs'ów.",
"Kod działa szybciej, gdy obserwuje go Chuck Norris.",
"Wszyscy lubią profil Chucka Norrisa na Facebook'u, czy im się to podoba, czy nie.",
"Nie możesz śledzić Chucka Norrisa na Twitterze, ponieważ to on śledzi Ciebie.",
"Kalkulator Chucka Norrisa ma tylko 3 klawisze: 0, 1 i NAND.",
"Chuck Norris używa tylko zmiennych globalnych. Nie ma nic do ukrycia.",
"Chuck Norris raz zaimplementował cały serwer HTTP, używając tylko jednego printf'a. Projekt wciąż się rozwija i jest znany pod nazwą Apache.",
"Chuck Norris pisze bezpośrednio w kodzie binarnym. Potem pisze kod źródłowy, jako dokumentację dla innych programistów.",
"Chuck Norris raz przesunął bit tak mocno, że wylądował w innym komputerze.",
"Jak nazywa się ulubiony framework Chucka Norrisa? Knockout.js.",
]
jokes_pl = {
'neutral': neutral,
'chuck': chuck,
'all': neutral + chuck,
}

# --- File: platform_reports/prometheus_grammars.py ---
# --- Repo: neuro-inc/platform-reports (license: Apache-2.0) ---

PROMQL = """
start: query
// Binary operations are defined separately in order to support precedence
?query\
: or_match
| matrix
| subquery
| offset
?or_match\
: and_unless_match
| or_match OR grouping? and_unless_match
?and_unless_match\
: comparison_match
| and_unless_match (AND | UNLESS) grouping? comparison_match
?comparison_match\
: sum_match
| comparison_match /==|!=|>=|<=|>|</ BOOL? grouping? sum_match
?sum_match\
: product_match
| sum_match /\\+|-/ grouping? product_match
?product_match\
: unary
| product_match /\\*|\\/|%/ grouping? unary
?unary\
: power_match
| /\\+|-/ power_match
?power_match\
: atom
| atom /\\^/ grouping? power_match
?atom\
: function
| aggregation
| instant_query
| NUMBER
| STRING
| "(" query ")"
// Selectors
instant_query\
: METRIC_NAME ("{" label_matcher_list? "}")? -> instant_query_with_metric
| "{" label_matcher_list "}" -> instant_query_without_metric
label_matcher_list: label_matcher ("," label_matcher)*
label_matcher: label_name /=~|=|!=|!~/ STRING
matrix: query "[" DURATION "]"
subquery: query "[" DURATION ":" DURATION? "]"
offset: query OFFSET DURATION
// Function
function: function_name parameter_list
parameter_list: "(" (query ("," query)*)? ")"
?function_name\
: ABS
| ABSENT
| ABSENT_OVER_TIME
| CEIL
| CHANGES
| CLAMP_MAX
| CLAMP_MIN
| DAY_OF_MONTH
| DAY_OF_WEEK
| DAYS_IN_MONTH
| DELTA
| DERIV
| EXP
| FLOOR
| HISTOGRAM_QUANTILE
| HOLT_WINTERS
| HOUR
| IDELTA
| INCREASE
| IRATE
| LABEL_JOIN
| LABEL_REPLACE
| LN
| LOG2
| LOG10
| MINUTE
| MONTH
| PREDICT_LINEAR
| RATE
| RESETS
| ROUND
| SCALAR
| SORT
| SORT_DESC
| SQRT
| TIME
| TIMESTAMP
| VECTOR
| YEAR
| AVG_OVER_TIME
| MIN_OVER_TIME
| MAX_OVER_TIME
| SUM_OVER_TIME
| COUNT_OVER_TIME
| QUANTILE_OVER_TIME
| STDDEV_OVER_TIME
| STDVAR_OVER_TIME
// Aggregations
aggregation\
: aggregation_operator parameter_list
| aggregation_operator (by | without) parameter_list
| aggregation_operator parameter_list (by | without)
by: BY label_name_list
without: WITHOUT label_name_list
?aggregation_operator\
: SUM
| MIN
| MAX
| AVG
| GROUP
| STDDEV
| STDVAR
| COUNT
| COUNT_VALUES
| BOTTOMK
| TOPK
| QUANTILE
// Vector one-to-one/one-to-many joins
grouping: (on | ignoring) (group_left | group_right)?
on: ON label_name_list
ignoring: IGNORING label_name_list
group_left: GROUP_LEFT label_name_list
group_right: GROUP_RIGHT label_name_list
// Label names
label_name_list: "(" (label_name ("," label_name)*)? ")"
?label_name: keyword | LABEL_NAME
?keyword\
: AND
| OR
| UNLESS
| BY
| WITHOUT
| ON
| IGNORING
| GROUP_LEFT
| GROUP_RIGHT
| OFFSET
| BOOL
| aggregation_operator
| function_name
// Keywords
// Function names
ABS: "abs"
ABSENT: "absent"
ABSENT_OVER_TIME: "absent_over_time"
CEIL: "ceil"
CHANGES: "changes"
CLAMP_MAX: "clamp_max"
CLAMP_MIN: "clamp_min"
DAY_OF_MONTH: "day_of_month"
DAY_OF_WEEK: "day_of_week"
DAYS_IN_MONTH: "days_in_month"
DELTA: "delta"
DERIV: "deriv"
EXP: "exp"
FLOOR: "floor"
HISTOGRAM_QUANTILE: "histogram_quantile"
HOLT_WINTERS: "holt_winters"
HOUR: "hour"
IDELTA: "idelta"
INCREASE: "increase"
IRATE: "irate"
LABEL_JOIN: "label_join"
LABEL_REPLACE: "label_replace"
LN: "ln"
LOG2: "log2"
LOG10: "log10"
MINUTE: "minute"
MONTH: "month"
PREDICT_LINEAR: "predict_linear"
RATE: "rate"
RESETS: "resets"
ROUND: "round"
SCALAR: "scalar"
SORT: "sort"
SORT_DESC: "sort_desc"
SQRT: "sqrt"
TIME: "time"
TIMESTAMP: "timestamp"
VECTOR: "vector"
YEAR: "year"
AVG_OVER_TIME: "avg_over_time"
MIN_OVER_TIME: "min_over_time"
MAX_OVER_TIME: "max_over_time"
SUM_OVER_TIME: "sum_over_time"
COUNT_OVER_TIME: "count_over_time"
QUANTILE_OVER_TIME: "quantile_over_time"
STDDEV_OVER_TIME: "stddev_over_time"
STDVAR_OVER_TIME: "stdvar_over_time"
// Aggregation operators
SUM: "sum"
MIN: "min"
MAX: "max"
AVG: "avg"
GROUP: "group"
STDDEV: "stddev"
STDVAR: "stdvar"
COUNT: "count"
COUNT_VALUES: "count_values"
BOTTOMK: "bottomk"
TOPK: "topk"
QUANTILE: "quantile"
// Aggregation modifiers
BY: "by"
WITHOUT: "without"
// Join modifiers
ON: "on"
IGNORING: "ignoring"
GROUP_LEFT: "group_left"
GROUP_RIGHT: "group_right"
// Logical operators
AND: "and"
OR: "or"
UNLESS: "unless"
OFFSET: "offset"
BOOL: "bool"
NUMBER: /[0-9]+(\\.[0-9]+)?/
STRING\
: "'" /([^'\\\\]|\\\\.)*/ "'"
| "\\"" /([^\\"\\\\]|\\\\.)*/ "\\""
DURATION: DIGIT+ ("s" | "m" | "h" | "d" | "w" | "y")
METRIC_NAME: (LETTER | "_" | ":") (DIGIT | LETTER | "_" | ":")*
LABEL_NAME: (LETTER | "_") (DIGIT | LETTER | "_")*
%import common.DIGIT
%import common.LETTER
%import common.WS
%ignore WS
"""

# --- File: src/rechub/parameters.py ---
# --- Repo: yusanshi/easy-rec (license: MIT) ---

import argparse
from distutils.util import strtobool
def str2bool(x):
return bool(strtobool(x))
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('--num_epochs', type=int, default=1000)
parser.add_argument('--learning_rate', type=float, default=0.0005)
parser.add_argument('--batch_size', type=int, default=4096)
parser.add_argument('--num_workers', type=int, default=16)
parser.add_argument('--non_graph_embedding_dim', type=int, default=200)
parser.add_argument('--graph_embedding_dims',
type=int,
nargs='+',
default=[200, 128, 64])
parser.add_argument(
'--neighbors_sampling_quantile',
type=float,
default=0.9,
help=
'Set the number of sampled neighbors to the quantile of the numbers of neighbors'
)
parser.add_argument('--min_neighbors_sampled', type=int, default=4)
parser.add_argument('--max_neighbors_sampled', type=int, default=512)
parser.add_argument('--single_attribute_dim', type=int,
default=40) # TODO: support attributes
parser.add_argument('--attention_query_vector_dim', type=int, default=200)
parser.add_argument(
'--dnn_predictor_dims',
type=int,
nargs='+',
default=[-1, 128, 1],
help=
'You can set first dim as -1 to make it automatically fit the input vector'
)
parser.add_argument('--num_batches_show_loss', type=int, default=50)
parser.add_argument('--num_epochs_validate', type=int, default=5)
parser.add_argument('--early_stop_patience', type=int, default=20)
parser.add_argument('--num_attention_heads', type=int, default=8)
parser.add_argument('--save_checkpoint', type=str2bool, default=False)
parser.add_argument('--different_embeddings', type=str2bool, default=False)
parser.add_argument('--negative_sampling_ratio', type=int, default=4)
parser.add_argument(
'--model_name',
type=str,
default='GCN',
choices=[
# Non-graph
'NCF',
# Graph with single type of edge (we think it as homogeneous graph)
'GCN',
'GAT',
'LightGCN',
'NGCF',
# Graph with multiple types of edge (we think it as heterogeneous graph)
'HET-GCN',
'HET-GAT',
'HET-NGCF',
'HET-LightGCN',
# To be categorized
'GraphRec',
'DeepFM',
'DSSM',
'DiffNet',
'DiffNet++',
'DANSER'
])
parser.add_argument('--embedding_aggregator',
type=str,
default='concat',
choices=['concat', 'attn'])
parser.add_argument('--predictor',
type=str,
default='dnn',
choices=['dot', 'dnn'])
parser.add_argument('--dataset_path', type=str, required=True)
parser.add_argument('--metadata_path', type=str, required=True)
parser.add_argument('--log_path', type=str, default='./log/')
parser.add_argument('--tensorboard_runs_path', type=str, default='./runs/')
parser.add_argument('--checkpoint_path', type=str, default='./checkpoint/')
parser.add_argument('--edge_choice',
type=int,
nargs='+',
default=[],
help='Left empty to use all in metadata file')
parser.add_argument('--training_task_choice',
type=int,
nargs='+',
default=[],
help='Left empty to use all in metadata file')
parser.add_argument('--evaluation_task_choice',
type=int,
nargs='+',
default=[],
help='Left empty to use all in `training_task_choice`')
parser.add_argument('--task_loss_overwrite', type=str, nargs='+')
parser.add_argument('--task_weight_overwrite', type=float, nargs='+')
args, unknown = parser.parse_known_args()
if len(unknown) > 0:
print(
'Warning: if you are not in testing mode, you may have got some parameters wrong input'
)
return args
| 38.823009 | 99 | 0.56713 | 480 | 4,387 | 4.997917 | 0.339583 | 0.12005 | 0.226761 | 0.041684 | 0.286786 | 0.226761 | 0.212589 | 0.15173 | 0.087536 | 0.087536 | 0 | 0.017734 | 0.305904 | 4,387 | 112 | 100 | 39.169643 | 0.770115 | 0.043082 | 0 | 0.232323 | 0 | 0 | 0.271231 | 0.11021 | 0 | 0 | 0 | 0.008929 | 0 | 1 | 0.020202 | false | 0 | 0.020202 | 0.010101 | 0.060606 | 0.010101 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |

# --- File: test/unit/object/test_collaboration_allowlist_entry.py ---
# --- Repo: box/box-python-sdk (license: Apache-2.0) ---

# coding: utf-8
from __future__ import unicode_literals, absolute_import
from boxsdk.config import API
def test_get(mock_box_session, test_collaboration_allowlist_entry):
entry_id = test_collaboration_allowlist_entry.object_id
expected_url = '{0}/collaboration_whitelist_entries/{1}'.format(API.BASE_API_URL, entry_id)
mock_entry = {
'type': 'collaboration_whitelist_entry',
'id': '98765',
'domain': 'example.com',
'direction': 'inbound'
}
mock_box_session.get.return_value.json.return_value = mock_entry
entry = test_collaboration_allowlist_entry.get()
mock_box_session.get.assert_called_once_with(expected_url, headers=None, params=None)
assert entry.id == mock_entry['id']
assert entry.domain == mock_entry['domain']
assert entry.direction == mock_entry['direction']
def test_delete(mock_box_session, test_collaboration_allowlist_entry):
entry_id = test_collaboration_allowlist_entry.object_id
expected_url = '{0}/collaboration_whitelist_entries/{1}'.format(API.BASE_API_URL, entry_id)
test_collaboration_allowlist_entry.delete()
mock_box_session.delete.assert_called_once_with(expected_url, expect_json_response=False, headers=None, params={})
| 42.793103 | 118 | 0.767929 | 163 | 1,241 | 5.411043 | 0.319018 | 0.055556 | 0.176871 | 0.210884 | 0.465986 | 0.465986 | 0.360544 | 0.360544 | 0.360544 | 0.360544 | 0 | 0.009276 | 0.131346 | 1,241 | 28 | 119 | 44.321429 | 0.808905 | 0.010475 | 0 | 0.181818 | 0 | 0 | 0.137031 | 0.087276 | 0 | 0 | 0 | 0 | 0.227273 | 1 | 0.090909 | false | 0 | 0.090909 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |

# --- File: jumpy/setup.py ---
# --- Repo: bharadwaj1098/brax (license: Apache-2.0) ---

# Copyright 2021 The Brax Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""setup.py for Jumpy.
Install for development:
pip intall -e .
"""
from setuptools import setup
setup(
name="brax-jumpy",
version="0.0.1",
description=("Common backend for JAX or numpy."),
author="Brax Authors",
author_email="no-reply@google.com",
long_description=open("README.md").read(),
long_description_content_type="text/markdown",
url="http://github.com/google/brax",
license="Apache 2.0",
py_modules=["jumpy"],
install_requires=[
"jax",
"jaxlib",
"numpy",
],
classifiers=[
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
],
)
| 29.857143 | 74 | 0.673274 | 185 | 1,463 | 5.286486 | 0.643243 | 0.06135 | 0.026585 | 0.03272 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012121 | 0.210526 | 1,463 | 48 | 75 | 30.479167 | 0.834632 | 0.423787 | 0 | 0.076923 | 0 | 0 | 0.478155 | 0.026699 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.038462 | 0 | 0.038462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |

# --- File: Batch_sentiment/spark_hashtag.py ---
# --- Repo: malli3131/SparkApps (license: Apache-2.0) ---

import re
import string
import sys
from pyspark import SparkContext
exclude = set(string.punctuation)
def get_hash_tag(word, rmPunc):
pattern = re.compile("^#(.*)")
m = pattern.match(word)
tag = None
if m:
match = m.groups()
for m_word in match:
tag = ''.join(letter for letter in m_word if letter not in rmPunc)
if tag is not None:
return tag
sc = SparkContext("local", "Finding Hash Tags")
rmPunc = sc.broadcast(exclude)
mydata = sc.textFile("hdfs://<hostname>:<port>/path/to/parsedata<first job output>")
wordsRDD = mydata.flatMap( lambda line : line.split("\t")[1].split(" "))
tagsRDD = wordsRDD.map( lambda word : get_hash_tag(word, rmPunc.value))
hashtagsRDD = tagsRDD.filter( lambda word : word is not None)
hashtagsRDD.saveAsTextFile("hdfs://<hostname>:<port>/path/to/hashtags")
| 30.666667 | 84 | 0.695652 | 119 | 828 | 4.789916 | 0.504202 | 0.024561 | 0.035088 | 0.049123 | 0.147368 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001443 | 0.163043 | 828 | 26 | 85 | 31.846154 | 0.821068 | 0 | 0 | 0 | 0 | 0 | 0.15942 | 0.107488 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.181818 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |

# --- File: backend/apps/risks/urls.py ---
# --- Repo: intellisense/risks (license: MIT) ---

from django.conf.urls import url
from rest_framework.urlpatterns import format_suffix_patterns
from . import views
urlpatterns = [
url(r'^risks/$', views.RiskTypeList.as_view(), name='risks_list'),
url(r'^risks/(?P<pk>[0-9]+)/$', views.RiskTypeDetail.as_view(), name='risk_details'),
url(r'^fields/$', views.FieldTypes.as_view(), name='field_types'),
]
urlpatterns = format_suffix_patterns(urlpatterns)
| 34.75 | 89 | 0.729017 | 57 | 417 | 5.140351 | 0.54386 | 0.040956 | 0.102389 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005333 | 0.100719 | 417 | 11 | 90 | 37.909091 | 0.776 | 0 | 0 | 0 | 0 | 0 | 0.17506 | 0.055156 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |

# --- File: kanban_board/admin.py ---
# --- Repo: Zeerooth/django-kanban-board (license: BSD-3-Clause) ---

from django.contrib import admin
from ordered_model.admin import OrderedStackedInline, OrderedInlineModelAdminMixin
from kanban_board.models import KanbanBoard, KanbanBoardState, Workflow, KanbanBoardElement
class KanbanBoardAdmin(admin.ModelAdmin):
list_display = ('name', 'workflow', 'element_count')
filter_horizontal = ('allowed_users', 'allowed_groups')
def element_count(self, obj):
return KanbanBoardElement.objects.filter(kanban_board_parent=obj).select_subclasses().count()
class KanbanBoardStateInline(OrderedStackedInline):
model = KanbanBoardState
fields = ('workflow', 'name', 'move_up_down_links', )
readonly_fields = ('workflow', 'move_up_down_links', )
extra = 0
ordering = ('order',)
class WorkflowAdmin(OrderedInlineModelAdminMixin, admin.ModelAdmin):
list_display = ('name', 'workflow_sequence')
inlines = (KanbanBoardStateInline, )
def workflow_sequence(self, obj):
return "->".join([str(x.name) for x in list(obj.kanbanboardstate_set.all())])
admin.site.register(KanbanBoard, KanbanBoardAdmin)
admin.site.register(KanbanBoardState)
admin.site.register(Workflow, WorkflowAdmin)
| 37.612903 | 101 | 0.762436 | 120 | 1,166 | 7.225 | 0.5 | 0.031142 | 0.058824 | 0.059977 | 0.087659 | 0.087659 | 0 | 0 | 0 | 0 | 0 | 0.000981 | 0.126072 | 1,166 | 30 | 102 | 38.866667 | 0.849853 | 0 | 0 | 0 | 0 | 0 | 0.116638 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.136364 | 0.090909 | 0.863636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |

# --- File: ireporterApp/migrations/0001_initial.py ---
# --- Repo: George-Okumu/IReporter-Django (license: MIT) ---

# Generated by Django 3.2.8 on 2021-10-13 16:04
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import ireporterApp.models
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='RedFlag',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(max_length=43)),
('description', models.TextField(max_length=100)),
('status', models.CharField(default='received', max_length=20)),
('redFlag_image', models.ImageField(blank=True, null=True, upload_to=ireporterApp.models.project_upload)),
('redFlag_video', models.CharField(blank=True, max_length=20, null=True)),
('redFlag_location', models.CharField(blank=True, max_length=20, null=True)),
('created_at', models.DateTimeField(auto_now=True)),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='redflag', to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='Intervention',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('subject', models.TextField(max_length=200)),
('description', models.TextField()),
('location', models.TextField(max_length=90)),
('upload_image', models.ImageField(null=True, upload_to='')),
('video', models.CharField(blank=True, max_length=20, null=True)),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='intervention', to=settings.AUTH_USER_MODEL)),
],
),
]
| 45.244444 | 147 | 0.623281 | 216 | 2,036 | 5.717593 | 0.356481 | 0.0583 | 0.035628 | 0.053441 | 0.395951 | 0.358704 | 0.358704 | 0.358704 | 0.358704 | 0.323887 | 0 | 0.021359 | 0.241159 | 2,036 | 44 | 148 | 46.272727 | 0.777994 | 0.022102 | 0 | 0.27027 | 1 | 0 | 0.089995 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.108108 | 0 | 0.216216 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |

# --- File: app/cli.py ---
# --- Repo: dev-johnlopez/Assignably (license: MIT) ---

import os
import click
from app import app
from flask.cli import with_appcontext
from app.auth.models import Role
def register(app):
@app.cli.group()
def translate():
"""Translation and localization commands."""
pass
@translate.command()
@click.argument('lang')
def init(lang):
"""Initialize a new language."""
pass
@translate.command()
def update():
"""Update all languages."""
pass
@translate.command()
def compile():
"""Compile all languages."""
pass
@click.command("add_roles")
@with_appcontext
def add_roles():
from app import db, security
from app.auth.models import Role
db.init_app(app)
role = Role(name="Company Admin", description="Administrator of a company. \
Users with this role can modify \
company data.")
db.session.add(role)
db.session.commit()
role = Role(name="Underwriter", description="Users with the ability to \
evaluate deals.")
db.session.add(role)
db.session.commit()
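`add_roles` inserts both roles unconditionally, so invoking the command twice would create duplicate role rows. A common guard is a get-or-create step; here is a minimal stdlib sketch of that pattern (the dict-backed `store` and the helper name are illustrative, not part of the app):

```python
def get_or_create_role(store, name, description):
    """Return the existing role for *name*, creating it only if absent."""
    if name not in store:
        store[name] = {"name": name, "description": description}
    return store[name]
```

With Flask-SQLAlchemy the same guard is typically a `Role.query.filter_by(name=...).first()` check before calling `db.session.add`.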
| 25.255319 | 83 | 0.566133 | 130 | 1,187 | 5.130769 | 0.430769 | 0.041979 | 0.089955 | 0.050975 | 0.173913 | 0.173913 | 0.092954 | 0 | 0 | 0 | 0 | 0 | 0.326874 | 1,187 | 46 | 84 | 25.804348 | 0.834793 | 0.092671 | 0 | 0.382353 | 0 | 0 | 0.035038 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0.117647 | 0.205882 | 0 | 0.382353 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
7febb7dfbccc110592c6373855dc121877f1f2c7 | 1,641 | py | Python | throwaway/viz_nav_policy.py | sfpd/rlreloaded | 650c64ec22ad45996c8c577d85b1a4f20aa1c692 | [
"MIT"
] | null | null | null | throwaway/viz_nav_policy.py | sfpd/rlreloaded | 650c64ec22ad45996c8c577d85b1a4f20aa1c692 | [
"MIT"
] | null | null | null | throwaway/viz_nav_policy.py | sfpd/rlreloaded | 650c64ec22ad45996c8c577d85b1a4f20aa1c692 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from control4.algs.save_load_utils import load_agent_and_mdp
from control4.core.rollout import rollout
from tabulate import tabulate
import numpy as np
from control3.pygameviewer import PygameViewer, pygame
from collections import namedtuple
from copy import deepcopy
path = []
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("hdf")
parser.add_argument("--load_idx",type=int,default=-1)
parser.add_argument("--max_steps",type=int)
parser.add_argument("--one_traj",action="store_true")
args = parser.parse_args()
agent, mdp, _hdf = load_agent_and_mdp(args.hdf,args.load_idx)
from matplotlib.patches import Ellipse
import matplotlib.pyplot as plt
fig1,(ax0,ax1)=plt.subplots(2,1)
fig2,(ax3)=plt.subplots(1,1)
h = mdp.halfsize
while True:
path = []
init_arrs, traj_arrs = rollout(mdp,agent,999999,save_arrs=["m","o","a"])
m = np.concatenate([init_arrs["m"]]+traj_arrs["m"],axis=0)
o = np.concatenate([init_arrs["o"]]+traj_arrs["o"],axis=0)
a_na = np.concatenate(traj_arrs["a"])
print "o:"
print o
print "m:"
print m
ax0.cla()
ax0.plot(m)
ax1.cla()
ax1.plot(o)
ax3.cla()
x,y=np.array(init_arrs['x'].path).T
ax3.plot(x,y,'bx-')
ax3.axis([-h,h,-h,h])
for (x,a) in zip(init_arrs['x'].path,a_na):
ax3.add_artist(Ellipse(xy=x+a[0:2], width=2*a[2], height=2*a[3],alpha=0.2))
plt.draw()
plt.pause(0.01)
plt.ginput()
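The rollout post-processing above prepends the initial arrays (`init_arrs`) to the per-step arrays (`traj_arrs`) before concatenating along the time axis. With plain lists the same assembly step looks like this (the helper name and data are made up for illustration):

```python
def assemble_trajectory(init_arrs, traj_arrs, key):
    """Concatenate the initial record and all per-step records for one key."""
    return [init_arrs[key]] + list(traj_arrs[key])
```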
| 28.293103 | 87 | 0.624619 | 248 | 1,641 | 3.971774 | 0.391129 | 0.040609 | 0.069036 | 0.030457 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031323 | 0.221816 | 1,641 | 57 | 88 | 28.789474 | 0.740016 | 0.012188 | 0 | 0.043478 | 0 | 0 | 0.042672 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.23913 | null | null | 0.086957 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7ff0e77d9b3db005d3ce70f0c9f81c5bbde228f8 | 4,808 | py | Python | main.py | superwaiwjia/lowRankForSeer | 86041e0a39e1ef2718e8133eb65a63c05d9a441c | [
"MIT"
] | 2 | 2021-11-18T07:01:40.000Z | 2021-11-18T07:01:49.000Z | main.py | superwaiwjia/lowRankForSeer | 86041e0a39e1ef2718e8133eb65a63c05d9a441c | [
"MIT"
] | null | null | null | main.py | superwaiwjia/lowRankForSeer | 86041e0a39e1ef2718e8133eb65a63c05d9a441c | [
"MIT"
] | null | null | null | #!/usr/bin/env python
#coding=utf-8
import pickle
import sys, os, re, subprocess, math
reload(sys)
sys.setdefaultencoding("utf-8")
from os.path import abspath, dirname, join
whereami = abspath(dirname(__file__))
sys.path.append(whereami)
from sklearn.metrics import roc_auc_score
import pandas as pd
import numpy as np
from scipy import *
import itertools
from utility import getWfixingA, getSparseWeight, appendDFToCSV_void, stop_critier
from solver import optimize
from data import load_dataset, loadSeer
def main(datasetName):
if datasetName == 'seer':
numberOfClass = 2
X, y = loadSeer(numberOfClass)
else:
X, y = load_dataset(name=datasetName)
from sklearn.model_selection import train_test_split
#P_train, P_test, y_train, y_test = train_test_split(X, y, random_state=42)
# split: 60-20-20
P_train, P_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
P_train, P_val, y_train, y_val = train_test_split(P_train, y_train, test_size=0.25, random_state=1) # 0.25 x 0.8 = 0.2
input_size, input_dimension = P_train.shape
numberOfClass = np.unique(y).shape[0]
P_train = np.transpose(P_train); P_val = np.transpose(P_val); P_test = np.transpose(P_test);
if numberOfClass > 2:
from sklearn.preprocessing import LabelBinarizer, StandardScaler
lb = LabelBinarizer()
y_train = lb.fit_transform(y_train); y_train = np.transpose(y_train);
y_val = lb.fit_transform(y_val); y_val = np.transpose(y_val);
y_test = lb.transform(y_test); y_test = np.transpose(y_test);
elif numberOfClass == 2:
y_train = y_train.reshape((y_train.shape[0], 1)); y_train = np.transpose(y_train);
y_val = y_val.reshape((y_val.shape[0], 1)); y_val = np.transpose(y_val);
y_test = y_test.reshape((y_test.shape[0], 1)); y_test = np.transpose(y_test);
print datasetName, input_size, input_dimension, numberOfClass
rank = P_train.shape[1]
#grid search or pre-defined hyper-parameters
lambda_X_list = [0.1]#[math.pow(10, x) for x in range(-4, 3)];
lambda_W_list = [10.0]#[math.pow(10, x) for x in range(-4, 3)];
lambda_D_list = [0.01]#[math.pow(10, x) for x in range(-4, 3)];
valid_list = []
for (lambda_X, lambda_W, lambda_D) in itertools.product(lambda_X_list, lambda_W_list, lambda_D_list):
try:
###############################################
# 0-th iteration
##############################################
X_ini = np.random.rand(P_train.shape[0], rank)
W_ini = np.random.rand(rank, P_train.shape[1])
D_ini = np.random.rand(y_train.shape[0], rank)
X = X_ini; D = D_ini; W = W_ini;
w_val = getSparseWeight(X, P_val, choice = 0, alpha = 0.3)
w = getSparseWeight(X, P_test, choice = 0, alpha = 0.3)
print 'iter', 'training', 'validation', 'testing'
print 0, roc_auc_score(y_train.T, np.dot(D, W).T), \
roc_auc_score(y_val.T, np.dot(D, w_val.T).T), \
roc_auc_score(y_test.T, np.dot(D, w.T).T)
appendDFToCSV_void(pd.DataFrame([{"train":roc_auc_score(y_train.T, np.dot(D, W).T), "validation":roc_auc_score(y_val.T, np.dot(D, w_val.T).T), "test":roc_auc_score(y_test.T, np.dot(D, w.T).T)}]), join(whereami+'/res', datasetName+'.log'))
from time import time
start = time()
###############################################
# loop iteration
##############################################
for iter in range(1, 50):
#update X
X_new = optimize.min_rank_dict(P_train, W, lambda_X, X)
#update W
Z = np.concatenate((P_train,y_train),axis=0)
A = np.concatenate((X_new,D),axis=0)
W_new = getWfixingA(Z, W, A, lambda_W)
#update D
D_new = optimize.min_rank_dict(y_train, W_new, lambda_D, D)
#D_new = np.dot(y_train, np.linalg.pinv(W_new))
#from sklearn.decomposition.dict_learning import _update_dict
#D_new = _update_dict(D, y_train, W_new)
#E = np.dot(y_train, W_new.T)
#F = np.dot(W_new, W_new.T)
#D_new = optimize.ODL_updateD(D, E, F, iterations = 1)
w = getSparseWeight(X_new, P_test, choice = 0, alpha = 0.3)
w_val = getSparseWeight(X_new, P_val, choice = 0, alpha = 0.3)
print iter, roc_auc_score(y_train.T, np.dot(D_new, W_new).T), \
roc_auc_score(y_val.T, np.dot(D_new, w_val.T).T), \
roc_auc_score(y_test.T, np.dot(D_new, w.T).T)
appendDFToCSV_void(pd.DataFrame([{"train":roc_auc_score(y_train.T, np.dot(D_new, W_new).T),"validation":roc_auc_score(y_val.T, np.dot(D_new, w_val.T).T), "test":roc_auc_score(y_test.T, np.dot(D_new, w.T).T) }]), join(whereami+'/res', datasetName+'.log'))
D = D_new; W = W_new; X = X_new;
valid_list.append( roc_auc_score(y_val.T, np.dot(D_new, w_val.T).T) )
if stop_critier(valid_list):
break
except Exception as err:
print( err )
if __name__ == '__main__':
datasetName = sys.argv[1]
main(datasetName)
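The two-stage `train_test_split` above first holds out 20% for testing, then takes 25% of the remaining 80% for validation, giving the 60-20-20 split noted in the comment (0.25 x 0.8 = 0.2). A quick stdlib check of that arithmetic (the helper is illustrative, not part of the script):

```python
def nested_split_sizes(n, test_frac=0.2, val_frac_of_rest=0.25):
    """Return (train, val, test) sizes for a two-stage hold-out split."""
    n_test = int(n * test_frac)
    n_rest = n - n_test
    n_val = int(n_rest * val_frac_of_rest)
    n_train = n_rest - n_val
    return n_train, n_val, n_test
```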
| 38.464 | 258 | 0.66015 | 829 | 4,808 | 3.571773 | 0.185766 | 0.046606 | 0.052009 | 0.052685 | 0.361702 | 0.324553 | 0.31003 | 0.291118 | 0.237082 | 0.237082 | 0 | 0.019778 | 0.158694 | 4,808 | 125 | 259 | 38.464 | 0.712237 | 0.127288 | 0 | 0.025316 | 0 | 0 | 0.025056 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.177215 | null | null | 0.063291 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3d01213807fe34d6cbaa37ec89c61cfcc0e43948 | 11,536 | py | Python | apps/hosts/views.py | kaustubh-s1/EvalAI | 1884811e7759e0d095f7afb68188a7f010fa65dc | [
"BSD-3-Clause"
] | 1,470 | 2016-10-21T01:21:45.000Z | 2022-03-30T14:08:29.000Z | apps/hosts/views.py | kaustubh-s1/EvalAI | 1884811e7759e0d095f7afb68188a7f010fa65dc | [
"BSD-3-Clause"
] | 2,594 | 2016-11-02T03:36:01.000Z | 2022-03-31T15:30:04.000Z | apps/hosts/views.py | kaustubh-s1/EvalAI | 1884811e7759e0d095f7afb68188a7f010fa65dc | [
"BSD-3-Clause"
] | 865 | 2016-11-09T17:46:32.000Z | 2022-03-30T13:06:52.000Z | from django.contrib.auth.models import User
from rest_framework import permissions, status
from rest_framework.decorators import (
api_view,
authentication_classes,
permission_classes,
throttle_classes,
)
from rest_framework.response import Response
from rest_framework_expiring_authtoken.authentication import (
ExpiringTokenAuthentication,
)
from rest_framework.throttling import UserRateThrottle
from rest_framework_simplejwt.authentication import JWTAuthentication
from accounts.permissions import HasVerifiedEmail
from base.utils import get_model_object, team_paginated_queryset
from .filters import HostTeamsFilter
from .models import ChallengeHost, ChallengeHostTeam
from .serializers import (
ChallengeHostSerializer,
ChallengeHostTeamSerializer,
InviteHostToTeamSerializer,
HostTeamDetailSerializer,
)
from .utils import is_user_part_of_host_team
get_challenge_host_model = get_model_object(ChallengeHost)
@api_view(["GET", "POST"])
@throttle_classes([UserRateThrottle])
@permission_classes((permissions.IsAuthenticated, HasVerifiedEmail))
@authentication_classes(
(
JWTAuthentication,
ExpiringTokenAuthentication,
)
)
def challenge_host_team_list(request):
if request.method == "GET":
challenge_host_team_ids = ChallengeHost.objects.filter(
user=request.user
).values_list("team_name", flat=True)
challenge_host_teams = ChallengeHostTeam.objects.filter(
id__in=challenge_host_team_ids
).order_by("-id")
filtered_teams = HostTeamsFilter(
request.GET, queryset=challenge_host_teams
)
paginator, result_page = team_paginated_queryset(
filtered_teams.qs, request
)
serializer = HostTeamDetailSerializer(result_page, many=True)
response_data = serializer.data
return paginator.get_paginated_response(response_data)
elif request.method == "POST":
serializer = ChallengeHostTeamSerializer(
data=request.data, context={"request": request}
)
if serializer.is_valid():
serializer.save()
response_data = serializer.data
return Response(response_data, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
@api_view(["GET", "PUT", "PATCH"])
@throttle_classes([UserRateThrottle])
@permission_classes((permissions.IsAuthenticated, HasVerifiedEmail))
@authentication_classes((JWTAuthentication, ExpiringTokenAuthentication))
def challenge_host_team_detail(request, pk):
try:
challenge_host_team = ChallengeHostTeam.objects.get(pk=pk)
except ChallengeHostTeam.DoesNotExist:
response_data = {"error": "ChallengeHostTeam does not exist"}
return Response(response_data, status=status.HTTP_406_NOT_ACCEPTABLE)
if request.method == "GET":
serializer = HostTeamDetailSerializer(challenge_host_team)
response_data = serializer.data
return Response(response_data, status=status.HTTP_200_OK)
elif request.method in ["PUT", "PATCH"]:
if request.method == "PATCH":
serializer = ChallengeHostTeamSerializer(
challenge_host_team,
data=request.data,
context={"request": request},
partial=True,
)
else:
serializer = ChallengeHostTeamSerializer(
challenge_host_team,
data=request.data,
context={"request": request},
)
if serializer.is_valid():
serializer.save()
response_data = serializer.data
return Response(response_data, status=status.HTTP_200_OK)
else:
return Response(
serializer.errors, status=status.HTTP_400_BAD_REQUEST
)
@api_view(["GET", "POST"])
@throttle_classes([UserRateThrottle])
@permission_classes((permissions.IsAuthenticated, HasVerifiedEmail))
@authentication_classes((JWTAuthentication, ExpiringTokenAuthentication))
def challenge_host_list(request, challenge_host_team_pk):
try:
challenge_host_team = ChallengeHostTeam.objects.get(
pk=challenge_host_team_pk
)
except ChallengeHostTeam.DoesNotExist:
response_data = {"error": "ChallengeHostTeam does not exist"}
return Response(response_data, status=status.HTTP_406_NOT_ACCEPTABLE)
if request.method == "GET":
challenge_host_status = request.query_params.get("status", None)
filter_condition = {
"team_name": challenge_host_team,
"user": request.user,
}
if challenge_host_status:
challenge_host_status = challenge_host_status.split(",")
filter_condition.update({"status__in": challenge_host_status})
challenge_host = ChallengeHost.objects.filter(
**filter_condition
).order_by("-id")
paginator, result_page = team_paginated_queryset(
challenge_host, request
)
serializer = ChallengeHostSerializer(result_page, many=True)
response_data = serializer.data
return paginator.get_paginated_response(response_data)
elif request.method == "POST":
serializer = ChallengeHostSerializer(
data=request.data,
context={
"challenge_host_team": challenge_host_team,
"request": request,
},
)
if serializer.is_valid():
serializer.save()
response_data = serializer.data
return Response(response_data, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
@api_view(["GET", "PUT", "PATCH", "DELETE"])
@throttle_classes([UserRateThrottle])
@permission_classes((permissions.IsAuthenticated, HasVerifiedEmail))
@authentication_classes((JWTAuthentication, ExpiringTokenAuthentication))
def challenge_host_detail(request, challenge_host_team_pk, pk):
try:
challenge_host_team = ChallengeHostTeam.objects.get(
pk=challenge_host_team_pk
)
except ChallengeHostTeam.DoesNotExist:
response_data = {"error": "ChallengeHostTeam does not exist"}
return Response(response_data, status=status.HTTP_406_NOT_ACCEPTABLE)
challenge_host = get_challenge_host_model(pk)
if request.method == "GET":
serializer = ChallengeHostSerializer(challenge_host)
response_data = serializer.data
return Response(response_data, status=status.HTTP_200_OK)
elif request.method in ["PUT", "PATCH"]:
if request.method == "PATCH":
serializer = ChallengeHostSerializer(
challenge_host,
data=request.data,
context={
"challenge_host_team": challenge_host_team,
"request": request,
},
partial=True,
)
else:
serializer = ChallengeHostSerializer(
challenge_host,
data=request.data,
context={
"challenge_host_team": challenge_host_team,
"request": request,
},
)
if serializer.is_valid():
serializer.save()
response_data = serializer.data
return Response(response_data, status=status.HTTP_200_OK)
else:
return Response(
serializer.errors, status=status.HTTP_400_BAD_REQUEST
)
elif request.method == "DELETE":
challenge_host.delete()
return Response(status=status.HTTP_204_NO_CONTENT)
@api_view(["POST"])
@throttle_classes([UserRateThrottle])
@permission_classes((permissions.IsAuthenticated, HasVerifiedEmail))
@authentication_classes((JWTAuthentication, ExpiringTokenAuthentication))
def create_challenge_host_team(request):
serializer = ChallengeHostTeamSerializer(
data=request.data, context={"request": request}
)
if serializer.is_valid():
serializer.save()
response_data = serializer.data
challenge_host_team = serializer.instance
challenge_host = ChallengeHost(
user=request.user,
status=ChallengeHost.SELF,
permissions=ChallengeHost.ADMIN,
team_name=challenge_host_team,
)
challenge_host.save()
return Response(response_data, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
@api_view(["DELETE"])
@throttle_classes([UserRateThrottle])
@permission_classes((permissions.IsAuthenticated, HasVerifiedEmail))
@authentication_classes((JWTAuthentication, ExpiringTokenAuthentication))
def remove_self_from_challenge_host_team(request, challenge_host_team_pk):
"""
A user can remove himself from the challenge host team.
"""
try:
ChallengeHostTeam.objects.get(pk=challenge_host_team_pk)
except ChallengeHostTeam.DoesNotExist:
response_data = {"error": "ChallengeHostTeam does not exist"}
return Response(response_data, status=status.HTTP_406_NOT_ACCEPTABLE)
try:
challenge_host = ChallengeHost.objects.filter(
user=request.user.id, team_name__pk=challenge_host_team_pk
)
challenge_host.delete()
return Response(status=status.HTTP_204_NO_CONTENT)
except: # noqa E722
response_data = {"error": "Sorry, you do not belong to this team."}
return Response(response_data, status=status.HTTP_401_UNAUTHORIZED)
@api_view(["POST"])
@throttle_classes([UserRateThrottle])
@permission_classes((permissions.IsAuthenticated, HasVerifiedEmail))
@authentication_classes((JWTAuthentication, ExpiringTokenAuthentication))
def invite_host_to_team(request, pk):
try:
challenge_host_team = ChallengeHostTeam.objects.get(pk=pk)
except ChallengeHostTeam.DoesNotExist:
response_data = {"error": "Host Team does not exist"}
return Response(response_data, status=status.HTTP_406_NOT_ACCEPTABLE)
email = request.data.get("email")
try:
user = User.objects.get(email=email)
except User.DoesNotExist:
response_data = {
"error": "User does not exist with this email address!"
}
return Response(response_data, status=status.HTTP_406_NOT_ACCEPTABLE)
# Check if the user requesting this API is part of host team
if not is_user_part_of_host_team(request.user, challenge_host_team):
response_data = {"error": "You are not a member of this team!"}
return Response(response_data, status=status.HTTP_400_BAD_REQUEST)
host = ChallengeHost.objects.filter(
team_name=challenge_host_team, user=user
)
if host.exists():
response_data = {"error": "User is already part of the team!"}
return Response(response_data, status=status.HTTP_406_NOT_ACCEPTABLE)
serializer = InviteHostToTeamSerializer(
data=request.data,
context={
"challenge_host_team": challenge_host_team,
"request": request,
},
)
if serializer.is_valid():
serializer.save()
response_data = {
"message": "User has been added successfully to the host team"
}
return Response(response_data, status=status.HTTP_202_ACCEPTED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
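In `challenge_host_list`, the optional `status` query parameter is split on commas and merged into the ORM filter as a `status__in` lookup. The same dict-building step can be sketched without Django (the helper name is illustrative):

```python
def build_host_filter(team, user, status_param=None):
    """Build keyword filters like the view passes to ChallengeHost.objects.filter."""
    filters = {"team_name": team, "user": user}
    if status_param:
        # comma-separated statuses become a list for an __in lookup
        filters["status__in"] = status_param.split(",")
    return filters
```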
| 37.093248 | 78 | 0.684466 | 1,154 | 11,536 | 6.566724 | 0.130849 | 0.096068 | 0.076273 | 0.058327 | 0.725653 | 0.687385 | 0.634072 | 0.634072 | 0.627474 | 0.614278 | 0 | 0.008789 | 0.230669 | 11,536 | 310 | 79 | 37.212903 | 0.84507 | 0.010836 | 0 | 0.572491 | 0 | 0 | 0.060734 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026022 | false | 0 | 0.048327 | 0 | 0.174721 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3d05562ae792843c99e988fb6a4b5372987caff9 | 616 | py | Python | setup.py | KloudTrader/libkloudtrader | 015e2779f80ba2de93be9fa6fd751412a9d5f492 | [
"Apache-2.0"
] | 11 | 2019-01-16T16:10:09.000Z | 2021-03-02T00:59:17.000Z | setup.py | KloudTrader/kloudtrader | 015e2779f80ba2de93be9fa6fd751412a9d5f492 | [
"Apache-2.0"
] | 425 | 2019-07-10T06:59:49.000Z | 2021-01-12T05:32:14.000Z | setup.py | KloudTrader/kloudtrader | 015e2779f80ba2de93be9fa6fd751412a9d5f492 | [
"Apache-2.0"
] | 6 | 2019-03-15T16:25:06.000Z | 2021-05-03T10:02:13.000Z | from distutils.core import setup
setup(
name='libkloudtrader',
version='1.0.0',
author='KloudTrader',
author_email='admin@kloudtrader.com',
packages=['libkloudtrader'],
url='https://github.com/KloudTrader/kloudtrader',
license='LICENSE',
description="KloudTrader's in-house library that makes it much easier for you to code algorithms that can trade for you.",
long_description_content_type="text/markdown",
long_description='pypi.md',
install_requires=[
"boto3",
"pandas",
"numpy",
"empyrical",
"asyncio",
"ccxt"
],
)
| 26.782609 | 126 | 0.644481 | 68 | 616 | 5.75 | 0.764706 | 0.030691 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008368 | 0.224026 | 616 | 22 | 127 | 28 | 0.809623 | 0 | 0 | 0 | 0 | 0.047619 | 0.449675 | 0.034091 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.047619 | 0 | 0.047619 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3d0b84039e886dcbf5a0882295390d0af7dd865b | 3,928 | py | Python | tools/convert_lightning2venot.py | ucl-exoplanets/TauREx_public | 28d47f829a2873cf15e3bfb0419b8bc4e5bc03dd | [
"CC-BY-4.0"
] | 18 | 2019-07-22T01:35:24.000Z | 2022-02-10T11:25:42.000Z | tools/convert_lightning2venot.py | ucl-exoplanets/TauREx_public | 28d47f829a2873cf15e3bfb0419b8bc4e5bc03dd | [
"CC-BY-4.0"
] | null | null | null | tools/convert_lightning2venot.py | ucl-exoplanets/TauREx_public | 28d47f829a2873cf15e3bfb0419b8bc4e5bc03dd | [
"CC-BY-4.0"
] | 1 | 2017-10-19T15:14:06.000Z | 2017-10-19T15:14:06.000Z | #! /usr/bin/python
# small script that writes out the Venot file format equivalent for the lightning project
import numpy as np
import pylab as pl
import pyfits as pf
import glob, os, sys
AMU = 1.660538921e-27
KBOLTZ = 1.380648813e-23
G = 6.67384e-11
RSOL = 6.955e8
RJUP = 6.9911e7
#RJUP = 7.1492e7 # Jo's radius
MJUP = 1.898e27
AU = 1.49e11
DIR = '/Users/ingowaldmann/Dropbox/UCLlocal/REPOS/taurex/Input/lightening'
# FILENAME = 'Earth-Today-Lightning-Full.dat'
FILENAME = 'Modern-Earth-noLightning-Full.dat'
#EARTH
planet_mass = 5.97237e24 #kg
planet_radius = 6.371e6 #m
planet_mu = 28.97 * AMU # kg
data = np.loadtxt(os.path.join(DIR,FILENAME),skiprows=1)
[nlayers,ncols] = np.shape(data)
fheader = open(os.path.join(DIR,FILENAME),'r')
header = fheader.readlines()
c=0
for line in header:
head = line
break
fheader.close()
#rebuilding header line
newhead = 'alt(km) '+head[:2]+'m'+head[2:]
newhead_small = head[62:]
print head.split()
molnames = ['C_1D','H','N','O','O_1D','O_1S', 'CO', 'H2','HO','N2', 'NO', 'O2', 'O2_D', 'O3', 'CH4',
'CO2', 'H2O', 'HO2', 'N2O', 'NO2', 'H2O2', 'HNO3', 'CH2O2', 'HCOOH', 'CH3ONO', 'e-',
'H+', 'O+', 'NO+','O2+', 'C','HN','CNC','H2N','H3N','C+','C-','N+','O-','CO+','HO+','N2+','CHO+',
'CH3','CHO','HCN','HNO','NO3','C2H2','C2H6','CH2O','HNO2','N2O3','CH3O2','CH3OH','CH4O2','H3O+']
molweights = [14,1,14,16,18,18,28,2,17,28,30,32,34,48,16,44,18,33,44,46,34,63,46,46,61,0,1,16,30,32,12,15,38,16,17,12,12,14,16,28,17,28,29,
15,29,27,31,62,26,28,30,48,76,47,32,52,19]
badwords = ['p(bar)' ,'T(K)' , 'NH(cm-3)' , 'Kzz(cm2s-1)' , 'Hz(cm)', 'zeta(s-1)']
mollist = []
for mol in head.split():
if mol in molnames:
mollist.append(molweights[molnames.index(mol)])
elif mol not in badwords:
mollist.append(2.3)
print 'FILLED: ',mol
else:
print 'OUT: ',mol
# mollist.append(2.3)
moleweigthstr = ' '.join(str(e) for e in mollist)
#create ranking of most important molecules according to abundance
molabundance =[]
mnamelist =[]
c=0
for mol in head.split():
mnamelist.append(mol)
molabundance.append(np.max(data[:,c]))
c+=1
mnamelist = np.asarray(mnamelist[6:])
molabundance = np.asarray(molabundance[6:])
midx = np.argsort(molabundance)
print midx[::-1]
print mnamelist[midx][::-1]
print molabundance[midx][::-1]
pressure_profile_levels = data[:,0] * 1000.0 #converting bar to mbar
temperature_profile = data[:,1]
H = np.zeros(nlayers)
g = np.zeros(nlayers)
z = np.zeros((nlayers,1))
g[0] = (G * planet_mass) / (planet_radius**2) # surface gravity (0th layer)
H[0] = (KBOLTZ*temperature_profile[0])/(planet_mu*g[0]) # scaleheight at the surface (0th layer)
for i in xrange(1, nlayers):
deltaz = (-1.)*H[i-1]*np.log(pressure_profile_levels[i]/pressure_profile_levels[i-1])
z[i] = z[i-1] + deltaz # altitude at the i-th layer
with np.errstate(over='ignore'):
g[i] = (G * planet_mass) / ((planet_radius + z[i])**2) # gravity at the i-th layer
with np.errstate(divide='ignore'):
H[i] = (KBOLTZ*temperature_profile[i])/(planet_mu*g[i])
z /=1e3 #converting m to km
OUT = np.hstack((z,data))
OUT2 = OUT[:,:3]
[s1,s2] = np.shape(data[:,6:])
OUT3 = np.zeros((s1,s2+2))
OUT3[:,0] = z[:,0]
OUT3[:,1] = data[:,0]
OUT3[:,2:] = data[:,6:]
with open(FILENAME[:-4]+'_conv.dat','wb') as outfile:
outfile.write(newhead)
outfile.write(moleweigthstr+'\n')
np.savetxt(outfile, OUT)
with open(FILENAME[:-4]+'_mixing.dat','wb') as outfile:
outfile.write(newhead_small)
outfile.write(moleweigthstr+'\n')
np.savetxt(outfile, OUT3)
np.savetxt(FILENAME[:-4]+'_tp.dat',OUT2)
pl.figure(1)
pl.plot(np.log(molabundance[midx][::-1]),linewidth=3.0)
pl.gca().xaxis.set_ticks(np.arange(0, len(molabundance), 1.0))
pl.gca().set_xticklabels(mnamelist[midx][::-1])
pl.ylabel('log(mixing ratio)')
pl.show() | 26.90411 | 139 | 0.630855 | 641 | 3,928 | 3.820593 | 0.411856 | 0.010208 | 0.025725 | 0.010617 | 0.133116 | 0.083299 | 0.083299 | 0.02205 | 0 | 0 | 0 | 0.090882 | 0.154022 | 3,928 | 146 | 140 | 26.90411 | 0.646103 | 0.118126 | 0 | 0.063158 | 0 | 0 | 0.117784 | 0.028721 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.042105 | null | null | 0.063158 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
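The altitude loop above integrates the hydrostatic relation layer by layer: the scale height is H = kT/(mu*g), each step adds deltaz = -H*ln(p_i/p_{i-1}), and gravity is re-evaluated at the new altitude. A stdlib sketch of the same integration (the function name and the two-layer test atmosphere are illustrative; constants match the script):

```python
import math

KBOLTZ = 1.380648813e-23  # J/K
G = 6.67384e-11           # m^3 kg^-1 s^-2
AMU = 1.660538921e-27     # kg

def altitude_profile(pressures, temperatures, mass, radius, mu):
    """Integrate layer altitudes (m); only pressure ratios matter, so any unit works."""
    z = [0.0]
    g = (G * mass) / radius**2                    # surface gravity
    H = KBOLTZ * temperatures[0] / (mu * g)       # surface scale height
    for i in range(1, len(pressures)):
        z.append(z[-1] - H * math.log(pressures[i] / pressures[i - 1]))
        g = (G * mass) / (radius + z[-1])**2      # gravity at the new layer
        H = KBOLTZ * temperatures[i] / (mu * g)   # scale height at the new layer
    return z

# Earth-like values from the script: M = 5.97237e24 kg, R = 6.371e6 m, mu = 28.97 amu
z = altitude_profile([1000.0, 500.0], [288.0, 250.0],
                     5.97237e24, 6.371e6, 28.97 * AMU)
```

Halving the pressure should raise the altitude by roughly one scale height times ln(2), about 6 km for these surface values.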
3d0d5a4bdcb6949d58811e00ce041b7deeb69354 | 288 | py | Python | setup.py | NWeis97/ML_Ops_Project | cc4c65fec679b08675e76a24ad7e44de1b5df29a | [
"MIT"
] | null | null | null | setup.py | NWeis97/ML_Ops_Project | cc4c65fec679b08675e76a24ad7e44de1b5df29a | [
"MIT"
] | null | null | null | setup.py | NWeis97/ML_Ops_Project | cc4c65fec679b08675e76a24ad7e44de1b5df29a | [
"MIT"
] | null | null | null | from setuptools import find_packages, setup
setup(
name="src",
packages=find_packages(),
version="0.1.0",
description="This project contains the final exercise of S1, "
+ "which we will continue to build upon",
author="Nicolai Weisbjerg",
license="MIT",
)
| 24 | 66 | 0.677083 | 38 | 288 | 5.078947 | 0.868421 | 0.124352 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017544 | 0.208333 | 288 | 11 | 67 | 26.181818 | 0.828947 | 0 | 0 | 0 | 0 | 0 | 0.399306 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.1 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3d1335d9fc99401818ca88efe979fffcb933a101 | 10,108 | py | Python | Products/LDAPUserFolder/interfaces.py | phgv/Products.LDAPUserFolder | eb9db778916f47a80b3df069a31d0a2100b26423 | [
"ZPL-2.1"
] | null | null | null | Products/LDAPUserFolder/interfaces.py | phgv/Products.LDAPUserFolder | eb9db778916f47a80b3df069a31d0a2100b26423 | [
"ZPL-2.1"
] | null | null | null | Products/LDAPUserFolder/interfaces.py | phgv/Products.LDAPUserFolder | eb9db778916f47a80b3df069a31d0a2100b26423 | [
"ZPL-2.1"
] | null | null | null | ##############################################################################
#
# Copyright (c) 2000-2009 Jens Vagelpohl and Contributors. All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
""" Interfaces for LDAPUserFolder package classes
"""
from AccessControl.interfaces import IStandardUserFolder
from AccessControl.interfaces import IUser
class ILDAPUser(IUser):
""" IUser interface with extended API for the LDAPUserFolder
This interface is supported by user objects which
are returned by user validation through the LDAPUserFolder
product and used for access control.
"""
def getProperty(name, default=''):
""" Retrieve the value of a property of name "name". If this
property does not exist, the default value is returned.
Properties can be any public attributes that are part of the
user record in LDAP. Refer to them by their LDAP attribute
name or the name they have been mapped to in the LDAP User
Folder
Permission - Access contents information
"""
def getUserDN():
""" Retrieve the user object's Distinguished Name attribute.
Permission - Access contents information
"""
def getCreationTime():
""" Return a DateTime object representing the user object creation time
Permission - Access contents information
"""
class ILDAPUserFolder(IStandardUserFolder):
""" This interface lists methods available for scripting
LDAPUserFolder objects.
Some others are accessible given the correct permissions but since
they are used only in the internal workings of the LDAPUserFolder
they are not listed here.
"""
def getUsers():
""" Return all user objects. Since the number of user records in
an LDAP database is potentially very large this method will
only return those user objects that are in the internal cache
of the LDAPUserFolder and not expired.
Permission - *Manage users*
"""
def getUserNames():
""" Return a list of user IDs for all users that can be found
given the selected user search base and search scope.
This method will return a simple error message if the
number of users exceeds the limit of search hits that is
built into the python-ldap module.
Permission - *Manage users*
"""
def getUser(name):
""" Return the user object for the user "name". if the user
cannot be found, None will be returned.
Permission - *Manage users*
"""
def getUserById(id):
""" Return the user object with the UserID "id". The User ID
may be different from the "Name", the Login. To get a user
by its Login, call getUser.
Permission - *Manage users*
"""
def getGroups(dn='*', attr=None, pwd=''):
""" Return a list of available group records under the group record
base as defined in the LDAPUserFolder, or a specific group if the
``dn`` parameter is provided. The attr argument determines
what gets returned and it can have the following values:
o None: A list of tuples is returned where the group CN is the first
and the group full DN is the second element.
o cn: A list of CN strings is returned.
o dn: A list of full DN strings is returned.
Permission: *Manage users*
"""
def manage_addGroup(newgroup_name, newgroup_type='groupOfUniqueNames',
REQUEST=None):
""" Add a new group under the group record base of type
``newgroup_type``. If REQUEST is not None a MessageDialog screen will
be returned. The group_name argument forms the new group CN while the
full DN will be formed by combining this new CN with the group base DN.
Since a group record cannot be empty, meaning there must be at least
a single uniqueMember element in it, the DN given as the binduid in
the LDAPUserFolder configuration is inserted.
Permission: *Manage users*
"""
def manage_deleteGroups(dns=[], REQUEST=None):
""" Delete groups specified by a list of group DN strings which are
handed in as the *dns* argument.
Permission: *Manage users*
"""
def findUser(search_param, search_term, attrs=(), exact_match=False):
""" Find user records given the *search_param* string (which is the
name of an LDAP attribute) and the *search_term* value. The
``attrs`` argument can be used the desired attributes to return, and
``exact_match`` determines whether the search is a wildcard search
or not.
This method will return a list of dictionaries where each matching
record is represented by a dictionary. The dictionary will contain
a key/value pair for each LDAP attribute, including *dn*, that is
present for the given user record.
Permission: *Manage users*
"""
def searchUsers(attrs=(), exact_match=False, **kw):
""" Search for user records by one or more attributes.
This method takes any passed-in search parameters and values as
keyword arguments and will sort out invalid keys automatically. It
    accepts all three forms an attribute can be known by: its real
    LDAP name, the name it is explicitly mapped to, and the
    friendly name it is known by.
Permission: *Manage users*
"""
def getUserDetails(encoded_dn, format=None, attrs=()):
""" Retrieves all details for a user record represented by the DN that
is handed in as the URL-encoded *encoded_dn* argument. The format
argument determines the format of the returned data and can have
two values:
o None: All user attributes are handed back as a list of tuples
where the first element of each tuple contains the LDAP attribute
name and the second element contains the value.
o dictionary: The user record is handed back as a simple dictionary
of attributes as key/value pairs.
The desired attributes can be limited by passing in a sequence of
attribute names as the attrs argument.
Permission: *Manage users*
"""
def isUnique(attr, value):
""" Determine whether a given LDAP attribute (attr) and its value
(value) are unique in the LDAP tree branch set as the user record
base in the LDAPUserFolder. This method should be called before
inserting a new user record with attr being the attribute chosen as
the login name in your LDAPUserFolder because that attribute value
must be unique.
    This method will return a truth value (1) if the given attribute value
    is indeed unique, 0 if it is not, and, in the case of an exception,
    the string describing the exception.
Permission: *Manage users*
"""
def manage_addUser(REQUEST, kwargs):
""" Create a new user record. If REQUEST is not None, it will be
used to retrieve the values for the user record.
To use this method from Python you must pass None as the REQUEST
argument and a dictionary called *kwargs* containing key/value pairs
for the user record attributes.
The dictionary of values passed in, be it REQUEST or kwargs, must at
the very least contain the following keys and values:
o *cn* or *uid* (depending on what you set the RDN attribute to)
o *user_pw* (the new user record's password)
    o *confirm_pw* (this must match *user_pw*)
o all attributes your user record LDAP schema must contain (consult
your LDAP server schema)
Only those attributes and values are used that are specified on the
LDAP Schema tab of your LDAPUserFolder.
Permission: *Manage users*
"""
def manage_editUser(user_dn, REQUEST, kwargs):
""" Edit an existing user record. If REQUEST is not None, it will
be used to retrieve the values for the user record.
To use this method from Python you must pass None as the REQUEST
argument and a dictionary called *kwargs* containing key/value pairs
for the user record attributes.
Only those attributes and values are used that are specified on the
LDAP Schema tab of your LDAPUserFolder.
    This method will handle modified RDN (Relative Distinguished Name)
    attributes correctly and execute a *modrdn* as well if needed,
    including changing the DN in all group records the user is part of.
Permission: *Manage users*
"""
def manage_editUserPassword(dn, new_pw, REQUEST):
""" Change a users password. The *dn* argument contains the full DN
for the user record in question and new_pw contains the new password.
Permission: *Manage users*
"""
def manage_editUserRoles(user_dn, role_dns, REQUEST):
""" Change a user's group memberships. The user is specified by a
    full DN string, handed in as the *user_dn* argument. All group
records the user is supposed to be part of are handed in as
*role_dns*, a list of DN strings for group records.
Permission: *Manage users*
"""
def manage_deleteUsers(dns, REQUEST):
""" Delete the user records given by a list of DN strings. The user
records will be deleted and their mentioning in any group record
as well.
Permission: *Manage users*
"""
# File: src/sentry/api/endpoints/organization_projects.py (repo: seukjung/sentry-custom, license: BSD-3-Clause)
from __future__ import absolute_import
import six
from rest_framework.response import Response
from sentry.api.base import DocSection
from sentry.api.bases.organization import OrganizationEndpoint
from sentry.api.serializers import serialize
from sentry.models import Project, Team
from sentry.utils.apidocs import scenario, attach_scenarios
@scenario('ListOrganizationProjects')
def list_organization_projects_scenario(runner):
runner.request(
method='GET',
path='/organizations/%s/projects/' % runner.org.slug
)
class OrganizationProjectsEndpoint(OrganizationEndpoint):
doc_section = DocSection.ORGANIZATIONS
@attach_scenarios([list_organization_projects_scenario])
def get(self, request, organization):
"""
List an Organization's Projects
```````````````````````````````
        Return a list of projects bound to an organization.
:pparam string organization_slug: the slug of the organization for
which the projects should be listed.
:auth: required
"""
if request.auth and not request.user.is_authenticated():
# TODO: remove this, no longer supported probably
if hasattr(request.auth, 'project'):
team_list = [request.auth.project.team]
project_list = [request.auth.project]
elif request.auth.organization is not None:
org = request.auth.organization
team_list = list(Team.objects.filter(
organization=org,
))
project_list = list(Project.objects.filter(
team__in=team_list,
).order_by('name'))
else:
return Response({'detail': 'Current access does not point to '
'organization.'}, status=400)
else:
team_list = list(request.access.teams)
project_list = list(Project.objects.filter(
team__in=team_list,
).order_by('name'))
team_map = {
d['id']: d
for d in serialize(team_list, request.user)
}
context = []
for project, pdata in zip(project_list, serialize(project_list, request.user)):
assert six.text_type(project.id) == pdata['id']
pdata['team'] = team_map[six.text_type(project.team_id)]
context.append(pdata)
return Response(context)
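The tail of ``get`` is a generic join-by-id pattern: build an id-to-record map from one serialized list, then walk the other list and attach the matching record. Stripped of Django and Sentry specifics (all data below is invented), the pattern looks like this:

```python
# Stand-ins for serialize() output: lists of plain dicts with string ids.
teams = [{"id": "1", "name": "backend"}, {"id": "2", "name": "frontend"}]
projects = [
    {"id": "10", "name": "api", "team_id": "1"},
    {"id": "11", "name": "web", "team_id": "2"},
]

# Same dict comprehension the endpoint uses for team_map.
team_map = {t["id"]: t for t in teams}

# Attach each project's team record by id, as the endpoint does.
context = []
for pdata in projects:
    pdata = dict(pdata)
    pdata["team"] = team_map[pdata.pop("team_id")]
    context.append(pdata)
```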
# File: senta/data/field_reader/generate_label_field_reader.py (repo: zgzwelldone/Senta, license: Apache-2.0)
# -*- coding: utf-8 -*-
"""
:py:class:`GenerateLabelFieldReader`
"""
import numpy as np
from senta.common.register import RegisterSet
from senta.common.rule import DataShape, FieldLength, InstanceName
from senta.data.field_reader.base_field_reader import BaseFieldReader
from senta.data.util_helper import generate_pad_batch_data
from senta.modules.token_embedding.custom_fluid_embedding import CustomFluidTokenEmbedding
@RegisterSet.field_reader.register
class GenerateLabelFieldReader(BaseFieldReader):
"""seq2seq label的专用field_reader
"""
def __init__(self, field_config):
"""
:param field_config:
"""
BaseFieldReader.__init__(self, field_config=field_config)
self.paddle_version_code = 1.6
if self.field_config.tokenizer_info:
tokenizer_class = RegisterSet.tokenizer.__getitem__(self.field_config.tokenizer_info["type"])
params = None
if self.field_config.tokenizer_info.__contains__("params"):
params = self.field_config.tokenizer_info["params"]
self.tokenizer = tokenizer_class(vocab_file=self.field_config.vocab_path,
split_char=self.field_config.tokenizer_info["split_char"],
unk_token=self.field_config.tokenizer_info["unk_token"],
params=params)
if self.field_config.embedding_info and self.field_config.embedding_info["use_reader_emb"]:
self.token_embedding = CustomFluidTokenEmbedding(emb_dim=self.field_config.embedding_info["emb_dim"],
vocab_size=self.tokenizer.vocabulary.get_vocab_size())
def init_reader(self):
""" 初始化reader格式
:return: reader的shape[]、type[]、level[]
"""
shape = []
types = []
levels = []
"""train_tar_ids"""
if self.field_config.data_type == DataShape.STRING:
"""src_ids"""
shape.append([-1, self.field_config.max_seq_len])
levels.append(0)
types.append('int64')
else:
raise TypeError("GenerateLabelFieldReader's data_type must be string")
"""mask_ids"""
shape.append([-1, self.field_config.max_seq_len])
levels.append(0)
types.append('float32')
"""seq_lens"""
shape.append([-1])
levels.append(0)
types.append('int64')
"""infer_tar_ids"""
shape.append([-1, self.field_config.max_seq_len, 1])
levels.append(0)
types.append('int64')
"""mask_ids"""
shape.append([-1, self.field_config.max_seq_len])
levels.append(0)
types.append('float32')
"""seq_lens"""
shape.append([-1])
levels.append(0)
types.append('int64')
return shape, types, levels
def convert_texts_to_ids(self, batch_text):
"""将一个batch的明文text转成id
:param batch_text:
:return:
"""
train_src_ids = []
infer_src_ids = []
for text in batch_text:
if self.field_config.need_convert:
tokens = self.tokenizer.tokenize(text)
src_id = self.tokenizer.convert_tokens_to_ids(tokens)
else:
src_id = text.split(" ")
            # apply the truncation policy
if len(src_id) > self.field_config.max_seq_len - 1:
src_id = src_id[0:self.field_config.max_seq_len - 1]
train_src_id = [self.field_config.label_start_id] + src_id
infer_src_id = src_id + [self.field_config.label_end_id]
train_src_ids.append(train_src_id)
infer_src_ids.append(infer_src_id)
return_list = []
train_label_ids, train_label_mask, label_lens = generate_pad_batch_data(train_src_ids,
pad_idx=self.field_config.padding_id,
return_input_mask=True,
return_seq_lens=True,
paddle_version_code=self.paddle_version_code)
infer_label_ids, infer_label_mask, label_lens = generate_pad_batch_data(infer_src_ids,
pad_idx=self.field_config.padding_id,
return_input_mask=True,
return_seq_lens=True,
paddle_version_code=self.paddle_version_code)
infer_label_ids = np.reshape(infer_label_ids, (infer_label_ids.shape[0], infer_label_ids.shape[1], 1))
return_list.append(train_label_ids)
return_list.append(train_label_mask)
return_list.append(label_lens)
return_list.append(infer_label_ids)
return_list.append(infer_label_mask)
return_list.append(label_lens)
return return_list
def structure_fields_dict(self, slots_id, start_index, need_emb=True):
"""静态图调用的方法,生成一个dict, dict有两个key:id , emb. id对应的是pyreader读出来的各个field产出的id,emb对应的是各个
field对应的embedding
:param slots_id: pyreader输出的完整的id序列
:param start_index:当前需要处理的field在slot_id_list中的起始位置
:param need_emb:是否需要embedding(预测过程中是不需要embedding的)
:return:
"""
record_id_dict = {}
record_id_dict[InstanceName.TRAIN_LABEL_SRC_IDS] = slots_id[start_index]
record_id_dict[InstanceName.TRAIN_LABEL_MASK_IDS] = slots_id[start_index + 1]
record_id_dict[InstanceName.TRAIN_LABEL_SEQ_LENS] = slots_id[start_index + 2]
record_id_dict[InstanceName.INFER_LABEL_SRC_IDS] = slots_id[start_index + 3]
record_id_dict[InstanceName.INFER_LABEL_MASK_IDS] = slots_id[start_index + 4]
record_id_dict[InstanceName.INFER_LABEL_SEQ_LENS] = slots_id[start_index + 5]
record_emb_dict = None
if need_emb and self.token_embedding:
record_emb_dict = self.token_embedding.get_token_embedding(record_id_dict)
record_dict = {}
record_dict[InstanceName.RECORD_ID] = record_id_dict
record_dict[InstanceName.RECORD_EMB] = record_emb_dict
return record_dict
def get_field_length(self):
"""获取当前这个field在进行了序列化之后,在slot_id_list中占多少长度
:return:
"""
return FieldLength.GENERATE_LABEL_FIELD
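``convert_texts_to_ids`` leans on ``generate_pad_batch_data`` to equalize sequence lengths and produce the mask and length outputs. As a rough, pure-Python sketch of that idea (the real helper lives in ``senta.data.util_helper``, returns numpy arrays, and handles Paddle-version specifics not shown here):

```python
# Rough sketch only: pad every sequence to the batch maximum and emit the
# matching float mask and integer sequence lengths.
def pad_batch(batch_ids, pad_idx=0):
    max_len = max(len(ids) for ids in batch_ids)
    padded, mask, seq_lens = [], [], []
    for ids in batch_ids:
        n_pad = max_len - len(ids)
        padded.append(list(ids) + [pad_idx] * n_pad)
        mask.append([1.0] * len(ids) + [0.0] * n_pad)
        seq_lens.append(len(ids))
    return padded, mask, seq_lens

padded, mask, seq_lens = pad_batch([[5, 6, 7], [8]])
# padded == [[5, 6, 7], [8, 0, 0]]
```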
# File: foodshering/authapp/forms.py (repo: malfin/silvehanger, license: Apache-2.0)
from django import forms
from django.contrib.auth.forms import AuthenticationForm, UserCreationForm, PasswordChangeForm, UserChangeForm
from authapp.models import UserProfile, Status
class LoginForm(AuthenticationForm):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
for name, item in self.fields.items():
item.widget.attrs['class'] = f'form-control {name}'
class RegisterForm(UserCreationForm, forms.ModelForm):
    status = forms.CharField(label='Who are you?', widget=forms.Select(choices=Status.choices))
class Meta:
model = UserProfile
fields = (
'username',
'first_name',
'last_name',
'status',
'email',
)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
for name, item in self.fields.items():
item.widget.attrs['class'] = f'form-control {name}'
item.help_text = ''
class ChangeForm(UserChangeForm):
class Meta:
model = UserProfile
fields = ('username', 'first_name', 'last_name', 'email', 'address', 'phone_number')
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
for name, item in self.fields.items():
item.widget.attrs['class'] = 'form-control'
item.help_text = ''
if name == 'password':
item.widget = forms.HiddenInput()
class ChangePassword(PasswordChangeForm):
class Meta:
model = UserProfile
        fields = ('password',)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
for name, item in self.fields.items():
item.widget.attrs['class'] = 'form-control'
def get_form(self, form_class):
return form_class(self.request.user, **self.get_form_kwargs())
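All four forms rely on the same ``__init__`` trick: iterate ``self.fields`` and stamp a Bootstrap-style class onto every widget. With tiny stand-in objects in place of Django's bound fields (so the loop can run without Django), the effect is:

```python
# Tiny stand-ins for Django's bound fields and widgets (illustration only;
# real Django widgets carry many more attributes).
class FakeWidget:
    def __init__(self):
        self.attrs = {}

class FakeField:
    def __init__(self):
        self.widget = FakeWidget()
        self.help_text = "default help"

fields = {"username": FakeField(), "email": FakeField()}

# The same loop the forms above run in __init__.
for name, item in fields.items():
    item.widget.attrs["class"] = f"form-control {name}"
    item.help_text = ""
```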
# File: devices/parser/serializers.py (repo: City-of-Helsinki/hel-data-pipe, license: MIT)
from rest_framework import serializers
from .models import Device, SensorType
class SensorTypeSerializer(serializers.ModelSerializer):
class Meta:
model = SensorType
fields = "__all__"
class DeviceSerializer(serializers.HyperlinkedModelSerializer):
class Meta:
model = Device
fields = "__all__"
# File: core/migrations/0002_auto_20191102_1734.py (repo: manulangat1/djcommerce, license: MIT)
# Generated by Django 2.2.6 on 2019-11-02 17:34
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('core', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='item',
name='category',
field=models.CharField(blank=True, choices=[('S', 'Shirt'), ('Sw', 'Sport wear'), ('Ow', 'Outwear')], max_length=10, null=True),
),
migrations.AddField(
model_name='item',
name='label',
field=models.CharField(blank=True, choices=[('P', 'primary'), ('S', 'secondary'), ('D', 'danger')], max_length=10, null=True),
),
]
# File: stamper/migrations/0004_auto_20161208_1658.py (repo: uploadcare/stump, license: MIT)
# -*- coding: utf-8 -*-
# Generated by Django 1.9.9 on 2016-12-08 16:58
from __future__ import unicode_literals
import datetime
from django.db import migrations, models
from django.utils.timezone import utc
class Migration(migrations.Migration):
dependencies = [
('stamper', '0003_auto_20161122_1253'),
]
operations = [
migrations.AddField(
model_name='fileuploadmessage',
name='original_file_url',
field=models.CharField(default=datetime.datetime(2016, 12, 8, 16, 57, 50, 623808, tzinfo=utc), max_length=255),
preserve_default=False,
),
migrations.AddField(
model_name='imageuploadmessage',
name='original_file_url',
field=models.CharField(default=datetime.datetime(2016, 12, 8, 16, 58, 2, 437475, tzinfo=utc), max_length=255),
preserve_default=False,
),
]
#!/usr/bin/env python2
# File: Assignment-04/Question-03/mpi_ping_pong.py (repo: gnu-user/mcsc-6030-assignments, license: MIT)
###############################################################################
#
# Assignment 4, Question 3 solution for MPI ping-pong timings to calculate
# alpha and beta, implemented in Python using MPI.
#
# Copyright (C) 2015, Jonathan Gillett (100437638)
# All rights reserved.
#
###############################################################################
import numpy as np
import sys
from mpi4py import MPI
from time import sleep
from random import random
# Define process 0 as PING, process 1 as PONG
PING = 0
PONG = 1
# Number of trials for getting the average time
TRIALS = 100
if __name__ == '__main__':
if len(sys.argv) < 2:
print "ERROR: You must provide the number of bytes to send!."
sys.exit()
N = int(sys.argv[1]) # The number of bytes to generate
comm = MPI.COMM_WORLD
proc_id = comm.Get_rank()
n_proc = comm.Get_size()
status = MPI.Status()
# Error checking only 2 processes can be used
if n_proc > 2:
if proc_id == PING:
print "ERROR: Only two proceses (ping and pong)."
MPI.Finalize()
sys.exit()
if N < 1:
if proc_id == PING:
print "ERROR: You must specify the data size in bytes."
MPI.Finalize()
sys.exit()
# The data to send back and forth, in bytes
A = np.empty(N, dtype=np.int8)
comm.Barrier()
    # Send the data back and forth TRIALS times to get the average time
    timings = []
    for i in range(0, TRIALS):
if proc_id == PING:
local_time = -MPI.Wtime()
comm.Send(A, PONG, tag=PING)
comm.Recv(A, source=MPI.ANY_SOURCE, tag=PONG, status=status)
timings.append(local_time + MPI.Wtime())
# Simulate random sleeps to account for different scheduling
sleep(random() / 100)
else:
comm.Recv(A, source=MPI.ANY_SOURCE, tag=PING, status=status)
comm.Send(A, PING, tag=PONG)
if proc_id == PING:
print "N bytes sent: %d, trials: %d, average time: %0.8f seconds" \
% (N, TRIALS, sum(timings) / float(len(timings)) / 2.0)
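With the averaged timings in hand, alpha (latency) and beta (per-byte cost) come from fitting the linear cost model T(N) = alpha + beta * N over several message sizes. A self-contained least-squares sketch with invented measurements:

```python
# Hypothetical (message size in bytes, average one-way time in seconds)
# pairs, as runs of this script would produce; the values are invented.
samples = [(1024, 0.00110), (4096, 0.00140), (16384, 0.00260)]

# Ordinary least squares for the cost model T(N) = alpha + beta * N.
n = len(samples)
sum_x = sum(x for x, _ in samples)
sum_y = sum(y for _, y in samples)
sum_xx = sum(x * x for x, _ in samples)
sum_xy = sum(x * y for x, y in samples)
beta = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
alpha = (sum_y - beta * sum_x) / n
# With these (collinear) samples: alpha = 1.0e-3 s, beta ~ 9.77e-8 s/byte
```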
# File: FPAIT/lib/logger/utils.py (repo: D-X-Y/MSPLD-2018, license: MIT)
import time, sys
import numpy as np
def time_for_file():
ISOTIMEFORMAT='%d-%h-at-%H-%M-%S'
return '{}'.format(time.strftime( ISOTIMEFORMAT, time.gmtime(time.time()) ))
def time_string():
ISOTIMEFORMAT='%Y-%m-%d %X'
string = '[{}]'.format(time.strftime( ISOTIMEFORMAT, time.gmtime(time.time()) ))
return string
def time_string_short():
ISOTIMEFORMAT='%Y%m%d'
string = '{}'.format(time.strftime( ISOTIMEFORMAT, time.gmtime(time.time()) ))
return string
def print_log(print_string, log):
print("{}".format(print_string))
if log is not None:
log.write('{}\n'.format(print_string))
log.flush()
def convert_secs2time(epoch_time, return_string=False):
need_hour = int(epoch_time / 3600)
need_mins = int((epoch_time - 3600*need_hour) / 60)
need_secs = int(epoch_time - 3600*need_hour - 60*need_mins)
if return_string:
return '{:02d}:{:02d}:{:02d}'.format(need_hour, need_mins, need_secs)
else:
return need_hour, need_mins, need_secs
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self):
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
def __repr__(self):
return ('{name}(val={val}, avg={avg}, count={count})'.format(name=self.__class__.__name__, **self.__dict__))
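``AverageMeter`` keeps a weighted running mean: ``update(val, n)`` treats ``val`` as the mean over ``n`` samples. A condensed copy with a usage example:

```python
# Condensed copy of the AverageMeter above, with a usage example showing
# how n weights each update into the running average.
class AverageMeter(object):
    def __init__(self):
        self.val = 0
        self.avg = 0
        self.sum = 0
        self.count = 0

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count

meter = AverageMeter()
meter.update(2.0)        # one sample with value 2.0
meter.update(4.0, n=3)   # a batch of three samples averaging 4.0
# meter.avg == 3.5, meter.count == 4
```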
# File: ossdbtoolsservice/admin/contracts/__init__.py (repo: DaeunYim/pgtoolsservice, license: MIT)
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
from ossdbtoolsservice.admin.contracts.get_database_info_request import (
DatabaseInfo, GetDatabaseInfoParameters, GetDatabaseInfoResponse, GET_DATABASE_INFO_REQUEST)
__all__ = [
'DatabaseInfo', 'GetDatabaseInfoParameters', 'GetDatabaseInfoResponse', 'GET_DATABASE_INFO_REQUEST'
]
# File: support/send_broadcast_message.py (repo: ICT4H/dcs-web, license: Apache-2.0)
from xlrd import open_workbook
from scheduler.smsclient import SMSClient
filename = "/Users/twer/Downloads/SchoolsSMSGhana.xlsx"
workbook = open_workbook(filename)
organization_number = "1902"
area_code = "233"
sheets_ = workbook.sheets()[0]
sms_client = SMSClient()
print 'Start'
for row_num in range(1, sheets_.nrows):
row = sheets_.row_values(row_num)
_, _, data_sender_phone_number, message = tuple(row)
phone_number = area_code + str(int(data_sender_phone_number))[1:]
print ("Sending broadcast message to %s from %s.") % (phone_number, organization_number)
sms_sent = sms_client.send_sms(organization_number, phone_number, message)
print 'Response:', sms_sent
print 'End'
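The phone-number line does three things at once; a sketch with a made-up sheet value makes the mechanics visible. xlrd hands numeric cells back as floats, the ``int()``/``str()`` round-trip strips the trailing ``.0``, and ``[1:]`` drops the first digit (which the script treats as a local trunk prefix) before prefixing Ghana's country code:

```python
# Illustrative value only; real sheet data is not shown in the script.
area_code = "233"
cell_value = 241234567.0  # invented numeric cell from the spreadsheet

phone_number = area_code + str(int(cell_value))[1:]
# phone_number == "23341234567"
```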
#!/usr/bin/env python
# File: src/sgfsdriver/plugins/ftp/ftp_client.py (repo: syndicate-storage/syndicate-fs-driver-plugins, license: Apache-2.0)
"""
Copyright 2016 The Trustees of University of Arizona
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import traceback
import os
import logging
import time
import ftplib
import threading
from datetime import datetime
from expiringdict import ExpiringDict
from io import BytesIO
logger = logging.getLogger('ftp_client')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('ftp_client.log')
fh.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter(
'%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
METADATA_CACHE_SIZE = 10000
METADATA_CACHE_TTL = 60 * 60 # 1 hour
FTP_TIMEOUT = 5 * 60 # 5 min
FTP_OPERATION_TIMEOUT = 30 # 30 sec
BYTES_MAX_SKIP = 1024 * 1024 * 2 # 2MB
CONNECTIONS_MAX_NUM = 5
"""
Interface class to FTP
"""
class MLSD_NOT_SUPPORTED(Exception):
pass
class ftp_status(object):
def __init__(self,
directory=False,
symlink=False,
path=None,
name=None,
size=0,
create_time=0,
modify_time=0):
self.directory = directory
self.symlink = symlink
self.path = path
self.name = name
self.size = size
self.create_time = create_time
self.modify_time = modify_time
def __eq__(self, other):
return self.__dict__ == other.__dict__
def __repr__(self):
rep_d = "F"
if self.directory:
rep_d = "D"
rep_s = "-"
if self.symlink:
rep_s = "S"
return "<ftp_status %s%s %s %d>" % \
(rep_d, rep_s, self.name, self.size)
class downloader_connection(object):
def __init__(self,
path,
connection,
offset=0,
last_comm=0):
self.path = path
self.offset = offset
self.connection = connection
self.last_comm = last_comm
self._lock = threading.RLock()
def __repr__(self):
return "<downloader_connection path(%s) off(%d) last_comm(%s)>" % \
(self.path, self.offset, self.last_comm)
def lock(self):
self._lock.acquire()
def unlock(self):
self._lock.release()
class ftp_session(object):
def __init__(self,
host,
port=21,
user="anonymous",
password="anonymous@email.com"):
self.host = host
self.port = port
self.user = user
self.password = password
self.session = None
self.last_comm = None
self.connections = {}
self._lock = threading.RLock()
def connect(self):
logger.info("connect session")
self.lock()
try:
ftp_session = ftplib.FTP()
ftp_session.connect(self.host, self.port, FTP_OPERATION_TIMEOUT)
ftp_session.login(self.user, self.password)
self.session = ftp_session
logger.info("new ftp session to %s:%d - %d" % (self.host, self.port, id(self.session)))
self.last_comm = datetime.now()
finally:
self.unlock()
def close(self):
logger.info("close session")
self.lock()
        # iterate over a copy; entries are deleted from the dict inside the loop
        for connection in list(self.connections.values()):
logger.info("_close_connection : %s" % connection.path)
connection.lock()
try:
conn = connection.connection
conn.close()
self.session.voidresp()
except ftplib.error_temp:
# abortion of transfer causes this type of error
pass
except EOFError:
pass
del self.connections[connection.path]
connection.unlock()
try:
logger.info("close ftp session to %s:%d - %d" % (self.host, self.port, id(self.session)))
self.session.close()
finally:
self.session = None
self.last_comm = None
self.unlock()
def reconnect(self):
logger.info("reconnect session")
self.lock()
self.close()
self.connect()
self.unlock()
def reconnect_if_needed(self):
expired = True
reconnected = False
if self.last_comm:
delta = datetime.now() - self.last_comm
if delta.total_seconds() < FTP_TIMEOUT:
expired = False
if expired:
# perform a short command then reconnect at fail
logger.info("reconnect_if_needed: expired - check live")
self.lock()
try:
self.pwd()
reconnected = False
            except Exception:
self.reconnect()
reconnected = True
finally:
self.unlock()
else:
reconnected = False
return reconnected
def retrlines(self, op, callback):
logger.info("retrlines")
self.lock()
try:
self.session.retrlines(op, callback)
self.last_comm = datetime.now()
finally:
self.unlock()
def pwd(self):
logger.info("pwd")
self.lock()
try:
self.session.pwd()
self.last_comm = datetime.now()
finally:
self.unlock()
def cwd(self, path):
logger.info("cwd - %s" % path)
self.lock()
try:
self.session.cwd(path)
self.last_comm = datetime.now()
finally:
self.unlock()
def mkd(self, path):
logger.info("mkd - %s" % path)
self.lock()
try:
self.session.mkd(path)
self.last_comm = datetime.now()
finally:
self.unlock()
def storbinary(self, op, path, buf, rest):
logger.info("storbinary - %s" % path)
self.lock()
try:
self.session.storbinary("%s %s" % (op, path), buf, rest=rest)
self.last_comm = datetime.now()
finally:
self.unlock()
def delete(self, path):
logger.info("delete - %s" % path)
self.lock()
try:
self.session.delete(path)
self.last_comm = datetime.now()
finally:
self.unlock()
def rename(self, path1, path2):
logger.info("rename - %s => %s" % (path1, path2))
self.lock()
try:
self.session.rename(path1, path2)
self.last_comm = datetime.now()
finally:
self.unlock()
def __repr__(self):
return "<ftp_session host(%s) port(%d)>" % \
(self.host, self.port)
def lock(self):
self._lock.acquire()
def unlock(self):
self._lock.release()
def _new_connection(self, path, offset=0):
logger.info("_new_connection : %s, off(%d)" % (path, offset))
self.lock()
try:
self.session.voidcmd("TYPE I")
conn = self.session.transfercmd("RETR %s" % path, offset)
connection = downloader_connection(path, conn, offset, datetime.now())
self.connections[path] = connection
self.last_comm = datetime.now()
finally:
self.unlock()
return connection
def _close_connection(self, connection):
path = connection.path
logger.info("_close_connection : %s" % path)
self.lock()
connection.lock()
try:
conn = connection.connection
conn.close()
self.session.voidresp()
except ftplib.error_temp:
# abortion of transfer causes this type of error
pass
del self.connections[path]
connection.unlock()
self.unlock()
def _get_connection(self, path, offset):
connection = None
logger.info("_get_connection : %s, off(%d)" % (path, offset))
self.lock()
if path in self.connections:
connection = self.connections[path]
if connection:
            reusable = False
            connection.lock()
            coffset = connection.offset
            time_delta = datetime.now() - connection.last_comm
            # reuse only when the requested offset lies within the skip-ahead
            # window and the connection has not idled past FTP_TIMEOUT
            if (coffset <= offset <= coffset + BYTES_MAX_SKIP and
                    time_delta.total_seconds() < FTP_TIMEOUT):
                reusable = True
            connection.unlock()
if not reusable:
self._close_connection(connection)
connection = None
if len(self.connections) >= CONNECTIONS_MAX_NUM:
# remove oldest
oldest = None
for live_connection in self.connections.values():
if not oldest:
oldest = live_connection
else:
if oldest.last_comm > live_connection.last_comm:
oldest = live_connection
if oldest:
self._close_connection(oldest)
if connection:
connection.lock()
# may need to move offset
skip_bytes = offset - connection.offset
total_read = 0
EOF = False
while total_read < skip_bytes:
data = connection.connection.recv(skip_bytes - total_read)
if data:
data_len = len(data)
total_read += data_len
connection.offset += data_len
else:
#EOF
EOF = True
break
if total_read > 0:
connection.last_comm = datetime.now()
#connection.unlock()
else:
connection = self._new_connection(path, offset)
connection.lock()
self.unlock()
return connection
def read_data(self, path, offset, size):
logger.info("read_data : %s, off(%d), size(%d)" % (path, offset, size))
#connection is locked
self.lock()
connection = self._get_connection(path, offset)
buf = BytesIO()
total_read = 0
EOF = False
if connection.offset != offset:
connection.unlock()
self._close_connection(connection)
connection = self._new_connection(path, offset)
connection.lock()
#raise Exception("Connection does not have right offset %d - %d" % (connection.offset, offset))
while total_read < size:
data = connection.connection.recv(size - total_read)
if data:
buf.write(data)
data_len = len(data)
total_read += data_len
connection.offset += data_len
else:
#EOF
EOF = True
break
if total_read > 0:
connection.last_comm = datetime.now()
connection.unlock()
if EOF:
self._close_connection(connection)
self.unlock()
return buf.getvalue()
class prefetch_task(threading.Thread):
def __init__(self,
group=None,
target=None,
name=None,
args=(),
                 kwargs=None):
        # threading.Thread no longer accepts a "verbose" keyword in Python 3
        threading.Thread.__init__(self, group=group, target=target, name=name)
self.args = args
self.kwargs = kwargs
self.host = kwargs["host"]
self.port = int(kwargs["port"])
self.path = kwargs["path"]
self.offset = kwargs["offset"]
self.size = kwargs["size"]
self.ftp_client = kwargs["ftp_client"]
self.complete = False
self.data = None
def run(self):
logger.info("prefetch_task : %s:%d - %s, off(%d), size(%d)" % (self.host, self.port, self.path, self.offset, self.size))
session = self.ftp_client.get_download_session()
session.lock()
buf = None
try:
session.reconnect_if_needed()
buf = session.read_data(self.path, self.offset, self.size)
logger.info("prefetch_task: read done")
        except Exception:
logger.error("prefetch_task: " + traceback.format_exc())
finally:
session.unlock()
self.data = buf
self.complete = True
class ftp_client(object):
def __init__(self,
host=None,
port=21,
user='anonymous',
password='anonymous@email.com'):
self.host = host
if port > 0:
self.port = int(port)
else:
self.port = 21
if user:
self.user = user
else:
self.user = "anonymous"
if password:
self.password = password
else:
self.password = "anonymous@email.com"
self.session = ftp_session(self.host, self.port, self.user, self.password)
self.download_session = ftp_session(self.host, self.port, self.user, self.password)
self.mlsd_supported = True
self.prefetch_thread = None
# init cache
self.meta_cache = ExpiringDict(
max_len=METADATA_CACHE_SIZE,
max_age_seconds=METADATA_CACHE_TTL
)
def connect(self):
logger.info("connect: connecting to FTP server (%s)" % self.host)
self.session.connect()
self.download_session.connect()
def close(self):
try:
            logger.info("close: closing a connection to FTP server (%s)" % self.host)
self.session.close()
self.download_session.close()
except:
pass
def reconnect(self):
self.close()
self.connect()
def __enter__(self):
self.connect()
return self
def __exit__(self, exc_type, exc_value, traceback):
self.close()
def get_download_session(self):
return self.download_session
def _parse_MLSD(self, parent, line):
# modify=20170623195719;perm=adfr;size=454060;type=file;unique=13U670966;UNIX.group=570;UNIX.mode=0444;UNIX.owner=14; gbrel.txt
fields = line.split(";")
stat_dict = {}
for field in fields:
if "=" in field:
# key-value field
fv = field.strip()
fv_idx = fv.index("=")
key = fv[:fv_idx].lower()
value = fv[fv_idx+1:]
else:
key = "name"
value = field.strip()
stat_dict[key] = value
full_path = parent.rstrip("/") + "/" + stat_dict["name"]
directory = False
symlink = False
if "type" in stat_dict:
t = stat_dict["type"]
if t in ["cdir", "pdir"]:
return None
if t == "dir":
directory = True
elif t == "OS.unix=symlink":
symlink = True
elif t.startswith("OS.unix=slink:"):
symlink = True
            if (t not in ["dir", "file", "OS.unix=symlink"] and
                    not t.startswith("OS.unix=slink:")):
                raise IOError("Unknown type : %s" % t)
size = 0
if "size" in stat_dict:
            size = int(stat_dict["size"])
modify_time = None
if "modify" in stat_dict:
modify_time_obj = datetime.strptime(stat_dict["modify"], "%Y%m%d%H%M%S")
modify_time = time.mktime(modify_time_obj.timetuple())
create_time = None
if "create" in stat_dict:
create_time_obj = datetime.strptime(stat_dict["create"], "%Y%m%d%H%M%S")
create_time = time.mktime(create_time_obj.timetuple())
if "name" in stat_dict:
return ftp_status(
directory=directory,
symlink=symlink,
path=full_path,
name=stat_dict["name"],
size=size,
create_time=create_time,
modify_time=modify_time
)
else:
return None
def _parse_LIST(self, parent, line):
# drwxr-xr-x 8 20002 2006 4096 Dec 08 2015 NANOGrav_9y
fields = line.split()
stat_dict = {}
stat_dict["perm"] = fields[0]
stat_dict["owner"] = fields[2]
stat_dict["group"] = fields[3]
stat_dict["size"] = fields[4]
stat_dict["month"] = fields[5]
stat_dict["day"] = fields[6]
stat_dict["d3"] = fields[7]
stat_dict["name"] = fields[8]
full_path = parent.rstrip("/") + "/" + stat_dict["name"]
directory = False
symlink = False
if stat_dict["perm"].startswith("d"):
directory = True
elif stat_dict["perm"].startswith("-"):
directory = False
elif stat_dict["perm"].startswith("l"):
directory = True
symlink = True
else:
raise IOError("Unknown type : %s" % stat_dict["perm"])
size = 0
if "size" in stat_dict:
            size = int(stat_dict["size"])
now = datetime.now()
year = now.year
hour = 0
minute = 0
if stat_dict["d3"].isdigit():
year = int(stat_dict["d3"])
else:
hm = stat_dict["d3"].split(":")
hour = int(hm[0])
minute = int(hm[1])
d = "%d %s %s %d %d" % (
year, stat_dict["month"], stat_dict["day"], hour, minute
)
        # LIST lines always carry the modification date assembled above;
        # this format provides no creation date
        modify_time_obj = datetime.strptime(d, "%Y %b %d %H %M")
        modify_time = time.mktime(modify_time_obj.timetuple())
        create_time = None
if "name" in stat_dict:
return ftp_status(
directory=directory,
symlink=symlink,
path=full_path,
name=stat_dict["name"],
size=size,
create_time=create_time,
modify_time=modify_time
)
else:
return None
def _list_dir_and_stat_MLSD(self, path):
logger.info("_list_dir_and_stat_MLSD: retrlines with MLSD - %s" % path)
stats = []
try:
entries = []
try:
self.session.retrlines("MLSD", entries.append)
            except ftplib.error_perm as e:
msg = str(e)
if "500" in msg or "unknown command" in msg.lower():
raise MLSD_NOT_SUPPORTED("MLSD is not supported")
for ent in entries:
st = self._parse_MLSD(path, ent)
if st:
stats.append(st)
except ftplib.error_perm:
logger.error("_list_dir_and_stat_MLSD: " + traceback.format_exc())
return stats
def _list_dir_and_stat_LIST(self, path):
logger.info("_list_dir_and_stat_LIST: retrlines with LIST - %s" % path)
stats = []
try:
entries = []
self.session.retrlines("LIST", entries.append)
for ent in entries:
st = self._parse_LIST(path, ent)
if st:
stats.append(st)
except ftplib.error_perm:
logger.error("_list_dir_and_stat_LIST: " + traceback.format_exc())
return stats
def _ensureDirEntryStatLoaded(self, path):
# reuse cache
if path in self.meta_cache:
return self.meta_cache[path]
logger.info("_ensureDirEntryStatLoaded: loading - %s" % path)
self.session.lock()
self.session.reconnect_if_needed()
self.session.cwd(path)
stats = []
if self.mlsd_supported:
try:
stats = self._list_dir_and_stat_MLSD(path)
except MLSD_NOT_SUPPORTED:
self.mlsd_supported = False
stats = self._list_dir_and_stat_LIST(path)
else:
stats = self._list_dir_and_stat_LIST(path)
self.session.unlock()
self.meta_cache[path] = stats
return stats
def _invoke_prefetch(self, path, offset, size):
logger.info("_invoke_prefetch : %s, off(%d), size(%d)" % (path, offset, size))
self.prefetch_thread = prefetch_task(name="prefetch_task_thread", kwargs={'host':self.session.host, 'port':self.session.port, 'path':path, 'offset':offset, 'size':size, 'ftp_client':self})
self.prefetch_thread.start()
def _get_prefetch_data(self, path, offset, size):
logger.info("_get_prefetch_data : %s, off(%d), size(%d)" % (path, offset, size))
invoke_new_thread = False
if self.prefetch_thread:
if not self.prefetch_thread.complete:
self.prefetch_thread.join()
if self.prefetch_thread.path != path or self.prefetch_thread.offset != offset or self.prefetch_thread.size != size:
invoke_new_thread = True
else:
invoke_new_thread = True
if invoke_new_thread:
self._invoke_prefetch(path, offset, size)
self.prefetch_thread.join()
data = self.prefetch_thread.data
self.prefetch_thread = None
return data
"""
Returns ftp_status
"""
def stat(self, path):
logger.info("stat: %s" % path)
try:
# try bulk loading of stats
parent = os.path.dirname(path)
stats = self._ensureDirEntryStatLoaded(parent)
if stats:
for sb in stats:
if sb.path == path:
return sb
return None
except Exception:
# fall if cannot access the parent dir
return None
"""
Returns directory entries in string
"""
def list_dir(self, path):
logger.info("list_dir: %s" % path)
stats = self._ensureDirEntryStatLoaded(path)
entries = []
if stats:
for sb in stats:
entries.append(sb.name)
return entries
def is_dir(self, path):
sb = self.stat(path)
if sb:
return sb.directory
return False
def make_dirs(self, path):
logger.info("make_dirs: %s" % path)
        if path in ("", "/"):
            # nothing to create at the filesystem root
            return
        if not self.exists(path):
            self.session.lock()
            self.session.reconnect_if_needed()
            # make parent dir first
            self.make_dirs(os.path.dirname(path))
            self.session.mkd(path)
            self.session.unlock()
# invalidate stat cache
self.clear_stat_cache(os.path.dirname(path))
def exists(self, path):
logger.info("exists: %s" % path)
try:
sb = self.stat(path)
if sb:
return True
return False
except Exception:
return False
def clear_stat_cache(self, path=None):
logger.info("clear_stat_cache: %s" % path)
if(path):
if path in self.meta_cache:
# directory
del self.meta_cache[path]
else:
# file
parent = os.path.dirname(path)
if parent in self.meta_cache:
del self.meta_cache[parent]
else:
self.meta_cache.clear()
def read(self, path, offset, size):
logger.info("read : %s, off(%d), size(%d)" % (path, offset, size))
buf = None
try:
sb = self.stat(path)
if offset >= sb.size:
# EOF
buf = BytesIO()
return buf.getvalue()
time1 = datetime.now()
buf = self._get_prefetch_data(path, offset, size)
#buf = self.session.read_data(path, offset, size)
read_len = len(buf)
if read_len + offset < sb.size:
self._invoke_prefetch(path, offset + read_len, size)
time2 = datetime.now()
delta = time2 - time1
logger.info("read: took - %s" % delta)
logger.info("read: read done")
        except Exception as e:
logger.error("read: " + traceback.format_exc())
raise e
return buf
def write(self, path, offset, buf):
logger.info("write : %s, off(%d), size(%d)" % (path, offset, len(buf)))
try:
logger.info("write: writing buffer %d" % len(buf))
bio = BytesIO()
bio.write(buf)
bio.seek(0)
self.session.lock()
self.session.reconnect_if_needed()
self.session.storbinary("STOR", path, bio, offset)
self.session.unlock()
logger.info("write: writing done")
        except Exception as e:
logger.error("write: " + traceback.format_exc())
raise e
# invalidate stat cache
self.clear_stat_cache(path)
def truncate(self, path, size):
logger.info("truncate : %s" % path)
raise IOError("truncate is not supported")
def unlink(self, path):
logger.info("unlink : %s" % path)
try:
logger.info("unlink: deleting a file - %s" % path)
self.session.lock()
self.session.reconnect_if_needed()
self.session.delete(path)
self.session.unlock()
logger.info("unlink: deleting done")
        except Exception as e:
logger.error("unlink: " + traceback.format_exc())
raise e
# invalidate stat cache
self.clear_stat_cache(path)
def rename(self, path1, path2):
logger.info("rename : %s -> %s" % (path1, path2))
try:
logger.info("rename: renaming a file - %s to %s" % (path1, path2))
self.session.lock()
self.session.reconnect_if_needed()
self.session.rename(path1, path2)
self.session.unlock()
logger.info("rename: renaming done")
        except Exception as e:
logger.error("rename: " + traceback.format_exc())
raise e
# invalidate stat cache
self.clear_stat_cache(path1)
self.clear_stat_cache(path2)
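As a self-contained illustration of the LIST parsing performed by `_parse_LIST` above, the sketch below re-implements the permission, size and date handling for one directory line. The function name `parse_list_line` and the sample line are mine, not part of the module.

```python
from datetime import datetime


def parse_list_line(parent, line):
    # Illustrative re-implementation of the LIST parsing logic above.
    fields = line.split()
    perm, size = fields[0], fields[4]
    month, day, d3, name = fields[5], fields[6], fields[7], fields[8]
    directory = perm.startswith("d") or perm.startswith("l")
    symlink = perm.startswith("l")
    # the third date field is either a year ("2015") or a time ("19:57")
    if d3.isdigit():
        year, hour, minute = int(d3), 0, 0
    else:
        year = datetime.now().year
        hour, minute = (int(x) for x in d3.split(":"))
    modify = datetime.strptime(
        "%d %s %s %d %d" % (year, month, day, hour, minute),
        "%Y %b %d %H %M")
    return {
        "path": parent.rstrip("/") + "/" + name,
        "name": name,
        "directory": directory,
        "symlink": symlink,
        "size": int(size),
        "modify": modify,
    }


st = parse_list_line(
    "/pub", "drwxr-xr-x 8 20002 2006 4096 Dec 08 2015 NANOGrav_9y")
```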
# --- project/migrations/0002_auto_20180801_1907.py (mcdale/django-material, MIT) ---
# Generated by Django 2.0.8 on 2018-08-01 19:07
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('project', '0001_initial'),
]
operations = [
migrations.AlterModelOptions(
name='knowledgearea',
options={'verbose_name': 'Knowledge Area', 'verbose_name_plural': 'Knowledge Areas'},
),
migrations.AddField(
model_name='knowledgeareaindexpage',
name='knowledge_area',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='project.KnowledgeArea'),
),
migrations.AddField(
model_name='knowledgeareapage',
name='abstract',
field=models.TextField(blank=True, help_text='Text to describe the article'),
),
]
# --- src/core/src/tortuga/objects/softwareProfile.py (sutasu/tortuga, Apache-2.0) ---
# Copyright 2008-2018 Univa Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# pylint: disable=no-member
from functools import cmp_to_key
from typing import Dict, Iterable, Optional
import tortuga.objects.admin
import tortuga.objects.component
import tortuga.objects.hardwareProfile
import tortuga.objects.kitSource
import tortuga.objects.nic
import tortuga.objects.node
import tortuga.objects.osInfo
import tortuga.objects.partition
from tortuga.objects.tortugaObject import TortugaObject, TortugaObjectList
from tortuga.utility.helper import str2bool
from .validators import RegexValidator
class SoftwareProfile(TortugaObject): \
# pylint: disable=too-many-public-methods
ROOT_TAG = 'softwareprofile'
validators = {
'name': RegexValidator(pattern='[a-zA-Z0-9-_]+')
}
def __init__(self, name=None):
TortugaObject.__init__(
self, {
'name': name,
'admins': TortugaObjectList(),
'partitions': TortugaObjectList(),
'components': TortugaObjectList(),
'nodes': TortugaObjectList(),
'kitsources': TortugaObjectList(),
}, ['name', 'id'], SoftwareProfile.ROOT_TAG)
def __repr__(self):
return self.getName()
def setId(self, id_):
""" Set software profile id."""
self['id'] = id_
def getId(self):
""" Return software profile id. """
return self.get('id')
def setName(self, name):
""" Set software profile name."""
self['name'] = name
def getName(self):
""" Return software profile name. """
return self.get('name')
def setDescription(self, description):
""" Set description."""
self['description'] = description
def getDescription(self):
""" Return description. """
return self.get('description')
def setKernel(self, kernel):
""" Set kernel."""
self['kernel'] = kernel
def getKernel(self):
""" Return kernel. """
return self.get('kernel')
def setKernelParams(self, kernelParams):
""" Set kernel params."""
self['kernelParams'] = kernelParams
def getKernelParams(self):
""" Return kernel params. """
return self.get('kernelParams')
def setInitrd(self, initrd):
""" Set initrd."""
self['initrd'] = initrd
def getInitrd(self):
""" Return initird. """
return self.get('initrd')
def setOsId(self, osId):
""" Set OS id."""
self['osId'] = osId
def getOsId(self):
""" Return OS id. """
return self.get('osId')
def setType(self, type_):
""" Set type."""
self['type'] = type_
def getType(self):
""" Return type. """
return self.get('type')
def setMinNodes(self, val):
self['minNodes'] = val
def getMinNodes(self):
return self.get('minNodes')
def setMaxNodes(self, value):
self['maxNodes'] = value
def getMaxNodes(self):
return self.get('maxNodes')
def setLockedState(self, val):
self['lockedState'] = val
def getLockedState(self):
return self.get('lockedState')
def setOsInfo(self, osInfo):
""" Set OS info. """
self['os'] = osInfo
def getOsInfo(self):
""" Get OS info. """
return self.get('os')
def setComponents(self, comp):
""" Set components. """
self['components'] = comp
def getComponents(self):
""" Get Components """
return self.get('components')
def setAdmins(self, admins):
""" set Admins """
self['admins'] = admins
def getAdmins(self):
""" Get Admins """
return self.get('admins')
def setPartitions(self, val):
self['partitions'] = val
def getPartitions(self):
""" We want to always return the partitions sorted by
device and partition number """
partitions = self.get('partitions')
if partitions:
partitions.sort(key=cmp_to_key(_partition_compare))
return partitions
def setNodes(self, val):
self['nodes'] = val
def getNodes(self):
return self.get('nodes')
def setUsableHardwareProfiles(self, val):
self['hardwareprofiles'] = val
def getUsableHardwareProfiles(self):
return self.get('hardwareprofiles')
def getKitSources(self):
return self.get('kitsources')
def setKitSources(self, kitsources):
self['kitsources'] = kitsources
def getTags(self) -> Dict[str, str]:
"""
Gets all the tags for this software profile.
:return Dict[str, str]: the tags
"""
return self.get('tags')
def setTags(self, tags: Dict[str, str]):
"""
Sets the tags for this hardware profile.
:param Dict[str, str] tags: the tags to set for this hardware profile
"""
self['tags'] = tags
def getMetadata(self):
return self.get('metadata')
def setMetadata(self, value):
self['metadata'] = value
def getDataRoot(self):
return self.get('dataRoot')
def setDataRoot(self, value):
self['dataRoot'] = value
def getDataRsync(self):
return self.get('dataRsync')
def setDataRsync(self, value):
self['dataRsync'] = value
@staticmethod
def getKeys():
return [
'id',
'name',
'osId',
'description',
'kernel',
'initrd',
'kernelParams',
'type',
'minNodes',
'maxNodes',
'lockedState',
'isIdle',
'metadata',
'tags',
'dataRoot',
'dataRsync',
]
@classmethod
def getFromDict(cls, _dict, ignore: Optional[Iterable[str]] = None):
""" Get software profile from _dict. """
softwareProfile = super(SoftwareProfile, cls).getFromDict(_dict)
softwareProfile.setAdmins(
tortuga.objects.admin.Admin.getListFromDict(_dict))
softwareProfile.setComponents(
tortuga.objects.component.Component.getListFromDict(_dict))
softwareProfile.setNodes(
tortuga.objects.node.Node.getListFromDict(_dict))
osDict = _dict.get(tortuga.objects.osInfo.OsInfo.ROOT_TAG)
if osDict:
softwareProfile.setOsInfo(
tortuga.objects.osInfo.OsInfo.getFromDict(osDict))
softwareProfile.setPartitions(
tortuga.objects.partition.Partition.getListFromDict(_dict))
softwareProfile.\
setUsableHardwareProfiles(
tortuga.objects.hardwareProfile.HardwareProfile.
getListFromDict(_dict))
# kitsources
softwareProfile.setKitSources(
tortuga.objects.kitSource.KitSource.getListFromDict(_dict))
return softwareProfile
@classmethod
def getFromDbDict(cls, _dict, ignore: Optional[Iterable[str]] = None):
softwareProfile = super(SoftwareProfile, cls).getFromDict(
_dict, ignore=ignore)
softwareProfile.setAdmins(
tortuga.objects.admin.Admin.getListFromDbDict(_dict))
softwareProfile.setComponents(
tortuga.objects.component.Component.getListFromDbDict(_dict))
if not ignore or 'nodes' not in ignore:
softwareProfile.setNodes(
tortuga.objects.node.Node.getListFromDbDict(_dict))
osDict = _dict.get(tortuga.objects.osInfo.OsInfo.ROOT_TAG)
if osDict:
softwareProfile.setOsInfo(
tortuga.objects.osInfo.OsInfo.getFromDbDict(
osDict.__dict__))
softwareProfile.setPartitions(
tortuga.objects.partition.Partition.getListFromDbDict(_dict))
softwareProfile.setUsableHardwareProfiles(
tortuga.objects.hardwareProfile.HardwareProfile.
getListFromDbDict(_dict))
tags = {tag.name: tag.value for tag in _dict.get('tags', [])}
softwareProfile.setTags(tags)
return softwareProfile
def _partition_compare(x, y):
deviceDiff = x.getDeviceTuple()[0] - y.getDeviceTuple()[0]
if deviceDiff == 0:
deviceDiff = x.getDeviceTuple()[1] - y.getDeviceTuple()[1]
return deviceDiff
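`getPartitions` above sorts with `cmp_to_key(_partition_compare)`, ordering partitions first by device index and then by partition index. A standalone sketch with a stand-in object — `FakePartition` is mine, not a tortuga class:

```python
from functools import cmp_to_key


class FakePartition(object):
    # stand-in exposing only the getDeviceTuple() accessor the comparator uses
    def __init__(self, device, number):
        self._tuple = (device, number)

    def getDeviceTuple(self):
        return self._tuple


def partition_compare(x, y):
    # same ordering rule as _partition_compare above
    diff = x.getDeviceTuple()[0] - y.getDeviceTuple()[0]
    if diff == 0:
        diff = x.getDeviceTuple()[1] - y.getDeviceTuple()[1]
    return diff


parts = [FakePartition(2, 1), FakePartition(1, 2), FakePartition(1, 1)]
parts.sort(key=cmp_to_key(partition_compare))
order = [p.getDeviceTuple() for p in parts]
```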
# --- tools/test_tmp.py (Z-XQ/mmdetection, Apache-2.0) ---
# -*- coding: utf-8 -*-
# @Time : 2020/9/28 下午9:49
# @Author : zxq
# @File : test_tmp.py
# @Software: PyCharm
import mmcv
import torch
from mmdet.datasets import build_dataset
from mmdet.models import build_detector
from mmdet.apis import train_detector, inference_detector, show_result_pyplot
from tools.train_tmp import CustomerTrain
customer_train = CustomerTrain()
cfg = customer_train.cfg
# Build dataset
datasets = [build_dataset(cfg.data.train)]
# Build the detector
model = build_detector(
cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
# Add an attribute for visualization convenience
model.CLASSES = datasets[0].CLASSES
img = mmcv.imread('../data/kitti_tiny/training/image_2/000068.jpeg')
model.cfg = cfg
result = inference_detector(model, img)
show_result_pyplot(model, img, result)
# --- scripts/space_heating_demand/ecofys_space_heating_demand.py (quintel/etmoses, MIT) ---
import numpy as np
from numpy import genfromtxt
import matplotlib.pyplot as plt
import os
time_steps = 8760
file_name = "../input_data/Ecofys_ECN_heating_profiles.csv"
data = zip(*genfromtxt(file_name, delimiter=','))
names = ["tussenwoning_laag", "tussenwoning_midden", "tussenwoning_hoog",
"hoekwoning_laag", "hoekwoning_midden", "hoekwoning_hoog",
"twee_onder_een_kapwoning_laag", "twee_onder_een_kapwoning_midden", "twee_onder_een_kapwoning_hoog",
"appartement_laag", "appartement_midden", "appartement_hoog",
"vrijstaande_woning_laag", "vrijstaande_woning_midden", "vrijstaande_woning_hoog"]
profiles = []
totals = []
counter = 0
for profile in data:
if len(profile) == time_steps:
profiles.append(profile)
totals.append(np.sum(profile))
        print("Writing: " + names[counter] + ".csv")
out_file = open("../output_data/"+names[counter]+".csv","w")
for item in profile:
for i in range(4):
out_file.write(str(item) + "\n")
out_file.close()
else:
        print("Error! profile #" + str(counter) + " has " + str(len(profile)) + " lines")
counter += 1
print(totals)
plt.close()
plt.figure(figsize=(19, 7))
mini = 0
maxi = 24 * 7
for name,profile in zip(names,profiles):
#if "appartement" in name:
#plt.plot(profile[mini:maxi]/np.sum(profile),linewidth=1.0, label=name)
plt.plot(profile[mini:maxi],linewidth=1.0, label=name)
plt.xlabel('time (hours)')
plt.ylabel('kW')
plt.legend()
plt.show()
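The `for i in range(4)` write loop above duplicates every hourly value four times, i.e. it upsamples the 8760-hour profile to 15-minute resolution (8760 × 4 = 35040 slots). The same step in plain Python, on made-up values:

```python
hourly = [0.5, 1.5, 3.0]  # stand-in hourly demand values
# each hourly value becomes four identical 15-minute slots
quarter_hourly = [value for value in hourly for _ in range(4)]
```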
3d5db5e05861ba4f7444a52667354a11e6f370f2 | 6,018 | py | Python | utility_functions.py | andrewli2403/California-Basketball-Data-Processor | 19582bef72d6a4f4281ddb61eceb4bee033b5ceb | [
"MIT"
] | null | null | null | utility_functions.py | andrewli2403/California-Basketball-Data-Processor | 19582bef72d6a4f4281ddb61eceb4bee033b5ceb | [
"MIT"
] | null | null | null | utility_functions.py | andrewli2403/California-Basketball-Data-Processor | 19582bef72d6a4f4281ddb61eceb4bee033b5ceb | [
"MIT"
] | null | null | null | import requests
from bs4 import BeautifulSoup as bs
import re
import pandas as pd
#collect & process data based on GAME ID
def processor(game_id):
url = "https://www.espn.com/mens-college-basketball/matchup?gameId=" + str(game_id)
r = requests.get(url)
webpage = bs(r.content, features="html.parser")
#create dictionary for CAL & opponent with respective scores
team_name = [name.string for name in webpage.find_all("td", attrs={"class", "team-name"})]
score = [float(final_score.string) for final_score in webpage.find_all("td", attrs={"class", "final-score"})]
team_score = dict(zip(team_name, score))
#determine opponent
for team in team_name:
if team != "CAL":
opponent = team
#locate table for main data regarding game
table = webpage.select("table.mod-data")[0]
#create column_names and locate the rows of the table
column_names = (list(team_score.keys()))
#columns = table.find_all("td", string=re.compile("[A-Za-z%]+"))
#column_names = [c.get_text().strip() for c in columns]
table_rows = table.find("tbody").find_all("tr")
#accumulate all the table_rows into one list
row_names, l = [], []
for tr in table_rows:
td = tr.find_all("td")
#row: [STAT_NAME, NUM1, NUM2]
row = [tr.get_text().strip() for tr in td]
#.append([NUM1, NUM2])
l.append(row[1:])
#.append([STAT_NAME])
row_names.append(row[0])
#create data table with column_names and rows within table.mod-data
df = pd.DataFrame(l, columns=column_names)
    #searches a string for "-" and separates the elements on either side into a list
def try_convert(val):
hyphen_index = val.index("-")
return [val[:hyphen_index], val[hyphen_index + 1:]]
#append new rows for FGA, FGM, 3PTA, 3PTM, FTA, FTM for later calculations in MAKES-ATTEMPT format
for index in df.index:
if re.search("-", df.loc[index, "CAL"]) and re.search("-", df.loc[index, opponent]):
            #unpack try_convert's [makes, attempts] pair for each team
            makes_cal, att_cal = try_convert(df.loc[index, "CAL"])
            makes_opp, att_opp = try_convert(df.loc[index, opponent])
#assigns current row with makes
df.loc[index, "CAL"], df.loc[index, opponent] = makes_cal, makes_opp
#append new row with attempts, increasing index to reorder later
df2 = pd.DataFrame({'CAL': att_cal, opponent: att_opp}, index=[index + .1])
df = df.append(df2, ignore_index = False)
#reorder rows with increasing index
df = df.sort_index().reset_index(drop=True)
#adds FGA, FGM, 3PTA, 3PTM, FTA, FTM row names
row = []
def add_makes_att(lst, new_lst):
for stat in lst:
if re.search("FG|3PT|FT", stat):
new_lst.append(stat + "M")
new_lst.append(stat +"A")
else:
new_lst.append(stat)
return new_lst
row_names = add_makes_att(row_names, row)
#set row indexes as STAT_NAME within row
df.index = row_names
    #turns all stats into floats, transposes the matrix
df["CAL"] = pd.to_numeric(df["CAL"], downcast="float")
df[opponent] = pd.to_numeric(df[opponent], downcast="float")
df = df.T
#calculates +/-
def net(cal, opp):
if cal - opp > 0:
return "+" + str(cal - opp)
elif opp - cal > 0:
return "-" + str(opp - cal)
else:
return 0
    #create a list representing calculations for the game
poss = df.loc['CAL', 'FGA']-df.loc['CAL', 'Offensive Rebounds']+df.loc['CAL', 'Total Turnovers']+.475*df.loc['CAL', 'FTA']
calc = [opponent, poss, team_score[opponent]/poss, (df.loc[opponent, 'FGM']+.5*df.loc[opponent, '3PTM'])*100/df.loc[opponent, 'FGA'], df.loc['CAL', 'Defensive Rebounds']*100/(df.loc['CAL', 'Defensive Rebounds']+df.loc[opponent, 'Offensive Rebounds']), df.loc[opponent, 'Total Turnovers']*100/poss, df.loc[opponent, 'Field Goal %'], df.loc[opponent, 'Three Point %'],(df.loc[opponent, 'FTA']*100/df.loc[opponent, 'FGA'], df.loc[opponent, 'FTM']*100/df.loc[opponent, 'FTA']), team_score['CAL']/poss, (df.loc['CAL', 'FGM']+.5*df.loc['CAL', '3PTM'])*100/df.loc['CAL', 'FGA'], df.loc['CAL', 'Offensive Rebounds']*100/(df.loc[opponent, 'Defensive Rebounds']+df.loc['CAL', 'Offensive Rebounds']), df.loc['CAL', 'Total Turnovers']*100/poss, df.loc['CAL', 'Field Goal %'], df.loc['CAL', 'Three Point %'],(df.loc['CAL', 'FTA']*100/df.loc['CAL', 'FGA'], df.loc['CAL', 'FTM']*100/df.loc['CAL', 'FTA']), net(df.loc['CAL', 'Offensive Rebounds']+df.loc["CAL", 'Defensive Rebounds'], df.loc[opponent, 'Offensive Rebounds']+df.loc[opponent, 'Defensive Rebounds']), net(df.loc['CAL', '3PTM'], df.loc[opponent, '3PTM'])]
return calc
#find date of game
def get_date(game_id):
url = "https://www.espn.com/mens-college-basketball/game/_/gameId/" + str(game_id)
r = requests.get(url)
webpage = bs(r.content, features="html.parser")
    date = re.findall(r'(?:January|February|March|April|May|June|July|August|September|October|November|December)\s[1-9][0-9]?,\s[0-9]{4}', webpage.find("title").get_text())[0]
return date
#rounds data based on stat parameters
def clean(series):
if series.name == "Game" or series.name == "Reb +/-" or series.name == "3pt +/-":
return series
elif series.name == "OER" or series.name == "DER":
return series.astype('float64').round(2)
elif series.name == "Def FT RATE, %" or series.name == "Off FT Rate, %":
#encounters tuple data structure of two valued stat
return series.apply(lambda stats: tuple(map(round, stats)))
else:
return series.astype('int32')
#converts datetime object into MM/DD/YYYY format
def date_convert(datetime_obj):
return f"{datetime_obj.month}/{datetime_obj.day}/{datetime_obj.year}" | 47.015625 | 1,105 | 0.633599 | 885 | 6,018 | 4.223729 | 0.258757 | 0.058855 | 0.044944 | 0.029428 | 0.250401 | 0.222846 | 0.174157 | 0.146335 | 0.132959 | 0.120385 | 0 | 0.016022 | 0.201396 | 6,018 | 128 | 1,106 | 47.015625 | 0.761756 | 0.201894 | 0 | 0.092105 | 0 | 0.013158 | 0.189659 | 0.036006 | 0 | 0 | 0 | 0 | 0 | 1 | 0.092105 | false | 0 | 0.052632 | 0.013158 | 0.302632 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
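The month/day pattern in `get_date` above deserves a note: the second day digit must allow zero, or titles with days such as 10, 20, or 30 will never match. A small standalone sketch (a hypothetical helper, not part of the original module) of a corrected pattern:

```python
import re

# Day group written as [1-9][0-9]? so that 5, 10, 20, and 31 all match.
DATE_RE = (r'(?:January|February|March|April|May|June|July|August|'
           r'September|October|November|December)\s[1-9][0-9]?,\s[0-9]{4}')

def extract_date(title):
    """Return the first date found in a page title, or None if absent."""
    matches = re.findall(DATE_RE, title)
    return matches[0] if matches else None

print(extract_date("Cal vs. Stanford - March 10, 2020 - ESPN"))  # March 10, 2020
```

Returning `None` instead of indexing `[0]` directly also avoids an `IndexError` when a page title carries no date.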
3d5f940e0e5788ca23c26f2a301fe14e51745333 | 1,161 | py | Python | 003.branch/if.py | cjp1016/python-samples | ca5a7284cf4cb9fe42fa1487d4944815a00487ec | [
"Apache-2.0"
] | null | null | null | 003.branch/if.py | cjp1016/python-samples | ca5a7284cf4cb9fe42fa1487d4944815a00487ec | [
"Apache-2.0"
] | null | null | null | 003.branch/if.py | cjp1016/python-samples | ca5a7284cf4cb9fe42fa1487d4944815a00487ec | [
"Apache-2.0"
] | null | null | null | """
用户身份验证
Version: 0.1
Author: cjp
"""
username = input('请输入用户名: ')
password = input('请输入口令: ')
# 用户名是admin且密码是123456则身份验证成功否则身份验证失败
if username == 'admin' and password == '123456':
print('身份验证成功!')
else:
print('身份验证失败!')
"""
Python中没有用花括号来构造代码块而是使用了缩进的方式来设置代码的层次结构,
如果if条件成立的情况下需要执行多条语句,只要保持多条语句具有相同的缩进就可以了,
换句话说连续的代码如果又保持了相同的缩进那么它们属于同一个代码块,相当于是一个执行的整体。
"""
# Of course, for more branches you can use the if...elif...else structure, e.g. the piecewise function evaluation below.
"""
Piecewise function evaluation
        3x - 5  (x > 1)
f(x) =  x + 2   (-1 <= x <= 1)
        5x + 3  (x < -1)
Version: 0.1
Author: cjp
"""
x = float(input('x = '))
if x > 1:
    y = 3 * x - 5
elif x >= -1:
    y = x + 2
else:
    y = 5 * x + 3
print('f(%.2f) = %.2f' % (x, y))
"""
Likewise, elif and else blocks may themselves contain further branches, known as a nested branch structure
"""
"""
Piecewise function evaluation
        3x - 5  (x > 1)
f(x) =  x + 2   (-1 <= x <= 1)
        5x + 3  (x < -1)
Version: 0.1
Author: cjp
"""
x = float(input('x = '))
if x > 1:
    y = 3 * x - 5
else:
    if x >= -1:
        y = x + 2
    else:
        y = 5 * x + 3
print('f(%.2f) = %.2f' % (x, y))
"""
Try both versions and judge for yourself which one reads better.
The Zen of Python, quoted earlier, says "Flat is better than nested."
Code is kept "flat" because deeply nested structures seriously hurt readability,
so prefer a flat structure over nesting whenever possible.
""" | 15.077922 | 53 | 0.564169 | 150 | 1,161 | 4.426667 | 0.366667 | 0.03012 | 0.018072 | 0.067771 | 0.286145 | 0.259036 | 0.259036 | 0.259036 | 0.259036 | 0.259036 | 0 | 0.061927 | 0.248923 | 1,161 | 77 | 54 | 15.077922 | 0.68922 | 0.099053 | 0 | 0.695652 | 0 | 0 | 0.162047 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.086957 | 0 | 0 | 0 | 0.173913 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
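To make the lesson's closing point concrete, here is a small sketch (not part of the original lesson; the function names are made up) of the same piecewise function written both flat and nested; the two always agree:

```python
def f_flat(x):
    """Piecewise function using a flat if/elif/else chain."""
    if x > 1:
        return 3 * x - 5
    elif x >= -1:
        return x + 2
    else:
        return 5 * x + 3

def f_nested(x):
    """Same function with a branch nested inside the else block."""
    if x > 1:
        return 3 * x - 5
    else:
        if x >= -1:
            return x + 2
        else:
            return 5 * x + 3

# Both versions compute the same values; the flat one is easier to read.
for x in (-2.0, -1.0, 0.0, 1.0, 2.0):
    assert f_flat(x) == f_nested(x)
print(f_flat(2.0))  # 1.0
```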
3d602a949005e0184acfd82e6822740a19d36fb9 | 7,210 | bzl | Python | bazel/antlr4_cc.bzl | kyle-winkelman/fhir | 01038aa235189fd043fd2981ebf40f4dc1e826e0 | [
"Apache-2.0"
] | null | null | null | bazel/antlr4_cc.bzl | kyle-winkelman/fhir | 01038aa235189fd043fd2981ebf40f4dc1e826e0 | [
"Apache-2.0"
] | 2 | 2020-07-24T14:20:45.000Z | 2020-07-24T19:43:52.000Z | bazel/antlr4_cc.bzl | kyle-winkelman/fhir | 01038aa235189fd043fd2981ebf40f4dc1e826e0 | [
"Apache-2.0"
] | 1 | 2020-07-10T15:03:45.000Z | 2020-07-10T15:03:45.000Z | """Build rules to create C++ code from an Antlr4 grammar."""
def antlr4_cc_lexer(name, src, namespaces = None, imports = None, deps = None, lib_import = None):
"""Generates the C++ source corresponding to an antlr4 lexer definition.
Args:
name: The name of the package to use for the cc_library.
src: The antlr4 g4 file containing the lexer rules.
namespaces: The namespace used by the generated files. Uses an array to
support nested namespaces. Defaults to [name].
imports: A list of antlr4 source imports to use when building the lexer.
deps: Dependencies for the generated code.
lib_import: Optional target for importing grammar and token files.
"""
namespaces = namespaces or [name]
imports = imports or []
deps = deps or []
if not src.endswith(".g4"):
fail("Grammar must end with .g4", "src")
if (any([not imp.endswith(".g4") for imp in imports])):
fail("Imported files must be Antlr4 grammar ending with .g4", "imports")
file_prefix = src[:-3]
base_file_prefix = _strip_end(file_prefix, "Lexer")
out_files = [
"%sLexer.h" % base_file_prefix,
"%sLexer.cpp" % base_file_prefix,
]
native.java_binary(
name = "antlr_tool",
jvm_flags = ["-Xmx256m"],
main_class = "org.antlr.v4.Tool",
runtime_deps = ["@maven//:org_antlr_antlr4_4_7_1"],
)
command = ";\n".join([
# Use the first namespace, we'll add the others afterwards.
_make_tool_invocation_command(namespaces[0], lib_import),
_make_namespace_adjustment_command(namespaces, out_files),
])
native.genrule(
name = name + "_source",
srcs = [src] + imports,
outs = out_files,
cmd = command,
heuristic_label_expansion = 0,
tools = ["antlr_tool"],
)
native.cc_library(
name = name,
srcs = [f for f in out_files if f.endswith(".cpp")],
hdrs = [f for f in out_files if f.endswith(".h")],
deps = ["@antlr_cc_runtime//:antlr4_runtime"] + deps,
copts = [
"-fexceptions",
],
features = ["-use_header_modules"], # Incompatible with -fexceptions.
)
def antlr4_cc_parser(
name,
src,
namespaces = None,
token_vocab = None,
imports = None,
listener = True,
visitor = False,
deps = None,
lib_import = None):
"""Generates the C++ source corresponding to an antlr4 parser definition.
Args:
name: The name of the package to use for the cc_library.
src: The antlr4 g4 file containing the parser rules.
namespaces: The namespace used by the generated files. Uses an array to
support nested namespaces. Defaults to [name].
token_vocab: The antlr g4 file containing the lexer tokens.
imports: A list of antlr4 source imports to use when building the parser.
listener: Whether or not to include listener generated files.
visitor: Whether or not to include visitor generated files.
deps: Dependencies for the generated code.
lib_import: Optional target for importing grammar and token files.
"""
suffixes = ()
if listener:
suffixes += (
"%sBaseListener.cpp",
"%sListener.cpp",
"%sBaseListener.h",
"%sListener.h",
)
if visitor:
suffixes += (
"%sBaseVisitor.cpp",
"%sVisitor.cpp",
"%sBaseVisitor.h",
"%sVisitor.h",
)
namespaces = namespaces or [name]
imports = imports or []
deps = deps or []
if not src.endswith(".g4"):
fail("Grammar must end with .g4", "src")
if token_vocab != None and not token_vocab.endswith(".g4"):
fail("Token Vocabulary must end with .g4", "token_vocab")
if (any([not imp.endswith(".g4") for imp in imports])):
fail("Imported files must be Antlr4 grammar ending with .g4", "imports")
file_prefix = src[:-3]
base_file_prefix = _strip_end(file_prefix, "Parser")
out_files = [
"%sParser.h" % base_file_prefix,
"%sParser.cpp" % base_file_prefix,
] + _make_outs(file_prefix, suffixes)
if token_vocab:
imports.append(token_vocab)
command = ";\n".join([
        # Use the first namespace; we'll add the others afterwards.
_make_tool_invocation_command(namespaces[0], lib_import, listener, visitor),
_make_namespace_adjustment_command(namespaces, out_files),
])
native.genrule(
name = name + "_source",
srcs = [src] + imports,
outs = out_files,
cmd = command,
heuristic_label_expansion = 0,
tools = [
":antlr_tool",
],
)
native.cc_library(
name = name,
srcs = [f for f in out_files if f.endswith(".cpp")],
hdrs = [f for f in out_files if f.endswith(".h")],
deps = ["@antlr_cc_runtime//:antlr4_runtime"] + deps,
copts = [
"-fexceptions",
# FIXME: antlr generates broken C++ code that attempts to construct
# a std::string from nullptr. It's not clear whether the relevant
# constructs are reachable.
"-Wno-nonnull",
],
features = ["-use_header_modules"], # Incompatible with -fexceptions.
)
def _make_outs(file_prefix, suffixes):
return [file_suffix % file_prefix for file_suffix in suffixes]
def _strip_end(text, suffix):
if not text.endswith(suffix):
return text
return text[:len(text) - len(suffix)]
def _to_c_macro_name(filename):
# Convert the filenames to a format suitable for C preprocessor definitions.
char_list = [filename[i].upper() for i in range(len(filename))]
return "ANTLR4_GEN_" + "".join(
[a if (("A" <= a) and (a <= "Z")) else "_" for a in char_list],
)
def _make_tool_invocation_command(package, lib_import, listener = False, visitor = False):
return "$(location :antlr_tool) " + \
"$(SRCS)" + \
(" -visitor" if visitor else " -no-visitor") + \
(" -listener" if listener else " -no-listener") + \
(" -lib $$(dirname $(location " + lib_import + "))" if lib_import else "") + \
" -Dlanguage=Cpp" + \
" -package " + package + \
" -o $(@D)" + \
" -Xexact-output-dir"
def _make_namespace_adjustment_command(namespaces, out_files):
if len(namespaces) == 1:
return "true"
commands = []
extra_header_namespaces = "\\\n".join(["namespace %s {" % namespace for namespace in namespaces[1:]])
for filepath in out_files:
if filepath.endswith(".h"):
commands.append("sed -i '/namespace %s {/ a%s' $(@D)/%s" % (namespaces[0], extra_header_namespaces, filepath))
for namespace in namespaces[1:]:
commands.append("sed -i '/} \/\/ namespace %s/i} \/\/ namespace %s' $(@D)/%s" % (namespaces[0], namespace, filepath))
else:
commands.append("sed -i 's/using namespace %s;/using namespace %s;/' $(@D)/%s" % (namespaces[0], "::".join(namespaces), filepath))
return ";\n".join(commands)
| 38.55615 | 142 | 0.600971 | 879 | 7,210 | 4.778157 | 0.21843 | 0.030952 | 0.02 | 0.014286 | 0.586429 | 0.541905 | 0.517619 | 0.50619 | 0.459048 | 0.459048 | 0 | 0.009014 | 0.276838 | 7,210 | 186 | 143 | 38.763441 | 0.796509 | 0.231761 | 0 | 0.398601 | 1 | 0 | 0.18895 | 0.018232 | 0 | 0 | 0 | 0.005376 | 0 | 1 | 0.048951 | false | 0 | 0.111888 | 0.013986 | 0.20979 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
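A hypothetical BUILD usage of the macros defined above (the load path, target names, and grammar file names are illustrative, not taken from the original repository):

```starlark
load("//bazel:antlr4_cc.bzl", "antlr4_cc_lexer", "antlr4_cc_parser")

antlr4_cc_lexer(
    name = "query_lexer",
    src = "QueryLexer.g4",
    namespaces = ["myproject", "query"],
)

antlr4_cc_parser(
    name = "query_parser",
    src = "QueryParser.g4",
    token_vocab = "QueryLexer.g4",
    namespaces = ["myproject", "query"],
    deps = [":query_lexer"],
)
```

Passing the lexer grammar as `token_vocab` makes the macro append it to the parser's `srcs`, so the antlr tool can resolve the token vocabulary during generation.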
e9e2bdbc8442df5b9a587f4296d83d87e0d66ce8 | 6,982 | py | Python | bot/messages.py | pyaf/tpobot | d96a3650de46f6d43ab346d61b922b170cd5fdb2 | [
"MIT"
] | 4 | 2017-07-19T19:18:15.000Z | 2017-11-24T16:15:51.000Z | bot/messages.py | rishabhiitbhu/tpobot | d96a3650de46f6d43ab346d61b922b170cd5fdb2 | [
"MIT"
] | 5 | 2020-02-11T23:53:50.000Z | 2021-12-13T19:45:22.000Z | bot/messages.py | pyaf/tpobot | d96a3650de46f6d43ab346d61b922b170cd5fdb2 | [
"MIT"
] | 1 | 2017-08-27T20:40:50.000Z | 2017-08-27T20:40:50.000Z | # -*- coding: utf-8 -*-
message_dict = {
'welcome': "Hi! TPO Baba is here to give you updates about TPO portal, set willingness reminders, ppt "\
"reminders, exam date reminders and lot more...:D \n\n"\
        "To personalise your experience, I gotta register you. It's a simple two-step process.\n",
'greetings': "Hello pal :)",
'haalchaal': "hamaar to mauj ahaai guru 🙏, tohaar batawa kaa haal chaal bate?"\
" ;P",
'no_idea': "Oops, didn't get you, Baba is a simple AI bot not Jarvis, don't be so cryptic. 😅\n"\
"Baba has gotta master, Baba will learn this soon. B) \n\n"\
"Ask for help to know what options you have.",
'user_invalid': "You account is Invalid.\n"\
"Contact https://m.me/rishabh.ags/ for help",
'get_email': "Baba needs to know your official IIT email id, drop it as a text message.",
'email_set': "Baba has set your email to {0}",
'not_iit_email': "Oops!, seems like you didn't enter your official email id\n"\
        "As I am running on a Heroku server, which costs $7 per month, don't misuse this. "\
        "I cannot afford offering services to others.\nIf you ain't a student of IIT (BHU), please"\
        " don't register... Bhawnao ko samjho yaaar 😅",
'get_course': "Baba needs to know your course, select your course among btech, idd or imd, "\
"then drop a text message.",
'course_set': "Baba has set your course to {0}",
'reg_error': "Oops!, you got me wrong, retry entering it correctly..\n\n"\
"And you gotta register first, we'll chat afterwards. :)\n"\
"if you're facing issues contact https://m.me/rishabh.ags",
'email_already_set': "Pal, you already got your email set to {0}",
'invalid_email': "Baba wants a valid email id.\nRetry please.",
'course_already_set': "Pal, you already got your email set to {0}",
'reg_success': "And congratulations! 🎉 you have successfully registered!, your email id "\
"will be verified soon. :) \n\nIf found misleading or wrong, I'll find you and I'll "\
"deregister you ;P \n\n"\
"Ask for features to know what I've got for you in my Jhola B) \n\n"\
"Ask for help to know what options you have. :)",
'features': "Baba is a messenger bot created by a high functioning sociopathic nerd of IIT (BHU) :D\n"\
"\nI have got a simple AI brain powered by Wit and has not been trained too much, "\
"so please don't use too off the track keywords 😅 \n\n",
'features1': "What I currently do:\n"\
"1. Text you whenever a new company opens for your course and department, "\
"you'll get all details of such companies.\n"\
"2. Text you whenever companies your course and department get any changes in their "\
"parameters like willingness deadlines, exam dates, ppt dates, etc.. \n\n",
'features2':"What I plan to do pretty soon:\n"\
"1. Remind you about deadlines of willingness application, ppt dates "\
"and exam dates etc.. B) \n" \
"2. Give replies to your queries about companies...\n\n"\
"P.S. To know why that nerd made me? you are free to ask me :P\n"\
"Ask for help to know what options you have.",
'help': "Baba has got you some help:\n\n"\
"1. You can ask me to unsubscribe/deactivate you from receiving updates .\n"\
"2. You can ask me subscribe/activate your account. from receiving updates.\n",
'deactivate': "Alright pal, It's been a good chat with you, deactivating your account.\n"\
"You can ask me to reactivate it if necessary.",
'activate': "Welcome back!, your account is reactivated",
'wit_error': "Ohho, I'm sick, my brain is not working, Please call my master! 😰 \n"\
"https:/m.me/rishabhags/",
'new_company': "Hola!\nNew Company Open for you! 🎉🎊🎁\n\n"\
"Company Name: {company_name}\n"\
"Open for: {course}\n"\
"Departments: {department}\n"\
"BTech CTC: {btech_ctc}\n"\
"IDD/IMD CTC: {idd_imd_ctc}\n"\
"X cutoff: {x}\n"\
"XII cutoff: {xii}\n"\
"CGPA cutoff: {cgpa}\n"\
"Status: {status}\n\n"\
"Will keep you updated with this company :D.\n"\
"Cya :)",
'updated_company': "Baba has updates to deliver!\n\n"\
"{0} got updated on the portal\n\n"\
"Updated fields are: \n\n"\
"{1}\n"\
"{2}"\
"\n\nThis is it for now.\nCya :)",
#{1} will store update message
'abuse': "You are so abusive, next time, I'll deactivate your account 😡😠😡",
'lol': "Lol, I was kidding,,. 😜😝😂",
'master': "My master made me because TPO developers ko to `सीनेमा` ne barbaad karke rakkha hai.. "\
"and he knows very well, that jab tak iss des me `सीनेमा` hai, tab tak log * "\
"bante rahege ;P \n\n"\
"P.S. This was a joke, it has nothing to do with anything, we respect TPO portal "\
"developers they have made a great portal. \n"\
"Ask for me for help, if you wanna know what you have got to do.",
    'idd_imd_4th_year': "Oops! You are from 4th year IDD/IMD, I don't wanna disturb you with updates. \n"\
            "I'll have to set your account Invalid.\n\n"\
            "For further queries contact https://m.me/rishabh.ags/"
}
field_msg_dict = {
'company_profile': 'Company Profile',
'x': 'X',
'xii': 'XII',
'cgpa': 'CGPA',
'course': 'Course',
'purpose': 'Purpose',
'department': 'Department',
'a_backlog': 'Active backlogs allowed',
't_backlog': 'Total backlogs allowed',
'ppt_date': 'PPT date',
'exam_date': 'Exam date',
'status': 'Status',
'branch_issue_dead': 'Branch issue deadline',
'willingness_dead': 'Willingness deadline',
'btech_ctc': 'B.Tech CTC',
'idd_imd_ctc':'IDD/IMD CTC',
# 'jd': 'JD',
}
# "TPO developers ko to `सीनेमा` ne barbaad karke rakkha hai.. ;P\n"
# "So, hum denge aapko sare updates, about new companies listed in the portal,willingness opening "\
# "and closing reminders ppt reminders, exam date reminders aur bhi bahot kuchh..\n"\
# 'invalid_course': "Baba wants valid course name (btech or idd or imd).\n retry please.",
# "Active backlogs allowed: {8}\n"\
# "Total backlogs allowed: {9}\n"\
| 48.151724 | 112 | 0.560011 | 986 | 6,982 | 3.94929 | 0.350913 | 0.008218 | 0.008988 | 0.006163 | 0.147406 | 0.122239 | 0.072676 | 0.072676 | 0.072676 | 0.072676 | 0 | 0.004633 | 0.319822 | 6,982 | 144 | 113 | 48.486111 | 0.810276 | 0.06703 | 0 | 0.019608 | 0 | 0.107843 | 0.695418 | 0.006919 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e9e8878237d9fdf426e86b2606cac1e238054e1a | 8,888 | py | Python | arapheno/phenotypedb/migrations/0001_initial.py | svengato/AraPheno | d6918e2e69c497b7096d9291d904c69310e84d06 | [
"MIT"
] | 5 | 2018-03-24T08:54:50.000Z | 2021-01-19T03:19:42.000Z | arapheno/phenotypedb/migrations/0001_initial.py | svengato/AraPheno | d6918e2e69c497b7096d9291d904c69310e84d06 | [
"MIT"
] | 38 | 2016-08-14T12:09:15.000Z | 2020-10-30T06:02:24.000Z | arapheno/phenotypedb/migrations/0001_initial.py | svengato/AraPheno | d6918e2e69c497b7096d9291d904c69310e84d06 | [
"MIT"
] | 8 | 2016-08-15T06:07:32.000Z | 2020-11-06T06:43:56.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.9.7 on 2016-06-27 14:12
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Accession',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(blank=True, db_index=True, max_length=255, null=True)),
('country', models.CharField(blank=True, max_length=255, null=True)),
('sitename', models.TextField(blank=True, null=True)),
('collector', models.TextField(blank=True, null=True)),
('collection_date', models.DateTimeField(blank=True, null=True)),
('longitude', models.FloatField(blank=True, db_index=True, null=True)),
('latitude', models.FloatField(blank=True, db_index=True, null=True)),
('cs_number', models.CharField(blank=True, max_length=255, null=True)),
],
),
migrations.CreateModel(
name='Author',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('firstname', models.CharField(blank=True, max_length=100, null=True)),
('lastname', models.CharField(blank=True, db_index=True, max_length=200, null=True)),
],
),
migrations.CreateModel(
name='ObservationUnit',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('accession', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='phenotypedb.Accession')),
],
),
migrations.CreateModel(
name='OntologySource',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('acronym', models.CharField(max_length=50)),
('name', models.CharField(max_length=255)),
('url', models.URLField()),
],
),
migrations.CreateModel(
name='OntologyTerm',
fields=[
('id', models.CharField(max_length=50, primary_key=True, serialize=False)),
('name', models.CharField(max_length=255)),
('definition', models.TextField(blank=True, null=True)),
('comment', models.TextField(blank=True, null=True)),
('source', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='phenotypedb.OntologySource')),
],
),
migrations.CreateModel(
name='Phenotype',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('doi', models.CharField(blank=True, db_index=True, max_length=255, null=True)),
('name', models.CharField(db_index=True, max_length=255)),
('scoring', models.TextField(blank=True, null=True)),
('source', models.TextField(blank=True, null=True)),
('type', models.CharField(blank=True, max_length=255, null=True)),
('growth_conditions', models.TextField(blank=True, null=True)),
('shapiro_test_statistic', models.FloatField(blank=True, null=True)),
('shapiro_p_value', models.FloatField(blank=True, null=True)),
('number_replicates', models.IntegerField(default=0)),
('integration_date', models.DateTimeField(auto_now_add=True)),
],
),
migrations.CreateModel(
name='PhenotypeMetaDynamic',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('phenotype_meta_field', models.CharField(db_index=True, max_length=255)),
('phenotype_meta_value', models.TextField()),
('phenotype_public', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='phenotypedb.Phenotype')),
],
),
migrations.CreateModel(
name='PhenotypeValue',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('value', models.FloatField()),
('obs_unit', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='phenotypedb.ObservationUnit')),
('phenotype', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='phenotypedb.Phenotype')),
],
),
migrations.CreateModel(
name='Publication',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('author_order', models.TextField()),
('publication_tag', models.CharField(max_length=255)),
('pub_year', models.IntegerField(blank=True, null=True)),
('title', models.CharField(db_index=True, max_length=255)),
('journal', models.CharField(max_length=255)),
('volume', models.CharField(blank=True, max_length=255, null=True)),
('pages', models.CharField(blank=True, max_length=255, null=True)),
('doi', models.CharField(blank=True, db_index=True, max_length=255, null=True)),
('pubmed_id', models.CharField(blank=True, db_index=True, max_length=255, null=True)),
('authors', models.ManyToManyField(to='phenotypedb.Author')),
],
),
migrations.CreateModel(
name='Species',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('ncbi_id', models.IntegerField(blank=True, null=True)),
('genus', models.CharField(max_length=255)),
('species', models.CharField(max_length=255)),
('description', models.TextField(blank=True, null=True)),
],
),
migrations.CreateModel(
name='Study',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=255)),
('description', models.TextField(blank=True, null=True)),
('publications', models.ManyToManyField(blank=True, to='phenotypedb.Publication')),
('species', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='phenotypedb.Species')),
],
),
migrations.AddField(
model_name='phenotype',
name='dynamic_metainformations',
field=models.ManyToManyField(to='phenotypedb.PhenotypeMetaDynamic'),
),
migrations.AddField(
model_name='phenotype',
name='eo_term',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='eo_term', to='phenotypedb.OntologyTerm'),
),
migrations.AddField(
model_name='phenotype',
name='species',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='phenotypedb.Species'),
),
migrations.AddField(
model_name='phenotype',
name='study',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='phenotypedb.Study'),
),
migrations.AddField(
model_name='phenotype',
name='to_term',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='to_term', to='phenotypedb.OntologyTerm'),
),
migrations.AddField(
model_name='phenotype',
name='uo_term',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='uo_term', to='phenotypedb.OntologyTerm'),
),
migrations.AddField(
model_name='observationunit',
name='study',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='phenotypedb.Study'),
),
migrations.AddField(
model_name='accession',
name='species',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='phenotypedb.Species'),
),
]
| 50.5 | 159 | 0.588884 | 884 | 8,888 | 5.782805 | 0.150452 | 0.056338 | 0.046948 | 0.059859 | 0.75 | 0.699531 | 0.61385 | 0.61385 | 0.564554 | 0.495696 | 0 | 0.012917 | 0.268339 | 8,888 | 175 | 160 | 50.788571 | 0.773182 | 0.007538 | 0 | 0.54491 | 1 | 0 | 0.130415 | 0.032774 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.017964 | 0 | 0.041916 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e9e9975a7e35ce3210ca6631964e51dc707d8e9b | 2,667 | py | Python | kwiklib/utils/settings.py | fiath/test | b50898dafa90e93da48f573e0b3feb1bb6acd8de | [
"MIT",
"BSD-3-Clause"
] | 7 | 2015-01-20T13:55:51.000Z | 2018-02-06T09:31:21.000Z | kwiklib/utils/settings.py | fiath/test | b50898dafa90e93da48f573e0b3feb1bb6acd8de | [
"MIT",
"BSD-3-Clause"
] | 6 | 2015-01-08T18:13:53.000Z | 2016-06-22T09:53:53.000Z | kwiklib/utils/settings.py | fiath/test | b50898dafa90e93da48f573e0b3feb1bb6acd8de | [
"MIT",
"BSD-3-Clause"
] | 8 | 2015-01-22T22:57:19.000Z | 2020-03-19T11:43:56.000Z | """Internal persistent settings store with cPickle."""
# -----------------------------------------------------------------------------
# Imports
# -----------------------------------------------------------------------------
import cPickle
import os
from kwiklib.utils.globalpaths import ensure_folder_exists
# -----------------------------------------------------------------------------
# Utility functions
# -----------------------------------------------------------------------------
def load(filepath):
"""Load the settings from the file, and creates it if it does not exist."""
if not os.path.exists(filepath):
save(filepath)
with open(filepath, 'rb') as f:
settings = cPickle.load(f)
return settings
def save(filepath, settings={}):
"""Save the settings in the file."""
with open(filepath, 'wb') as f:
cPickle.dump(settings, f)
return settings
# -----------------------------------------------------------------------------
# Settings
# -----------------------------------------------------------------------------
class Settings(object):
"""Manage internal settings.
They are stored in a binary file in the user home folder.
    Settings are only loaded once from disk, as soon as a user preference
    field is explicitly requested.
"""
def __init__(self, appname=None, folder=None, filepath=None,
autosave=True):
"""The settings file is not loaded here, but only once when a field is
first accessed."""
self.appname = appname
self.folder = folder
self.filepath = filepath
        self.settings = None
self.autosave = autosave
# I/O methods
# -----------
def _load_once(self):
"""Load or create the settings file, unless it has already been
loaded."""
if self.settings is None:
# Create the folder if it does not exist.
ensure_folder_exists(self.folder)
# Load or create the settings file.
self.settings = load(self.filepath)
def save(self):
save(self.filepath, self.settings)
# Getter and setter methods
# -------------------------
def set(self, key, value):
self._load_once()
self.settings[key] = value
if self.autosave:
self.save()
def get(self, key, default=None):
self._load_once()
return self.settings.get(key, default)
def __setitem__(self, key, value):
self.set(key, value)
def __getitem__(self, key):
return self.get(key)
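The Settings class above loads its pickle file lazily on first access and writes it back after every set. A minimal, self-contained sketch of that same lazy-load/autosave pattern follows; it uses the Python 3 `pickle` name rather than `cPickle`, and the `LazySettings` class name and file path are illustrative, not part of kwiklib.

```python
# Sketch of the lazy-load/autosave settings pattern (assumes Python 3's
# ``pickle``; names here are illustrative, not kwiklib's).
import os
import pickle
import tempfile


class LazySettings(object):
    """Load the pickle file once, on first get/set; save after every set."""

    def __init__(self, filepath, autosave=True):
        self.filepath = filepath
        self.autosave = autosave
        self.settings = None  # Loaded lazily on first access.

    def _load_once(self):
        if self.settings is None:
            if os.path.exists(self.filepath):
                with open(self.filepath, 'rb') as f:
                    self.settings = pickle.load(f)
            else:
                self.settings = {}

    def __setitem__(self, key, value):
        self._load_once()
        self.settings[key] = value
        if self.autosave:
            with open(self.filepath, 'wb') as f:
                pickle.dump(self.settings, f)

    def __getitem__(self, key):
        self._load_once()
        return self.settings.get(key)


if __name__ == '__main__':
    path = os.path.join(tempfile.mkdtemp(), 'settings.pickle')
    s = LazySettings(path)
    s['theme'] = 'dark'
    # A fresh instance reloads the value from disk.
    print(LazySettings(path)['theme'])
```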
| 31.011628 | 79 | 0.490064 | 269 | 2,667 | 4.776952 | 0.334572 | 0.06537 | 0.035019 | 0.017121 | 0.066926 | 0.042023 | 0 | 0 | 0 | 0 | 0 | 0 | 0.248219 | 2,667 | 86 | 80 | 31.011628 | 0.640898 | 0.428946 | 0 | 0.1 | 0 | 0 | 0.002791 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.225 | false | 0 | 0.075 | 0.025 | 0.425 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e9f24ec99f076ba98908603ffa1d50f5644d6aa7 | 31,441 | py | Python | Bio/Prosite/__init__.py | nuin/biopython | 045d57b08799ef52c64bd4fa807629b8a7e9715a | [
"PostgreSQL"
] | 2 | 2016-05-09T04:20:06.000Z | 2017-03-07T10:25:53.000Z | Bio/Prosite/__init__.py | nuin/biopython | 045d57b08799ef52c64bd4fa807629b8a7e9715a | [
"PostgreSQL"
] | null | null | null | Bio/Prosite/__init__.py | nuin/biopython | 045d57b08799ef52c64bd4fa807629b8a7e9715a | [
"PostgreSQL"
] | 1 | 2019-08-19T22:05:14.000Z | 2019-08-19T22:05:14.000Z | # Copyright 1999 by Jeffrey Chang. All rights reserved.
# Copyright 2000 by Jeffrey Chang. All rights reserved.
# Revisions Copyright 2007 by Peter Cock. All rights reserved.
# This code is part of the Biopython distribution and governed by its
# license. Please see the LICENSE file that should have been included
# as part of this package.
"""
This module provides code to work with the prosite dat file from
Prosite.
http://www.expasy.ch/prosite/
Tested with:
Release 15.0, July 1998
Release 16.0, July 1999
Release 17.0, Dec 2001
Release 19.0, Mar 2006
Functions:
parse Iterates over entries in a Prosite file.
scan_sequence_expasy Scan a sequence for occurrences of Prosite patterns.
index_file Index a Prosite file for a Dictionary.
_extract_record Extract Prosite data from a web page.
_extract_pattern_hits Extract Prosite patterns from a web page.
Classes:
Record Holds Prosite data.
PatternHit Holds data from a hit against a Prosite pattern.
Dictionary Accesses a Prosite file using a dictionary interface.
RecordParser Parses a Prosite record into a Record object.
Iterator Iterates over entries in a Prosite file; DEPRECATED.
_Scanner Scans Prosite-formatted data.
_RecordConsumer Consumes Prosite data to a Record object.
"""
from types import *
import re
import sgmllib
from Bio import File
from Bio import Index
from Bio.ParserSupport import *
# There is probably a cleaner way to write the read/parse functions
# if we don't use the "parser = RecordParser(); parser.parse(handle)"
# approach. Leaving that for the next revision of Bio.Prosite.
def parse(handle):
import cStringIO
parser = RecordParser()
text = ""
for line in handle:
text += line
if line[:2]=='//':
handle = cStringIO.StringIO(text)
record = parser.parse(handle)
text = ""
if not record: # Then this was the copyright notice
continue
yield record
def read(handle):
parser = RecordParser()
try:
record = parser.parse(handle)
except ValueError, error:
if error.message=="There doesn't appear to be a record":
raise ValueError("No Prosite record found")
else:
raise error
# We should have reached the end of the record by now
remainder = handle.read()
if remainder:
raise ValueError("More than one Prosite record found")
return record
class Record:
"""Holds information from a Prosite record.
Members:
name ID of the record. e.g. ADH_ZINC
type Type of entry. e.g. PATTERN, MATRIX, or RULE
accession e.g. PS00387
created Date the entry was created. (MMM-YYYY)
data_update Date the 'primary' data was last updated.
info_update Date data other than 'primary' data was last updated.
pdoc ID of the PROSITE DOCumentation.
description Free-format description.
pattern The PROSITE pattern. See docs.
matrix List of strings that describes a matrix entry.
rules List of rule definitions (from RU lines). (strings)
prorules List of prorules (from PR lines). (strings)
NUMERICAL RESULTS
nr_sp_release SwissProt release.
nr_sp_seqs Number of seqs in that release of Swiss-Prot. (int)
nr_total Number of hits in Swiss-Prot. tuple of (hits, seqs)
nr_positive True positives. tuple of (hits, seqs)
nr_unknown Could be positives. tuple of (hits, seqs)
nr_false_pos False positives. tuple of (hits, seqs)
nr_false_neg False negatives. (int)
nr_partial False negatives, because they are fragments. (int)
COMMENTS
cc_taxo_range Taxonomic range. See docs for format
cc_max_repeat Maximum number of repetitions in a protein
cc_site Interesting site. list of tuples (pattern pos, desc.)
cc_skip_flag Can this entry be ignored?
cc_matrix_type
cc_scaling_db
cc_author
cc_ft_key
cc_ft_desc
cc_version version number (introduced in release 19.0)
DATA BANK REFERENCES - The following are all
lists of tuples (swiss-prot accession,
swiss-prot name)
dr_positive
dr_false_neg
dr_false_pos
dr_potential Potential hits, but fingerprint region not yet available.
dr_unknown Could possibly belong
pdb_structs List of PDB entries.
"""
def __init__(self):
self.name = ''
self.type = ''
self.accession = ''
self.created = ''
self.data_update = ''
self.info_update = ''
self.pdoc = ''
self.description = ''
self.pattern = ''
self.matrix = []
self.rules = []
self.prorules = []
self.postprocessing = []
self.nr_sp_release = ''
self.nr_sp_seqs = ''
self.nr_total = (None, None)
self.nr_positive = (None, None)
self.nr_unknown = (None, None)
self.nr_false_pos = (None, None)
self.nr_false_neg = None
self.nr_partial = None
self.cc_taxo_range = ''
self.cc_max_repeat = ''
self.cc_site = []
self.cc_skip_flag = ''
self.dr_positive = []
self.dr_false_neg = []
self.dr_false_pos = []
self.dr_potential = []
self.dr_unknown = []
self.pdb_structs = []
class PatternHit:
"""Holds information from a hit against a Prosite pattern.
Members:
name ID of the record. e.g. ADH_ZINC
accession e.g. PS00387
pdoc ID of the PROSITE DOCumentation.
description Free-format description.
matches List of tuples (start, end, sequence) where
start and end are indexes of the match, and sequence is
the sequence matched.
"""
def __init__(self):
self.name = None
self.accession = None
self.pdoc = None
self.description = None
self.matches = []
def __str__(self):
lines = []
lines.append("%s %s %s" % (self.accession, self.pdoc, self.name))
lines.append(self.description)
lines.append('')
if len(self.matches) > 1:
lines.append("Number of matches: %s" % len(self.matches))
for i in range(len(self.matches)):
start, end, seq = self.matches[i]
range_str = "%d-%d" % (start, end)
if len(self.matches) > 1:
lines.append("%7d %10s %s" % (i+1, range_str, seq))
else:
lines.append("%7s %10s %s" % (' ', range_str, seq))
return "\n".join(lines)
class Iterator:
"""Returns one record at a time from a Prosite file.
Methods:
next Return the next record from the stream, or None.
"""
def __init__(self, handle, parser=None):
"""__init__(self, handle, parser=None)
Create a new iterator. handle is a file-like object. parser
is an optional Parser object to change the results into another form.
If set to None, then the raw contents of the file will be returned.
"""
import warnings
warnings.warn("Bio.Prosite.Iterator is deprecated; we recommend using the function Bio.Prosite.parse instead. Please contact the Biopython developers at biopython-dev@biopython.org you cannot use Bio.Prosite.parse instead of Bio.Prosite.Iterator.",
DeprecationWarning)
if type(handle) is not FileType and type(handle) is not InstanceType:
raise ValueError("I expected a file handle or file-like object")
self._uhandle = File.UndoHandle(handle)
self._parser = parser
def next(self):
"""next(self) -> object
Return the next Prosite record from the file. If no more records,
return None.
"""
# Skip the copyright info, if it's the first record.
line = self._uhandle.peekline()
if line[:2] == 'CC':
while 1:
line = self._uhandle.readline()
if not line:
break
if line[:2] == '//':
break
if line[:2] != 'CC':
raise ValueError("Oops, where's the copyright?")
lines = []
while 1:
line = self._uhandle.readline()
if not line:
break
lines.append(line)
if line[:2] == '//':
break
if not lines:
return None
data = "".join(lines)
if self._parser is not None:
return self._parser.parse(File.StringHandle(data))
return data
def __iter__(self):
return iter(self.next, None)
class Dictionary:
"""Accesses a Prosite file using a dictionary interface.
"""
__filename_key = '__filename'
def __init__(self, indexname, parser=None):
"""__init__(self, indexname, parser=None)
Open a Prosite Dictionary. indexname is the name of the
index for the dictionary. The index should have been created
using the index_file function. parser is an optional Parser
object to change the results into another form. If set to None,
then the raw contents of the file will be returned.
"""
self._index = Index.Index(indexname)
self._handle = open(self._index[Dictionary.__filename_key])
self._parser = parser
def __len__(self):
return len(self._index)
def __getitem__(self, key):
        start, length = self._index[key]
        self._handle.seek(start)
        data = self._handle.read(length)
if self._parser is not None:
return self._parser.parse(File.StringHandle(data))
return data
def __getattr__(self, name):
return getattr(self._index, name)
class ExPASyDictionary:
"""Access PROSITE at ExPASy using a read-only dictionary interface.
"""
def __init__(self, delay=5.0, parser=None):
"""__init__(self, delay=5.0, parser=None)
Create a new Dictionary to access PROSITE. parser is an optional
parser (e.g. Prosite.RecordParser) object to change the results
into another form. If set to None, then the raw contents of the
file will be returned. delay is the number of seconds to wait
between each query.
"""
import warnings
from Bio.WWW import RequestLimiter
warnings.warn("Bio.Prosite.ExPASyDictionary is deprecated. Please use the function Bio.ExPASy.get_prosite_raw instead.",
DeprecationWarning)
self.parser = parser
self.limiter = RequestLimiter(delay)
def __len__(self):
raise NotImplementedError("Prosite contains lots of entries")
def clear(self):
raise NotImplementedError("This is a read-only dictionary")
def __setitem__(self, key, item):
raise NotImplementedError("This is a read-only dictionary")
def update(self):
raise NotImplementedError("This is a read-only dictionary")
def copy(self):
raise NotImplementedError("You don't need to do this...")
def keys(self):
raise NotImplementedError("You don't really want to do this...")
def items(self):
raise NotImplementedError("You don't really want to do this...")
def values(self):
raise NotImplementedError("You don't really want to do this...")
def has_key(self, id):
"""has_key(self, id) -> bool"""
try:
self[id]
except KeyError:
return 0
return 1
def get(self, id, failobj=None):
try:
return self[id]
except KeyError:
return failobj
def __getitem__(self, id):
"""__getitem__(self, id) -> object
Return a Prosite entry. id is either the id or accession
for the entry. Raises a KeyError if there's an error.
"""
from Bio import ExPASy
# First, check to see if enough time has passed since my
# last query.
self.limiter.wait()
try:
handle = ExPASy.get_prosite_entry(id)
except IOError:
raise KeyError(id)
try:
handle = File.StringHandle(_extract_record(handle))
except ValueError:
raise KeyError(id)
if self.parser is not None:
return self.parser.parse(handle)
return handle.read()
class RecordParser(AbstractParser):
"""Parses Prosite data into a Record object.
"""
def __init__(self):
self._scanner = _Scanner()
self._consumer = _RecordConsumer()
def parse(self, handle):
self._scanner.feed(handle, self._consumer)
return self._consumer.data
class _Scanner:
"""Scans Prosite-formatted data.
Tested with:
Release 15.0, July 1998
"""
def feed(self, handle, consumer):
"""feed(self, handle, consumer)
Feed in Prosite data for scanning. handle is a file-like
object that contains prosite data. consumer is a
Consumer object that will receive events as the report is scanned.
"""
if isinstance(handle, File.UndoHandle):
uhandle = handle
else:
uhandle = File.UndoHandle(handle)
consumer.finished = False
while not consumer.finished:
line = uhandle.peekline()
if not line:
break
elif is_blank_line(line):
# Skip blank lines between records
uhandle.readline()
continue
elif line[:2] == 'ID':
self._scan_record(uhandle, consumer)
elif line[:2] == 'CC':
self._scan_copyrights(uhandle, consumer)
else:
raise ValueError("There doesn't appear to be a record")
def _scan_copyrights(self, uhandle, consumer):
consumer.start_copyrights()
self._scan_line('CC', uhandle, consumer.copyright, any_number=1)
self._scan_terminator(uhandle, consumer)
consumer.end_copyrights()
def _scan_record(self, uhandle, consumer):
consumer.start_record()
for fn in self._scan_fns:
fn(self, uhandle, consumer)
# In Release 15.0, C_TYPE_LECTIN_1 has the DO line before
# the 3D lines, instead of the other way around.
# Thus, I'll give the 3D lines another chance after the DO lines
# are finished.
if fn is self._scan_do.im_func:
self._scan_3d(uhandle, consumer)
consumer.end_record()
def _scan_line(self, line_type, uhandle, event_fn,
exactly_one=None, one_or_more=None, any_number=None,
up_to_one=None):
# Callers must set exactly one of exactly_one, one_or_more, or
# any_number to a true value. I do not explicitly check to
# make sure this function is called correctly.
# This does not guarantee any parameter safety, but I
        # like the readability. The other strategy I tried was to have
        # parameters min_lines, max_lines.
if exactly_one or one_or_more:
read_and_call(uhandle, event_fn, start=line_type)
if one_or_more or any_number:
while 1:
if not attempt_read_and_call(uhandle, event_fn,
start=line_type):
break
if up_to_one:
attempt_read_and_call(uhandle, event_fn, start=line_type)
def _scan_id(self, uhandle, consumer):
self._scan_line('ID', uhandle, consumer.identification, exactly_one=1)
def _scan_ac(self, uhandle, consumer):
self._scan_line('AC', uhandle, consumer.accession, exactly_one=1)
def _scan_dt(self, uhandle, consumer):
self._scan_line('DT', uhandle, consumer.date, exactly_one=1)
def _scan_de(self, uhandle, consumer):
self._scan_line('DE', uhandle, consumer.description, exactly_one=1)
def _scan_pa(self, uhandle, consumer):
self._scan_line('PA', uhandle, consumer.pattern, any_number=1)
def _scan_ma(self, uhandle, consumer):
self._scan_line('MA', uhandle, consumer.matrix, any_number=1)
## # ZN2_CY6_FUNGAL_2, DNAJ_2 in Release 15
## # contain a CC line buried within an 'MA' line. Need to check
## # for that.
## while 1:
## if not attempt_read_and_call(uhandle, consumer.matrix, start='MA'):
## line1 = uhandle.readline()
## line2 = uhandle.readline()
## uhandle.saveline(line2)
## uhandle.saveline(line1)
## if line1[:2] == 'CC' and line2[:2] == 'MA':
## read_and_call(uhandle, consumer.comment, start='CC')
## else:
## break
def _scan_pp(self, uhandle, consumer):
#New PP line, PostProcessing, just after the MA line
self._scan_line('PP', uhandle, consumer.postprocessing, any_number=1)
def _scan_ru(self, uhandle, consumer):
self._scan_line('RU', uhandle, consumer.rule, any_number=1)
def _scan_nr(self, uhandle, consumer):
self._scan_line('NR', uhandle, consumer.numerical_results,
any_number=1)
def _scan_cc(self, uhandle, consumer):
self._scan_line('CC', uhandle, consumer.comment, any_number=1)
def _scan_dr(self, uhandle, consumer):
self._scan_line('DR', uhandle, consumer.database_reference,
any_number=1)
def _scan_3d(self, uhandle, consumer):
self._scan_line('3D', uhandle, consumer.pdb_reference,
any_number=1)
def _scan_pr(self, uhandle, consumer):
#New PR line, ProRule, between 3D and DO lines
self._scan_line('PR', uhandle, consumer.prorule, any_number=1)
def _scan_do(self, uhandle, consumer):
self._scan_line('DO', uhandle, consumer.documentation, exactly_one=1)
def _scan_terminator(self, uhandle, consumer):
self._scan_line('//', uhandle, consumer.terminator, exactly_one=1)
    #This is a list of scan functions in the order expected in the file.
    #The function definitions define how many times each line type is expected
    #(or if optional):
_scan_fns = [
_scan_id,
_scan_ac,
_scan_dt,
_scan_de,
_scan_pa,
_scan_ma,
_scan_pp,
_scan_ru,
_scan_nr,
_scan_cc,
# This is a really dirty hack, and should be fixed properly at
# some point. ZN2_CY6_FUNGAL_2, DNAJ_2 in Rel 15 and PS50309
# in Rel 17 have lines out of order. Thus, I have to rescan
# these, which decreases performance.
_scan_ma,
_scan_nr,
_scan_cc,
_scan_dr,
_scan_3d,
_scan_pr,
_scan_do,
_scan_terminator
]
class _RecordConsumer(AbstractConsumer):
"""Consumer that converts a Prosite record to a Record object.
Members:
data Record with Prosite data.
"""
def __init__(self):
self.data = None
def start_record(self):
self.data = Record()
def end_record(self):
self._clean_record(self.data)
def identification(self, line):
cols = line.split()
if len(cols) != 3:
raise ValueError("I don't understand identification line\n%s" \
% line)
self.data.name = self._chomp(cols[1]) # don't want ';'
self.data.type = self._chomp(cols[2]) # don't want '.'
def accession(self, line):
cols = line.split()
if len(cols) != 2:
raise ValueError("I don't understand accession line\n%s" % line)
self.data.accession = self._chomp(cols[1])
def date(self, line):
uprline = line.upper()
cols = uprline.split()
# Release 15.0 contains both 'INFO UPDATE' and 'INF UPDATE'
if cols[2] != '(CREATED);' or \
cols[4] != '(DATA' or cols[5] != 'UPDATE);' or \
cols[7][:4] != '(INF' or cols[8] != 'UPDATE).':
raise ValueError("I don't understand date line\n%s" % line)
self.data.created = cols[1]
self.data.data_update = cols[3]
self.data.info_update = cols[6]
def description(self, line):
self.data.description = self._clean(line)
def pattern(self, line):
self.data.pattern = self.data.pattern + self._clean(line)
def matrix(self, line):
self.data.matrix.append(self._clean(line))
def postprocessing(self, line):
postprocessing = self._clean(line).split(";")
self.data.postprocessing.extend(postprocessing)
def rule(self, line):
self.data.rules.append(self._clean(line))
def numerical_results(self, line):
cols = self._clean(line).split(";")
for col in cols:
if not col:
continue
qual, data = [word.lstrip() for word in col.split("=")]
if qual == '/RELEASE':
release, seqs = data.split(",")
self.data.nr_sp_release = release
self.data.nr_sp_seqs = int(seqs)
elif qual == '/FALSE_NEG':
self.data.nr_false_neg = int(data)
elif qual == '/PARTIAL':
self.data.nr_partial = int(data)
elif qual in ['/TOTAL', '/POSITIVE', '/UNKNOWN', '/FALSE_POS']:
m = re.match(r'(\d+)\((\d+)\)', data)
if not m:
raise Exception("Broken data %s in comment line\n%s" \
% (repr(data), line))
hits = tuple(map(int, m.groups()))
if(qual == "/TOTAL"):
self.data.nr_total = hits
elif(qual == "/POSITIVE"):
self.data.nr_positive = hits
elif(qual == "/UNKNOWN"):
self.data.nr_unknown = hits
elif(qual == "/FALSE_POS"):
self.data.nr_false_pos = hits
else:
raise ValueError("Unknown qual %s in comment line\n%s" \
% (repr(qual), line))
def comment(self, line):
#Expect CC lines like this:
#CC /TAXO-RANGE=??EPV; /MAX-REPEAT=2;
#Can (normally) split on ";" and then on "="
cols = self._clean(line).split(";")
for col in cols:
if not col or col[:17] == 'Automatic scaling':
# DNAJ_2 in Release 15 has a non-standard comment line:
# CC Automatic scaling using reversed database
# Throw it away. (Should I keep it?)
continue
if col.count("=") == 0 :
#Missing qualifier! Can we recover gracefully?
#For example, from Bug 2403, in PS50293 have:
#CC /AUTHOR=K_Hofmann; N_Hulo
continue
qual, data = [word.lstrip() for word in col.split("=")]
if qual == '/TAXO-RANGE':
self.data.cc_taxo_range = data
elif qual == '/MAX-REPEAT':
self.data.cc_max_repeat = data
elif qual == '/SITE':
pos, desc = data.split(",")
self.data.cc_site.append((int(pos), desc))
elif qual == '/SKIP-FLAG':
self.data.cc_skip_flag = data
elif qual == '/MATRIX_TYPE':
self.data.cc_matrix_type = data
elif qual == '/SCALING_DB':
self.data.cc_scaling_db = data
elif qual == '/AUTHOR':
self.data.cc_author = data
elif qual == '/FT_KEY':
self.data.cc_ft_key = data
elif qual == '/FT_DESC':
self.data.cc_ft_desc = data
elif qual == '/VERSION':
self.data.cc_version = data
else:
raise ValueError("Unknown qual %s in comment line\n%s" \
% (repr(qual), line))
def database_reference(self, line):
refs = self._clean(line).split(";")
for ref in refs:
if not ref:
continue
acc, name, type = [word.strip() for word in ref.split(",")]
if type == 'T':
self.data.dr_positive.append((acc, name))
elif type == 'F':
self.data.dr_false_pos.append((acc, name))
elif type == 'N':
self.data.dr_false_neg.append((acc, name))
elif type == 'P':
self.data.dr_potential.append((acc, name))
elif type == '?':
self.data.dr_unknown.append((acc, name))
else:
raise ValueError("I don't understand type flag %s" % type)
def pdb_reference(self, line):
cols = line.split()
for id in cols[1:]: # get all but the '3D' col
self.data.pdb_structs.append(self._chomp(id))
def prorule(self, line):
#Assume that each PR line can contain multiple ";" separated rules
rules = self._clean(line).split(";")
self.data.prorules.extend(rules)
def documentation(self, line):
self.data.pdoc = self._chomp(self._clean(line))
def terminator(self, line):
self.finished = True
def _chomp(self, word, to_chomp='.,;'):
# Remove the punctuation at the end of a word.
if word[-1] in to_chomp:
return word[:-1]
return word
def _clean(self, line, rstrip=1):
# Clean up a line.
if rstrip:
return line[5:].rstrip()
return line[5:]
def scan_sequence_expasy(seq=None, id=None, exclude_frequent=None):
"""scan_sequence_expasy(seq=None, id=None, exclude_frequent=None) ->
list of PatternHit's
Search a sequence for occurrences of Prosite patterns. You can
specify either a sequence in seq or a SwissProt/trEMBL ID or accession
in id. Only one of those should be given. If exclude_frequent
    is true, then patterns with a high probability of occurring
will be excluded.
"""
from Bio import ExPASy
if (seq and id) or not (seq or id):
raise ValueError("Please specify either a sequence or an id")
handle = ExPASy.scanprosite1(seq, id, exclude_frequent)
return _extract_pattern_hits(handle)
def _extract_pattern_hits(handle):
"""_extract_pattern_hits(handle) -> list of PatternHit's
Extract hits from a web page. Raises a ValueError if there
was an error in the query.
"""
class parser(sgmllib.SGMLParser):
def __init__(self):
sgmllib.SGMLParser.__init__(self)
self.hits = []
self.broken_message = 'Some error occurred'
self._in_pre = 0
self._current_hit = None
self._last_found = None # Save state of parsing
def handle_data(self, data):
if data.find('try again') >= 0:
self.broken_message = data
return
elif data == 'illegal':
self.broken_message = 'Sequence contains illegal characters'
return
if not self._in_pre:
return
elif not data.strip():
return
if self._last_found is None and data[:4] == 'PDOC':
self._current_hit.pdoc = data
self._last_found = 'pdoc'
elif self._last_found == 'pdoc':
if data[:2] != 'PS':
raise ValueError("Expected accession but got:\n%s" % data)
self._current_hit.accession = data
self._last_found = 'accession'
elif self._last_found == 'accession':
self._current_hit.name = data
self._last_found = 'name'
elif self._last_found == 'name':
self._current_hit.description = data
self._last_found = 'description'
elif self._last_found == 'description':
m = re.findall(r'(\d+)-(\d+) (\w+)', data)
for start, end, seq in m:
self._current_hit.matches.append(
(int(start), int(end), seq))
def do_hr(self, attrs):
# <HR> inside a <PRE> section means a new hit.
if self._in_pre:
self._current_hit = PatternHit()
self.hits.append(self._current_hit)
self._last_found = None
def start_pre(self, attrs):
self._in_pre = 1
self.broken_message = None # Probably not broken
def end_pre(self):
self._in_pre = 0
p = parser()
p.feed(handle.read())
if p.broken_message:
raise ValueError(p.broken_message)
return p.hits
def index_file(filename, indexname, rec2key=None):
"""index_file(filename, indexname, rec2key=None)
Index a Prosite file. filename is the name of the file.
indexname is the name of the dictionary. rec2key is an
optional callback that takes a Record and generates a unique key
(e.g. the accession number) for the record. If not specified,
the id name will be used.
"""
import os
if not os.path.exists(filename):
raise ValueError("%s does not exist" % filename)
index = Index.Index(indexname, truncate=1)
index[Dictionary._Dictionary__filename_key] = filename
handle = open(filename)
records = parse(handle)
end = 0L
for record in records:
start = end
end = long(handle.tell())
length = end - start
if rec2key is not None:
key = rec2key(record)
else:
key = record.name
if not key:
raise KeyError("empty key was produced")
elif key in index:
raise KeyError("duplicate key %s found" % key)
index[key] = start, length
# This function can be deprecated once Bio.Prosite.ExPASyDictionary
# is removed.
def _extract_record(handle):
"""_extract_record(handle) -> str
Extract PROSITE data from a web page. Raises a ValueError if no
data was found in the web page.
"""
# All the data appears between tags:
# <pre width = 80>ID NIR_SIR; PATTERN.
# </PRE>
class parser(sgmllib.SGMLParser):
def __init__(self):
sgmllib.SGMLParser.__init__(self)
self._in_pre = 0
self.data = []
def handle_data(self, data):
if self._in_pre:
self.data.append(data)
def do_br(self, attrs):
if self._in_pre:
self.data.append('\n')
def start_pre(self, attrs):
self._in_pre = 1
def end_pre(self):
self._in_pre = 0
p = parser()
p.feed(handle.read())
if not p.data:
raise ValueError("No data found in web page.")
return "".join(p.data)
| 35.326966 | 256 | 0.580675 | 3,904 | 31,441 | 4.525359 | 0.151383 | 0.021283 | 0.019358 | 0.016924 | 0.282561 | 0.224883 | 0.166808 | 0.129054 | 0.118979 | 0.101602 | 0 | 0.009392 | 0.326103 | 31,441 | 889 | 257 | 35.366704 | 0.824429 | 0.099965 | 0 | 0.240672 | 0 | 0.001866 | 0.080408 | 0.004749 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.024254 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e9fd5e9401ba6d04c5d4bf4d42d343bc34357a32 | 2,880 | py | Python | CIM16/IEC61970/Generation/Production/StartIgnFuelCurve.py | MaximeBaudette/PyCIM | d68ee5ccfc1d32d44c5cd09fb173142fb5ff4f14 | [
"MIT"
] | null | null | null | CIM16/IEC61970/Generation/Production/StartIgnFuelCurve.py | MaximeBaudette/PyCIM | d68ee5ccfc1d32d44c5cd09fb173142fb5ff4f14 | [
"MIT"
] | null | null | null | CIM16/IEC61970/Generation/Production/StartIgnFuelCurve.py | MaximeBaudette/PyCIM | d68ee5ccfc1d32d44c5cd09fb173142fb5ff4f14 | [
"MIT"
] | 1 | 2021-04-02T18:04:49.000Z | 2021-04-02T18:04:49.000Z | # Copyright (C) 2010-2011 Richard Lincoln
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
from CIM16.IEC61970.Core.Curve import Curve
class StartIgnFuelCurve(Curve):
"""The quantity of ignition fuel (Y-axis) used to restart and repay the auxiliary power consumed versus the number of hours (X-axis) the unit was off lineThe quantity of ignition fuel (Y-axis) used to restart and repay the auxiliary power consumed versus the number of hours (X-axis) the unit was off line
"""
def __init__(self, ignitionFuelType="oil", StartupModel=None, *args, **kw_args):
"""Initialises a new 'StartIgnFuelCurve' instance.
@param ignitionFuelType: Type of ignition fuel Values are: "oil", "coal", "lignite", "gas"
@param StartupModel: The unit's startup model may have a startup ignition fuel curve
"""
#: Type of ignition fuel Values are: "oil", "coal", "lignite", "gas"
self.ignitionFuelType = ignitionFuelType
self._StartupModel = None
self.StartupModel = StartupModel
super(StartIgnFuelCurve, self).__init__(*args, **kw_args)
_attrs = ["ignitionFuelType"]
_attr_types = {"ignitionFuelType": str}
_defaults = {"ignitionFuelType": "oil"}
_enums = {"ignitionFuelType": "FuelType"}
_refs = ["StartupModel"]
_many_refs = []
def getStartupModel(self):
"""The unit's startup model may have a startup ignition fuel curve
"""
return self._StartupModel
def setStartupModel(self, value):
if self._StartupModel is not None:
self._StartupModel._StartIgnFuelCurve = None
self._StartupModel = value
if self._StartupModel is not None:
self._StartupModel.StartIgnFuelCurve = None
self._StartupModel._StartIgnFuelCurve = self
StartupModel = property(getStartupModel, setStartupModel)
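The `setStartupModel` setter above maintains a one-to-one bidirectional link: assigning a new StartupModel first detaches the curve from the old one, then attaches it to the new one. A minimal sketch of that pattern, with illustrative `Parent`/`Child` names rather than CIM classes:

```python
# Minimal sketch of the one-to-one bidirectional link maintained by
# setStartupModel above; class names are illustrative, not CIM classes.
class Parent(object):
    def __init__(self):
        self._child = None


class Child(object):
    def __init__(self):
        self._parent = None

    def get_parent(self):
        return self._parent

    def set_parent(self, value):
        if self._parent is not None:
            self._parent._child = None   # detach from the old parent
        self._parent = value
        if self._parent is not None:
            self._parent._child = self   # attach to the new parent

    parent = property(get_parent, set_parent)


c = Child()
p = Parent()
c.parent = p
print(p._child is c)
```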
| 45 | 309 | 0.720833 | 370 | 2,880 | 5.535135 | 0.440541 | 0.078125 | 0.048828 | 0.054199 | 0.287109 | 0.287109 | 0.287109 | 0.287109 | 0.287109 | 0.287109 | 0 | 0.006539 | 0.203472 | 2,880 | 63 | 310 | 45.714286 | 0.886225 | 0.602083 | 0 | 0.086957 | 0 | 0 | 0.082949 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130435 | false | 0 | 0.043478 | 0 | 0.565217 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
e9ff12848b4786dd9b5181f046c3b8596891ad5d | 1,416 | py | Python | test_scripts/test_stack_and_visualize.py | jakevdp/spheredb | e5e5ff8b8902459b3f38a1a413a712ac1695accc | [
"BSD-3-Clause"
] | 1 | 2021-08-29T06:01:28.000Z | 2021-08-29T06:01:28.000Z | test_scripts/test_stack_and_visualize.py | jakevdp/spheredb | e5e5ff8b8902459b3f38a1a413a712ac1695accc | [
"BSD-3-Clause"
] | null | null | null | test_scripts/test_stack_and_visualize.py | jakevdp/spheredb | e5e5ff8b8902459b3f38a1a413a712ac1695accc | [
"BSD-3-Clause"
] | 2 | 2018-08-03T20:27:35.000Z | 2021-08-29T06:01:30.000Z | """
Stacking and Visualizing
------------------------
This script does the following:
1. Input LSST images, warp to sparse matrix, store as scidb arrays.
This tests the warping of a single LSST exposure into a sparse matrix
representation of a HEALPix grid.
"""
import os
import sys
import glob
import matplotlib.pyplot as plt
import numpy as np
sys.path.append(os.path.abspath('..'))
from spheredb.scidb_tools import HPXPixels3D, find_index_bounds
filenames = glob.glob("/home/jakevdp/research/LSST_IMGS/*/R*/S*.fits")
print "total number of files:", len(filenames)
HPX_data = HPXPixels3D(input_files=filenames[:20],
name='LSSTdata', force_reload=False)
times = HPX_data.unique_times()
xlim, ylim, tlim = HPX_data.index_bounds()
for time in times[:2]:
tslice = HPX_data.time_slice(time)
tslice_arr = tslice.arr[xlim[0]:xlim[1],
ylim[0]:ylim[1]].toarray()
fig, ax = plt.subplots()
im = ax.imshow(np.log(tslice_arr), cmap=plt.cm.binary)
ax.set_xlim(400, 440)
ax.set_ylim(860, 820)
fig.colorbar(im, ax=ax)
ax.set_title("time = {0}".format(time))
coadd = HPX_data.coadd().arr[xlim[0]:xlim[1],
ylim[0]:ylim[1]].toarray()
fig, ax = plt.subplots()
im = ax.imshow(np.log(coadd), cmap=plt.cm.binary)
ax.set_xlim(400, 440)
ax.set_ylim(860, 820)
fig.colorbar(im, ax=ax)
ax.set_title("coadd")
plt.show()
| 27.230769 | 70 | 0.664548 | 220 | 1,416 | 4.177273 | 0.468182 | 0.032644 | 0.01741 | 0.026115 | 0.289445 | 0.289445 | 0.289445 | 0.289445 | 0.289445 | 0.289445 | 0 | 0.033563 | 0.179379 | 1,416 | 51 | 71 | 27.764706 | 0.757315 | 0 | 0 | 0.3125 | 0 | 0 | 0.079723 | 0.038995 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.1875 | null | null | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
18007f3ffa7e153ffa5c57f5301a0d773f024cb8 | 307 | py | Python | Problem/PeopleFund/concatenate.py | yeojin-dev/coding-test | 30ce8507838beaa9232c6fc6c62a7dcb62d51464 | [
"MIT"
] | 2 | 2018-07-11T08:13:06.000Z | 2018-07-11T08:47:12.000Z | Problem/PeopleFund/concatenate.py | yeojin-dev/coding-test | 30ce8507838beaa9232c6fc6c62a7dcb62d51464 | [
"MIT"
] | null | null | null | Problem/PeopleFund/concatenate.py | yeojin-dev/coding-test | 30ce8507838beaa9232c6fc6c62a7dcb62d51464 | [
"MIT"
] | null | null | null | import numpy as np
sizes = list(map(int, input().split()))
arr1 = list()
arr2 = list()
for _ in range(sizes[0]):
arr1.append(list(map(int, input().split())))
for _ in range(sizes[1]):
arr2.append(list(map(int, input().split())))
print(np.concatenate((np.array(arr1), np.array(arr2)), axis=0))
| 19.1875 | 63 | 0.635179 | 49 | 307 | 3.938776 | 0.44898 | 0.108808 | 0.15544 | 0.233161 | 0.373057 | 0.26943 | 0 | 0 | 0 | 0 | 0 | 0.034091 | 0.140065 | 307 | 15 | 64 | 20.466667 | 0.69697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1801df02ecd58a8f78ca27f271870b89690c5eb0 | 1,349 | py | Python | db_model.py | Build-Week-Saltiest-Hacker/machine-learning | 1822e2ecdca8279bc49095f6da527152e298b95d | [
"MIT"
] | null | null | null | db_model.py | Build-Week-Saltiest-Hacker/machine-learning | 1822e2ecdca8279bc49095f6da527152e298b95d | [
"MIT"
] | null | null | null | db_model.py | Build-Week-Saltiest-Hacker/machine-learning | 1822e2ecdca8279bc49095f6da527152e298b95d | [
"MIT"
] | null | null | null | # schema for SQL database
from data import app, db
class HNuser(db.Model):
""" SQL database class """
username = db.Column(db.String(100), primary_key=True)
post_id = db.Column(db.Integer)
salty_rank = db.Column(db.Float, nullable=False)
salty_comments = db.Column(db.Integer, nullable=False)
# comments_total = db.Column(db.Integer, nullable=False)
    def __repr__(self):
return f"User {self.username} -- Salty Ranking: {self.salty_rank}"
def salty_hackers(self):
"""return user information in Json format """
return {
"username" : self.username,
"date" : self.date,
"salty_rank" : self.salty_rank,
"salty_comments" : self.salty_comments,
}
class Comments(db.Model):
comment_id = db.Column(db.BigInteger, primary_key=True)
username = db.Column(db.String(100), db.ForeignKey('user.username'))
text = db.Column(db.String(3000))
date = db.Column(db.BigInteger)
def __repr__(self):
return f"User {self.username} -- Comment: {self.text}"
def salty_comments(self):
""" returns comments in JSON format """
return {
"comment_id" : self.comment_id,
"username" : self.username,
"text" : self.text,
"date" : self.date
""
}
| 31.372093 | 74 | 0.604151 | 164 | 1,349 | 4.823171 | 0.292683 | 0.091024 | 0.11378 | 0.060683 | 0.230089 | 0.230089 | 0.085967 | 0.085967 | 0 | 0 | 0 | 0.010101 | 0.266123 | 1,349 | 42 | 75 | 32.119048 | 0.788889 | 0.057821 | 0 | 0.133333 | 0 | 0 | 0.151123 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.033333 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1809e4f7973197265ce5a6a201169c2856659885 | 1,555 | py | Python | src/jt/rubicon/java/_typemanager.py | karpierz/jtypes.rubicon | 8f8196e47de93183eb9728fec0d08725fc368ee0 | [
"BSD-3-Clause"
] | 2 | 2018-11-29T06:19:05.000Z | 2018-12-09T09:47:55.000Z | src/jt/rubicon/java/_typemanager.py | karpierz/jtypes.rubicon | 8f8196e47de93183eb9728fec0d08725fc368ee0 | [
"BSD-3-Clause"
] | null | null | null | src/jt/rubicon/java/_typemanager.py | karpierz/jtypes.rubicon | 8f8196e47de93183eb9728fec0d08725fc368ee0 | [
"BSD-3-Clause"
] | null | null | null | # Copyright (c) 2016-2019, Adam Karpierz
# Licensed under the BSD license
# http://opensource.org/licenses/BSD-3-Clause
from ...jvm.lib.compat import *
from ...jvm.lib import annotate
from ...jvm.lib import public
from ._typehandler import * # noqa
@public
class TypeManager(object):
__slots__ = ('_state', '_handlers')
def __init__(self, state=None):
super(TypeManager, self).__init__()
self._state = state
self._handlers = {}
def start(self):
self._register_handler(VoidHandler)
self._register_handler(BooleanHandler)
self._register_handler(CharHandler)
self._register_handler(ByteHandler)
self._register_handler(ShortHandler)
self._register_handler(IntHandler)
self._register_handler(LongHandler)
self._register_handler(FloatHandler)
self._register_handler(DoubleHandler)
self._register_handler(StringHandler)
def stop(self):
self._handlers = {}
def _register_handler(self, hcls):
thandler = hcls(self._state)
self._handlers[thandler._jclass] = thandler
return thandler
def get_handler(self, jclass):
thandler = self._handlers.get(jclass)
if thandler is None:
if not jclass.startswith("L"):
raise ValueError("Don't know how to convert argument with "
"type signature '{}'".format(jclass))
self._handlers[jclass] = thandler = ObjectHandler(self._state, jclass)
return thandler
| 28.272727 | 82 | 0.652733 | 165 | 1,555 | 5.866667 | 0.454545 | 0.170455 | 0.196281 | 0.033058 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007705 | 0.248875 | 1,555 | 54 | 83 | 28.796296 | 0.821062 | 0.075884 | 0 | 0.111111 | 0 | 0 | 0.052374 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.138889 | false | 0 | 0.111111 | 0 | 0.361111 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
180dd0f316d9175e1decc0de1732de58c97bdcf4 | 3,874 | py | Python | run.py | Yvonne-Ouma/Password-Locker | b16f8e9ee36d3cb70eefb58bf7be2de1bb1948fc | [
"MIT"
] | null | null | null | run.py | Yvonne-Ouma/Password-Locker | b16f8e9ee36d3cb70eefb58bf7be2de1bb1948fc | [
"MIT"
] | null | null | null | run.py | Yvonne-Ouma/Password-Locker | b16f8e9ee36d3cb70eefb58bf7be2de1bb1948fc | [
"MIT"
] | null | null | null | #!/usr/bin/env python3.6
from user import User
from credential import Credential
def createUser(userName,password):
'''
Function to create a new user
'''
newUser = User(userName,password)
return newUser
def saveUsers(user):
'''
Function to save users
'''
user.saveUser()
def createCredential(firstName,lastName,accountName,password):
newCredential = Credential(firstName,lastName,accountName,password)
return newCredential
def saveCredential(credential):
'''
Function to save a new credential
'''
Credential.saveCredential(credential)
def delCredential(credential):
'''
Function to delete a credential
'''
credential.deleteCredential()
def findCredential(name):
'''
    Function that finds a credential by name and returns it
'''
return Credential.find_by_name(name)
def check_existingCredentials(name):
'''
    Function that checks if a credential exists with that name and returns a boolean
'''
return Credential.credential_exist(name)
def displayCredentials():
'''
Function that returns all the saved credentials
'''
return Credential.displayCredentials()
def main():
print("Hello Welcome to password locker.\n Login:")
userName = input("What is your name?")
password = input("Enter your password :")
print(f"Hello {userName}. what would you like to do??\n Create an acount First!!" )
print("-"* 15)
while True:
print("Us this short codes : cc - Create a new credential, dc -display credentials, fc -to search a credential, dl -to delete credential, ex -exit the credential list ")
short_code = input()
if short_code == 'cc':
print("New Credential")
print("-"*10)
print ("firstName ....")
firstName = input()
print("lastName ...")
lastName = input()
print("accountName ...")
accountName = input()
print("password ...")
password = input()
saveCredential(createCredential(firstName,lastName,accountName,password)) # create and save new credential.
print('\n')
# print (f'New Credential {firstName} {lastName} {accountName} created')
print('\n')
elif short_code == 'dc':
if displayCredentials():
print("Here is a list of all your credentials")
print('\n')
for credential in displayCredentials():
print(f"{credential.firstName} {credential.lastName} ....{credential.accountName}")
print('\n')
else:
print('\n')
print("You dont seem to have any credentials saved yet")
print('\n')
elif short_code =='dl':
print("Are your sure you want to delete this credential\n Please insert the name of the credential:")
searchName = input()
deleteCredential = Credential.deleteCredential(searchName)
elif short_code == 'fc':
print("Enter the name you want to search for")
searchName = input()
searchCredential = findCredential(searchName)
print(f" {searchCredential.lastName}")
print('-' * 20)
print(f"accountName........{searchCredential.accountName}")
elif short_code == "ex":
print("Bye ......")
break
else:
print("I really didn't get that. Please use the short codes")
if __name__ == '__main__':
main()
| 27.28169 | 177 | 0.558596 | 373 | 3,874 | 5.753351 | 0.321716 | 0.025163 | 0.05219 | 0.050326 | 0.06617 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003126 | 0.339442 | 3,874 | 141 | 178 | 27.475177 | 0.835483 | 0.112803 | 0 | 0.138889 | 0 | 0.013889 | 0.252103 | 0.044171 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0.125 | 0.027778 | 0 | 0.222222 | 0.361111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
1810ed3f25b77f5724cfa46b09080dd25d3ba89c | 737 | py | Python | aaweb/__init__.py | cpelite/astorian-airways | 55498f308de7a4b8159519e191b492675ec5612a | [
"CC0-1.0"
] | null | null | null | aaweb/__init__.py | cpelite/astorian-airways | 55498f308de7a4b8159519e191b492675ec5612a | [
"CC0-1.0"
] | null | null | null | aaweb/__init__.py | cpelite/astorian-airways | 55498f308de7a4b8159519e191b492675ec5612a | [
"CC0-1.0"
] | 3 | 2020-04-14T20:46:50.000Z | 2021-03-11T19:07:20.000Z | # -*- coding: utf-8 -*-
import os
from datetime import timedelta
from flask import Flask, session
default_timezone = 'Europe/Berlin'
app = Flask(__name__, static_folder='../static', static_url_path='/static', template_folder="../templates/")
app.permanent_session_lifetime = timedelta(minutes=60)
app.config.update(
SESSION_COOKIE_NAME = "AAsession",
ERROR_LOG_FILE = "%s/app.log" % os.environ.get('OPENSHIFT_LOG_DIR', 'logs')
)
@app.before_request
def session_activity():
session.modified = True
@app.route('/robots.txt')
def serve_robots():
return 'User-agent: *\nDisallow: /'
# VIEWS
import aaweb.views
import aaweb.forms
# API
import aaweb.api
# additional functionalities
import aaweb.error
import aaweb.log
| 20.472222 | 108 | 0.738128 | 97 | 737 | 5.402062 | 0.608247 | 0.104962 | 0.061069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004658 | 0.126187 | 737 | 35 | 109 | 21.057143 | 0.809006 | 0.078697 | 0 | 0 | 0 | 0 | 0.176558 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0 | 0.380952 | 0.047619 | 0.52381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
181aa4e686c7e2eb75b68979882bfaab2af06de9 | 3,031 | py | Python | downloader.py | tuxetuxe/downloader | 76a1ac01189a6946b15ac6f58661931551dfc0ef | [
"Apache-2.0"
] | 3 | 2016-11-09T13:02:46.000Z | 2020-06-04T10:38:11.000Z | downloader.py | tuxetuxe/downloader | 76a1ac01189a6946b15ac6f58661931551dfc0ef | [
"Apache-2.0"
] | null | null | null | downloader.py | tuxetuxe/downloader | 76a1ac01189a6946b15ac6f58661931551dfc0ef | [
"Apache-2.0"
] | null | null | null | import sys, getopt
import sched
import time
import csv
from pprint import pprint
import urllib, urllib2
from random import randint
import threading
proxies_file = ""
targets_file = ""
proxies = []
targets = []
scheduler = sched.scheduler(time.time, time.sleep)
def pick_random_proxy():
	# guard against an empty proxy list (randint would otherwise raise ValueError)
	if not proxies:
		return None
	proxy_count = len(proxies) - 1
proxy_index = randint(0, proxy_count )
proxy = proxies[ proxy_index ]
return proxy[ "host" ] + ":" + proxy[ "port" ]
def download_file(interval, url):
	# start the worker thread; without .start() the download would never run
	threading.Thread( target = lambda: download_file_impl(interval, url) ).start()
#randomize the interval
new_interval = interval + randint( -1 * interval, interval )
if new_interval < interval:
new_interval = interval
#repeat itself forever
scheduler.enter(new_interval, 1 , download_file, (new_interval, url) )
print "==> Next download of " + url + " in " + str( new_interval ) + " seconds"
def download_file_impl(interval, url):
selected_proxy = pick_random_proxy();
download_was_ok = True
try:
request = urllib2.Request(url)
if selected_proxy is None:
print "NO PROXY!"
else:
request.set_proxy(selected_proxy, 'http')
response = urllib2.urlopen(request)
print "Response code: " + str( response.code )
download_was_ok = response.code == 200
except urllib2.URLError, e:
download_was_ok = False
pprint( e )
if( download_was_ok ):
print " OK! "
else:
print " NOK! "
def main(argv):
global scheduler
parse_command_line_parameters(argv)
proxiesReader = csv.DictReader(open(proxies_file), dialect='excel', delimiter=',')
for row in proxiesReader:
proxies.append( row )
targetsReader = csv.DictReader(open(targets_file), dialect='excel', delimiter=',')
for row in targetsReader:
targets.append( row )
print "==============================================================================="
print "Proxies file: " + proxies_file
print "Targets file: " + targets_file
print "-------------------------------------------------------------------------------"
print "Proxies (total:" + str( len(proxies) ) + ")"
pprint( proxies )
print "Targets (total:" + str( len(targets) ) + ")"
pprint( targets )
print "==============================================================================="
for target in targets:
interval = int( target[ "interval" ] )
url = target[ "url" ]
scheduler.enter(interval, 1 , download_file, (interval, url ) )
scheduler.run()
def parse_command_line_parameters(argv):
global proxies_file
global targets_file
try:
opts, args = getopt.getopt(argv,"hp:t:",["proxies=","targets="])
except getopt.GetoptError:
print 'downloader.py -p <proxiesfile> -t <targetsfile>'
sys.exit(2)
for opt, arg in opts:
if opt == '-h':
print 'downloader.py -p <proxiesfile> -t <targetsfile>'
sys.exit()
elif opt in ("-p", "--proxiesfile"):
proxies_file = arg
elif opt in ("-t", "--targetsfile"):
targets_file = arg
if __name__ == "__main__":
main(sys.argv[1:]) | 25.470588 | 88 | 0.626856 | 356 | 3,031 | 5.174157 | 0.303371 | 0.035831 | 0.02823 | 0.024973 | 0.149837 | 0.087948 | 0.087948 | 0.052117 | 0.052117 | 0 | 0 | 0.006005 | 0.17585 | 3,031 | 119 | 89 | 25.470588 | 0.731385 | 0.014187 | 0 | 0.091954 | 0 | 0 | 0.18614 | 0.079344 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.091954 | null | null | 0.206897 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1820ae4e6fd68c69f37f4266bffb6793e643a89a | 6,580 | py | Python | script.py | rahulkmr1/heroku-python-script | 053be38dc8c6c6ab9929ca5af772d19c57f5e498 | [
"MIT"
] | null | null | null | script.py | rahulkmr1/heroku-python-script | 053be38dc8c6c6ab9929ca5af772d19c57f5e498 | [
"MIT"
] | null | null | null | script.py | rahulkmr1/heroku-python-script | 053be38dc8c6c6ab9929ca5af772d19c57f5e498 | [
"MIT"
] | null | null | null | import telepot
import time
import requests
from bs4 import BeautifulSoup as bs
import cPickle
import csv
RAHUL_ID = 931906767
# You can leave this bit out if you're using a paid PythonAnywhere account
# proxy_url = "http://proxy.server:3128"
# telepot.api._pools = {
# 'default': urllib3.ProxyManager(proxy_url=proxy_url, num_pools=3, maxsize=10, retries=False, timeout=30),
# }
# telepot.api._onetime_pool_spec = (urllib3.ProxyManager, dict(proxy_url=proxy_url, num_pools=1, maxsize=1, retries=False, timeout=30))
# end of the stuff that's only needed for free accounts
########################
login_url = 'https://www.placement.iitbhu.ac.in/accounts/login/'
client = requests.session()
login = client.get(login_url)
login = bs(login.content, "html.parser")
payload = {
"login": "rahul.kumar.cse15@itbhu.ac.in",
"password": "rahulkmr",
"csrfmiddlewaretoken": login.input['value']
}
result = client.post(
login_url,
data = payload,
headers = dict(referer=login_url)
)
forum = client.get("https://www.placement.iitbhu.ac.in/forum/c/notice-board/2019-20/")
soup = bs(forum.content, "html.parser")
# load last message delivered to users
try:
with open("posts", "rb") as f:
posts = cPickle.load(f);
except Exception as e:
print e
posts = soup.findAll("td", "topic-name")
for i in range(len(posts)):
posts[i] = posts[i].a
posts.pop(0)
posts.pop(0)
updated = soup.findAll('td','topic-last-post')
# updated.pop()
# updated.pop(0)
#########################
bot = telepot.Bot('940251504:AAG19YYQYtkiEOCrW0fZETvmYQSskElARcc')
# chat_ids = {RAHUL_ID}
with open("IDs", "rb") as f:
chat_ids = cPickle.load(f)
print '#################No of IDs loaded: ', len(chat_ids)
####### Commands ########
def start(msg):
# with open("users.csv", "w") as f:
# writer = csv.writer(f)
# writer.writerow(msg['from'].values())
bot.sendMessage(msg['chat']['id'],"Hello " + msg['from']['first_name'])
def add_cmd(chat_id, msg, *argv):
if chat_id not in chat_ids:
chat_ids.add(chat_id)
with open("IDs", "wb") as f:
cPickle.dump(chat_ids, f);
with open("users.txt", "a") as f:
writer = csv.writer(f)
writer.writerow(msg['from'].values())
bot.sendMessage(chat_id, "Added your ID for notifications. Note that it may take upto 5 minutes to get update of a recent post")
bot.sendMessage(RAHUL_ID, "Added:\n" + str(msg['from'].values()))
else:
bot.sendMessage(chat_id, "You are already added")
def remove_cmd(chat_id, *argv):
try:
chat_ids.remove(chat_id)
with open("IDs", "wb") as f:
cPickle.dump(chat_ids, f);
bot.sendMessage(chat_id, "Removed your ID")
except KeyError:
bot.sendMessage(chat_id, "You are not in the list")
def allPosts(chat_id, *argv):
msg = ''
for i in range(len(posts)):
msg += gen_msg(posts[i]) + '\n<b>Last Updated: </b>' + updated[i].string.encode() + '\n\n'
bot.sendMessage(chat_id, text=msg, parse_mode="HTML")
def top(chat_id, param, *argv):
total = 3
if len(param) > 1 and param[1].isdigit():
total = min(15, int(param[1]))
msg = '.'
for i in range(total):
msg += gen_msg(posts[i])
try:
# print msg
bot.sendMessage(chat_id, text=msg, parse_mode="HTML")
except Exception as e:
bot.sendMessage(chat_id, text=str(e), parse_mode="HTML")
#########################
command = {'/add':add_cmd, '/remove':remove_cmd, '/all':allPosts, '/recent':top}
def handle(msg):
# print msg
content_type, chat_type, chat_id = telepot.glance(msg)
print msg['from']['first_name'], chat_id
if content_type == 'text':
if msg['text'][0] == '/':
tokens = msg['text'].split()
try:
if tokens[0] == '/start':
start(msg)
elif tokens[0] == '/add':
add_cmd(chat_id, msg)
else:
command[tokens[0]](chat_id, tokens)
except KeyError:
bot.sendMessage(chat_id, "Unknown command: {}".format(tokens[0]))
else:
bot.sendMessage(chat_id, "You said '{}'".format(msg["text"]))
bot.message_loop(handle)
print ('Listening ...')
# for chat_id in chat_ids:
# bot.sendMessage(chat_id, text='Server started', parse_mode="HTML")
bot.sendMessage(RAHUL_ID, text='Server started', parse_mode="HTML")
def gen_msg(post):
string = str(post)
string = string[:8] + '"https://www.placement.iitbhu.ac.in' + string[9:] + '\n-----------------\n<b>Last Updated: </b>' + updated[posts.index(post)].string.encode() + '\n'
post = client.get("https://www.placement.iitbhu.ac.in/" + post['href'])
post = bs(post.content, "html.parser")
post = post.find("td", "post-content")
# print post.contents
for x in post.contents:
if type(x) is type(post.contents[0]):
string += x + '\n'
post = post.find("div", "attachments").a
if post is not None:
tmp = '<a href=' + '"https://www.placement.iitbhu.ac.in' + post['href'] + '">'
tmp += post.contents[1].split()[0]
tmp += '</a>'
string += tmp
string += '\n\n'
return string
def on_new():
global updated
global posts
posts2 = soup.findAll("td", "topic-name")
for i in range(len(posts2)):
posts2[i] = posts2[i].a
#find how many new posts
try:
total = posts2.index(posts[0])
except ValueError:
total = len(posts2)
print total, "new posts, users = ", len(chat_ids)
posts = posts2
updated = soup.findAll('td','topic-last-post')
	msg = '<b>Note that you need to be logged in before opening these links, or you\'ll see a 500 error in your browser</b>\n\n'
for i in range(total):
msg += gen_msg(posts[i])
for chat_id in chat_ids:
# print "sending update to ", chat_id
bot.sendMessage(chat_id, text=msg, parse_mode="HTML")
	# save last message delivered to users
with open("posts", "wb") as f:
cPickle.dump(posts, f);
# Keep the program running.
def main():
while 1:
# bot.sendMessage(RAHUL_ID, text="Dynamic code update", parse_mode="HTML")
global forum
global soup
try:
forum = requests.get("https://www.placement.iitbhu.ac.in/forum/c/notice-board/2019-20/")
soup = bs(forum.content, "html.parser")
if len(posts) == 0 or soup.td.a['href'] != posts[0]['href']:
on_new()
except Exception as e:
bot.sendMessage(RAHUL_ID, text="<b>Exception:</b>\n" + str(e), parse_mode="HTML")
# else:
# bot.sendMessage(RAHUL_ID, text="Error in polling TPO forum", parse_mode="HTML")
try:
time.sleep(1000 * 60 *1)
finally:
# for chat_id in chat_ids:
# bot.sendMessage(chat_id, text='Server closing for maintenance, you might miss updates', parse_mode="HTML")
bot.sendMessage(RAHUL_ID, text='Server closing for maintenance, you might miss updates', parse_mode="HTML")
if __name__ == '__main__':
main()
| 25.019011 | 173 | 0.655623 | 1,004 | 6,580 | 4.199203 | 0.249004 | 0.039848 | 0.051233 | 0.056926 | 0.398008 | 0.344877 | 0.254507 | 0.232448 | 0.212287 | 0.16888 | 0 | 0.016607 | 0.158055 | 6,580 | 262 | 174 | 25.114504 | 0.744404 | 0.180091 | 0 | 0.225166 | 0 | 0.02649 | 0.242419 | 0.019333 | 0.006623 | 0 | 0 | 0 | 0 | 0 | null | null | 0.006623 | 0.039735 | null | null | 0.033113 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
182e6f7b7c70dcc5da411a03395acac1d83ee9e9 | 3,136 | py | Python | src/models/Models.py | nbrutti/uol-export | c79a1a6b5c68e61a85952a60b935943aec27cdda | [
"MIT"
] | null | null | null | src/models/Models.py | nbrutti/uol-export | c79a1a6b5c68e61a85952a60b935943aec27cdda | [
"MIT"
] | null | null | null | src/models/Models.py | nbrutti/uol-export | c79a1a6b5c68e61a85952a60b935943aec27cdda | [
"MIT"
] | null | null | null | from config.defs import *
import peewee
db = peewee.SqliteDatabase(DATABASE_NAME)
class BaseModel(peewee.Model):
class Meta:
database = db
class Partida(BaseModel):
id_time_casa = peewee.CharField()
id_time_visitante = peewee.CharField()
time_casa = peewee.CharField()
time_visitante = peewee.CharField()
data = peewee.DateField()
time_da_casa_venceu = peewee.IntegerField()
HG = peewee.FloatField()
AG = peewee.FloatField()
PH = peewee.FloatField()
PD = peewee.FloatField()
PA = peewee.FloatField()
MAX_H = peewee.FloatField()
MAX_D = peewee.FloatField()
MAX_A = peewee.FloatField()
AVG_H = peewee.FloatField()
AVG_D = peewee.FloatField()
AVG_A = peewee.FloatField()
class Meta:
db_table = 'partidas'
class Substituicao(BaseModel):
# Pode ser INTERVALO
tempo = peewee.CharField()
tipo_tatico = peewee.CharField(null=True)
efetividade = peewee.IntegerField()
class Meta:
db_table = 'substituicoes'
class Penalti(BaseModel):
tempo = peewee.CharField()
class Meta:
db_table = 'penaltis'
class CartaoAmarelo(BaseModel):
tempo = peewee.CharField()
id_jogador = peewee.CharField()
class Meta:
db_table = 'cartoes_amarelos'
class CartaoVermelho(BaseModel):
tempo = peewee.CharField()
id_jogador = peewee.CharField()
class Meta:
db_table = 'cartoes_vermelhos'
class GolContra(BaseModel):
tempo = peewee.CharField()
id_jogador = peewee.CharField()
class Meta:
db_table = 'gols_contra'
class Gol(BaseModel):
tempo = peewee.CharField()
id_jogador = peewee.CharField()
class Meta:
db_table = 'gols'
class Time(BaseModel):
api_id = peewee.IntegerField()
nome = peewee.CharField()
class Meta:
db_table = "times"
### Relationships ###
class PartidasSubstituicoes(BaseModel):
partida = peewee.ForeignKeyField(Partida)
substituicao = peewee.ForeignKeyField(Substituicao)
class Meta:
db_table = 'partidas_substituicoes'
class PartidasPenaltis(BaseModel):
partida = peewee.ForeignKeyField(Partida)
penalti = peewee.ForeignKeyField(Penalti)
class Meta:
db_table = 'partidas_penaltis'
class PartidasCartoesAmarelos(BaseModel):
partida = peewee.ForeignKeyField(Partida)
cartoes_amarelos = peewee.ForeignKeyField(CartaoAmarelo)
class Meta:
db_table = 'partidas_cartoes_amarelos'
class PartidasCartoesVermelhos(BaseModel):
partida = peewee.ForeignKeyField(Partida)
cartoes_vermelhos = peewee.ForeignKeyField(CartaoVermelho)
class Meta:
db_table = 'partidas_cartoes_vermelhos'
class PartidasGolsContra(BaseModel):
partida = peewee.ForeignKeyField(Partida)
gols_contra = peewee.ForeignKeyField(GolContra)
class Meta:
db_table = 'partidas_gols_contra'
class PartidasGols(BaseModel):
partida = peewee.ForeignKeyField(Partida)
gols = peewee.ForeignKeyField(Gol)
class Meta:
db_table = 'partidas_gols'
db.create_tables([Partida, Substituicao, Penalti, CartaoAmarelo, CartaoVermelho, GolContra, Gol, Time])
db.create_tables([PartidasSubstituicoes, PartidasPenaltis, PartidasCartoesAmarelos, PartidasCartoesVermelhos, PartidasGolsContra, PartidasGols]) | 24.888889 | 144 | 0.748724 | 334 | 3,136 | 6.871257 | 0.218563 | 0.104575 | 0.067102 | 0.097603 | 0.36427 | 0.294553 | 0.129847 | 0.129847 | 0.129847 | 0.129847 | 0 | 0 | 0.151786 | 3,136 | 126 | 144 | 24.888889 | 0.862782 | 0.010842 | 0 | 0.333333 | 0 | 0 | 0.066257 | 0.023594 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.021505 | 0 | 0.806452 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
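# A brief usage sketch for the schema defined above (illustrative only; the
# queries assume the tables have already been populated elsewhere):
#
#     home_wins = Partida.select().where(Partida.time_da_casa_venceu == 1).count()
#     for sub in Substituicao.select().where(Substituicao.efetividade > 0):
#         print(sub.tempo, sub.tipo_tatico)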
182eadd7acbf4364e0c9b88cd120533f1ae8e1e3 | 1,165 | py | Python | quantnn/__init__.py | simonpf/qrnn | 1de11ce8cede6b4b3de0734bcc8c198c10226188 | [
"MIT"
] | null | null | null | quantnn/__init__.py | simonpf/qrnn | 1de11ce8cede6b4b3de0734bcc8c198c10226188 | [
"MIT"
] | 3 | 2022-01-11T08:41:03.000Z | 2022-02-11T14:25:09.000Z | quantnn/__init__.py | simonpf/qrnn | 1de11ce8cede6b4b3de0734bcc8c198c10226188 | [
"MIT"
] | 5 | 2020-12-11T03:18:32.000Z | 2022-02-14T10:32:09.000Z | r"""
=======
quantnn
=======
The quantnn package provides functionality for probabilistic modeling and prediction
using deep neural networks.
The two main features of the quantnn package are implemented by the
:py:class:`~quantnn.qrnn.QRNN` and :py:class:`~quantnn.qrnn.DRNN` classes, which implement
quantile regression neural networks (QRNNs) and density regression neural networks (DRNNs),
respectively.
The modules :py:mod:`quantnn.quantiles` and :py:mod:`quantnn.density` provide generic
(backend agnostic) functions to manipulate probabilistic predictions.
"""
import logging as _logging
import os
from rich.logging import RichHandler
from quantnn.neural_network_model import set_default_backend, get_default_backend
from quantnn.qrnn import QRNN
from quantnn.drnn import DRNN
from quantnn.quantiles import (
cdf,
pdf,
posterior_mean,
probability_less_than,
probability_larger_than,
sample_posterior,
sample_posterior_gaussian,
quantile_loss,
)
_LOG_LEVEL = os.environ.get("QUANTNN_LOG_LEVEL", "WARNING").upper()
_logging.basicConfig(
level=_LOG_LEVEL, format="%(message)s", datefmt="[%X]", handlers=[RichHandler()]
)
| 29.871795 | 91 | 0.775107 | 150 | 1,165 | 5.866667 | 0.54 | 0.05 | 0.038636 | 0.040909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128755 | 1,165 | 38 | 92 | 30.657895 | 0.866995 | 0.480687 | 0 | 0 | 0 | 0 | 0.065327 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
18323906f8da6c858e162af77f828aa7dc3d5141 | 1,314 | py | Python | leetcode/445.Add_Two_Numbers_II/python/add_two_numbers_v1.py | realXuJiang/research_algorithms | 8f2876288cb607b9eddb2aa75f51a1d574b51ec4 | [
"Apache-2.0"
] | 1 | 2019-08-12T09:32:30.000Z | 2019-08-12T09:32:30.000Z | leetcode/445.Add_Two_Numbers_II/python/add_two_numbers_v1.py | realXuJiang/research_algorithms | 8f2876288cb607b9eddb2aa75f51a1d574b51ec4 | [
"Apache-2.0"
] | null | null | null | leetcode/445.Add_Two_Numbers_II/python/add_two_numbers_v1.py | realXuJiang/research_algorithms | 8f2876288cb607b9eddb2aa75f51a1d574b51ec4 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
#-*- coding: utf-8 -*-
class ListNode(object):
def __init__(self, x):
self.val = x
self.next = None
class TwoNumbers(object):
@staticmethod
def builderListNode(nums):
if nums is not None:
head = ListNode(str(nums)[0])
a = head
for i in str(nums)[1:]:
b = ListNode(i)
a.next = b
a = a.next
return head
@staticmethod
def addTwoNumbers(n1, n2):
s1 = ""
s2 = ""
while n1 is not None:
s1 += str(n1.val)
n1 = n1.next
while n2 is not None:
s2 += str(n2.val)
n2 = n2.next
summation = str(int(s1) + int(s2))
head = ListNode(summation[0])
temp = head
for val in summation[1:]:
temp.next = ListNode(val)
temp = temp.next
return head
@staticmethod
def printLS(node):
if not node:
return None
res = ''
while node:
res += str(node.val) + ' -> '
node = node.next
print res
if __name__ == "__main__":
tn = TwoNumbers()
l1 = tn.builderListNode(1234)
l2 = tn.builderListNode(34)
tn.printLS(tn.addTwoNumbers(l1, l2))
| 22.655172 | 42 | 0.47793 | 153 | 1,314 | 4.026144 | 0.352941 | 0.073052 | 0.043831 | 0.084416 | 0.094156 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039897 | 0.408676 | 1,314 | 57 | 43 | 23.052632 | 0.752896 | 0.031202 | 0 | 0.108696 | 0 | 0 | 0.009441 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.065217 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
183ecccecd1a87d9ecdaf239b0b8acab5f9e8ed2 | 6,888 | py | Python | gamble/gamble.py | lookma/simple-coin-gamble | 8f1684e62b62f28a176458606ed193c812d97bc7 | [
"MIT"
] | null | null | null | gamble/gamble.py | lookma/simple-coin-gamble | 8f1684e62b62f28a176458606ed193c812d97bc7 | [
"MIT"
] | null | null | null | gamble/gamble.py | lookma/simple-coin-gamble | 8f1684e62b62f28a176458606ed193c812d97bc7 | [
"MIT"
] | null | null | null | from random import randint
from typing import Callable, List, Optional


class Coin:
    """Simulates a coin."""

    def __init__(self) -> None:
        self.__head = False
        self.__toss_count = 0
        self.__head_count = 0

    def toss(self) -> None:
        """Toss a coin."""
        r = randint(1, 2)
        self.__head = True if r == 1 else False
        self.__toss_count += 1
        if self.__head:
            self.__head_count += 1

    def get_head_percentage(self) -> float:
        """Returns the percentage of heads relative to the total number of coin tosses."""
        return self.__head_count * 100 / self.__toss_count

    def is_head(self) -> bool:
        """Check if the coin shows heads."""
        return self.__head

    def get_head_count(self) -> int:
        """Return the number of tossed heads."""
        return self.__head_count

    def get_toss_count(self) -> int:
        """Return the number of tosses."""
        return self.__toss_count


class Player:
    def __init__(self, name: str, bet_amount: float) -> None:
        self.__name = name
        self.__amounts = [bet_amount]

    @property
    def name(self) -> str:
        """Name of the player."""
        return self.__name

    @property
    def is_winner(self) -> bool:
        """
        Check if the player is a winner.

        If the current amount of a player is greater than or equal to the
        initial amount, the player is a winner.
        """
        return self.__amounts[-1] >= self.__amounts[0]

    @property
    def is_total_loss(self) -> bool:
        """
        Check if the player lost everything.

        It is assumed that a total loss occurs if the amount drops below 1%
        of the initial bet.
        """
        return self.amount < self.amounts[0] / 100

    @property
    def amount(self) -> float:
        """Current amount of the player."""
        return self.__amounts[-1]

    @property
    def amounts(self) -> List[float]:
        """
        The amounts for all rounds of the player.

        The initial amount (bet) is stored at index 0.
        """
        return self.__amounts

    def add_new_amount(self, amount: float) -> None:
        self.__amounts.append(amount)


class RoundResults:
    def __init__(self, players: List[Player]) -> None:
        self.__total_amounts: List[float] = []
        self.__number_of_winners: List[int] = []
        self.__number_of_losers: List[int] = []
        self.__number_of_total_losses: List[int] = []
        self.__winner_percentages: List[float] = []
        self.__min_amounts: List[float] = []
        self.__max_amounts: List[float] = []
        self.__avg_amounts: List[float] = []
        self.add_round(players)

    def add_round(self, players: List[Player]) -> None:
        total_amount = 0
        number_of_winners = 0
        number_of_total_losses = 0
        min_amount = players[0].amount
        max_amount = 0
        for player in players:
            total_amount += player.amount
            min_amount = min(player.amount, min_amount)
            max_amount = max(player.amount, max_amount)
            if player.is_winner:
                number_of_winners += 1
            if player.is_total_loss:
                number_of_total_losses += 1
        winner_percentage = number_of_winners * 100 / len(players)
        self.__total_amounts.append(total_amount)
        self.__number_of_winners.append(number_of_winners)
        self.__number_of_losers.append(len(players) - number_of_winners)
        self.__number_of_total_losses.append(number_of_total_losses)
        self.__winner_percentages.append(winner_percentage)
        self.__min_amounts.append(min_amount)
        self.__max_amounts.append(max_amount)
        self.__avg_amounts.append(total_amount / len(players))

    @property
    def number_of_rounds(self) -> int:
        return len(self.__total_amounts) - 1

    @property
    def total_amounts(self) -> List[float]:
        return self.__total_amounts

    @property
    def avg_amounts(self) -> List[float]:
        return self.__avg_amounts

    @property
    def number_of_winners(self) -> List[int]:
        return self.__number_of_winners

    @property
    def number_of_losers(self) -> List[int]:
        return self.__number_of_losers

    @property
    def number_of_total_losses(self) -> List[int]:
        return self.__number_of_total_losses

    @property
    def winner_percentages(self) -> List[float]:
        return self.__winner_percentages

    @property
    def min_amounts(self) -> List[float]:
        return self.__min_amounts

    @property
    def max_amounts(self) -> List[float]:
        return self.__max_amounts


class Gamble:
    def __init__(
        self,
        name: str,
        number_of_players: int,
        number_of_rounds: int,
        bet_amount: float,
        gain_percentage: int,
        loss_percentage: int,
    ) -> None:
        assert number_of_players > 0
        assert number_of_rounds > 0
        assert bet_amount > 0
        assert gain_percentage >= 0
        assert 0 <= loss_percentage <= 100
        self.__coin = Coin()
        self.__name: str = name
        self.__gain_factor: float = 1.0 + gain_percentage / 100.0
        self.__loss_factor: float = 1.0 - loss_percentage / 100.0
        self.__number_of_rounds: int = number_of_rounds
        self.__progress_callback: Optional[Callable[[str, int, int], None]] = None
        self.__players: List[Player] = []
        for i in range(1, number_of_players + 1):
            self.__players.append(Player(name="p" + str(i), bet_amount=bet_amount))
        self.__round_results = RoundResults(self.__players)

    def set_progress_callback(self, callback: Callable[[str, int, int], None]) -> None:
        self.__progress_callback = callback

    @property
    def name(self) -> str:
        return self.__name

    def _apply_rule(self, amount: float) -> float:
        self.__coin.toss()
        amount = (
            amount * self.__gain_factor
            if self.__coin.is_head()
            else amount * self.__loss_factor
        )
        return round(amount, 2)

    def _play_round(self, round_index: int) -> None:
        for player in self.__players:
            player.add_new_amount(self._apply_rule(player.amount))

    def play(self) -> None:
        for index in range(1, self.__number_of_rounds + 1):
            self._play_round(index)
            self.__round_results.add_round(self.__players)
            if self.__progress_callback:
                self.__progress_callback(self.name, index, self.__number_of_rounds)

    @property
    def results(self) -> RoundResults:
        return self.__round_results

    @property
    def players(self) -> List[Player]:
        return self.__players

    @property
    def max_amount(self) -> float:
        return max(self.results.max_amounts)
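The multiplicative rule in `_apply_rule` has a notable property: because gains and losses are factors applied to the running amount, a win followed by a loss shrinks the stake whenever `gain_factor * loss_factor < 1`. This standalone sketch mirrors that arithmetic with illustrative parameters (not taken from the original project):

```python
# Mirrors the arithmetic of Gamble._apply_rule with illustrative parameters.
gain_percentage, loss_percentage = 50, 40

gain_factor = 1.0 + gain_percentage / 100.0   # 1.5 applied on heads
loss_factor = 1.0 - loss_percentage / 100.0   # 0.6 applied on tails

amount = 100.0
amount = round(amount * gain_factor, 2)   # heads: 150.0
amount = round(amount * loss_factor, 2)   # tails: 90.0

# One head plus one tail multiplies the stake by 1.5 * 0.6 = 0.9, so even a
# fair 50/50 coin erodes the typical player's amount over many rounds.
assert amount == 90.0
```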
# setup.py (repo: rif/imgdup, MIT license)
from setuptools import setup, find_packages
setup(
    name="imgdup",
    version="1.3",
    packages=find_packages(),
    scripts=['imgdup.py'],
    install_requires=['pillow>=2.8.1'],
    # metadata for upload to PyPI
    author="Radu Ioan Fericean",
    author_email="radu@fericean.ro",
    description="Visual similarity image finder and cleaner (image deduplication tool)",
    license="MIT",
    keywords="deduplication duplicate images image visual finder",
    url="https://github.com/rif/imgdup",  # project home page, if any
)
# humfrey/update/utils.py (repo: ox-it/humfrey, BSD-3-Clause license)
from django.conf import settings
from django.utils.importlib import import_module  # removed in Django 1.9+; stdlib importlib serves there

from humfrey.update.transform.base import Transform


def get_transforms():
    try:
        return get_transforms._cache
    except AttributeError:
        pass
    transforms = {'__builtins__': {}}
    for class_path in settings.UPDATE_TRANSFORMS:
        module_path, class_name = class_path.rsplit('.', 1)
        transform = getattr(import_module(module_path), class_name)
        assert issubclass(transform, Transform)
        transforms[transform.__name__] = transform
    get_transforms._cache = transforms
    return transforms


def evaluate_pipeline(pipeline):
    return eval('(%s)' % pipeline, get_transforms())
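`evaluate_pipeline` works by `eval`-ing the pipeline string against the transform registry; because the registry's `__builtins__` entry is an empty dict, only registered transform names resolve inside the string. A minimal standalone sketch of that idea (the `upper` function is a hypothetical stand-in for a registered Transform):

```python
# Minimal sketch of eval with restricted globals, as used by evaluate_pipeline.
def upper(s):
    return s.upper()          # stand-in for a registered Transform


namespace = {'__builtins__': {}, 'upper': upper}

# Only names present in `namespace` are resolvable inside the pipeline string.
result = eval("(upper('abc'))", namespace)
assert result == 'ABC'

# Builtins such as open() are not reachable from the evaluated string.
try:
    eval("(open('/etc/passwd'))", namespace)
except NameError:
    pass
```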
# src/python/autotransform/input/empty.py (repo: nathro/AutoTransform, MIT license)
# AutoTransform
# Large scale, component based code modification library
#
# Licensed under the MIT License <http://opensource.org/licenses/MIT>
# SPDX-License-Identifier: MIT
# Copyright (c) 2022-present Nathan Rockenbach <http://github.com/nathro>
# @black_format
"""The implementation for the DirectoryInput."""
from __future__ import annotations
from typing import ClassVar, Sequence
from autotransform.input.base import Input, InputName
from autotransform.item.base import Item
class EmptyInput(Input):
"""An Input that simply returns an empty list. Used when a Transformer operates
on the whole codebase, rather than on an individual Item/set of Items.
Attributes:
name (ClassVar[InputName]): The name of the component.
"""
name: ClassVar[InputName] = InputName.EMPTY
def get_items(self) -> Sequence[Item]:
"""Returns an empty list of Items, useful for Transformers that operate
on the whole codebase at once.
Returns:
Sequence[Item]: An empty list of Items.
"""
return []
#!/usr/bin/python
# hard-gists/749857/snippet.py (repo: jjhenkel/dockerizeme, Apache-2.0 license)
# -*- coding: utf-8 -*-
# launchctl unload /System/Library/LaunchDaemons/com.apple.syslogd.plist
# launchctl load /System/Library/LaunchDaemons/com.apple.syslogd.plist
from twisted.internet import reactor, stdio, defer
from twisted.internet.protocol import Protocol, Factory
from twisted.protocols.basic import LineReceiver
import time, re, math, json

# <22>Nov 1 00:12:04 gleicon-vm1 postfix/smtpd[4880]: connect from localhost[127.0.0.1]
severity = ['emerg', 'alert', 'crit', 'err', 'warn', 'notice', 'info', 'debug']
facility = ['kern', 'user', 'mail', 'daemon', 'auth', 'syslog', 'lpr', 'news',
            'uucp', 'cron', 'authpriv', 'ftp', 'ntp', 'audit', 'alert', 'at',
            'local0', 'local1', 'local2', 'local3', 'local4', 'local5',
            'local6', 'local7']

fs_match = re.compile("<(.+)>(.*)", re.I)


class SyslogdProtocol(LineReceiver):
    delimiter = '\n'

    def connectionMade(self):
        print('Connection from %r' % self.transport)

    def lineReceived(self, line):
        k = {}
        k['line'] = line.strip()
        (fac, sev) = self._calc_lvl(k['line'])
        k['host'] = self.transport.getHost().host
        k['tstamp'] = time.time()
        k['facility'] = fac
        k['severity'] = sev
        print(json.dumps(k))

    def _calc_lvl(self, line):
        lvl = fs_match.split(line)
        if lvl and len(lvl) > 1:
            i = int(lvl[1])
            fac = int(math.floor(i / 8))
            sev = i - (fac * 8)
            return (facility[fac], severity[sev])
        return (None, None)


class SyslogdFactory(Factory):
    protocol = SyslogdProtocol


def main():
    factory = SyslogdFactory()
    reactor.listenTCP(25000, factory, 10)
    reactor.run()


if __name__ == '__main__':
    main()
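The arithmetic in `_calc_lvl` decodes a BSD-syslog (RFC 3164) priority value as `facility * 8 + severity`. This standalone sketch reproduces it for the `<22>` in the sample log line quoted in the comment above:

```python
# Standalone sketch of the priority decoding performed by _calc_lvl.
import math

severity = ['emerg', 'alert', 'crit', 'err', 'warn', 'notice', 'info', 'debug']
facility = ['kern', 'user', 'mail', 'daemon', 'auth', 'syslog', 'lpr', 'news',
            'uucp', 'cron', 'authpriv', 'ftp', 'ntp', 'audit', 'alert', 'at',
            'local0', 'local1', 'local2', 'local3', 'local4', 'local5',
            'local6', 'local7']


def decode_priority(priority):
    """Split an RFC 3164 priority integer into (facility, severity) names."""
    fac = int(math.floor(priority / 8))
    sev = priority - fac * 8
    return facility[fac], severity[sev]


# <22> from the sample line: 22 = 2 * 8 + 6, i.e. mail.info
assert decode_priority(22) == ('mail', 'info')
```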
# exercicios/ex090.py (repo: Siqueira-Vinicius/Python, MIT license)
aluno = {}
aluno['nome'] = str(input('Digite o nome do aluno: '))
aluno['media'] = float(input('Digite a média desse aluno: '))
if aluno['media'] >= 5:
    aluno['situação'] = '\033[32mAprovado\033[m'
else:
    aluno['situação'] = '\033[31mReprovado\033[m'
for k, v in aluno.items():
    print(f'{k} do aluno é {v}')
# project/main/migrations/0003_auto_20200504_1852.py (repo: Leeoku/MovieCrud, MIT license)
# Generated by Django 3.0.4 on 2020-05-04 18:52
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0002_auto_20200323_0327'),
    ]

    operations = [
        migrations.AlterField(
            model_name='movieentry',
            name='date_watched',
            field=models.DateField(blank=True, null=True),
        ),
    ]
# misc/redirector.py (repo: ktan2020/tooling, MIT license)
import SimpleHTTPServer
import SocketServer
import sys
from optparse import OptionParser


class myHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    def do_GET(self):
        print(self.path)
        self.send_response(301)
        new_path = 'http://%s%s' % (o.ip, self.path)
        self.send_header('Location', new_path)
        self.end_headers()


p = OptionParser()
p.add_option("--ip", dest="ip")
p.add_option("--port", dest="port", type=int, default=8080)
(o, p) = p.parse_args()

if o.ip is None:
    print("XXX FATAL : IP address to redirect to is mandatory! XXX")
    sys.exit(1)

handler = SocketServer.TCPServer(("", o.port), myHandler)
print("serving at port %s" % o.port)
handler.serve_forever()
# setup.py (repo: AnacletoLAB/grape, MIT license)
import os
import re

# To use a consistent encoding
from codecs import open as copen
from os import path

from setuptools import find_packages, setup

here = path.abspath(path.dirname(__file__))

# Get the long description from the relevant file
with copen(path.join(here, 'README.rst'), encoding='utf-8') as f:
    long_description = f.read()


def read(*parts):
    with copen(os.path.join(here, *parts), 'r') as fp:
        return fp.read()


def find_version(*file_paths):
    version_file = read(*file_paths)
    version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]",
                              version_file, re.M)
    if version_match:
        return version_match.group(1)
    raise RuntimeError("Unable to find version string.")


__version__ = find_version("grape", "__version__.py")

test_deps = []

# TODO: Authors add your emails!!!
authors = {
    "Luca Cappelletti": "luca.cappelletti1@unimi.it",
    "Tommaso Fontana": "tommaso.fontana@mail.polimi.it",
    "Vida Ravanmehr": "vida.ravanmehr@jax.org",
    "Peter Robinson": "peter.robinson@jax.org",
}

setup(
    name='grape',
    version=__version__,
    description="Rust/Python for high performance Graph Processing and Embedding.",
    long_description=long_description,
    url="https://github.com/AnacletoLAB/grape",
    author=", ".join(list(authors.keys())),
    author_email=", ".join(list(authors.values())),
    # Choose your license
    license='MIT',
    include_package_data=True,
    classifiers=[
        'Development Status :: 3 - Alpha',
        'License :: OSI Approved :: MIT License',
        'Programming Language :: Python :: 3'
    ],
    packages=find_packages(exclude=['contrib', 'docs', 'tests*']),
    tests_require=test_deps,
    install_requires=[
        "ensmallen==0.7.0.dev6",
        "embiggen==0.10.0.dev2",
    ]
)
# NorthernLights/shapes/BaseShape.py (repo: jgillick/coffeetable-programs, MIT license)
import time
# Colors
RED = (1, 0, 0)
YELLOW = (1, 1, 0)
GREEN = (0, 1, 0)
CYAN = (0, 1, 1)
BLUE = (0, 0, 1)
PURPLE = (1, 0, 1)


class BaseShape:
    # A list of instance attribute names, which are animatable objects
    animatable_attrs = []

    # The time of the last animation update
    last_update = None

    # The number of LEDs in the strip
    led_count = 0

    # The color index we're setting (red: 0, green: 1, blue: 2)
    color = 0

    def __init__(self, led_count, color, time):
        self.led_count = led_count
        self.color = color
        self.last_update = time

    def update(self, now):
        """ Updates the shape animatable attributes. """
        elapsed = now - self.last_update
        print(elapsed)
        is_animating = False
        for anim_attr in self.animatable_attrs:
            anim = getattr(self, anim_attr)
            ret = anim.update(elapsed)
            if ret:
                is_animating = True
        self.last_update = now
        return is_animating

    def __len__(self):
        return self.led_count

    def __getitem__(self, key):
        return (0, 0, 0)

    def __setitem__(self, key, value):
        """ Cannot set pixel item. """
        pass

    def __delitem__(self, key):
        """ Cannot delete pixel color. """
        pass
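The `update` loop above is the whole animation contract: each attribute named in `animatable_attrs` must expose an `update(elapsed)` method returning whether it is still running, and the shape reports `True` while any of them is. A standalone sketch of that contract with a hypothetical stand-in animation:

```python
class FakeAnim:
    """Hypothetical animatable object: counts down a remaining duration."""

    def __init__(self, remaining):
        self.remaining = remaining

    def update(self, elapsed):
        self.remaining -= elapsed
        return self.remaining > 0


anims = [FakeAnim(1.0), FakeAnim(0.2)]
elapsed = 0.5

# Update every animation (as BaseShape.update does), then combine the flags.
flags = [a.update(elapsed) for a in anims]
assert flags == [True, False]   # the 1.0s animation has time left, 0.2s finished
assert any(flags)               # so the shape as a whole is still animating
```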
# teachers_toolkit/grading_system/migrations/0003_auto_20180706_1923.py (repo: luiscberrocal/teachers_toolkit, MIT license)
# Generated by Django 2.0.7 on 2018-07-06 19:23
from django.db import migrations, models
import django_extensions.db.fields


class Migration(migrations.Migration):

    dependencies = [
        ('grading_system', '0002_student_email'),
    ]

    operations = [
        migrations.RenameField(
            model_name='assignment',
            old_name='assingment_date',
            new_name='assignment_date',
        ),
        migrations.AddField(
            model_name='course',
            name='slug',
            field=django_extensions.db.fields.AutoSlugField(blank=True, editable=False, populate_from=models.CharField(max_length=60)),
        ),
        migrations.AlterField(
            model_name='assignmentresult',
            name='grade',
            field=models.DecimalField(decimal_places=2, default=0.0, max_digits=5),
        ),
    ]
# etravel/urls.py (repo: zahir1509/project-ap-etravel, MIT license)
"""etravel URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
    https://docs.djangoproject.com/en/3.1/topics/http/urls/
Examples:
Function views
    1. Add an import: from my_app import views
    2. Add a URL to urlpatterns: path('', views.home, name='home')
Class-based views
    1. Add an import: from other_app.views import Home
    2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
    1. Import the include() function: from django.urls import include, path
    2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
from django.contrib import admin
from django.urls import path, include
from django.conf.urls.static import static
from django.conf import settings

from main import views

urlpatterns = [
    path('admin/', admin.site.urls),
    path('', views.homepage, name='home'),
    path('login/', views.loginPage, name='login'),
    path('logout/', views.logoutUser, name='logout'),
    path('signup/', views.signupPage, name='signup'),
    path('browsehotel/', views.filterhotel, name='browsehotel'),
    path('myaccount/', views.accountpage, name='myaccount'),
    path('editprofile/', views.edit_profile, name='editprofile'),
    path('change-password/', views.change_password, name='editpassword'),
    path('hotel_booking/', views.bookhotel, name='bookhotel'),
    path('hotel/<int:hotel_id>', views.hotelpage, name='hotelpage'),
    path('cancelbooking/', views.cancelbooking, name='cancelbooking'),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
# exs/mundo_2/python/067.py (repo: QuatroQuatros/exercicios-CeV, MIT license)
"""
Challenge 067
Problem: Write a program that shows the multiplication table of several
numbers, one at a time, for each value typed by the user.
The program stops when the requested number is negative.
Solution:
"""
print('-' * 20)
print(f'{" Tabuada v3.0 ":~^20}')
print('-' * 20)
while True:
    tabuada = int(input('Tabuada desejada: '))
    print('-' * 20)
    if tabuada < 0:
        break
    for cont in range(0, 11):
        print(f'{tabuada} x {cont:2} = {tabuada * cont:2}')
    print('-' * 20)
print(f'{" TABUADA FINALIZADA ":~^30}\nFOI UM PRAZER AJUDA-LO!!!')
# wave/synth/wave/wave/base/curve.py (repo: jedhsu/wave, Apache-2.0 license)
from dataclasses import dataclass
from typing import Generic, Mapping, TypeVar

__all__ = ["Curve"]

T = TypeVar("T")
U = TypeVar("U")


@dataclass
class _Curve(Generic[T, U]):
    mapping: Mapping[T, U]


# __all__ declares "Curve", but only _Curve is defined above; expose a public
# alias so the declared export actually resolves.
Curve = _Curve
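A quick standalone sketch of how such a generic mapping dataclass behaves in use; `DemoCurve` is an illustrative stand-in for the class above, not part of the original module.

```python
from dataclasses import dataclass
from typing import Generic, Mapping, TypeVar

T = TypeVar("T")
U = TypeVar("U")


@dataclass
class DemoCurve(Generic[T, U]):
    """Illustrative mirror of the curve dataclass: a typed key/value mapping."""
    mapping: Mapping[T, U]


curve = DemoCurve(mapping={0.0: 0.0, 1.0: 2.5})
assert curve.mapping[1.0] == 2.5
# @dataclass also supplies field-wise equality for free.
assert DemoCurve(mapping={}) == DemoCurve(mapping={})
```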
# setup.py (repo: mehta-lab/recOrder, Unlicense)
import os.path as osp
from setuptools import setup, find_packages

# todo: modify as we decide on versions, names, descriptions. readme
MIN_PY_VER = '3.7'
DISTNAME = 'recOrder'
DESCRIPTION = 'computational microscopy toolkit for label-free imaging'

with open("README.md", "r") as fh:
    LONG_DESCRIPTION = fh.read()
LONG_DESCRIPTION_content_type = "text/markdown"
LICENSE = 'Chan Zuckerberg Biohub Software License'
INSTALL_REQUIRES = ['numpy', 'scipy', 'matplotlib', 'pycromanager']
REQUIRES = []

# todo: modify for python dependency
CLASSIFIERS = [
    'License :: OSI Approved :: BSD License',
    'Programming Language :: Python',
    'Programming Language :: Python :: 3 :: Only',
    'Programming Language :: Python :: 3.6',
    'Programming Language :: Python :: 3.7',
    'Topic :: Scientific/Engineering',
    'Topic :: Scientific/Engineering :: Visualization',
    'Topic :: Scientific/Engineering :: Information Analysis',
    'Topic :: Scientific/Engineering :: Bio-Informatics',
    'Topic :: Utilities',
    'Operating System :: Microsoft :: Windows',
    'Operating System :: POSIX',
    'Operating System :: Unix',
    'Operating System :: MacOS'
]

# populate packages
PACKAGES = [package for package in find_packages()]

# parse requirements
with open(osp.join('requirements', 'default.txt')) as f:
    requirements = [line.strip() for line in f
                    if line and not line.startswith('#')]

# populate requirements
for line in requirements:
    sep = line.split(' #')
    INSTALL_REQUIRES.append(sep[0].strip())
    if len(sep) == 2:
        REQUIRES.append(sep[1].strip())

if __name__ == '__main__':
    setup(
        name=DISTNAME,
        description=DESCRIPTION,
        long_description=LONG_DESCRIPTION,
        long_description_content_type=LONG_DESCRIPTION_content_type,
        license=LICENSE,
        version="0.0.1",
        classifiers=CLASSIFIERS,
        install_requires=INSTALL_REQUIRES,
        python_requires=f'>={MIN_PY_VER}',
        dependency_links=['https://github.com/mehta-lab/waveorder.git#egg=waveorder'],
        packages=PACKAGES,
        include_package_data=True,
        entry_points={
            'console_scripts': [
                'recOrder.reconstruct = recOrder.cli_module:main',
                'recOrder.convert = scripts.convert_tiff_to_zarr:main'
            ]
        }
    )
a11034c8715f1c4364caa1c40989aaba6b81cecc | 2,983 | py | Python | codango/account/api.py | NdagiStanley/silver-happiness | 67fb6dd4047c603a84276f88a021d4489cf3b41e | [
"MIT"
] | 2 | 2019-10-17T01:03:12.000Z | 2021-11-24T07:43:14.000Z | codango/account/api.py | NdagiStanley/silver-happiness | 67fb6dd4047c603a84276f88a021d4489cf3b41e | [
"MIT"
] | 49 | 2019-09-05T02:48:04.000Z | 2021-06-28T02:29:42.000Z | codango/account/api.py | NdagiStanley/silver-happiness | 67fb6dd4047c603a84276f88a021d4489cf3b41e | [
"MIT"
] | 1 | 2021-11-25T10:19:27.000Z | 2021-11-25T10:19:27.000Z | import psycopg2
from rest_framework import generics, permissions
from serializers import UserSerializer, UserFollowSerializer, UserSettingsSerializer
from serializers import AllUsersSerializer, UserRegisterSerializer
from userprofile import serializers, models
from django.contrib.auth.models import User
class IsOwner(permissions.BasePermission):
    """
    Variant of IsOwnerOrReadOnly(permissions.BasePermission) that raises an
    API exception instead of falling back to read-only access: we do not
    want read-only access here.
    """

    def has_object_permission(self, request, view, obj):
        # Anonymous users have no id and can never match, so only the
        # authenticated owner of the object is allowed through.
        return obj.id == request.user.id
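DRF calls `has_object_permission` after the object has been fetched, so the ownership test reduces to an id comparison. A toy sketch with stand-in objects (no Django required; the `SimpleNamespace` stubs are assumptions used only to illustrate the check):

```python
from types import SimpleNamespace


def has_object_permission(request, view, obj):
    # Mirror of IsOwner: the requesting user must be the object's owner.
    return obj.id == request.user.id


owner = SimpleNamespace(user=SimpleNamespace(id=7))      # same id as the object
stranger = SimpleNamespace(user=SimpleNamespace(id=9))   # different id
profile = SimpleNamespace(id=7)                          # the protected object

print(has_object_permission(owner, None, profile))     # True
print(has_object_permission(stranger, None, profile))  # False
```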
class UserListAPIView(generics.ListAPIView):
    """For /api/v1/users/ url path"""
    queryset = User.objects.all()
    serializer_class = AllUsersSerializer
    permission_classes = (permissions.IsAdminUser,)


class UserDetailAPIView(generics.RetrieveUpdateAPIView):
    """For /api/v1/users/<id> url path"""
    queryset = User.objects.all()
    serializer_class = UserSerializer
    permission_classes = (IsOwner, )


class UserRegisterAPIView(generics.CreateAPIView):
    """For /api/v1/auth/register url path"""
    permission_classes = (permissions.AllowAny,)
    queryset = User.objects.all()
    serializer_class = UserRegisterSerializer


class UserLogoutAPIView(generics.UpdateAPIView):
    """For /api/v1/auth/logout url path"""
    queryset = User.objects.all()
    serializer_class = UserSerializer
    permission_classes = (IsOwner, )
class UserFollowAPIView(generics.CreateAPIView):
    """
    For api/v1/users/<pk>/follow/ url path.
    Lets the authenticated user follow another user.
    """
    serializer_class = UserFollowSerializer

    def get_queryset(self):
        # Despite the name, this returns the single user to be followed.
        return User.objects.filter(id=self.kwargs['pk']).first()

    def perform_create(self, serializer):
        self.user = User.objects.filter(id=self.request.user.id).first()
        try:
            models.Follow.objects.create(
                follower=self.user, followed=self.get_queryset())
            return {"message":
                    "You have followed user '{}'".format(
                        self.get_queryset())}, 201
        except Exception:
            raise serializers.serializers.ValidationError(
                'You have already followed this person')
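`perform_create` relies on the database rejecting a duplicate (follower, followed) pair and converts that failure into a `ValidationError`. A minimal in-memory sketch of that contract (the `follows` set is an assumption standing in for the `Follow` table's uniqueness constraint):

```python
follows = set()  # in-memory stand-in for the Follow table's unique pair


class ValidationError(Exception):
    """Stand-in for the DRF serializer ValidationError."""


def follow(follower_id, followed_id):
    pair = (follower_id, followed_id)
    if pair in follows:  # the DB unique constraint, simulated
        raise ValidationError('You have already followed this person')
    follows.add(pair)
    return {'message': "You have followed user '{}'".format(followed_id)}, 201


print(follow(1, 2))   # first follow succeeds with a 201-style response
try:
    follow(1, 2)      # duplicate raises, mirroring the API view
except ValidationError as e:
    print(e)
```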
class UserSettingsAPIView(generics.RetrieveUpdateAPIView):
    """
    For api/v1/users/<pk>/settings/ url path.
    Lets a user update their settings: update frequency, GitHub
    account, and image.
    """
    queryset = models.UserSettings.objects.all()
    serializer_class = UserSettingsSerializer
    permission_classes = (IsOwner,)
# --- scholariumat/products/migrations/0012_auto_20181125_1221.py (repo: valuehack/scholariumat, MIT license) ---
# Generated by Django 2.0.9 on 2018-11-25 11:21
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('products', '0011_auto_20181123_1446'),
    ]

    operations = [
        migrations.RenameField(
            model_name='item',
            old_name='amount',
            new_name='_amount',
        ),
        migrations.AddField(
            model_name='itemtype',
            name='default_amount',
            field=models.SmallIntegerField(blank=True, null=True),
        ),
    ]
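Renaming `amount` to `_amount` while adding a type-level `default_amount` is a common pattern for backing a Python property: the underscored column stores an explicit value, and the type's default fills in when it is unset. A plain-Python sketch of that likely intent (the `amount` property itself is an assumption; it does not appear in the migration):

```python
class ItemType:
    def __init__(self, default_amount=None):
        self.default_amount = default_amount


class Item:
    def __init__(self, type, _amount=None):
        self.type = type
        self._amount = _amount  # the renamed column from the migration

    @property
    def amount(self):
        # Fall back to the type's default when no explicit amount is stored.
        if self._amount is not None:
            return self._amount
        return self.type.default_amount


t = ItemType(default_amount=5)
print(Item(t)._amount)            # None: nothing stored on the item
print(Item(t).amount)             # 5: falls back to default_amount
print(Item(t, _amount=3).amount)  # 3: an explicit value wins
```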