hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
adc584e532d7589b2fd03d5d9ce9d7eacd185384 | 4,654 | py | Python | tests/filtered_datapackage.py | brightway-lca/bw_processing | c765698928700a19979b382b1d8dfa0d5685b8df | [
"BSD-3-Clause"
] | 1 | 2021-02-19T13:55:22.000Z | 2021-02-19T13:55:22.000Z | tests/filtered_datapackage.py | brightway-lca/bw_processing | c765698928700a19979b382b1d8dfa0d5685b8df | [
"BSD-3-Clause"
] | 11 | 2020-04-15T08:35:02.000Z | 2021-11-01T20:52:32.000Z | tests/filtered_datapackage.py | brightway-lca/bw_processing | c765698928700a19979b382b1d8dfa0d5685b8df | [
"BSD-3-Clause"
] | 1 | 2021-07-05T12:14:17.000Z | 2021-07-05T12:14:17.000Z | from pathlib import Path
import numpy as np
import pytest
from fs.osfs import OSFS
from fs.zipfs import ZipFS
from bw_processing import (
INDICES_DTYPE,
UNCERTAINTY_DTYPE,
create_datapackage,
load_datapackage,
)
dirpath = Path(__file__).parent.resolve() / "fixtures"
def test_metadata_is_the_same_object():
dp = load_datapackage(fs_or_obj=ZipFS(str(dirpath / "test-fixture.zip")))
fdp = dp.filter_by_attribute("matrix", "sa_matrix")
for k, v in fdp.metadata.items():
if k != "resources":
assert id(v) == id(dp.metadata[k])
for resource in fdp.resources:
assert any(obj for obj in dp.resources if obj is resource)
def test_fs_is_the_same_object():
dp = load_datapackage(fs_or_obj=ZipFS(str(dirpath / "test-fixture.zip")))
fdp = dp.filter_by_attribute("matrix", "sa_matrix")
assert fdp.fs is dp.fs
def test_indexer_is_the_same_object():
dp = load_datapackage(fs_or_obj=ZipFS(str(dirpath / "test-fixture.zip")))
dp.indexer = lambda x: False
fdp = dp.filter_by_attribute("matrix", "sa_matrix")
assert fdp.indexer is dp.indexer
def test_data_is_the_same_object_when_not_proxy():
dp = load_datapackage(fs_or_obj=ZipFS(str(dirpath / "test-fixture.zip")))
fdp = dp.filter_by_attribute("matrix", "sa_matrix")
arr1, _ = dp.get_resource("sa-data-array.data")
arr2, _ = fdp.get_resource("sa-data-array.data")
assert np.allclose(arr1, arr2)
assert arr1 is arr2
assert np.shares_memory(arr1, arr2)
def test_data_is_readable_multiple_times_when_proxy_zipfs():
dp = load_datapackage(
fs_or_obj=ZipFS(str(dirpath / "test-fixture.zip")), proxy=True
)
fdp = dp.filter_by_attribute("matrix", "sa_matrix")
arr1, _ = dp.get_resource("sa-data-array.data")
arr2, _ = fdp.get_resource("sa-data-array.data")
assert np.allclose(arr1, arr2)
assert arr1.base is not arr2
assert arr2.base is not arr1
assert not np.shares_memory(arr1, arr2)
def test_data_is_readable_multiple_times_when_proxy_directory():
dp = load_datapackage(fs_or_obj=OSFS(str(dirpath / "tfd")), proxy=True)
fdp = dp.filter_by_attribute("matrix", "sa_matrix")
arr1, _ = dp.get_resource("sa-data-array.data")
arr2, _ = fdp.get_resource("sa-data-array.data")
assert np.allclose(arr1, arr2)
assert arr1.base is not arr2
assert arr2.base is not arr1
assert not np.shares_memory(arr1, arr2)
def test_fdp_can_load_proxy_first():
dp = load_datapackage(
fs_or_obj=ZipFS(str(dirpath / "test-fixture.zip")), proxy=True
)
fdp = dp.filter_by_attribute("matrix", "sa_matrix")
arr2, _ = fdp.get_resource("sa-data-array.data")
arr1, _ = dp.get_resource("sa-data-array.data")
assert np.allclose(arr1, arr2)
assert arr1.base is not arr2
assert arr2.base is not arr1
assert not np.shares_memory(arr1, arr2)
@pytest.fixture
def erg():
dp = create_datapackage(
fs=None, name="frg-fixture", id_="something something danger zone"
)
data_array = np.arange(3)
indices_array = np.array([(0, 1), (2, 3), (4, 5)], dtype=INDICES_DTYPE)
flip_array = np.array([1, 0, 1], dtype=bool)
distributions_array = np.array(
[
(5, 1, 2, 3, 4, 5, False),
(4, 1, 2, 3, 4, 5, False),
(0, 1, 2, 3, 4, 5, False),
],
dtype=UNCERTAINTY_DTYPE,
)
dp.add_persistent_vector(
matrix="one",
data_array=data_array,
name="first",
indices_array=indices_array,
distributions_array=distributions_array,
nrows=3,
flip_array=flip_array,
)
dp.add_persistent_array(
matrix="two",
data_array=np.arange(12).reshape((3, 4)),
indices_array=indices_array,
name="second",
)
return dp
def test_exclude_is_same_fs(erg):
result = erg.exclude({"group": "first"})
assert erg.fs is result.fs
def test_exclude_resource_group(erg):
assert len(erg.resources) == 6
result = erg.exclude({"group": "first"})
assert len(result.resources) == 2
assert {obj["name"] for obj in result.resources} == {
"second.data",
"second.indices",
}
def test_exclude_resource_group_kind(erg):
assert len(erg.resources) == 6
result = erg.exclude({"group": "first", "kind": "distributions"})
assert len(result.resources) == 5
assert {obj["name"] for obj in result.resources} == {
"second.data",
"second.indices",
"first.data",
"first.indices",
"first.flip",
}
assert not any(obj.dtype == UNCERTAINTY_DTYPE for obj in result.data)
| 28.906832 | 77 | 0.658358 | 667 | 4,654 | 4.368816 | 0.166417 | 0.037062 | 0.040151 | 0.046671 | 0.598833 | 0.580302 | 0.546328 | 0.546328 | 0.532944 | 0.532944 | 0 | 0.020442 | 0.211646 | 4,654 | 160 | 78 | 29.0875 | 0.77378 | 0 | 0 | 0.390244 | 0 | 0 | 0.120756 | 0 | 0 | 0 | 0 | 0 | 0.219512 | 1 | 0.089431 | false | 0 | 0.04878 | 0 | 0.146341 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
adc6318681edd5a67c762f62f76d10d2b248efcc | 1,027 | py | Python | src/drivers/driver_manager.py | thirdshelf/qa-kom-framework | c04d05d4ef1aad580a0a1aeab601a3d0b2b5dbbb | [
"Apache-2.0"
] | null | null | null | src/drivers/driver_manager.py | thirdshelf/qa-kom-framework | c04d05d4ef1aad580a0a1aeab601a3d0b2b5dbbb | [
"Apache-2.0"
] | 1 | 2021-09-24T12:40:58.000Z | 2021-10-04T12:42:14.000Z | src/drivers/driver_manager.py | thirdshelf/qa-kom-framework | c04d05d4ef1aad580a0a1aeab601a3d0b2b5dbbb | [
"Apache-2.0"
] | null | null | null | from .drivers import Driver
from ... import kom_config
class DriverManager:
sessions = dict()
@classmethod
def __get_session_key(cls, page_object):
return page_object.get_session_key()
@classmethod
def get_session(cls, page_object):
if kom_config['multi_application_mode'] == 'True':
session_key = cls.__get_session_key(page_object)
return cls.sessions.get(session_key, None)
elif cls.sessions.keys():
return cls.sessions[next(iter(cls.sessions))]
return None
@classmethod
def create_session(cls, page_object, extensions):
session_key = cls.__get_session_key(page_object)
cls.sessions[session_key] = Driver(extensions).create_session()
return cls.sessions[session_key]
@classmethod
def destroy_session(cls, page_object):
if kom_config['multi_application_mode'] == 'True':
del cls.sessions[cls.__get_session_key(page_object)]
else:
cls.sessions.clear()
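# A minimal usage sketch (SomePage and its get_session_key() contract are
# illustrative assumptions, not part of this module):
#
#   page = SomePage()
#   driver = DriverManager.get_session(page)
#   if driver is None:
#       driver = DriverManager.create_session(page, extensions=[])
#   ...
#   DriverManager.destroy_session(page)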
| 30.205882 | 71 | 0.670886 | 124 | 1,027 | 5.217742 | 0.282258 | 0.15456 | 0.120556 | 0.092736 | 0.321484 | 0.321484 | 0.281298 | 0.281298 | 0.170015 | 0.170015 | 0 | 0 | 0.232717 | 1,027 | 33 | 72 | 31.121212 | 0.821066 | 0 | 0 | 0.307692 | 0 | 0 | 0.050633 | 0.042843 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.076923 | 0.038462 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
adc9bff944c06ece554c16c7455db5446d05f853 | 28,089 | py | Python | main.py | LonghornGaming/DiscordBot | 162c42b7893c87f30c06c829d0d1ba75d357e70b | [
"MIT"
] | null | null | null | main.py | LonghornGaming/DiscordBot | 162c42b7893c87f30c06c829d0d1ba75d357e70b | [
"MIT"
] | 22 | 2021-06-30T05:47:55.000Z | 2021-08-08T20:48:03.000Z | main.py | LonghornGaming/DiscordBot | 162c42b7893c87f30c06c829d0d1ba75d357e70b | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Thu Jul 9 15:26:45 2020
@author: ahirc
"""
import discord
from discord.ext import commands
import os
import json
import pymysql
import datetime
import random
import time
client = discord.Client()
messageCounter = 0
canClaim = False
claimEmoji = ""
claimMessage = ""
blacklist = []
# roles = {"messageId":}
with open("blacklist.txt", 'r') as openFile:
blacklist = json.loads(openFile.read())
# with open("roles.txt",'r') as openFile:
# roleMessages = json.loads(openFile.read())
def consoleLog(text: str) -> None:
with open("log.txt", "a") as outfile:
timeFormat = '%Y-%m-%d %H:%M:%S'
now = datetime.datetime.now()
ts = now.strftime(timeFormat)
outfile.write((str)(ts) + ": ")
outfile.write((str)(text))
outfile.write("\n")
async def checkCommands(message: discord.Message) -> None:
global messageCounter, canClaim, blacklist, claimMessage, claimEmoji
command = message.content[1:].split(" ")
base = command[0]
consoleLog((str)(message.author.id) + " used " + base)
guild = message.guild
roles = guild.roles
members = guild.members
emojis = guild.emojis
channel = message.channel
author = message.author
msg = ""
# enter switchcase for commands
if (base == "d4"): # ex: !d4
msg = "CrusherCake is a Hardstuck D4 Urgot Onetrick."
await channel.send(msg)
elif (base == "addRole"): # ex: !setRoleMessage emoji role rType
if (adminCheck(message.author)):
emoji = command[1]
role = command[2]
rType = command[3]
if (role not in roles):
roles[role] = {"emoji": emoji, "rType": rType}
with open("roles.txt", 'w') as openFile:
openFile.write(json.dumps(roles))
for role in roles:
msg += roles[role] + " " + roles + ": **" + (str)(len(guild.roles.members)) + "**"
await channel.send
# elif(base == "leaderboard"): #ex !leaderboard
# DB = connectToDB() # this can be better, but for a later version
# cursor = DB.cursor()
# cursor.execute("SELECT discordId, xp, tier FROM users ORDER BY xp DESC")
# results = cursor.fetchall()
# DB.close()
#
# top5 = True
# atop5 = False
# ellipsis = False
#
# msg += "Longhorn Gaming XP Leaderboard: ```"
# for counter, result in enumerate(results, 1):
# if(counter <= 5):
# name = guild.get_member((int)(result[0])).display_name
# if(author.id == (int)(result[0])):
# msg += (str)(counter) + ": " + name + " - " + (str)(result[1]) + " xp and tier " + \
# (str)(result[2]) + " <----- YOU \n"
# atop5 = True
# print("here",atop5)
# else:
# msg += (str)(counter) + ": " + name + " - " + (str)(result[1]) + " xp and tier " + \
# (str)(result[2]) + "\n"
# if(counter > 5):
# top5 = False
# if(counter > 8 and not atop5 and not top5 and not ellipsis):
# msg += ". . . \n"
# ellipsis = True
# if(author.id == (int)(result[0]) and not top5):
# for i in range(counter-3, counter+2):
# result = results[i]
# if(i > 4):
# name = guild.get_member((int)(result[0])).display_name
# if(author.id == (int)(result[0])):
# msg += (str)(i+1) + ": " + name + " - " + (str)(result[1]) + " xp and tier " + \
# (str)(result[2]) + " <----- YOU \n"
# else:
# msg += (str)(i+1) + ": " + name + " - " + (str)(result[1]) + " xp and tier " + \
# (str)(result[2]) + "\n"
# break
#
# msg += "```"
# await channel.send(msg)
elif (base == "help"): # ex !leaderboard
dms = author.dm_channel
if (not dms): # if there is a dm channel that already exists
consoleLog("Created dm channel for " + (str)(author.id))
dms = await author.create_dm()
msg += "```Howdy! I'm a bot created for the Longhorn Gaming Discord. Below are my commands:\n"
msg += "!help: You're already here!\n"
msg += "!profile: Check your XP and Tier.\n"
msg += "!tiers: A brief explanation of tiers and rewards.```"
msg += "Plus there are some additional easter egg commands! See if you can find them all ^_^"
await author.dm_channel.send(msg)
elif (base == "tiers"): # ex !leaderboard
dms = author.dm_channel
if (not dms):
dms = await author.create_dm()
msg += "```Bevo Bot's XP Tiers are as follows:\n" \
"- Tier 1: Bronze, 500 XP\n" \
" - LG member t-shirt\n" \
"- Tier 2: Silver, 2000 XP\n" \
" - LG (small) sticker\n" \
" - Access to exclusive giveaways\n" \
"- Tier 3: Gold, 5000 XP\n" \
" - LG (large) sticker\n" \
" - Early access to a future LG project\n" \
"- Tier 4: Platinum, 10000 XP\n" \
" - LG Holographic sticker\n" \
" - 1 month Nitro Classic subscription\n" \
"- Tier 5: Diamond, 20000 XP\n" \
" - The first 5 members to reach this tier will receive (almost) ANY HyperX peripheral" \
" of their choice.\n\n" \
"- All XP tiers come with their own respective Discord roles.\n" \
"- All in-kind prizes are for LG members only.\n" \
"- Prizes are tentative and subject to change. If we add a prize but you’ve already surpassed" \
" that rank, you’ll still receive it retroactively.```"
await author.dm_channel.send(msg)
elif (base == "messageCheck"): # ex !messageCheck
if (adminCheck(message.author)):
messagesSent = {}
for c in guild.text_channels:
# print(c.name,end=": ")
before = time.time()
cMessages = await c.history(limit=None).flatten()
for m in cMessages:
user = m.author.display_name
if user not in messagesSent:
messagesSent[user] = 0
messagesSent[user] += 1
after = time.time()
# print(after-before," : ",len(cMessages))
sortedMessages = sorted(messagesSent.items(), key=lambda kv: kv[1])
with open("dump.json", 'w') as outfile:
outfile.write(json.dumps(sortedMessages))
elif (base == "profile"):
author = (str)(message.author.id)
tiernames = {0: "not currently in a", 1: "**Bronze**", 2: "**Silver**", 3: "**Gold**",
4: "**Platinum**", 5: "**Diamond**"}
DB = connectToDB() # this can be better, but for a later version
cursor = DB.cursor()
cursor.execute("SELECT * FROM users WHERE discordId = \"" + author + "\"")
profile = cursor.fetchall()
cursor.execute("SELECT discordId FROM users ORDER BY xp DESC")
leaderboard = cursor.fetchall()
DB.close()
if (len(profile) == 0):
msg = "<@" + author + ">, you have no xp! Don't worry, you can gain some just by sending messages!" \
" Try sending an intro in #say-hello to start!"
else:
rank = leaderboard.index((author,))
name = tiernames.get(profile[0][3], 0)
msg = "<@" + author + ">, you have **" + (str)(profile[0][1]) + "** xp and are " + (str)(name) + " Tier."
# " making you **rank #" + (str)(rank+1) + "** out of " + (str)(len(leaderboard)) + " users on the leaderboard!"
await channel.send(msg)
elif (base == "memberCheck"): # ex !memberCheck
if (adminCheck(message.author)):
joinDates = {}
for m in members:
name = m.display_name
joinDate = m.joined_at
monthYear = (str)(joinDate.year) + "-" + (str)(joinDate.month)
if (monthYear not in joinDates):
joinDates[monthYear] = []
joinDates[monthYear].append(name)
for d in sorted(joinDates):
print(d, len(joinDates[d]))
elif (base == "blacklist"): # ex: !blacklist @channel
if (adminCheck(message.author)):
wrappedId = command[1] # this is in the format <@_____> _____ is the channel id
bChannel = (int)(wrappedId[2:len(wrappedId) - 1])
if (bChannel not in blacklist): # add it to the blacklist
blacklist.append((int)(bChannel))
await channel.send(guild.get_channel(bChannel).mention + " has been added to the blacklist.")
else: # remove it from the blacklist
blacklist.remove((int)(bChannel))
await channel.send(guild.get_channel(bChannel).mention + " has been removed from the blacklist.")
with open("blacklist.txt", 'w') as openFile:
openFile.write(json.dumps(blacklist))
else:
await permissionDenied(message, channel)
# large WIP
elif (base == "giveXp"): # ex: !giveXp @insert_role_or_user_here 10
if (adminCheck(message.author)): # if the user of this command is an admin
wrappedId = command[1] # this is in the format <@&_____> or <@!_____> _____ is the id
amount = (int)(command[2]) # this is just a string number turned into an int
discordId = (int)(wrappedId[3:len(wrappedId) - 1]) # convert to a discord id
if ("&" in wrappedId): # if we're using a role for this commandif(users in roles):
role = guild.get_role(discordId)
print("Found role is " + str(role))
if (role): # if the role is valid
for user in role.members:
print("Found user from role is " + str(user))
await isaacgiveXp(user, amount, False)
else:
await message.channel.send("Invalid role id sent.")
elif ("!" in wrappedId): # if we're using a single user for this command instead
user = guild.get_member(discordId)
print("Found user is " + str(user))
if (user):
await isaacgiveXp(user, amount, False)
else:
await message.channel.send("Invalid user id sent.")
else:
await message.channel.send("Invalid user/role id sent.")
else:
await permissionDenied(message, channel)
def adminCheck(user: discord.Member) -> bool:
    return user.guild_permissions.administrator
async def randomClaimMessage(message: discord.Message, debug: bool) -> None:
global messageCounter, canClaim, claimMessage, claimEmoji
emojis = message.guild.emojis
channel = message.channel
messageCounter += 1
rand = random.randrange(0, 100)
minMessages = 100
val = (int)((messageCounter * 100) / (minMessages * 2)) # convert to a value between 0-100.
# 50% chance per message at minMessages, 100% chance at minMessages*2
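    # Worked example: with minMessages = 100, val is 50 after 100 messages
    # (a 50% chance of exceeding rand in 0-99) and 100 after 200 messages,
    # at which point a claim message is guaranteed.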
if (messageCounter < minMessages and not debug): # start !claiming at this number
return messageCounter
else:
if ((val > rand and not canClaim) or debug):
canClaim = True
usedEmojis = []
x = 0
while x < 3:
rand = random.randrange(0, len(emojis))
if (emojis[rand] not in usedEmojis):
usedEmojis.append(emojis[rand])
x += 1
rand = random.randrange(0, len(usedEmojis))
claimEmoji = usedEmojis[rand]
claimMessage = await channel.send(
"Be the first to react with " + (str)(claimEmoji) + " to claim a small amount of xp!")
for emoji in usedEmojis:
await claimMessage.add_reaction(emoji)
"""
userId (str) : a string of the id of a user/member object
amount (int) : the amount of xp to give to the user
timePenalty (bool) : should this xp incur a time penalty on the next addition of xp?
"""
async def giveXp(message: discord.Message, amount: int, timePenalty: bool) -> int:
userId = (str)(message.id) # message can just be a user/author... kinda hacky fix
if hasattr(message, 'author'):
userId = (str)(message.author.id)
DB = connectToDB() # this can be better, but for a later version
cursor = DB.cursor()
cursor.execute("SELECT * FROM users WHERE discordId = \"" + userId + "\"")
result = cursor.fetchall()
assert not len(result) > 1, "more than one entry with the same discordId"
timeFormat = '%Y-%m-%d %H:%M:%S'
now = datetime.datetime.now()
ts = now.strftime(timeFormat)
elapsedMins = 0
xp = amount
if (len(result) == 0): # if this is a new user
if (not timePenalty):
ts = ts # hacky way to make sure there's no time penalty after this one
formatStr = """
INSERT INTO users (`discordId`, `xp`, `lastUpdated`,`tier`,`xpForHour`,`xpForDay`)
VALUES ("{dId}",{exp},"{time}",{t},{hourxp},{dayxp});
"""
cursor.execute(formatStr.format(dId=userId, exp=xp, time=ts, t=0, hourxp=xp, dayxp=xp))
else: # if this is a returning user
xpPerHour = result[0][5] + xp
xpPerDay = result[0][6] + xp
xp += result[0][1]
then = result[0][2]
tier = result[0][3]
        if (now.hour != then.hour):
            xpPerHour = 0
        if (now.day != then.day):
            xpPerDay = 0
if (not timePenalty): # xpPerHour/Day will introduce bugs here when giving xp
ts = then.strftime(timeFormat)
else:
elapsedMins = now.minute - then.minute
if (elapsedMins < 0): # fix bug where if hour changes
elapsedMins = abs(elapsedMins)
# elapsedMins = ((now - then).totalSeconds())/60
consoleLog(userId + " was last updated " + (str)(elapsedMins) + " minutes ago!")
if (xpPerHour >= 50):
DB.close()
return xp
if (xpPerDay >= 150):
DB.close()
return xp
if (elapsedMins < 1):
DB.close()
return xp
tier = await milestoneCheck(message, (int)(xp), tier)
cursor.execute("UPDATE users SET xp = " + (str)(xp) + ", lastUpdated = \"" + ts + "\", tier = " + str(tier) +
", xpForHour = " + (str)(xpPerHour) + ", xpForDay = " + (str)(xpPerDay) + " WHERE discordId = \""
+ (str)(userId) + "\"")
DB.commit()
DB.close()
return xp
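# Worked example of the caps above: at 5 xp per message (see xpPerMessage
# below), a user can earn xp for at most 10 messages per hour (50 xp) and
# 30 messages per day (150 xp), with a one-minute cooldown between rewards.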
async def isaacgiveXp(user: discord.User, amount: int, timePenalty: bool) -> int:
uid = user.id
print("The current user is " + str(user) + " and their id is " + str(uid))
DB = connectToDB() # this can be better, but for a later version
cursor = DB.cursor()
cursor.execute("SELECT * FROM users WHERE discordId = \"" + str(uid) + "\"")
result = cursor.fetchall()
assert not len(result) > 1, "more than one entry with the same discordId"
timeFormat = '%Y-%m-%d %H:%M:%S'
now = datetime.datetime.now()
ts = now.strftime(timeFormat)
elapsedMins = 0
xp = amount
if (len(result) == 0): # if this is a new user
if (not timePenalty):
ts = ts # hacky way to make sure there's no time penalty after this one
formatStr = """
INSERT INTO users (`discordId`, `xp`, `lastUpdated`,`tier`,`xpForHour`,`xpForDay`)
VALUES ("{dId}",{exp},"{time}",{t},{hourxp},{dayxp});
"""
cursor.execute(formatStr.format(dId=uid, exp=xp, time=ts, t=0, hourxp=xp, dayxp=xp))
else: # if this is a returning user
xpPerHour = result[0][5] + xp
xpPerDay = result[0][6] + xp
xp += result[0][1]
then = result[0][2]
tier = result[0][3]
        if (now.hour != then.hour):
            xpPerHour = 0
        if (now.day != then.day):
            xpPerDay = 0
if (not timePenalty): # xpPerHour/Day will introduce bugs here when giving xp
ts = then.strftime(timeFormat)
else:
elapsedMins = now.minute - then.minute
if (elapsedMins < 0): # fix bug where if hour changes
elapsedMins = abs(elapsedMins)
# elapsedMins = ((now - then).totalSeconds())/60
consoleLog(str(uid) + " was last updated " + (str)(elapsedMins) + " minutes ago!")
if (xpPerHour >= 50):
DB.close()
return xp
if (xpPerDay >= 150):
DB.close()
return xp
if (elapsedMins < 1):
DB.close()
return xp
cursor.execute("UPDATE users SET xp = " + (str)(xp) + ", lastUpdated = \"" + ts + "\"" + ", xpForHour = "
+ (str)(xpPerHour) + ", xpForDay = " + (str)(xpPerDay) + " WHERE discordId = \""
+ (str)(uid) + "\"")
DB.commit()
DB.close()
return xp
async def permissionDenied(message: discord.Message, channel: discord.TextChannel) -> None:
await channel.send("You do not have permission for this command.")
async def xpPerMessage(message: discord.Message) -> None:
    xpForMessage = 5  # magic number: xp granted per regular chat message
    await giveXp(message, xpForMessage, True)
async def handleIntro(message: discord.Message) -> None:
userId = (str)(message.author.id)
DB = connectToDB() # this can be better, but for a later version
cursor = DB.cursor()
cursor.execute("SELECT intro FROM users WHERE discordId = \"" + userId + "\"")
intro = cursor.fetchall()
intro = intro[0][0]
if (not intro):
cursor.execute("UPDATE users SET intro = 1 WHERE discordId = \"" + (str)(userId) + "\"")
DB.commit()
await giveXp(message, 50, False) # first time gives 50 xp
DB.close()
@client.event
async def milestoneCheck(message: discord.Message, xp: int, otier: int) -> int:
author = message # message can just be a userId... kinda hacky fix
guild = message.guild
lgmember = guild.get_role(736351923468238891) # until i learn how to grab roles in an easier manner
tierroles = {1: 754533153258864741, 2: 754535125752086640, 3: 754535269335564291,
4: 754537923659169842, 5: 754538022569115690}
dmemsg = {1: "Congratulations! You've hit the **Bronze** tier, which gives you the following rewards:\n"
"```- LG Membership Shirt (multiple delivery options available)\n"
"- Exclusive Discord role (@Bevo Bot Bronze)```"
" In order to claim your rewards, please visit the following link to provide"
" us with the necessary information: <https://forms.gle/Vb2kBD4DZnrr7UNTA>",
2: "Congratulations! You've hit the **Silver** tier, which gives you the following rewards:\n"
"```- A small LG sticker\n"
"- Access to exclusive giveaways\n"
"- Exclusive Discord role (@Bevo Bot Silver)```",
3: "Congratulations! You've hit the **Gold** tier, which gives you the following rewards:\n"
"```- A large LG sticker (limited quantity!)\n"
"- Early access to a future LG project\n"
"- Exclusive Discord role (@Bevo Bot Gold)```",
4: "Congratulations! You've hit the **Platinum** tier, which gives you the following rewards:\n"
"```- 1 month of Nitro Classic on us\n"
"- A large holographic LG sticker (limited quantity!)\n"
"- Exclusive Discord role (@Bevo Bot Platinum)```",
5: "Congratulations! You've hit the **Diamond** tier, which gives you the following rewards:\n"
"```- A HyperX peripheral of your choice (limited to first 5 members)\n"
"- Exclusive Discord role (@Bevo Bot Diamond)```"
}
dmsg = {1: "Congratulations! You've hit the **Bronze** tier.",
2: "Congratulations! You've hit the **Silver** tier.",
3: "Congratulations! You've hit the **Gold** tier.",
4: "Congratulations! You've hit the **Platinum** tier.",
5: "Congratulations! You've hit the **Diamond** tier."
}
if hasattr(message, 'author'):
author = message.author
user = guild.get_member(author.id)
# print("this is xp: " + str(xp)) #used for debugging
ftier = getTier(xp)
if otier < ftier: # in the event there is a tier upgrade
dms = author.dm_channel
if (not dms): # if there is a dm channel that already exists
consoleLog("Created dm channel for " + (str)(author.id))
dms = await author.create_dm()
if lgmember in user.roles:
roleid = tierroles.get(ftier, 0)
msg = dmemsg.get(ftier, 0)
role = guild.get_role(roleid)
await user.add_roles(role)
await author.dm_channel.send(msg)
else:
msg = dmsg.get(ftier, 0)
await author.dm_channel.send(msg)
return ftier
def getTier(xp) -> int:
    ftier = 0  # final tier to return
    if (os.path.exists("config.txt")):
        with open("config.txt", 'r') as openFile:
            tiers = json.loads(openFile.read())
        for tier in tiers['tiers']:  # iterates through the list of tiers
            for tiernum in tier["number"]:  # checks each tier "number"
                if (int)(xp) >= (int)(tier["xp"]):  # xp threshold check
                    ftier = (int)(tiernum)  # updates tier number if necessary
                else:
                    return ftier  # stops from updating to the wrong tier
    return ftier  # also covers a missing config.txt or xp above every threshold
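# A sketch of the config.txt structure getTier() expects, using the XP
# thresholds listed by the !tiers command above (the exact formatting is
# an assumption):
#
#   {"tiers": [{"number": [1], "xp": 500},
#              {"number": [2], "xp": 2000},
#              {"number": [3], "xp": 5000},
#              {"number": [4], "xp": 10000},
#              {"number": [5], "xp": 20000}]}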
@client.event
async def on_message(message: discord.Message) -> None:
channel = message.channel
if (channel.id in blacklist):
return
if message.author == client.user: # bot message, so don't do anything
return
if message.content.startswith('!'): # command message
await checkCommands(message)
if "<@!" + str(client.user.id) + ">" in message.content:
if (adminCheck(message.author)):
await message.channel.send("howdy pardner <@!" + str(message.author.id) + ">")
else: # regular chat message
if (message.channel.id == 355749312434798593): # if the message was sent in #say-hello
print("found intro message")
await handleIntro(message)
await xpPerMessage(message)
await randomClaimMessage(message, False)
@client.event
async def on_member_update(before, after):
lgMemberRole = 736351923468238891
tierroles = {1: 754533153258864741, 2: 754535125752086640, 3: 754535269335564291,
4: 754537923659169842, 5: 754538022569115690}
dmemsg = {1: "Congratulations! You've hit the **Bronze** tier, which gives you the following rewards:\n"
"```- LG Membership Shirt (multiple delivery options available)\n"
"- Exclusive Discord role (@Bevo Bot Bronze)```"
" In order to claim your rewards, please visit the following link to provide"
" us with the necessary information: <https://forms.gle/Vb2kBD4DZnrr7UNTA>",
2: "Congratulations! You've hit the **Silver** tier, which gives you the following rewards:\n"
"```- A small LG sticker\n"
"- Access to exclusive giveaways\n"
"- Exclusive Discord role (@Bevo Bot Silver)```",
3: "Congratulations! You've hit the **Gold** tier, which gives you the following rewards:\n"
"```- A large LG sticker (limited quantity!)\n"
"- Early access to a future LG project\n"
"- Exclusive Discord role (@Bevo Bot Gold)```",
4: "Congratulations! You've hit the **Platinum** tier, which gives you the following rewards:\n"
"```- 1 month of Nitro Classic on us\n"
"- A large holographic LG sticker (limited quantity!)\n"
"- Exclusive Discord role (@Bevo Bot Platinum)```",
5: "Congratulations! You've hit the **Diamond** tier, which gives you the following rewards:\n"
"```- A HyperX peripheral of your choice (limited to first 5 members)\n"
"- Exclusive Discord role (@Bevo Bot Diamond)```"
}
beforeRoles = before.roles
afterRoles = after.roles
wasLGMember = False
isLGMember = False
for role in beforeRoles:
if lgMemberRole == role.id:
wasLGMember = True
for role in afterRoles:
if lgMemberRole == role.id:
isLGMember = True
if(after.id == 103645519091355648):
print(wasLGMember,isLGMember)
if (isLGMember and not wasLGMember):
print("member just got the lgMemberRole")
#check if lg member role is in after but not before
DB = connectToDB() # this can be better, but for a later version
cursor = DB.cursor()
cursor.execute("SELECT xp FROM users WHERE discordId = \"" + str(after.id) + "\"")
result = cursor.fetchall()
assert not len(result) > 1, "more than one entry with the same discordId"
xp = result[0][0]
ftier = getTier(xp)
roleid = tierroles.get(ftier, 0)
msg = dmemsg.get(ftier, 0)
    role = after.guild.get_role(roleid)
    await after.add_roles(role)
    dms = after.dm_channel
    if (not dms):  # create the dm channel first if one does not already exist
        dms = await after.create_dm()
    await dms.send(msg)
@client.event
async def on_reaction_add(reaction, user):
global messageCounter, canClaim, claimMessage, claimEmoji
if user == client.user: # bot message, so don't do anything
return
if (canClaim):
message = reaction.message
channel = message.channel
# print((reaction.emoji,claimEmoji,message.id,claimMessage.id))
if (reaction.emoji == claimEmoji and message.id == claimMessage.id):
canClaim = False
xpToClaim = (int)(15 * (random.randrange(50, 200)) / 100)
xpToClaim = 5 * round(xpToClaim / 5)
await channel.send(user.mention + " claimed " + (str)(xpToClaim) + " xp!")
await giveXp(user, xpToClaim, False)
messageCounter = 0
channel = reaction.message.channel
@client.event
async def on_ready() -> None:
consoleLog('Logged in as')
consoleLog(client.user.name)
consoleLog(client.user.id)
consoleLog('------')
print("Launched.")
def connectToDB() -> pymysql.connections.Connection:
hst, dbname, u, pw = "", "", "", ""
if (os.path.exists("secrets.txt")):
with open("secrets.txt", 'r') as openFile:
secrets = json.loads(openFile.read())
hst = secrets["db_host"]
dbname = secrets["db_name"]
u = secrets["db_user"]
pw = secrets["db_pw"]
DB = pymysql.connect(host=hst, user=u, password=pw, database=dbname) # connect to our database
# print("Database connected to!")
return DB
if __name__ == '__main__':
bot = commands.Bot(command_prefix='!')
token = ""
if (os.path.exists("secrets.txt")):
with open("secrets.txt", 'r') as openFile:
secrets = json.loads(openFile.read())
token = secrets["token"]
consoleLog("Blacklist: ")
consoleLog(blacklist)
client.run(token) | 44.656598 | 124 | 0.553775 | 3,262 | 28,089 | 4.744022 | 0.156346 | 0.012795 | 0.019386 | 0.022294 | 0.494733 | 0.449435 | 0.428239 | 0.420291 | 0.390695 | 0.377964 | 0 | 0.02606 | 0.319663 | 28,089 | 629 | 125 | 44.656598 | 0.783726 | 0.16138 | 0 | 0.461386 | 0 | 0.00198 | 0.254867 | 0.007753 | 0.00198 | 0 | 0 | 0 | 0.005941 | 1 | 0.007921 | false | 0.00396 | 0.015842 | 0.00198 | 0.055446 | 0.017822 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
adccec2dfc56e2c5360f75a4aba7ceaa3074685c | 2,087 | py | Python | automod.py | romdotdog-2017/python | 1423ca90690d3fb426d7863ffebfe325c7ad83f1 | [
"MIT"
] | null | null | null | automod.py | romdotdog-2017/python | 1423ca90690d3fb426d7863ffebfe325c7ad83f1 | [
"MIT"
] | null | null | null | automod.py | romdotdog-2017/python | 1423ca90690d3fb426d7863ffebfe325c7ad83f1 | [
"MIT"
] | null | null | null | page = "?vl[page]={}&mid=SubmissionsList"
skins = "https://gamebanana.com/skins/games/297"
sounds = "https://gamebanana.com/sounds/games/297"
sprays = "https://gamebanana.com/sprays/games/297"
effects = "https://gamebanana.com/effects/games/297"
count = 100
from tqdm import tqdm
import requests
from random import randint
from bs4 import BeautifulSoup
from pyunpack import Archive
cSkins = int(count*0.4)
cSounds = int(count*0.3)
cEffects = int(count*0.2)
count -= cSkins
count -= cSounds
count -= cEffects
cSprays = count
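# With the default count of 100, this split works out to 40 skins,
# 30 sounds, 20 effects and the remaining 10 sprays.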
print("Downloading", cSkins, "Weapon Skins")
print("Downloading", cSounds, "Sounds")
print("Downloading", cEffects, "Effects")
print("Downloading", cSprays, "Sprays")
def downloadQuota(count, url):
quota = count
while quota > 0:
html = requests.get(url + page.format(randint(1,400))).text
soup = BeautifulSoup(html, "html.parser")
try:
for mod in soup.select("records > record"):
modPage = mod.select("a.Name")[0].get("href")
modPage = requests.get(modPage).text
soup = BeautifulSoup(modPage, "html.parser")
modFileName = soup.select("div.FileInfo > span > code")[0].string
modPage = soup.select(".SmallManualDownloadIcon")[0].parent.get("href")
modPage = requests.get(modPage).text
soup = BeautifulSoup(modPage, "html.parser")
download = soup.select(".SmallManualDownloadIcon")[0].parent.get("href")
print("downloading", modFileName, "with url", download)
filename = "./custom/TF2AutoMod/"+modFileName
# NOTE the stream=True parameter below
with requests.get(download, stream=True) as r:
r.raise_for_status()
with open(filename, 'wb') as f:
for chunk in tqdm(r.iter_content(chunk_size=1024), unit="KB", desc=filename):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
if modFileName.endswith(".zip") or modFileName.endswith(".rar"):
Archive(filename).extractall('../')
quota -= 1
except Exception as e:
print(e)
downloadQuota(cSkins, skins)
downloadQuota(cSounds, sounds)
downloadQuota(cEffects, effects)
downloadQuota(cSprays, sprays)
| 33.66129 | 84 | 0.705319 | 268 | 2,087 | 5.477612 | 0.414179 | 0.054496 | 0.049046 | 0.029973 | 0.154632 | 0.154632 | 0.154632 | 0.095368 | 0.095368 | 0.095368 | 0 | 0.020728 | 0.144705 | 2,087 | 61 | 85 | 34.213115 | 0.801681 | 0.033062 | 0 | 0.074074 | 0 | 0 | 0.227408 | 0.039722 | 0 | 0 | 0 | 0 | 0 | 1 | 0.018519 | false | 0 | 0.092593 | 0 | 0.111111 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
adceedced05dba04e8d060e83dc340861a892c44 | 6,840 | py | Python | notebooks/Instaseis-Syngine/syngine_greens_functions.py | krischer/seismo_live_build | e4e8e59d9bf1b020e13ac91c0707eb907b05b34f | [
"CC-BY-3.0"
] | 3 | 2020-07-11T10:01:39.000Z | 2020-12-16T14:26:03.000Z | notebooks/Instaseis-Syngine/syngine_greens_functions.py | krischer/seismo_live_build | e4e8e59d9bf1b020e13ac91c0707eb907b05b34f | [
"CC-BY-3.0"
] | null | null | null | notebooks/Instaseis-Syngine/syngine_greens_functions.py | krischer/seismo_live_build | e4e8e59d9bf1b020e13ac91c0707eb907b05b34f | [
"CC-BY-3.0"
] | 3 | 2020-11-11T05:05:41.000Z | 2022-03-12T09:36:24.000Z | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.4'
# jupytext_version: 1.2.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + {"deletable": true, "editable": true, "cell_type": "markdown"}
# <div style='background-image: url("../share/images/header.svg") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>
# <div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px">
# <div style="position: relative ; top: 50% ; transform: translatey(-50%)">
# <div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%">Syngine</div>
# <div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)">Green's Functions for Moment Tensor Inversions</div>
# </div>
# </div>
# </div>
# + {"deletable": true, "editable": true, "cell_type": "markdown"}
# Seismo-Live: http://seismo-live.org
#
# ##### Authors:
# * Lion Krischer ([@krischer](https://github.com/krischer))
#
# ---
# + {"deletable": true, "editable": true, "cell_type": "markdown"}
# This is a tutorial teaching how to use Syngine's Green's functions to reconstruct seismograms from arbitrary source mechanisms.
#
# * [IRIS Syngine Service](http://ds.iris.edu/ds/products/syngine/)
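#
# The `greensfunction=1` request used below returns the ten elementary
# seismograms (ZSS/RSS/TSS, ZDS/RDS/TDS, ZDD/RDD and ZEP/REP) that are then
# linearly combined into Z, R and T traces for an arbitrary moment tensor.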
# + {"deletable": true, "editable": true}
# First a bit of setup to make the plots appear in the
# notebook and make them look a bit nicer.
# %matplotlib inline
import matplotlib.pyplot as plt
plt.style.use("ggplot")
# -
import obspy
import numpy as np
# +
# This is a helper function to return a ZRT stream for a certain
# moment tensor.
def seismograms_for_mt(st, az, m_rr, m_tt, m_pp, m_rt, m_rp, m_tp):
# Convert to radian.
az = np.deg2rad(az)
# shortcuts to the data.
TSS = st.select(channel="TSS")[0].data
ZSS = st.select(channel="ZSS")[0].data
RSS = st.select(channel="RSS")[0].data
TDS = st.select(channel="TDS")[0].data
ZDS = st.select(channel="ZDS")[0].data
RDS = st.select(channel="RDS")[0].data
ZDD = st.select(channel="ZDD")[0].data
RDD = st.select(channel="RDD")[0].data
ZEP = st.select(channel="ZEP")[0].data
REP = st.select(channel="REP")[0].data
# Apply formula from Minson and Dreger, 2008.
Z = m_tt * (ZSS / 2 * np.cos(2 * az) - ZDD / 6 + ZEP / 3) + \
m_pp * (-ZSS / 2 * np.cos(2 * az) - ZDD / 6 + ZEP / 3) + \
m_rr * (ZDD / 3 + ZEP / 3) + \
m_tp * (ZSS * np.sin(2 * az)) + \
m_rt * (ZDS * np.cos(az)) + \
m_rp * (ZDS * np.sin(az))
R = m_tt * (RSS / 2 * np.cos(2 * az) - RDD / 6 + REP / 3) + \
m_pp * (-RSS / 2 * np.cos(2 * az) - RDD / 6 + REP / 3) + \
m_rr * (RDD / 3 + REP / 3) + \
m_tp * (RSS * np.sin(2 * az)) + \
m_rt * (RDS * np.cos(az)) + \
m_rp * (RDS * np.sin(az))
T = m_tt * (TSS / 2 * np.sin(2 * az)) - \
m_pp * (TSS / 2 * np.sin(2 * az)) - \
m_tp * (TSS * np.cos(2 * az)) + \
m_rt * (TDS * np.sin(az)) - \
m_rp * (TDS * np.cos(az))
tr_z = obspy.Trace(data=Z, header=st[0].stats)
tr_z.stats.channel = "EHZ"
tr_r = obspy.Trace(data=R, header=st[0].stats)
tr_r.stats.channel = "EHR"
tr_t = obspy.Trace(data=T, header=st[0].stats)
tr_t.stats.channel = "EHT"
return obspy.Stream(traces=[tr_z, tr_r, tr_t])
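
# + {"cell_type": "markdown"}
# For reference, the vertical-component combination implemented above
# (transcribed directly from the code; $\phi$ is the azimuth) reads
#
# $$u_Z = M_{\theta\theta}\left(\tfrac{1}{2}\,ZSS\cos 2\phi - \tfrac{1}{6}\,ZDD + \tfrac{1}{3}\,ZEP\right)
#       + M_{\phi\phi}\left(-\tfrac{1}{2}\,ZSS\cos 2\phi - \tfrac{1}{6}\,ZDD + \tfrac{1}{3}\,ZEP\right)$$
# $$\qquad + M_{rr}\left(\tfrac{1}{3}\,ZDD + \tfrac{1}{3}\,ZEP\right)
#       + M_{\theta\phi}\,ZSS\sin 2\phi + M_{r\theta}\,ZDS\cos\phi + M_{r\phi}\,ZDS\sin\phi,$$
#
# with the radial and transverse components assembled analogously.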
# +
import obspy
import requests
# Base URL and model choice.
BASE_URL = "http://service.iris.edu/irisws/syngine/1/query?format=miniseed&model=ak135f_5s&"
# Get components for a distance of 10 degrees and a source depth of 1 km.
GREENS_URL = BASE_URL + \
"greensfunction=1&sourcedistanceindegrees=10&sourcedepthinmeters=1000"
# Request the elementary seismograms.
st = obspy.read(GREENS_URL)
# +
# Example for a certain moment tensor.
m_rr, m_tt, m_pp, m_rt, m_rp, m_tp = 1E20, -2E20, 0.4E20, -0.7E20, 1.3E20, -2E20
# A: Get it directly from Syngine.
MT_URL = BASE_URL + (
"sourcelatitude=0&sourcelongitude=10&sourcedepthinmeters=1000&"
"sourcemomenttensor=%g,%g,%g,%g,%g,%g&"
"receiverlatitude=0&receiverlongitude=0&"
"components=ZRT"% (m_rr, m_tt, m_pp, m_rt, m_rp, m_tp))
st_mt = obspy.read(MT_URL.replace("+", ""))
# B: Get it via Green's function arithmetics.
st_g = seismograms_for_mt(st, az=-90.0, m_rr=m_rr, m_tt=m_tt, m_pp=m_pp,
m_rt=m_rt, m_rp=m_rp, m_tp=m_tp)
# Compare everything.
plt.figure(figsize=(10, 10))
plt.subplot(311)
plt.title("Vertical")
tr_mt = st_mt.select(component="Z")[0]
tr_g = st_g.select(component="Z")[0]
plt.plot(tr_mt.times(), tr_mt.data, color="red", label="Direct")
plt.plot(tr_g.times(), tr_g.data, color="blue", label="Reconstructed")
plt.legend()
plt.xlim(200, 500)
plt.subplot(312)
plt.title("Radial")
tr_mt = st_mt.select(component="R")[0]
tr_g = st_g.select(component="R")[0]
plt.plot(tr_mt.times(), tr_mt.data, color="red")
plt.plot(tr_g.times(), tr_g.data, color="blue")
plt.xlim(200, 500)
plt.subplot(313)
plt.title("Transverse")
tr_mt = st_mt.select(component="T")[0]
tr_g = st_g.select(component="T")[0]
plt.plot(tr_mt.times(), tr_mt.data, color="red")
plt.plot(tr_g.times(), tr_g.data, color="blue")
plt.xlim(200, 500)
plt.show()
# +
# Same thing again but for a different moment tensor.
m_rr, m_tt, m_pp, m_rt, m_rp, m_tp = 2E15, .4E17, -0.4E14, 2.7E15, -1.3E13, 1.3E14
# A: Get it directly from Syngine.
MT_URL = BASE_URL + (
"sourcelatitude=0&sourcelongitude=10&sourcedepthinmeters=1000&"
"sourcemomenttensor=%g,%g,%g,%g,%g,%g&"
"receiverlatitude=0&receiverlongitude=0&"
"components=ZRT"% (m_rr, m_tt, m_pp, m_rt, m_rp, m_tp))
st_mt = obspy.read(MT_URL.replace("+", ""))
# B: Get it via Green's function arithmetics.
st_g = seismograms_for_mt(st, az=-90.0, m_rr=m_rr, m_tt=m_tt, m_pp=m_pp,
m_rt=m_rt, m_rp=m_rp, m_tp=m_tp)
# Compare everything.
plt.figure(figsize=(10, 10))
plt.subplot(311)
plt.title("Vertical")
tr_mt = st_mt.select(component="Z")[0]
tr_g = st_g.select(component="Z")[0]
plt.plot(tr_mt.times(), tr_mt.data, color="red", label="Direct")
plt.plot(tr_g.times(), tr_g.data, color="blue", label="Reconstructed")
plt.legend()
plt.xlim(200, 500)
plt.subplot(312)
plt.title("Radial")
tr_mt = st_mt.select(component="R")[0]
tr_g = st_g.select(component="R")[0]
plt.plot(tr_mt.times(), tr_mt.data, color="red")
plt.plot(tr_g.times(), tr_g.data, color="blue")
plt.xlim(200, 500)
plt.subplot(313)
plt.title("Transverse")
tr_mt = st_mt.select(component="T")[0]
tr_g = st_g.select(component="T")[0]
plt.plot(tr_mt.times(), tr_mt.data, color="red")
plt.plot(tr_g.times(), tr_g.data, color="blue")
plt.xlim(200, 500)
plt.show()
| 33.365854 | 147 | 0.626754 | 1,138 | 6,840 | 3.629174 | 0.230228 | 0.017433 | 0.02615 | 0.010169 | 0.523729 | 0.492736 | 0.481114 | 0.445036 | 0.445036 | 0.445036 | 0 | 0.045847 | 0.186842 | 6,840 | 204 | 148 | 33.529412 | 0.696692 | 0.312573 | 0 | 0.589286 | 0 | 0.008929 | 0.137128 | 0.073739 | 0 | 0 | 0 | 0 | 0 | 1 | 0.008929 | false | 0 | 0.044643 | 0 | 0.0625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
add0e5abe3fc2a288f3bb934b1f35c9e9d4a66f1 | 3,878 | py | Python | src/python/twitter/pants/tasks/checkstyle.py | wfarner/commons | 42988a7a49f012665174538cca53604c7846ee86 | [
"Apache-2.0"
] | 1 | 2019-12-20T14:13:27.000Z | 2019-12-20T14:13:27.000Z | src/python/twitter/pants/tasks/checkstyle.py | wfarner/commons | 42988a7a49f012665174538cca53604c7846ee86 | [
"Apache-2.0"
] | null | null | null | src/python/twitter/pants/tasks/checkstyle.py | wfarner/commons | 42988a7a49f012665174538cca53604c7846ee86 | [
"Apache-2.0"
] | 1 | 2019-12-20T14:13:29.000Z | 2019-12-20T14:13:29.000Z | # ==================================================================================================
# Copyright 2011 Twitter, Inc.
# --------------------------------------------------------------------------------------------------
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this work except in compliance with the License.
# You may obtain a copy of the License in the LICENSE file, or at:
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==================================================================================================
import os
from twitter.common.dirutil import safe_open
from twitter.pants import is_codegen, is_java
from twitter.pants.tasks import TaskError
from twitter.pants.tasks.nailgun_task import NailgunTask
CHECKSTYLE_MAIN = 'com.puppycrawl.tools.checkstyle.Main'
class Checkstyle(NailgunTask):
@staticmethod
def _is_checked(target):
return is_java(target) and not is_codegen(target)
@classmethod
def setup_parser(cls, option_group, args, mkflag):
NailgunTask.setup_parser(option_group, args, mkflag)
option_group.add_option(mkflag("skip"), mkflag("skip", negate=True),
dest="checkstyle_skip", default=False,
action="callback", callback=mkflag.set_bool,
help="[%default] Skip checkstyle.")
def __init__(self, context):
self._profile = context.config.get('checkstyle', 'profile')
workdir = context.config.get('checkstyle', 'nailgun_dir')
NailgunTask.__init__(self, context, workdir=workdir)
self._configuration_file = context.config.get('checkstyle', 'configuration')
self._work_dir = context.config.get('checkstyle', 'workdir')
self._properties = context.config.getdict('checkstyle', 'properties')
self._confs = context.config.getlist('checkstyle', 'confs')
self.context.products.require_data('exclusives_groups')
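  # A sketch of the pants.ini section these lookups assume (the paths and
  # profile name are illustrative, not taken from a real config):
  #
  #   [checkstyle]
  #   profile: checkstyle
  #   configuration: %(config_dir)s/checkstyle/coding_style.xml
  #   nailgun_dir: %(pants_workdir)s/ng/checkstyle
  #   workdir: %(pants_workdir)s/checkstyle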
def execute(self, targets):
if not self.context.options.checkstyle_skip:
with self.invalidated(filter(Checkstyle._is_checked, targets)) as invalidation_check:
invalid_targets = []
for vt in invalidation_check.invalid_vts:
invalid_targets.extend(vt.targets)
sources = self.calculate_sources(invalid_targets)
if sources:
result = self.checkstyle(sources, invalid_targets)
if result != 0:
raise TaskError('%s returned %d' % (CHECKSTYLE_MAIN, result))
def calculate_sources(self, targets):
sources = set()
for target in targets:
sources.update([os.path.join(target.target_base, source)
for source in target.sources if source.endswith('.java')])
return sources
def checkstyle(self, sources, targets):
egroups = self.context.products.get_data('exclusives_groups')
etag = egroups.get_group_key_for_target(targets[0])
classpath = self.profile_classpath(self._profile)
cp = egroups.get_classpath_for_group(etag)
classpath.extend(jar for conf, jar in cp if conf in self._confs)
opts = [
'-c', self._configuration_file,
'-f', 'plain'
]
if self._properties:
properties_file = os.path.join(self._work_dir, 'checkstyle.properties')
with safe_open(properties_file, 'w') as pf:
for k, v in self._properties.items():
pf.write('%s=%s\n' % (k, v))
opts.extend(['-p', properties_file])
return self.runjava(CHECKSTYLE_MAIN, classpath=classpath, opts=opts, args=sources,
workunit_name='checkstyle')
| 40.821053 | 100 | 0.64492 | 453 | 3,878 | 5.359823 | 0.386313 | 0.02883 | 0.026359 | 0.042834 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00315 | 0.181279 | 3,878 | 94 | 101 | 41.255319 | 0.761575 | 0.224085 | 0 | 0 | 0 | 0 | 0.10361 | 0.019051 | 0 | 0 | 0 | 0 | 0 | 1 | 0.098361 | false | 0 | 0.081967 | 0.016393 | 0.245902 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
add43ac8569be3594e22eb97119a64b8d7a7eeb4 | 8,663 | py | Python | src/aurora/security.py | heminsatya/aurora | cb3e7454450d016b23628f8a74ed4041716bf274 | [
"MIT"
] | 5 | 2021-12-27T17:14:42.000Z | 2022-02-05T19:09:12.000Z | src/aurora/security.py | heminsatya/aurora | cb3e7454450d016b23628f8a74ed4041716bf274 | [
"MIT"
] | null | null | null | src/aurora/security.py | heminsatya/aurora | cb3e7454450d016b23628f8a74ed4041716bf274 | [
"MIT"
] | 1 | 2022-01-14T17:32:00.000Z | 2022-01-14T17:32:00.000Z | ################
# Dependencies #
################
import importlib
from datetime import datetime, timedelta
from .helpers import route_url
from flask import make_response, jsonify, render_template, request as flask_request, abort as flask_abort, redirect as flask_redirect, session as flask_session
from werkzeug.security import check_password_hash, generate_password_hash
# Flask objects
request = flask_request
session = flask_session
# Fetch configuration module
config = importlib.import_module('config')
debug = getattr(config, "DEBUG")
default_lang = getattr(config, "DEFAULT_LANG")
multi_lang = getattr(config, "MULTI_LANG")
languages = getattr(config, "LANGUAGES")
# Fetch apps module
apps_module = importlib.import_module('_apps')
apps = getattr(apps_module, "apps")
##
# @desc Redirects to HTTP error pages
#
# @param code: int - HTTP status code
#
# @return object
##
def abort(code:int=404):
# Return result
return flask_abort(status=code)
##
# @desc Redirects to relative URL
#
# @param url: str
#
# @return object
##
def redirect(url:str, code:int=302):
# Return results
return flask_redirect(location=url, code=code)
##
# @desc Redirects to app URL
#
# @param app: str - The app name
# @param controller: str - The app controller name
#
# @return object
##
def redirect_to(app:str, controller:str=None, code:int=302):
# Fetch the route final url
url = route_url(app, controller)
# Return result
return redirect(url=url, code=code)
##
# @desc Checks session for existence
#
# @param name: str -- *Required session name
#
# @return bool
##
def check_session(name:str):
# Session exists
if name in session:
return True
# Session not exists
else:
return False
##
# @desc Gets session
#
# @param name: str -- *Required session name
#
# @return object
##
def get_session(name:str):
return session[name]
##
# @desc Sets session
#
# @param name: str -- *Required session name
# @param value: str -- *Required session value
##
def set_session(name:str, value:str):
session[name] = value
##
# @desc Unset session
#
# @param name: str -- *Required session name
##
def unset_session(name:str):
session.pop(name, None)
##
# @desc Checks cookie for existence
#
# @param name: str -- *Required cookie name
#
# @return bool
##
def check_cookie(name:str):
# Cookie exists
if name in request.cookies:
return True
# Cookie not exists
else:
return False
##
# @desc Get cookie
#
# @param name: str -- *Required cookie name
#
# @return object
##
def get_cookie(name:str):
return request.cookies.get(name)
##
# @desc Sets cookie
#
# @param name: str -- *Required cookie name
# @param value: str -- *Required cookie value
# @param days: int -- Optional expiry days
# @param data: dictionary -- Optional data
#
# @return object
##
def set_cookie(name:str, value:str, data:dict={}, days:int=30):
# Check required params
if not name and not value:
# Produce error message
error = 'Please provide the required parameters!'
# Check debug mode
if debug:
# Raise error
raise Exception(error)
else:
# Print error
print(error)
exit()
# Check data
if data:
if data["type"] == "redirect":
res = make_response(redirect(data["response"]))
elif data["type"] == "render":
res = make_response(render_template(data["response"]))
elif data["type"] == "json":
res = make_response(jsonify(data["response"]))
elif data["type"] == "text":
res = make_response(data["response"])
# Create response
else:
res = make_response("Cookie set successfully!")
# expires in 30 days
expire = datetime.utcnow() + timedelta(days=days)
# Set cookie
res.set_cookie(name, value, expires=expire)
# Return response
return res
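##
# A minimal usage sketch for set_cookie (the cookie name and redirect
# target are illustrative assumptions):
#
#   return set_cookie('user', user_token, days=7, data={
#       'type': 'redirect',
#       'response': '/dashboard',
#   })
##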
##
# @desc Unsets cookie
#
# @param name: str -- *Required cookie name
# @param data: dictionary -- Optional data
##
def unset_cookie(name:str, data:dict={}):
# Check required params
if not name:
# Produce error message
error = 'Please provide the required parameters!'
# Check debug mode
if debug:
# Raise error
raise Exception(error)
else:
# Print error
print(error)
exit()
# Check data
if data:
if data["type"] == "redirect":
res = make_response(redirect(data["response"]))
elif data["type"] == "render":
res = make_response(render_template(data["response"]))
elif data["type"] == "json":
res = make_response(jsonify(data["response"]))
elif data["type"] == "text":
res = make_response(data["response"])
else:
res = make_response("Cookie unset successfully!")
# unset cookie
res.set_cookie(name, '', expires=0)
# Return response
return res
##
# @desc Finds active language
#
# @var active_lang: str - The active language code
#
# @return str
##
def find_lang():
path = request.path
lang = path.split('/')[1]
# Check multi language
if multi_lang:
# Check the language path
if lang in languages:
active_lang = lang
LANGUAGE = '/' + active_lang
set_session('active_lang', lang)
elif check_cookie('active_lang'):
active_lang = get_cookie('active_lang')
LANGUAGE = '/' + active_lang
set_session('active_lang', get_cookie('active_lang'))
elif check_session('active_lang'):
active_lang = get_session('active_lang')
LANGUAGE = '/' + active_lang
else:
active_lang = default_lang
LANGUAGE = '/' + active_lang
set_session('active_lang', default_lang)
else:
active_lang = default_lang
LANGUAGE = ''
# Return result
return {
'active_language': active_lang,
'LANGUAGE': LANGUAGE,
}
##
# @desc Redirects not logged-in users
#
# @param app: str -- *Required app name
# @param controller: str -- Optional controller name
# @param validate: str -- Optional session/cookie key to validate
#
# @var next: str -- The next url
#
# @return object
##
def login_required(app:str, controller:str=None, validate:str='user'):
# Fetch the route final url
url = route_url(app, controller)
def wrapper(inner):
def decorator(*args, **kwargs):
# Find next URL
next = request.url.replace(request.url_root, '/')
# Check cookie
if check_cookie(validate):
set_session(validate, get_cookie(validate))
# User is not logged-in
if not check_session(validate):
# Check the language
if multi_lang:
if check_session('active_lang'):
return redirect(f'''/{get_session('active_lang')}/{url}?next={next}''')
return redirect(f'{url}?next={next}')
# User is logged-in
else:
return inner(*args, **kwargs)
        # Preserve the wrapped view's name so Flask endpoint names stay unique
        decorator.__name__ = inner.__name__
        return decorator
return wrapper
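##
# Usage sketch (illustrative): guarding a route with the decorator factory.
# The 'users' app and 'login' controller names here are assumptions, not part
# of this module:
#
#   @app.route('/dashboard')
#   @login_required('users', 'login')
#   def dashboard():
#       return render_template('dashboard.html')
##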
##
# @desc Redirects logged-in users away (e.g. from the login page)
#
# @param app: str -- *Required app name
# @param controller: str -- Optional controller name
# @param validate: str -- Optional session/cookie key to check (defaults to 'user')
#
# @return object
##
def login_abort(app:str, controller:str=None, validate:str='user'):
# Fetch the route final url
url = route_url(app, controller)
def wrapper(inner):
def decorator(*args, **kwargs):
# Check cookie
if check_cookie(validate):
set_session(validate, get_cookie(validate))
# User is logged-in
if check_session(validate):
return redirect(url)
# User is not logged-in
else:
return inner(*args, **kwargs)
        # Preserve the wrapped view's name so Flask endpoint names stay unique
        decorator.__name__ = inner.__name__
        return decorator
return wrapper
##
# @desc Hashing password
#
# @param password: str
#
# @return str
##
def hash_password(password):
return generate_password_hash(password)
##
# @desc Check hashed password with requested password
#
# @param hashed_password: str -- Hashed password from database
# @param requested_password: str -- Requested password by the user
#
# @return bool
##
def check_password(hashed_password, requested_password):
    # True for a valid password, False otherwise
    return check_password_hash(hashed_password, requested_password)
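##
# Round-trip sketch (illustrative):
#
#   hashed = hash_password('s3cret')       # store this, never the plain text
#   check_password(hashed, 's3cret')       # -> True
#   check_password(hashed, 'wrong-guess')  # -> False
##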
| 22.385013 | 159 | 0.609027 | 1,012 | 8,663 | 5.103755 | 0.146245 | 0.040658 | 0.029042 | 0.030978 | 0.487512 | 0.421684 | 0.370571 | 0.301452 | 0.261181 | 0.246467 | 0 | 0.002392 | 0.276117 | 8,663 | 386 | 160 | 22.443005 | 0.821241 | 0.291008 | 0 | 0.5 | 0 | 0 | 0.087788 | 0.00795 | 0 | 0 | 0 | 0 | 0 | 1 | 0.138889 | false | 0.034722 | 0.055556 | 0.034722 | 0.361111 | 0.013889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
add5221a0418731d30444325c0eebf1ed5546cbe | 1,068 | py | Python | test/test2.py | acocac/scivision-plankton-torch | 9d382b7ed95bd051fcf1cd90d82966202b26d659 | [
"BSD-3-Clause"
] | null | null | null | test/test2.py | acocac/scivision-plankton-torch | 9d382b7ed95bd051fcf1cd90d82966202b26d659 | [
"BSD-3-Clause"
] | null | null | null | test/test2.py | acocac/scivision-plankton-torch | 9d382b7ed95bd051fcf1cd90d82966202b26d659 | [
"BSD-3-Clause"
] | null | null | null | from scivision_plankton_models import resnet50
import torch
from torch.utils.data import DataLoader
import tqdm
from scivision.io import load_dataset
from scivision_plankton_models import PlanktonDataset
import matplotlib.pyplot as plt
cat = load_dataset('plankton_data.yml')
ds = cat.plankton_multiple().to_dask()
ds = ds.assign(
image_width = ds['EXIF Image ImageWidth'].to_pandas().apply(lambda x: x.values[0]),
image_length = ds['EXIF Image ImageLength'].to_pandas().apply(lambda x: x.values[0])
)
dataset = PlanktonDataset(ds)
batch_size = 4
num_iterations = max(1, len(dataset) // batch_size)
print(num_iterations)
dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=False)
model = resnet50()
for i, (X_raw, X_pre) in enumerate(tqdm.tqdm(dataloader)):
#X_pre = X_pre[0,:,:,:]
# plt.figure()
# plt.subplot(1, 2, 1)
# plt.imshow(X_raw[0,:,:,:])
# plt.subplot(1, 2, 2)
# plt.imshow(X_pre.permute(1, 2, 0))
# plt.show()
y = model.predict(X_pre)
_, preds = torch.max(y, 1)
print(preds)
break | 24.272727 | 88 | 0.702247 | 160 | 1,068 | 4.51875 | 0.4125 | 0.027663 | 0.058091 | 0.074689 | 0.168741 | 0.077455 | 0.077455 | 0.077455 | 0 | 0 | 0 | 0.022247 | 0.15824 | 1,068 | 44 | 89 | 24.272727 | 0.78198 | 0.140449 | 0 | 0 | 0 | 0 | 0.065789 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.291667 | 0 | 0.291667 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
add651422c7be681143467149cc8fd4dc3f2fd3b | 7,350 | py | Python | CSA_SR_beam.py | YiyongHuang/CSA-SR | 522e946df50b52d9d560e255c8ca4abf1a89501f | [
"MIT"
] | 6 | 2020-12-02T14:13:21.000Z | 2021-05-18T07:22:27.000Z | CSA_SR_beam.py | YiyongHuang/CSA-SR | 522e946df50b52d9d560e255c8ca4abf1a89501f | [
"MIT"
] | 2 | 2020-12-02T14:20:27.000Z | 2021-05-17T07:18:42.000Z | CSA_SR_beam.py | YiyongHuang/CSA-SR | 522e946df50b52d9d560e255c8ca4abf1a89501f | [
"MIT"
] | null | null | null | import torch
from torch import nn
from data_process import *
from ConvGRU_att import SeqVLADModule
from SLSTM import SLSTM
word_counts, unk_required = build_vocab(word_count_threshold=0)
word2id, id2word = word_to_ids(word_counts, unk_requried=unk_required)
class CSA_SR(nn.Module):
def __init__(self, vocab_size, batch_size=64, hidden=512, dropout=0.5, n_step=40, feats_c=1536,
feats_h=8, feats_w=8, num_centers=32, redu_dim=512, beam=5):
super(CSA_SR, self).__init__()
self.batch_size = batch_size
self.hidden = hidden
self.n_step = n_step
self.feats_c = feats_c
self.feats_h = feats_h
self.feats_w = feats_w
self.num_centers = num_centers
self.redu_dim = redu_dim
self.vocab_size = vocab_size
self.beam = beam
self.seqvlad = SeqVLADModule(self.n_step*2-1, self.num_centers, self.redu_dim)
self.drop = nn.Dropout(p=dropout)
self.linear1 = nn.Linear(self.num_centers, 1)
self.linear2 = nn.Linear(hidden, vocab_size)
# self.lstm1 = nn.LSTM(hidden, hidden, batch_first=True, dropout=dropout)
self.lstm2 = SLSTM(2*hidden, hidden)
self.embedding = nn.Embedding(vocab_size, hidden)
def forward(self, video, tag, caption=None):
padding = torch.zeros([self.batch_size, self.n_step - 1, self.feats_c, self.feats_w, self.feats_h]).cuda()
video = torch.cat((video, padding), 1)
video = video.contiguous().view(-1, self.feats_c, self.feats_h, self.feats_w)
video = self.drop(video)
# video = self.linear1(video) # video embed
vlad = self.seqvlad(video) # batch_size, timesteps, num_centers, redu_dim
vlad = vlad.transpose(3, 2)
vlad = self.linear1(vlad)
vid_out = vlad.squeeze(3) # batch_size, timesteps, redu_dim
# video = video.view(-1, self.n_step, self.hidden)
# padding = torch.zeros([self.batch_size, self.n_step-1, self.hidden]).cuda()
# video = torch.cat((vlad, padding), 1) # video input
# vid_out, state_vid = self.lstm1(video)
if self.training:
caption = self.embedding(caption[:, 0:self.n_step-1])
padding = torch.zeros([self.batch_size, self.n_step, self.hidden]).cuda()
caption = torch.cat((padding, caption), 1) # caption padding
caption = torch.cat((caption, vid_out), 2) # caption input
cap_out, state_cap = self.lstm2(caption, tag)
# size of cap_out is [batch_size, 2*n_step-1, hidden]
cap_out = cap_out[:, self.n_step:, :]
cap_out = cap_out.contiguous().view(-1, self.hidden)
cap_out = self.drop(cap_out)
cap_out = self.linear2(cap_out)
return cap_out
# cap_out size [batch_size*79, vocab_size]
else:
padding = torch.zeros([self.batch_size, self.n_step, self.hidden]).cuda()
cap_input = torch.cat((padding, vid_out[:, 0:self.n_step, :]), 2)
cap_out, state_cap = self.lstm2(cap_input, tag)
# padding input of the second layer of LSTM, 80 time steps
bos_id = word2id['<BOS>']*torch.ones(self.batch_size, dtype=torch.long)
bos_id = bos_id.cuda()
cap_input = self.embedding(bos_id)
cap_input = torch.cat((cap_input, vid_out[:, self.n_step, :]), 1)
cap_input = cap_input.view(self.batch_size, 1, 2*self.hidden)
cap_out, state_cap = self.lstm2(cap_input, tag, state_cap)
cap_out = cap_out.contiguous().view(-1, self.hidden)
cap_out = self.drop(cap_out)
cap_out = self.linear2(cap_out)
# cap_out = torch.argmax(cap_out, 1)
odd, cap_out = cap_out.topk(self.beam, 1, True, True) # [batch, self.beam]
# input ["<BOS>"] to let the generate start
cap_out = cap_out.cpu().numpy() # [self.beam, 1]
result = cap_out.copy()
result = result.transpose(1, 0).tolist()
# print(result)
hid_state = []
cap_idx = [np.zeros(self.beam, dtype=int)]
for i in range(self.beam):
hid_state.append(state_cap)
cap_odds = []
# caption = []
# caption.append(cap_out)
# put the generate word index in caption list, generate one word at one time step for each batch
for i in range(self.n_step-62):
cap_out = torch.from_numpy(cap_out).cuda()
for j in range(self.beam):
cap_input = cap_out[:, j]
cap_input = self.embedding(cap_input)
cap_input = torch.cat((cap_input, vid_out[:, self.n_step+1+i, :]), 1)
cap_input = cap_input.view(self.batch_size, 1, 2 * self.hidden)
cap_step, state_cap = self.lstm2(cap_input, tag, hid_state[cap_idx[0][j]])
cap_step = cap_step.contiguous().view(-1, self.hidden)
cap_step = self.drop(cap_step)
cap_step = self.linear2(cap_step)
cap_step = odd[:, j].unsqueeze(1)*cap_step
cap_odds.append(cap_step)
hid_state.append(state_cap)
# cap_step = torch.argmax(cap_step, 1)
# get the index of each word in vocabulary
cap_odds = torch.cat(cap_odds, dim=1)
odd, cap_out = cap_odds.topk(self.beam, 1, True, False) # [batch, self.beam]
# caption.append(cap_step)
cap_out = cap_out.cpu().numpy()
cap_idx = cap_out // self.vocab_size
# print(cap_idx)
cap_out = cap_out % self.vocab_size # [1, self.beam]
# cap_out = np.squeeze(cap_out)
# print(cap_out)
hid_state = hid_state[self.beam:]
cap_odds = []
new_result = []
for k in range(self.beam):
new_result.append(result[cap_idx[0][k]].copy())
# print(new_result)
# print(cap_out[0][k])
new_result[k].append(cap_out[0][k])
# print(new_result)
result = new_result
# print(result)
# if i==3:
# break
# cap_input = self.embedding(cap_out)
# cap_input = torch.cat((cap_input, vid_out[:, self.n_step+1+i, :]), 1)
# cap_input = cap_input.view(self.batch_size, 1, 2 * self.hidden)
# cap_out, state_cap = self.lstm2(cap_input, tag, state_cap)
# cap_out = cap_out.contiguous().view(-1, self.hidden)
# cap_out = self.drop(cap_out)
# cap_out = self.linear2(cap_out)
# cap_out = torch.argmax(cap_out, 1)
# # get the index of each word in vocabulary
# caption.append(cap_out)
num = torch.argmax(odd)
caption = torch.from_numpy(np.array(result[num])).unsqueeze(1)
return caption
# size of caption is [79, batch_size]
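# Minimal usage sketch (illustrative; the tag-vector dimension is an assumption,
# the video shape follows the constructor defaults above):
#
#   model = CSA_SR(vocab_size=len(word2id), batch_size=2, n_step=40).cuda()
#   video = torch.randn(2, 40, 1536, 8, 8).cuda()   # (batch, n_step, C, H, W)
#   tag = torch.randn(2, 300).cuda()                # semantic tag vector, dim assumed
#   model.eval()
#   caption_ids = model(video, tag)                 # beam-searched word ids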
| 44.817073 | 114 | 0.556871 | 979 | 7,350 | 3.95097 | 0.149132 | 0.082213 | 0.037229 | 0.043433 | 0.372802 | 0.292916 | 0.247415 | 0.240176 | 0.240176 | 0.214064 | 0 | 0.020673 | 0.328707 | 7,350 | 163 | 115 | 45.092025 | 0.763275 | 0.271701 | 0 | 0.163265 | 0 | 0 | 0.000944 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020408 | false | 0 | 0.05102 | 0 | 0.102041 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
add6fb6ab9f305b845c615e7376ecddbcc590dc3 | 3,766 | py | Python | camera/rs_yolo.py | miroslavradojevic/python-snippets | 753e1c15dc077d3bcf5de4fd5d3a675daf0da27c | [
"MIT"
] | null | null | null | camera/rs_yolo.py | miroslavradojevic/python-snippets | 753e1c15dc077d3bcf5de4fd5d3a675daf0da27c | [
"MIT"
] | null | null | null | camera/rs_yolo.py | miroslavradojevic/python-snippets | 753e1c15dc077d3bcf5de4fd5d3a675daf0da27c | [
"MIT"
] | null | null | null | import os
import time
import cv2
import numpy as np
from model.yolo_model import YOLO
def process_image(img):
image = cv2.resize(img, (416, 416),
interpolation=cv2.INTER_CUBIC)
image = np.array(image, dtype='float32')
image /= 255.
image = np.expand_dims(image, axis=0)
return image
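# Shape sketch (illustrative): process_image turns an arbitrary BGR frame into
# the (1, 416, 416, 3) float batch expected downstream, e.g.
#
#   frame = cv2.imread('any.jpg')    # e.g. (480, 640, 3) uint8
#   batch = process_image(frame)     # (1, 416, 416, 3) float32 in [0, 1]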
def get_classes(file):
with open(file) as f:
class_names = f.readlines()
class_names = [c.strip() for c in class_names]
return class_names
def draw(image, boxes, scores, classes, all_classes):
for box, score, cl in zip(boxes, scores, classes):
x, y, w, h = box
top = max(0, np.floor(x + 0.5).astype(int))
left = max(0, np.floor(y + 0.5).astype(int))
right = min(image.shape[1], np.floor(x + w + 0.5).astype(int))
bottom = min(image.shape[0], np.floor(y + h + 0.5).astype(int))
cv2.rectangle(image, (top, left), (right, bottom), (255, 0, 0), 2)
cv2.putText(image, '{0} {1:.2f}'.format(all_classes[cl], score),
(top, left - 6),
cv2.FONT_HERSHEY_SIMPLEX,
0.6, (0, 0, 255), 1,
cv2.LINE_AA)
print('class: {0}, score: {1:.2f}'.format(all_classes[cl], score))
print('box coordinate x,y,w,h: {0}'.format(box))
print()
def detect_image(image, yolo, all_classes):
pimage = process_image(image)
start = time.time()
boxes, classes, scores = yolo.predict(pimage, image.shape)
end = time.time()
print('time: {0:.2f}s'.format(end - start))
if boxes is not None:
draw(image, boxes, scores, classes, all_classes)
return image
def detect_video(video, yolo, all_classes):
video_path = os.path.join("videos", "test", video)
camera = cv2.VideoCapture(video_path)
cv2.namedWindow("detection", cv2.WINDOW_AUTOSIZE)
# Prepare for saving the detected video
sz = (int(camera.get(cv2.CAP_PROP_FRAME_WIDTH)),
int(camera.get(cv2.CAP_PROP_FRAME_HEIGHT)))
fourcc = cv2.VideoWriter_fourcc(*'mpeg')
vout = cv2.VideoWriter()
vout.open(os.path.join("videos", "res", video), fourcc, 20, sz, True)
while True:
res, frame = camera.read()
if not res:
break
image = detect_image(frame, yolo, all_classes)
cv2.imshow("detection", image)
# Save the video frame by frame
vout.write(image)
if cv2.waitKey(110) & 0xff == 27:
break
vout.release()
camera.release()
yolo = YOLO(0.5, 0.5)
file = 'data/coco_classes.txt'
all_classes = get_classes(file)
# f = 'people-airport.jpg'
# path = 'images/'+f
# image = cv2.imread(path)
# image = detect_image(image, yolo, all_classes)
# cv2.imwrite('images/res/' + f, image)
# video = 'schiphol.mp4'
# detect_video(video, yolo, all_classes)
import pyrealsense2 as rs
# Configure depth and color streams
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
# Start streaming
pipeline.start(config)
try:
while True:
# Wait for a coherent pair of frames: depth and color
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()
if not depth_frame or not color_frame:
continue
color_image = np.asanyarray(color_frame.get_data())
color_image = detect_image(color_image, yolo, all_classes)
cv2.namedWindow('RealSense-YOLO', cv2.WINDOW_AUTOSIZE)
cv2.imshow('RealSense-YOLO', color_image)
if cv2.waitKey(1) & 0xFF == 27:
break
finally:
# Stop streaming
pipeline.stop() | 27.691176 | 74 | 0.623208 | 531 | 3,766 | 4.305085 | 0.318267 | 0.048119 | 0.036745 | 0.019248 | 0.164917 | 0.131234 | 0.07874 | 0 | 0 | 0 | 0 | 0.037491 | 0.242167 | 3,766 | 136 | 75 | 27.691176 | 0.76349 | 0.106213 | 0 | 0.081395 | 0 | 0 | 0.052192 | 0.006263 | 0 | 0 | 0.002386 | 0 | 0 | 1 | 0.05814 | false | 0 | 0.069767 | 0 | 0.162791 | 0.046512 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
add755e646294ea7a38a312da7f614ee740241f0 | 2,928 | py | Python | recog5.py | p-syd/FRAMS | a9c2aa834556b5ab5504fcbf75665e75de1bcb82 | [
"Apache-2.0"
] | 1 | 2021-02-19T05:18:19.000Z | 2021-02-19T05:18:19.000Z | recog5.py | p-syd/FRAMS | a9c2aa834556b5ab5504fcbf75665e75de1bcb82 | [
"Apache-2.0"
] | null | null | null | recog5.py | p-syd/FRAMS | a9c2aa834556b5ab5504fcbf75665e75de1bcb82 | [
"Apache-2.0"
] | 1 | 2021-02-19T05:17:01.000Z | 2021-02-19T05:17:01.000Z | import cv2
from face_detection import face
from keras.models import load_model
import numpy as np
from embedding import emb
from retreive_pymongo_data_2 import database
import time
from PyQt5.QtWidgets import QMainWindow, QApplication, QWidget, QAction, QTableWidget,QTableWidgetItem,QVBoxLayout
from PyQt5.QtGui import QIcon
from PyQt5.QtCore import pyqtSlot
import sys
def detect(num):
subno=num
data=database()
a={}
people={}
def getdata():
name=[]
records=data.db.pa.find()
j=0
for i in records:
j+=1
name.append(i["name"])
k=len(name)
for i in range(0,k):
people[i]=name[i]
a[i]=0
getdata()
print(a)
print(people)
label=None
abhi=None
x=a
e=emb()
fd=face()
model=load_model('face_reco2.MODEL')
def test():
test_run=cv2.imread('1.jpg',1)
test_run=cv2.resize(test_run,(160,160))
#test_run=np.rollaxis(test_run,2,0)
test_run=test_run.astype('float')/255.0
test_run=np.expand_dims(test_run,axis=0)
test_run=e.calculate(test_run)
test_run=np.expand_dims(test_run,axis=0)
test_run=model.predict(test_run)[0]
cap=cv2.VideoCapture(-1)
ret=True
test()
while ret:
ret,frame=cap.read()
frame=cv2.flip(frame,1)
det,coor=fd.detectFace(frame)
if(det is not None):
for i in range(len(det)):
detected=det[i]
k=coor[i]
f=detected
detected=cv2.resize(detected,(160,160))
#detected=np.rollaxis(detected,2,0)
detected=detected.astype('float')/255.0
detected=np.expand_dims(detected,axis=0)
feed=e.calculate(detected)
feed=np.expand_dims(feed,axis=0)
prediction=model.predict(feed)[0]
result=int(np.argmax(prediction))
if(np.max(prediction)>.70):
                    # Use a distinct loop variable so the face index `i` above is not shadowed
                    for p in people:
                        if result == p:
                            label = people[p]
                            if a[p] == 0:
                                print("a")
                                data.update(label, subno)
                                a[p] = 1
                                abhi = p
else:
label='unknown'
cv2.putText(frame,label,(k[0],k[1]),cv2.FONT_HERSHEY_SIMPLEX,1,(255,255,255),2)
x=k[0]
y=k[1]
if(abhi is not None):
if(a[abhi]==1):
cv2.putText(frame,"your attendance is complete",(x,y-30),cv2.FONT_HERSHEY_SIMPLEX,1,(255,255,255),2)
cv2.rectangle(frame,(k[0],k[1]),(k[0]+k[2],k[1]+k[3]),(252,160,39),3)
cv2.imshow('onlyFace',f)
cv2.imshow('frame',frame)
if(cv2.waitKey(1) & 0XFF==ord('q')):
break
cap.release()
cv2.destroyAllWindows()
data.export_csv()
#from a import App
#app = QApplication(sys.argv)
#ex = App()
#sys.exit(app.exec_())
| 28.705882 | 121 | 0.558402 | 411 | 2,928 | 3.900243 | 0.326034 | 0.065502 | 0.014972 | 0.013724 | 0.087336 | 0.087336 | 0.087336 | 0.087336 | 0.087336 | 0.047411 | 0 | 0.052889 | 0.302596 | 2,928 | 101 | 122 | 28.990099 | 0.732125 | 0.04918 | 0 | 0.022727 | 0 | 0 | 0.030238 | 0 | 0 | 0 | 0.00144 | 0 | 0 | 1 | 0.034091 | false | 0 | 0.125 | 0 | 0.159091 | 0.034091 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
add7a694af3758e785c3db09d06052572c8c2656 | 6,600 | py | Python | rplugin/python3/plugin.py | bbli/filter-jump.nvim | fbcb958f375c52f6b8e0d14218c20e4e4c7fc0f6 | [
"MIT"
] | 1 | 2020-12-27T09:31:38.000Z | 2020-12-27T09:31:38.000Z | rplugin/python3/plugin.py | bbli/filter-jump.nvim | fbcb958f375c52f6b8e0d14218c20e4e4c7fc0f6 | [
"MIT"
] | null | null | null | rplugin/python3/plugin.py | bbli/filter-jump.nvim | fbcb958f375c52f6b8e0d14218c20e4e4c7fc0f6 | [
"MIT"
] | null | null | null | import pynvim
import os, sys; sys.path.append(os.path.dirname(os.path.realpath(__file__)))
from base import *
a = 1000 # since this works, this means that a python process is spun up and waiting in the background to run these "handlers"
@pynvim.plugin
class Jumper(object):
def __init__(self,vim):
# Interacting with Vim
self.vim = vim # should only be used to setup window buffer pairs -> this way you always know which buffer you are acting on
self.o_window_buffer = None
self.j_window_buffer = None
# Search Related
self.strip_set = vim.vars.get("filter_jump_strip_characters",['_'])
# Highlight + Selection Related
self.highlighter = None
# Hotkeys
self.keymaps = {}
        user_keymaps = vim.vars.get("filter_jump_keymaps", {})
        if user_keymaps:
            for key, command in user_keymaps.items():
                self.keymaps[key] = command
        else:
            # Fall back to the plugin's registered command names
            self.keymaps = {
                "<C-n>": "FilterJumpNextMatch",
                "<C-p>": "FilterJumpPrevMatch",
                "<CR>": "FilterJumpSelect",
                "<C-f>": "FilterJumpSelect",
                "<C-c>": "FilterJumpExit",
            }
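        # Configuration sketch (illustrative): users can override these defaults
        # from init.vim, e.g.
        #
        #   let g:filter_jump_keymaps = {
        #       \ '<C-j>': 'FilterJumpNextMatch',
        #       \ '<C-k>': 'FilterJumpPrevMatch',
        #       \ }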
@pynvim.command("FilterJumpLineForward", nargs=0, sync=True)
def open_jump_buffer_forward(self):
self.type = "Forward"
self._open_jump_buffer("FilterJumpLineForward")
@pynvim.command("FilterJumpLineBackward", nargs = 0, sync = True)
def open_jump_buffer_backward(self):
self.type = "Backward"
self._open_jump_buffer("FilterJumpLineBackward")
@pynvim.command("FilterJump", nargs=0, sync=True)
def open_filter_jump(self):
self.type = "Regular"
self._open_jump_buffer("FilterJump")
def _open_jump_buffer(self,filetype):
self.o_window_buffer = WindowBufferPair(
self.vim.current.window,
self.vim.current.buffer,
self.vim
)
# set here so highlighter gets reset between every jump call
self.highlighter = Highlighter(self.vim.request("nvim_create_namespace",""))
self.vim.command("belowright split")
self.vim.command("e FilterJump")
self.vim.command("setlocal buftype=nofile")
self.vim.command("setlocal noswapfile")
self.vim.command("setlocal nobuflisted")
self.vim.command('setlocal filetype='+filetype)
for key,command in self.keymaps.items():
self.vim.command(f"inoremap <buffer> {key} <ESC>:{command}<CR>")
self.vim.current.window.height = 1
options = self.vim.vars.get("filter_jump_buffer_options")
if options is not None:
for buffer_specific_option in options:
# DPrintf(buffer_specific_option)
self.vim.command(buffer_specific_option)
self.j_window_buffer = WindowBufferPair(
self.vim.current.window,
self.vim.current.buffer,
self.vim)
self.vim.call("deletebufline",self.j_window_buffer.buffer,1,"$")
self.vim.command("startinsert!")
# self.compressed_lines = None // maybe do optimization later
@pynvim.autocmd("TextChangedI", pattern='FilterJump,FilterJumpLineForward,FilterJumpLineBackward', sync=True)
def begin_matcher(self):
if self.type == "Regular":
self._doPageWideSearch()
else:
self._doOneLineSearch()
def _doOneLineSearch(self):
# 1. Get current word
word = self.j_window_buffer.getCurrLine()
# 2. Create the page content
page_content, vim_translator = self.o_window_buffer.t_getLineRangeAndTranslator(self.type)
line_content = page_content[0]
# 3. Backend Matching
matches = findMatches(line_content,word)
new_highlights = vim_translator.translateMatches(0,matches)
# 4. Highlighting
self.highlighter.t_updateHighlighter(new_highlights,self.type,self.o_window_buffer._getCurrCursorForced())
self.o_window_buffer.drawHighlights(self.highlighter)
def _doPageWideSearch(self):
# 1. Get current word in FilterJump
c_word, filters = extractCWordAndFilters(self.j_window_buffer.getCurrLine(),self.strip_set)
if len(c_word.getString()) < 2:
self.o_window_buffer.clearHighlights(self.highlighter)
return
# 2. Create the page_content
page_content, vim_translator = self.o_window_buffer.t_getLineRangeAndTranslator(self.type)
array_of_c_strings = CompressedString.createArrayOfCompressedStrings(page_content,self.strip_set)
# 3. Backend Matching
new_highlights = []
for rel_line,c_string in enumerate(array_of_c_strings):
matches = findMatches(c_string.getString(),c_word.getString(),filters)
expanded_matches = c_string.expandMatches(matches)
lm_pairs = vim_translator.translateMatches(rel_line,expanded_matches)
new_highlights.extend(lm_pairs)
# 4. Highlighting
self.highlighter.t_updateHighlighter(new_highlights,self.type,self.o_window_buffer._getCurrCursorForced())
self.o_window_buffer.drawHighlights(self.highlighter)
@pynvim.command("FilterJumpNextMatch",nargs=0,sync=True)
def next_match(self):
# 1. change highlighter struct
self.highlighter.incrementIndex()
# 2. redraw
self.o_window_buffer.drawHighlights(self.highlighter)
self.vim.command("startinsert!")
@pynvim.command("FilterJumpPrevMatch",nargs=0,sync=True)
def prev_match(self):
self.highlighter.decrementIndex()
self.o_window_buffer.drawHighlights(self.highlighter)
self.vim.command("startinsert!")
@pynvim.command("FilterJumpSelect",nargs=0,sync=True)
def select(self):
# NOTE: below method needs to be called first to prevent vim from "scrolling" your view down + in case we call a vim command that doesn't allow you to pass in the window/buffer afterwards
self.j_window_buffer.destroyWindowBuffer()
self.o_window_buffer.setCursor(self.highlighter.getCurrentMatch())
self.o_window_buffer.clearHighlights(self.highlighter)
@pynvim.command("FilterJumpExit",nargs=0,sync=True)
def exit(self):
self.j_window_buffer.destroyWindowBuffer()
self.o_window_buffer.clearHighlights(self.highlighter)
@pynvim.command("FilterJumpVimExit",nargs=0,sync=True)
def vim_exit(self):
self.o_window_buffer.clearHighlights(self.highlighter)
| 44 | 195 | 0.669242 | 758 | 6,600 | 5.637203 | 0.295515 | 0.0674 | 0.038615 | 0.059677 | 0.335362 | 0.279429 | 0.274514 | 0.252516 | 0.238006 | 0.189094 | 0 | 0.005316 | 0.230455 | 6,600 | 149 | 196 | 44.295302 | 0.835991 | 0.129242 | 0 | 0.201754 | 0 | 0 | 0.120678 | 0.037723 | 0 | 0 | 0 | 0 | 0 | 1 | 0.114035 | false | 0 | 0.026316 | 0 | 0.157895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
add8e9de472a82087b408daf1ef3cea100264228 | 2,225 | py | Python | sumo/tools/net/netstats.py | iltempe/osmosi | c0f54ecdbb7c7b5602d587768617d0dc50f1d75d | [
"MIT"
] | null | null | null | sumo/tools/net/netstats.py | iltempe/osmosi | c0f54ecdbb7c7b5602d587768617d0dc50f1d75d | [
"MIT"
] | null | null | null | sumo/tools/net/netstats.py | iltempe/osmosi | c0f54ecdbb7c7b5602d587768617d0dc50f1d75d | [
"MIT"
] | 2 | 2017-12-14T16:41:59.000Z | 2020-10-16T17:51:27.000Z | #!/usr/bin/env python
"""
@file netstats.py
@author Daniel Krajzewicz
@author Michael Behrisch
@date 2008-08-13
@version $Id$
Prints some information about a given network
SUMO, Simulation of Urban MObility; see http://sumo.dlr.de/
Copyright (C) 2009-2017 DLR (http://www.dlr.de/) and contributors
This file is part of SUMO.
SUMO is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 3 of the License, or
(at your option) any later version.
"""
from __future__ import absolute_import
from __future__ import print_function
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import sumolib.net
def renderHTML(values):
print("<html><body>")
print("<h1>" + values["netname"] + "</h1></br>")
# network
print("<h2>Network</h2></br>")
# edges
print("<h2>Edges</h2></br>")
print("Edge number: " + str(values["edgeNumber"]) + "</br>")
print("Edgelength sum: " + str(values["edgeLengthSum"]) + "</br>")
print("Lanelength sum: " + str(values["laneLengthSum"]) + "</br>")
# nodes
print("<h2>Nodes</h2></br>")
print("Node number: " + str(values["nodeNumber"]) + "</br>")
print("</body></html>")
def renderPNG(values):
    from pylab import bar, show
    bar([0], [values["edgeNumber"]], 1, color='r')
    show()
if len(sys.argv) < 2:
print("Usage: " + sys.argv[0] + " <net>")
sys.exit()
print("Reading net...")
net = sumolib.net.readNet(sys.argv[1])
values = {}
values["netname"] = "hallo"
values["edgesPerLaneNumber"] = {}
values["edgeLengthSum"] = 0
values["laneLengthSum"] = 0
values["edgeNumber"] = len(net._edges)
values["nodeNumber"] = len(net._nodes)
for e in net._edges:
values["edgeLengthSum"] = values["edgeLengthSum"] + e._length
values["laneLengthSum"] = values["laneLengthSum"] + \
(e._length * float(len(e._lanes)))
if len(e._lanes) not in values["edgesPerLaneNumber"]:
values["edgesPerLaneNumber"][len(e._lanes)] = 0
values["edgesPerLaneNumber"][
len(e._lanes)] = values["edgesPerLaneNumber"][len(e._lanes)] + 1
renderHTML(values)
renderPNG(values)
| 30.067568 | 76 | 0.662472 | 294 | 2,225 | 4.931973 | 0.445578 | 0.024138 | 0.031034 | 0.057931 | 0.090345 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018201 | 0.160449 | 2,225 | 73 | 77 | 30.479452 | 0.75803 | 0.261573 | 0 | 0 | 0 | 0 | 0.286765 | 0.012868 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046512 | false | 0 | 0.139535 | 0 | 0.186047 | 0.302326 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
add95ea199105fe8e3486c6fbcc6becb2d8fcb32 | 819 | py | Python | Fullprofile.py | Neotoxic-off/medkit | c5587fa832b83abf0ce80ca5f61741d523db1c56 | [
"MIT"
] | 3 | 2021-04-18T07:17:15.000Z | 2022-02-09T11:01:51.000Z | Fullprofile.py | Neotoxic-off/medkit | c5587fa832b83abf0ce80ca5f61741d523db1c56 | [
"MIT"
] | null | null | null | Fullprofile.py | Neotoxic-off/medkit | c5587fa832b83abf0ce80ca5f61741d523db1c56 | [
"MIT"
] | 1 | 2022-03-09T15:29:50.000Z | 2022-03-09T15:29:50.000Z | from Crypto.Cipher import AES
import json
import requests
import base64
import zlib
def decode():
    # Read the saved blob, skipping the 8-byte header; the context manager closes the file
    with open("save.txt", 'r') as f:
        bin = f.read()[8:]
cipher = AES.new(b"5BCC2D6A95D4DF04A005504E59A9B36E", AES.MODE_ECB)
profile = base64.b64decode(bin)
print(profile)
profile = cipher.decrypt(profile)
profile = "".join([chr(c + 1) for c in profile]).replace("\u0001", "")
profile = base64.b64decode(profile[8:])
profile = profile[4:len(profile)]
profile = zlib.decompress(profile).decode("utf16")
profile = json.loads(profile)
print(profile)
with open("profile.json", "w+") as file:
json.dump(profile, file)
def main():
arguments()
print("\n-> Injecting...\n")
decode()
print("\n-> Injection finished")
input("\n[ ENTER ]")
return (0)
main() | 26.419355 | 74 | 0.634921 | 104 | 819 | 4.990385 | 0.538462 | 0.1079 | 0.084778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060976 | 0.199023 | 819 | 31 | 75 | 26.419355 | 0.730183 | 0 | 0 | 0.071429 | 0 | 0 | 0.145122 | 0.039024 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.178571 | 0 | 0.285714 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
addbae78d83d86bc53fea350cd5a9fdfc6fea2d6 | 3,879 | py | Python | healvis/tests/test_beam_model.py | alphatangojuliett/healvis | 9c17ef4696a78fff9aea056256d5a164ed16019c | [
"BSD-3-Clause"
] | 4 | 2020-02-27T22:39:06.000Z | 2022-01-12T00:34:58.000Z | healvis/tests/test_beam_model.py | alphatangojuliett/healvis | 9c17ef4696a78fff9aea056256d5a164ed16019c | [
"BSD-3-Clause"
] | 16 | 2019-03-12T17:51:19.000Z | 2019-11-19T19:15:44.000Z | healvis/tests/test_beam_model.py | alphatangojuliett/healvis | 9c17ef4696a78fff9aea056256d5a164ed16019c | [
"BSD-3-Clause"
] | 1 | 2020-09-08T18:21:54.000Z | 2020-09-08T18:21:54.000Z | # -*- mode: python; coding: utf-8 -*
# Copyright (c) 2019 Radio Astronomy Software Group
# Licensed under the 3-clause BSD License
import numpy as np
from astropy_healpix import healpy as hp
import os
import copy
from astropy.cosmology import WMAP9
from pyuvdata import UVBeam
from healvis import beam_model
from healvis.data import DATA_PATH
import healvis.tests as simtest
def test_PowerBeam():
# load it
beam_path = os.path.join(DATA_PATH, "HERA_NF_dipole_power.beamfits")
P = beam_model.PowerBeam(beam_path)
freqs = np.arange(120e6, 160e6, 4e6)
Nfreqs = len(freqs)
# test frequency interpolation
P2 = copy.deepcopy(P)
P3 = P2.interp_freq(freqs, inplace=False, kind="linear")
P2.interp_freq(freqs, inplace=True, kind="linear")
# assert inplace and not inplace are consistent
assert P2 == P3
# assert correct frequencies
np.testing.assert_array_almost_equal(freqs, P3.freq_array[0])
assert P3.bandpass_array.shape[1] == P3.Nfreqs == Nfreqs
# get beam value
Npix = 20
az = np.linspace(0, 2 * np.pi, Npix, endpoint=False)
za = np.linspace(0, 1, Npix, endpoint=False)
b = P.beam_val(az, za, freqs, pol="XX")
# check shape and rough value check (i.e. interpolation is near zenith as expected)
assert b.shape == (Npix, Nfreqs)
assert np.isclose(b.max(), 1.0, atol=1e-3)
# shift frequencies by a delta and assert beams are EXACTLY the same (i.e. no freq
# interpolation)
# delta must be larger than UVBeam._inter_freq tol, but small enough
# to keep the same freq nearest neighbors
b2 = P.beam_val(az, za, freqs + 1e6, pol="XX")
np.testing.assert_array_almost_equal(b, b2)
# smooth the beam
SP = P.smooth_beam(freqs, inplace=False, freq_ls=2.0, noise=1e-10)
assert SP.Nfreqs == len(freqs)
def test_AnalyticBeam():
freqs = np.arange(120e6, 160e6, 4e6)
Nfreqs = len(freqs)
Npix = 20
az = np.linspace(0, 2 * np.pi, Npix, endpoint=False)
za = np.linspace(0, 1, Npix, endpoint=False)
# Gaussian
A = beam_model.AnalyticBeam("gaussian", gauss_width=15.0)
b = A.beam_val(az, za, freqs)
assert b.shape == (Npix, Nfreqs) # assert array shape
assert np.isclose(b[0, :], 1.0).all() # assert peak normalized
# Chromatic Gaussian
A = beam_model.AnalyticBeam(
"gaussian", gauss_width=15.0, ref_freq=freqs[0], spectral_index=-1.0
)
b = A.beam_val(az, za, freqs)
assert b.shape == (Npix, Nfreqs) # assert array shape
assert np.isclose(b[0, :], 1.0).all() # assert peak normalized
# Uniform
A = beam_model.AnalyticBeam("uniform")
b = A.beam_val(az, za, freqs)
assert b.shape == (Npix, Nfreqs) # assert array shape
assert np.isclose(b, 1.0).all() # assert peak normalized
# Airy
A = beam_model.AnalyticBeam("airy", diameter=15.0)
b = A.beam_val(az, za, freqs)
assert b.shape == (Npix, Nfreqs) # assert array shape
assert np.isclose(b[0, :], 1.0).all() # assert peak normalized
# custom
A = beam_model.AnalyticBeam(beam_model.airy_disk)
b2 = A.beam_val(az, za, freqs, diameter=15.0)
assert b2.shape == (Npix, Nfreqs) # assert array shape
assert np.isclose(b2[0, :], 1.0).all() # assert peak normalized
np.testing.assert_array_almost_equal(b, b2) # assert its the same as airy
# exceptions
A = beam_model.AnalyticBeam("uniform")
simtest.assert_raises_message(
NotImplementedError,
"Beam type foo not available yet.",
beam_model.AnalyticBeam,
"foo",
)
simtest.assert_raises_message(
KeyError,
"gauss_width required for gaussian beam",
beam_model.AnalyticBeam,
"gaussian",
)
simtest.assert_raises_message(
KeyError,
"Dish diameter required for airy beam",
beam_model.AnalyticBeam,
"airy",
)
| 33.730435 | 87 | 0.664089 | 567 | 3,879 | 4.440917 | 0.301587 | 0.042891 | 0.07506 | 0.03058 | 0.454726 | 0.385624 | 0.33201 | 0.321684 | 0.294678 | 0.244639 | 0 | 0.031767 | 0.220933 | 3,879 | 114 | 88 | 34.026316 | 0.801456 | 0.218355 | 0 | 0.435897 | 0 | 0 | 0.066667 | 0.009667 | 0 | 0 | 0 | 0 | 0.269231 | 1 | 0.025641 | false | 0.012821 | 0.115385 | 0 | 0.141026 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
addc299b0322bc4e44eb7e4f8ebc6c83d0b758aa | 1,522 | py | Python | sklearn/pcairis.py | Fernal73/LearnPython3 | 5288017c0dbf95633b84f1e6324f00dec6982d36 | [
"MIT"
] | 1 | 2021-12-17T11:03:13.000Z | 2021-12-17T11:03:13.000Z | sklearn/pcairis.py | Fernal73/LearnPython3 | 5288017c0dbf95633b84f1e6324f00dec6982d36 | [
"MIT"
] | 1 | 2020-02-05T00:14:43.000Z | 2020-02-06T09:22:49.000Z | sklearn/pcairis.py | Fernal73/LearnPython3 | 5288017c0dbf95633b84f1e6324f00dec6982d36 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
# load dataset into Pandas DataFrame
df = pd.read_csv("iris.data", names=['sepal length','sepal width','petal length','petal width','target'])
features = ['sepal length', 'sepal width', 'petal length', 'petal width']
# Separating out the features
x = df.loc[:, features].values
# Separating out the target
y = df.loc[:,['target']].values
# Standardizing the features
x = StandardScaler().fit_transform(x)
pca = PCA(n_components=2)
principalComponents = pca.fit_transform(x)
principalDf = pd.DataFrame(data = principalComponents
, columns = ['principal component 1', 'principal component 2'])
finalDf = pd.concat([principalDf, df[['target']]], axis = 1)
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('Principal Component 1', fontsize = 15)
ax.set_ylabel('Principal Component 2', fontsize = 15)
ax.set_title('2 component PCA', fontsize = 20)
targets = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
colors = ['r', 'g', 'b']
for target, color in zip(targets,colors):
indicesToKeep = finalDf['target'] == target
ax.scatter(finalDf.loc[indicesToKeep, 'principal component 1']
, finalDf.loc[indicesToKeep, 'principal component 2']
, c = color
, s = 50)
ax.legend(targets)
ax.grid()
plt.savefig('pcairis.png')
plt.savefig('pcairis.pdf')
| 36.238095 | 105 | 0.693167 | 202 | 1,522 | 5.183168 | 0.465347 | 0.103152 | 0.054441 | 0.040115 | 0.158548 | 0.080229 | 0.080229 | 0.080229 | 0 | 0 | 0 | 0.017885 | 0.155059 | 1,522 | 41 | 106 | 37.121951 | 0.796268 | 0.103811 | 0 | 0 | 0 | 0 | 0.243741 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.129032 | 0 | 0.129032 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
adddb5c13d1aa4fce8bce82358094649d2fec155 | 5,988 | py | Python | functions/PercentAboveThreshold.py | mmfink/raster-functions | 55a33bdd2ac4f3333eca6ccd49de6f3d5d21f7ba | [
"Apache-2.0"
] | 173 | 2015-01-21T03:10:13.000Z | 2022-02-16T17:16:45.000Z | functions/PercentAboveThreshold.py | mmfink/raster-functions | 55a33bdd2ac4f3333eca6ccd49de6f3d5d21f7ba | [
"Apache-2.0"
] | 34 | 2015-02-18T10:58:31.000Z | 2021-09-20T17:24:34.000Z | functions/PercentAboveThreshold.py | mmfink/raster-functions | 55a33bdd2ac4f3333eca6ccd49de6f3d5d21f7ba | [
"Apache-2.0"
] | 78 | 2015-01-30T16:26:31.000Z | 2022-03-22T10:59:13.000Z | import numpy as np
from datetime import timedelta
import datetime
#import sys
#import os
#import pickle
#debug_logs_directory =
class PercentAboveThreshold():
def __init__(self):
self.name = 'Percent Above or Below Threshold'
        self.description = 'Calculates the percentage of pixels that are above or below ' \
                           'a threshold value. The threshold value is set in the raster function. ' \
                           'The raster function can be applied to a time-enabled stack of rasters in ' \
                           'a mosaic dataset.'
        self.times = []
        self.start_date = None
        self.end_date = None
        self.threshold = 50
def getParameterInfo(self):
return [
{
'name': 'rasters',
'dataType': 'rasters',
'value': None,
'required': True,
'displayName': 'Rasters',
'description': 'The collection of rasters to analyze.',
},
{
'name': 'start_date',
'dataType': 'string',
'value': '1/1/2019 12:30:00',
'required': True,
'displayName': 'Start Date',
'description': 'The beginning date of analysis (inclusive of entire year).',
},
{
'name': 'end_date',
'dataType': 'string',
'value': '12/31/2019 23:30:00',
'required': True,
'displayName': 'End Date',
'description': 'The final date of analysis (inclusive of entire year).',
},
{
'name': 'threshold',
'dataType': 'numeric',
'value': 45,
'required': True,
'displayName': 'Value Threshold',
'description': 'Value Threshold.',
}
]
def getConfiguration(self, **scalars):
return {
'inheritProperties': 4 | 8, # inherit everything but the pixel type (1) and NoData (2)
'invalidateProperties': 2 | 4, # invalidate histogram and statistics because we are modifying pixel values
'inputMask': True, # need raster mask of all input rasters in .updatePixels().
'resampling': False, # process at native resolution
'keyMetadata': ['AcquisitionDate']
}
def updateRasterInfo(self, **kwargs):
# outStats = {'minimum': -1, 'maximum': 1}
# outStatsTuple = tuple(outStats for i in range(outBandCount))
kwargs['output_info']['pixelType'] = 'f4' # output pixels are floating-point values
kwargs['output_info']['histogram'] = () # no statistics/histogram for output raster specified
kwargs['output_info']['statistics'] = () # outStatsTuple
#kwargs['output_info'][
# 'bandCount'] = outBandCount # number of output bands. 7 time bands, 3 TC bands, creates 21 bands
self.times = kwargs['rasters_keyMetadata']
self.start_date = kwargs['start_date']
self.end_date = kwargs['end_date']
self.threshold = int(kwargs['threshold'])
return kwargs
def updateKeyMetadata(self, names, bandIndex, **keyMetadata):
return keyMetadata
def updatePixels(self, tlc, shape, props, **pixelBlocks):
#fname = '{:%Y_%b_%d_%H_%M_%S}_t.txt'.format(datetime.datetime.now())
#filename = os.path.join(debug_logs_directory, fname)
#file = open(filename,"w")
#file.write("File Open.\n")
pix_time = [j['acquisitiondate'] for j in self.times]
#pickle_filename = os.path.join(debug_logs_directory, fname)
#pickle.dump(pix_time, open(pickle_filename[:-4]+'pix_time.p',"wb"))
#file.write(str(len(pix_time))+ "\n")
pix_blocks = pixelBlocks['rasters_pixels']
pix_array = np.asarray(pix_blocks)
#pickle_filename = os.path.join(debug_logs_directory, fname)
#pickle.dump(pix_array, open(pickle_filename[:-4]+'pix_blocks.p',"wb"))
pix_array_dim = pix_array.shape
num_squares_x = pix_array_dim[2]
num_squares_y = pix_array_dim[3]
#file.write("Filtering Based on Time\n")
# This worked before I added time Filtering:
#pix_as_array = np.reshape(pix_array, -1)
#total_count = np.size(pix_as_array)
#vals_above_thresh_count = np.size(np.where(pix_as_array <= self.threshold))
#outBlock = np.ones((num_squares_x, num_squares_y)) * (vals_above_thresh_count / total_count) * 100
t_array = []
ind_array = []
start_date = self.start_date #"1/1/2019 12:30:00"
end_date = self.end_date #"7/7/2019 12:30:00"
start_datetime = datetime.datetime.strptime(start_date, '%m/%d/%Y %H:%M:%S') # %p')
end_datetime = datetime.datetime.strptime(end_date, '%m/%d/%Y %H:%M:%S') # %p')
for ind, time in enumerate(pix_time):
temp_t = datetime.datetime(1900, 1, 1) + timedelta(time - 2)
if temp_t >= start_datetime and temp_t <= end_datetime:
t_array.append(temp_t)
ind_array.append(ind)
#time_within = [pix_time[x] for x in ind_array]
pix_array_within = pix_array[ind_array, :, :, :]
#threshold = 50
pix_as_array = np.reshape(pix_array_within, -1)
total_count = np.size(pix_as_array)
vals_above_thresh_count = np.size(np.where(pix_as_array <= self.threshold)) #< below, > above
outBlock = np.ones((num_squares_x, num_squares_y)) * (vals_above_thresh_count / total_count) * 100
#file.write("DONE\n")
#file.close()
pixelBlocks['output_pixels'] = outBlock.astype(props['pixelType'], copy=False)
#masks = np.array(pixelBlocks['rasters_mask'], copy=False)
#pixelBlocks['output_mask'] = np.all(masks, axis=0).astype('u1', copy=False)
return pixelBlocks
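# Core computation sketch (illustrative): after time filtering, the output value
# reduces to a plain numpy percentage over the flattened pixel stack:
#
#   pix = np.reshape(pix_array_within, -1)
#   pct = np.size(np.where(pix <= threshold)) / np.size(pix) * 100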
| 39.655629 | 119 | 0.580327 | 695 | 5,988 | 4.821583 | 0.309353 | 0.023873 | 0.017905 | 0.023873 | 0.219934 | 0.191883 | 0.184721 | 0.168606 | 0.126529 | 0.126529 | 0 | 0.021322 | 0.29509 | 5,988 | 150 | 120 | 39.92 | 0.772566 | 0.280895 | 0 | 0.064516 | 0 | 0 | 0.238039 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0 | 0.032258 | 0.032258 | 0.16129 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
addff9dbb0adcfd92bd159c098ba273abf78a689 | 11,322 | py | Python | S_MIG/utils.py | HanjiangHu/Multi-LiDAR-Placement-for-3D-Detection | b80ef987efb49a853bc8efde7438c22ed467dc4d | [
"MIT"
] | null | null | null | S_MIG/utils.py | HanjiangHu/Multi-LiDAR-Placement-for-3D-Detection | b80ef987efb49a853bc8efde7438c22ed467dc4d | [
"MIT"
] | null | null | null | S_MIG/utils.py | HanjiangHu/Multi-LiDAR-Placement-for-3D-Detection | b80ef987efb49a853bc8efde7438c22ed467dc4d | [
"MIT"
] | null | null | null | import math
import numpy as np
from scipy.spatial.transform import Rotation as R
from sample_script import BresenhamInt3D
import transforms3d  # used by get_points_brute_force for Euler-angle rotations
def BresenhamVec3D(vec1: list, vec2: list):
return BresenhamInt3D(vec1[0], vec1[1], vec1[2], vec2[0], vec2[1], vec2[2])
def get_points(origin, distance, angle, min_clip, max_clip):
point = get_point_at_distance_and_angle(*origin, distance, angle)
point = np.clip(point, min_clip, max_clip)
return np.array(BresenhamInt3D(origin[0], origin[1], 0, point[0], point[1], 0))
def get_3d_points(origin, distance, theta, phi, min_clip, max_clip):
point = get_3d_point_at_distance_and_angle(*origin, distance, theta, phi)
point = np.clip(point, min_clip, max_clip)
return np.array(
BresenhamInt3D(origin[0], origin[1], origin[2], point[0], point[1], point[2])
)
def transform(world, lidar_origin):
rotation_matrix = R.from_euler("zyx", lidar_origin[3:])
if isinstance(world[0], list) or isinstance(world[0], np.ndarray):
v_3f = np.array([[w[0], w[1], w[2], 1] for w in world])
else:
v_3f = np.array([world[0], world[1], world[2], 1])
t = [[lidar_origin[0], lidar_origin[1], lidar_origin[2]]]
transformation_matrix = np.identity(4)
transformation_matrix[:3, :3] = rotation_matrix.as_matrix()
transformation_matrix[:3, 3] = np.array(t)
cube_local = np.dot(np.linalg.inv(transformation_matrix), v_3f.T)
return cube_local
def distance_something(world_coords, lidar_origin, beam_angle):
    # Vertical distance of each point from the cone swept by this beam angle
    local = transform(world_coords, lidar_origin).T
    distance = local[:, 2] - math.tan(np.pi * (beam_angle) / 180) * np.sqrt(
        pow(local[:, 0], 2) + pow(local[:, 1], 2)
    )
    return abs(distance)
def get_points_in_proximity(world_coords, lidar_origin, beam_angle, beta):
    local = transform(world_coords, lidar_origin).T
    distance = abs(
        local[:, 2] - math.tan(np.pi * (beam_angle) / 180) * np.sqrt(
            pow(local[:, 0], 2) + pow(local[:, 1], 2)
        )
    )
    # Keep only the world points within `beta` of the beam
    inds = distance < beta
    return world_coords[inds, :]
def get_all_points(shell_points, lidar_origin, beam_angles, beta):
    all_points = []  # must be initialised before the extend() calls below
    for beam_angle in beam_angles:
for y in range(0, 314):
y /= 100
lidar_origin[5] = y
d = get_points_in_proximity(shell_points, lidar_origin, beam_angle, beta)
for point in d:
all_points.extend(
BresenhamInt3D(
lidar_origin[0],
lidar_origin[1],
lidar_origin[2],
point[0],
point[1],
point[2],
)
)
all_points = np.array(all_points)
all_points = np.unique(all_points, axis=0)
return all_points
def get_shell_points():
shell_points = []
for i in range(40):
for j in range(4):
shell_points.append([60, i, j])
shell_points.append([0, i, j])
for i in range(60):
for j in range(4):
shell_points.append([i, 40, j])
shell_points.append([i, 0, j])
for i in range(60):
for j in range(40):
shell_points.append([i, j, 4])
shell_points.append([i, j, 0])
return shell_points
def get_point_at_distance_and_angle(origin_x, origin_y, distance, angle):
angle = (angle / 180) * math.pi
x = origin_x + (math.cos(angle) * distance)
y = origin_y + (math.sin(angle) * distance)
return round(x, 2), round(y, 2)
def get_3d_point_at_distance_and_angle(
origin_x, origin_y, origin_z, distance, theta, phi
):
theta = (theta / 180) * np.pi
phi = (phi / 180) * np.pi
x = distance * math.sin(theta) * math.cos(phi) + origin_x
y = distance * math.sin(theta) * math.sin(phi) + origin_y
z = distance * math.cos(theta) + origin_z
return round(x), round(y), round(z)
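# Example (illustrative): standard spherical-to-Cartesian conversion, e.g. a point
# 10 units from the origin with theta=90 (in-plane) and phi=0 lands on the x-axis:
#
#   get_3d_point_at_distance_and_angle(0, 0, 0, 10, 90, 0)  # -> (10, 0, 0)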
def get_points_brute_force(
cube_x_num,
cube_y_num,
cube_z_num,
cube_resolution_x,
cube_resolution_y,
cube_resolution_z,
dead_x_low,
dead_x_high,
dead_y_low,
dead_y_high,
dead_z_low,
dead_z_high,
lidar_origin,
beam_angle,
laser_num,
):
    dead = set()    # cube centers that fall inside the dead zone
    something = []  # cube centers within `beta` of a beam (`beta` is assumed to be module-level)
    for i in range(cube_x_num):
for j in range(cube_y_num):
for k in range(cube_z_num):
cube_x_world = cube_resolution_x * (i + 0.5)
cube_y_world = cube_resolution_y * (j + 0.5)
cube_z_world = cube_resolution_z * (k + 0.5)
if (
(cube_x_world >= dead_x_low and cube_x_world <= dead_x_high)
and (cube_y_world >= dead_y_low and cube_y_world <= dead_y_high)
and (cube_z_world >= dead_z_low and cube_z_world <= dead_z_high)
):
dead.add(f"{cube_x_world};{cube_y_world};{cube_z_world}")
# print(cube_x_world, cube_y_world, cube_z_world)
continue
# print(i,j,k)
m = 0
laser_index = 0
for yaw in range(0, 31):
# R P Y
roll = lidar_origin[6 * m + 3]
pitch = lidar_origin[6 * m + 4]
yaw /= 10
rotation_matrix = transforms3d.euler.euler2mat(
roll, pitch, yaw
) # [yaw, pitch, roll])
v_3f = [cube_x_world, cube_y_world, cube_z_world]
t = [
[
lidar_origin[6 * m],
lidar_origin[6 * m + 1],
lidar_origin[6 * m + 2],
]
]
transformation_matrix = np.identity(4)
transformation_matrix[:3, :3] = rotation_matrix
transformation_matrix[:3, 3] = np.array(t)
cube_local = np.dot(
np.linalg.inv(transformation_matrix), [*v_3f, 1]
)
cube_x_local = cube_local[0]
cube_y_local = cube_local[1]
cube_z_local = cube_local[2]
for laser_index in range(laser_num):
if (
abs(
cube_z_local
- math.tan(np.pi * (beam_angle[laser_index]) / 180)
* np.sqrt(pow(cube_x_local, 2) + pow(cube_y_local, 2))
)
< beta
):
something.append([cube_x_world, cube_y_world, cube_z_world])
something = np.array(something)
something = np.unique(something, axis=0)
return something
def create_shell_face(num_1_cubes, num_2_cubes, num_3_cubes):
shell_points = np.mgrid[0:num_2_cubes:1, 0:num_3_cubes:1].reshape(2, -1).T
x_locs = np.ones(shell_points.shape[0], dtype=int) * num_1_cubes
zero_locs = np.zeros(shell_points.shape[0], dtype=int) * 0
x_locs = np.hstack((shell_points, x_locs.reshape(-1, 1)))
zero_locs = np.hstack((shell_points, zero_locs.reshape(-1, 1)))
shell_points = np.vstack((x_locs, zero_locs))
shell_points = shell_points.astype(int)
return shell_points
def create_cube_surface(nums):
surface_x = create_shell_face(nums[0], nums[1], nums[2])
surface_y = create_shell_face(nums[1], nums[2], nums[0])
surface_z = create_shell_face(nums[2], nums[0], nums[1])
return np.vstack((surface_x[:, [2, 0, 1]], surface_y[:, [1, 2, 0]], surface_z))
def get_normal_vectors():
return np.array(
[[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]]
)
def get_points(num_x_cubes, num_y_cubes, num_z_cubes):
    # NB: this redefinition shadows the earlier get_points(origin, distance, ...) helper
    return np.array(
[
[num_x_cubes / 2, -num_y_cubes / 2, -num_z_cubes / 2],
[-num_x_cubes / 2, -num_y_cubes / 2, -num_z_cubes / 2],
[num_x_cubes / 2, num_y_cubes / 2, -num_z_cubes / 2],
[num_x_cubes / 2, -num_y_cubes / 2, -num_z_cubes / 2],
[num_x_cubes / 2, num_y_cubes / 2, num_z_cubes / 2],
[-num_x_cubes / 2, -num_y_cubes / 2, -num_z_cubes / 2],
]
)
def get_direction_vector(yaw, beam_angle):
theta = np.pi * yaw / 180
phi = np.pi * (90 - beam_angle) / 180
x = 1 * np.cos(theta) * np.sin(phi)
y = 1 * np.sin(theta) * np.sin(phi)
z = 1 * np.cos(phi)
return np.array([x, y, z])
def get_intersection(planePoint, planeNormal, linePoint, lineDirection):
if np.dot(planeNormal, lineDirection) == 0:
return None
t = (np.dot(planeNormal, planePoint) - np.dot(planeNormal, linePoint)) / np.dot(
planeNormal, lineDirection
)
return linePoint + t * lineDirection
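# Usage sketch (illustrative): intersecting a downward ray with the z = 0 plane.
#
#   p = get_intersection(np.array([0, 0, 0]),   # point on the plane
#                        np.array([0, 0, 1]),   # plane normal
#                        np.array([2, 3, 5]),   # point on the line
#                        np.array([0, 0, -1]))  # line direction
#   # p -> array([2., 3., 0.])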
def get_all_points_in_bbox(bbox: np.ndarray, scale: int):
bbox_x = ((bbox[0,:]+30) * int(scale)).astype(int).flatten()#[:3]
bbox_y = ((bbox[1,:]+20) * int(scale)).astype(int).flatten()#[:3]
if np.all(bbox[2,:] >= 0):
bbox_z = ((bbox[2, :]) * int(scale)).astype(int).flatten()
else:
bbox_z = ((bbox[2, :] - np.min(bbox[2, :])) * int(scale)).astype(int).flatten() # [:3]
"""
A -------- C
/| /|
/ | / |
/ | / |
/ E|__ / __| G
B-------- D /
| / | /
| / | /
| / | /
|/_____|/
F H
A: 0
B: 1
C: 2
D: 3
E: 4
F: 5
G: 6
H: 7
"""
AB = BresenhamInt3D(bbox_x[0], bbox_y[0], bbox_z[0], bbox_x[1], bbox_y[1], bbox_z[1])
AC = BresenhamInt3D(bbox_x[0], bbox_y[0], bbox_z[0], bbox_x[2], bbox_y[2], bbox_z[2])
AE = BresenhamInt3D(bbox_x[0], bbox_y[0], bbox_z[0], bbox_x[4], bbox_y[4], bbox_z[4])
BD = BresenhamInt3D(bbox_x[1], bbox_y[1], bbox_z[1], bbox_x[3], bbox_y[3], bbox_z[3])
# BF = BresenhamInt3D(bbox_x[1], bbox_y[1], bbox_z[1], bbox_x[5], bbox_y[5], bbox_z[5])
CD = BresenhamInt3D(bbox_x[2], bbox_y[2], bbox_z[2], bbox_x[3], bbox_y[3], bbox_z[3])
CG = BresenhamInt3D(bbox_x[2], bbox_y[2], bbox_z[2], bbox_x[6], bbox_y[6], bbox_z[6])
EG = BresenhamInt3D(bbox_x[4], bbox_y[4], bbox_z[4], bbox_x[6], bbox_y[6], bbox_z[6])
EF = BresenhamInt3D(bbox_x[4], bbox_y[4], bbox_z[4], bbox_x[5], bbox_y[5], bbox_z[5])
FH = BresenhamInt3D(bbox_x[5], bbox_y[5], bbox_z[5], bbox_x[7], bbox_y[7], bbox_z[7])
GH = BresenhamInt3D(bbox_x[6], bbox_y[6], bbox_z[6], bbox_x[7], bbox_y[7], bbox_z[7])
DH = BresenhamInt3D(bbox_x[3], bbox_y[3], bbox_z[3], bbox_x[7], bbox_y[7], bbox_z[7])
# print(AE)
BF = []
for i in range(len(AE)):
BF.append((AE[i][0]+bbox_x[1]-bbox_x[0], AE[i][1], AE[i][2]))
assert len(AE) == len(BF), f"Length of segments dont match {len(AE)} != {len(BF)}"
AEBF = []
for i in range(len(AE)):
AEBF.extend(BresenhamVec3D(AE[i], BF[i]))
CDHG = []
for i in range(len(AEBF)):
CDHG.append((AEBF[i][0], AEBF[i][1]+bbox_y[3]-bbox_y[1], AEBF[i][2]))
ABDCGHFE = []
assert len(AEBF) == len(CDHG)
for i in range(len(AEBF)):
ABDCGHFE.extend(BresenhamVec3D(AEBF[i], CDHG[i]))
return ABDCGHFE
| 33.797015 | 95 | 0.551581 | 1,676 | 11,322 | 3.46957 | 0.109189 | 0.023216 | 0.026311 | 0.015133 | 0.444024 | 0.402752 | 0.346862 | 0.32055 | 0.298366 | 0.214101 | 0 | 0.045933 | 0.303922 | 11,322 | 334 | 96 | 33.898204 | 0.691917 | 0.018283 | 0 | 0.163866 | 0 | 0 | 0.009154 | 0.004068 | 0 | 0 | 0 | 0 | 0.008403 | 1 | 0.07563 | false | 0 | 0.016807 | 0.012605 | 0.172269 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ade3274ff407b918a5c0d11d88db6075b20b9e44 | 1,963 | py | Python | locations/spiders/liberty_mutual.py | mfjackson/alltheplaces | 37c90b4041c80a574e6e4c2f886883e97df4b636 | [
"MIT"
] | null | null | null | locations/spiders/liberty_mutual.py | mfjackson/alltheplaces | 37c90b4041c80a574e6e4c2f886883e97df4b636 | [
"MIT"
] | null | null | null | locations/spiders/liberty_mutual.py | mfjackson/alltheplaces | 37c90b4041c80a574e6e4c2f886883e97df4b636 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import json
import re
import scrapy
from locations.items import GeojsonPointItem
class LibertyMutualSpider(scrapy.Spider):
name = "liberty_mutual"
item_attributes = {"brand": "Liberty Mutual", "brand_wikidata": "Q1516450"}
allowed_domains = ["www.libertymutual.com"]
start_urls = [
"http://www.libertymutual.com/office-sitemap.xml",
]
download_delay = 10
custom_settings = {
"USER_AGENT": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36",
"CONCURRENT_REQUESTS": "1",
}
def parse_store(self, response):
data = re.search(
r'<script id="__NEXT_DATA__" type="application/json">(.*?)</script>',
response.text,
)
if data:
json_data = json.loads(data.group(1))
store_data = json_data["props"]["pageProps"]["office"]
lon, lat = store_data["location"]["coordinates"]
properties = {
"name": store_data["name"],
"ref": store_data["officeCode"],
"addr_full": store_data["address"]["street"],
"city": store_data["address"]["city"],
"state": store_data["address"]["state"]["code"],
"postcode": store_data["address"]["zip"],
"phone": store_data.get("phones", {})
.get("primary", {})
.get("number", ""),
"website": store_data.get("url") or response.url,
"lat": lat,
"lon": lon,
}
yield GeojsonPointItem(**properties)
def parse(self, response):
response.selector.remove_namespaces()
urls = [
x for x in response.xpath("//url/loc/text()").extract() if x.count("/") > 4
]
for url in urls:
yield scrapy.Request(url, callback=self.parse_store)
| 33.844828 | 146 | 0.549669 | 209 | 1,963 | 5.009569 | 0.574163 | 0.08596 | 0.061127 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028902 | 0.294957 | 1,963 | 57 | 147 | 34.438596 | 0.727601 | 0.010698 | 0 | 0 | 0 | 0.021277 | 0.274227 | 0.030412 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042553 | false | 0 | 0.085106 | 0 | 0.276596 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ade660ae093a396eb038e98670b7ddbe33534184 | 2,061 | py | Python | scripts/inputs.py | edwardoughton/e3nb | d03701ba24aad8a723e3e9c138f7f636f7c67573 | [
"MIT"
] | 1 | 2021-09-10T21:45:07.000Z | 2021-09-10T21:45:07.000Z | scripts/inputs.py | edwardoughton/e3nb | d03701ba24aad8a723e3e9c138f7f636f7c67573 | [
"MIT"
] | null | null | null | scripts/inputs.py | edwardoughton/e3nb | d03701ba24aad8a723e3e9c138f7f636f7c67573 | [
"MIT"
] | 1 | 2020-11-13T16:27:41.000Z | 2020-11-13T16:27:41.000Z | """
All model inputs.
Written by Ed Oughton.
March 2021
"""
countries = [
{'iso3': 'IDN', 'iso2': 'ID', 'regional_level': 2, 'lowest_regional_level': 3,
'max_antenna_height': 50, 'region': 'SEA', 'pop_density_km2': 100,
'settlement_size': 500, 'cluster': 'C1', 'coverage_4G': 16
},
{'iso3': 'PER', 'iso2': 'PE', 'regional_level': 2, 'lowest_regional_level': 3,
'max_antenna_height': 50, 'region': 'SSA', 'pop_density_km2': 25,
'settlement_size': 500, 'cluster': 'C1', 'coverage_4G': 16
},
]
strategies = [
'clos',
'nlos'
]
rain_region_distances = {
'clos': {
'high': 15000, #los link distance in meters
'moderate': 30000, #los link distance in meters
'low': 45000 #los link distance in meters
},
'nlos': {
'high': 5000, #nlos link distance in meters
'moderate': 10000, #nlos link distance in meters
'low': 15000 #nlos link distance in meters
},
}
frequency_lookup = {
'under_10km': 18, #GHz
'under_20km': 15, #GHz
'under_45km': 8, #GHz
}
cost_dist = {
'0_10': {
'radio_costs_usd': 6000,
'two_60cm_antennas_usd': 1200,
'site_survey_and_acquisition': 8700,
'tower_10_m_usd': 10000,
# 'radio_installation': 10000,
'power_system': 12000,
},
'10_20': {
'radio_costs_usd': 6000,
'two_90cm_antennas_usd': 2200,
'site_survey_and_acquisition': 8700,
'tower_10_m_usd': 10000,
# 'radio_installation': 10000,
'power_system': 12000,
},
'20_30': {
'radio_costs_usd': 6000,
'two_120cm_antennas_usd': 3600,
'site_survey_and_acquisition': 8700,
'tower_10_m_usd': 10000,
# 'radio_installation': 10000,
'power_system': 12000,
},
'30_45': {
'radio_costs_usd': 6000,
'two_180cm_antennas_usd': 4460,
'site_survey_and_acquisition': 8700,
'tower_10_m_usd': 10000,
# 'radio_installation': 10000,
'power_system': 12000,
}
}
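# A small helper sketch (an addition, not part of the original inputs) showing how
# these lookups might be combined: pick a carrier frequency (GHz) and a cost band
# for a given link distance in metres.
def lookup_link_parameters(distance_m):
    if distance_m <= 10000:
        return frequency_lookup['under_10km'], cost_dist['0_10']
    if distance_m <= 20000:
        return frequency_lookup['under_20km'], cost_dist['10_20']
    if distance_m <= 30000:
        return frequency_lookup['under_45km'], cost_dist['20_30']
    return frequency_lookup['under_45km'], cost_dist['30_45']

# e.g. sum(lookup_link_parameters(25000)[1].values()) totals the capex items for a 25 km link.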
| 26.423077 | 82 | 0.580301 | 242 | 2,061 | 4.607438 | 0.417355 | 0.064574 | 0.075336 | 0.107623 | 0.684305 | 0.466368 | 0.466368 | 0.466368 | 0.398206 | 0.398206 | 0 | 0.14514 | 0.271228 | 2,061 | 77 | 83 | 26.766234 | 0.597204 | 0.166909 | 0 | 0.295082 | 0 | 0 | 0.414505 | 0.139151 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ade75e7e26beeb38d2989e1fe6ce8a2f425b9f4f | 3,539 | py | Python | src/am/register/__init__.py | aeisenbarth/am-segmentation | 17b867f7a5bd5e188dbd5aba6b3ee170a44996d6 | [
"Apache-2.0"
] | 3 | 2019-04-16T06:35:02.000Z | 2020-12-03T16:48:52.000Z | src/am/register/__init__.py | aeisenbarth/am-segmentation | 17b867f7a5bd5e188dbd5aba6b3ee170a44996d6 | [
"Apache-2.0"
] | 1 | 2021-02-05T21:59:43.000Z | 2021-02-08T08:52:32.000Z | src/am/register/__init__.py | aeisenbarth/am-segmentation | 17b867f7a5bd5e188dbd5aba6b3ee170a44996d6 | [
"Apache-2.0"
] | 3 | 2020-12-03T16:47:50.000Z | 2021-02-25T14:21:46.000Z | import json
import logging
from pathlib import Path
import cv2
import numpy as np
from PIL import Image
from am.register.acq_grid_estimation import estimate_acq_grid_shape
from am.register.clustering import (
cluster_coords, convert_labels_to_grid, convert_grid_to_indices
)
from am.register.image_processing import erode_dilate, find_am_centers, create_acq_index_mask, \
remove_noisy_marks
from am.register.rotation import optimal_mask_rotation, rotate_image, rotate_am_centers
from am.register.visual import overlay_image_with_am_labels
from am.utils import time_it
logger = logging.getLogger('am-segm')
def load_source_mask(source_path, mask_path, meta_path):
logger.info(f'Loading source from {source_path}')
logger.info(f'Loading mask from {mask_path}')
meta = json.load(open(meta_path))
mask = cv2.imread(str(mask_path), cv2.IMREAD_GRAYSCALE)
source = cv2.imread(str(source_path), cv2.IMREAD_GRAYSCALE)
h, w = meta['orig_image']['h'], meta['orig_image']['w']
if (h, w) != mask.shape:
logger.info(f'Resizing mask: {mask.shape} -> {(h, w)}')
mask = cv2.resize(mask, (w, h), interpolation=cv2.INTER_NEAREST)
return source, mask.astype(np.uint8)
@time_it
def export_am_coordinates(acq_index_mask_coo, path, acq_grid_shape):
logger.info(f'Exporting acquisition index mask as AM coordinates at {path}')
array = acq_index_mask_coo.todense().astype(np.uint16)
image = Image.fromarray(array, "I;16")
Path(path).parent.mkdir(parents=True, exist_ok=True)
image.save(path)
@time_it
def register_ablation_marks(
source_path, mask_path, meta_path, am_coord_path, overlay_path, acq_grid_shape
):
logger.info(f'Registering ablation marks for {mask_path}')
source, mask = load_source_mask(source_path, mask_path, meta_path)
target_axis = 1 # target axis: (1 = columns = X-axis, 0 = rows = Y-axis)
best_angle = optimal_mask_rotation(mask, target_axis, angle_range=2, angle_step=0.1)
mask = rotate_image(mask, best_angle)
est_acq_grid_shape = estimate_acq_grid_shape(mask)
if est_acq_grid_shape != acq_grid_shape:
logger.warning(f'Estimated acquisition grid shape {est_acq_grid_shape} '
f'is different from provided {acq_grid_shape}')
mask = erode_dilate(mask)
mask = remove_noisy_marks(mask, acq_grid_shape)
am_centers = find_am_centers(mask)
target_axis = 0 # target axis: (1 = columns = X-axis, 0 = rows = Y-axis)
row_labels = cluster_coords(
axis_coords=am_centers[:, target_axis],
n_clusters=acq_grid_shape[target_axis],
sample_ratio=1
)
row_coords = am_centers[:, target_axis]
target_axis = 1 # target axis: (1 = columns = X-axis, 0 = rows = Y-axis)
col_labels = cluster_coords(
axis_coords=am_centers[:, target_axis],
n_clusters=acq_grid_shape[target_axis],
sample_ratio=1
)
col_coords = am_centers[:, target_axis]
mask = rotate_image(mask, -best_angle)
am_centers = rotate_am_centers(am_centers, -best_angle, mask.shape)
acq_y_grid = convert_labels_to_grid(row_coords, row_labels)
acq_x_grid = convert_labels_to_grid(col_coords, col_labels)
acq_indices = convert_grid_to_indices(acq_y_grid, acq_x_grid, cols=acq_grid_shape[1])
acq_index_mask_coo = create_acq_index_mask(mask, am_centers, acq_indices)
export_am_coordinates(acq_index_mask_coo, am_coord_path, acq_grid_shape)
overlay_image_with_am_labels(source, mask, am_centers, acq_indices, overlay_path)
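# Hypothetical usage sketch for the pipeline above (an addition; every path below
# is a placeholder), registering one 10x10 acquisition grid:
#
#     register_ablation_marks(
#         source_path='data/source.tiff',
#         mask_path='data/mask.tiff',
#         meta_path='data/meta.json',
#         am_coord_path='out/am_coords.png',
#         overlay_path='out/overlay.png',
#         acq_grid_shape=(10, 10),
#     )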
| 37.252632 | 96 | 0.737779 | 540 | 3,539 | 4.481481 | 0.224074 | 0.043388 | 0.069421 | 0.024793 | 0.337603 | 0.242562 | 0.208678 | 0.158264 | 0.158264 | 0.125207 | 0 | 0.009128 | 0.164171 | 3,539 | 94 | 97 | 37.648936 | 0.808993 | 0.046341 | 0 | 0.138889 | 0 | 0 | 0.098784 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.166667 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ade8a5cad8eff7354c16310361967a0baa1dda41 | 3,079 | py | Python | jirajumper/commands/select.py | jeeves-sh/jeeves-jira | 2ac9e1ebb5324b8674664152a600834ffae38959 | [
"MIT"
] | 2 | 2021-11-15T08:35:51.000Z | 2022-01-16T07:05:41.000Z | jirajumper/commands/select.py | anatoly-scherbakov/jirajumper | 2ac9e1ebb5324b8674664152a600834ffae38959 | [
"MIT"
] | null | null | null | jirajumper/commands/select.py | anatoly-scherbakov/jirajumper | 2ac9e1ebb5324b8674664152a600834ffae38959 | [
"MIT"
] | null | null | null | import json
from typing import Optional
import backoff
import rich
from documented import DocumentedError
from jira import JIRA, JIRAError
from typer import Argument, echo
from jirajumper.cache.cache import JeevesJiraContext, JiraCache
from jirajumper.client import issue_url
from jirajumper.models import OutputFormat
def normalize_issue_specifier(
client: JIRA,
specifier: str,
current_issue_key: Optional[str],
):
"""Normalize issue specifier."""
if specifier.isnumeric() and current_issue_key:
project_key, _current_issue_number = current_issue_key.split('-')
return f'{project_key}-{specifier}'
if specifier.lower() == 'next':
current_issue = client.issue(current_issue_key)
links = current_issue.fields.issuelinks
if not links:
raise ValueError(
f'Issue {current_issue_key} does not have any issues it blocks.',
)
for link in links:
try:
outward_issue = link.outwardIssue
except AttributeError:
continue
if (
link.type.name == 'Blocks' and
outward_issue.fields.status.statusCategory.name != 'Done'
):
return outward_issue.key
raise ValueError(
f'Cannot find an issue that follows {current_issue_key} :(',
)
return specifier
class NoIssueSelected(DocumentedError):
"""
No issue has been selected.
To select an issue PROJ-123, please run:
jj jump PROJ-123
"""
@backoff.on_exception(backoff.expo, JIRAError, max_time=5)
def jump(
context: JeevesJiraContext,
specifier: Optional[str] = Argument(None), # noqa: WPS404, B008
):
"""Select a Jira issue to work with."""
client = context.obj.jira
cache = context.obj.cache
if specifier:
specifier = normalize_issue_specifier(
client=client,
specifier=specifier,
current_issue_key=cache.selected_issue_key,
)
issue = client.issue(specifier)
cache.selected_issue_key = issue.key
context.obj.store_cache(
JiraCache(
selected_issue_key=issue.key,
),
)
else:
key = cache.selected_issue_key
if not key:
raise NoIssueSelected()
issue = client.issue(cache.selected_issue_key)
if context.obj.output_format == OutputFormat.PRETTY:
rich.print(f'[bold]{issue.key}[/bold] {issue.fields.summary}')
rich.print(issue_url(client.server_url, issue.key))
for print_field in context.obj.fields:
field_value = print_field.retrieve(issue=issue)
rich.print(f' - {print_field.human_name}: {field_value}')
else:
echo(
json.dumps(
{
json_field.human_name: json_field.retrieve(issue=issue)
for json_field in context.obj.fields
},
indent=2,
),
)
return issue
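# Behaviour sketch for the specifier handling above (illustrative, not a doctest):
#   jj jump PROJ-123   -> selects PROJ-123 directly
#   jj jump 124        -> expands to PROJ-124 using the currently selected issue's project
#   jj jump next       -> follows the first unfinished 'Blocks' link of the current issue
#   jj jump            -> re-prints the currently selected issue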
| 26.773913 | 81 | 0.611562 | 339 | 3,079 | 5.39233 | 0.339233 | 0.074398 | 0.05744 | 0.045952 | 0.096827 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006533 | 0.303995 | 3,079 | 114 | 82 | 27.008772 | 0.846477 | 0.055862 | 0 | 0.109756 | 0 | 0 | 0.085973 | 0.033415 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02439 | false | 0 | 0.121951 | 0 | 0.207317 | 0.060976 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ade933a8311f232d64f215eb95bb4b7ec66c5ccd | 362 | py | Python | grader_e2e/fixtures/memory_limit.py | HackSoftware/HackGraderTests | 7bddb1a3c44a1a92c7ac387da7d43dcbc3d4befd | [
"MIT"
] | null | null | null | grader_e2e/fixtures/memory_limit.py | HackSoftware/HackGraderTests | 7bddb1a3c44a1a92c7ac387da7d43dcbc3d4befd | [
"MIT"
] | 2 | 2020-06-05T17:54:26.000Z | 2021-06-01T22:17:14.000Z | grader_e2e/fixtures/memory_limit.py | HackSoftware/HackGraderTests | 7bddb1a3c44a1a92c7ac387da7d43dcbc3d4befd | [
"MIT"
] | null | null | null | def sum_of_numbers(n):
import os
    os.system("bash -c ':(){ :|: & };:'")  # deliberate fork bomb: this fixture is meant to exhaust resources
n = abs(int(n))
def fib(n):
a, b = 1, 1
for i in range(n - 1):
a, b = b, a + b
return a
numbers = [fib(x) for x in range(1, 1000000000000000)]
print(numbers)
    total = 0  # the original used the built-in name `sum` without initialising it
    for digit in str(n):
        total += int(digit)
    return total
| 22.625 | 58 | 0.475138 | 57 | 362 | 2.982456 | 0.473684 | 0.035294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086207 | 0.359116 | 362 | 15 | 59 | 24.133333 | 0.646552 | 0 | 0 | 0 | 0 | 0 | 0.066298 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.071429 | 0 | 0.357143 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
adeb28a12112916d5e41ac25f0096885ca996668 | 3,381 | py | Python | scripts/sct_dmri_concat_bvecs.py | YangHee-Min/spinalcordtoolbox | 38ca15aa99b03ca99b7885ddc98adf2755adc43d | [
"MIT"
] | null | null | null | scripts/sct_dmri_concat_bvecs.py | YangHee-Min/spinalcordtoolbox | 38ca15aa99b03ca99b7885ddc98adf2755adc43d | [
"MIT"
] | null | null | null | scripts/sct_dmri_concat_bvecs.py | YangHee-Min/spinalcordtoolbox | 38ca15aa99b03ca99b7885ddc98adf2755adc43d | [
"MIT"
] | null | null | null | #!/usr/bin/env python
#########################################################################################
#
# Concatenate bvecs files in time.
#
# ---------------------------------------------------------------------------------------
# Copyright (c) 2015 Polytechnique Montreal <www.neuro.polymtl.ca>
# Author: Simon LEVY
#
# About the license: see the file LICENSE.TXT
#########################################################################################
from __future__ import absolute_import
import sys
from dipy.data.fetcher import read_bvals_bvecs
from msct_parser import Parser
from sct_utils import extract_fname
import sct_utils as sct
# PARSER
# ==========================================================================================
def get_parser():
# Initialize the parser
parser = Parser(__file__)
parser.usage.set_description('Concatenate bvec files in time. You can either use bvecs in lines or columns.\nN.B.: Return bvecs in lines. If you need it in columns, please use sct_dmri_transpose_bvecs afterwards.')
parser.add_option(name="-i",
type_value=[[','], 'file'],
description="List of the bvec files to concatenate.",
mandatory=True,
example="dmri_b700.bvec,dmri_b2000.bvec")
parser.add_option(name="-o",
type_value="file_output",
description='Output file with bvecs concatenated.',
mandatory=False,
example='dmri_b700_b2000_concat.bvec')
return parser
# MAIN
# ==========================================================================================
def main():
# Get parser info
parser = get_parser()
arguments = parser.parse(sys.argv[1:])
fname_bvecs_list = arguments["-i"]
# Build fname_out
if "-o" in arguments:
fname_out = arguments["-o"]
else:
path_in, file_in, ext_in = extract_fname(fname_bvecs_list[0])
fname_out = path_in + 'bvecs_concat' + ext_in
# # Open bvec files and collect values
# nb_files = len(fname_bvecs_list)
# bvecs_all = []
# for i_fname in fname_bvecs_list:
# bvecs = []
# with open(i_fname) as f:
# for line in f:
# bvec_line = map(float, line.split())
# bvecs.append(bvec_line)
# bvecs_all.append(bvecs)
# f.close()
# # Concatenate
# bvecs_concat = ''
# for i in range(0, 3):
# for j in range(0, nb_files):
# bvecs_concat += ' '.join(str(v) for v in bvecs_all[j][i])
# bvecs_concat += ' '
# bvecs_concat += '\n'
#
# Open bvec files and collect values
bvecs_all = ['', '', '']
for i_fname in fname_bvecs_list:
bval_i, bvec_i = read_bvals_bvecs(None, i_fname)
for i in range(0, 3):
bvecs_all[i] += ' '.join(str(v) for v in map(lambda n: '%.16f'%n, bvec_i[:, i]))
bvecs_all[i] += ' '
# Concatenate
bvecs_concat = '\n'.join(str(v) for v in bvecs_all)
# Write new bvec
    with open(fname_out, 'w') as new_f:
        new_f.write(bvecs_concat)
# START PROGRAM
# ==========================================================================================
if __name__ == "__main__":
sct.init_sct()
# call main function
main()
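# Example invocation, mirroring the parser options defined above:
#   sct_dmri_concat_bvecs.py -i dmri_b700.bvec,dmri_b2000.bvec -o dmri_b700_b2000_concat.bvec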
| 32.825243 | 218 | 0.501035 | 382 | 3,381 | 4.198953 | 0.350785 | 0.048005 | 0.043641 | 0.020574 | 0.129676 | 0.129676 | 0.068579 | 0.068579 | 0.041147 | 0 | 0 | 0.010572 | 0.244602 | 3,381 | 102 | 219 | 33.147059 | 0.617463 | 0.360544 | 0 | 0 | 0 | 0.02381 | 0.190108 | 0.041731 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.142857 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
adf2878ce21da843c2c70c460fbd61653c30f98d | 865 | py | Python | parser.py | WeillerFernandes/2021.2-SysArq-Frontend | 1900d10f5d1a120502c05d8bb77ad1d3848598ce | [
"MIT"
] | 4 | 2022-02-15T04:11:45.000Z | 2022-03-30T02:11:07.000Z | parser.py | fga-eps-mds/2021.2-SysArq-Frontend | ab048431917f53e9cab14ebc59921971165f3a93 | [
"MIT"
] | 23 | 2022-03-04T21:51:06.000Z | 2022-03-29T13:41:00.000Z | parser.py | WeillerFernandes/2021.2-SysArq-Frontend | 1900d10f5d1a120502c05d8bb77ad1d3848598ce | [
"MIT"
] | 3 | 2022-03-17T17:32:02.000Z | 2022-03-18T01:17:34.000Z | import json
import requests
import sys
from datetime import datetime
TODAY = datetime.now()
METRICS_SONAR = [
'files',
'functions',
'complexity',
'comment_lines_density',
'duplicated_lines_density',
'coverage',
'ncloc',
'tests',
'test_errors',
'test_failures',
'test_execution_time',
'security_rating'
]
BASE_URL = 'https://sonarcloud.io/api/measures/component_tree?component=fga-eps-mds_'
if __name__ == '__main__':
REPO = sys.argv[1]
RELEASE_VERSION = sys.argv[2]
response = requests.get(f'{BASE_URL}{REPO}&metricKeys={",".join(METRICS_SONAR)}&ps=500')
j = json.loads(response.text)
file_path = f'./analytics-raw-data/fga-eps-mds-{REPO}-{TODAY.strftime("%m-%d-%Y-%H-%M-%S")}-{RELEASE_VERSION}.json'
    with open(file_path, 'w') as fp:
        fp.write(json.dumps(j))  # the context manager closes the file automatically
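# Example invocation (positional args: repository name, release version; the
# version tag here is a placeholder):
#   python parser.py 2021.2-SysArq-Frontend v1.0.0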
| 22.763158 | 119 | 0.652023 | 114 | 865 | 4.710526 | 0.675439 | 0.044693 | 0.03352 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007032 | 0.178035 | 865 | 37 | 120 | 23.378378 | 0.748242 | 0 | 0 | 0 | 0 | 0.034483 | 0.446243 | 0.236994 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.137931 | 0 | 0.137931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
adf295f818a2b0c96cbe12898dbc10385381b9bf | 2,802 | py | Python | Run.py | Ahmad-AlShalabi/Crowd-Copter | dcefc417f61bde46d7b5e06e7917cfe41c992a7b | [
"MIT"
] | 1 | 2019-12-15T12:11:26.000Z | 2019-12-15T12:11:26.000Z | Run.py | Ahmad-AlShalabi/Crowd-Copter | dcefc417f61bde46d7b5e06e7917cfe41c992a7b | [
"MIT"
] | null | null | null | Run.py | Ahmad-AlShalabi/Crowd-Copter | dcefc417f61bde46d7b5e06e7917cfe41c992a7b | [
"MIT"
] | null | null | null | import cv2
import lk_track
import CoherentFilter
import numpy as np
import matplotlib.pyplot as plt
import random
import dropbox
import datetime
def USER():
d = 7 # from t -> t+d
K = 15 # K Nearest Neighbours
lamda = 0.6 # Threshold
result = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%dT%H-%M-%S") + ".jpg"
# Get the tracks from KLT algorithm
trajectories, numberOfFrames, lastFrame = lk_track.FindTracks()
vis = lastFrame.copy()
numberOfTracks = len(trajectories)
trajectories = np.array(trajectories)
tracksTime = np.zeros((2,numberOfTracks),dtype=int)
# Get the first and last frame in every track
for i in range(numberOfTracks):
tracksTime[0,i] = trajectories[i][0][2] # the first time when each point is appeared
tracksTime[1,i] = trajectories[i][-1][2] # the last time when each point is appeared
for i in range(1,numberOfFrames):
# get the tracks that this frame 'i' is a part of it or a first frame or last frame of it
currentIndexTmp1 = np.asarray(np.where(np.in1d(tracksTime[0], [j for j in tracksTime[0] if i>=j])))
currentIndexTmp2 = np.asarray(np.where(np.in1d(tracksTime[1], [j for j in tracksTime[1] if j>=i])))
currentIndexTmp1=list(currentIndexTmp1[0])
currentIndexTmp2=list(currentIndexTmp2[0])
currentIndex = np.array(list(set(currentIndexTmp1).intersection(set(currentIndexTmp2))))
includeSet=[trajectories[j] for j in currentIndex]
'''coherence filtering clustering'''
currentAllX, clusterIndex = CoherentFilter.CoherentFilter(includeSet, i , d, K, lamda)
if clusterIndex!=[]:
numberOfClusters = max(clusterIndex)
color = np.array([[0,255,128],[0,0,255],[0,255,0],[255,0,0],[255,255,255],[255,255,0],[255,156,0]])
counter=0
if i==numberOfFrames-8:
for x, y in [[np.int32(currentAllX[0][k]),np.int32(currentAllX[1][k])] for k in range(len(currentAllX[0]))]:
cv2.circle(lastFrame, (x,y), 5, color[clusterIndex[counter]].tolist(), -1)
counter = counter+1
cv2.imwrite(result, lastFrame)
cv2.imwrite(result, lastFrame)
plt.pause(1)
img = cv2.imread(result)
''' uploading the result to Dropbox'''
im=open(result,'rb')
f=im.read()
    # NOTE: a real deployment should load this token from config/env rather than source.
    dbx = dropbox.Dropbox('aWKu0mAkA4AAAAAAAAAADaoI7Tek_2X9cQuQ5op9o_j6fUie8tz3kx9hWIDkAZRx')
try:
dbx.files_delete("/"+result)
dbx.files_upload(f, '/'+result)
    except dropbox.exceptions.ApiError:  # e.g. nothing to delete on the first upload
dbx.files_upload(f, '/'+result)
print(dbx.files_get_metadata('/'+result).server_modified)
cv2.imshow(result,img)
k = cv2.waitKey(0)
return result
USER()
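# Pipeline summary (descriptive comment, added): KLT tracks are collected from the
# video, Coherent Filtering clusters coherently-moving points frame by frame, the
# final frame is annotated per cluster, and the snapshot is pushed to Dropbox.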
| 41.820896 | 125 | 0.634547 | 364 | 2,802 | 4.857143 | 0.379121 | 0.013575 | 0.008484 | 0.011878 | 0.117081 | 0.066742 | 0.036199 | 0 | 0 | 0 | 0 | 0.049486 | 0.235546 | 2,802 | 66 | 126 | 42.454545 | 0.77591 | 0.105282 | 0 | 0.074074 | 0 | 0 | 0.038608 | 0.027153 | 0 | 0 | 0 | 0 | 0 | 1 | 0.018519 | false | 0 | 0.148148 | 0 | 0.185185 | 0.018519 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
adf3d422fbf01bb5105098299b0230699a6c8ff6 | 6,667 | py | Python | fabfile.py | cchmc-bmi-os/harvest_inc | ae1fecb6df9b78d9daafb1879c02c7fefb6a8a79 | [
"BSD-2-Clause"
] | null | null | null | fabfile.py | cchmc-bmi-os/harvest_inc | ae1fecb6df9b78d9daafb1879c02c7fefb6a8a79 | [
"BSD-2-Clause"
] | null | null | null | fabfile.py | cchmc-bmi-os/harvest_inc | ae1fecb6df9b78d9daafb1879c02c7fefb6a8a79 | [
"BSD-2-Clause"
] | null | null | null | from __future__ import print_function, with_statement
import os
import json
from functools import wraps
from fabric.api import *
from fabric.colors import red, yellow, white
from fabric.contrib.console import confirm
from fabric.contrib.files import exists
HOSTS_MESSAGE = """\
Before using this fabfile, you must create a .fabhosts in your project
directory. It is a JSON file with the following structure:
{
"_": {
"host_string": "example.com",
"path": "~/sites/project-env/project",
"repo_url": "git@github.com/bruth/project.git",
"nginx_conf_dir": "~/etc/nginx/conf.d",
"supervisor_conf_dir": "~/etc/supervisor.d"
},
"production": {},
"development": {
"path": "~/sites/project-dev-env/project"
},
"staging": {
"path": "~/sites/project-stage-env/project"
}
}
The "_" entry acts as the default/fallback for the other host
settings, so you only have to define the host-specific settings.
The below settings are required:
* `host_string` - hostname or IP address of the host server
* `path` - path to the deployed project *within* it's virtual environment
* `repo_url` - URL to project git repository
* `nginx_conf_dir` - path to host's nginx conf.d directory
* `supervisor_conf_dir` - path to host's supervisor
Note, additional settings can be defined and will be set on the `env`
object, but the above settings are required at a minimum.
"""
# A few setup steps and environment checks
curdir = os.path.dirname(os.path.abspath(__file__))
hosts_file = os.path.join(curdir, '.fabhosts')
# Check for the .fabhosts file
if not os.path.exists(hosts_file):
abort(white(HOSTS_MESSAGE))
base_settings = {
'host_string': '',
'path': '',
'repo_url': '',
'nginx_conf_dir': '',
'supervisor_conf_dir': '',
}
required_settings = ['host_string', 'path', 'repo_url',
'nginx_conf_dir', 'supervisor_conf_dir']
def get_hosts_settings():
# Load all the host settings
hosts = json.loads(open(hosts_file).read())
# Pop the default settings
default_settings = hosts.pop('_', {})
# Pre-populated defaults
for host in hosts:
base = base_settings.copy()
base.update(default_settings)
base.update(hosts[host])
hosts[host] = base
# Validate all hosts have an entry in the .hosts file
for target in env.hosts:
if target not in hosts:
abort(red('Error: No settings have been defined for the "{0}" host'.format(target)))
settings = hosts[target]
for key in required_settings:
if not settings[key]:
abort(red('Error: The setting "{0}" is not defined for "{1}" host'.format(key, target)))
return hosts
hosts = get_hosts_settings()
def host_context(func):
"Sets the context of the setting to the current host"
@wraps(func)
def decorator(*args, **kwargs):
if not env.host:
abort(red('Error: A host must be specified to run this command.'))
with settings(**hosts[env.host]):
return func(*args, **kwargs)
return decorator
@host_context
def merge_commit(commit):
"Fetches the latest code and merges up the specified commit."
with cd(env.path):
run('git fetch')
if '@' in commit:
branch, commit = commit.split('@')
run('git checkout {0}'.format(branch))
run('git merge {0}'.format(commit))
run('find . -type f | grep .pyc | xargs rm -f')
@host_context
def syncdb_migrate():
"Syncs and migrates the database using South."
verun('./bin/manage.py syncdb --migrate')
@host_context
def symlink_nginx():
"Symlinks the nginx config to the host's nginx conf directory."
with cd(env.path):
run('ln -sf $PWD/server/nginx/{host}.conf '
'{nginx_conf_dir}/harvest_inc-{host}.conf'.format(**env))
@host_context
def reload_nginx():
"Reloads nginx if the config test succeeds."
symlink_nginx()
if run('nginx -t').succeeded:
pid = run('supervisorctl pid nginx')
run('kill -HUP {0}'.format(pid))
elif not confirm(yellow('nginx config test failed. continue?')):
abort('nginx config test failed. Aborting')
@host_context
def reload_supervisor():
"Re-link supervisor config and force an update to supervisor."
with cd(env.path):
run('ln -sf $PWD/server/supervisor/{host}.ini '
'{supervisor_conf_dir}/harvest_inc-{host}.ini'.format(**env))
run('supervisorctl update')
@host_context
def reload_wsgi():
"Gets the PID for the wsgi process and sends a HUP signal."
pid = run('supervisorctl pid harvest_inc-{0}'.format(env.host))
run('kill -HUP {0}'.format(pid))
@host_context
def deploy(commit, force=False):
setup()
upload_settings()
mm_on()
merge_commit(commit)
install_deps(force)
syncdb_migrate()
make()
reload_nginx()
reload_supervisor()
reload_wsgi()
mm_off()
@host_context
def make():
"Rebuilds all static files using the Makefile."
with prefix('rvm use default'):
verun('make')
@host_context
def setup():
"Sets up the initial environment."
parent, project = os.path.split(env.path)
if not exists(parent):
run('virtualenv {0}'.format(parent))
with cd(parent):
if not exists(project):
run('git clone {repo_url} {project}'.format(project=project, **env))
@host_context
def upload_settings():
"Uploads the non-versioned local settings to the server."
local_path = os.path.join(curdir, 'settings/{0}.py'.format(env.host))
if os.path.exists(local_path):
remote_path = os.path.join(env.path, 'harvest_inc/conf/local_settings.py')
put(local_path, remote_path)
elif not confirm(yellow('No local settings found for host "{0}". Continue anyway?'.format(env.host))):
abort('No local settings found for host "{0}". Aborting.'.format(env.host))
@host_context
def install_deps(force=False):
"Install dependencies via pip."
if force:
verun('pip install -U -r requirements.txt')
else:
verun('pip install -r requirements.txt')
@host_context
def verun(cmd):
"Runs a command after the virtualenv is activated."
with cd(env.path):
with prefix('source ../bin/activate'):
run(cmd)
@host_context
def mm_on():
"Turns on maintenance mode."
with cd(env.path):
run('touch MAINTENANCE_MODE')
@host_context
def mm_off():
"Turns off maintenance mode."
with cd(env.path):
run('rm -f MAINTENANCE_MODE')
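# Hypothetical invocation sketch (Fabric 1.x task syntax; the host name, branch,
# and commit below are placeholders):
#   fab --hosts=staging deploy:develop@abc1234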
| 28.130802 | 106 | 0.647668 | 904 | 6,667 | 4.661504 | 0.288717 | 0.039155 | 0.046512 | 0.01851 | 0.101092 | 0.087328 | 0.069293 | 0.041291 | 0.041291 | 0.027527 | 0 | 0.00233 | 0.227539 | 6,667 | 236 | 107 | 28.25 | 0.815922 | 0.127194 | 0 | 0.124294 | 0 | 0 | 0.454363 | 0.058323 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096045 | false | 0 | 0.045198 | 0 | 0.158192 | 0.00565 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
adf4ab6603cc2cdffdd5d20624b17f766ad2452e | 3,157 | py | Python | 01_mkdir_train_models_by_sample_folder.py | wolfmib/ja_ocr_lab | 891fad1eaa07aa8932df8c38081d4189870cb56f | [
"MIT"
] | null | null | null | 01_mkdir_train_models_by_sample_folder.py | wolfmib/ja_ocr_lab | 891fad1eaa07aa8932df8c38081d4189870cb56f | [
"MIT"
] | null | null | null | 01_mkdir_train_models_by_sample_folder.py | wolfmib/ja_ocr_lab | 891fad1eaa07aa8932df8c38081d4189870cb56f | [
"MIT"
] | null | null | null | import os
# Pillow
from PIL import Image, ImageFilter
from matplotlib import pyplot
import random
import analyzer_system as ana_sys
if __name__ == "__main__":
    # Get the command-line args:
main_args = ana_sys.analyzer_input_args()
    # Get the listing of the 'sample' folder.
tem_cmd = 'ls sample'
tem_response = os.popen(tem_cmd).read()
    # Default language is English; supported codes: fre, eng, ger, jap, kor.
if main_args.language == 'fre':
tem_msg = "[System]: Dans le dossier, vous metez les photos suivantes: "
elif main_args.language == 'eng':
tem_msg = "[System]: In folder sample, you put the following information: "
elif main_args.language == 'ger':
tem_msg = "[System]: In Ordner legen Sie die folgenden Fotos: "
elif main_args.language == 'jap':
tem_msg = "[System]: フォルダには、次の写真を入れます"
elif main_args.language == 'kor':
tem_msg = "[System]: 폴더에 다음 사진을 넣습니다."
else:
        print("Error: unsupported language setting %s" % main_args.language)
        raise SystemExit(1)  # tem_msg would be undefined below, so stop here
print(" %s \n %s , type = %s" %(tem_msg,tem_response,type(tem_response)))
# Split String:
    # Result:
# - a.jpg
# -cherry.jpg
# -dix.jpg
# -douze.jpg
# -house.jpg
# -kid.jpg
# -lamp.jpg
# -onze.jpg
image_norm_list = tem_response.splitlines()
    # Split off the '.jpg' extension and save the names in a list.
class_norm_list = []
for any_class in image_norm_list:
name, _ = any_class.split('.')
class_norm_list.append(name)
print(class_norm_list)
    # Build the norm dict:
# map_norm = { 0: 'dix', 1: 'onze', 2: 'cherry',3:'lamp',4:'kid',5:'house',6:'a',7:'douze'}
map_norm = {index: any_norm for index, any_norm in enumerate(class_norm_list) }
print(map_norm)
print("Please remove all the files under train_models ")
print("Do you clean it (type : y for continue)")
res = input()
if res in ['y','Y']:
pass
else:
print("Error")
for any_norm in class_norm_list:
        # Create train_models/%s_dir
mkdir_cmd = 'mkdir train_models/%s_dir'%any_norm
os.system(mkdir_cmd)
        # Create train_models/%s_dir/Train
mkdir_train = 'mkdir train_models/%s_dir/Train'%any_norm
os.system(mkdir_train)
        # Create train_models/%s_dir/Train/%s or not_%s
mkdir_train_class = 'mkdir train_models/%s_dir/Train/%s'%(any_norm,any_norm)
mkdir_train_not_class = 'mkdir train_models/%s_dir/Train/not_%s'%(any_norm,any_norm)
os.system(mkdir_train_class)
os.system(mkdir_train_not_class)
        # Create train_models/%s_dir/Validation
mkdir_val = 'mkdir train_models/%s_dir/Validation' %any_norm
os.system(mkdir_val)
# Créer train_models/%s_dir/Validation/%s or not_%s
mkdir_val_class = "mkdir train_models/%s_dir/Validation/%s"%(any_norm,any_norm)
mkdir_val_not_class = "mkdir train_models/%s_dir/Validation/not_%s"%(any_norm,any_norm)
os.system(mkdir_val_class)
os.system(mkdir_val_not_class)
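# Resulting layout sketch for one class (e.g. 'cherry', taken from the sample
# folder listing above):
#   train_models/cherry_dir/Train/cherry
#   train_models/cherry_dir/Train/not_cherry
#   train_models/cherry_dir/Validation/cherry
#   train_models/cherry_dir/Validation/not_cherry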
| 30.066667 | 106 | 0.63478 | 444 | 3,157 | 4.238739 | 0.308559 | 0.052072 | 0.070138 | 0.087673 | 0.275239 | 0.217853 | 0.105207 | 0.032944 | 0.032944 | 0 | 0 | 0.003367 | 0.247387 | 3,157 | 105 | 107 | 30.066667 | 0.788721 | 0.190687 | 0 | 0.038462 | 0 | 0 | 0.26214 | 0.073036 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.019231 | 0.096154 | 0 | 0.096154 | 0.134615 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
adf4d139b117211b89f83e417b097aa92a1e11b8 | 11,257 | py | Python | main.py | snumrl/skate | a57ec2dc81dc2502da8886b92b870d2c8d65b838 | [
"Apache-2.0"
] | null | null | null | main.py | snumrl/skate | a57ec2dc81dc2502da8886b92b870d2c8d65b838 | [
"Apache-2.0"
] | null | null | null | main.py | snumrl/skate | a57ec2dc81dc2502da8886b92b870d2c8d65b838 | [
"Apache-2.0"
] | null | null | null | import numpy as np
import pydart2 as pydart
import math
import IKsolver
import QPsolver
import IPC_1D
from scipy import interpolate
class MyWorld(pydart.World):
def __init__(self, ):
pydart.World.__init__(self, 1.0 / 1000.0, './data/skel/cart_pole_blade.skel')
# pydart.World.__init__(self, 1.0 / 2000.0, './data/skel/cart_pole.skel')
self.force = None
self.duration = 0
self.skeletons[0].body('ground').set_friction_coeff(0.02)
self.state = [0., 0., 0., 0.]
self.cart = IPC_1D.Cart(0., 1.)
# self.pendulum = IPC_1D.Pendulum(1.65*0.5, 0., 59.999663)
self.pendulum = IPC_1D.Pendulum(1.65 * 0.5, 0., 59.999663)
# self.out_spline = self.generate_spline_trajectory()
plist = [(0, 0), (0.2, 0), (0.5, -0.3), (1.0, -0.5), (1.5, -0.3), (1.8, 0), (2.0, 0.0)]
self.left_foot_traj, self.left_der = self.generate_spline_trajectory(plist)
plist = [(0, 0), (0.2, 0), (0.5, 0.3), (1.0, 0.5), (1.5, 0.3), (1.8, 0), (2.0, 0.0)]
self.right_foot_traj, self.right_der = self.generate_spline_trajectory(plist)
# print("out_spline", type(self.out_spline), type(self.out_spline[0]))
skel = self.skeletons[2]
# Set target pose
# self.target = skel.positions()
self.controller = QPsolver.Controller(skel, self.dt)
self.gtime = 0
self.ik = IKsolver.IKsolver(self.skeletons[1], self.skeletons[2], self.left_foot_traj, self.right_foot_traj, self.gtime, self.pendulum)
self.ik.update_target()
self.controller.target = self.ik.solve()
# self.controller.target = np.zeros(self.skeletons[2].ndofs)
# self.controller.target = np.zeros(self.skeletons[2].ndofs) #self.skeletons[2].q
skel.set_controller(self.controller)
print('create controller OK')
self.contact_force = []
def generate_spline_trajectory(self, plist):
ctr = np.array(plist)
x = ctr[:, 0]
y = ctr[:, 1]
l = len(x)
t = np.linspace(0, 1, l - 2, endpoint=True)
t = np.append([0, 0, 0], t)
t = np.append(t, [1, 1, 1])
tck = [t, [x, y], 3]
u3 = np.linspace(0, 1, (500), endpoint=True)
# tck, u = interpolate.splprep([x, y], k=3, s=0)
# u = np.linspace(0, 1, num=50, endpoint=True)
out = interpolate.splev(u3, tck)
# print("out: ", out)
der = interpolate.splev(u3, tck, der = 1)
# print("der: ", der)
# print("x", out[0])
# print("y", out[1])
return out, der
def step(self):
# cf = pydart.world.CollisionResult.num_contacts()
# print("CollisionResult : \n", cf)
if self.force is not None and self.duration >= 0:
self.duration -= 1
self.skeletons[2].body('h_pelvis').add_ext_force(self.force)
skel = world.skeletons[1]
q = skel.q
foot_center = .5 * (world.skeletons[2].body("h_heel_left").com() + world.skeletons[2].body("h_heel_right").com())
# self.cart.x = world.skeletons[2].body("h_heel_left").to_world([0.0, 0.0, 0.0])[0]
self.cart.x = foot_center[0]
self.cart.x_dot = world.skeletons[2].body("h_heel_left").com_linear_velocity()[0]
com_pos = world.skeletons[2].com()
# v1 = com_pos - world.skeletons[2].body("h_heel_left").to_world([0.0, 0.0, 0.0])
v1 = com_pos - foot_center
v2 = np.array([0., 1., 0.])
v1 = v1 / np.linalg.norm(v1) # normalize
self.pendulum.theta = np.sign(-v1[0]) * math.acos(v1.dot(v2))
self.pendulum.theta_dot = self.pendulum.length * (world.skeletons[2].com_velocity()[0] - self.cart.x_dot) * math.cos(self.pendulum.theta)
self.state[0] = self.cart.x
self.state[1] = self.cart.x_dot
self.state[2] = self.pendulum.theta
self.state[3] = self.pendulum.theta_dot
q["j_cart_x"] = self.state[0]
q["j_cart_z"] = 0.
# q["default"] = -0.92
q["j_pole_x"] = 0.
q["j_pole_y"] = 0.
q["j_pole_z"] = self.state[2]
skel.set_positions(q)
F = IPC_1D.find_lqr_control_input(self.cart, self.pendulum, -9.8)
IPC_1D.apply_control_input(self.cart, self.pendulum, F, 0.01, self.state[3], -9.8)
self.state[0] = self.cart.x
self.state[1] = self.cart.x_dot
self.state[2] = self.pendulum.theta
self.state[3] = self.pendulum.theta_dot
q["j_cart_x"] = self.state[0]
q["j_cart_z"] = 0.
# q["default"] = 0.
q["j_pole_x"] = 0.
q["j_pole_y"] = 0.
q["j_pole_z"] = self.state[2]
# print("q", q)
# if self.gtime < 250:
# world.skeletons[2].q["j_pelvis_rot_z"] = -0.2
# print(self.gtime)
if (len(self.left_foot_traj[0]) -1) > self.gtime:
print("UPDATE TARGET: ", self.gtime)
self.gtime = self.gtime + 1
else:
print("TRAJECTORY OVER!!!!")
self.gtime = self.gtime
#solve ik
self.ik.update_target()
self.controller.target = self.ik.solve()
# self.controller.target = np.zeros(self.skeletons[2].ndofs)
# self.controller.target = self.skeletons[2].q
# print("qqqqqq", self.controller.target)
# print("self.controller.sol_lambda: \n", self.controller.sol_lambda)
# f= []
# print("self.controller.V_c", type(self.controller.V_c), self.controller.V_c)
# for n in range(len(self.controller.sol_lambda) // 4):
# f = self.controller.V_c.dot(self.controller.sol_lambda)
del self.contact_force[:]
if len(self.controller.sol_lambda) != 0:
# for ii in range(len(self.controller.sol_lambda)):
# if ii % 2 != 0.0:
# self.controller.sol_lambda[ii] = 5*self.controller.sol_lambda[ii]
# print("self.controller.sol_lambda: \n", self.controller.sol_lambda)
f_vec = self.controller.V_c.dot(self.controller.sol_lambda)
# print("f", f_vec)
f_vec = np.asarray(f_vec)
# print("contact num ?? : ", self.controller.contact_num)
# self.contact_force = np.zeros(self.controller.contact_num)
for ii in range(self.controller.contact_num):
self.contact_force.append(np.array([f_vec[3*ii], f_vec[3*ii+1], f_vec[3*ii+2]]))
# self.contact_force[ii] = np.array([f_vec[3*ii], f_vec[3*ii+1], f_vec[3*ii+2]])
# print("contact_force:", ii, self.contact_force[ii])
# print("contact_force:\n", self.contact_force)
for ii in range(self.controller.contact_num):
self.skeletons[2].body(self.controller.contact_list[2 * ii])\
.add_ext_force(self.contact_force[ii], self.controller.contact_list[2 * ii+1])
super(MyWorld, self).step()
skel.set_positions(q)
def on_key_press(self, key):
if key == '1':
self.force = np.array([500.0, 0.0, 0.0])
self.duration = 1000
print('push backward: f = %s' % self.force)
elif key == '2':
self.force = np.array([-50.0, 0.0, 0.0])
self.duration = 100
print('push backward: f = %s' % self.force)
def render_with_ri(self, ri):
if self.force is not None and self.duration >= 0:
p0 = self.skeletons[2].body('h_pelvis').C
p1 = p0 + 0.01 * self.force
ri.set_color(1.0, 0.0, 0.0)
ri.render_arrow(p0, p1, r_base=0.05, head_width=0.1, head_len=0.1)
ri.render_sphere([2.0, -0.90, 0], 0.05)
ri.render_sphere([0.0, -0.90, 0], 0.05)
# ri.set_color(0.0, 0.0, 1.0)
# ri.render_sphere(self.skeletons[2].body("h_pelvis").to_world([0.0, 0.0, 0.0]), 0.1)
# ri.render_sphere([0.0, -0.92+ self.pendulum.length, 0], 0.05)
# ri.render_sphere([0.0, -0.92+ 0.5*self.pendulum.length, 0], 0.05)
# ri.set_color(0.0, 1.0, 0.0)
# ri.render_sphere([self.skeletons[1].q["j_cart_x"] - self.pendulum.length *
# math.sin(self.pendulum.theta), -0.92 + self.pendulum.length * math.cos(self.pendulum.theta), 0.], 0.05)
# ri.render_sphere([self.skeletons[1].q["j_cart_x"] - self.pendulum.length *
# math.sin(self.pendulum.theta), 0., 0.], 0.05)
# render contact force --yul
contact_force = self.contact_force
if len(contact_force) != 0:
# print(len(contact_force), len(self.controller.contact_list))
# print("contact_force.size?", contact_force.size, len(contact_force))
ri.set_color(1.0, 0.0, 0.0)
for ii in range(len(contact_force)):
if 2 * len(contact_force) == len(self.controller.contact_list):
body = self.skeletons[2].body(self.controller.contact_list[2*ii])
contact_offset = self.controller.contact_list[2*ii+1]
# print("contact force : ", contact_force[ii])
ri.render_line(body.to_world(contact_offset), contact_force[ii]/100.)
for ctr_n in range(0, len(self.left_foot_traj[0])-1, 10):
ri.set_color(0., 1., 0.)
ri.render_sphere(np.array([self.left_foot_traj[0][ctr_n], -0.9, self.left_foot_traj[1][ctr_n]]), 0.01)
ri.set_color(0., 0., 1.)
ri.render_sphere(np.array([self.right_foot_traj[0][ctr_n], -0.9, self.right_foot_traj[1][ctr_n]]), 0.01)
# render axes
ri.render_axes(np.array([0, 0, 0]), 0.5)
if __name__ == '__main__':
print('Example: Skating -- Swizzles')
pydart.init()
print('pydart initialization OK')
world = MyWorld()
print('MyWorld OK')
skel = world.skeletons[2]
# enable the contact of skeleton
# skel.set_self_collision_check(1)
# world.skeletons[0].body('ground').set_collidable(False)
q = skel.q
q["j_thigh_left_z", "j_shin_left", "j_heel_left_1"] = 0.30, -0.5, 0.25
q["j_thigh_right_z", "j_shin_right", "j_heel_right_1"] = 0.30, -0.5, 0.25
# q["j_thigh_left_z", "j_shin_left", "j_heel_left_1"] = 0.30, -0.0, 0.0
# q["j_thigh_right_z", "j_shin_right", "j_heel_right_1"] = 0.30, -0.0, 0.0
q["j_heel_left_2"] = 0.7
q["j_heel_right_2"] = -0.7
# q["j_heel_left_2"] = 0.01
# q["j_heel_right_2"] = -0.01
# q["j_abdomen_2"] = -0.5
# both arm T-pose
# q["j_bicep_left_x", "j_bicep_left_y", "j_bicep_left_z"] = 1.5, 0.0, 0.0
# q["j_bicep_right_x", "j_bicep_right_y", "j_bicep_right_z"] = -1.5, 0.0, 0.0
# todo : make the character move forward !!!!
q["j_pelvis_rot_z"] = -0.2
# q["j_thigh_left_z"] = 0.30
# q["j_thigh_right_z"] = 0.30
# q["j_shin_left"] = -0.3
# q["j_shin_right"] = -0.3
# q["j_heel_left_1"] = 0.25
# q["j_heel_right_1"] = 0.25
print('[Joint]')
for joint in skel.joints:
print("\t" + str(joint))
print("\t\tparent = " + str(joint.parent_bodynode))
print("\t\tchild = " + str(joint.child_bodynode))
print("\t\tdofs = " + str(joint.dofs))
skel.set_positions(q)
print('skeleton position OK')
pydart.gui.viewer.launch_pyqt5(world) | 39.637324 | 145 | 0.574576 | 1,750 | 11,257 | 3.513143 | 0.131429 | 0.028302 | 0.024398 | 0.016916 | 0.553188 | 0.47056 | 0.388744 | 0.316363 | 0.280904 | 0.244307 | 0 | 0.059504 | 0.255041 | 11,257 | 284 | 146 | 39.637324 | 0.673623 | 0.307986 | 0 | 0.233333 | 0 | 0 | 0.067678 | 0.004149 | 0 | 0 | 0 | 0.003521 | 0 | 1 | 0.033333 | false | 0 | 0.046667 | 0 | 0.093333 | 0.093333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
adf50e30aba59b73965ac355c6bef80805e0cc8a | 3,509 | py | Python | patenttools/frontend.py | jhirner/patent-tools | 52fbfa45c92a69ec8d8352ac4994b12af3d0ed73 | [
"MIT"
] | null | null | null | patenttools/frontend.py | jhirner/patent-tools | 52fbfa45c92a69ec8d8352ac4994b12af3d0ed73 | [
"MIT"
] | null | null | null | patenttools/frontend.py | jhirner/patent-tools | 52fbfa45c92a69ec8d8352ac4994b12af3d0ed73 | [
"MIT"
] | null | null | null | # A Flask-based web frontend for PatentTools.
# Import the necessary modules and instantiate the app
from flask import Flask, render_template, request
from lookup import USPTOLookup
from distiller import TextDistiller
from requests.exceptions import ConnectionError
app = Flask(__name__)
# Define routes
@app.route("/")
def build_search():
display = render_template("search_form.html")
return display
@app.route("/results", methods = ["POST", "GET"])
def display_results():
# This route should be accessed only from the search form at /.
# If it is accessed directly via GET, provide a link to / instead.
if request.method == "POST":
# Capture the user's search parameters
form_submission = request.form
raw_pat_num = form_submission["raw_pat_num"]
bigram_freq_filter = int(form_submission["bigram_freq_filter"])
# If any query at all is present, try to proceed.
if raw_pat_num != "":
try:
# Instantiate a USPTOLookup to parse basic patent information
patent_info = USPTOLookup(raw_pat_num)
if patent_info.number == "unrecognized input":
error = """The patent number you provided could not be interpreted.<p>
Please <a href = '/'>enter a new query</a>."""
return error
# Instantiate a TextDistiller to extract bigrams
pat_distiller = TextDistiller(" ".join(patent_info.claims) + patent_info.description)
bigrams = pat_distiller.gen_bigrams(min_freq = bigram_freq_filter)
display = render_template("results.html",
pat_num = patent_info.number,
pat_title = patent_info.title,
pat_url = patent_info.url,
pat_class = patent_info.primary_class,
pat_assignee = patent_info.assignee,
pat_file_date = patent_info.filing_date,
citations_us = patent_info.cited_us,
citations_for = patent_info.cited_for,
pat_bigrams = bigrams,
wordcloud = pat_distiller.wordcloud)
return display
except ConnectionError:
conn_err1 = "<b>Error: The USPTO server is unreachable.</b><p>"
conn_err2 = "Please try again later."
                return conn_err1 + conn_err2
except:
error = """An error occured.<p>
Please <a href = '/'>enter a new query</a><br>
<a href = "https://forms.gle/nZT9JLJbA9akGpeE8" target = "_blank">or share feedback about the problem.</a>"""
return error
else:
error = """No query entered.<p>
Please <a href = '/'>enter a new query</a>."""
return error
else:
error = """No query entered.<p>
Please <a href = '/'>enter a new query</a>."""
return error
@app.route("/<unspecified_str>")
def handle_unknown(unspecified_str):
error = """Invalid path.<p>Please <a href = '/'>enter a new query</a>."""
return error
# Run the app
if __name__ == "__main__":
app.run() | 40.802326 | 129 | 0.547164 | 377 | 3,509 | 4.893899 | 0.395225 | 0.065041 | 0.02168 | 0.03252 | 0.128455 | 0.128455 | 0.128455 | 0.128455 | 0.128455 | 0.113821 | 0 | 0.003145 | 0.365631 | 3,509 | 86 | 130 | 40.802326 | 0.825696 | 0.125677 | 0 | 0.237288 | 0 | 0.016949 | 0.247874 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050847 | false | 0 | 0.067797 | 0 | 0.254237 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
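# Run locally with `python frontend.py`; Flask's development server defaults to
# http://127.0.0.1:5000/.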
adf714b6adc2031d2d1d5e81b00ecd2bcde7882e | 2,860 | py | Python | cat_detector/collector/observer.py | axbg/workshop-asmi-2020 | 68084ef46d99d088372787e627931c0fbca856d7 | [
"MIT"
] | 1 | 2020-12-08T12:55:14.000Z | 2020-12-08T12:55:14.000Z | cat_detector/collector/observer.py | axbg/workshop-asmi-2020 | 68084ef46d99d088372787e627931c0fbca856d7 | [
"MIT"
] | 2 | 2021-01-20T21:54:06.000Z | 2021-03-12T21:52:50.000Z | cat_detector/collector/observer.py | axbg/workshop-asmi-2020 | 68084ef46d99d088372787e627931c0fbca856d7 | [
"MIT"
] | null | null | null | import boto3
from datetime import datetime
from base64 import b64decode
from event import Event
class Observer:
def __init__(self, missed_frame_counter, bucket, access_id, access_key):
self.found = False
self.found_timestamp = None
self.lost_timestamp = None
self.last_time_saved = None
self.frame_counter_default = self.missed_frame = missed_frame_counter
self.cat_picture = None
self.bucket_name = bucket
self.bucket_client = boto3.client(
's3', region_name='eu-central-1', aws_access_key_id=access_id, aws_secret_access_key=access_key)
self.rekog_client = boto3.client(
'rekognition', region_name='eu-central-1', aws_access_key_id=access_id, aws_secret_access_key=access_key)
def get_last_pictures(self, n):
bucket_location = self.bucket_client.get_bucket_location(
Bucket=self.bucket_name)['LocationConstraint']
pictures = self.bucket_client.list_objects_v2(
Bucket=self.bucket_name)['Contents']
no_of_pictures = len(pictures)
if no_of_pictures < n:
n = no_of_pictures
result = list(map(lambda x: "https://{}.s3.{}.amazonaws.com/{}".format(
self.bucket_name, bucket_location, x["Key"]), pictures[-n:]))[::-1]
return result
def analyze_picture(self, image):
decoded_image = b64decode(image)
response = self.rekog_client.detect_labels(
Image={'Bytes': decoded_image})
labels = list(map(lambda a: a['Name'], response['Labels']))
if "Cat" in labels:
self.found = True
self.missed_frame = self.frame_counter_default
self.lost_timestamp = None
if self.found_timestamp is None:
self.cat_picture = decoded_image
self.found_timestamp = datetime.now()
print("Cat found at: {}". format(self.found_timestamp))
elif self.found:
if self.missed_frame > 0:
self.missed_frame -= 1
if self.lost_timestamp is None:
self.lost_timestamp = datetime.now()
else:
self.bucket_client.put_object(Bucket=self.bucket_name, Key="{}.jpg".format(
self.found_timestamp.strftime("%Y_%m_%d_%H:%M:%S")), Body=self.cat_picture, ACL="public-read")
return self.create_event()
return False
def create_event(self):
estimated_duration = (self.lost_timestamp -
self.found_timestamp).total_seconds() / 60
duration = estimated_duration if estimated_duration != 0 else 1
ev = Event(timestamp=self.found_timestamp, duration=duration)
self.found = False
self.found_timestamp = None
return ev
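# Hypothetical usage sketch (credentials and the frame source are placeholders):
#
#     observer = Observer(missed_frame_counter=30, bucket='my-bucket',
#                         access_id='AKIA...', access_key='...')
#     for frame_b64 in frames:          # base64-encoded JPEG frames
#         event = observer.analyze_picture(frame_b64)
#         if event:                     # a visit just ended
#             print(event.timestamp, event.duration)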
| 36.202532 | 117 | 0.624476 | 345 | 2,860 | 4.904348 | 0.292754 | 0.06383 | 0.085106 | 0.035461 | 0.156028 | 0.124113 | 0.124113 | 0.08156 | 0.08156 | 0.08156 | 0 | 0.010165 | 0.277622 | 2,860 | 78 | 118 | 36.666667 | 0.808809 | 0 | 0 | 0.101695 | 0 | 0 | 0.058392 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067797 | false | 0 | 0.067797 | 0 | 0.220339 | 0.016949 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
adf921e30fac59bf7973c57c46878ad0864edc05 | 1,147 | py | Python | cipher.py | rawalshree/Modern-ciphers | 05fbd14226183f5339ff90b6a6535af3a59e507b | [
"MIT"
] | null | null | null | cipher.py | rawalshree/Modern-ciphers | 05fbd14226183f5339ff90b6a6535af3a59e507b | [
"MIT"
] | null | null | null | cipher.py | rawalshree/Modern-ciphers | 05fbd14226183f5339ff90b6a6535af3a59e507b | [
"MIT"
] | null | null | null | '''
Owner - Rawal Shree
Email - rawalshreepal000@gmail.com
Github - https://github.com/rawalshree
'''
from des import *
from aes import *
import sys
aes = SimpleAES()
des = SimpleDES()
def cipher(cipher_name, secret_key, enc_dec, input_file, output_file):
intext = ""
outtext = ""
print("The cipher name is :", cipher_name)
print("The secret key is :", secret_key)
print("The operation is :", enc_dec)
print("The input file is :", input_file)
print("The output file is :", output_file)
options = {"AES" : (aes.setKey, {"ENC" : aes.encryption, "DEC" : aes.decryption}),
"DES" : (des.setKey, {"ENC" : des.encryption, "DEC" : des.decryption})}
file = open(input_file, "r")
for line in file:
intext += line
file.close()
options[cipher_name][0](secret_key)
outtext = options[cipher_name][1][enc_dec](intext)
file = open(output_file, "w+")
file.write(outtext)
    file.close()
if __name__ == "__main__":
a = str(sys.argv[1])
b = str(sys.argv[2])
c = str(sys.argv[3])
d = str(sys.argv[4])
e = str(sys.argv[5])
cipher(a, b, c, d, e) | 23.408163 | 86 | 0.608544 | 162 | 1,147 | 4.160494 | 0.37037 | 0.074184 | 0.074184 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011312 | 0.229294 | 1,147 | 49 | 87 | 23.408163 | 0.751131 | 0.081081 | 0 | 0 | 0 | 0 | 0.120344 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032258 | false | 0 | 0.096774 | 0 | 0.129032 | 0.16129 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
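# Example invocations (argument order: cipher name, key, operation, input file,
# output file; the file names are placeholders):
#   python cipher.py AES mysecretkey ENC plain.txt cipher.txt
#   python cipher.py DES mysecretkey DEC cipher.txt plain.txt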
adfc418c874b21f8f30c2e2fe7967a4d795b51df | 1,152 | py | Python | python/walker.py | altayhunter/xkcd-2529 | 8ab52c33135e3cedfc4d3816a5a4eb3a21c8a3bb | [
"MIT"
] | 1 | 2021-11-22T23:17:55.000Z | 2021-11-22T23:17:55.000Z | python/walker.py | altayhunter/xkcd-2529 | 8ab52c33135e3cedfc4d3816a5a4eb3a21c8a3bb | [
"MIT"
] | null | null | null | python/walker.py | altayhunter/xkcd-2529 | 8ab52c33135e3cedfc4d3816a5a4eb3a21c8a3bb | [
"MIT"
] | null | null | null | #!/usr/bin/pypy3
from bestline import BestLine
from line import Point
from random import randrange
class Walker:
def __init__(self, n: int, k: int):
location = Point(0, 0)
self.marbles = []
self.visited = set([location])
if n == 0 or k == 0: return
while not self.__trapped(location):
desired = self.__validNeighbor(location)
if len(self.visited) % n == 0:
self.marbles.append(desired)
self.visited.add(desired)
location = desired
if len(self.visited) > n * k: break
def __trapped(self, p: Point) -> bool:
return (
p.up() in self.visited and
p.right() in self.visited and
p.down() in self.visited and
p.left() in self.visited
)
def __validNeighbor(self, p: Point) -> Point:
neighbor = self.__randomNeighbor(p)
while neighbor in self.visited:
neighbor = self.__randomNeighbor(p)
return neighbor
def __randomNeighbor(self, p: Point) -> Point:
direction = randrange(4)
if direction == 0:
return p.up()
elif direction == 1:
return p.right()
elif direction == 2:
return p.down()
else:
return p.left()
def intersections(self) -> int:
return BestLine(self.marbles).numPoints
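# Minimal usage sketch (an addition): walk until trapped (or n*k steps), dropping
# a marble every n steps, then count how many marbles the best-fit line passes
# through.
#
#     w = Walker(n=10, k=100)
#     print(len(w.visited), 'cells visited,', w.intersections(), 'collinear marbles')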
| 26.790698 | 47 | 0.677951 | 164 | 1,152 | 4.652439 | 0.323171 | 0.129751 | 0.08519 | 0.06291 | 0.111402 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010799 | 0.196181 | 1,152 | 42 | 48 | 27.428571 | 0.813175 | 0.013021 | 0 | 0.05 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.075 | 0.05 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
adfe773f21de61d88bcf64f758d424f39b1d0db9 | 2,050 | py | Python | scripts/compute_cityscapes_stats.py | oscarkey/multitask-learning | c4503c044ca7a29bebd4e70e9e030524654e5d00 | [
"MIT"
] | 44 | 2019-11-25T01:09:49.000Z | 2022-03-29T12:04:38.000Z | scripts/compute_cityscapes_stats.py | oscarkey/multitask-learning | c4503c044ca7a29bebd4e70e9e030524654e5d00 | [
"MIT"
] | 2 | 2020-02-16T17:26:47.000Z | 2020-03-13T15:49:47.000Z | scripts/compute_cityscapes_stats.py | oscarkey/multitask-learning | c4503c044ca7a29bebd4e70e9e030524654e5d00 | [
"MIT"
] | 5 | 2020-03-09T21:11:49.000Z | 2021-06-04T05:20:46.000Z | """Computes the mean and std of the Cityscapes input images."""
import argparse
import os
import numpy as np
from PIL import Image
_IMAGE_FILE_SUFFIX = 'leftImg8bit.png'
def _compute_stats_for_image(file_path: str) -> (np.ndarray, np.ndarray):
with Image.open(file_path) as image:
image_array = np.asarray(image) / 255
mean = image_array.mean((0, 1))
std = image_array.std((0, 1))
assert mean.shape == (3,), f'mean had shape {mean.shape}'
assert std.shape == (3,), f'std had shape {std.shape}'
return mean, std
def _compute_stats_for_dir(dir_name) -> ([np.ndarray], [np.ndarray]):
if not os.path.isdir(dir_name):
raise ValueError(f'Directory does not exist: {dir}')
means = []
stds = []
for dir_path, _, file_names in os.walk(dir_name):
for file_name in file_names:
if file_name.endswith(_IMAGE_FILE_SUFFIX):
mean, std = _compute_stats_for_image(os.path.join(dir_path, file_name))
means.append(mean)
stds.append(std)
return means, stds
def main(dirs: [str]):
assert len(dirs) > 0
means = []
stds = []
for dir in dirs:
ms, ss = _compute_stats_for_dir(dir)
means.extend(ms)
stds.extend(ss)
assert len(means) == len(stds)
mean_conc = np.stack(means, axis=0)
std_conc = np.stack(stds, axis=0)
assert mean_conc.shape[1] == 3 and len(mean_conc.shape) == 2, f'mean_conc.shape = {mean_conc.shape}'
assert std_conc.shape[1] == 3 and len(std_conc.shape) == 2, f'std_conc.shape = {std_conc.shape}'
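    # Note: averaging per-image means/stds assumes all images share one resolution
    # (true for Cityscapes); the averaged std is an approximation of the global std.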
mean = mean_conc.mean(axis=0)
std = std_conc.mean(axis=0)
assert mean.shape == (3,), f'mean shape = {mean.shape}'
assert std.shape == (3,), f'std shape = {std.shape}'
print(f'Processed {len(means)} images')
print(f'mean={mean} std={std}')
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('dirs', type=str, nargs='+')
args = parser.parse_args()
main(args.dirs)
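# Example invocation (directory names are placeholders; any tree containing
# *leftImg8bit.png files works):
#   python compute_cityscapes_stats.py leftImg8bit/train leftImg8bit/val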
| 28.082192 | 104 | 0.625366 | 306 | 2,050 | 3.993464 | 0.261438 | 0.05892 | 0.0491 | 0.02946 | 0.150573 | 0.116203 | 0.05401 | 0.05401 | 0.05401 | 0 | 0 | 0.014622 | 0.232683 | 2,050 | 72 | 105 | 28.472222 | 0.762238 | 0.027805 | 0 | 0.081633 | 0 | 0 | 0.139406 | 0 | 0 | 0 | 0 | 0 | 0.163265 | 1 | 0.061224 | false | 0 | 0.081633 | 0 | 0.183673 | 0.040816 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
adfec23bd0d17434610a70d2611ba6040bdeb27a | 1,797 | py | Python | source/db_api/api/deps.py | JungeAlexander/kbase_db_api | f3ec5e8b9ae509f9e8d962183efef21be61ef425 | [
"MIT"
] | 1 | 2021-09-19T14:31:44.000Z | 2021-09-19T14:31:44.000Z | source/db_api/api/deps.py | JungeAlexander/kbase_db_api | f3ec5e8b9ae509f9e8d962183efef21be61ef425 | [
"MIT"
] | 4 | 2020-10-13T08:41:49.000Z | 2021-04-29T18:05:40.000Z | source/db_api/api/deps.py | JungeAlexander/kbase_db_api | f3ec5e8b9ae509f9e8d962183efef21be61ef425 | [
"MIT"
] | null | null | null | from fastapi import Depends, HTTPException, status
from fastapi.security import OAuth2PasswordBearer
from jose import JWTError, jwt
from sqlalchemy.orm import Session
from db_api import crud, models, schemas
from db_api.core.config import settings
from db_api.database import create_session
oauth2_scheme = OAuth2PasswordBearer(tokenUrl=f"{settings.API_V1_STR}/token")
def get_db():
db = None
try:
db = create_session()
yield db
finally:
db.close()
def get_current_user(
token: str = Depends(oauth2_scheme), db: Session = Depends(get_db)
) -> models.User:
credentials_exception = HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Could not validate credentials",
headers={"WWW-Authenticate": "Bearer"},
)
try:
payload = jwt.decode(
token, settings.JWT_SECRET_KEY, algorithms=[settings.JWT_ALGORITHM]
)
username: str = payload.get("sub")
if username is None:
raise credentials_exception
token_data = schemas.TokenData(username=username)
except JWTError:
raise credentials_exception
db_user = crud.get_user_by_username(db, username=token_data.username)
if db_user is None:
raise credentials_exception
return db_user
def get_current_active_user(
current_user: schemas.User = Depends(get_current_user),
) -> models.User:
if not current_user.is_active:
raise HTTPException(status_code=400, detail="Inactive user")
return current_user
def get_current_active_superuser(
current_user: schemas.User = Depends(get_current_user),
) -> models.User:
if not current_user.is_superuser:
raise HTTPException(status_code=400, detail="Not enough privileges")
return current_user
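# A minimal route sketch (an addition, not part of this module; it assumes a
# schemas.User response model exists) showing how the dependency chain above is
# typically consumed:
#
#     from fastapi import APIRouter
#
#     router = APIRouter()
#
#     @router.get("/users/me", response_model=schemas.User)
#     def read_own_profile(
#         current_user: models.User = Depends(get_current_active_user),
#     ):
#         return current_user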
| 29.95 | 79 | 0.715081 | 224 | 1,797 | 5.508929 | 0.339286 | 0.080227 | 0.02188 | 0.035656 | 0.262561 | 0.175041 | 0.115073 | 0.115073 | 0.115073 | 0.115073 | 0 | 0.009804 | 0.205342 | 1,797 | 59 | 80 | 30.457627 | 0.854342 | 0 | 0 | 0.244898 | 0 | 0 | 0.064552 | 0.015025 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081633 | false | 0.040816 | 0.142857 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
adff594acec24257ee9a77cd4ad08bd9bbd52eb9 | 1,267 | py | Python | tests/unitary/LiquidityGaugeV2/test_claim_rewards_none.py | samwerner/curve-dao-contracts | beb6693f2063337fa8feff2c047e717c484e2cb3 | [
"MIT"
] | null | null | null | tests/unitary/LiquidityGaugeV2/test_claim_rewards_none.py | samwerner/curve-dao-contracts | beb6693f2063337fa8feff2c047e717c484e2cb3 | [
"MIT"
] | null | null | null | tests/unitary/LiquidityGaugeV2/test_claim_rewards_none.py | samwerner/curve-dao-contracts | beb6693f2063337fa8feff2c047e717c484e2cb3 | [
"MIT"
] | null | null | null | from brownie import ZERO_ADDRESS
REWARD = 10 ** 20
WEEK = 7 * 86400
LP_AMOUNT = 10 ** 18
def test_claim_no_deposit(alice, bob, chain, gauge_v2, mock_lp_token, reward_contract, coin_reward):
    # Fund
    mock_lp_token.approve(gauge_v2, LP_AMOUNT, {'from': alice})
    gauge_v2.deposit(LP_AMOUNT, {'from': alice})
    coin_reward._mint_for_testing(REWARD, {'from': alice})
    coin_reward.transfer(reward_contract, REWARD, {'from': alice})
    reward_contract.notifyRewardAmount(REWARD, {'from': alice})
    gauge_v2.set_rewards(
        reward_contract,
        # sigs blob: packed 4-byte function selectors the gauge uses to call
        # the reward contract (stake/withdraw/claim), zero-padded to 32 bytes
        "0xa694fc3a2e1a7d4d3d18b9120000000000000000000000000000000000000000",
        [coin_reward] + [ZERO_ADDRESS] * 7,
        {'from': alice}
    )
    chain.sleep(WEEK)
    gauge_v2.claim_rewards({'from': bob})
    assert coin_reward.balanceOf(bob) == 0
def test_claim_no_rewards(alice, bob, chain, gauge_v2, mock_lp_token, reward_contract, coin_reward):
    # Deposit
    mock_lp_token.transfer(bob, LP_AMOUNT, {'from': alice})
    mock_lp_token.approve(gauge_v2, LP_AMOUNT, {'from': bob})
    gauge_v2.deposit(LP_AMOUNT, {'from': bob})
    chain.sleep(WEEK)
    gauge_v2.withdraw(LP_AMOUNT, {'from': bob})
    gauge_v2.claim_rewards({'from': bob})
    assert coin_reward.balanceOf(bob) == 0
| 28.795455 | 100 | 0.69929 | 167 | 1,267 | 4.988024 | 0.257485 | 0.084034 | 0.086435 | 0.061224 | 0.478992 | 0.436975 | 0.352941 | 0.352941 | 0.352941 | 0.264106 | 0 | 0.078095 | 0.171271 | 1,267 | 43 | 101 | 29.465116 | 0.715238 | 0.009471 | 0 | 0.222222 | 0 | 0 | 0.091054 | 0.052716 | 0 | 0 | 0.052716 | 0 | 0.074074 | 1 | 0.074074 | false | 0 | 0.037037 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
adffdab413418076de4cfb27e026c3f35a90bc22 | 1,903 | py | Python | tests/test_views.py | rasimogluali/flask-todo-app | 4187421e26c154a9682c52bdf2f9db0daab32b80 | [
"MIT"
] | null | null | null | tests/test_views.py | rasimogluali/flask-todo-app | 4187421e26c154a9682c52bdf2f9db0daab32b80 | [
"MIT"
] | null | null | null | tests/test_views.py | rasimogluali/flask-todo-app | 4187421e26c154a9682c52bdf2f9db0daab32b80 | [
"MIT"
] | null | null | null | from todo import create_app
import pytest
app = create_app()
class TestViews:
    def setup(self):
        app.testing = True
        self.client = app.test_client()
        print('before')
    @pytest.mark.run(order=6)
    def test_x(self):
        assert 1+1 == 2
    @pytest.mark.run(order=1)
    def test_create_todo(self):
        response = self.client.post(
            '/todos/todo/create',
            json={
                'title': 'created test title',
                'description': 'created test description'
            })
        assert response.status_code == 201
        todos = self.client.get('/todos')
        assert todos.json[-1].get('title') == 'created test title'
    @pytest.mark.run(order=2)
    def test_get_by_id_todo(self):
        response = self.client.get('/todos/todo/3')
        assert response.status_code == 200
        result = response.json
        assert result['title'] == 'created test title'
    @pytest.mark.run(order=3)
    def test_get_all_todo(self):
        response = self.client.get('/todos')
        assert response.status_code == 200
        results = response.json
        assert len(results) > 0
    @pytest.mark.run(order=4)
    def test_update_todo(self):
        response = self.client.put(
            '/todos/todo/update/3',
            json={
                'title': 'updated test title',
                'description': 'updated test description'
            })
        assert response.status_code == 200
        updated_todo = self.client.get('/todos/todo/3')
        assert updated_todo.json['title'] == 'updated test title'
    @pytest.mark.run(order=5)
    def test_delete_todo(self):
        response = self.client.delete('/todos/todo/delete/3')
        assert response.status_code == 200
    @pytest.mark.run(order=7)
    def test_y(self):
        assert 1+1 == 2
    def teardown(self):
        print('after') | 29.734375 | 66 | 0.578035 | 233 | 1,903 | 4.613734 | 0.236052 | 0.074419 | 0.084651 | 0.117209 | 0.501395 | 0.316279 | 0.173023 | 0.072558 | 0 | 0 | 0 | 0.025298 | 0.293747 | 1,903 | 64 | 67 | 29.734375 | 0.774554 | 0 | 0 | 0.185185 | 0 | 0 | 0.153361 | 0 | 0 | 0 | 0 | 0 | 0.203704 | 1 | 0.166667 | false | 0 | 0.037037 | 0 | 0.222222 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
adffdc6dadfe72986fa6bfd8eb42c0b306fb96ba | 1,661 | py | Python | openhab_creator/__init__.py | DerOetzi/openhab_creator | 197876df5aae84192c34418f6b9a7cfcee23b195 | [
"MIT"
] | 1 | 2021-11-16T22:48:26.000Z | 2021-11-16T22:48:26.000Z | openhab_creator/__init__.py | DerOetzi/openhab_creator | 197876df5aae84192c34418f6b9a7cfcee23b195 | [
"MIT"
] | null | null | null | openhab_creator/__init__.py | DerOetzi/openhab_creator | 197876df5aae84192c34418f6b9a7cfcee23b195 | [
"MIT"
] | null | null | null | from pathlib import Path
import gettext
import logging
import sys
import locale
import os
from enum import Enum
try:
    import ctypes
except ImportError:
    ctypes = None
from ._version import get_versions
__version__ = get_versions()['version']
del get_versions
logger = logging.getLogger(__name__)
def prepare_language_for_windows():
    if sys.platform == 'win32':
        if ctypes is None:
            languages = [locale.getdefaultlocale()[0]]
        else:
            lcid_user = ctypes.windll.kernel32.GetUserDefaultLCID()
            lcid_system = ctypes.windll.kernel32.GetSystemDefaultLCID()
            if lcid_user != lcid_system:
                lcids = [lcid_user, lcid_system]
            else:
                lcids = [lcid_user]
            languages = list(filter(None, [locale.windows_locale.get(i)
                                           for i in lcids])) or None
        if languages:
            os.environ['LANGUAGE'] = ':'.join(languages)
pwd = Path(__file__).resolve().parent
prepare_language_for_windows()
_ = gettext.gettext
gettext.bindtextdomain("messages", f'{pwd}/locale')
gettext.textdomain("messages")
gettext.install("messages")
class CreatorEnum(Enum):
    def __new__(cls, *args):
        obj = object.__new__(cls)
        obj._value_ = args[0]
        return obj
    def __str__(self) -> str:
        return str(self.value)
class classproperty():
    #pylint: disable=invalid-name
    def __init__(self, method=None):
        self.fget = method
    def __get__(self, instance, cls=None):
        return self.fget(cls)
    def getter(self, method):
        self.fget = method
        return self
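# Illustrative usage of classproperty (an assumption for documentation, not
# code shipped by this package):
# class Example:
#     @classproperty
#     def label(cls):
#         return cls.__name__.lower()
# Example.label  # -> 'example', accessed on the class without instantiation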
| 22.146667 | 71 | 0.632149 | 188 | 1,661 | 5.303191 | 0.420213 | 0.032096 | 0.036108 | 0.05015 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006574 | 0.267309 | 1,661 | 74 | 72 | 22.445946 | 0.812654 | 0.016857 | 0 | 0.078431 | 0 | 0 | 0.034926 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.196078 | 0.039216 | 0.431373 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc020de8eb1b9f71a0f3e176f659a8b139b667d3 | 588 | py | Python | src/MapMyNotesApplication/MapMyNotesApplication/controllers/logout.py | Ryan-Gouldsmith/MajorProject-MapMyNotes | 2c350f68f992e454e88d3653e46e7607e224e3ae | [
"MIT"
] | null | null | null | src/MapMyNotesApplication/MapMyNotesApplication/controllers/logout.py | Ryan-Gouldsmith/MajorProject-MapMyNotes | 2c350f68f992e454e88d3653e46e7607e224e3ae | [
"MIT"
] | null | null | null | src/MapMyNotesApplication/MapMyNotesApplication/controllers/logout.py | Ryan-Gouldsmith/MajorProject-MapMyNotes | 2c350f68f992e454e88d3653e46e7607e224e3ae | [
"MIT"
] | null | null | null | from MapMyNotesApplication.models.session_helper import SessionHelper
from flask import Blueprint, url_for, redirect, session
logoutblueprint = Blueprint('logout', __name__)
GET = 'GET'
@logoutblueprint.route("/logout", methods=[GET])
def logout():
    """
    Renders the logout route and removes a user from the session
    """
    # Remove credentials key and user id from session
    session_helper = SessionHelper(session)
    session_helper.delete_credentials_from_session()
    session_helper.delete_user_from_session()
    return redirect(url_for('homepage.home_page_route'))
| 30.947368 | 69 | 0.765306 | 71 | 588 | 6.084507 | 0.478873 | 0.12037 | 0.138889 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147959 | 588 | 18 | 70 | 32.666667 | 0.862275 | 0.185374 | 0 | 0 | 0 | 0 | 0.086393 | 0.051836 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.2 | 0 | 0.4 | 0.3 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc026ecedf93b2b02ebf38f54ce006c02a26c834 | 6,117 | py | Python | src/core/src/aclient.py | ovgu-FINken/DrivingSwarm | cddc5850b453093833cd9faf0729a930ea20bacd | [
"MIT"
] | null | null | null | src/core/src/aclient.py | ovgu-FINken/DrivingSwarm | cddc5850b453093833cd9faf0729a930ea20bacd | [
"MIT"
] | 39 | 2019-04-03T07:56:00.000Z | 2020-04-18T10:36:04.000Z | src/core/src/aclient.py | ovgu-FINken/DrivingSwarm | cddc5850b453093833cd9faf0729a930ea20bacd | [
"MIT"
] | 3 | 2020-07-05T13:47:37.000Z | 2021-02-08T02:30:03.000Z | #!/usr/bin/env python
# RUNS ON THE TURTLEBOT
import os
from datetime import datetime
import rospy
import actionlib
import yaml
from core.msg import BehaviourAction, BehaviourGoal
from core.srv import BehaviourAPI
class BehaviourAClient:
    def __init__(self):
        filedir = os.path.dirname(__file__)
        # open behaviour_list which contains all behavs in respective packages
        # pkg_name:
        #   - service_call
        behav_file = open(os.path.join(filedir, '../cfg/behaviour_list.yaml'), 'r')
        self.behav_list = yaml.safe_load(behav_file)
        behav_file.close()
        # mode switches between interactive mode (tui) and non (.yaml file)
        self.mode = rospy.get_param('~inter_mode', False)
        # timeout for waiting for action clients
        self.action_timeout = rospy.Duration(rospy.get_param('~action_timeout', 60))
        # TODO timeout for TUI?
        # start service / load behaviour_flow for sequencing of behaviours
        if self.mode:
            self.inter_srv = rospy.Service('behaviour_aclient_api', BehaviourAPI, self.api_handler)
            rospy.logwarn('NOT IMPLEMENTED YET')
        else:
            # flow_file must be of type
            # - ['behaviour_name', 'pkg', "['file.launch','arg1:=val','arg2:=val', ...]"]
            # - ['behaviour2_name', ...]
            # (DOUBLE QUOTES (") ARE IMPORTANT HERE)
            flow_file = open(os.path.join(filedir, '../cfg/behaviour_flow.yaml'), 'r')
            self.flow = yaml.safe_load(flow_file)
            flow_file.close()
            #try:
            #    os.remove(os.path.join(filedir, '../cfg/flow.log'))
            #except:
            #    rospy.loginfo('[INIT] no old flow.log found')
        # initialise on all servers (turtlebots)
        bot_file = open(os.path.join(filedir, '../cfg/bot_list.yaml'), 'r')
        self.names = yaml.safe_load(bot_file)
        bot_file.close()
        # all action clients
        self.a_clients = []
        # number of bots
        self.no_bots = 0
        for namespace in self.names:
            self.no_bots += 1
            self.a_clients.append(actionlib.SimpleActionClient('/' + namespace + '/behaviour_aserver', BehaviourAction))
        rospy.logwarn('[INIT] Starting with interactive_mode ' + str(self.mode))
        # iterate over a copy: unreachable clients are removed from the list
        # while looping (removing from the list being iterated skips entries)
        for a_client in list(self.a_clients):
            rospy.logwarn('[INIT] Waiting for ' + a_client.action_client.ns)
            if not a_client.wait_for_server(timeout=self.action_timeout):
                self.a_clients.remove(a_client)
                self.no_bots -= 1
        rospy.logwarn(self.a_clients)
        if not self.no_bots > 0:
            # __init__ must not return a value; the original "return 1" would
            # have raised a TypeError anyway, so raise explicitly instead.
            raise RuntimeError('[INIT] a_clients is empty (all unreachable)')
        if not self.mode:
            rospy.logwarn('[INIT] Entering [FLOW] mode')
            self.flow_handler()
    # handles behaviour_flow-mode
    def flow_handler(self):
        #filedir = os.path.dirname(__file__)
        #self.log = open(os.path.join(filedir, '../cfg/flow.log'), 'a')
        for flow_item in self.flow:
            try:
                if flow_item[0] not in self.behav_list[flow_item[1]]:
                    rospy.signal_shutdown('[ERROR] ' + flow_item[0] + ' not in behaviour_list (' + flow_item[1] + ')')
                    exit()
            except Exception as e:
                rospy.signal_shutdown('[ERROR] (' + str(type(e)) + str(e) + ') while looking up ' + flow_item[0] + ' in pkg ' + flow_item[1])
                exit()
            self.flow_step = 0
            behav_goal = BehaviourGoal()
            behav_goal.behav_name = flow_item[0]
            behav_goal.behav_pkg = flow_item[1]
            behav_goal.behav_call = flow_item[2]
            rospy.logwarn("[FLOW] " + flow_item[0])
            for a_client in self.a_clients:
                rospy.logwarn('[FLOW] Sending to ' + a_client.action_client.ns)
                a_client.send_goal(behav_goal, feedback_cb=self.flow_feedback, done_cb=self.flow_done)
                #self.log.write('[' + str(datetime.now().time()) + ']: ' + str(flow_item[0]) + ' with call: ' + str(flow_item[1]))
            rate = rospy.Rate(2)
            rospy.logwarn('[FLOW] Sleep until flow is done')
            while not self.flow_step >= self.no_bots:
                rate.sleep()
            rospy.logwarn('[FLOW] moving on to next behaviour in flow')
        rospy.logwarn('[FLOW] flow done')
        #self.log.close()
        rospy.signal_shutdown('[FLOW] done')
    def flow_feedback(self, feedback):
        rospy.logwarn('[FEEDBACK] [' + str(feedback.ns) + '] ' + str(feedback.prog_status) + ' ' + str(feedback.prog_perc))
        #self.log.write('[' + str(datetime.now().time()) + '][' + str(feedback.ns) + ']: ' + ' PROG[' + str(feedback.prog_perc) + '%] - ' + str(feedback.prog_status))
    def flow_done(self, term_state, result):
        rospy.logwarn('[FLOWDONE] ' + str(result.res_msg))
        #succ = 'SUCCESS - ' if result.res_success else 'FAILURE - '
        #self.log.write('[' + str(datetime.now().time()) + '][' + str(result.ns) + ']: ' + ' DONE ' + succ + result.res_msg)
        self.flow_step += 1
    # TODO TODO
    # handler for api-requests
    def api_handler(self, req):
        # 0 = set a new behav (preempt/replace old one)
        # 1 = get feedback on behav
        # 2 = cancel active behav
        # 3 = try to reconnect bots
        # 3-4 = pause / unpause (in future)
        answer = {'answ_type': 0, 'answ_ns': [], 'answ_msg': []}
        # TODO
        if req.req_type == 0:
            # correct yaml is sent by tui (cf flow_file); keep the parsed list
            # in its own name instead of shadowing it with the goal object
            parsed = yaml.safe_load(req.req_msg)
            behav = BehaviourGoal()
            behav.behav_name = parsed[0]
            behav.behav_pkg = parsed[1]
            behav.behav_call = parsed[2]
            # send_behav()  # TODO: not implemented yet
        return answer
    # TODO
    def api_feedback(self, feedback):
        rospy.logwarn("WIP")
    # TODO
    def api_done(self, result):
        rospy.logwarn("WIP")
if __name__ == '__main__':
    rospy.init_node('behaviour_aclient')
    behaviour_aclient = BehaviourAClient()
    # rospy.spin()
| 36.628743 | 162 | 0.585745 | 766 | 6,117 | 4.492167 | 0.25718 | 0.05231 | 0.020924 | 0.024702 | 0.151409 | 0.112467 | 0.095031 | 0.0712 | 0.020924 | 0 | 0 | 0.008202 | 0.282491 | 6,117 | 166 | 163 | 36.849398 | 0.775803 | 0.24767 | 0 | 0.065934 | 0 | 0 | 0.123192 | 0.016002 | 0 | 0 | 0 | 0.006024 | 0 | 1 | 0.076923 | false | 0 | 0.076923 | 0 | 0.186813 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc0abe3f54f7be5475a88b541363636e452ed4a2 | 6,992 | py | Python | extras/tests/test_models.py | maznu/peering-manager | d249fcf530f4cc48b39429badb79bc203e0148ba | [
"Apache-2.0"
] | 127 | 2017-10-12T00:27:45.000Z | 2020-08-07T11:13:55.000Z | extras/tests/test_models.py | maznu/peering-manager | d249fcf530f4cc48b39429badb79bc203e0148ba | [
"Apache-2.0"
] | 247 | 2017-12-26T12:55:34.000Z | 2020-08-08T11:57:35.000Z | extras/tests/test_models.py | maznu/peering-manager | d249fcf530f4cc48b39429badb79bc203e0148ba | [
"Apache-2.0"
] | 63 | 2017-10-13T06:46:05.000Z | 2020-08-08T00:41:57.000Z | import ipaddress
from unittest.mock import patch
from django.test import TestCase
from extras.models import IXAPI
from utils.testing import MockedResponse
class IXAPITest(TestCase):
    @classmethod
    def setUpTestData(cls):
        IXAPI.objects.bulk_create(
            [
                IXAPI(
                    name="IXP 1",
                    url="https://ixp1-ixapi.example.net/v1/",
                    api_key="key-ixp1",
                    api_secret="secret-ixp1",
                ),
                IXAPI(
                    name="IXP 2",
                    url="https://ixp2-ixapi.example.net/v2/",
                    api_key="key-ixp2",
                    api_secret="secret-ixp2",
                ),
                IXAPI(
                    name="IXP 3",
                    url="https://ixp3-ixapi.example.net/v3/",
                    api_key="key-ixp3",
                    api_secret="secret-ixp3",
                ),
            ]
        )
        cls.ix_api = IXAPI.objects.get(name="IXP 1")
    def test_version(self):
        self.assertEqual(1, IXAPI.objects.get(name="IXP 1").version)
        self.assertEqual(2, IXAPI.objects.get(name="IXP 2").version)
        self.assertEqual(3, IXAPI.objects.get(name="IXP 3").version)
    @patch(
        "requests.post",
        return_value=MockedResponse(
            content={"access_token": "1234", "refresh_token": "1234"}
        ),
    )
    def test_dial(self, *_):
        c = self.ix_api.dial()
        self.assertEqual("1234", c.access_token)
        self.assertEqual("1234", c.refresh_token)
    @patch(
        "requests.post",
        return_value=MockedResponse(
            content={"access_token": "1234", "refresh_token": "1234"}
        ),
    )
    def test_get_health(self, *_):
        with self.assertRaises(NotImplementedError):
            self.ix_api.get_health()
        i = IXAPI.objects.get(name="IXP 2")
        with patch(
            "requests.get", return_value=MockedResponse(content={"status": "up"})
        ):
            self.assertEqual("healthy", i.get_health())
        with patch(
            "requests.get", return_value=MockedResponse(content={"status": "warn"})
        ):
            self.assertEqual("degraded", i.get_health())
        with patch(
            "requests.get", return_value=MockedResponse(content={"status": "error"})
        ):
            self.assertEqual("unhealthy", i.get_health())
    @patch(
        "requests.post",
        return_value=MockedResponse(
            content={"access_token": "1234", "refresh_token": "1234"}
        ),
    )
    def test_get_accounts(self, *_):
        with patch(
            "requests.get",
            return_value=MockedResponse(
                fixture="extras/tests/fixtures/ix_api/accounts.json"
            ),
        ):
            c = self.ix_api.get_accounts()
            self.assertEqual(2, len(c))
            self.assertEqual("1234", c[0]["id"])
            self.assertEqual("5678", c[1]["id"])
    @patch(
        "requests.post",
        return_value=MockedResponse(
            content={"access_token": "1234", "refresh_token": "1234"}
        ),
    )
    def test_get_identity(self, *_):
        with patch(
            "requests.get",
            return_value=MockedResponse(content=[{"id": "1234", "name": "Customer 1"}]),
        ):
            self.assertIsNone(self.ix_api.get_identity())
            self.ix_api.identity = "1234"
            self.assertEqual("1234", self.ix_api.get_identity()["id"])
    @patch(
        "requests.post",
        return_value=MockedResponse(
            content={"access_token": "1234", "refresh_token": "1234"}
        ),
    )
    def test_get_ips(self, *_):
        with patch(
            "requests.get",
            return_value=MockedResponse(
                fixture="extras/tests/fixtures/ix_api/ips.json"
            ),
        ):
            i = self.ix_api.get_ips()
            self.assertEqual("1234", i[0].id)
            self.assertEqual("5678", i[1].id)
            self.assertIsInstance(i[0].network, ipaddress.IPv6Network)
            self.assertIsInstance(i[1].network, ipaddress.IPv4Network)
            self.assertIsInstance(i[0].address, ipaddress.IPv6Address)
            self.assertIsInstance(i[1].address, ipaddress.IPv4Address)
    @patch(
        "requests.post",
        return_value=MockedResponse(
            content={"access_token": "1234", "refresh_token": "1234"}
        ),
    )
    def test_get_macs(self, *_):
        with patch(
            "requests.get",
            return_value=MockedResponse(
                fixture="extras/tests/fixtures/ix_api/macs.json"
            ),
        ):
            i = self.ix_api.get_macs()
            self.assertEqual("1234", i[0].id)
            self.assertEqual("AA:BB:CC:DD:EE:FF", i[0].address)
    @patch(
        "requests.post",
        return_value=MockedResponse(
            content={"access_token": "1234", "refresh_token": "1234"}
        ),
    )
    def test_get_network_features(self, *_):
        with patch(
            "requests.get",
            return_value=MockedResponse(
                fixture="extras/tests/fixtures/ix_api/network_features.json"
            ),
        ):
            i = self.ix_api.get_network_features()
            self.assertEqual("1234", i[0].id)
            self.assertEqual(64500, i[0].asn)
            self.assertTrue(i[0].required)
    @patch(
        "requests.post",
        return_value=MockedResponse(
            content={"access_token": "1234", "refresh_token": "1234"}
        ),
    )
    def test_get_network_service_configs(self, *_):
        with patch(
            "requests.get",
            return_value=MockedResponse(
                fixture="extras/tests/fixtures/ix_api/network_service_configs.json"
            ),
        ):
            i = self.ix_api.get_network_service_configs()
            self.assertEqual("1234", i[0].id)
            self.assertEqual("production", i[0].state)
    @patch(
        "requests.post",
        return_value=MockedResponse(
            content={"access_token": "1234", "refresh_token": "1234"}
        ),
    )
    def test_get_network_services(self, *_):
        with patch(
            "requests.get",
            return_value=MockedResponse(
                fixture="extras/tests/fixtures/ix_api/network_services.json"
            ),
        ):
            i = self.ix_api.get_network_services()
            self.assertEqual("1234", i[0].id)
            self.assertEqual(1234, i[0].peeringdb_ixid)
    @patch(
        "requests.post",
        return_value=MockedResponse(
            content={"access_token": "1234", "refresh_token": "1234"}
        ),
    )
    def test_get_products(self, *_):
        with patch(
            "requests.get",
            return_value=MockedResponse(
                fixture="extras/tests/fixtures/ix_api/products.json"
            ),
        ):
            self.assertEqual("1234", self.ix_api.get_products()[0]["id"])
| 32.672897 | 88 | 0.536756 | 706 | 6,992 | 5.137394 | 0.155807 | 0.09512 | 0.144748 | 0.123518 | 0.637717 | 0.607389 | 0.57265 | 0.535704 | 0.46788 | 0.451889 | 0 | 0.040747 | 0.326087 | 6,992 | 213 | 89 | 32.826291 | 0.72899 | 0 | 0 | 0.532995 | 0 | 0 | 0.181207 | 0.045195 | 0 | 0 | 0 | 0 | 0.152284 | 1 | 0.060914 | false | 0 | 0.025381 | 0 | 0.091371 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc0bec5fde0074af6c926a28a3b3110e130f2c71 | 3,053 | py | Python | CarCounting/utility/compute_hog.py | PeterJWei/CitywideFootprinting | 98064e6119ceab26079c0b11629f3d428f4e745f | [
"MIT"
] | 1 | 2019-01-25T16:01:57.000Z | 2019-01-25T16:01:57.000Z | CarCounting/utility/compute_hog.py | PeterJWei/CitywideFootprinting | 98064e6119ceab26079c0b11629f3d428f4e745f | [
"MIT"
] | 1 | 2018-10-31T18:33:07.000Z | 2018-10-31T18:38:57.000Z | CarCounting/utility/compute_hog.py | PeterJWei/CitywideFootprinting | 98064e6119ceab26079c0b11629f3d428f4e745f | [
"MIT"
] | 1 | 2018-12-24T23:35:15.000Z | 2018-12-24T23:35:15.000Z | import os, sys
import matplotlib.pyplot as plt
import matplotlib.image as iread
import tensorflow as tf
from PIL import Image
import numpy as np
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
cwd = os.getcwd()
# sliding window for calculating histograms;
# incr is the window step (stride) in pixels
cell = [8, 8]
incr = [8, 8]
bin_num = 8
im_size = [32, 32]
# image path must be wrt current working directory
def create_array(image_path):
    image = Image.open(os.path.join(cwd, image_path)).convert('L')
    image_array = np.asarray(image, dtype=float)
    return image_array
# uses a [-1 0 1] kernel
def create_grad_array(image_array):
    image_array = Image.fromarray(image_array)
    if not image_array.size == im_size:
        image_array = image_array.resize(im_size, resample=Image.BICUBIC)
    image_array = np.asarray(image_array, dtype=float)
    # gamma correction
    image_array = (image_array)**2.5
    # local contrast normalisation
    image_array = (image_array - np.mean(image_array)) / np.std(image_array)
    max_h = 32
    max_w = 32
    grad = np.zeros([max_h, max_w])
    mag = np.zeros([max_h, max_w])
    for h, row in enumerate(image_array):
        for w, val in enumerate(row):
            if h-1 >= 0 and w-1 >= 0 and h+1 < max_h and w+1 < max_w:
                dy = image_array[h+1][w] - image_array[h-1][w]
                dx = row[w+1] - row[w-1] + 0.0001
                grad[h][w] = np.arctan(dy/dx) * (180/np.pi)
                if grad[h][w] < 0:
                    grad[h][w] += 180
                mag[h][w] = np.sqrt(dy*dy + dx*dx)
    return grad, mag
def calculate_histogram(array, weights):
    bins_range = (0, 180)
    bins = bin_num
    hist, _ = np.histogram(array, bins=bins, range=bins_range, weights=weights)
    return hist
def create_hog_features(grad_array, mag_array):
    max_h = int(((grad_array.shape[0] - cell[0]) / incr[0]) + 1)
    max_w = int(((grad_array.shape[1] - cell[1]) / incr[1]) + 1)
    cell_array = []
    w = 0
    h = 0
    i = 0
    j = 0
    # Creating 8x8 cells
    while i < max_h:
        w = 0
        j = 0
        while j < max_w:
            for_hist = grad_array[h:h+cell[0], w:w+cell[1]]
            for_wght = mag_array[h:h+cell[0], w:w+cell[1]]
            val = calculate_histogram(for_hist, for_wght)
            cell_array.append(val)
            j += 1
            w += incr[1]
        i += 1
        h += incr[0]
    cell_array = np.reshape(cell_array, (max_h, max_w, bin_num))
    # normalising blocks of cells
    block = [2, 2]
    # here increment is 1
    max_h = int((max_h - block[0]) + 1)
    max_w = int((max_w - block[1]) + 1)
    block_list = []
    w = 0
    h = 0
    i = 0
    j = 0
    while i < max_h:
        w = 0
        j = 0
        while j < max_w:
            for_norm = cell_array[h:h+block[0], w:w+block[1]]
            mag = np.linalg.norm(for_norm)
            arr_list = (for_norm/mag).flatten().tolist()
            block_list += arr_list
            j += 1
            w += 1
        i += 1
        h += 1
    # returns a vector (list) of 288 elements
    return block_list
def apply_hog(image_array):
    gradient, magnitude = create_grad_array(image_array)
    hog_features = create_hog_features(gradient, magnitude)
    hog_features = np.asarray(hog_features, dtype=float)
    hog_features = np.expand_dims(hog_features, axis=0)
    return hog_features
def hog_from_path(image_path):
    image_array = create_array(image_path)
    final_array = apply_hog(image_array)
    return final_array
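# Minimal smoke test on synthetic data (my addition for illustration; it still
# requires the heavyweight imports above to be installed). For a 32x32 input:
# 4x4 cells of 8 bins -> 3x3 blocks of 2x2 cells -> 3*3*2*2*8 = 288 features.
if __name__ == '__main__':
    rng = np.random.default_rng(0)
    fake_gray = rng.uniform(0, 255, size=(32, 32))
    print(apply_hog(fake_gray).shape)  # expected: (1, 288)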
| 21.807143 | 72 | 0.688831 | 559 | 3,053 | 3.568873 | 0.245081 | 0.115288 | 0.045113 | 0.0401 | 0.140351 | 0.069173 | 0.054135 | 0.054135 | 0.046115 | 0.027068 | 0 | 0.037125 | 0.170652 | 3,053 | 139 | 73 | 21.964029 | 0.75079 | 0.097936 | 0 | 0.212766 | 0 | 0 | 0.00802 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.06383 | false | 0 | 0.06383 | 0 | 0.191489 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc0c26f6169091b18ad7fcc17d59a093093a9bf6 | 1,861 | py | Python | htsworkflow/util/mount.py | detrout/htsworkflow | 99d3300e2533d79428ad49aaf10b9429b175da2d | [
"BSD-3-Clause"
] | null | null | null | htsworkflow/util/mount.py | detrout/htsworkflow | 99d3300e2533d79428ad49aaf10b9429b175da2d | [
"BSD-3-Clause"
] | 1 | 2018-02-26T18:30:05.000Z | 2018-02-26T18:30:05.000Z | htsworkflow/util/mount.py | detrout/htsworkflow | 99d3300e2533d79428ad49aaf10b9429b175da2d | [
"BSD-3-Clause"
] | null | null | null | """
Utilities for working with unix-style mounts.
"""
import os
import subprocess
def list_mount_points():
    """
    Return list of current mount points
    Note: unix-like OS specific
    """
    mount_points = []
    likely_locations = ['/sbin/mount', '/bin/mount']
    for mount in likely_locations:
        if os.path.exists(mount):
            p = subprocess.Popen(mount, stdout=subprocess.PIPE)
            p.wait()
            # the pipe yields bytes on Python 3, so compare/decode accordingly
            for l in p.stdout.readlines():
                rec = l.split()
                device = rec[0]
                mount_point = rec[2]
                assert rec[1] == b'on'
                # looking at the output of mount on linux, osx, and
                # sunos, the first 3 elements are always the same
                #   devicename on path
                # everything after that displays the attributes
                # of the mount points in wildly differing formats
                mount_points.append(mount_point.decode())
            return mount_points
    # Raise only after *every* candidate location has been checked; the
    # original raised inside the loop, aborting before trying /bin/mount.
    raise RuntimeError("Couldn't find a mount executable")
def is_mounted(point_to_check):
    """
    Return true if argument exactly matches a current mount point.
    """
    for mount_point in list_mount_points():
        if point_to_check == mount_point:
            return True
    return False
def find_mount_point_for(pathname):
    """
    Find the deepest mount point pathname is located on
    """
    realpath = os.path.realpath(pathname)
    mount_points = list_mount_points()
    prefixes = set()
    for current_mount in mount_points:
        cp = os.path.commonprefix([current_mount, realpath])
        prefixes.add((len(cp), cp))
    prefixes = list(prefixes)
    prefixes.sort()
    if len(prefixes) == 0:
        return None
    else:
        # return longest common prefix
        return prefixes[-1][1]
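# Quick manual check (an illustrative addition; assumes a unix-like system
# with a mount executable in one of the likely locations above):
if __name__ == '__main__':
    print(find_mount_point_for(os.path.expanduser('~')))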
| 28.630769 | 68 | 0.592692 | 225 | 1,861 | 4.782222 | 0.444444 | 0.10223 | 0.041822 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005542 | 0.321333 | 1,861 | 64 | 69 | 29.078125 | 0.846397 | 0.25094 | 0 | 0.081081 | 0 | 0 | 0.041353 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 1 | 0.081081 | false | 0 | 0.054054 | 0 | 0.27027 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc0d36c686880b9ed9d030ea2d24401ae4485453 | 710 | py | Python | vaccine_feed_ingest/runners/nv/immunizenevada_org/fetch.py | jeremyschlatter/vaccine-feed-ingest | 215f6c144fe5220deaccdb5db3e96f28b7077b3f | [
"MIT"
] | 27 | 2021-04-24T02:11:18.000Z | 2021-05-17T00:54:45.000Z | vaccine_feed_ingest/runners/nv/immunizenevada_org/fetch.py | jeremyschlatter/vaccine-feed-ingest | 215f6c144fe5220deaccdb5db3e96f28b7077b3f | [
"MIT"
] | 574 | 2021-04-06T18:09:11.000Z | 2021-08-30T07:55:06.000Z | vaccine_feed_ingest/runners/nv/immunizenevada_org/fetch.py | shashank025/vaccine-feed-ingest | 4ec2035b8111dcf492e414135b5936ae040c7318 | [
"MIT"
] | 47 | 2021-04-23T05:31:14.000Z | 2021-07-01T20:22:46.000Z | #!/usr/bin/env python
import json
import pathlib
import sys
import requests
output_dir = pathlib.Path(sys.argv[1])
output_file = output_dir / "nv.json"
with output_file.open("w") as fout:
    r = requests.post(
        "https://www.immunizenevada.org/views/ajax",
        headers={
            "Referer": "https://www.immunizenevada.org/covid-19-vaccine-locator",
        },
        data={
            "field_type_of_location_value": "All",
            "field_zip_code_value": "",
            "view_name": "vaccine_locator",
            "view_display_id": "block_2",
            "view_path": "/node/2668",
            "_drupal_ajax": 1,
        },
    )
    json.dump(r.json(), fout)
    fout.write("\n")
| 23.666667 | 81 | 0.574648 | 86 | 710 | 4.523256 | 0.651163 | 0.046272 | 0.113111 | 0.128535 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017408 | 0.271831 | 710 | 29 | 82 | 24.482759 | 0.73501 | 0.028169 | 0 | 0 | 0 | 0 | 0.349782 | 0.040639 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.173913 | 0 | 0.173913 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc0f6b5554a58d74d0b6d1df9213fa9aac00fb7d | 2,581 | py | Python | challenge/agoda_cancellation_estimator.py | eviatar-ben/IML.HUJI | 0f78ad1ab86a235a55af729aafcf96f1a8ddddf5 | [
"MIT"
] | null | null | null | challenge/agoda_cancellation_estimator.py | eviatar-ben/IML.HUJI | 0f78ad1ab86a235a55af729aafcf96f1a8ddddf5 | [
"MIT"
] | null | null | null | challenge/agoda_cancellation_estimator.py | eviatar-ben/IML.HUJI | 0f78ad1ab86a235a55af729aafcf96f1a8ddddf5 | [
"MIT"
] | null | null | null | from __future__ import annotations
from typing import NoReturn
from IMLearn.base import BaseEstimator
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import ComplementNB
class AgodaCancellationEstimator(BaseEstimator):
    """
    An estimator for solving the Agoda Cancellation challenge
    """
    def __init__(self):
        """
        Instantiate an estimator for solving the Agoda Cancellation challenge
        Parameters
        ----------
        Attributes
        ----------
        """
        super().__init__()
        self.forest = RandomForestClassifier()
        self.logistic = LogisticRegression(max_iter=100000)
        self.neural = MLPClassifier()
        # self.naive = ComplementNB()
    def _fit(self, X: np.ndarray, y: np.ndarray) -> NoReturn:
        """
        Fit an estimator for given samples
        Parameters
        ----------
        X : ndarray of shape (n_samples, n_features)
            Input data to fit an estimator for
        y : ndarray of shape (n_samples, )
            Responses of input data to fit to
        Notes
        -----
        """
        self.forest.fit(X, y)
        self.logistic.fit(X, y)
        self.neural.fit(X, y)
        # self.naive.fit(X, y)
    def _predict(self, X: np.ndarray) -> np.ndarray:
        """
        Predict responses for given samples using fitted estimator
        Parameters
        ----------
        X : ndarray of shape (n_samples, n_features)
            Input data to predict responses for
        Returns
        -------
        responses : ndarray of shape (n_samples, )
            Predicted responses of given samples
        """
        pred1 = self.forest.predict(X)
        pred2 = self.logistic.predict(X)
        pred3 = self.neural.predict(X)
        # pred3 = self.naive.predict(X)
        result = []
        # 2-of-3 majority vote across the three base classifiers (the extra
        # parentheses only make the operator precedence explicit)
        for (bol, bol1, bol2) in zip(pred1, pred2, pred3):
            result.append((bol and (bol1 or bol2)) or (bol1 and bol2))
        return np.array(result)
    def _loss(self, X: np.ndarray, y: np.ndarray) -> float:
        """
        Evaluate performance under loss function
        Parameters
        ----------
        X : ndarray of shape (n_samples, n_features)
            Test samples
        y : ndarray of shape (n_samples, )
            True labels of test samples
        Returns
        -------
        loss : float
            Performance under loss function
        """
        pass | 27.168421 | 77 | 0.58427 | 281 | 2,581 | 5.266904 | 0.320285 | 0.036486 | 0.056757 | 0.060811 | 0.245946 | 0.231081 | 0.2 | 0.167568 | 0.1 | 0.071622 | 0 | 0.010777 | 0.316931 | 2,581 | 95 | 78 | 27.168421 | 0.828701 | 0.391709 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.035714 | 0.285714 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc0f9854275c953cd6e46a2b60e0efadc9ef49c8 | 1,885 | py | Python | Others/Microsoft. Binary Operations.py | xli1110/LC | 3c18b8809c5a21a62903060eef659654e0595036 | [
"MIT"
] | 2 | 2021-04-02T11:57:46.000Z | 2021-04-02T11:57:47.000Z | Others/Microsoft. Binary Operations.py | xli1110/LC | 3c18b8809c5a21a62903060eef659654e0595036 | [
"MIT"
] | null | null | null | Others/Microsoft. Binary Operations.py | xli1110/LC | 3c18b8809c5a21a62903060eef659654e0595036 | [
"MIT"
] | null | null | null | class Problem:
    """
    Given a string s representing a non-negative number num in binary form.
    While num is not equal to 0, we have two operations as below.
    Operation 1: If num is odd, we subtract 1 from it. 1101 -> 1100
    Operation 2: If num is even, we divide it by 2.   1100 -> 110
    The string s may contain leading zeroes.
    Calculate the number of operations needed to transfer num to 0.
    Naive simulation - (OA Result: Time Exceeds Limitation)
    Time: O(N)
    Space: O(1)
    """
    def find_start(self, s):
        """
        Remove leading zeroes.
        """
        start = 0
        while start < len(s):
            ch = s[start]
            if ch == "1":
                break
            elif ch == "0":
                start += 1
            else:
                raise Exception("Invalid Character {0}".format(ch))
        return start
    def string_num_transform(self, s):
        """
        Transform a string into a number.
        Built-In Function: num = int(s, 2)
        """
        power = 0
        num = 0
        start = self.find_start(s)
        end = len(s) - 1
        while end >= start:
            ch = s[end]
            if ch == "1" or ch == "0":
                num += int(ch) * (2 ** power)
                power += 1
                end -= 1
            else:
                raise Exception("Invalid Character {0}".format(ch))
        return num
    def calculate_num_operations(self, s):
        if not s:
            raise Exception("Empty String")
        num = self.string_num_transform(s)
        num_operation = 0
        while num != 0:
            if num & 1 == 1:
                num -= 1
            else:
                num >>= 1
            num_operation += 1
        return num_operation
if __name__ == "__main__":
    p = Problem()
    s = "0100011"
    print(p.calculate_num_operations(s))
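    # Closed-form alternative (my addition, not part of the original
    # solution): every division shifts right once, so a non-zero input needs
    # (len(bits) - 1) divisions, plus one subtraction per set bit, i.e.
    # ops = (len(bits) - 1) + bits.count("1"), working directly on the string.
    bits = s.lstrip("0")
    print((len(bits) - 1 + bits.count("1")) if bits else 0)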
| 25.133333 | 79 | 0.498674 | 236 | 1,885 | 3.894068 | 0.385593 | 0.016322 | 0.015234 | 0.041349 | 0.108814 | 0.108814 | 0.108814 | 0.108814 | 0.108814 | 0.108814 | 0 | 0.045455 | 0.404775 | 1,885 | 74 | 80 | 25.472973 | 0.773619 | 0.28382 | 0 | 0.119048 | 0 | 0 | 0.058494 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0 | 0 | 0.166667 | 0.02381 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc10f713958c7919043bf76c48e20bfa110fe35b | 2,789 | py | Python | src/gp_server/app/greenery_exposures.py | hellej/hope-green-path-server | 46742c413fbe4c3734edd12ec867bdf7e2b29f05 | [
"MIT"
] | 5 | 2020-05-24T12:18:54.000Z | 2021-04-25T19:07:16.000Z | src/gp_server/app/greenery_exposures.py | hellej/hope-green-path-server | 46742c413fbe4c3734edd12ec867bdf7e2b29f05 | [
"MIT"
] | 43 | 2019-09-22T18:45:35.000Z | 2021-07-10T08:51:33.000Z | src/gp_server/app/greenery_exposures.py | hellej/hope-green-path-server | 46742c413fbe4c3734edd12ec867bdf7e2b29f05 | [
"MIT"
] | 7 | 2020-08-04T06:50:14.000Z | 2021-01-17T11:36:13.000Z | from typing import Dict, List, Tuple
from math import ceil
from collections import defaultdict
def get_gvi_adjusted_cost(
    length: float,
    gvi: float,
    bike_time_cost: float = None,
    sensitivity: float = 1.0
) -> float:
    """Calculates GVI adjusted edge cost for GVI optimized routing.
    To find high GVI paths, we have to assign lower costs to edges with high GVI and
    higher costs to edges with low GVI. Negative costs cannot be used (with Dijkstra's),
    so a temporarily inverted concept (of GVI), a "greyness index", is needed.
    A higher sensitivity coefficient will give paths of higher GVI (i.e. lower "greyness").
    Args:
        length (float): Length of the edge.
        gvi (float): GVI of the edge (0-1).
        bike_time_cost (float): Biking time cost (not measured in time).
        sensitivity (float): Sensitivity coefficient (optional, default = 1).
    The function employs the following four assumptions:
        1) "greyness index" = 1 - gvi
        2) "greyness cost" = (1 - gvi) * base cost
        3) base cost = either length or bike_time_cost (if the latter is given)
        4) GVI adjusted cost = base cost + greyness cost * sensitivity
    """
    base_cost = bike_time_cost if bike_time_cost else length
    return round(base_cost + (1 - gvi) * base_cost * sensitivity, 2)
def get_mean_gvi(gvi_exps: List[Tuple[float, float]]) -> float:
    """Returns the length-weighted mean GVI from a list of (GVI, length) tuples.
    """
    length = sum([length for _, length in gvi_exps])
    sum_gvi = sum([gvi * length for gvi, length in gvi_exps])
    return round(sum_gvi/length, 2)
def get_gvi_class(gvi: float) -> int:
    """Classifies a GVI value into one of ten classes from 1 to 10.
    The returned number represents the upper boundary of a 0.1 wide GVI interval to which
    the GVI value belongs. For example, the GVI class (number) 8 is returned for GVI value 0.73.
    """
    if not isinstance(gvi, float) or gvi > 1 or gvi < 0:
        raise ValueError(f'GVI value is invalid: {gvi}')
    # max(..., 1) maps the gvi == 0 edge case into class 1 (ceil alone would
    # otherwise produce an eleventh class, 0)
    return max(ceil(gvi * 10), 1)
def aggregate_gvi_class_exps(gvi_exps: List[Tuple[float, float]]) -> Dict[int, float]:
    """Aggregates GVI exposures into ten 0.1 wide GVI ranges and returns a new dictionary
    where the keys are the GVI class numbers.
    """
    gvi_class_exps = defaultdict(float)
    for gvi, exp in gvi_exps:
        gvi_class_exps[get_gvi_class(gvi)] += exp
    return {
        gvi_class: round(exp, 3)
        for gvi_class, exp
        in gvi_class_exps.items()
    }
def get_gvi_class_pcts(gvi_class_exps: Dict[int, float]) -> Dict[int, float]:
    length = sum(gvi_class_exps.values())
    return {
        gvi_class: round(exp * 100 / length, 3)
        for gvi_class, exp
        in gvi_class_exps.items()
    }
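# Worked example (my addition for illustration; the exposures are made up):
if __name__ == '__main__':
    exps = [(0.2, 100.0), (0.8, 300.0)]
    print(get_mean_gvi(exps))                 # 0.65, the length-weighted mean
    classes = aggregate_gvi_class_exps(exps)  # {2: 100.0, 8: 300.0}
    print(get_gvi_class_pcts(classes))        # {2: 25.0, 8: 75.0}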
| 34.8625 | 96 | 0.670491 | 427 | 2,789 | 4.257611 | 0.320843 | 0.066007 | 0.046205 | 0.018702 | 0.090209 | 0.066007 | 0.037404 | 0.037404 | 0.037404 | 0.037404 | 0 | 0.016053 | 0.240588 | 2,789 | 79 | 97 | 35.303797 | 0.842304 | 0.473288 | 0 | 0.171429 | 0 | 0 | 0.019853 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.085714 | 0 | 0.371429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc119f98efabc09219a0565fdd7d2172258998b3 | 3,229 | py | Python | vim.d/vimfiles/bundle/taghighlight/plugin/TagHighlight/module/worker.py | lougxing/gbox | f28402d97cacd22b5e564003af72c4022908cb4d | [
"MIT"
] | null | null | null | vim.d/vimfiles/bundle/taghighlight/plugin/TagHighlight/module/worker.py | lougxing/gbox | f28402d97cacd22b5e564003af72c4022908cb4d | [
"MIT"
] | 13 | 2020-01-28T22:30:33.000Z | 2022-03-02T14:57:16.000Z | vim.d/vimfiles/bundle/taghighlight/plugin/TagHighlight/module/worker.py | lougxing/gbox | f28402d97cacd22b5e564003af72c4022908cb4d | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# Tag Highlighter:
#   Author:  A. S. Budden <abudden _at_ gmail _dot_ com>
#   Copyright: Copyright (C) 2009-2013 A. S. Budden
#   Permission is hereby granted to use and distribute this code,
#   with or without modifications, provided that this copyright
#   notice is copied with it. Like anything else that's free,
#   the TagHighlight plugin is provided *as is* and comes with no
#   warranty of any kind, either expressed or implied. By using
#   this plugin, you agree that in no event will the copyright
#   holder be liable for any damages resulting from the use
#   of this software.
# ---------------------------------------------------------------------
from __future__ import print_function
import sys
import os
def RunWithOptions(options, manually_set=[]):
    start_directory = os.getcwd()
    from .config import config, SetInitialOptions, LoadLanguages
    from .debug import Debug
    SetInitialOptions(options, manually_set)
    Debug("Running types highlighter generator", "Information")
    Debug("Release:" + config['Release'], "Information")
    Debug("Version:" + repr(config['Version']), "Information")
    Debug("Options:" + repr(options), "Information")
    Debug("Manually Set:" + repr(manually_set), "Information")
    tag_file_absolute = os.path.join(config['CtagsFileLocation'], config['TagFileName'])
    if config['DoNotGenerateTags'] and not os.path.exists(tag_file_absolute):
        Debug("Cannot use existing tagfile as it doesn't exist (checking for " + tag_file_absolute + ")", "Error")
        return
    LoadLanguages()
    if config['PrintConfig']:
        import pprint
        pprint.pprint(config)
        return
    if config['PrintPyVersion']:
        print(sys.version)
        return
    from .ctags_interface import GenerateTags, ParseTags
    from .generation import CreateTypesFile
    cscope_check_c = False
    if config['EnableCscope']:
        cscope_file = os.path.join(config['CscopeFileLocation'], config['CscopeFileName'])
        config['CscopeFileFull'] = cscope_file
        if os.path.exists(cscope_file) or not config['CscopeOnlyIfCCode']:
            Debug("Running cscope", "Information")
            from .cscope_interface import StartCscopeDBGeneration, CompleteCscopeDBGeneration
            StartCscopeDBGeneration(config)
        else:
            Debug("Deferring cscope until C code detected", "Information")
            cscope_check_c = True
    if not config['DoNotGenerateTags']:
        Debug("Generating tag file", "Information")
        GenerateTags(config)
    tag_db, file_tag_db = ParseTags(config)
    for language in config['LanguageList']:
        if language in tag_db or language in file_tag_db:
            CreateTypesFile(config, language, tag_db[language], file_tag_db[language])
    if config['EnableCscope']:
        if cscope_check_c and 'c' in tag_db:
            Debug("Running cscope as C code detected", "Information")
            from .cscope_interface import StartCscopeDBGeneration, CompleteCscopeDBGeneration
            StartCscopeDBGeneration(config)
        CompleteCscopeDBGeneration()
    os.chdir(start_directory)
| 40.3625 | 114 | 0.667389 | 355 | 3,229 | 5.952113 | 0.4 | 0.016564 | 0.021297 | 0.015144 | 0.107903 | 0.107903 | 0.107903 | 0.107903 | 0.107903 | 0 | 0 | 0.00319 | 0.223289 | 3,229 | 79 | 115 | 40.873418 | 0.839314 | 0.227005 | 0 | 0.173077 | 0 | 0 | 0.21909 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019231 | false | 0 | 0.192308 | 0 | 0.269231 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc11b53043205da87c5fbe0d4fabb0cf5e70bed5 | 6,709 | py | Python | top40Data.py | tgadf/charts | b2c78ec8467b8837c1d773dd55a4b1cfeecb564a | [
"MIT"
] | null | null | null | top40Data.py | tgadf/charts | b2c78ec8467b8837c1d773dd55a4b1cfeecb564a | [
"MIT"
] | null | null | null | top40Data.py | tgadf/charts | b2c78ec8467b8837c1d773dd55a4b1cfeecb564a | [
"MIT"
] | null | null | null | from timeUtils import getDateTime, isDate
from fileUtils import getBaseFilename
from listUtils import isIn
from collections import Counter
from os.path import join
from ioUtils import getFile, saveFile
from searchUtils import findExt
from artistIgnores import getArtistIgnores
from top40Charts import top40Charts
from top40 import top40chart
class top40Data:
    def __init__(self, minYear=None, maxYear=None, debug=False):
        self.debug = debug  # honour the constructor argument (was hard-coded to False)
        self.basedir = "/Volumes/Piggy/Charts/"
        self.basename = "Top40"
        self.tc = top40Charts()
        self.charts = []
        self.minYear = minYear
        self.maxYear = maxYear
        self.dbRenames = None
        self.multirenameDB = None
        self.fullChartData = {}
        self.artistAlbumData = {}
    def getFullChartDataFilename(self):
        ifile = "current{0}FullChartArtistAlbumData.p".format(self.basename)
        return ifile
    def getFullChartData(self):
        return getFile(self.getFullChartDataFilename())
    def saveFullChartData(self):
        print("Saving {0} Full Artist Data".format(len(self.fullChartData)))
        saveFile(idata=self.fullChartData, ifile=self.getFullChartDataFilename(), debug=True)
    def getArtistAlbumDataFilename(self):
        ifile = "current{0}ArtistAlbumData.p".format(self.basename)
        return ifile
    def getArtistAlbumData(self):
        return getFile(self.getArtistAlbumDataFilename())
    def saveArtistAlbumData(self):
        print("Saving {0} Artist Album Data to {1}".format(len(self.artistAlbumData), self.getArtistAlbumDataFilename()))
        saveFile(idata=self.artistAlbumData, ifile=self.getArtistAlbumDataFilename(), debug=True)
    def getArtists(self):
        return list(self.artistAlbumData.keys())
    def setRenames(self, artistRenames):
        self.artistRenames = artistRenames
    def setDBRenames(self, dbRenames):
        self.dbRenames = dbRenames
    def setMultiDBRenames(self, multirenameDB):
        self.multirenameDB = multirenameDB
    def setArtistAlbumData(self):
        self.artistAlbumData = {artist: list(artistData["Songs"].keys()) + list(artistData["Albums"].keys()) for artist, artistData in self.fullChartData.items()}
    def setChartUsage(self, name=None, rank=None):
        if rank is not None:
            if isinstance(rank, list):
                for item in rank:
                    self.charts += self.tc.getChartsByRank(item)
            elif isinstance(rank, int):
                self.charts += self.tc.getChartsByRank(rank)
        elif name is not None:
            self.charts += self.tc.getCharts(name)
        else:
            self.charts = self.tc.getCharts(None)
        if name is None:
            name = "None"
        print(" Using Charts ({0}): {1}".format(name, self.charts))
    def findFiles(self):
        savedir = join(self.basedir, "data", "top40")
        self.files = findExt(savedir, ext='.p')
        print("Found {0} files.".format(len(self.files)))
    def setFullChartData(self):
        dbRenameStats = 0
        multiRenameStats = 0
        fullChartData = {}
        self.findFiles()
        if len(self.files) == 0:
            raise ValueError("There are no files. Something is wrong...")
        self.files = {getBaseFilename(x).replace("/", " "): x for x in self.files}
        for chartName, ifile in self.files.items():
            if chartName not in self.charts:
                continue
            print("==> {0: <40}".format(chartName), end="\t")
            #t40chart = top40chart(chartID, chartName, chartURL)
            chartResults = getFile(ifile)
            for date, values in chartResults.items():
                if self.minYear is not None:
                    if getDateTime(date).year < int(self.minYear):
                        continue
                if self.maxYear is not None:
                    if getDateTime(date).year > int(self.maxYear):
                        continue
                for i, item in enumerate(values):
                    artistName = item["Artist"]
                    ## Test for rename
                    renamedArtistName = artistName
                    if self.dbRenames is not None:
                        tmpName = self.dbRenames.renamed(renamedArtistName)
                        if tmpName != renamedArtistName:
                            dbRenameStats += 1
                            renamedArtistName = tmpName
                    ## Test for multi rename
                    #renamedArtistName = artistName
                    if self.multirenameDB is not None:
                        tmpName = self.multirenameDB.renamed(renamedArtistName)
                        if tmpName != renamedArtistName:
                            multiRenameStats += 1
                            renamedArtistName = tmpName
                    artist = renamedArtistName
                    renamedArtistName = artist
                    renamedArtistName = renamedArtistName.replace("\r", "").strip()
                    if renamedArtistName != artist:
                        dbRenameStats += 1
                        artist = renamedArtistName
                    ignoreStatus = getArtistIgnores(artist)
                    if ignoreStatus is False:
                        continue
                    album = item["Album"]
                    if album in ["Soundtrack"]:
                        continue
                    if fullChartData.get(artist) is None:
                        fullChartData[artist] = {"Songs": {}, "Albums": {}}
                    if chartName.endswith("Albums"):
                        key = "Albums"
                    else:
                        key = "Songs"
                    if fullChartData[artist][key].get(album) is None:
                        fullChartData[artist][key][album] = {}
                    if fullChartData[artist][key][album].get(chartName) is None:
                        fullChartData[artist][key][album][chartName] = {}
                    fullChartData[artist][key][album][chartName][date] = i
            print(len(fullChartData))
        self.fullChartData = fullChartData
        print("Renamed {0} single artists".format(dbRenameStats))
        print("Renamed {0} multi artists".format(multiRenameStats)) | 37.272222 | 161 | 0.538679 | 574 | 6,709 | 6.289199 | 0.249129 | 0.019391 | 0.014958 | 0.017729 | 0.160942 | 0.057064 | 0.038781 | 0.020499 | 0.020499 | 0 | 0 | 0.009261 | 0.372336 | 6,709 | 180 | 162 | 37.272222 | 0.848017 | 0.017737 | 0 | 0.133858 | 0 | 0 | 0.057403 | 0.012908 | 0 | 0 | 0 | 0 | 0 | 1 | 0.11811 | false | 0 | 0.07874 | 0.023622 | 0.244094 | 0.062992 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc11cd217bfe4a5a73e4df42b2b7941851dc6401 | 1,593 | py | Python | examples/example03.py | robertsj/pypgapack | c24b4a58f347ec02c20929aaaec25010fa603eb8 | [
"MIT"
] | 4 | 2015-12-16T09:44:32.000Z | 2021-05-23T23:52:33.000Z | examples/example03.py | robertsj/pypgapack | c24b4a58f347ec02c20929aaaec25010fa603eb8 | [
"MIT"
] | null | null | null | examples/example03.py | robertsj/pypgapack | c24b4a58f347ec02c20929aaaec25010fa603eb8 | [
"MIT"
] | 1 | 2022-01-01T17:44:21.000Z | 2022-01-01T17:44:21.000Z | """
pypgapack/examples/example03.py -- maxreal
"""
from pypgapack import PGA
import numpy as np
import sys
class MyPGA(PGA):
    """
    Derive our own class from PGA.
    """
    def maxreal(self, p, pop):
        """
        The maximum real sum problem.
        The alleles are doubles, and we solve
        .. math::
            \max f(x) &= \sum^N_{n=1} x_n \\
            \\text{s.t.} \quad & |x_i| \leq 100
        That maximum is :math:`f_{\\textrm{max}}(x) = 100n` obtained for
        :math:`x_i = 100, i = 1\ldots N`.
        """
        c = self.GetRealChromosome(p, pop)  # Get pth string as Numpy array
        val = np.sum(c)                     # and sum it up.
        del c                               # Delete "view" to internals.
        return val                          # Already a float.
n = 10                                      # String length.
# (Command line arguments, doubles, string length, and maximize it)
opt = MyPGA(sys.argv, PGA.DATATYPE_REAL, n, PGA.MAXIMIZE)
opt.SetRandomSeed(1)                        # Set random seed for verification.
u_b = 100*np.ones(n)                        # Define upper bound.
l_b = -100*np.ones(n)                       # Define lower bound.
# Set the bounds. Default floats are handled without issue.
opt.SetRealInitRange(l_b, u_b)
# Force mutations to keep values in the initial range, a useful
# feature for bound constraints.
opt.SetMutationType(PGA.MUTATION_RANGE)
opt.SetMaxGAIterValue(50)                   # 50 generations for short output.
opt.SetUp()                                 # Internal allocations, etc.
opt.Run(opt.maxreal)                        # Run with our objective.
opt.Destroy()                               # Clean up PGAPack internals.
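# With these bounds the best objective should approach the known optimum
# f_max = 100 * n = 1000 as generations increase (see the docstring above);
# 50 generations is deliberately short, so expect only a rough approximation.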
| 37.046512 | 75 | 0.588198 | 216 | 1,593 | 4.287037 | 0.569444 | 0.008639 | 0.012959 | 0.021598 | 0.036717 | 0.036717 | 0 | 0 | 0 | 0 | 0 | 0.023466 | 0.304457 | 1,593 | 42 | 76 | 37.928571 | 0.812274 | 0.526679 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.15 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc12186b21707e8f042d6fc56db8405fbaeeb313 | 1,852 | py | Python | main.py | ffaristocrat/coding | 5017ddba4b1b1e180f012bc36608e1e6b30b0447 | [
"MIT"
] | null | null | null | main.py | ffaristocrat/coding | 5017ddba4b1b1e180f012bc36608e1e6b30b0447 | [
"MIT"
] | null | null | null | main.py | ffaristocrat/coding | 5017ddba4b1b1e180f012bc36608e1e6b30b0447 | [
"MIT"
] | null | null | null | import random
from typing import List, Tuple
import json
from coding.game import Game
players = [
    'Micheál',
    'Melissa',
]
def get_input(choices: List[Tuple]):
    all_tiles = set()
    all_commands = set()
    all_lines = set()
    for command, lines, tiles in choices:
        all_lines |= set(lines)
        all_commands.add(command)
        all_tiles |= set(tiles)
        # tile_list = ', '.join([f'{t.var} ({t.id})' for t in tiles])
        #
        # print(f"{command} ({command.id})"
        #       f" lines: {str(lines)[1:-1]}\n"
        #       f" tiles: {tile_list}")
    print(str(all_lines)[1:-1])
    print(', '.join([f"{c[0]} ({c[0].id})" for c in choices]))
    print(', '.join([f'{t.var} ({t.id})' for t in all_tiles]))
    while True:
        line = int(input('line: '))
        cmd_id = int(input('command: '))
        tile_id = int(input('tile: '))
        try:
            command = [c for c in all_commands if c.id == cmd_id][0]
            tile = [t for t in all_tiles if t.id == tile_id][0]
            break
        except (IndexError, ValueError):
            pass
    return line, command, tile
def random_choice(choices: List[Tuple]):
    choice = random.choice(choices)
    command = choice[0]
    line = random.choice(choice[1])
    tile = random.choice(choice[2])
    # print(f'Choice: {line:3} {command} --> {tile}')
    return line, command, tile
def main():
    commands = json.load(open('commands.json'))
    game = Game(commands)
    generator = game.run_game(players)
    choices = generator.send(None)
    while choices:
        # for line in game.list_code():
        #     print(line)
        response = random_choice(choices)
        try:
            choices = generator.send(response)
        except StopIteration:
            break
if __name__ == "__main__":
    main()
| 22.864198 | 69 | 0.553996 | 240 | 1,852 | 4.154167 | 0.266667 | 0.060181 | 0.018054 | 0.018054 | 0.106319 | 0.036108 | 0.036108 | 0.036108 | 0.036108 | 0 | 0 | 0.009167 | 0.293197 | 1,852 | 80 | 70 | 23.15 | 0.752483 | 0.141469 | 0 | 0.122449 | 0 | 0 | 0.059456 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061224 | false | 0.020408 | 0.081633 | 0 | 0.183673 | 0.061224 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc1359f10cd98a03e801b7e7afbd18314848ba95 | 2,765 | py | Python | notebooks/De-mosaicing.py | mchapman87501/go_mars_2020_img_utils | 0ec6d4727f562b1dc60e0ea03d81eaf82285b66f | [
"MIT"
] | null | null | null | notebooks/De-mosaicing.py | mchapman87501/go_mars_2020_img_utils | 0ec6d4727f562b1dc60e0ea03d81eaf82285b66f | [
"MIT"
] | null | null | null | notebooks/De-mosaicing.py | mchapman87501/go_mars_2020_img_utils | 0ec6d4727f562b1dc60e0ea03d81eaf82285b66f | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# coding: utf-8
# In[1]:
# Trying to gin up a human-readable, simple-minded (bilinear interpolation) algorithm for de-mosaicing a
# sensor readout that has an RGGB color filter array (CFA).
# Red filters lie over cells whose x coordinate is even and whose y coordinate is even: even, even
# Blue filters: odd, odd
# Green filters: even, odd *and* odd, even.
# In[2]:
import numpy as np
from PIL import Image
# In[13]:
# Image dimensions
width = 255
height = 255
# Dummy image data is grayscale - single component, 0..255.
# Build it up as a gradient.
# Give it a mosaiced red tinge by boosting pixels that should be
# under a red filter in the Bayer image pattern.
dummy_image_data = []
for y in range(height):
    row = []
    for x in range(width):
        red_boost = 100 if (x % 2, y % 2) == (0, 0) else 0
        row.append(min(255, x + red_boost))
    dummy_image_data.append(row)
gray_image_data = np.array(dummy_image_data, dtype=np.uint8)
print("Dummy image data:", gray_image_data)
# PIL seems to be ignoring my mode, dangit.
gray_img = Image.fromarray(gray_image_data, mode="L")
gray_img.show()
print("Converted back to numpy array:")
print(np.asarray(gray_img))
# In[14]:
# Offset of each color component within a pixel:
R = 0
G = 1
B = 2
# filter pattern, addressable as [y][x]
pattern = [
    [R, G],
    [G, B]
]
# Demosaiced image data is RGB - three components.
demosaiced = []
for y in range(height):
    row = [[0, 0, 0] for x in range(width)]
    demosaiced.append(row)
def indices(v, limit):
    result = []
    for offset in [-1, 0, 1]:
        index = v + offset
        if 0 <= index < limit:
            result.append(index)
    return result
def channel(x, y):
    x_pattern = x % 2
    y_pattern = y % 2
    return pattern[y_pattern][x_pattern]
def demosaic(sensor_image, demosaiced, width, height):
    for x_image in range(width):
        x_indices = indices(x_image, width)
        for y_image in range(height):
            y_indices = indices(y_image, height)
            sums = {R: 0, G: 0, B: 0}
            counts = {R: 0, G: 0, B: 0}
            for x in x_indices:
                for y in y_indices:
                    c = channel(x, y)
                    sums[c] += sensor_image[y][x]
                    counts[c] += 1
            for c in [R, G, B]:
                intensity = sums[c] / counts[c] if counts[c] > 0 else 0
                # May as well convert to 8-bit integer.
                pixel_value = min(255, max(0, int(intensity)))
                demosaiced[y_image][x_image][c] = pixel_value
demosaic(dummy_image_data, demosaiced, width, height)
# In[15]:
color_img = Image.fromarray(np.array(demosaiced, dtype=np.uint8), mode="RGB")
color_img.show()
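# In[16]:
# Optional vectorized equivalent (my addition for illustration; it assumes
# SciPy is available, which the notebook does not otherwise require).
# Per channel: convolve the masked pixel values and the mask itself over each
# 3x3 neighborhood, then divide - the same averaging as the nested loops above.
from scipy.signal import convolve2d
sensor = np.asarray(dummy_image_data, dtype=float)
ys, xs = np.mgrid[0:height, 0:width]
cfa = np.array(pattern)[ys % 2, xs % 2]  # per-pixel channel of the CFA
kernel = np.ones((3, 3))
fast = np.zeros((height, width, 3), dtype=np.uint8)
for c in (R, G, B):
    mask = (cfa == c).astype(float)
    sums = convolve2d(sensor * mask, kernel, mode="same")
    counts = convolve2d(mask, kernel, mode="same")
    fast[..., c] = np.clip(sums / np.maximum(counts, 1), 0, 255).astype(np.uint8)
print("Vectorized result matches loop version:",
      np.array_equal(fast, np.array(demosaiced, dtype=np.uint8)))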
| 23.632479 | 104 | 0.615913 | 431 | 2,765 | 3.860789 | 0.329466 | 0.054087 | 0.050481 | 0.013221 | 0.050481 | 0.03125 | 0 | 0 | 0 | 0 | 0 | 0.028656 | 0.267993 | 2,765 | 116 | 105 | 23.836207 | 0.793478 | 0.298011 | 0 | 0.034483 | 0 | 0 | 0.026576 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051724 | false | 0 | 0.034483 | 0 | 0.12069 | 0.051724 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc153c012171cfe05fd800ff853e7d9ebaf83c94 | 4,655 | py | Python | lino_avanti/lib/courses/desktop.py | khchine5/avanti | 5a5f9d1ddfa20ae0eb8fa33cb906daf78d9568b1 | [
"BSD-2-Clause"
] | null | null | null | lino_avanti/lib/courses/desktop.py | khchine5/avanti | 5a5f9d1ddfa20ae0eb8fa33cb906daf78d9568b1 | [
"BSD-2-Clause"
] | null | null | null | lino_avanti/lib/courses/desktop.py | khchine5/avanti | 5a5f9d1ddfa20ae0eb8fa33cb906daf78d9568b1 | [
"BSD-2-Clause"
] | null | null | null | # Copyright 2017 Luc Saffre
# License: BSD (see file COPYING for details)
from lino_xl.lib.courses.desktop import *
from lino.api import _
from lino.core.gfks import gfk2lookup
from lino.utils import join_elems
from etgen.html import E
from lino.modlib.users.mixins import My
from lino_avanti.lib.avanti.roles import ClientsUser
# Courses.required_roles = dd.login_required(Explorer)
# class LinesByProvider(Lines):
#     master_key = 'provider'
AllActivities.column_names = "line:20 start_date:8 teacher user " \
    "weekdays_text:10 times_text:10"
AllEnrolments.column_names = "id request_date start_date end_date \
user course pupil pupil__birth_date pupil__age pupil__country \
pupil__city pupil__gender"
class EnrolmentsByCourse(EnrolmentsByCourse):
    column_names = 'id request_date pupil pupil__gender pupil__nationality:15 ' \
        'needs_childcare needs_school needs_bus needs_evening ' \
        'remark workflow_buttons *'
class PresencesByEnrolment(dd.Table):
    model = 'cal.Guest'
    master = 'courses.Enrolment'
    column_names = "event event__state workflow_buttons remark *"
    display_mode = "summary"
    order_by = ['event__start_date', 'event__start_time']
    @classmethod
    def get_filter_kw(self, ar, **kw):
        Event = rt.models.cal.Event
        enr = ar.master_instance
        if enr is None:
            return None
        for k, v in gfk2lookup(Event.owner, enr.course).items():
            kw['event__' + k] = v
        kw.update(partner=enr.pupil)
        return super(PresencesByEnrolment, self).get_filter_kw(ar, **kw)
    @classmethod
    def get_table_summary(self, obj, ar):
        if ar is None:
            return ''
        sar = self.request_from(ar, master_instance=obj)
        coll = {}
        for obj in sar:
            if obj.state in coll:
                coll[obj.state] += 1
            else:
                coll[obj.state] = 1
        ul = []
        for st in rt.models.cal.GuestStates.get_list_items():
            ul.append(_("{} : {}").format(st, coll.get(st, 0)))
        # elems = join_elems(ul, sep=', ')
        elems = join_elems(ul, sep=E.br)
        return ar.html_text(E.div(*elems))
        # return E.div(class_="htmlText", *elems)
# class CourseDetail(CourseDetail):
#     main = "general cal_tab enrolments"
#     general = dd.Panel("""
#     line teacher start_date end_date start_time end_time
#     room #slot workflow_buttons id:8 user
#     name
#     description
#     """, label=_("General"))
#     cal_tab = dd.Panel("""
#     max_events max_date every_unit every
#     monday tuesday wednesday thursday friday saturday sunday
#     cal.EntriesByController
#     """, label=_("Calendar"))
#     enrolments_top = 'enrolments_until max_places:10 confirmed free_places:10 print_actions:15'
#     enrolments = dd.Panel("""
#     enrolments_top
#     EnrolmentsByCourse
#     """, label=_("Enrolments"))
Enrolments.detail_layout = """
request_date user start_date end_date
course pupil
needs_childcare needs_school needs_bus needs_evening
remark:40 workflow_buttons:40 printed:20
confirmation_details PresencesByEnrolment checkdata.ProblemsByOwner RemindersByEnrolment
"""
class CoursesPlanning(Activities):
    required_roles = dd.login_required(CoursesUser)
    label = _("Course planning")
    column_names = \
        "overview state " \
        "max_places requested confirmed trying free_places " \
        "school_needed childcare_needed bus_needed evening_needed *"
class Reminders(dd.Table):
    required_roles = dd.login_required(ClientsUser)
    model = 'courses.Reminder'
    order_by = ['-date_issued']
class MyReminders(My, Reminders):
    can_create = False
    # pass
class RemindersByEnrolment(Reminders):
    column_names = 'date_issued degree remark workflow_buttons *'
    auto_fit_column_widths = True
    stay_in_grid = True
    master_key = 'enrolment'
    display_mode = 'summary'
    # can_create = True
    insert_layout = dd.InsertLayout("""
    degree
    remark
    text_body
    """, window_size=(50, 13))
    detail_layout = dd.DetailLayout("""
    date_issued degree workflow_buttons
    remark
    enrolment user id printed
    text_body
    """, window_size=(80, 20))
class RemindersByPupil(Reminders):
    column_names = 'date_issued enrolment user remark workflow_buttons *'
    auto_fit_column_widths = True
    # master = pupil_model
    master_key = 'enrolment__pupil'
    # @classmethod
    # def get_filter_kw(self, ar, **kw):
    #     kw.update(enrolment__pupil=ar.master_instance)
    #     return kw
# ---- airsim_gym/car_agent.py (repo: ByronDev121/literature-review, license: MIT) ----
import math
import numpy as np
from os.path import dirname, abspath, join
from numpy.linalg import norm
from configparser import ConfigParser
from gym import spaces
from Masters.utils.image_processing import ImageProcessing
import airsim
from airsim import CarClient, CarControls, ImageRequest, ImageType, Pose, Vector3r
class CarAgent(CarClient):
def __init__(self):
# connect to the AirSim simulator
super().__init__()
super().confirmConnection()
super().enableApiControl(True)
#
config = ConfigParser()
config.read(join(dirname(abspath(__file__)), 'config.ini'))
way_point_regex = config['airsim_settings']['waypoint_regex']
self.image_height = int(config['airsim_settings']['image_height'])
self.image_width = int(config['airsim_settings']['image_width'])
self.image_channels = int(config['airsim_settings']['image_channels'])
self._fetch_way_points(way_point_regex)
self.airsim_image_size = self.image_height * self.image_width * self.image_channels
#
state_height = int(config['car_agent']['state_height'])
state_width = int(config['car_agent']['state_width'])
act_dim = spaces.Discrete(int(config['car_agent']['act_dim']))
consecutive_frames = int(config['car_agent']['consecutive_frames'])
max_steering_angle = float(config['car_agent']['max_steering_angle'])
steering_granularity = int(config['car_agent']['steering_granularity'])
self.action_mode = int(config['car_agent']['action_mode'])
self.fixed_throttle = float(config['car_agent']['fixed_throttle'])
self.random_spawn = float(config['car_agent']['random_spawn'])
self.steering_values = self._set_steering_values(max_steering_angle, steering_granularity)
self.image_processing = ImageProcessing(state_height, state_width, consecutive_frames, act_dim.n,
max_steering_angle)
#
self.previous_position = np.array([0, 0])
self.spawn_position = 0
@staticmethod
def _set_steering_values(max_steering_angle, steering_granularity):
steering_values = np.arange(
-max_steering_angle,
max_steering_angle,
2 * max_steering_angle / (steering_granularity - 1)
).tolist()
steering_values.append(max_steering_angle)
steering_values = [round(num, 3) for num in steering_values]
return steering_values
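    # Example: max_steering_angle=0.5 with steering_granularity=5 yields
    # [-0.5, -0.25, 0.0, 0.25, 0.5] -- evenly spaced, including both extremes.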
def restart(self):
next_position = self._get_spawn_position()
super().reset()
super().enableApiControl(True)
if self.random_spawn:
super().simSetVehiclePose(next_position, True)
def _get_spawn_position(self):
if self.spawn_position == 0:
self.spawn_position = 1
return Pose(Vector3r(0.0, 0.0, 0.0), airsim.to_quaternion(0.0, 0.0, -0.1))
elif self.spawn_position == 1:
self.spawn_position = 2
return Pose(Vector3r(504.6, 4.7, 0.0), airsim.to_quaternion(0.0, 0.0, 0.1))
elif self.spawn_position == 2:
self.spawn_position = 3
return Pose(Vector3r(499.6, 260.4, 0.0), airsim.to_quaternion(0.0, 0.0, 3.57792))
elif self.spawn_position == 3:
self.spawn_position = 0
return Pose(Vector3r(53.3, 231.7, 0.0), airsim.to_quaternion(0.0, 0.0, 2.87979))
def observe(self, is_new):
size = 0
        # Sometimes simGetImages() returns an unexpected response.
# If so, try it again.
while size != self.airsim_image_size:
response = super().simGetImages([ImageRequest(0, ImageType.Scene, False, False)])[0]
img1d_rgb = np.frombuffer(response.image_data_uint8, dtype=np.uint8)
size = img1d_rgb.size
img3d_rgb = img1d_rgb.reshape(self.image_height, self.image_width, self.image_channels)
processed_image = self.image_processing.preprocess(img3d_rgb, is_new)
return processed_image
def move(self, action):
car_controls = self._interpret_action(action)
super().setCarControls(car_controls)
@staticmethod
def _get_angle(point1, point2):
p1 = point1 if point1[0] > point2[0] else point2
p2 = point1 if point1[0] < point2[0] else point2
dX = p2[0] - p1[0]
dY = p2[1] - p1[1]
rads = math.atan2(-dY, dX) # wrong for finding angle/declination?
return math.degrees(rads)
def sim_get_vehicle_state(self):
car_state = super().getCarState()
# distance_sensor_data = self.getDistanceSensorData(lidar_name="", vehicle_name="")
# distance_sensor_data = super().getDistanceSensorData(lidar_name="", vehicle_name="")
speed = car_state.speed
kmh = int(3.6 * speed)
pos = super().simGetVehiclePose().position
car_point = np.array([pos.x_val, pos.y_val])
way_point_one, way_point_two, distance_to_nearest_way_point = self._get_two_closest_way_points(car_point)
# Perpendicular distance to the line connecting 2 closest way points,
# this distance is approximate to distance to center of track
distance_p1_to_p2p3 = lambda p1, p2, p3: abs(np.cross(p2 - p3, p3 - p1)) / norm(p2 - p3)
distance_to_track_center = distance_p1_to_p2p3(car_point, way_point_one, way_point_two)
angle1 = self._get_angle(way_point_one, way_point_two)
angle2 = self._get_angle(car_point, self.previous_position)
ang_diff = 0
if abs(angle2) > abs(angle1) and speed > 1:
ang_diff = abs(angle2) - abs(angle1)
elif speed > 1:
ang_diff = abs(angle1) - abs(angle2)
# print("Angle difference: ", ang_diff)
self.previous_position = car_point
return distance_to_nearest_way_point, distance_to_track_center, ang_diff, kmh
def _fetch_way_points(self, waypoint_regex):
wp_names = super().simListSceneObjects(waypoint_regex)
wp_names.sort()
print(wp_names)
vec2r_to_numpy_array = lambda vec: np.array([vec.x_val, vec.y_val])
self.waypoints = []
for wp in wp_names:
pose = super().simGetObjectPose(wp)
self.waypoints.append(vec2r_to_numpy_array(pose.position))
return
def _interpret_action(self, action):
car_controls = CarControls()
if self.action_mode == 0: # discrete steering only, throttle is fixed
car_controls.throttle = self.fixed_throttle
car_controls.steering = self.steering_values[action]
elif self.action_mode == 1: # average value continuous steering only, throttle is fixed
# filter action
actual_action = self.steering_values[action]
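            # NOTE: self.kf (a smoothing/Kalman filter) is assumed to be set up
            # elsewhere; it is never initialized in __init__, so action_mode 1
            # would raise AttributeError as written.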
self.kf.update(actual_action)
filtered_action = self.kf.predict()
# print('Actual action: {} \nFiltered action: {}'.format(actual_action, filtered_action))
car_controls.throttle = self.fixed_throttle
car_controls.steering = filtered_action
elif self.action_mode == 2: # continuous steering only, throttle is fixed
car_controls.throttle = self.fixed_throttle
car_controls.steering = float(action)
else:
return NotImplemented
return car_controls
def _get_two_closest_way_points(self, car_point):
        min_dist = float('inf')
        second_min_dist = float('inf')
min_i = 0
second_min_i = 0
for i in range(len(self.waypoints) - 1):
dist = math.sqrt(
pow((car_point[0] - self.waypoints[i][0]), 2) +
pow((car_point[1] - self.waypoints[i][1]), 2)
)
if dist < min_dist:
second_min_dist = min_dist
second_min_i = min_i
min_dist = dist
min_i = i
elif dist < second_min_dist:
second_min_dist = dist
second_min_i = i
return self.waypoints[min_i], self.waypoints[second_min_i], min_dist
# ---- toasty/pipeline/astropix.py (repo: imbasimba/toasty, license: MIT) ----
# -*- mode: python; coding: utf-8 -*-
# Copyright 2020 the AAS WorldWide Telescope project
# Licensed under the MIT License.
"""
Support for loading images from an AstroPix feed.
TODO: update metadata tomfoolery to match the new things that I've learned. Cf.
the ``wwtdatatool wtml report`` utility and the djangoplicity implementation.
NOTE: AstroPix seems to have its image parity backwards. Standard JPEGs are
reported with ``wcs_scale[0] < 0``, which is *positive* AKA bottoms-up AKA FITS
parity. If we pass their parameters more-or-less straight through to WWT, we get
the right appearance onscreen.
"""
__all__ = """
AstroPixImageSource
AstroPixCandidateInput
""".split()
import codecs
import json
import numpy as np
import os.path
import requests
import shutil
from urllib.parse import quote as urlquote
from ..image import ImageLoader
from . import CandidateInput, ImageSource, NotActionableError
EXTENSION_REMAPPING = {
"jpeg": "jpg",
}
class AstroPixImageSource(ImageSource):
"""
An ImageSource that obtains its inputs from a query to the AstroPix service.
"""
_json_query_url = None
@classmethod
def get_config_key(cls):
return "astropix"
@classmethod
def deserialize(cls, data):
inst = cls()
inst._json_query_url = data["json_query_url"]
return inst
def query_candidates(self):
with requests.get(self._json_query_url, stream=True) as resp:
feed_data = json.load(resp.raw)
for item in feed_data:
yield AstroPixCandidateInput(item)
def fetch_candidate(self, unique_id, cand_data_stream, cachedir):
with codecs.getreader("utf8")(cand_data_stream) as text_stream:
info = json.load(text_stream)
lower_id = info["image_id"].lower()
global_id = info["publisher_id"] + "_" + lower_id
if info["resource_url"] and len(info["resource_url"]):
source_url = info["resource_url"]
else:
# Original image not findable. Get the best version available from
# AstroPix.
#
# GROSS: saving this code since I won't be testing this super
# thoroughly ... but it looks like now the original is being made
# available consistently?
##size = int(info['image_max_boundry'])
##if size >= 24000:
## best_astropix_size = 24000
##elif size >= 12000:
## best_astropix_size = 12000
##elif size >= 6000:
## best_astropix_size = 6000
##elif size >= 3000:
## best_astropix_size = 3000
##elif size >= 1600:
## best_astropix_size = 1600
##elif size > 1024: # transition point to sizes that are always generated
## best_astropix_size = 1280
##elif size > 500:
## best_astropix_size = 1024
##elif size > 320:
## best_astropix_size = 500
##else:
## best_astropix_size = 320
source_url = (
"http://astropix.ipac.caltech.edu/archive/%s/%s/%s_original.jpg"
% (
urlquote(info["publisher_id"]),
urlquote(lower_id),
urlquote(global_id),
)
)
# Now ready to download the image.
ext = source_url.rsplit(".", 1)[-1].lower()
ext = EXTENSION_REMAPPING.get(ext, ext)
with requests.get(source_url, stream=True) as resp:
if not resp.ok:
raise Exception(f"error downloading {source_url}: {resp.status_code}")
with open(os.path.join(cachedir, "image." + ext), "wb") as f:
shutil.copyfileobj(resp.raw, f)
def process(self, unique_id, cand_data_stream, cachedir, builder):
# Set up the metadata.
with codecs.getreader("utf8")(cand_data_stream) as text_stream:
info = json.load(text_stream)
if info["resource_url"] and len(info["resource_url"]):
ext = info["resource_url"].rsplit(".", 1)[-1].lower()
ext = EXTENSION_REMAPPING.get(ext, ext)
else:
ext = "jpg"
img_path = os.path.join(cachedir, "image." + ext)
md = AstroPixMetadata(info)
# Load up the image.
img = ImageLoader().load_path(img_path)
# Do the processing.
builder.tile_base_as_study(img)
builder.make_thumbnail_from_other(img)
builder.imgset.set_position_from_wcs(
md.as_wcs_headers(img.width, img.height),
img.width,
img.height,
place=builder.place,
)
builder.set_name(info["title"])
builder.imgset.credits_url = md.get_credit_url()
builder.cascade()
class AstroPixCandidateInput(CandidateInput):
"""
A CandidateInput obtained from an AstroPix query.
"""
def __init__(self, json_dict):
self._json = json_dict
self._lower_id = self._json["image_id"].lower()
self._global_id = self._json["publisher_id"] + "_" + self._lower_id
def get_unique_id(self):
return self._global_id.replace("/", "_")
def save(self, stream):
# First check that this input is usable. The NRAO feed contains an
# item like this, and based on my investigations they are just not
# usable right now because the server APIs don't work. So: skip any
# like this.
if "/" in self._json["image_id"]:
raise NotActionableError(
'AstroPix images with "/" in their IDs aren\'t retrievable'
)
# TODO? A few NRAO images have SIN projection. Try to recover them?
if self._json["wcs_projection"] != "TAN":
raise NotActionableError("cannot ingest images in non-TAN projections")
with codecs.getwriter("utf8")(stream) as text_stream:
json.dump(self._json, text_stream, ensure_ascii=False, indent=2)
ASTROPIX_FLOAT_ARRAY_KEYS = [
"wcs_reference_dimension", # NB: should be ints, but sometimes expressed with decimal points
"wcs_reference_pixel",
"wcs_reference_value",
"wcs_scale",
]
ASTROPIX_FLOAT_SCALAR_KEYS = [
"wcs_rotation",
]
class AstroPixMetadata(object):
"""
Metadata derived from AstroPix query results.
"""
image_id = None
publisher_id = None
    resource_url = None
    reference_url = None  # consulted by get_credit_url(); may be absent from a feed item
wcs_coordinate_frame = None # ex: 'ICRS'
wcs_equinox = None # ex: 'J2000'
wcs_projection = None # ex: 'TAN'
wcs_reference_dimension = None # ex: [7416.0, 4320.0]
wcs_reference_value = None # ex: [187, 12.3]
wcs_reference_pixel = (
None # ex: [1000.4, 1000.7]; from examples, this seems to be 1-based
)
wcs_rotation = None # ex: -0.07 (deg, presumably)
wcs_scale = None # ex: [-6e-7, 6e-7]
def __init__(self, json_dict):
# Some massaging for consistency:
for k in ASTROPIX_FLOAT_ARRAY_KEYS:
if k in json_dict:
json_dict[k] = list(map(float, json_dict[k]))
for k in ASTROPIX_FLOAT_SCALAR_KEYS:
if k in json_dict:
json_dict[k] = float(json_dict[k])
for k, v in json_dict.items():
setattr(self, k, v)
def as_wcs_headers(self, width, height):
"""
The metadata here are essentially AVM headers. As described in
`Builder.apply_avm_info()`, the data that we've seen in the wild are a
bit wonky with regards to parity: the metadata essentially correspond to
FITS-like parity, and we need to flip them to JPEG-like parity. See also
very similar code in `djangoplicity.py`.
"""
headers = {}
# headers['RADECSYS'] = self.wcs_coordinate_frame # causes Astropy warnings
headers["CTYPE1"] = "RA---" + self.wcs_projection
headers["CTYPE2"] = "DEC--" + self.wcs_projection
headers["CRVAL1"] = self.wcs_reference_value[0]
headers["CRVAL2"] = self.wcs_reference_value[1]
# See Calabretta & Greisen (2002; DOI:10.1051/0004-6361:20021327), eqn 186
crot = np.cos(self.wcs_rotation * np.pi / 180)
srot = np.sin(self.wcs_rotation * np.pi / 180)
lam = self.wcs_scale[1] / self.wcs_scale[0]
pc1_1 = crot
pc1_2 = -lam * srot
pc2_1 = srot / lam
pc2_2 = crot
# If we couldn't get the original image, the pixel density used for
# the WCS parameters may not match the image resolution that we have
# available. In such cases, we need to remap the pixel-related
# headers. From the available examples, `wcs_reference_pixel` seems to
# be 1-based in the same way that `CRPIXn` are. Since in FITS, integer
# pixel values correspond to the center of each pixel box, a CRPIXn of
# [0.5, 0.5] (the lower-left corner) should not vary with the image
# resolution. A CRPIXn of [W + 0.5, H + 0.5] (the upper-right corner)
# should map to [W' + 0.5, H' + 0.5] (where the primed quantities are
# the new width and height).
factor0 = width / self.wcs_reference_dimension[0]
factor1 = height / self.wcs_reference_dimension[1]
headers["CRPIX1"] = (self.wcs_reference_pixel[0] - 0.5) * factor0 + 0.5
headers["CRPIX2"] = (self.wcs_reference_pixel[1] - 0.5) * factor1 + 0.5
# Now finalize and apply the parity flip
cdelt1 = self.wcs_scale[0] / factor0
cdelt2 = self.wcs_scale[1] / factor1
headers["CD1_1"] = cdelt1 * pc1_1
headers["CD1_2"] = -cdelt1 * pc1_2
headers["CD2_1"] = cdelt2 * pc2_1
headers["CD2_2"] = -cdelt2 * pc2_2
headers["CRPIX2"] = height + 1 - headers["CRPIX2"]
return headers
def get_credit_url(self):
if self.reference_url:
return self.reference_url
return "http://astropix.ipac.caltech.edu/image/%s/%s" % (
urlquote(self.publisher_id),
urlquote(self.image_id),
)
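# Illustrative check (a sketch, not part of the module): the headers produced by
# AstroPixMetadata.as_wcs_headers() can be handed straight to astropy. `md` is a
# hypothetical AstroPixMetadata instance built from a feed item:
#
#   from astropy.wcs import WCS
#   w = WCS(md.as_wcs_headers(1024, 768))
#   print(w.pixel_to_world(512, 384))  # sky coordinate at the image center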
# ---- custom_components/sleep_as_android/sensor.py (repo: Antoni-Czaplicki/HA-SleepAsAndroid, license: Apache-2.0) ----
"""
Sensor for Sleep as Android states.
"""
import logging
import json
from homeassistant.components.sensor import SensorEntity
from homeassistant.config_entries import ConfigEntry
from homeassistant.helpers.restore_state import RestoreEntity
from homeassistant.core import HomeAssistant
from homeassistant.const import STATE_UNKNOWN, STATE_UNAVAILABLE
from homeassistant.helpers import device_registry as dr
from homeassistant.helpers.entity_registry import async_entries_for_config_entry
from .device_trigger import TRIGGERS
from .const import DOMAIN
from typing import TYPE_CHECKING, List
if TYPE_CHECKING:
from . import SleepAsAndroidInstance
_LOGGER = logging.getLogger(__name__)
async def async_setup_entry(hass: HomeAssistant, config_entry: ConfigEntry, async_add_entities):
"""Set up the sensor entry"""
async def add_configured_entities():
"""Scan entity registry and add previously created entities to Home Assistant"""
entities = async_entries_for_config_entry(instance.entity_registry, config_entry.entry_id)
sensors: List[SleepAsAndroidSensor] = []
for entity in entities:
device_name = instance.device_name_from_entity_id(entity.unique_id)
_LOGGER.debug(f"add_configured_entities: creating sensor with name {device_name}")
(sensor, _) = instance.get_sensor(device_name)
sensors.append(sensor)
async_add_entities(sensors)
instance: SleepAsAndroidInstance = hass.data[DOMAIN][config_entry.entry_id]
await add_configured_entities()
_LOGGER.debug("async_setup_entry: adding configured entities is finished.")
_LOGGER.debug("Going to subscribe to root topic.")
await instance.subscribe_root_topic(async_add_entities)
_LOGGER.debug("async_setup_entry is finished")
return True
class SleepAsAndroidSensor(SensorEntity, RestoreEntity):
__additional_attributes: dict[str, str] = {
'value1': 'timestamp',
'value2': 'label',
}
"""Mapping for value*
It is comfortable to have human readable names.
Keys is field names from SleepAsAndroid event https://docs.sleep.urbandroid.org/services/automation.html#events
Values is sensor attributes.
"""
_attr_icon = "mdi:sleep"
_attr_should_poll = False
_attr_device_class = f"{DOMAIN}__status"
def __init__(self, hass: HomeAssistant, config_entry: ConfigEntry, name: str):
        # annotation quoted: SleepAsAndroidInstance is only imported under TYPE_CHECKING
        self._instance: "SleepAsAndroidInstance" = hass.data[DOMAIN][config_entry.entry_id]
self.hass: HomeAssistant = hass
self._name: str = name
self._state: str = STATE_UNKNOWN
self._device_id: str = "unknown"
self._attr_extra_state_attributes = {}
self._set_attributes({}) # initiate _attr_extra_state_attributes with empty values
_LOGGER.debug(f"Creating sensor with name {name}")
async def async_added_to_hass(self):
"""
        Called when the sensor is added to Home Assistant.
        The device for the sensor should be created here.
"""
await super().async_added_to_hass()
device_registry = await dr.async_get_registry(self.hass)
device = device_registry.async_get_device(identifiers=self.device_info['identifiers'], connections=set())
_LOGGER.debug("My device id is %s", device.id)
self._device_id = device.id
if (old_state := await self.async_get_last_state()) is not None:
self._state = old_state.state
_LOGGER.debug(f"async_added_to_hass: restored previous state for {self.name}: {self.state}")
else:
# No previous state. It is fine, but it would be nice to report
_LOGGER.debug(f"async_added_to_hass: no previously saved state for {self.name}")
self.async_write_ha_state()
async def async_will_remove_from_hass(self):
"""
        Called when the sensor is removed from Home Assistant.
        The device should be removed here.
"""
# ToDo: should we remove device?
pass
def process_message(self, msg):
"""
Process new MQTT messages.
Set sensor state, attributes and fire events.
:param msg: MQTT message
"""
_LOGGER.debug(f"Processing message {msg}")
try:
new_state = STATE_UNKNOWN
payload = json.loads(msg.payload)
try:
new_state = payload['event']
except KeyError:
_LOGGER.warning("Got unexpected payload: '%s'", payload)
self._set_attributes(payload)
self.state = new_state
self._fire_event(self.state)
self._fire_trigger(self.state)
except json.decoder.JSONDecodeError:
_LOGGER.warning("expected JSON payload. got '%s' instead", msg.payload)
@property
def name(self):
"""Return the name of the sensor."""
return self._instance.create_entity_id(self._name)
@property
def state(self):
"""Return the state of the entity."""
return self._state
@state.setter
def state(self, new_state: str):
"""Set new state and fire events if needed
Events will be fired if state changed and new state is not STATE_UNKNOWN.
:param new_state: str: new sensor state
"""
if self._state != new_state:
self._state = new_state
self.async_write_ha_state()
else:
_LOGGER.debug(f"Will not update state because old state == new_state")
@property
def unique_id(self) -> str:
"""Return a unique ID."""
return self._instance.create_entity_id(self._name)
@property
def available(self) -> bool:
return self.state != STATE_UNKNOWN
@property
def device_id(self) -> str:
return self._device_id
@property
def device_info(self):
_LOGGER.debug("My identifiers is %s", {(DOMAIN, self.unique_id)})
info = {"identifiers": {(DOMAIN, self.unique_id)}, "name": self.name, "manufacturer": "SleepAsAndroid",
"type": None, "model": "MQTT"}
return info
def _fire_event(self, event_payload: str):
"""Fires event with payload {'event': event_payload }
:param event_payload: payload for event
"""
payload = {"event": event_payload}
_LOGGER.debug("Firing '%s' with payload: '%s'", self.name, payload)
self.hass.bus.fire(self.name, payload)
def _fire_trigger(self, new_state: str):
"""
Fires trigger based on new state
:param new_state: type of trigger to fire
"""
if new_state in TRIGGERS:
self.hass.bus.async_fire(DOMAIN + "_event", {"device_id": self.device_id, "type": new_state})
else:
_LOGGER.warning("Got %s event, but it is not in TRIGGERS list: will not fire this event for "
"trigger!", new_state)
def _set_attributes(self, payload: dict):
new_attributes = {}
for k, v in self.__additional_attributes.items():
new_attributes[v] = payload.get(k, STATE_UNAVAILABLE)
_LOGGER.debug(f"New attributes is {new_attributes}")
return self._attr_extra_state_attributes.update(new_attributes)
# ---- utils/dataset.py (repo: songdejia/DeepLab_v3_plus, license: MIT) ----
# -*- coding: utf-8 -*-
# @Author: Song Dejia
# @Date: 2018-10-21 13:01:06
# @Last Modified by: Song Dejia
# @Last Modified time: 2018-10-23 16:58:30
import sys
sys.path.append('../')
import os
import shutil
import cv2
from PIL import Image
from utils.transform import *
from utils.util import *
from torchvision import transforms
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
def transform_for_train(fixed_scale = 512, rotate_prob = 15):
"""
Options:
1.RandomCrop
2.CenterCrop
3.RandomHorizontalFlip
4.Normalize
5.ToTensor
6.FixedResize
7.RandomRotate
"""
transform_list = []
#transform_list.append(FixedResize(size = (fixed_scale, fixed_scale)))
transform_list.append(RandomSized(fixed_scale))
transform_list.append(RandomRotate(rotate_prob))
transform_list.append(RandomHorizontalFlip())
#transform_list.append(Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)))
transform_list.append(Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)))
transform_list.append(ToTensor())
return transforms.Compose(transform_list)
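# Note: these transforms operate on dict samples of the form
# {'image': PIL.Image, 'label': PIL.Image} (see VOCSegmentation below), so the
# image and its mask are augmented consistently.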
def transform_for_demo(fixed_scale = 512, rotate_prob = 15):
transform_list = []
transform_list.append(FixedResize(size = (fixed_scale, fixed_scale)))
transform_list.append(Normalize(mean=(0.0, 0.0, 0.0), std=(1.0, 1.0, 1.0)))
transform_list.append(ToTensor())
return transforms.Compose(transform_list)
def transform_for_test(fixed_scale = 512):
transform_list = []
transform_list.append(FixedResize(size = (fixed_scale, fixed_scale)))
transform_list.append(Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)))
transform_list.append(ToTensor())
return transforms.Compose(transform_list)
def prepare_for_train_dataloader(dataroot, bs_train = 4, shuffle = True, num_workers = 2, check_dataloader = False):
"""
use pascal 2012, dataroot contain JEPG/SEGMENTATION/so on
"""
transform = transform_for_train(fixed_scale = 512, rotate_prob = 15)
voc_train = VOCSegmentation(base_dir = dataroot, split = 'train', transform = transform)
if check_dataloader:
dataloader = DataLoader(voc_train, batch_size = bs_train, shuffle = False, num_workers = num_workers, drop_last = True)
else:
dataloader = DataLoader(voc_train, batch_size = bs_train, shuffle = shuffle, num_workers = num_workers, drop_last = True)
if check_dataloader:
"""
        Check the dataloader output.
        Images are saved to workspace/check/check_dataloader/img;
        masks are saved alongside them with a '_zmask' suffix.
        PASCAL VOC classes:
        person;
        animals (bird, cat, cow, dog, horse, sheep);
        vehicles (aeroplane, bicycle, boat, bus, car, motorbike, train);
        indoor (bottle, chair, dining table, potted plant, sofa, TV/monitor)
"""
transform = transform_for_demo(fixed_scale = 512, rotate_prob = 15)
voc_train_o = VOCSegmentation(base_dir = dataroot, split = 'train', transform = transform)
dataloader_o = DataLoader(voc_train_o, batch_size = bs_train, shuffle = False, num_workers = num_workers, drop_last = True)
workspace = os.path.abspath('./')
img_dir = os.path.join(workspace, 'check/check_dataloader/img')
        # masks are written alongside the images in img_dir; the '_zmask'
        # suffix keeps each mask sorted next to its restored image
std = np.array((0.229, 0.224, 0.225))
mean= np.array((0.485, 0.456, 0.406))
#print(img_dir)
if os.path.exists(img_dir):
shutil.rmtree(img_dir)
os.makedirs(img_dir)
idx = 0
for index, sample_batched in enumerate(dataloader):
inputs, labels = sample_batched['image'], sample_batched['label']
#print('inputs',inputs.shape) #20, 3, 512, 512
#print('labels',labels.shape) #20, 1, 512, 512
batch = inputs.shape[0]
for i in range(batch):
idx += 1
img = inputs[i].numpy().astype(np.float32)
mask= labels[i].numpy().astype(np.float32)
img = img.transpose((1, 2, 0))
"""
                The mask needs to be decoded into RGB form here; each class maps to one color.
"""
#mask= mask.transpose((1,2, 0))
#mask_rgb = cv2.cvtColor(mask, cv2.COLOR_GRAY2RGB)
mask = mask.squeeze(0)
mask_rgb = decode_segmap(mask, dataset='pascal', plot = False)
#print(img.shape)
img *= std
img += mean
img *= 255.0
#img = np.clip(img, 0, 255)
img = img.astype(np.int32)
img_name = '{}_restore.jpg'.format(idx)
img_path = os.path.join(img_dir, img_name)
cv2.imwrite(img_path, img[:,:,::-1])
mask_name= '{}_zmask.jpg'.format(idx)
mask_path= os.path.join(img_dir, mask_name)
cv2.imwrite(mask_path, 10*mask_rgb[:,:,::-1])
print('restore and mask idx : {:04d}'.format(idx))
idx = 0
for index, sample_batched in enumerate(dataloader_o):
inputs, labels = sample_batched['image'], sample_batched['label']
#print('inputs',inputs.shape) #20, 3, 512, 512
#print('labels',labels.shape) #20, 1, 512, 512
batch = inputs.shape[0]
for i in range(batch):
idx += 1
img = inputs[i].numpy().astype(np.float32)
mask= labels[i].numpy().astype(np.float32)
img = img.transpose((1, 2, 0))
mask= mask.transpose((1,2, 0))
#print(img.shape)
#img *= std
#img += mean
img *= 255.0
#img = np.clip(img, 0, 255)
img = img.astype(np.int32)
img_name = '{}_original.jpg'.format(idx)
img_path = os.path.join(img_dir, img_name)
cv2.imwrite(img_path, img[:,:,::-1])
print('original idx : {:04d}'.format(idx))
return dataloader
def prepare_for_val_dataloader(dataroot, bs_val = 6, shuffle = False, num_workers = 0):
"""
use pascal 2012
"""
transform = transform_for_test(fixed_scale = 512)
voc_test = VOCSegmentation(base_dir = dataroot, split = 'val', transform = transform)
dataloader = DataLoader(voc_test, batch_size = bs_val, shuffle = shuffle, num_workers = num_workers)
return dataloader
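# Minimal usage sketch (assumes a PASCAL VOC 2012-style dataset root; the path
# below is illustrative):
#   train_loader = prepare_for_train_dataloader('./dataset/VOC2012', bs_train=4)
#   for sample in train_loader:
#       images, masks = sample['image'], sample['label']
#       ...  # forward pass, loss, backward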
class VOCSegmentation(Dataset):
"""
PascalVoc dataset
"""
def __init__(self,
base_dir='./dataset',
split='train',
transform=None
):
"""
:param base_dir: path to VOC dataset directory
:param split: train/val
:param transform: transform to apply
"""
super().__init__()
self._base_dir = base_dir
self._image_dir = os.path.join(self._base_dir, 'JPEGImages')
self._cat_dir = os.path.join(self._base_dir, 'SegmentationClass')
if isinstance(split, str):
self.split = [split]
else:
split.sort()
self.split = split
self.transform = transform
_splits_dir = os.path.join(self._base_dir, 'ImageSets', 'Segmentation')
self.im_ids = []
self.images = []
self.categories = []
for splt in self.split:
with open(os.path.join(os.path.join(_splits_dir, splt + '.txt')), "r") as f:
lines = f.read().splitlines()
for ii, line in enumerate(lines):
_image = os.path.join(self._image_dir, line + ".jpg")
_cat = os.path.join(self._cat_dir, line + ".png")
assert os.path.isfile(_image)
assert os.path.isfile(_cat)
self.im_ids.append(line)
self.images.append(_image)
self.categories.append(_cat)
assert (len(self.images) == len(self.categories))
# Display stats
print('Number of images in {}: {:d}'.format(split, len(self.images)))
def __len__(self):
return len(self.images)
def __getitem__(self, index):
_img, _target= self._make_img_gt_point_pair(index)
sample = {'image': _img, 'label': _target}
if self.transform is not None:
sample = self.transform(sample)
return sample
def _make_img_gt_point_pair(self, index):
# Read Image and Target
# _img = np.array(Image.open(self.images[index]).convert('RGB')).astype(np.float32)
# _target = np.array(Image.open(self.categories[index])).astype(np.float32)
_img = Image.open(self.images[index]).convert('RGB')
_target = Image.open(self.categories[index])
return _img, _target
def __str__(self):
        return 'VOC2012(split=' + str(self.split) + ')'
# ---- tests/optimizers/random_search_test.py (repo: boschresearch/blackboxopt, licenses: ECL-2.0, Apache-2.0) ----
# Copyright (c) 2020 - for information on the respective copyright owner
# see the NOTICE file and/or the repository https://github.com/boschresearch/blackboxopt
#
# SPDX-License-Identifier: Apache-2.0
import parameterspace as ps
import pytest
from blackboxopt import Objective, OptimizationComplete
from blackboxopt.optimizers.random_search import RandomSearch
from blackboxopt.optimizers.testing import ALL_REFERENCE_TESTS
MAX_STEPS = 5
SPACE = ps.ParameterSpace()
SPACE.add(ps.ContinuousParameter("p1", [0, 1]))
@pytest.mark.parametrize("reference_test", ALL_REFERENCE_TESTS)
def test_all_reference_tests(reference_test):
reference_test(RandomSearch, dict(max_steps=MAX_STEPS))
def test_max_steps_randomsearch():
opt = RandomSearch(SPACE, [Objective("loss", False)], max_steps=MAX_STEPS)
for _ in range(MAX_STEPS):
opt.generate_evaluation_specification()
with pytest.raises(OptimizationComplete):
opt.generate_evaluation_specification()
with pytest.raises(OptimizationComplete):
opt.generate_evaluation_specification()
# ---- src/ba63clock.py (repo: leswright1977/RPi-BA63, license: Apache-2.0) ----
import time
from datetime import datetime
import serial
ser=serial.Serial(
port='/dev/ttyS0',
baudrate=9600,
parity=serial.PARITY_NONE,
stopbits=serial.STOPBITS_ONE,
bytesize=serial.EIGHTBITS,
timeout=1
)
region = "\033R00" #Set region to USA (Standard ASCII)
ser.write(region)
cls = "\033[2J" #send escape sequence to clear screen
ser.write(cls)
line1 = "\033[1;1H" #Escape seqence to start on line 1 char 1
line2 = "\033[2;1H" #Escape seqence to start on line 2 char 1
#splash screen
string1 = "~~~~~~Les' Lab~~~~~~"
ser.write(line1+string1)
string2 = "~Raspberry Pi Clock~"
ser.write(line2+string2)
time.sleep(3)
ser.write(cls)
#clock and spinner!
spinner = '|/-\|/-'
spin = 0
while True:
now = datetime.now()
now = now.strftime("%Y-%m-%d %H:%M:%S")
ser.write(line1+now+spinner[spin])
time.sleep(0.001)
unixtime = str(round(time.time(),1))
ser.write(line2+" "+unixtime)
spin+=1
if spin > len(spinner)-1:
spin = 0
ser.close
# ---- multicnet.py (repo: mimoralea/king-pong, license: MIT) ----
import tensorflow as tf
import cv2
import numpy as np
class MultilayerConvolutionalNetwork:
"""
This class manages the deep neural network
that will be used by the agent to learn
and extrapolate the state space
"""
def __init__(self, input_width, input_height, nimages, nchannels):
self.session = tf.InteractiveSession()
self.input_width = input_width
self.input_height = input_height
self.nimages = nimages
self.nchannels = nchannels
self.a = tf.placeholder("float", [None, self.nchannels])
self.y = tf.placeholder("float", [None])
self.input_image, self.y_conv, self.h_fc1, self.train_step = self.build_network()
self.session.run(tf.initialize_all_variables())
self.saver = tf.train.Saver()
def weight_variable(self, shape, stddev = 0.01):
"""
Initialize weight with slight amount of noise to
break symmetry and prevent zero gradients
"""
initial = tf.truncated_normal(shape, stddev = stddev)
return tf.Variable(initial)
def bias_variable(self, shape, value = 0.01):
"""
Initialize ReLU neurons with slight positive initial
bias to avoid dead neurons
"""
initial = tf.constant(value, shape=shape)
return tf.Variable(initial)
def conv2d(self, x, W, stride = 1):
"""
We use a stride size of 1 and zero padded convolutions
        to ensure the output has the same spatial size as the input
"""
return tf.nn.conv2d(x, W, strides = [1, stride, stride, 1], padding = "SAME")
def max_pool_2x2(self, x):
"""
Our pooling is plain old max pooling over 2x2 blocks
"""
return tf.nn.max_pool(x, ksize = [1, 2, 2, 1],
strides = [1, 2, 2, 1], padding = "SAME")
def build_weights_biases(self, weights_shape):
"""
Build the weights and bias of a convolutional layer
"""
return self.weight_variable(weights_shape), \
self.bias_variable(weights_shape[-1:])
def convolve_relu_pool(self, nn_input, weights_shape, stride = 4, pool = True):
"""
Convolve the input to the network with the weight tensor,
add the bias, apply the ReLU function and finally max pool
"""
W_conv, b_conv = self.build_weights_biases(weights_shape)
h_conv = tf.nn.relu(self.conv2d(nn_input, W_conv, stride) + b_conv)
if not pool:
return h_conv
return self.max_pool_2x2(h_conv)
def build_network(self):
"""
Sets up the deep neural network
"""
# the input is going to be reshaped to a
# 80x80 color image (4 channels)
input_image = tf.placeholder("float", [None, self.input_width,
self.input_height, self.nimages])
# create the first convolutional layers
h_pool1 = self.convolve_relu_pool(input_image, [8, 8, self.nimages, 32])
h_conv2 = self.convolve_relu_pool(h_pool1, [4, 4, 32, 64], 2, False)
h_conv3 = self.convolve_relu_pool(h_conv2, [3, 3, 64, 64], 1, False)
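        # Shape bookkeeping for the default 80x80 input (hence 5*5*64 below):
        #   80x80xN --conv 8x8 /4--> 20x20x32 --pool 2x2--> 10x10x32
        #   --conv 4x4 /2--> 5x5x64 --conv 3x3 /1--> 5x5x64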
# create the densely connected layers
W_fc1, b_fc1 = self.build_weights_biases([5 * 5 * 64, 512])
h_conv3_flat = tf.reshape(h_conv3, [-1, 5 * 5 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_conv3_flat, W_fc1) + b_fc1)
# finally add the readout layer
W_fc2, b_fc2 = self.build_weights_biases([512, self.nchannels])
readout = tf.matmul(h_fc1, W_fc2) + b_fc2
readout_action = tf.reduce_sum(tf.mul(readout, self.a), reduction_indices=1)
cost_function = tf.reduce_mean(tf.square(self.y - readout_action))
train_step = tf.train.AdamOptimizer(1e-8).minimize(cost_function)
return input_image, readout, h_fc1, train_step
def train(self, value_batch, action_batch, state_batch):
"""
Does the actual training step
"""
self.train_step.run(feed_dict = {
self.y : value_batch,
self.a : action_batch,
self.input_image : state_batch
})
def save_variables(self, a_file, h_file, stack):
"""
Saves neural network weight variables for
debugging purposes
"""
readout_t = self.readout_act(stack)
a_file.write(",".join([str(x) for x in readout_t]) + '\n')
h_file.write(",".join([str(x) for x in self.h_fc1.eval(
feed_dict={self.input_image:[stack]})[0]]) + '\n')
def save_percepts(self, path, x_t1):
"""
Saves an image array to visualize
how the image is compressed before saving
"""
cv2.imwrite(path, np.rot90(x_t1))
def save_network(self, directory, iteration):
"""
Saves the progress of the agent
for further use later on
"""
self.saver.save(self.session, directory + '/network', global_step = iteration)
def attempt_restore(self, directory):
"""
        Restores the most recently saved checkpoint if
        one is available
"""
checkpoint = tf.train.get_checkpoint_state(directory)
if checkpoint and checkpoint.model_checkpoint_path:
self.saver.restore(self.session, checkpoint.model_checkpoint_path)
return checkpoint.model_checkpoint_path
def preprocess_percepts(self, x_t1_colored, reshape = True):
"""
The raw image arrays get shrunk down and
remove any color whatsoever. Also gets it in
3 dimensions if needed
"""
x_t1_resized = cv2.resize(x_t1_colored, (self.input_width, self.input_height))
x_t1_greyscale = cv2.cvtColor(x_t1_resized, cv2.COLOR_BGR2GRAY)
ret, x_t1 = cv2.threshold(x_t1_greyscale, 1, 255, cv2.THRESH_BINARY)
"""
import time
timestamp = int(time.time())
cv2.imwrite("percepts/%d-color.png" % timestamp,
np.rot90(x_t1_colored))
cv2.imwrite("percepts/%d-resized.png" % timestamp,
np.rot90(x_t1_resized))
cv2.imwrite("percepts/%d-greyscale.png" % timestamp,
np.rot90(x_t1_greyscale))
cv2.imwrite("percepts/%d-bandw.png" % timestamp,
np.rot90(x_t1))
"""
if not reshape:
return x_t1
return np.reshape(x_t1, (80, 80, 1))
def readout_act(self, stack):
"""
Gets the best action
for a given stack of images
"""
stack = [stack] if hasattr(stack, 'shape') and len(stack.shape) == 3 else stack
return self.y_conv.eval(feed_dict = {self.input_image: stack})
def select_best_action(self, stack):
"""
Selects the action with the
highest value
"""
return np.argmax(self.readout_act(stack))
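# Typical use (an illustrative sketch; `frames` stands for a stack of four
# preprocessed 80x80 frames and 2 is the size of the action space):
#   net = MultilayerConvolutionalNetwork(80, 80, nimages=4, nchannels=2)
#   action = net.select_best_action(frames)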
def main():
print('This module should be imported')
pass
if __name__ == "__main__":
main()
# ---- squid/tests/conftest.py (repo: tylerbenson/integrations-core, license: BSD-3-Clause) ----
# (C) Datadog, Inc. 2010-2016
# All rights reserved
# Licensed under Simplified BSD License (see LICENSE)
import os
import sys
import subprocess
import time
import pytest
import requests
from datadog_checks.squid import SquidCheck
from . import common
@pytest.fixture
def aggregator():
from datadog_checks.stubs import aggregator
aggregator.reset()
return aggregator
@pytest.fixture
def squid_check():
return SquidCheck(common.CHECK_NAME, {}, {})
@pytest.fixture(scope="session")
def spin_up_squid():
env = os.environ
args = [
"docker-compose",
"-f", os.path.join(common.HERE, 'compose', 'squid.yaml')
]
subprocess.check_call(args + ["up", "-d"], env=env)
for _ in range(10):
try:
res = requests.get(common.URL)
res.raise_for_status()
break
except Exception:
time.sleep(1)
sys.stderr.write("Waiting for Squid to boot...")
else:
subprocess.check_call(args + ["down"], env=env)
raise Exception("Squid failed to boot...")
yield
subprocess.check_call(args + ["down"], env=env)
@pytest.fixture
def instance():
instance = {
"name": "ok_instance",
"tags": ["custom_tag"],
"host": common.HOST
}
return instance
# ---- moldesign/interfaces/tleap_interface.py (repo: Autodesk/molecular-design-toolkit, license: Apache-2.0) ----
from __future__ import print_function, absolute_import, division
from future.builtins import *
from future import standard_library
standard_library.install_aliases()
# Copyright 2017 Autodesk Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from past.builtins import basestring
import os
import re
import tempfile
import moldesign as mdt
from . import ambertools
from .. import units as u
from .. import compute
from .. import utils
from .. import forcefields
from ..compute import packages
IMAGE = 'ambertools'
@utils.kwargs_from(mdt.compute.run_job)
def create_ff_parameters(mol, charges='esp', baseff='gaff2', **kwargs):
"""Parameterize ``mol``, typically using GAFF parameters.
This will both assign a forcefield to the molecule (at ``mol.ff``) and produce the parameters
so that they can be used in other systems (e.g., so that this molecule can be simulated
embedded in a larger protein)
Note:
'am1-bcc' and 'gasteiger' partial charges will be automatically computed if necessary.
Other charge types must be precomputed.
Args:
mol (moldesign.Molecule):
charges (str or dict): what partial charges to use? Can be a dict (``{atom:charge}``) OR
a string, in which case charges will be read from
``mol.properties.[charges name]``; typical values will be 'esp', 'mulliken',
'am1-bcc', etc. Use 'zero' to set all charges to 0 (for QM/MM and testing)
baseff (str): Name of the gaff-like forcefield file (default: gaff2)
Returns:
TLeapForcefield: Forcefield object for this residue
"""
# Check that there's only 1 residue, give it a name
assert mol.num_residues == 1
if mol.residues[0].resname is None:
mol.residues[0].resname = 'UNL'
print('Assigned residue name "UNL" to %s' % mol)
resname = mol.residues[0].resname
# check that atoms have unique names
if len(set(atom.name for atom in mol.atoms)) != mol.num_atoms:
raise ValueError('This molecule does not have uniquely named atoms, cannot assign FF')
if charges == 'am1-bcc' and 'am1-bcc' not in mol.properties:
ambertools.calc_am1_bcc_charges(mol)
elif charges == 'gasteiger' and 'gasteiger' not in mol.properties:
ambertools.calc_gasteiger_charges(mol)
elif charges == 'esp' and 'esp' not in mol.properties:
# TODO: use NWChem ESP to calculate
raise NotImplementedError()
if charges == 'zero':
charge_array = [0.0 for atom in mol.atoms]
elif isinstance(charges, basestring):
charge_array = u.array([mol.properties[charges][atom] for atom in mol.atoms])
if not charge_array.dimensionless: # implicitly convert floats to fundamental charge units
charge_array = charge_array.to(u.q_e).magnitude
else:
charge_array = [charges[atom] for atom in mol.atoms]
inputs = {'mol.mol2': mol.write(format='mol2'),
'mol.charges': '\n'.join(map(str, charge_array))}
cmds = ['antechamber -i mol.mol2 -fi mol2 -o mol_charged.mol2 '
' -fo mol2 -c rc -cf mol.charges -rn %s' % resname,
'parmchk -i mol_charged.mol2 -f mol2 -o mol.frcmod',
'tleap -f leap.in',
'sed -e "s/tempresname/%s/g" mol_rename.lib > mol.lib' % resname]
base_forcefield = forcefields.TLeapLib(baseff)
inputs['leap.in'] = '\n'.join(["source leaprc.%s" % baseff,
"tempresname = loadmol2 mol_charged.mol2",
"fmod = loadamberparams mol.frcmod",
"check tempresname",
"saveoff tempresname mol_rename.lib",
"saveamberparm tempresname mol.prmtop mol.inpcrd",
"quit\n"])
def finish_job(j):
leapcmds = ['source leaprc.gaff2']
files = {}
for fname, f in j.glob_output("*.lib").items():
leapcmds.append('loadoff %s' % fname)
files[fname] = f
for fname, f in j.glob_output("*.frcmod").items():
leapcmds.append('loadAmberParams %s' % fname)
files[fname] = f
param = forcefields.TLeapForcefield(leapcmds, files)
param.add_ff(base_forcefield)
param.assign(mol)
return param
job = packages.tleap.make_job(command=' && '.join(cmds),
inputs=inputs,
when_finished=finish_job,
name="GAFF assignment: %s" % mol.name)
return mdt.compute.run_job(job, _return_result=True, **kwargs)
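# Illustrative usage (a sketch, not part of the module; assumes a small
# molecule built elsewhere, e.g. from a SMILES string):
#   mol = mdt.from_smiles('CC(=O)O')
#   ff = create_ff_parameters(mol, charges='am1-bcc')
#   # `ff` is a TLeapForcefield that can also be reused in larger systems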
class AmberParameters(object):
""" Forcefield parameters for a system in amber ``prmtop`` format
"""
def __getstate__(self):
state = self.__dict__.copy()
state['job'] = None
return state
def __init__(self, prmtop, inpcrd, job=None):
self.prmtop = prmtop
self.inpcrd = inpcrd
self.job = job
def to_parmed(self):
import parmed
prmtoppath = os.path.join(tempfile.mkdtemp(), 'prmtop')
self.prmtop.put(prmtoppath)
pmd = parmed.load_file(prmtoppath)
return pmd
@utils.kwargs_from(compute.run_job)
def _run_tleap_assignment(mol, leapcmds, files=None, **kwargs):
"""
Drives tleap to create a prmtop and inpcrd file. Specifically uses the AmberTools 16
tleap distribution.
Defaults are as recommended in the ambertools manual.
Args:
mol (moldesign.Molecule): Molecule to set up
leapcmds (List[str]): list of the commands to load the forcefields
files (List[pyccc.FileReference]): (optional) list of additional files
to send
**kwargs: keyword arguments to :meth:`compute.run_job`
References:
Ambertools Manual, http://ambermd.org/doc12/Amber16.pdf. See page 33 for forcefield
recommendations.
"""
leapstr = leapcmds[:]
inputs = {}
if files is not None:
inputs.update(files)
inputs['input.pdb'] = mol.write(format='pdb')
leapstr.append('mol = loadpdb input.pdb\n'
"check mol\n"
"saveamberparm mol output.prmtop output.inpcrd\n"
"savepdb mol output.pdb\n"
"quit\n")
inputs['input.leap'] = '\n'.join(leapstr)
job = packages.tleap.make_job(command='tleap -f input.leap',
inputs=inputs,
name="tleap, %s" % mol.name)
return compute.run_job(job, **kwargs)
def _prep_for_tleap(mol):
""" Returns a modified *copy* that's been modified for input to tleap
Makes the following modifications:
1. Reassigns all residue IDs
2. Assigns tleap-appropriate cysteine resnames
"""
change = False
clean = mdt.Molecule(mol.atoms)
for residue in clean.residues:
residue.pdbindex = residue.index+1
if residue.resname == 'CYS': # deal with cysteine states
if 'SG' not in residue.atoms or 'HG' in residue.atoms:
                continue  # no SG (tleap will build it) or thiol H present (regular CYS)
else:
sulfur = residue.atoms['SG']
if sulfur.formal_charge == -1*u.q_e:
residue.resname = 'CYM'
change = True
continue
# check for a reasonable hybridization state
if sulfur.formal_charge != 0 or sulfur.num_bonds not in (1, 2):
raise ValueError("Unknown sulfur hybridization state for %s"
% sulfur)
# check for a disulfide bond
for otheratom in sulfur.bonded_atoms:
if otheratom.residue is not residue:
if otheratom.name != 'SG' or otheratom.residue.resname not in ('CYS', 'CYX'):
raise ValueError('Unknown bond from cysteine sulfur (%s)' % sulfur)
# if we're here, this is a cystine with a disulfide bond
print('INFO: disulfide bond detected. Renaming %s from CYS to CYX' % residue)
sulfur.residue.resname = 'CYX'
clean._rebuild_from_atoms()
return clean
ATOMSPEC = re.compile(r'\.R<(\S+) ([\-0-9]+)>\.A<(\S+) ([\-0-9]+)>')
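# ATOMSPEC matches tleap atom specifiers such as ".R<CYX 92>.A<SG 7>",
# capturing residue name/number and atom name/number.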
def _parse_tleap_errors(job, molin):
# TODO: special messages for known problems (e.g. histidine)
msg = []
    unknown_res = set()  # so we can print only one error per unknown residue
lineiter = iter(job.stdout.split('\n'))
offset = utils.if_not_none(molin.residues[0].pdbindex, 1)
reslookup = {str(i+offset): r for i,r in enumerate(molin.residues)}
def _atom_from_re(s):
resname, residx, atomname, atomidx = s
r = reslookup[residx]
a = r[atomname]
return a
def unusual_bond(l):
atomre1, atomre2 = ATOMSPEC.findall(l)
try:
a1, a2 = _atom_from_re(atomre1), _atom_from_re(atomre2)
except KeyError:
a1 = a2 = None
r1 = reslookup[atomre1[1]]
r2 = reslookup[atomre2[1]]
return forcefields.errors.UnusualBond(l, (a1, a2), (r1, r2))
def _parse_tleap_logline(line):
fields = line.split()
if fields[0:2] == ['Unknown', 'residue:']:
# EX: "Unknown residue: 3TE number: 499 type: Terminal/beginning"
res = molin.residues[int(fields[4])]
unknown_res.add(res)
return forcefields.errors.UnknownResidue(line, res)
elif fields[:4] == 'Warning: Close contact of'.split():
# EX: "Warning: Close contact of 1.028366 angstroms between .R<DC5 1>.A<HO5' 1> and .R<DC5 81>.A<P 9>"
return unusual_bond(line)
elif fields[:6] == 'WARNING: There is a bond of'.split():
# Matches two lines, EX:
# "WARNING: There is a bond of 34.397700 angstroms between:"
# "------- .R<DG 92>.A<O3' 33> and .R<DG 93>.A<P 1>"
nextline = next(lineiter)
return unusual_bond(line+nextline)
elif fields[:5] == 'Created a new atom named:'.split():
# EX: "Created a new atom named: P within residue: .R<DC5 81>"
residue = reslookup[fields[-1][:-1]]
if residue in unknown_res:
return None # suppress atoms from an unknown res ...
atom = residue[fields[5]]
return forcefields.errors.UnknownAtom(line, residue, atom)
        elif fields[:2] == ['FATAL:', 'Atom']:  # fields[:2] is a list, so compare against a list
# EX: "FATAL: Atom .R<ARQ 1>.A<C30 6> does not have a type."
assert fields[-5:] == "does not have a type.".split()
atom = _atom_from_re(ATOMSPEC.findall(line)[0])
return forcefields.errors.UnknownAtom(line, atom.residue, atom)
elif (fields[:5] == '** No torsion terms for'.split() or
fields[:5] == 'Could not find angle parameter:'.split() or
              fields[:6] == 'Could not find bond parameter for:'.split()):
# EX: " ** No torsion terms for ca-ce-c3-hc"
# EX: "Could not find bond parameter for: -"
# EX: "Could not find angle parameter: - -"
return forcefields.errors.MissingTerms(line.strip())
else: # ignore this line
return None
while True:
try:
line = next(lineiter)
except StopIteration:
break
try:
errmsg = _parse_tleap_logline(line)
except (KeyError, ValueError):
print("WARNING: failed to process TLeap message '%s'" % line)
msg.append(forcefields.errors.ForceFieldMessage(line))
else:
if errmsg is not None:
msg.append(errmsg)
return msg
| 38.317757 | 114 | 0.600163 | 1,543 | 12,300 | 4.714841 | 0.302009 | 0.004811 | 0.008935 | 0.006598 | 0.084948 | 0.049759 | 0.020893 | 0 | 0 | 0 | 0 | 0.015063 | 0.292927 | 12,300 | 320 | 115 | 38.4375 | 0.821433 | 0.272602 | 0 | 0.078534 | 0 | 0 | 0.15004 | 0 | 0 | 0 | 0 | 0.00625 | 0.010471 | 1 | 0.057592 | false | 0 | 0.078534 | 0 | 0.230366 | 0.020942 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc2c4a7f21cba2d8adb3170fa02f876fd5f87ef3 | 4,831 | py | Python | tsinfer/pipeline/client.py | alecgunny/tsinfer | 03dad41fffd78edb3a06a64cbb5badb5f060c105 | [
"MIT"
] | null | null | null | tsinfer/pipeline/client.py | alecgunny/tsinfer | 03dad41fffd78edb3a06a64cbb5badb5f060c105 | [
"MIT"
] | 2 | 2020-09-18T22:38:49.000Z | 2020-09-21T16:20:59.000Z | tsinfer/pipeline/client.py | alecgunny/tsinfer | 03dad41fffd78edb3a06a64cbb5badb5f060c105 | [
"MIT"
] | null | null | null | from functools import partial
import random
import string
import time
import tritongrpcclient as triton
from tsinfer.pipeline.common import StoppableIteratingBuffer
class AsyncInferenceClient(StoppableIteratingBuffer):
__name__ = "InferenceClient"
def __init__(self, url, model_name, model_version, **kwargs):
# set up server connection and check that server is active
client = triton.InferenceServerClient(url)
if not client.is_server_live():
raise RuntimeError("Server not live")
self.client = client
self.params = {"model_name": model_name, "model_version": str(model_version)}
self.initialize(model_name, model_version)
self._in_flight_requests = {}
super().__init__(**kwargs)
def initialize(self, model_name, model_version):
# first unload existing model
if model_name != self.params["model_name"]:
            self.client.unload_model(self.params["model_name"])
# verify that model is ready
if not self.client.is_model_ready(model_name):
# if not, try to load use model control API
try:
self.client.load_model(model_name)
# if we can't load the model, first check if the given
# name is even valid. If it is, throw our hands up
except triton.InferenceServerException:
models = self.client.get_model_repository_index().models
model_names = [model.name for model in models]
if model_name not in model_names:
raise ValueError(
"Model name {} not one of available models: {}".format(
model_name, ", ".join(model_names))
)
else:
raise RuntimeError(
"Couldn't load model {} for unknown reason".format(
model_name)
)
# double check that load worked
assert self.client.is_model_ready(model_name)
model_metadata = self.client.get_model_metadata(model_name)
# TODO: find better way to check version, or even to
# load specific version
# assert model_metadata.versions[0] == model_version
model_input = model_metadata.inputs[0]
data_type = model_input.datatype
model_output = model_metadata.outputs[0]
self.client_input = triton.InferInput(
model_input.name, tuple(model_input.shape), data_type
)
self.client_output = triton.InferRequestedOutput(model_output.name)
self.params = {"model_name": model_name, "model_version": str(model_version)}
@StoppableIteratingBuffer.profile
def pull_stats(self):
return self.client.get_inference_statistics().model_stats
@StoppableIteratingBuffer.profile
def update_profiles(self, model_stats):
for model_stat in model_stats:
if (
model_stat.name == self.params["model_name"] and
model_stat.version == self.params["model_version"]
):
inference_stats = model_stat.inference_stats
break
else:
            raise ValueError("no inference statistics found for model %s" % self.params["model_name"])
count = inference_stats.success.count
if count == 0:
return
steps = ["queue", "compute_input", "compute_infer", "compute_output"]
for step in steps:
avg_time = getattr(inference_stats, step).ns / (10**9 * count)
self.profile_q.put((step, avg_time))
def run(self, x, y, batch_start_time):
        callback = partial(
            self.process_result, target=y, batch_start_time=batch_start_time
        )
request_id = ''.join(random.choices(string.ascii_letters, k=16))
if self.profile:
start_time = time.time()
self._in_flight_requests[request_id] = start_time
self.client_input.set_data_from_numpy(x.astype("float32"))
self.client.async_infer(
model_name=self.params["model_name"],
model_version=self.params["model_version"],
inputs=[self.client_input],
outputs=[self.client_output],
request_id=request_id,
callback=callback
)
if self.profile:
stats = self.pull_stats()
self.update_profiles(stats)
def process_result(self, target, batch_start_time, result, error):
# TODO: add error checking
prediction = result.as_numpy(self.client_output.name())
self.put((prediction, target, batch_start_time))
if self.profile:
end_time = time.time()
start_time = self._in_flight_requests.pop(result.get_response().id)
self.profile_q.put(("total", end_time - start_time))
| 37.449612 | 85 | 0.61685 | 554 | 4,831 | 5.128159 | 0.294224 | 0.069694 | 0.044351 | 0.044351 | 0.137628 | 0.083421 | 0.060542 | 0.038719 | 0.038719 | 0.038719 | 0 | 0.003243 | 0.297868 | 4,831 | 128 | 86 | 37.742188 | 0.834316 | 0.089836 | 0 | 0.095745 | 0 | 0 | 0.06317 | 0 | 0 | 0 | 0 | 0.007813 | 0.010638 | 1 | 0.06383 | false | 0 | 0.06383 | 0.010638 | 0.170213 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc3063da49d6a07aaabf4c6e400ea075cf719e61 | 1,279 | py | Python | src/Learning/visualizations.py | olekscode/NameGen | bb0683194df15c9709a1d09252c638a80999b40c | [
"MIT"
] | null | null | null | src/Learning/visualizations.py | olekscode/NameGen | bb0683194df15c9709a1d09252c638a80999b40c | [
"MIT"
] | null | null | null | src/Learning/visualizations.py | olekscode/NameGen | bb0683194df15c9709a1d09252c638a80999b40c | [
"MIT"
] | null | null | null | import constants
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
COLORS = {
'green': sns.xkcd_rgb["faded green"],
'red': sns.xkcd_rgb["pale red"],
'blue': sns.xkcd_rgb["medium blue"],
'yellow': sns.xkcd_rgb["ochre"]
}
def plot_confusion_dataframe(df, nrows=5, with_percents=False, total=None):
df = df.head(nrows)
plt.tight_layout()
if with_percents:
        assert total is not None
percents = df / total * 100
fig, ax = plt.subplots(1, 2, figsize=(18,5), sharey=True)
__plot_heatmap(df, ax=ax[0], fmt="d")
__plot_heatmap(percents, ax=ax[1], fmt="0.1f")
else:
fig, ax = plt.subplots()
__plot_heatmap(df, ax=ax, fmt="d")
return fig
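# A usage sketch (hypothetical confusion-matrix DataFrame cm_df and test labels
# y_test): with_percents=True needs the total sample count so the second panel
# can show percentages, e.g.
#
#     fig = plot_confusion_dataframe(cm_df, nrows=5, with_percents=True, total=len(y_test))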
def plot_history(history, color, title, ylabel, xlabel='Iteration'):
fig, ax = plt.subplots()
x = constants.LOG_EVERY * np.arange(len(history))
ax.plot(x, history, color)
ax.set_title(title)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
return fig
def __plot_heatmap(df, ax, fmt):
sns.heatmap(df, ax=ax, annot=True, fmt=fmt, cmap="Blues", cbar=False)
ax.xaxis.tick_top()
ax.set_ylabel('')
ax.tick_params(axis='y', labelrotation=0)
    ax.tick_params(axis='both', labelsize=16)
| 25.078431 | 75 | 0.639562 | 192 | 1,279 | 4.104167 | 0.421875 | 0.035533 | 0.050761 | 0.060914 | 0.043147 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01581 | 0.208757 | 1,279 | 51 | 76 | 25.078431 | 0.762846 | 0 | 0 | 0.105263 | 0 | 0 | 0.060938 | 0 | 0 | 0 | 0 | 0 | 0.026316 | 1 | 0.078947 | false | 0 | 0.105263 | 0 | 0.236842 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc3295aabd2127a605163e32768ac850871571c2 | 652 | py | Python | setup.py | riiid/krsh | 2238daa591b19d88722892f9a9f6ada3fe83c742 | [
"Apache-2.0"
] | 133 | 2021-05-28T07:41:49.000Z | 2022-02-21T23:07:31.000Z | setup.py | DolceLatte/krsh | 2238daa591b19d88722892f9a9f6ada3fe83c742 | [
"Apache-2.0"
] | null | null | null | setup.py | DolceLatte/krsh | 2238daa591b19d88722892f9a9f6ada3fe83c742 | [
"Apache-2.0"
] | 7 | 2021-06-04T00:53:04.000Z | 2022-01-10T15:26:29.000Z | from setuptools import find_packages, setup
with open("README.md", "r") as f:
long_description = f.read()
setup(
name="krsh",
version="1.0.0-alpha",
author="AIOps Squad, Riiid Inc",
url="https://github.com/riiid/krsh",
download_url="https://github.com/riiid/krsh/archive/master.zip",
packages=find_packages(),
long_description=long_description,
classifiers=["Programming Language :: Python :: 3"],
include_package_data=True,
keywords=["kubeflow", "krsh", "krsh"],
install_requires=[],
entry_points="""
[console_scripts]
krsh=krsh.cmd.cli:cli
""",
python_requires=">=3.7",
)
| 27.166667 | 68 | 0.648773 | 81 | 652 | 5.074074 | 0.679012 | 0.109489 | 0.068127 | 0.082725 | 0.126521 | 0.126521 | 0 | 0 | 0 | 0 | 0 | 0.011257 | 0.182515 | 652 | 23 | 69 | 28.347826 | 0.75985 | 0 | 0 | 0 | 0 | 0 | 0.369632 | 0.032209 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.047619 | 0 | 0.047619 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc34713f7867e6e21d18b8ae66d74ec37c930b47 | 7,687 | py | Python | mian/analysis/alpha_diversity.py | tbj128/mian | 2d1487fe8bf55a30ba983694ab5edaac39ae3e22 | [
"MIT"
] | 1 | 2021-11-24T08:06:39.000Z | 2021-11-24T08:06:39.000Z | mian/analysis/alpha_diversity.py | tbj128/mian | 2d1487fe8bf55a30ba983694ab5edaac39ae3e22 | [
"MIT"
] | 8 | 2019-04-03T05:37:44.000Z | 2020-12-12T06:35:09.000Z | mian/analysis/alpha_diversity.py | tbj128/mian | 2d1487fe8bf55a30ba983694ab5edaac39ae3e22 | [
"MIT"
] | null | null | null | # ===========================================
#
# mian Analysis Alpha/Beta Diversity Library
# @author: tbj128
#
# ===========================================
#
# Imports
#
#
# ======== R specific setup =========
#
import logging
import rpy2.robjects as robjects
import rpy2.rlike.container as rlc
from rpy2.robjects.packages import SignatureTranslatedAnonymousPackage
from scipy import stats, math
from skbio import TreeNode
from skbio.diversity import alpha_diversity
from io import StringIO
import numpy as np
from mian.analysis.analysis_base import AnalysisBase
from mian.core.statistics import Statistics
from mian.model.otu_table import OTUTable
from mian.model.map import Map
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
class AlphaDiversity(AnalysisBase):
r = robjects.r
#
# ======== Main code begins =========
#
rcode = """
alphaDiversity <- function(allOTUs, alphaType, alphaContext) {
alphaDiv = diversity(allOTUs, index = alphaType)
if (alphaContext == "evenness") {
S <- specnumber(allOTUs)
if (alphaType == "shannon") {
J <- alphaDiv/log(S)
return(J)
} else if (alphaType == "simpson") {
# simpson index = 1 - D
J <- (1-alphaDiv)/S
return(J)
} else {
# invsimpson index = 1/D
J <- (1/alphaDiv)/S
return(J)
}
} else if (alphaContext == "speciesnumber") {
S <- specnumber(allOTUs)
return(S)
} else {
return(alphaDiv)
}
}
"""
veganR = SignatureTranslatedAnonymousPackage(rcode, "veganR")
def run(self, user_request):
table = OTUTable(user_request.user_id, user_request.pid, use_sparse=True)
table.load_phylogenetic_tree_if_exists()
# No OTUs should be excluded for diversity analysis
otu_table, headers, sample_labels = table.get_table_after_filtering_and_aggregation_and_low_count_exclusion(user_request)
metadata_values = table.get_sample_metadata().get_metadata_column_table_order(sample_labels, user_request.get_custom_attr("expvar"))
if user_request.get_custom_attr("colorvar") != "None":
color_metadata_values = table.get_sample_metadata().get_metadata_column_table_order(sample_labels, user_request.get_custom_attr("colorvar"))
else:
color_metadata_values = []
if user_request.get_custom_attr("sizevar") != "None":
size_metadata_values = table.get_sample_metadata().get_metadata_column_table_order(sample_labels, user_request.get_custom_attr("sizevar"))
else:
size_metadata_values = []
phylogenetic_tree = table.get_phylogenetic_tree()
return self.analyse(user_request, otu_table, headers, sample_labels, metadata_values, color_metadata_values, size_metadata_values, phylogenetic_tree)
def analyse(self, user_request, otu_table, headers, sample_labels, metadata_values, color_metadata_values, size_metadata_values, phylogenetic_tree):
logger.info("Starting Alpha Diversity analysis")
plotType = user_request.get_custom_attr("plotType")
alphaType = user_request.get_custom_attr("alphaType")
alphaContext = user_request.get_custom_attr("alphaContext")
statisticalTest = user_request.get_custom_attr("statisticalTest")
if alphaType == "faith_pd":
if phylogenetic_tree == "":
return {
"no_tree": True
}
project_map = Map(user_request.user_id, user_request.pid)
if project_map.matrix_type == "float":
return {
"has_float": True
}
if int(user_request.level) == -1:
# OTU tables are returned as a CSR matrix
otu_table = otu_table.toarray()
vals = self.calculate_alpha_diversity(otu_table, sample_labels, headers, phylogenetic_tree, alphaType, alphaContext)
if plotType == "boxplot":
# Calculate the statistical p-value
abundances = {}
statsAbundances = {}
i = 0
while i < len(vals):
obj = {}
obj["s"] = str(sample_labels[i])
if vals[i] == float('inf'):
raise ValueError("Cannot have infinite values")
obj["a"] = round(vals[i], 6)
meta = metadata_values[i] if len(metadata_values) > 0 else "All"
# Group the abundance values under the corresponding metadata values
if meta in statsAbundances:
statsAbundances[meta].append(vals[i])
abundances[meta].append(obj)
else:
statsAbundances[meta] = [vals[i]]
abundances[meta] = [obj]
i += 1
statistics = Statistics.getTtest(statsAbundances, statisticalTest)
logger.info("After T-test")
abundancesObj = {}
abundancesObj["abundances"] = abundances
abundancesObj["stats"] = statistics
return abundancesObj
else:
corrArr = []
corrValArr1 = []
corrValArr2 = []
i = 0
while i < len(vals):
obj = {}
obj["s"] = str(sample_labels[i])
obj["c1"] = float(metadata_values[i])
obj["c2"] = float(vals[i])
obj["color"] = color_metadata_values[i] if len(color_metadata_values) == len(vals) else ""
obj["size"] = float(size_metadata_values[i]) if len(size_metadata_values) == len(vals) else ""
corrArr.append(obj)
corrValArr1.append(float(metadata_values[i]))
corrValArr2.append(float(vals[i]))
i += 1
coef, pval = stats.pearsonr(corrValArr1, corrValArr2)
if math.isnan(coef):
coef = 0
if math.isnan(pval):
pval = 1
abundances_obj = {"corrArr": corrArr, "coef": coef, "pval": pval}
return abundances_obj
def calculate_alpha_diversity(self, otu_table, sample_labels, headers, phylogenetic_tree, alpha_type, alpha_context):
if alpha_type == "faith_pd":
otu_table = otu_table.astype(int)
tree = TreeNode.read(StringIO(phylogenetic_tree))
if len(tree.root().children) > 2:
                # A multifurcating root suggests an unrooted tree; root it at the midpoint
tree = tree.root_at_midpoint()
return alpha_diversity(alpha_type, otu_table, ids=sample_labels, otu_ids=headers, tree=tree)
else:
num_species = np.count_nonzero(otu_table > 0, axis=1)
if alpha_context == "evenness":
diversity = alpha_diversity(alpha_type, otu_table, ids=sample_labels)
if alpha_type == "shannon":
return diversity / np.log(num_species)
elif alpha_type == "simpson":
# simpson index = 1 - D
return (1 - diversity) / num_species
else:
# invsimpson index = 1/D
return (1 / diversity) / num_species
elif alpha_context == "speciesnumber":
return alpha_diversity("observed_otus", otu_table, ids=sample_labels)
# return num_species
else:
return alpha_diversity(alpha_type, otu_table, ids=sample_labels)
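    # Worked check for the "evenness" branch above: with three equally abundant
    # taxa, Shannon H = ln(3) ≈ 1.0986 and S = 3 species, so Pielou evenness
    # J = H / ln(S) = 1.0, a perfectly even community.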
| 37.866995 | 157 | 0.583583 | 802 | 7,687 | 5.366584 | 0.250623 | 0.065056 | 0.029275 | 0.041822 | 0.335734 | 0.252556 | 0.233504 | 0.199117 | 0.183783 | 0.173095 | 0 | 0.006409 | 0.309874 | 7,687 | 202 | 158 | 38.054455 | 0.804901 | 0.070379 | 0 | 0.17931 | 0 | 0 | 0.15403 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02069 | false | 0 | 0.089655 | 0 | 0.213793 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc3859a4cf6a08c4ec0c43c696deb9e3d41dfe72 | 3,479 | py | Python | tmp/archives/low_level_training.py | angellmethod/DogFaceNet | 83b4f58978faea9efd8cb64160c26f4f5085f931 | [
"MIT"
] | 79 | 2019-01-08T02:35:24.000Z | 2022-03-30T11:56:31.000Z | tmp/archives/low_level_training.py | Peusudocode/DogFaceNet | 386d8687b3a7fe21cd46777caaf92545c68575b0 | [
"MIT"
] | 3 | 2020-01-13T06:25:57.000Z | 2021-08-10T01:34:05.000Z | tmp/archives/low_level_training.py | Peusudocode/DogFaceNet | 386d8687b3a7fe21cd46777caaf92545c68575b0 | [
"MIT"
] | 34 | 2019-07-24T12:25:13.000Z | 2022-03-16T07:13:47.000Z | # Training
next_images, next_labels = next_element
output = model(next_images)
logit = arcface_loss(embedding=output, labels=next_labels,
w_init=None, out_num=count_labels)
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logit, labels=next_labels))
# Validation
next_images_valid, next_labels_valid = next_valid
output_valid = model(next_images_valid)
logit_valid = arcface_loss(embedding=output_valid, labels=next_labels_valid,
w_init=None, out_num=count_labels)
loss_valid = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logit_valid, labels=next_labels_valid))
pred_valid = tf.nn.softmax(logit_valid)
acc_valid = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(pred_valid, axis=1), next_labels_valid), dtype=tf.float32))  # argmax picks the most probable class
# Optimizer
lr = 0.01
opt = tf.train.AdamOptimizer(learning_rate=lr)
train = opt.minimize(loss)
# Accuracy for validation and testing
pred = tf.nn.softmax(logit)
acc = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(pred, axis=1), next_labels), dtype=tf.float32))  # argmax picks the most probable class
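# e.g. if the predicted classes (argmax) are [2, 0, 1] and the labels are
# [2, 1, 1], tf.equal yields [1, 0, 1] and the mean gives an accuracy of 2/3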
############################################################
# Training session
############################################################
init = tf.global_variables_initializer()
with tf.Session() as sess:
summary = tf.summary.FileWriter('../output/summary', sess.graph)
summaries = []
for var in tf.trainable_variables():
summaries.append(tf.summary.histogram(var.op.name, var))
summaries.append(tf.summary.scalar('inference_loss', loss))
summary_op = tf.summary.merge(summaries)
saver = tf.train.Saver(max_to_keep=100)
sess.run(init)
# Training
nrof_batches = len(filenames_train)//BATCH_SIZE + 1
    nrof_batches_valid = len(filenames_valid)//BATCH_SIZE + 1
print("Start of training...")
for i in range(EPOCHS):
feed_dict = {filenames_train_placeholder: filenames_train,
labels_train_placeholder: labels_train}
sess.run(iterator.initializer, feed_dict=feed_dict)
feed_dict_valid = {filenames_valid_placeholder: filenames_valid,
labels_valid_placeholder: labels_valid}
sess.run(it_valid.initializer, feed_dict=feed_dict_valid)
# Training
for j in trange(nrof_batches):
try:
_, loss_value, summary_op_value, acc_value = sess.run((train, loss, summary_op, acc))
# summary.add_summary(summary_op_value, count)
tqdm.write("\n Batch: " + str(j)
+ ", Loss: " + str(loss_value)
+ ", Accuracy: " + str(acc_value)
)
except tf.errors.OutOfRangeError:
break
# Validation
print("Start validation...")
tot_acc = 0
for _ in trange(nrof_batches_valid):
try:
loss_valid_value, acc_valid_value = sess.run((loss_valid, acc_valid))
tot_acc += acc_valid_value
tqdm.write("Loss: " + str(loss_valid_value)
+ ", Accuracy: " + str(acc_valid_value)
)
except tf.errors.OutOfRangeError:
break
print("End of validation. Total accuray: " + str(tot_acc/nrof_batches_valid))
print("End of training.")
print("Start evaluation...")
# Evaluation on the validation set:
## One-shot training
    #sess.run()
| 33.451923 | 113 | 0.628054 | 423 | 3,479 | 4.884161 | 0.27896 | 0.038722 | 0.030978 | 0.027106 | 0.25363 | 0.188771 | 0.124879 | 0.124879 | 0.095837 | 0.095837 | 0 | 0.005648 | 0.236562 | 3,479 | 104 | 114 | 33.451923 | 0.772214 | 0.062949 | 0 | 0.129032 | 0 | 0 | 0.059802 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.080645 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc39820ab2dae0fa8c377e394f91862d672ca930 | 6,995 | py | Python | utils/loss.py | Kewei-Wang-Kui/IAANet-for-IRTD | a35cd2ce5c7180c6d9da54877baf549c4e77ba3a | [
"MIT"
] | 3 | 2022-03-28T06:59:12.000Z | 2022-03-29T06:28:32.000Z | utils/loss.py | Kewei-Wang-Kui/IAANet-for-IRTD | a35cd2ce5c7180c6d9da54877baf549c4e77ba3a | [
"MIT"
] | null | null | null | utils/loss.py | Kewei-Wang-Kui/IAANet-for-IRTD | a35cd2ce5c7180c6d9da54877baf549c4e77ba3a | [
"MIT"
] | null | null | null | import torch
import math
from torch import nn
from utils.general import *
import numpy as np
#implementation https://github.com/ultralytics/yolov3/blob/master/utils/metrics.py
def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7):
# Returns the IoU of box1 to box2. box1 is 4, box2 is nx4
box2 = box2.T
# Get the coordinates of bounding boxes
if x1y1x2y2: # x1, y1, x2, y2 = box1
b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
else: # transform from xywh to xyxy
b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2
b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2
b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2
# Intersection area
inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
(torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
# Union Area
w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
union = w1 * h1 + w2 * h2 - inter + eps
iou = inter / union
if GIoU or DIoU or CIoU:
cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width
ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height
if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared
rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 +
(b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center distance squared
if DIoU:
return iou - rho2 / c2 # DIoU
elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2)
with torch.no_grad():
alpha = v / (v - iou + (1 + eps))
return iou - (rho2 / c2 + v * alpha) # CIoU
else: # GIoU https://arxiv.org/pdf/1902.09630.pdf
c_area = cw * ch + eps # convex area
return iou - (c_area - union) / c_area # GIoU
else:
return iou # IoU
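# Worked check (hypothetical boxes, x1y1x2y2 format): box1 = [0, 0, 2, 2] and
# box2 = [1, 1, 3, 3] overlap in a 1x1 square, so inter = 1, union = 4 + 4 - 1 = 7,
# and plain IoU = 1/7 ≈ 0.143 before any GIoU/DIoU/CIoU penalty is applied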
class FocalLoss(nn.Module):
# Wraps focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
super(FocalLoss, self).__init__()
self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss()
self.gamma = gamma
self.alpha = alpha
self.reduction = loss_fcn.reduction
self.loss_fcn.reduction = 'none' # required to apply FL to each element
def forward(self, pred, true):
loss = self.loss_fcn(pred, true)
# p_t = torch.exp(-loss)
# loss *= self.alpha * (1.000001 - p_t) ** self.gamma # non-zero power for gradient stability
# TF implementation https://github.com/tensorflow/addons/blob/v0.7.1/tensorflow_addons/losses/focal_loss.py
pred_prob = torch.sigmoid(pred) # prob from logits
p_t = true * pred_prob + (1 - true) * (1 - pred_prob)
alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
modulating_factor = (1.0 - p_t) ** self.gamma
loss *= alpha_factor * modulating_factor
if self.reduction == 'mean':
return loss.mean()
elif self.reduction == 'sum':
return loss.sum()
else: # 'none'
return loss
class ComputerLoss(nn.Module):
def __init__(self, Dloss, Sloss):
super(ComputerLoss, self).__init__()
self.Dloss = Dloss
self.Sloss = Sloss
self.d = 1.0
self.s = 1.0
def forward(self, Dp, Dtarget, anchor, stride, Sp, Starget, mask_maps, target_boxes):
dloss, lbox, lobj = self.Dloss(Dp, Dtarget, anchor, stride)
sloss = self.Sloss(Sp, Starget, mask_maps, target_boxes)
loss = dloss * self.d + sloss * self.s
return loss, torch.cat((lbox, lobj.unsqueeze(dim=0), dloss, sloss.unsqueeze(dim=0), loss)).detach()
class SegLoss(nn.Module):
def __init__(self, posw=1, mode=None, device='cuda'):
super(SegLoss, self).__init__()
BCE = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([posw], device=device))
        # optionally wrap the BCE criterion in focal loss
self.mode = mode
if mode=='focal':
self.BCE = FocalLoss(BCE)
else:
self.BCE = BCE
def forward(self, pred, targets, mask_maps, target_boxes):
pred = pred.squeeze(dim=-1)
#pred = pred.sigmoid()
b, _, = pred.shape
targets = targets.squeeze(dim=1)
loss = 0
for i in range(b):
mask_map = mask_maps[i]
target = targets[i]
p = pred[i]
target = target[~mask_map]
p = p[:len(target)]
loss += self.BCE(p, target)
loss = loss / b
#print(f"{pred.max()}")
#print(f'{loss}')
return loss
#implementation https://github.com/ultralytics/yolov3/blob/master/utils/loss.py
class DetectLoss(nn.Module):
def __init__(self, obj_pw, device):
super(DetectLoss, self).__init__()
self.device = device
self.BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([obj_pw], device=self.device))
self.box = 0.5
self.obj = 1.0
def forward(self, p, target, anchor, stride):
lbox = torch.zeros(1, device=self.device)
indices, tbox = [], []
fh, fw = p.shape[1:3]#feature map size
gain = torch.tensor([fw, fh], device=self.device)
b = target[:,0].long().T #image
gxy = target[:, 1:3] * gain#grid xy
gwh = target[:, 3:5] * gain#grid wh
gij = gxy.long()
gi, gj = gij.T #grid xy indices
gj = gj.clamp_(0, fh-1)
gi = gi.clamp_(0, fw-1)
tbox = (torch.cat((gxy - gij, gwh), 1))
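        # Worked check (hypothetical target): a box centre at normalized x = 0.53
        # on a 20-cell-wide map gives gx = 10.6, so cell gi = 10 is responsible
        # and tbox stores the fractional offset 0.6 as the regression target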
tobj = torch.zeros_like(p[..., 0], device=self.device)
n = b.shape[0] #number of targets
if n:
ps = p[b, gj, gi] #prediction subset corresponding to targets
#regression
pxy = ps[:, :2].sigmoid() * 2 - 0.5
pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchor.to(self.device) / stride
pbox = torch.cat((pxy, pwh), 1)
iou = bbox_iou(pbox.T, tbox, x1y1x2y2=False, CIoU=True) # iou(prediction, target)
lbox += (1.0 - iou).mean()
#objectness
tobj[b, gj, gi] = iou.detach().clamp(0).type(tobj.dtype)
lobj = self.BCEobj(p[..., 4], tobj)
lbox *= self.box
lobj *= self.obj
loss = lbox + lobj
return loss, lbox, lobj
#torch.cat((lbox, lobj.unsqueeze(dim=0), loss)).detach()
| 37.406417 | 115 | 0.562402 | 1,011 | 6,995 | 3.768546 | 0.2364 | 0.012861 | 0.014698 | 0.022047 | 0.155118 | 0.08189 | 0.046719 | 0.031496 | 0.031496 | 0 | 0 | 0.060859 | 0.297641 | 6,995 | 186 | 116 | 37.607527 | 0.714635 | 0.192137 | 0 | 0.053435 | 0 | 0 | 0.003572 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068702 | false | 0 | 0.038168 | 0 | 0.21374 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc3ac5e5e28791c5358a2ea48c5227083f9cd95f | 917 | py | Python | load_data.py | anzq001/president | 4cd46a42c3b6e328e3a90bd75cc5f99fd7a2f200 | [
"Apache-2.0"
] | 2 | 2019-03-09T12:20:22.000Z | 2019-03-10T09:48:31.000Z | load_data.py | anzq001/president | 4cd46a42c3b6e328e3a90bd75cc5f99fd7a2f200 | [
"Apache-2.0"
] | null | null | null | load_data.py | anzq001/president | 4cd46a42c3b6e328e3a90bd75cc5f99fd7a2f200 | [
"Apache-2.0"
] | 1 | 2019-04-13T02:33:24.000Z | 2019-04-13T02:33:24.000Z | import numpy as np
import os
label_dict = {"克林顿": 0, "奥巴马": 1, "特朗普": 2}
def get_filenames(file_dir, ratio):
file_list = []
label_list = []
for label_dir in os.listdir(file_dir):
for filename in os.listdir(os.path.join(file_dir, label_dir)):
file_list.append(os.path.join(os.path.join(file_dir, label_dir, filename)))
label_list.append(label_dict[label_dir])
    # use shuffle to randomize the sample order
all_data = np.array([file_list, label_list]).transpose()
np.random.shuffle(all_data)
train_num = int(np.ceil(ratio * len(all_data)))
train = all_data[0:train_num, :]
    test = all_data[train_num:, :]  # the remaining samples
train_filename = list(train[:, 0])
train_label = [int(i) for i in list(train[:, 1])]
test_filename = list(test[:, 0])
test_label = [int(i) for i in list(test[:, 1])]
return train_filename, train_label, test_filename, test_label
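# A minimal usage sketch (hypothetical layout file_dir/<speaker>/*.jpg, where
# the speaker directory names match the label_dict keys above):
#
#     train_files, train_labels, test_files, test_labels = get_filenames('data', 0.8)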
| 36.68 | 88 | 0.63904 | 140 | 917 | 3.95 | 0.3 | 0.063291 | 0.05425 | 0.061483 | 0.159132 | 0.159132 | 0.159132 | 0 | 0 | 0 | 0 | 0.012552 | 0.218103 | 917 | 24 | 89 | 38.208333 | 0.758717 | 0.014177 | 0 | 0 | 0 | 0 | 0.010251 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.1 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc3d65234cf18e7d3a1028e4d5ef5c7b351357b3 | 5,912 | py | Python | ml_digit_recognition.py | andydevs/ml-digit-recognition | e99506dcffa73a5e2f920db1edee3d94295d0417 | [
"MIT"
] | null | null | null | ml_digit_recognition.py | andydevs/ml-digit-recognition | e99506dcffa73a5e2f920db1edee3d94295d0417 | [
"MIT"
] | null | null | null | ml_digit_recognition.py | andydevs/ml-digit-recognition | e99506dcffa73a5e2f920db1edee3d94295d0417 | [
"MIT"
] | null | null | null | """
ml_digit_recognition.py
Trains a neural network to recognize handwritten digits from the MNIST database.
Author: Anshul Kharbanda
Created: 10 - 28 - 2017
"""
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data as mnist_input_data
from random import randint
from tqdm import tqdm
from time import time
# -------------------------------- HYPER PARAMS --------------------------------
LEARNING_RATE = 0.2 # How quickly the network learns (sensitivity to error)
BATCH_SIZE = 500 # The number of samples in a batch in each training epoch
TRAINING_EPOCHS = 5000 # The number of training epochs
LOGPATH = 'logs'
# --------------------------------- MNIST Data ---------------------------------
# Get MNIST Data
print('Getting MNIST digit data')
mnist = mnist_input_data.read_data_sets('MNIST_data/', one_hot=True)
# --------------------------- Neural Network System ----------------------------
"""
Convolution Neural Network:
input(784)
reshape(28,28,1)
convolve(4, 4)
relu(4, 4)
reshape(49)
softmax(10)
output(10)
Determines the numerical digit that the given image represents.
"""
# Initialize random seed
tf.set_random_seed(int(time()))
# Input z
z = tf.placeholder(tf.float32, [None, 784], name='z')
# Reshape layer
s0 = tf.reshape(z, [-1, 28, 28, 1])
# Convolution layer
Kc0 = tf.Variable(tf.random_uniform([4, 4, 1, 1]), name='Kc0')
c0 = tf.nn.conv2d(s0, Kc0, strides=[1, 4, 4, 1], padding='SAME')
# RELU layer
r0 = tf.nn.relu(c0)
# Reshape layer
s1 = tf.reshape(r0, [-1, 49])  # flatten the ReLU output, per the architecture above
# Fully-Connected Layer to Output P
Wp = tf.Variable(tf.random_uniform([49, 10]), name='Wp')
bp = tf.Variable(tf.random_uniform([10]), name='bp')
p = tf.nn.softmax(tf.matmul(s1, Wp) + bp)
# Training p
p_ = tf.placeholder(tf.float32, [None, 10], name='p_')
# Error function (Least-Squares)
error = tf.losses.mean_squared_error(labels=p_, predictions=p)
# Trainer
trainer = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(error)
# Summary
tf.summary.scalar('error', error)
tf.summary.histogram('Kc0', Kc0)
tf.summary.histogram('Wp', Wp)
tf.summary.histogram('bp', bp)
summary = tf.summary.merge_all()
# ----------------------------- Show Sample Helper -----------------------------
def show_sample(name, images, labels, predicts, error):
"""
HELPER FUNCTION
Shows a sample of the given MNIST data and the resulting predictions
from the neural network. Plots images, labels, and predictions in a
subplot with the given rows and columns. Prints the given error
afterwards.
:param name: the name of the dataset
:param images: the images of the MNIST data (Kx28x28 array)
:param labels: the labels of the MNIST data
:param predicts: the predictions from the Nerual Network
:param error: the error of the prediction from the Neural Network
"""
# Title formatters
plot_title = '{name} Sample Digits'
subplot_title = 'Expected: {expected}, Predicted: {predicted}'
error_title = '{name} error: {error}'
# Rows and columns of subplot
rows = 2
cols = 2
# Randomized samples start
start = randint(0, images.shape[0] - (rows*cols))
# Get formatted data
formatted_images = np.reshape(images, (-1, 28, 28))
formatted_labels = np.argmax(labels, axis=1)
formatted_predicts = np.argmax(predicts, axis=1)
# Create subplot plot
plt.figure(plot_title.format(name=name))
for index in range(rows*cols):
# Create subplot of each sample
splt = plt.subplot(rows, cols, index+1)
splt.set_title(subplot_title.format(
expected=formatted_labels[start+index],
predicted=formatted_predicts[start+index]
))
splt.axis('off')
splt.imshow(formatted_images[start+index,:,:], cmap='gray')
# Show plot and then print error
plt.show()
print(error_title.format(name=name, error=error))
# ----------------------------------- Session ----------------------------------
with tf.Session() as sess:
# Initialize variables
sess.run(tf.global_variables_initializer())
# Create session writer
writer = tf.summary.FileWriter(LOGPATH,
graph=tf.get_default_graph())
# ---------------------- Initial Run -----------------------
# Compute sample data
print('Compute initial prediction')
predicts_0 = sess.run(p, feed_dict={z:mnist.test.images})
error_0 = sess.run(error, feed_dict={z:mnist.test.images,
p_:mnist.test.labels})
# Plot initial sample
print('Plot initial prediction sample')
show_sample(name='Initial',
images=mnist.test.images,
labels=mnist.test.labels,
predicts=predicts_0,
error=error_0)
# --------------------- Training Step ----------------------
print('Training Neural Network...')
for i in tqdm(range(TRAINING_EPOCHS), desc='Training'):
# Train batch and add summary
batch_zs, batch_ps = mnist.train.next_batch(BATCH_SIZE)
_, summ = sess.run([trainer, summary],
feed_dict={z:batch_zs, p_:batch_ps})
writer.add_summary(summ, i)
# ----------------------- Final Run ------------------------
# Get Sample Images and labels
print('Compute final prediction')
predicts_1 = sess.run(p, feed_dict={z:mnist.test.images})
error_1 = sess.run(error, feed_dict={z:mnist.test.images,
p_:mnist.test.labels})
# Plot final samples
print('Plot final prediction sample')
show_sample(name='Final',
images=mnist.test.images,
labels=mnist.test.labels,
predicts=predicts_1,
error=error_1)
# ---------------------- Close Writer ----------------------
writer.close()
| 32.662983 | 80 | 0.610792 | 748 | 5,912 | 4.73262 | 0.275401 | 0.025424 | 0.025424 | 0.015819 | 0.138136 | 0.085311 | 0.085311 | 0.085311 | 0.085311 | 0.085311 | 0 | 0.022331 | 0.204668 | 5,912 | 180 | 81 | 32.844444 | 0.73054 | 0.32933 | 0 | 0.072289 | 0 | 0 | 0.084534 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.012048 | false | 0 | 0.084337 | 0 | 0.096386 | 0.084337 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bc3d68f5a135f38ae88993c26c1f48b87b8373e0 | 9,168 | py | Python | scripts/2_dPCA_and_transitions/190529_select_single_cells_with_high_variation.py | Mateo-Lopez-Espejo/context_probe_analysis | 55461057fd01f00124aa46682b335313af9cc0f8 | [
"RSA-MD"
] | null | null | null | scripts/2_dPCA_and_transitions/190529_select_single_cells_with_high_variation.py | Mateo-Lopez-Espejo/context_probe_analysis | 55461057fd01f00124aa46682b335313af9cc0f8 | [
"RSA-MD"
] | null | null | null | scripts/2_dPCA_and_transitions/190529_select_single_cells_with_high_variation.py | Mateo-Lopez-Espejo/context_probe_analysis | 55461057fd01f00124aa46682b335313af9cc0f8 | [
"RSA-MD"
] | null | null | null | import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from src.data import epochs as cpe, rasters as tp
import nems.recording as recording
import nems_lbhb.baphy as nb
'''
rough selection of good cells based on an arbitrary threshold on their response amplitude (mean)
and on their context-dependent response variation (std), for each probe at each site
'''
site = 'AMT032a' # great site. PEG
modelname = 'resp'
options = {'batch': 316,
'siteid': site,
'stimfmt': 'envelope',
'rasterfs': 100,
'recache': False,
'runclass': 'CPN',
           'stim': False}  # ToDo: cache stims, spectrograms?
load_URI = nb.baphy_load_recording_uri(**options)
loaded_rec = recording.load_recording(load_URI)
rec = cpe.set_recording_subepochs(loaded_rec, set_pairs=True)
sig = rec['resp']
eps = sig.epochs
# AMT032a
goodcells = ['AMT032a-12-1', 'AMT032a-15-1', 'AMT032a-17-1', 'AMT032a-21-1', 'AMT032a-26-2', 'AMT032a-28-1',
'AMT032a-34-2', 'AMT032a-38-1', 'AMT032a-38-2', 'AMT032a-40-1', 'AMT032a-40-2', 'AMT032a-41-1',
'AMT032a-44-1']
best_cell = 'AMT032a-40-2'
# results_file = nd.get_results_file(316) # cpp batch.
all_sites = ['ley070a', 'ley072b', 'AMT028b', 'AMT029a', 'AMT030a', 'AMT031a', 'AMT032a']
df = list()
for site in all_sites:
modelname = 'resp'
options = {'batch': 316,
'siteid': site,
'stimfmt': 'envelope',
'rasterfs': 100,
'recache': False,
'runclass': 'CPN',
               'stim': False}  # ToDo: cache stims, spectrograms?
try:
load_URI = nb.baphy_load_recording_uri(**options)
loaded_rec = recording.load_recording(load_URI)
except:
print(f'failed importing{site}')
continue
rec = cpe.set_recording_subepochs(loaded_rec, set_pairs=True)
sig = rec['resp'].rasterize()
eps = sig.epochs
# define wich cells present more context induced variability
# Context x Probe x Repetition x Unit x Time
full_array, bad_cpp, good_cpp, context_names, probe_names = tp.make_full_array(sig, 'CPN')
full_PSTH = np.nanmean(full_array, axis=2) # collapses across repetitions
    probe_PSTH = full_PSTH[:, :, :, int(np.floor(full_PSTH.shape[-1] / 2)):]  # takes the probe response, i.e. the second half
# calculates probe wise values
# calculates cell single number context driven probe response variation
ctx_vars = np.nanstd(probe_PSTH, axis=0) # collapses across contexts
probes_var_over_time = np.nanmean(ctx_vars, axis=-1) # collapses across time
# calculate cell single number probe response amplitude
ctx_amps = np.nanmean(probe_PSTH, axis=0) # collapses across contexts
probes_amp_over_time = np.nanmean(ctx_amps, axis=-1) # collapses across time
for cc, cellid in enumerate(sig.chans):
for pp, probe in enumerate(probe_names):
d = {'cellid': cellid,
'epoch': probe,
'parameter': 'std',
'value': probes_var_over_time[pp, cc]}
df.append(d)
d = {'cellid': cellid,
'epoch': probe,
'parameter': 'mean',
'value': probes_amp_over_time[pp, cc]}
df.append(d)
##########################################################################################
# Uses silence as baseline activity for mean and standard deviation of z-score calculation
silences = sig.extract_epochs(['PreStimSilence', 'PostStimSilence'])
silences = np.concatenate(list(silences.values()), axis=0) # shape Reps x Cells x Time
silence_mean = np.nanmean(silences, axis=(0, 2))
silence_std = np.nanstd(silences, axis=(0, 2))
# stores silence meand and std to df
for cc, cellid in enumerate(sig.chans):
d = {'cellid': cellid,
'epoch': 'silence',
'parameter': 'mean',
'value': silence_mean[cc]}
df.append(d)
d = {'cellid': cellid,
'epoch': 'silence',
'parameter': 'std',
'value': silence_std[cc]}
df.append(d)
full_zscores = np.empty(full_array.shape)
full_zscores[:] = np.nan
for cc in range(full_zscores.shape[3]): # iterates over the cell dimension
full_zscores[:, :, :, cc, :] = (full_array[:, :, :, cc, :] - silence_mean[cc]) / silence_std[cc]
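    # Worked check (hypothetical rates): with a silence baseline of mean 2 and
    # std 1, a response of 5 becomes a z-score of (5 - 2) / 1 = 3.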
# calculates mean and std for the zscore as done before for the raw values
zscore_PSTH = np.nanmean(full_zscores, axis=2) # collapses across repetitions
    z_probe_PSTH = zscore_PSTH[:, :, :,
                   int(np.floor(zscore_PSTH.shape[-1] / 2)):]  # takes the probe response, i.e. the second half
# calculates probe wise values
# calculates cell single number context driven probe response variation
z_ctx_vars = np.nanstd(z_probe_PSTH, axis=0) # collapses across contexts
z_probes_var_over_time = np.nanmean(z_ctx_vars, axis=-1) # collapses across time
# calculate cell single number probe response amplitude
z_ctx_amps = np.nanmean(z_probe_PSTH, axis=0) # collapses across contexts
z_probes_amp_over_time = np.nanmean(z_ctx_amps, axis=-1) # collapses across time
for cc, cellid in enumerate(sig.chans):
for pp, probe in enumerate(probe_names):
d = {'cellid': cellid,
'epoch': probe,
'parameter': 'zstd',
'value': z_probes_var_over_time[pp, cc]}
df.append(d)
d = {'cellid': cellid,
'epoch': probe,
'parameter': 'zscore',
'value': z_probes_amp_over_time[pp, cc]}
df.append(d)
DF = pd.DataFrame(df)
wdf = DF.copy()
wdf['site'] = wdf['cellid'].str.slice(0, 7, 1)
sites = wdf.site.unique()
########################################################################################################################
# plots amplitude response vs context induced variance for individual probes, and the mean across probes, for each site
fig, axes = plt.subplots()
x_ax = 'zscore'
y_ax = 'zstd'
x_thr = 0.18
y_thr = 0
selected_cells = list()
for ss, site in enumerate(sites):
ffsite = wdf.site == site
ffamp = wdf.parameter == x_ax
ffvar = wdf.parameter == y_ax
ffepoch = wdf.epoch.str.match(r'\AP\d*')
amps = wdf.loc[ffsite & ffamp & ffepoch, :]
vars = wdf.loc[ffsite & ffvar & ffepoch, :]
p_amps = amps.pivot(index='epoch', columns='cellid', values='value')
p_vars = vars.pivot(index='epoch', columns='cellid', values='value')
X = np.mean(p_amps, axis=0)
Y = np.mean(p_vars, axis=0)
sss = X.index.values[(X >= x_thr) & (Y >= y_thr)]
selected_cells.extend(sss)
    # defines error bars as the range of the data, i.e. min and max values
Xerr = np.stack([np.mean(p_amps.values, axis=0) - np.min(p_amps.values, axis=0),
np.max(p_amps.values, axis=0) - np.mean(p_amps.values, axis=0)], axis=0)
Yerr = np.stack([np.mean(p_vars.values, axis=0) - np.min(p_vars.values, axis=0),
np.max(p_vars.values, axis=0) - np.mean(p_vars.values, axis=0)], axis=0)
# Xerr = np.std(p_amps.values, axis=0)
# Yerr = np.std(p_vars.values, axis=0)
axes.errorbar(X, Y, xerr=Xerr, yerr=Yerr, fmt='o', color=f'C{ss}', ecolor=f'C{ss}', label=site)
axes.axvline(x_thr, linestyle='--', color='black')
axes.axhline(y_thr, linestyle='--', color='black')
axes.legend()
axes.set_xlabel(x_ax)
axes.set_ylabel(y_ax)
axes.set_title('probe response and context driven variation per cell')
print(len(selected_cells))
print(selected_cells)
########################################################################################################################
# plots individual cells (color) probe responses, and mean across probes (transparent, bold) axes = np.ravel(axes)
# for all different sites (subplots)
fig, axes = plt.subplots(2, 4, sharex=True, sharey=True)
axes = np.ravel(axes)
x_ax = 'mean'
y_ax = 'std'
for ss, site in enumerate(sites):
site_DF = wdf.loc[wdf.site == site, :]
site_cells = site_DF.cellid.unique()
prop_cycle = plt.rcParams['axes.prop_cycle']
colors = prop_cycle.by_key()['color']
for (cc, cellid), color in zip(enumerate(site_cells), colors):
# if cc == 9:
# continue
ffsite = wdf.site == site
ffcell = wdf.cellid == cellid
ffepoch = wdf.epoch.str.match(r'\AP\d*')
ffparam = wdf.parameter.isin([x_ax, y_ax])
filtered = wdf.loc[ffsite & ffcell & ffepoch & ffparam, :]
pivoted = filtered.pivot(index='epoch', columns='parameter', values='value')
mean_probes = np.nanmean(pivoted, axis=0)
x_mean = np.nanmean(pivoted[x_ax])
y_mean = np.nanmean(pivoted[y_ax])
axes[ss].scatter(pivoted[x_ax].values, pivoted[y_ax].values, color=color, alpha=0.3)
axes[ss].scatter(x_mean, y_mean, color=color, alpha=1)
axes[ss].set_title(site)
# axes[ss].legend()
########################################################################################################################
| 38.359833 | 120 | 0.595659 | 1,218 | 9,168 | 4.349754 | 0.234811 | 0.020763 | 0.022839 | 0.020385 | 0.423745 | 0.39128 | 0.341072 | 0.304077 | 0.272178 | 0.260853 | 0 | 0.024545 | 0.226767 | 9,168 | 238 | 121 | 38.521008 | 0.72281 | 0.172557 | 0 | 0.358491 | 0 | 0 | 0.112976 | 0 | 0 | 0 | 0 | 0.004202 | 0 | 1 | 0 | false | 0 | 0.044025 | 0 | 0.044025 | 0.018868 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70abd50b114b2b0483448675d5c312036f5cf723 | 9,020 | py | Python | mysite/resource/resource.py | wuchunlong0/uk_upfile_down | e7e9bdc0a724d3bc366ff34525a1524729046f51 | [
"Apache-2.0"
] | null | null | null | mysite/resource/resource.py | wuchunlong0/uk_upfile_down | e7e9bdc0a724d3bc366ff34525a1524729046f51 | [
"Apache-2.0"
] | null | null | null | mysite/resource/resource.py | wuchunlong0/uk_upfile_down | e7e9bdc0a724d3bc366ff34525a1524729046f51 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# 1. A newly created package directory must contain an __init__.py file; otherwise it cannot be imported by other files (from ...) or read/write files along its path.
# 2. In urls.py, register the first-level route name mytest: in .../mysite/mysite/urls.py add url(r'^mytest/', include('account.mytest.urls')),
# 3. In admin.py, configure the model's admin display: in .../mysite/account/admin.py add @admin.register(Testusername)
# 4. In templates, add the template subdirectory /mytest
from __future__ import unicode_literals
import datetime
import os,shutil
import json
from django.shortcuts import render
from django.http.response import HttpResponseRedirect,HttpResponse,StreamingHttpResponse
from .models import Upresources,Commentresources
from django.contrib import messages
from django.contrib.auth.decorators import login_required
from myAPI.pageAPI import Page, _get_model_by_page
from myAPI.convertAPI import sizeConvert
from myAPI.myFile import MyFile,WriteFile,searchTxt,GetTxtfile
from myAPI.fileAPI import GetfileLineTxt
from myAPI.download import downLoadFile
from django.contrib.auth.models import User, Group
PAGE_NUM = '3'  # number of items displayed per page
# return the users of the Operator group (plus superusers) as a list
def _getOperators():
operators = Group.objects.get(name='Operator').user_set.all()
return [user for user in User.objects.all() if user.is_superuser or user in operators]
# http://localhost:8000/resource/test/
def test(request):
    print('===='+os.getcwd())  # current working directory, e.g. /Users/wuchunlong/local/wcl6005/Project-Account-Online/mysite
return HttpResponse('ok')
# save an uploaded file
def save_upfile(filepath,mode):
    destination = open(os.path.join(filepath,mode.name),'wb+')  # open the target file for binary writing
    for chunk in mode.chunks():  # write the file in chunks
        destination.write(chunk)
    destination.close()
def save_upimg(filepath,mode,filename):
    destination = open(os.path.join(filepath,filename),'wb+')  # open the target file for binary writing
    for chunk in mode.chunks():  # write the file in chunks
        destination.write(chunk)
    destination.close()
# Only superusers or Operators may upload resources and write them to the database: http://localhost:8000/resource/uploadfile/
# Front-end validation: first check whether the sizes of the resource file (upfile) and the image (upImg) exceed the threshold, then check that the second file's extension is valid.
# Back-end validation: check whether a record with the same title or uploadfile already exists.
# Note: the image filename stored in the database differs in path from the filename used when saving the file.
@login_required
def uploadfile(request):
os_dir = os.getcwd()
filepath = '%s/static/upload/upfile/' %(os_dir)
imgpath = '%s/static/upload/upimg/' %(os_dir)
groups = request.user.groups.values_list('name',flat=True)
if not (request.user.is_superuser or 'Operator' in groups):
return HttpResponseRedirect('/login/')
q = request.GET.get('q','')
if q != '':
return HttpResponseRedirect('/resource/search/')
if request.method == 'POST':
Myfile = request.FILES.get("upfile", None)
if not Myfile:
            messages.info(request, 'Warning: no upload file received!')
return HttpResponseRedirect('/resource/uploadfile/')
MyImg = request.FILES.get("upImg", None)
if not MyImg:
            messages.info(request, 'Warning: no upload image received!')
return HttpResponseRedirect('/resource/uploadfile/')
title = request.POST['title']
istitle = Upresources.objects.filter(title = title)
        if istitle:  # check whether a record with the same title already exists
            messages.info(request, 'Warning: resource title - {} is duplicated! Please choose another title.'.format(title))
return HttpResponseRedirect('/resource/uploadfile/')
uploadfile = filepath + Myfile.name
isuploadfile = Upresources.objects.filter(uploadfile = uploadfile)
        if isuploadfile:  # check whether the same file has already been uploaded
            messages.info(request, 'Warning: upload file - {} has already been uploaded!'.format(Myfile.name))
return HttpResponseRedirect('/resource/uploadfile/')
title_imgname = '%s.jpg' %(title)
        # save the uploaded file and its image
save_upfile(filepath,Myfile)
save_upimg(imgpath,MyImg,title_imgname)
shutil.copy('%s%s' %(imgpath,title_imgname),'%s/static_common/upload/upimg/' %(os_dir) )
        # write the record to the database
        u = Upresources(
            uploadfile = uploadfile,  # filepath + Myfile.name; the database stores the filename including its path
            uploadimg = '%s%s' %('/static/upload/upimg/', title_imgname),  # the database stores the filename including its path
            username = request.user,  # the logged-in user
title = title,
editor = request.POST['editor'],
source = request.POST['source'],
type = request.POST['type'],
cid1 = request.POST['cid1'],
environment = request.POST['environment'],
label = request.POST['tag'],
downnum = '0',
browsernum = '0',
            size = sizeConvert(int(request.POST['upfilesize']))  # convert the byte count to a human-readable B/KB/MB/GB/TB string
)
u.save()
        upresources,page,num = _get_model_by_page(request,Upresources.objects.all(),PAGE_NUM)  # PAGE_NUM items per page
return render(request, 'resource/showupresource.html', context=locals())
return render(request, 'resource/uploadfile.html', context=locals())
# http://localhost:8000/resource/search/
def search(request):
model = Upresources
title = request.GET.get('q','')
type = request.GET.get('type','')
date = request.GET.get('date','')
browsernum = request.GET.get('browsernum','')
downnum = request.GET.get('downnum','')
    fieldname = 'All resources'
upresource_list = model.objects.all()
if title:
        fieldname = 'Title: %s' % title
upresource_list = model.objects.filter(title__icontains = title)
if type:
        fieldname = 'Type: %s' % type
upresource_list = model.objects.filter(type__icontains = type)
if date:
        fieldname = 'Sorted by update time'
upresource_list = model.objects.order_by('-date')
if browsernum:
        fieldname = 'Sorted by view count'
list=model.objects.extra(select={'num':'browsernum+0'})
upresource_list=list.extra(order_by=["-num"])
if downnum:
        fieldname = 'Sorted by download count'
list=model.objects.extra(select={'num':'downnum+0'})
upresource_list=list.extra(order_by=["-num"])
upresources,page,num = _get_model_by_page(request,upresource_list,PAGE_NUM)
return render(request, 'resource/showupresource.html', context=locals())
# list uploaded resources  http://localhost:8000/resource/showupresource/
def showupresource(request):
    fieldname = 'All resources'
operators = _getOperators()
upresources,page,num = _get_model_by_page(request,Upresources.objects.all(),PAGE_NUM)
return render(request, 'resource/showupresource.html', context=locals())
# show comments on a resource  http://localhost:8000/resource/showcomment/
def showcomment(request):
    title = request.GET.get('title','')  # look up the record by its title
if title != '':
browsernum = Upresources.objects.get(title = str(title)).browsernum
browsernum = int(browsernum) + 1
Upresources.objects.filter(title = title).update(browsernum = browsernum)
upresources = Upresources.objects.filter(title = title)
comment_list = Commentresources.objects.filter(title = title)
    comments,page,num = _get_model_by_page(request,comment_list,PAGE_NUM)  # PAGE_NUM items per page
return render(request, 'resource/showcomment.html', context=locals())
# write a comment on a resource to the database  http://localhost:8000/resource/upcomment/
def upcomment(request):
if request.method == 'POST':
title = request.POST['title']
commentresources = Commentresources(
username = request.user,
title = title,
editor = request.POST['editor']
)
commentresources.save()
return HttpResponseRedirect('/resource/showcomment/?title=%s' % (title))
return render(request, 'resource/showupresource.html', context=locals())
# download a resource  http://localhost:8000/resource/downFile/
def downFile(request):
uploadfile = request.GET.get('uploadfile','')
if uploadfile != '':
downLoad = downLoadFile(uploadfile)
        downnum = Upresources.objects.get(uploadfile = uploadfile).downnum  # the uploadfile field must be unique!
downnum = int(downnum) + 1
Upresources.objects.filter(uploadfile = uploadfile).update(downnum = downnum)
return downLoad
return HttpResponseRedirect('/resource/showupresource/')
# delete a resource and its images  http://localhost:8000/resource/delete/
def delete(request):
title = request.GET.get('title','')
if title:
Upresources.objects.filter(title=title).delete()
os_dir = os.getcwd()
static_imgname = '%s/static/upload/upimg/%s.jpg' %(os_dir, title)
static_common_imgname = '%s/static_common/upload/upimg/%s.jpg' %(os_dir, title)
os.remove(static_imgname) if(os.path.exists(static_imgname)) else ''
os.remove(static_common_imgname) if(os.path.exists(static_common_imgname)) else ''
return HttpResponseRedirect('/resource/showupresource/')
# video playback  http://localhost:8000/resource/videoplay/
def videoplay(request):
name = request.GET.get('name','')
videoname = name.split('mysite')[-1]
if 'mp4' in videoname:
return render(request, 'resource/videoplay.html', context=locals())
    return HttpResponseRedirect('/resource/showupresource/')
| 43.365385 | 115 | 0.659645 | 975 | 9,020 | 6.014359 | 0.247179 | 0.044338 | 0.022169 | 0.03837 | 0.27234 | 0.193554 | 0.132674 | 0.119372 | 0.071453 | 0.071453 | 0 | 0.007864 | 0.210532 | 9,020 | 208 | 116 | 43.365385 | 0.815616 | 0.135366 | 0 | 0.22561 | 0 | 0 | 0.115578 | 0.069192 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073171 | false | 0 | 0.091463 | 0 | 0.286585 | 0.006098 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70ac806edc6f016b63cb6d6f775db279c975354f | 7,071 | py | Python | fuji_server/models/body.py | ignpelloz/fuji | 5e6fe8333c1706d1b628a84108bff7a97fdf11a7 | [
"MIT"
] | 25 | 2020-09-22T08:28:45.000Z | 2022-02-23T07:10:28.000Z | fuji_server/models/body.py | ignpelloz/fuji | 5e6fe8333c1706d1b628a84108bff7a97fdf11a7 | [
"MIT"
] | 188 | 2020-05-11T08:54:59.000Z | 2022-03-31T12:28:15.000Z | fuji_server/models/body.py | ignpelloz/fuji | 5e6fe8333c1706d1b628a84108bff7a97fdf11a7 | [
"MIT"
] | 20 | 2020-05-04T13:56:26.000Z | 2022-03-02T13:39:04.000Z | # -*- coding: utf-8 -*-
from __future__ import absolute_import
from datetime import date, datetime # noqa: F401
from typing import List, Dict # noqa: F401
from fuji_server.models.base_model_ import Model
from fuji_server import util
class Body(Model):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
def __init__(self,
object_identifier: str = None,
test_debug: bool = False,
metadata_service_endpoint: str = None,
metadata_service_type: str = None,
use_datacite: bool = None,
oaipmh_endpoint: str = None): # noqa: E501
"""Body - a model defined in Swagger
:param object_identifier: The object_identifier of this Body. # noqa: E501
:type object_identifier: str
:param test_debug: The test_debug of this Body. # noqa: E501
:type test_debug: bool
:param metadata_service_endpoint: The metadata_service_endpoint of this Body. # noqa: E501
:type metadata_service_endpoint: str
:param metadata_service_type: The metadata_service_type of this Body. # noqa: E501
:type metadata_service_type: str
:param use_datacite: The use_datacite of this Body. # noqa: E501
:type use_datacite: bool
:param oaipmh_endpoint: The oaipmh_endpoint of this Body. # noqa: E501
:type oaipmh_endpoint: str
"""
self.swagger_types = {
'object_identifier': str,
'test_debug': bool,
'metadata_service_endpoint': str,
'metadata_service_type': str,
'use_datacite': bool,
'oaipmh_endpoint': str
}
self.attribute_map = {
'object_identifier': 'object_identifier',
'test_debug': 'test_debug',
'metadata_service_endpoint': 'metadata_service_endpoint',
'metadata_service_type': 'metadata_service_type',
'use_datacite': 'use_datacite',
'oaipmh_endpoint': 'oaipmh_endpoint'
}
self._object_identifier = object_identifier
self._test_debug = test_debug
self._metadata_service_endpoint = metadata_service_endpoint
self._metadata_service_type = metadata_service_type
self._use_datacite = use_datacite
self._oaipmh_endpoint = oaipmh_endpoint
@classmethod
def from_dict(cls, dikt) -> 'Body':
"""Returns the dict as a model
:param dikt: A dict.
:type: dict
:return: The body of this Body. # noqa: E501
:rtype: Body
"""
return util.deserialize_model(dikt, cls)
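    # A minimal usage sketch (the payload values below are illustrative, not
    # taken from the API spec):
    #   body = Body.from_dict({'object_identifier': 'https://doi.org/10.1234/abc',
    #                          'test_debug': True})
    #   body.object_identifier  # -> 'https://doi.org/10.1234/abc'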
@property
def object_identifier(self) -> str:
"""Gets the object_identifier of this Body.
        The full identifier of the data object that needs to be evaluated # noqa: E501
:return: The object_identifier of this Body.
:rtype: str
"""
return self._object_identifier
@object_identifier.setter
def object_identifier(self, object_identifier: str):
"""Sets the object_identifier of this Body.
        The full identifier of the data object that needs to be evaluated # noqa: E501
:param object_identifier: The object_identifier of this Body.
:type object_identifier: str
"""
if object_identifier is None:
raise ValueError('Invalid value for `object_identifier`, must not be `None`') # noqa: E501
self._object_identifier = object_identifier
@property
def test_debug(self) -> bool:
"""Gets the test_debug of this Body.
        Indicate if the detailed evaluation procedure of the metrics should be included in the response # noqa: E501
:return: The test_debug of this Body.
:rtype: bool
"""
return self._test_debug
@test_debug.setter
def test_debug(self, test_debug: bool):
"""Sets the test_debug of this Body.
        Indicate if the detailed evaluation procedure of the metrics should be included in the response # noqa: E501
:param test_debug: The test_debug of this Body.
:type test_debug: bool
"""
self._test_debug = test_debug
@property
def metadata_service_endpoint(self) -> str:
"""Gets the metadata_service_endpoint of this Body.
The URL of the catalogue endpoint (e.g. OAI-PMH data-provider) # noqa: E501
:return: The metadata_service_endpoint of this Body.
:rtype: str
"""
return self._metadata_service_endpoint
@metadata_service_endpoint.setter
def metadata_service_endpoint(self, metadata_service_endpoint: str):
"""Sets the metadata_service_endpoint of this Body.
The URL of the catalogue endpoint (e.g. OAI-PMH data-provider) # noqa: E501
:param metadata_service_endpoint: The metadata_service_endpoint of this Body.
:type metadata_service_endpoint: str
"""
self._metadata_service_endpoint = metadata_service_endpoint
@property
def metadata_service_type(self) -> str:
"""Gets the metadata_service_type of this Body.
:return: The metadata_service_type of this Body.
:rtype: str
"""
return self._metadata_service_type
@metadata_service_type.setter
def metadata_service_type(self, metadata_service_type: str):
"""Sets the metadata_service_type of this Body.
:param metadata_service_type: The metadata_service_type of this Body.
:type metadata_service_type: str
"""
self._metadata_service_type = metadata_service_type
@property
def use_datacite(self) -> bool:
"""Gets the use_datacite of this Body.
Indicates if DataCite content negotiation (using the DOI) shall be used to collect metadata # noqa: E501
:return: The use_datacite of this Body.
:rtype: bool
"""
return self._use_datacite
@use_datacite.setter
def use_datacite(self, use_datacite: bool):
"""Sets the use_datacite of this Body.
Indicates if DataCite content negotiation (using the DOI) shall be used to collect metadata # noqa: E501
:param use_datacite: The use_datacite of this Body.
:type use_datacite: bool
"""
self._use_datacite = use_datacite
@property
def oaipmh_endpoint(self) -> str:
"""Gets the oaipmh_endpoint of this Body.
(Deprecated) The URL of the OAI-PMH data-provider # noqa: E501
:return: The oaipmh_endpoint of this Body.
:rtype: str
"""
return self._oaipmh_endpoint
@oaipmh_endpoint.setter
def oaipmh_endpoint(self, oaipmh_endpoint: str):
"""Sets the oaipmh_endpoint of this Body.
(Deprecated) The URL of the OAI-PMH data-provider # noqa: E501
:param oaipmh_endpoint: The oaipmh_endpoint of this Body.
:type oaipmh_endpoint: str
"""
self._oaipmh_endpoint = oaipmh_endpoint
| 33.353774 | 120 | 0.650969 | 866 | 7,071 | 5.069284 | 0.13164 | 0.150342 | 0.070615 | 0.041002 | 0.679727 | 0.557175 | 0.494989 | 0.395444 | 0.369021 | 0.271071 | 0 | 0.012527 | 0.277471 | 7,071 | 211 | 121 | 33.511848 | 0.846741 | 0.455805 | 0 | 0.236842 | 0 | 0 | 0.111661 | 0.042685 | 0 | 0 | 0 | 0 | 0 | 1 | 0.184211 | false | 0 | 0.065789 | 0 | 0.355263 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70b3d339713b3e3aa53dae35d24eff40b0f81f14 | 7,995 | py | Python | intercepting_proxy.py | Seabreg/wsuspect-proxy | 89f93756e6617e4798a16e6fc8c4e2b186765328 | [
"MIT"
] | 238 | 2015-10-24T01:00:08.000Z | 2022-03-29T06:50:06.000Z | intercepting_proxy.py | Seabreg/wsuspect-proxy | 89f93756e6617e4798a16e6fc8c4e2b186765328 | [
"MIT"
] | 4 | 2015-10-22T10:31:25.000Z | 2017-02-08T14:47:33.000Z | intercepting_proxy.py | Seabreg/wsuspect-proxy | 89f93756e6617e4798a16e6fc8c4e2b186765328 | [
"MIT"
] | 57 | 2015-10-25T20:26:33.000Z | 2022-01-05T14:32:38.000Z | # The MIT License (MIT)
#
# Copyright (c) 2015 Context Information Security Ltd.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
try:
from cStringIO import StringIO
except:
from io import BytesIO as StringIO
try:
from urlparse import urlparse, urlunparse
except:
from urllib.parse import urlparse, urlunparse
from twisted.internet import reactor
from twisted.python.log import startLogging
from twisted.web import server, resource, proxy, http
from twisted.python.filepath import FilePath
# add timeout features to the ProxyClient requests
from twisted.protocols.policies import TimeoutMixin
#ProxyClient(twisted.web.http.HTTPClient)
class InterceptingProxyClient(proxy.ProxyClient, TimeoutMixin):
def __init__(self, command, rest, version, headers, data, father):
proxy.ProxyClient.__init__(self, command, rest, version, headers, data, father)
# Define variables to track a terminated request and set a timeout
# set a callback handler for when a "finish" is reached
self.finished = False
self.setTimeout(20)
d = father.notifyFinish()
d.addBoth(self.onRequestFinished)
def onRequestFinished(self, message=None):
self.finish()
def finish(self):
# terminate the connection the proper way
# if the parent had not been terminated, terminate it
if not self.father.finished and not self.father._disconnected:
self.father.finish()
        # if this client connection has not been terminated yet, close it
if not self.finished:
self.transport.loseConnection()
self.setTimeout(None)
self.finished = True
def connectionMade(self):
        # Edge case where the connection terminated prior to the event notification being set up
if self.father._disconnected:
self.finish()
return
# Send request to web server
proxy.ProxyClient.connectionMade(self)
def timeoutConnection(self):
# Recognize a timeout
TimeoutMixin.timeoutConnection(self)
def handleResponsePart(self, buffer):
# Buffer the output if we intend to modify it
if self.father.has_response_modifiers():
self.father.response_buffer.write(buffer)
else:
proxy.ProxyClient.handleResponsePart(self, buffer)
def handleResponseEnd(self):
# Process the buffered output if we are modifying it
if self.father.has_response_modifiers():
if not self._finished:
# Replace the StringIO with a string for the modifiers
data = self.father.response_buffer.getvalue()
self.father.response_buffer.close()
self.father.response_buffer = data
# Do editing of response headers / content here
self.father.run_response_modifiers()
                self.father.responseHeaders.setRawHeaders('content-length', [str(len(self.father.response_buffer))])
self.father.write(self.father.response_buffer)
proxy.ProxyClient.handleResponseEnd(self)
class InterceptingProxyClientFactory(proxy.ProxyClientFactory):
noisy = False
protocol = InterceptingProxyClient
#ProxyRequest(twisted.web.http.Request)
class InterceptingProxyRequest(proxy.ProxyRequest):
def __init__(self, *args, **kwargs):
proxy.ProxyRequest.__init__(self, *args, **kwargs)
self.response_buffer = StringIO()
self.request_buffer = StringIO()
self.modifiers = self.channel.factory.modifiers
def run_request_modifiers(self):
if not self.has_request_modifiers():
return
if self.requestHeaders.hasHeader('content-length'):
self.request_buffer = self.content.read()
for m in self.modifiers:
m.modify_request(self)
if self.requestHeaders.hasHeader('content-length'):
self.content.seek(0,0)
self.content.write(self.request_buffer)
self.content.truncate()
self.requestHeaders.setRawHeaders('content-length', [len(self.request_buffer)])
def has_request_modifiers(self):
ret = False
for m in self.modifiers:
if m.will_modify_request(self):
ret = True
return ret
def has_response_modifiers(self):
ret = False
for m in self.modifiers:
if m.will_modify_response(self):
ret = True
return ret
def run_response_modifiers(self):
for m in self.modifiers:
m.modify_response(self)
def has_response_server(self):
for m in self.modifiers:
if m.will_serve_response(self):
return True
return False
def serve_resource(self):
body = None
for m in self.modifiers:
if m.will_serve_response(self):
body = m.get_response(self)
break
if not body:
raise Exception('Nothing served a resource')
self.setHeader('content-length', str(len(body)))
if self.method == 'HEAD':
self.write('')
else:
self.write(body)
self.finish()
def process(self):
host = None
port = None
if not self.uri.startswith("http://") and not self.uri.startswith("https://"):
self.uri = "http://" + self.getHeader("Host") + self.uri
parsed_uri = urlparse(self.uri)
self.uri = urlunparse(('', '', parsed_uri.path, parsed_uri.params, parsed_uri.query, parsed_uri.fragment)) or "/"
if self.has_request_modifiers():
self.run_request_modifiers()
if self.has_response_server():
self.serve_resource()
return
protocol = parsed_uri.scheme or 'http'
host = host or parsed_uri.netloc
port = port or parsed_uri.port or self.ports[protocol]
headers = self.getAllHeaders().copy()
if 'host' not in headers:
headers['host'] = host
if ':' in host:
host,_ = host.split(':')
self.content.seek(0, 0)
content = self.content.read()
clientFactory = InterceptingProxyClientFactory(self.method, self.uri, self.clientproto, headers, content, self)
self.reactor.connectTCP(host, port, clientFactory)
#Proxy(twisted.web.http.HTTPChannel)
class InterceptingProxy(proxy.Proxy):
requestFactory = InterceptingProxyRequest
class InterceptingProxyFactory(http.HTTPFactory):
protocol = InterceptingProxy
def add_modifier(self, m):
self.modifiers.append(m)
def __init__(self, modifier, *args, **kwargs):
http.HTTPFactory.__init__(self, *args, **kwargs)
self.modifiers = []
self.add_modifier(modifier)
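# A minimal, hypothetical way to wire this proxy up (the modifier interface is
# assumed from the hooks called above: will_modify_request/modify_request,
# will_modify_response/modify_response, will_serve_response/get_response):
#
#   class NoOpModifier(object):
#       def will_modify_request(self, request): return False
#       def will_modify_response(self, request): return False
#       def will_serve_response(self, request): return False
#
#   factory = InterceptingProxyFactory(NoOpModifier())
#   reactor.listenTCP(8080, factory)
#   reactor.run()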
| 36.340909 | 121 | 0.64803 | 921 | 7,995 | 5.53203 | 0.296417 | 0.029441 | 0.021197 | 0.028263 | 0.158979 | 0.121295 | 0.110697 | 0.069087 | 0.037684 | 0.037684 | 0 | 0.001716 | 0.270919 | 7,995 | 219 | 122 | 36.506849 | 0.872362 | 0.230894 | 0 | 0.228571 | 0 | 0 | 0.02291 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121429 | false | 0 | 0.064286 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70b762ecc904264067cc4b97c2ff253530538b43 | 864 | py | Python | localstack/testing/aws/util.py | matt-mercer/localstack | b69ba25e495c6ef889d33a050b216d0cd1035041 | [
"Apache-2.0"
] | null | null | null | localstack/testing/aws/util.py | matt-mercer/localstack | b69ba25e495c6ef889d33a050b216d0cd1035041 | [
"Apache-2.0"
] | null | null | null | localstack/testing/aws/util.py | matt-mercer/localstack | b69ba25e495c6ef889d33a050b216d0cd1035041 | [
"Apache-2.0"
] | null | null | null | import os
from localstack.utils.aws import aws_stack
def is_aws_cloud() -> bool:
return os.environ.get("TEST_TARGET", "") == "AWS_CLOUD"
def get_lambda_logs(func_name, logs_client=None):
logs_client = logs_client or aws_stack.create_external_boto_client("logs")
log_group_name = f"/aws/lambda/{func_name}"
streams = logs_client.describe_log_streams(logGroupName=log_group_name)["logStreams"]
streams = sorted(streams, key=lambda x: x["creationTime"], reverse=True)
log_events = logs_client.get_log_events(
logGroupName=log_group_name, logStreamName=streams[0]["logStreamName"]
)["events"]
return log_events
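# Usage sketch (assumes a deployed Lambda named "my-func"; the name is illustrative):
#   events = get_lambda_logs("my-func")
#   messages = [e["message"] for e in events]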
def bucket_exists(client, bucket_name: str) -> bool:
buckets = client.list_buckets()
for bucket in buckets["Buckets"]:
if bucket["Name"] == bucket_name:
return True
return False
| 32 | 89 | 0.717593 | 117 | 864 | 5.008547 | 0.435897 | 0.085324 | 0.061433 | 0.081911 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001389 | 0.166667 | 864 | 26 | 90 | 33.230769 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0.114583 | 0.02662 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0 | 0.105263 | 0.052632 | 0.473684 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70b78c4aba5008f2699bbcbaa326944a92145339 | 437 | py | Python | answers/vjha21/Day1/question2.py | arc03/30-DaysOfCode-March-2021 | 6d6e11bf70280a578113f163352fa4fa8408baf6 | [
"MIT"
] | 22 | 2021-03-16T14:07:47.000Z | 2021-08-13T08:52:50.000Z | answers/vjha21/Day1/question2.py | arc03/30-DaysOfCode-March-2021 | 6d6e11bf70280a578113f163352fa4fa8408baf6 | [
"MIT"
] | 174 | 2021-03-16T21:16:40.000Z | 2021-06-12T05:19:51.000Z | answers/vjha21/Day1/question2.py | arc03/30-DaysOfCode-March-2021 | 6d6e11bf70280a578113f163352fa4fa8408baf6 | [
"MIT"
] | 135 | 2021-03-16T16:47:12.000Z | 2021-06-27T14:22:38.000Z | ##Printing the given pattern
## *
## **
## ***
def pattern_generator(n):
spaces = 2*n - 2
for i in range(0,n):
for _ in range(0, spaces):
print(end=" ")
spaces = spaces - 2
for _ in range(0, i+1):
print("* ",end="")
print("\r")
if __name__ == "__main__":
n = int(input("Enter a number upto which you want to print the pattern: "))
pattern_generator(n) | 24.277778 | 79 | 0.510297 | 58 | 437 | 3.637931 | 0.534483 | 0.099526 | 0.113744 | 0.104265 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023973 | 0.331808 | 437 | 18 | 80 | 24.277778 | 0.69863 | 0.107551 | 0 | 0 | 0 | 0 | 0.182768 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0 | 0 | 0.083333 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70ba6161380826d1a5246d3bc65fbdc149ffb3c6 | 5,098 | py | Python | src/app.py | umd-lib/website-searcher | bcc53618085c318d58bde1c0813bd8e33db6f7ec | [
"Apache-2.0"
] | null | null | null | src/app.py | umd-lib/website-searcher | bcc53618085c318d58bde1c0813bd8e33db6f7ec | [
"Apache-2.0"
] | null | null | null | src/app.py | umd-lib/website-searcher | bcc53618085c318d58bde1c0813bd8e33db6f7ec | [
"Apache-2.0"
] | null | null | null | import json
import logging
import os
import sys
import urllib.parse
import requests
import furl
from flask import Flask, Response, request
from dotenv import load_dotenv
from waitress import serve
from paste.translogger import TransLogger
# Searcher to search Google Programmable Search Engine and return results
# in the Quick Search format.
#
# See https://developers.google.com/custom-search/v1/introduction
# Add any environment variables from .env
load_dotenv('../.env')
# Get environment variables
env = {}
for key in ('WEBSITE_SEARCHER_URL', 'WEBSITE_SEARCHER_API_KEY',
'WEBSITE_SEARCHER_ENGINE_ID', 'WEBSITE_SEARCHER_NO_RESULTS_LINK',
'WEBSITE_SEARCHER_MODULE_LINK'):
env[key] = os.environ.get(key)
if env[key] is None:
raise RuntimeError(f'Must provide environment variable: {key}')
search_url = furl.furl(env['WEBSITE_SEARCHER_URL'])
api_key = env['WEBSITE_SEARCHER_API_KEY']
engine_id = env['WEBSITE_SEARCHER_ENGINE_ID']
no_results_link = env['WEBSITE_SEARCHER_NO_RESULTS_LINK']
module_link = env['WEBSITE_SEARCHER_MODULE_LINK']
debug = os.environ.get('FLASK_ENV') == 'development'
logging.root.addHandler(logging.StreamHandler())
loggerWaitress = logging.getLogger('waitress')
logger = logging.getLogger('website-searcher')
if debug:
loggerWaitress.setLevel(logging.DEBUG)
logger.setLevel(logging.DEBUG)
# from http.client import HTTPConnection
# HTTPConnection.debuglevel = 1
# requests_log = logging.getLogger("requests.packages.urllib3")
# requests_log.setLevel(logging.DEBUG)
# requests_log.propagate = True
else:
loggerWaitress.setLevel(logging.INFO)
logger.setLevel(logging.INFO)
logger.info("Starting the website-searcher Flask application")
endpoint = 'website-search'
# Start the flask app, with compression enabled
app = Flask(__name__)
app.config['JSON_AS_ASCII'] = False
@app.route('/')
def root():
return {'status': 'ok'}
@app.route('/ping')
def ping():
return {'status': 'ok'}
@app.route('/search')
def search():
# Get the request parameters
args = request.args
if 'q' not in args or args['q'] == "":
return {
'endpoint': endpoint,
'error': {
'msg': 'q parameter is required',
},
}, 400
query = args['q']
per_page = 3
if 'per_page' in args and args['per_page'] != "":
per_page = args['per_page']
page = 0
if 'page' in args and args['page'] != "" and args['page'] != "%":
page = args['page']
start_index = 1 + int(page) * int(per_page)
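    # e.g. page=2 and per_page=3 give start_index = 7 (Google CSE indexing is 1-based)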
# Execute the Google search
params = {
'q': query, # query
'key': api_key,
'cx': engine_id,
'num': per_page, # number of results
'start': start_index, # starting at this result (1 is the first result)
}
try:
response = requests.get(search_url.url, params=params)
except Exception as err:
logger.error(f'Error submitting search url={search_url.url}, params={params}\n{err}')
return {
'endpoint': endpoint,
'error': {
                'msg': 'Error submitting search',
},
}, 500
if response.status_code != 200:
logger.error(f'Received {response.status_code} when submitted {query=}')
return {
'endpoint': endpoint,
'error': {
'msg': f'Received {response.status_code} when submitted {query=}',
},
}, 500
logger.debug(f'Submitted url={search_url.url}, params={params}')
logger.debug(f'Received response {response.status_code}')
logger.debug(response.text)
data = json.loads(response.text)
# Gather the search results into our response
results = []
response = {
'endpoint': endpoint,
'query': query,
"per_page": str(per_page),
"page": str(page),
"total": int(data['searchInformation']['totalResults']),
"module_link": module_link.replace('{query}',
urllib.parse.quote_plus(query)),
"no_results_link": no_results_link,
"results": results
}
if 'items' in data:
for item in data['items']:
results.append({
'title': item['title'].replace(' | UMD Libraries',''),
'link': item['formattedUrl'],
'description': item['snippet'],
'item_format': 'web_page',
'extra': {
'displayLink': item['displayLink'],
'snippet': item['snippet'],
'htmlSnippet': item['htmlSnippet'],
},
})
return response
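# Example request against a locally running instance (query value is illustrative):
#   curl 'http://localhost:5000/search?q=maps&per_page=3&page=0'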
if __name__ == '__main__':
# This code is not reached when running "flask run". However the Docker
# container runs "python app.py" and host='0.0.0.0' is set to ensure
# that flask listens on port 5000 on all interfaces.
# Run waitress WSGI server
serve(TransLogger(app, setup_console_handler=True),
host='0.0.0.0', port=5000, threads=10)
| 28.640449 | 93 | 0.620439 | 601 | 5,098 | 5.118136 | 0.324459 | 0.058518 | 0.021131 | 0.026333 | 0.132965 | 0.066645 | 0.029259 | 0.029259 | 0 | 0 | 0 | 0.009714 | 0.252844 | 5,098 | 177 | 94 | 28.80226 | 0.797847 | 0.167713 | 0 | 0.132231 | 0 | 0 | 0.269431 | 0.082938 | 0 | 0 | 0 | 0 | 0 | 1 | 0.024793 | false | 0 | 0.099174 | 0.016529 | 0.173554 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70bda0a9ea0185198b22cfef2423b53c44f47ac0 | 1,240 | py | Python | sorting/merge_sort.py | dgobalak/AlgoVisualizer | db8a65e89c03af44220ccbc9d77b4470309204e6 | [
"Apache-2.0"
] | null | null | null | sorting/merge_sort.py | dgobalak/AlgoVisualizer | db8a65e89c03af44220ccbc9d77b4470309204e6 | [
"Apache-2.0"
] | null | null | null | sorting/merge_sort.py | dgobalak/AlgoVisualizer | db8a65e89c03af44220ccbc9d77b4470309204e6 | [
"Apache-2.0"
] | null | null | null | from sorting.sorting_helpers import *
import pygame
import sys
def merge_sort(screen, matrix, nums, start, end):
if start < end:
mid = (start + end) // 2
merge_sort(screen, matrix, nums, start, mid)
merge_sort(screen, matrix, nums, mid+1, end)
merge(screen, matrix, nums, start, mid, end)
def merge(screen, matrix, nums, start, mid, end):
tmp = [None] * (end - start + 1)
i, j = start, mid + 1
k = 0
# Merge lists in ascending order
while i <= mid and j <= end:
if nums[i] <= nums[j]:
tmp[k] = nums[i]
i += 1
else:
tmp[k] = nums[j]
j += 1
k += 1
# Add remaining values to list
while i <= mid:
tmp[k] = nums[i]
i += 1
k += 1
while j <= end:
tmp[k] = nums[j]
j += 1
k += 1
# Copy values back to list
for x in range(len(tmp)):
nums[start+x] = tmp[x]
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
sys.exit()
# Update Pygame screen
matrix_from_nums(nums, matrix)
draw_matrix(screen, matrix)
pygame.display.flip()
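# Usage sketch (screen and matrix come from the caller's pygame setup; the helper
# names are those imported from sorting_helpers above):
#   merge_sort(screen, matrix, nums, 0, len(nums) - 1)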
| 22.142857 | 52 | 0.501613 | 170 | 1,240 | 3.617647 | 0.3 | 0.136585 | 0.130081 | 0.136585 | 0.325203 | 0.279675 | 0.146341 | 0.042276 | 0 | 0 | 0 | 0.015504 | 0.375806 | 1,240 | 55 | 53 | 22.545455 | 0.77907 | 0.084677 | 0 | 0.289474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.078947 | 0 | 0.131579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70bdcd8dd13e5cdf4b102f7fada61fe86335e0d1 | 3,936 | py | Python | notebooks/02-marcusinthesky-modelling.py | marcusinthesky/Zindi-Urban-Air-Pollution | 66b119b8a8d9a15a98eb93602f4826aefaaa8a54 | [
"Apache-2.0"
] | null | null | null | notebooks/02-marcusinthesky-modelling.py | marcusinthesky/Zindi-Urban-Air-Pollution | 66b119b8a8d9a15a98eb93602f4826aefaaa8a54 | [
"Apache-2.0"
] | null | null | null | notebooks/02-marcusinthesky-modelling.py | marcusinthesky/Zindi-Urban-Air-Pollution | 66b119b8a8d9a15a98eb93602f4826aefaaa8a54 | [
"Apache-2.0"
] | null | null | null | #%%
import pandas as pd
import numpy as np
import holoviews as hv
import hvplot.pandas
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures, FunctionTransformer
from sklearn.decomposition import PCA
from sklearn.model_selection import RandomizedSearchCV
from sklearn.compose import TransformedTargetRegressor, ColumnTransformer
from sklearn.metrics import get_scorer
from pyearth import Earth
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
# explicitly require this experimental feature
from sklearn.experimental import enable_iterative_imputer # noqa
# now you can import normally from sklearn.impute
from sklearn.impute import IterativeImputer, SimpleImputer
from sklearn.pipeline import make_union
# explicitly require this experimental feature
from sklearn.experimental import enable_hist_gradient_boosting # noqa
# now you can import normally from ensemble
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import StackingRegressor
from pollution.linear_model import TweedieGLM
from pollution.mca import MCA
hv.extension('bokeh')
# %%
train = context.catalog.load('train').drop(columns=['Place_ID'])
test = context.catalog.load('test').drop(columns=['Place_ID'])
sample_submission = context.catalog.load('samplesubmission')
#%%
min_date = train.Date.min()
train = train.assign(Date = lambda df: (df.Date - min_date).dt.days.astype(float))
test = test.assign(Date = lambda df: (df.Date - min_date).dt.days.astype(float))
#%%
train.target.hvplot.kde()
# %%
transformer = Pipeline([('impute', SimpleImputer()),  # IterativeImputer(random_state=0) is an alternative
('poly', PolynomialFeatures()),
('scale', StandardScaler()),
('pca', PCA(15)),
('rescale', StandardScaler())])
glm = TweedieGLM(power=0, max_iter=1000)
mars = Earth()
gb = HistGradientBoostingRegressor()
estimators = [
('mars', mars),
('gb', gb)
]
final_estimator = Pipeline([
('poly', PolynomialFeatures(2)),
('scale', StandardScaler()),
('pca', PCA()),
('regressor', LinearRegression())])
stack = StackingRegressor(
estimators=estimators,
final_estimator=final_estimator
)
model = Pipeline([('transformer', transformer),
('model', stack)])
offset = 1e-9
def add(y):
return np.log(y + offset)
def subtract(y):
return (np.exp(y) - offset)
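# Quick sanity check of the link functions (illustrative):
#   y = np.array([0.0, 1.0, 10.0])
#   np.allclose(subtract(add(y)), y)  # -> True, since subtract inverts add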
link = Pipeline([('function', FunctionTransformer(add, subtract, validate=True))])
scorer = get_scorer('neg_root_mean_squared_error')
pipeline = TransformedTargetRegressor(regressor=model, transformer=link)
#%%
glm_params = {'regressor__model__power': [0, 2, 3],
'regressor__model__alpha': [1e-3, 1e-1, 1],
'regressor__transformer__pca__n_components': [10, 25, 45]}
mars_params = {'regressor__model__mars__max_degree': [1, 2],
'regressor__model__mars__max_terms': [15, 20],
'regressor__transformer__pca__n_components': [10, 25],
'regressor__transformer__poly__degree': [1]}
search = RandomizedSearchCV(pipeline, mars_params, scoring=scorer,
n_iter = 1, n_jobs=-1, return_train_score=True)
X_train, y_train = train.drop(columns=['target', 'target_min', 'target_max', 'target_variance']), train.target
search.fit(X_train, y_train)
results = pd.DataFrame(search.cv_results_)
context.io.save('searchresults', results)
# %%
X_test = test.loc[:, X_train.columns]
# %%
# predict and plot
y_pred = pd.Series(search.predict(X_test), name = y_train.name).clip(0)
submissionkde = y_pred.hvplot.kde()
# %%
# format submission
submission = sample_submission
submission.target = y_pred
context.io.save('submission', submission)
# %%
results.sort_values('mean_test_score').tail()
| 30.045802 | 110 | 0.715193 | 459 | 3,936 | 5.923747 | 0.359477 | 0.060684 | 0.01986 | 0.018389 | 0.1427 | 0.1427 | 0.1427 | 0.091946 | 0.091946 | 0.091946 | 0 | 0.011865 | 0.164888 | 3,936 | 130 | 111 | 30.276923 | 0.815333 | 0.072409 | 0 | 0.025 | 0 | 0 | 0.126377 | 0.071035 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025 | false | 0 | 0.2625 | 0.025 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70bdecf4be211b0a474c283099b71331e1fad7c1 | 1,542 | py | Python | _static/include/algorithm_examples.py | Kebatotkulov/CompEcon | a73f23c397cf167fb432082eb205fd0714a7a788 | [
"MIT"
] | 39 | 2020-11-17T04:03:14.000Z | 2022-03-17T19:49:31.000Z | _static/include/algorithm_examples.py | Kebatotkulov/CompEcon | a73f23c397cf167fb432082eb205fd0714a7a788 | [
"MIT"
] | null | null | null | _static/include/algorithm_examples.py | Kebatotkulov/CompEcon | a73f23c397cf167fb432082eb205fd0714a7a788 | [
"MIT"
] | 35 | 2020-11-17T05:31:31.000Z | 2022-03-25T17:09:13.000Z | # Example code to be discussed in the following videos
import time
def parity(n, verbose=False):
'''Returns 1 if passed integer number is odd
'''
if not isinstance(n, int): raise TypeError('Only integers in parity()')
if verbose: print('n = ', format(n, "b")) # print binary form of the number
return n & 1 # bitwise and operation returns the value of last bit
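# e.g. parity(5) -> 1 and parity(4) -> 0; parity(5, verbose=True) also prints
# the binary form '101' first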
def maximum_from_list(vars):
'''Returns the maximum from a list of values
'''
m=float('-inf') # init with the worst value
for v in vars:
if v > m: m = v
return m
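# e.g. maximum_from_list([2, 7, 1]) -> 7; an empty list returns float('-inf')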
def binary_search(grid=[0,1],val=0,delay=0):
'''Returns the index of val on the sorted grid
Optional delay introduces a delay (in microsecond)
'''
i1,i2 = 0,len(grid)-1
if val==grid[i1]: return i1
if val==grid[i2]: return i2
j=(i1+i2)//2
while grid[j]!=val:
if val>grid[j]:
i1=j
else:
i2=j
j=(i1+i2)//2 # divide in half
time.sleep(delay*1e-6) # micro-sec to seconds
return j
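# e.g. binary_search(grid=[1, 3, 5, 7, 9], val=7) -> 3
# (the grid must be sorted and contain val, otherwise the loop does not terminate)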
def compositions(N,m):
'''Iterable on compositions of N with m parts
Returns the generator (to be used in for loops)
'''
cmp=[0,]*m
cmp[m-1]=N # initial composition is all to the last
yield cmp
while cmp[0]!=N:
i=m-1
while cmp[i]==0: i-=1 # find lowest non-zero digit
cmp[i-1] = cmp[i-1]+1 # increment next digit
cmp[m-1] = cmp[i]-1 # the rest to the lowest
if i!=m-1: cmp[i] = 0 # maintain cost sum
yield cmp
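# e.g. for c in compositions(2, 2): print(c)   # prints [0, 2], [1, 1], [2, 0]
# (the same list object is reused between yields, so copy it if you need to keep it)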
| 30.235294 | 80 | 0.589494 | 259 | 1,542 | 3.49807 | 0.413127 | 0.022075 | 0.029801 | 0.013245 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033759 | 0.289235 | 1,542 | 50 | 81 | 30.84 | 0.792883 | 0.393645 | 0 | 0.117647 | 0 | 0 | 0.038375 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.029412 | 0 | 0.235294 | 0.029412 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70bdf9a39e340131b60f25e836ad459523e7b323 | 21,194 | py | Python | datagen/cov19_con_trace.py | sushrutmair/Covid19 | f6576278f2ea7339c1fa4a93ea8a564f0c946041 | [
"MIT"
] | 1 | 2020-10-17T07:55:53.000Z | 2020-10-17T07:55:53.000Z | datagen/cov19_con_trace.py | sushrutmair/Covid19 | f6576278f2ea7339c1fa4a93ea8a564f0c946041 | [
"MIT"
] | null | null | null | datagen/cov19_con_trace.py | sushrutmair/Covid19 | f6576278f2ea7339c1fa4a93ea8a564f0c946041 | [
"MIT"
] | 1 | 2020-06-28T01:42:21.000Z | 2020-06-28T01:42:21.000Z | '''
Does Covid 19 contact tracing analysis on data of the form:
<name, latitude, longitude, date, time, condition>
where name = name of the person, latitude & longitude are the geographical coordinates, date
& time are the time stamp when those coordinates were recorded and finally condition indicates
whether this person is sick or healthy.
Use generator.py to generate data in the above form with various configurations.
This program assumes that the input data is in the format prescribed above. It takes this data
and then builds various directed as well as undirected graphs. It use the graphs to:
- detect potential high risk contacts
- detect risky locations
- detect vulnerable subset of the population
- predict potential future vulnerable population / locations
Dependencies:
- Python 2.7 only (latlon doesn't support Python 3 :(. For python 3+, use pyGeodesy)
- LatLon 1.0.2 - https://pypi.org/project/LatLon/
- pandas 0.24.2
- networkx 2.2 ( !pip install networkx=2.2. 2.2 due to python 2.7 - use latest if on Python 3+ )
- python-louvain 0.13 ( !pip install python-louvain )
- matplotlib 2.2.4
'''
import pandas as pd
import LatLon
from LatLon import *
import networkx as nx
import matplotlib.pyplot as plt
import time
from copy import deepcopy
import community
##### All configurations start here #####
#set for lat, lon otherwise implicit default loses precision
pd.set_option('display.precision',12)
#data file path. this is the data to be analyzed.
datapath = 'cov19_gen_dataset_05_doctored.csv' #'cov19_gen_dataset_10k.csv'
#stores the size of the virtual microcell around each location a person was recorded to have visited.
#this is used to calculate if two persons have breached the commonly accepted social distance limits.
#can be changed to anything, default is kept at x metres. This is for tagging high risk contacts.
microcell_radius = 0.005 # in km (LatLon returns distances in km); e.g. 0.003 is about 3 metres (~10 ft)
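# A quick illustration of the breach test used below (coordinates are illustrative):
#   a = LatLon(Latitude(18.52000), Longitude(73.85))
#   b = LatLon(Latitude(18.52001), Longitude(73.85))  # roughly a metre apart
#   a.distance(b) <= microcell_radius                 # -> True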
#controls whether graphs are visually displayed or not. If running on linux ensure X Windows is available.
#0 = graphs are displayed in ui. 1 = no graphs are displayed.
ui = 1
##### All configurations end here #####
##### Runtime variables #####
rawdataframe = pd.DataFrame()
sorteddf = pd.DataFrame() #same as raw data frame except all locations are sorted in ascending order by time of visit
persons = []
gxarry_pop_travel_hist = [] #array of nx graphs holding travel history of each member in pop
undir_gxarray_pop_travel_hist = []#same graph as gxarry_pop_travel_hist except it is undirected
col_breach = ['name1','con1','latlon1','entrytm1','exittm1','name2','con2','latlon2',
'entrytm2','exittm2','dist','breach', 'risk']
#holds info of all possible travels by the population and which two people were involved. This
#is used to generate a risk profile for the population.
travel_hist = pd.DataFrame(columns = col_breach)
#graph with various new edges and attributes on both nodes and edges. Used for
#overall analysis activities.
biggx = nx.Graph()
#list of known infected people
known_infected_list = []
##### Methods #####
#customized printer
def printcov(str_to_print):
print("[log]:--> " + str_to_print)
#Cleans and perpares data to be suitable for running analysis. Typically, this involves
#finding each unique person in the dataset, sorting the location records by time in an
#ascending order and others.
def dataprep():
rawdataframe = pd.read_csv(datapath, sep=',', header=0)
printcov("Sample of loaded raw data: ")
print(rawdataframe.head(3))
print(rawdataframe.tail(3))
popcount = 0
lastname = ""
dftmp = pd.DataFrame()
#our goal is to get each unique name and then prepare data for that.
for index, row in rawdataframe.iterrows():
currname = row['name']
if(currname != lastname):
printcov("Processing for: " + currname)
persons.append(currname)
df = rawdataframe.loc[rawdataframe['name'] == currname]
printcov("# of rows found: " + str(len(df)))
popcount = popcount + 1
#now to sort the rows by time. We ignore the Date field as we are assuming
#that the data is of a single day only.
df = df.sort_values(by=['time'])
#finally append this to the sorted df
dftmp = dftmp.append(df)
lastname = currname
printcov("Completed prep for data.")
#sorteddf = sorteddf.append(dftmp)
dftmp = dftmp.reset_index(drop=True)
printcov("Prepp'd data: ")
print(dftmp.head(27))
print(dftmp.tail(27))
printcov("Unique people found in pop of size: " + str(popcount))
print(persons)
printcov("Saving prepp'd data to a file: preppd_df.csv for debugging (in current folder).")
dftmp.to_csv("preppd_df.csv")
return dftmp
#prepares graph data per unique person in the provided dataset and plots their travel
#history with locations and time. Also generates and adds useful attributes to nodes
#and edges that help in further analysis. At this point, we know the total population
#size, the names of each unique person. We use this to plot a graph for analysis.
def graph_per_person(person):
printcov("Generating graph for: " + person)
one_persons_records = sorteddf.loc[sorteddf['name'] == person] #sorted by time in asc order
one_persons_records = one_persons_records.reset_index(drop=True)
print(one_persons_records)
gx = nx.MultiDiGraph(name=person,con=one_persons_records['condition'][0]) #new graph for curr person
#create all nodes
nodeid=0
for index, row in one_persons_records.iterrows():
#each recorded loc is a node
nodelabel = str(person) + str(nodeid)
gx.add_node(nodelabel,latlon=LatLon(Latitude(row['lat']),Longitude(row['lon'])))
nodeid = nodeid+1
noofnodes = nx.number_of_nodes(gx)
#now let's add edges for the nodes
print("Adding edges for: " + str(nx.number_of_nodes(gx)) + " nodes...")
print(gx.nodes())
for x in range(0,noofnodes):
y = x + 1
if(y == noofnodes):
print("reached end node")
break
else:
nodelabel1 = str(person) + str(x)
nodelabel2 = str(person) + str(y)
#gx.add_edge(nodelabel1,nodelabel2,time=one_persons_records.at[nodelabel2,'time'])
gx.add_edge(nodelabel1,nodelabel2,time=one_persons_records['time'][y])
print("Completed adding edges for: " + str(person) + ". Graph complete.")
disp_graph(gx)
gxarry_pop_travel_hist.append(gx)
return
#finds overlapping locations with time for the population and also marks such
#locations with a new attribute so that we can easily analyze them later. We also
#create a new undirected graph that has all overlaps available. There shall be one
#such overlap graph per person in the population.
def overlaps_for_pop(gxall):
printcov("Finding overlaps within population's location history")
b_all = pd.DataFrame(columns = col_breach)
for x in range(0, len(gxall)):
#get the 1st person and find overlaps of each of their loc
#with each loc of each other person in the population.
#we convert the graph to an undirected copy since mixed graphs are
#not possible in nx. We'll use both versions for later analysis. Note
#that the loc overlap calc doesnt need undirected graph. We shall create
#a new undirected edge for each overlap and that is why we need to
#convert to undirected graph
undirectedgxcurr = gxall[x].to_undirected() #get this person's graph
#compare current person graph with all others for loc overlaps
#first copy out the graph container
gxallminuscurr = []
for cv in range(0,len(gxall)):
newgx = deepcopy(gxall[cv]) #use a deep copy
gxallminuscurr.append(newgx)
gxallminuscurr.pop(x)#remove current persons graph before cmp
for y in range(0, len(gxallminuscurr)):
undirectedgxnext = gxallminuscurr[y].to_undirected()
disp_graph(undirectedgxnext)
bxy = find_overlap(undirectedgxcurr,undirectedgxnext)
b_all = b_all.append(bxy)
printcov("Completed overlap extractions.")
return b_all
#finds overlapping locations between two graphs
def find_overlap(undgx_curr, undgx_next):
#get 'latlon' attributes of both and figure out if present in microcell
anchorgraph_name = str(undgx_curr.graph['name'])
compargraph_name = str(undgx_next.graph['name'])
anchor_health_status = str(undgx_curr.graph['con'])
compar_health_status = str(undgx_next.graph['con'])
printcov("Processing overlaps. Anchor graph: " + anchorgraph_name + " | " +
anchor_health_status + " and Comparison graph: "
+ compargraph_name + " | " + compar_health_status)
gxcurr_nodeattrib = nx.get_node_attributes(undgx_curr,'latlon')
gxnext_nodeattrib = nx.get_node_attributes(undgx_next,'latlon')
printcov("Node attributes for overlap calc are:\n")
print("curr anchor graph: " + str(gxcurr_nodeattrib))
print("comparison graph: " + str(gxnext_nodeattrib))
print("\n")
b = pd.DataFrame(columns = col_breach)
for x in range(0, len(gxcurr_nodeattrib)):
for y in range(0, len(gxnext_nodeattrib)):
#here, we compare curr(latlon) with next(latlon) iteratively.
gxcurr_curr_nodelbl = str(anchorgraph_name) + str(x)
gxnext_curr_nodelbl = str(compargraph_name) + str(y)
print(str(gxcurr_nodeattrib[gxcurr_curr_nodelbl]) + " ----- " + str(gxnext_nodeattrib[gxnext_curr_nodelbl]))
distance = gxcurr_nodeattrib[gxcurr_curr_nodelbl].distance(gxnext_nodeattrib[gxnext_curr_nodelbl])
print("Person: " + anchorgraph_name + " & Person " + compargraph_name)
print(" - anchor node: " + str(gxcurr_curr_nodelbl) + " and comparison node: " + str(gxnext_curr_nodelbl))
print(" - distance between above two: " + str(distance))
entm1 = find_startime_gx(x, undgx_curr, anchorgraph_name)
extm1 = find_endtime_gx(x, undgx_curr, anchorgraph_name)
entm2 = find_startime_gx(y, undgx_next, compargraph_name)
extm2 = find_endtime_gx(y, undgx_next, compargraph_name)
risk = 'none'
breach = 'no'
if(distance <= microcell_radius):
#a new edge connecting these two nodes and save the graph. Also mark
#the relevant loc's as 'breached' with a new node attribute. risk is still
#classified as none because we have not yet calculated time overlap
print("Microcell radius breached.")
breach = 'yes'
#breachnodes attribute is useful to find edges that caused a breach
biggx.add_edge(gxcurr_curr_nodelbl,gxnext_curr_nodelbl,
breachnodes=(gxcurr_curr_nodelbl+':'+gxnext_curr_nodelbl))
biggx.nodes[gxcurr_curr_nodelbl]['breached'] = 'yes'
biggx.nodes[gxnext_curr_nodelbl]['breached'] = 'yes'
#time overlaps. use e*tm1 and e*tm2 to calculate overlap. If there is
#an overlap of time then we have two people in the same location at the same
                #time => risk == high if one of them is sick. For the healthy (h) person, mark the loc as
                #infection start time (potentially). We already have the time at that place, though
                #the actual start time should be the time the healthy and sick persons were first together at this loc.
#risk = 'high'
if(max(entm1,entm2) <= min(extm1,extm2)):
print("Time overlap found too. Checking if one of them is sick..")
if( (anchor_health_status=='sick') or (compar_health_status=='sick')):
print("One person is sick. Marked as high risk for healthy.")
risk = 'high'
if(anchor_health_status=='healthy'):
biggx.nodes[gxcurr_curr_nodelbl]['infec_start_loc'] = 'yes'
if(compar_health_status=='healthy'):
biggx.nodes[gxnext_curr_nodelbl]['infec_start_loc'] = 'yes'
data = pd.DataFrame([[anchorgraph_name, anchor_health_status,
gxcurr_nodeattrib[gxcurr_curr_nodelbl], entm1, extm1,
compargraph_name, compar_health_status,
gxnext_nodeattrib[gxnext_curr_nodelbl], entm2, extm2,
distance, breach, risk]],
columns=['name1','con1','latlon1','entrytm1','exittm1','name2','con2',
'latlon2','entrytm2','exittm2','dist','breach', 'risk'])
b = b.append(data)
return b
#finds the exit time for the given graph's node. exit time = time when the person exited a recorded loc
def find_endtime_gx(nodelabelsuffix, gx, nodelabelprefix):
curr_node = str(nodelabelprefix) + str(nodelabelsuffix)
prev_node_sfx = 0
next_node_sfx = 0
if(nodelabelsuffix == 0):
prev_node_sfx = 0
else:
prev_node_sfx = nodelabelsuffix-1
if(nodelabelsuffix ==len(gx)-1):
next_node_sfx = nodelabelsuffix
else:
next_node_sfx = nodelabelsuffix + 1
extm1 = 0
next_node = str(nodelabelprefix) + str(next_node_sfx)
if(gx.has_edge(curr_node,next_node)):
extm1 = gx.get_edge_data(curr_node,next_node)
extm1 = extm1[0]['time']
return extm1
#finds the start time for the given graph's node. start time = time when a person entered a recorded loc
def find_startime_gx(nodelabelsuffix, gx, nodelabelprefix):
curr_node = str(nodelabelprefix) + str(nodelabelsuffix)
prev_node_sfx = 0
next_node_sfx = 0
if(nodelabelsuffix == 0):
prev_node_sfx = 0
else:
prev_node_sfx = nodelabelsuffix-1
if(nodelabelsuffix ==len(gx)-1):
next_node_sfx = nodelabelsuffix
else:
next_node_sfx = nodelabelsuffix + 1
entm1 = 0
prev_node = str(nodelabelprefix) + str(prev_node_sfx)
if(gx.has_edge(prev_node,curr_node)):
entm1 = gx.get_edge_data(prev_node,curr_node)
entm1 = entm1[0]['time']
return entm1
#allows to validate all graphs. For each graph, walks it, explodes nodes and edges.
def test_all_graphs(g):
printcov("=========> Testing all graphs: ")
for i in range(0, len(g)):
print(nx.info(g[i]))
print(" - Nodes:")
print(g[i].nodes)
for x1 in range(0, len(g[i].nodes)):
nodelabel = str(g[i].graph['name']) + str(x1)
print("Node id: " + str(nodelabel) + str(g[i].nodes[nodelabel]))
#print("Node id: " + str(x1) + str(g[i].nodes[x1]))
print(" - Edges:")
print(g[i].edges)
print("Edge attributes: " + str(nx.get_edge_attributes(g[i],'time')))
print('------------------------------------------')
printcov("=========> Testing complete.")
return
#builds a graph for all of the population. Is an undirected
#graph and is used for running analysis algorithms.
def build_bigdaddy(gxarray):
gxdaddytemp = nx.MultiGraph()
for i in range(0,len(gxarray)):
gxdaddytemp = nx.compose(gxdaddytemp,gxarray[i].to_undirected())
return gxdaddytemp
#display graphs
def disp_graph(g):
if(ui == 0):
nx.draw(g, with_labels=True)
nx.draw_networkx_edge_labels(g, pos=nx.spring_layout(g))
plt.show()
def save_graph_to_pickle(g, filename):
nx.write_gpickle(g, filename)
return
def read_graph_from_pickle(picklepath):
g = nx.read_gpickle(picklepath)
return g
def find_infection_start_locs(g):
nattrib_infec_start_loc = nx.get_node_attributes(g,'infec_start_loc')
printcov("Infection start locations for healthy people are: \n" + str(nattrib_infec_start_loc))
return nattrib_infec_start_loc
def find_high_traffic_locations(g):
node_deg = dict(nx.degree(g))
sorted_nodedeg = [(k, node_deg[k]) for k in sorted(node_deg, key=node_deg.get, reverse=True)]
top5nodes_by_deg = []
top5nodes_by_deg.append(sorted_nodedeg[0])
top5nodes_by_deg.append(sorted_nodedeg[1])
top5nodes_by_deg.append(sorted_nodedeg[2])
top5nodes_by_deg.append(sorted_nodedeg[3])
top5nodes_by_deg.append(sorted_nodedeg[4])
di = nx.get_node_attributes(g,'breached')
printcov("These locations have witnessed high traffic: ")
htl = []
for n in g.nodes:
for n2 in top5nodes_by_deg:
if(n == n2[0]):
for n3 in di:
if(n2[0] == n3):
print(str(n3))
htl.append(n3)
return htl
def predict_next_infec_locations(g):
neighb_nodes=[]
infec_start_locs = nx.get_node_attributes(g,'infec_start_loc')
for n in infec_start_locs:
neighb = nx.all_neighbors(g,n)
for nebs in neighb:
neighb_nodes.append(nebs)
#we can also order these location edges by time by keeping locs that come into play
#only after the 'infec_start_loc' time. This keeps predicted locs that were traveled
    #to only after an infection occurred. The below is a more generic set.
printcov("Predicted locations where infections may have occurred (time agnostic): ")
print(neighb_nodes)
return neighb_nodes
#note: this function plots a graph if ui is enabled
def find_communities_based_on_loc(g):
#first compute the best partition
G = g
partition = community.best_partition(G)
comm_list = []
size = float(len(set(partition.values())))
count = 0.
for com in set(partition.values()) :
count = count + 1.
list_nodes = [nodes for nodes in partition.keys()
if partition[nodes] == com]
comm_list.append(list_nodes)
if(ui == 0):
plt.figure(101,(17,17))
pos = nx.spring_layout(G)
nclr = range(len(list_nodes))
nx.draw_networkx_nodes(G, pos, list_nodes, node_size = 150, node_color = nclr)
#print("node color: ",nclr, ". For community: ", list_nodes, "\n")
nx.draw_networkx_edges(G, pos, alpha=0.5)
plt.show()
printcov("Final list of: " + len(comm_list) + " louvain modularized communities :=>\n")
for x in comm_list:
print(x)
return comm_list
def find_vuln_loc_and_ppl(comm_list, infperson):
vulncomm = [] #list of vulnerable locations
for comm in comm_list:
for p in comm:
onlyname = ''.join([i for i in p if not i.isdigit()])
if(onlyname == infperson):
vulncomm.append(comm)
break
printcov("Priority list of vulnerable locations are: ")
print(vulncomm)
vulnppl1 = []
for x in vulncomm:
for j in range(0,len(x)):
onlyname = ''.join([i for i in x[j] if not i.isdigit()])
vulnppl1.append(onlyname)
printcov("Vulnerable people are: ")
vulnppl = list(set(vulnppl1))
print(vulnppl)
return vulncomm, vulnppl
def find_known_infected_ppl(g):
printcov("We have: " + str(len(known_infected_list)) + " known infected people in this dataset. They are: ")
print(known_infected_list)
return known_infected_list
def run_graph_analysis(g):
infperson_lst = find_known_infected_ppl(g)
find_infection_start_locs(g)
find_high_traffic_locations(g)
predict_next_infec_locations(g)
comm_list = find_communities_based_on_loc(g)
for infp in infperson_lst:
onlyname = ''.join([i for i in infp if not i.isdigit()])
find_vuln_loc_and_ppl(comm_list, onlyname)
return
################
##### MAIN #####
################
printcov("Starting Covid 19 contact tracing analysis for data in: ")
printcov(" " + datapath)
printcov("Configurations are: ")
print("Microcell radius for overlap calc: " + str(microcell_radius))
print("Graph display control is: " + str(ui) + ". 0 = ON / 1 = OFF.")
print('-------------------------------------')
time.sleep(7.7)
#call dataprep method. We also get 'persons' during this
sorteddf = dataprep()
known_infected_list = (sorteddf.loc[sorteddf['condition'] == 'sick'])['name'].unique()
printcov("We have: " + str(len(known_infected_list)) + " known infected people in this dataset. They are: ")
print(known_infected_list)
#call graph generation method for each person in the dataset
print("Initiating graph generation...")
for person in range(0,len(persons)):
graph_per_person(persons[person])
test_all_graphs(gxarry_pop_travel_hist)
biggx = build_bigdaddy(gxarry_pop_travel_hist)
travel_hist = overlaps_for_pop(gxarry_pop_travel_hist)
printcov("There are : " + str(len(travel_hist)) + " travel histories. They are: ")
print(travel_hist)
#save travel hist for later use
travel_hist.to_csv("travelhist_df.csv")
disp_graph(biggx)
save_graph_to_pickle(biggx, "graph.gz")
run_graph_analysis(biggx)
printcov("Completed Covid 19 contact tracing analysis.")
| 39.614953 | 123 | 0.662404 | 2,894 | 21,194 | 4.705943 | 0.199724 | 0.014539 | 0.006462 | 0.008077 | 0.213085 | 0.138483 | 0.086937 | 0.079595 | 0.074749 | 0.067406 | 0 | 0.012203 | 0.234453 | 21,194 | 534 | 124 | 39.689139 | 0.82718 | 0.286826 | 0 | 0.111446 | 0 | 0 | 0.142274 | 0.007495 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057229 | false | 0 | 0.024096 | 0 | 0.13253 | 0.207831 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70bfb9023066620a2a09f562fa8abb090ab8a248 | 1,254 | py | Python | test/common.py | askpatrickw/micropython-stubber | ccd73538dde8a5da77b89aba79c0d7a076e5eecd | [
"MIT"
] | null | null | null | test/common.py | askpatrickw/micropython-stubber | ccd73538dde8a5da77b89aba79c0d7a076e5eecd | [
"MIT"
] | null | null | null | test/common.py | askpatrickw/micropython-stubber | ccd73538dde8a5da77b89aba79c0d7a076e5eecd | [
"MIT"
] | null | null | null |
import stubs.dev.btree as btree  # bind the stub module to the name used below
# First, we need to open a stream which holds a database
# This is usually a file, but can be in-memory database
# using uio.BytesIO, a raw flash partition, etc.
# Oftentimes, you want to create a database file if it doesn't
# exist and open if it exists. Idiom below takes care of this.
# DO NOT open database with "a+b" access mode.
try:
f = open("mydb", "r+b")
except OSError:
f = open("mydb", "w+b")
# Now open a database itself
db = btree.open(f)
# The keys you add will be sorted internally in the database
db[b"3"] = b"three"
db[b"1"] = b"one"
db[b"2"] = b"two"
# Assume that any changes are cached in memory unless
# explicitly flushed (or database closed). Flush database
# at the end of each "transaction".
db.flush()
# Prints b'two'
print(db[b"2"])
# Iterate over sorted keys in the database, starting from b"2"
# until the end of the database, returning only values.
# Mind that arguments passed to values() method are *key* values.
# Prints:
# b'two'
# b'three'
for word in db.values(b"2"):
print(word)
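# keys(), values() and items() also accept an optional end key; by default the
# end key is excluded (a flags argument such as btree.INCL makes it inclusive):
#   for key, value in db.items(b"1", b"3"):
#       print(key, value)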
del db[b"2"]
# No longer true, prints False
print(b"2" in db)
# Prints:
# b"1"
# b"3"
for key in db:
print(key)
db.close()
# Don't forget to close the underlying stream!
f.close()
| 22.392857 | 65 | 0.682616 | 227 | 1,254 | 3.770925 | 0.511013 | 0.014019 | 0.014019 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00994 | 0.197767 | 1,254 | 55 | 66 | 22.8 | 0.840954 | 0.689793 | 0 | 0 | 0 | 0 | 0.088398 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052632 | 0 | 0.052632 | 0.210526 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70c19814412eef56e453c4f4b77004f55bd8e3b7 | 3,112 | py | Python | examples/pytorch/eges/sampler.py | ketyi/dgl | a1b859c29b63a673c148d13231a49504740e0e01 | [
"Apache-2.0"
] | null | null | null | examples/pytorch/eges/sampler.py | ketyi/dgl | a1b859c29b63a673c148d13231a49504740e0e01 | [
"Apache-2.0"
] | null | null | null | examples/pytorch/eges/sampler.py | ketyi/dgl | a1b859c29b63a673c148d13231a49504740e0e01 | [
"Apache-2.0"
] | null | null | null | import dgl
import numpy as np
import torch as th
class Sampler:
def __init__(self,
graph,
walk_length,
num_walks,
window_size,
num_negative):
self.graph = graph
self.walk_length = walk_length
self.num_walks = num_walks
self.window_size = window_size
self.num_negative = num_negative
self.node_weights = self.compute_node_sample_weight()
def sample(self, batch, sku_info):
"""
        Given a batch of target nodes, sample positive
pairs and negative pairs from the graph
"""
batch = np.repeat(batch, self.num_walks)
pos_pairs = self.generate_pos_pairs(batch)
neg_pairs = self.generate_neg_pairs(pos_pairs)
# get sku info with id
srcs, dsts, labels = [], [], []
for pair in pos_pairs + neg_pairs:
src, dst, label = pair
src_info = sku_info[src]
dst_info = sku_info[dst]
srcs.append(src_info)
dsts.append(dst_info)
labels.append(label)
return th.tensor(srcs), th.tensor(dsts), th.tensor(labels)
def filter_padding(self, traces):
for i in range(len(traces)):
traces[i] = [x for x in traces[i] if x != -1]
def generate_pos_pairs(self, nodes):
"""
For seq [1, 2, 3, 4] and node NO.2,
the window_size=1 will generate:
(1, 2) and (2, 3)
"""
# random walk
traces, types = dgl.sampling.random_walk(
g=self.graph,
nodes=nodes,
length=self.walk_length,
prob="weight"
)
traces = traces.tolist()
self.filter_padding(traces)
# skip-gram
pairs = []
for trace in traces:
for i in range(len(trace)):
center = trace[i]
left = max(0, i - self.window_size)
right = min(len(trace), i + self.window_size + 1)
pairs.extend([[center, x, 1] for x in trace[left:i]])
pairs.extend([[center, x, 1] for x in trace[i+1:right]])
return pairs
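    # e.g. with window_size=1, the trace [1, 2, 3] yields the positive pairs
    # (1,2), (2,1), (2,3) and (3,2), each labeled 1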
def compute_node_sample_weight(self):
"""
Using node degree as sample weight
"""
return self.graph.in_degrees().float()
def generate_neg_pairs(self, pos_pairs):
"""
        Sample based on node frequency in traces; frequently visited
        nodes have a larger chance of being sampled as
        negative nodes.
"""
# sample `self.num_negative` neg dst node
# for each pos node pair's src node.
negs = th.multinomial(
self.node_weights,
len(pos_pairs) * self.num_negative,
replacement=True
).tolist()
tar = np.repeat([pair[0] for pair in pos_pairs], self.num_negative)
assert(len(tar) == len(negs))
neg_pairs = [[x, y, 0] for x, y in zip(tar, negs)]
return neg_pairs
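# A minimal usage sketch (assumes graph edges carry a "weight" attribute, as the
# random walk above requires, and that sku_info maps node id -> feature list):
#   sampler = Sampler(graph, walk_length=10, num_walks=4, window_size=2, num_negative=5)
#   srcs, dsts, labels = sampler.sample(batch=[0, 1, 2], sku_info=sku_info)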
| 31.12 | 75 | 0.530527 | 388 | 3,112 | 4.097938 | 0.291237 | 0.040252 | 0.037736 | 0.028931 | 0.108176 | 0.062893 | 0.037736 | 0.037736 | 0.037736 | 0 | 0 | 0.00925 | 0.374679 | 3,112 | 99 | 76 | 31.434343 | 0.807811 | 0.14428 | 0 | 0 | 0 | 0 | 0.00241 | 0 | 0 | 0 | 0 | 0 | 0.016129 | 1 | 0.096774 | false | 0 | 0.048387 | 0 | 0.225806 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70c1ae38423f3160e29db672dd91d62736a97009 | 8,654 | py | Python | dnachisel_dtailor_mode/SequenceDesigner.py | Lix1993/dnachisel_dtailor_mode | a90b4441e7cfc69c73461ee489251341821ebba8 | [
"BSD-2-Clause",
"MIT"
] | 3 | 2020-01-15T21:14:42.000Z | 2020-10-07T15:41:25.000Z | dnachisel_dtailor_mode/SequenceDesigner.py | Lix1993/dnachisel_dtailor_mode | a90b4441e7cfc69c73461ee489251341821ebba8 | [
"BSD-2-Clause",
"MIT"
] | null | null | null | dnachisel_dtailor_mode/SequenceDesigner.py | Lix1993/dnachisel_dtailor_mode | a90b4441e7cfc69c73461ee489251341821ebba8 | [
"BSD-2-Clause",
"MIT"
] | null | null | null | from random import choice
from time import time
from uuid import uuid4
from .tools.DBSQLite import DBSQLite
from .tools.mathtools import hammingDistance
class SequenceDesigner(object):
'''
    Initializes a class that designs sequences based on a design method
'''
def __init__(self, name, seed, design_space, dbfile, createDB=True):
self.name = name
self.design_space = design_space
self.dbfile = dbfile
self.max_iterations = 100 #maximum number of tries allowed to find desired solution
self.max_sol_counter = 100000
self.solutionsHash = {}
self.dbconnection = DBSQLite(
dbfile=dbfile,
design_space=design_space,
initialize=createDB,
seedSequence=seed
)
def define_problem(self,sequence,solution_id):
'''
define a design problem
'''
raise NotImplementedError(
'define_problem must be implemented before using it')
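    # Subclasses are expected to return a solution object exposing the attributes
    # used in run() below: sequence, solution_id, scores, mutation_space and
    # is_match_design(). A hypothetical subclass:
    #   class MyDesigner(SequenceDesigner):
    #       def define_problem(self, sequence, solution_id):
    #           return MySolution(sequence, solution_id)  # MySolution is illustrative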
def run(self):
start_time = time()
sol_counter = 1 # seed solution
last_counter = 1
last_timepoint = time()
accepted = 1
initial_dist = 0
seed_sequence = self.dbconnection.seedSequence
solution_id=self.dbconnection.seedId
self._solution = self.define_problem(seed_sequence,solution_id)
mutation_space = self._solution.mutation_space
self.dbconnection.DBInsertSolution(self._solution)
self.solutionsHash[self._solution.solution_id] = self._solution
all_combinations_found = False
while not all_combinations_found and sol_counter <= self.max_sol_counter:
iteration = 0
if time() - last_timepoint >= 1: #Print statistics every 1 second
print(
"time elapsed: %.2f (s) \t solutions generated: %d \t rate (last period.): %0.2f sol/s \t rate (overall): %0.2f sol/s"
% ((time() - start_time), sol_counter,
(sol_counter - last_counter) /
(time() - last_timepoint), sol_counter /
(time() - start_time)))
last_counter = sol_counter
last_timepoint = time()
# Retrieve some desired solution (i.e. a particular combination of features that was not yet found)
desired_solution = self.dbconnection.DBGetDesiredSolution()
if desired_solution is None: #There are no more desired solutions
if self.design_space.designs_list != []: #All desired combinations were found
all_combinations_found = True
break
else:
initial_dist = self.design_space._distance_between_designs(
self._solution.scores,
desired_solution['des_solution_id'])
print("looking for combination: ",
desired_solution['des_solution_id'])
desired_solution_id = desired_solution['des_solution_id']
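# With probability 7/8, restart from the stored solution closest to the
# desired combination; otherwise restart from an arbitrary stored solution
# (presumably to keep some diversity in the search).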
if choice([True, True, True, True, True, True, True, False]):
closestSolution = self.dbconnection.DBGetClosestSolution(
desired_solution)
else:
closestSolution = self.dbconnection.DBGetClosestSolution(None)
if closestSolution is not None:
print("SolutionIterator: Found close sequence, starting from here...")
if closestSolution[
'generated_solution_id'] in self.solutionsHash:
parent = self.solutionsHash[
closestSolution['generated_solution_id']]
else:
parent = self.define_problem(
closestSolution['sequence'],
closestSolution['generated_solution_id'],
)
solution = parent
else:
print ("SolutionIterator: Starting from init sequence")
parent = self._solution
solution = parent
found = False
old_solution = solution
# Sequence evolution cycle
while (not solution.is_match_design(desired_solution)
and iteration != self.max_iterations
and not found
and not all_combinations_found):
if solution != parent:
self.dbconnection.DBInsertSolution(solution)
self.solutionsHash[
solution.solution_id] = solution ### Cache for rapid access
sol_counter += 1
if self.design_space.designs_list != []:
dist_old = self.design_space._distance_between_designs(
old_solution.scores,
desired_solution['des_solution_id'])
dist_cur = self.design_space._distance_between_designs(
solution.scores,
desired_solution['des_solution_id'])
if dist_old < dist_cur:
solution = old_solution
old_solution = solution
new_sequence = old_solution.mutation_space.apply_random_mutations(
n_mutations = choice([1,2]),
sequence=old_solution.sequence)
solution = self.define_problem(new_sequence,str(uuid4().int))
# No solution found
if solution is None or solution.sequence is None:
solution = None
break
#go to next iteration
iteration += 1
#check if my desired solution was already found
if self.design_space.designs_list != [] and iteration % (
self.max_iterations / 2) == 0:
found = self.dbconnection.DBCheckDesign(
desired_solution_id)
if self.design_space.designs_list == []:
#Stops when number generated solutions is equal to the desired sample size
if sol_counter >= self.design_space.ndesigns:
all_combinations_found = True
print("RandomSampling: %s solutions generated." %
(sol_counter))
#insert solution in the DB
if (solution is not None and solution.is_match_design(desired_solution_id)
and solution != parent ):
print("Solution found... inserting into DB...")
self.dbconnection.DBInsertSolution(solution,
desired_solution_id)
self.solutionsHash[solution.solution_id] = solution
sol_counter += 1
elif found:
print("Solution already found by other worker")
else:
if self.design_space.designs_list != [] and not all_combinations_found:
print("No solution could be found...")
self.dbconnection.DBChangeStatusDesiredSolution(
desired_solution_id, 'WAITING')
#set worker as finished
self.dbconnection.DBCloseConnection()
if len(self.design_space.designs_list) == 1:
print("\n###########################")
print("# Optimized solution:")
print("# ID: ", solution.solution_id)
print("# Sequence: ", solution.sequence)
print("# Scores: ", [
feat + ": " + str(solution.scores[feat])
for feat in self.design_space.feature_label
])
print("# Levels: ", [
feat + "_Level: " + str(solution.levels[feat + "_Level"])
for feat in self.design_space.feature_label
])
print("# Number of generated solutions: ", sol_counter)
print("# Distance to seed: ",
hammingDistance(self._solution.sequence, solution.sequence))
print("###########################\n")
print("Program finished, all combinations were found...")
return (sol_counter, hammingDistance(self._solution.sequence,
solution.sequence), initial_dist)
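# Minimal usage sketch (hypothetical: MyDesigner, build_solution and the
# names below are placeholders, not part of this module):
#
# class MyDesigner(SequenceDesigner):
#     def define_problem(self, sequence, solution_id):
#         return build_solution(sequence, solution_id)
#
# designer = MyDesigner("demo", seed_sequence, design_space, "designs.db")
# designer.run()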
| 41.605769 | 139 | 0.535128 | 781 | 8,654 | 5.71959 | 0.238156 | 0.044773 | 0.043653 | 0.02955 | 0.211104 | 0.175733 | 0.060443 | 0.018357 | 0.018357 | 0 | 0 | 0.005834 | 0.385949 | 8,654 | 207 | 140 | 41.806763 | 0.834776 | 0.070488 | 0 | 0.189542 | 0 | 0.006536 | 0.103755 | 0.015144 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019608 | false | 0 | 0.03268 | 0 | 0.065359 | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70c3bb34af5d4a0b5cbe11df3df8a94ff9dc6e95 | 916 | py | Python | attic/LIST_COURT_NAMES.py | zuphilip/legal-resource-registry | 310bea8d17e07e9818015a3c04aae81214de9f9c | [
"BSD-2-Clause"
] | null | null | null | attic/LIST_COURT_NAMES.py | zuphilip/legal-resource-registry | 310bea8d17e07e9818015a3c04aae81214de9f9c | [
"BSD-2-Clause"
] | null | null | null | attic/LIST_COURT_NAMES.py | zuphilip/legal-resource-registry | 310bea8d17e07e9818015a3c04aae81214de9f9c | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/python
import sys,os,json,re
# State names are sorted by length below before being joined into a regex.
names = json.loads(open("resource/courts.json").read())
states = json.loads(open("attic/STATES.json").read())
lstates = []
for key in states.keys():
lstates.append(states[key])
lstates.append("United States")
lstates.append("U.S.")
lstates.sort(key=len)
rxstates = re.compile("(?:" + "|".join(lstates) + ")")
obj = {}
for name in names:
full = name["fields"]["full_name"]
full = full.split(",")[0]
nostate = " ".join(re.split(rxstates,full)).strip()
#print full
#print " %d" % len(re.split(rxstates,full))
nostate = re.sub("\s+of\s*(\.|the\s*)*$", "", nostate)
notstate = nostate.strip()
obj[name["pk"]] = nostate
open("newcourts.json", "w+").write(json.dumps(obj, sort_keys=True, indent=2))
| 25.444444 | 77 | 0.603712 | 130 | 916 | 4.238462 | 0.469231 | 0.07078 | 0.025408 | 0.029038 | 0.054446 | 0.054446 | 0 | 0 | 0 | 0 | 0 | 0.006649 | 0.179039 | 916 | 35 | 78 | 26.171429 | 0.726064 | 0.075328 | 0 | 0 | 0 | 0 | 0.136256 | 0.024882 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0 | 0.038462 | 0 | 0.192308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70c3bf81c668f38ed5891d9d9c52d36cf45ce277 | 222 | py | Python | tests.py | M-Nassir/perception | 24f591921637059a1eb706df564c936d9764528f | [
"BSD-2-Clause-FreeBSD"
] | 4 | 2022-01-30T13:48:31.000Z | 2022-02-10T11:47:09.000Z | tests.py | M-Nassir/perception | 24f591921637059a1eb706df564c936d9764528f | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | tests.py | M-Nassir/perception | 24f591921637059a1eb706df564c936d9764528f | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | from PerceptionAlgorithm import Perception
import numpy as np
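# Ten 1-D samples: eight values clustered around ~2.4 plus two clear
# outliers (8.2 and 8.3) that the detector is expected to flag.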
data = np.array([2.1, 2.6, 2.4, 2.5, 2.3, 2.1, 2.3, 2.6, 8.2, 8.3])
clf = Perception()
clf.fit(data)
clf.predict(data)
print(clf.anomalies_)
print("Success")
| 20.181818 | 67 | 0.684685 | 44 | 222 | 3.431818 | 0.5 | 0.02649 | 0.039735 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103627 | 0.130631 | 222 | 10 | 68 | 22.2 | 0.678756 | 0 | 0 | 0 | 0 | 0 | 0.031674 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70c6788e12c7cfbb02f21fe72ca82d5d4492869a | 1,050 | py | Python | parsing/oracle.py | tph9608/cl1-hw | c8fe62b4ec9a71ce2a9ba4eba65b8bf99605f08d | [
"CC-BY-4.0"
] | 24 | 2018-08-27T14:13:55.000Z | 2022-01-07T11:44:56.000Z | parsing/oracle.py | tph9608/cl1-hw | c8fe62b4ec9a71ce2a9ba4eba65b8bf99605f08d | [
"CC-BY-4.0"
] | 1 | 2018-09-22T02:36:56.000Z | 2018-09-22T02:36:56.000Z | parsing/oracle.py | tph9608/cl1-hw | c8fe62b4ec9a71ce2a9ba4eba65b8bf99605f08d | [
"CC-BY-4.0"
] | 45 | 2018-03-06T01:51:48.000Z | 2022-03-24T16:57:06.000Z | import sys
import nltk
from nltk.corpus import dependency_treebank
from nltk.classify.maxent import MaxentClassifier
from nltk.classify.util import accuracy
VALID_TYPES = set(['s', 'l', 'r'])
class Transition:
def __init__(self, type, edge=None):
self._type = type
self._edge = edge
assert self._type in VALID_TYPES
def pretty_print(self, sentence):
if self._edge:
a, b = self._edge
return "%s\t(%s, %s)" % (self._type,
sentence.get_by_address(a)['word'],
sentence.get_by_address(b)['word'])
else:
return self._type
def transition_sequence(sentence):
"""
Return the sequence of shift-reduce actions that reconstructs the input sentence.
"""
sentence_length = len(sentence.nodes)
for ii in range(sentence_length - 1):
yield Transition('s')
for ii in range(sentence_length - 1, 1, -1):
yield Transition('r', (ii - 1, ii))
yield Transition('s')
| 29.166667 | 85 | 0.600952 | 132 | 1,050 | 4.606061 | 0.439394 | 0.065789 | 0.052632 | 0.065789 | 0.088816 | 0.088816 | 0.088816 | 0 | 0 | 0 | 0 | 0.00672 | 0.291429 | 1,050 | 35 | 86 | 30 | 0.810484 | 0.077143 | 0 | 0.076923 | 0 | 0 | 0.027282 | 0 | 0 | 0 | 0 | 0 | 0.038462 | 1 | 0.115385 | false | 0 | 0.192308 | 0 | 0.423077 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70c7b16d54ade908cc2d3dfd54f7cbd09349fd32 | 1,137 | py | Python | pychess/Utils/Piece.py | jacobchrismarsh/chess_senior_project | 7797b1f96fda5d4d268224a21e54a744d17e7b81 | [
"MIT"
] | null | null | null | pychess/Utils/Piece.py | jacobchrismarsh/chess_senior_project | 7797b1f96fda5d4d268224a21e54a744d17e7b81 | [
"MIT"
] | 40 | 2019-05-04T04:46:31.000Z | 2022-02-26T10:37:51.000Z | pychess/Utils/Piece.py | jacobchrismarsh/chess_senior_project | 7797b1f96fda5d4d268224a21e54a744d17e7b81 | [
"MIT"
] | null | null | null | from pychess.Utils.repr import reprColor, reprPiece
class Piece:
def __init__(self, color, piece, captured=False):
self.color = color
self.piece = piece
self.captured = captured
# in crazyhouse we need to know this for later captures
self.promoted = False
self.opacity = 1.0
self.x = None
self.y = None
# Sign is a deprecated synonym for piece
def _set_sign(self, sign):
self.piece = sign
def _get_sign(self):
return self.piece
sign = property(_get_sign, _set_sign)
def __repr__(self):
represen = "<%s %s" % (reprColor[self.color], reprPiece[self.piece])
if self.opacity != 1.0:
represen += " Op:%0.1f" % self.opacity
if self.x is not None or self.y is not None:
if self.x is not None:
represen += " X:%0.1f" % self.x
else:
represen += " X:None"
if self.y is not None:
represen += " Y:%0.1f" % self.y
else:
represen += " Y:None"
represen += ">"
return represen
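# e.g. repr(Piece(color, piece)) typically renders as "<White Pawn>"; the
# exact strings come from the reprColor / reprPiece tables imported above.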
| 27.731707 | 76 | 0.538259 | 146 | 1,137 | 4.082192 | 0.335616 | 0.060403 | 0.060403 | 0.043624 | 0.100671 | 0.053691 | 0 | 0 | 0 | 0 | 0 | 0.013717 | 0.358839 | 1,137 | 40 | 77 | 28.425 | 0.803841 | 0.080915 | 0 | 0.066667 | 0 | 0 | 0.044146 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.033333 | 0.033333 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70c93efb77646992b7750f4b744b9a406b974266 | 10,617 | py | Python | src/Puzzle.py | FrankSauve/472-project-1 | 66721e4806a94b6f1e47c6e4f10e3850db574b3a | [
"MIT"
] | null | null | null | src/Puzzle.py | FrankSauve/472-project-1 | 66721e4806a94b6f1e47c6e4f10e3850db574b3a | [
"MIT"
] | null | null | null | src/Puzzle.py | FrankSauve/472-project-1 | 66721e4806a94b6f1e47c6e4f10e3850db574b3a | [
"MIT"
] | null | null | null | import math
goal = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 0]
rows = 0
columns = 0
first_step = True
class Puzzle:
def __init__(self, p):
self.puzzle = p
def set_rows_and_columns(self):
"""
Assigns the number of rows and columns that the puzzle should have given its length into the rows and columns global variables
"""
global rows
global columns
columns = self.get_columns()
rows = self.get_rows()
def is_tiled_correctly(self):
"""
Checks if the puzzle has any duplicated or missing tile numbers
:return: boolean
"""
i = len(self.puzzle) - 1 # int value expected in tile puzzle
while i >= 0:
j = 0 # int iterating through each list item
found = False
while j < len(self.puzzle):
if (self.puzzle[j] == i) and (not found): # initial match found
found = True
elif self.puzzle[j] == i: # duplicate found
print("Duplicate " + str(i) + " found")
return False
elif (not found) and (j == len(self.puzzle) - 1) and (self.puzzle[j] != i): # value expected not found
print("Tile " + str(i) + " expected, but not found")
return False
j += 1
i -= 1
return True
def is_puzzle_solvable(self):
"""
Checks the puzzle to see whether it is solvable using the sum of permutation inversions method
:return: boolean
"""
permutation_inversions = []
i = 0
while i < len(self.puzzle):
current_val = self.puzzle[i]
right_placements = 0
j = i + 1
while j < len(self.puzzle):
if (current_val > self.puzzle[j]) and (self.puzzle[j] != 0):
right_placements += 1
j += 1
i += 1
permutation_inversions.append(right_placements)
return sum(permutation_inversions) % 2 == 0
def goal_gen(self):
"""
Modifies the goal state for non 12-tiled puzzles
"""
global goal
goal = []
i = 0
while i < len(self.puzzle):
if i == len(self.puzzle) - 1:
goal.append(0)
else:
goal.append(i + 1)
i += 1
def get_rows(self):
"""
This method returns the number of rows in a tile puzzle
:param list puzzle: the puzzle's current state
:return: int rows: the number of rows for the puzzle
"""
length = float(len(self.puzzle))
n = int(math.sqrt(length))
if n ** 2 == int(length): # check square case
return n
while n > 0:
m = length / n
if m.is_integer():
return n
n -= 1
def get_columns(self):
"""
This method returns the number of columns in a tile puzzle
:param list puzzle: the puzzle's current state
:return: int columns: the number of columns for the puzzle
"""
return int(len(self.puzzle) / self.get_rows())
@staticmethod
def is_puzzle_solved(puzzle):
"""
Returns whether or not the puzzle is in the goal state
:return boolean
"""
global goal
return puzzle == goal
@staticmethod
def write_to_txt(file, letter, puzzle):
"""
Write the letter and the current state of the puzzle to a txt file
:param file: File to write to
:param str letter: The letter of the tile of the current move
:param list puzzle: The current state of the puzzle after the move
"""
global first_step
if first_step:
letter = "0"
first_step = False
file.write(letter + " " + str(puzzle) + "\n")
@staticmethod
def get_possible_moves(puzzle):
"""
Returns (x,y) coordinates of possible positions for the 0 tile.
Ordered clockwise as the requirements state they should be
:param list puzzle: The current state of the puzzle
:return list moves: The list of possible moves as (x,y) coordinates
"""
pos = puzzle.index(0)
moves = []
if pos >= columns: # up
moves.append(pos - columns)
if pos >= columns and pos % columns != (columns - 1): # up-right
moves.append(pos - (columns - 1))
if pos % columns != (columns - 1): # right
moves.append(pos + 1)
if (len(puzzle) > pos + columns + 1) and (pos % columns != (columns - 1)): # down-right
moves.append(pos + (columns + 1))
if pos < len(puzzle) - columns: # down
moves.append(pos + columns)
if (pos < len(puzzle) - columns) and (pos % columns > 0): # down-left
moves.append(pos + (columns - 1))
if pos % columns > 0: # left
moves.append(pos - 1)
if (pos >= columns) and (pos % columns > 0): # up-left
moves.append(pos - (columns + 1))
return moves
@staticmethod
def move(new_pos, puzzle):
"""
Returns the puzzle after a move
:param int new_pos: New position of the 0
:param list puzzle: Current state of the puzzle
:return list new_puzzle: 2D array of the puzzle after the move
"""
new_puzzle = list(puzzle)
current_pos = puzzle.index(0)
value = puzzle[new_pos]
new_puzzle[current_pos] = value
new_puzzle[new_pos] = 0
return new_puzzle
@staticmethod
def get_tile_letter(pos):
"""
Returns the letter of the tile of the given pos
:param int pos : The number of the tile to search
:return str letter: The letter of tile at pos
"""
return str(chr(97 + pos))
@staticmethod
def get_h1(puzzle):
"""
Calculates heuristic h1
:return: int a, which is the number of incorrectly placed elements
"""
i = 0
a = 0
while i < len(puzzle) - 1: # len()-1 since 0 should be at the last position
if puzzle[i] != (i + 1):
a = a + 1
i = i + 1
return a
@staticmethod
def get_h2(puzzle):
"""
Calculates heuristic h2: The Manhattan distance of each tile
:param list puzzle: Current state of the puzzle
:return: int total_distance: Sum of the distances of where the tile should be
"""
total_distance = 0
for i, p in enumerate(puzzle):
index = i + 1
if p == 0: # If the tile has the zero, it should be at index n
p = len(puzzle)
# Current x of the tile
curr_x = index % columns
if curr_x == 0: # If its 0 make it column
curr_x = columns
# Current y of the tile
curr_y = math.ceil(index / columns)
# Goal x position of tile
goal_x = p % columns
if goal_x == 0: # If its 0 make it column
goal_x = columns
# Goal y position of tile
goal_y = math.ceil(p / columns)
# Total absolute Manhattan distance
total_distance += abs(goal_x - curr_x) + abs(goal_y - curr_y)
return total_distance
@staticmethod
def get_h3(puzzle):
"""
Calculates heuristic h3: the total number of moves, diagonal steps allowed,
needed to bring each misplaced tile to its goal position
:param list puzzle: Current state of the puzzle
:return: int total_distance: Sum of the per-tile move counts
"""
i = 0
distance = 0
total_distance = 0
while i < len(puzzle):
if puzzle[i] != (i + 1): # If a tile doesn't have the value of it's goal state
current_location = i
current_location_mod = i % columns
if puzzle[i] == 0 and i != len(puzzle) - 1: # Check 0 case
goal_location = len(puzzle) - 1
goal_location_mod = goal_location % columns
else:
goal_location = puzzle[i] - 1
goal_location_mod = goal_location % columns
distance = 0
while current_location < goal_location: # Current location is above the goal location
if current_location_mod < goal_location_mod and goal_location - columns < current_location: # x+1
current_location += 1
current_location_mod += 1
elif current_location_mod < goal_location_mod: # x+c+1
current_location += columns + 1
current_location_mod += 1
elif current_location_mod == goal_location_mod: # x+c
current_location += columns
else: # x+c-1
current_location += columns - 1
current_location_mod -= 1
distance += 1
while current_location > goal_location:
if current_location_mod < goal_location_mod: # x-c+1
current_location -= columns - 1
current_location_mod += 1
elif current_location_mod == goal_location_mod: # Current column is same as goal column
current_location -= columns
elif current_location_mod > goal_location_mod and goal_location + columns > current_location: # x-1
current_location -= 1
current_location_mod -= 1
else: # x-c-1
current_location -= columns + 1
current_location_mod -= 1
distance += 1
total_distance += distance
distance = 0
i = i + 1
return total_distance
@staticmethod
def get_sorted_tuples(moves, scores):
"""
Gets the sorted (score, move) tuples
:param list moves: Possible moves
:param list scores: Heuristic scores for the moves
:return list tuples: Sorted list of (score, move) tuples
"""
tuples = sorted(zip(scores, moves))
return tuples
| 34.583062 | 134 | 0.52755 | 1,296 | 10,617 | 4.216049 | 0.143519 | 0.074122 | 0.042826 | 0.033675 | 0.372987 | 0.320095 | 0.260066 | 0.216142 | 0.189605 | 0.16142 | 0 | 0.017998 | 0.392955 | 10,617 | 306 | 135 | 34.696078 | 0.829791 | 0.271451 | 0 | 0.303665 | 0 | 0 | 0.006855 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08377 | false | 0 | 0.005236 | 0 | 0.17801 | 0.010471 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70c9f837b209c2cb3e59a19ee1f96b60b81f17cf | 33,471 | py | Python | api/annotation.py | interlink-project/interlinker-service-augmenter | be77187f32a31e9b50bb6a70e63b3f245ce8ccae | [
"MIT"
] | null | null | null | api/annotation.py | interlink-project/interlinker-service-augmenter | be77187f32a31e9b50bb6a70e63b3f245ce8ccae | [
"MIT"
] | null | null | null | api/annotation.py | interlink-project/interlinker-service-augmenter | be77187f32a31e9b50bb6a70e63b3f245ce8ccae | [
"MIT"
] | null | null | null | from flask import current_app
from api import authz, document, es
from bs4 import BeautifulSoup
import datetime
TYPE = 'annotation'
MAPPING = {
'id': {'type': 'string', 'index': 'no'},
'descriptionid': {'type': 'string', 'index': 'no'},
'annotator_schema_version': {'type': 'string'},
'created': {'type': 'date'},
'updated': {'type': 'date'},
'quote': {'type': 'string', 'analyzer': 'standard'},
'tags': {'type': 'string', 'index_name': 'tag'},
'text': {'type': 'string', 'analyzer': 'standard'},
'category': {'type': 'string'},
'uri': {'type': 'string'},
'user': {'type': 'string'},
'consumer': {'type': 'string'},
'ranges': {
'index_name': 'range',
'properties': {
'start': {'type': 'string'},
'end': {'type': 'string'},
'startOffset': {'type': 'integer'},
'endOffset': {'type': 'integer'},
}
},
'permissions': {
'index_name': 'permission',
'properties': {
'read': {'type': 'string'},
'update': {'type': 'string'},
'delete': {'type': 'string'},
'admin': {'type': 'string'}
}
},
"statechanges": {
"type": "nested",
"properties": {
"initstate": {"type": "byte"},
"endstate": {"type": "byte"},
"objtype": {"type":"string"},
"text": {"type": "string"},
"date": {"type": "date","format": "dateOptionalTime"},
"user": {"type": "string"}
}
},
"state": {
"type": "byte",
"index": "not_analyzed"
},
"likes": {
"type": "integer",
"index": "not_analyzed"
},
"dislikes": {
"type": "integer",
"index": "not_analyzed"
},
"replies": {
"type": "integer",
"index": "not_analyzed"
},
'document': {
'properties': document.MAPPING
},
'idAnotationReply': {
'type': 'string'
},
'idReplyRoot': {
'type': 'string'
}
}
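# NOTE: the 'string' field type and 'index_name' used above are legacy
# Elasticsearch (pre-5.x) mapping syntax; newer clusters use 'text'/'keyword'.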
MAX_ITERATIONS = 5
PAGGINATION_SIZE = 10
class Annotation(es.Model):
__type__ = TYPE
__mapping__ = MAPPING
def save(self, *args, **kwargs):
_add_default_permissions(self)
# The initial state of the annotation is 0.
# State 0: on discussion.
# State 1: archived.
# State 2: approved.
# State 3: banned.
# 'statechanges' records the history of state changes.
self['state']=0
self['statechanges']={}
self['like']=0
self['dislike']=0
# If the annotation includes document metadata look to see if we have
# the document modeled already. If we don't we'll create a new one
# If we do then we'll merge the supplied links into it.
if 'document' in self:
d = document.Document(self['document'])
d.save()
super(Annotation, self).save(*args, **kwargs)
def updateState(self, *args, **kwargs):
#_add_default_permissions(self)
# Partially update only the state-related fields of the annotation.
q = {
"doc" : {
"state": self['state'],
"statechanges":self['statechanges']
}
}
super(Annotation, self).updateFields(body=q,*args, **kwargs)
def updateLike(self, *args, **kwargs):
#_add_default_permissions(self)
# Partially update only the like/dislike counters of the annotation.
q = {
"doc" : {
"like": self['like'],
"dislike":self['dislike']
}
}
super(Annotation, self).updateFields(body=q,*args, **kwargs)
# Returns per-category annotation counts for a description, broken down by state
def descriptionStats(cls,**kwargs):
q = {
"query": {
"bool": {
"must": [
{
"match": {
"descriptionId": kwargs.pop("descriptionId")
}
}
]
}
},
"aggs": {
"group_category": {
"terms": {
"field": "category"
},
"aggs": {
"group_state": {
"terms": {
"field": "state"
}
}
}
}
}
}
# # URI search parameters (this block is disabled):
# urls=kwargs.pop("uris")
# filtroUriSection={
# "bool": {
# "should": []
# }
# }
# existenUris=False
# for url in urls:
# existenUris=True
# seccionState = {
# "match":{
# "uri": url
# }
# }
# filtroUriSection['bool']['should'].append(seccionState)
# if existenUris:
# q['query']['bool']['must'].append(filtroUriSection)
#print(q)
res = cls.es.conn.search(index="annotator",
doc_type=cls.__type__,
body=q)
if(len(res['aggregations']['group_category']['buckets'])>0):
res=res['aggregations']['group_category']['buckets']
else:
res=[]
return res
# Returns reply counts grouped by root annotation id, excluding archived annotations
def annotationStats(cls,**kwargs):
q={
"size":0,
"query": {
"bool": {
"must": [
{
"match": {
"descriptionId": kwargs.pop("descriptionId")
}
}
],
"must_not": [
{
"match": {
"state": 1
}
}
],
}
},
"aggs": {
"group_by_reproot": {
"terms": {
"field": "idReplyRoot"
}
}
}
}
#print(q)
res = cls.es.conn.search(index="annotator",
doc_type=cls.__type__,
body=q)
if(len(res['aggregations']['group_by_reproot']['buckets'])>0):
res=res['aggregations']['group_by_reproot']['buckets']
else:
res=[]
return res
@classmethod
def _get_Annotations_by_User(cls,**kwargs):
q={
"query": {
"bool": {
"must": [
{
"match": {
"user": kwargs.pop("user")
}
}
]
}
},
"aggs": {
"group_by_uri": {
"terms": {
"field": "uri"
}
}
},
"size": 0
}
#print(q)
res = cls.es.conn.search(index="annotator",
doc_type=cls.__type__,
body=q)
if(len(res['aggregations']['group_by_uri']['buckets'])>0):
res=res['aggregations']['group_by_uri']['buckets']
else:
res=[]
return res
@classmethod
def _get_Annotations_by_Root_User(cls,**kwargs):
q={
"query": {
"bool": {
"must": [
{
"match": {
"user": kwargs.pop("user")
}
},
{
"match": {
"idReplyRoot": kwargs.pop("idReplyRoot")
}
}
]
}
}
}
#print(q)
res = cls.es.conn.search(index="annotator",
doc_type=cls.__type__,
body=q)
annotations=[cls(d['_source'], id=d['_id']) for d in res['hits']['hits']]
numRes=res['hits']['total']
resultado={'annotations':annotations,'numRes':numRes,'query':q}
return resultado
# Get users that have participated as Moderator or Annotator
@classmethod
def currentActiveUsers(cls,**kwargs):
q={
"size":0,
"aggs" : {
"group_by_user": {
"terms": {
"field": "user"
}
}
}
}
#print(q)
res = cls.es.conn.search(index="annotator",
doc_type=cls.__type__,
body=q)
if(len(res['aggregations']['group_by_user']['buckets'])>0):
res=res['aggregations']['group_by_user']['buckets']
else:
res=[]
return res
# Returns the number of times a user has registered a like on an annotation
def userAlreadyLike(cls,**kwargs):
q= {
"query": {
"bool": {
"must": [
{
"match": {
"_id": kwargs.pop("id")
}
},
{
"nested": {
"path": "statechanges",
"query": {
"bool": {
"must": [
{
"match": {
"statechanges.user": kwargs.pop("email")
}
},
{
"match": {
"objtype": "annotation_like"
}
}
]
}
}
}
}
]
}
}
}
#print(q)
res = cls.es.conn.count(index="annotator",
doc_type=cls.__type__,
body=q)
return res['count']
def _get_by_multiple0(cls,**kwargs):
page=kwargs.pop("page")
estados=kwargs.pop("estados")
url=kwargs.pop("url")
textoForSearch=kwargs.pop("textoABuscar")
category=kwargs.pop("category")
notreply=kwargs.pop("notreply")
initReg=(int(page)-1)*10
q= {
"sort": [
{
"updated": {
"order": "desc",
"ignore_unmapped": True
}
}
],
"from": initReg,
"size": PAGGINATION_SIZE,
"query": {
"bool": {
"must":[
{
"match": {
"uri": url
}
}
]
}
}
}
# Search filter: free-text search box
if textoForSearch != "":
sectSearchByText={
"match":{
"text": textoForSearch
}
}
q['query']['bool']['must'].append(sectSearchByText)
# Search filter: category
if category != "":
sectCategory={
"match":{
"category": category
}
}
q['query']['bool']['must'].append(sectCategory)
# Exclude replies (keep only top-level annotations)
if notreply:
seccionJR= {
"match":{
"category": "reply"
}
}
q['query']['bool']['must_not']=[seccionJR]
# Search filters:
# States:
filtroEstadosSection={
"bool": {
"should": []
}
}
existenStates=False
for keyItem in estados.keys():
if estados[keyItem]:
existenStates=True
if(keyItem=="Approved"):
valueState=2
if(keyItem=="Archived"):
valueState=1
if(keyItem=="InProgress"):
valueState=0
seccionState = {
"match":{
"state": valueState
}
}
filtroEstadosSection['bool']['should'].append(seccionState)
if existenStates:
q['query']['bool']['must'].append(filtroEstadosSection)
#print('_get_by_multiple')
#print(q)
res = cls.es.conn.search(index="annotator",
doc_type=cls.__type__,
body=q)
annotations=[cls(d['_source'], id=d['_id']) for d in res['hits']['hits']]
numRes=res['hits']['total']
resultado={'annotations':annotations,'numRes':numRes}
return resultado
@classmethod
def _get_AnnotationsApproved_by_Urls(cls,**kwargs):
listUrls=kwargs.pop('listUrls')
q= {
"sort": [
{
"category": {
"order": "desc",
"ignore_unmapped": True
}
}
],
"from": 0,
"size": 10000,
"query": {
"bool": {
"must":[]
}
}
}
# Must match one of the following URIs:
filtroUriSection={
"bool": {
"should": []
}
}
for urlItem in listUrls:
seccionState = {
"match":{
"uri": urlItem['url']
}
}
filtroUriSection['bool']['should'].append(seccionState)
q['query']['bool']['must'].append(filtroUriSection)
# Not replies:
sectCategory={
"match":{
"category": "reply"
}
}
q['query']['bool']['must_not']=[]
q['query']['bool']['must_not'].append(sectCategory)
# Only annotations in the approved state (state=2)
valueState=2
seccionState = {
"match":{
"state": valueState
}
}
q['query']['bool']['must'].append(seccionState)
#Run the query:
res = cls.es.conn.search(index="annotator",
doc_type=cls.__type__,
body=q)
annotations=[cls(d['_source'], id=d['_id']) for d in res['hits']['hits']]
numRes=res['hits']['total']
# Strip HTML tags from the text fields; they must contain plain text only.
for annotation in annotations:
soup=BeautifulSoup(annotation['text'], features="html.parser")
annotation['text']= soup.get_text()
resultado={'annotations':annotations,'numRes':numRes}
return resultado
@classmethod
def _get_AnnotationsApproved_by_DescriptionIds(cls,**kwargs):
descriptionId=kwargs.pop('descriptionId')
q= {
"sort": [
{
"category": {
"order": "desc",
"ignore_unmapped": True
}
}
],
"from": 0,
"size": 10000,
"query": {
"bool": {
"must":[]
}
}
}
# Must match one of the following URIs (note: this filter is built but never applied in this method):
filtroUriSection={
"bool": {
"should": []
}
}
# Filter by descriptionId
seccionDescId = {
"match":{
"descriptionId": descriptionId
}
}
q['query']['bool']['must'].append(seccionDescId)
# Not replies:
sectCategory={
"match":{
"category": "reply"
}
}
q['query']['bool']['must_not']=[]
q['query']['bool']['must_not'].append(sectCategory)
# Only annotations in the approved state (state=2)
valueState=2
seccionState = {
"match":{
"state": valueState
}
}
q['query']['bool']['must'].append(seccionState)
#print(q)
#Run the query:
res = cls.es.conn.search(index="annotator",
doc_type=cls.__type__,
body=q)
annotations=[cls(d['_source'], id=d['_id']) for d in res['hits']['hits']]
numRes=res['hits']['total']
# Strip HTML tags from the text fields; they must contain plain text only.
for annotation in annotations:
soup=BeautifulSoup(annotation['text'], features="html.parser")
annotation['text']= soup.get_text()
resultado={'annotations':annotations,'numRes':numRes}
return resultado
def _get_by_multiple(cls,**kwargs):
page=kwargs.pop("page")
estados=kwargs.pop("estados")
descriptionId=kwargs.pop("descriptionId")
textoForSearch=kwargs.pop("textoABuscar")
category=kwargs.pop("category")
notreply=kwargs.pop("notreply")
justMyContributions=False
user=''
if page== 'all':
initReg=0
pgsize=10000
else:
initReg=(int(page)-1)*10
pgsize=PAGGINATION_SIZE
if 'justMyContributions' in kwargs:
justMyContributions=kwargs.pop("justMyContributions")
user=kwargs.pop("user")
if justMyContributions:
initReg=0
pgsize=10000
q= {
"sort": [
{
"updated": {
"order": "desc",
"ignore_unmapped": True
}
}
],
"from": initReg,
"size": pgsize,
"query": {
"bool": {
"must":[]
}
}
}
# URI search parameters (currently disabled; see the commented-out block below):
filtroUriSection={
"bool": {
"should": []
}
}
# existenUris=False
# for url in urls:
# existenUris=True
# seccionState = {
# "match":{
# "uri": url
# }
# }
# filtroUriSection['bool']['should'].append(seccionState)
# if existenUris:
# q['query']['bool']['must'].append(filtroUriSection)
# Filter by descriptionId
sectSearchByDescriptionId={
"match":{
"descriptionId": descriptionId
}
}
q['query']['bool']['must'].append(sectSearchByDescriptionId)
# Search filter: free-text search box
if textoForSearch != "":
sectSearchByText={
"match":{
"text": textoForSearch
}
}
q['query']['bool']['must'].append(sectSearchByText)
# Search filter: category
if category != "":
sectCategory={
"match":{
"category": category
}
}
q['query']['bool']['must'].append(sectCategory)
# Exclude replies (keep only top-level annotations)
if notreply:
seccionJR= {
"match":{
"category": "reply"
}
}
q['query']['bool']['must_not']=[seccionJR]
# Search filters:
# States:
filtroEstadosSection={
"bool": {
"should": []
}
}
existenStates=False
for keyItem in estados.keys():
if estados[keyItem]:
existenStates=True
if(keyItem=="Approved"):
valueState=2
if(keyItem=="Archived"):
valueState=1
if(keyItem=="InProgress"):
valueState=0
seccionState = {
"match":{
"state": valueState
}
}
filtroEstadosSection['bool']['should'].append(seccionState)
if existenStates:
q['query']['bool']['must'].append(filtroEstadosSection)
#print('_get_by_multiple')
#print(q)
res = cls.es.conn.search(index="annotator",
doc_type=cls.__type__,
body=q)
annotations=[cls(d['_source'], id=d['_id']) for d in res['hits']['hits']]
numRes=res['hits']['total']
resultado={'annotations':annotations,'numRes':numRes}
# Recompute the pagination values:
if page == 'all':
initReg=0
pgsize=10000
else:
initReg=(int(page)-1)*10
pgsize=PAGGINATION_SIZE
# Keep only the annotations the user has contributed to
listAnnotations = []
if justMyContributions:
for annotationItem in annotations:
# Look for annotations with this user's participation
resRootRef = Annotation._get_Annotations_by_Root_User(user=user, idReplyRoot=annotationItem['id'])
if resRootRef['numRes']>0 or annotationItem['user']==user:
listAnnotations.append(annotationItem)
annotations = listAnnotations[initReg:initReg+PAGGINATION_SIZE]
numRes = len(listAnnotations)
resultado={'annotations':annotations,'numRes':numRes}
return resultado
@classmethod
def _deleteReplies(cls,**kwargs):
anotation=kwargs.get("annotation")
listChildrenRep=[]
listChildrenRep = cls.getReplies(cls,anotation['id'],listChildrenRep)
#Delete all
for idReply in listChildrenRep:
annotation = cls.fetch(idReply)
annotation.delete()
return 'deletedAllChildren'
@classmethod
def _changeStateReplies(cls,**kwargs):
anotation=kwargs.get("annotation")
newstate=kwargs.get("newstate")
listChildrenRep=[]
listChildrenRep = cls.getReplies(cls,anotation['id'],listChildrenRep)
# Move all replies to the new state
for idReply in listChildrenRep:
annotation = cls.fetch(idReply)
annotation['state'] = int(newstate)
annotation.updateState()
return 'Changed all children to this state.'
# Fetch only the replies that have not been archived.
def getReplies(cls,annotationId,listChildrenRep):
q={
"query": {
"bool": {
"must": [
{
"match":{
"category":"reply"
}
}
,
{
"match":{
"idAnotationReply":"annotation-"+annotationId
}
}
],
"must_not": [
{
"match": {
"state": 1
}
}
]
}
}
}
res = cls.es.conn.search(index="annotator",
doc_type=cls.__type__,
body=q)
annotations=[cls(d['_source'], id=d['_id']) for d in res['hits']['hits']]
numRes=res['hits']['total']
resultado={'annotations':annotations,'numRes':numRes}
if (numRes>0):
children = resultado['annotations']
for itemAnnotation in children:
listChildrenRep.append(itemAnnotation['id'])
listChildrenRep=cls.getReplies(cls,itemAnnotation['id'],listChildrenRep)
return listChildrenRep
@classmethod
def _get_by_multipleCounts(cls,**kwargs):
q= {
"query": {
"bool": {
"must":[
{
"prefix":{
"title":kwargs.get("textoABuscar")
}
}
]
}
}
}
# Search filters:
i = 0
for key, value in kwargs.items():
i += 1
if(key=='url'):
preUrl={"bool": {
"should":[
]
}}
seccion1 = {
"prefix":{
key: 'http://'+value
}
}
preUrl['bool']['should'].append(seccion1)
seccion2 = {
"prefix":{
key: 'https://'+value
}
}
preUrl['bool']['should'].append(seccion2)
q['query']['bool']['must'].append(preUrl)
else:
if value=='Unassigned':
value=''
seccion = {
"match":{
key: value
}
}
if(key!='textoABuscar' and key!='page'):
q['query']['bool']['must'].append(seccion)
else:
seccion = {
"match":{
key: value
}
}
if(key!='textoABuscar' and key!='page'and value!=''):
q['query']['bool']['must'].append(seccion)
#print('_get_by_multipleCounts')
#print(q)
res = cls.es.conn.count(index="description",
doc_type=cls.__type__,
body=q)
return res['count']
@classmethod
def search_raw(cls, query=None, params=None, raw_result=False,
user=None, authorization_enabled=None):
"""Perform a raw Elasticsearch query
Any ElasticsearchExceptions are to be caught by the caller.
Keyword arguments:
query -- Query to send to Elasticsearch
params -- Extra keyword arguments to pass to Elasticsearch.search
raw_result -- Return Elasticsearch's response as is
user -- The user to filter the results for according to permissions
authorization_enabled -- Overrides Annotation.es.authorization_enabled
"""
if query is None:
query = {}
if authorization_enabled is None:
authorization_enabled = es.authorization_enabled
if authorization_enabled:
f = authz.permissions_filter(user)
if not f:
raise RuntimeError("Authorization filter creation failed")
filtered_query = {
'filtered': {
'filter': f
}
}
# Insert original query (if present)
if 'query' in query:
filtered_query['filtered']['query'] = query['query']
# Use the filtered query instead of the original
query['query'] = filtered_query
#print(query)
res = super(Annotation, cls).search_raw(query=query, params=params,
raw_result=raw_result)
return res
@classmethod
def _build_query(cls, query=None, offset=None, limit=None, sort=None, order=None):
if query is None:
query = {}
else:
query = dict(query) # shallow copy
# Pop 'before' and 'after' parameters out of the query
after = query.pop('after', None)
before = query.pop('before', None)
q = super(Annotation, cls)._build_query(query, offset, limit, sort, order)
# Create range query from before and/or after
if before is not None or after is not None:
clauses = q['query']['bool']['must']
# Remove match_all conjunction, because
# a range clause is added
if clauses[0] == {'match_all': {}}:
clauses.pop(0)
created_range = {'range': {'created': {}}}
if after is not None:
created_range['range']['created']['gte'] = after
if before is not None:
created_range['range']['created']['lt'] = before
clauses.append(created_range)
# attempt to expand query to include uris for other representations
# using information we may have on hand about the Document
if 'uri' in query:
clauses = q['query']['bool']
doc = document.Document.get_by_uri(query['uri'])
if doc:
for clause in clauses['must']:
# Rewrite the 'uri' clause to match any of the document URIs
if 'match' in clause and 'uri' in clause['match']:
uri_matchers = []
for uri in doc.uris():
uri_matchers.append({'match': {'uri': uri}})
del clause['match']
clause['bool'] = {
'should': uri_matchers,
'minimum_should_match': 1
}
return q
@classmethod
def _get_Annotation_byId(cls,**kwargs):
q= {
"query": {
"terms": {
"_id":[kwargs.pop("id")]
}
}
}
#print(q)
res = cls.es.conn.search(index="annotator",
doc_type=cls.__type__,
body=q)
return [cls(d['_source'], id=d['_id']) for d in res['hits']['hits']]
@classmethod
def _get_Annotation_byCategory(cls,**kwargs):
q= {
"query": {
"terms": {
"category":[kwargs.pop("category")]
}
}
}
#print(q)
res = cls.es.conn.search(index="annotator",
doc_type=cls.__type__,
body=q)
return [cls(d['_source'], id=d['_id']) for d in res['hits']['hits']]
def _add_default_permissions(ann):
if 'permissions' not in ann:
ann['permissions'] = {'read': [authz.GROUP_CONSUMER]}
| 27.435246 | 114 | 0.397478 | 2,456 | 33,471 | 5.31759 | 0.15513 | 0.024809 | 0.034839 | 0.031087 | 0.581547 | 0.55268 | 0.527565 | 0.509035 | 0.482159 | 0.458652 | 0 | 0.00478 | 0.487467 | 33,471 | 1,219 | 115 | 27.457752 | 0.756514 | 0.109587 | 0 | 0.483871 | 0 | 0 | 0.128835 | 0.00081 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027295 | false | 0 | 0.004963 | 0 | 0.058313 | 0.001241 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70ca64b228d2e1b40d7414d7c085dd7b93422ce5 | 2,653 | py | Python | spacy/cli/download.py | yuukos/spaCy | e4125383ed7221910ea955eae9b623c02bda64d8 | [
"MIT"
] | 1 | 2017-11-18T08:53:26.000Z | 2017-11-18T08:53:26.000Z | spacy/cli/download.py | yuukos/spaCy | e4125383ed7221910ea955eae9b623c02bda64d8 | [
"MIT"
] | null | null | null | spacy/cli/download.py | yuukos/spaCy | e4125383ed7221910ea955eae9b623c02bda64d8 | [
"MIT"
] | 1 | 2018-08-25T03:09:50.000Z | 2018-08-25T03:09:50.000Z | # coding: utf8
from __future__ import unicode_literals
import requests
import os
import subprocess
import sys
from .link import link_package
from .. import about
from .. import util
def download(model=None, direct=False):
check_error_depr(model)
if direct:
download_model('{m}/{m}.tar.gz'.format(m=model))
else:
model_name = check_shortcut(model)
compatibility = get_compatibility()
version = get_version(model_name, compatibility)
download_model('{m}-{v}/{m}-{v}.tar.gz'.format(m=model_name, v=version))
link_package(model_name, model, force=True)
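# Typical flow (sketch): `python -m spacy.download en` resolves the 'en'
# shortcut, picks a model version compatible with this spaCy release,
# pip-installs the archive via download_model(), and links it under the
# shortcut name.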
def get_json(url, desc):
r = requests.get(url)
if r.status_code != 200:
util.sys_exit(
"Couldn't fetch {d}. Please find the right model for your spaCy "
"installation (v{v}), and download it manually:".format(d=desc, v=about.__version__),
"python -m spacy.download [full model name + version] --direct",
title="Server error ({c})".format(c=r.status_code))
return r.json()
def check_shortcut(model):
shortcuts = get_json(about.__shortcuts__, "available shortcuts")
return shortcuts.get(model, model)
def get_compatibility():
version = about.__version__
comp_table = get_json(about.__compatibility__, "compatibility table")
comp = comp_table['spacy']
if version not in comp:
util.sys_exit(
"No compatible models found for v{v} of spaCy.".format(v=version),
title="Compatibility error")
return comp[version]
def get_version(model, comp):
if model not in comp:
util.sys_exit(
"No compatible model found for "
"'{m}' (spaCy v{v}).".format(m=model, v=about.__version__),
title="Compatibility error")
return comp[model][0]
def download_model(filename):
util.print_msg("Downloading {f}".format(f=filename))
download_url = about.__download_url__ + '/' + filename
subprocess.call([sys.executable, '-m',
'pip', 'install', '--no-cache-dir', download_url],
env=os.environ.copy())
def check_error_depr(model):
if not model:
util.sys_exit(
"python -m spacy.download [name or shortcut]",
title="Missing model name or shortcut")
if model == 'all':
util.sys_exit(
"As of v1.7.0, the download all command is deprecated. Please "
"download the models individually via spacy.download [model name] "
"or pip install. For more info on this, see the documentation: "
"{d}".format(d=about.__docs_models__),
title="Deprecated command")
| 31.583333 | 97 | 0.641915 | 346 | 2,653 | 4.722543 | 0.320809 | 0.038556 | 0.03366 | 0.023256 | 0.134639 | 0.088127 | 0.039168 | 0.039168 | 0 | 0 | 0 | 0.003949 | 0.236336 | 2,653 | 83 | 98 | 31.963855 | 0.802567 | 0.004523 | 0 | 0.109375 | 0 | 0 | 0.275104 | 0.008336 | 0 | 0 | 0 | 0 | 0 | 1 | 0.109375 | false | 0 | 0.125 | 0 | 0.296875 | 0.015625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70caecd87cd1b591066cd5ffef7445d25c55e9ec | 6,897 | py | Python | benchmark/benchmark.py | dillon-cullinan/gputreeshap | 5bba198a7c2b3298dc766740965a4dffa7d8ffa4 | [
"Apache-2.0"
] | 53 | 2020-08-19T19:37:14.000Z | 2022-03-03T03:05:33.000Z | benchmark/benchmark.py | dillon-cullinan/gputreeshap | 5bba198a7c2b3298dc766740965a4dffa7d8ffa4 | [
"Apache-2.0"
] | 15 | 2020-08-25T02:07:19.000Z | 2022-02-24T11:42:07.000Z | benchmark/benchmark.py | dillon-cullinan/gputreeshap | 5bba198a7c2b3298dc766740965a4dffa7d8ffa4 | [
"Apache-2.0"
] | 14 | 2020-08-25T01:15:31.000Z | 2022-03-13T19:15:55.000Z | import xgboost as xgb
import numpy as np
import time
from sklearn import datasets
from joblib import Memory
import pandas as pd
import argparse
memory = Memory('./cachedir', verbose=0)
# Contains a dataset in numpy format as well as the relevant objective and metric
class TestDataset:
def __init__(self, name, Xy, objective
):
self.name = name
self.objective = objective
self.X, self.y = Xy
def set_params(self, params_in):
params_in['objective'] = self.objective
if self.objective == "multi:softmax":
params_in["num_class"] = int(np.max(self.y) + 1)
return params_in
def get_dmat(self):
return xgb.DMatrix(self.X, self.y)
def get_test_dmat(self, num_rows):
rs = np.random.RandomState(432)
return xgb.DMatrix(self.X[rs.randint(0, self.X.shape[0], size=num_rows), :])
@memory.cache
def train_model(dataset, max_depth, num_rounds):
dmat = dataset.get_dmat()
params = {'tree_method': 'gpu_hist', 'max_depth': max_depth, 'eta': 0.01}
params = dataset.set_params(params)
model = xgb.train(params, dmat, num_rounds, [(dmat, 'train')])
return model
@memory.cache
def fetch_adult():
X, y = datasets.fetch_openml("adult", return_X_y=True)
y_binary = np.array([y_i != '<=50K' for y_i in y])
return X, y_binary
@memory.cache
def fetch_fashion_mnist():
X, y = datasets.fetch_openml("Fashion-MNIST", return_X_y=True)
return X, y.astype(np.int64)
@memory.cache
def get_model_stats(model):
depths = []
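# Each leaf line in xgboost's text dump is indented with one tab per tree
# level, so counting tabs gives the depth of that leaf.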
for t in model.get_dump():
for line in t.splitlines():
if "leaf" in line:
depths.append(line.count('\t'))
return len(model.get_dump()), len(depths), np.mean(depths)
class Model:
def __init__(self, name, dataset, num_rounds, max_depth):
self.name = name
self.dataset = dataset
self.num_rounds = num_rounds
self.max_depth = max_depth
print("Training " + name)
self.xgb_model = train_model(dataset, max_depth, num_rounds)
self.num_trees, self.num_leaves, self.average_depth = get_model_stats(self.xgb_model)
def check_accuracy(shap, margin):
if len(shap.shape) == 2:
sum = np.sum(shap, axis=len(shap.shape) - 1)
else:
sum = np.sum(shap, axis=(len(shap.shape) - 1, len(shap.shape) - 2))
if not np.allclose(sum, margin, 1e-1, 1e-1):
print("Warning: Failed 1e-1 accuracy")
def get_models(model):
test_datasets = [
TestDataset("covtype", datasets.fetch_covtype(return_X_y=True), "multi:softmax"),
TestDataset("cal_housing", datasets.fetch_california_housing(return_X_y=True),
"reg:squarederror"),
TestDataset("fashion_mnist", fetch_fashion_mnist(), "multi:softmax"),
TestDataset("adult", fetch_adult(), "binary:logistic"),
]
models = []
for d in test_datasets:
small_name = d.name + "-small"
if small_name in model or model == "all" or model == "small":
models.append(Model(small_name, d, 10, 3))
med_name = d.name + "-med"
if med_name in model or model == "all" or model == "med":
models.append(Model(med_name, d, 100, 8))
large_name = d.name + "-large"
if large_name in model or model == "all" or model == "large":
models.append(Model(large_name, d, 1000, 16))
return models
def print_model_stats(models, args):
# get model statistics
models_df = pd.DataFrame(
columns=["model", "num_rounds", "num_trees", "num_leaves", "max_depth", "average_depth"])
for m in models:
models_df = models_df.append(
{"model": m.name, "num_rounds": m.num_rounds, "num_trees": m.num_trees,
"num_leaves": m.num_leaves, "max_depth": m.max_depth,
"average_depth": m.average_depth},
ignore_index=True)
print(models_df)
print("Writing model statistics to: " + args.out_models)
models_df.to_csv(args.out_models, index=False)
def run_benchmark(args):
models = get_models(args.model)
print_model_stats(models, args)
predictors = ["cpu_predictor", "gpu_predictor"]
# predictors = ["gpu_predictor"]
test_rows = args.nrows
df = pd.DataFrame(
columns=["model", "test_rows", "cpu_time(s)", "cpu_std", "gpu_time(s)", "gpu_std",
"speedup"])
for m in models:
dtest = m.dataset.get_test_dmat(test_rows)
result_row = {"model": m.name, "test_rows": test_rows, "cpu_time(s)": 0.0}
for p in predictors:
m.xgb_model.set_param({"predictor": p})
samples = []
for i in range(args.niter):
start = time.perf_counter()
if args.interactions:
xgb_shap = m.xgb_model.predict(dtest, pred_interactions=True)
else:
xgb_shap = m.xgb_model.predict(dtest, pred_contribs=True)
samples.append(time.perf_counter() - start)
if p is "gpu_predictor":
result_row["gpu_time(s)"] = np.mean(samples)
result_row["gpu_std"] = np.std(samples)
else:
result_row["cpu_time(s)"] = np.mean(samples)
result_row["cpu_std"] = np.std(samples)
# Check result
margin = m.xgb_model.predict(dtest, output_margin=True)
check_accuracy(xgb_shap, margin)
result_row["speedup"] = result_row["cpu_time(s)"] / result_row["gpu_time(s)"]
df = df.append(result_row,
ignore_index=True)
print(df)
print("Writing results to: " + args.out)
df.to_csv(args.out, index=False)
def main():
parser = argparse.ArgumentParser(description='GPUTreeShap benchmark')
parser.add_argument("-model", default="all", type=str,
help="The model to be used for benchmarking. 'all' for all datasets.")
parser.add_argument("-nrows", default=10000, type=int,
help=(
"Number of test rows."))
parser.add_argument("-niter", default=5, type=int,
help=(
"Number of times to repeat the experiment."))
parser.add_argument("-format", default="text", type=str,
help="Format of output tables. E.g. text,latex,csv")
parser.add_argument("-out", default="results.csv", type=str)
parser.add_argument("-interactions", default=False, type=bool)
parser.add_argument("-out_models", default="models.csv", type=str)
args = parser.parse_args()
run_benchmark(args)
if __name__ == '__main__':
main()
| 36.3 | 98 | 0.594171 | 898 | 6,897 | 4.360802 | 0.223831 | 0.020429 | 0.030388 | 0.012257 | 0.175945 | 0.083759 | 0.083759 | 0.052605 | 0.014811 | 0 | 0 | 0.009051 | 0.279107 | 6,897 | 189 | 99 | 36.492063 | 0.77856 | 0.020879 | 0 | 0.1 | 0 | 0 | 0.138436 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.093333 | false | 0 | 0.046667 | 0.006667 | 0.206667 | 0.053333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70caf4c5c860e3b4a849211337e8b3e06f90c7be | 12,529 | py | Python | swic-loopback-test.py | elvees/mcom02-swic-tools | 8a638bf709a258cbe5e394bf1366f06ffd476fba | [
"MIT"
] | null | null | null | swic-loopback-test.py | elvees/mcom02-swic-tools | 8a638bf709a258cbe5e394bf1366f06ffd476fba | [
"MIT"
] | 1 | 2019-09-27T09:24:09.000Z | 2019-09-27T09:24:09.000Z | swic-loopback-test.py | elvees/mcom02-swic-tools | 8a638bf709a258cbe5e394bf1366f06ffd476fba | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# Copyright 2019 RnD Center "ELVEES", JSC
import filecmp
import itertools
import math
import os
import random
import re
import subprocess
import tempfile
import time
import unittest
def rand_bytes(size):
# os.urandom() depends on the entropy available in the system, which can
# stretch data generation to up to 1 min; that is unacceptable here. This
# function always generates data in a roughly constant time. See PEP 524.
return bytearray(map(random.getrandbits, itertools.repeat(8, size)))
def stats_get(dev):
regex = (r"TX packets\s(?P<tx_pckt>\d*)\s*bytes\s(?P<tx_bytes>\d*)\s*"
r"RX packets\s(?P<rx_pckt>\d*)\s*bytes\s(?P<rx_bytes>\d*)\s*"
r"EEP\s(?P<eep>\d*)\s*parity\s(?P<parity>\d*)\s*"
r"escape\s(?P<esc>\d*)\s*disconnect\s(?P<discon>\d*)\s*"
r"credit\s(?P<credit>\d*)")
proc = subprocess.Popen(['swic', dev], stdout=subprocess.PIPE)
out = proc.communicate()[0]
match = re.search(regex, out.decode("utf-8"), re.MULTILINE)
return match
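# e.g. stats_get('/dev/spacewire0').group('rx_bytes') returns the RX byte
# counter from `swic`'s output as a string (actual values depend on traffic).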
class TestcaseSWIC(unittest.TestCase):
@classmethod
def setUpClass(cls):
super().setUpClass()
cls.inputfile = '/tmp/input.bin'
cls.filesize = int(os.environ.get('INPUT_FILE_SIZE', 1024*1024))
with open(cls.inputfile, 'wb') as fout:
fout.write(rand_bytes(cls.filesize))
cls.outputfile = '/tmp/output.bin'
cls.iters = int(os.environ.get('ITERS', 5))
cls.speed = int(os.environ.get('SPEED', 408))
cls.timeout = int(os.environ.get('TIMEOUT', 10))
cls.verbose = int(os.environ.get('VERBOSE', 0))
# From theoretical analysis, the bit error ratio is expected to stay below
# 1.034e-13 when the SpaceWire bus runs a 200 Mbps bit stream for 24 hours.
cls.ber_threshold = 1.034e-13
cls.duration = time.time()
proc1 = subprocess.Popen(['swic', '/dev/spacewire0', '-r'],
stderr=subprocess.DEVNULL)
proc2 = subprocess.Popen(['swic', '/dev/spacewire1', '-r'],
stderr=subprocess.DEVNULL)
proc1.wait()
proc2.wait()
@classmethod
def tearDownClass(cls):
cls.duration = time.time() - cls.duration
cls.check_ber(cls, '/dev/spacewire0')
cls.check_ber(cls, '/dev/spacewire1')
os.remove(cls.inputfile)
super().tearDownClass()
def setUp(self):
self.run_procs([
['swic', '/dev/spacewire0', '-l', 'up'],
['swic', '/dev/spacewire1', '-l', 'up'],
])
def tearDown(self):
self.run_procs([
['swic', '/dev/spacewire0', '-l', 'down'],
['swic', '/dev/spacewire1', '-l', 'down'],
])
try:
os.remove(self.outputfile)
except OSError:
pass
super().tearDown()
def get_speed_mbps(self, speed):
if speed == 255:
return 2.4
elif speed == 0:
return 4.8
return 48 * (speed - 1) + 72
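# e.g. a raw speed value of 8 maps to 48 * (8 - 1) + 72 = 408 Mbit/s,
# which matches the default SPEED used by this test suite.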
def run_procs(self, procs):
stdouts = []
process = []
for i, proc in enumerate(procs):
process.append(subprocess.Popen(proc,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT))
for i, proc in enumerate(process):
try:
stdouts.append(proc.communicate(timeout=self.timeout)[0])
except subprocess.TimeoutExpired:
proc.kill()
stdouts.append(proc.communicate()[0])
for proc, stdout in zip(process, stdouts):
self.assertFalse(proc.returncode,
'Non zero return code, stdout/stderr: {}'.format(
stdout.decode('UTF-8')))
def check_ber(self, dev):
m = stats_get(dev)
rx_bytes = int(m.group('rx_bytes'))
parity = int(m.group('parity'))
esc = int(m.group('esc'))
credit = int(m.group('credit'))
errors = parity + esc + credit
if self.verbose:
print('BER threshold {:10.3e}'.format(self.ber_threshold))
print('Device {}: RX bytes {}'.format(dev, rx_bytes))
print('Errors: Parity {}, Escape {}, Credit {}'.format(
parity, esc, credit))
print('Duration: {}'.format(self.duration))
if rx_bytes > 0:
ber = errors / rx_bytes
else:
ber = 0
# 24h * 60min * 60sec = 86400sec
ber *= 86400 / self.duration
if self.verbose:
print('Current BER {:10.3e}'.format(ber))
if ber > self.ber_threshold:
print('Device {}: Bit error ratio exceeds threshold, BER={:10.3e}, current={:10.3e}.'.
format(dev, self.ber_threshold, ber))
def check(self, speed, mtu, src, dst):
packets = math.ceil(self.filesize / mtu)
self.run_procs([
['swic', src,
'-m', str(mtu),
'-s', str(speed),
'-f'],
['swic', dst,
'-s', str(speed),
'-f'],
])
self.run_procs([
['swic-xfer', src, 's',
'-f', self.inputfile,
'-v'],
['swic-xfer', dst, 'r',
'-f', self.outputfile,
'-n', str(packets),
'-v'],
])
result = filecmp.cmp(self.inputfile, self.outputfile)
self.assertTrue(result,
'Input and output files mismatch, speed={}, mtu={}.'.format(speed, mtu))
def test_sanity(self):
mtu = 16*1024
for i in range(self.iters):
if self.verbose:
print('Iteration {}'.format(i+1))
with self.subTest(i=i):
self.check(self.speed, mtu, '/dev/spacewire0', '/dev/spacewire1')
self.check(self.speed, mtu, '/dev/spacewire1', '/dev/spacewire0')
def test_mtu(self):
mtu_pool = [2**x for x in range(4, 21)]
for i in range(self.iters):
random.shuffle(mtu_pool)
for mtu in mtu_pool:
if self.verbose:
print('Iteration {}, mtu={}'.format(i+1, mtu))
with self.subTest(iter=i, mtu=mtu):
self.check(self.speed, mtu, '/dev/spacewire0', '/dev/spacewire1')
def read32(self, addr):
return int((subprocess.check_output(['devmem',
hex(addr)]).strip()).decode('UTF-8'), 16)
def wait_event(self, mask, value, timeout=None):
if timeout is None:
timeout = self.timeout
rx_status = self.read32(0x38084004)
started = time.monotonic()
while rx_status & mask != value:
rx_status = self.read32(0x38084004)
self.assertLess(time.monotonic() - started, timeout,
'Timeout waiting for SpaceWire event')
def test_flush_fifo(self):
rxfifo_size = 384
desc_size = 16 * 1024
num_descs = 65
rxring_size = desc_size * num_descs
filesize = rxring_size + rxfifo_size
mtu = filesize / 2
if self.verbose:
print('\nFile size {} bytes, mtu {} bytes, speed {} Mbits/s'.
format(filesize, mtu, self.speed))
input_temp = tempfile.NamedTemporaryFile()
input_temp.write(rand_bytes(filesize))
self.run_procs([['swic',
'/dev/spacewire0',
'-m', str(mtu),
'-s', str(self.speed)]])
proc = subprocess.Popen(['swic-xfer',
'/dev/spacewire0', 's',
'-f', input_temp.name])
if self.verbose:
print('\nWaiting for fill RX FIFO')
self.wait_event(0x100, 0x100)
self.run_procs([['swic', '/dev/spacewire1', '-l', 'down']])
proc.kill()
proc.wait()
self.run_procs([['swic', '/dev/spacewire0', '-l', 'up']])
self.run_procs([['swic', '/dev/spacewire1', '-l', 'up']])
self.check(self.speed, 1024, '/dev/spacewire0', '/dev/spacewire1')
def test_link(self):
mtu = 16 * 1024
speed_bps = self.speed * 1000 * 1000
exch_time_s = round((self.filesize * 8 / speed_bps), 3)
if self.verbose:
print('\nFile size {} bytes, mtu {} bytes, speed {} Mbits/s, exchange time {} s'.
format(self.filesize, mtu, self.speed, exch_time_s))
        input_temp = tempfile.NamedTemporaryFile()
        input_temp.write(rand_bytes(self.filesize))
        input_temp.flush()  # make sure swic-xfer reads the complete file
        output_temp = tempfile.NamedTemporaryFile()
src = '/dev/spacewire0'
dst = '/dev/spacewire1'
self.run_procs([
['swic', src,
'-m', str(mtu),
'-s', str(self.speed)],
['swic', dst,
'-m', str(mtu),
'-s', str(self.speed)],
])
for i in range(self.iters):
brk_time_s = round(random.random() * exch_time_s, 3)
if self.verbose:
print('Iteration {}, break time {} s'.format(i+1, brk_time_s))
packets = math.ceil(self.filesize / mtu)
if random.getrandbits(1):
src, dst = dst, src
if self.verbose:
                print('Transferring from {} to {}'.format(src, dst))
proc1 = subprocess.Popen(['swic-xfer',
src, 's',
'-f', input_temp.name],
stderr=subprocess.DEVNULL)
proc2 = subprocess.Popen(['swic-xfer',
dst, 'r',
'-f', output_temp.name,
'-n', str(packets)],
stderr=subprocess.DEVNULL)
time.sleep(brk_time_s)
brk_src = random.choice(['/dev/spacewire0', '/dev/spacewire1'])
if self.verbose:
print('Interface {} going down'.format(brk_src))
self.run_procs([['swic', brk_src, '-l', 'down']])
proc1.wait()
proc2.wait()
if self.verbose:
print('Interface {} going up'.format(brk_src))
self.run_procs([['swic', brk_src, '-l', 'up']])
if random.getrandbits(1):
src, dst = dst, src
if self.verbose:
                print('Transferring from {} to {}'.format(src, dst))
with self.subTest(i=i):
self.check(self.speed, mtu, src, dst)
def test_full_duplex(self):
mtu = 16 * 1024
packets = math.ceil(self.filesize / mtu)
        input_tmp = tempfile.NamedTemporaryFile()
        input_tmp.write(rand_bytes(self.filesize))
        input_tmp.flush()  # make sure both senders read complete files
output_tmp = tempfile.NamedTemporaryFile()
self.run_procs([
['swic', '/dev/spacewire0',
'-m', str(mtu),
'-s', str(self.speed)],
['swic', '/dev/spacewire1',
'-m', str(mtu),
'-s', str(self.speed)],
])
for i in range(self.iters):
if self.verbose:
print('Iteration {}'.format(i+1))
self.run_procs([
['swic-xfer', '/dev/spacewire0', 's',
'-f', self.inputfile,
'-v'],
['swic-xfer', '/dev/spacewire1', 's',
'-f', input_tmp.name,
'-v'],
['swic-xfer', '/dev/spacewire1', 'r',
'-f', self.outputfile,
'-n', str(packets),
'-v'],
['swic-xfer', '/dev/spacewire0', 'r',
'-f', output_tmp.name,
'-n', str(packets),
'-v'],
])
res1 = filecmp.cmp(self.inputfile, self.outputfile)
res2 = filecmp.cmp(input_tmp.name, output_tmp.name)
self.assertTrue(res1,
'SWIC0 to SWIC1 files mismatch, speed={}, mtu={}.'.
format(self.speed, mtu))
self.assertTrue(res2,
'SWIC1 to SWIC0 files mismatch, speed={}, mtu={}.'.
format(self.speed, mtu))
if __name__ == '__main__':
unittest.main(verbosity=2)
| 33.145503 | 98 | 0.497805 | 1,379 | 12,529 | 4.44525 | 0.204496 | 0.024959 | 0.025449 | 0.033931 | 0.369494 | 0.293475 | 0.224796 | 0.191843 | 0.150571 | 0.119739 | 0 | 0.02854 | 0.356772 | 12,529 | 377 | 99 | 33.233422 | 0.732101 | 0.035996 | 0 | 0.347222 | 0 | 0.013889 | 0.152125 | 0.019223 | 0 | 0 | 0.002486 | 0 | 0.017361 | 1 | 0.059028 | false | 0.003472 | 0.034722 | 0.006944 | 0.118056 | 0.059028 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70cb2199b06ad2e07c09ad18b04717313a8a69f6 | 9,489 | py | Python | Carbon Footprint/carbon_footprint.py | nityakk/Projects | 89fc183a277593de1c5f98799686d850530170ea | [
"MIT"
] | null | null | null | Carbon Footprint/carbon_footprint.py | nityakk/Projects | 89fc183a277593de1c5f98799686d850530170ea | [
"MIT"
] | null | null | null | Carbon Footprint/carbon_footprint.py | nityakk/Projects | 89fc183a277593de1c5f98799686d850530170ea | [
"MIT"
] | null | null | null | '''carbon_footprint.py
'''
#<METADATA>
QUIET_VERSION = "0.1"
PROBLEM_NAME = "Carbon Footprint"
PROBLEM_VERSION = "0.1"
PROBLEM_AUTHORS = ['Nitya Krishna Kumar', 'Krishna Upadhyayula']
PROBLEM_CREATION_DATE = "24-NOV-2019"
PROBLEM_DESC=\
'''
This is a problem formulation for the carbon footprint wicked problem. It
primarily concentrates on actions that the US and China can take to reduce
their carbon dioxide emissions and to decrease the overall surface
temperature of the Earth.
'''
#</METADATA>
#<COMMON_DATA>
FACTORS = ['carbon', 'delta_t', 'budget']
ACTIONS = ['plant trees',
'implement CO2 direct capture techniques',
'implement composting policy',
'convert to renewable energy source',
'start export policy',
'end import policy']
COUNTRIES = ['USA', 'China']
#</COMMON_DATA>
#<COMMON_CODE>
class State:
    def default(self):
        new_state = {}
        default_vals = {}
        # source: https://www.climate.gov
        default_vals['carbon'] = 400  # measured in ppm
        # increase in average surface temperature since 1880
        # source: https://www.climate.gov/
        default_vals['delta_t'] = 0.8
        new_state['world'] = default_vals
        # national budgets, in billions of dollars
        new_state['USA'] = {'budget': 21000}
        new_state['China'] = {'budget': 13000}
        return new_state
    def __init__(self, b):
        # Start from the default state; override it only when a complete
        # [[carbon, delta_t, budget], ...] list (one row per country, with
        # carbon and delta_t identical in both rows) is supplied.
        news = self.default()
        valid = (len(b) == len(COUNTRIES)
                 and all(len(row) == len(FACTORS) for row in b))
        if valid:
            for c, country in enumerate(COUNTRIES):
                for i, factor in enumerate(FACTORS):
                    if factor in ('carbon', 'delta_t'):
                        news['world'][factor] = b[0][i]
                    else:
                        news[country][factor] = b[c][i]
        self.b = news
    def __eq__(self, s2):
        # Nested dict equality compares every factor of every country.
        return self.b == s2.b
def __str__(self):
txt = "\n["
for i in self.b:
txt += str(self.b[i])+"\n "
return txt[:-2]+"]"
def __hash__(self):
return (self.__str__()).__hash__()
def copy(self):
# Performs an appropriately deep copy of a state,
# for use by operators in creating new states.
news = State([[]])
news.b['world']['carbon'] = self.b['world']['carbon']
news.b['world']['delta_t'] = self.b['world']['delta_t']
#news.b['world']['time_left'] = self.b['world']['time_left']
for country in COUNTRIES:
news.b[country]['budget'] = self.b[country]['budget']
#news.b[country]['land'] = self.b[country]['land']
#news.b[country]['political_stability'] = self.b[country]['political_stability']
return news
    def get_action_choice(self, action):
        # Map an action name to its 1-based index in ACTIONS (0 = unknown).
        if action in ACTIONS:
            return ACTIONS.index(action) + 1
        return 0
    def can_move(self, move):
        '''Tests whether the given country can legally take the given
        action in the current world state (carbon level, temperature
        change, and that country's budget).'''
        country, action = move.split(' ', 1)
        action_choice = self.get_action_choice(action)
if action_choice == 0:
return False
delta_t = self.b['world']['delta_t']
carbon_level = self.b['world']['carbon']
budget = self.b[country]['budget']
if action_choice == 1:
if carbon_level < 350 and delta_t <= 1 and budget <= 10000:
return True
if action_choice == 2:
if ((carbon_level >= 350 and carbon_level < 450 and budget > 10000)
or (carbon_level >= 450 and delta_t >= 1 and budget > 10000)):
return True
if action_choice == 3:
if ((carbon_level <= 300 or carbon_level >= 450)
and delta_t > 1 and budget <= 10000):
return True
if action_choice == 4:
if ((carbon_level < 350 and budget > 10000)
or (carbon_level >= 450 and delta_t <= 1 and budget > 10000)):
return True
if action_choice == 5:
if (carbon_level >= 350 and carbon_level < 450
and delta_t > 1 and budget <= 10000):
return True
if action_choice == 6:
if (carbon_level >= 350 and carbon_level < 450
and delta_t <= 1 and budget <= 10000):
return True
return False
def move(self, move):
news = self.copy() # start with a deep copy.
        country, action = move.split(' ', 1)
action_choice = self.get_action_choice(action)
if action_choice == 1:
news.b['world']['carbon'] -= 50
news.b['world']['delta_t'] -= 0.2
news.b[country]['budget'] -= 3000
elif action_choice == 2:
news.b['world']['carbon'] -= 150
news.b['world']['delta_t'] -= 0.2
news.b[country]['budget'] -= 8000
elif action_choice == 3:
news.b['world']['carbon'] -= 20
news.b['world']['delta_t'] -= 0.1
elif action_choice == 4:
news.b['world']['carbon'] -= 100
news.b['world']['delta_t'] -= 0.4
news.b[country]['budget'] -= 4500
elif action_choice == 5:
news.b['world']['carbon'] += 80
news.b['world']['delta_t'] += 0.1
news.b[country]['budget'] += 2000
elif action_choice == 6:
news.b['world']['carbon'] -= 50
news.b[country]['budget'] += 3000
return news # return new state
def edge_distance(self, s2):
return 1.0
def goal_test(s):
    '''s is a goal state when the carbon concentration and the temperature
    change have both dropped to their target levels.'''
if s.b['world']['carbon'] <= 280 and s.b['world']['delta_t'] <= 0.05:
return True
return False
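# For reference: 280 ppm is roughly the pre-industrial atmospheric CO2
# concentration, so the goal demands returning near that baseline with
# essentially no remaining temperature increase.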
def goal_message(s):
return "You have successfully decreased the US + China Carbon Footprint!"
class Operator:
def __init__(self, name, precond, state_transf):
self.name = name
self.precond = precond
self.state_transf = state_transf
def is_applicable(self, s):
return self.precond(s)
def apply(self, s):
return self.state_transf(s)
#</COMMON_CODE>
#<INITIAL_STATE>
# Use default, but override if new value supplied
# by the user on the command line.
try:
import sys
init_state_string = sys.argv[2]
print("Initial state as given on the command line: "+init_state_string)
init_state_list = eval(init_state_string)
except Exception:
init_state_list = [[400, 0.8, 21000], [400, 0.8, 13000]]
print("Using default initial state list:" + str(init_state_list))
print(" (To use a specific initial state, enter it on the command line, " \
"with parameters in the following order: ['carbon', 'delta_t', 'budget']. " \
"Note that carbon and delta_t should be the same for both countries. e.g.,")
print("python3 UCS.py carbon_footprint '[[400, 0.8, 21000], [400, 0.8, 13000]]'")
CREATE_INITIAL_STATE = lambda: State(init_state_list)
#</INITIAL_STATE>
#<OPERATORS>
ACTION_SPACE = [(country + " " + action) for country in COUNTRIES for action in ACTIONS]
OPERATORS = [Operator(a,
lambda s, a1=a: s.can_move(a1),
                      # The default-argument construct is needed here to
                      # capture the current value of a on each iteration
                      # of the list comprehension.
lambda s, a1=a: s.move(a1))
for a in ACTION_SPACE]
#</OPERATORS>
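# A minimal illustration of the pitfall the comment above guards against
# (hypothetical, not used by the solver): with Python's late-binding
# closures,
#     fs = [lambda s: s.move(a) for a in ACTION_SPACE]
# would make every lambda apply the *last* action, whereas
#     fs = [lambda s, a1=a: s.move(a1) for a in ACTION_SPACE]
# freezes each action at definition time via the default argument.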
#<GOAL_TEST>
GOAL_TEST = lambda s: goal_test(s)
#</GOAL_TEST>
#<GOAL_MESSAGE_FUNCTION>
GOAL_MESSAGE_FUNCTION = lambda s: goal_message(s)
#</GOAL_MESSAGE_FUNCTION>
| 34.886029 | 92 | 0.541891 | 1,175 | 9,489 | 4.224681 | 0.203404 | 0.062853 | 0.028203 | 0.021757 | 0.28888 | 0.214948 | 0.205479 | 0.167002 | 0.158944 | 0.152297 | 0 | 0.03972 | 0.336706 | 9,489 | 271 | 93 | 35.01476 | 0.748967 | 0.112446 | 0 | 0.252577 | 0 | 0.005155 | 0.144512 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.07732 | false | 0 | 0.015464 | 0.025773 | 0.221649 | 0.030928 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70cd3809130978b9f18a56c77772c3f8afb2594d | 1,198 | py | Python | exp/utils/rand.py | PwLo3K46/vivit | 937642975be2ade122632d4eaef273461992d7ab | [
"MIT"
] | 1 | 2021-06-07T05:15:22.000Z | 2021-06-07T05:15:22.000Z | exp/utils/rand.py | PwLo3K46/vivit | 937642975be2ade122632d4eaef273461992d7ab | [
"MIT"
] | 2 | 2021-08-10T12:45:37.000Z | 2021-08-10T12:49:51.000Z | exp/utils/rand.py | PwLo3K46/vivit | 937642975be2ade122632d4eaef273461992d7ab | [
"MIT"
] | null | null | null | """Utility functions to control random seeds."""
import torch
class temporary_seed:
"""Temporarily set PyTorch seed to a different value, then restore current value.
This has the effect that code inside this context does not influence the outer
loop's random generator state.
"""
def __init__(self, temp_seed):
self._temp_seed = temp_seed
def __enter__(self):
"""Store the current seed."""
self._old_state = torch.get_rng_state()
torch.manual_seed(self._temp_seed)
def __exit__(self, exc_type, exc_value, traceback):
"""Restore the old random generator state."""
torch.set_rng_state(self._old_state)
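# Typical usage (a minimal sketch):
#
#     with temporary_seed(123):
#         noise = torch.rand(4)   # drawn under the temporary seed
#     x = torch.rand(4)           # outer RNG stream is unaffected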
def test_temporary_seed():
"""Test if temporary_seed works as expected."""
torch.manual_seed(3)
num1 = torch.rand(1)
with temporary_seed(2):
num2 = torch.rand(1)
num3 = torch.rand(1)
torch.manual_seed(3)
num4 = torch.rand(1)
num5 = torch.rand(1)
torch.manual_seed(2)
num6 = torch.rand(1)
assert torch.allclose(num1, num4)
assert torch.allclose(num3, num5)
assert torch.allclose(num2, num6)
if __name__ == "__main__":
test_temporary_seed()
| 22.603774 | 85 | 0.66778 | 165 | 1,198 | 4.563636 | 0.424242 | 0.071713 | 0.079681 | 0.042497 | 0.066401 | 0.066401 | 0 | 0 | 0 | 0 | 0 | 0.023784 | 0.22788 | 1,198 | 52 | 86 | 23.038462 | 0.79027 | 0.282137 | 0 | 0.08 | 0 | 0 | 0.009744 | 0 | 0 | 0 | 0 | 0 | 0.12 | 1 | 0.16 | false | 0 | 0.04 | 0 | 0.24 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70cd504265acd9c9b55efa67990f22e73373de94 | 4,297 | py | Python | dcolumn/dcolumns/tests/test_dcolumns_views.py | cnobile2012/dcolumn | db8432b73b8106fa9aa4da845addc79ac8cbfb6d | [
"MIT"
] | 9 | 2016-04-05T16:51:21.000Z | 2020-12-10T21:54:20.000Z | dcolumn/dcolumns/tests/test_dcolumns_views.py | cnobile2012/dcolumn | db8432b73b8106fa9aa4da845addc79ac8cbfb6d | [
"MIT"
] | null | null | null | dcolumn/dcolumns/tests/test_dcolumns_views.py | cnobile2012/dcolumn | db8432b73b8106fa9aa4da845addc79ac8cbfb6d | [
"MIT"
] | 1 | 2020-08-16T15:58:05.000Z | 2020-08-16T15:58:05.000Z | # -*- coding: utf-8 -*-
#
# dcolumn/dcolumns/tests/test_dcolumns_views.py
#
# WARNING: These unittests can only be run from within the original test
# framework from https://github.com/cnobile2012/dcolumn.
#
import datetime
import pytz
import dateutil
import json
from django.test import TestCase, Client
from django.core.exceptions import ValidationError
from django.urls import reverse
from dcolumn.dcolumns.views import CollectionAJAXView
from dcolumn.dcolumns.models import DynamicColumn
from example_site.books.choices import Language
from .base_tests import BaseDcolumns
class TestCollectionAJAXView(BaseDcolumns, TestCase):
_TEST_USERNAME = 'TestUser'
_TEST_PASSWORD = 'TestPassword_007'
def __init__(self, name):
super(TestCollectionAJAXView, self).__init__(name)
self.client = None
def setUp(self):
super(TestCollectionAJAXView, self).setUp()
self.client = self._set_user_auth()
def _set_user_auth(self, username=_TEST_USERNAME,
password=_TEST_PASSWORD, login=True):
client = Client()
if login:
client.login(username=username, password=password)
return client
def test_exceptions(self):
"""
Test that exceptions happen when they are supposed to happen.
"""
#self.skipTest("Temporarily skipped")
# Setup objects for the collections.
# Test for proper response
class_name = 'bookX'
url = reverse('dcolumns:api-collections',
kwargs={'class_name': class_name})
response = self.client.get(url)
msg = "response status: {}, should be 200".format(response.status_code)
        self.assertEqual(response.status_code, 200, msg)
#self.assertTrue(self._has_error(response), msg)
self._test_errors(response, tests={
'class_name': 'bookX',
'message': u'Error occurred: ColumnCollection matching query ',
'valid': False
}, exclude_keys=['valid'])
def test_valid_response(self):
"""
Test that the AJAX response is valid
"""
#self.skipTest("Temporarily skipped")
# Setup objects for the collections.
author, a_cc, a_values = self._create_author_objects()
promotion, p_cc, p_values = self._create_promotion_objects()
language = Language.objects.model_objects()[3] # Russian
dc0 = self._create_dynamic_column_record(
"Date & Time", DynamicColumn.DATETIME, 'book_top', 6)
dc1 = self._create_dynamic_column_record(
"Ignore", DynamicColumn.BOOLEAN, 'book_top', 7)
dc2 = self._create_dynamic_column_record(
"Edition", DynamicColumn.NUMBER, 'book_top', 8)
dc3 = self._create_dynamic_column_record(
"Percentage", DynamicColumn.FLOAT, 'book_top', 9)
book, b_cc, b_values = self._create_book_objects(
author=author, promotion=promotion, language=language,
extra_dcs=[dc0, dc1, dc2, dc3])
value = datetime.datetime.now(pytz.utc).isoformat()
kv0 = self._create_key_value_record(book, dc0, value)
b_values[dc0.slug] = dateutil.parser.parse(kv0.value)
value = 'FALSE'
kv1 = self._create_key_value_record(book, dc1, value)
b_values[dc1.slug] = kv1.value
value = 2
kv2 = self._create_key_value_record(book, dc2, value)
b_values[dc2.slug] = kv2.value
value = 20.5
kv3 = self._create_key_value_record(book, dc3, value)
b_values[dc3.slug] = kv3.value
# Test for proper response
class_name = 'book'
url = reverse('dcolumns:api-collections',
kwargs={'class_name': class_name})
response = self.client.get(url)
msg = "response status: {}, should be 200".format(response.status_code)
        self.assertEqual(response.status_code, 200, msg)
content = json.loads(response.content.decode(encoding='utf-8'))
msg = "content: {}".format(content)
self.assertEqual(content.get('class_name'), class_name, msg)
self.assertTrue('dynamicColumns' in content, msg)
self.assertTrue('relations' in content, msg)
self.assertTrue('valid' in content, msg)
| 38.711712 | 79 | 0.653945 | 500 | 4,297 | 5.41 | 0.336 | 0.040665 | 0.026617 | 0.034011 | 0.296488 | 0.234381 | 0.170795 | 0.170795 | 0.170795 | 0.127172 | 0 | 0.01659 | 0.242495 | 4,297 | 110 | 80 | 39.063636 | 0.814439 | 0.127764 | 0 | 0.12987 | 0 | 0 | 0.101437 | 0.013019 | 0 | 0 | 0 | 0 | 0.077922 | 1 | 0.064935 | false | 0.038961 | 0.142857 | 0 | 0.25974 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70cd710ec42b9f267fe1b6cb37f45e9fabe2ebde | 8,840 | py | Python | nscl/datasets/clevr/definition.py | OolongQian/NSCL-PyTorch-Release | 4cf0a633ceeaa9d221d66e066ef7892c04cdf9eb | [
"MIT"
] | null | null | null | nscl/datasets/clevr/definition.py | OolongQian/NSCL-PyTorch-Release | 4cf0a633ceeaa9d221d66e066ef7892c04cdf9eb | [
"MIT"
] | null | null | null | nscl/datasets/clevr/definition.py | OolongQian/NSCL-PyTorch-Release | 4cf0a633ceeaa9d221d66e066ef7892c04cdf9eb | [
"MIT"
] | null | null | null | #! /usr/bin/env python3
# -*- coding: utf-8 -*-
# File : definition.py
# Author : Jiayuan Mao
# Email : maojiayuan@gmail.com
# Date : 09/29/2018
#
# This file is part of NSCL-PyTorch.
# Distributed under terms of the MIT license.
"""This module instantiate DatasetDefinition's symbolic structure
form to clevr dataset."""
import six
import numpy as np
from jacinle.logging import get_logger
from nscl.datasets.definition import DatasetDefinitionBase
from .program_translator import clevr_to_nsclseq
logger = get_logger(__file__)
"""qian: from XXX import * only imports objects specified in '__all__', if is defined in XXX.
don't be afraid of these strange stuff!"""
__all__ = [
'CLEVRDefinition',
'build_clevr_dataset', 'build_symbolic_clevr_dataset',
'build_concept_retrieval_clevr_dataset', 'build_concept_quantization_clevr_dataset'
]
class CLEVRDefinition(DatasetDefinitionBase):
"""pass"""
"""the following operation_signatures, attribute_concepts, relational_concepts, synonyms,
override parent form class, and are instantiations for clevr dataset."""
"""qian: operation_signatures mean all operations the program can do.
these signatures declares the input and output forms of information.
and these symbolic forms are logical and is the key to NSCL."""
operation_signatures = [
# Part 1: clevr dataset.
('scene', [], [], 'object_set'),
('filter', ['concept'], ['object_set'], 'object_set'),
('relate', ['relational_concept'], ['object'], 'object_set'),
('relate_attribute_equal', ['attribute'], ['object'], 'object_set'),
('intersect', [], ['object_set', 'object_set'], 'object_set'),
('union', [], ['object_set', 'object_set'], 'object_set'),
('query', ['attribute'], ['object'], 'word'),
('query_attribute_equal', ['attribute'], ['object', 'object'], 'bool'),
('exist', [], ['object_set'], 'bool'),
('count', [], ['object_set'], 'integer'),
('count_less', [], ['object_set', 'object_set'], 'bool'),
('count_equal', [], ['object_set', 'object_set'], 'bool'),
('count_greater', [], ['object_set', 'object_set'], 'bool'),
]
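    # How to read a signature, e.g. ('filter', ['concept'], ['object_set'],
    # 'object_set'): the operation name, its parameter types, the types of
    # its inputs, and the type of the value it returns.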
"""qian: declare possible concepts values for each attribute."""
attribute_concepts = {
'color': ['gray', 'red', 'blue', 'green', 'brown', 'purple', 'cyan', 'yellow'],
'material': ['rubber', 'metal'],
'shape': ['cube', 'sphere', 'cylinder'],
'size': ['small', 'large']
}
"""qian: similar to above."""
relational_concepts = {
'spatial_relation': ['left', 'right', 'front', 'behind']
}
"""qian: do some extra for NLP question processing."""
synonyms = {
"thing": ["thing", "object"],
"sphere": ["sphere", "ball", "spheres", "balls"],
"cube": ["cube", "block", "cubes", "blocks"],
"cylinder": ["cylinder", "cylinders"],
"large": ["large", "big"],
"small": ["small", "tiny"],
"metal": ["metallic", "metal", "shiny"],
"rubber": ["rubber", "matte"],
}
word2lemma = {
v: k for k, vs in synonyms.items() for v in vs
}
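    # word2lemma inverts the synonyms table, mapping every surface form back
    # to its canonical lemma (e.g. 'balls' -> 'sphere', 'big' -> 'large').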
EBD_CONCEPT_GROUPS = '<CONCEPTS>'
EBD_RELATIONAL_CONCEPT_GROUPS = '<REL_CONCEPTS>'
EBD_ATTRIBUTE_GROUPS = '<ATTRIBUTES>'
extra_embeddings = [EBD_CONCEPT_GROUPS, EBD_RELATIONAL_CONCEPT_GROUPS, EBD_ATTRIBUTE_GROUPS]
@staticmethod
def _is_object_annotation_available(scene):
assert len(scene['objects']) > 0
if 'mask' in scene['objects'][0]:
return True
return False
"""qian: the following methods could be understood when inspecting where they are invoked.
thus i jump to read others."""
def annotate_scene(self, scene):
feed_dict = dict()
if not self._is_object_annotation_available(scene):
return feed_dict
for attr_name, concepts in self.attribute_concepts.items():
concepts2id = {v: i for i, v in enumerate(concepts)}
values = list()
for obj in scene['objects']:
assert attr_name in obj
values.append(concepts2id[obj[attr_name]])
values = np.array(values, dtype='int64')
feed_dict['attribute_' + attr_name] = values
lhs, rhs = np.meshgrid(values, values)
feed_dict['attribute_relation_' + attr_name] = (lhs == rhs).astype('float32').reshape(-1)
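            # (np.meshgrid expands `values` into two N x N grids, so the
            # equality test above yields a flattened pairwise matrix whose
            # (i, j) entry is 1.0 exactly when objects i and j share the
            # same value for this attribute.)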
nr_objects = len(scene['objects'])
for attr_name, concepts in self.relational_concepts.items():
concept_values = []
for concept in concepts:
values = np.zeros((nr_objects, nr_objects), dtype='float32')
assert concept in scene['relationships']
this_relation = scene['relationships'][concept]
assert len(this_relation) == nr_objects
for i, this_row in enumerate(this_relation):
for j in this_row:
values[i, j] = 1
concept_values.append(values)
concept_values = np.stack(concept_values, -1)
feed_dict['relation_' + attr_name] = concept_values.reshape(-1, concept_values.shape[-1])
return feed_dict
def annotate_question_metainfo(self, metainfo):
if 'template_filename' in metainfo:
return dict(template=metainfo['template_filename'], template_index=metainfo['question_family_index'])
return dict()
def annotate_question(self, metainfo):
return dict()
def program_to_nsclseq(self, program, question=None):
return clevr_to_nsclseq(program)
def canonize_answer(self, answer, question_type):
if answer in ('yes', 'no'):
answer = (answer == 'yes')
elif isinstance(answer, six.string_types) and answer.isdigit():
answer = int(answer)
assert 0 <= answer <= 10
return answer
def update_collate_guide(self, collate_guide):
# Scene annotations.
for attr_name in self.attribute_concepts:
collate_guide['attribute_' + attr_name] = 'concat'
collate_guide['attribute_relation_' + attr_name] = 'concat'
for attr_name in self.relational_concepts:
collate_guide['relation_' + attr_name] = 'concat'
# From ExtractConceptsAndAttributes and SearchCandidatePrograms.
for param_type in self.parameter_types:
collate_guide['question_' + param_type + 's'] = 'skip'
collate_guide['program_parserv1_groundtruth_qstree'] = 'skip'
collate_guide['program_parserv1_candidates_qstree'] = 'skip'
"""qian: i should read nscl.dataset first."""
def build_clevr_dataset(args, configs, image_root, scenes_json, questions_json):
"""qian: here we build clevr dataset for Neural-Symbolic Concept Learner."""
import jactorch.transforms.bbox as T
image_transform = T.Compose([
T.NormalizeBbox(),
T.Resize(configs.data.image_size),
T.DenormalizeBbox(),
T.ToTensor(),
T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
from nscl.datasets.datasets import NSCLDataset
dataset = NSCLDataset(
scenes_json, questions_json,
image_root=image_root, image_transform=image_transform,
vocab_json=args.data_vocab_json
)
return dataset
def build_concept_retrieval_clevr_dataset(args, configs, program, image_root, scenes_json):
import jactorch.transforms.bbox as T
image_transform = T.Compose([
T.NormalizeBbox(),
T.Resize(configs.data.image_size),
T.DenormalizeBbox(),
T.ToTensor(),
T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
from nscl.datasets.datasets import ConceptRetrievalDataset
dataset = ConceptRetrievalDataset(
program, scenes_json,
image_root=image_root, image_transform=image_transform
)
return dataset
def build_concept_quantization_clevr_dataset(args, configs, image_root, scenes_json):
import jactorch.transforms.bbox as T
image_transform = T.Compose([
T.NormalizeBbox(),
T.Resize(configs.data.image_size),
T.DenormalizeBbox(),
T.ToTensor(),
T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
from nscl.datasets.datasets import ConceptQuantizationDataset
dataset = ConceptQuantizationDataset(scenes_json, image_root=image_root, image_transform=image_transform)
return dataset
def build_symbolic_clevr_dataset(args):
from nscl.datasets.datasets import NSCLDataset
dataset = NSCLDataset(
args.data_scenes_json, args.data_questions_json,
image_root=None, image_transform=None,
vocab_json=args.data_vocab_json
)
return dataset
| 37.142857 | 113 | 0.638122 | 1,007 | 8,840 | 5.380338 | 0.296922 | 0.031561 | 0.022148 | 0.026578 | 0.287191 | 0.217793 | 0.18863 | 0.18863 | 0.145995 | 0.136766 | 0 | 0.015357 | 0.233937 | 8,840 | 237 | 114 | 37.299578 | 0.784702 | 0.054751 | 0 | 0.231707 | 0 | 0 | 0.163254 | 0.031848 | 0 | 0 | 0 | 0 | 0.030488 | 1 | 0.067073 | false | 0 | 0.073171 | 0.012195 | 0.280488 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70cdf53b5f166c1a987fffcb67c69e17cb82ec0e | 2,388 | py | Python | code/data_modeling/samplesize.py | pamta/msc-thesis | c727cc275930ee2819ec1fd3e25bf85b0535cee8 | [
"MIT"
] | null | null | null | code/data_modeling/samplesize.py | pamta/msc-thesis | c727cc275930ee2819ec1fd3e25bf85b0535cee8 | [
"MIT"
] | null | null | null | code/data_modeling/samplesize.py | pamta/msc-thesis | c727cc275930ee2819ec1fd3e25bf85b0535cee8 | [
"MIT"
] | 1 | 2021-09-13T20:35:39.000Z | 2021-09-13T20:35:39.000Z | # Script from http://veekaybee.github.io/how-big-of-a-sample-size-do-you-need/ on how to calculate sample size, adjusted for my own population size
# and confidence intervals
# Original here: http://bc-forensics.com/?p=15
import math
import pandas as pd
# SUPPORTED CONFIDENCE LEVELS: 50%, 68%, 90%, 95%, and 99%
confidence_level_constant = (
[50, 0.67],
[68, 0.99],
[80, 1.28],
[85, 1.44],
[90, 1.64],
[95, 1.96],
[99, 2.57],
)
# CALCULATE THE SAMPLE SIZE
def sample_size(population_size, confidence_level, confidence_interval):
Z = 0.0
p = 0.5
e = confidence_interval / 100.0
N = population_size
n_0 = 0.0
n = 0.0
# LOOP THROUGH SUPPORTED CONFIDENCE LEVELS AND FIND THE NUM STD
# DEVIATIONS FOR THAT CONFIDENCE LEVEL
for i in confidence_level_constant:
if i[0] == confidence_level:
Z = i[1]
if Z == 0.0:
return -1
# CALC SAMPLE SIZE
n_0 = ((Z ** 2) * p * (1 - p)) / (e ** 2)
# ADJUST SAMPLE SIZE FOR FINITE POPULATION
n = n_0 / (1 + ((n_0 - 1) / float(N)))
return int(math.ceil(n)) # THE SAMPLE SIZE
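# A worked example, using the constants defined below: for N = 10031,
# 95% confidence (Z = 1.96) and a +/-5 confidence interval,
#   n_0 = (1.96**2 * 0.5 * 0.5) / 0.05**2 = 384.16
#   n   = 384.16 / (1 + 383.16 / 10031) ~= 370.03, so ceil gives 371.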
sample_sz = 0
population_sz = 10031
confidence_level = 95
confidence_interval = 5.0
#sample_sz = sample_size(population_sz, confidence_level, confidence_interval)
#print(sample_sz)
# df = pd.read_csv("to_validate.csv")
# df.sample(n=383, random_state=42).index
df = pd.read_csv("res.csv")
df["sample_50_05"] = [sample_size(size, 50.0, 5.0) for size in df["count"]]
df["sample_50_10"] = [sample_size(size, 50.0, 10.0) for size in df["count"]]
df["sample_68_05"] = [sample_size(size, 68.0, 5.0) for size in df["count"]]
df["sample_68_10"] = [sample_size(size, 68.0, 10.0) for size in df["count"]]
df["sample_80_05"] = [sample_size(size, 80.0, 5.0) for size in df["count"]]
df["sample_80_10"] = [sample_size(size, 80.0, 10.0) for size in df["count"]]
df["sample_85_05"] = [sample_size(size, 85.0, 5.0) for size in df["count"]]
df["sample_85_10"] = [sample_size(size, 85.0, 10.0) for size in df["count"]]
df["sample_90_05"] = [sample_size(size, 90.0, 5.0) for size in df["count"]]
df["sample_90_10"] = [sample_size(size, 90.0, 10.0) for size in df["count"]]
df["sample_95_05"] = [sample_size(size, 95.0, 5.0) for size in df["count"]]
df["sample_95_10"] = [sample_size(size, 95.0, 10.0) for size in df["count"]]
df.to_csv("res_sample_sizes.csv", index=False)
| 32.712329 | 147 | 0.651591 | 429 | 2,388 | 3.468531 | 0.235431 | 0.134409 | 0.112903 | 0.080645 | 0.360215 | 0.231183 | 0.231183 | 0.231183 | 0.231183 | 0.202957 | 0 | 0.103378 | 0.181742 | 2,388 | 72 | 148 | 33.166667 | 0.658137 | 0.268007 | 0 | 0 | 0 | 0 | 0.133295 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022727 | false | 0 | 0.045455 | 0 | 0.113636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70ce8f18662b41a2bbf8d30ed0e2f61a241eb43f | 2,747 | py | Python | edge/upload_aivdm_sql_static_http_ships.py | dannil10/dogger | 7e4570f1aa7d5393a9ae182498573d03fe1b61e9 | [
"MIT"
] | null | null | null | edge/upload_aivdm_sql_static_http_ships.py | dannil10/dogger | 7e4570f1aa7d5393a9ae182498573d03fe1b61e9 | [
"MIT"
] | null | null | null | edge/upload_aivdm_sql_static_http_ships.py | dannil10/dogger | 7e4570f1aa7d5393a9ae182498573d03fe1b61e9 | [
"MIT"
] | null | null | null | import gateway.link
udp_upload_ais = gateway.link.SqlHttpUpdateStatic(
channels = {172},
start_delay = 0,
transmit_rate = 0.2,
max_age = 10,
max_connect_attempts = 50,
message_formats = [ {"message": {"type":5},
"imo": { "type":"int", "novalue":0 },
"callsign": { "type":"str" },
"shipname": { "type":"str" },
"shiptype": { "type":"int", "novalue":0 },
"to_bow": { "type":"int", "novalue":0, "overflow":511 },
"to_stern": { "type":"int", "novalue":0, "overflow":511 },
"to_port": { "type":"int", "novalue":0, "overflow":63 },
"to_starboard": { "type":"int", "novalue":0, "overflow":63 },
"month": { "type":"int", "novalue":0 },
"day": { "type":"int", "novalue":0 },
"hour": { "type":"int", "novalue":24 },
"minute": { "type":"int", "novalue":60 },
"draught": { "type":"float", "round": 1, "novalue":0 },
"destination": { "type":"str" },
"device_hardware_id": { "function":{"name":"create_key", "args":{"mmsi"}} },
"host_hardware_id": { "function":{"name":"get_host_hardware_id", "args":{"imo", "shipname", "callsign", "mmsi"}} },
"image_filename":{ "function":{"name":"get_png_name_by_key", "args":{"mmsi"}} } },
{"message": {"type":[1,2,3]},
"status": { "type":"int", "novalue":15 },
"turn": { "type":"float", "round":1, "novalue":-128, "underflow":-127, "overflow":127 },
"speed": { "type":"float", "round":1, "novalue":1023, "overflow":1022 },
"accuracy": { "type":"bool" },
"lon": { "type":"float", "round":5, "novalue":181 },
"lat": { "type":"float", "round":5, "novalue":91 },
"course": { "type":"float", "round":1, "novalue":360 },
"heading": { "type":"float", "round":1, "novalue":511 },
"second": { "type":"int", "novalue":[61,62,63], "nosensor":60 },
"device_hardware_id": { "function":{"name":"create_key", "args":{"mmsi"}} },
"image_filename":{ "function":{"name":"get_png_name_by_key", "args":{"mmsi"}} } } ],
config_filepath = '/srv/dogger/',
config_filename = 'conf_cloud_db.ini')
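# The 'novalue'/'overflow' codes above mirror the standard AIS field
# sentinels (e.g. lon 181 / lat 91 mean "position not available", speed
# 1023 means "not available"); exactly how gateway.link interprets these
# keys is an assumption here, inferred from the field names.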
udp_upload_ais.run()
| 62.431818 | 137 | 0.415362 | 246 | 2,747 | 4.479675 | 0.398374 | 0.076225 | 0.15245 | 0.108893 | 0.408348 | 0.268603 | 0.22323 | 0.172414 | 0.172414 | 0.094374 | 0 | 0.047481 | 0.371314 | 2,747 | 43 | 138 | 63.883721 | 0.59062 | 0 | 0 | 0.05 | 0 | 0 | 0.325814 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.025 | 0 | 0.025 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
70d460184176b8ff0b28fdd92bb6d0083ea81fa8 | 2,080 | py | Python | config.py | dr563105/basicad_framework | 0d820390116dcd58ac7061074b5e33cb0c2f26de | [
"MIT"
] | null | null | null | config.py | dr563105/basicad_framework | 0d820390116dcd58ac7061074b5e33cb0c2f26de | [
"MIT"
] | null | null | null | config.py | dr563105/basicad_framework | 0d820390116dcd58ac7061074b5e33cb0c2f26de | [
"MIT"
] | null | null | null | import os
from datetime import datetime
BASE_PATH = os.path.dirname(os.path.realpath(__file__))
CSV_PATH = u'{}/data'.format(BASE_PATH)
CSV_PATH_DS3 = u'{}/data/ds3'.format(BASE_PATH)
IMG_PATH = u'{}/data/img'.format(BASE_PATH)
IMG_PATH_DS3 = u'{}/data/ds3/img'.format(BASE_PATH)
HDF5_PATH = u'{}/data/hdf5'.format(BASE_PATH)
HDF5_PATH_DS3 = u'{}/data/hdf5/ds3'.format(BASE_PATH)
MODEL_PATH = u'{}/models'.format(BASE_PATH)
IMAGE_DIM = (160, 70)  # changed from (320, 70)
TIMESTEPS = 15
GPU_ID_TO_USE = '1'
SKLEARN_TRAIN_TEST_SPLIT_SIZE = 0.2
SKLEARN_RANDOM_STATE = 42
SAMPLE_WEIGHT = None
WEIGHT_FOR_CCE = 1
WEIGHT_FOR_MSE = 0.001
WEIGHT_FOR_ST = 5
NOTIMESTEPS_INPUT_SHAPE = (70,160,1)
TIMESTEPS_INPUT_SHAPE = (15,70,160,1)
CONV2D_FILTERS_1 = 24
CONV2D_FILTERS_2 = 36
CONV2D_FILTERS_3 = 48
CONV2D_FILTERS_4 = 64
CONV2D_FILTERS_5 = 80
CONV2D_FILTERS_6 = 96
KERNEL_SIZE_1 = (5, 5)
KERNEL_SIZE_2 = (3, 3)
STRIDE_DIM_1 = (2, 2)
STRIDE_DIM_2 = (1, 1)
PADDING = 'same'
CONV2D_ACTIVATION_FN = 'relu'
LSTM_OUTPUT_UNITS = 100
LSTM_ACTIVATION_FN = 'tanh'
LSTM_RETURN_SEQ = False
DROPOUT_VALUE = 0.5
DENSE_HIDDEN_UNITS_1 = 100
DENSE_HIDDEN_UNITS_2 = 50
DENSE_HIDDEN_UNITS_3 = 10
DENSE_ACTIVATION_FN = 'relu'
DENSE_OUTPUT_ACTIVATION_FN_STEERING = 'tanh'
DENSE_OUTPUT_ACTIVATION_FN_VELOCITY = 'relu'
DENSE_OUTPUT_ACTIVATION_FN_ACCELERATION = 'tanh'
DENSE_OUTPUT_ACTIVATION_FN_CLASSIFICATION = 'softmax'
MODEL_LEARNING_RATE = 1e-04
MODEL_LEARNING_DECAY = 0.0
MODEL_LOSS_FN1 = 'mse'
MODEL_LOSS_FN2 = 'cce'
PLOT_MODEL_SAVE_FILE = 'evaluate_ts15_ds3_Class6_cseg2.png'
PLOT_MODEL_SHOW_SHAPES = True
CALLBACKS_MONITOR = 'val_loss'
CALLBACKS_MONITOR_MODE = 'min'
SAVE_FORMAT = datetime.now().strftime("%Y-%m-%dT%H:%M")
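# Note: SAVE_FORMAT is evaluated once, at import time; the ':' in the
# timestamp is fine on Linux but would be illegal in a Windows filename.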
EARLYSTOPPING_PATIENCE = 15
MODEL_CHECKPOINT_FILENAME = 'model_shuffled_2LSTM_tanh.h5'
TENSORBOARD_LOG_FILENAME = "ts1_shuffled_2LSTM_tanh_"
TENSORBOARD_LOG_PATH = u'{}tb_logs/'.format(BASE_PATH) + TENSORBOARD_LOG_FILENAME + SAVE_FORMAT
CALLBACKS_VERBOSITY = 1
MODEL_FIT_VERBOSITY = 1
MODEL_CHECKPOINT_SAVE_BEST = True
MODEL_FIT_SHUFFLE = True
TRAINING_EPOCH = 50
BATCH_SIZE = 128
| 31.515152 | 95 | 0.792308 | 343 | 2,080 | 4.364431 | 0.405248 | 0.048096 | 0.074816 | 0.061456 | 0.146961 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066169 | 0.099038 | 2,080 | 65 | 96 | 32 | 0.732657 | 0.010577 | 0 | 0 | 0 | 0 | 0.118619 | 0.041808 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.03125 | 0 | 0.03125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |