hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
d21892bc6e13fbca51eb7154188132cae4f0e838 | 667 | py | Python | app/db/events.py | ilya-goldin/kanban-board-app | 3c7026aedb0e21eaccc26a2ac4a37f0b6a91a122 | [
"MIT"
] | null | null | null | app/db/events.py | ilya-goldin/kanban-board-app | 3c7026aedb0e21eaccc26a2ac4a37f0b6a91a122 | [
"MIT"
] | null | null | null | app/db/events.py | ilya-goldin/kanban-board-app | 3c7026aedb0e21eaccc26a2ac4a37f0b6a91a122 | [
"MIT"
] | null | null | null | import asyncpg
from fastapi import FastAPI
from loguru import logger
from app.core.settings.app import AppSettings

async def connect_to_db(app: FastAPI, settings: AppSettings) -> None:
    logger.info('Connecting to PostgreSQL')
    app.state.pool = await asyncpg.create_pool(
        str(settings.database_url),
        min_size=settings.min_connection_count,
        max_size=settings.max_connection_count,
        command_timeout=60,
    )
    logger.info('Connection established')


async def close_db_connection(app: FastAPI) -> None:
    logger.info('Closing connection to database')
    await app.state.pool.close()
    logger.info('Connection closed')
| 24.703704 | 69 | 0.731634 | 85 | 667 | 5.588235 | 0.447059 | 0.084211 | 0.058947 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00365 | 0.178411 | 667 | 26 | 70 | 25.653846 | 0.863139 | 0 | 0 | 0 | 0 | 0 | 0.13943 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.235294 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d220977b89635aa8f8397e7f63e18931cf662876 | 609 | py | Python | skit_pipelines/components/extract_tgz.py | skit-ai/skit-pipelines | d692582107aee81b1bb4aebcf169f7260ac956b5 | [
"MIT"
] | null | null | null | skit_pipelines/components/extract_tgz.py | skit-ai/skit-pipelines | d692582107aee81b1bb4aebcf169f7260ac956b5 | [
"MIT"
] | 4 | 2022-03-22T14:17:46.000Z | 2022-03-24T16:22:23.000Z | skit_pipelines/components/extract_tgz.py | skit-ai/skit-pipelines | d692582107aee81b1bb4aebcf169f7260ac956b5 | [
"MIT"
] | null | null | null | from typing import Union
import kfp
from kfp.components import InputPath, OutputPath
from skit_pipelines import constants as pipeline_constants

def extract_tgz_archive(
    tgz_path: InputPath(str),
    output_path: OutputPath(str),
):
    import tarfile
    from loguru import logger

    logger.debug(f"Extracting .tgz archive {tgz_path}.")
    tar = tarfile.open(tgz_path)
    tar.extractall(path=output_path)
    tar.close()
    logger.debug(f"Extracted successfully.")


extract_tgz_op = kfp.components.create_component_from_func(
    extract_tgz_archive, base_image=pipeline_constants.BASE_IMAGE
)
| 22.555556 | 65 | 0.766831 | 81 | 609 | 5.530864 | 0.469136 | 0.066964 | 0.075893 | 0.075893 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.155993 | 609 | 26 | 66 | 23.423077 | 0.871595 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.333333 | 0 | 0.388889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
d2222f7d6b30cad257fa79d950b134ab33ead31c | 2,994 | py | Python | oneflow/python/test/onnx/util.py | basicv8vc/oneflow | 2a0480b3f4ff42a59fcae945a3b3bb2d208e37a3 | [
"Apache-2.0"
] | 1 | 2020-10-13T03:03:40.000Z | 2020-10-13T03:03:40.000Z | oneflow/python/test/onnx/util.py | basicv8vc/oneflow | 2a0480b3f4ff42a59fcae945a3b3bb2d208e37a3 | [
"Apache-2.0"
] | null | null | null | oneflow/python/test/onnx/util.py | basicv8vc/oneflow | 2a0480b3f4ff42a59fcae945a3b3bb2d208e37a3 | [
"Apache-2.0"
] | null | null | null | """
Copyright 2020 The OneFlow Authors. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import numpy as np
import oneflow as flow
import onnxruntime as ort
import onnx
from collections import OrderedDict
import tempfile
import os
import shutil

def convert_to_onnx_and_check(
    job_func,
    print_outlier=False,
    explicit_init=True,
    external_data=False,
    ort_optimize=True,
    opset=None,
):
    check_point = flow.train.CheckPoint()
    if explicit_init:
        # it is a trick to keep check_point.save() from hanging when there is no variable
        @flow.global_function(flow.FunctionConfig())
        def add_var():
            return flow.get_variable(
                name="trick",
                shape=(1,),
                dtype=flow.float,
                initializer=flow.random_uniform_initializer(),
            )

        check_point.init()
    flow_weight_dir = tempfile.TemporaryDirectory()
    check_point.save(flow_weight_dir.name)
    # TODO(daquexian): a more elegant way?
    while not os.path.exists(os.path.join(flow_weight_dir.name, "snapshot_done")):
        pass
    onnx_model_dir = tempfile.TemporaryDirectory()
    onnx_model_path = os.path.join(onnx_model_dir.name, "model.onnx")
    flow.onnx.export(
        job_func,
        flow_weight_dir.name,
        onnx_model_path,
        opset=opset,
        external_data=external_data,
    )
    flow_weight_dir.cleanup()
    ort_sess_opt = ort.SessionOptions()
    ort_sess_opt.graph_optimization_level = (
        ort.GraphOptimizationLevel.ORT_ENABLE_EXTENDED
        if ort_optimize
        else ort.GraphOptimizationLevel.ORT_DISABLE_ALL
    )
    sess = ort.InferenceSession(onnx_model_path, sess_options=ort_sess_opt)
    onnx_model_dir.cleanup()
    assert len(sess.get_outputs()) == 1
    assert len(sess.get_inputs()) <= 1
    ipt_dict = OrderedDict()
    for ipt in sess.get_inputs():
        ipt_data = np.random.uniform(low=-10, high=10, size=ipt.shape).astype(
            np.float32
        )
        ipt_dict[ipt.name] = ipt_data
    onnx_res = sess.run([], ipt_dict)[0]
    oneflow_res = job_func(*ipt_dict.values()).get().numpy()
    rtol, atol = 1e-2, 1e-5
    if print_outlier:
        a = onnx_res.flatten()
        b = oneflow_res.flatten()
        for i in range(len(a)):
            if np.abs(a[i] - b[i]) > atol + rtol * np.abs(b[i]):
                print("a[{}]={}, b[{}]={}".format(i, a[i], i, b[i]))
    assert np.allclose(onnx_res, oneflow_res, rtol=rtol, atol=atol)
| 33.640449 | 89 | 0.671343 | 418 | 2,994 | 4.626794 | 0.430622 | 0.031024 | 0.033609 | 0.02637 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00952 | 0.228123 | 2,994 | 88 | 90 | 34.022727 | 0.827347 | 0.233467 | 0 | 0.029851 | 0 | 0 | 0.020122 | 0 | 0 | 0 | 0 | 0.011364 | 0.044776 | 1 | 0.029851 | false | 0.014925 | 0.119403 | 0.014925 | 0.164179 | 0.044776 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d22588027964a9ce9520023258895efa1631a6bd | 5,001 | py | Python | src/peter_sslers/lib/errors.py | jvanasco/pyramid_letsencrypt_admin | 6db37d30ef8028ff978bf6083cdf978fc88a4782 | [
"MIT"
] | 35 | 2016-04-21T18:55:31.000Z | 2022-03-30T08:22:43.000Z | src/peter_sslers/lib/errors.py | jvanasco/pyramid_letsencrypt_admin | 6db37d30ef8028ff978bf6083cdf978fc88a4782 | [
"MIT"
] | 8 | 2018-05-23T13:38:49.000Z | 2021-03-19T21:05:44.000Z | src/peter_sslers/lib/errors.py | jvanasco/pyramid_letsencrypt_admin | 6db37d30ef8028ff978bf6083cdf978fc88a4782 | [
"MIT"
] | 2 | 2016-08-18T21:07:11.000Z | 2017-01-11T09:47:40.000Z | def formstash_to_querystring(formStash):
    err = []
    for (k, v) in formStash.errors.items():
        err.append(("%s--%s" % (k, v)).replace("\n", "+").replace(" ", "+"))
    err = sorted(err)
    err = "---".join(err)
    return err


class _UrlSafeException(Exception):
    @property
    def as_querystring(self):
        return str(self).replace("\n", "+").replace(" ", "+")


class GarfieldMinusGarfield(Exception):
    """
    An exception for those odd moments
    """

    pass


class InvalidTransition(Exception):
    """raised when a transition is invalid"""

    pass


class ObjectExists(Exception):
    """raised when an object already exists, no need to create"""

    pass


class ConflictingObject(Exception):
    """
    raised when an object already exists
    args[0] = tuple(conflicting_object, error_message_string)
    """

    pass


class OpenSslError(Exception):
    pass


class OpenSslError_CsrGeneration(OpenSslError):
    pass


class OpenSslError_InvalidKey(OpenSslError):
    pass


class OpenSslError_InvalidCSR(OpenSslError):
    pass


class OpenSslError_InvalidCertificate(OpenSslError):
    pass


class OpenSslError_VersionTooLow(OpenSslError):
    pass


class QueueProcessingError(Exception):
    pass


class AcmeError(_UrlSafeException):
    pass


class AcmeDuplicateAccount(AcmeError):
    """
    args[0] MUST be the duplicate AcmeAccount
    """

    pass


class AcmeDuplicateChallenges(AcmeError):
    pass


class AcmeDuplicateChallengesExisting(AcmeDuplicateChallenges):
    """the first arg should be a list of the active challenges"""

    def __str__(self):
        return (
            """One or more domains already have active challenges: %s."""
            % ", ".join(
                [
                    "`%s` (%s)" % (ac.domain.domain_name, ac.acme_challenge_type)
                    for ac in self.args[0]
                ]
            )
        )


class AcmeDuplicateChallenge(AcmeDuplicateChallenges):
    """the first arg should be a single active challenge"""

    def __str__(self):
        return (
            """This domain already has active challenges: `%s`."""
            % self.args[0].domain.domain_name
        )


class AcmeDuplicateOrderlessDomain(AcmeDuplicateChallenges):
    pass


class AcmeServerError(AcmeError):
    pass


class AcmeServer404(AcmeServerError):
    pass


class AcmeCommunicationError(AcmeError):
    pass


class AcmeAuthorizationFailure(AcmeError):
    """raised when an Authorization fails"""

    pass


class AcmeOrphanedObject(AcmeError):
    pass


class AcmeOrderError(AcmeError):
    pass


class AcmeOrderFatal(AcmeOrderError):
    """
    The AcmeOrder has a fatal error.
    Authorizations should be killed.
    """

    pass


class AcmeOrderCreatedError(AcmeOrderError):
    """
    If an exception occurs AFTER an AcmeOrder is created, raise this.
    It should have two attributes:
    args[0] - AcmeOrder
    args[1] - original exception
    """

    def __str__(self):
        return "An AcmeOrder-{0} was created but errored".format(self.args[0])

    @property
    def acme_order(self):
        return self.args[0]

    @property
    def original_exception(self):
        return self.args[1]


class AcmeOrderProcessing(AcmeOrderCreatedError):
    """
    raise when the AcmeOrder is `processing` (RFC status)
    this should generally indicate the user should retry their action
    """

    def __str__(self):
        return "An AcmeOrder-{0} was created. The order is still processing.".format(
            self.args[0]
        )


class AcmeOrderValid(AcmeOrderCreatedError):
    """
    raise when the AcmeOrder is `valid` (RFC status)
    this should generally indicate the user should retry their action
    """

    def __str__(self):
        return "An AcmeOrder-{0} was created. The order is valid and the CertificateSigned can be downloaded.".format(
            self.args[0]
        )


class AcmeMissingChallenges(AcmeError):
    """There are no Acme Challenges"""

    pass


class AcmeChallengeFailure(AcmeError):
    pass


class AcmeDomainsInvalid(AcmeError):
    def __str__(self):
        return "The following Domains are invalid: {0}".format(", ".join(self.args[0]))


class AcmeDomainsBlocklisted(AcmeDomainsInvalid):
    def __str__(self):
        return "The following Domains are blocklisted: {0}".format(
            ", ".join(self.args[0])
        )


class AcmeDomainsRequireConfigurationAcmeDNS(AcmeDomainsInvalid):
    def __str__(self):
        return "The following Domains are not configured with ACME-DNS: {0}".format(
            ", ".join(self.args[0])
        )


class DomainVerificationError(AcmeError):
    pass


class DisplayableError(_UrlSafeException):
    pass


class InvalidRequest(_UrlSafeException):
    """
    raised when an end-user wants to do something invalid/not-allowed
    """

    pass


# class TransitionError(_UrlSafeException):
#     pass


# class OperationsContextError(_UrlSafeException):
#     pass
| 20.084337 | 118 | 0.659868 | 513 | 5,001 | 6.325536 | 0.31384 | 0.077658 | 0.024961 | 0.039445 | 0.253621 | 0.228968 | 0.201849 | 0.127581 | 0.115871 | 0.069646 | 0 | 0.006049 | 0.239752 | 5,001 | 248 | 119 | 20.165323 | 0.847449 | 0.212557 | 0 | 0.392857 | 0 | 0 | 0.101071 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.107143 | false | 0.241071 | 0 | 0.098214 | 0.535714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
d2293531f48224d20922b0077cb19bb8cfd631bb | 18,212 | py | Python | cognitive_services/__main__.py | cleveranjos/Rapid-ML-Gateway | 10a14abfce3351791331642c47eddfbf622e76d2 | [
"MIT"
] | 3 | 2020-07-15T19:45:31.000Z | 2020-09-30T16:15:48.000Z | cognitive_services/__main__.py | cleveranjos/Rapid-ML-Gateway | 10a14abfce3351791331642c47eddfbf622e76d2 | [
"MIT"
] | 12 | 2020-07-15T17:00:24.000Z | 2021-01-19T21:02:00.000Z | cognitive_services/__main__.py | cleveranjos/Rapid-ML-Gateway | 10a14abfce3351791331642c47eddfbf622e76d2 | [
"MIT"
] | 2 | 2020-07-15T18:59:02.000Z | 2020-10-07T17:22:52.000Z | #! /usr/bin/env python3
import os
import sys
PARENT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(os.path.join(PARENT_DIR, 'generated'))
sys.path.append(os.path.join(PARENT_DIR, 'helper_functions'))
import argparse
import json
import logging
import logging.config
import inspect, time
from websocket import create_connection
import socket
import re
from concurrent import futures
from datetime import datetime
import requests, uuid
import configparser
import ServerSideExtension_pb2 as SSE
import grpc
import qlist
import cognitive_services as cs
from google.protobuf.json_format import MessageToDict
from ssedata import ArgType, FunctionType, ReturnType
# import helper .py files
#from scripteval import ScriptEval
_ONE_DAY_IN_SECONDS = 60 * 60 * 24
config = configparser.ConfigParser()
class ExtensionService(SSE.ConnectorServicer):
    """
    A simple SSE-plugin created for the HelloWorld example.
    """

    def __init__(self, funcdef_file):
        """
        Class initializer.
        :param funcdef_file: a function definition JSON file
        """
        self._function_definitions = funcdef_file
        #self.ScriptEval = ScriptEval()
        os.makedirs('logs', exist_ok=True)
        log_file = os.path.join(os.path.dirname(
            os.path.dirname(os.path.abspath(__file__))), 'logger.config')
        logging.config.fileConfig(log_file)
        logging.info(log_file)
        logging.info(self._function_definitions)
        logging.info('Logging enabled')

    function_name = "none"

    @property
    def function_definitions(self):
        """
        :return: json file with function definitions
        """
        return self._function_definitions

    @property
    def functions(self):
        """
        :return: Mapping of function id and implementation
        """
        return {
            0: '_rest_single',
        }

    @staticmethod
    def _get_function_id(context):
        """
        Retrieve function id from header.
        :param context: context
        :return: function id
        """
        metadata = dict(context.invocation_metadata())
        header = SSE.FunctionRequestHeader()
        header.ParseFromString(metadata['qlik-functionrequestheader-bin'])
        return header.functionId

    @staticmethod
    def _rest_single(request, context):
        """
        Rest using single variable
        """
        logging.info('Entering {} TimeStamp: {}' .format(function_name, datetime.now().strftime("%H:%M:%S.%f")))
        bCache = config.get(q_function_name, 'cache')
        logging.debug("Caching is set to {}" .format(bCache))
        if (bCache.lower() == "true"):
            logging.info("Caching ****Enabled*** for {}" .format(q_function_name))
        else:
            logging.info("Caching ****Disabled**** for {}" .format(q_function_name))
            md = (('qlik-cache', 'no-store'),)
            context.send_initial_metadata(md)
        response_rows = []
        request_counter = 1
        #if(q_function_name=='translate'):
        endpoint = config.get(q_function_name, 'endpoint')
        logging.debug("endpoint is set to {}" .format(endpoint))
        key = config.get(q_function_name, 'key')
        logging.debug("key is set to {}" .format(key))
        region = config.get(q_function_name, 'region')
        logging.debug("region is set to {}" .format(region))
        for request_rows in request:
            logging.debug('Printing Request Rows - Request Counter {}' .format(request_counter))
            request_counter = request_counter + 1
            for row in request_rows.rows:
                param = [d.strData for d in row.duals]
                logging.debug("The incoming parameter {}" .format(param))
                result = ""
                if (len(param[0]) == 0):
                    param[0] = "NA"
                if (q_function_name == 'translate'):
                    language = '&to=' + param[1]
                    logging.debug('Showing Language to Translate to : {}'.format(language))
                    client = cs.translate(key, region, endpoint)
                    finished_url = client[1] + language
                    logging.debug('Showing finished url to : {}'.format(finished_url))
                    input_text = param[0].replace('"', '\\').replace(',', '\\,')
                    body = [{'text': input_text}]
                    logging.debug('Showing message body: {}'.format(body))
                    request = requests.post(finished_url, headers=client[0], json=body)
                    resp = request.json()
                    logging.debug('Show Payload Response as Text: {}'.format(resp))
                    if (param[-1] == 'score'):
                        result = str(resp[0]['detectedLanguage']['score'])
                        logging.debug('Score: {}'.format(result))
                        print(result)
                        print(type(result))
                        #duals = iter([SSE.Dual(strData=result)])
                    if (param[-1] == 'text'):
                        result = resp[0]['translations'][0]['text']
                        print(type(result))
                        logging.debug('Translation: {}'.format(result))
                elif (q_function_name == 'language_detection'):
                    client = cs.authenticate_client(key, endpoint)
                    result = cs.language_detection(client, param)
                    logging.debug('language detection: {}'.format(result))
                elif (q_function_name == 'key_phrase_extraction'):
                    client = cs.authenticate_client(key, endpoint)
                    output = cs.key_phrase_extraction(client, param)
                    result = output[0]
                    logging.debug('key_phrase_extraction: {}'.format(result))
                elif (q_function_name == 'sentiment_analysis'):
                    client = cs.authenticate_client(key, endpoint)
                    output = cs.sentiment_analysis(client, param)
                    print(output)
                    print(type(output))
                    if (param[-1] == 'sentiment'):
                        result = output.sentiment
                    elif (param[-1] == 'positive_score'):
                        result = str(output.confidence_scores.positive)
                    elif (param[-1] == 'negative_score'):
                        result = str(output.confidence_scores.negative)
                    elif (param[-1] == 'neutral_score'):
                        result = str(output.confidence_scores.neutral)
                    else:
                        result = output.sentiment
                    logging.debug('key_phrase_extraction: {}'.format(result))
                else:
                    result = ""
                duals = iter([SSE.Dual(strData=result)])
                logging.debug('result {}' .format(result))
                response_rows.append(SSE.Row(duals=duals))
            yield SSE.BundledRows(rows=response_rows)
        logging.info('Exiting {} TimeStamp: {}' .format(function_name, datetime.now().strftime("%H:%M:%S.%f")))

    @staticmethod
    def _cache(request, context):
        """
        Cache enabled. Add the datetime stamp to the end of each string value.
        :param request: iterable sequence of bundled rows
        :param context: not used.
        :return: string
        """
        # Iterate over bundled rows
        for request_rows in request:
            # Iterate over rows
            for row in request_rows.rows:
                # Retrieve string value of parameter and append to the params variable
                # Length of param is 1 since one column is received, the [0] collects the first value in the list
                param = [d.strData for d in row.duals][0]
                # Join with current timedate stamp
                result = param + ' ' + datetime.now().isoformat()
                # Create an iterable of dual with the result
                duals = iter([SSE.Dual(strData=result)])
                # Yield the row data as bundled rows
                yield SSE.BundledRows(rows=[SSE.Row(duals=duals)])

    @staticmethod
    def _no_cache(request, context):
        """
        Cache disabled. Add the datetime stamp to the end of each string value.
        :param request:
        :param context: used for disabling the cache in the header.
        :return: string
        """
        # Disable caching.
        md = (('qlik-cache', 'no-store'),)
        context.send_initial_metadata(md)
        # Iterate over bundled rows
        for request_rows in request:
            # Iterate over rows
            for row in request_rows.rows:
                # Retrieve string value of parameter and append to the params variable
                # Length of param is 1 since one column is received, the [0] collects the first value in the list
                param = [d.strData for d in row.duals][0]
                # Join with current timedate stamp
                result = param + ' ' + datetime.now().isoformat()
                # Create an iterable of dual with the result
                duals = iter([SSE.Dual(strData=result)])
                # Yield the row data as bundled rows
                yield SSE.BundledRows(rows=[SSE.Row(duals=duals)])

    def _get_call_info(self, context):
        """
        Retreive useful information for the function call.
        :param context: context
        :return: string containing header info
        """
        # Get metadata for the call from the context
        metadata = dict(context.invocation_metadata())
        # Get the function ID
        func_header = SSE.FunctionRequestHeader()
        func_header.ParseFromString(metadata['qlik-functionrequestheader-bin'])
        func_id = func_header.functionId
        # Get the common request header
        common_header = SSE.CommonRequestHeader()
        common_header.ParseFromString(metadata['qlik-commonrequestheader-bin'])
        # Get capabilities
        if not hasattr(self, 'capabilities'):
            self.capabilities = self.GetCapabilities(None, context)
        # Get the name of the capability called in the function
        capability = [function.name for function in self.capabilities.functions if function.functionId == func_id][0]
        # Get the user ID using a regular expression
        match = re.match(r"UserDirectory=(?P<UserDirectory>\w*)\W+UserId=(?P<UserId>\w*)", common_header.userId, re.IGNORECASE)
        if match:
            userId = match.group('UserDirectory') + '/' + match.group('UserId')
        else:
            userId = common_header.userId
        # Get the app ID
        appId = common_header.appId
        # Get the call's origin
        peer = context.peer()
        return "{0} - Capability '{1}' called by user {2} from app {3}".format(peer, capability, userId, appId)

    def EvaluateScript(self, request, context):
        """
        This plugin supports full script functionality, that is, all function types and all data types.
        :param request:
        :param context:
        :return:
        """
        logging.debug('In EvaluateScript: Main')
        # Parse header for script request
        metadata = dict(context.invocation_metadata())
        logging.debug('Metadata {}', metadata)
        header = SSE.ScriptRequestHeader()
        header.ParseFromString(metadata['qlik-scriptrequestheader-bin'])
        logging.debug('Header is : {}'.format(header))
        logging.debug('Request is : {}' .format(request))
        logging.debug("Context is: {}" .format(context))
        return self.ScriptEval.EvaluateScript(header, request, context)

    @staticmethod
    def _echo_table(request, context):
        """
        Echo the input table.
        :param request:
        :param context:
        :return:
        """
        for request_rows in request:
            response_rows = []
            for row in request_rows.rows:
                response_rows.append(row)
            yield SSE.BundledRows(rows=response_rows)

    def GetCapabilities(self, request, context):
        """
        Get capabilities.
        Note that either request or context is used in the implementation of this method, but still added as
        parameters. The reason is that gRPC always sends both when making a function call and therefore we must include
        them to avoid error messages regarding too many parameters provided from the client.
        :param request: the request, not used in this method.
        :param context: the context, not used in this method.
        :return: the capabilities.
        """
        logging.info('GetCapabilities')
        # Create an instance of the Capabilities grpc message
        # Enable(or disable) script evaluation
        # Set values for pluginIdentifier and pluginVersion
        capabilities = SSE.Capabilities(allowScript=True,
                                        pluginIdentifier='Qlik Rapid API Gateway - Partner Engineering',
                                        pluginVersion='v0.1.0')
        # If user defined functions supported, add the definitions to the message
        with open(self.function_definitions) as json_file:
            # Iterate over each function definition and add data to the capabilities grpc message
            for definition in json.load(json_file)['Functions']:
                function = capabilities.functions.add()
                function.name = definition['Name']
                function.functionId = definition['Id']
                function.functionType = definition['Type']
                function.returnType = definition['ReturnType']
                # Retrieve name and type of each parameter
                for param_name, param_type in sorted(definition['Params'].items()):
                    function.params.add(name=param_name, dataType=param_type)
                logging.info('Adding to capabilities: {}({})'.format(function.name,
                                                                     [p.name for p in function.params]))
        return capabilities

    def ExecuteFunction(self, request_iterator, context):
        """
        Execute function call.
        :param request_iterator: an iterable sequence of Row.
        :param context: the context.
        :return: an iterable sequence of Row.
        """
        func_id = self._get_function_id(context)
        logging.info(self._get_call_info(context))
        # Call corresponding function
        logging.info('ExecuteFunctions (functionId: {})' .format(func_id))
        #self.functions[func_id]))
        current_function_def = (json.load(open(self.function_definitions))['Functions'])[func_id]
        logging.debug(current_function_def)
        global q_function_name
        q_function_name = current_function_def["Name"]
        logging.debug('Logical Method Called is: {}' .format(q_function_name))
        current_qrap_type = current_function_def["QRAP_Type"]
        qrag_function_name = '_' + current_qrap_type
        logging.debug('This is the type of QRAG Method Name: {}' .format(current_qrap_type))
        logging.debug('Physical Method Called is: {}' .format(qrag_function_name))
        # Convers to Method Name to Physical Main Function
        qrag_id = qlist.find_key(self.functions, qrag_function_name)
        logging.debug('QRAG ID: {}' .format(qrag_id))
        global function_name
        function_name = self.functions[qrag_id]
        return getattr(self, self.functions[qrag_id])(request_iterator, context)

    def Serve(self, port, pem_dir):
        """
        Sets up the gRPC Server with insecure connection on port
        :param port: port to listen on.
        :param pem_dir: Directory including certificates
        :return: None
        """
        # Create gRPC server
        server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
        SSE.add_ConnectorServicer_to_server(self, server)
        if pem_dir:
            # Secure connection
            with open(os.path.join(pem_dir, 'sse_server_key.pem'), 'rb') as f:
                private_key = f.read()
            with open(os.path.join(pem_dir, 'sse_server_cert.pem'), 'rb') as f:
                cert_chain = f.read()
            with open(os.path.join(pem_dir, 'root_cert.pem'), 'rb') as f:
                root_cert = f.read()
            credentials = grpc.ssl_server_credentials([(private_key, cert_chain)], root_cert, True)
            server.add_secure_port('[::]:{}'.format(port), credentials)
            logging.info('*** Running server in secure mode on port: {} ***'.format(port))
        else:
            # Insecure connection
            server.add_insecure_port('[::]:{}'.format(port))
            logging.info('*** Running server in insecure mode on port: {} ***'.format(port))
        # Start gRPC server
        server.start()
        try:
            while True:
                time.sleep(_ONE_DAY_IN_SECONDS)
        except KeyboardInterrupt:
            server.stop(0)

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    conf_file = os.path.join(os.path.dirname(
        os.path.abspath(__file__)), 'config', 'qrag.ini')
    #config.read(os.path.join(os.path.dirname(__file__), 'config', 'qrag.ini'))
    logging.debug(conf_file)
    logging.info('Location of qrag.ini {}' .format(conf_file))
    config.read(conf_file)
    port = config.get('base', 'port')
    parser.add_argument('--port', nargs='?', default=port)
    parser.add_argument('--pem_dir', nargs='?')
    parser.add_argument('--definition_file', nargs='?', default='functions.json')
    args = parser.parse_args()
    # need to locate the file when script is called from outside it's location dir.
    def_file = os.path.join(os.path.dirname(os.path.abspath(__file__)), args.definition_file)
    #print(def_file)
    calc = ExtensionService(def_file)
    logging.info('*** Server Configurations Port: {}, Pem_Dir: {}, def_file {} TimeStamp: {} ***'.format(args.port, args.pem_dir, def_file, datetime.now().isoformat()))
calc.Serve(args.port, args.pem_dir)
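The main block above seeds an argparse default from an INI file so a command-line flag can still override it; the same pattern in isolation (the `[base]` section and port value here are invented stand-ins for qrag.ini):

```python
import argparse
import configparser
import io

# Hypothetical config contents standing in for qrag.ini.
conf = configparser.ConfigParser()
conf.read_file(io.StringIO("[base]\nport = 50053\n"))

parser = argparse.ArgumentParser()
# The INI value becomes the argparse default; a CLI flag still overrides it.
parser.add_argument('--port', nargs='?', default=conf.get('base', 'port'))

ini_port = parser.parse_args([]).port                   # falls back to the INI value
cli_port = parser.parse_args(['--port', '9000']).port   # CLI wins
```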
# File: mantrid/loadbalancer.py (epio/mantrid, BSD-3-Clause)
import eventlet
import errno
import logging
import traceback
import mimetools
import resource
import json
import os
import sys
import argparse
from eventlet import wsgi
from eventlet.green import socket
from .actions import Unknown, Proxy, Empty, Static, Redirect, NoHosts, Spin
from .config import SimpleConfig
from .management import ManagementApp
from .stats_socket import StatsSocket
from .greenbody import GreenBody
class Balancer(object):
"""
Main loadbalancer class.
"""
nofile = 102400
save_interval = 10
action_mapping = {
"proxy": Proxy,
"empty": Empty,
"static": Static,
"redirect": Redirect,
"unknown": Unknown,
"spin": Spin,
"no_hosts": NoHosts,
}
def __init__(self, external_addresses, internal_addresses, management_addresses, state_file, uid=None, gid=65535, static_dir="/etc/mantrid/static/"):
"""
Constructor.
Takes one parameter, the dict of ports to listen on.
The key in this dict is the port number, and the value
is if it's an internal endpoint or not.
Internal endpoints do not have X-Forwarded-* stripped;
other ones do, and have X-Forwarded-For added.
"""
self.external_addresses = external_addresses
self.internal_addresses = internal_addresses
self.management_addresses = management_addresses
self.state_file = state_file
self.uid = uid
self.gid = gid
self.static_dir = static_dir
@classmethod
def main(cls):
# Parse command-line args
parser = argparse.ArgumentParser(description='The Mantrid load balancer')
parser.add_argument('--debug', dest='debug', action='store_const', const=True, help='Enable debug logging')
parser.add_argument('-c', '--config', dest='config', default=None, metavar="PATH", help='Path to the configuration file')
args = parser.parse_args()
# Set up logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG if args.debug else logging.INFO)
# Output to stderr, always
sh = logging.StreamHandler()
sh.setFormatter(logging.Formatter(
fmt = "%(asctime)s - %(levelname)8s: %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
))
sh.setLevel(logging.DEBUG)
logger.addHandler(sh)
# Check they have root access
try:
resource.setrlimit(resource.RLIMIT_NOFILE, (cls.nofile, cls.nofile))
except (ValueError, resource.error):
logging.warning("Cannot raise resource limits (run as root/change ulimits)")
# Load settings from the config file
if args.config is None:
if os.path.exists("/etc/mantrid/mantrid.conf"):
args.config = "/etc/mantrid/mantrid.conf"
logging.info("Using configuration file %s" % args.config)
else:
args.config = "/dev/null"
logging.info("No configuration file found - using defaults.")
else:
logging.info("Using configuration file %s" % args.config)
config = SimpleConfig(args.config)
balancer = cls(
config.get_all_addresses("bind", set([(("::", 80), socket.AF_INET6)])),
config.get_all_addresses("bind_internal"),
config.get_all_addresses("bind_management", set([(("127.0.0.1", 8042), socket.AF_INET), (("::1", 8042), socket.AF_INET6)])),
config.get("state_file", "/var/lib/mantrid/state.json"),
config.get_int("uid", 4321),
config.get_int("gid", 4321),
config.get("static_dir", "/etc/mantrid/static/"),
)
balancer.run()
def load(self):
"Loads the state from the state file"
try:
if os.path.getsize(self.state_file) <= 1:
raise IOError("File is empty.")
with open(self.state_file) as fh:
state = json.load(fh)
assert isinstance(state, dict)
self.hosts = state['hosts']
self.stats = state['stats']
for key in self.stats:
self.stats[key]['open_requests'] = 0
except (IOError, OSError):
# There is no state file; start empty.
self.hosts = {}
self.stats = {}
def save(self):
"Saves the state to the state file"
with open(self.state_file, "w") as fh:
json.dump({
"hosts": self.hosts,
"stats": self.stats,
}, fh)
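The load/save pair above is a JSON round-trip with an empty-state fallback; a stand-alone sketch of the same logic (the file names are illustrative, not Mantrid's defaults):

```python
import json
import os
import tempfile

def load_state(path):
    # Mirrors Balancer.load: a missing or near-empty file means no state.
    try:
        if os.path.getsize(path) <= 1:
            raise IOError("File is empty.")
        with open(path) as fh:
            state = json.load(fh)
        assert isinstance(state, dict)
        return state
    except (IOError, OSError):
        return {"hosts": {}, "stats": {}}

def save_state(path, state):
    # Mirrors Balancer.save: dump the whole state dict as JSON.
    with open(path, "w") as fh:
        json.dump(state, fh)

tmp = os.path.join(tempfile.mkdtemp(), "state.json")
empty = load_state(tmp)     # no file yet: falls back to empty state
save_state(tmp, {"hosts": {"a.example": 1}, "stats": {}})
loaded = load_state(tmp)    # round-trips the saved dict
```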
def run(self):
# First, initialise the process
self.load()
self.running = True
# Try to ensure the state file is readable
state_dir = os.path.dirname(self.state_file)
if not os.path.isdir(state_dir):
os.makedirs(state_dir)
if self.uid is not None:
try:
os.chown(state_dir, self.uid, -1)
except OSError:
pass
try:
os.chown(self.state_file, self.uid, -1)
except OSError:
pass
# Then, launch the socket loops
pool = GreenBody(
len(self.external_addresses) +
len(self.internal_addresses) +
len(self.management_addresses) +
1
)
pool.spawn(self.save_loop)
for address, family in self.external_addresses:
pool.spawn(self.listen_loop, address, family, internal=False)
for address, family in self.internal_addresses:
pool.spawn(self.listen_loop, address, family, internal=True)
for address, family in self.management_addresses:
pool.spawn(self.management_loop, address, family)
# Give the other threads a chance to open their listening sockets
eventlet.sleep(0.5)
# Drop to the lesser UID/GIDs, if supplied
if self.gid:
try:
os.setegid(self.gid)
os.setgid(self.gid)
except OSError:
logging.error("Cannot change to GID %i (probably not running as root)" % self.gid)
else:
logging.info("Dropped to GID %i" % self.gid)
if self.uid:
try:
os.seteuid(0)
os.setuid(self.uid)
os.seteuid(self.uid)
except OSError:
logging.error("Cannot change to UID %i (probably not running as root)" % self.uid)
else:
logging.info("Dropped to UID %i" % self.uid)
# Ensure we can save to the state file, or die hard.
try:
open(self.state_file, "a").close()
except (OSError, IOError):
logging.critical("Cannot write to state file %s" % self.state_file)
sys.exit(1)
# Wait for one to exit, or for a clean/forced shutdown
try:
pool.wait()
except (KeyboardInterrupt, StopIteration, SystemExit):
pass
except:
logging.error(traceback.format_exc())
# We're done
self.running = False
logging.info("Exiting")
### Management ###
def save_loop(self):
"""
Saves the state if it has changed.
"""
last_hash = hash(repr(self.hosts))
while self.running:
eventlet.sleep(self.save_interval)
next_hash = hash(repr(self.hosts))
if next_hash != last_hash:
self.save()
last_hash = next_hash
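save_loop detects state changes by comparing hash(repr(...)) snapshots; that detection step can be isolated as below (repr-hash collisions are theoretically possible, exactly as in the original):

```python
def make_change_detector(state):
    # Snapshot the repr hash; report True only when it changes.
    last = [hash(repr(state))]
    def changed():
        now = hash(repr(state))
        if now != last[0]:
            last[0] = now
            return True
        return False
    return changed

hosts = {}
dirty = make_change_detector(hosts)
first = dirty()                            # nothing mutated yet: False
hosts["a.example"] = ["proxy", {}, True]
second = dirty()                           # mutation detected: True
third = dirty()                            # snapshot refreshed: False
```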
def management_loop(self, address, family):
"""
Accepts management requests.
"""
try:
sock = eventlet.listen(address, family)
except socket.error, e:
logging.critical("Cannot listen on (%s, %s): %s" % (address, family, e))
return
# Sleep to ensure we've dropped privileges by the time we start serving
eventlet.sleep(0.5)
# Actually serve management
logging.info("Listening for management on %s" % (address, ))
management_app = ManagementApp(self)
try:
with open("/dev/null", "w") as log_dest:
wsgi.server(
sock,
management_app.handle,
log = log_dest,
)
finally:
sock.close()
### Client handling ###
def listen_loop(self, address, family, internal=False):
"""
Accepts incoming connections.
"""
try:
sock = eventlet.listen(address, family)
except socket.error, e:
if e.errno == errno.EADDRINUSE:
logging.critical("Cannot listen on (%s, %s): already in use" % (address, family))
raise
elif e.errno == errno.EACCES and address[1] <= 1024:
logging.critical("Cannot listen on (%s, %s) (you might need to launch as root)" % (address, family))
return
logging.critical("Cannot listen on (%s, %s): %s" % (address, family, e))
return
# Sleep to ensure we've dropped privileges by the time we start serving
eventlet.sleep(0.5)
# Start serving
logging.info("Listening for requests on %s" % (address, ))
try:
eventlet.serve(
sock,
lambda sock, addr: self.handle(sock, addr, internal),
concurrency = 10000,
)
finally:
sock.close()
def resolve_host(self, host, protocol="http"):
# Special case for empty hosts dict
if not self.hosts:
return NoHosts(self, host, "unknown")
# Check for an exact or any subdomain matches
bits = host.split(".")
for i in range(len(bits)):
for prefix in ["%s://" % protocol, ""]:
subhost = prefix + (".".join(bits[i:]))
if subhost in self.hosts:
action, kwargs, allow_subs = self.hosts[subhost]
if allow_subs or i == 0:
action_class = self.action_mapping[action]
return action_class(
balancer = self,
host = host,
matched_host = subhost,
**kwargs
)
return Unknown(self, host, "unknown")
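resolve_host tries each dotted suffix of the host, with and without a protocol prefix, and honours the allow-subdomains flag; a self-contained sketch of just that matching walk (the routing table contents are invented):

```python
def match_host(hosts, host, protocol="http"):
    # Try "www.example.com", "example.com", "com" in turn, each with an
    # optional "<protocol>://" prefix, honouring the allow-subdomains flag.
    bits = host.split(".")
    for i in range(len(bits)):
        for prefix in ["%s://" % protocol, ""]:
            subhost = prefix + ".".join(bits[i:])
            if subhost in hosts:
                action, allow_subs = hosts[subhost]
                if allow_subs or i == 0:
                    return action, subhost
    return "unknown", None

# Invented routing table: (action name, allow subdomain matches).
table = {
    "example.com": ("proxy", True),
    "exact.example.com": ("static", False),
    "https://secure.example.com": ("redirect", False),
}
```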
def handle(self, sock, address, internal=False):
"""
Handles an incoming HTTP connection.
"""
try:
sock = StatsSocket(sock)
rfile = sock.makefile('rb', 4096)
# Read the first line
first = rfile.readline().strip("\r\n")
words = first.split()
# Ensure it looks kind of like HTTP
if not (2 <= len(words) <= 3):
sock.sendall("HTTP/1.0 400 Bad Request\r\nConnection: close\r\nContent-length: 0\r\n\r\n")
return
path = words[1]
# Read the headers
headers = mimetools.Message(rfile, 0)
# Work out the host
try:
host = headers['Host']
except KeyError:
host = "unknown"
headers['Connection'] = "close"
if not internal:
headers['X-Forwarded-For'] = address[0]
headers['X-Forwarded-Protocol'] = ""
headers['X-Forwarded-Proto'] = ""
# Make sure they're not using odd encodings
if "Transfer-Encoding" in headers:
sock.sendall("HTTP/1.0 411 Length Required\r\nConnection: close\r\nContent-length: 0\r\n\r\n")
return
# Match the host to an action
protocol = "http"
if headers.get('X-Forwarded-Protocol', headers.get('X-Forwarded-Proto', "")).lower() in ("ssl", "https"):
protocol = "https"
action = self.resolve_host(host, protocol)
# Record us as an open connection
stats_dict = self.stats.setdefault(action.matched_host, {})
stats_dict['open_requests'] = stats_dict.get('open_requests', 0) + 1
# Run the action
try:
rfile._rbuf.seek(0)
action.handle(
sock = sock,
read_data = first + "\r\n" + str(headers) + "\r\n" + rfile._rbuf.read(),
path = path,
headers = headers,
)
finally:
stats_dict['open_requests'] -= 1
stats_dict['completed_requests'] = stats_dict.get('completed_requests', 0) + 1
stats_dict['bytes_sent'] = stats_dict.get('bytes_sent', 0) + sock.bytes_sent
stats_dict['bytes_received'] = stats_dict.get('bytes_received', 0) + sock.bytes_received
except socket.error, e:
if e.errno not in (errno.EPIPE, errno.ETIMEDOUT, errno.ECONNRESET):
logging.error(traceback.format_exc())
except:
logging.error(traceback.format_exc())
try:
sock.sendall("HTTP/1.0 500 Internal Server Error\r\n\r\nThere has been an internal error in the load balancer.")
except socket.error, e:
if e.errno != errno.EPIPE:
raise
finally:
try:
sock.close()
rfile.close()
except:
logging.error(traceback.format_exc())
if __name__ == "__main__":
Balancer.main()
# File: private_sharing/migrations/0008_featuredproject.py (danamlewis/open-humans, MIT)
# -*- coding: utf-8 -*-
# Generated by Django 1.9.9 on 2018-01-05 01:20
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('private_sharing', '0007_auto_20171220_2038'),
]
operations = [
migrations.CreateModel(
name='FeaturedProject',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('description', models.TextField(blank=True)),
('project', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='private_sharing.DataRequestProject')),
],
),
]
# File: project_name/common/models.py (brevetech/breve_drf_template, MIT)
from django.db import models
# https://stackoverflow.com/questions/1737017/django-auto-now-and-auto-now-add/1737078#1737078
from {{project_name}}.common.enums import PersonSexEnum
class TimeStampedModel(models.Model):
"""
Defines a timestamped model with create_date (auto_now_add) and update_date (auto_now)
"""
create_date = models.DateField(
auto_now_add=True, editable=False, verbose_name="Fecha de creación"
)
update_date = models.DateField(
auto_now=True, editable=False, verbose_name="Última modificación"
)
class Meta:
abstract = True
class PersonModel(TimeStampedModel):
"""Defines a generic representation of a person data model"""
first_name = models.CharField(
max_length=50, null=False, blank=False, verbose_name="Primer Nombre"
)
second_name = models.CharField(
max_length=50, null=True, blank=True, verbose_name="Segundo Nombre"
)
first_surname = models.CharField(
max_length=50, null=False, blank=False, verbose_name="Primer Apellido"
)
second_surname = models.CharField(
max_length=50, null=True, blank=True, verbose_name="Segundo Apellido"
)
address = models.TextField(null=False, blank=False, verbose_name="Dirección")
id_number = models.CharField(
max_length=16,
verbose_name="Cédula",
unique=True,
null=False,
blank=False,
)
birthdate = models.DateField(null=False, blank=False, verbose_name="Fecha de Nacimiento")
phone = models.CharField(max_length=25, verbose_name="Télefono", null=True, blank=True)
email = models.EmailField(
max_length=50, null=True, blank=True, verbose_name="Correo Electrónico"
)
sex = models.CharField(
max_length=1,
null=False,
blank=False,
verbose_name="Sexo",
choices=PersonSexEnum.choices,
default=PersonSexEnum.FEMALE,
)
class Meta:
abstract = True
def __str__(self):
return f"{self.first_name} {self.second_name} {self.first_surname} {self.second_surname}"
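Since second_name and second_surname are nullable, the f-string above renders the literal text "None" for absent parts; one way to skip empty components (a sketch, not this project's code):

```python
def full_name(*parts):
    # Keep only the parts that are truthy, so None never shows up as "None".
    return " ".join(p for p in parts if p)

single = full_name("Ana", None, "Lopez", None)
double = full_name("Juan", "Carlos", "Perez", "Diaz")
```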
# File: helper_functions_class.py (lucaschatham/lambdata, MIT)
"""
Here are two different functions used for common data cleaning tasks.
You can apply these functions to data loaded into a pandas DataFrame.
"""
import numpy as np
import pandas as pd
from sklearn.utils import shuffle
class CleanData:
def __init__(self):
"""
This init function instantiates objects
"""
return
# This function randomizes the row order of a DataFrame
def randomize(self, df, seed):
random_this = shuffle(df, random_state=seed)
return random_this
# This function counts the missing values in the specified DataFrame
def null_count(self, df):
num_nulls = df.isnull().sum().sum()
return num_nulls
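randomize above delegates to sklearn's shuffle with a fixed random_state; the same reproducibility idea using only the standard library (a stand-in, not the class's actual implementation):

```python
import random

def randomize(rows, seed):
    # A fixed seed makes the permutation deterministic and repeatable.
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    return rows

a = randomize(range(10), seed=42)
b = randomize(range(10), seed=42)   # same seed, same order
```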
# File: services/web/apps/crm/supplierprofile/views.py (prorevizor/noc, BSD-3-Clause)
# ---------------------------------------------------------------------
# crm.supplierprofile application
# ---------------------------------------------------------------------
# Copyright (C) 2007-2019 The NOC Project
# See LICENSE for details
# ---------------------------------------------------------------------
# NOC modules
from noc.lib.app.extdocapplication import ExtDocApplication
from noc.crm.models.supplierprofile import SupplierProfile
from noc.core.translation import ugettext as _
class SupplierProfileApplication(ExtDocApplication):
"""
SupplierProfile application
"""
title = _("Supplier Profile")
menu = [_("Setup"), _("Supplier Profiles")]
model = SupplierProfile
query_fields = ["name__icontains", "description__icontains"]
def field_row_class(self, o):
return o.style.css_class_name if o.style else ""
# File: books/api/RecurringInvoicesApi.py (harshal-choudhari/books-python-wrappers, MIT)
#$Id$#
from books.util.ZohoHttpClient import ZohoHttpClient
from books.parser.RecurringInvoiceParser import RecurringInvoiceParser
from .Api import Api
from json import dumps
base_url = Api().base_url + 'recurringinvoices/'
parser = RecurringInvoiceParser()
zoho_http_client = ZohoHttpClient()
class RecurringInvoicesApi:
"""Recurring invoice api class is used:
1.To list all the recurring invoices with pagination.
2.To get details of a recurring invoice.
3.To create a recurring invoice.
4.To update an existing recurring invoice.
5.To delete an existing recurring invoice.
6.To stop an active recurring invoice.
7.To resume a stopped recurring invoice.
8.To update the pdf template associated with the recurring invoice.
9.To get the complete history and comments of a recurring invoice.
"""
def __init__(self, authtoken, organization_id):
"""Initialize Contacts Api using user's authtoken and organization id.
Args:
authtoken(str): User's authtoken.
organization_id(str): User's organization id.
"""
self.headers = {
'Authorization': 'Zoho-oauthtoken ' + authtoken,
}
self.details = {
'organization_id': organization_id
}
def get_recurring_invoices(self, parameter=None):
"""List of recurring invoices with pagination.
Args:
parameter(dict, optional): Filter with which the list has to be
displayed. Defaults to None.
Returns:
instance: Recurring invoice list object.
"""
response = zoho_http_client.get(base_url, self.details, self.headers, parameter)
return parser.recurring_invoices(response)
def get_recurring_invoice(self, recurring_invoice_id):
"""Get recurring invoice details.
Args:
recurring_invoice_id(str): Recurring invoice id.
Returns:
instance: Recurring invoice object.
"""
url = base_url + recurring_invoice_id
response = zoho_http_client.get(url, self.details, self.headers)
return parser.recurring_invoice(response)
def create(self, recurring_invoice):
"""Create recurring invoice.
Args:
recurring_invoice(instance): Recurring invoice object.
Returns:
instance: Recurring invoice object.
"""
json_object = dumps(recurring_invoice.to_json())
data = {
'JSONString': json_object
}
response = zoho_http_client.post(base_url, self.details, self.headers, data)
return parser.recurring_invoice(response)
def update(self, recurring_invoice_id, recurring_invoice):
"""Update an existing recurring invoice.
Args:
recurring_invoice_id(str): Recurring invoice id.
recurring_invoice(instance): Recurring invoice object.
Returns:
instance: Recurring invoice object.
"""
url = base_url + recurring_invoice_id
json_object = dumps(recurring_invoice.to_json())
data = {
'JSONString': json_object
}
response = zoho_http_client.put(url, self.details, self.headers, data)
return parser.recurring_invoice(response)
def delete(self, recurring_invoice_id):
"""Delete an existing recurring invoice.
Args:
recurring_invoice_id(str): Recurring invoice id.
Returns:
str: Success message('The recurring invoice has been deleted.').
"""
url = base_url + recurring_invoice_id
response = zoho_http_client.delete(url, self.details, self.headers)
return parser.get_message(response)
def stop_recurring_invoice(self, recurring_invoice_id):
"""Stop an active recurring invoice.
Args:
recurring_invoice_id(str): Recurring invoice id.
Returns:
str: Success message ('The recurring invoice has been stopped.').
"""
url = base_url + recurring_invoice_id + '/status/stop'
response = zoho_http_client.post(url, self.details, self.headers, '')
return parser.get_message(response)
def resume_recurring_invoice(self, recurring_invoice_id):
"""Resume an active recurring invoice.
Args:
recurring_invoice_id(str): Recurring invoice id.
Returns:
str: Success message ('The recurring invoice has been activated.').
"""
url = base_url + recurring_invoice_id + '/status/resume'
response = zoho_http_client.post(url, self.details, self.headers, '')
return parser.get_message(response)
def update_recurring_invoice_template(self,
recurring_invoice_id, template_id):
"""Update the pdf template associated with the recurring invoice.
Args:
recurring_invoice_id(str): Recurring invoice id.
template_id(str): Template id.
Returns:
str: Success message ('Recurring invoice information has been
updated.').
"""
url = base_url + recurring_invoice_id + '/templates/' + template_id
response = zoho_http_client.put(url, self.details, self.headers, '')
return parser.get_message(response)
def list_recurring_invoice_history(self, recurring_invoice_id):
"""List the complete history and comments of a recurring invoice.
Args:
recurring_invoice_id(str): Recurring invoice id.
Returns:
instance: Recurring invoice history and comments list object.
"""
url = base_url + recurring_invoice_id + '/comments'
response = zoho_http_client.get(url, self.details, self.headers)
return parser.recurring_invoice_history_list(response)
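Every method in this class builds its endpoint by concatenating base_url, the resource id and an optional action segment; the pattern extracted (the base URL below is an invented placeholder, not the real API base):

```python
# Invented placeholder; the real base comes from Api().base_url.
base_url = "https://example.invalid/api/recurringinvoices/"

def endpoint(invoice_id, action=None):
    # <base><id> for CRUD calls, <base><id>/<action> for state changes.
    url = base_url + invoice_id
    if action:
        url += "/" + action
    return url
```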
# File: gittle/gittle.py (justecorruptio/gittle, Apache-2.0)
# From the future
from __future__ import absolute_import
# Python imports
import os
import copy
import logging
from hashlib import sha1
from shutil import rmtree
from functools import partial, wraps
# Dulwich imports
from dulwich.repo import Repo as DulwichRepo
from dulwich.client import get_transport_and_path
from dulwich.index import build_index_from_tree, changes_from_tree
from dulwich.objects import Tree, Blob
from dulwich.server import update_server_info
# Funky imports
import funky
# Local imports
from gittle.auth import GittleAuth
from gittle.exceptions import InvalidRemoteUrl
from gittle import utils
# Exports
__all__ = ('Gittle',)
# Guarantee that a diretory exists
def mkdir_safe(path):
if path and not(os.path.exists(path)):
os.makedirs(path)
return path
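mkdir_safe is the guard-then-makedirs idiom (on Python 3, os.makedirs(path, exist_ok=True) achieves the same); a runnable sketch:

```python
import os
import tempfile

def mkdir_safe(path):
    # Create the directory tree only when it does not already exist.
    if path and not os.path.exists(path):
        os.makedirs(path)
    return path

root = tempfile.mkdtemp()
target = os.path.join(root, "a", "b")
mkdir_safe(target)
mkdir_safe(target)   # second call is a no-op, not an error
```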
# Useful decorators
# A better way to do this in the future would maybe to use Mixins
def working_only(method):
@wraps(method)
def f(self, *args, **kwargs):
assert self.is_working, "%s can not be called on a bare repository" % method.func_name
return method(self, *args, **kwargs)
return f
def bare_only(method):
@wraps(method)
def f(self, *args, **kwargs):
assert self.is_bare, "%s can not be called on a working repository" % method.func_name
return method(self, *args, **kwargs)
return f
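working_only and bare_only are one decorator pattern: assert a flag on self before running the method. A Python 3 stand-alone version (the original uses method.func_name, the Python 2 spelling of __name__; the Repo class here is a hypothetical stand-in):

```python
from functools import wraps

def working_only(method):
    # Refuse to run the wrapped method unless self.is_working is true.
    @wraps(method)
    def f(self, *args, **kwargs):
        assert self.is_working, \
            "%s can not be called on a bare repository" % method.__name__
        return method(self, *args, **kwargs)
    return f

class Repo(object):
    def __init__(self, bare):
        self.is_working = not bare

    @working_only
    def checkout(self):
        return "checked out"
```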
class Gittle(object):
"""All paths used in Gittle external methods must be paths relative to the git repository
"""
DEFAULT_COMMIT = 'HEAD'
DEFAULT_BRANCH = 'master'
DEFAULT_REMOTE = 'origin'
DEFAULT_MESSAGE = '**No Message**'
DEFAULT_USER_INFO = {
'name': None,
'email': None,
}
DIFF_FUNCTIONS = {
'classic': utils.git.classic_tree_diff,
'dict': utils.git.dict_tree_diff,
'changes': utils.git.dict_tree_diff
}
DEFAULT_DIFF_TYPE = 'dict'
HIDDEN_REGEXES = [
# Hide git directory
r'.*\/\.git\/.*',
]
# References
REFS_BRANCHES = 'refs/heads/'
REFS_REMOTES = 'refs/remotes/'
REFS_TAGS = 'refs/tags/'
# Name pattern truths
# Used for detecting if files are :
# - deleted
# - added
# - changed
PATTERN_ADDED = (False, True)
PATTERN_REMOVED = (True, False)
PATTERN_MODIFIED = (True, True)
# Permissions
MODE_DIRECTORY = 040000 # Used to tell if a tree entry is a directory
# Tree depth
MAX_TREE_DEPTH = 1000
# Acceptable Root paths
ROOT_PATHS = (os.path.curdir, os.path.sep)
def __init__(self, repo_or_path, origin_uri=None, auth=None, report_activity=None, *args, **kwargs):
if isinstance(repo_or_path, DulwichRepo):
self.repo = repo_or_path
elif isinstance(repo_or_path, Gittle):
self.repo = DulwichRepo(repo_or_path.path)
elif isinstance(repo_or_path, basestring):
path = os.path.abspath(repo_or_path)
self.repo = DulwichRepo(path)
else:
logging.warning('Repo is of type %s' % type(repo_or_path))
raise Exception('Gittle must be initialized with either a dulwich repository or a string to the path')
# Set path
self.path = self.repo.path
# The remote url
self.origin_uri = origin_uri
# Report client activty
self._report_activity = report_activity
# Build ignore filter
self.hidden_regexes = copy.copy(self.HIDDEN_REGEXES)
self.hidden_regexes.extend(self._get_ignore_regexes())
self.ignore_filter = utils.paths.path_filter_regex(self.hidden_regexes)
self.filters = [
self.ignore_filter,
]
# Get authenticator
if auth:
self.authenticator = auth
else:
self.auth(*args, **kwargs)
def report_activity(self, *args, **kwargs):
if not self._report_activity:
return
return self._report_activity(*args, **kwargs)
def _format_author(self, name, email):
return "%s <%s>" % (name, email)
def _format_userinfo(self, userinfo):
name = userinfo.get('name')
email = userinfo.get('email')
if name and email:
return self._format_author(name, email)
return None
def _format_ref(self, base, extra):
return ''.join([base, extra])
def _format_ref_branch(self, branch_name):
return self._format_ref(self.REFS_BRANCHES, branch_name)
def _format_ref_remote(self, remote_name):
return self._format_ref(self.REFS_REMOTES, remote_name)
def _format_ref_tag(self, tag_name):
return self._format_ref(self.REFS_TAGS, tag_name)
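The _format_ref helpers just concatenate a refs namespace with a name, mirroring Git's refs/heads/, refs/remotes/ and refs/tags/ layout:

```python
REFS_BRANCHES = 'refs/heads/'
REFS_REMOTES = 'refs/remotes/'
REFS_TAGS = 'refs/tags/'

def format_ref(base, extra):
    # Same concatenation as Gittle._format_ref.
    return ''.join([base, extra])

branch_ref = format_ref(REFS_BRANCHES, 'master')   # refs/heads/master
tag_ref = format_ref(REFS_TAGS, 'v1.0')            # refs/tags/v1.0
```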
@property
def head(self):
"""Return SHA of the current HEAD
"""
return self.repo.head()
@property
def is_bare(self):
"""Bare repositories have no working directories or indexes
"""
return self.repo.bare
@property
def is_working(self):
return not(self.is_bare)
def has_index(self):
"""Opposite of is_bare
"""
return self.repo.has_index()
@property
def has_commits(self):
"""
If the repository has no HEAD we consider that it has no commits
"""
try:
self.repo.head()
except KeyError:
return False
return True
def ref_walker(self, ref=None):
"""
Very simple, basic walker
"""
ref = ref or 'HEAD'
sha = self._commit_sha(ref)
return self.repo.revision_history(sha)
def branch_walker(self, branch):
branch = branch or self.DEFAULT_BRANCH
ref = self._format_ref_branch(branch)
return self.ref_walker(ref)
def commit_info(self, start=0, end=None, branch=None):
"""Return a generator of commits with all their attached information
"""
if not self.has_commits:
return []
commits = [utils.git.commit_info(entry) for entry in self.branch_walker(branch)]
if not end:
return commits
return commits[start:end]
@funky.uniquify
def recent_contributors(self, n=None, branch=None):
n = n or 10
return funky.pluck(self.commit_info(end=n, branch=branch), 'author')
@property
def commit_count(self):
try:
return len(self.ref_walker())
except KeyError:
return 0
def commits(self):
"""Return a list of SHAs for all the concerned commits
"""
return [commit['sha'] for commit in self.commit_info()]
@property
def git_dir(self):
return self.repo.controldir()
def auth(self, *args, **kwargs):
self.authenticator = GittleAuth(*args, **kwargs)
return self.authenticator
# Generate a branch selector (used for pushing)
def _wants_branch(self, branch_name=None):
branch_name = branch_name or self.DEFAULT_BRANCH
refs_key = self._format_ref_branch(branch_name)
sha = self.branches[branch_name]
def wants_func(old):
refs_key = self._format_ref_branch(branch_name)
return {
refs_key: sha
}
return wants_func
def _get_ignore_regexes(self):
gitignore_filename = os.path.join(self.path, '.gitignore')
if not os.path.exists(gitignore_filename):
return []
with open(gitignore_filename) as gitignore_file:
    lines = gitignore_file.readlines()
globers = [line.rstrip() for line in lines]
return utils.paths.globers_to_regex(globers)
# Get the absolute path for a file in the git repo
def abspath(self, repo_file):
return os.path.abspath(
os.path.join(self.path, repo_file)
)
# Get the relative path from the absolute path
def relpath(self, abspath):
return os.path.relpath(abspath, self.path)
@property
def last_commit(self):
return self[self.repo.head()]
@property
def index(self):
return self.repo.open_index()
@classmethod
def init(cls, path, bare=None, *args, **kwargs):
"""Initialize a repository"""
mkdir_safe(path)
# Constructor to use
if bare:
constructor = DulwichRepo.init_bare
else:
constructor = DulwichRepo.init
# Create dulwich repo
repo = constructor(path)
# Create Gittle repo
return cls(repo, *args, **kwargs)
@classmethod
def init_bare(cls, *args, **kwargs):
kwargs.setdefault('bare', True)
return cls.init(*args, **kwargs)
def get_client(self, origin_uri=None, **kwargs):
# Get the remote URL
origin_uri = origin_uri or self.origin_uri
# Fail if the remote URL is missing
if not origin_uri:
raise InvalidRemoteUrl()
client_kwargs = {}
auth_kwargs = self.authenticator.kwargs()
client_kwargs.update(auth_kwargs)
client_kwargs.update(kwargs)
client_kwargs.update({
'report_activity': self.report_activity
})
client, remote_path = get_transport_and_path(origin_uri, **client_kwargs)
return client, remote_path
def push_to(self, origin_uri, branch_name=None, progress=None, progress_stderr=None):
selector = self._wants_branch(branch_name=branch_name)
client, remote_path = self.get_client(origin_uri, progress_stderr=progress_stderr)
return client.send_pack(
remote_path,
selector,
self.repo.object_store.generate_pack_contents,
progress=progress
)
# Like: git push
def push(self, origin_uri=None, branch_name=None, progress=None, progress_stderr=None):
return self.push_to(origin_uri, branch_name, progress, progress_stderr)
# Not recommended at ALL ... !!!
def dirty_pull_from(self, origin_uri, branch_name=None):
# Remove all previously existing data
rmtree(self.path)
mkdir_safe(self.path)
self.repo = DulwichRepo.init(self.path)
# Fetch brand new copy from remote
return self.pull_from(origin_uri, branch_name)
def pull_from(self, origin_uri, branch_name=None):
return self.fetch(origin_uri)
# Like: git pull
def pull(self, origin_uri=None, branch_name=None):
return self.pull_from(origin_uri, branch_name)
def fetch_remote(self, origin_uri=None):
# Get client
client, remote_path = self.get_client(origin_uri=origin_uri)
# Fetch data from remote repository
remote_refs = client.fetch(remote_path, self.repo)
return remote_refs
def _setup_fetched_refs(self, refs, origin, bare):
remote_tags = utils.git.subrefs(refs, 'refs/tags')
remote_heads = utils.git.subrefs(refs, 'refs/heads')
# Filter refs
clean_remote_tags = utils.git.clean_refs(remote_tags)
clean_remote_heads = utils.git.clean_refs(remote_heads)
# Base of new refs
heads_base = 'refs/remotes/' + origin
if bare:
heads_base = 'refs/heads'
# Import branches
self.import_refs(
heads_base,
clean_remote_heads
)
# Import tags
self.import_refs(
'refs/tags',
clean_remote_tags
)
# Update HEAD
for k, v in refs.items():
self[k] = v
def fetch(self, origin_uri=None, bare=None, origin=None):
bare = bare or False
origin = origin or self.DEFAULT_REMOTE
# Remote refs
remote_refs = self.fetch_remote(origin_uri)
# Update refs (branches, tags, HEAD)
self._setup_fetched_refs(remote_refs, origin, bare)
# Checkout working directories
if not bare and self.has_commits:
self.checkout_all()
else:
self.update_server_info()
@classmethod
def clone(cls, origin_uri, local_path, auth=None, mkdir=True, bare=False, *args, **kwargs):
"""Clone a remote repository"""
mkdir_safe(local_path)
# Initialize the local repository
if bare:
local_repo = cls.init_bare(local_path)
else:
local_repo = cls.init(local_path)
repo = cls(local_repo, origin_uri=origin_uri, auth=auth, *args, **kwargs)
repo.fetch(bare=bare)
# Add origin
# TODO
return repo
@classmethod
def clone_bare(cls, *args, **kwargs):
"""Same as .clone except clones to a bare repository by default
"""
kwargs.setdefault('bare', True)
return cls.clone(*args, **kwargs)
def _commit(self, committer=None, author=None, message=None, files=None, tree=None, *args, **kwargs):
if not tree:
# If no tree then stage files
modified_files = files or self.modified_files
logging.warning("STAGING : %s" % modified_files)
self.add(modified_files)
# Messages
message = message or self.DEFAULT_MESSAGE
author_msg = self._format_userinfo(author)
committer_msg = self._format_userinfo(committer)
return self.repo.do_commit(
message=message,
author=author_msg,
committer=committer_msg,
encoding='UTF-8',
tree=tree,
*args, **kwargs
)
def _tree_from_structure(self, structure):
# TODO : Support directories
tree = Tree()
for file_info in structure:
# str only
try:
data = file_info['data'].encode('ascii')
name = file_info['name'].encode('ascii')
mode = file_info['mode']
except (KeyError, UnicodeEncodeError):
# Skip file on missing fields or encoding errors
continue
blob = Blob()
blob.data = data
# Store file's contents
self.repo.object_store.add_object(blob)
# Add blob entry
tree.add(
name,
mode,
blob.id
)
# Store tree
self.repo.object_store.add_object(tree)
return tree.id
# Like: git commit -a
def commit(self, name=None, email=None, message=None, files=None, *args, **kwargs):
user_info = {
'name': name,
'email': email,
}
return self._commit(
committer=user_info,
author=user_info,
message=message,
files=files,
*args,
**kwargs
)
def commit_structure(self, name=None, email=None, message=None, structure=None, *args, **kwargs):
"""Main use is to do commits directly to bare repositories
For example doing a first Initial Commit so the repo can be cloned and worked on right away
"""
if not structure:
return
tree = self._tree_from_structure(structure)
user_info = {
'name': name,
'email': email,
}
return self._commit(
committer=user_info,
author=user_info,
message=message,
tree=tree,
*args,
**kwargs
)
# Push all local commits
# and pull all remote commits
def sync(self, origin_uri=None):
self.push(origin_uri)
return self.pull(origin_uri)
def lookup_entry(self, relpath, trackable_files=set()):
if relpath not in trackable_files:
raise KeyError
abspath = self.abspath(relpath)
with open(abspath, 'rb') as git_file:
data = git_file.read()
s = sha1()
s.update("blob %u\0" % len(data))
s.update(data)
return (s.hexdigest(), os.stat(abspath).st_mode)
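The `lookup_entry` method above hashes a file exactly the way git hashes a blob object: the header `blob <size>\0` followed by the raw content, fed through SHA-1. A standalone Python 3 sketch of that scheme, independent of the class:

```python
from hashlib import sha1


def git_blob_sha(data):
    """Compute the SHA-1 that git assigns to a blob with the given bytes."""
    s = sha1()
    s.update(b"blob %d\0" % len(data))  # git object header: type, size, NUL
    s.update(data)
    return s.hexdigest()


# Matches `git hash-object` for the same content
print(git_blob_sha(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a
```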
@property
@funky.transform(set)
def tracked_files(self):
return list(self.index)
@property
@funky.transform(set)
def raw_files(self):
return utils.paths.subpaths(self.path)
@property
@funky.transform(set)
def ignored_files(self):
return utils.paths.subpaths(self.path, filters=self.filters)
@property
@funky.transform(set)
def trackable_files(self):
return self.raw_files - self.ignored_files
@property
@funky.transform(set)
def untracked_files(self):
return self.trackable_files - self.tracked_files
"""
@property
@funky.transform(set)
def modified_staged_files(self):
"Checks if the file has changed since last commit"
timestamp = self.last_commit.commit_time
index = self.index
return [
f
for f in self.tracked_files
if index[f][1][0] > timestamp
]
"""
# Return a list of tuples
# representing the changed elements in the git tree
def _changed_entries(self, ref=None):
ref = ref or self.DEFAULT_COMMIT
if not self.has_commits:
return []
obj_sto = self.repo.object_store
tree_id = self[ref].tree
names = self.trackable_files
lookup_func = partial(self.lookup_entry, trackable_files=names)
# Format = [((old_name, new_name), (old_mode, new_mode), (old_sha, new_sha)), ...]
tree_diff = changes_from_tree(names, lookup_func, obj_sto, tree_id, want_unchanged=False)
return list(tree_diff)
@funky.transform(set)
def _changed_entries_by_pattern(self, pattern):
changed_entries = self._changed_entries()
filtered_paths = [
funky.first_true(names)
for names, modes, sha in changed_entries
if tuple(map(bool, names)) == pattern and funky.first_true(names)
]
return filtered_paths
@property
@funky.transform(set)
def removed_files(self):
return self._changed_entries_by_pattern(self.PATTERN_REMOVED) - self.ignored_files
@property
@funky.transform(set)
def added_files(self):
return self._changed_entries_by_pattern(self.PATTERN_ADDED) - self.ignored_files
@property
@funky.transform(set)
def modified_files(self):
modified_files = self._changed_entries_by_pattern(self.PATTERN_MODIFIED) - self.ignored_files
return modified_files
@property
@funky.transform(set)
def modified_unstaged_files(self):
timestamp = self.last_commit.commit_time
return [
f
for f in self.tracked_files
if os.stat(self.abspath(f)).st_mtime > timestamp
]
@property
def pending_files(self):
"""
Returns a list of all files that could be possibly staged
"""
# Union of both
return self.modified_files | self.added_files | self.removed_files
@property
def pending_files_by_state(self):
files = {
'modified': self.modified_files,
'added': self.added_files,
'removed': self.removed_files
}
# "Flip" the dictionary
return {
path: state
for state, paths in files.items()
for path in paths
}
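The "flip" in `pending_files_by_state` inverts a state-to-paths mapping into a path-to-state mapping. A minimal standalone sketch (the file names are hypothetical):

```python
def flip_states(files_by_state):
    """Invert {state: set_of_paths} into {path: state}."""
    return {
        path: state
        for state, paths in files_by_state.items()
        for path in paths
    }


states = {
    'modified': {'README.md'},
    'added': {'setup.py'},
    'removed': set(),
}
print(flip_states(states))  # {'README.md': 'modified', 'setup.py': 'added'}
```

Note that a path appearing under two states would keep only the last one seen, which is fine here because the three sets are disjoint by construction.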
"""
@property
@funky.transform(set)
def modified_files(self):
return self.modified_staged_files | self.modified_unstaged_files
"""
# Like: git add
@funky.arglist_method
def stage(self, files):
return self.repo.stage(files)
def add(self, *args, **kwargs):
return self.stage(*args, **kwargs)
# Like: git rm
@funky.arglist_method
def rm(self, files, force=False):
index = self.index
index_files = filter(lambda f: f in index, files)
for f in index_files:
del index[f]
return index.write()
def mv_fs(self, file_pair):
old_name, new_name = file_pair
os.rename(old_name, new_name)
# Like: git mv
@funky.arglist_method
def mv(self, files_pair):
index = self.index
files_in_index = filter(lambda f: f[0] in index, files_pair)
map(self.mv_fs, files_in_index)
old_files = map(funky.first, files_in_index)
new_files = map(funky.last, files_in_index)
self.add(new_files)
self.rm(old_files)
self.add(old_files)
return
@working_only
def _checkout_tree(self, tree):
return build_index_from_tree(
self.repo.path,
self.repo.index_path(),
self.repo.object_store,
tree
)
def checkout_all(self, commit_sha=None):
commit_sha = commit_sha or self.head
commit_tree = self._commit_tree(commit_sha)
# Rebuild index from the current tree
return self._checkout_tree(commit_tree)
def checkout(self, commit_sha=None, files=None):
"""Checkout only a select amount of files
"""
commit_sha = commit_sha or self.head
files = files or []
return self
@funky.arglist_method
def reset(self, files, commit='HEAD'):
pass
def rm_all(self):
self.index.clear()
return self.index.write()
def _to_commit(self, commit_obj):
"""Allow methods to accept either SHAs or dulwich Commit objects as arguments
"""
if isinstance(commit_obj, basestring):
return self.repo[commit_obj]
return commit_obj
def _commit_sha(self, commit_obj):
"""Extract a Dulwich commit's SHA
"""
if utils.git.is_sha(commit_obj):
return commit_obj
elif isinstance(commit_obj, basestring):
# Can't use self[commit_obj] to avoid infinite recursion
commit_obj = self.repo[commit_obj]
return commit_obj.id
def _blob_data(self, sha):
"""Return a blob's content for a given SHA
"""
return self[sha].data
# Get the nth parent back for a given commit
def get_parent_commit(self, commit, n=None):
""" Recursively gets the nth parent for a given commit
Warning: a commit's parents are not necessarily the previous commits in the log
"""
if n is None:
n = 1
commit = self._to_commit(commit)
parents = commit.parents
if n <= 0 or not parents:
# Return a SHA
return self._commit_sha(commit)
parent_sha = parents[0]
parent = self[parent_sha]
# Recur
return self.get_parent_commit(parent, n - 1)
def get_previous_commit(self, commit_ref, n=None):
commit_sha = self._parse_reference(commit_ref)
n = n or 1
commits = self.commits()
return funky.next(commits, commit_sha, n=n, default=commit_sha)
def _parse_reference(self, ref_string):
# COMMIT_REF~x
if '~' in ref_string:
ref, count = ref_string.split('~')
count = int(count)
commit_sha = self._commit_sha(ref)
return self.get_previous_commit(commit_sha, count)
return self._commit_sha(ref_string)
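`_parse_reference` only understands the single `REF~n` suffix form of git's revision syntax. A standalone sketch of that split, using stub resolver functions and hypothetical SHAs in place of a real repository:

```python
def parse_reference(ref_string, resolve, previous):
    """Resolve 'REF~n' to the nth previous commit; plain refs resolve directly.

    `resolve` maps a ref name to a SHA; `previous` walks n commits back.
    """
    if '~' in ref_string:
        ref, count = ref_string.split('~')
        return previous(resolve(ref), int(count))
    return resolve(ref_string)


# Stub history, newest first (hypothetical SHAs)
history = ['c3', 'c2', 'c1']
resolve = lambda ref: 'c3' if ref == 'HEAD' else ref
previous = lambda sha, n: history[min(history.index(sha) + n, len(history) - 1)]

print(parse_reference('HEAD~2', resolve, previous))  # c1
print(parse_reference('c2', resolve, previous))      # c2
```

Like the method above, this clamps at the oldest commit instead of erroring when `n` walks past the start of history.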
def _commit_tree(self, commit_sha):
"""Return the tree object for a given commit
"""
return self[commit_sha].tree
def diff(self, commit_sha, compare_to=None, diff_type=None, filter_binary=True):
diff_type = diff_type or self.DEFAULT_DIFF_TYPE
diff_func = self.DIFF_FUNCTIONS[diff_type]
if not compare_to:
compare_to = self.get_previous_commit(commit_sha)
return self._diff_between(compare_to, commit_sha, diff_function=diff_func)
def diff_working(self, ref=None, filter_binary=True):
"""Diff between the current working directory and the HEAD
"""
return utils.git.diff_changes_paths(
self.repo.object_store,
self.path,
self._changed_entries(ref=ref),
filter_binary=filter_binary
)
def get_commit_files(self, commit_sha, parent_path=None, is_tree=None, paths=None):
"""Returns a dict of the following Format :
{
"directory/filename.txt": {
'name': 'filename.txt',
'path': "directory/filename.txt",
"sha": "xxxxxxxxxxxxxxxxxxxx",
"data": "blablabla",
"mode": 0xxxxx",
},
...
}
"""
# Default values
context = {}
is_tree = is_tree or False
parent_path = parent_path or ''
if is_tree:
tree = self[commit_sha]
else:
tree = self[self._commit_tree(commit_sha)]
for mode, path, sha in tree.entries():
# Check if entry is a directory
if mode == self.MODE_DIRECTORY:
context.update(
self.get_commit_files(sha, parent_path=os.path.join(parent_path, path), is_tree=True, paths=paths)
)
continue
subpath = os.path.join(parent_path, path)
# Only add the files we want
if not(paths is None or subpath in paths):
continue
# Add file entry
context[subpath] = {
'name': path,
'path': subpath,
'mode': mode,
'sha': sha,
'data': self._blob_data(sha),
}
return context
def file_versions(self, path):
"""Returns all commits where given file was modified
"""
versions = []
commits_info = self.commit_info()
seen_shas = set()
for commit in commits_info:
try:
files = self.get_commit_files(commit['sha'], paths=[path])
file_path, file_data = files.items()[0]
except IndexError:
continue
file_sha = file_data['sha']
if file_sha in seen_shas:
continue
else:
seen_shas.add(file_sha)
# Add file info
commit['file'] = file_data
versions.append(commit)
return versions
def _diff_between(self, old_commit_sha, new_commit_sha, diff_function=None, filter_binary=True):
"""Internal method for getting a diff between two commits
Please use the .diff method unless you have very specific needs
"""
# If commit is first commit (new_commit_sha == old_commit_sha)
# then compare to an empty tree
if new_commit_sha == old_commit_sha:
old_tree = Tree()
else:
old_tree = self._commit_tree(old_commit_sha)
new_tree = self._commit_tree(new_commit_sha)
return diff_function(self.repo.object_store, old_tree, new_tree, filter_binary=filter_binary)
def changes(self, *args, **kwargs):
""" List of changes between two SHAs
Returns a list of lists of tuples :
[
[
(oldpath, newpath), (oldmode, newmode), (oldsha, newsha)
],
...
]
"""
kwargs['diff_type'] = 'changes'
return self.diff(*args, **kwargs)
def changes_count(self, *args, **kwargs):
return len(self.changes(*args, **kwargs))
def _refs_by_pattern(self, pattern):
refs = self.refs
def item_filter(key_value):
"""Filter only concerned refs"""
key, value = key_value
return key.startswith(pattern)
def item_map(key_value):
"""Rewrite keys"""
key, value = key_value
new_key = key[len(pattern):]
return (new_key, value)
return dict(
map(item_map,
filter(
item_filter,
refs.items()
)
)
)
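`_refs_by_pattern` filters the full refs dict down to the entries under a prefix and strips that prefix from the keys. A standalone sketch using a plain dict comprehension instead of `map`/`filter` (the SHAs are hypothetical):

```python
def refs_by_pattern(refs, pattern):
    """Keep refs whose name starts with `pattern`, with the prefix removed."""
    return {
        key[len(pattern):]: value
        for key, value in refs.items()
        if key.startswith(pattern)
    }


refs = {
    'refs/heads/master': 'aaa111',
    'refs/heads/dev': 'bbb222',
    'refs/tags/v1.0': 'ccc333',
}
print(refs_by_pattern(refs, 'refs/heads/'))  # {'master': 'aaa111', 'dev': 'bbb222'}
```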
@property
def refs(self):
return self.repo.get_refs()
def set_refs(self, refs_dict):
for k, v in refs_dict.items():
self.repo[k] = v
def import_refs(self, base, other):
return self.repo.refs.import_refs(base, other)
@property
def branches(self):
return self._refs_by_pattern(self.REFS_BRANCHES)
def _active_branch(self, refs=None, head=None):
head = head or self.head
refs = refs or self.branches
try:
return {
branch: branch_head
for branch, branch_head in refs.items()
if branch_head == head
}.items()[0]
except IndexError:
pass
return (None, None)
@property
def active_branch(self):
return self._active_branch()[0]
@property
def active_sha(self):
return self._active_branch()[1]
@property
def remote_branches(self):
return self._refs_by_pattern(self.REFS_REMOTES)
@property
def tags(self):
return self._refs_by_pattern(self.REFS_TAGS)
@property
def remotes(self):
""" Dict of remotes
{
'origin': 'http://friendco.de/some_user/repo.git',
...
}
"""
config = self.repo.get_config()
return {
keys[1]: values['url']
for keys, values in config.items()
if keys[0] == 'remote'
}
def add_ref(self, new_ref, old_ref):
self.repo.refs[new_ref] = self.repo.refs[old_ref]
self.update_server_info()
def remove_ref(self, ref_name):
# Returns False if ref doesn't exist
if ref_name not in self.repo.refs:
return False
del self.repo.refs[ref_name]
self.update_server_info()
return True
def create_branch(self, base_branch, new_branch, tracking=None):
"""Try creating a new branch which tracks the given remote
if such a branch does not exist then branch off a local branch
"""
# The remote to track
tracking = tracking or self.DEFAULT_REMOTE
# Already exists
if new_branch in self.branches:
raise Exception("branch %s already exists" % new_branch)
# Get information about remote_branch
remote_branch = os.path.sep.join([tracking, base_branch])
# Fork Local
if base_branch in self.branches:
base_ref = self._format_ref_branch(base_branch)
# Fork remote
elif remote_branch in self.remote_branches:
base_ref = self._format_ref_remote(remote_branch)
# TODO : track
else:
raise Exception("Can not find the branch named '%s' to fork either locally or in '%s'" % (base_branch, tracking))
# Reference of new branch
new_ref = self._format_ref_branch(new_branch)
# Copy reference to create branch
self.add_ref(new_ref, base_ref)
return new_ref
def remove_branch(self, branch_name):
ref = self._format_ref_branch(branch_name)
return self.remove_ref(ref)
def switch_branch(self, branch_name, tracking=None, create=None):
"""Changes the current branch
"""
if create is None:
create = True
# Check if branch exists
if branch_name not in self.branches:
self.create_branch(branch_name, branch_name, tracking=tracking)
# Get branch reference
branch_ref = self._format_ref_branch(branch_name)
# Change main branch
self.repo.refs.set_symbolic_ref('HEAD', branch_ref)
if self.is_working:
# Remove all files
self.clean_working()
# Add files for the current branch
self.checkout_all()
def clean(self, force=None, directories=None):
untracked_files = self.untracked_files
map(os.remove, untracked_files)
return untracked_files
def clean_working(self):
"""Purge the working directory (removes everything except .git);
used when switching branches to get a clean checkout
"""
return self.clean()
def _get_fs_structure(self, tree_sha, depth=None, parent_sha=None):
tree = self[tree_sha]
structure = {}
if depth is None:
depth = self.MAX_TREE_DEPTH
elif depth == 0:
return structure
for mode, path, sha in tree.entries():
# tree
if mode == self.MODE_DIRECTORY:
# Recur
structure[path] = self._get_fs_structure(sha, depth=depth - 1, parent_sha=tree_sha)
# commit
else:
structure[path] = sha
structure['.'] = tree_sha
structure['..'] = parent_sha or tree_sha
return structure
def _get_fs_structure_by_path(self, tree_sha, path):
parts = path.split(os.path.sep)
depth = len(parts) + 1
structure = self._get_fs_structure(tree_sha, depth=depth)
return funky.subkey(structure, parts)
def commit_ls(self, ref, subpath=None):
"""List a "directory" for a given commit
using the tree of that commit
"""
tree_sha = self._commit_tree(ref)
# Root path
if subpath in self.ROOT_PATHS or not subpath:
return self._get_fs_structure(tree_sha, depth=1)
# Any other path
return self._get_fs_structure_by_path(tree_sha, subpath)
def commit_file(self, ref, path):
"""Return info on a given file for a given commit
"""
name, info = self.get_commit_files(ref, paths=[path]).items()[0]
return info
def commit_tree(self, ref, *args, **kwargs):
tree_sha = self._commit_tree(ref)
return self._get_fs_structure(tree_sha, *args, **kwargs)
def update_server_info(self):
if not self.is_bare:
return
update_server_info(self.repo)
def _is_fast_forward(self):
pass
def _merge_fast_forward(self):
pass
def __hash__(self):
"""This is required otherwise the memoize function will just mess it up
"""
return hash(self.path)
def __getitem__(self, key):
sha = self._parse_reference(key)
return self.repo[sha]
def __setitem__(self, key, value):
self.repo[key] = value
def __contains__(self, key):
return key in self.repo
# Alias to clone_bare
fork = clone_bare
log = commit_info
diff_count = changes_count
contributors = recent_contributors

# File: app/core/urls.py (repo: vatsamail/django-profiles, MIT license)
from django.urls import include, path, re_path
from . import views
from django.contrib.auth.views import (
    login,
    logout,
    password_reset,
    password_reset_done,
    password_reset_confirm,
    password_reset_complete,
)

app_name = 'core'

urlpatterns = [
    path('', views.HomeView.as_view(), name='home'),
    re_path(r'friending/(?P<operation>.+)/(?P<pk>\d+)/$', views.friending, name='friend_unfriend'),
]

# File: Python/lab2/temp_convert_FtoC.py (repo: varuneagle555/BSA-STEM-Merit-Badge-Week, MIT license)
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""temp_convert.py: Convert temperature F to C."""

# initialize looping variable, assume yes as first answer
continueYN = "Y"

while continueYN.upper() == "Y":
    # get temperature input from the user, and prompt them for what we expect
    degF = int(raw_input("Enter temperature in degrees Fahrenheit (°F) to convert: "))
    degC = (degF - 32) * 5.0 / 9
    print "Temperature in degrees C is: {temp}".format(temp=degC)
    # check for temperature below freezing...
    if degC < 0:
        print "Pack long underwear!"
    # check for it being a very hot day...
    if degF > 100:
        print "Remember to hydrate!"
    continueYN = raw_input("Would you like to enter another (Y/N)? ")
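The conversion step in the loop can be factored into a function. Note that with Python 2 integer arithmetic, `(degF - 32) * 5/9` truncates, so the ratio should involve a float. A Python 3 sketch:

```python
def f_to_c(deg_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32) * 5.0 / 9.0


print(f_to_c(212))  # 100.0
print(f_to_c(32))   # 0.0
```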

# File: setup.py (repo: jaspershen/getDB, MIT license)
from setuptools import setup, find_packages
setup(name='getDB',
      version='0.0.4',
      description="This module can be used to download HMDB and KEGG database.",
      license='MIT',
      author='Xiaotao Shen',
      author_email='shenxt1990@163.com',
      url='https://github.com/jaspershen/getDB',
      long_description_content_type="text/markdown",
      packages=find_packages(),
      install_requires=['requests', 'pandas', 'bs4', 'numpy'],
      classifiers=[
          'Programming Language :: Python :: 3.6',
          'Programming Language :: Python :: 3.7'
      ]
)

# File: adios-1.9.0/wrappers/numpy/example/utils/ncdf2bp.py (repo: swatisgupta/Adaptive-compression, BSD-2-Clause license)
#!/usr/bin/env python
"""
Example:
$ python ./ncdf2bp.py netcdf_file
"""
from adios import *
from scipy.io import netcdf
import numpy as np
import sys
import os
import operator


def usage():
    print os.path.basename(sys.argv[0]), "netcdf_file"


if len(sys.argv) < 2:
    usage()
    sys.exit(0)

##fname = "MERRA100.prod.assim.tavg3_3d_mst_Cp.19791010.SUB.nc"
fname = sys.argv[1]
fout = '.'.join(fname.split('.')[:-1]) + ".bp"

tname = "time"
if len(sys.argv) > 2:
    tname = sys.argv[2]

## Open NetCDF file
f = netcdf.netcdf_file(fname, 'r')

## Check dimension
assert (all(map(lambda x: x is not None,
                [val for k, val in f.dimensions.items()
                 if k != tname])))

## Two types of variables : time-dependent or time-independent
dimvar = {n: v for n, v in f.variables.items() if n in f.dimensions.keys()}
var = {n: v for n, v in f.variables.items() if n not in f.dimensions.keys()}
tdepvar = {n: v for n, v in var.items() if tname in v.dimensions}
tindvar = {n: v for n, v in var.items() if tname not in v.dimensions}

## Time dimension
if len(tdepvar) > 0:
    assert (len(set([v.dimensions.index(tname) for v in tdepvar.values()])) == 1)
    tdx = tdepvar.values()[0].dimensions.index(tname)
    assert (all([v.data.shape[tdx] for v in tdepvar.values()]))
    tdim = tdepvar.values()[0].shape[tdx]
else:
    tdim = 1

## Init ADIOS without xml
init_noxml()
allocate_buffer(BUFFER_ALLOC_WHEN.NOW, 100)
gid = declare_group("group", tname, FLAG.YES)
select_method(gid, "POSIX1", "verbose=3", "")

d1size = 0
for name, val in f.dimensions.items():
    if name == tname:
        continue
    print "Dimension : %s (%d)" % (name, val)
    define_var(gid, name, "", DATATYPE.integer, "", "", "")
    d1size += 4

"""
d2size = 0
for name, var in dimvar.items():
    if name == tname:
        continue
    if name in f.dimensions.keys():
        name = "v_" + name
    print "Dim variable : %s (%s)" % (name, ','.join(var.dimensions))
    define_var(gid, name, "", np2adiostype(var.data.dtype.type),
               ','.join(var.dimensions),
               "",
               "")
    d2size += var.data.size * var.data.dtype.itemsize
"""

v1size = 0
for name, var in tindvar.items():
    print "Variable : %s (%s)" % (name, ','.join(var.dimensions))
    define_var(gid, name, "", np2adiostype(var.data.dtype.type),
               ','.join(var.dimensions),
               "",
               "")
    v1size += var.data.size * var.data.dtype.itemsize

v2size = 0
for name, var in tdepvar.items():
    print "Variable : %s (%s)" % (name, ','.join(var.dimensions))
    define_var(gid, name, "", np2adiostype(var.data.dtype.type),
               ','.join(var.dimensions),
               ','.join([dname for dname in var.dimensions
                         if dname != tname]),
               "0,0,0")
    v2size += var.data.size * var.data.dtype.itemsize / tdim

## Clean old file
if os.access(fout, os.F_OK):
    os.remove(fout)

for it in range(tdim):
    print
    print "Time step : %d" % (it)
    fd = open("group", fout, "a")
    groupsize = d1size + v1size + v2size
    set_group_size(fd, groupsize)
    for name, val in f.dimensions.items():
        if name == tname:
            continue
        print "Dimension writing : %s (%d)" % (name, val)
        write_int(fd, name, val)
    for name, var in tindvar.items():
        try:
            arr = np.array(var.data,
                           dtype=var.data.dtype.type)
            print "Time independent variable writing : %s %s" % (name, arr.shape)
            write(fd, name, arr)
        except ValueError:
            print "Skip:", name
    for name, var in tdepvar.items():
        try:
            arr = np.array(var.data.take([it], axis=tdx),
                           dtype=var.data.dtype)
            print "Time dependent variable writing : %s %s" % (name, arr.shape)
            write(fd, name, arr)
        except ValueError:
            print "Skip:", name
    close(fd)

f.close()
finalize()

print
print "Done. Saved:", fout
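The script's central classification (splitting NetCDF variables into time-dependent and time-independent sets by whether the record dimension appears in their dimension tuple) can be sketched without SciPy, using plain dicts of dimension tuples as stand-ins for variables; the variable names below are hypothetical:

```python
def split_by_time(var_dims, tname='time'):
    """Partition {name: dims} into (time-dependent, time-independent) dicts."""
    tdep = {n: d for n, d in var_dims.items() if tname in d}
    tind = {n: d for n, d in var_dims.items() if tname not in d}
    return tdep, tind


var_dims = {
    'temperature': ('time', 'lat', 'lon'),
    'elevation': ('lat', 'lon'),
}
tdep, tind = split_by_time(var_dims)
print(sorted(tdep))  # ['temperature']
print(sorted(tind))  # ['elevation']
```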

# File: StepperComms.py (repo: MicaelJarniac/StepperComms, MIT license)
# Required imports
import yaml
import segmentation_models_pytorch as smp
import torch
import argparse
import torch.nn as nn
import timm
from model_wrapper import Classification_model
def prepare_model(opt):
with open(opt.hyp) as f:
experiment_dict = yaml.load(f, Loader=yaml.FullLoader)
model_pretrained = timm.create_model(experiment_dict['model']['name'], pretrained=experiment_dict['model']['pretrained'],num_classes=experiment_dict['model']['num_classes'])
model = Classification_model(model_pretrained,experiment_dict['model']['model_type'],experiment_dict['model']['num_classes_mt'])
model=nn.DataParallel(model)
model.load_state_dict(torch.load(experiment_dict["savepath"])["model_state_dict"])
model=model.module.model
torch.save(model, experiment_dict["final_model_path"])
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--hyp', type=str, default='configs/baseline_signal.yaml', help='hyperparameters path')
opt = parser.parse_args()
prepare_model(opt)
| 41.68 | 177 | 0.764875 | 137 | 1,042 | 5.525547 | 0.416058 | 0.147952 | 0.125495 | 0.076618 | 0.076618 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109405 | 1,042 | 24 | 178 | 43.416667 | 0.815733 | 0 | 0 | 0 | 0 | 0 | 0.167946 | 0.026871 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.363636 | 0 | 0.409091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
d258b7f764b2791ef696f1cad34e04a51316c183 | 4,511 | py | Python | StepperComms.py | MicaelJarniac/StepperComms | 53336a3733c1b5bb30b3d001b7fe3414f9c3fab9 | [
"MIT"
] | null | null | null | StepperComms.py | MicaelJarniac/StepperComms | 53336a3733c1b5bb30b3d001b7fe3414f9c3fab9 | [
"MIT"
] | null | null | null | StepperComms.py | MicaelJarniac/StepperComms | 53336a3733c1b5bb30b3d001b7fe3414f9c3fab9 | [
"MIT"
] | null | null | null | # Required imports
import os
import sys
import serial
import time
sys.path.append(os.path.dirname(os.path.expanduser('~/projects/Python-Playground/Debug'))) # Update path accordingly
from Debug.Debug import Debug
# Declare debug
debug = Debug(True, 3).prt # Simplifies debugging messages
# Message building blocks
RW_CMD = 0x80 # Validation check
TRANSFER_SIZE_MASK = 0x3f # Masks bits used for transfer size
BYTE_MASK = 0xff # Masks 1 byte
RW_MASK = 0x40 # Bit used for defining if 'read' or 'write' command type
READ = 1 # Command of type 'read'
WRITE = 0 # 'write'
ID_AMOUNT = 38 # Amount of remote variables
# Message size
CMD_ADDR_SIZE = 1
CMD_INFO_SIZE = 1 + CMD_ADDR_SIZE # 1 byte (basic info & transfer size) + 1 byte (address)
CMD_DATA_SIZE = 61 # 61 bytes (data)
CMD_BUFF_SIZE = CMD_INFO_SIZE + CMD_DATA_SIZE # Command info + command data
# Message buffer and related
OutCmdBuffer = [None] * CMD_BUFF_SIZE # Initializes the buffer with given size
# TODO Remove not used var
OutCmdBufferId = 0 # Holds the current buffer position
# Message parameters
CmdType = WRITE # Command type ('read' or 'write')
CmdSize = 0 # size
CmdAddr = 0 # address
CmdData = [None] * CMD_DATA_SIZE # data
# Serial configuration parameters
SerPort = "/dev/serial0" # Device
SerBaud = 9600 # Baud rate
SerTout = 1 # Timeout
SerDelay = 0.05 # Delay between quick writes
# Declare serial
ser = serial.Serial(
port = SerPort, # Serial port configurable above
baudrate = SerBaud, # Baudrate configurable above
bytesize = serial.EIGHTBITS, # Byte size hardcoded 8 bits
parity = serial.PARITY_NONE, # Parity hardcoded no parity
stopbits = serial.STOPBITS_TWO, # Stop bits hardcoded 2 stopbits
timeout = SerTout, # Timeout configurable above
xonxoff = False, # ? hardcoded false
rtscts = False, # ? hardcoded false
dsrdtr = False, # ? hardcoded false
write_timeout = SerTout, # Write timeout configurable above
inter_byte_timeout = None) # ? hardcoded none
# Remote variables
RemoteVars = [None] * ID_AMOUNT # Stores received variables
def BuildMessage():
# Iterates through entire message length
for i in range(CMD_INFO_SIZE + CmdSize):
data = 0
# Builds first byte
if i == 0:
data |= RW_CMD & BYTE_MASK # Validation check bit
data |= RW_MASK & (BYTE_MASK * CmdType) # Command type bit
data |= CmdSize & TRANSFER_SIZE_MASK # Transfer size bits
# Builds second byte
elif i == 1:
data |= CmdAddr & BYTE_MASK # Address byte
# Builds remaining bytes
else:
data |= CmdData[i - CMD_INFO_SIZE] & BYTE_MASK
# Assigns built byte to its position on the message buffer
OutCmdBuffer[i] = data & BYTE_MASK
def SendMessage():
# Iterates through info bytes + command bytes
for i in range(CMD_INFO_SIZE + CmdSize):
ser.write(serial.to_bytes([OutCmdBuffer[i] & BYTE_MASK])) # Writes current message buffer position to the serial device
debug("{1:02d} - {0:08b}".format(OutCmdBuffer[i], i))
time.sleep(SerDelay)
def ReadMessage():
    # TODO Read message; return None as a stub so the module parses until implemented
    return None
def GetRemoteVars():
    # These parameters are module-level and read by BuildMessage()/SendMessage(),
    # so they must be declared global here or the assignments create locals
    global CmdType, CmdSize, CmdAddr
    CmdType = READ
    CmdSize = 0 # TODO Requires actual data size
for i in range(ID_AMOUNT):
CmdAddr = i
BuildMessage()
SendMessage()
RemoteVars[i] = ReadMessage()
# Main loop
while True:
# Clear serial in and out buffers
ser.reset_input_buffer()
ser.reset_output_buffer()
# Placeholders
CmdType = WRITE
CmdSize = 1
CmdAddr = 31
CmdData[0] = 0x1
BuildMessage()
SendMessage()
debug("\n")
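The first command byte built in `BuildMessage` packs the validation bit, the read/write bit, and the 6-bit transfer size into one byte. A standalone sketch of that packing plus its inverse (function names are illustrative; the constants are copied from the module above):

```python
RW_CMD = 0x80              # Validation bit
RW_MASK = 0x40             # Read/write bit
TRANSFER_SIZE_MASK = 0x3f  # Low 6 bits carry the transfer size

def pack_first_byte(cmd_type, cmd_size):
    """Mirror of the i == 0 branch in BuildMessage."""
    return (RW_CMD | (RW_MASK * cmd_type) | (cmd_size & TRANSFER_SIZE_MASK)) & 0xff

def unpack_first_byte(byte):
    """Recover (valid, cmd_type, cmd_size) from a packed first byte."""
    return bool(byte & RW_CMD), (byte & RW_MASK) >> 6, byte & TRANSFER_SIZE_MASK
```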
| 38.228814 | 144 | 0.552871 | 479 | 4,511 | 5.098121 | 0.359081 | 0.022932 | 0.022523 | 0.013514 | 0.023751 | 0.023751 | 0.023751 | 0.023751 | 0 | 0 | 0 | 0.018969 | 0.380625 | 4,511 | 117 | 145 | 38.555556 | 0.855047 | 0.349368 | 0 | 0.12987 | 0 | 0 | 0.022617 | 0.01183 | 0 | 0 | 0.006611 | 0.008547 | 0 | 0 | null | null | 0 | 0.064935 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d25b3d6bc31f3ca7960ee1d2b2edc46e92e9ff1d | 6,142 | py | Python | skills/eliza/test_eliza.py | oserikov/dream | 109ba2df799025dcdada1fddbb7380e1c03100eb | [
"Apache-2.0"
] | 34 | 2021-08-18T14:51:44.000Z | 2022-03-10T14:14:48.000Z | skills/eliza/test_eliza.py | oserikov/dream | 109ba2df799025dcdada1fddbb7380e1c03100eb | [
"Apache-2.0"
] | 27 | 2021-08-30T14:42:09.000Z | 2022-03-17T22:11:45.000Z | skills/eliza/test_eliza.py | oserikov/dream | 109ba2df799025dcdada1fddbb7380e1c03100eb | [
"Apache-2.0"
] | 40 | 2021-08-22T07:13:32.000Z | 2022-03-29T11:45:32.000Z | import unittest
import eliza
class ElizaTest(unittest.TestCase):
def test_decomp_1(self):
el = eliza.Eliza()
self.assertEqual([], el._match_decomp(["a"], ["a"]))
self.assertEqual([], el._match_decomp(["a", "b"], ["a", "b"]))
def test_decomp_2(self):
el = eliza.Eliza()
self.assertIsNone(el._match_decomp(["a"], ["b"]))
self.assertIsNone(el._match_decomp(["a"], ["a", "b"]))
self.assertIsNone(el._match_decomp(["a", "b"], ["a"]))
self.assertIsNone(el._match_decomp(["a", "b"], ["b", "a"]))
def test_decomp_3(self):
el = eliza.Eliza()
self.assertEqual([["a"]], el._match_decomp(["*"], ["a"]))
self.assertEqual([["a", "b"]], el._match_decomp(["*"], ["a", "b"]))
self.assertEqual([["a", "b", "c"]], el._match_decomp(["*"], ["a", "b", "c"]))
def test_decomp_4(self):
el = eliza.Eliza()
self.assertEqual([], el._match_decomp([], []))
self.assertEqual([[]], el._match_decomp(["*"], []))
def test_decomp_5(self):
el = eliza.Eliza()
self.assertIsNone(el._match_decomp(["a"], []))
self.assertIsNone(el._match_decomp([], ["a"]))
def test_decomp_6(self):
el = eliza.Eliza()
self.assertEqual([["0"]], el._match_decomp(["*", "a"], ["0", "a"]))
self.assertEqual([["0", "a"]], el._match_decomp(["*", "a"], ["0", "a", "a"]))
self.assertEqual([["0", "a", "b"]], el._match_decomp(["*", "a"], ["0", "a", "b", "a"]))
self.assertEqual([["0", "1"]], el._match_decomp(["*", "a"], ["0", "1", "a"]))
def test_decomp_7(self):
el = eliza.Eliza()
self.assertEqual([[]], el._match_decomp(["*", "a"], ["a"]))
def test_decomp_8(self):
el = eliza.Eliza()
self.assertIsNone(el._match_decomp(["*", "a"], ["a", "b"]))
self.assertIsNone(el._match_decomp(["*", "a"], ["0", "a", "b"]))
self.assertIsNone(el._match_decomp(["*", "a"], ["0", "1", "a", "b"]))
def test_decomp_9(self):
el = eliza.Eliza()
self.assertEqual([["0"], ["b"]], el._match_decomp(["*", "a", "*"], ["0", "a", "b"]))
self.assertEqual([["0"], ["b", "c"]], el._match_decomp(["*", "a", "*"], ["0", "a", "b", "c"]))
def test_decomp_10(self):
el = eliza.Eliza()
self.assertEqual([["0"], []], el._match_decomp(["*", "a", "*"], ["0", "a"]))
self.assertEqual([[], []], el._match_decomp(["*", "a", "*"], ["a"]))
self.assertEqual([[], ["b"]], el._match_decomp(["*", "a", "*"], ["a", "b"]))
def test_syn_1(self):
el = eliza.Eliza()
el.load("doctor.txt")
self.assertEqual([["am"]], el._match_decomp(["@be"], ["am"]))
self.assertEqual([["am"]], el._match_decomp(["@be", "a"], ["am", "a"]))
self.assertEqual([["am"]], el._match_decomp(["a", "@be", "b"], ["a", "am", "b"]))
def test_syn_2(self):
el = eliza.Eliza()
el.load("doctor.txt")
self.assertIsNone(el._match_decomp(["@be"], ["a"]))
def test_syn_3(self):
el = eliza.Eliza()
el.load("doctor.txt")
self.assertIsNotNone(el._match_decomp(["*", "i", "am", "@sad", "*"], ["its", "true", "i", "am", "unhappy"]))
def test_response_1(self):
el = eliza.Eliza()
el.load("doctor.txt")
self.assertEqual("In what way ?", el.respond("Men are all alike."))
self.assertEqual(
"Can you think of a specific example ?", el.respond("They're always bugging us about something or other.")
)
self.assertEqual("Your boyfriend made you come here ?", el.respond("Well, my boyfriend made me come here."))
self.assertEqual(
"I am sorry to hear that you are depressed .", el.respond("He says I'm depressed much of the time.")
)
self.assertEqual(
"Do you think that coming here will help you not to be unhappy ?", el.respond("It's true. I am unhappy.")
)
self.assertEqual(
"What would it mean to you if you got some help ?", el.respond("I need some help, that much seems certain.")
)
self.assertEqual(
"Tell me more about your family.", el.respond("Perhaps I could learn to get along with my mother.")
)
self.assertEqual("Who else in your family takes care of you ?", el.respond("My mother takes care of me."))
self.assertEqual("Your father ?", el.respond("My father."))
self.assertEqual("What resemblence do you see ?", el.respond("You are like my father in some ways."))
self.assertEqual(
"What makes you think I am not very aggressive ?",
el.respond("You are not very aggressive, but I think you don't want me to notice that."),
)
self.assertEqual("Why do you think I don't argue with you ?", el.respond("You don't argue with me."))
self.assertEqual("Does it please you to believe I am afraid of you ?", el.respond("You are afraid of me."))
self.assertEqual(
"What else comes to mind when you think of your father ?", el.respond("My father is afraid of everybody.")
)
self.assertIn(
el.respond("Bullies."),
[
"Lets discuss further why your boyfriend made you come here .",
"Earlier you said your mother .",
"But your mother takes care of you .",
"Does that have anything to do with the fact that your boyfriend made you come here ?",
"Does that have anything to do with the fact that your father ?",
"Lets discuss further why your father is afraid of everybody .",
],
)
def test_response_2(self):
el = eliza.Eliza()
el.load("doctor.txt")
self.assertEqual(el.initial(), "How do you do. Please tell me your problem.")
self.assertIn(
el.respond("Hello"), ["How do you do. Please state your problem.", "Hi. What seems to be your problem ?"]
)
self.assertEqual(el.final(), "Goodbye. Thank you for talking to me.")
if __name__ == "__main__":
unittest.main()
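The `_match_decomp` behavior these tests pin down is a simple wildcard decomposition: literal tokens must match exactly, each `"*"` captures the shortest run of words that lets the rest of the pattern match, and the captures come back as a list (or `None` on failure). A minimal re-implementation under those assumptions (the real `eliza.Eliza` additionally expands `@synonym` tokens loaded from doctor.txt):

```python
def match_decomp(parts, words):
    """Return the list of word runs captured by '*' wildcards, or None."""
    if not parts:
        return [] if not words else None
    if parts[0] == "*":
        # Try the shortest capture first, growing until the tail matches.
        for n in range(len(words) + 1):
            rest = match_decomp(parts[1:], words[n:])
            if rest is not None:
                return [words[:n]] + rest
        return None
    if not words or parts[0] != words[0]:
        return None
    return match_decomp(parts[1:], words[1:])
```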
| 45.496296 | 120 | 0.545262 | 806 | 6,142 | 4.031017 | 0.19727 | 0.166205 | 0.124038 | 0.107725 | 0.50908 | 0.432133 | 0.340412 | 0.280702 | 0.268082 | 0.219452 | 0 | 0.007604 | 0.25057 | 6,142 | 134 | 121 | 45.835821 | 0.69824 | 0 | 0 | 0.25 | 0 | 0 | 0.287854 | 0 | 0 | 0 | 0 | 0 | 0.422414 | 1 | 0.12931 | false | 0 | 0.017241 | 0 | 0.155172 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d25df58bed9f8be63b8c4a15d08e86c300ade0fd | 2,511 | py | Python | pelican_resume/resume.py | cmenguy/pelican-resume | 57105e72c24ef04ad96857f51e5e9060e6aff1f6 | [
"MIT"
] | 12 | 2016-02-07T05:16:44.000Z | 2019-11-20T08:46:10.000Z | pelican_resume/resume.py | cmenguy/pelican-resume | 57105e72c24ef04ad96857f51e5e9060e6aff1f6 | [
"MIT"
] | 1 | 2019-01-20T20:57:35.000Z | 2019-01-20T20:59:59.000Z | pelican_resume/resume.py | cmenguy/pelican-resume | 57105e72c24ef04ad96857f51e5e9060e6aff1f6 | [
"MIT"
] | 5 | 2016-06-07T23:34:36.000Z | 2020-07-13T18:01:23.000Z | '''
resume
==============================================================================
This plugin generates a PDF resume from a Markdown file using customizable CSS
'''
import os
import logging
import tempfile
from subprocess import Popen
from pelican import signals
CURRENT_DIR = os.path.dirname(os.path.abspath(__file__))
CSS_DIR = os.path.join(CURRENT_DIR, "static", "css")
logger = logging.getLogger(__name__)
def set_default_settings(settings):
settings.setdefault("RESUME_SRC", "pages/resume.md")
settings.setdefault("RESUME_PDF", "pdfs/resume.pdf")
settings.setdefault("RESUME_CSS_DIR", CSS_DIR)
settings.setdefault("RESUME_TYPE", "moderncv")
settings.setdefault("RESUME_PANDOC", "pandoc")
settings.setdefault("RESUME_WKHTMLTOPDF", "wkhtmltopdf")
def init_default_config(pelican):
from pelican.settings import DEFAULT_CONFIG
set_default_settings(DEFAULT_CONFIG)
    if pelican:
set_default_settings(pelican.settings)
def generate_pdf_resume(generator):
path = generator.path
output_path = generator.settings.get("OUTPUT_PATH")
markdown = os.path.join(path, generator.settings.get("RESUME_SRC"))
css_type = generator.settings.get("RESUME_TYPE")
css = os.path.join(generator.settings.get("RESUME_CSS_DIR"), "%s.css" % css_type)
if not os.path.exists(markdown):
        logger.critical("Markdown resume not found under %s" % markdown)
return
if css and not os.path.exists(os.path.join(path, css)):
        logger.warning("Resume CSS not found under %s, CSS will be ignored" % css)
css = os.path.join(path, css) if css else css
with tempfile.NamedTemporaryFile(suffix=".html") as html_output:
pdf_output = os.path.join(output_path, generator.settings.get("RESUME_PDF"))
pdf_dir = os.path.dirname(pdf_output)
if not os.path.exists(pdf_dir):
os.makedirs(pdf_dir)
pandoc = generator.settings.get("RESUME_PANDOC")
wkhtmltopdf = generator.settings.get("RESUME_WKHTMLTOPDF")
html_cmd = "%s --standalone " % pandoc
if css:
html_cmd += "-c %s " % css
html_cmd += "--from markdown --to html -o %s %s" % (html_output.name, markdown)
Popen(html_cmd, shell=True).wait()
pdf_cmd = "%s %s %s" % (wkhtmltopdf, html_output.name, pdf_output)
Popen(pdf_cmd, shell=True).wait()
def register():
signals.initialized.connect(init_default_config)
signals.article_generator_finalized.connect(generate_pdf_resume) | 38.630769 | 87 | 0.682597 | 325 | 2,511 | 5.076923 | 0.252308 | 0.043636 | 0.084848 | 0.094545 | 0.099394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169255 | 2,511 | 65 | 88 | 38.630769 | 0.790988 | 0.065313 | 0 | 0 | 1 | 0 | 0.164957 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081633 | false | 0 | 0.122449 | 0 | 0.22449 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d25e34eee54e20d2dc920f68d0031efffaa533b3 | 331 | py | Python | app/machine_learning.py | ludthor/CovidVisualizer | 721015e8f9f0b1c0fb2e5ba985884341d22046e2 | [
"MIT"
] | null | null | null | app/machine_learning.py | ludthor/CovidVisualizer | 721015e8f9f0b1c0fb2e5ba985884341d22046e2 | [
"MIT"
] | null | null | null | app/machine_learning.py | ludthor/CovidVisualizer | 721015e8f9f0b1c0fb2e5ba985884341d22046e2 | [
"MIT"
] | null | null | null | from sklearn.linear_model import Ridge
class MachineLearning:
    def __init__(self):
        self.model = None
    def train_model(self, X, y):
        lr = Ridge(alpha=0.5)
        lr.fit(X, y)
        print(lr)
        self.model = lr
    def predict(self, X):
        return self.model.predict(X)
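For a single feature with no intercept, the Ridge fit used above has a closed form: w = Σxy / (Σx² + α). A dependency-free sketch, handy for sanity-checking the wrapper on toy data (note sklearn's `Ridge` fits an intercept by default, so pass `fit_intercept=False` when comparing):

```python
def ridge_1d(xs, ys, alpha=0.5):
    """Closed-form ridge coefficient for one feature, no intercept:
    w = sum(x*y) / (sum(x*x) + alpha)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + alpha)
```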
| 15.761905 | 38 | 0.570997 | 45 | 331 | 4.066667 | 0.533333 | 0.147541 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008889 | 0.320242 | 331 | 20 | 39 | 16.55 | 0.804444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.083333 | 0 | 0.5 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d26193d8f95b87350b91fd8517bcdb1ccfde7d7b | 3,936 | py | Python | Ch_5/linear_alg5.py | Skyblueballykid/linalg | 515eea984856ad39c823314178929876b21f8014 | [
"MIT"
] | null | null | null | Ch_5/linear_alg5.py | Skyblueballykid/linalg | 515eea984856ad39c823314178929876b21f8014 | [
"MIT"
] | null | null | null | Ch_5/linear_alg5.py | Skyblueballykid/linalg | 515eea984856ad39c823314178929876b21f8014 | [
"MIT"
] | null | null | null | import numpy as np
import scipy
import sympy
from numpy import linalg as lg
from numpy.linalg import solve
from numpy.linalg import eig
from scipy.integrate import quad
# Question 1
'''
A. Determinant = -21
B. Determinant = -21
'''
m1 = np.array([[3, 0, 3], [2, 3, 3], [0, 4, -1]])
print(m1)
det1 = np.linalg.det(m1)
print(det1) # correct
# Question 2
# Det = -159
# Question 3
'''
A.
Replace row 3 with k times row 3.
B.
The determinant is multiplied by k.
'''
# Question 4
m2 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
det2 = np.linalg.det(m2)
print(det2) # correct
# Question 5
'''
A.
False, because the determinant of A can be computed by cofactor expansion across any row or down any column. Since the determinant of A is well defined, both of these cofactor expansions will be equal.
B.
False, because the determinant of a triangular matrix is the product of the entries along the main diagonal.
'''
# Question 6
'''
If two rows of A are interchanged to produce B, then det Upper B equals negative det A.
'''
# Question 7
'''
If a multiple of one row of A is added to another row to produce matrix B, then det Upper B equals det Upper A.
'''
# Question 8
m3 = sympy.Matrix([[1, 5, -6], [-1, -4, -5], [1, 4, 7]])
print(m3)
rref1 = m3.rref()
print(rref1)
m4 = np.array([[1, 5, -6], [-1, -4, -5], [1, 4, 7]])
det3 = np.linalg.det(m4)
print(det3) # correct, det = 2
# Question 9
# Switch the rows, det of original matrix = -10, det of changed matrix = 10
# Question 10
m5 = np.array([[-25, -4, -2], [-5, 12, -4], [0, -20, 6]])
det4 = np.linalg.det(m5)
print(det4)
# The matrix is invertible because the determinant of the matrix is not zero.
# Question 11
# formula
# Question 12
mat = np.array([[1,1,0], [3, 0, 5], [0, 1, -5]])
print(mat)
det8 = np.linalg.det(mat)
print(det8)
#Cramer's Rule
# Find A1b by replacing the first column with column b
mat2 = np.array([[2,1,0], [0, 0, 5], [3, 1, -5]])
print(mat2)
det9 = np.linalg.det(mat2)
print(det9)
print(det9/det8)
#Find A2b by replacing the second column with b
mat3 = np.array([[1, 2, 0], [3, 0, 5], [0, 3, -5]])
print(mat3)
det10 = np.linalg.det(mat3)
print(det10)
print(det10/det8)
#Find A3b by replacing the third column with b
mat4 = np.array([[1, 1, 2], [3, 0, 0], [0, 1, 3]])
print(mat4)
det11 = np.linalg.det(mat4)
print(det11)
print(det11/det8)
# Answers above are correct, but try again because I misread the print output
matr = np.array([[1,1,0], [5, 0, 4], [0, 1, -4]])
print(matr)
deter = np.linalg.det(matr)
print(deter)
# Find A1b by replacing first column with b
matr1 = np.array([[5, 1, 0], [0, 0, 4], [6, 1, -4]])
print(matr1)
deter1 = np.linalg.det(matr1)
print(deter1/deter)
# Find A2b by replacing second column with b
matr2 = np.array([[1, 5, 0], [5, 0, 4], [0, 6, -4]])
print(matr2)
deter2 = np.linalg.det(matr2)
print(deter2/deter)
# Find A3b by replacing third column with b
matr3 = np.array([[1, 1, 5], [5, 0, 0], [0, 1, 6]])
print(matr3)
deter3 = np.linalg.det(matr3)
print(deter3/deter)
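The replace-one-column-and-take-the-determinant steps repeated above generalize to a small helper (Cramer's rule; the function name is mine):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with column i replaced by b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("matrix is singular; Cramer's rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b  # replace column i with b
        x[i] = np.linalg.det(Ai) / d
    return x
```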
# Question 13
# Compute the adjugate of the given matrix
matri = np.matrix([[2, 5, 4], [1, 0, 1], [3, 2, 2]])
print(matri)
# Hermitian transpose (not correct)
print(matri.getH())
# Det of matrix
determ = np.linalg.det(matri)
print(determ)
adj_matr = np.array([[-2, -2, 5], [1, -8, 2], [2, 11, -5]])
print(adj_matr * 1/determ) # Correct
# Question 14
m6 = np.array([[3, 7], [6, 2]])
print(m6)
det5 = np.linalg.det(m6)
print(det5) # correct
# The area of the parallelogram is the absolute value of the det. In this case = 36
# Question 15
# First find the area of the parallelogram
m7 = np.array([[-5, -5], [5, 10]])
det6 = np.linalg.det(m7)
print(det6) # -25
# next find the det of matrix A
m8 = np.array([[7, -8], [-2, 8]])
print(m8)
det7 = np.linalg.det(m8)
print(det7) # 40
# Finally, multiply the absolute value of the det of the first matrix (area of the parallelogram) by the det of the second matrix
# Answer = 1000
| 23.152941 | 202 | 0.653201 | 719 | 3,936 | 3.573018 | 0.243394 | 0.043597 | 0.068509 | 0.014013 | 0.098093 | 0.063838 | 0.007007 | 0.007007 | 0.007007 | 0 | 0 | 0.088035 | 0.180386 | 3,936 | 169 | 203 | 23.289941 | 0.708308 | 0.289634 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0.454545 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
d26509e1b720c708ef4c28d0e261a51f29110955 | 425 | py | Python | build.py | chrahunt/conan-protobuf | c49350d1c69d2e5b40305803f3184561f433554c | [
"MIT"
] | null | null | null | build.py | chrahunt/conan-protobuf | c49350d1c69d2e5b40305803f3184561f433554c | [
"MIT"
] | null | null | null | build.py | chrahunt/conan-protobuf | c49350d1c69d2e5b40305803f3184561f433554c | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from bincrafters import build_template_default
if __name__ == "__main__":
builder = build_template_default.get_builder()
# Todo: re-enable shared builds when issue resolved
# github issue: https://github.com/google/protobuf/issues/2502
    builder.items = list(filter(lambda build: build.options["protobuf:shared"] == False, builder.items))
builder.run()
| 26.5625 | 98 | 0.701176 | 53 | 425 | 5.377358 | 0.735849 | 0.091228 | 0.140351 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014205 | 0.171765 | 425 | 16 | 99 | 26.5625 | 0.795455 | 0.36 | 0 | 0 | 0 | 0 | 0.085502 | 0 | 0 | 0 | 0 | 0.0625 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d26a6bee5f324041d60e07e49f5e1f8b0a925d37 | 1,099 | py | Python | extras/unbundle.py | mstriemer/amo-validator | 35b502204183d783634207e7c2e7766ea1070ce8 | [
"BSD-3-Clause"
] | 1 | 2015-07-15T20:06:09.000Z | 2015-07-15T20:06:09.000Z | extras/unbundle.py | mstriemer/amo-validator | 35b502204183d783634207e7c2e7766ea1070ce8 | [
"BSD-3-Clause"
] | null | null | null | extras/unbundle.py | mstriemer/amo-validator | 35b502204183d783634207e7c2e7766ea1070ce8 | [
"BSD-3-Clause"
] | null | null | null | import sys
import os
import zipfile
from zipfile import ZipFile
from StringIO import StringIO
source = sys.argv[1]
target = sys.argv[2]
if not target.endswith("/"):
target = "%s/" % target
def _unbundle(path, target):
zf = ZipFile(path, 'r')
contents = zf.namelist()
for item in contents:
sp = item.split("/")
if not sp[-1]:
continue
if "__MACOSX" in item:
continue
print item, ">", target + item
cpath = target + "/".join(sp[:-1])
if not os.path.exists(cpath):
os.makedirs(cpath)
if item.endswith((".jar", ".xpi", ".zip")):
now = target + item
path_item = item.split("/")
path_item[-1] = "_" + path_item[-1]
path = target + "/".join(path_item)
buff = StringIO(zf.read(item))
_unbundle(buff, path + "/")
else:
f = open(target + item, 'w')
f.write(zf.read(item))
f.close()
zf.close()
if not os.path.exists(target):
os.mkdir(target)
_unbundle(source, target)
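Extraction scripts like the one above are exposed to "zip-slip": an archive member named with `..` segments or an absolute path escapes the target directory. A minimal containment check that could be applied to each `item` before writing (helper name is mine; assumes paths are joined exactly as in `_unbundle`):

```python
import os

def is_safe_member(name, target):
    """True if extracting `name` under `target` stays inside `target`."""
    root = os.path.realpath(target)
    dest = os.path.realpath(os.path.join(target, name))
    return dest == root or dest.startswith(root + os.sep)
```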
| 22.895833 | 51 | 0.526843 | 134 | 1,099 | 4.246269 | 0.350746 | 0.035149 | 0.059754 | 0.038664 | 0.059754 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008075 | 0.323931 | 1,099 | 47 | 52 | 23.382979 | 0.757739 | 0 | 0 | 0.054054 | 0 | 0 | 0.030027 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.135135 | null | null | 0.027027 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d26afa5cb9899f00bda32076f95a8a1292054119 | 81,920 | py | Python | linuxOperation/app/domain/forms.py | zhouli121018/core | f9700204349ecb22d45e700e9e27e79412829199 | [
"MIT"
] | null | null | null | linuxOperation/app/domain/forms.py | zhouli121018/core | f9700204349ecb22d45e700e9e27e79412829199 | [
"MIT"
] | 1 | 2021-06-10T20:45:55.000Z | 2021-06-10T20:45:55.000Z | linuxOperation/app/domain/forms.py | zhouli121018/core | f9700204349ecb22d45e700e9e27e79412829199 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#
import time
import os
import math
import json
from lib.forms import BaseFied, BaseFieldFormatExt, DotDict, BaseCfilterActionFied, BaseCfilterOptionFied
from app.core.models import Mailbox, MailboxUserAttr, Domain, CoCompany, CoreAlias, DomainAttr, \
Department, CoreConfig, CoreMonitor, CoreWhitelist
from app.domain.models import Signature, SecretMail, WmCustomerInfo, WmCustomerCate, WmTemplate
from app.utils.MailboxLimitChecker import MailboxLimitChecker
from django import forms
from django.db.models import Sum,Count
from lib import validators
from lib.formats import dict_compatibility
from lib.tools import clear_redis_cache, download_excel, GenerateRsaKeys, generate_rsa, get_unicode, get_string,\
get_system_user_id, get_system_group_id, recursion_make_dir, get_random_string, \
phpLoads, phpDumps, get_client_request
from lib.validators import check_domain, check_email_ordomain
from django_redis import get_redis_connection
from django.utils.translation import ugettext as _
import base64
import copy
import constants
import chardet
from auditlog.api import api_create_admin_log
from app.core.constants import MAILBOX_SEND_PERMIT, MAILBOX_RECV_PERMIT
def getSavePath(saveName):
    # Save location for the new webmail version
saveDir = u"/usr/local/u-mail/data/www/webmail/netdisk/media"
if not os.path.exists(saveDir):
recursion_make_dir(saveDir)
user_name = "umail_apache"
os.chown(saveDir, get_system_user_id(user_name), get_system_group_id(user_name) )
    # Save location for the old webmail version
    #saveDir2 = u"/usr/local/u-mail/data/www/webmail/attachment"
savePath = u"%s/%s"%(saveDir, saveName)
return savePath
def saveLogoToPath(filedata):
filedata = base64.decodestring(filedata.encode("utf-8","ignore").strip())
user_name = "umail_apache"
now = time.strftime("%Y%m%d%H%M%S")
    decimal, _ = math.modf(time.time())
saveName = u"logo_%s_%s_%03d.jpg"%(get_random_string(5), now, int(decimal*1000))
savePath = getSavePath(saveName)
with open(savePath, "wb+") as f:
f.write(filedata)
os.chown(savePath, get_system_user_id(user_name), get_system_group_id(user_name) )
return saveName
def deleteLogoFromPath(saveName):
if not saveName:
return
savePath = getSavePath(saveName)
try:
if os.path.exists(savePath):
os.unlink(savePath)
except:
pass
# Base class for domain configuration forms
class DomainForm(DotDict):
PARAM_NAME = {}
PARAM_LIST = {}
PARAM_TYPE = {}
def __init__(self, domain_id, get=None, post=None, request={}):
self.request = request
self.domain_id = BaseFied(value=domain_id, error=None)
self.get = get or {}
self.post = post or {}
self.valid = True
self.initialize()
def initialize(self):
self.initBasicParams()
self.initPostParams()
def formatOptionValue(self, key, value):
if value.lower() == u"on":
return u"1"
return value
def initBasicParams(self):
for key, default in self.PARAM_LIST.items():
sys_type = self.PARAM_TYPE[ key ]
instance = DomainAttr.objects.filter(domain_id=self.domain_id.value,type=sys_type,item=key).first()
setattr(self,"instance_%s"%key,instance)
value = instance.value if instance else default
obj = BaseFied(value=value, error=None)
setattr(self,key,obj)
def initPostParams(self):
self.initPostParamsDefaultNone()
def initPostParamsDefaultNone(self):
data = self.post if self.post else self.get
if "domain_id" in data:
self.domain_id = BaseFied(value=data["domain_id"], error=None)
for key,default in self.PARAM_LIST.items():
if not key in data:
continue
value = self.formatOptionValue(key, data[key])
obj = BaseFied(value=value, error=None)
setattr(self,key,obj)
    def initPostParamsDefaultDisable(self):
        data = self.post if self.post else self.get
        if "domain_id" in data:
            self.domain_id = BaseFied(value=data["domain_id"], error=None)
for key,default in self.PARAM_LIST.items():
value = self.formatOptionValue(key, data.get(key, u"-1"))
obj = BaseFied(value=value, error=None)
setattr(self,key,obj)
def is_valid(self):
if not self.domain_id.value:
self.valid = False
self.domain_id.set_error(_(u"无效的域名"))
return self.valid
self.check()
return self.valid
def check(self):
return self.valid
def checkSave(self):
if self.is_valid():
self.save()
def paramSave(self):
for key in self.PARAM_LIST.keys():
obj = getattr(self,"instance_%s"%key,None)
value = getattr(self,key).value
if obj:
sys_type = self.PARAM_TYPE[ key ]
obj.domain_id = u"{}".format(self.domain_id.value)
obj.type = u"{}".format(sys_type)
obj.item = u"{}".format(key)
obj.value = u"{}".format(value)
obj.save()
else:
sys_type = self.PARAM_TYPE[ key ]
obj = DomainAttr.objects.create(
domain_id=u"{}".format(self.domain_id.value),
type=u"{}".format(sys_type),
item=u"{}".format(key),
value=u"{}".format(value)
)
value = obj.value
if len(value) > 100:
value = u"..."
param = u"{}({})".format(self.PARAM_NAME.get(obj.item,u''),u"{}-{}".format(obj.type,obj.item))
msg = _(u"域名参数:'{}' 值:{}").format(param,value)
api_create_admin_log(self.request, obj, 'domainconfig', msg)
clear_redis_cache()
def save(self):
self.paramSave()
class DomainBasicForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_BASIC_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_BASIC_PARAMS_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_BASIC_PARAMS_TYPE)
STATUS_LIST = dict(constants.DOMAIN_BASIC_STATUS)
def initialize(self):
self.initBasicParams()
self.initPostParams()
self.initStatus()
def initStatus(self):
checker = MailboxLimitChecker()
statMailbox = checker._stat_domain_mailbox_info(domain_id=self.domain_id.value)
mailboxUsed = statMailbox["mailbox_count"]
spaceUsed = statMailbox["mailbox_size"]
netdiskUsed = statMailbox["netdisk_size"]
aliasUsed = CoreAlias.objects.filter(domain_id=self.domain_id.value).count()
self.mailboxUsed = BaseFied(value=mailboxUsed, error=None)
self.aliasUsed = BaseFied(value=aliasUsed, error=None)
self.spaceUsed = BaseFied(value=spaceUsed, error=None)
self.netdiskUsed = BaseFied(value=netdiskUsed, error=None)
def check(self):
return self.valid
def save(self):
self.paramSave()
class DomainRegLoginWelcomeForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_REG_LOGIN_WELCOME_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_REG_LOGIN_WELCOME_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_REG_LOGIN_WELCOME_TYPE)
def initialize(self):
self.subject = u""
self.content = u""
self.initBasicParams()
newData = self.post if self.post else self.get
if "domain_id" in newData:
self.domain_id = BaseFied(value=newData["domain_id"], error=None)
try:
oldData = json.loads(self.cf_welcome_letter.value)
self.subject = oldData.get(u"subject",u"")
self.content = oldData.get(u"content",u"")
except:
oldData = {}
if newData:
self.subject = newData.get(u"subject",u"")
self.content = newData.get(u"content",u"")
saveData = json.dumps( {"subject" : self.subject, "content": self.content } )
self.cf_welcome_letter = BaseFied(value=saveData, error=None)
class DomainRegLoginAgreeForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_REG_LOGIN_AGREE_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_REG_LOGIN_AGREE_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_REG_LOGIN_AGREE_TYPE)
# Send/receive limits
class DomainSysRecvLimitForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_SYS_RECV_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_SYS_RECV_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_SYS_RECV_TYPE)
SEND_LIMIT_RANGE = dict(MAILBOX_SEND_PERMIT)
RECV_LIMIT_RANGE = dict(MAILBOX_RECV_PERMIT)
def initialize(self):
self.initBasicParams()
self.initPostParams()
data = self.post if self.post else self.get
self.modify_all_limit_send = data.get("modify_all_limit_send", u"-1")
self.modify_all_limit_recv = data.get("modify_all_limit_recv", u"-1")
def initPostParams(self):
self.initPostParamsDefaultDisable()
        # The logic here is a bit convoluted: the parameter means "limit", so when the limit is disabled the checkbox is unchecked
data = self.post if self.post else self.get
if data:
self.limit_pop = BaseFied(value=data.get("limit_pop", "1"), error=None)
self.limit_imap = BaseFied(value=data.get("limit_imap", "1"), error=None)
self.limit_smtp = BaseFied(value=data.get("limit_smtp", "1"), error=None)
def check(self):
if not self.limit_send.value in self.SEND_LIMIT_RANGE:
self.limit_send.set_error(_(u"无效的发信权限"))
self.valid = False
return self.valid
if not self.limit_recv.value in self.RECV_LIMIT_RANGE:
self.limit_recv.set_error(_(u"无效的收信权限"))
self.valid = False
return self.valid
return self.valid
def save(self):
self.paramSave()
if self.modify_all_limit_send == u"1":
Mailbox.objects.filter(domain_id=self.domain_id.value).update(limit_send=self.limit_send.value)
if self.modify_all_limit_recv == u"1":
Mailbox.objects.filter(domain_id=self.domain_id.value).update(limit_recv=self.limit_recv.value)
@property
def getLimitSendParams(self):
return MAILBOX_SEND_PERMIT
@property
def getLimitRecvParams(self):
return MAILBOX_RECV_PERMIT
class DomainSysRecvWhiteListForm(DotDict):
def __init__(self, domain_id, type=u"send", get=None, post=None, request=None):
self.request = request or {}
self.type = type
self.domain_id = BaseFied(value=domain_id, error=None)
self.get = get or {}
self.post = post or {}
self.valid = True
self.initialize()
@property
def getSendLimitWhiteList(self):
lists = CoreWhitelist.objects.filter(type=u"fix_send", operator=u"sys", domain_id=self.domain_id.value, mailbox_id=0).all()
num = 1
for d in lists:
yield num, d.id, d.email, str(d.disabled)
num += 1
@property
def getRecvLimitWhiteList(self):
lists = CoreWhitelist.objects.filter(type=u"fix_recv", operator=u"sys", domain_id=self.domain_id.value, mailbox_id=0).all()
num = 1
for d in lists:
yield num, d.id, d.email, str(d.disabled)
num += 1
def initialize(self):
def getPostMailbox(key):
#extract the mailbox from keys of the form entry_{{ mailbox }}_id
l = key.split("_")
l.pop(0)
flag = l.pop(-1)
mailbox = "_".join(l)
return mailbox
def setPostMailboxData(mailbox, key, value):
self.mailboxDict.setdefault(mailbox, {})
self.mailboxDict[mailbox][key] = value
#enddef
self.newMailbox = u""
self.mailboxDict = {}
self.newMailboxList = []
data = self.post if self.post else self.get
if not data:
return
newMailbox = data.get("new_mailbox", u"")
newMailboxList = data.get("new_mailbox_list", u"")
if newMailbox:
self.newMailbox = newMailbox
boxList = newMailboxList.split("|")
boxList = [box for box in boxList if box.strip()]
if boxList:
self.newMailboxList = boxList
for k,v in data.items():
if k.startswith("{}_".format(self.type)):
if k.endswith("_id"):
mailbox = getPostMailbox(k)
setPostMailboxData(mailbox, "id", v)
elif k.endswith("_delete"):
mailbox = getPostMailbox(k)
setPostMailboxData(mailbox, "delete", v)
for mailbox in self.mailboxDict.keys():
isDisabled = data.get(u"{}_{}_disabled".format(self.type, mailbox), u"1")
setPostMailboxData(mailbox, "disabled", isDisabled)
def is_valid(self):
if not self.domain_id.value:
self.valid = False
self.domain_id.set_error(_(u"无效的域名"))
return self.valid
self.check()
return self.valid
def check(self):
return self.valid
def checkSave(self):
if self.is_valid():
self.save()
def saveNewEmail(self, mailbox):
if mailbox in self.mailboxDict:
return
CoreWhitelist.objects.create(type=u"fix_{}".format(self.type), operator=u"sys", domain_id=self.domain_id.value, mailbox_id=0, email=mailbox)
def saveOldEmail(self):
for mailbox, data in self.mailboxDict.items():
entry_id = data.get("id", "")
if not entry_id:
continue
obj = CoreWhitelist.objects.filter(id=entry_id).first()
if not obj:
continue
if data.get("delete", u"-1") == u"1":
obj.delete()
else:
obj.operator=u"sys"
obj.type=u"fix_{}".format(self.type)
obj.disabled = data.get("disabled", "-1")
obj.save()
def save(self):
#add the new mailboxes first
if self.newMailbox:
self.saveNewEmail( self.newMailbox )
for mailbox in self.newMailboxList:
self.saveNewEmail( mailbox )
self.saveOldEmail()
# Security settings
class DomainSysSecurityForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_SYS_SECURITY_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_SYS_SECURITY_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_SYS_SECURITY_TYPE)
def initialize(self):
self.count = u"0"
self.timespan = u"0"
self.initBasicParams()
newData = self.post if self.post else self.get
if "domain_id" in newData:
self.domain_id = BaseFied(value=newData["domain_id"], error=None)
try:
oldData = json.loads(self.cf_def_safe_login.value)
self.count = oldData.get(u"count",u"0")
self.timespan = oldData.get(u"timespan",u"0")
except Exception:
oldData = {}
if newData:
for key,default in self.PARAM_LIST.items():
value = self.formatOptionValue(key, newData.get(key, u"-1"))
obj = BaseFied(value=value, error=None)
setattr(self,key,obj)
self.count = newData.get(u"count",u"0")
self.timespan = newData.get(u"timespan",u"0")
saveData = json.dumps( { "count": self.count, "timespan": self.timespan } )
self.cf_def_safe_login = BaseFied(value=saveData, error=None)
class DomainSysSecurityPasswordForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_SYS_SECURITY_PWD_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_SYS_SECURITY_PWD_VALUES)
PARAM_TYPE = dict(constants.DOMAIN_SYS_SECURITY_PWD_TYPE)
def initialize(self):
self.subject = u""
self.content = u""
self.initBasicParams()
newData = self.post if self.post else self.get
if "domain_id" in newData:
self.domain_id = BaseFied(value=newData["domain_id"], error=None)
try:
oldData = json.loads(self.cf_def_login_limit_mail.value)
self.subject = oldData.get(u"subject",u"")
self.content = oldData.get(u"content",u"")
except Exception:
oldData = {}
if newData:
self.subject = newData.get(u"subject",u"")
self.content = newData.get(u"content",u"")
saveData = json.dumps( {"subject" : self.subject, "content": self.content } )
self.cf_def_login_limit_mail = BaseFied(value=saveData, error=None)
# Password rules
class DomainSysPasswordForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_SYS_PASSWORD_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_SYS_PASSWORD_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_SYS_PASSWORD_TYPE)
PARAM_TYPE_LIMIT = constants.DOMAIN_SYS_PASSWORD_TYPE_LIMIT
PARAM_LEN_LIMIT = constants.DOMAIN_SYS_PASSWORD_LEN_LIMIT
PRAAM_RULE_VALUE = dict(constants.DOMAIN_SYS_PASSWORD_RULE_VALUE)
PARAM_RULE_LIMIT = dict(constants.DOMAIN_SYS_PASSWORD_RULE_LIMIT)
PARAM_FORBID_RULE = dict(constants.DOMAIN_SYS_PASSWORD_FORBID_RULE)
PARAM_FORBID_RULE_DEFAULT = dict(constants.DOMAIN_SYS_PASSWORD_FORBID_RULE_DEFAULT)
def initialize(self):
self.initBasicParams()
newData = self.post if self.post else self.get
if "domain_id" in newData:
self.domain_id = BaseFied(value=newData["domain_id"], error=None)
try:
oldData = json.loads(self.cf_pwd_rule.value)
except Exception:
oldData = {}
oldData = {} if not isinstance(oldData, dict) else oldData
for name, param in self.PRAAM_RULE_VALUE.items():
default = self.PARAM_RULE_LIMIT[param]
setattr(self, name, oldData.get(param, default))
if newData:
for key,default in self.PARAM_LIST.items():
value = self.formatOptionValue(key, newData.get(key, u"-1"))
obj = BaseFied(value=value, error=None)
setattr(self,key,obj)
for name, param in self.PRAAM_RULE_VALUE.items():
setattr(self, name, newData.get(param, u"-1"))
saveData = {}
for name, param in self.PRAAM_RULE_VALUE.items():
saveData[param] = getattr(self, name)
#since 2.2.59, password length validation is mandatory
saveData["passwd_size"] = "1"
self.cf_pwd_rule = BaseFied(value=json.dumps(saveData), error=None)
try:
oldData = json.loads(self.cf_pwd_forbid.value)
except Exception:
oldData = {}
saveData = {}
for name, param in self.PARAM_FORBID_RULE.items():
default = self.PARAM_FORBID_RULE_DEFAULT[param]
if newData:
setattr(self, name, newData.get(param, '-1'))
else:
setattr(self, name, oldData.get(param, default))
saveData[param] = getattr(self, name)
self.cf_pwd_forbid = BaseFied(value=json.dumps(saveData), error=None)
def save(self):
self.paramSave()
#compatibility with the legacy strong-password switch on the PHP side
#turn off the PHP-side switch
DomainAttr.saveAttrObjValue(
domain_id=self.domain_id.value,
type=u"webmail",
item="sw_pass_severe",
value="-1"
)
#use the super-admin side switch instead
DomainAttr.saveAttrObjValue(
domain_id=self.domain_id.value,
type=u"webmail",
item="sw_pass_severe_new",
value="1"
)
# Third-party integration
class DomainSysInterfaceForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_SYS_INTERFACE_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_SYS_INTERFACE_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_SYS_INTERFACE_TYPE)
def initPostParams(self):
self.initPostParamsDefaultDisable()
class DomainSysInterfaceAuthApiForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_SYS_INTERFACE_AUTH_API_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_SYS_INTERFACE_AUTH_API_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_SYS_INTERFACE_AUTH_API_TYPE)
class DomainSysInterfaceIMApiForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_SYS_INTERFACE_IM_API_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_SYS_INTERFACE_IM_API_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_SYS_INTERFACE_IM_API_TYPE)
# Miscellaneous settings
class DomainSysOthersForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_SYS_OTHERS_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_SYS_OTHERS_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_SYS_OTHERS_TYPE)
SMSServiceList = (
(u'jiutian', _(u'短信通道一(九天)')),
(u'zhutong', _(u'短信通道二(助通)')),
)
@property
def get_sms_list(self):
return self.SMSServiceList
def initPostParams(self):
self.initPostParamsDefaultDisable()
data = self.post if self.post else self.get
#SMS gateway configuration
confSms = DomainAttr.objects.filter(domain_id=self.domain_id.value,type="system",item="cf_sms_conf").first()
dataSms = "{}" if not confSms else confSms.value
try:
jsonSms = json.loads(dataSms)
jsonSms = {} if not isinstance(jsonSms, dict) else jsonSms
except Exception:
jsonSms = {}
self.sms_type = jsonSms.get(u"type", u"")
self.sms_account = jsonSms.get(u"account", u"")
self.sms_password = jsonSms.get(u"password", u"")
self.sms_sign = jsonSms.get(u"sign", u"")
if "sms_type" in data:
self.sms_type = data["sms_type"]
if "sms_account" in data:
self.sms_account = data["sms_account"]
if "sms_password" in data:
self.sms_password = data["sms_password"]
if "sms_sign" in data:
self.sms_sign = data["sms_sign"]
jsonSms["type"] = self.sms_type
jsonSms["account"] = self.sms_account
jsonSms["password"] = self.sms_password
jsonSms["sign"] = self.sms_sign
self.cf_sms_conf = BaseFied(value=json.dumps(jsonSms), error=None)
self.sms_cost = None
try:
if self.request.user.licence_validsms and (self.sms_account and self.sms_password):
from lib import sms_interface
self.sms_cost = sms_interface.query_sms_cost(self.sms_type, self.sms_account, self.sms_password)
except Exception,err:
print err
def save(self):
super(DomainSysOthersForm, self).save()
#older versions stored the SMS switches on the domain record
Domain.objects.filter(id=self.domain_id.value).update(
recvsms=self.sw_recvsms.value,
sendsms=self.sw_sendsms.value,
)
class DomainSysOthersCleanForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_SYS_OTHERS_SPACE_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_SYS_OTHERS_SPACE_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_SYS_OTHERS_SPACE_TYPE)
def initialize(self):
self.initBasicParams()
newData = self.post if self.post else self.get
if "domain_id" in newData:
self.domain_id = BaseFied(value=newData["domain_id"], error=None)
try:
oldCleanData = json.loads(self.cf_spaceclean.value)
except Exception:
oldCleanData = {}
try:
oldMailData = json.loads(self.cf_spacemail.value)
except Exception:
oldMailData = {}
oldCleanData = {} if not isinstance(oldCleanData, dict) else oldCleanData
oldMailData = {} if not isinstance(oldMailData, dict) else oldMailData
self.general_keep_time = get_unicode(oldCleanData.get(u"general_keep_time", u"0"))
self.sent_keep_time = get_unicode(oldCleanData.get(u"sent_keep_time", u"0"))
self.spam_keep_time = get_unicode(oldCleanData.get(u"spam_keep_time", u"0"))
self.trash_keep_time = get_unicode(oldCleanData.get(u"trash_keep_time", u"0"))
self.subject = oldMailData.get(u"subject", u"").strip()
self.content = oldMailData.get(u"content", u"")
self.warn_rate=get_unicode(oldMailData.get(u"warn_rate", u"85"))
if newData:
self.general_keep_time = get_unicode(newData.get(u"general_keep_time", u"0"))
self.sent_keep_time = get_unicode(newData.get(u"sent_keep_time", u"0"))
self.spam_keep_time = get_unicode(newData.get(u"spam_keep_time", u"0"))
self.trash_keep_time = get_unicode(newData.get(u"trash_keep_time", u"0"))
self.subject = newData.get(u"subject", u"").strip()
self.content = newData.get(u"content", u"")
self.warn_rate=get_unicode(newData.get(u"warn_rate", u"85"))
saveCleanData = {
u"general_keep_time" : self.general_keep_time,
u"sent_keep_time" : self.sent_keep_time,
u"spam_keep_time" : self.spam_keep_time,
u"trash_keep_time" : self.trash_keep_time,
}
saveMailData = {
u"subject" : self.subject,
u"content" : self.content,
u"warn_rate" : self.warn_rate,
}
self.cf_spaceclean = BaseFied(value=json.dumps(saveCleanData), error=None)
self.cf_spacemail = BaseFied(value=json.dumps(saveMailData), error=None)
class DomainSysOthersAttachForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_SYS_OTHERS_ATTACH_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_SYS_OTHERS_ATTACH_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_SYS_OTHERS_ATTACH_TYPE)
def initialize(self):
self.initBasicParams()
newData = self.post if self.post else self.get
if "domain_id" in newData:
self.domain_id = BaseFied(value=newData["domain_id"], error=None)
try:
oldData = json.loads(self.cf_online_attach.value)
except Exception:
oldData = {}
#if the database has no data yet, initialize it here, otherwise the app side reads incorrect values
autoSave = False
#"size" is no longer used since 2.2.58; before that version it was the attachment "store size" for all mail types
#since 2.2.58 stored mail is split by type, so this value now only serves as the default
self.client_size_default = oldData.get("size", "50")
self.client_url = oldData.get("url", "")
self.client_public = oldData.get("public", "-1")
self.client_size_list = oldData.get("size_list", self.client_size_default)
self.client_size_in = oldData.get("size_in", self.client_size_default)
self.client_size_out = oldData.get("size_out", self.client_size_default)
#read the default download URL from system settings
if not self.client_url.strip():
obj = DomainAttr.objects.filter(domain_id=0,type=u'system',item=u'view_webmail_url').first()
self.client_url = obj.value if obj else ""
#if system settings are not configured either, fall back to the default derived from the request
if not self.client_url.strip() and self.request:
self.client_url = get_client_request(self.request)
autoSave = True
if newData:
self.client_size_list = newData.get("client_size_list", self.client_size_default)
self.client_size_in = newData.get("client_size_in", self.client_size_default)
self.client_size_out = newData.get("client_size_out", self.client_size_default)
self.client_url = newData.get("client_url", "")
self.client_public = newData.get("client_public", "-1")
saveData = {
u"url" : self.client_url,
u"size" : self.client_size_default,
u"size_list" : self.client_size_list,
u"size_in" : self.client_size_in,
u"size_out" : self.client_size_out,
u"public" : self.client_public,
}
self.cf_online_attach = BaseFied(value=json.dumps(saveData), error=None)
if autoSave:
self.paramSave()
class DomainSignDomainForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_SIGN_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_SIGN_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_SIGN_TYPE)
def initialize(self):
self.initBasicParams()
newData = self.post if self.post else self.get
if "domain_id" in newData:
self.domain_id = BaseFied(value=newData["domain_id"], error=None)
try:
oldData = json.loads(self.cf_domain_signature.value)
except Exception:
oldData = {}
oldData = {} if not isinstance(oldData, dict) else oldData
self.content_html = oldData.get(u"html",u"")
if self.content_html and u"new" in oldData:
self.content_html = base64.decodestring(self.content_html)
self.content_text = oldData.get(u"text",u"")
if newData:
self.content_html = newData.get(u"content_html", u"")
self.content_text = newData.get(u"content_text", u"-1")
sw_domain_signature = newData.get("sw_domain_signature", "-1")
self.sw_domain_signature = BaseFied(value=sw_domain_signature, error=None)
saveData = {
u"html" : get_unicode(base64.encodestring(get_string(self.content_html))),
u"text" : self.content_text,
u"new" : u"1", #compatibility marker for older versions
}
self.cf_domain_signature = BaseFied(value=json.dumps(saveData), error=None)
class DomainSignPersonalForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_SIGN_PERSONAL_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_SIGN_PERSONAL_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_SIGN_PERSONAL_TYPE)
PARAM_LIST_DEFAULT = dict(constants.DOMAIN_SIGN_PERSONAL_VALUE_DEFAULT)
def initialize(self):
self.initBasicParams()
newData = self.post if self.post else self.get
if "domain_id" in newData:
self.domain_id = BaseFied(value=newData["domain_id"], error=None)
try:
oldData = json.loads(self.cf_personal_sign.value)
except Exception:
oldData = {}
oldData = {} if not isinstance(oldData, dict) else oldData
for name, default in self.PARAM_LIST_DEFAULT.items():
setattr(self, name, oldData.get(name, default) )
if self.personal_sign_templ:
self.personal_sign_templ = get_unicode(base64.decodestring(get_string(self.personal_sign_templ)))
if newData:
self.personal_sign_new = get_unicode(newData.get(u"personal_sign_new", u"-1"))
self.personal_sign_forward = get_unicode(newData.get(u"personal_sign_forward", u"-1"))
self.personal_sign_auto = get_unicode(newData.get(u"personal_sign_auto", u"-1"))
self.personal_sign_templ = get_unicode(newData.get(u"content_html", u""))
saveData = {
u"personal_sign_new" : self.personal_sign_new,
u"personal_sign_forward" : self.personal_sign_forward,
u"personal_sign_auto" : self.personal_sign_auto,
u"personal_sign_templ" : get_unicode(base64.encodestring(get_string(self.personal_sign_templ))),
}
self.cf_personal_sign = BaseFied(value=json.dumps(saveData), error=None)
try:
import HTMLParser
html_parser = HTMLParser.HTMLParser()
#unescape so the HTML renders correctly
self.personal_sign_templ2 = html_parser.unescape(self.personal_sign_templ)
except Exception,err:
print str(err)
self.personal_sign_templ2 = self.personal_sign_templ
def applyAll(self):
import cgi
caption = _(u"系统默认签名")
content = self.personal_sign_templ
content = cgi.escape(content)
content = get_unicode(content)
is_default = "1" if self.personal_sign_new == "1" else "-1"
is_fwd_default = "1" if self.personal_sign_forward == "1" else "-1"
obj_list = Mailbox.objects.filter(domain_id=self.domain_id.value)
for mailbox in obj_list:
mailbox_id = mailbox.id
obj_sign = Signature.objects.filter(domain_id=self.domain_id.value, mailbox_id=mailbox_id, type="domain").first()
if obj_sign:
obj_sign.content = u"{}".format(content)
obj_sign.default = u"{}".format(is_default)
obj_sign.refw_default = u"{}".format(is_fwd_default)
obj_sign.save()
else:
obj_sign = Signature.objects.create(
domain_id=u"{}".format(self.domain_id.value),
mailbox_id=u"{}".format(mailbox_id),
type=u"domain",
caption=u"{}".format(caption),
content=u"{}".format(content),
default=u"{}".format(is_default),
refw_default=u"{}".format(is_fwd_default),
)
if is_default == "1":
Signature.objects.filter(domain_id=self.domain_id.value, mailbox_id=mailbox_id).update(default='-1')
Signature.objects.filter(domain_id=self.domain_id.value, mailbox_id=mailbox_id, type="domain").update(default='1')
else:
Signature.objects.filter(domain_id=self.domain_id.value, mailbox_id=mailbox_id, type="domain").update(default='-1')
if is_fwd_default == "1":
Signature.objects.filter(domain_id=self.domain_id.value, mailbox_id=mailbox_id).update(refw_default='-1')
Signature.objects.filter(domain_id=self.domain_id.value, mailbox_id=mailbox_id, type="domain").update(refw_default='1')
else:
Signature.objects.filter(domain_id=self.domain_id.value, mailbox_id=mailbox_id, type="domain").update(refw_default='-1')
class DomainModuleHomeForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_MODULE_HOME_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_MODULE_HOME_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_MODULE_HOME_TYPE)
def initPostParams(self):
self.initPostParamsDefaultDisable()
class DomainModuleMailForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_MODULE_MAIL_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_MODULE_MAIL_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_MODULE_MAIL_TYPE)
def initialize(self):
self.initBasicParams()
self.sw_save_client_sent_email_old = self.sw_save_client_sent_email.value
self.initPostParamsDefaultDisable()
def save(self):
super(DomainModuleMailForm, self).save()
#if the value changed since last save, update the switch for every mailbox user
if self.sw_save_client_sent_email_old != self.sw_save_client_sent_email.value:
for obj in Mailbox.objects.filter(domain_id=self.domain_id.value).all():
obj_attr = MailboxUserAttr.objects.filter(mailbox_id=obj.id, item=u'save_client_sent').first()
if not obj_attr:
obj_attr = MailboxUserAttr.objects.create(
domain_id=self.domain_id.value,
mailbox_id=obj.id,
item=u'save_client_sent',
)
obj_attr.type = u"user"
obj_attr.value = self.sw_save_client_sent_email.value
obj_attr.save()
class DomainModuleSetForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_MODULE_SET_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_MODULE_SET_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_MODULE_SET_TYPE)
def initialize(self):
self.initBasicParams()
self.initPostParamsDefaultDisable()
data = self.post if self.post else self.get
#sw_userbwlist maps to the userbwlist column of core_domain, so handle it specially
if not data:
domainObj = Domain.objects.filter(id=self.domain_id.value).first()
sw_userbwlist = "-1" if not domainObj else domainObj.userbwlist
self.sw_userbwlist = BaseFied(value=get_unicode(sw_userbwlist), error=None)
else:
self.sw_userbwlist = BaseFied(value=get_unicode(data.get("sw_userbwlist", "-1")), error=None)
def check(self):
return self.valid
def save(self):
domainObj = Domain.objects.filter(id=self.domain_id.value).first()
domainObj.userbwlist = u"{}".format(self.sw_userbwlist.value)
domainObj.save()
self.paramSave()
class DomainModuleOtherForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_MODULE_OTHER_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_MODULE_OTHER_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_MODULE_OTHER_TYPE)
def initPostParams(self):
self.initPostParamsDefaultDisable()
# Secrecy-level management
class DomainSecretForm(DotDict):
def __init__(self, get=None, post=None, request=None):
self.request = request or {}
self.get = get or {}
self.post = post or {}
self.error = u""
self.action = u""
self.grade = constants.DOMAIN_SECRET_GRADE_1
self.addList = []
self.delList = []
self.valid = True
self.initialize()
def initialize(self):
data = self.post if self.post else self.get
if data:
self.action = data.get(u"action", u"")
self.grade = data.get(u"grade", constants.DOMAIN_SECRET_GRADE_1)
if self.action == u"new":
boxList = data.get(u"mailbox", "")
boxList = [box.strip() for box in boxList.split("|") if box.strip()]
self.addList = boxList
if self.action == u"del":
idList = data.get(u"idlist", "")
idList = [box.strip() for box in idList.split("|") if box.strip()]
self.delList = idList
for grade, name in constants.DOMAIN_SECRET_GRADE_ALL:
grade_num = SecretMail.objects.filter(secret_grade=grade).count()
setattr(self, "gradeNum_{}".format( int(grade)+1 ), grade_num)
@staticmethod
def getBoxListByGrade(grade):
dataList = []
lists = SecretMail.objects.filter(secret_grade=grade)
for d in lists:
mailbox_id = d.mailbox_id
boxObj = Mailbox.objects.filter(id=mailbox_id).first()
mailbox = _(u"已删除帐号") if not boxObj else boxObj.username
dataList.append( {
"id" : d.id,
"mailbox" : mailbox,
}
)
return dataList
def is_valid(self):
self.check()
return self.valid
def check(self):
if self.action == u"new":
for mailbox in self.addList:
boxObj = Mailbox.objects.filter(username=mailbox).first()
if not boxObj:
self.error = _(u"邮箱帐号不存在")
self.valid = False
return self.valid
return self.valid
def save(self):
if self.action == u"new":
for mailbox in self.addList:
boxObj = Mailbox.objects.filter(username=mailbox).first()
if not boxObj:
continue
obj = SecretMail.objects.filter(secret_grade=self.grade, mailbox_id=boxObj.id).first()
if not obj:
SecretMail.objects.create(secret_grade=self.grade, mailbox_id=boxObj.id)
if self.action == u"del":
for entry_id in self.delList:
SecretMail.objects.filter(id=entry_id).delete()
# Add an entry to the public address book
class DomainPublicInputForm(DotDict):
def __init__(self, domain_id, instance=None, post=None, get=None, request=None):
self.request = request or {}
self.post = post or {}
self.get = get or {}
self.error = u""
self.domain_id = int(domain_id)
self.instance = instance
self.valid = True
self.initialize()
def initialize(self):
self.fullname = BaseFied(value=u"", error=None)
self.cate_id = BaseFied(value=0, error=None)
self.gender = BaseFied(value=u"F", error=None)
self.birthday = BaseFied(value=u"", error=None)
self.pref_email = BaseFied(value=u"", error=None)
self.pref_tel = BaseFied(value=u"", error=None)
self.home_tel = BaseFied(value=u"", error=None)
self.work_tel = BaseFied(value=u"", error=None)
self.im_qq = BaseFied(value=u"", error=None)
self.im_msn = BaseFied(value=u"", error=None)
self.remark = BaseFied(value=u"", error=None)
data = self.post if self.post else self.get
if self.instance:
self.fullname = BaseFied(value=self.instance.fullname, error=None)
self.cate_id = BaseFied(value=self.instance.cate_id, error=None)
self.gender = BaseFied(value=self.instance.gender, error=None)
self.birthday = BaseFied(value=self.instance.birthday, error=None)
self.pref_email = BaseFied(value=self.instance.pref_email, error=None)
self.pref_tel = BaseFied(value=self.instance.pref_tel, error=None)
self.home_tel = BaseFied(value=self.instance.home_tel, error=None)
self.work_tel = BaseFied(value=self.instance.work_tel, error=None)
self.im_qq = BaseFied(value=self.instance.im_qq, error=None)
self.im_msn = BaseFied(value=self.instance.im_msn, error=None)
self.remark = BaseFied(value=self.instance.remark, error=None)
if data:
self.fullname = BaseFied(value=data[u"fullname"], error=None)
self.cate_id = BaseFied(value=data.get(u"cate_id",0), error=None)
self.gender = BaseFied(value=data.get(u"gender",u"F"), error=None)
self.birthday = BaseFied(value=data[u"birthday"], error=None)
self.pref_email = BaseFied(value=data[u"pref_email"], error=None)
self.pref_tel = BaseFied(value=data[u"pref_tel"], error=None)
self.home_tel = BaseFied(value=data[u"home_tel"], error=None)
self.work_tel = BaseFied(value=data[u"work_tel"], error=None)
self.im_qq = BaseFied(value=data[u"im_qq"], error=None)
self.im_msn = BaseFied(value=data[u"im_msn"], error=None)
self.remark = BaseFied(value=data[u"remark"], error=None)
def is_valid(self):
self.check()
return self.valid
def check(self):
fullname = self.fullname.value.strip()
if not fullname:
self.fullname.set_error(_(u"请填写姓名"))
self.valid = False
return self.valid
pref_email = self.pref_email.value.strip()
if not pref_email:
self.pref_email.set_error(_(u"请填写邮箱地址"))
self.valid = False
return self.valid
if not check_email_ordomain(pref_email):
self.pref_email.set_error(_(u"不合法的邮箱地址格式"))
self.valid = False
return self.valid
#birthday should not be a required field, so fill it with a default value
birthday = self.birthday.value.strip()
if not birthday:
self.birthday = BaseFied(value="1970-01-01", error=None)
return self.valid
def save(self):
if self.instance:
obj = self.instance
obj.domain_id = u"{}".format(self.domain_id)
obj.fullname = u"{}".format(self.fullname.value)
obj.cate_id = u"{}".format(self.cate_id.value)
obj.gender = u"{}".format(self.gender.value)
obj.birthday = u"{}".format(self.birthday.value)
obj.pref_email = u"{}".format(self.pref_email.value)
obj.pref_tel = u"{}".format(self.pref_tel.value)
obj.home_tel = u"{}".format(self.home_tel.value)
obj.work_tel = u"{}".format(self.work_tel.value)
obj.im_qq = u"{}".format(self.im_qq.value)
obj.im_msn = u"{}".format(self.im_msn.value)
obj.remark = u"{}".format(self.remark.value)
obj.save()
else:
WmCustomerInfo.objects.create(
domain_id=u"{}".format(self.domain_id),
fullname=u"{}".format(self.fullname.value),
cate_id=u"{}".format(self.cate_id.value),
gender=u"{}".format(self.gender.value),
birthday=u"{}".format(self.birthday.value),
pref_email=u"{}".format(self.pref_email.value),
pref_tel=u"{}".format(self.pref_tel.value),
home_tel=u"{}".format(self.home_tel.value),
work_tel=u"{}".format(self.work_tel.value),
im_qq=u"{}".format(self.im_qq.value),
im_msn=u"{}".format(self.im_msn.value),
remark=u"{}".format(self.remark.value),
)
@property
def get_cate_list(self):
return WmCustomerCate.objects.filter(domain_id=self.domain_id).all()
# Bulk import/delete address book entries
class DomainPublicImportForm(DotDict):
COL_ADD_LIST = [
"fullname", "pref_email", "pref_tel", "cate_type", "remark",
"birthday", "gender", "work_tel", "home_tel", "im_qq", "im_msn"
]
def __init__(self, domain_id, action=u"import_add", instance=None, post=None, get=None, request=None):
self.request = request or {}
self.post = post or {}
self.get = get or {}
self.action = action
self.error = u""
self.domain_id = int(domain_id)
self.instance = instance
self.valid = True
self.data_list = []
self.insert_list = []
self.fail_list = []
self.import_error = []
self.initialize()
def initialize(self):
data = self.post if self.post else self.get
import_data = ""
if "import_data" in data:
import_data = data["import_data"]
import_data = import_data.replace("\r\n","\n")
import_data = import_data.replace("\r","\n")
if self.action == "import_del":
for line in import_data.split("\n"):
fullname = self.joinString(line)
if not fullname:
continue
self.data_list.append( (line,fullname) )
else:
for line in import_data.split("\n"):
line = self.joinString(line)
if not line:
continue
data = {}
for idx,col in enumerate(line.split("\t")):
if idx >= len(self.COL_ADD_LIST):
break
col_name = self.COL_ADD_LIST[idx]
if col.upper() in ("${EMPTY}","EMPTY"):
col = ""
data[ col_name ] = col.strip()
if not data:
continue
self.data_list.append( (line,data) )
def joinString(self, line):
code_1 = []
code_2 = []
line = line.replace(";","\t")
for s in line:
if s == "\t":
if code_1:
code_2.append( "".join(code_1) )
code_1 = []
continue
code_1.append( s )
if code_1:
code_2.append( "".join(code_1) )
return "\t".join(code_2)
def checkSave(self):
if self.action == "import_add":
self.checkImportAdd()
elif self.action == "import_del":
self.checkImportDel()
self.save()
return False if self.import_error else True
def checkImportAdd(self):
for line,data in self.data_list:
if len(data) < len(self.COL_ADD_LIST):
self.import_error.append( _(u"数据列不足: {}").format(line) )
continue
self.insert_list.append( data )
def checkImportDel(self):
for line,fullname in self.data_list:
self.insert_list.append( fullname )
def save(self):
cate_id_map = {}
if self.action == "import_add":
for data in self.insert_list:
try:
fullname = data["fullname"].strip()
pref_email = data["pref_email"].strip()
pref_tel = data["pref_tel"].strip()
cate_type = data["cate_type"].strip()
remark = data["remark"].strip()
birthday = data["birthday"].strip()
gender = data["gender"].strip()
work_tel = data["work_tel"].strip()
home_tel = data["home_tel"].strip()
im_qq = data["im_qq"].strip()
im_msn = data["im_msn"].strip()
except Exception,err:
self.import_error.append( _(u"数据格式错误: {} : {}").format(data,get_unicode(err)) )
continue
if not pref_email or not check_email_ordomain(pref_email):
self.import_error.append( _(u"不合法的邮箱地址: {} : '{}'").format(data,pref_email) )
continue
if not fullname:
self.import_error.append( _(u"未填写姓名: {} : '{}'").format(data,fullname) )
continue
if not birthday:
birthday = u"1970-01-01"
cate_id = 0
if cate_type:
cate_id = cate_id_map.get(cate_type, 0)
if not cate_id:
cate_obj = WmCustomerCate.objects.filter(domain_id=self.domain_id, name=cate_type).first()
if not cate_obj:
cate_obj = WmCustomerCate.objects.create(
domain_id=u"{}".format(self.domain_id),
name=u"{}".format(cate_type),
parent_id=-1,
order=0,
)
cate_id = cate_obj.id
cate_id_map[cate_type] = cate_id
try:
obj = WmCustomerInfo.objects.filter(domain_id=self.domain_id, fullname=fullname, pref_email=pref_email).first()
if obj:
obj.domain_id = u"{}".format(self.domain_id)
obj.fullname = u"{}".format(fullname)
obj.cate_id = u"{}".format(cate_id)
obj.gender = u"{}".format(gender)
obj.birthday = u"{}".format(birthday)
obj.pref_email = u"{}".format(pref_email)
obj.pref_tel = u"{}".format(pref_tel)
obj.home_tel = u"{}".format(home_tel)
obj.work_tel = u"{}".format(work_tel)
obj.im_qq = u"{}".format(im_qq)
obj.im_msn = u"{}".format(im_msn)
obj.remark = u"{}".format(remark)
obj.save()
else:
WmCustomerInfo.objects.create(
domain_id=u"{}".format(self.domain_id),
fullname=u"{}".format(fullname),
cate_id=u"{}".format(cate_id),
gender=u"{}".format(gender),
birthday=u"{}".format(birthday),
pref_email=u"{}".format(pref_email),
pref_tel=u"{}".format(pref_tel),
home_tel=u"{}".format(home_tel),
work_tel=u"{}".format(work_tel),
im_qq=u"{}".format(im_qq),
im_msn=u"{}".format(im_msn),
remark=u"{}".format(remark),
)
except Exception,err:
self.import_error.append( _(u"数据保存失败: {} : {}").format(line,get_unicode(err)) )
continue
elif self.action == "import_del":
for fullname in self.insert_list:
WmCustomerInfo.objects.filter(fullname=fullname, domain_id=self.domain_id).delete()
def export(self):
import xlwt,StringIO,os
#create the workbook object and set its encoding
ws = xlwt.Workbook(encoding='utf-8')
w = ws.add_sheet(_(u'公共通讯录'),cell_overwrite_ok=True)
w.write(0, 0, _(u"客户名称"))
w.write(0, 1, _(u"邮件地址"))
w.write(0, 2, _(u"联系电话"))
w.write(0, 3, _(u"客户分组"))
w.write(0, 4, _(u"备注"))
w.write(0, 5, _(u"生日"))
w.write(0, 6, _(u"性别"))
w.write(0, 7, _(u"工作电话"))
w.write(0, 8, _(u"家庭电话"))
w.write(0, 9, u"QQ")
w.write(0, 10, u"MSN")
excel_row = 1
cate_id_map = {}
lists = WmCustomerInfo.objects.filter(domain_id=self.domain_id).all()
for d in lists:
fullname = d.fullname.strip()
pref_email = d.pref_email.strip()
pref_tel = d.pref_tel.strip()
cate_id = d.cate_id
if not cate_id in cate_id_map:
obj_cate = WmCustomerCate.objects.filter(domain_id=self.domain_id, id=cate_id).first()
if obj_cate:
cate_type = obj_cate.name.strip()
else:
cate_type = u""
cate_id_map[cate_id] = cate_type
cate_type = cate_id_map[cate_id]
remark = d.remark.strip()
birthday = get_unicode(d.birthday).strip()
gender = d.gender.strip()
work_tel = d.work_tel.strip()
home_tel = d.home_tel.strip()
im_qq = d.im_qq.strip()
im_msn = d.im_msn.strip()
w.write(excel_row, 0, fullname)
w.write(excel_row, 1, pref_email)
w.write(excel_row, 2, pref_tel)
w.write(excel_row, 3, cate_type)
w.write(excel_row, 4, remark)
w.write(excel_row, 5, birthday)
w.write(excel_row, 6, gender)
w.write(excel_row, 7, work_tel)
w.write(excel_row, 8, home_tel)
w.write(excel_row, 9, im_qq)
w.write(excel_row, 10, im_msn)
excel_row += 1
return download_excel(ws,"public_list.xls")
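The export method above is row-oriented: write a header row once, then one spreadsheet row per record. A minimal sketch of the same pattern using the stdlib csv module as a stand-in (xlwt targets the legacy .xls format; the column names below are shortened, hypothetical stand-ins for the headers written above):

```python
import csv
import io

# Hypothetical, shortened header set standing in for the xlwt columns above.
headers = ["fullname", "pref_email", "pref_tel", "cate", "remark"]
rows = [("Alice", "alice@example.com", "555-0100", "vip", "")]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(headers)          # header row (row 0 in the xlwt version)
for row in rows:
    writer.writerow(row)          # one row per customer record

print(buf.getvalue().splitlines()[0])  # -> fullname,pref_email,pref_tel,cate,remark
```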
# Customer category list
class DomainPublicTypeForm(DotDict):
def __init__(self, domain_id, instance=None, post=None, get=None, request={}):
self.request = request
self.post = post or {}
self.get = get or {}
self.error = u""
self.domain_id = int(domain_id)
self.instance = instance
self.valid = True
self.initialize()
def initialize(self):
self.name = BaseFied(value=u"", error=None)
self.parent_id = BaseFied(value=-1, error=None)
self.order = BaseFied(value=u"0", error=None)
data = self.post if self.post else self.get
if self.instance:
self.name = BaseFied(value=self.instance.name, error=None)
self.parent_id = BaseFied(value=self.instance.parent_id, error=None)
self.order = BaseFied(value=self.instance.order, error=None)
if data:
parent_id = -1 if int(data[u"parent_id"])<=0 else int(data[u"parent_id"])
self.domain_id = int(data[u"domain_id"])
self.name = BaseFied(value=data[u"name"], error=None)
self.parent_id = BaseFied(value=parent_id, error=None)
self.order = BaseFied(value=data.get(u"order",u"0"), error=None)
def is_valid(self):
self.check()
return self.valid
def check(self):
name = u"" if not self.name.value.strip() else self.name.value.strip()
if not name:
self.name.set_error(_(u"请填写分类名称"))
self.valid = False
return self.valid
return self.valid
def save(self):
if self.instance:
obj = self.instance
obj.domain_id = u"{}".format(self.domain_id)
obj.name = u"{}".format(self.name.value)
obj.parent_id = u"{}".format(self.parent_id.value)
obj.order = u"{}".format(self.order.value)
obj.save()
else:
WmCustomerCate.objects.create(
domain_id=u"{}".format(self.domain_id),
name=u"{}".format(self.name.value),
parent_id=u"{}".format(self.parent_id.value),
order=u"{}".format(self.order.value),
)
@property
def get_cate_list(self):
return WmCustomerCate.objects.filter(domain_id=self.domain_id).all()
# Domain list management
class DomainListForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_LIST_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_LIST_PARAMS_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_LIST_PARAMS_TYPE)
def initialize(self):
self.error = ""
self.initBasicParams()
self.initPostParamsDefaultDisable()
data = self.post if self.post else self.get
if not data:
domainObj = Domain.objects.filter(id=self.domain_id.value).first()
domainDisabled = u"-1" if not domainObj else domainObj.disabled
domainWechatHost = u"-1" if not domainObj else domainObj.is_wx_host
domainName = u"" if not domainObj else domainObj.domain
self.domainName = BaseFied(value=domainName, error=None)
self.domainDisabled = BaseFied(value=str(domainDisabled), error=None)
self.domainWechatHost = BaseFied(value=str(domainWechatHost), error=None)
else:
self.domainDisabled = BaseFied(value=str(data.get("domainDisabled", u"1")), error=None)
self.domainWechatHost = BaseFied(value=str(data.get("domainWechatHost", u"-1")), error=None)
self.domainName = BaseFied(value=data.get("domainName", u""), error=None)
self.operate = data.get(u"operate",u"add")
def checkSave(self):
if not self.domainName.value.strip():
self.error = _(u"请设置域名名称")
return False
if not check_email_ordomain('test@'+self.domainName.value):
self.error = _(u"错误的域名格式")
return False
if self.operate == u"add":
obj = Domain.objects.filter(domain=self.domainName.value).first()
if obj:
self.error = _(u"域名已存在")
return False
obj = CoreAlias.objects.filter(source=u'@%s'%self.domainName.value).first()
if obj:
self.error = _(u"域名已存在于域别名中的虚拟地址中")
return False
if self.domainName.value in ("comingchina.com","fenbu.comingchina.com") \
and unicode(self.request.user).startswith(u"demo_admin@"):
self.error = _(u"演示版本主域名不可修改")
return False
self.save()
return True
def save(self):
# Only one WeChat host domain is allowed to exist
if self.domainWechatHost.value == "1":
Domain.objects.all().update(is_wx_host=u"0")
if str(self.domain_id.value) != "0":
domainObj = Domain.objects.filter(id=self.domain_id.value).first()
# Changing the domain name is forbidden
#domainObj.domain = u"{}".format(self.domainName.value)
domainObj.disabled = u"{}".format(self.domainDisabled.value)
domainObj.is_wx_host = u"{}".format(self.domainWechatHost.value)
domainObj.save()
else:
domainObj = Domain.objects.create(
domain = u"{}".format(self.domainName.value),
disabled = u"{}".format(self.domainDisabled.value),
is_wx_host = u"{}".format(self.domainWechatHost.value),
)
self.domain_id = BaseFied(value=domainObj.id, error=None)
# Record the domain creation date
DomainAttr.objects.create(
domain_id = self.domain_id.value,
type = u'system',
item = u'created',
value = time.strftime('%Y-%m-%d %H:%M:%S')
)
self.paramSave()
@property
def getLimitSendParams(self):
return MAILBOX_SEND_PERMIT
@property
def getLimitRecvParams(self):
return MAILBOX_RECV_PERMIT
class DomainDkimForm(DotDict):
ItemKey = 'dkim_privatekey'
ItemType = 'system'
def __init__(self, domain_id, request={}):
super(DomainDkimForm, self).__init__()
self.request = request
self.domain_id = domain_id
self.initialize()
def initialize(self):
self.error = u""
self.private_key = u""
self.public_key = u""
self.verify_success = False
self.verify_failure = False
attrs = DomainAttr.objects.filter(item=self.ItemKey, type=self.ItemType, domain_id=self.domain_id)
attr = attrs.first() if attrs else None
if attr:
try:
_, public_key = generate_rsa(pkey=attr.value)
self.private_key = attr.value
self.public_key = self.makePublicKey(self.private_key)
except:
self.autoSet()
#self.error = u'您的密钥格式不正确,请清除后重新生成!'
else:
self.autoSet()
def makePublicKey(self, private_key):
_, public_key = generate_rsa(pkey=private_key)
public_key = "".join(public_key.split("\n")[1:-1])
public_key = u"v=DKIM1;k=rsa;p={}".format(public_key)
return public_key
def autoSet(self):
private_key, _ = generate_rsa()
attr, _ = DomainAttr.objects.get_or_create(item=self.ItemKey, type=self.ItemType, domain_id=self.domain_id)
attr.value = private_key
attr.save()
self.private_key = private_key
self.public_key = self.makePublicKey(self.private_key)
clear_redis_cache()
return self.checkVerify()
def importFile(self, request):
private_key = request.POST.get('certfile', '').replace('\r', '').strip()
if not private_key:
self.error = _(u'请选择密钥文件导入')
return False
else:
try:
private_key, public_key = generate_rsa(pkey=private_key)
except Exception as err:
self.error = _(u'您导入的密钥格式不正确,请重新生成: %s')%str(err)
self.verify_failure = True
else:
attr, _ = DomainAttr.objects.get_or_create(item=self.ItemKey, type=self.ItemType, domain_id=self.domain_id)
attr.value = private_key
attr.save()
self.private_key = private_key
self.public_key = self.makePublicKey(self.private_key)
clear_redis_cache()
return self.checkVerify()
return False
def export(self):
from django.http import HttpResponse
try:
attr = DomainAttr.objects.get(item=self.ItemKey, type=self.ItemType, domain_id=self.domain_id)
except DomainAttr.DoesNotExist:
self.error = _(u'密钥数据不存在')
return None
response = HttpResponse(content_type='text/plain')
response['Content-Disposition'] = 'attachment; filename=dkim.key'
response.write(attr.value)
return response
def delete(self):
DomainAttr.objects.filter(item=self.ItemKey, type=self.ItemType, domain_id=self.domain_id).delete()
Domain.objects.filter(id=self.domain_id).update(dkim=u'-1')
self.initialize()
clear_redis_cache()
return True
def checkVerify(self):
from lib import dkim_tools
domain_name = Domain.objects.filter(id=self.domain_id).first().domain
if not dkim_tools.valid_domain(domain=domain_name, rdtype='dkim', record=self.public_key):
self.error = _(u"验证DKIM记录不通过,请确认SPF、MX记录已经配置正确!")
self.verify_failure = True
return False
try:
if not self.private_key:
self.error = _(u"未设置加密私钥")
return False
import dkim
from email.header import make_header
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
# Build the test message
mail = MIMEMultipart()
part = MIMEText(_(u"测试邮件"), 'plain', 'utf-8')
mail['Date'] = time.strftime("%a, %d %b %Y %H:%M:%S %z")
mail["From"] = "test@umail.com"
mail["To"] = "test@umail.com"
mail['Subject'] = make_header(((_(u"测试DKIM邮件"), 'utf-8'),))
mail.attach(part)
maildata = mail.as_string()
# Sign the message
signature = dkim.sign(maildata, 'umail', domain_name, self.private_key)
signature = signature.replace('\r', '').lstrip()
self.verify_success = True
Domain.objects.filter(id=self.domain_id).update(dkim=u'1')
return True
except Exception as err:
self.error = _(u"测试签名邮件时发生错误: {}").format(str(err))
self.verify_failure = True
# Turn the DKIM switch off after a failed verification
Domain.objects.filter(id=self.domain_id).update(dkim=u'-1')
return False
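makePublicKey() above derives the DNS TXT value by stripping the PEM armor lines and joining the base64 body. A self-contained sketch of just that string manipulation (the PEM body below is dummy data, not a real key):

```python
pem = ("-----BEGIN PUBLIC KEY-----\n"
       "MIGfMA0GCSqGSIb3\n"
       "DQEBAQUAA4GNADCB\n"
       "-----END PUBLIC KEY-----")
# Drop the first and last armor lines, concatenate the base64 body.
body = "".join(pem.split("\n")[1:-1])
dkim_record = "v=DKIM1;k=rsa;p={}".format(body)
print(dkim_record)  # -> v=DKIM1;k=rsa;p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCB
```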
# Webmail page customization
class DomainWebBasicForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_WEB_BASIC_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_WEB_BASIC_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_WEB_BASIC_TYPE)
def initPostParams(self):
self.initPostParamsDefaultDisable()
def initialize(self):
self.initBasicParams()
self.initPostParams()
obj = CoCompany.objects.filter(domain_id=self.domain_id.value).first()
self.company = BaseFied(value=u"" if not obj else obj.company, error=None)
if u"company" in self.post:
self.company = BaseFied(value=self.post[u"company"], error=None)
def save(self):
self.paramSave()
obj, create = CoCompany.objects.get_or_create(domain_id=self.domain_id.value)
obj.company = self.company.value
obj.save()
# Webmail page customization --- system announcement
class DomainWebAnounceForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_WEB_ANOUNCE_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_WEB_ANOUNCE_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_WEB_ANOUNCE_TYPE)
def initPostParams(self):
self.initPostParamsDefaultDisable()
def initialize(self):
self.initBasicParams()
self.initPostParams()
self.content = self.cf_announce.value
try:
data = json.loads(self.cf_announce_set.value)
data = {} if not isinstance(data, dict) else data
except:
data = {}
self.title = data.get(u"title", u"")
self.title_color = data.get(u"title_color", u"")
self.height = data.get(u"height", u"")
if self.post:
self.title = self.post.get(u"title", u"")
self.title_color = self.post.get(u"title_color", u"")
self.height = self.post.get(u"height", u"")
self.content = self.post.get(u"content", u"")
data = {
u"title" : self.title,
u"title_color" : self.title_color,
u"height" : self.height,
}
self.cf_announce_set = BaseFied(value=json.dumps(data), error=None)
self.cf_announce = BaseFied(value=self.content, error=None)
# Logo settings
class DomainWebLogoForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_LOGO_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_LOGO_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_LOGO_TYPE)
def getData(self,item):
cache = u"cache_%s"%item
if hasattr(self, cache):
return getattr(self, cache)
value = DomainAttr.getAttrObjValue(self.domain_id.value, type=u"webmail", item=item)
if value and value.strip():
savePath = getSavePath(value)
if os.path.exists(savePath):
with open(savePath,"rb") as f:
data = f.read()
data = base64.encodestring(data)
setattr(self, cache, data)
return data
setattr(self, cache, u"")
return u""
def getWebmailLogoData(self):
return self.getData(u"cf_webmail_logo")
def getLoginLogoData(self):
return self.getData(u"cf_login_logo")
def saveLogo(self, filedata, item):
saveName = saveLogoToPath(filedata)
DomainAttr.saveAttrObjValue(self.domain_id.value, type=u"webmail", item=item, value=saveName)
return True
def importLogoLogin(self):
item = u"cf_login_logo"
filedata = self.post.get("logofile", u"")
if not filedata:
return False
return self.saveLogo(filedata, item)
def deleteLogoLogin(self):
saveName = self.cf_login_logo.value
if saveName:
deleteLogoFromPath(saveName)
self.cf_login_logo = BaseFied(value=u"", error=None)
DomainAttr.saveAttrObjValue(self.domain_id.value, type=u"webmail", item=u"cf_login_logo", value=u"")
def importLogoWebmail(self):
item = u"cf_webmail_logo"
filedata = self.post.get("logofile", u"")
if not filedata:
return False
return self.saveLogo(filedata, item)
def deleteLogoWebmail(self):
saveName = self.cf_webmail_logo.value
if saveName:
deleteLogoFromPath(saveName)
self.cf_webmail_logo = BaseFied(value=u"", error=None)
DomainAttr.saveAttrObjValue(self.domain_id.value, type=u"webmail", item=u"cf_webmail_logo", value=u"")
# Login template settings
class DomainWebLoginTempForm(DotDict):
PARAM_LIST = dict(constants.DOMAIN_LOGIN_TEMP_LIST)
def __init__(self, domain_id, post={}, request={}):
self.post = post
self.request = request
self.domain_id = domain_id
self.initialize()
def initialize(self):
v = DomainAttr.objects.filter(domain_id=self.domain_id, type=u"webmail", item=u"cf_login_page").first()
v = u"default" if not v else v.value
self.cf_login_page = BaseFied(value=v, error=None)
def clickLoginTemplImg(self, domain_id, name):
item = u"cf_login_page"
if not name in self.PARAM_LIST:
return False
DomainAttr.saveAttrObjValue(domain_id=domain_id, type=u"webmail", item=item, value=name)
return True
# Page advertisement settings
class DomainWebAdForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_WEB_AD_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_WEB_AD_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_WEB_AD_TYPE)
def initialize(self):
self.initBasicParams()
try:
data = json.loads(self.cf_adsetting2.value)
except:
data = {}
self.login_1 = data.get(u"login_1", {})
self.login_2 = data.get(u"login_2", {})
self.webmail = data.get(u"webmail", {})
self.image_name_1 = self.login_1.get(u"image", u"")
self.advert_link_1 = self.login_1.get(u"link", u"")
self.image_name_2 = self.login_2.get(u"image", u"")
self.advert_link_2 = self.login_2.get(u"link", u"")
self.webmail_name = self.webmail.get(u"image", u"")
self.webmail_link = self.webmail.get(u"link", u"")
def getImgData(self, name, data):
cache = u"cache_%s"%name
if hasattr(self, cache):
return getattr(self, cache)
if not data or data==u"-1":
return u""
value = data.get(name,{}).get(u"image","")
if value and value.strip():
savePath = getSavePath(value)
if os.path.exists(savePath):
with open(savePath,"rb") as f:
data = f.read()
data = base64.encodestring(data)
setattr(self, cache, data)
return data
setattr(self, cache, u"")
return u""
def getData(self):
item = u"cf_adsetting2"
cache = u"cache_%s"%item
if hasattr(self, cache):
return getattr(self, cache)
data = DomainAttr.getAttrObjValue(domain_id=self.domain_id.value, type=u"webmail", item=item)
try:
data = json.loads(data)
except:
data = {}
setattr(self, cache, data)
return data
def getAdvertData_1(self):
data = self.getData()
return self.getImgData(u"login_1", data)
def getAdvertData_2(self):
data = self.getData()
return self.getImgData(u"login_2", data)
def getAdvertData_3(self):
data = self.getData()
return self.getImgData(u"webmail", data)
def importAdvertData(self, action):
filedata = self.post.get("logofile", u"")
if not filedata:
return
name = saveLogoToPath(filedata)
if action == "login_advert_1":
deleteLogoFromPath(self.image_name_1)
self.image_name_1 = name
self.advert_link_1 = self.post.get(u"advert_link_1", u"")
elif action == "login_advert_2":
deleteLogoFromPath(self.image_name_2)
self.image_name_2 = name
self.advert_link_2 = self.post.get(u"advert_link_2", u"")
elif action == "login_advert_3":
deleteLogoFromPath(self.webmail_name)
self.webmail_name = name
self.webmail_link = self.post.get(u"webmail_link", u"")
self.saveData()
def deleteAdvertData(self, action):
if action == "login_advert_1_del":
deleteLogoFromPath(self.image_name_1)
self.image_name_1 = u""
self.advert_link_1 = u""
elif action == "login_advert_2_del":
deleteLogoFromPath(self.image_name_2)
self.image_name_2 = u""
self.advert_link_2 = u""
elif action == "login_advert_3_del":
deleteLogoFromPath(self.webmail_name)
self.webmail_name = u""
self.webmail_link = u""
self.saveData()
def saveData(self):
data = {
"login_1" : {"image":self.image_name_1,"link":self.advert_link_1},
"login_2" : {"image":self.image_name_2,"link":self.advert_link_2},
"webmail" : {"image":self.webmail_name,"link":self.webmail_link},
}
data = json.dumps(data)
DomainAttr.saveAttrObjValue(domain_id=self.domain_id.value, type=u"webmail", item=u"cf_adsetting2", value=data)
# Home page link settings
class DomainWebLinkForm(DomainForm):
PARAM_NAME = dict(constants.DOMAIN_WEB_LINK_PARAMS)
PARAM_LIST = dict(constants.DOMAIN_WEB_LINK_VALUE)
PARAM_TYPE = dict(constants.DOMAIN_WEB_LINK_TYPE)
def initialize(self):
"""
{'0':
{'order': '',
'links': [
{'url': 'http://', 'desc': '', 'icon': None, 'title': ''},
{'url': 'http://', 'desc': '', 'icon': '', 'title': ''},
{'url': 'http://', 'desc': '', 'icon': '', 'title': ''},
{'url': 'http://', 'desc': '', 'icon': '', 'title': ''}
],
'title': ''
}
}
"""
self.initBasicParams()
try:
data = json.loads(self.cf_webmail_link2.value)
except:
data = {}
self.data = data
if not isinstance(self.data, dict):
self.data = {}
def getLinkList(self):
for i in self.data.keys():
dd = self.getLinkIndex(i)
yield i, dd
def getLinkIndex(self, idx):
dd = {
u"order" : u"1",
u"title" : u"",
u"links" : [],
}
for j in xrange(4):
dd["url_%s"%j] = u""
dd["desc_%s"%j] = u""
dd["icon_%s"%j] = u""
dd["title_%s"%j] = u""
dd["img_%s"%j] = u""
if not str(idx).isdigit():
return dd
idx = str(idx)
if not idx in self.data:
return dd
dd = {
u"order" : self.data[idx][u"order"],
u"title" : self.data[idx][u"title"],
}
d_link = self.data[idx][u"links"]
for j,v in enumerate(d_link):
icon = v[u"icon"]
dd["url_%s"%j] = v[u"url"]
dd["desc_%s"%j] = v[u"desc"]
dd["icon_%s"%j] = v[u"icon"]
dd["title_%s"%j] = v[u"title"]
imgData = self.getImgData(v[u"icon"])
dd["img_%s"%j] = imgData
return dd
def getImgData(self, value):
if value.strip():
savePath = getSavePath(value)
if os.path.exists(savePath):
with open(savePath,"rb") as f:
data = f.read()
data = base64.encodestring(data)
return data
return u""
def checkSaveNew(self, idx=""):
title = self.post.get(u"title", "")
order = self.post.get(u"order", "1")
data_link_1 = {
u"url" : self.post.get(u"url_0", ""),
u"desc" : self.post.get(u"desc_0", ""),
u"title" : self.post.get(u"title_0", ""),
}
data_link_2 = {
u"url" : self.post.get(u"url_1", ""),
u"desc" : self.post.get(u"desc_1", ""),
u"title" : self.post.get(u"title_1", ""),
}
data_link_3 = {
u"url" : self.post.get(u"url_2", ""),
u"desc" : self.post.get(u"desc_2", ""),
u"title" : self.post.get(u"title_2", ""),
}
data_link_4 = {
u"url" : self.post.get(u"url_3", ""),
u"desc" : self.post.get(u"desc_3", ""),
u"title" : self.post.get(u"title_3", ""),
}
icon_1, icon_2, icon_3, icon_4 = self.setLogoData(idx)
data_link_1["icon"] = icon_1
data_link_2["icon"] = icon_2
data_link_3["icon"] = icon_3
data_link_4["icon"] = icon_4
data = {
u'order' : order,
u'title' : title,
u'links' : [data_link_1, data_link_2, data_link_3, data_link_4],
}
if str(idx).isdigit():
idx = int(idx)
self.checkDelete(idx)
else:
idx = 0 if not self.data else max( [int(i) for i in self.data.keys()] )+1
self.data[str(idx)] = data
self.saveData()
return True
def setLogoData(self, idx):
def setLogoData2(default, logofile):
if default:
deleteLogoFromPath(default)
return saveLogoToPath(logofile)
#end def
icon_1_default = u""
icon_2_default = u""
icon_3_default = u""
icon_4_default = u""
idx = str(idx)
if idx in self.data:
icon_1_default = self.data[idx]["links"][0]["icon"]
icon_2_default = self.data[idx]["links"][1]["icon"]
icon_3_default = self.data[idx]["links"][2]["icon"]
icon_4_default = self.data[idx]["links"][3]["icon"]
icon_1 = setLogoData2(icon_1_default, self.post.get(u"logofile_1", "").strip())
icon_2 = setLogoData2(icon_2_default, self.post.get(u"logofile_2", "").strip())
icon_3 = setLogoData2(icon_3_default, self.post.get(u"logofile_3", "").strip())
icon_4 = setLogoData2(icon_4_default, self.post.get(u"logofile_4", "").strip())
return icon_1, icon_2, icon_3, icon_4
def checkDelete(self, idx):
if not str(idx).isdigit():
return False
idx = str(idx)
if not idx in self.data:
return False
data = self.data.pop(idx)
self.saveData()
for d in data["links"]:
deleteLogoFromPath(d["icon"])
def saveData(self):
data = json.dumps(self.data)
self.cf_webmail_link2 = BaseFied(value=data, error=None)
self.save()
# Stationery (letter template) settings
class DomainWebLetterForm(DotDict):
def __init__(self, domain_id, instance=None, get=None, post=None, request={}):
self.request = request
self.domain_id = BaseFied(value=domain_id, error=None)
self.get = get or {}
self.post = post or {}
self.instance = instance
self.valid = True
self.initialize()
def initialize(self):
self.name = u""
self.image = u""
self.content = u""
self.filedata = u""
if self.instance:
self.name = self.instance.name
self.image = self.instance.image
self.content = self.instance.content
if self.post:
self.name = self.post.get(u"name",u"")
self.content = self.post.get(u"content",u"")
self.filedata = self.post.get(u"logofile",u"")
def getImgData(self):
value = self.image
if value and value.strip():
savePath = getSavePath(value)
if os.path.exists(savePath):
with open(savePath,"rb") as f:
data = f.read()
data = base64.encodestring(data)
return data
return u""
def checkSave(self):
self.save()
return True
def save(self):
saveName = saveLogoToPath(self.filedata)
if self.instance:
obj = self.instance
obj.name = u"{}".format(self.name)
obj.image = u"{}".format(saveName)
obj.content = u"{}".format(self.content)
obj.save()
else:
obj = WmTemplate.objects.create(
domain_id=u"{}".format(self.domain_id.value),
name=u"{}".format(self.name),
image=u"{}".format(saveName),
content=u"{}".format(self.content)
)
self.instance = obj
| 39.083969 | 154 | 0.585278 | 9,772 | 81,920 | 4.719505 | 0.068563 | 0.031397 | 0.025499 | 0.014875 | 0.589258 | 0.521932 | 0.451571 | 0.360134 | 0.281749 | 0.240183 | 0 | 0.005817 | 0.297009 | 81,920 | 2,095 | 155 | 39.102625 | 0.795013 | 0.009949 | 0 | 0.386712 | 0 | 0 | 0.051394 | 0.00227 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.011925 | 0.035207 | null | null | 0.001136 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d26dbcccea877eec0764524f32244d3a230c796d | 434 | py | Python | model/DB Automation/add_db.py | chrisdcao/Covid_Map_Hanoi | 07d18cad8c1b4988795d9ec2aca5ae1fefdff892 | [
"MIT"
] | 1 | 2021-09-09T07:55:00.000Z | 2021-09-09T07:55:00.000Z | model/DB Automation/add_db.py | chrisdcao/Covid_Map_Hanoi | 07d18cad8c1b4988795d9ec2aca5ae1fefdff892 | [
"MIT"
] | null | null | null | model/DB Automation/add_db.py | chrisdcao/Covid_Map_Hanoi | 07d18cad8c1b4988795d9ec2aca5ae1fefdff892 | [
"MIT"
] | null | null | null | import pyodbc
import mysql.connector
conn = mysql.connector.connect(user='root', password='', port='3307', host='localhost', database='coviddb')
cursor = conn.cursor(buffered=True)
cursor.execute('SELECT * FROM coviddb.markers')
cursor.execute('''
INSERT INTO coviddb.markers(id, name, address, subject, lat, lng, type)
VALUES
('0','0','0','0','0','0','None')
''')
conn.commit()
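The INSERT above splices literal values directly into the SQL string. The same statement written with driver placeholders keeps values out of the SQL text; sqlite3 stands in for mysql.connector in this sketch so it runs without a database server (mysql.connector uses %s placeholders where sqlite3 uses ?):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE markers "
    "(id TEXT, name TEXT, address TEXT, subject TEXT, lat TEXT, lng TEXT, type TEXT)"
)
row = ("0", "0", "0", "0", "0", "0", "None")
# Values are bound by the driver, not formatted into the statement.
cur.execute("INSERT INTO markers VALUES (?, ?, ?, ?, ?, ?, ?)", row)
conn.commit()
count = cur.execute("SELECT COUNT(*) FROM markers").fetchone()[0]
print(count)  # -> 1
```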
| 22.842105 | 107 | 0.615207 | 52 | 434 | 5.134615 | 0.673077 | 0.037453 | 0.044944 | 0.044944 | 0.022472 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028818 | 0.200461 | 434 | 18 | 108 | 24.111111 | 0.740634 | 0 | 0 | 0 | 0 | 0 | 0.493088 | 0.073733 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.090909 | 0.181818 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
d26f1afb5207b56be2e3191794a04329185695ac | 1,818 | py | Python | factor calculation scripts/15.smoothearningstopriceratio.py | cagdemir/equity-index-predictors | 2546e72328de848222cb6a1c744ababab2058477 | [
"MIT"
] | null | null | null | factor calculation scripts/15.smoothearningstopriceratio.py | cagdemir/equity-index-predictors | 2546e72328de848222cb6a1c744ababab2058477 | [
"MIT"
] | null | null | null | factor calculation scripts/15.smoothearningstopriceratio.py | cagdemir/equity-index-predictors | 2546e72328de848222cb6a1c744ababab2058477 | [
"MIT"
] | 1 | 2021-07-21T12:24:51.000Z | 2021-07-21T12:24:51.000Z | # -*- coding: utf-8 -*-
"""
Created on Fri Nov 29 18:00:53 2019
@author: Administrator
"""
import pdblp
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
con = pdblp.BCon(debug=False, port=8194, timeout=5000)
con.start()
index_tickers = ['NYA Index', 'SPX Index', 'CCMP Index','NDX Index','CDAX Index' ,'DAX Index',
'ASX Index','UKX Index', 'TPX Index','NKY Index', 'SHCOMP Index' ,
'SZCOMP Index','XUTUM Index','XU100 Index', 'MEXBOL Index',
'IBOV Index', 'IMOEX Index' , 'JALSH Index']
from datetime import date
start = '20040101'
firstday = '19990101'
today = date.today().strftime('%Y%m%d')
pe_ratio = con.bdh(index_tickers, 'PE RATIO', firstday, today)
pe_ratio_int = pe_ratio.interpolate(method='linear')
pe_ratio_int_w = pe_ratio_int.groupby(pd.Grouper(freq='W')).last()
#pe_ratio_last = pe_ratio_int_w[pe_ratio_int_w.index>=start]
#
#pe_ratio_last.columns = [i[0] for i in pe_ratio_last.columns]
#pe_ratio_last= pe_ratio_last[index_tickers]
pe_ratio_smoothed = pe_ratio_int_w.rolling(500, min_periods=100).mean()
var_no='15'
pe_ratio_smoothed_last = pe_ratio_smoothed[pe_ratio_smoothed.index>=start]
pe_ratio_smoothed_last.columns = [i[0] for i in pe_ratio_smoothed_last.columns]
pe_ratio_smoothed_last = pe_ratio_smoothed_last[index_tickers]
pe_ratio_smoothed_last.columns = [var_no+'_'+i for i in pe_ratio_smoothed_last.columns]
# pe_ratio_smoothed_last = pe_ratio_smoothed_last[index_tickers]
#pe_ratio_smoothed_last.columns = ['15_US_NY','15_US_SPX','15_US_CCMP', '15_DE','15_UK','15_JP','15_CH_SH','15_CH_SZ', '15_TR','15_MX','15_BR','15_RU','15_SA']
pe_ratio_smoothed_last.to_excel('C:/Users/sb0538/Desktop/15022020/excels/15_peratiosmoothed.xlsx')
| 33.054545 | 160 | 0.718372 | 292 | 1,818 | 4.143836 | 0.39726 | 0.161983 | 0.173554 | 0.172727 | 0.347107 | 0.299174 | 0.273554 | 0.210744 | 0.210744 | 0.178512 | 0 | 0.057766 | 0.143014 | 1,818 | 54 | 161 | 33.666667 | 0.71887 | 0.256876 | 0 | 0 | 0 | 0 | 0.22283 | 0.049257 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.24 | 0 | 0.24 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d26f95f1c9db6cafe8a214de467a08368f6b0271 | 2,378 | py | Python | py2ts/generate_service_registry.py | conanfanli/py2ts | 8543ad03f19f094b0771c3b0cfc26a89eefd95ed | [
"MIT"
] | 3 | 2020-04-10T22:09:44.000Z | 2020-11-29T07:19:28.000Z | py2ts/generate_service_registry.py | conanfanli/py2ts | 8543ad03f19f094b0771c3b0cfc26a89eefd95ed | [
"MIT"
] | 1 | 2020-04-11T14:25:50.000Z | 2020-04-11T14:25:50.000Z | py2ts/generate_service_registry.py | conanfanli/py2ts | 8543ad03f19f094b0771c3b0cfc26a89eefd95ed | [
"MIT"
] | 1 | 2021-05-15T09:22:41.000Z | 2021-05-15T09:22:41.000Z | #!/usr/bin/env python
import logging
import re
import subprocess
import sys
from typing import Dict
logger = logging.getLogger("py2ts.generate_service_registry")
logging.basicConfig(level=logging.INFO)
class RipgrepError(Exception):
pass
def camel_to_snake(name: str) -> str:
name = re.sub("(.)([A-Z][a-z]+)", r"\1_\2", name)
return re.sub("([a-z0-9])([A-Z])", r"\1_\2", name).lower()
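camel_to_snake() needs both substitution passes: the first breaks before a capital that starts a lowercase run, the second splits any remaining lowercase-to-uppercase boundary. Re-stated here so the snippet is self-contained:

```python
import re

def camel_to_snake(name):
    # First pass: underscore before a capital that begins a lowercase run.
    name = re.sub("(.)([A-Z][a-z]+)", r"\1_\2", name)
    # Second pass: underscore between a lowercase/digit and a capital.
    return re.sub("([a-z0-9])([A-Z])", r"\1_\2", name).lower()

print(camel_to_snake("SmartCatService"))        # -> smart_cat_service
print(camel_to_snake("TrainingDataSetService")) # -> training_data_set_service
```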
def get_service_registry_code(class_module_map: Dict[str, str]) -> str:
"""Return generated code for service registry."""
imports = []
services = []
for service_name, path in class_module_map.items():
imports.append(f"from {path} import {service_name}")
services.append(
f"{camel_to_snake(service_name)}: {service_name} = {service_name}()"
)
imports_code = "\n".join(imports)
services_code = "\n ".join(sorted(services))
return f"""
# Generated code. DO NOT EDIT!
from dataclasses import dataclass
{imports_code}
@dataclass
class ServiceRegistry:
{services_code}
service_registry = ServiceRegistry()
"""
def get_class_module_map() -> Dict[str, str]:
class_module_map = {}
result = subprocess.run(
"rg '^(class \\w+Service)[\\(:]' -t py -o -r '$1'",
shell=True,
capture_output=True,
)
# Command successful
if result.returncode == 0:
# E.g., ['smartcat/services.py:class TrainingDataSetService:', 'smartcat/services.py:class SmartCatService:']
outputs = result.stdout.decode("utf-8").strip().split("\n")
logger.info(f"Output of rg:{outputs}")
for output in outputs:
# E.g., smartcat/services.py-class SmartCatService
file_path, class_name = output.split(":class ")
module = file_path.split(".py")[0].replace("/", ".")
assert class_name not in class_module_map, f"Found duplicate {class_name}"
class_module_map[class_name] = module
elif result.returncode >= 1:
# resultcode of 1 means no matches were found
raise RipgrepError(
f"Got code: {result.returncode} with message {result.stderr!r}"
)
return class_module_map
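Each ripgrep match arrives as `<path>.py:class <Name>`, and the loop above splits it back into a dotted module path plus a class name. That splitting logic in isolation, using the sample string from the comment above:

```python
output = "smartcat/services.py:class SmartCatService"
file_path, class_name = output.split(":class ")
module = file_path.split(".py")[0].replace("/", ".")
print(module, class_name)  # -> smartcat.services SmartCatService
```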
if __name__ == "__main__":
try:
code = get_service_registry_code(get_class_module_map())
print(code)
except RipgrepError as e:
logger.error(e)
sys.exit(1)
| 27.976471 | 117 | 0.638352 | 304 | 2,378 | 4.805921 | 0.378289 | 0.067762 | 0.07666 | 0.047228 | 0.115674 | 0.079398 | 0 | 0 | 0 | 0 | 0 | 0.007563 | 0.221615 | 2,378 | 84 | 118 | 28.309524 | 0.78174 | 0.119428 | 0 | 0 | 1 | 0 | 0.260077 | 0.02975 | 0 | 0 | 0 | 0 | 0.017241 | 1 | 0.051724 | false | 0.017241 | 0.172414 | 0 | 0.293103 | 0.017241 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d274bf60d6abc1273072877c9d1d6cd1119e3863 | 776 | py | Python | django_qiniu/utils.py | 9nix00/django-qiniu | 08a403dc156b4971eef5af359048a6d2ce485245 | [
"MIT"
] | 1 | 2018-06-21T03:14:20.000Z | 2018-06-21T03:14:20.000Z | django_qiniu/utils.py | 9nix00/django-qiniu | 08a403dc156b4971eef5af359048a6d2ce485245 | [
"MIT"
] | null | null | null | django_qiniu/utils.py | 9nix00/django-qiniu | 08a403dc156b4971eef5af359048a6d2ce485245 | [
"MIT"
] | 1 | 2018-06-21T03:14:21.000Z | 2018-06-21T03:14:21.000Z | # -*- coding: utf-8 -*-
from account_helper.middleware import get_current_user_id
from django.utils import timezone
from django.conf import settings
from hashlib import sha1
import os
def user_upload_dir(instance, filename):
name_struct = os.path.splitext(filename)
current_user_id = get_current_user_id()
expire = 3600 if not hasattr(settings, 'QINIU_PREVIEW_EXPIRE') else settings.QINIU_PREVIEW_EXPIRE
return '{4}/{0}/{3}/{1}{2}'.format(current_user_id,
sha1(filename.encode('utf-8')).hexdigest(),
name_struct[-1] if len(name_struct) > 1 else '',
timezone.now().strftime('%Y-%m-%d-%H-%M'),
expire)
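user_upload_dir() lays storage keys out as `<expire>/<user_id>/<timestamp>/<sha1(filename)><ext>`. A standalone sketch of that layout with the Django settings and middleware lookups replaced by plain arguments (build_upload_path is a hypothetical helper for illustration, not part of the package):

```python
import os
from datetime import datetime
from hashlib import sha1

def build_upload_path(user_id, filename, expire=3600):
    # Layout: <expire>/<user_id>/<Y-m-d-H-M>/<sha1(filename)><ext>
    ext = os.path.splitext(filename)[-1]
    digest = sha1(filename.encode("utf-8")).hexdigest()
    stamp = datetime.now().strftime("%Y-%m-%d-%H-%M")
    return "{0}/{1}/{2}/{3}{4}".format(expire, user_id, stamp, digest, ext)

path = build_upload_path(42, "photo.jpg")
print(path)
```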
| 38.8 | 101 | 0.590206 | 95 | 776 | 4.610526 | 0.557895 | 0.100457 | 0.118721 | 0.073059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027422 | 0.295103 | 776 | 19 | 102 | 40.842105 | 0.773309 | 0.027062 | 0 | 0 | 0 | 0 | 0.075697 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.357143 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
d27c6795141864bd67b93ea1ed9caca681ced3fd | 10,246 | py | Python | pysoundcloud/client.py | omarcostahamido/PySoundCloud | 1ca53a280c77f6b5f52868adefa332c4de56858f | [
"MIT"
] | 4 | 2021-09-15T06:40:02.000Z | 2022-01-16T03:31:59.000Z | pysoundcloud/client.py | AnthonyMakesStuff/PySoundCloud | 1ca53a280c77f6b5f52868adefa332c4de56858f | [
"MIT"
] | 1 | 2021-04-22T04:18:42.000Z | 2021-05-09T09:22:59.000Z | pysoundcloud/client.py | AnthonyMakesStuff/PySoundCloud | 1ca53a280c77f6b5f52868adefa332c4de56858f | [
"MIT"
] | 1 | 2020-09-05T02:14:37.000Z | 2020-09-05T02:14:37.000Z | import re
import requests
from typing import Union
from pysoundcloud.soundcloudplaylists import SoundCloudPlaylists
from pysoundcloud.soundcloudsearchresults import SoundCloudSearchResults
from pysoundcloud.soundcloudlikedtracks import SoundCloudLikedTracks
from pysoundcloud.soundcloudplaylist import SoundCloudPlaylist
from pysoundcloud.soundcloudtrack import SoundCloudTrack
from pysoundcloud.soundcloudrelatedtracks import SoundCloudRelatedTracks
class Client:
base_url: str = "https://api-v2.soundcloud.com/"
client_id: str = ""
"""
:var base_url: The base url for all requests to the SoundCloud API
:var client_id: The client ID to use with the SoundCloud API
"""
def __init__(self, client_id: str) -> None:
"""
Set up the SoundCloud client to interact with the API
:param client_id: Your SoundCloud client ID
:return: None
"""
self.client_id = client_id
def search(self,
query: str,
limit: int = 10,
offset: int = 0) -> Union[bool, SoundCloudSearchResults]:
"""
Search SoundCloud for the specified query
Note: for unknown reasons, this endpoint does not always return results
:param query: The query to search for
:param limit: The number of results to return
:param offset: The start position from 0
:return: SoundCloudSearchResults, or False if response is not 200
"""
parameters = {"q": query,
"limit": limit,
"offset": offset,
"client_id": self.client_id}
url = self.base_url + "search"
response = requests.get(url, params=parameters)
if (response.status_code != 200):
print("Error: Received code {}".format(response.status_code))
return False
return SoundCloudSearchResults(response, client_id=self.client_id, parent=self)
def track(self, track_id: int) -> Union[bool, SoundCloudTrack]:
"""
Gets data about the track with the specified track_id
:param track_id: The track id to search for
:return: a SoundCloudTrack with data about the track, or False if response is not 200
"""
parameters = {"client_id": self.client_id}
url = self.base_url + "tracks/{}".format(track_id)
response = requests.get(url, params=parameters)
if (response.status_code != 200):
print("Error: Received code {}".format(response.status_code))
return False
return SoundCloudTrack(response.json(), self.client_id, parent=self)
def related(self, track_id: int, limit: int = 10, offset: int = 0) -> Union[bool, SoundCloudRelatedTracks]:
"""
Gets tracks related to the specified track_id
:param track_id: The track id to find related tracks for
:param limit: The number of tracks to find
:param offset: The number of tracks to search for from zero, so offset 10 and limit 10 means find tracks 10-20
:return: SoundCloudRelatedTracks with the tracks, or False if response is not 200
"""
parameters = {"limit": limit,
"offset": offset,
"client_id": self.client_id}
url = self.base_url + "tracks/{}/related".format(track_id)
response = requests.get(url, params=parameters)
if (response.status_code != 200):
print("Error: Received code {}".format(response.status_code))
return False
return SoundCloudRelatedTracks(response.json(), self.client_id)
def playlists(self,
track_id: int,
representation: str = "mini",
limit: int = 10,
offset: int = 0) -> Union[bool, SoundCloudPlaylists]:
"""
Gets playlists containing a specified track
:param track_id: The track ID to find playlists containing
:param representation: The type of representation (either full or mini)
:param limit: The number of results to return
:param offset: The start position from 0
:return: SoundCloudPlaylists containing the track, or False if response is not 200
"""
parameters = {"representation": representation,
"limit": limit,
"offset": offset,
"client_id": self.client_id}
url = self.base_url + "tracks/{}/playlists_without_albums".format(track_id)
response = requests.get(url, params=parameters)
if (response.status_code != 200):
print("Error: Received code {}".format(response.status_code))
return False
return SoundCloudPlaylists(response.json(), self.client_id, parent=self)
def albums(self,
track_id: int,
representation: str = "mini",
limit: int = 10,
offset: int = 0) -> Union[bool, SoundCloudPlaylists]:
"""
Gets albums containing a specified track
:param track_id: The track ID to find albums containing
:param representation: The type of representation (either full or mini)
:param limit: The number of results to return
:param offset: The start position from 0
:return: SoundCloudPlaylists containing the track, or False if response is not 200
"""
parameters = {"representation": representation,
"limit": limit,
"offset": offset,
"client_id": self.client_id}
url = self.base_url + "tracks/{}/albums".format(track_id)
response = requests.get(url, params=parameters)
if (response.status_code != 200):
print("Error: Received code {}".format(response.status_code))
return False
if (len(response.json()["collection"]) == 0):
return False
return SoundCloudPlaylists(response.json(), self.client_id)
def comments(self):
"""
.. todo::
Add a function to get comments on a specific track or by a specific user
"""
pass # Todo: add comments
def web_profiles(self):
"""
.. todo::
Add a function to get the "web profiles" of a specific user
"""
pass # Todo: add web_profiles
def liked_tracks(self,
user_id: int,
limit: int = 24,
offset: int = 0) -> Union[bool, SoundCloudLikedTracks]:
"""
Gets the user's liked tracks
:param user_id: The ID of the user to find liked tracks for
:param limit: The number of results to return
:param offset: The start position from 0
:return: SoundCloudLikedTracks containing all the tracks, or False if response is not 200
"""
parameters = {"client_id": self.client_id,
"limit": limit,
"offset": offset}
url = self.base_url + "users/{}/track_likes".format(user_id)
response = requests.get(url, params=parameters)
if (response.status_code != 200):
print("Error: Received code {}".format(response.status_code))
return False
return SoundCloudLikedTracks(response, client_id=self.client_id)
def playlist(self,
playlist_id: int = None,
playlist_url: str = None,
representation: str = "full",
secret_token: str = None) -> Union[bool, SoundCloudPlaylist]:
"""
Get a playlist based on a specified playlist_id or playlist_url
:param playlist_id: The ID of the playlist
:param playlist_url: The URL of the playlist
        :param representation: The playlist representation (either full or mini)
:param secret_token: An optional secret token
:return: A SoundCloudPlaylist, or False if response is not 200
"""
if (playlist_id is None):
if (playlist_url is not None):
response = requests.get(playlist_url)
patterns = [
                r'<meta property="twitter:app:url:(?:googleplay|iphone|ipad)" '
                r'content="soundcloud:\/\/playlists:([0-9]+)">',
r'<meta property="twitter:player" content="https:\/\/w\.soundcloud\.com\/player\/\?url=https'
r'%3(?:a|A)%2(?:f|F)%2(?:f|F)api\.soundcloud\.com%2(?:f|F)playlists%2(?:f|F)([0-9]+)',
r'<meta property="al:(?:ios|android):url" content="soundcloud:\/\/playlists:([0-9]+)">',
r'<link rel="alternate" href="android-app:\/\/com\.soundcloud\.android\/soundcloud\/'
r'playlists:([0-9]+)">',
r'<link rel="alternate" href="ios-app:\/\/336353151\/soundcloud\/playlists:([0-9]+)">'
]
for pattern in patterns:
if (playlist_id is None):
search_results = re.search(pattern,
response.text)
if (search_results is not None):
playlist_id = search_results.group(1)
if (playlist_id is None):
print("Error: Could not find the playlist id from the url \"{}\"".format(playlist_url))
return False
parameters = {"representation": representation,
"client_id": self.client_id}
if (secret_token is not None):
parameters["secret_token"] = secret_token
url = self.base_url + "playlists/{}".format(playlist_id)
response = requests.get(url, params=parameters)
if (response.status_code != 200):
print("Error: Received code {}".format(response.status_code))
return False
return SoundCloudPlaylist(response.json(),
self.client_id,
parent=self)
| 44.547826 | 119 | 0.576127 | 1,125 | 10,246 | 5.157333 | 0.144 | 0.042744 | 0.033092 | 0.027921 | 0.54981 | 0.520338 | 0.487073 | 0.462082 | 0.425715 | 0.407963 | 0 | 0.013953 | 0.328518 | 10,246 | 229 | 120 | 44.742358 | 0.82936 | 0.23658 | 0 | 0.447761 | 0 | 0.029851 | 0.157736 | 0.071017 | 0 | 0 | 0 | 0.017467 | 0 | 1 | 0.074627 | false | 0.014925 | 0.067164 | 0 | 0.283582 | 0.059701 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
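Every `Client` method above repeats the same dance: build a parameter dict with `client_id`, call `requests.get`, print and return `False` on non-200, otherwise parse. A hedged sketch of how that duplication could be factored into one helper (the `get_json` name and the injectable `http` parameter are illustrative, not part of PySoundCloud's API; pass the `requests` module as `http` in real use):

```python
BASE_URL = "https://api-v2.soundcloud.com/"

def get_json(path, client_id, http, **params):
    """Shared request helper: return parsed JSON, or False on any non-200.

    `http` is anything exposing a requests-style .get(url, params=...),
    so tests can pass a stub instead of hitting the network.
    """
    params["client_id"] = client_id
    response = http.get(BASE_URL + path, params=params)
    if response.status_code != 200:
        print("Error: Received code {}".format(response.status_code))
        return False
    return response.json()
```

With this in place, `track()` reduces to one call: `data = get_json("tracks/{}".format(track_id), self.client_id, requests)`, and the status-check logic lives in exactly one spot.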
d27cc7e2f11f688e99e9542aba655008056fb669 | 859 | py | Python | rojak-analyzer/generate_stopwords.py | pyk/rojak | 0dd69efedb58ee5d951e1a43cdfa65b60f8bb7c7 | [
"BSD-3-Clause"
] | 107 | 2016-10-02T05:54:42.000Z | 2021-08-05T00:20:51.000Z | rojak-analyzer/generate_stopwords.py | pyk/rojak | 0dd69efedb58ee5d951e1a43cdfa65b60f8bb7c7 | [
"BSD-3-Clause"
] | 134 | 2016-10-02T21:21:08.000Z | 2016-12-27T02:46:34.000Z | rojak-analyzer/generate_stopwords.py | pyk/rojak | 0dd69efedb58ee5d951e1a43cdfa65b60f8bb7c7 | [
"BSD-3-Clause"
] | 54 | 2016-10-02T08:47:56.000Z | 2020-03-08T00:56:03.000Z | # Run this script to create stopwords.py based on stopwords.txt
import json
def generate(input_txt, output_py):
# Read line by line
txt_file = open(input_txt)
words = set([])
for raw_line in txt_file:
line = raw_line.strip()
# Skip empty line
if len(line) < 1: continue
# Skip comments
if line[0] == '#': continue
# Collect the stopwords
words.add(line)
# Dump the array to a file
output = open(output_py, 'w')
output.write('# DO NOT EDIT THIS FILE!\n')
output.write('# Edit stopwords.txt, generate this file again via ')
output.write('generate_stopwords.py\n')
output.write('stopwords = set(%s)' % (json.dumps(sorted(words),
indent=4)))
output.close()
txt_file.close()
if __name__ == '__main__':
generate('stopwords.txt', 'stopwords.py')
| 29.62069 | 71 | 0.622817 | 119 | 859 | 4.344538 | 0.478992 | 0.085106 | 0.046422 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004666 | 0.251455 | 859 | 28 | 72 | 30.678571 | 0.799378 | 0.181607 | 0 | 0 | 1 | 0 | 0.221264 | 0.033046 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.052632 | 0 | 0.105263 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
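The filtering rules inside `generate()` (strip whitespace, skip blank lines, skip `#` comments, deduplicate via a set) can be exercised in isolation. A small sketch, with `parse_stopwords` as an illustrative name for the extracted logic:

```python
import json

def parse_stopwords(lines):
    # Mirror of generate()'s per-line filtering.
    words = set()
    for raw_line in lines:
        line = raw_line.strip()
        if len(line) < 1:      # skip empty lines
            continue
        if line[0] == '#':     # skip comments
            continue
        words.add(line)
    return words

words = parse_stopwords(["# a comment", "", "yang ", "dan", "atau", "dan"])
module_text = 'stopwords = set(%s)' % json.dumps(sorted(words), indent=4)
print(module_text)
```

Sorting before dumping makes the generated `stopwords.py` deterministic, so regenerating it produces no spurious diffs in version control.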
d27e557da62812d946f0019863efdd827d603a76 | 1,024 | py | Python | model.py | nitro-code/inception-api | 0ee40b1bdc7cccec8e15921ff835ce29070a66f6 | [
"MIT"
] | 1 | 2017-08-18T09:13:47.000Z | 2017-08-18T09:13:47.000Z | model.py | nitroventures/inception-api | 0ee40b1bdc7cccec8e15921ff835ce29070a66f6 | [
"MIT"
] | null | null | null | model.py | nitroventures/inception-api | 0ee40b1bdc7cccec8e15921ff835ce29070a66f6 | [
"MIT"
] | null | null | null | import tensorflow as tf
from keras.preprocessing import image
from keras.applications.inception_v3 import InceptionV3, preprocess_input, decode_predictions
import numpy as np
import h5py
model = InceptionV3(include_top=True, weights='imagenet', input_tensor=None, input_shape=None)
graph = tf.get_default_graph()
def pil2array(pillow_img):
return np.array(pillow_img.getdata(), np.float32).reshape(pillow_img.size[1], pillow_img.size[0], 3)
def predict_pil(pillow_img):
img_array = pil2array(pillow_img)
return predict_nparray(img_array)
def predict_nparray(img_as_array):
global graph
img_batch_as_array = np.expand_dims(img_as_array, axis=0)
img_batch_as_array = preprocess_input(img_batch_as_array)
with graph.as_default():
preds = model.predict(img_batch_as_array)
decoded_preds = decode_predictions(preds, top=3)[0]
predictions = [{'label': label, 'descr': description, 'prob': probability} for label,description, probability in decoded_preds]
return predictions
| 34.133333 | 131 | 0.775391 | 146 | 1,024 | 5.157534 | 0.424658 | 0.071713 | 0.053121 | 0.079681 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01573 | 0.130859 | 1,024 | 29 | 132 | 35.310345 | 0.830337 | 0 | 0 | 0 | 0 | 0 | 0.021484 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.238095 | 0.047619 | 0.52381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d2831bfec38a388ec3a1badd4f034aaa55b158a5 | 1,661 | py | Python | sfdata/posts/migrations/0001_initial.py | adjspecies/sfdata | 9522176c1c80e9f0aeecf77da6576e8465238383 | [
"MIT"
] | 1 | 2019-01-24T01:57:21.000Z | 2019-01-24T01:57:21.000Z | sfdata/posts/migrations/0001_initial.py | adjspecies/sfdata | 9522176c1c80e9f0aeecf77da6576e8465238383 | [
"MIT"
] | null | null | null | sfdata/posts/migrations/0001_initial.py | adjspecies/sfdata | 9522176c1c80e9f0aeecf77da6576e8465238383 | [
"MIT"
] | 1 | 2018-12-22T02:20:39.000Z | 2018-12-22T02:20:39.000Z | # Generated by Django 2.1.4 on 2018-12-21 21:55
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Author',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.TextField()),
],
),
migrations.CreateModel(
name='Post',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.TextField()),
('date', models.DateTimeField()),
('wordcount', models.IntegerField()),
('in_series', models.BooleanField()),
('views', models.IntegerField()),
('faves', models.IntegerField()),
('comments', models.IntegerField()),
('votes', models.IntegerField()),
('author', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='posts.Author')),
],
),
migrations.CreateModel(
name='Tag',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.TextField()),
],
),
migrations.AddField(
model_name='post',
name='tags',
field=models.ManyToManyField(to='posts.Tag'),
),
]
| 33.22 | 114 | 0.526791 | 145 | 1,661 | 5.951724 | 0.42069 | 0.104287 | 0.086906 | 0.079954 | 0.341831 | 0.341831 | 0.341831 | 0.341831 | 0.341831 | 0.341831 | 0 | 0.013429 | 0.327514 | 1,661 | 49 | 115 | 33.897959 | 0.759176 | 0.027092 | 0 | 0.428571 | 1 | 0 | 0.07311 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.047619 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d28385fbbbc8e61ca535b580e9a5d1d70c77fe44 | 1,361 | py | Python | test/test_tools.py | cokelaer/sequana | da35de12b45f38b4fa488c7a15a6d9829890b44e | [
"BSD-3-Clause"
] | 138 | 2016-07-13T06:24:45.000Z | 2022-03-28T13:12:03.000Z | test/test_tools.py | cokelaer/sequana | da35de12b45f38b4fa488c7a15a6d9829890b44e | [
"BSD-3-Clause"
] | 655 | 2016-03-10T17:33:40.000Z | 2022-03-30T16:10:45.000Z | test/test_tools.py | cokelaer/sequana | da35de12b45f38b4fa488c7a15a6d9829890b44e | [
"BSD-3-Clause"
] | 39 | 2016-11-04T11:40:58.000Z | 2022-03-15T08:12:29.000Z | from sequana.tools import bam_to_mapped_unmapped_fastq, reverse_complement, StatsBAM2Mapped
from sequana import sequana_data
from sequana.tools import bam_get_paired_distance, GZLineCounter, PairedFastQ
from sequana.tools import genbank_features_parser
def test_StatsBAM2Mapped():
data = sequana_data("test.bam", "testing")
res = StatsBAM2Mapped(data)
res.to_html()
def test_bam2fastq():
data = sequana_data("test.bam", "testing")
res = bam_to_mapped_unmapped_fastq(data)
def test_reverse_complement():
assert reverse_complement("AACCGGTTA") == 'TAACCGGTT'
def test_reverse():
from sequana.tools import reverse
assert reverse("AACCGG") == 'GGCCAA'
def test_distance():
data = sequana_data("test.bam", "testing")
distances = bam_get_paired_distance(data)
def test_gc_content():
from sequana.tools import gc_content
data = sequana_data('measles.fa', "testing")
gc_content(data, 10)['chr1']
gc_content(data, 101, circular=True)['chr1']
def test_genbank_features_parser():
data = sequana_data("JB409847.gbk")
genbank_features_parser(data)
def test_gzlinecounter():
assert len(GZLineCounter(sequana_data("test.fastq.gz"))) == 1000
def test_paired_file():
f1 = sequana_data("test.fastq.gz")
f2 = sequana_data("test.fastq.gz")
assert PairedFastQ(f1,f2).is_synchronised()
| 26.686275 | 91 | 0.739897 | 177 | 1,361 | 5.418079 | 0.293785 | 0.103233 | 0.093848 | 0.114703 | 0.264859 | 0.096976 | 0.066736 | 0 | 0 | 0 | 0 | 0.021496 | 0.145481 | 1,361 | 50 | 92 | 27.22 | 0.803095 | 0 | 0 | 0.090909 | 0 | 0 | 0.110948 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 1 | 0.272727 | false | 0 | 0.181818 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
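The `reverse_complement` behaviour asserted in `test_reverse_complement` above can be sketched in plain Python. This is a minimal stand-in, not sequana's implementation, and it only handles the four DNA bases in either case:

```python
# Translation table pairing each base with its complement (A<->T, C<->G).
_COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(seq):
    # Complement every base, then reverse the whole sequence.
    return seq.translate(_COMPLEMENT)[::-1]

print(reverse_complement("AACCGGTTA"))
```

This reproduces the expectation from the test (`reverse_complement("AACCGGTTA") == 'TAACCGGTT'`); `str.translate` with a precomputed table avoids a Python-level loop over the bases.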
962a9c50351cba1947f6e3a1a14ce2f159196743 | 1,205 | py | Python | oldp/apps/search/templatetags/search.py | docsuleman/oldp | 8dcaa8e6e435794c872346b5014945ace885adb4 | [
"MIT"
] | 66 | 2018-05-07T12:34:39.000Z | 2022-02-23T20:14:24.000Z | oldp/apps/search/templatetags/search.py | Justice-PLP-DHV/oldp | eadf235bb0925453d9a5b81963a0ce53afeb17fd | [
"MIT"
] | 68 | 2018-06-11T16:13:17.000Z | 2022-02-10T08:03:26.000Z | oldp/apps/search/templatetags/search.py | Justice-PLP-DHV/oldp | eadf235bb0925453d9a5b81963a0ce53afeb17fd | [
"MIT"
] | 15 | 2018-06-23T19:41:13.000Z | 2021-08-18T08:21:49.000Z | from datetime import datetime
from dateutil.relativedelta import relativedelta
from django import template
from django.template.defaultfilters import urlencode
from django.urls import reverse
from haystack.models import SearchResult
from haystack.utils.highlighting import Highlighter
register = template.Library()
@register.filter
def get_search_snippet(search_result: SearchResult, query: str) -> str:
hlr = Highlighter(query, html_tag='strong')
if search_result and hasattr(search_result, 'get_stored_fields') and 'text' in search_result.get_stored_fields():
text = search_result.get_stored_fields()['text']
return hlr.highlight(text)
else:
return ''
@register.filter
def format_date(start_date: datetime) -> str:
"""
Format for search facets (year-month)
"""
return start_date.strftime('%Y-%m')
@register.filter
def date_range_query(start_date: datetime, date_format='%Y-%m-%d') -> str:
"""
Monthly range
"""
return start_date.strftime(date_format) + ',' + (start_date + relativedelta(months=1)).strftime(date_format)
@register.filter
def search_url(query):
return reverse('haystack_search') + '?q=' + urlencode(query)
| 28.690476 | 117 | 0.73444 | 152 | 1,205 | 5.644737 | 0.381579 | 0.06993 | 0.079254 | 0.073427 | 0.10373 | 0.072261 | 0 | 0 | 0 | 0 | 0 | 0.00098 | 0.153527 | 1,205 | 41 | 118 | 29.390244 | 0.840196 | 0.042324 | 0 | 0.16 | 0 | 0 | 0.0561 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.16 | false | 0 | 0.28 | 0.04 | 0.64 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
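The `date_range_query` filter above leans on `dateutil.relativedelta(months=1)` for month arithmetic. A stdlib-only sketch of the same monthly-range idea, assuming the hypothetical helper name `add_month` and that day-of-month should be clamped to the target month's length (which matches relativedelta's behaviour):

```python
import calendar
from datetime import datetime

def add_month(d):
    # Stdlib stand-in for relativedelta(months=1): roll the month,
    # clamping the day so e.g. Jan 31 -> Feb 29 in a leap year.
    year, month = (d.year + 1, 1) if d.month == 12 else (d.year, d.month + 1)
    day = min(d.day, calendar.monthrange(year, month)[1])
    return d.replace(year=year, month=month, day=day)

def date_range_query(start_date, date_format='%Y-%m-%d'):
    return start_date.strftime(date_format) + ',' + add_month(start_date).strftime(date_format)

print(date_range_query(datetime(2020, 1, 31)))
```

The comma-joined pair matches the `start,end` range syntax the template filter emits for search facets.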
962cd88d6f79f8b3352c0cd041ccfcff6c478fe5 | 11,137 | py | Python | sdk/python/pulumi_oci/sch/get_service_connector.py | EladGabay/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 5 | 2021-08-17T11:14:46.000Z | 2021-12-31T02:07:03.000Z | sdk/python/pulumi_oci/sch/get_service_connector.py | pulumi-oci/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2021-09-06T11:21:29.000Z | 2021-09-06T11:21:29.000Z | sdk/python/pulumi_oci/sch/get_service_connector.py | pulumi-oci/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2021-08-24T23:31:30.000Z | 2022-01-02T19:26:54.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
__all__ = [
'GetServiceConnectorResult',
'AwaitableGetServiceConnectorResult',
'get_service_connector',
]
@pulumi.output_type
class GetServiceConnectorResult:
"""
A collection of values returned by getServiceConnector.
"""
def __init__(__self__, compartment_id=None, defined_tags=None, description=None, display_name=None, freeform_tags=None, id=None, lifecyle_details=None, service_connector_id=None, source=None, state=None, system_tags=None, target=None, tasks=None, time_created=None, time_updated=None):
if compartment_id and not isinstance(compartment_id, str):
raise TypeError("Expected argument 'compartment_id' to be a str")
pulumi.set(__self__, "compartment_id", compartment_id)
if defined_tags and not isinstance(defined_tags, dict):
raise TypeError("Expected argument 'defined_tags' to be a dict")
pulumi.set(__self__, "defined_tags", defined_tags)
if description and not isinstance(description, str):
raise TypeError("Expected argument 'description' to be a str")
pulumi.set(__self__, "description", description)
if display_name and not isinstance(display_name, str):
raise TypeError("Expected argument 'display_name' to be a str")
pulumi.set(__self__, "display_name", display_name)
if freeform_tags and not isinstance(freeform_tags, dict):
raise TypeError("Expected argument 'freeform_tags' to be a dict")
pulumi.set(__self__, "freeform_tags", freeform_tags)
if id and not isinstance(id, str):
raise TypeError("Expected argument 'id' to be a str")
pulumi.set(__self__, "id", id)
if lifecyle_details and not isinstance(lifecyle_details, str):
raise TypeError("Expected argument 'lifecyle_details' to be a str")
pulumi.set(__self__, "lifecyle_details", lifecyle_details)
if service_connector_id and not isinstance(service_connector_id, str):
raise TypeError("Expected argument 'service_connector_id' to be a str")
pulumi.set(__self__, "service_connector_id", service_connector_id)
if source and not isinstance(source, dict):
raise TypeError("Expected argument 'source' to be a dict")
pulumi.set(__self__, "source", source)
if state and not isinstance(state, str):
raise TypeError("Expected argument 'state' to be a str")
pulumi.set(__self__, "state", state)
if system_tags and not isinstance(system_tags, dict):
raise TypeError("Expected argument 'system_tags' to be a dict")
pulumi.set(__self__, "system_tags", system_tags)
if target and not isinstance(target, dict):
raise TypeError("Expected argument 'target' to be a dict")
pulumi.set(__self__, "target", target)
if tasks and not isinstance(tasks, list):
raise TypeError("Expected argument 'tasks' to be a list")
pulumi.set(__self__, "tasks", tasks)
if time_created and not isinstance(time_created, str):
raise TypeError("Expected argument 'time_created' to be a str")
pulumi.set(__self__, "time_created", time_created)
if time_updated and not isinstance(time_updated, str):
raise TypeError("Expected argument 'time_updated' to be a str")
pulumi.set(__self__, "time_updated", time_updated)
@property
@pulumi.getter(name="compartmentId")
def compartment_id(self) -> str:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the compartment containing the metric.
"""
return pulumi.get(self, "compartment_id")
@property
@pulumi.getter(name="definedTags")
def defined_tags(self) -> Mapping[str, Any]:
"""
Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: `{"foo-namespace.bar-key": "value"}`
"""
return pulumi.get(self, "defined_tags")
@property
@pulumi.getter
def description(self) -> str:
"""
The description of the resource. Avoid entering confidential information.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="displayName")
def display_name(self) -> str:
"""
A user-friendly name. It does not have to be unique, and it is changeable. Avoid entering confidential information.
"""
return pulumi.get(self, "display_name")
@property
@pulumi.getter(name="freeformTags")
def freeform_tags(self) -> Mapping[str, Any]:
"""
Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: `{"bar-key": "value"}`
"""
return pulumi.get(self, "freeform_tags")
@property
@pulumi.getter
def id(self) -> str:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the service connector.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="lifecyleDetails")
def lifecyle_details(self) -> str:
"""
A message describing the current state in more detail. For example, the message might provide actionable information for a resource in a `FAILED` state.
"""
return pulumi.get(self, "lifecyle_details")
@property
@pulumi.getter(name="serviceConnectorId")
def service_connector_id(self) -> str:
return pulumi.get(self, "service_connector_id")
@property
@pulumi.getter
def source(self) -> 'outputs.GetServiceConnectorSourceResult':
"""
An object that represents the source of the flow defined by the service connector. An example source is the VCNFlow logs within the NetworkLogs group. For more information about flows defined by service connectors, see [Service Connector Hub Overview](https://docs.cloud.oracle.com/iaas/Content/service-connector-hub/overview.htm).
"""
return pulumi.get(self, "source")
@property
@pulumi.getter
def state(self) -> str:
"""
The current state of the service connector.
"""
return pulumi.get(self, "state")
@property
@pulumi.getter(name="systemTags")
def system_tags(self) -> Mapping[str, Any]:
"""
The system tags associated with this resource, if any. The system tags are set by Oracle Cloud Infrastructure services. Each key is predefined and scoped to namespaces. For more information, see [Resource Tags](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcetags.htm). Example: `{orcl-cloud: {free-tier-retain: true}}`
"""
return pulumi.get(self, "system_tags")
@property
@pulumi.getter
def target(self) -> 'outputs.GetServiceConnectorTargetResult':
"""
An object that represents the target of the flow defined by the service connector. An example target is a stream. For more information about flows defined by service connectors, see [Service Connector Hub Overview](https://docs.cloud.oracle.com/iaas/Content/service-connector-hub/overview.htm).
"""
return pulumi.get(self, "target")
@property
@pulumi.getter
def tasks(self) -> Sequence['outputs.GetServiceConnectorTaskResult']:
"""
The list of tasks.
"""
return pulumi.get(self, "tasks")
@property
@pulumi.getter(name="timeCreated")
def time_created(self) -> str:
"""
The date and time when the service connector was created. Format is defined by [RFC3339](https://tools.ietf.org/html/rfc3339). Example: `2020-01-25T21:10:29.600Z`
"""
return pulumi.get(self, "time_created")
@property
@pulumi.getter(name="timeUpdated")
def time_updated(self) -> str:
"""
The date and time when the service connector was updated. Format is defined by [RFC3339](https://tools.ietf.org/html/rfc3339). Example: `2020-01-25T21:10:29.600Z`
"""
return pulumi.get(self, "time_updated")
class AwaitableGetServiceConnectorResult(GetServiceConnectorResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetServiceConnectorResult(
compartment_id=self.compartment_id,
defined_tags=self.defined_tags,
description=self.description,
display_name=self.display_name,
freeform_tags=self.freeform_tags,
id=self.id,
lifecyle_details=self.lifecyle_details,
service_connector_id=self.service_connector_id,
source=self.source,
state=self.state,
system_tags=self.system_tags,
target=self.target,
tasks=self.tasks,
time_created=self.time_created,
time_updated=self.time_updated)
def get_service_connector(service_connector_id: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetServiceConnectorResult:
"""
This data source provides details about a specific Service Connector resource in Oracle Cloud Infrastructure Service Connector Hub service.
Gets the specified service connector's configuration information.
## Example Usage
```python
import pulumi
import pulumi_oci as oci
test_service_connector = oci.sch.get_service_connector(service_connector_id=oci_sch_service_connector["test_service_connector"]["id"])
```
:param str service_connector_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the service connector.
"""
__args__ = dict()
__args__['serviceConnectorId'] = service_connector_id
if opts is None:
opts = pulumi.InvokeOptions()
if opts.version is None:
opts.version = _utilities.get_version()
__ret__ = pulumi.runtime.invoke('oci:sch/getServiceConnector:getServiceConnector', __args__, opts=opts, typ=GetServiceConnectorResult).value
return AwaitableGetServiceConnectorResult(
compartment_id=__ret__.compartment_id,
defined_tags=__ret__.defined_tags,
description=__ret__.description,
display_name=__ret__.display_name,
freeform_tags=__ret__.freeform_tags,
id=__ret__.id,
lifecyle_details=__ret__.lifecyle_details,
service_connector_id=__ret__.service_connector_id,
source=__ret__.source,
state=__ret__.state,
system_tags=__ret__.system_tags,
target=__ret__.target,
tasks=__ret__.tasks,
time_created=__ret__.time_created,
time_updated=__ret__.time_updated)
| 43.846457 | 347 | 0.679806 | 1,333 | 11,137 | 5.456114 | 0.174044 | 0.079197 | 0.042073 | 0.061873 | 0.349099 | 0.278702 | 0.228241 | 0.194143 | 0.14382 | 0.137082 | 0 | 0.00588 | 0.221155 | 11,137 | 253 | 348 | 44.019763 | 0.832603 | 0.274401 | 0 | 0.128049 | 1 | 0 | 0.172754 | 0.034317 | 0 | 0 | 0 | 0 | 0 | 1 | 0.109756 | false | 0 | 0.036585 | 0.006098 | 0.262195 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
962f56d3ff295087050794dbedace7481235e971 | 337 | py | Python | molecule/default/tests/test_creation.py | stackhpc/ansible-role-luks | 8c4b5f472ab0aef3d2a776d4fcd37ca17c6eac05 | [
"Apache-1.1"
] | 3 | 2020-04-14T19:57:25.000Z | 2021-01-11T09:09:16.000Z | molecule/default/tests/test_creation.py | stackhpc/ansible-role-luks | 8c4b5f472ab0aef3d2a776d4fcd37ca17c6eac05 | [
"Apache-1.1"
] | 4 | 2020-08-12T10:24:25.000Z | 2022-01-17T17:48:28.000Z | molecule/default/tests/test_creation.py | stackhpc/ansible-role-luks | 8c4b5f472ab0aef3d2a776d4fcd37ca17c6eac05 | [
"Apache-1.1"
] | 2 | 2021-06-17T21:57:42.000Z | 2022-02-20T08:02:43.000Z | import os
import testinfra.utils.ansible_runner
testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')
def test_crypto_devices(host):
f = host.file('/dev/mapper/cryptotest')
assert f.exists
f = host.file('/dev/mapper/crypto-test1')
assert f.exists
| 24.071429 | 63 | 0.744807 | 46 | 337 | 5.282609 | 0.565217 | 0.115226 | 0.17284 | 0.222222 | 0.148148 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003413 | 0.130564 | 337 | 13 | 64 | 25.923077 | 0.825939 | 0 | 0 | 0.222222 | 0 | 0 | 0.21365 | 0.204748 | 0 | 0 | 0 | 0 | 0.222222 | 1 | 0.111111 | false | 0 | 0.222222 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9632975c75b20b8d1e791a57c8e86aa3a4d6057f | 586 | py | Python | w0rplib/url.py | w0rp/w0rpzone | 06aa9f8871cefcbefbbfdfcba0abfd4fa2629d0c | [
"BSD-2-Clause"
] | null | null | null | w0rplib/url.py | w0rp/w0rpzone | 06aa9f8871cefcbefbbfdfcba0abfd4fa2629d0c | [
"BSD-2-Clause"
] | 13 | 2019-07-05T18:44:46.000Z | 2021-06-19T12:19:46.000Z | w0rplib/url.py | w0rp/w0rpzone | 06aa9f8871cefcbefbbfdfcba0abfd4fa2629d0c | [
"BSD-2-Clause"
] | null | null | null | from django.views.generic.base import RedirectView
from django.conf.urls import re_path
def redir(regex, redirect_url, name=None):
"""
A shorter wrapper around RedirectView for 301 redirects.
"""
return re_path(
regex,
RedirectView.as_view(url=redirect_url, permanent=True),
name=name,
)
def redir_temp(regex, redirect_url, name=None):
"""
A shorter wrapper around RedirectView for 302 redirects.
"""
return re_path(
regex,
RedirectView.as_view(url=redirect_url, permanent=False),
name=name,
)
| 23.44 | 64 | 0.663823 | 73 | 586 | 5.191781 | 0.452055 | 0.116095 | 0.084433 | 0.105541 | 0.670185 | 0.670185 | 0.670185 | 0.670185 | 0.670185 | 0.670185 | 0 | 0.013514 | 0.242321 | 586 | 24 | 65 | 24.416667 | 0.84009 | 0.192833 | 0 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9634bfc41291ba70d3cb8d6d1b58e82b77a84ebf | 494 | py | Python | commanderbot_lib/database/yaml_file_database.py | CommanderBot-Dev/commanderbot-lib | 2716279b059056eaf0797085149b61f71b175ed5 | [
"MIT"
] | 1 | 2020-09-25T19:22:47.000Z | 2020-09-25T19:22:47.000Z | commanderbot_lib/database/yaml_file_database.py | CommanderBot-Dev/commanderbot-lib | 2716279b059056eaf0797085149b61f71b175ed5 | [
"MIT"
] | 1 | 2021-01-06T00:22:56.000Z | 2021-08-29T20:54:50.000Z | commanderbot_lib/database/yaml_file_database.py | CommanderBot-Dev/commanderbot-lib | 2716279b059056eaf0797085149b61f71b175ed5 | [
"MIT"
] | 2 | 2020-09-25T19:23:07.000Z | 2020-09-25T21:06:11.000Z | from typing import IO
from commanderbot_lib.database.abc.file_database import FileDatabase
from commanderbot_lib.database.mixins.yaml_file_database_mixin import (
YamlFileDatabaseMixin,
)
class YamlFileDatabase(FileDatabase, YamlFileDatabaseMixin):
# @implements FileDatabase
async def load(self, file: IO) -> dict:
return await self.load_yaml(file)
# @implements FileDatabase
async def dump(self, data: dict, file: IO):
await self.dump_yaml(data, file)
| 29.058824 | 71 | 0.755061 | 59 | 494 | 6.186441 | 0.440678 | 0.087671 | 0.10411 | 0.147945 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168016 | 494 | 16 | 72 | 30.875 | 0.888078 | 0.09919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.3 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
963a4d3128c84db58d2f454e777068e2515b774e | 307 | py | Python | cooee/actions.py | yschimke/cooee-cli-py | 74edeb58ee5cfd0887b73de4f90ffa28892e24df | [
"Apache-2.0"
] | null | null | null | cooee/actions.py | yschimke/cooee-cli-py | 74edeb58ee5cfd0887b73de4f90ffa28892e24df | [
"Apache-2.0"
] | null | null | null | cooee/actions.py | yschimke/cooee-cli-py | 74edeb58ee5cfd0887b73de4f90ffa28892e24df | [
"Apache-2.0"
] | null | null | null | import webbrowser
from typing import Dict, Any
from prompt_toolkit import print_formatted_text
from .format import todo_string
def launch_action(result: Dict[str, Any]):
if "location" in result:
webbrowser.open(result["location"])
else:
print_formatted_text(todo_string(result))
| 21.928571 | 49 | 0.745928 | 41 | 307 | 5.390244 | 0.585366 | 0.126697 | 0.162896 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175896 | 307 | 13 | 50 | 23.615385 | 0.873518 | 0 | 0 | 0 | 0 | 0 | 0.052117 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.444444 | 0 | 0.555556 | 0.222222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
963e81d1f86297198f40e8bbac901cbb13572805 | 829 | py | Python | python/q04.py | holisound/70-math-quizs-for-programmers | 746d98435a496fd8313a233fe4c2a59fd11d3823 | [
"MIT"
] | null | null | null | python/q04.py | holisound/70-math-quizs-for-programmers | 746d98435a496fd8313a233fe4c2a59fd11d3823 | [
"MIT"
] | null | null | null | python/q04.py | holisound/70-math-quizs-for-programmers | 746d98435a496fd8313a233fe4c2a59fd11d3823 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from collections import deque
def cutBar(m, n):
res, now = 0, 1
while now < n:
now += now if now < m else m
res += 1
return res
def cutBarBFS(m, n):
if n == 1:
return 0
que = deque([n])
res = 0
while que:
size = len(que)
for _ in range(min(m, size)):
bar = que.popleft()
left = bar >> 1
right = bar - left
if left > 1:
que.append(left)
if right > 1:
que.append(right)
res += 1
return res
def cutBarDFS(m, n, now):
if now >= n:
return 0
if now < m:
return 1 + cutBarDFS(m, n, now * 2)
return 1 + cutBarDFS(m, n, now + m)
print(cutBar(3, 8))
print(cutBar(3, 20))
print(cutBar(5, 100))
print(cutBar(1, 1))
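Not part of the original file: a quick sanity check of the greedy `cutBar` above, restated here so the snippet is self-contained, against the test cases the file prints.

```python
def cutBar(m, n):
    # Greedy: while fewer than m pieces exist, every piece is cut each
    # round (doubling the piece count); once at least m pieces exist,
    # only m cuts fit per round, adding m new pieces each time.
    res, now = 0, 1
    while now < n:
        now += now if now < m else m
        res += 1
    return res

# Expected values for the calls printed by the original file:
assert cutBar(3, 8) == 4
assert cutBar(3, 20) == 8
assert cutBar(5, 100) == 22
assert cutBar(1, 1) == 0
```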
| 19.738095 | 43 | 0.472859 | 122 | 829 | 3.204918 | 0.319672 | 0.025575 | 0.084399 | 0.107417 | 0.189258 | 0.107417 | 0 | 0 | 0 | 0 | 0 | 0.052738 | 0.405308 | 829 | 41 | 44 | 20.219512 | 0.740365 | 0.025332 | 0 | 0.176471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.029412 | null | null | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
963ee361844cacc5317b943abf161599e3643da8 | 1,247 | py | Python | DRAFTS/CookieStealer.py | henryza/Python | 34af4a915e7bec27268b619246833e65e48d1cb8 | [
"MIT"
] | null | null | null | DRAFTS/CookieStealer.py | henryza/Python | 34af4a915e7bec27268b619246833e65e48d1cb8 | [
"MIT"
] | null | null | null | DRAFTS/CookieStealer.py | henryza/Python | 34af4a915e7bec27268b619246833e65e48d1cb8 | [
"MIT"
] | null | null | null | import requests
import json
class test(object):
def __init__(self):
self._debug = False
self._http_debug = False
self._https = True
self._session = requests.session() # use single session for all requests
def update_csrf(self):
# Retrieve server csrf and update session's headers
for cookie in self._session.cookies:
if cookie.name == 'ccsrftoken':
csrftoken = cookie.value[1:-1] # token stored as a list
self._session.headers.update({'X-CSRFTOKEN': csrftoken})
def login(self,host,username,password):
self.host = host
if self._https is True:
self.url_prefix = 'https://' + self.host
else:
self.url_prefix = 'http://' + self.host
url = self.url_prefix + '/logincheck'
res = self._session.post(url,
data='username='+username+'&secretkey='+password,
verify = False)
#self.dprint(res)
# Update session's csrftoken
self.update_csrf()
def get(self, url):
url = url
res = self._session.get(url)
return res.content
f = test()
f.login(ip, username, password)  # NOTE: ip, username and password are never defined in this (truncated) file
9643beb9c22472b136ce8bcd1f8f9fb526f1f46a | 11,096 | py | Python | dependencies/FontTools/Lib/fontTools/misc/bezierTools.py | charlesmchen/typefacet | 8c6db26d0c599ece16f3704696811275120a4044 | [
"Apache-2.0"
] | 21 | 2015-01-16T05:10:02.000Z | 2021-06-11T20:48:15.000Z | dependencies/FontTools/Lib/fontTools/misc/bezierTools.py | charlesmchen/typefacet | 8c6db26d0c599ece16f3704696811275120a4044 | [
"Apache-2.0"
] | 1 | 2019-09-09T12:10:27.000Z | 2020-05-22T10:12:14.000Z | dependencies/FontTools/Lib/fontTools/misc/bezierTools.py | charlesmchen/typefacet | 8c6db26d0c599ece16f3704696811275120a4044 | [
"Apache-2.0"
] | 2 | 2015-05-03T04:51:08.000Z | 2018-08-24T08:28:53.000Z | """fontTools.misc.bezierTools.py -- tools for working with bezier path segments."""
__all__ = [
"calcQuadraticBounds",
"calcCubicBounds",
"splitLine",
"splitQuadratic",
"splitCubic",
"splitQuadraticAtT",
"splitCubicAtT",
"solveQuadratic",
"solveCubic",
]
from fontTools.misc.arrayTools import calcBounds
import numpy
epsilon = 1e-12
def calcQuadraticBounds(pt1, pt2, pt3):
"""Return the bounding rectangle for a qudratic bezier segment.
pt1 and pt3 are the "anchor" points, pt2 is the "handle".
>>> calcQuadraticBounds((0, 0), (50, 100), (100, 0))
(0.0, 0.0, 100.0, 50.0)
>>> calcQuadraticBounds((0, 0), (100, 0), (100, 100))
(0.0, 0.0, 100.0, 100.0)
"""
a, b, c = calcQuadraticParameters(pt1, pt2, pt3)
# calc first derivative
ax, ay = a * 2
bx, by = b
roots = []
if ax != 0:
roots.append(-bx/ax)
if ay != 0:
roots.append(-by/ay)
points = [a*t*t + b*t + c for t in roots if 0 <= t < 1] + [pt1, pt3]
return calcBounds(points)
def calcCubicBounds(pt1, pt2, pt3, pt4):
"""Return the bounding rectangle for a cubic bezier segment.
pt1 and pt4 are the "anchor" points, pt2 and pt3 are the "handles".
>>> calcCubicBounds((0, 0), (25, 100), (75, 100), (100, 0))
(0.0, 0.0, 100.0, 75.0)
>>> calcCubicBounds((0, 0), (50, 0), (100, 50), (100, 100))
(0.0, 0.0, 100.0, 100.0)
>>> calcCubicBounds((50, 0), (0, 100), (100, 100), (50, 0))
(35.5662432703, 0.0, 64.4337567297, 75.0)
"""
a, b, c, d = calcCubicParameters(pt1, pt2, pt3, pt4)
# calc first derivative
ax, ay = a * 3.0
bx, by = b * 2.0
cx, cy = c
xRoots = [t for t in solveQuadratic(ax, bx, cx) if 0 <= t < 1]
yRoots = [t for t in solveQuadratic(ay, by, cy) if 0 <= t < 1]
roots = xRoots + yRoots
points = [(a*t*t*t + b*t*t + c * t + d) for t in roots] + [pt1, pt4]
return calcBounds(points)
def splitLine(pt1, pt2, where, isHorizontal):
"""Split the line between pt1 and pt2 at position 'where', which
is an x coordinate if isHorizontal is False, a y coordinate if
isHorizontal is True. Return a list of two line segments if the
line was successfully split, or a list containing the original
line.
>>> printSegments(splitLine((0, 0), (100, 100), 50, True))
((0, 0), (50.0, 50.0))
((50.0, 50.0), (100, 100))
>>> printSegments(splitLine((0, 0), (100, 100), 100, True))
((0, 0), (100, 100))
>>> printSegments(splitLine((0, 0), (100, 100), 0, True))
((0, 0), (0.0, 0.0))
((0.0, 0.0), (100, 100))
>>> printSegments(splitLine((0, 0), (100, 100), 0, False))
((0, 0), (0.0, 0.0))
((0.0, 0.0), (100, 100))
"""
pt1, pt2 = numpy.array((pt1, pt2))
a = (pt2 - pt1)
b = pt1
ax = a[isHorizontal]
if ax == 0:
return [(pt1, pt2)]
t = float(where - b[isHorizontal]) / ax
if 0 <= t < 1:
midPt = a * t + b
return [(pt1, midPt), (midPt, pt2)]
else:
return [(pt1, pt2)]
def splitQuadratic(pt1, pt2, pt3, where, isHorizontal):
"""Split the quadratic curve between pt1, pt2 and pt3 at position 'where',
which is an x coordinate if isHorizontal is False, a y coordinate if
isHorizontal is True. Return a list of curve segments.
>>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 150, False))
((0, 0), (50, 100), (100, 0))
>>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 50, False))
((0.0, 0.0), (25.0, 50.0), (50.0, 50.0))
((50.0, 50.0), (75.0, 50.0), (100.0, 0.0))
>>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 25, False))
((0.0, 0.0), (12.5, 25.0), (25.0, 37.5))
((25.0, 37.5), (62.5, 75.0), (100.0, 0.0))
>>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 25, True))
((0.0, 0.0), (7.32233047034, 14.6446609407), (14.6446609407, 25.0))
((14.6446609407, 25.0), (50.0, 75.0), (85.3553390593, 25.0))
((85.3553390593, 25.0), (92.6776695297, 14.6446609407), (100.0, -7.1054273576e-15))
>>> # XXX I'm not at all sure if the following behavior is desirable:
>>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 50, True))
((0.0, 0.0), (25.0, 50.0), (50.0, 50.0))
((50.0, 50.0), (50.0, 50.0), (50.0, 50.0))
((50.0, 50.0), (75.0, 50.0), (100.0, 0.0))
"""
a, b, c = calcQuadraticParameters(pt1, pt2, pt3)
solutions = solveQuadratic(a[isHorizontal], b[isHorizontal],
c[isHorizontal] - where)
solutions = [t for t in solutions if 0 <= t < 1]
solutions.sort()
if not solutions:
return [(pt1, pt2, pt3)]
return _splitQuadraticAtT(a, b, c, *solutions)
def splitCubic(pt1, pt2, pt3, pt4, where, isHorizontal):
"""Split the cubic curve between pt1, pt2, pt3 and pt4 at position 'where',
which is an x coordinate if isHorizontal is False, a y coordinate if
isHorizontal is True. Return a list of curve segments.
>>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 150, False))
((0, 0), (25, 100), (75, 100), (100, 0))
>>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 50, False))
((0.0, 0.0), (12.5, 50.0), (31.25, 75.0), (50.0, 75.0))
((50.0, 75.0), (68.75, 75.0), (87.5, 50.0), (100.0, 0.0))
>>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 25, True))
((0.0, 0.0), (2.2937927384, 9.17517095361), (4.79804488188, 17.5085042869), (7.47413641001, 25.0))
((7.47413641001, 25.0), (31.2886200204, 91.6666666667), (68.7113799796, 91.6666666667), (92.52586359, 25.0))
((92.52586359, 25.0), (95.2019551181, 17.5085042869), (97.7062072616, 9.17517095361), (100.0, 1.7763568394e-15))
"""
a, b, c, d = calcCubicParameters(pt1, pt2, pt3, pt4)
solutions = solveCubic(a[isHorizontal], b[isHorizontal], c[isHorizontal],
d[isHorizontal] - where)
solutions = [t for t in solutions if 0 <= t < 1]
solutions.sort()
if not solutions:
return [(pt1, pt2, pt3, pt4)]
return _splitCubicAtT(a, b, c, d, *solutions)
def splitQuadraticAtT(pt1, pt2, pt3, *ts):
"""Split the quadratic curve between pt1, pt2 and pt3 at one or more
values of t. Return a list of curve segments.
>>> printSegments(splitQuadraticAtT((0, 0), (50, 100), (100, 0), 0.5))
((0.0, 0.0), (25.0, 50.0), (50.0, 50.0))
((50.0, 50.0), (75.0, 50.0), (100.0, 0.0))
>>> printSegments(splitQuadraticAtT((0, 0), (50, 100), (100, 0), 0.5, 0.75))
((0.0, 0.0), (25.0, 50.0), (50.0, 50.0))
((50.0, 50.0), (62.5, 50.0), (75.0, 37.5))
((75.0, 37.5), (87.5, 25.0), (100.0, 0.0))
"""
a, b, c = calcQuadraticParameters(pt1, pt2, pt3)
return _splitQuadraticAtT(a, b, c, *ts)
def splitCubicAtT(pt1, pt2, pt3, pt4, *ts):
"""Split the cubic curve between pt1, pt2, pt3 and pt4 at one or more
values of t. Return a list of curve segments.
>>> printSegments(splitCubicAtT((0, 0), (25, 100), (75, 100), (100, 0), 0.5))
((0.0, 0.0), (12.5, 50.0), (31.25, 75.0), (50.0, 75.0))
((50.0, 75.0), (68.75, 75.0), (87.5, 50.0), (100.0, 0.0))
>>> printSegments(splitCubicAtT((0, 0), (25, 100), (75, 100), (100, 0), 0.5, 0.75))
((0.0, 0.0), (12.5, 50.0), (31.25, 75.0), (50.0, 75.0))
((50.0, 75.0), (59.375, 75.0), (68.75, 68.75), (77.34375, 56.25))
((77.34375, 56.25), (85.9375, 43.75), (93.75, 25.0), (100.0, 0.0))
"""
a, b, c, d = calcCubicParameters(pt1, pt2, pt3, pt4)
return _splitCubicAtT(a, b, c, d, *ts)
def _splitQuadraticAtT(a, b, c, *ts):
ts = list(ts)
segments = []
ts.insert(0, 0.0)
ts.append(1.0)
for i in range(len(ts) - 1):
t1 = ts[i]
t2 = ts[i+1]
delta = (t2 - t1)
# calc new a, b and c
a1 = a * delta**2
b1 = (2*a*t1 + b) * delta
c1 = a*t1**2 + b*t1 + c
pt1, pt2, pt3 = calcQuadraticPoints(a1, b1, c1)
segments.append((pt1, pt2, pt3))
return segments
def _splitCubicAtT(a, b, c, d, *ts):
ts = list(ts)
ts.insert(0, 0.0)
ts.append(1.0)
segments = []
for i in range(len(ts) - 1):
t1 = ts[i]
t2 = ts[i+1]
delta = (t2 - t1)
# calc new a, b, c and d
a1 = a * delta**3
b1 = (3*a*t1 + b) * delta**2
c1 = (2*b*t1 + c + 3*a*t1**2) * delta
d1 = a*t1**3 + b*t1**2 + c*t1 + d
pt1, pt2, pt3, pt4 = calcCubicPoints(a1, b1, c1, d1)
segments.append((pt1, pt2, pt3, pt4))
return segments
#
# Equation solvers.
#
from math import sqrt, acos, cos, pi
def solveQuadratic(a, b, c,
sqrt=sqrt):
"""Solve a quadratic equation where a, b and c are real.
a*x*x + b*x + c = 0
This function returns a list of roots. Note that the returned list
is neither guaranteed to be sorted nor to contain unique values!
"""
if abs(a) < epsilon:
if abs(b) < epsilon:
# We have a non-equation; therefore, we have no valid solution
roots = []
else:
# We have a linear equation with 1 root.
roots = [-c/b]
else:
# We have a true quadratic equation. Apply the quadratic formula to find two roots.
DD = b*b - 4.0*a*c
if DD >= 0.0:
rDD = sqrt(DD)
roots = [(-b+rDD)/2.0/a, (-b-rDD)/2.0/a]
else:
# complex roots, ignore
roots = []
return roots
def solveCubic(a, b, c, d,
abs=abs, pow=pow, sqrt=sqrt, cos=cos, acos=acos, pi=pi):
"""Solve a cubic equation where a, b, c and d are real.
a*x*x*x + b*x*x + c*x + d = 0
This function returns a list of roots. Note that the returned list
is neither guaranteed to be sorted nor to contain unique values!
"""
#
# adapted from:
# CUBIC.C - Solve a cubic polynomial
# public domain by Ross Cottrell
# found at: http://www.strangecreations.com/library/snippets/Cubic.C
#
if abs(a) < epsilon:
# don't just test for zero; for very small values of 'a' solveCubic()
# returns unreliable results, so we fall back to quad.
return solveQuadratic(b, c, d)
a = float(a)
a1 = b/a
a2 = c/a
a3 = d/a
Q = (a1*a1 - 3.0*a2)/9.0
R = (2.0*a1*a1*a1 - 9.0*a1*a2 + 27.0*a3)/54.0
R2_Q3 = R*R - Q*Q*Q
if R2_Q3 < 0:
theta = acos(R/sqrt(Q*Q*Q))
rQ2 = -2.0*sqrt(Q)
x0 = rQ2*cos(theta/3.0) - a1/3.0
x1 = rQ2*cos((theta+2.0*pi)/3.0) - a1/3.0
x2 = rQ2*cos((theta+4.0*pi)/3.0) - a1/3.0
return [x0, x1, x2]
else:
if Q == 0 and R == 0:
x = 0
else:
x = pow(sqrt(R2_Q3)+abs(R), 1/3.0)
x = x + Q/x
if R >= 0.0:
x = -x
x = x - a1/3.0
return [x]
#
# Conversion routines for points to parameters and vice versa
#
def calcQuadraticParameters(pt1, pt2, pt3):
pt1, pt2, pt3 = numpy.array((pt1, pt2, pt3))
c = pt1
b = (pt2 - c) * 2.0
a = pt3 - c - b
return a, b, c
def calcCubicParameters(pt1, pt2, pt3, pt4):
pt1, pt2, pt3, pt4 = numpy.array((pt1, pt2, pt3, pt4))
d = pt1
c = (pt2 - d) * 3.0
b = (pt3 - pt2) * 3.0 - c
a = pt4 - d - c - b
return a, b, c, d
def calcQuadraticPoints(a, b, c):
pt1 = c
pt2 = (b * 0.5) + c
pt3 = a + b + c
return pt1, pt2, pt3
def calcCubicPoints(a, b, c, d):
pt1 = d
pt2 = (c / 3.0) + d
pt3 = (b + c) / 3.0 + pt2
pt4 = a + d + c + b
return pt1, pt2, pt3, pt4
def _segmentrepr(obj):
"""
>>> _segmentrepr([1, [2, 3], [], [[2, [3, 4], numpy.array([0.1, 2.2])]]])
'(1, (2, 3), (), ((2, (3, 4), (0.1, 2.2))))'
"""
try:
it = iter(obj)
except TypeError:
return str(obj)
else:
return "(%s)" % ", ".join([_segmentrepr(x) for x in it])
def printSegments(segments):
"""Helper for the doctests, displaying each segment in a list of
segments on a single line as a tuple.
"""
for segment in segments:
		print(_segmentrepr(segment))
if __name__ == "__main__":
import doctest
doctest.testmod()
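A standalone numeric check (a restatement for illustration, not part of fontTools) exercising the three branches of the quadratic solver above — linear fallback, two real roots, and complex roots ignored:

```python
from math import sqrt

EPSILON = 1e-12

def solve_quadratic(a, b, c):
    # Mirrors solveQuadratic above: fall back to a linear equation when
    # 'a' is tiny, apply the quadratic formula for real roots, and
    # return an empty list when the discriminant is negative.
    if abs(a) < EPSILON:
        return [] if abs(b) < EPSILON else [-c / b]
    dd = b * b - 4.0 * a * c
    if dd < 0.0:
        return []  # complex roots are ignored, as in the original
    r = sqrt(dd)
    return [(-b + r) / 2.0 / a, (-b - r) / 2.0 / a]

assert sorted(solve_quadratic(1, -5, 6)) == [2.0, 3.0]  # x^2 - 5x + 6: roots 2 and 3
assert solve_quadratic(0, 2, -4) == [2.0]               # linear branch: 2x - 4 = 0
assert solve_quadratic(1, 0, 1) == []                   # x^2 + 1: complex roots
```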
| 30.31694 | 114 | 0.594809 | 1,978 | 11,096 | 3.324065 | 0.145602 | 0.034677 | 0.025551 | 0.018251 | 0.484867 | 0.426464 | 0.385856 | 0.377643 | 0.358327 | 0.314829 | 0 | 0.175217 | 0.200703 | 11,096 | 365 | 115 | 30.4 | 0.566129 | 0.057769 | 0 | 0.25 | 0 | 0 | 0.026978 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.021739 | null | null | 0.01087 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
96461908df787c4f715fcb78e9b9a2b6846a1ccf | 15,328 | py | Python | hypha/apply/projects/models/project.py | slifty/hypha | 93313933c26589858beb9a861e33431658cd3b24 | [
"BSD-3-Clause"
] | null | null | null | hypha/apply/projects/models/project.py | slifty/hypha | 93313933c26589858beb9a861e33431658cd3b24 | [
"BSD-3-Clause"
] | null | null | null | hypha/apply/projects/models/project.py | slifty/hypha | 93313933c26589858beb9a861e33431658cd3b24 | [
"BSD-3-Clause"
] | null | null | null | import collections
import decimal
import json
import logging
from django.apps import apps
from django.conf import settings
from django.contrib.contenttypes.fields import GenericRelation
from django.contrib.postgres.fields import JSONField
from django.core.exceptions import ValidationError
from django.core.validators import MinValueValidator
from django.db import models
from django.db.models import Count, F, Max, OuterRef, Subquery, Sum, Value
from django.db.models.functions import Cast, Coalesce
from django.db.models.signals import post_delete
from django.dispatch.dispatcher import receiver
from django.urls import reverse
from django.utils import timezone
from django.utils.translation import gettext_lazy as _
from wagtail.contrib.settings.models import BaseSetting, register_setting
from wagtail.core.fields import StreamField
from addressfield.fields import ADDRESS_FIELDS_ORDER
from hypha.apply.funds.models.mixins import AccessFormData
from hypha.apply.stream_forms.blocks import FormFieldsBlock
from hypha.apply.stream_forms.files import StreamFieldDataEncoder
from hypha.apply.stream_forms.models import BaseStreamForm
from hypha.apply.utils.storage import PrivateStorage
from .vendor import Vendor
logger = logging.getLogger(__name__)
def contract_path(instance, filename):
    return f'projects/{instance.project_id}/contracts/{filename}'
def document_path(instance, filename):
    return f'projects/{instance.project_id}/supporting_documents/{filename}'
COMMITTED = 'committed'
CONTRACTING = 'contracting'
IN_PROGRESS = 'in_progress'
CLOSING = 'closing'
COMPLETE = 'complete'
PROJECT_STATUS_CHOICES = [
(COMMITTED, _('Committed')),
(CONTRACTING, _('Contracting')),
(IN_PROGRESS, _('In Progress')),
(CLOSING, _('Closing')),
(COMPLETE, _('Complete')),
]
class ProjectQuerySet(models.QuerySet):
def active(self):
# Projects that are not finished.
return self.exclude(status=COMPLETE)
def in_progress(self):
        # Projects that users need to interact with, submitting reports or payment requests.
return self.filter(
status__in=(IN_PROGRESS, CLOSING,)
)
def complete(self):
return self.filter(status=COMPLETE)
def in_approval(self):
return self.filter(
is_locked=True,
status=COMMITTED,
approvals__isnull=True,
)
def by_end_date(self, desc=False):
order = getattr(F('proposed_end'), 'desc' if desc else 'asc')(nulls_last=True)
return self.order_by(order)
def with_amount_paid(self):
return self.annotate(
amount_paid=Coalesce(Sum('invoices__paid_value'), Value(0)),
)
def with_last_payment(self):
return self.annotate(
last_payment_request=Max('invoices__requested_at'),
)
def with_outstanding_reports(self):
Report = apps.get_model('application_projects', 'Report')
return self.annotate(
outstanding_reports=Subquery(
Report.objects.filter(
project=OuterRef('pk'),
).to_do().order_by().values('project').annotate(
count=Count('pk'),
).values('count'),
output_field=models.IntegerField(),
)
)
def with_start_date(self):
return self.annotate(
start=Cast(
Subquery(
Contract.objects.filter(
project=OuterRef('pk'),
).approved().order_by(
'approved_at'
).values('approved_at')[:1]
),
models.DateField(),
)
)
def for_table(self):
return self.with_amount_paid().with_last_payment().with_outstanding_reports().select_related(
'report_config',
'submission__page',
'lead',
)
class Project(BaseStreamForm, AccessFormData, models.Model):
lead = models.ForeignKey(settings.AUTH_USER_MODEL, null=True, on_delete=models.SET_NULL, related_name='lead_projects')
submission = models.OneToOneField("funds.ApplicationSubmission", on_delete=models.CASCADE)
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.SET_NULL, null=True, related_name='owned_projects')
title = models.TextField()
vendor = models.ForeignKey(
"application_projects.Vendor",
on_delete=models.SET_NULL,
null=True, blank=True, related_name='projects'
)
value = models.DecimalField(
default=0,
max_digits=10,
decimal_places=2,
validators=[MinValueValidator(decimal.Decimal('0.01'))],
)
proposed_start = models.DateTimeField(_('Proposed Start Date'), null=True)
proposed_end = models.DateTimeField(_('Proposed End Date'), null=True)
status = models.TextField(choices=PROJECT_STATUS_CHOICES, default=COMMITTED)
form_data = JSONField(encoder=StreamFieldDataEncoder, default=dict)
form_fields = StreamField(FormFieldsBlock(), null=True)
# tracks read/write state of the Project
is_locked = models.BooleanField(default=False)
# tracks updates to the Projects fields via the Project Application Form.
user_has_updated_details = models.BooleanField(default=False)
activities = GenericRelation(
'activity.Activity',
content_type_field='source_content_type',
object_id_field='source_object_id',
related_query_name='project',
)
created_at = models.DateTimeField(auto_now_add=True)
sent_to_compliance_at = models.DateTimeField(null=True)
objects = ProjectQuerySet.as_manager()
def __str__(self):
return self.title
@property
def status_display(self):
return self.get_status_display()
def get_address_display(self):
try:
address = json.loads(self.vendor.address)
except (json.JSONDecodeError, AttributeError):
return ''
else:
return ', '.join(
address.get(field)
for field in ADDRESS_FIELDS_ORDER
if address.get(field)
)
@classmethod
def create_from_submission(cls, submission):
"""
Create a Project from the given submission.
Returns a new Project or the given ApplicationSubmissions existing
Project.
"""
if not settings.PROJECTS_ENABLED:
logging.error(f'Tried to create a Project for Submission ID={submission.id} while projects are disabled')
return None
        # OneToOne relations on the targeted model cannot be accessed without
# an exception when the relation doesn't exist (is None). Since we
# want to fail fast here, we can use hasattr instead.
if hasattr(submission, 'project'):
return submission.project
# See if there is a form field named "legal name", if not use user name.
legal_name = submission.get_answer_from_label('legal name') or submission.user.full_name
vendor, _ = Vendor.objects.get_or_create(
user=submission.user
)
vendor.name = legal_name
vendor.address = submission.form_data.get('address', '')
vendor.save()
return Project.objects.create(
submission=submission,
user=submission.user,
title=submission.title,
vendor=vendor,
value=submission.form_data.get('value', 0),
)
@property
def start_date(self):
# Assume project starts when OTF are happy with the first signed contract
first_approved_contract = self.contracts.approved().order_by('approved_at').first()
if not first_approved_contract:
return None
return first_approved_contract.approved_at.date()
@property
def end_date(self):
# Aiming for the proposed end date as the last day of the project
# If still ongoing assume today is the end
return max(
self.proposed_end.date(),
timezone.now().date(),
)
def paid_value(self):
return self.invoices.paid_value()
def unpaid_value(self):
return self.invoices.unpaid_value()
def clean(self):
if self.proposed_start is None:
return
if self.proposed_end is None:
return
if self.proposed_start > self.proposed_end:
raise ValidationError(_('Proposed End Date must be after Proposed Start Date'))
def save(self, *args, **kwargs):
creating = not self.pk
if creating:
files = self.extract_files()
else:
self.process_file_data(self.form_data)
super().save(*args, **kwargs)
if creating:
self.process_file_data(files)
def editable_by(self, user):
if self.editable:
return True
# Approver can edit it when they are approving
return user.is_approver and self.can_make_approval
@property
def editable(self):
if self.status not in (CONTRACTING, COMMITTED):
return True
# Someone has approved the project - consider it locked while with contracting
if self.approvals.exists():
return False
# Someone must lead the project to make changes
return self.lead and not self.is_locked
def get_absolute_url(self):
if settings.PROJECTS_ENABLED:
return reverse('apply:projects:detail', args=[self.id])
return '#'
@property
def can_make_approval(self):
return self.is_locked and self.status == COMMITTED
def can_request_funding(self):
"""
Should we show this Project's funding block?
"""
return self.status in (CLOSING, IN_PROGRESS)
@property
def can_send_for_approval(self):
"""
Wrapper to expose the pending approval state
We don't want to expose a "Sent for Approval" state to the end User so
we infer it from the current status being "Comitted" and the Project
being locked.
"""
correct_state = self.status == COMMITTED and not self.is_locked
return correct_state and self.user_has_updated_details
@property
def requires_approval(self):
return not self.approvals.exists()
def get_missing_document_categories(self):
"""
        Get the number of documents required to meet each DocumentCategory's minimum
"""
# Count the number of documents in each category currently
existing_categories = DocumentCategory.objects.filter(packet_files__project=self)
counter = collections.Counter(existing_categories)
# Find the difference between the current count and recommended count
for category in DocumentCategory.objects.all():
current_count = counter[category]
difference = category.recommended_minimum - current_count
if difference > 0:
yield {
'category': category,
'difference': difference,
}
@property
def is_in_progress(self):
return self.status == IN_PROGRESS
@property
def has_deliverables(self):
return self.deliverables.exists()
# def send_to_compliance(self, request):
# """Notify Compliance about this Project."""
# messenger(
# MESSAGES.SENT_TO_COMPLIANCE,
# request=request,
# user=request.user,
# source=self,
# )
# self.sent_to_compliance_at = timezone.now()
# self.save(update_fields=['sent_to_compliance_at'])
@register_setting
class ProjectSettings(BaseSetting):
compliance_email = models.TextField("Compliance Email")
vendor_setup_required = models.BooleanField(default=True)
class Approval(models.Model):
project = models.ForeignKey("Project", on_delete=models.CASCADE, related_name="approvals")
by = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name="approvals")
created_at = models.DateTimeField(auto_now_add=True)
class Meta:
unique_together = ['project', 'by']
def __str__(self):
return _('Approval of {project} by {user}').format(project=self.project, user=self.by)
class ContractQuerySet(models.QuerySet):
def approved(self):
return self.filter(is_signed=True, approver__isnull=False)
class Contract(models.Model):
approver = models.ForeignKey(settings.AUTH_USER_MODEL, null=True, on_delete=models.SET_NULL, related_name='contracts')
project = models.ForeignKey("Project", on_delete=models.CASCADE, related_name="contracts")
file = models.FileField(upload_to=contract_path, storage=PrivateStorage())
is_signed = models.BooleanField("Signed?", default=False)
created_at = models.DateTimeField(auto_now_add=True)
approved_at = models.DateTimeField(null=True)
objects = ContractQuerySet.as_manager()
@property
def state(self):
return _('Signed') if self.is_signed else _('Unsigned')
def __str__(self):
return _('Contract for {project} ({state})').format(project=self.project, state=self.state)
def get_absolute_url(self):
return reverse('apply:projects:contract', args=[self.project.pk, self.pk])
class PacketFile(models.Model):
category = models.ForeignKey("DocumentCategory", null=True, on_delete=models.CASCADE, related_name="packet_files")
project = models.ForeignKey("Project", on_delete=models.CASCADE, related_name="packet_files")
title = models.TextField()
document = models.FileField(upload_to=document_path, storage=PrivateStorage())
def __str__(self):
return _('Project file: {title}').format(title=self.title)
def get_remove_form(self):
"""
Get an instantiated RemoveDocumentForm with this class as `instance`.
This allows us to build instances of the RemoveDocumentForm for each
instance of PacketFile in the supporting documents template. The
standard Delegated View flow makes it difficult to create these forms
in the view or template.
"""
from ..forms import RemoveDocumentForm
return RemoveDocumentForm(instance=self)
@receiver(post_delete, sender=PacketFile)
def delete_packetfile_file(sender, instance, **kwargs):
# Remove the file and don't save the base model
instance.document.delete(False)
class DocumentCategory(models.Model):
name = models.CharField(max_length=254)
recommended_minimum = models.PositiveIntegerField()
def __str__(self):
return self.name
class Meta:
ordering = ('name',)
verbose_name_plural = 'Document Categories'
class Deliverable(models.Model):
name = models.TextField()
available_to_invoice = models.IntegerField(default=1)
unit_price = models.DecimalField(
max_digits=10,
decimal_places=2,
validators=[MinValueValidator(decimal.Decimal('0.01'))],
)
project = models.ForeignKey(
Project,
null=True, blank=True,
on_delete=models.CASCADE,
related_name='deliverables'
)
def __str__(self):
return self.name
| 32.892704 | 123 | 0.666819 | 1,757 | 15,328 | 5.639727 | 0.223677 | 0.022202 | 0.022606 | 0.014835 | 0.17368 | 0.134827 | 0.117469 | 0.105359 | 0.087193 | 0.066404 | 0 | 0.001811 | 0.24328 | 15,328 | 465 | 124 | 32.963441 | 0.852487 | 0.13459 | 0 | 0.162939 | 0 | 0 | 0.081367 | 0.017852 | 0 | 0 | 0 | 0 | 0 | 1 | 0.134185 | false | 0 | 0.089457 | 0.086262 | 0.536741 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
9646e8e2458a9b399cec0bf5ce7ece6cbbdffad6 | 1,938 | py | Python | temp/src/square.py | wvu-irl/smart-2 | b39b6d477b5259b3bf0d96180a154ee1dafae0ac | [
"MIT"
] | null | null | null | temp/src/square.py | wvu-irl/smart-2 | b39b6d477b5259b3bf0d96180a154ee1dafae0ac | [
"MIT"
] | null | null | null | temp/src/square.py | wvu-irl/smart-2 | b39b6d477b5259b3bf0d96180a154ee1dafae0ac | [
"MIT"
] | null | null | null |
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist
from math import radians
import os
import numpy as np
from nav_msgs.msg import Odometry
class DrawASquare():
def __init__(self):
# initialize
rospy.init_node('drawasquare', anonymous=True)
# What to do on Ctrl+C
rospy.on_shutdown(self.shutdown)
self.cmd_vel = rospy.Publisher('cmd_vel', Twist, queue_size=10)
# 10 HZ
r = rospy.Rate(10);
# create two different Twist() variables. One for moving forward. One for turning 45 degrees.
# let's go forward at 0.2 m/s
move_cmd = Twist()
move_cmd.linear.x = 0.25
# by default angular.z is 0 so setting this isn't required
#let's turn at 45 deg/s
turn_cmd = Twist()
turn_cmd.linear.x = 0
turn_cmd.angular.z = radians(45); #45 deg/s in radians/s
# to keep drawing squares: go forward for 4 seconds (40 ticks at 10 Hz), then turn for 2 seconds
count = 0
while not rospy.is_shutdown():
# go forward 1 m (4 seconds * 0.25 m/s)
rospy.loginfo("Going Straight")
for x in range(0,40):
self.cmd_vel.publish(move_cmd)
r.sleep()
# turn 90 degrees
rospy.loginfo("Turning")
for x in range(0,20):
self.cmd_vel.publish(turn_cmd)
r.sleep()
count = count + 1
if(count == 4):
# four sides drawn: stop the robot and exit the loop
self.shutdown()
rospy.loginfo("TurtleBot should be close to the original starting position (but it's probably way off)")
break
def shutdown(self):
# stop turtlebot
rospy.loginfo("Stop Drawing Squares")
self.cmd_vel.publish(Twist())
rospy.sleep(1)
if __name__ == '__main__':
try:
DrawASquare()
except rospy.ROSInterruptException:
rospy.loginfo("node terminated.")
| 27.685714 | 120 | 0.583591 | 270 | 1,938 | 4.077778 | 0.448148 | 0.027248 | 0.036331 | 0.046322 | 0.021798 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034928 | 0.320433 | 1,938 | 69 | 121 | 28.086957 | 0.801063 | 0.237358 | 0 | 0.04878 | 0 | 0 | 0.1162 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.146341 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
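The open-loop timing in `DrawASquare` can be sanity-checked with plain arithmetic. This is a stand-alone check (constants copied from the script above), not part of the ROS node:

```python
from math import radians, isclose

rate_hz = 10               # publishing rate used by the node
forward_speed = 0.25       # m/s, move_cmd.linear.x
forward_ticks = 40         # loop iterations spent going straight
turn_speed = radians(45)   # rad/s, turn_cmd.angular.z
turn_ticks = 20            # loop iterations spent turning

side_length = forward_speed * forward_ticks / rate_hz
turn_angle = turn_speed * turn_ticks / rate_hz

print(side_length)                       # 1.0 -> one metre per side
print(isclose(turn_angle, radians(90)))  # True -> a quarter turn per corner
```

Because the control is open-loop (no odometry feedback inside the loop), any wheel slip accumulates, which is why the final log message hedges about the robot's end position.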
964a8ebce3df5d896031c77dad18e3a15b609702 | 527 | py | Python | tests/test_wps_dummy.py | f-PLT/emu | c0bb27d57afcaa361772ce99eaf11f706983b3b2 | [
"Apache-2.0"
] | 3 | 2015-11-10T10:08:07.000Z | 2019-09-09T20:41:25.000Z | tests/test_wps_dummy.py | f-PLT/emu | c0bb27d57afcaa361772ce99eaf11f706983b3b2 | [
"Apache-2.0"
] | 76 | 2015-02-01T23:17:17.000Z | 2021-12-20T14:17:59.000Z | tests/test_wps_dummy.py | f-PLT/emu | c0bb27d57afcaa361772ce99eaf11f706983b3b2 | [
"Apache-2.0"
] | 8 | 2016-10-13T16:44:02.000Z | 2020-12-22T18:36:53.000Z |
from pywps import Service
from pywps.tests import assert_response_success
from .common import client_for, get_output
from emu.processes.wps_dummy import Dummy
def test_wps_dummy():
client = client_for(Service(processes=[Dummy()]))
datainputs = "input1=10;input2=2"
resp = client.get(
service='WPS', request='Execute', version='1.0.0',
identifier='dummyprocess',
datainputs=datainputs)
assert_response_success(resp)
assert get_output(resp.xml) == {'output1': '11', 'output2': '1'}
| 31 | 68 | 0.705882 | 68 | 527 | 5.308824 | 0.529412 | 0.049862 | 0.116343 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029613 | 0.166983 | 527 | 16 | 69 | 32.9375 | 0.792711 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0.230769 | 1 | 0.076923 | false | 0 | 0.307692 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
964ae4268e2f7a93ee8eacf634fa2376a1e04d95 | 476 | py | Python | test/talker.py | cjds/rosgo | 2a832421948707baca6413fe4394e28ed0c36d86 | [
"Apache-2.0"
] | 148 | 2016-02-16T18:29:34.000Z | 2022-03-18T13:13:46.000Z | test/talker.py | cjds/rosgo | 2a832421948707baca6413fe4394e28ed0c36d86 | [
"Apache-2.0"
] | 24 | 2018-12-21T19:32:15.000Z | 2021-01-20T00:27:51.000Z | test/talker.py | cjds/rosgo | 2a832421948707baca6413fe4394e28ed0c36d86 | [
"Apache-2.0"
] | 45 | 2015-11-16T06:31:10.000Z | 2022-03-28T12:46:44.000Z |
#!/usr/bin/env python
import rospy
from std_msgs.msg import String
def talker():
pub = rospy.Publisher('chatter', String, queue_size=10)
rospy.init_node('talker', anonymous=True)
while not rospy.is_shutdown():
msg = "%s: hello world %s" % (rospy.get_name(), rospy.get_time())
rospy.loginfo(msg)
pub.publish(String(msg))
rospy.sleep(1.0)
if __name__ == '__main__':
try:
talker()
except rospy.ROSInterruptException:
pass
| 22.666667 | 73 | 0.628151 | 61 | 476 | 4.688525 | 0.688525 | 0.055944 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005495 | 0.235294 | 476 | 20 | 74 | 23.8 | 0.78022 | 0.042017 | 0 | 0 | 0 | 0 | 0.085714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0.066667 | 0.133333 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
964b60ee1051cb3579b95c9af76b42037448ddeb | 9,365 | py | Python | point_to_box/model.py | BavarianToolbox/point_to_box | 6769739361410499596f53a60704cbedae56bd81 | [
"Apache-2.0"
] | null | null | null | point_to_box/model.py | BavarianToolbox/point_to_box | 6769739361410499596f53a60704cbedae56bd81 | [
"Apache-2.0"
] | null | null | null | point_to_box/model.py | BavarianToolbox/point_to_box | 6769739361410499596f53a60704cbedae56bd81 | [
"Apache-2.0"
] | null | null | null |
# AUTOGENERATED! DO NOT EDIT! File to edit: nbs/02_model.ipynb (unless otherwise specified).
__all__ = ['EfficientLoc', 'CIoU']
# Cell
#export
from efficientnet_pytorch import EfficientNet
import copy
import time
import math
import torch
import torch.optim as opt
from torch.utils.data import DataLoader
from torchvision import transforms
# Cell
class EfficientLoc():
def __init__(self, version = 'efficientnet-b0', in_channels = 4, out_features = 4, export = False):
"""
EfficientLoc model class for loading, training, and exporting models
"""
self.version = version
# self.inter_channels = version_dict[version]
# TODO
# check version is compliant
self.in_channels = in_channels
self.out_features = out_features
self.export = export
self.device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
self.data_parallel = False
self.model = self.get_model(version = self.version,
in_channels = self.in_channels, out_features = self.out_features)
def get_model(self, version, in_channels, out_features):
"""
Adjusts efficient net model architecture for point-to-box data
"""
version_chnls = {
'efficientnet-b0': 1280,
'efficientnet-b1': 1280,
'efficientnet-b2': 1408,
'efficientnet-b3': 1536,
'efficientnet-b4': 1792
# 'efficientnet-b5': 456
# 'efficientnet-b6': 528
# 'efficientnet-b7': 600
# 'efficientnet-b8': 672
# 'efficientnet-l2': 800
}
inter_channel = version_chnls[version]
model = EfficientNet.from_pretrained(version, include_top = False)
# adjust the number of input channels in the conv stem
model._change_in_channels(in_channels)
# if self.export:
model.set_swish(memory_efficient= (not self.export))
model = torch.nn.Sequential(
model,
# torch.nn.AdaptiveAvgPool2d(),
torch.nn.Dropout(0.2),
torch.nn.Flatten(),
torch.nn.Linear(inter_channel, out_features),
# torch.nn.Linear(100, out_features),
torch.nn.Sigmoid()
)
for param in model.parameters():
param.requires_grad = True
if torch.cuda.device_count() > 1:
print(f'Using {torch.cuda.device_count()} GPUs')
model = torch.nn.DataParallel(model)
self.data_parallel = True
model.to(self.device)
return model
def train(self, dataloaders, criterion, optimizer, num_epochs, ds_sizes, print_every = 100, scheduler=None):
"""
Training function for model
**Params**
loaders : dict of val/train DataLoaders
criterion : loss function
optimizer : training optimizer
num_epochs : number of training epochs
ds_sizes : dict of number of samples in each dataset split
print_every : batch_interval for intermediate loss printing
scheduler : Optional learning rate scheduler
"""
train_start = time.time()
best_model_wts = copy.deepcopy(self.model.state_dict())
best_loss = 10000000.0
for epoch in range(num_epochs):
print(f'Epoch {epoch + 1}/{num_epochs}')
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'val']:
phase_start = time.time()
if phase == 'train':
self.model.train()
else:
self.model.eval()
inter_loss = 0.
running_loss = 0.
batches_past = 0
# Iterate over data.
for i, (inputs, labels) in enumerate(dataloaders[phase]):
inputs = inputs.to(self.device)
labels = labels.to(self.device)
# zero the parameter gradients
optimizer.zero_grad()
# forward, only track history in train phase
with torch.set_grad_enabled(phase == 'train'):
outputs = self.model(inputs)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
running_loss += loss.item()
inter_loss += loss.item()
if (i+1) % print_every == 0:
inter_loss = inter_loss / ((i+1-batches_past) * inputs.shape[0])
print(f'Intermediate loss: {inter_loss:.6f}')
inter_loss = 0.
batches_past = i+1
if phase == 'train' and scheduler is not None:
scheduler.step()
epoch_loss = running_loss / ds_sizes[phase]
phase_duration = time.time() - phase_start
phase_duration = f'{(phase_duration // 60):.0f}m {(phase_duration % 60):.0f}s'
print('-' * 5)
print(f'{phase} Phase Duration: {phase_duration} Average Loss: {epoch_loss:.6f}')
print('-' * 5)
# deep copy the model
if phase == 'val' and epoch_loss < best_loss:
best_loss = epoch_loss
best_model_wts = copy.deepcopy(self.model.state_dict())
time_elapsed = time.time() - train_start
print(f'Training complete in {(time_elapsed // 60):.0f}m {(time_elapsed % 60):.0f}s')
print(f'Best val Loss: {best_loss:.4f}')
# load best model weights
self.model.load_state_dict(best_model_wts)
def save(self, dst, info = None):
"""Save model and optimizer state dict
**Params**
dst : destination file path including .pth file name
info : Optional dictionary with model info
"""
if info:
torch.save(info, dst)
else:
model_dict = self.model.state_dict()
if self.data_parallel:
model_dict = self.model.module.state_dict()
torch.save({
'base_arch' : self.version,
'model_state_dict' : model_dict,
}, dst)
def load(self, model_state_dict):
"""Load model weights from state-dict"""
self.model.load_state_dict(model_state_dict)
def _export(self, dst, dummy, verbose = True):
"""Export model as onnx graph
**Params**
dst : destination including .onnx file name
dummy : dummy variable for export structure, shape (B,C,W,H)
"""
self.model.eval()
torch.onnx.export(self.model, dummy, dst, verbose = verbose)
# Cell
class CIoU(torch.nn.Module):
"""Complete IoU loss class"""
def __init__(self) -> None:
super(CIoU, self).__init__()
def forward(self, input: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
return self.ciou(input, target)
# return F.l1_loss(input, target, reduction=self.reduction)
def ciou(self, bboxes1, bboxes2):
bboxes1 = torch.sigmoid(bboxes1)
bboxes2 = torch.sigmoid(bboxes2)
rows = bboxes1.shape[0]
cols = bboxes2.shape[0]
cious = torch.zeros((rows, cols))
if rows * cols == 0:
return cious
exchange = False
if bboxes1.shape[0] > bboxes2.shape[0]:
bboxes1, bboxes2 = bboxes2, bboxes1
cious = torch.zeros((cols, rows))
exchange = True
w1 = torch.exp(bboxes1[:, 2])
h1 = torch.exp(bboxes1[:, 3])
w2 = torch.exp(bboxes2[:, 2])
h2 = torch.exp(bboxes2[:, 3])
area1 = w1 * h1
area2 = w2 * h2
center_x1 = bboxes1[:, 0]
center_y1 = bboxes1[:, 1]
center_x2 = bboxes2[:, 0]
center_y2 = bboxes2[:, 1]
inter_l = torch.max(center_x1 - w1 / 2,center_x2 - w2 / 2)
inter_r = torch.min(center_x1 + w1 / 2,center_x2 + w2 / 2)
inter_t = torch.max(center_y1 - h1 / 2,center_y2 - h2 / 2)
inter_b = torch.min(center_y1 + h1 / 2,center_y2 + h2 / 2)
inter_area = torch.clamp((inter_r - inter_l),min=0) * torch.clamp((inter_b - inter_t),min=0)
c_l = torch.min(center_x1 - w1 / 2,center_x2 - w2 / 2)
c_r = torch.max(center_x1 + w1 / 2,center_x2 + w2 / 2)
c_t = torch.min(center_y1 - h1 / 2,center_y2 - h2 / 2)
c_b = torch.max(center_y1 + h1 / 2,center_y2 + h2 / 2)
inter_diag = (center_x2 - center_x1)**2 + (center_y2 - center_y1)**2
c_diag = torch.clamp((c_r - c_l),min=0)**2 + torch.clamp((c_b - c_t),min=0)**2
union = area1+area2-inter_area
u = (inter_diag) / c_diag
iou = inter_area / union
v = (4 / (math.pi ** 2)) * torch.pow((torch.atan(w2 / h2) - torch.atan(w1 / h1)), 2)
with torch.no_grad():
S = (iou>0.5).float()
alpha= S*v/(1-iou+v)
cious = iou - u - alpha * v
cious = torch.clamp(cious,min=-1.0,max = 1.0)
if exchange:
cious = cious.T
return torch.sum(1-cious)
| 33.091873 | 112 | 0.553017 | 1,119 | 9,365 | 4.47185 | 0.243968 | 0.023381 | 0.016787 | 0.014388 | 0.078937 | 0.070144 | 0.070144 | 0.070144 | 0.070144 | 0.04996 | 0 | 0.036428 | 0.337533 | 9,365 | 283 | 113 | 33.091873 | 0.770148 | 0.173412 | 0 | 0.074534 | 1 | 0 | 0.068828 | 0.003601 | 0 | 0 | 0 | 0.003534 | 0 | 1 | 0.055901 | false | 0 | 0.049689 | 0.006211 | 0.142857 | 0.068323 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
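The geometric core of the `ciou` method above is the IoU term. A minimal pure-Python sketch of that term for two axis-aligned boxes in (cx, cy, w, h) form, mirroring the clamped-intersection logic without the torch dependency (illustrative only, not part of the module):

```python
def iou_cxcywh(b1, b2):
    """Plain IoU for two (cx, cy, w, h) boxes; overlap is clamped at zero."""
    (cx1, cy1, w1, h1), (cx2, cy2, w2, h2) = b1, b2
    # width/height of the intersection rectangle, clamped at zero when disjoint
    inter_w = max(0.0, min(cx1 + w1 / 2, cx2 + w2 / 2) - max(cx1 - w1 / 2, cx2 - w2 / 2))
    inter_h = max(0.0, min(cy1 + h1 / 2, cy2 + h2 / 2) - max(cy1 - h1 / 2, cy2 - h2 / 2))
    inter = inter_w * inter_h
    union = w1 * h1 + w2 * h2 - inter
    return inter / union

print(iou_cxcywh((0.5, 0.5, 1.0, 1.0), (0.5, 0.5, 1.0, 1.0)))  # 1.0 (identical boxes)
print(iou_cxcywh((0.0, 0.0, 1.0, 1.0), (5.0, 5.0, 1.0, 1.0)))  # 0.0 (disjoint boxes)
```

CIoU then subtracts a normalized center-distance term and an aspect-ratio penalty from this IoU, which is what the `inter_diag / c_diag` and `atan` expressions in the method compute.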
964d531f50577c7580159804463196dbab58e21c | 7,643 | py | Python | Receptive_Field_PyNN/2rtna_connected_to_4ReceptiveFields/anmy_TDXY.py | mahmoud-a-ali/Thesis_sample_codes | 02a912dd012291b00c89db195b4cba2ebb4d35fe | [
"MIT"
] | null | null | null | Receptive_Field_PyNN/2rtna_connected_to_4ReceptiveFields/anmy_TDXY.py | mahmoud-a-ali/Thesis_sample_codes | 02a912dd012291b00c89db195b4cba2ebb4d35fe | [
"MIT"
] | null | null | null | Receptive_Field_PyNN/2rtna_connected_to_4ReceptiveFields/anmy_TDXY.py | mahmoud-a-ali/Thesis_sample_codes | 02a912dd012291b00c89db195b4cba2ebb4d35fe | [
"MIT"
] | null | null | null |
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Sat Jun 2 13:09:55 2018
@author: mali
"""
#import time
import pickle
import pyNN.utility.plotting as plot
import matplotlib.pyplot as plt
import comn_conversion as cnvrt
import prnt_plt_anmy as ppanmy
# file and folder names =======================================================
fldr_name = 'rslts/icub64x64/'
pickle_filename = 'TDXY.pickle'
file_pth = cnvrt.read_flenfldr_ncrntpth(fldr_name, pickle_filename )
with open(file_pth , 'rb') as tdxy:
TDXY = pickle.load( tdxy )
print '### lenght of TDXY : {}'.format( len(TDXY) ) # 2+ 2*n_orn )
pop = TDXY[0]
t_ist = 1040
print 'check pop: L_rtna_TDXY'
print '### T : {}'.format(pop[0][t_ist]) # dimension 4 x t_stp x depend
print '### 1D : {}'.format(pop[1][t_ist]) # dimension 4 x t_stp x depend
print '### X : {}'.format(pop[2][t_ist]) # dimension 4 x t_stp x depend
print '### Y : {}'.format(pop[3][t_ist]) # dimension 4 x t_stp x depend
print pop[0]
print pop[1]
#required variables============================================================
n_rtna = 2 # for now this must be two
n_orn = 4
rtna_w = 64
rtna_h = 64
krnl_sz = 5
rf_w = rtna_w - krnl_sz +1
rf_h = rtna_h - krnl_sz +1
subplt_rws = n_rtna
subplt_cls = n_orn+1
########### speed up the animation, since the time scale is in microseconds ###############
# first, rescale time by dividing by 10 or 100 ======================================
T=TDXY[0][0]
t10u=T [0:T[-1]:100]
#print '### t_10u : {}'.format(t10u)
# second, find all times at which any rtna or rf population spikes ==================
t_spks=[]
for pop in range ( len(TDXY) ):
for inst in range( len(TDXY[pop][0]) ):
if TDXY[pop][2][inst]!=[] :
t_spks.append( TDXY[pop][0][inst] )
print pop, TDXY[pop][0][inst]
# drop duplicate spike times (removing items while iterating over the list skips elements)
t_spks = sorted(set(t_spks))
print 't_spks : {}'.format( t_spks )
#animate the rtna_rf =========================================================
#print 'abplt_rw, sbplt_cl, rtna_w, rtna_h, rf_w, rf_h: {}, {}, {}, {}, {}, {} '.format(subplt_rws, subplt_cls, rtna_w, rtna_h, rf_w, rf_h)
fig, axs = plt.subplots(subplt_rws, subplt_cls, sharex=False, sharey=False) #, figsize=(12,5))
axs = ppanmy.init_fig_mxn_sbplt_wxh_res (fig, axs, rtna_h, rtna_w, rf_w, rf_h, subplt_rws, subplt_cls)
plt.grid(True)
plt.show(block=False)
plt.pause(.01)
#for i in t_spks: #t10u:
# axs = ppanmy.init_fig_mxn_sbplt_wxh_res (fig, axs, rtna_h, rtna_w, rf_w, rf_h, subplt_rws, subplt_cls)
# plt.suptitle('rtna_rf_orn_3: t= {} usec'.format( i ) )
# if subplt_rws==1:
# axs[0].scatter( TDXY[0][2][i], TDXY[0][3][i] )
# for col in range (subplt_cls):
# axs[col].scatter( TDXY[col+1][2][i], TDXY[col+1][3][i] )
## plt.savefig( 'fgrs/anmy_1/{}_t{}.png'.format(vrjn, i) )
# plt.show(block=False)
# plt.pause(2)
# for col in range(subplt_cls):
# axs[col].cla()
#
# elif subplt_rws==2:
# for col in range (subplt_cls):
# axs[0][0].scatter( TDXY[0][2][i], TDXY[0][3][i] )
# axs[1][0].scatter( TDXY[1][2][i], TDXY[1][3][i] )
# for col in range(1,n_orn+1):
# row=0
# axs[row][col].scatter( TDXY[col+1][2][i], TDXY[col+1][3][i] )
# for col in range(1,n_orn):
# row=1
# axs[row][col].scatter( TDXY[n_orn+1+col][2][i], TDXY[n_orn+1+col][3][i] )
## plt.savefig( 'fgrs/anmy_1/{}_t{}.png'.format(vrjn, i) )
# plt.show(block=False)
# plt.pause(2)
# for row in range(subplt_rws):
# for col in range (subplt_cls):
# axs[row][col].cla()
#
print '##### required variables: \n n_rtna={}, TDXY_len={}, rtna_w={}, rtna_h={}, krnl_sz={}, rf_w={} , rf_h={}'.format( n_rtna , len(TDXY), rtna_w, rtna_h, krnl_sz, rf_w , rf_h )
plt.show(block=False)
last_t_spks=-310
for i in range( len(t_spks) ): #t10u:
# plt.pause(2)
if t_spks[i]-last_t_spks > 300:
#clear
if subplt_rws==2:
for row in range(subplt_rws):
for col in range (subplt_cls):
axs[row][col].cla()
elif subplt_rws==1:
for col in range(subplt_cls):
axs[col].cla()
axs = ppanmy.init_fig_mxn_sbplt_wxh_res (fig, axs, rtna_h, rtna_w, rf_w, rf_h, subplt_rws, subplt_cls)
plt.suptitle('rtna_rf_orn: t= {} usec'.format( t_spks[i] ) )
plt.pause(1.5)
#--------------------------------------------------------------------------
if subplt_rws==1:
axs[0].scatter( TDXY[0][2][t_spks[i]], TDXY[0][3][t_spks[i]] )
for col in range (subplt_cls):
axs[col].scatter( TDXY[col+1][2][t_spks[i]], TDXY[col+1][3][t_spks[i]] )
# plt.savefig( 'fgrs/anmy_1/{}_t{}.png'.format(vrjn, i) )
elif subplt_rws==2:
for col in range (subplt_cls):
axs[0][0].scatter( TDXY[0][2][t_spks[i]], TDXY[0][3][t_spks[i]] )
axs[1][0].scatter( TDXY[1][2][t_spks[i]], TDXY[1][3][t_spks[i]] )
for col in range(1,n_orn+1):
row=0
axs[row][col].scatter( TDXY[col+1][2][t_spks[i]], TDXY[col+1][3][t_spks[i]] )
for col in range(1,n_orn+1):
row=1
axs[row][col].scatter( TDXY[n_orn+1+col][2][t_spks[i]], TDXY[n_orn+1+col][3][t_spks[i]] )
# plt.savefig( 'fgrs/anmy_1/{}_t{}.png'.format(vrjn, i) )
#--------------------------------------------------------------------------
plt.pause(.5)
else: #====================================================================
#--------------------------------------------------------------------------
if subplt_rws==1:
axs[0].scatter( TDXY[0][2][t_spks[i]], TDXY[0][3][t_spks[i]] )
for col in range (subplt_cls):
axs[col].scatter( TDXY[col+1][2][t_spks[i]], TDXY[col+1][3][t_spks[i]] )
# plt.savefig( 'fgrs/anmy_1/{}_t{}.png'.format(vrjn, i) )
elif subplt_rws==2:
for col in range (subplt_cls):
axs[0][0].scatter( TDXY[0][2][t_spks[i]], TDXY[0][3][t_spks[i]] )
axs[1][0].scatter( TDXY[1][2][t_spks[i]], TDXY[1][3][t_spks[i]] )
for col in range(1,n_orn+1):
row=0
axs[row][col].scatter( TDXY[col+1][2][t_spks[i]], TDXY[col+1][3][t_spks[i]] )
for col in range(1,n_orn+1):
row=1
axs[row][col].scatter( TDXY[n_orn+1+col][2][t_spks[i]], TDXY[n_orn+1+col][3][t_spks[i]] )
# plt.savefig( 'fgrs/anmy_1/{}_t{}.png'.format(vrjn, i) )
#--------------------------------------------------------------------------
plt.pause(.5)
last_t_spks = t_spks[i]
# suing builtin animation function ===========================================
#strt_tm = TDXY[0][0][0]
#stop_tm = TDXY[0][0][-1]
#print '\n### n_orn x n_rtna : {}x{}'.format(n_orn, n_rtna)
#print '\n### strt_tm - stop_tm : {} - {}'.format(strt_tm, stop_tm)
#ppanmy.anmy_rtna_rf_orn( TDXY, rtna_h, rtna_w, n_rtna, krnl_sz, strt_tm , stop_tm)
| 37.282927 | 180 | 0.483056 | 1,137 | 7,643 | 3.061566 | 0.153034 | 0.057455 | 0.046538 | 0.059753 | 0.569951 | 0.562769 | 0.555587 | 0.551853 | 0.537489 | 0.52054 | 0 | 0.038607 | 0.271359 | 7,643 | 204 | 181 | 37.465686 | 0.586461 | 0.426665 | 0 | 0.378947 | 0 | 0.010526 | 0.062336 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.052632 | null | null | 0.115789 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
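Deduplicating the spike-time list deserves care: removing elements from a Python list while iterating over it skips neighbours once a value occurs three or more times. A stand-alone comparison of that pattern against the set-based form (illustrative, Python 3):

```python
def dedup_in_place(ts):
    """Remove-while-iterating dedup: unreliable for runs of 3+ duplicates."""
    ts = list(ts)
    for each in ts:          # the hidden index keeps advancing...
        if ts.count(each) > 1:
            ts.remove(each)  # ...while removal shifts later items left
    return ts

def dedup_sorted(ts):
    return sorted(set(ts))

spikes = [3, 3, 3, 3, 9]
print(dedup_in_place(spikes))  # [3, 3, 9] -- a duplicate survives
print(dedup_sorted(spikes))    # [3, 9]
```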
965034c03fdf2183dfe02406617dfa08e3bd353a | 1,153 | py | Python | recipes/Python/223585_Stable_deep_sorting_dottedindexed_attributes/recipe-223585.py | tdiprima/code | 61a74f5f93da087d27c70b2efe779ac6bd2a3b4f | [
"MIT"
] | 2,023 | 2017-07-29T09:34:46.000Z | 2022-03-24T08:00:45.000Z | recipes/Python/223585_Stable_deep_sorting_dottedindexed_attributes/recipe-223585.py | unhacker/code | 73b09edc1b9850c557a79296655f140ce5e853db | [
"MIT"
] | 32 | 2017-09-02T17:20:08.000Z | 2022-02-11T17:49:37.000Z | recipes/Python/223585_Stable_deep_sorting_dottedindexed_attributes/recipe-223585.py | unhacker/code | 73b09edc1b9850c557a79296655f140ce5e853db | [
"MIT"
] | 780 | 2017-07-28T19:23:28.000Z | 2022-03-25T20:39:41.000Z | def sortByAttrs(seq, attrs):
listComp = ['seq[:] = [(']
for attr in attrs:
listComp.append('seq[i].%s, ' % attr)
listComp.append('i, seq[i]) for i in xrange(len(seq))]')
exec('%s' % ''.join(listComp))
seq.sort()
seq[:] = [obj[-1] for obj in seq]
return
#
# begin test code
#
from random import randint
class a:
def __init__(self):
self.x = (randint(1, 5), randint(1, 5))
class b:
def __init__(self):
self.x = randint(1, 5)
self.y = (a(), a())
class c:
def __init__(self, arg):
self.x = arg
self.y = b()
if __name__ == '__main__':
aList = [c(1), c(2), c(3), c(4), c(5), c(6)]
print '\n...to be sorted by obj.y.y[1].x[1]'
print ' then, as needed, by obj.y.x'
print ' then, as needed, by obj.x\n\n ',
for i in range(6):
print '(' + str(aList[i].y.y[1].x[1]) + ',',
print str(aList[i].y.x) + ',',
print str(aList[i].x) + ') ',
sortByAttrs(aList, ['y.y[1].x[1]', 'y.x', 'x'])
print '\n\n...now sorted by listed attributes.\n\n ',
for i in range(6):
print '(' + str(aList[i].y.y[1].x[1]) + ',',
print str(aList[i].y.x) + ',',
print str(aList[i].x) + ') ',
print
#
# end test code
#
| 18.015625 | 58 | 0.542064 | 206 | 1,153 | 2.936893 | 0.286408 | 0.079339 | 0.128926 | 0.138843 | 0.383471 | 0.375207 | 0.294215 | 0.294215 | 0.21157 | 0.21157 | 0 | 0.025275 | 0.210755 | 1,153 | 63 | 59 | 18.301587 | 0.63956 | 0.025152 | 0 | 0.27027 | 0 | 0 | 0.221128 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.027027 | null | null | 0.297297 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
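The decorate-sort-undecorate trick above predates `key` functions. In modern Python the same stable multi-attribute sort is a `key=` one-liner; this sketch is not part of the original recipe, and for indexed paths like `y.y[1].x[1]` (which `operator.attrgetter` cannot express) a lambda key does the job:

```python
from operator import attrgetter

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

pts = [Point(2, 1), Point(1, 2), Point(1, 1)]
# sort by y first, then by x; list.sort is stable, so equal keys keep their order
pts.sort(key=attrgetter('y', 'x'))
print([(p.x, p.y) for p in pts])  # [(1, 1), (2, 1), (1, 2)]

# for dotted/indexed paths, use a lambda instead, e.g.:
# aList.sort(key=lambda o: (o.y.y[1].x[1], o.y.x, o.x))
```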
96519d3044209db1a7fd83988e9afafa3678e598 | 398 | py | Python | examples/compat/ggplot_point.py | azjps/bokeh | 13375db53d4c60216f3bcf5aacccb081cf19450a | [
"BSD-3-Clause"
] | 1 | 2017-04-27T09:15:48.000Z | 2017-04-27T09:15:48.000Z | app/static/libs/bokeh/examples/compat/ggplot_point.py | TBxy/bokeh_start_app | 755494f6bc60e92ce17022bbd7f707a39132cbd0 | [
"MIT"
] | null | null | null | app/static/libs/bokeh/examples/compat/ggplot_point.py | TBxy/bokeh_start_app | 755494f6bc60e92ce17022bbd7f707a39132cbd0 | [
"MIT"
] | 1 | 2021-09-09T03:33:04.000Z | 2021-09-09T03:33:04.000Z | from ggplot import aes, geom_point, ggplot, mtcars
import matplotlib.pyplot as plt
from pandas import DataFrame
from bokeh import mpl
from bokeh.plotting import output_file, show
g = ggplot(mtcars, aes(x='wt', y='mpg', color='qsec')) + geom_point()
g.make()
plt.title("Point ggplot-based plot in Bokeh.")
output_file("ggplot_point.html", title="ggplot_point.py example")
show(mpl.to_bokeh())
| 23.411765 | 69 | 0.751256 | 64 | 398 | 4.5625 | 0.546875 | 0.061644 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120603 | 398 | 16 | 70 | 24.875 | 0.834286 | 0 | 0 | 0 | 0 | 0 | 0.20603 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
9655150c478e5c7edceea8519f955d5cbf7c2792 | 3,604 | py | Python | BuyandBye_project/users/forms.py | sthasam2/BuyandBye | 07a998f289f9ae87b234cd6ca653a4fdb2765b95 | [
"MIT"
] | 1 | 2019-12-26T16:52:10.000Z | 2019-12-26T16:52:10.000Z | BuyandBye_project/users/forms.py | sthasam2/buyandbye | 07a998f289f9ae87b234cd6ca653a4fdb2765b95 | [
"MIT"
] | 13 | 2021-06-02T03:51:06.000Z | 2022-03-12T00:53:22.000Z | BuyandBye_project/users/forms.py | sthasam2/buyandbye | 07a998f289f9ae87b234cd6ca653a4fdb2765b95 | [
"MIT"
] | null | null | null | from datetime import date
from django import forms
from django.contrib.auth.forms import UserCreationForm
from django.contrib.auth.models import User
from phonenumber_field.formfields import PhoneNumberField
from .models import Profile
from .options import STATE_CHOICES, YEARS
from .utils import AgeValidator
class UserRegisterForm(UserCreationForm):
first_name = forms.CharField(
max_length=30, widget=forms.TextInput(attrs={"placeholder": "Given name"})
)
middle_name = forms.CharField(
max_length=30,
required=False,
widget=forms.TextInput(attrs={"placeholder": "Middle name"}),
)
last_name = forms.CharField(
max_length=30, widget=forms.TextInput(attrs={"placeholder": "Surname"})
)
date_of_birth = forms.DateField(
label="Date of Birth",
initial=date.today,  # callable: evaluated when the form renders, not at import time
required=True,
help_text="Age must be above 16",
validators=[AgeValidator],
widget=forms.SelectDateWidget(years=YEARS),
)
email = forms.EmailField(
max_length=150,
widget=forms.TextInput(attrs={"placeholder": "e.g. xyz@domain.com"}),
)
address1 = forms.CharField(
max_length=100,
help_text="Street, District",
widget=forms.TextInput(attrs={"placeholder": "Street, District"}),
)
address2 = forms.CharField(
max_length=100,
help_text="State",
widget=forms.Select(attrs={"placeholder": "State"}, choices=STATE_CHOICES),
)
phone = PhoneNumberField(
required=False,
initial="+977",
help_text="Phone number must contain country calling code (e.g. +97798XXYYZZSS)",
)
class Meta:
model = User
fields = [
"first_name",
"middle_name",
"last_name",
"date_of_birth",
"username",
"email",
"phone",
"password1",
"password2",
"address1",
"address2",
]
# widget={
# 'username': forms.TextInput(attrs={'placeholder': 'Enter desired username.'}),
# }
class UserUpdateForm(forms.ModelForm):
class Meta:
model = User
fields = [
"username",
]
class ProfileUpdateForm(forms.ModelForm):
first_name = forms.CharField(
max_length=30, widget=forms.TextInput(attrs={"placeholder": "Given name"})
)
middle_name = forms.CharField(
max_length=30,
required=False,
widget=forms.TextInput(attrs={"placeholder": "Middle name"}),
)
last_name = forms.CharField(
max_length=30, widget=forms.TextInput(attrs={"placeholder": "Surname"})
)
email = forms.EmailField(
max_length=150,
widget=forms.TextInput(attrs={"placeholder": "e.g. xyz@domain.com"}),
)
address1 = forms.CharField(
max_length=100,
help_text="Street, District",
widget=forms.TextInput(attrs={"placeholder": "Street, District"}),
)
address2 = forms.CharField(
max_length=100,
help_text="State",
widget=forms.Select(attrs={"placeholder": "State"}, choices=STATE_CHOICES),
)
phone = PhoneNumberField(
required=False,
help_text="Phone number must contain country calling code (e.g. +97798XXYYZZSS)",
)
class Meta:
model = Profile
fields = [
"first_name",
"middle_name",
"last_name",
"email",
"address1",
"address2",
"phone",
"image",
]
| 29.064516 | 92 | 0.59434 | 355 | 3,604 | 5.923944 | 0.253521 | 0.067998 | 0.099382 | 0.156919 | 0.664765 | 0.65145 | 0.65145 | 0.620067 | 0.620067 | 0.620067 | 0 | 0.021293 | 0.283296 | 3,604 | 123 | 93 | 29.300813 | 0.792877 | 0.025805 | 0 | 0.618182 | 0 | 0 | 0.184488 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.018182 | 0.072727 | 0 | 0.263636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
96565fe229818a242f95852b7feea959f0bbeb31 | 10,110 | py | Python | kolab/tokibi/tokibi.py | oshiooshi/kolab | 5f34614a995b2a31156b65e6eb9d512b9867540e | [
"MIT"
] | null | null | null | kolab/tokibi/tokibi.py | oshiooshi/kolab | 5f34614a995b2a31156b65e6eb9d512b9867540e | [
"MIT"
] | null | null | null | kolab/tokibi/tokibi.py | oshiooshi/kolab | 5f34614a995b2a31156b65e6eb9d512b9867540e | [
"MIT"
] | null | null | null | import sys
import pegtree as pg
from pegtree.visitor import ParseTreeVisitor
import random
# from . import verb
import verb
EMPTY = tuple()
# Options
OPTION = {
'Simple': False, # prefer simple phrasing
'Block': False, # wrap each expression in an <e> </e> block
'EnglishFirst': False, # prioritize accuracy of the English translation
'ShuffleSynonym': True, # shuffle synonymous variants
'MultipleSentence': False, # multi-line mode
'ShuffleOrder': True, # also shuffle word order
'Verbose': True, # enable debug output
}
# e.g. pretend that {心が折れた|やる気が失せた} ("disheartened" / "lost motivation")
# current: [猫|ネコ] -> one variant picked at random
# future: 猫 -> synonym set -> picked at random (automatically); how do we build this?
# reorder phrases -> NSuffix (particles such as 「に」)
# [ネコ|ネコ|] -> the word may be omitted (empty variant)
# inject noise <- BERT
# "Aに Bを 足す": drop 「Aに」 -> "Bを 足す" -> "A is missing"
RandomIndex = 0
def randomize():
global RandomIndex
if OPTION['ShuffleSynonym']:
RandomIndex = random.randint(1, 1789)
else:
RandomIndex = 0
def random_index(arraysize: int, seed):
if OPTION['ShuffleSynonym']:
return (RandomIndex + seed) % arraysize
return 0
def alt(s: str):
if '|' in s:
ss = s.split('|')
if OPTION['EnglishFirst']:
return ss[-1] # the last variant is English
return ss[random_index(len(ss), len(s))]
return s
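The `alt` function above picks one '|'-separated variant of a word. A stand-alone sketch of the same idea, independent of the module's `RandomIndex` state (`pick_variant` is a hypothetical helper, for illustration only):

```python
def pick_variant(s, index=0):
    """Pick one variant from an 'a|b|c' alternation; an empty variant means the word is omitted."""
    variants = s.split('|')
    return variants[index % len(variants)]

print(pick_variant('猫|ネコ|cat', 2))  # cat
print(pick_variant('ネコ|ネコ|', 2))   # '' -> the word drops out of the sentence
```

Listing a variant twice, as in `ネコ|ネコ|`, simply raises its selection probability relative to the empty (omitted) variant.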
def choice(ss: list):
return ss[random_index(len(ss), 17)]
# def conjugate(w, mode=0, vpos=None):
# suffix = ''
# if mode & verb.CASE == verb.CASE:
# if RandomIndex % 2 != 0:
# mode = (mode & ~verb.CASE) | verb.NOUN
# suffix = alt('とき、|場合、|際、')
# else:
# suffix = '、'
# if mode & verb.THEN == verb.THEN:
# if RandomIndex % 2 != 0:
# mode = (mode & ~verb.THEN) | verb._I
# suffix = '、'
# return verb.conjugate(w, mode, vpos) + suffix
# NExpr
def identity(e):
return e
class NExpr(object):
subs: tuple
def __init__(self, subs=EMPTY):
self.subs = tuple(NWord(s) if isinstance(s, str) else s for s in subs)
def apply(self, dict_or_func=identity):
if len(self.subs) > 0:
self.subs = tuple(e.apply(dict_or_func) for e in self.subs)
return self
def generate(self):
ss = []
c = 0
while c < 5:
randomize()
buffers = []
self.emit(buffers)
s = ''.join(buffers)
if s not in ss:
ss.append(s)
else:
c += 1
return ss
class NWord(NExpr):
w: str
def __init__(self, w):
NExpr.__init__(self)
self.w = str(w)
def __repr__(self):
if '|' in self.w:
return '[' + self.w + ']'
return self.w
def apply(self, dict_or_func=identity):
if not isinstance(dict_or_func, dict):
return dict_or_func(self)
return self
def emit(self, buffers):
buffers.append(alt(self.w))
class NVerb(NExpr):
w: str
vpos: str
mode: int
def __init__(self, w, vpos, mode=0):
NExpr.__init__(self)
self.w = str(w)
self.vpos = vpos
self.mode = mode
def __repr__(self):
return verb.conjugate(self.w, self.mode, self.vpos)
def apply(self, dict_or_func=identity):
if not isinstance(dict_or_func, dict):
return dict_or_func(self)
return self
def emit(self, buffers):
buffers.append(verb.conjugate(self.w, self.mode|verb.POL, self.vpos))
class NChoice(NExpr):
def __init__(self, *subs):
NExpr.__init__(self, subs)
def __repr__(self):
ss = []
for p in self.subs:
ss.append(repr(p))
return ' | '.join(ss)
def apply(self, dict_or_func=identity):
return NChoice(*(e.apply(dict_or_func) for e in self.subs))
def emit(self, buffers):
choice(self.subs).emit(buffers)
class NPhrase(NExpr):
def __init__(self, *subs):
NExpr.__init__(self, subs)
def __repr__(self):
ss = []
for p in self.subs:
ss.append(grouping(p))
return ' '.join(ss)
def apply(self, dict_or_func=identity):
return NPhrase(*(e.apply(dict_or_func) for e in self.subs))
def emit(self, buffers):
for p in self.subs:
p.emit(buffers)
def grouping(e):
if isinstance(e, NPhrase):
return '{' + repr(e) + '}'
return repr(e)
class NOrdered(NExpr):
def __init__(self, *subs):
NExpr.__init__(self, subs)
def __repr__(self):
ss = []
for p in self.subs:
ss.append(grouping(p))
return '/'.join(ss)
def apply(self, dict_or_func=identity):
return NOrdered(*(e.apply(dict_or_func) for e in self.subs))
def emit(self, buffers):
subs = list(self.subs)
if OPTION['ShuffleOrder']:
random.shuffle(subs)
for p in subs:
p.emit(buffers)
class NClause(NExpr): # noun clause: a verb phrase (〜する) modifying a noun
def __init__(self, verb, noun):
NExpr.__init__(self, (verb,noun))
def __repr__(self):
return grouping(self.subs[0]) + grouping(self.subs[1])
def apply(self, dict_or_func=identity):
return NClause(*(e.apply(dict_or_func) for e in self.subs))
def emit(self, buffers):
verb = self.subs[0]
noun = self.subs[1]
if isinstance(verb, NClause):
verb.subs[0].emit(buffers)
else:
verb.emit(buffers)
noun.emit(buffers)
class NSuffix(NExpr):
suffix: str
def __init__(self, e, suffix):
NExpr.__init__(self, (e,))
self.suffix = suffix
def __repr__(self):
return grouping(self.subs[0]) + self.suffix
def apply(self, dict_or_func=identity):
return NSuffix(self.subs[0].apply(dict_or_func), self.suffix)
def emit(self, buffers):
self.subs[0].emit(buffers)
buffers.append(self.suffix)
neko = NWord('猫|ねこ|ネコ')
print('@', neko, neko.generate())
wo = NSuffix(neko, 'を')
print('@', wo, wo.generate())
ni = NSuffix(neko, 'に')
print('@', ni, ni.generate())
ageru = NVerb('あげる', 'V1', 0)
e = NPhrase(NOrdered(ni, wo), ageru)
print('@', e, e.generate())
class NLiteral(NExpr):
w: str
def __init__(self, w):
NExpr.__init__(self)
self.w = str(w)
def __repr__(self):
return self.w
def apply(self, dict_or_func=identity):
# fixed: the original called isinstance() with one argument and fell
# through to an implicit None; mirror NWord.apply
if not isinstance(dict_or_func, dict):
    return dict_or_func(self)
return self
def emit(self, buffers):
buffers.append(self.w)
class NSymbol(NExpr):
index: int
w: str
def __init__(self, index, w):
NExpr.__init__(self)
self.index = index
self.w = str(w)
def __repr__(self):
return self.w
def apply(self, dict_or_func=identity):
if isinstance(dict_or_func, dict):
if self.index in dict_or_func:
return dict_or_func[self.index]
if self.w in dict_or_func:
return dict_or_func[self.w]
return self
else:
return dict_or_func(self)
def emit(self, buffers):
buffers.append(self.w)
## You can ignore everything from here down.
## It converts text into NExpr syntax trees.
peg = pg.grammar('tokibi.pegtree')
tokibi_parser = pg.generate(peg)
class TokibiReader(ParseTreeVisitor):
def __init__(self, synonyms=None):
ParseTreeVisitor.__init__(self)
self.indexes = {}
self.synonyms = {} if synonyms is None else synonyms
def parse(self, s):
tree = tokibi_parser(s)
self.indexes = {}
nexpr = self.visit(tree)
return nexpr #, self.indexes
# [#NPhrase [#NOrdered [#NSuffix [#NSymbol 'str'][# 'が']][#NSuffix [#NSymbol 'prefix'][# 'で']]][#NWord '始まるかどうか']]
def acceptNChoice(self, tree):
ne = NChoice(*(self.visit(t) for t in tree))
return ne
def acceptNPhrase(self, tree):
ne = NPhrase(*(self.visit(t) for t in tree))
if len(ne.subs) == 1:
return ne.subs[0]
return ne
def acceptNClause(self, tree):
ne = NClause(self.visit(tree[0]), self.visit(tree[1]))
return ne
def acceptNOrdered(self, tree):
ne = NOrdered(*(self.visit(t) for t in tree))
return ne
def acceptNSuffix(self, tree):
t = self.visit(tree[0])
suffix = str(tree[1])
return NSuffix(t, suffix)
def acceptNSymbol(self, tree):
s = str(tree)
if s not in self.indexes:
self.indexes[s] = len(self.indexes)
return NSymbol(self.indexes[s], s)
def acceptNLiteral(self, tree):
s = str(tree)
return NLiteral(s)
def acceptNWord(self, tree):
s = str(tree)
w, vpos, mode = verb.parse(s.split('|')[0])
if vpos.startswith('V') or vpos == 'ADJ':
return NVerb(w, vpos, mode)
return NWord(s)
def acceptNPiece(self, tree):
s = str(tree)
return NWord(s)
tokibi_reader = TokibiReader()
def parse(s, synonyms=None):
if synonyms is not None:
tokibi_reader.synonyms = synonyms
if s.endswith('かどうか'):
s = s[:-4]
e = tokibi_reader.parse(s)
e = NClause(e, NWord('かどうか'))
else:
e = tokibi_reader.parse(s)
#print(grouping(e[0]))
return e
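A small sketch of the `'かどうか'` ("whether or not") handling in `parse()` above: the four-character suffix is stripped before parsing and later re-attached as an `NClause` (the sample sentence is taken from the commented examples below):

```python
s = '猫が虎と等しくないかどうか'
core = s[:-4] if s.endswith('かどうか') else s  # 'かどうか' is four characters
print(core)  # 猫が虎と等しくない
```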
def read_tsv(filename):
with open(filename) as f:
for line in f.readlines():
line = line.strip()
if line == '' or line.startswith('#'):
continue
if '#' in line:
line = line.split('#')[1].strip()
e = parse(line)
print(e, e.generate())
# t = parse('{心が折れた|やる気が失せた}気がする')
# print(t, t.generate())
# t = parse('望遠鏡で{子犬が泳ぐ}のを見る')
# print(t)
# if __name__ == '__main__':
# if len(sys.argv) > 1:
# read_tsv(sys.argv[1])
# else:
# e = parse('望遠鏡で/{[子犬|Puppy]が泳ぐ}[様子|の]を見る')
# print(e, e.generate())
# e2 = parse('[猫|ねこ]が/[虎|トラ]と等しくないかどうか')
# #e2, _ = parse('{Aと/B(子犬)を/順に/[ひとつずつ]表示した}結果')
# e = parse('Aを調べる')
# e = e.apply({0: e2})
# print(e, e.generate())
# e = parse('A(事実)を調べる')
# e = e.apply({0: e2})
# print(e, e.generate())
| 24.128878 | 114 | 0.559149 | 1,324 | 10,110 | 4.121601 | 0.163897 | 0.030786 | 0.05131 | 0.02932 | 0.356606 | 0.327836 | 0.301448 | 0.283856 | 0.252336 | 0.231446 | 0 | 0.00692 | 0.299604 | 10,110 | 418 | 115 | 24.186603 | 0.763593 | 0.151533 | 0 | 0.369004 | 0 | 0 | 0.021624 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214022 | false | 0 | 0.01845 | 0.04428 | 0.483395 | 0.01845 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
965a2d366a9e0c3114e09f3517d25bed152a9d40 | 2,363 | py | Python | pull_into_place/commands/run_additional_metrics.py | Kortemme-Lab/pull_into_place | 0019a6cec2a6130ebbaa49d7ab67d4c840fbe33c | [
"MIT"
] | 3 | 2018-05-31T18:46:46.000Z | 2020-05-04T03:27:38.000Z | pull_into_place/commands/run_additional_metrics.py | Kortemme-Lab/pull_into_place | 0019a6cec2a6130ebbaa49d7ab67d4c840fbe33c | [
"MIT"
] | 14 | 2016-09-14T00:16:49.000Z | 2018-04-11T03:04:21.000Z | pull_into_place/commands/run_additional_metrics.py | Kortemme-Lab/pull_into_place | 0019a6cec2a6130ebbaa49d7ab67d4c840fbe33c | [
"MIT"
] | 1 | 2017-11-27T07:35:56.000Z | 2017-11-27T07:35:56.000Z | #!/usr/bin/env python2
"""\
Run additional filters on a folder of pdbs and copy the results back
into the original pdb.
Usage:
pull_into_place run_additional_metrics <directory> [options]
Options:
--max-runtime TIME [default: 12:00:00]
The runtime limit for each design job. The default value is
set pretty low so that the short queue is available by default. This
should work fine more often than not, but you also shouldn't be
surprised if you need to increase this.
--max-memory MEM [default: 2G]
The memory limit for each design job.
--mkdir
Make the directory corresponding to this step in the pipeline, but
don't do anything else. This is useful if you want to create custom
input files for just this step.
--test-run
Run on the short queue with a limited number of iterations. This
option automatically clears old results.
--clear
Clear existing results before submitting new jobs.
To use this class:
1. You need to instantiate it with the directory containing the PDB files
to be rerun.
2. You need to use the setters for the Rosetta executable and the
metric.
"""
from klab import docopt, scripting, cluster
from pull_into_place import pipeline, big_jobs
@scripting.catch_and_print_errors()
def main():
args = docopt.docopt(__doc__)
cluster.require_qsub()
# Setup the workspace.
workspace = pipeline.AdditionalMetricWorkspace(args['<directory>'])
workspace.check_paths()
workspace.check_rosetta()
workspace.make_dirs()
if args['--mkdir']:
return
if args['--clear'] or args['--test-run']:
workspace.clear_outputs()
# Decide which inputs to use.
inputs = workspace.unclaimed_inputs
# note: '--nstruct' is not declared in this script's usage string above,
# so submit one job per unclaimed input
nstruct = len(inputs)
if not inputs:
print """\
All the input structures have already been (or are already being)
designed. If you want to rerun all the inputs from scratch, use the
--clear flag."""
raise SystemExit
# Submit the design job.
big_jobs.submit(
'pip_add_metrics.py', workspace,
inputs=inputs, nstruct=nstruct,
max_runtime=args['--max-runtime'],
max_memory=args['--max-memory'],
test_run=args['--test-run']
)
| 28.130952 | 77 | 0.66314 | 325 | 2,363 | 4.741538 | 0.498462 | 0.01817 | 0.017521 | 0.023361 | 0.027255 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005688 | 0.25603 | 2,363 | 83 | 78 | 28.46988 | 0.870876 | 0.039357 | 0 | 0 | 0 | 0 | 0.231421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.068966 | null | null | 0.068966 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
965a870632eb281fc73c846d9b482a54e2ad0de9 | 827 | py | Python | setup.py | japherwocky/cl3ver | 148242feb676cc675bbdf11ae39c3179b9a6ffe1 | [
"MIT"
] | 1 | 2017-04-01T00:15:38.000Z | 2017-04-01T00:15:38.000Z | setup.py | japherwocky/cl3ver | 148242feb676cc675bbdf11ae39c3179b9a6ffe1 | [
"MIT"
] | null | null | null | setup.py | japherwocky/cl3ver | 148242feb676cc675bbdf11ae39c3179b9a6ffe1 | [
"MIT"
] | null | null | null | from distutils.core import setup
setup(
name = 'cl3ver',
packages = ['cl3ver'],
license = 'MIT',
install_requires = ['requests'],
version = '0.2',
description = 'A python 3 wrapper for the cleverbot.com API',
author = 'Japhy Bartlett',
author_email = 'cl3ver@pearachute.com',
url = 'https://github.com/japherwocky/cl3ver',
download_url = 'https://github.com/japherwocky/cl3ver/tarball/0.2.tar.gz',
keywords = ['cleverbot', 'wrapper', 'clever', 'chatbot', 'cl3ver'],
classifiers =[
'Programming Language :: Python :: 3 :: Only',
'License :: OSI Approved :: MIT License',
'Intended Audience :: Developers',
'Natural Language :: English',
],
)
| 39.380952 | 84 | 0.541717 | 77 | 827 | 5.779221 | 0.688312 | 0.008989 | 0.062921 | 0.076404 | 0.152809 | 0.152809 | 0 | 0 | 0 | 0 | 0 | 0.021277 | 0.318017 | 827 | 20 | 85 | 41.35 | 0.767731 | 0 | 0 | 0 | 0 | 0 | 0.449819 | 0.025393 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.05 | 0 | 0.05 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
966165e75931deeaee2d1ab429f5cda6020e085f | 19,860 | py | Python | linux-distro/package/nuxleus/Source/Vendor/Microsoft/IronPython-2.0.1/Lib/headstock/example/microblog/microblog/jabber/pubsub.py | mdavid/nuxleus | 653f1310d8bf08eaa5a7e3326c2349e56a6abdc2 | [
"BSD-3-Clause"
] | 1 | 2017-03-28T06:41:51.000Z | 2017-03-28T06:41:51.000Z | linux-distro/package/nuxleus/Source/Vendor/Microsoft/IronPython-2.0.1/Lib/headstock/example/microblog/microblog/jabber/pubsub.py | mdavid/nuxleus | 653f1310d8bf08eaa5a7e3326c2349e56a6abdc2 | [
"BSD-3-Clause"
] | null | null | null | linux-distro/package/nuxleus/Source/Vendor/Microsoft/IronPython-2.0.1/Lib/headstock/example/microblog/microblog/jabber/pubsub.py | mdavid/nuxleus | 653f1310d8bf08eaa5a7e3326c2349e56a6abdc2 | [
"BSD-3-Clause"
] | 1 | 2016-12-13T21:08:58.000Z | 2016-12-13T21:08:58.000Z | # -*- coding: utf-8 -*-
import re
from Axon.Component import component
from Kamaelia.Util.Backplane import PublishTo, SubscribeTo
from Axon.Ipc import shutdownMicroprocess, producerFinished
from Kamaelia.Protocol.HTTP.HTTPClient import SimpleHTTPClient
from headstock.api.jid import JID
from headstock.api.im import Message, Body
from headstock.api.pubsub import Node, Item, Message  # this Message shadows headstock.api.im.Message imported above
from headstock.api.discovery import *
from headstock.lib.utils import generate_unique
from bridge import Element as E
from bridge.common import XMPP_CLIENT_NS, XMPP_ROSTER_NS, \
XMPP_LAST_NS, XMPP_DISCO_INFO_NS, XMPP_DISCO_ITEMS_NS,\
XMPP_PUBSUB_NS
from amplee.utils import extract_url_trail, get_isodate,\
generate_uuid_uri
from amplee.error import ResourceOperationException
from microblog.atompub.resource import ResourceWrapper
from microblog.jabber.atomhandler import FeedReaderComponent
__all__ = ['DiscoHandler', 'ItemsHandler', 'MessageHandler']
publish_item_rx = re.compile(r'\[(.*)\] ([\w ]*)')
retract_item_rx = re.compile(r'\[(.*)\] ([\w:\-]*)')
geo_rx = re.compile(r'(.*) ([\[\.|\d,|\-\]]*)')
GEORSS_NS = u"http://www.georss.org/georss"
GEORSS_PREFIX = u"georss"
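For illustration, the command patterns above parse chat input as follows (the sample strings are hypothetical):

```python
import re

# same patterns as defined in this module
publish_item_rx = re.compile(r'\[(.*)\] ([\w ]*)')
geo_rx = re.compile(r'(.*) ([\[\.|\d,|\-\]]*)')

# '[node] message' -> (node name, message body)
m = publish_item_rx.match('[home/localhost/test] hello world')
print(m.groups())  # ('home/localhost/test', 'hello world')

# 'message [long,lat]' -> (message body, coordinate block)
g = geo_rx.match('hello [12.5,-45.6]')
print(g.groups())  # ('hello', '[12.5,-45.6]')
```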
class DiscoHandler(component):
Inboxes = {"inbox" : "",
"control" : "",
"initiate" : "",
"jid" : "",
"error" : "",
"features.result": "",
"items.result": "",
"items.error" : "",
"docreate" : "",
"docreatecollection": "",
"dodelete" : "",
"dounsubscribe" : "",
"dosubscribe" : "",
"subscribed": "",
"subscriptions.result": "",
"affiliations.result": "",
"created": "",
"deleted" : ""}
Outboxes = {"outbox" : "",
"signal" : "Shutdown signal",
"message" : "",
"features-disco": "headstock.api.discovery.FeaturesDiscovery query to the server",
"features-announce": "headstock.api.discovery.FeaturesDiscovery informs"\
"the other components about the features instance received from the server",
"items-disco" : "",
"create-node" : "",
"delete-node" : "",
"subscribe-node" : "",
"unsubscribe-node" : "",
"subscriptions-disco": "",
"affiliations-disco" : ""}
def __init__(self, from_jid, atompub, host='localhost', session_id=None, profile=None):
super(DiscoHandler, self).__init__()
self.from_jid = from_jid
self.atompub = atompub
self.xmpphost = host
self.session_id = session_id
self.profile = profile
self._collection = None
self.pubsub_top_level_node = u'home/%s/%s' % (self.xmpphost, self.from_jid.node)
@property
def collection(self):
# Lazy loading of collection
if not self._collection:
self._collection = self.atompub.get_collection(self.profile.username)
return self._collection
def initComponents(self):
sub = SubscribeTo("JID.%s" % self.session_id)
self.link((sub, 'outbox'), (self, 'jid'))
self.addChildren(sub)
sub.activate()
pub = PublishTo("DISCO_FEAT.%s" % self.session_id)
self.link((self, 'features-announce'), (pub, 'inbox'))
self.addChildren(pub)
pub.activate()
sub = SubscribeTo("BOUND.%s" % self.session_id)
self.link((sub, 'outbox'), (self, 'initiate'))
self.addChildren(sub)
sub.activate()
return 1
def main(self):
yield self.initComponents()
while 1:
if self.dataReady("control"):
mes = self.recv("control")
if isinstance(mes, shutdownMicroprocess) or isinstance(mes, producerFinished):
self.send(producerFinished(), "signal")
break
if self.dataReady("jid"):
self.from_jid = self.recv('jid')
self.pubsub_top_level_node = u'home/%s/%s' % (self.xmpphost, self.from_jid.node)
if self.dataReady("initiate"):
self.recv("initiate")
p = Node(unicode(self.from_jid), u'pubsub.%s' % self.xmpphost,
node_name=self.pubsub_top_level_node)
self.send(p, "create-node")
yield 1
#d = FeaturesDiscovery(unicode(self.from_jid), u'pubsub.%s' % self.xmpphost)
#self.send(d, "features-disco")
d = SubscriptionsDiscovery(unicode(self.from_jid), u'pubsub.%s' % self.xmpphost)
self.send(d, "subscriptions-disco")
d = AffiliationsDiscovery(unicode(self.from_jid), u'pubsub.%s' % self.xmpphost)
self.send(d, "affiliations-disco")
n = ItemsDiscovery(unicode(self.from_jid), u'pubsub.%s' % self.xmpphost,
node_name=self.pubsub_top_level_node)
self.send(n, "items-disco")
# The response to our discovery query
# is a a headstock.api.discovery.FeaturesDiscovery instance.
# What we immediatly do is to notify all handlers
# interested in that event about it.
if self.dataReady('features.result'):
disco = self.recv('features.result')
for feature in disco.features:
print " ", feature.var
self.send(disco, 'features-announce')
if self.dataReady('items.result'):
items = self.recv('items.result')
print "%s has %d item(s)" % (items.node_name, len(items.items))
#for item in items.items:
#n = ItemsDiscovery(unicode(self.from_jid), u'pubsub.%s' % self.xmpphost,
# node_name=item.node)
#self.send(n, "items-disco")
if self.dataReady('items.error'):
items_disco = self.recv('items.error')
print "DISCO ERROR: ", items_disco.node_name, items_disco.error
if items_disco.error.condition == 'item-not-found':
p = Node(unicode(self.from_jid), u'pubsub.%s' % self.xmpphost,
node_name=items_disco.node_name)
self.send(p, "create-node")
yield 1
if self.dataReady('subscriptions.result'):
subscriptions = self.recv('subscriptions.result')
for sub in subscriptions.subscriptions:
print "Subscription: %s (%s)" % (sub.node, sub.state)
if self.dataReady('affiliations.result'):
affiliations = self.recv('affiliations.result')
for aff in affiliations.affiliations:
print "Affiliation: %s %s" % (aff.node, aff.affiliation)
if self.dataReady('docreate'):
nodeid = self.recv('docreate').strip()
p = Node(unicode(self.from_jid), u'pubsub.%s' % self.xmpphost,
node_name=nodeid)
self.send(p, "create-node")
if self.dataReady('docreatecollection'):
nodeid = self.recv('docreatecollection').strip()
p = Node(unicode(self.from_jid), u'pubsub.%s' % self.xmpphost,
node_name=nodeid)
p.set_default_collection_conf()
self.send(p, "create-node")
if self.dataReady('dodelete'):
nodeid = self.recv('dodelete').strip()
p = Node(unicode(self.from_jid), u'pubsub.%s' % self.xmpphost,
node_name=nodeid)
self.send(p, "delete-node")
if self.dataReady('dosubscribe'):
nodeid = self.recv('dosubscribe').strip()
p = Node(unicode(self.from_jid), u'pubsub.%s' % self.xmpphost,
node_name=nodeid, sub_jid=self.from_jid.nodeid())
self.send(p, "subscribe-node")
if self.dataReady('dounsubscribe'):
nodeid = self.recv('dounsubscribe').strip()
p = Node(unicode(self.from_jid), u'pubsub.%s' % self.xmpphost,
node_name=nodeid, sub_jid=self.from_jid.nodeid())
self.send(p, "unsubscribe-node")
if self.dataReady('created'):
node = self.recv('created')
print "Node created: %s" % node.node_name
p = Node(unicode(self.from_jid), u'pubsub.%s' % self.xmpphost,
node_name=node.node_name, sub_jid=self.from_jid.nodeid())
self.send(p, "subscribe-node")
if self.dataReady('subscribed'):
node = self.recv('subscribed')
print "Node subscribed: %s" % node.node_name
if self.dataReady('deleted'):
node = self.recv('deleted')
print "Node deleted: %s" % node.node_name
if self.dataReady('error'):
node = self.recv('error')
print node.error
if not self.anyReady():
self.pause()
yield 1
class ItemsHandler(component):
Inboxes = {"inbox" : "",
"topublish" : "",
"todelete" : "",
"topurge": "",
"control" : "",
"xmpp.result": "",
"published": "",
"publish.error": "",
"retract.error": "",
"jid" : "",
"_feedresponse": "",
"_delresponse": ""}
Outboxes = {"outbox" : "",
"publish" : "",
"delete" : "",
"purge" : "",
"signal" : "Shutdown signal",
"_feedrequest": "",
"_delrequest": ""}
def __init__(self, from_jid, atompub, host='localhost', session_id=None, profile=None):
super(ItemsHandler, self).__init__()
self.from_jid = from_jid
self.atompub = atompub
self.xmpphost = host
self.session_id = session_id
self.profile = profile
self._collection = None
self.pubsub_top_level_node = u'home/%s/%s' % (self.xmpphost, self.from_jid.node)
@property
def collection(self):
# Lazy loading of collection
if not self._collection:
self._collection = self.atompub.get_collection(self.profile.username)
return self._collection
def initComponents(self):
sub = SubscribeTo("JID.%s" % self.session_id)
self.link((sub, 'outbox'), (self, 'jid'))
self.addChildren(sub)
sub.activate()
feedreader = FeedReaderComponent(use_etags=False)
self.addChildren(feedreader)
feedreader.activate()
client = SimpleHTTPClient()
self.addChildren(client)
self.link((self, '_feedrequest'), (client, 'inbox'))
self.link((client, 'outbox'), (feedreader, 'inbox'))
self.link((feedreader, 'outbox'), (self, '_feedresponse'))
client.activate()
client = SimpleHTTPClient()
self.addChildren(client)
self.link((self, '_delrequest'), (client, 'inbox'))
self.link((client, 'outbox'), (self, '_delresponse'))
client.activate()
return 1
def make_entry(self, msg, node):
uuid = generate_uuid_uri()
entry = E.load('./entry.atom').xml_root
entry.get_child('id', ns=entry.xml_ns).xml_text = uuid
dt = get_isodate()
entry.get_child('author', ns=entry.xml_ns).get_child('name', ns=entry.xml_ns).xml_text = unicode(self.profile.username)
entry.get_child('published', ns=entry.xml_ns).xml_text = dt
entry.get_child('updated', ns=entry.xml_ns).xml_text = dt
entry.get_child('content', ns=entry.xml_ns).xml_text = unicode(msg)
if node != self.pubsub_top_level_node:
tag = extract_url_trail(node)
E(u'category', namespace=entry.xml_ns, prefix=entry.xml_prefix,
attributes={u'term': unicode(tag)}, parent=entry)
return uuid, entry
def add_geo_point(self, entry, long, lat):
E(u'point', prefix=GEORSS_PREFIX, namespace=GEORSS_NS,
content=u'%s %s' % (unicode(long), unicode(lat)), parent=entry)
def main(self):
yield self.initComponents()
while 1:
if self.dataReady("control"):
mes = self.recv("control")
if isinstance(mes, shutdownMicroprocess) or isinstance(mes, producerFinished):
self.send(producerFinished(), "signal")
break
if self.dataReady("jid"):
self.from_jid = self.recv('jid')
self.pubsub_top_level_node = u'home/%s/%s' % (self.xmpphost, self.from_jid.node)
if self.dataReady("topublish"):
message = self.recv("topublish")
node = self.pubsub_top_level_node
m = geo_rx.match(message)
long = lat = None
if not m:
m = publish_item_rx.match(message)
if m:
node, message = m.groups()
else:
message, long_lat = m.groups()
long, lat = long_lat.strip('[').rstrip(']').split(',')
uuid, entry = self.make_entry(message, node)
if long and lat:
self.add_geo_point(entry, long, lat)
i = Item(id=uuid, payload=entry)
p = Node(unicode(self.from_jid), u'pubsub.%s' % self.xmpphost,
node_name=unicode(node), item=i)
self.send(p, "publish")
yield 1
if self.dataReady("todelete"):
item_id = self.recv("todelete")
node = self.pubsub_top_level_node
m = retract_item_rx.match(item_id)
if m:
node, item_id = m.groups()
i = Item(id=unicode(item_id))
p = Node(unicode(self.from_jid), u'pubsub.%s' % self.xmpphost,
node_name=unicode(node), item=i)
self.send(p, "delete")
yield 1
if self.dataReady("topurge"):
node_id = self.recv("topurge")
p = Node(unicode(self.from_jid), u'pubsub.%s' % self.xmpphost,
node_name=node_id)
self.send(p, "purge")
params = {'url': '%s/feed' % (self.collection.get_base_edit_uri().rstrip('/')),
'method': 'GET'}
self.send(params, '_feedrequest')
if self.dataReady("published"):
node = self.recv("published")
print "Item published: %s" % node
if self.dataReady("publish.error"):
node = self.recv("publish.error")
print node.error
if self.dataReady("retract.error"):
node = self.recv("retract.error")
print node.error
if self.dataReady("_feedresponse"):
feed = self.recv("_feedresponse")
for entry in feed.entries:
for link in entry.links:
if link.rel == 'edit':
params = {'url': link.href, 'method': 'DELETE'}
self.send(params, '_delrequest')
if self.dataReady("_delresponse"):
self.recv("_delresponse")
if not self.anyReady():
self.pause()
yield 1
class MessageHandler(component):
Inboxes = {"inbox" : "",
"control" : "",
"jid" : "",
"_response" : ""}
Outboxes = {"outbox" : "",
"signal" : "Shutdown signal",
"items-disco" : "",
"_request": ""}
def __init__(self, from_jid, atompub, host='localhost', session_id=None, profile=None):
super(MessageHandler, self).__init__()
self.from_jid = from_jid
self.atompub = atompub
self.xmpphost = host
self.session_id = session_id
self.profile = profile
self._collection = None
@property
def collection(self):
# Lazy loading of collection
if not self._collection:
self._collection = self.atompub.get_collection(self.profile.username)
return self._collection
def initComponents(self):
sub = SubscribeTo("JID.%s" % self.session_id)
self.link((sub, 'outbox'), (self, 'jid'))
self.addChildren(sub)
sub.activate()
client = SimpleHTTPClient()
self.addChildren(client)
self.link((self, '_request'), (client, 'inbox'))
self.link((client, 'outbox'), (self, '_response'))
client.activate()
return 1
def main(self):
yield self.initComponents()
while 1:
if self.dataReady("control"):
mes = self.recv("control")
if isinstance(mes, shutdownMicroprocess) or isinstance(mes, producerFinished):
self.send(producerFinished(), "signal")
break
if self.dataReady("jid"):
self.from_jid = self.recv('jid')
if self.dataReady("_response"):
#discard the HTTP response for now
member_entry = self.recv("_response")
if self.dataReady("inbox"):
msg = self.recv("inbox")
collection = self.collection
if collection:
for item in msg.items:
if item.event == 'item' and item.payload:
print "Published item: %s" % item.id
member = collection.get_member(item.id)
if not member:
if isinstance(item.payload, list):
body = item.payload[0].xml()
else:
body = item.payload.xml()
params = {'url': collection.get_base_edit_uri(),
'method': 'POST', 'postbody': body,
'extraheaders': {'content-type': 'application/atom+xml;type=entry',
'content-length': str(len(body)),
'slug': item.id}}
self.send(params, '_request')
elif item.event == 'retract':
print "Removed item: %s" % item.id
params = {'url': '%s/%s' % (collection.get_base_edit_uri().rstrip('/'),
item.id.encode('utf-8')),
'method': 'DELETE'}
self.send(params, '_request')
if not self.anyReady():
self.pause()
yield 1
| 39.72 | 127 | 0.507049 | 1,945 | 19,860 | 5.04473 | 0.14036 | 0.024969 | 0.035874 | 0.029352 | 0.479413 | 0.451794 | 0.428761 | 0.393905 | 0.384631 | 0.355075 | 0 | 0.001278 | 0.369839 | 19,860 | 499 | 128 | 39.799599 | 0.782741 | 0.029255 | 0 | 0.423469 | 0 | 0 | 0.134299 | 0.005866 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.040816 | null | null | 0.035714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
96640311a4d3b46c933f3f768041f09fa3a2cb24 | 3,588 | py | Python | u24_lymphocyte/third_party/treeano/sandbox/nodes/update_dropout.py | ALSM-PhD/quip_classification | 7347bfaa5cf11ae2d7a528fbcc43322a12c795d3 | [
"BSD-3-Clause"
] | 45 | 2015-04-26T04:45:51.000Z | 2022-01-24T15:03:55.000Z | u24_lymphocyte/third_party/treeano/sandbox/nodes/update_dropout.py | ALSM-PhD/quip_classification | 7347bfaa5cf11ae2d7a528fbcc43322a12c795d3 | [
"BSD-3-Clause"
] | 8 | 2018-07-20T20:54:51.000Z | 2020-06-12T05:36:04.000Z | u24_lymphocyte/third_party/treeano/sandbox/nodes/update_dropout.py | ALSM-PhD/quip_classification | 7347bfaa5cf11ae2d7a528fbcc43322a12c795d3 | [
"BSD-3-Clause"
] | 22 | 2018-05-21T23:57:20.000Z | 2022-02-21T00:48:32.000Z | """
technique that randomly 0's out the update deltas for each parameter
"""
import theano
import theano.tensor as T
from theano.sandbox.rng_mrg import MRG_RandomStreams
import treeano
import treeano.nodes as tn
fX = theano.config.floatX
@treeano.register_node("update_dropout")
class UpdateDropoutNode(treeano.Wrapper1NodeImpl):
hyperparameter_names = ("update_dropout_probability",
"rescale_updates")
def mutate_update_deltas(self, network, update_deltas):
if network.find_hyperparameter(["deterministic"]):
return
p = network.find_hyperparameter(["update_dropout_probability"], 0)
if p == 0:
return
rescale_updates = network.find_hyperparameter(["rescale_updates"],
False)
keep_prob = 1 - p
rescale_factor = 1 / keep_prob
srng = MRG_RandomStreams()
# TODO parameterize search tags (to affect not only "parameters"s)
vws = network.find_vws_in_subtree(tags={"parameter"},
is_shared=True)
for vw in vws:
if vw.variable not in update_deltas:
continue
mask = srng.binomial(size=(), p=keep_prob, dtype=fX)
if rescale_updates:
mask *= rescale_factor
update_deltas[vw.variable] *= mask
@treeano.register_node("momentum_update_dropout")
class MomentumUpdateDropoutNode(treeano.Wrapper1NodeImpl):
"""
randomly 0's out the update deltas for each parameter with momentum
like update dropout, but with some probability (momentum), whether
or not an update is dropped out is kept the same as the previous
iteration
"""
hyperparameter_names = ("update_dropout_probability",
"rescale_updates",
"update_dropout_momentum")
def mutate_update_deltas(self, network, update_deltas):
if network.find_hyperparameter(["deterministic"]):
return
p = network.find_hyperparameter(["update_dropout_probability"], 0)
if p == 0:
return
rescale_updates = network.find_hyperparameter(["rescale_updates"],
False)
momentum = network.find_hyperparameter(["update_dropout_momentum"])
keep_prob = 1 - p
rescale_factor = 1 / keep_prob
srng = MRG_RandomStreams()
# TODO parameterize search tags (to affect not only "parameters"s)
vws = network.find_vws_in_subtree(tags={"parameter"},
is_shared=True)
for vw in vws:
if vw.variable not in update_deltas:
continue
is_kept = network.create_vw(
"momentum_update_dropout_is_kept(%s)" % vw.name,
shape=(),
is_shared=True,
tags={"state"},
# TODO: Should this be a random bool with prob p for each?
default_inits=[treeano.inits.ConstantInit(1)]).variable
keep_mask = srng.binomial(size=(), p=keep_prob, dtype=fX)
momentum_mask = srng.binomial(size=(), p=momentum, dtype=fX)
new_is_kept = (momentum_mask * is_kept
+ (1 - momentum_mask) * keep_mask)
mask = new_is_kept
if rescale_updates:
mask *= rescale_factor
update_deltas[is_kept] = new_is_kept - is_kept
update_deltas[vw.variable] *= mask
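A plain-Python sketch (not Theano; the function name is illustrative) of the per-parameter masking both nodes implement: a whole update delta is kept with probability `1 - p` and, when `rescale_updates` is set, scaled by `1 / (1 - p)` so the expected update is unchanged:

```python
import random

def dropout_update(delta, p, rescale=True, rng=random.random):
    # keep the whole delta with probability 1 - p (one scalar mask per parameter)
    keep_prob = 1.0 - p
    mask = 1.0 if rng() < keep_prob else 0.0
    if rescale:
        mask *= 1.0 / keep_prob
    return delta * mask

# with p = 0 every update is kept unchanged
print(dropout_update(-0.1, p=0.0))  # -0.1
```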
| 36.612245 | 75 | 0.593924 | 388 | 3,588 | 5.262887 | 0.268041 | 0.064643 | 0.0857 | 0.045544 | 0.629285 | 0.580803 | 0.580803 | 0.524976 | 0.480901 | 0.445642 | 0 | 0.005783 | 0.325251 | 3,588 | 97 | 76 | 36.989691 | 0.83767 | 0.130156 | 0 | 0.597015 | 0 | 0 | 0.107328 | 0.067445 | 0 | 0 | 0 | 0.020619 | 0 | 1 | 0.029851 | false | 0 | 0.074627 | 0 | 0.223881 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9667f9974ca754b017d3785df5cd5e5a88c0fff5 | 9,337 | py | Python | testscripts/RDKB/component/TAD/TS_TAD_Download_SetInvalidDiagnosticsState.py | cablelabs/tools-tdkb | 1fd5af0f6b23ce6614a4cfcbbaec4dde430fad69 | [
"Apache-2.0"
] | null | null | null | testscripts/RDKB/component/TAD/TS_TAD_Download_SetInvalidDiagnosticsState.py | cablelabs/tools-tdkb | 1fd5af0f6b23ce6614a4cfcbbaec4dde430fad69 | [
"Apache-2.0"
] | null | null | null | testscripts/RDKB/component/TAD/TS_TAD_Download_SetInvalidDiagnosticsState.py | cablelabs/tools-tdkb | 1fd5af0f6b23ce6614a4cfcbbaec4dde430fad69 | [
"Apache-2.0"
] | null | null | null | ##########################################################################
# If not stated otherwise in this file or this component's Licenses.txt
# file the following copyright and licenses apply:
#
# Copyright 2016 RDK Management
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##########################################################################
'''
<?xml version='1.0' encoding='utf-8'?>
<xml>
<id></id>
<!-- Do not edit id. This will be auto filled while exporting. If you are adding a new script keep the id empty -->
<version>2</version>
<!-- Do not edit version. This will be auto incremented while updating. If you are adding a new script you can keep the vresion as 1 -->
<name>TS_TAD_Download_SetInvalidDiagnosticsState</name>
<!-- If you are adding a new script you can specify the script name. The script name should be unique and the same as this file name without the .py extension -->
<primitive_test_id> </primitive_test_id>
<!-- Do not change primitive_test_id if you are editing an existing script. -->
<primitive_test_name>TADstub_Get</primitive_test_name>
<!-- -->
<primitive_test_version>3</primitive_test_version>
<!-- -->
<status>FREE</status>
<!-- -->
<synopsis>To check if the DiagnosticsState of download can be set with an invalid value. Requested and Canceled are the only writable values. If the test fails, set any writable parameter and check if the DiagnosticsState changes to None</synopsis>
<!-- -->
<groups_id />
<!-- -->
<execution_time>1</execution_time>
<!-- -->
<long_duration>false</long_duration>
<!-- -->
<advanced_script>false</advanced_script>
<!-- execution_time is the time out time for test execution -->
<remarks>RDKB does not support the Download Diagnostics feature yet</remarks>
<!-- Reason for skipping the tests if marked to skip -->
<skip>false</skip>
<!-- -->
<box_types>
</box_types>
<rdk_versions>
<rdk_version>RDKB</rdk_version>
<!-- -->
</rdk_versions>
<test_cases>
<test_case_id>TC_TAD_34</test_case_id>
<test_objective>To check if the DiagnosticsState of download can be set with an invalid value. Requested and Canceled are the only writable values. If the test fails, set any writable parameter and check if the DiagnosticsState changes to None</test_objective>
<test_type>Positive</test_type>
<test_setup>XB3,Emulator</test_setup>
<pre_requisite>1. Ccsp components should be in a running state; else invoke cosa_start.sh manually, which starts all the Ccsp components.
2. TDK Agent should be in a running state, or invoke it through the StartTdk.sh script</pre_requisite>
<api_or_interface_used>TADstub_Get</api_or_interface_used>
<input_parameters>Device.IP.Diagnostics.DownloadDiagnostics.DiagnosticsState
Device.IP.Diagnostics.DownloadDiagnostics.Interface
Device.IP.Diagnostics.DownloadDiagnostics.DownloadURL</input_parameters>
<automation_approch>1. Load TAD modules
2. From the script invoke TADstub_Set to set all the writable parameters
3. Check whether the result params get changed along with the download DiagnosticsState
4. Validation of the result is done within the python script and the result status is sent to Test Manager.
5. Test Manager will publish the result in GUI as PASS/FAILURE based on the response from the TAD stub.</automation_approch>
<except_output>CheckPoint 1:
The output should be logged in the Agent console/Component log
CheckPoint 2:
Stub function result should be success and should see corresponding log in the agent console log
CheckPoint 3:
TestManager GUI will publish the result as PASS in Execution/Console page of Test Manager</except_output>
<priority>High</priority>
<test_stub_interface>None</test_stub_interface>
<test_script>TS_TAD_Download_SetInvalidDiagnosticsState</test_script>
<skipped>No</skipped>
<release_version></release_version>
<remarks></remarks>
</test_cases>
<script_tags />
</xml>
'''
# use tdklib library,which provides a wrapper for tdk testcase script
import tdklib;
#Test component to be tested
obj = tdklib.TDKScriptingLibrary("tad","1");
#IP and port of the box. No need to change:
#these will be replaced with the corresponding box IP and port while executing the script
ip = <ipaddress>
port = <port>
obj.configureTestCase(ip,port,'TS_TAD_SetInvalidDownloadDiagnosticsState');
#Get the result of connection with test component and DUT
loadmodulestatus = obj.getLoadModuleResult();
print "[LIB LOAD STATUS] : %s" % loadmodulestatus;
if "SUCCESS" in loadmodulestatus.upper():
    #Set the result status of execution
    obj.setLoadModuleStatus("SUCCESS");
    tdkTestObj = obj.createTestStep('TADstub_Set');
    tdkTestObj.addParameter("ParamName","Device.IP.Diagnostics.DownloadDiagnostics.DiagnosticsState");
    tdkTestObj.addParameter("ParamValue","Completed");
    tdkTestObj.addParameter("Type","string");
    expectedresult="FAILURE";
    tdkTestObj.executeTestCase(expectedresult);
    actualresult = tdkTestObj.getResult();
    details = tdkTestObj.getResultDetails();
    if expectedresult in actualresult:
        #Set the result status of execution
        tdkTestObj.setResultStatus("SUCCESS");
        print "TEST STEP 1: Set DiagnosticsState of download as Completed";
        print "EXPECTED RESULT 1: DiagnosticsState of download must be Requested or Canceled";
        print "ACTUAL RESULT 1: Cannot set DiagnosticsState of download as Completed, details : %s" %details;
        #Get the result of execution
        print "[TEST EXECUTION RESULT] : SUCCESS";
        tdkTestObj = obj.createTestStep('TADstub_Set');
        tdkTestObj.addParameter("ParamName","Device.IP.Diagnostics.DownloadDiagnostics.Interface");
        tdkTestObj.addParameter("ParamValue","Interface_erouter0");
        tdkTestObj.addParameter("Type","string");
        expectedresult="SUCCESS";
        tdkTestObj.executeTestCase(expectedresult);
        actualresult = tdkTestObj.getResult();
        details = tdkTestObj.getResultDetails();
        if expectedresult in actualresult:
            #Set the result status of execution
            tdkTestObj.setResultStatus("SUCCESS");
            print "TEST STEP 2: Set the interface of Download";
            print "EXPECTED RESULT 2: Should set the interface of Download";
            print "ACTUAL RESULT 2: %s" %details;
            #Get the result of execution
            print "[TEST EXECUTION RESULT] : SUCCESS";
            tdkTestObj = obj.createTestStep('TADstub_Get');
            tdkTestObj.addParameter("paramName","Device.IP.Diagnostics.DownloadDiagnostics.DiagnosticsState");
            expectedresult="SUCCESS";
            tdkTestObj.executeTestCase(expectedresult);
            actualresult = tdkTestObj.getResult();
            details = tdkTestObj.getResultDetails();
            if expectedresult in actualresult and details == "None":
                #Set the result status of execution
                tdkTestObj.setResultStatus("SUCCESS");
                print "TEST STEP 3: Get DiagnosticsState of download as None";
                print "EXPECTED RESULT 3: Should get the DiagnosticsState of download as None";
                print "ACTUAL RESULT 3: The DiagnosticsState of download is, details : %s" %details;
                #Get the result of execution
                print "[TEST EXECUTION RESULT] : SUCCESS";
            else:
                #Set the result status of execution
                tdkTestObj.setResultStatus("FAILURE");
                print "TEST STEP 3: Get DiagnosticsState of download as None";
                print "EXPECTED RESULT 3: Should get the DiagnosticsState of download as None";
                print "ACTUAL RESULT 3: The DiagnosticsState of download is, details : %s" %details;
                #Get the result of execution
                print "[TEST EXECUTION RESULT] : FAILURE";
        else:
            #Set the result status of execution
            tdkTestObj.setResultStatus("FAILURE");
            print "TEST STEP 2: Set the interface of Download";
            print "EXPECTED RESULT 2: Should set the interface of Download";
            print "ACTUAL RESULT 2: %s" %details;
            #Get the result of execution
            print "[TEST EXECUTION RESULT] : FAILURE";
    else:
        #Set the result status of execution
        tdkTestObj.setResultStatus("FAILURE");
        print "TEST STEP 1: Set DiagnosticsState of download as Completed";
        print "EXPECTED RESULT 1: DiagnosticsState of download is set as Completed, details : %s" %details;
        print "EXPECTED RESULT 1: DiagnosticsState of download must be Requested or Canceled";
        #Get the result of execution
        print "[TEST EXECUTION RESULT] : FAILURE";
    obj.unloadModule("tad");
else:
    print "Failed to load tad module";
    obj.setLoadModuleStatus("FAILURE");
    print "Module loading failed"
| 49.402116 | 256 | 0.698297 | 1,148 | 9,337 | 5.608885 | 0.249129 | 0.026557 | 0.044417 | 0.01522 | 0.485324 | 0.452089 | 0.440596 | 0.436869 | 0.421183 | 0.411865 | 0 | 0.006431 | 0.2006 | 9,337 | 188 | 257 | 49.664894 | 0.856243 | 0.142658 | 0 | 0.680556 | 0 | 0 | 0.410106 | 0.048884 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.013889 | null | null | 0.375 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
966aec55e4c71e579f2f8dad4a1f0d280f90e1cf | 1,354 | py | Python | rest_framework_helpers/fields/relation.py | Apkawa/django-rest-framework-helpers | f4b24bf326e081a215ca5c1c117441ea8f78cbb4 | [
"MIT"
] | null | null | null | rest_framework_helpers/fields/relation.py | Apkawa/django-rest-framework-helpers | f4b24bf326e081a215ca5c1c117441ea8f78cbb4 | [
"MIT"
] | null | null | null | rest_framework_helpers/fields/relation.py | Apkawa/django-rest-framework-helpers | f4b24bf326e081a215ca5c1c117441ea8f78cbb4 | [
"MIT"
] | null | null | null | # coding: utf-8
from __future__ import unicode_literals
from collections import OrderedDict
import six
from django.db.models import Model
from rest_framework import serializers
class SerializableRelationField(serializers.RelatedField):
    def __init__(self, serializer, serializer_function=None, *args, **kwargs):
        super(SerializableRelationField, self).__init__(*args, **kwargs)
        self.serializer = serializer
        self.serializer_function = serializer_function or self.do_serialize

    def to_representation(self, value):
        if isinstance(value, Model):
            return self.serializer_function(value, self.serializer)
        return value

    def to_internal_value(self, data):
        return self.serializer.Meta.model.objects.get(id=data)

    def do_serialize(self, obj, serializer=None):
        if not serializer:
            return obj.pk
        return serializer(obj).data

    @property
    def choices(self):
        queryset = self.get_queryset()
        if queryset is None:
            # Ensure that field.choices returns something sensible
            # even when accessed with a read-only field.
            return {}
        return OrderedDict([
            (
                six.text_type(item.pk),
                self.display_value(item)
            )
            for item in queryset
| 30.088889 | 78 | 0.655096 | 149 | 1,354 | 5.778523 | 0.47651 | 0.097561 | 0.055749 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001012 | 0.27031 | 1,354 | 44 | 79 | 30.772727 | 0.870445 | 0.080502 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15625 | false | 0 | 0.15625 | 0.03125 | 0.5625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
966b9c223ecd09480f1a0fd34d6be56ce22a2ada | 11,772 | py | Python | revisions/models.py | debrouwere/django-revisions | d5b95806c65e66a720a2d9ec2f5ffb16698d9275 | [
"BSD-2-Clause-FreeBSD"
] | 6 | 2015-11-05T11:48:46.000Z | 2021-04-14T07:10:16.000Z | revisions/models.py | debrouwere/django-revisions | d5b95806c65e66a720a2d9ec2f5ffb16698d9275 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | revisions/models.py | debrouwere/django-revisions | d5b95806c65e66a720a2d9ec2f5ffb16698d9275 | [
"BSD-2-Clause-FreeBSD"
] | 1 | 2019-02-13T21:01:48.000Z | 2019-02-13T21:01:48.000Z | # encoding: utf-8
import uuid
import difflib
from datetime import date
from django.db import models
from django.utils.translation import ugettext as _
from django.core.exceptions import ImproperlyConfigured, ValidationError
from django.db import IntegrityError
from django.contrib.contenttypes.models import ContentType
from revisions import managers, utils
import inspect
# the crux of all errors seems to be that, with VersionedBaseModel,
# doing setattr(self, self.pk_name, None) does _not_ lead to creating
# a new object, and thus versioning as a whole doesn't work
# the only thing lacking from the VersionedModelBase is a version id.
# You may use VersionedModelBase if you need to specify your own
# AutoField (e.g. using UUIDs) or if you're trying to adapt an existing
# model to ``django-revisions`` and have an AutoField not named
# ``vid``.
class VersionedModelBase(models.Model, utils.ClonableMixin):
    @classmethod
    def get_base_model(cls):
        base = cls
        while isinstance(base._meta.pk, models.OneToOneField):
            base = base._meta.pk.rel.to
        return base

    @property
    def base_model(self):
        return self.get_base_model()

    @property
    def pk_name(self):
        return self.base_model._meta.pk.attname

    # For UUIDs in particular, we need a way to know the order of revisions
    # e.g. through a ``changed`` datetime field.
    @classmethod
    def get_comparator_name(cls):
        if hasattr(cls.Versioning, 'comparator'):
            return cls.Versioning.comparator
        else:
            return cls.get_base_model()._meta.pk.attname

    @property
    def comparator_name(self):
        return self.get_comparator_name()

    @property
    def comparator(self):
        return getattr(self, self.comparator_name)

    @classmethod
    def get_implementations(cls):
        models = [contenttype.model_class() for contenttype in ContentType.objects.all()]
        return [model for model in models if isinstance(model, cls)]

    @property
    def _base_model(self):
        base = self
        while isinstance(base._meta.pk, models.OneToOneField):
            base = base._meta.pk.rel.to
        return base

    @property
    def _base_table(self):
        return self._base_model._meta.db_table

    # content bundle id
    cid = models.CharField(max_length=36, editable=False, null=True, db_index=True)

    # managers
    latest = managers.LatestManager()
    objects = models.Manager()

    # all related revisions, plus easy shortcuts to the previous and next revision
    def get_revisions(self):
        qs = self.__class__.objects.filter(cid=self.cid).order_by(self.comparator_name)
        try:
            qs.prev = qs.filter(**{self.comparator_name + '__lt': self.comparator}).order_by('-' + self.comparator_name)[0]
        except IndexError:
            qs.prev = None
        try:
            qs.next = qs.filter(**{self.comparator_name + '__gt': self.comparator})[0]
        except IndexError:
            qs.next = None
        return qs

    def check_if_latest_revision(self):
        return self.comparator >= max([version.comparator for version in self.get_revisions()])

    @classmethod
    def fetch(cls, criterion):
        if isinstance(criterion, int) or isinstance(criterion, str):
            return cls.objects.get(pk=criterion)
        elif isinstance(criterion, models.Model):
            return criterion
        elif isinstance(criterion, date):
            pub_date = cls.Versioning.publication_date
            if pub_date:
                # note: this is a classmethod, so we order by the class-level
                # comparator name (the original referenced an undefined
                # ``self`` and a nonexistent ``order`` queryset method here)
                return cls.objects.filter(**{pub_date + '__lte': criterion}).order_by('-' + cls.get_comparator_name())[0]
            else:
                raise ImproperlyConfigured("""Please specify which field counts as the publication
                    date for this model. You can do so inside a Versioning class. Read the docs
                    for more info.""")
        else:
            raise TypeError("Can only fetch an object using a primary key, a date or a datetime object.")

    def revert_to(self, criterion):
        revert_to_obj = self.__class__.fetch(criterion)

        # You can only revert a model instance back to a previous instance.
        # Not any ol' object will do, and we check for that.
        if revert_to_obj.pk not in self.get_revisions().values_list('pk', flat=True):
            raise IndexError("Cannot revert to a primary key that is not part of the content bundle.")
        else:
            return revert_to_obj.revise()

    def get_latest_revision(self):
        # order by the comparator *field name*, not the comparator value
        return self.get_revisions().order_by('-' + self.comparator_name)[0]

    def make_current_revision(self):
        if not self.check_if_latest_revision():
            self.save()

    def show_diff_to(self, to, field):
        frm = unicode(getattr(self, field)).split()
        to = unicode(getattr(to, field)).split()
        differ = difflib.HtmlDiff()
        return differ.make_table(frm, to)

    def _get_unique_checks(self, exclude=[]):
        # for parity with Django's unique_together notation shortcut
        def parse_shortcut(unique_together):
            unique_together = tuple(unique_together)
            if len(unique_together) and isinstance(unique_together[0], basestring):
                unique_together = (unique_together, )
            return unique_together

        # Django actually checks uniqueness for a single field in the very same way it
        # does things for unique_together, something we happily take advantage of
        unique = tuple([(field,) for field in getattr(self.Versioning, 'unique', ())])
        unique_together = \
            unique + \
            parse_shortcut(getattr(self.Versioning, 'unique_together', ())) + \
            parse_shortcut(getattr(self._meta, 'unique_together', ()))

        model = self.__class__()
        model._meta.unique_together = unique_together
        return models.Model._get_unique_checks(model, exclude)

    def _get_attribute_history(self, name):
        if self.__dict__.get(name, False):
            return [(version.__dict__[name], version) for version in self.get_revisions()]
        else:
            raise AttributeError(name)

    def _get_related_objects(self, relatedmanager):
        """ This method extends a regular related-manager by also including objects
        that are related to other versions of the same content, instead of just to
        this one object. """

        related_model = relatedmanager.model
        related_model_name = related_model._meta.module_name

        # The foreign key field name on related objects often, by convention,
        # coincides with the name of the class it relates to, but not always,
        # e.g. you could do something like
        #
        #     class Book(models.Model):
        #         thingmabob = models.ForeignKey(Author)
        #
        # There is, afaik, no elegant way to get a RelatedManager to tell us that
        # related objects refer to this class by 'thingmabob', leading to this
        # kind of convoluted deep dive into the internals of the related class.
        #
        # By all means, I'd welcome suggestions for prettier code.
        ref_name = self._meta._name_map[related_model_name][0].field.name
        pks = [story.pk for story in self.get_revisions()]
        objs = related_model._default_manager.filter(**{ref_name + '__in': pks})
        return objs

    def __getattr__(self, name):
        # we catch all lookups that start with 'related_'
        if name.startswith('related_'):
            related_name = "_".join(name.split("_")[1:])
            attribute = getattr(self, related_name, False)
            # we piggyback off of an existing relationship,
            # so the attribute has to exist and it has to be a
            # RelatedManager or ManyRelatedManager
            if attribute:
                # (we check the module instead of using isinstance, since
                # ManyRelatedManager is created using a factory so doesn't
                # actually exist inside of the module)
                if attribute.__class__.__dict__['__module__'] == 'django.db.models.fields.related':
                    return self._get_related_objects(attribute)

        if name.endswith('_history'):
            attribute = name.replace('_history', '')
            return self._get_attribute_history(attribute)

        raise AttributeError(name)

    def prepare_for_writing(self):
        """
        This method allows you to clear out certain fields in the model that are
        specific to each revision, like a log message.
        """
        for field in self.Versioning.clear_each_revision:
            super(VersionedModelBase, self).__setattr__(field, '')

    def validate_bundle(self):
        # uniqueness constraints per bundle can't be checked at the database level,
        # which means we'll have to do so in the save method
        if getattr(self.Versioning, 'unique_together', None) or getattr(self.Versioning, 'unique', None):
            # replace ValidationError with IntegrityError because this is what users will expect
            try:
                self.validate_unique()
            except ValidationError, error:
                raise IntegrityError(error)

    def revise(self):
        self.validate_bundle()
        return self.clone()

    def save(self, *vargs, **kwargs):
        # The first revision of a piece of content won't have a bundle id yet,
        # and because the object isn't persisted in the database, there's no
        # primary key either, so we use a UUID as the bundle ID.
        #
        # (Note for smart alecks: Django chokes on using super/save() more than
        # once in the save method, so doing a preliminary save to get the PK
        # and using that value for a bundle ID is rather hard.)
        if not self.cid:
            self.cid = uuid.uuid4().hex

        self.validate_bundle()
        super(VersionedModelBase, self).save(*vargs, **kwargs)

    def delete_revision(self, *vargs, **kwargs):
        super(VersionedModelBase, self).delete(*vargs, **kwargs)

    def delete(self, *vargs, **kwargs):
        for revision in self.get_revisions():
            revision.delete_revision(*vargs, **kwargs)

    class Meta:
        abstract = True

    class Versioning:
        clear_each_revision = []
        publication_date = None
        unique_together = ()


class VersionedModel(VersionedModelBase):
    vid = models.AutoField(primary_key=True)

    class Meta:
        abstract = True


class TrashableModel(models.Model):
    """ Users wanting a version history may also expect a trash bin
    that allows them to recover deleted content, as is e.g. the
    case in WordPress. This is that thing. """

    _is_trash = models.BooleanField(db_column='is_trash', default=False, editable=False)

    @property
    def is_trash(self):
        return self._is_trash

    def get_content_bundle(self):
        if isinstance(self, VersionedModelBase):
            return self.get_revisions()
        else:
            return [self]

    def delete(self):
        """
        It makes no sense to trash individual revisions: either you keep a version history or you don't.
        If you want to undo a revision, you should use obj.revert_to(preferred_revision) instead.
        """
        for obj in self.get_content_bundle():
            obj._is_trash = True
            obj.save()

    def delete_permanently(self):
        for obj in self.get_content_bundle():
            super(TrashableModel, obj).delete()

    class Meta:
abstract = True | 39.503356 | 123 | 0.641777 | 1,462 | 11,772 | 5.015732 | 0.252394 | 0.032456 | 0.013364 | 0.012273 | 0.107187 | 0.046638 | 0.031638 | 0.024001 | 0.024001 | 0.024001 | 0 | 0.00129 | 0.275824 | 11,772 | 298 | 124 | 39.503356 | 0.858886 | 0.20897 | 0 | 0.217391 | 0 | 0 | 0.057645 | 0.003632 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.054348 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
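The `parse_shortcut` normalisation inside `_get_unique_checks` above can be checked in isolation: a flat `('a', 'b')` shortcut is wrapped into the canonical nested form, while an already-nested value passes through unchanged. This sketch uses `str` in place of the original's Python 2 `basestring`:

```python
# Standalone sketch of the unique_together shortcut normalisation used
# by django-revisions' _get_unique_checks.

def parse_shortcut(unique_together):
    unique_together = tuple(unique_together)
    # a flat tuple of field names is shorthand for a single constraint
    if len(unique_together) and isinstance(unique_together[0], str):
        unique_together = (unique_together,)
    return unique_together

print(parse_shortcut(('slug', 'language')))     # (('slug', 'language'),)
print(parse_shortcut((('slug', 'language'),)))  # (('slug', 'language'),)
```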
966c228f07ae23bcd47d41a07f74a33292ad5f8f | 1,260 | py | Python | noir/templating.py | gi0baro/noir | b187922d4f6055dbcb745c5299db907aac574398 | [
"BSD-3-Clause"
] | 2 | 2021-06-10T13:09:27.000Z | 2021-06-11T09:37:02.000Z | noir/templating.py | gi0baro/noir | b187922d4f6055dbcb745c5299db907aac574398 | [
"BSD-3-Clause"
] | null | null | null | noir/templating.py | gi0baro/noir | b187922d4f6055dbcb745c5299db907aac574398 | [
"BSD-3-Clause"
] | null | null | null | import json
import os
from typing import Any, Dict, Optional
import tomlkit
import yaml
from renoir.apis import Renoir, ESCAPES, MODES
from renoir.writers import Writer as _Writer
from .utils import adict, obj_to_adict
class Writer(_Writer):
    @staticmethod
    def _to_unicode(data):
        if data is None:
            return ""
        if isinstance(data, bool):
            return str(data).lower()
        return _Writer._to_unicode(data)


class Templater(Renoir):
    _writers = {**Renoir._writers, **{ESCAPES.common: Writer}}


def _indent(text: str, spaces: int = 2) -> str:
    offset = " " * spaces
    rv = f"\n{offset}".join(text.split("\n"))
    return rv


def _to_json(obj: Any, indent: Optional[int] = None) -> str:
    return json.dumps(obj, indent=indent)


def _to_toml(obj: Any) -> str:
    return tomlkit.dumps(obj)


def _to_yaml(obj: Any) -> str:
    return yaml.dump(obj)


def base_ctx(ctx: Dict[str, Any]):
    ctx.update(
        env=obj_to_adict(os.environ),
        indent=_indent,
        to_json=_to_json,
        to_toml=_to_toml,
        to_yaml=_to_yaml
    )


yaml.add_representer(adict, yaml.representer.Representer.represent_dict)
templater = Templater(mode=MODES.plain, adjust_indent=True, contexts=[base_ctx])
| 21.355932 | 80 | 0.666667 | 175 | 1,260 | 4.6 | 0.36 | 0.024845 | 0.024845 | 0.037267 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001012 | 0.215873 | 1,260 | 58 | 81 | 21.724138 | 0.813765 | 0 | 0 | 0 | 0 | 0 | 0.010317 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0 | 0.210526 | 0.078947 | 0.631579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
96702b58ef9f60b2130e8a0e754ad89b97258e50 | 691 | py | Python | mainapp/views.py | H0oxy/sportcars | dcd76736bfe88630b3ccce7e4ee0ad9398494f08 | [
"MIT"
] | null | null | null | mainapp/views.py | H0oxy/sportcars | dcd76736bfe88630b3ccce7e4ee0ad9398494f08 | [
"MIT"
] | null | null | null | mainapp/views.py | H0oxy/sportcars | dcd76736bfe88630b3ccce7e4ee0ad9398494f08 | [
"MIT"
] | null | null | null | from django.views.generic import ListView
from rest_framework.permissions import AllowAny
from rest_framework.viewsets import ModelViewSet
from mainapp.models import Manufacturer, Car
from mainapp.serializers import ManufacturerSerializer, CarSerializer
class ManufacturerList(ListView):
model = Manufacturer
class CarList(ListView):
model = Car
class ManufacturerViewSet(ModelViewSet):
# queryset = Manufacturer.objects.all()
queryset = Manufacturer.objects.filter(is_active=True)
serializer_class = ManufacturerSerializer
class CarViewSet(ModelViewSet):
permission_classes = [AllowAny]
queryset = Car.objects.all()
serializer_class = CarSerializer
| 25.592593 | 69 | 0.797395 | 70 | 691 | 7.785714 | 0.5 | 0.029358 | 0.062385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138929 | 691 | 26 | 70 | 26.576923 | 0.915966 | 0.053546 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.3125 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
9676900dd098082bdefdd8316547347e26bd4ef9 | 376 | py | Python | panos/example_with_output_template/loader.py | nembery/Skillets | 4c0a259d4fb49550605c5eb5316d83f109612271 | [
"Apache-2.0"
] | 1 | 2019-04-17T19:30:46.000Z | 2019-04-17T19:30:46.000Z | panos/example_with_output_template/loader.py | nembery/Skillets | 4c0a259d4fb49550605c5eb5316d83f109612271 | [
"Apache-2.0"
] | null | null | null | panos/example_with_output_template/loader.py | nembery/Skillets | 4c0a259d4fb49550605c5eb5316d83f109612271 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
from skilletlib import SkilletLoader
sl = SkilletLoader('.')
skillet = sl.get_skillet_with_name('panos_cli_example')
context = dict()
context['cli_command'] = 'show system info'
context['username'] = 'admin'
context['password'] = 'NOPE'
context['ip_address'] = 'NOPE'
output = skillet.execute(context)
print(output.get('output_template', 'n/a'))
| 19.789474 | 55 | 0.723404 | 48 | 376 | 5.5 | 0.708333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002985 | 0.109043 | 376 | 18 | 56 | 20.888889 | 0.785075 | 0.055851 | 0 | 0 | 0 | 0 | 0.288952 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.1 | 0.1 | 0 | 0.1 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
9677e8577eb71d6a56fc8178b8340df0cf85efc4 | 2,184 | py | Python | setup.py | pletnes/cloud-pysec | 4f91e3875ee36cb3e9b361e8b598070ce9523128 | [
"Apache-2.0"
] | null | null | null | setup.py | pletnes/cloud-pysec | 4f91e3875ee36cb3e9b361e8b598070ce9523128 | [
"Apache-2.0"
] | null | null | null | setup.py | pletnes/cloud-pysec | 4f91e3875ee36cb3e9b361e8b598070ce9523128 | [
"Apache-2.0"
] | null | null | null | """ xssec setup """
import codecs
from os import path
from setuptools import setup, find_packages
from sap.conf.config import USE_SAP_PY_JWT
CURRENT_DIR = path.abspath(path.dirname(__file__))
README_LOCATION = path.join(CURRENT_DIR, 'README.md')
VERSION = ''
with open(path.join(CURRENT_DIR, 'version.txt'), 'r') as version_file:
    VERSION = version_file.read()

with codecs.open(README_LOCATION, 'r', 'utf-8') as readme_file:
    LONG_DESCRIPTION = readme_file.read()

sap_py_jwt_dep = ''
if USE_SAP_PY_JWT:
    sap_py_jwt_dep = 'sap_py_jwt>=1.1.1'
else:
    sap_py_jwt_dep = 'cryptography'

setup(
    name='sap_xssec',
    url='https://github.com/SAP/cloud-pysec',
    version=VERSION.strip(),
    author='SAP SE',
    description=('SAP Python Security Library'),
    packages=find_packages(include=['sap*']),
    data_files=[('.', ['version.txt', 'CHANGELOG.md'])],
    test_suite='tests',
    install_requires=[
        'deprecation>=2.1.0',
        'requests>=2.21.0',
        'six>=1.11.0',
        'pyjwt>=1.7.0',
        '{}'.format(sap_py_jwt_dep)
    ],
    long_description=LONG_DESCRIPTION,
    long_description_content_type="text/markdown",
    classifiers=[
        # http://pypi.python.org/pypi?%3Aaction=list_classifiers
        "Development Status :: 5 - Production/Stable",
        "Topic :: Security",
        "License :: OSI Approved :: Apache Software License",
        "Natural Language :: English",
        "Operating System :: MacOS :: MacOS X",
        "Operating System :: POSIX",
        "Operating System :: POSIX :: BSD",
        "Operating System :: POSIX :: Linux",
        "Operating System :: Microsoft :: Windows",
        "Programming Language :: Python",
        "Programming Language :: Python :: 2",
        "Programming Language :: Python :: 2.7",
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3.4",
        "Programming Language :: Python :: 3.5",
        "Programming Language :: Python :: 3.6",
        "Programming Language :: Python :: 3.7",
        "Programming Language :: Python :: Implementation :: CPython",
        "Programming Language :: Python :: Implementation :: PyPy",
    ],
)
| 34.125 | 70 | 0.628663 | 256 | 2,184 | 5.183594 | 0.441406 | 0.14318 | 0.188395 | 0.097965 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018757 | 0.218864 | 2,184 | 63 | 71 | 34.666667 | 0.759086 | 0.031136 | 0 | 0.035714 | 0 | 0 | 0.446183 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.071429 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
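The conditional dependency selection in the setup script above boils down to picking one requirement string based on a build-time flag; a minimal sketch with an illustrative flag value:

```python
# Standalone sketch of the sap_py_jwt/cryptography dependency switch
# from sap_xssec's setup.py. The flag value here is illustrative; the
# real one comes from sap.conf.config.

USE_SAP_PY_JWT = False  # illustrative value for this sketch
sap_py_jwt_dep = 'sap_py_jwt>=1.1.1' if USE_SAP_PY_JWT else 'cryptography'
print(sap_py_jwt_dep)  # cryptography
```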
96785881acee4c6b6e5fbf6dfa6bcd4a371b2db4 | 1,957 | py | Python | mutators/implementations/mutation_change_proto.py | freingruber/JavaScript-Raider | d1c1fff2fcfc60f210b93dbe063216fa1a83c1d0 | [
"Apache-2.0"
] | 91 | 2022-01-24T07:32:34.000Z | 2022-03-31T23:37:15.000Z | mutators/implementations/mutation_change_proto.py | zeusguy/JavaScript-Raider | d1c1fff2fcfc60f210b93dbe063216fa1a83c1d0 | [
"Apache-2.0"
] | null | null | null | mutators/implementations/mutation_change_proto.py | zeusguy/JavaScript-Raider | d1c1fff2fcfc60f210b93dbe063216fa1a83c1d0 | [
"Apache-2.0"
] | 11 | 2022-01-24T14:21:12.000Z | 2022-03-31T23:37:23.000Z | # Copyright 2022 @ReneFreingruber
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import utils
import tagging_engine.tagging as tagging
from tagging_engine.tagging import Tag
import mutators.testcase_mutators_helpers as testcase_mutators_helpers
def mutation_change_proto(content, state):
    # utils.dbg_msg("Mutation operation: Change proto")
    tagging.add_tag(Tag.MUTATION_CHANGE_PROTO1)

    # TODO
    # Currently I don't return in "lhs" or "rhs" the __proto__ of a function
    # So code like this:
    # Math.abs.__proto__ = Math.sign.__proto__
    # Can currently not be created. Is this required?
    # => has such code an effect?

    random_line_number = testcase_mutators_helpers.get_random_line_number_to_insert_code(state)
    (start_line_with, end_line_with) = testcase_mutators_helpers.get_start_and_end_line_symbols(state, random_line_number, content)

    (lhs, code_possibilities) = testcase_mutators_helpers.get_proto_change_lhs(state, random_line_number)
    rhs = testcase_mutators_helpers.get_proto_change_rhs(state, random_line_number, code_possibilities)

    new_code_line = "%s%s.__proto__ = %s%s" % (start_line_with, lhs, rhs, end_line_with)

    # Now just insert the new line to the testcase & state
    lines = content.split("\n")
    lines.insert(random_line_number, new_code_line)
    new_content = "\n".join(lines)
    state.state_insert_line(random_line_number, new_content, new_code_line)
| 40.770833 | 131 | 0.772611 | 291 | 1,957 | 4.900344 | 0.412371 | 0.049088 | 0.078541 | 0.072931 | 0.051893 | 0.051893 | 0 | 0 | 0 | 0 | 0 | 0.005428 | 0.152785 | 1,957 | 47 | 132 | 41.638298 | 0.854644 | 0.442003 | 0 | 0 | 0 | 0 | 0.023364 | 0 | 0 | 0 | 0 | 0.021277 | 0 | 1 | 0.0625 | false | 0 | 0.25 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
9679546d86fe3d9ab266b6fcd96932146df7b271 | 406 | py | Python | hello.py | AaronTrip/cgi-lab | cc932dfe21c27f3ca054233fe5bc73783facee6b | [
"Apache-2.0"
] | null | null | null | hello.py | AaronTrip/cgi-lab | cc932dfe21c27f3ca054233fe5bc73783facee6b | [
"Apache-2.0"
] | null | null | null | hello.py | AaronTrip/cgi-lab | cc932dfe21c27f3ca054233fe5bc73783facee6b | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
import os, json
print("Content-type: text/html\r\n\r\n")
print()
print("<Title>Test CGI</title>")
print("<p>Hello World cmput404 class!<p/>")
print(os.environ)
json_object = json.dumps(dict(os.environ), indent = 4)
print(json_object)
'''for param in os.environ.keys():
if(param == "HTTP_USER_AGENT"):
print("<b>%20s<b/>: %s<br>" % (param, os.environ[param]))
'''
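# Aside (not part of the original script; added for illustration):
# json.dumps with sort_keys=True gives a stable key order, which is
# handy when diffing CGI environment dumps between requests.
import json
print(json.dumps({"b": 2, "a": 1}, sort_keys=True))  # {"a": 1, "b": 2}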
| 18.454545 | 65 | 0.640394 | 65 | 406 | 3.938462 | 0.615385 | 0.140625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019943 | 0.135468 | 406 | 21 | 66 | 19.333333 | 0.709402 | 0.051724 | 0 | 0 | 0 | 0 | 0.360656 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0.75 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
967f854c2cc3d7839a4210800ff6ac34aa126d0b | 3,493 | py | Python | tests/test_classes.py | fossabot/RPGenie | eb3ee17ede0dbbec787766d607b2f5b89d65533d | [
"MIT"
] | 32 | 2017-09-03T21:14:17.000Z | 2022-01-12T04:26:28.000Z | tests/test_classes.py | fossabot/RPGenie | eb3ee17ede0dbbec787766d607b2f5b89d65533d | [
"MIT"
] | 9 | 2017-09-12T13:16:43.000Z | 2022-01-19T18:53:48.000Z | tests/test_classes.py | fossabot/RPGenie | eb3ee17ede0dbbec787766d607b2f5b89d65533d | [
"MIT"
] | 19 | 2017-10-12T03:14:54.000Z | 2021-06-12T18:30:33.000Z | #! python3
""" Pytest-compatible tests for src/classes.py """
import sys
from pathlib import Path
from copy import deepcopy
from unittest import mock
# A workaround for tests not automatically setting
# root/src/ as the current working directory
path_to_src = Path(__file__).parent.parent / "src"
sys.path.insert(0, str(path_to_src))
from classes import Item, Inventory, Player, Character
from settings import *
def initialiser(testcase):
""" Initialises all test cases with data """
def inner(*args, **kwargs):
items = [Item(i) for i in range(kwargs.get("itemcount", 3))]
inv = Inventory(items=deepcopy(items), **kwargs)
return testcase(items, inv, *args, **kwargs)
return inner
@initialiser
def test_testData(items, inv, *args, **kwargs):
""" Assert the test data itself is valid """
assert items == inv.items
@initialiser
def test_inv_append(items, inv, *args, **kwargs):
""" Test for inventory append functionality """
itemcount = len(items)
for i in range(inv.max_capacity - itemcount):
assert inv.append(Item(2)) == f"{Item(2).name} added to inventory"
assert inv.append(Item(1)) == "No room in inventory"
assert len(inv) == inv.max_capacity
#Separate tests for stackable items
assert inv.append(Item(0)) == f"2 {Item(0).name} in container"
assert inv.items[inv.items.index(Item(0))]._count == 2
@initialiser
def test_inv_remove(items, inv, *args, **kwargs):
""" Test for inventory item removal """
inv.items[inv.items.index(Item(0))]._count += 2
# Non-stackable items
assert inv.remove(Item(1)) == f"{Item(1).name} was successfully removed"
assert inv.items.count(Item(1)) == 0
# Stackable items
assert inv.remove(Item(0)) == f"1/{inv.items[inv.items.index(Item(0))]._count+1} {Item(0).name} removed"
assert inv.items.count(Item(0)) == 1
assert inv.remove(Item(0), count=3) == "You don't have that many"
assert inv.remove(Item(0), count=2) == f"{Item(0).name} was successfully removed"
assert inv.items.count(Item(0)) == 0
@initialiser
def test_inv_equip_unequip(items, inv, *args, **kwargs):
""" Test for inventory item equip/unequip functionality """
# Equipping items
assert inv.equip(Item(1)) == f"You equip {Item(1).name}"
assert inv.equip(Item(2)) == "You can't equip that"
# Unequipping items
assert inv.unequip('weapon') == f"You unequip {Item(1).name}"
assert inv.unequip('off-hand') == "That slot is empty"
assert inv.gear['head'] is None
assert inv.gear['weapon'] is None
@initialiser
def test_inv_combine(items, inv, *args, **kwargs):
""" Test for item combining functionality """
assert inv.better_combine_item(inv.items[1], 0, inv.items[2]) == "Combination successful"
assert len(inv) == 2
assert inv.better_combine_item(inv.items[0], 0, inv.items[1]) == "Could not combine those items"
assert len(inv) == 2
def test_char_levelmixin():
""" Test for level-up functionality """
char = Character('John Doe', max_level = 5)
assert 1 == char.level
assert 85 == char.next_level
assert char.give_exp(85) == f"Congratulations! You've levelled up; your new level is {char.level}\nEXP required for next level: {int(char.next_level-char.experience)}\nCurrent EXP: {char.experience}"
for _ in range(char.max_level - char.level):
char.give_exp(char.next_level)
assert char.level == char.max_level
    assert char.give_exp(char.next_level) == ""
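# Aside (not from the original suite; a hypothetical miniature of the
# same pattern): `initialiser` above is a plain decorator that builds
# fixture data and passes it into each test. The shape in isolation:
def with_data(test):
    def inner(*args, **kwargs):
        # Inject a fixed fixture list as the first argument
        return test([1, 2, 3], *args, **kwargs)
    return inner

@with_data
def check_sum(items):
    assert sum(items) == 6

check_sum()  # passes silently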
| 36.768421 | 203 | 0.678214 | 517 | 3,493 | 4.504836 | 0.261122 | 0.073422 | 0.030915 | 0.046372 | 0.291971 | 0.254186 | 0.173036 | 0.1155 | 0.069558 | 0 | 0 | 0.017043 | 0.176925 | 3,493 | 94 | 204 | 37.159574 | 0.793043 | 0.150014 | 0 | 0.118644 | 0 | 0.033898 | 0.208033 | 0.032612 | 0 | 0 | 0 | 0 | 0.474576 | 1 | 0.135593 | false | 0 | 0.101695 | 0 | 0.271186 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
96856b2747e7c36d91fb23b1dc5b4f022aab0d68 | 17,925 | py | Python | islecler.py | mrtyasar/PythonLearn | b8fa5d97b9c811365db8457f42f1e1d04e4dc8a4 | [
"Apache-2.0"
] | null | null | null | islecler.py | mrtyasar/PythonLearn | b8fa5d97b9c811365db8457f42f1e1d04e4dc8a4 | [
"Apache-2.0"
] | null | null | null | islecler.py | mrtyasar/PythonLearn | b8fa5d97b9c811365db8457f42f1e1d04e4dc8a4 | [
"Apache-2.0"
] | null | null | null | #----------------------------------#
######### ARİTMETİK İŞLEÇLER #######
#----------------------------------#
# + toplama
# - çıkarma
# * çarpma
# / bölme
# ** kuvvet
# % modülüs/kalan bulma
# // taban bölme/ tam bölme
#aritmetik işleçler sayısal işlemler yapmamızı sağlar
print(45+57)#102
#yalnız + ve * işaretleri karakter dizileri içinde kullanılabilir
#karakter dizilerini birleştirmek için + işareti
print("Selam "+"Bugün "+"Hava çok güzel.")#Selam Bugün Hava çok güzel.
# * işareti karakter dizileri tekrarlamak için kullanılabilir
print("w"*3+".tnbc1"+".com")#www.tnbc1.com
# % işleci sayının bölümünden kalanı bulur
print(30 % 4)#2
#sayının kalanını bularak tek mi çift mi olduğunu bulabiliriz
sayi = int(input("Bir sayı giriniz: "))
if sayi % 2 == 0:
print("Girdiğiniz sayı bir çift sayıdır.")
else:
print("Girdiğiniz sayı bir tek sayıdır.")
#eğer bir sayının 2 ye bölümünden kalan 0 ise o sayı çift bir sayıdır
#veya bu % işleci ile sayının başka bir sayı ile tam bölünüp bölünmediğini
# bulabiliriz
print(36 % 9)#0 #yani 36 9 a tam bölünüyor
#program yazalım:
bolunen = int(input("Herhangi bir sayı giriniz: "))
bolen = int(input("Herhangi bir sayı daha giriniz: "))
sablon = "{} sayısı {} sayısına tam".format(bolunen,bolen)
if bolunen % bolen == 0:
print(sablon,"bölünüyor!")
else:
print(sablon,"bölünmüyor!")
#çıktı:
#Herhangi bir sayı giriniz: 2876
#Herhangi bir sayı daha giriniz: 123
#2876 sayısı 123 sayısına tam bölünmüyor!
# bir sayının son basamağını elde etmek içinde kullanabiliriz
#bu yüzden bir sayının 10 bölümünde kalanını buluruz
print(65 % 10)#5
print(543 % 10)#3
#----------------#
#--// floor division--#
#----------------#
a = 6 / 3
print(type(a))#float # 2.0
# In Python the result of / is always a float
b = 6 // 3
print(b)#3
print(type(b))#int # an exact integer division
print(int(a))#2 # we can also convert the float result to an int
#----------------#
#     ROUND      #
#----------------#
# round() is a built-in function
# it lets us round a number's value
print(round(2.70))#3
print(round(2.30))#2
print(round(5.68,1))#5.7
print(round(5.68,2))#5.68
print(round(7.9,2))#7.9
#-----------------#
#       **        #
#-----------------#
# Squaring a number:
# for this we raise it to the power 2
print(124**2)#15376
# Taking a number's square root:
# for this we raise it to the power 0.5
print(625 ** 0.5)#25.0
# If we don't want a float result,
# we convert the expression to the int type
print(int(625 ** 0.5))#25
# Cubing a number:
# for this we raise it to the power 3
print(124 ** 3)#1906624
# We can do the same operations with the pow() function
print(pow(24,3))#13824
print(pow(96,2))#9216
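# A small aside (not in the original lesson): the built-in divmod()
# returns the results of // and % together as a tuple, so the quotient
# and remainder above can be obtained in one call.
print(divmod(30, 4))  # (7, 2), i.e. (30 // 4, 30 % 4)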
#-------------------------------------#
#        COMPARISON OPERATORS         #
#-------------------------------------#
# These operators establish a comparison between their operands
# == equal to
# != not equal to
# > greater than
# < less than
# >= greater than or equal to
# <= less than or equal to
parola = "xyz05"
soru = input("your password: ")
if soru == parola:
  print("correct password!")
elif soru != parola:
  print("wrong password!")
# Another example:
sayi = input("number: ")
if int(sayi) <= 100:
  print("the number is 100 or smaller")
elif int(sayi) >= 100:
  print("the number is 100 or larger")
#-------------------------#
#       BOOL VALUES       #
#-------------------------#
# bool has only two values: True and False
# As in computer science generally, 0 is False and 1 is True
a = 1
print(a == 1)# is the value of a equal to 1?
#True
print(a == 2)#False
# The value 0 and empty data types are False
# everything apart from that is True
# we can check this with the bool() function
print(bool(4))#True
print(bool("pear"))#True
print(bool(" "))#True
print(bool(2288281))#True
print(bool("0"))#True
print(bool(0))#False
print(bool(""))#False
# bool values have an important place in programming
# in the conditional blocks we used earlier, whether a condition
# holds or not comes down to a bool, i.e. True or False
isim = input("your name: ")
if isim == "Ferhat":
  print("What a lovely name you have")
else:
  print(isim,"is not a name I'm fond of!")
# your name: caner
# caner is not a name I'm fond of!
# We are saying: if the expression isim == "Ferhat" is True, show this;
# if it is anything other than True, i.e. False, show that
isim = input("your name: ")
print(isim == "Ferhat")#True
#
b = ""
print(bool(b))#False
# Knowing that empty data types are always False, we can write
# a program like this:
kullanici = input("Your username: ")
if bool(kullanici) == True:
  print("Thank you")
else:
  print("The username field cannot be left empty!")
# If the user types something, bool(kullanici) returns True
# and "Thank you" is printed on the screen
# If the user presses Enter without typing, it is False and the else branch runs
# We usually write this check as follows:
kullaniciOne = input("Enter your username: ")
if kullaniciOne:
  print("Thank you")
else:
  print("the username cannot be left empty")
#---------------------------------#
#         BOOL OPERATORS          #
#---------------------------------#
#AND
#OR
#NOT
#and
# Let's write a Gmail-style sign-in
# in Gmail's sign-in both the username and the password must be correct
kullaniciAdi = input("Your username: ")
parola = input("Your password: ")
if kullaniciAdi == "AliVeli":
  if parola == "123456":
    print("Welcome to the system")
  else:
    print("Wrong username or password!")
else:
  print("Wrong username or password")
# We can write this more simply:
kullanici = input("Enter your username: ")
sifre = input("Enter your password: ")
if kullanici == "aliveli" and sifre == "12345":
  print("welcome to the program")
else:
  print("Wrong username or password")
# Using the and operator we joined the two conditions
# the logic of and is that both conditions must hold
# it returns True only if all the conditions hold
# apart from that, every result is False
a = 23
b = 10
print(a == 23)#True
print(b == 10)#True
print(a == 23 and b == 10)#True
print(a == 23 and b == 15)#False
# OR
# or means "or"
# if either one of the two conditions is True, the block still runs
c = 10
d = 100
print(c == 10)#True
print(d == 100)#True
print(c == 1 or d == 100)#True
# even though the c condition was false, the result is True
# because the d condition was true
# A program that shows the letter grade for an exam score
x = int(input("Your score: "))
if x > 100 or x < 0:
  print("No such score exists")
elif x >= 90 and x <= 100:
  print("You got an A")
elif x >= 80 and x <= 89:
  print("You got a B.")
elif x >= 70 and x <= 79:
  print("You got a C")
elif x >= 60 and x <= 69:
  print("You got a D.")
elif x >= 0 and x <= 59:
  print("You got an F.")
# We can write it more briefly with chained comparisons:
z = int(input("your score: "))
if z > 100 or z < 0:
  print("No such score exists.")
elif 90 <= z <= 100:
  print("You got an A")
elif 80 <= z <= 89:
  print("You got a B")
elif 70 <= z <= 79:
  print("You got a C")
elif 60 <= z <= 69:
  print("You got a D")
elif 0 <= z <= 59:
  print("You got an F")
# Chaining the comparisons lets us drop the and while keeping the same result
# (note the chain must be written low <= z <= high: a form such as
# z >= 90 <= 100 would compare 90 with 100, which is always true)
## not ##
# not is a bool operator; it means negation
# it is especially used to check whether the user
# actually entered a value
# if the user entered a value, not parola is False
# if the user left it empty, not parola is True
parola = input("Please enter your password: ")
if not parola:
  print("The password cannot be left empty")
# When asked for the password I pressed Enter without answering,
# so the condition became True and the print function ran
print(bool(parola))#False
# Here we are effectively asking the machine:
# "parola was not left empty, right?"
# and the machine tells us: "no, it was left empty"
print(bool(not parola))#True
# Here we ask the machine "parola was left empty, right?"
# and the machine answers True: yes, it was left empty
# So the difference between the two is "not left empty" versus "left empty"
# In other words, the not operator asks the machine "was it left empty?"
# If it was left empty, the answer is True: yes, it was left empty
#----------------------------------#
#       ASSIGNMENT OPERATORS       #
#----------------------------------#
# assignment is done with the "=" operator
a = 25
# we assigned the value 25 to the variable a
## the += operator
# used to add to a variable's value
a += 10 # we added another 10 to the value of a
print(a) # 35
## -=
# used to decrease, i.e. subtract from, a variable's value
a -= 5 # we subtracted 5 from a
print(a)#30
## /=
# used to divide a variable's value
a /= 2 # we divided the value of a by 2
print(a)#15.0
## *=
# used to multiply a variable's value
a *= 4 # we multiplied the value of a by 4
print(a)#60.0
## %=
# used to replace a variable with the remainder of a division
a %= 7 # we took the remainder of a divided by 7
print(a)#4.0
## **=
# used to raise a variable to a power (square, cube, or even square root)
a **= 2 # we squared a
print(a)#16.0
## //=
# used to floor-divide a variable's value
a //= 2
print(a)#8.0
# these operators are shorthand for writing, for example:
# a = a + 5
# print(a)
# the longhand form does the same thing; the augmented form is simply
# the more convenient spelling
# note the difference between the operator being on the left or the right:
# += versus =+ and -= versus =-
a =- 5
print(a) # -5
# here =- is just "=" followed by a unary minus: we assigned -5 to a
## := (the walrus operator)
# example:
giris = len(input("What's your name?"))
if giris < 4:
  print("Your name is short")
elif giris < 6:
  print("Your name is a bit long")
else:
  print("You have a long name.")
# we can also write this code using the := operator
if (giris := len(input("What is your name?"))) < 4:
  print("Your name is short")
elif giris < 6:
  print("Your name is a bit long")
else:
  print("You have a very long name.")
# the only advantage of := is fitting the work onto a single line
# it is not used much
# and since it is a new operator it only works on Python 3.8 and newer
#--------------------------------#
#      MEMBERSHIP OPERATORS      #
#--------------------------------#
# these let us check whether a character (or substring)
# occurs inside a value
# we do this with the operator named in
a = "asdfg"
print("a" in a)#True
# we are asking the machine: does the value "a" occur inside the variable a?
print("A" in a)#False
print("j" in a)#False
# does the value "j" occur inside the variable a? answer: no, it doesn't, False
#--------------------------------#
#       IDENTITY OPERATORS       #
#--------------------------------#
# in Python everything, i.e. every object, has an identity number behind the scenes
# to learn it we use the id() function
a = 50
print(id(a))#140705130925248
# we asked for a's identity number to be printed
name = "Hello my name is Murat"
print(id(name))#2704421625648
# every object in Python has a single, unique identity
# up to a certain size Python keeps values in a cache under the same identity number
nameOr = 100
print(id(nameOr))#140705130926848
nameOrOne = 100
print(id(nameOrOne))#140705130926848
# values beyond a certain size are kept under different identity numbers
y = 1000
print(id(y))#2467428862544
u = 1000
print(id(u))#1586531830352
# although they appear to hold the same value, Python gives them different identities
# the reason is that Python only keeps small objects in its cache
# for other, larger objects it allocates new storage each time
# to find out which values are small enough to be cached:
for k in range(-1000,1000):
  for v in range(-1000,1000):
    if k is v:
      print(k)
# according to the output, integers from -5 to 256 are kept in the cache
## is
number = 1000
numberOne = 1000
print(id(number))#2209573079632
print(id(numberOne))#2756858382928
print(number is 1000)#False
print(numberOne is 1000)#False
# is asks whether two objects are the same according to their identities
# is and == are often confused; the difference between them is:
# is looks at the objects' identities to decide whether they are the same
# == looks at the objects' values to decide whether they are equal
print(number is 1000)#False
# the answer is False because they have separate identities
print(number == 1000)#True
# the answer is True because both have the value 1000
# roughly, what is does behind the scenes is this:
print(id(number)==id(1000))#False
ornek = "Python"
print(ornek is "Python") #True
ornekOne = "Python is a powerful and easy programming language"
print(ornekOne is "Python is a powerful and easy programming language")#False
print(ornekOne == "Python is a powerful and easy programming language")#True
# just as with numeric values, small strings live in the cache
# while large strings are given a new identity and new storage
## WORKED EXAMPLES ##
#------------------------------------#
#       A SIMPLE CALCULATOR          #
#------------------------------------#
# our program will be a calculator
# the user will enter numbers and decide whether to add or subtract them
# and the program will act accordingly
# let's offer the user some options:
giris = """
(1) add
(2) subtract
(3) multiply
(4) divide
(5) square
(6) square root
"""
print(giris)
soru = input("Enter the number of the operation you want to perform: ")# ask the user which operation to perform
if soru == "1":
  sayi1 = int(input("Enter the first number for the addition: "))
  sayi2 = int(input("Enter the second number for the addition: "))
  print(sayi1,"+",sayi2,"=",sayi1+sayi2)
elif soru == "2":
  sayi3 = int(input("Enter the first number for the subtraction: "))
  sayi4 = int(input("Enter the second number for the subtraction: "))
  print(sayi3,"-",sayi4,"=",sayi3-sayi4)
elif soru == "3":
  sayi5 = int(input("Enter the first number for the multiplication: "))
  sayi6 = int(input("Enter the second number for the multiplication: "))
  print(sayi5,"*",sayi6,"=",sayi5*sayi6)
elif soru == "4":
  sayi7 = int(input("Enter the first number for the division: "))
  sayi8 = int(input("Enter the second number for the division: "))
  print(sayi7,"/",sayi8,"=",sayi7/sayi8)
elif soru == "5":
  sayi9 = int(input("Enter the number you want to square: "))
  print(sayi9,"squared =",sayi9 ** 2)
elif soru == "6":
  sayi10 = int(input("Enter the number whose square root you want: "))
  print(sayi10,"has the square root =",sayi10 ** 0.5)
else:
  print("Invalid input.")
  print("Please enter one of the options below: ",giris)
"""
Basically the program works like this:
if this situation holds:
    do this operation
or if that situation holds:
    do that operation
if some entirely different situation holds:
    do something else
"""
#-----------------------------------#
# A PROGRAM THAT ACTS ON THE PYTHON VERSION
#-----------------------------------#
# code written for the Python 3.x series does not run on the 2.x series
# we may want the code we write to be run only under a particular Python version
# or we can show an error message to a user who runs code written
# for 3.x under one of the 2.x versions
# let's import the sys module
import sys
# and access the variable we need inside the module
print(sys.version_info)
#sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
# let's also look at the output of the version variable
print(sys.version)#3.7.4 (default, Aug  9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)]
# but version_info is the one that serves our purpose
# some of the fields visible in version_info's output:
# major, the main version number of the Python series
# minor, the sub-version number
# micro, the lowest-level (patch) version number
# to reach these values:
print(sys.version_info.major)#3
print(sys.version_info.minor)#7
print(sys.version_info.micro)#4
# Let's write a program that checks which version it must be run with
# for this program we will use major and minor; if needed we could use micro too
import sys
_2x_metni = """
You are using one of Python's 2.x releases.
To run this program you need one of Python's
3.x releases installed on your system."""
_3x_metni = "Welcome to the program!"
if sys.version_info.major < 3:
  print(_2x_metni)
else:
  print(_3x_metni)
# first we import the module so we can use its tools
# then we create an error message for anyone using the 2.x series
# since variable names cannot start with a digit, we started with an underscore
# then we created a welcome message for Python 3 users
# we said: if the major (main) version number is less than 3, print this
# and for every other case, print _3x_metni
# in the 2.x releases the interpreter could not recognise Turkish characters
# the fix was to paste this line at the top of the file:
# -*- coding: utf-8 -*-
# in 3.x this problem is gone
# but that line only prevents the program from crashing; Turkish characters
# would still display garbled. For example, under 2.x this message
# (originally written with Turkish characters) would come out mangled:
"""
You are using one of Python's 2.x releases.
To run this program you need one of Python's
3.x releases installed on your system."""
# to prevent that, we prefix the string with u
# the u comes from the notion of unicode
_2x_metni = u"""
You are using one of Python's 2.x releases.
To run this program you need one of Python's
3.x releases installed on your system."""
# we managed to show an error message to versions below 3
# now we can also show an error message to older minor versions such as 3.4
hataMesaj3 = u"""
You are currently using an old release of Python.
Please upgrade!
"""
if sys.version_info.major == 3 and sys.version_info.minor == 8:
  print("bla bla")
else:
  print(hataMesaj3)
# this way we showed an error message to users below 3.8
# we can also use the version variable for this job
if "3.7" in sys.version:
  print("You are on the current version")
else:
  print(hataMesaj3)
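# An aside (not in the original lesson): sys.version_info is a named
# tuple, so instead of checking major and minor separately we can
# compare it against a plain tuple, which reads more naturally:
import sys
if sys.version_info >= (3, 8):
  print("You are on Python 3.8 or newer")
else:
  print("Please upgrade your Python!")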
| 27.283105 | 108 | 0.699749 | 2,500 | 17,925 | 5.024 | 0.3216 | 0.008599 | 0.008917 | 0.004379 | 0.163933 | 0.100318 | 0.063694 | 0.057882 | 0.040287 | 0.035191 | 0 | 0.040388 | 0.16569 | 17,925 | 656 | 109 | 27.324695 | 0.796456 | 0.56357 | 0 | 0.204545 | 0 | 0 | 0.331825 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.007576 | 0 | 0.007576 | 0.484848 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
968998458ff06ecf90db3e1b78179a0db936b801 | 907 | py | Python | leetcode/practise/59.py | Weis-98/learning_journey | 2ef40880d4551d5d44fa71eff98eca98361022d0 | [
"MIT"
] | null | null | null | leetcode/practise/59.py | Weis-98/learning_journey | 2ef40880d4551d5d44fa71eff98eca98361022d0 | [
"MIT"
] | null | null | null | leetcode/practise/59.py | Weis-98/learning_journey | 2ef40880d4551d5d44fa71eff98eca98361022d0 | [
"MIT"
] | null | null | null | class Solution:
def generateMatrix(self, n):
matrix = []
num = 1
for i in range(n):
matrix.append([0 for i in range(n)])
top = 0
bottom = n - 1
left = 0
right = n - 1
while top <= bottom and left <= right:
for i in range(left, right + 1):
matrix[top][i] = num
num += 1
for j in range(top + 1, bottom + 1):
matrix[j][right] = num
num += 1
if top < bottom and left < right:
for i in range(right - 1, left - 1, -1):
matrix[bottom][i] = num
num += 1
for j in range(bottom - 1, top, -1):
matrix[j][left] = num
num += 1
top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1
return matrix | 34.884615 | 79 | 0.406836 | 114 | 907 | 3.236842 | 0.210526 | 0.113821 | 0.065041 | 0.119241 | 0.341463 | 0.276423 | 0.276423 | 0.276423 | 0.173442 | 0 | 0 | 0.047109 | 0.485116 | 907 | 26 | 80 | 34.884615 | 0.743041 | 0 | 0 | 0.153846 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0 | 0 | 0 | 0.115385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
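# Illustrative usage (an addition, not part of the original solution):
# a self-contained spiral fill equivalent to generateMatrix above,
# shown for n = 3, which fills the grid clockwise from the top-left.
def spiral(n):
    # Walk the same four edges as the class method, layer by layer
    matrix = [[0] * n for _ in range(n)]
    num, top, bottom, left, right = 1, 0, n - 1, 0, n - 1
    while top <= bottom and left <= right:
        for i in range(left, right + 1):
            matrix[top][i] = num
            num += 1
        for j in range(top + 1, bottom + 1):
            matrix[j][right] = num
            num += 1
        if top < bottom and left < right:
            for i in range(right - 1, left - 1, -1):
                matrix[bottom][i] = num
                num += 1
            for j in range(bottom - 1, top, -1):
                matrix[j][left] = num
                num += 1
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1
    return matrix

print(spiral(3))  # [[1, 2, 3], [8, 9, 4], [7, 6, 5]]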
968a32ab6cc052ecd19269370677c3356ed68536 | 1,028 | py | Python | test_project/test_app/migrations/0002_auto_20180514_0720.py | iCHEF/queryfilter | 0ae4faf525e162d2720d328b96fa179d68277f1e | [
"Apache-2.0"
] | 4 | 2018-05-11T18:07:32.000Z | 2019-07-30T13:38:49.000Z | test_project/test_app/migrations/0002_auto_20180514_0720.py | iCHEF/queryfilter | 0ae4faf525e162d2720d328b96fa179d68277f1e | [
"Apache-2.0"
] | 6 | 2018-02-26T04:46:36.000Z | 2019-04-10T06:17:12.000Z | test_project/test_app/migrations/0002_auto_20180514_0720.py | iCHEF/queryfilter | 0ae4faf525e162d2720d328b96fa179d68277f1e | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.13 on 2018-05-14 07:20
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('test_app', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='data',
name='address',
field=models.TextField(default=''),
),
migrations.AlterField(
model_name='data',
name='age',
field=models.IntegerField(default=22),
),
migrations.AlterField(
model_name='data',
name='name',
field=models.TextField(default=''),
),
migrations.AlterField(
model_name='data',
name='price',
field=models.IntegerField(default=10),
),
migrations.AlterField(
model_name='data',
name='type',
field=models.IntegerField(default=0),
),
]
| 25.073171 | 50 | 0.535992 | 94 | 1,028 | 5.734043 | 0.5 | 0.083488 | 0.120594 | 0.157699 | 0.374768 | 0.374768 | 0.237477 | 0.237477 | 0.237477 | 0.237477 | 0 | 0.039706 | 0.338521 | 1,028 | 40 | 51 | 25.7 | 0.752941 | 0.067121 | 0 | 0.484848 | 1 | 0 | 0.0659 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.060606 | 0 | 0.151515 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
968afeca8b633bb5b9753043627e7d2f6a06eb50 | 360 | py | Python | pdx-extract/tests/test_utils.py | michaelheyman/PSU-Code-Review | 5a55d981425aaad69dc9ee06baaaef22bc426893 | [
"MIT"
] | null | null | null | pdx-extract/tests/test_utils.py | michaelheyman/PSU-Code-Review | 5a55d981425aaad69dc9ee06baaaef22bc426893 | [
"MIT"
] | null | null | null | pdx-extract/tests/test_utils.py | michaelheyman/PSU-Code-Review | 5a55d981425aaad69dc9ee06baaaef22bc426893 | [
"MIT"
] | null | null | null | import unittest.mock as mock
from app import utils
@mock.patch("app.utils.get_current_timestamp")
def test_generate_filename_generates_formatted_timestamp(mock_timestamp):
mock_timestamp.return_value = 1_555_555_555.555_555
filename = utils.generate_filename()
assert mock_timestamp.called is True
assert filename == "20190417194555.json"
| 25.714286 | 73 | 0.802778 | 49 | 360 | 5.571429 | 0.55102 | 0.087912 | 0.098901 | 0.087912 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095541 | 0.127778 | 360 | 13 | 74 | 27.692308 | 0.773885 | 0 | 0 | 0 | 1 | 0 | 0.138889 | 0.086111 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.125 | false | 0 | 0.25 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
968d107ac6d19ef65f3e2a523c368a10dd9ff203 | 8,243 | py | Python | IzVerifier/test/test_izverifier.py | ahmedlawi92/IzVerifier | b367935f66810b4c4897cc860c5a3e2070f1890f | [
"MIT"
] | null | null | null | IzVerifier/test/test_izverifier.py | ahmedlawi92/IzVerifier | b367935f66810b4c4897cc860c5a3e2070f1890f | [
"MIT"
] | null | null | null | IzVerifier/test/test_izverifier.py | ahmedlawi92/IzVerifier | b367935f66810b4c4897cc860c5a3e2070f1890f | [
"MIT"
] | null | null | null | from IzVerifier.izspecs.containers.izclasses import IzClasses
__author__ = 'fcanas'
import unittest
from IzVerifier.izspecs.containers.izconditions import IzConditions
from IzVerifier.izspecs.containers.izstrings import IzStrings
from IzVerifier.izspecs.containers.izvariables import IzVariables
from IzVerifier.izverifier import IzVerifier
from IzVerifier.izspecs.containers.constants import *
path1 = 'data/sample_installer_iz5/izpack/'
path2 = 'data/sample_installer_iz5/resources/'
source_path2 = 'data/sample_code_base/src/'
pom = 'data/sample_installer_iz5/pom.xml'
class TestVerifier(unittest.TestCase):
"""
Basic testing of verifier class.
"""
def setUp(self):
args = {
'specs_path': path1,
'sources': [source_path2],
'resources_path': path2,
'pom': pom,
'specs': ['conditions', 'strings', 'variables']
}
self.izv = IzVerifier(args)
self.izv.reporter.set_terminal_width() # Sets width to width of terminal
def test_IzPaths(self):
"""
Testing install.xml path parsing.
"""
specs = [('variables', 'variables.xml'),
('conditions', 'conditions.xml'),
('dynamicvariables', 'dynamic_variables.xml'),
('resources', 'resources.xml'),
('panels', 'panels.xml'),
('packs', 'packs.xml')]
        self.assertTrue(self.izv is not None)
for spec in specs:
path = self.izv.paths.get_path(spec[0])
self.assertTrue(spec[1] in path,
msg=path + "!=" + spec[1])
def test_IzConditions(self):
"""
Testing the strings container.
"""
conditions = self.izv.paths.get_path('conditions')
self.assertEquals(conditions, 'data/sample_installer_iz5/izpack/conditions.xml')
izc = IzConditions(conditions)
        self.assertTrue(izc is not None)
# Test for number of keys in conditions.xml plus white list
num = len(izc.get_keys()) - len(izc.properties[WHITE_LIST])
        print(num)
self.assertEquals(num, 15, str(num) + "!=15")
def test_langpack_paths(self):
"""
Test that we parsed the langpack paths from resources.xml
"""
langpacks = [('default', 'data/sample_installer_iz5/resources/langpacks/CustomLangPack.xml'),
('eng', 'data/sample_installer_iz5/resources/langpacks/CustomLangPack.xml')]
for tpack, fpack in zip(langpacks, self.izv.paths.get_langpacks().keys()):
self.assertEquals(tpack[1], self.izv.paths.get_langpack_path(tpack[0]))
def test_IzStrings(self):
"""
Testing the strings container.
"""
langpack = self.izv.paths.get_langpack_path()
izs = IzStrings(langpack)
        self.assertTrue(izs is not None)
# Test for number of strings
num = len(izs.get_keys())
self.assertEquals(num, 5, str(num) + '!=4')
def test_IzVariables(self):
"""
Testing the variables container.
"""
variables = self.izv.paths.get_path('variables')
self.assertEquals(variables, 'data/sample_installer_iz5/izpack/variables.xml')
izv = IzVariables(variables)
        self.assertTrue(izv is not None)
num = len(izv.get_keys()) - len(izv.properties[WHITE_LIST])
self.assertEquals(num, 3, str(num) + '!=3')
def test_verifyStrings(self):
"""
Verify strings in sample installer
"""
hits = self.izv.verify('strings', verbosity=2, filter_classes=True)
undefined_strings = {'some.string.4',
'my.error.message.id.test',
'password.empty',
'password.not.equal',
'some.user',
'some.user.panel.info',
'some.user.password',
'some.user.password.confirm',
'some.string.5',
'some.string.6',
'hello.world',
'my.izpack5.key.1',
'my.izpack5.key.2',
'my.izpack5.key.3'}
found_strings, location = zip(*hits)
strings_not_found = undefined_strings - set(found_strings)
additional_found_strings = set(found_strings) - undefined_strings
self.assertTrue(len(strings_not_found) == 0, "Strings not found: " + str(strings_not_found))
self.assertTrue(len(additional_found_strings) == 0, "Should not have been found: " + str(additional_found_strings))
def test_verifyConditions(self):
"""
Verify conditions in sample installer.
"""
hits = self.izv.verify('conditions', verbosity=2)
undefined_conditions = {'myinstallerclass.condition',
'some.condition.2',
'some.condition.1'}
found_conditions, location = zip(*hits)
for id in undefined_conditions:
self.assertTrue(id in found_conditions)
def test_verifyVariables(self):
"""
Verify variables in sample installer.
"""
hits = self.izv.verify('variables', verbosity=1)
num = len(hits)
self.assertTrue(num == 5)
def test_verifyAll(self):
"""
Verify all specs on sample installer.
"""
hits = self.izv.verify_all(verbosity=1)
num = len(hits)
assert (num != 0)
def test_findReference(self):
"""
Find some references to items in source code and specs.
"""
hits = self.izv.find_references('some.user.password', verbosity=2)
self.assertEquals(len(hits), 2)
hits = self.izv.find_references('password.empty', verbosity=2)
self.assertEquals(len(hits), 1)
# Ref in code
hits = self.izv.find_references('some.string.3', verbosity=2)
self.assertEquals(len(hits), 1)
# var substitution not yet implemented for find references, so this
# test will miss the ref in Foo.java
hits = self.izv.find_references('some.condition.1', verbosity=2)
self.assertEquals(len(hits), 1)
def test_verifyClasses(self):
"""
Testing the izclasses container.
"""
classes = IzClasses(source_path2)
classes.print_keys()
self.assertEquals(len(classes.get_keys()), 5)
hits = self.izv.verify('classes', verbosity=2)
self.assertEquals(len(hits), 5)
referenced = self.izv.get_referenced('classes')
self.assertTrue(referenced.has_key('com.sample.installer.Foo'))
self.assertTrue(referenced.has_key('com.sample.installer.SuperValidator'))
self.assertTrue(referenced.has_key('com.sample.installer.SuperDuperValidator'))
self.assertTrue(referenced.has_key('com.sample.installer.BarListener'))
def test_findReferencedClasses(self):
"""
Testing the IzVerifier's ability to find the classes used in an installer.
"""
found_referenced_classes = self.izv.referenced_classes
actual_referenced_classes = {
'data/sample_code_base/src/com/sample/installer/Foo.java',
'data/sample_code_base/src/com/sample/installer/Apples.java',
'data/sample_code_base/src/com/sample/installer/Pineapples.java',
'data/sample_code_base/src/com/sample/installer/Bar.java'
}
found_referenced_classes = set(found_referenced_classes)
extra_classes_found = found_referenced_classes - actual_referenced_classes
classes_not_found = actual_referenced_classes - found_referenced_classes
for reffed_class in extra_classes_found:
print "this class shouldn't have been found %s" % reffed_class
for reffed_class in classes_not_found:
print "this class should have been found %s" % reffed_class
self.assertTrue(len(extra_classes_found) == 0)
self.assertTrue(len(classes_not_found) == 0)
if __name__ == '__main__':
unittest.main()
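The verify tests above compare expected and found identifiers via set differences before asserting. A minimal standalone sketch of that pattern, with made-up ids and locations (not real IzVerifier output):

```python
# Hypothetical hits as (id, location) pairs, mimicking the tuples
# unpacked with zip(*hits) in test_verifyStrings above.
hits = [('some.string.4', 'Foo.java'), ('hello.world', 'Bar.java')]
expected = {'some.string.4', 'hello.world'}

# zip(*hits) unzips the pairs into parallel tuples of ids and locations.
found_ids, locations = zip(*hits)

missing = expected - set(found_ids)       # expected but not reported
unexpected = set(found_ids) - expected    # reported but not expected

assert not missing and not unexpected
```

Asserting on the two differences (rather than raw equality) gives a failure message that names exactly which ids were missing or unexpected.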
# File: build_you/models/company.py (repo: bostud/build_you, license: MIT)
import enum
from sqlalchemy import Column, ForeignKey, String, JSON, Integer, Enum
from sqlalchemy.orm import relationship
from build_you.models.base import BaseModel
from build_you.database import Base
class Company(BaseModel, Base):
__tablename__ = 'company'
class Status(enum.Enum):
ACTIVE = 1
INACTIVE = 2
DELETED = 3
name = Column(String(length=255), unique=True, index=True)
owner_id = Column(Integer, ForeignKey('user.id'))
settings = Column(JSON, default={}, nullable=True)
status = Column(Enum(Status), default=Status.ACTIVE)
owner = relationship('User', back_populates='companies')
company_objects = relationship('BuildObject', back_populates='company')
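The nested Status enum above maps readable states to integers for the SQLAlchemy Enum column. A dependency-free sketch of the same pattern using only stdlib enum (no SQLAlchemy; values taken from the model above):

```python
import enum

class Status(enum.Enum):
    ACTIVE = 1
    INACTIVE = 2
    DELETED = 3

# The integer value is available for comparisons or serialization,
# and members can be looked up by value or by name.
default_status = Status.ACTIVE
assert default_status.value == 1
assert Status(3) is Status.DELETED
assert Status['INACTIVE'].name == 'INACTIVE'
```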
# File: utilities/access_concatenate_daily.py (repo: pizzathief/PyFluxPro, license: BSD-3-Clause)
"""
Purpose:
Reads the hourly ACCESS files pulled from the BoM OPeNDAP site
and concatenates them into a single file.
This script file takes a control file name on the command line.
The control file lists the sites to be processed and the variables
to be processed.
Normal usage is to process all files in a monthly sub-directory.
Usage:
python access_concat.py access_concat.txt
Author: PRI
Date: September 2015
"""
# Python modules
import configobj
import datetime
import glob
import logging
import netCDF4
import numpy
import os
import pytz
import pdb
from scipy.interpolate import interp1d
import sys
# since the scripts directory is there, try importing the modules
sys.path.append('../scripts')
# PFP
import constants as c
import meteorologicalfunctions as mf
import qcio
import qcutils
# !!! classes !!!
class ACCESSData(object):
def __init__(self):
self.globalattr = {}
self.globalattr["file_list"] = []
self.variables = {}
self.varattr = {}
# !!! start of function definitions !!!
def get_info_dict(cf,site):
info = {}
in_path = cf["Sites"][site]["in_filepath"]
in_name = cf["Sites"][site]["in_filename"]
info["in_filename"] = os.path.join(in_path,in_name)
out_path = cf["Sites"][site]["out_filepath"]
if not os.path.exists(out_path): os.makedirs(out_path)
out_name = cf["Sites"][site]["out_filename"]
info["out_filename"] = os.path.join(out_path,out_name)
info["interpolate"] = True
if not cf["Sites"][site].as_bool("interpolate"):
info["interpolate"] = False
info["site_name"] = cf["Sites"][site]["site_name"]
info["site_timezone"] = cf["Sites"][site]["site_timezone"]
info["site_tz"] = pytz.timezone(info["site_timezone"])
return info
def get_datetime(ds_60minutes,f,info):
valid_date = f.variables["valid_date"][:]
nRecs = len(valid_date)
valid_time = f.variables["valid_time"][:]
dl = [datetime.datetime.strptime(str(int(valid_date[i])*10000+int(valid_time[i])),"%Y%m%d%H%M") for i in range(0,nRecs)]
dt_utc_all = numpy.array(dl)
time_step = numpy.array([(dt_utc_all[i]-dt_utc_all[i-1]).total_seconds() for i in range(1,len(dt_utc_all))])
time_step = numpy.append(time_step,3600)
idx = numpy.where(time_step!=0)[0]
dt_utc = dt_utc_all[idx]
dt_utc = [x.replace(tzinfo=pytz.utc) for x in dt_utc]
dt_loc = [x.astimezone(info["site_tz"]) for x in dt_utc]
dt_loc = [x-x.dst() for x in dt_loc]
dt_loc = [x.replace(tzinfo=None) for x in dt_loc]
ds_60minutes.series["DateTime"] = {}
ds_60minutes.series["DateTime"]["Data"] = dt_loc
nRecs = len(ds_60minutes.series["DateTime"]["Data"])
ds_60minutes.globalattributes["nc_nrecs"] = nRecs
return idx
def set_globalattributes(ds_60minutes,info):
ds_60minutes.globalattributes["time_step"] = 60
ds_60minutes.globalattributes["time_zone"] = info["site_timezone"]
ds_60minutes.globalattributes["site_name"] = info["site_name"]
ds_60minutes.globalattributes["xl_datemode"] = 0
ds_60minutes.globalattributes["nc_level"] = "L1"
return
def get_accessdata(cf,ds_60minutes,f,info):
# latitude and longitude, chose central pixel of 3x3 grid
ds_60minutes.globalattributes["latitude"] = f.variables["lat"][1]
ds_60minutes.globalattributes["longitude"] = f.variables["lon"][1]
# list of variables to process
var_list = cf["Variables"].keys()
# get a series of Python datetimes and put this into the data structure
valid_date = f.variables["valid_date"][:]
nRecs = len(valid_date)
valid_time = f.variables["valid_time"][:]
dl = [datetime.datetime.strptime(str(int(valid_date[i])*10000+int(valid_time[i])),"%Y%m%d%H%M") for i in range(0,nRecs)]
dt_utc_all = numpy.array(dl)
time_step = numpy.array([(dt_utc_all[i]-dt_utc_all[i-1]).total_seconds() for i in range(1,len(dt_utc_all))])
time_step = numpy.append(time_step,3600)
idxne0 = numpy.where(time_step!=0)[0]
idxeq0 = numpy.where(time_step==0)[0]
idx_clipped = numpy.where((idxeq0>0)&(idxeq0<nRecs))[0]
idxeq0 = idxeq0[idx_clipped]
dt_utc = dt_utc_all[idxne0]
dt_utc = [x.replace(tzinfo=pytz.utc) for x in dt_utc]
dt_loc = [x.astimezone(info["site_tz"]) for x in dt_utc]
dt_loc = [x-x.dst() for x in dt_loc]
dt_loc = [x.replace(tzinfo=None) for x in dt_loc]
flag = numpy.zeros(len(dt_loc),dtype=numpy.int32)
ds_60minutes.series["DateTime"] = {}
ds_60minutes.series["DateTime"]["Data"] = dt_loc
ds_60minutes.series["DateTime"]["Flag"] = flag
ds_60minutes.series["DateTime_UTC"] = {}
ds_60minutes.series["DateTime_UTC"]["Data"] = dt_utc
ds_60minutes.series["DateTime_UTC"]["Flag"] = flag
nRecs = len(ds_60minutes.series["DateTime"]["Data"])
ds_60minutes.globalattributes["nc_nrecs"] = nRecs
# we're done with valid_date and valid_time, drop them from the variable list
for item in ["valid_date","valid_time","lat","lon"]:
if item in var_list: var_list.remove(item)
# create the QC flag with all zeros
nRecs = ds_60minutes.globalattributes["nc_nrecs"]
flag_60minutes = numpy.zeros(nRecs,dtype=numpy.int32)
# get the UTC hour
hr_utc = [x.hour for x in dt_utc]
attr = qcutils.MakeAttributeDictionary(long_name='UTC hour')
qcutils.CreateSeries(ds_60minutes,'Hr_UTC',hr_utc,Flag=flag_60minutes,Attr=attr)
# now loop over the variables listed in the control file
for label in var_list:
# get the name of the variable in the ACCESS file
access_name = qcutils.get_keyvaluefromcf(cf,["Variables",label],"access_name",default=label)
# warn the user if the variable not found
if access_name not in f.variables.keys():
msg = "Requested variable "+access_name
msg = msg+" not found in ACCESS data"
logging.error(msg)
continue
# get the variable attibutes
attr = get_variableattributes(f,access_name)
# loop over the 3x3 matrix of ACCESS grid data supplied
for i in range(0,3):
for j in range(0,3):
label_ij = label+'_'+str(i)+str(j)
if len(f.variables[access_name].shape)==3:
series = f.variables[access_name][:,i,j]
elif len(f.variables[access_name].shape)==4:
series = f.variables[access_name][:,0,i,j]
else:
msg = "Unrecognised variable ("+label
msg = msg+") dimension in ACCESS file"
logging.error(msg)
series = series[idxne0]
qcutils.CreateSeries(ds_60minutes,label_ij,series,
Flag=flag_60minutes,Attr=attr)
return
def get_variableattributes(f,access_name):
attr = {}
# following code for netCDF4.MFDataset()
# for vattr in f.variables[access_name].ncattrs():
# attr[vattr] = getattr(f.variables[access_name],vattr)
# following code for access_read_mfiles2()
attr = f.varattr[access_name]
attr["missing_value"] = c.missing_value
return attr
def changeunits_airtemperature(ds_60minutes):
attr = qcutils.GetAttributeDictionary(ds_60minutes,"Ta_00")
if attr["units"] == "K":
for i in range(0,3):
for j in range(0,3):
label = "Ta_"+str(i)+str(j)
Ta,f,a = qcutils.GetSeriesasMA(ds_60minutes,label)
Ta = Ta - c.C2K
attr["units"] = "C"
qcutils.CreateSeries(ds_60minutes,label,Ta,Flag=f,Attr=attr)
return
def changeunits_soiltemperature(ds_60minutes):
attr = qcutils.GetAttributeDictionary(ds_60minutes,"Ts_00")
if attr["units"] == "K":
for i in range(0,3):
for j in range(0,3):
label = "Ts_"+str(i)+str(j)
Ts,f,a = qcutils.GetSeriesasMA(ds_60minutes,label)
Ts = Ts - c.C2K
attr["units"] = "C"
qcutils.CreateSeries(ds_60minutes,label,Ts,Flag=f,Attr=attr)
return
def changeunits_pressure(ds_60minutes):
attr = qcutils.GetAttributeDictionary(ds_60minutes,"ps_00")
if attr["units"] == "Pa":
for i in range(0,3):
for j in range(0,3):
label = "ps_"+str(i)+str(j)
ps,f,a = qcutils.GetSeriesasMA(ds_60minutes,label)
ps = ps/float(1000)
attr["units"] = "kPa"
qcutils.CreateSeries(ds_60minutes,label,ps,Flag=f,Attr=attr)
return
def get_windspeedanddirection(ds_60minutes):
for i in range(0,3):
for j in range(0,3):
u_label = "u_"+str(i)+str(j)
v_label = "v_"+str(i)+str(j)
Ws_label = "Ws_"+str(i)+str(j)
u,f,a = qcutils.GetSeriesasMA(ds_60minutes,u_label)
v,f,a = qcutils.GetSeriesasMA(ds_60minutes,v_label)
Ws = numpy.sqrt(u*u+v*v)
attr = qcutils.MakeAttributeDictionary(long_name="Wind speed",
units="m/s",height="10m")
qcutils.CreateSeries(ds_60minutes,Ws_label,Ws,Flag=f,Attr=attr)
# wind direction from components
for i in range(0,3):
for j in range(0,3):
u_label = "u_"+str(i)+str(j)
v_label = "v_"+str(i)+str(j)
Wd_label = "Wd_"+str(i)+str(j)
u,f,a = qcutils.GetSeriesasMA(ds_60minutes,u_label)
v,f,a = qcutils.GetSeriesasMA(ds_60minutes,v_label)
Wd = float(270) - numpy.ma.arctan2(v,u)*float(180)/numpy.pi
index = numpy.ma.where(Wd>360)[0]
if len(index)>0: Wd[index] = Wd[index] - float(360)
attr = qcutils.MakeAttributeDictionary(long_name="Wind direction",
units="degrees",height="10m")
qcutils.CreateSeries(ds_60minutes,Wd_label,Wd,Flag=f,Attr=attr)
return
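get_windspeedanddirection converts the u/v wind components to speed and meteorological direction (degrees the wind blows from). A stdlib-only sketch of the same arithmetic on scalars; the real code operates element-wise on masked numpy arrays:

```python
import math

def wind_speed_direction(u, v):
    """Speed and meteorological direction from eastward (u) and
    northward (v) components, mirroring the 270 - atan2 formula above."""
    ws = math.sqrt(u * u + v * v)
    wd = 270.0 - math.degrees(math.atan2(v, u))
    if wd > 360.0:
        wd -= 360.0
    return ws, wd

# u = -1 is flow toward the west, i.e. an easterly wind (from 90 degrees).
ws, wd = wind_speed_direction(-1.0, 0.0)
assert abs(ws - 1.0) < 1e-9 and abs(wd - 90.0) < 1e-9
```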
def get_relativehumidity(ds_60minutes):
for i in range(0,3):
for j in range(0,3):
q_label = "q_"+str(i)+str(j)
Ta_label = "Ta_"+str(i)+str(j)
ps_label = "ps_"+str(i)+str(j)
RH_label = "RH_"+str(i)+str(j)
q,f,a = qcutils.GetSeriesasMA(ds_60minutes,q_label)
Ta,f,a = qcutils.GetSeriesasMA(ds_60minutes,Ta_label)
ps,f,a = qcutils.GetSeriesasMA(ds_60minutes,ps_label)
RH = mf.RHfromspecifichumidity(q, Ta, ps)
attr = qcutils.MakeAttributeDictionary(long_name='Relative humidity',
units='%',standard_name='not defined')
qcutils.CreateSeries(ds_60minutes,RH_label,RH,Flag=f,Attr=attr)
return
def get_absolutehumidity(ds_60minutes):
for i in range(0,3):
for j in range(0,3):
Ta_label = "Ta_"+str(i)+str(j)
RH_label = "RH_"+str(i)+str(j)
Ah_label = "Ah_"+str(i)+str(j)
Ta,f,a = qcutils.GetSeriesasMA(ds_60minutes,Ta_label)
RH,f,a = qcutils.GetSeriesasMA(ds_60minutes,RH_label)
Ah = mf.absolutehumidityfromRH(Ta, RH)
attr = qcutils.MakeAttributeDictionary(long_name='Absolute humidity',
units='g/m3',standard_name='not defined')
qcutils.CreateSeries(ds_60minutes,Ah_label,Ah,Flag=f,Attr=attr)
return
def changeunits_soilmoisture(ds_60minutes):
attr = qcutils.GetAttributeDictionary(ds_60minutes,"Sws_00")
for i in range(0,3):
for j in range(0,3):
label = "Sws_"+str(i)+str(j)
Sws,f,a = qcutils.GetSeriesasMA(ds_60minutes,label)
Sws = Sws/float(100)
attr["units"] = "frac"
qcutils.CreateSeries(ds_60minutes,label,Sws,Flag=f,Attr=attr)
return
def get_radiation(ds_60minutes):
for i in range(0,3):
for j in range(0,3):
label_Fn = "Fn_"+str(i)+str(j)
label_Fsd = "Fsd_"+str(i)+str(j)
label_Fld = "Fld_"+str(i)+str(j)
label_Fsu = "Fsu_"+str(i)+str(j)
label_Flu = "Flu_"+str(i)+str(j)
label_Fn_sw = "Fn_sw_"+str(i)+str(j)
label_Fn_lw = "Fn_lw_"+str(i)+str(j)
Fsd,f,a = qcutils.GetSeriesasMA(ds_60minutes,label_Fsd)
Fld,f,a = qcutils.GetSeriesasMA(ds_60minutes,label_Fld)
Fn_sw,f,a = qcutils.GetSeriesasMA(ds_60minutes,label_Fn_sw)
Fn_lw,f,a = qcutils.GetSeriesasMA(ds_60minutes,label_Fn_lw)
Fsu = Fsd - Fn_sw
Flu = Fld - Fn_lw
Fn = (Fsd-Fsu)+(Fld-Flu)
attr = qcutils.MakeAttributeDictionary(long_name='Up-welling long wave',
standard_name='surface_upwelling_longwave_flux_in_air',
units='W/m2')
qcutils.CreateSeries(ds_60minutes,label_Flu,Flu,Flag=f,Attr=attr)
attr = qcutils.MakeAttributeDictionary(long_name='Up-welling short wave',
standard_name='surface_upwelling_shortwave_flux_in_air',
units='W/m2')
qcutils.CreateSeries(ds_60minutes,label_Fsu,Fsu,Flag=f,Attr=attr)
attr = qcutils.MakeAttributeDictionary(long_name='Calculated net radiation',
standard_name='surface_net_allwave_radiation',
units='W/m2')
qcutils.CreateSeries(ds_60minutes,label_Fn,Fn,Flag=f,Attr=attr)
return
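get_radiation derives the upwelling and net all-wave terms from what ACCESS provides (downwelling plus net shortwave/longwave). The same arithmetic on plain numbers, with illustrative values (not from an ACCESS file):

```python
# Example values in W/m2 (illustrative only).
Fsd, Fn_sw = 500.0, 400.0   # downwelling and net shortwave
Fld, Fn_lw = 300.0, -50.0   # downwelling and net longwave

Fsu = Fsd - Fn_sw                 # upwelling shortwave
Flu = Fld - Fn_lw                 # upwelling longwave
Fn = (Fsd - Fsu) + (Fld - Flu)    # net all-wave radiation

# Net all-wave reduces to the sum of the two net components.
assert Fn == Fn_sw + Fn_lw == 350.0
```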
def get_groundheatflux(ds_60minutes):
for i in range(0,3):
for j in range(0,3):
label_Fg = "Fg_"+str(i)+str(j)
label_Fn = "Fn_"+str(i)+str(j)
label_Fh = "Fh_"+str(i)+str(j)
label_Fe = "Fe_"+str(i)+str(j)
Fn,f,a = qcutils.GetSeriesasMA(ds_60minutes,label_Fn)
Fh,f,a = qcutils.GetSeriesasMA(ds_60minutes,label_Fh)
Fe,f,a = qcutils.GetSeriesasMA(ds_60minutes,label_Fe)
Fg = Fn - Fh - Fe
attr = qcutils.MakeAttributeDictionary(long_name='Calculated ground heat flux',
standard_name='downward_heat_flux_in_soil',
units='W/m2')
qcutils.CreateSeries(ds_60minutes,label_Fg,Fg,Flag=f,Attr=attr)
return
def get_availableenergy(ds_60minutes):
for i in range(0,3):
for j in range(0,3):
label_Fg = "Fg_"+str(i)+str(j)
label_Fn = "Fn_"+str(i)+str(j)
label_Fa = "Fa_"+str(i)+str(j)
Fn,f,a = qcutils.GetSeriesasMA(ds_60minutes,label_Fn)
Fg,f,a = qcutils.GetSeriesasMA(ds_60minutes,label_Fg)
Fa = Fn - Fg
attr = qcutils.MakeAttributeDictionary(long_name='Calculated available energy',
standard_name='not defined',units='W/m2')
qcutils.CreateSeries(ds_60minutes,label_Fa,Fa,Flag=f,Attr=attr)
return
def perdelta(start,end,delta):
curr = start
while curr <= end:
yield curr
curr += delta
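perdelta yields an inclusive series of datetimes at a fixed step; this is how interpolate_to_30minutes builds its 30-minute grid. A self-contained demo (the generator is restated so the snippet runs on its own):

```python
import datetime

def perdelta(start, end, delta):
    # Inclusive of both endpoints when (end - start) is a multiple of delta.
    curr = start
    while curr <= end:
        yield curr
        curr += delta

start = datetime.datetime(2015, 9, 1, 0, 0)
end = datetime.datetime(2015, 9, 1, 2, 0)
grid = list(perdelta(start, end, datetime.timedelta(minutes=30)))

# Two hours at 30-minute steps, endpoints included: 5 points.
assert len(grid) == 5
assert grid[-1] == end
```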
def interpolate_to_30minutes(ds_60minutes):
ds_30minutes = qcio.DataStructure()
# copy the global attributes
for this_attr in ds_60minutes.globalattributes.keys():
ds_30minutes.globalattributes[this_attr] = ds_60minutes.globalattributes[this_attr]
# update the global attribute "time_step"
ds_30minutes.globalattributes["time_step"] = 30
# generate the 30 minute datetime series
dt_loc_60minutes = ds_60minutes.series["DateTime"]["Data"]
dt_loc_30minutes = [x for x in perdelta(dt_loc_60minutes[0],dt_loc_60minutes[-1],datetime.timedelta(minutes=30))]
nRecs_30minutes = len(dt_loc_30minutes)
dt_utc_60minutes = ds_60minutes.series["DateTime_UTC"]["Data"]
dt_utc_30minutes = [x for x in perdelta(dt_utc_60minutes[0],dt_utc_60minutes[-1],datetime.timedelta(minutes=30))]
# update the global attribute "nc_nrecs"
ds_30minutes.globalattributes['nc_nrecs'] = nRecs_30minutes
ds_30minutes.series["DateTime"] = {}
ds_30minutes.series["DateTime"]["Data"] = dt_loc_30minutes
flag = numpy.zeros(len(dt_loc_30minutes),dtype=numpy.int32)
ds_30minutes.series["DateTime"]["Flag"] = flag
ds_30minutes.series["DateTime_UTC"] = {}
ds_30minutes.series["DateTime_UTC"]["Data"] = dt_utc_30minutes
flag = numpy.zeros(len(dt_utc_30minutes),dtype=numpy.int32)
ds_30minutes.series["DateTime_UTC"]["Flag"] = flag
# get the year, month etc from the datetime
qcutils.get_xldatefromdatetime(ds_30minutes)
qcutils.get_ymdhmsfromdatetime(ds_30minutes)
# interpolate to 30 minutes
nRecs_60 = len(ds_60minutes.series["DateTime"]["Data"])
nRecs_30 = len(ds_30minutes.series["DateTime"]["Data"])
x_60minutes = numpy.arange(0,nRecs_60,1)
x_30minutes = numpy.arange(0,nRecs_60-0.5,0.5)
varlist_60 = ds_60minutes.series.keys()
# strip out the date and time variables already done
for item in ["DateTime","DateTime_UTC","xlDateTime","Year","Month","Day","Hour","Minute","Second","Hdh","Hr_UTC"]:
if item in varlist_60: varlist_60.remove(item)
# now do the interpolation (its OK to interpolate accumulated precipitation)
for label in varlist_60:
series_60minutes,flag,attr = qcutils.GetSeries(ds_60minutes,label)
ci_60minutes = numpy.zeros(len(series_60minutes))
idx = numpy.where(abs(series_60minutes-float(c.missing_value))<c.eps)[0]
ci_60minutes[idx] = float(1)
int_fn = interp1d(x_60minutes,series_60minutes)
series_30minutes = int_fn(x_30minutes)
int_fn = interp1d(x_60minutes,ci_60minutes)
ci_30minutes = int_fn(x_30minutes)
idx = numpy.where(abs(ci_30minutes-float(0))>c.eps)[0]
series_30minutes[idx] = numpy.float64(c.missing_value)
flag_30minutes = numpy.zeros(nRecs_30, dtype=numpy.int32)
flag_30minutes[idx] = numpy.int32(1)
qcutils.CreateSeries(ds_30minutes,label,series_30minutes,Flag=flag_30minutes,Attr=attr)
# get the UTC hour
hr_utc = [float(x.hour)+float(x.minute)/60 for x in dt_utc_30minutes]
attr = qcutils.MakeAttributeDictionary(long_name='UTC hour')
flag_30minutes = numpy.zeros(nRecs_30, dtype=numpy.int32)
qcutils.CreateSeries(ds_30minutes,'Hr_UTC',hr_utc,Flag=flag_30minutes,Attr=attr)
return ds_30minutes
def get_instantaneous_precip30(ds_30minutes):
hr_utc,f,a = qcutils.GetSeries(ds_30minutes,'Hr_UTC')
for i in range(0,3):
for j in range(0,3):
label = "Precip_"+str(i)+str(j)
# get the accumulated precipitation
accum,flag,attr = qcutils.GetSeries(ds_30minutes,label)
# get the 30 minute precipitation
precip = numpy.ediff1d(accum,to_begin=0)
# now we deal with the reset of accumulated precipitation at 00, 06, 12 and 18 UTC
# indices of analysis times 00, 06, 12, and 18
idx1 = numpy.where(numpy.mod(hr_utc,6)==0)[0]
# set 30 minute precipitation at these times to half of the analysis value
precip[idx1] = accum[idx1]/float(2)
# now get the indices of the 30 minute period immediately before the analysis time
# these values will have been interpolated between the last forecast value
# and the analysis value, they need to be set to half of the analysis value
idx2 = idx1-1
# remove negative indices
idx2 = idx2[idx2>=0]
# set these 30 minute times to half the analysis value
precip[idx2] = accum[idx2+1]/float(2)
# set precipitations less than 0.01 mm to 0
idx3 = numpy.ma.where(precip<0.01)[0]
precip[idx3] = float(0)
# set instantaneous precipitation to missing when accumulated precipitation was missing
idx = numpy.where(flag!=0)[0]
precip[idx] = float(c.missing_value)
# set some variable attributes
attr["long_name"] = "Precipitation total over time step"
attr["units"] = "mm/30 minutes"
qcutils.CreateSeries(ds_30minutes,label,precip,Flag=flag,Attr=attr)
return
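The precipitation handling turns an accumulated series (reset at each 6-hourly analysis time) into per-interval totals. A pure-Python sketch of the core idea; the real code uses numpy.ediff1d with to_begin=0 on masked arrays and, for the 30-minute case, additionally halves the values straddling the analysis time:

```python
def accum_to_interval(accum, reset_idx):
    """First-difference an accumulated series; at reset indices the
    accumulation restarts, so the accumulated value itself is the total.
    This mirrors the simpler 60-minute case above."""
    precip = [0.0] + [b - a for a, b in zip(accum, accum[1:])]
    for i in reset_idx:
        precip[i] = accum[i]
    return precip

# Accumulation runs 0 -> 1 -> 3, then resets and runs 2 -> 5.
accum = [0.0, 1.0, 3.0, 2.0, 5.0]
precip = accum_to_interval(accum, reset_idx=[3])
assert precip == [0.0, 1.0, 2.0, 2.0, 3.0]
```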
def get_instantaneous_precip60(ds_60minutes):
hr_utc,f,a = qcutils.GetSeries(ds_60minutes,'Hr_UTC')
for i in range(0,3):
for j in range(0,3):
label = "Precip_"+str(i)+str(j)
# get the accumulated precipitation
accum,flag,attr = qcutils.GetSeries(ds_60minutes,label)
# get the 60 minute precipitation
precip = numpy.ediff1d(accum,to_begin=0)
# now we deal with the reset of accumulated precipitation at 00, 06, 12 and 18 UTC
# indices of analysis times 00, 06, 12, and 18
idx1 = numpy.where(numpy.mod(hr_utc,6)==0)[0]
# set 60 minute precipitation at these times to the analysis value
precip[idx1] = accum[idx1]
# set accumulated precipitation totals less than 0.01 mm to 0
idx2 = numpy.ma.where(precip<0.01)[0]
precip[idx2] = float(0)
# set instantaneous precipitation to missing when accumulated precipitation was missing
idx = numpy.where(flag!=0)[0]
precip[idx] = float(c.missing_value)
# set some variable attributes
attr["long_name"] = "Precipitation total over time step"
attr["units"] = "mm/60 minutes"
qcutils.CreateSeries(ds_60minutes,label,precip,Flag=flag,Attr=attr)
def access_read_mfiles2(file_list,var_list=None):
# use None as the default to avoid a shared mutable default argument
if var_list is None: var_list = []
f = ACCESSData()
# check that we have a list of files to process
if len(file_list)==0:
print "access_read_mfiles: empty file_list received, returning ..."
return f
# make sure latitude and longitude are read
if "lat" not in var_list: var_list.append("lat")
if "lon" not in var_list: var_list.append("lon")
# make sure valid_date and valid_time are read
if "valid_date" not in var_list: var_list.append("valid_date")
if "valid_time" not in var_list: var_list.append("valid_time")
for file_name in file_list:
# open the netCDF file
ncfile = netCDF4.Dataset(file_name)
# check the number of records
dims = ncfile.dimensions
shape = (len(dims["time"]),len(dims["lat"]),len(dims["lon"]))
# move to the next file if this file doesn't have exactly 1 time record
if shape[0]!=1:
print "access_read_mfiles: length of time dimension in "+file_name+" is "+str(shape[0])+" (expected 1)"
continue
# move to the next file if this file doesn't have 3 latitude records
if shape[1]!=3:
print "access_read_mfiles: length of lat dimension in "+file_name+" is "+str(shape[1])+" (expected 3)"
continue
# move to the next file if this file doesn't have 3 longitude records
if shape[2]!=3:
print "access_read_mfiles: length of lon dimension in "+file_name+" is "+str(shape[2])+" (expected 3)"
continue
# seems OK to continue with this file ...
# add the file name to the file_list in the global attributes
f.globalattr["file_list"].append(file_name)
# get the global attributes
for gattr in ncfile.ncattrs():
if gattr not in f.globalattr:
f.globalattr[gattr] = getattr(ncfile,gattr)
# if no variable list was passed to this routine, use all variables
if len(var_list)==0:
var_list=ncfile.variables.keys()
# load the data into the data structure
for var in var_list:
# get the name of the variable in the ACCESS file
access_name = qcutils.get_keyvaluefromcf(cf,["Variables",var],"access_name",default=var)
# check that the requested variable exists in the ACCESS file
if access_name in ncfile.variables.keys():
# check to see if the variable is already in the data structure
if access_name not in f.variables.keys():
f.variables[access_name] = ncfile.variables[access_name][:]
else:
f.variables[access_name] = numpy.concatenate((f.variables[access_name],ncfile.variables[access_name][:]),axis=0)
# now copy the variable attribiutes
# create the variable attribute dictionary
if access_name not in f.varattr: f.varattr[access_name] = {}
# loop over the variable attributes
for this_attr in ncfile.variables[access_name].ncattrs():
# check to see if the attribute has already
if this_attr not in f.varattr[access_name].keys():
# add the variable attribute if it's not there already
f.varattr[access_name][this_attr] = getattr(ncfile.variables[access_name],this_attr)
else:
print "access_read_mfiles: ACCESS variable "+access_name+" not found in "+file_name
if access_name not in f.variables.keys():
f.variables[access_name] = makedummyseries(shape)
else:
f.variables[access_name] = numpy.concatenate((f.variables[access_name],makedummyseries(shape)),axis=0)
# close the netCDF file
ncfile.close()
# return with the data structure
return f
def makedummyseries(shape):
return numpy.ma.masked_all(shape)
# !!! end of function definitions !!!
# !!! start of main program !!!
# start the logger
logging.basicConfig(filename='access_concat.log',level=logging.DEBUG)
console = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s', '%H:%M:%S')
console.setFormatter(formatter)
console.setLevel(logging.INFO)
logging.getLogger('').addHandler(console)
# get the control file name from the command line
#cf_name = sys.argv[1]
cf_name = qcio.get_controlfilename(path='../controlfiles',title='Choose a control file')
# get the control file contents
logging.info('Reading the control file')
cf = configobj.ConfigObj(cf_name)
# get stuff from the control file
logging.info('Getting control file contents')
site_list = cf["Sites"].keys()
var_list = cf["Variables"].keys()
# loop over sites
#site_list = ["AdelaideRiver"]
for site in site_list:
info = get_info_dict(cf,site)
logging.info("Processing site "+info["site_name"])
# instance the data structures
logging.info('Creating the data structures')
ds_60minutes = qcio.DataStructure()
# get a sorted list of files that match the mask in the control file
file_list = sorted(glob.glob(info["in_filename"]))
# read the netcdf files
logging.info('Reading the netCDF files for '+info["site_name"])
f = access_read_mfiles2(file_list,var_list=var_list)
# get the data from the netCDF files and write it to the 60 minute data structure
logging.info('Getting the ACCESS data')
get_accessdata(cf,ds_60minutes,f,info)
# set some global attributes
logging.info('Setting global attributes')
set_globalattributes(ds_60minutes,info)
# check for time gaps in the file
logging.info("Checking for time gaps")
if qcutils.CheckTimeStep(ds_60minutes):
qcutils.FixTimeStep(ds_60minutes)
# get the datetime in some different formats
logging.info('Getting xlDateTime and YMDHMS')
qcutils.get_xldatefromdatetime(ds_60minutes)
qcutils.get_ymdhmsfromdatetime(ds_60minutes)
#f.close()
# get derived quantities and adjust units
logging.info("Changing units and getting derived quantities")
# air temperature from K to C
changeunits_airtemperature(ds_60minutes)
# soil temperature from K to C
changeunits_soiltemperature(ds_60minutes)
# pressure from Pa to kPa
changeunits_pressure(ds_60minutes)
# wind speed from components
get_windspeedanddirection(ds_60minutes)
# relative humidity from temperature, specific humidity and pressure
get_relativehumidity(ds_60minutes)
# absolute humidity from temperature and relative humidity
get_absolutehumidity(ds_60minutes)
# soil moisture from kg/m2 to m3/m3
changeunits_soilmoisture(ds_60minutes)
# net radiation and upwelling short and long wave radiation
get_radiation(ds_60minutes)
# ground heat flux as residual
get_groundheatflux(ds_60minutes)
# Available energy
get_availableenergy(ds_60minutes)
if info["interpolate"]:
# interploate from 60 minute time step to 30 minute time step
logging.info("Interpolating data to 30 minute time step")
ds_30minutes = interpolate_to_30minutes(ds_60minutes)
# get instantaneous precipitation from accumulated precipitation
get_instantaneous_precip30(ds_30minutes)
# write to netCDF file
logging.info("Writing 30 minute data to netCDF file")
ncfile = qcio.nc_open_write(info["out_filename"])
qcio.nc_write_series(ncfile, ds_30minutes,ndims=1)
else:
# get instantaneous precipitation from accumulated precipitation
get_instantaneous_precip60(ds_60minutes)
# write to netCDF file
logging.info("Writing 60 minute data to netCDF file")
ncfile = qcio.nc_open_write(info["out_filename"])
qcio.nc_write_series(ncfile, ds_60minutes,ndims=1)
logging.info('All done!')
# File: avem_theme/functions/sanitize.py (repo: mverleg/django-boots-plain-theme, license: BSD-3-Clause)
try:
from urllib.parse import urlparse
except ImportError:
from urlparse import urlparse
from django.conf import settings
DEFAULT_NOSCR_ALLOWED_TAGS = 'strong:title b i em:title p:title h1:title h2:title h3:title h4:title h5:title ' + \
'div:title span:title ol ul li:title a:href:title:rel img:src:alt:title dl td:title dd:title ' + \
'table:cellspacing:cellpadding thead tbody th tr td:title:colspan:rowspan br'
def sanitize_html(text, add_nofollow = False,
allowed_tags = getattr(settings, 'NOSCR_ALLOWED_TAGS', DEFAULT_NOSCR_ALLOWED_TAGS)):
"""
Cleans an html string:
* remove any not-whitelisted tags
- remove any potentially malicious tags or attributes
- remove any invalid tags that may break layout
* escape any <, > and & from remaining text (by bs4); this prevents
> >> <<script>script> alert("Haha, I hacked your page."); </</script>script>\
* optionally add nofollow attributes to foreign anchors
* removes comments
:comment * optionally replace some tags with others:
:arg text: Input html.
:arg allowed_tags: Argument should be in form 'tag2:attr1:attr2 tag2:attr1 tag3', where tags are allowed HTML
tags, and attrs are the allowed attributes for that tag.
:return: Sanitized html.
This is based on https://djangosnippets.org/snippets/1655/
"""
try:
from bs4 import BeautifulSoup, Comment, NavigableString
except ImportError:
raise ImportError('to use sanitize_html() and |noscr, you need to install beautifulsoup4')
""" function to check if urls are absolute
note that example.com/path/file.html is relative, officially and in Firefox """
is_relative = lambda url: not bool(urlparse(url).netloc)
""" regex to remove javascript """
#todo: what exactly is the point of this? is there js in attribute values?
#js_regex = compile(r'[\s]*(&#x.{1,7})?'.join(list('javascript')))
""" allowed tags structure """
allowed_tags = [tag.split(':') for tag in allowed_tags.split()]
allowed_tags = {tag[0]: tag[1:] for tag in allowed_tags}
""" create comment-free soup """
	soup = BeautifulSoup(text, 'html.parser')
for comment in soup.findAll(text = lambda text: isinstance(text, Comment)):
comment.extract()
for tag in soup.find_all(recursive = True):
if tag.name not in allowed_tags:
""" hide forbidden tags (keeping content) """
tag.hidden = True
else:
""" whitelisted tags """
tag.attrs = {attr: val for attr, val in tag.attrs.items() if attr in allowed_tags[tag.name]}
""" add nofollow to external links if requested """
if add_nofollow and tag.name == 'a' and 'href' in tag.attrs:
if not is_relative(tag.attrs['href']):
tag.attrs['rel'] = (tag.attrs['rel'] if 'rel' in tag.attrs else []) + ['nofollow']
""" return as unicode """
	return soup.encode_contents().decode('utf8')
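# The whitespace-and-colon format of allowed_tags can be shown in isolation;
# a minimal sketch of the two parsing lines above, independent of Django and bs4:

```python
# Sketch of the allowed_tags parsing used by sanitize_html(): the spec string
# 'tag1:attr1:attr2 tag2' becomes a dict mapping tag name -> allowed attributes.
spec = 'a:href:title:rel img:src:alt b'
parts = [tag.split(':') for tag in spec.split()]
allowed = {tag[0]: tag[1:] for tag in parts}
print(allowed)
# {'a': ['href', 'title', 'rel'], 'img': ['src', 'alt'], 'b': []}
```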
| 37.958904 | 114 | 0.714904 | 411 | 2,771 | 4.761557 | 0.489051 | 0.06745 | 0.026571 | 0.023505 | 0.019417 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00994 | 0.164922 | 2,771 | 72 | 115 | 38.486111 | 0.835782 | 0.320823 | 0 | 0.137931 | 0 | 0.068966 | 0.239921 | 0.03503 | 0 | 0 | 0 | 0.013889 | 0 | 1 | 0.034483 | false | 0 | 0.241379 | 0 | 0.310345 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
969ea553ff4cdd6978d9da12725a1d04afc89e38 | 354 | py | Python | tests/i18n/patterns/urls/wrong_namespace.py | Yoann-Vie/esgi-hearthstone | 115d03426c7e8e80d89883b78ac72114c29bed12 | [
"PSF-2.0",
"BSD-3-Clause"
] | null | null | null | tests/i18n/patterns/urls/wrong_namespace.py | Yoann-Vie/esgi-hearthstone | 115d03426c7e8e80d89883b78ac72114c29bed12 | [
"PSF-2.0",
"BSD-3-Clause"
] | null | null | null | tests/i18n/patterns/urls/wrong_namespace.py | Yoann-Vie/esgi-hearthstone | 115d03426c7e8e80d89883b78ac72114c29bed12 | [
"PSF-2.0",
"BSD-3-Clause"
] | null | null | null | from django.conf.urls import url
from django.conf.urls.i18n import i18n_patterns
from django.utils.translation import gettext_lazy as _
from django.views.generic import TemplateView
view = TemplateView.as_view(template_name='dummy.html')
app_name = 'account'
urlpatterns = i18n_patterns(
url(_(r'^register/$'), view, name='register'),
)
| 29.5 | 56 | 0.757062 | 48 | 354 | 5.416667 | 0.541667 | 0.153846 | 0.107692 | 0.138462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019608 | 0.135593 | 354 | 11 | 57 | 32.181818 | 0.830065 | 0 | 0 | 0 | 0 | 0 | 0.104956 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.444444 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
96a94e5f66df21e992b1df975469b8edd292ca16 | 3,285 | py | Python | ffttest.py | teslaworksumn/Reactor | ba6d2d80bd606047e81a5e1ccc0f1af26497feb7 | [
"MIT"
] | null | null | null | ffttest.py | teslaworksumn/Reactor | ba6d2d80bd606047e81a5e1ccc0f1af26497feb7 | [
"MIT"
] | null | null | null | ffttest.py | teslaworksumn/Reactor | ba6d2d80bd606047e81a5e1ccc0f1af26497feb7 | [
"MIT"
] | null | null | null | # From http://julip.co/2012/05/arduino-python-soundlight-spectrum/
# Python 2.7 code to analyze sound and interface with Arduino
import pyaudio # from http://people.csail.mit.edu/hubert/pyaudio/
import serial # from http://pyserial.sourceforge.net/
import numpy # from http://numpy.scipy.org/
import audioop
import sys
import math
import struct
'''
Sources
http://www.swharden.com/blog/2010-03-05-realtime-fft-graph-of-audio-wav-file-or-microphone-input-with-python-scipy-and-wckgraph/
http://macdevcenter.com/pub/a/python/2001/01/31/numerically.html?page=2
'''
MAX = 0
NUM = 20
def list_devices():
# List all audio input devices
p = pyaudio.PyAudio()
i = 0
n = p.get_device_count()
while i < n:
dev = p.get_device_info_by_index(i)
if dev['maxInputChannels'] > 0:
print str(i)+'. '+dev['name']
i += 1
def fft():
chunk = 2**11 # Change if too fast/slow, never less than 2**11
scale = 25 # Change if too dim/bright
exponent = 3 # Change if too little/too much difference between loud and quiet sounds
samplerate = 44100
# CHANGE THIS TO CORRECT INPUT DEVICE
# Enable stereo mixing in your sound card
# to make you sound output an input
# Use list_devices() to list all your input devices
device = 1 # Mic
#device = 4 # SF2
p = pyaudio.PyAudio()
stream = p.open(format = pyaudio.paInt16,
channels = 1,
rate = 44100,
input = True,
frames_per_buffer = chunk,
input_device_index = device)
print "Starting, use Ctrl+C to stop"
try:
ser = serial.Serial(
port='/dev/ttyS0',
timeout=1
)
while True:
data = stream.read(chunk)
# Do FFT
levels = calculate_levels(data, chunk, samplerate)
# Make it look better and send to serial
for level in levels:
level = max(min(level / scale, 1.0), 0.0)
level = level**exponent
level = int(level * 255)
#ser.write(chr(level))
#sys.stdout.write(str(level)+' ')
#sys.stdout.write('\n')
#s = ser.read(6)
except KeyboardInterrupt:
pass
finally:
print "\nStopping"
stream.close()
p.terminate()
#ser.close()
def calculate_levels(data, chunk, samplerate):
# Use FFT to calculate volume for each frequency
global MAX
# Convert raw sound data to Numpy array
fmt = "%dH"%(len(data)/2)
data2 = struct.unpack(fmt, data)
data2 = numpy.array(data2, dtype='h')
# Apply FFT
fourier = numpy.fft.fft(data2)
ffty = numpy.abs(fourier[0:len(fourier)/2])/1000
ffty1=ffty[:len(ffty)/2]
ffty2=ffty[len(ffty)/2::]+2
ffty2=ffty2[::-1]
ffty=ffty1+ffty2
ffty=numpy.log(ffty)-2
fourier = list(ffty)[4:-4]
fourier = fourier[:len(fourier)/2]
size = len(fourier)
# Add up for 6 lights
levels = [sum(fourier[i:(i+size/NUM)]) for i in xrange(0, size, size/NUM)][:NUM]
return levels
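# The final binning step of calculate_levels() (summing FFT magnitudes into NUM
# groups) can be sketched in isolation with Python 3; note integer division
# must be explicit (//) where the Python 2 code above relies on plain /.

```python
# Group a spectrum into NUM equal-width bins by summing each slice,
# mirroring the last line of calculate_levels() above.
NUM = 4
fourier = [1, 2, 3, 4, 5, 6, 7, 8]
size = len(fourier)
step = size // NUM  # Python 3: integer division must be explicit
levels = [sum(fourier[i:i + step]) for i in range(0, size, step)][:NUM]
print(levels)  # [3, 7, 11, 15]
```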
if __name__ == '__main__':
#list_devices()
fft()
| 28.076923 | 128 | 0.578082 | 436 | 3,285 | 4.302752 | 0.46789 | 0.017058 | 0.017591 | 0.025586 | 0.036247 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0407 | 0.304414 | 3,285 | 116 | 129 | 28.318966 | 0.780306 | 0.264536 | 0 | 0.028986 | 0 | 0 | 0.037805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.014493 | 0.101449 | null | null | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
96ab9f2c7f20292bca2815ee86e2e792b39a18da | 1,412 | py | Python | mouse.py | Ra-Na/android-mouse-cursor | b9f0a8394871cb17a2d6ec1a0cc2548b86990ce0 | [
"MIT"
] | 7 | 2019-12-05T13:34:37.000Z | 2022-01-15T09:58:11.000Z | mouse.py | Ra-Na/android-mouse-cursor | b9f0a8394871cb17a2d6ec1a0cc2548b86990ce0 | [
"MIT"
] | null | null | null | mouse.py | Ra-Na/android-mouse-cursor | b9f0a8394871cb17a2d6ec1a0cc2548b86990ce0 | [
"MIT"
] | 5 | 2019-07-27T02:28:04.000Z | 2022-02-14T15:10:25.000Z | import socket
def getch(): # define non-Windows version
import sys, tty, termios
fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
try:
tty.setraw(sys.stdin.fileno())
ch = sys.stdin.read(1)
finally:
termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
return ch
# get your phones IP by visiting https://www.whatismyip.com/
# then specify your IPv6 here like so
UDP_IP = "2a01:30:2a04:3c1:c83c:2315:9d2b:9a40" # IPv6
UDP_PORT = 9999
print "UDP target IP:", UDP_IP
print "UDP target port:", UDP_PORT
print ""
print "W, A, S, D - Move mouse"
print "Space - Click"
print "Q - Quit"
# IPv6
sock = socket.socket(socket.AF_INET6, # Internet
socket.SOCK_DGRAM) # UDP
# IPv4
# sock = socket.socket(socket.AF_INET, # Internet
# socket.SOCK_DGRAM) # UDP
while True:
key = ord(getch())
if key == 119: # W
# print 'up'
sock.sendto('0', (UDP_IP, UDP_PORT))
elif key == 97: # A
# print 'left'
sock.sendto('2', (UDP_IP, UDP_PORT))
elif key == 115: # S
# print 'down'
sock.sendto('1', (UDP_IP, UDP_PORT))
elif key == 100: # D
# print 'right'
sock.sendto('3', (UDP_IP, UDP_PORT))
elif key == 113: # Q
break
elif key == 32: # Space
# print 'click'
sock.sendto('4', (UDP_IP, UDP_PORT))
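# The key-to-command chain above could equally be written as a lookup table;
# a minimal sketch (key codes and command strings are taken from the handlers
# above, the socket send is stubbed out):

```python
# Map key codes (ord values of w/a/s/d/space) to the command strings
# sent over UDP in the loop above; 113 ('q') has no entry and means quit.
COMMANDS = {119: '0', 97: '2', 115: '1', 100: '3', 32: '4'}

def command_for(key):
    # Returns the command string, or None for unmapped keys such as 'q'.
    return COMMANDS.get(key)

print(command_for(ord('w')))  # '0' -> move up
```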
| 25.214286 | 62 | 0.576487 | 198 | 1,412 | 4.010101 | 0.474747 | 0.044081 | 0.050378 | 0.075567 | 0.221662 | 0.095718 | 0 | 0 | 0 | 0 | 0 | 0.051793 | 0.288952 | 1,412 | 55 | 63 | 25.672727 | 0.739044 | 0.228045 | 0 | 0 | 0 | 0 | 0.121241 | 0.033835 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.057143 | null | null | 0.171429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
96aeee51e8b4208d515dafe2237e76a19c17dd76 | 895 | py | Python | players/human.py | pikatyuu/deep-learning-othello | d9f149b01f079f5d021ba9655445cd43a847a628 | [
"MIT"
] | null | null | null | players/human.py | pikatyuu/deep-learning-othello | d9f149b01f079f5d021ba9655445cd43a847a628 | [
"MIT"
] | null | null | null | players/human.py | pikatyuu/deep-learning-othello | d9f149b01f079f5d021ba9655445cd43a847a628 | [
"MIT"
] | null | null | null | class Human():
def __init__(self, name="Human"):
self.name = name
def action(self, game):
safe_input = False
while not safe_input:
pos = input("choose a position: ")
if pos == "draw":
game.draw()
elif pos == "exit":
import sys
sys.exit()
elif pos == "movable":
print(game.movable)
elif len(pos) == 2:
clone = game.clone()
pos = tuple(map(int, tuple(pos)))
if clone.can_play(pos):
safe_input = True
else:
print("// Error: Can't put it down //")
else:
print("Error: Invaild input")
return game.play(pos)
def game_finished(self, game):
pass
def all_game_finished(self):
pass
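# The two-character input in Human.action() is turned into a board coordinate
# by tuple(map(int, tuple(pos))); in isolation:

```python
# A two-character string such as "23" is split into its digits and
# converted to an (int, int) board coordinate, as in Human.action() above.
pos = "23"
coord = tuple(map(int, tuple(pos)))
print(coord)  # (2, 3)
```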
| 27.96875 | 59 | 0.444693 | 94 | 895 | 4.117021 | 0.478723 | 0.069767 | 0.072351 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00202 | 0.446927 | 895 | 31 | 60 | 28.870968 | 0.779798 | 0 | 0 | 0.142857 | 0 | 0 | 0.099441 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.071429 | 0.035714 | 0 | 0.25 | 0.107143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
96b02e8ac66ecd2c65e6e010e248801adc096f97 | 497 | py | Python | clase6/clases.py | Tank3-TK3/codigo-basico-Python | 580e8d284fa8a4d70b2a264762c91bd64c89ab80 | [
"MIT"
] | 7 | 2021-04-19T01:32:49.000Z | 2021-06-04T17:38:04.000Z | clase6/clases.py | Tank3-TK3/codigo-basico-Python | 580e8d284fa8a4d70b2a264762c91bd64c89ab80 | [
"MIT"
] | null | null | null | clase6/clases.py | Tank3-TK3/codigo-basico-Python | 580e8d284fa8a4d70b2a264762c91bd64c89ab80 | [
"MIT"
] | null | null | null | class Animals:
def comer(self):
print("Comiendo")
def dormir(self):
print("Durmiendo")
class Perro:
def __init__(self, nombre):
self.nombre = nombre
def comer(self):
print("Comiendo")
def dormir(self):
print("Durmiendo")
def ladrar(self):
print("Ladrando")
print("--------------------------------------------------")
firulais = Perro("Firulais")
firulais.comer()
firulais.dormir()
firulais.ladrar()
print("--------------------------------------------------") | 23.666667 | 60 | 0.525151 | 47 | 497 | 5.468085 | 0.319149 | 0.175097 | 0.093385 | 0.132296 | 0.404669 | 0.404669 | 0.404669 | 0.404669 | 0.404669 | 0.404669 | 0 | 0 | 0.142857 | 497 | 21 | 61 | 23.666667 | 0.603286 | 0 | 0 | 0.5 | 0 | 0 | 0.313808 | 0.209205 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0 | 0 | 0.4 | 0.35 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
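# Perro repeats comer and dormir instead of reusing Animals; a hypothetical
# variant using inheritance gives the same behaviour with less duplication:

```python
class Animals:
    def comer(self):
        print("Comiendo")

    def dormir(self):
        print("Durmiendo")

# Hypothetical alternative: Perro inherits the shared methods from Animals.
class Perro(Animals):
    def __init__(self, nombre):
        self.nombre = nombre

    def ladrar(self):
        print("Ladrando")

firulais = Perro("Firulais")
firulais.comer()   # inherited from Animals
firulais.ladrar()
```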
96b0583a014d7b5a8ac9ea17b0f8eea2bc40f0eb | 3,103 | py | Python | homeworks_advanced/homework2_attention_in_seq2seq/modules.py | BiscuitsLayer/ml-mipt | 24917705189d2eb97a07132405b4f93654cb1aaf | [
"MIT"
] | 1 | 2021-08-01T11:29:11.000Z | 2021-08-01T11:29:11.000Z | homeworks_advanced/homework2_attention_in_seq2seq/modules.py | ivasio/ml-mipt | 9c8896b4dfe46ee02bc5fdbca47acffbeca6828e | [
"MIT"
] | null | null | null | homeworks_advanced/homework2_attention_in_seq2seq/modules.py | ivasio/ml-mipt | 9c8896b4dfe46ee02bc5fdbca47acffbeca6828e | [
"MIT"
] | null | null | null | import random
import torch
from torch import nn
from torch.nn import functional as F
class Encoder(nn.Module):
def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.input_dim = input_dim
self.emb_dim = emb_dim
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = nn.Embedding(input_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout=dropout)
self.dropout = nn.Dropout(p=dropout)
def forward(self, src):
embedded = self.dropout(self.embedding(src))
output, (hidden, cell) = self.rnn(embedded)
return output, hidden, cell
class Attention(nn.Module):
def __init__(self, enc_hid_dim, dec_hid_dim):
super().__init__()
self.enc_hid_dim = enc_hid_dim
self.dec_hid_dim = dec_hid_dim
        self.attn = None  # <YOUR CODE HERE>
def forward(self, hidden, encoder_outputs):
# <YOUR CODE HERE>
return
class DecoderWithAttention(nn.Module):
def __init__(self, output_dim, emb_dim, enc_hid_dim, dec_hid_dim, dropout, attention):
super().__init__()
self.emb_dim = emb_dim
self.enc_hid_dim = enc_hid_dim
self.dec_hid_dim = dec_hid_dim
self.output_dim = output_dim
self.attention = attention
self.embedding = nn.Embedding(output_dim, emb_dim)
        self.rnn = None  # <YOUR CODE HERE>
        self.out = None  # <YOUR CODE HERE>
self.dropout = nn.Dropout(dropout)
def forward(self, input, hidden, encoder_outputs):
        # <YOUR CODE HERE>
        pass
class Seq2Seq(nn.Module):
def __init__(self, encoder, decoder, device):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.device = device
assert encoder.hid_dim == decoder.dec_hid_dim, \
"Hidden dimensions of encoder and decoder must be equal!"
def forward(self, src, trg, teacher_forcing_ratio = 0.5):
#src = [src sent len, batch size]
#trg = [trg sent len, batch size]
#teacher_forcing_ratio is probability to use teacher forcing
#e.g. if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time
        # Note: batch is the second dimension (index 1); sequence length is the first
batch_size = trg.shape[1]
max_len = trg.shape[0]
trg_vocab_size = self.decoder.output_dim
#tensor to store decoder outputs
outputs = torch.zeros(max_len, batch_size, trg_vocab_size).to(self.device)
#last hidden state of the encoder is used as the initial hidden state of the decoder
enc_states, hidden, cell = self.encoder(src)
#first input to the decoder is the <sos> tokens
input = trg[0,:]
for t in range(1, max_len):
output, hidden = self.decoder(input, hidden, enc_states)
outputs[t] = output
teacher_force = random.random() < teacher_forcing_ratio
top1 = output.max(1)[1]
input = (trg[t] if teacher_force else top1)
return outputs
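# The blanks in Attention.forward are the student's task; for orientation only,
# the core of any attention layer is a softmax over alignment scores, sketched
# here in plain Python (illustrative, not the expected PyTorch solution):

```python
import math

# Softmax over alignment scores: given one score per encoder position,
# produce attention weights that sum to 1.
def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]  # shift for stability
    total = sum(exps)
    return [e / total for e in exps]

weights = softmax([1.0, 2.0, 3.0])
print(sum(weights))  # 1.0 (up to float rounding)
```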
| 29.836538 | 92 | 0.638737 | 431 | 3,103 | 4.348028 | 0.241299 | 0.057631 | 0.033618 | 0.032017 | 0.211313 | 0.151547 | 0.078975 | 0.051227 | 0.051227 | 0.051227 | 0 | 0.00707 | 0.270706 | 3,103 | 103 | 93 | 30.126214 | 0.821034 | 0.162101 | 0 | 0.163934 | 0 | 0 | 0.021268 | 0 | 0 | 0 | 0 | 0.009709 | 0.016393 | 0 | null | null | 0 | 0.065574 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
96b8879f01bcc6f2a6fb4f8f1c990b4167027165 | 5,377 | py | Python | mgs/v1.0/data_server.py | vt-rocksat-2017/dashboard | e99a71edc74dd8b7f3eec023c381524561a7b6e4 | [
"MIT"
] | 1 | 2017-08-09T19:57:38.000Z | 2017-08-09T19:57:38.000Z | mgs/v1.0/data_server.py | vt-rocksat-2017/dashboard | e99a71edc74dd8b7f3eec023c381524561a7b6e4 | [
"MIT"
] | null | null | null | mgs/v1.0/data_server.py | vt-rocksat-2017/dashboard | e99a71edc74dd8b7f3eec023c381524561a7b6e4 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
#########################################
# Title: Rocksat Data Server Class #
# Project: Rocksat #
# Version: 1.0 #
# Date: August, 2017 #
# Author: Zach Leffke, KJ4QLP #
# Comment: Initial Version #
#########################################
import socket
import threading
import sys
import os
import errno
import time
import binascii
import struct
import numpy
import datetime as dt
from logger import *
class Data_Server(threading.Thread):
def __init__ (self, options):
threading.Thread.__init__(self,name = 'DataServer')
self._stop = threading.Event()
self.ip = options.ip
self.port = options.port
self.id = options.id
self.ts = options.ts
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #TCP Socket
self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self.connected = False
self.log_fh = setup_logger(self.id, 'main', self.ts)
self.logger = logging.getLogger('main')
self.last_frame_ts = dt.datetime.utcnow() #Time Stamp of last received frame
self.frame_count = 0
self.adsb_count = 0
self.ais_count = 0
self.hw_count = 0
def run(self):
print "Data Server Running..."
try:
self.sock.connect((self.ip, self.port))
self.connected = True
print self.utc_ts() + "Connected to Modem..."
except Exception as e:
self.Handle_Connection_Exception(e)
while (not self._stop.isSet()):
if self.connected == True:
data = self.sock.recv(4096)
if len(data) == 256:
self.Decode_Frame(data, dt.datetime.utcnow())
else:
self.connected = False
elif self.connected == False:
print self.utc_ts() + "Disconnected from modem..."
time.sleep(1)
try:
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #TCP Socket
self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self.sock.connect((self.ip, self.port))
self.connected = True
print self.utc_ts() + "Connected to Modem..."
except Exception as e:
self.Handle_Connection_Exception(e)
sys.exit()
def Decode_Frame(self, rx_frame, ts):
self.frame_count += 1
self.last_frame_ts = ts
#print str(self.frame_count) + ',' + binascii.hexlify(rx_frame)
self.logger.info(str(self.frame_count) + ',' + binascii.hexlify(rx_frame))
self.Decode_Header(rx_frame)
def Decode_Header(self, rx_frame):
callsign = str(rx_frame[0:6]) #Callsign
dn_pkt_id = numpy.uint16(struct.unpack('>H',rx_frame[6:8]))[0] #downlink frame id
up_pkt_id = numpy.uint16(struct.unpack('>H',rx_frame[8:10]))[0] #uplink frame id
msg_type = numpy.uint8(struct.unpack('>B',rx_frame[10]))[0] #message type, 0=ADSB, 1=AIS, 2=HW
msg_type_str = ""
if msg_type == 0: msg_type_str = 'ADSB'
elif msg_type == 1: msg_type_str = ' AIS'
elif msg_type == 2: msg_type_str = ' HW'
print self.last_frame_ts, self.frame_count, callsign, dn_pkt_id, up_pkt_id, msg_type_str
def Handle_Connection_Exception(self, e):
#print e, type(e)
errorcode = e[0]
if errorcode==errno.ECONNREFUSED:
pass
#print errorcode, "Connection refused"
elif errorcode==errno.EISCONN:
print errorcode, "Transport endpoint is already connected"
self.sock.close()
else:
print e
self.sock.close()
self.connected = False
def get_frame_counts(self):
self.valid_count = len(self.valid.time_tx)
self.fault_count = len(self.fault.time_tx)
self.recon_count = len(self.recon.time_tx)
self.total_count = self.valid_count + self.fault_count + self.recon_count
#print self.utc_ts(), self.total_count, self.valid_count, self.fault_count, self.recon_count
return self.total_count, self.valid_count, self.fault_count, self.recon_count
def set_start_time(self, start):
print self.utc_ts() + "Mission Clock Started"
ts = start.strftime('%Y%m%d_%H%M%S')
self.log_file = "./log/rocksat_"+ self.id + "_" + ts + ".log"
log_f = open(self.log_file, 'a')
msg = "Rocksat Receiver ID: " + self.id + "\n"
msg += "Log Initialization Time Stamp: " + str(start) + " UTC\n\n"
log_f.write(msg)
log_f.close()
self.log_flag = True
print self.utc_ts() + "Logging Started: " + self.log_file
self.valid_start = True
self.start_time = start
for i in range(len(self.valid.time_rx)):
self.valid.rx_offset[i] = (self.valid.time_rx[i]-self.start_time).total_seconds()
def stop(self):
self._stop.set()
def stopped(self):
return self._stop.isSet()
def utc_ts(self):
return str(dt.datetime.utcnow()) + " UTC | "
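# The fixed offsets in Decode_Header (callsign in bytes 0-6, two big-endian
# uint16 ids, one uint8 message type) can be exercised against a synthetic
# frame; a minimal Python 3 sketch (the numpy wrapping above is dropped,
# plain struct yields the same integers):

```python
import struct

# Build a synthetic 11-byte header matching the layout read by
# Decode_Header(): 6-byte callsign, >H downlink id, >H uplink id, B type.
frame = b'KJ4QLP' + struct.pack('>H', 7) + struct.pack('>H', 3) + struct.pack('>B', 1)

callsign = frame[0:6]
dn_pkt_id = struct.unpack('>H', frame[6:8])[0]
up_pkt_id = struct.unpack('>H', frame[8:10])[0]
msg_type = struct.unpack('>B', frame[10:11])[0]  # 0=ADSB, 1=AIS, 2=HW

print(callsign, dn_pkt_id, up_pkt_id, msg_type)  # b'KJ4QLP' 7 3 1
```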
| 38.407143 | 109 | 0.565929 | 674 | 5,377 | 4.330861 | 0.255193 | 0.024666 | 0.024666 | 0.028777 | 0.292566 | 0.272011 | 0.272011 | 0.272011 | 0.272011 | 0.217883 | 0 | 0.012362 | 0.307978 | 5,377 | 139 | 110 | 38.683453 | 0.772104 | 0.109169 | 0 | 0.212963 | 0 | 0 | 0.065343 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.009259 | 0.092593 | null | null | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
96b9956367c551043c19348764e4606177dd4559 | 555 | py | Python | day01/python/beckel/solution.py | clssn/aoc-2019 | a978e5235855be937e60a1e7f88d1ef9b541be15 | [
"MIT"
] | 22 | 2019-11-27T08:28:46.000Z | 2021-04-27T05:37:08.000Z | day01/python/wiedmann/solution.py | sancho1241/aoc-2019 | e0f63824c8250e0f84a42805e1a7ff7d9232002c | [
"MIT"
] | 77 | 2019-11-16T17:22:42.000Z | 2021-05-10T20:36:36.000Z | day01/python/wiedmann/solution.py | sancho1241/aoc-2019 | e0f63824c8250e0f84a42805e1a7ff7d9232002c | [
"MIT"
] | 43 | 2019-11-27T06:36:51.000Z | 2021-11-03T20:56:48.000Z | import math
def fuel_needed(mass):
return math.floor(int(mass)/3 - 2)
def fuel_needed_recursive(mass):
fuel_needed_i = fuel_needed(mass)
if (fuel_needed_i <= 0):
return 0
return fuel_needed_i + fuel_needed_recursive(fuel_needed_i)
total_fuel = 0
total_fuel_recursive = 0
with open("input.txt", "r") as fp:
for line in fp:
total_fuel += fuel_needed(line)
total_fuel_recursive += fuel_needed_recursive(line)
print("Total fuel: " + str(total_fuel))
print("Total fuel recursive: " + str(total_fuel_recursive))
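# The two formulas can be checked against the well-known Advent of Code 2019
# day 1 examples; a self-contained restatement of the functions above:

```python
import math

def fuel_needed(mass):
    return math.floor(int(mass) / 3 - 2)

def fuel_needed_recursive(mass):
    fuel = fuel_needed(mass)
    if fuel <= 0:
        return 0
    return fuel + fuel_needed_recursive(fuel)

print(fuel_needed(1969))            # 654
print(fuel_needed_recursive(1969))  # 966
```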
| 25.227273 | 63 | 0.704505 | 85 | 555 | 4.294118 | 0.329412 | 0.273973 | 0.120548 | 0.082192 | 0.115068 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013304 | 0.187387 | 555 | 21 | 64 | 26.428571 | 0.796009 | 0 | 0 | 0 | 0 | 0 | 0.079279 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.0625 | 0.0625 | 0.375 | 0.125 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
96bee57e7d78263abb2c0dde497d36d9e3def948 | 1,364 | py | Python | generated-libraries/python/netapp/vserver/vserver_aggr_info.py | radekg/netapp-ontap-lib-get | 6445ebb071ec147ea82a486fbe9f094c56c5c40d | [
"MIT"
] | 2 | 2017-03-28T15:31:26.000Z | 2018-08-16T22:15:18.000Z | generated-libraries/python/netapp/vserver/vserver_aggr_info.py | radekg/netapp-ontap-lib-get | 6445ebb071ec147ea82a486fbe9f094c56c5c40d | [
"MIT"
] | null | null | null | generated-libraries/python/netapp/vserver/vserver_aggr_info.py | radekg/netapp-ontap-lib-get | 6445ebb071ec147ea82a486fbe9f094c56c5c40d | [
"MIT"
] | null | null | null | from netapp.netapp_object import NetAppObject
class VserverAggrInfo(NetAppObject):
"""
Assigned aggregate name and available size.
"""
_aggr_availsize = None
@property
def aggr_availsize(self):
"""
Assigned aggregate available size.
Attributes: non-creatable, non-modifiable
"""
return self._aggr_availsize
@aggr_availsize.setter
def aggr_availsize(self, val):
if val != None:
self.validate('aggr_availsize', val)
self._aggr_availsize = val
_aggr_name = None
@property
def aggr_name(self):
"""
Assigned aggregate name.
Attributes: non-creatable, modifiable
"""
return self._aggr_name
@aggr_name.setter
def aggr_name(self, val):
if val != None:
self.validate('aggr_name', val)
self._aggr_name = val
@staticmethod
def get_api_name():
return "vserver-aggr-info"
@staticmethod
def get_desired_attrs():
return [
'aggr-availsize',
'aggr-name',
]
def describe_properties(self):
return {
'aggr_availsize': { 'class': int, 'is_list': False, 'required': 'optional' },
'aggr_name': { 'class': basestring, 'is_list': False, 'required': 'optional' },
}
| 26.230769 | 91 | 0.577713 | 137 | 1,364 | 5.532847 | 0.321168 | 0.154354 | 0.055409 | 0.050132 | 0.155673 | 0.084433 | 0.084433 | 0.084433 | 0 | 0 | 0 | 0 | 0.317449 | 1,364 | 51 | 92 | 26.745098 | 0.814178 | 0.134164 | 0 | 0.176471 | 0 | 0 | 0.12874 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205882 | false | 0 | 0.029412 | 0.088235 | 0.470588 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
736b9802fb2c5a179b409bf71bdd9ff72225db52 | 998 | py | Python | 13. REST API using OpenAPI, Flask & Connexions/source_code/test-api/src/test_api/core/pets.py | Edmartt/articles | 93d62086ff141f5646193afb868973e94f33f1e6 | [
"MIT"
] | 31 | 2020-03-01T20:27:03.000Z | 2022-02-15T14:53:09.000Z | 13. REST API using OpenAPI, Flask & Connexions/source_code/test-api/src/test_api/core/pets.py | hmajid2301/articles | 27f38cc6c2dd470d879b30d54d1e804a7d76caab | [
"MIT"
] | 24 | 2020-04-04T12:18:25.000Z | 2022-03-29T08:41:57.000Z | 13. REST API using OpenAPI, Flask & Connexions/source_code/test-api/src/test_api/core/pets.py | Edmartt/articles | 93d62086ff141f5646193afb868973e94f33f1e6 | [
"MIT"
] | 52 | 2020-02-29T04:01:10.000Z | 2022-03-11T07:54:16.000Z | import json
def get_all_pets():
pets = read_from_file()
pets_in_store = []
for k, v in pets.items():
current_pet = {"id": k, **v}
pets_in_store.append(current_pet)
    return pets_in_store
def remove_pet(id):
pets = read_from_file()
del pets[id]
write_to_file(pets)
def update_pet(id, pet):
pets = read_from_file()
pets[id] = {"name": pet.name, "breed": pet.breed, "price": pet.price}
write_to_file(pets)
def add_pet(pet):
pets = read_from_file()
    ids = [int(i) for i in pets.keys()]
    new_id = str(max(ids) + 1) if ids else "1"
pets[new_id] = {"name": pet.name, "breed": pet.breed, "price": pet.price}
write_to_file(pets)
def get_pet(id):
pets = read_from_file()
pet = pets[id]
pet["id"] = id
return pet
def write_to_file(content):
with open("./pets.json", "w") as pets:
pets.write(json.dumps(content))
def read_from_file():
with open("./pets.json", "r") as pets:
return json.loads(pets.read())
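# The id-assignment convention (keys are strings of increasing integers, so the
# dict round-trips through json unchanged) can be shown without touching the
# JSON file; a minimal in-memory sketch:

```python
# New pets get the numerically-largest existing id plus one, stored as a
# string key so the dict round-trips through json unchanged.
pets = {"1": {"name": "Rex"}, "2": {"name": "Mia"}}
ids = [int(i) for i in pets.keys()]
new_id = str(max(ids) + 1) if ids else "1"
pets[new_id] = {"name": "Taco"}
print(sorted(pets.keys()))  # ['1', '2', '3']
```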
| 19.96 | 77 | 0.603206 | 156 | 998 | 3.641026 | 0.269231 | 0.084507 | 0.126761 | 0.140845 | 0.411972 | 0.380282 | 0.306338 | 0.306338 | 0.200704 | 0.200704 | 0 | 0.002625 | 0.236473 | 998 | 49 | 78 | 20.367347 | 0.742782 | 0 | 0 | 0.294118 | 0 | 0 | 0.056112 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205882 | false | 0 | 0.029412 | 0 | 0.323529 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
736ef7d551671fb41b699b2055b5a873b3f9d021 | 13,229 | py | Python | IBMWatson_Examples/WatsonNLU.py | sptennak/TextAnalytics | dde30337dc4d769ce7fb31b6f3021721bcd0b056 | [
"Apache-2.0"
] | 4 | 2018-07-11T06:58:53.000Z | 2020-09-06T13:17:54.000Z | IBMWatson_Examples/WatsonNLU.py | sptennak/TextAnalytics | dde30337dc4d769ce7fb31b6f3021721bcd0b056 | [
"Apache-2.0"
] | null | null | null | IBMWatson_Examples/WatsonNLU.py | sptennak/TextAnalytics | dde30337dc4d769ce7fb31b6f3021721bcd0b056 | [
"Apache-2.0"
] | 1 | 2020-09-06T13:18:00.000Z | 2020-09-06T13:18:00.000Z | # -*- coding: utf-8 -*-
"""
Created on Fri May 18 22:15:35 2018
@author: Sumudu Tennakoon
References:
[1] https://www.ibm.com/watson/developercloud/natural-language-understanding/api/v1/
"""
from watson_developer_cloud import NaturalLanguageUnderstandingV1, WatsonException, WatsonApiException
from watson_developer_cloud.natural_language_understanding_v1 import Features, EntitiesOptions, KeywordsOptions, RelationsOptions
import pandas as pd
import numpy as np
from timeit import default_timer as timer
import multiprocessing
import sys
###############################################################################
def IAM_Auth(APIKey, Version='2018-03-16'):
ServiceAuthentication = NaturalLanguageUnderstandingV1(
version= Version,
iam_api_key= APIKey
)
ServiceAuthentication.set_url('https://gateway-fra.watsonplatform.net/natural-language-understanding/api')
#To prevent IBM from accessing user input and Watson responses... https://www.ibm.com/watson/developercloud/conversation/api/v1/python.html?python#data-collection
ServiceAuthentication.set_default_headers({'x-watson-learning-opt-out': "true"})
return ServiceAuthentication
def Basic_Auth(UserName, Password, Version='2018-03-16'):
ServiceAuthentication = NaturalLanguageUnderstandingV1(
version= Version,
username= UserName,
password= Password
)
ServiceAuthentication.set_url('https://gateway-fra.watsonplatform.net/natural-language-understanding/api')
#To prevent IBM from accessing user input and Watson responses... https://www.ibm.com/watson/developercloud/conversation/api/v1/python.html?python#data-collection
ServiceAuthentication.set_default_headers({'x-watson-learning-opt-out': "true"})
return ServiceAuthentication
###############################################################################
def TextNLU(ServiceAuthentication, TextID, Text, ModelID=None, Emotion=False, Sentiment=False, Mentions =False, EntityLimit=50, TextLimit=50000, ReturnText=True):
Notes = ''
try:
Response = ServiceAuthentication.analyze(
text=Text,
features=Features(
relations=RelationsOptions(
model = ModelID,
),
entities=EntitiesOptions(
emotion=Emotion,
sentiment=Sentiment,
mentions=Mentions,
model = ModelID,
limit=EntityLimit
),
),
limit_text_characters = TextLimit, #https://console.bluemix.net/docs/services/natural-language-understanding/usage-limits.html#usage-limits
return_analyzed_text=ReturnText
)
Notes='RECIEVED'
except:
EXP = sys.exc_info()
Notes = str(EXP[0])+'['+''.join(EXP[1].args)+']'
Notes = 'NLU:'+Notes
# Process Response Header
WatsonResponseHeader = pd.DataFrame({'TextID':[TextID]})
try:
WatsonResponseHeader['language'] = Response['language']
WatsonResponseHeader['text_characters'] = Response['usage']['text_characters'] #Number of characters processed
WatsonResponseHeader['text_units'] = Response['usage']['text_units'] #Number of characters processed
WatsonResponseHeader['features'] = Response['usage']['features'] #Number of features used, such as entities, sentiment, etc.
WatsonResponseHeader['entities'] = len(Response['entities'])
WatsonResponseHeader['analyzed_text'] = Response['analyzed_text']
except:
EXP = sys.exc_info()
Notes= Notes+ '\tHEADER:' + str(EXP[0])+'['+''.join(EXP[1].args)+']'
# Process Response Details
try:
if len(Response['entities']) != 0:
WatsonResponseDetail = pd.DataFrame(Response['entities'])
WatsonResponseDetail.insert(0, 'TextID', TextID)
if 'sentiment' in WatsonResponseDetail.columns:
Split= WatsonResponseDetail.sentiment.apply(pd.Series)
WatsonResponseDetail['sentiment_'+Split.columns]= Split
WatsonResponseDetail.drop('sentiment', axis=1, inplace=True)
else:
raise Exception('NO ENTITIES FOUND')
except:
EXP = sys.exc_info()
Notes= Notes+ '\tDETAIL:' + str(EXP[0])+'['+''.join(EXP[1].args)+']'
WatsonResponseDetail = pd.DataFrame()
WatsonResponseHeader['Notes'] = Notes
return WatsonResponseHeader, WatsonResponseDetail
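# The apply(pd.Series) step above splits a column of nested 'sentiment' dicts
# into prefixed scalar columns; the same flattening, sketched with plain dicts
# (field names are illustrative, no pandas required):

```python
# Flatten a nested 'sentiment' dict into prefixed top-level keys,
# mirroring the sentiment_* column split performed in TextNLU().
entity = {"text": "IBM", "type": "Company",
          "sentiment": {"score": 0.8, "label": "positive"}}

flat = {k: v for k, v in entity.items() if k != "sentiment"}
flat.update({"sentiment_" + k: v for k, v in entity["sentiment"].items()})
print(flat["sentiment_label"])  # positive
```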
###############################################################################
# GUI
###############################################################################
import tkinter as tk #(https://wiki.python.org/moin/TkInter)
from tkinter import filedialog
from tkinter import scrolledtext
import configparser #(https://docs.python.org/3.4/library/configparser.html)
import traceback
class ApplicationWindow(tk.Frame):
def __init__(self, master=None):
super().__init__(master)
self.pack()
self.UserName = tk.StringVar()
self.Password = tk.StringVar()
self.APIKey = tk.StringVar()
self.Version = tk.StringVar()
self.ModelID = tk.StringVar()
self.ConfigFile = tk.StringVar()
self.InputTextFile = tk.StringVar()
self.Input = tk.StringVar()
self.CreateWidgets()
def CreateWidgets(self):
#Menu
MenuBar = tk.Menu(self.master)
self.master.config(menu=MenuBar)
FileMenu = tk.Menu(MenuBar)
MenuBar.add_cascade(label='File', menu=FileMenu)
FileMenu.add_command(label='Load Config', command=None)
FileMenu.add_command(label='Save Config', command=self.SaveConfig)
FileMenu.add_command(label='Save Config As', command=self.SaveConfigAs)
FileMenu.add_command(label='Close', command=root.destroy)
HelpMenu = tk.Menu(MenuBar)
MenuBar.add_cascade(label='Help', menu=HelpMenu)
HelpMenu.add_command(label='About', command=None)
#Field
self.Btn_InputTextFile = tk.Button(self, text='Input Text File', fg='blue', command=self.OpenInputTextFile)
self.Ent_InputTextFile = tk.Entry(self, textvariable=self.InputTextFile)
self.Btn_ConfigFile = tk.Button(self, text='Config File', fg='blue', command=self.OpenConfigFile)
self.Ent_ConfigFile = tk.Entry(self, textvariable=self.ConfigFile)
self.Lbl_UserName = tk.Label(self, text='User Name')
self.Ent_UserName = tk.Entry(self, textvariable=self.UserName)
self.Lbl_Password = tk.Label(self, text='Password')
self.Ent_Password = tk.Entry(self, textvariable=self.Password)
self.Lbl_APIKey = tk.Label(self, text='APIKey')
self.Ent_APIKey = tk.Entry(self, textvariable=self.APIKey)
self.Lbl_Version = tk.Label(self, text='Version')
self.Ent_Version = tk.Entry(self, textvariable=self.Version)
self.Lbl_ModelID = tk.Label(self, text='Model ID')
self.Ent_ModelID = tk.Entry(self, textvariable=self.ModelID)
# Input Text
self.Txt_Input= scrolledtext.ScrolledText(self, height=15)
# Output Textbox
self.Txt_Output= scrolledtext.ScrolledText(self, height=15)
# Buttons
self.Btn_Start = tk.Button(self, text='START', fg='green', command=self.Start)
self.Btn_Close = tk.Button(self, text='CLOSE WINDOW', fg='red', command=root.destroy)
#######################################################################
# Pack Widgets
self.Btn_InputTextFile.grid(row=0,column=0, padx=10)
self.Ent_InputTextFile.grid(row=0,column=1, padx=10)
self.Btn_ConfigFile.grid(row=1,column=0, padx=10)
self.Ent_ConfigFile.grid(row=1,column=1, padx=10)
self.Lbl_Version.grid(row=2,column=0, padx=10)
self.Ent_Version.grid(row=2,column=1, padx=10)
self.Lbl_UserName.grid(row=3,column=0, padx=10)
self.Ent_UserName.grid(row=3,column=1, padx=10)
self.Lbl_Password.grid(row=4,column=0, padx=10)
self.Ent_Password.grid(row=4,column=1, padx=10)
self.Lbl_APIKey.grid(row=5,column=0, padx=10)
self.Ent_APIKey.grid(row=5,column=1, padx=10)
self.Lbl_ModelID.grid(row=6,column=0)
self.Ent_ModelID.grid(row=6,column=1, padx=10)
self.Btn_Start.grid(row=7,column=0, columnspan=2)
self.Btn_Close.grid(row=8,column=0, columnspan=2, pady=10)
self.Txt_Input.grid(row=0,column=2, rowspan=6, columnspan=2, padx=10, pady=10)
self.Txt_Input.insert(tk.END, 'Hello World')
self.Txt_Output.grid(row=6,column=2, rowspan=3, columnspan=2, padx=10, pady=10)
self.Txt_Output.insert(tk.END, '>')
#######################################################################
def Start(self):
try:
self.Version.set('2018-03-16') # default version; StringVar.set() returns None, so don't assign its result
TextID = 'GUI'
Text = self.Txt_Input.get(1.0,tk.END)
Version = self.Version.get()
ModelID = self.ModelID.get()
Emotion = True
Sentiment = True
UserName = self.UserName.get()
Password = self.Password.get()
APIKey = self.APIKey.get()
ServiceAuthentication = Basic_Auth(UserName, Password, Version)
WatsonResponseHeader, WatsonResponseDetail = TextNLU(ServiceAuthentication, TextID, Text, ModelID=None)#, Emotion=False, Sentiment=False, Mentions =False, =50, TextLimit=50000, ReturnText=True)
print('Application Started')
OutputText = '> Version:{}\n UserName:{} \n Password:{}\n APIKey:{}\n ModelID:{}\n\n'.format(self.Version.get(), self.UserName.get(), self.Password.get(), self.APIKey.get(), self.ModelID.get())
OutputText = OutputText + ' Text: {}\n\n'.format(Text) # Text still holds the input text here
self.Txt_Output.insert(tk.END, OutputText)
except Exception:
traceback.print_exc() # print_exc() already writes the traceback; printing its None return adds noise
def OpenInputTextFile(self):
try:
FileName = filedialog.askopenfilename(title = 'Select Input Text File',filetypes = (('Text Files','*.txt'), ('All files','*.*')))
if FileName!='':
self.Txt_Input.delete(1.0, tk.END)
self.InputTextFile.set(FileName)
with open(FileName, 'r') as inputfile:
Text = inputfile.read()
self.Txt_Input.insert(tk.END , Text)
self.Input.set(Text)
else:
pass
print(FileName)
except Exception:
traceback.print_exc()
def OpenConfigFile(self):
try:
config = configparser.ConfigParser()
FileName = filedialog.askopenfilename(title = 'Select Config File',filetypes = (('Config Files','*.cfg'),('Text Files','*.txt'), ('All files','*.*')))
if FileName!='':
self.ConfigFile.set(FileName)
config.read(FileName)
self.Version.set(config['DEFAULT']['version'])
self.UserName.set(config['DEFAULT']['username'])
self.Password.set(config['DEFAULT']['password'])
self.APIKey.set(config['DEFAULT']['apikey'])
self.ModelID.set(config['DEFAULT']['modelid'])
self.Txt_Output.insert(tk.END, 'Config File Loaded: {}\n>'.format(FileName))
else:
pass
except Exception:
traceback.print_exc()
def SaveConfig(self):
FileName = self.ConfigFile.get()
try:
if FileName != '':
config = configparser.ConfigParser()
config['DEFAULT'] = {'Version': self.Version.get(), 'UserName': self.UserName.get(), 'Password': self.Password.get(), 'APIKey': self.APIKey.get(), 'ModelID': self.ModelID.get()}
with open(FileName, 'w') as configfile:
config.write(configfile)
self.Txt_Output.insert(tk.END, 'Config File Saved: {}\n>'.format(FileName))
except Exception:
traceback.print_exc()
def SaveConfigAs(self):
try:
File = filedialog.asksaveasfile(mode='w',defaultextension=".cfg")
if File is None: # user cancelled the dialog; reading File.name here would raise AttributeError
pass
else:
FileName = File.name
config = configparser.ConfigParser()
config['DEFAULT'] = {'Version': self.Version.get(), 'UserName': self.UserName.get(), 'Password': self.Password.get(), 'APIKey': self.APIKey.get(), 'ModelID': self.ModelID.get()}
config.write(File)
File.close()
self.Txt_Output.insert(tk.END, 'Config File Saved As: {}\n>'.format(FileName))
except Exception:
traceback.print_exc()
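SaveConfig, SaveConfigAs and OpenConfigFile all serialize the same DEFAULT mapping through configparser. The round-trip can be checked without any tkinter widgets (the values below are demo placeholders):

```python
import configparser
import io

# Write the DEFAULT section to an in-memory buffer and read it back,
# mirroring SaveConfig/OpenConfigFile without a GUI.
config = configparser.ConfigParser()
config['DEFAULT'] = {'Version': '2018-03-16', 'UserName': 'demo', 'ModelID': ''}
buffer = io.StringIO()
config.write(buffer)

restored = configparser.ConfigParser()
restored.read_string(buffer.getvalue())
print(restored['DEFAULT']['version'])  # option names are lower-cased by configparser
```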
root = tk.Tk()
AppWindow = ApplicationWindow(master=root)
AppWindow.master.title('IBM Watson Natural Language Processing')
#AppWindow.master.maxsize(1024, 768)
AppWindow.mainloop()
| 45.150171 | 205 | 0.595434 | 1,391 | 13,229 | 5.59238 | 0.196981 | 0.016197 | 0.016712 | 0.020697 | 0.423191 | 0.310066 | 0.242448 | 0.213266 | 0.160818 | 0.150791 | 0 | 0.016349 | 0.246353 | 13,229 | 292 | 206 | 45.304795 | 0.763892 | 0.079447 | 0 | 0.24 | 0 | 0 | 0.097776 | 0.004277 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044444 | false | 0.071111 | 0.053333 | 0 | 0.115556 | 0.031111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
737c8fcb95ea540c79cfba48d2fa31a9bd9f57a9 | 1,227 | py | Python | src/main/fileextractors/fileextractor.py | michael-stanin/Subtitles-Distributor | e4638d952235f96276729239596dc31d9ccc2ee1 | [
"MIT"
] | 1 | 2017-06-03T19:42:05.000Z | 2017-06-03T19:42:05.000Z | src/main/fileextractors/fileextractor.py | michael-stanin/Subtitles-Distributor | e4638d952235f96276729239596dc31d9ccc2ee1 | [
"MIT"
] | null | null | null | src/main/fileextractors/fileextractor.py | michael-stanin/Subtitles-Distributor | e4638d952235f96276729239596dc31d9ccc2ee1 | [
"MIT"
] | null | null | null | import logging
from main.fileextractors.compressedfile import get_compressed_file
from main.utilities.fileutils import dir_path
from main.utilities.subtitlesadjuster import ArchiveAdjuster
class FileExtractor:
def __init__(self, subname, movfile):
self.sn, self.mn = subname, movfile
self.subzip = get_compressed_file(self.sn)
self.log = logging.getLogger(__name__)
def run(self):
if self.subzip:
return self._extractfile() and self._adjust_subs()
return False
def _adjust_subs(self):
return ArchiveAdjuster(self.subzip, self.sn, self.mn).adjust()
def _extractfile(self):
self.log.info("Start extracting %s to: %s", self.sn, dir_path(self.mn))
extracted = self._extract_subtitles_to_movie_dir()
self.log.info("End extracting %s to: %s - with result %s", self.sn, dir_path(self.mn), repr(extracted))
return extracted
def _extract_subtitles_to_movie_dir(self):
extracted = False
try:
self.subzip.accessor.extractall(dir_path(self.mn))
extracted = True
except Exception as e:
self.log.exception("Failed to extract: %s", e)
return extracted
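`_extract_subtitles_to_movie_dir` wraps `ZipFile.extractall` in a guarded success flag. The same pattern can be exercised standalone against a throwaway archive (all paths here are temporary examples):

```python
import os
import tempfile
import zipfile

# Build a small archive, then extract it into a target directory,
# mirroring the guarded extractall pattern above.
with tempfile.TemporaryDirectory() as workdir:
    archive = os.path.join(workdir, 'subs.zip')
    with zipfile.ZipFile(archive, 'w') as zf:
        zf.writestr('movie.srt', 'subtitle text')

    target = os.path.join(workdir, 'movies')
    extracted = False
    try:
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(target)
        extracted = True
    except Exception as exc:
        print('failed to extract:', exc)

    print(extracted, sorted(os.listdir(target)))  # True ['movie.srt']
```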
| 35.057143 | 111 | 0.673187 | 155 | 1,227 | 5.122581 | 0.374194 | 0.037783 | 0.037783 | 0.049118 | 0.164987 | 0.125945 | 0.050378 | 0 | 0 | 0 | 0 | 0 | 0.231459 | 1,227 | 34 | 112 | 36.088235 | 0.841994 | 0 | 0 | 0.071429 | 0 | 0 | 0.07172 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.178571 | false | 0 | 0.142857 | 0.035714 | 0.535714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
7382da4a97a03a9bab8ad1771db18f2352be8d95 | 5,518 | py | Python | SDis_Self-Training/plotting/createScatterPlot.py | mgeorgati/DasymetricMapping | d87b97a076cca3e03286c6b27b118904e03315c0 | [
"BSD-3-Clause"
] | null | null | null | SDis_Self-Training/plotting/createScatterPlot.py | mgeorgati/DasymetricMapping | d87b97a076cca3e03286c6b27b118904e03315c0 | [
"BSD-3-Clause"
] | null | null | null | SDis_Self-Training/plotting/createScatterPlot.py | mgeorgati/DasymetricMapping | d87b97a076cca3e03286c6b27b118904e03315c0 | [
"BSD-3-Clause"
] | null | null | null | import sys, os, seaborn as sns, rasterio, pandas as pd
import numpy as np
import matplotlib.pyplot as plt
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from config.definitions import ROOT_DIR, ancillary_path, city,year
attr_value = "totalpop"
gtP = ROOT_DIR + "/Evaluation/{0}_groundTruth/{2}_{0}_{1}.tif".format(city,attr_value,year)
srcGT= rasterio.open(gtP)
popGT = srcGT.read(1)
print(popGT.min(),popGT.max(), popGT.mean())
#prP = ROOT_DIR + "/Evaluation/{0}/apcatbr/div_{0}_dissever01WIESMN_500_2018_ams_DasyA_apcatbr_p[1]_12AIL12_12IL_it10_ag_{1}.tif".format(city,attr_value)
def scatterplot(prP):
cp = "C:/Users/NM12LQ/OneDrive - Aalborg Universitet/PopNetV2_backup/data_prep/ams_ProjectData/temp_tif/ams_CLC_2012_2018Reclas3.tif"
srcC= rasterio.open(cp)
corine = srcC.read(1)
name = prP.split(".tif")[0].split("/")[-1]
print(name)
gtP = ROOT_DIR + "/Evaluation/{0}_groundTruth/{2}_{0}_{1}.tif".format(city,attr_value,year)
srcGT= rasterio.open(gtP)
popGT = srcGT.read(1)
print(popGT.min(),popGT.max(), popGT.mean())
srcPR= rasterio.open(prP)
popPR = srcPR.read(1)
popPR[popPR <= -9999] = 0 # clear nodata cells with a boolean mask before computing stats
print(popPR.min(),popPR.max(), popPR.mean())
cr=corine.flatten()
x=popGT.flatten()
y=popPR.flatten()
df = pd.DataFrame(data={"gt": x, "predictions":y, "cr":cr})
plt.figure(figsize=(20,20))
g= sns.lmplot(data=df, x="gt", y="predictions", hue="cr", palette=["#0d2dc1","#ff9c1c","#71b951","#24f33d","#90308f", "#a8a8a8"],ci = None, order=2, scatter_kws={"s":0.5, "alpha": 0.5}, line_kws={"lw":2, "alpha": 0.5}, legend=False)
plt.legend(title= "Land Cover", labels= ['Water','Urban Fabric', 'Agriculture', 'Green Spaces','Industry','Transportation' ], loc='lower right', fontsize=5)
plt.title('{0}'.format( name), fontsize=11)
# Set x-axis label
plt.xlabel('Ground Truth (persons)', fontsize=11)
# Set y-axis label
plt.ylabel('Predictions (persons)', fontsize=11)
#total pop
#plt.xscale('log')
#plt.yscale('log')
#mobile Adults
#plt.xlim((0,200))
#plt.ylim((-100,500))
plt.axis('square')
plt.xlim((0,400))
plt.ylim((0,350))
plt.tight_layout()
#plt.show()
plt.savefig(ROOT_DIR + "/Evaluation/{0}/ScatterPlots/SP4_{2}.png".format(city,attr_value, name),format='png',dpi=300)
evalFiles = [#gtP,
#ROOT_DIR + "/Evaluation/{0}/aprf/dissever00/{0}_dissever00WIESMN_2018_ams_Dasy_aprf_p[1]_12AIL12_1IL_it10_{1}.tif".format(city,attr_value),
#ROOT_DIR + "/Evaluation/{0}/aprf/dissever01/{0}_dissever01WIESMN_100_2018_ams_DasyA_aprf_p[1]_12AIL12_13IL_it10_{1}.tif".format(city,attr_value),
#ROOT_DIR + "/Evaluation/{0}/apcatbr/{0}_dissever01WIESMN_100_2018_ams_DasyA_apcatbr_p[1]_12AIL12_12IL_it10_ag_{1}.tif".format(city,attr_value),
#ROOT_DIR + "/Evaluation/{0}/apcatbr/{0}_dissever01WIESMN_250_2018_ams_DasyA_apcatbr_p[1]_12AIL12_12IL_it10_ag_{1}.tif".format(city,attr_value),
ROOT_DIR + "/Evaluation/{0}/apcatbr/{0}_dissever01WIESMN_500_2018_ams_DasyA_apcatbr_p[1]_12AIL12_12IL_it10_ag_{1}.tif".format(city,attr_value),
]
evalFilesMAEbp = [ROOT_DIR + "/Evaluation/{0}/Pycno/mae_{0}_{2}_{0}_{1}_pycno.tif".format(city,attr_value,year),
ROOT_DIR + "/Evaluation/{0}/Dasy/mae_{0}_{2}_{0}_{1}_dasyWIESMN.tif".format(city,attr_value,year),
ROOT_DIR + "/Evaluation/{0}/aprf/dissever00/mae_{0}_dissever00WIESMN_2018_ams_Dasy_aprf_p[1]_12AIL12_1IL_it10_{1}.tif".format(city,attr_value),
ROOT_DIR + "/Evaluation/{0}/aprf/dissever01/mae_{0}_dissever01WIESMN_100_2018_ams_DasyA_aprf_p[1]_12AIL12_13IL_it10_{1}.tif".format(city,attr_value),
ROOT_DIR + "/Evaluation/{0}/apcatbr/mae_{0}_dissever01WIESMN_100_2018_ams_DasyA_apcatbr_p[1]_12AIL12_12IL_it10_ag_{1}.tif".format(city,attr_value),
ROOT_DIR + "/Evaluation/{0}/apcatbr/mae_{0}_dissever01WIESMN_250_2018_ams_DasyA_apcatbr_p[1]_12AIL12_12IL_it10_ag_{1}.tif".format(city,attr_value),
ROOT_DIR + "/Evaluation/{0}/apcatbr/mae_{0}_dissever01WIESMN_500_2018_ams_DasyA_apcatbr_p[1]_12AIL12_12IL_it10_ag_{1}.tif".format(city,attr_value),
ROOT_DIR + "/Evaluation/{0}/apcatbr/mae_{0}_dissever01WIESMN_250_2018_ams_DasyA_apcatbr_p[1]_3AIL5_12IL_it10_ag_{1}.tif".format(city,attr_value)]
evalFilesPEbp = [ROOT_DIR + "/Evaluation/{0}/Pycno/div_{0}_{2}_{0}_{1}_pycno.tif".format(city,attr_value,year),
ROOT_DIR + "/Evaluation/{0}/Dasy/div_{0}_{2}_{0}_{1}_dasyWIESMN.tif".format(city,attr_value,year),
ROOT_DIR + "/Evaluation/{0}/aprf/dissever00/div_{0}_dissever00WIESMN_2018_ams_Dasy_aprf_p[1]_12AIL12_1IL_it10_{1}.tif".format(city,attr_value),
ROOT_DIR + "/Evaluation/{0}/aprf/dissever01/div_{0}_dissever01WIESMN_100_2018_ams_DasyA_aprf_p[1]_12AIL12_13IL_it10_{1}.tif".format(city,attr_value),
ROOT_DIR + "/Evaluation/{0}/apcatbr/div_{0}_dissever01WIESMN_100_2018_ams_DasyA_apcatbr_p[1]_12AIL12_12IL_it10_ag_{1}.tif".format(city,attr_value),
ROOT_DIR + "/Evaluation/{0}/apcatbr/div_{0}_dissever01WIESMN_250_2018_ams_DasyA_apcatbr_p[1]_12AIL12_12IL_it10_ag_{1}.tif".format(city,attr_value),
ROOT_DIR + "/Evaluation/{0}/apcatbr/div_{0}_dissever01WIESMN_500_2018_ams_DasyA_apcatbr_p[1]_12AIL12_12IL_it10_ag_{1}.tif".format(city,attr_value)]
for i in evalFiles:
scatterplot(i)
| 66.481928 | 237 | 0.702791 | 826 | 5,518 | 4.33293 | 0.223971 | 0.048896 | 0.113998 | 0.120704 | 0.659681 | 0.633697 | 0.624756 | 0.623079 | 0.623079 | 0.613859 | 0 | 0.098313 | 0.129938 | 5,518 | 82 | 238 | 67.292683 | 0.647157 | 0.156941 | 0 | 0.135593 | 0 | 0.152542 | 0.430882 | 0.377615 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016949 | false | 0 | 0.067797 | 0 | 0.084746 | 0.067797 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7393a024a0f2a49dd9e4ca3dcf823461e29e512f | 885 | py | Python | controllers/editor.py | matumaros/BomberApe | d71616192fd54d9a595261c258e4c7367d2eac5d | [
"Apache-2.0"
] | null | null | null | controllers/editor.py | matumaros/BomberApe | d71616192fd54d9a595261c258e4c7367d2eac5d | [
"Apache-2.0"
] | null | null | null | controllers/editor.py | matumaros/BomberApe | d71616192fd54d9a595261c258e4c7367d2eac5d | [
"Apache-2.0"
] | null | null | null |
from models.tilemap import TileMap
class EditorController:
def __init__(self, view):
self.view = view
self.tilemap = TileMap()
def place_tile(self, coord, ttype):
self.tilemap.add_tile(coord, ttype)
self.view.board.update_tiles({coord: ttype})
def place_spawn(self, coord):
self.tilemap.add_spawn(coord)
self.view.board.update_spawns({coord: 'None'})
def get_tiles(self):
layers = self.tilemap.layers
tiles = layers['ground'].copy()
tiles.update(layers['util'])
tiles.update(layers['powerup'])
tiles.update(layers['wall'])
return tiles
def save(self):
self.tilemap.save()
def load(self, map_path):
self.tilemap.load(map_path)
self.view.board.update_tiles(self.get_tiles())
self.view.board.update_spawns(self.tilemap.spawns)
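`get_tiles` merges the layers with `dict.update`, so later layers win whenever two layers occupy the same coordinate. A minimal sketch of that precedence with made-up layer contents:

```python
# Later layers overwrite earlier ones at the same coordinate,
# matching the ground -> util -> powerup -> wall order in get_tiles.
layers = {
    'ground': {(0, 0): 'grass', (1, 0): 'grass'},
    'util': {},
    'powerup': {(1, 0): 'bomb_up'},
    'wall': {(0, 0): 'brick'},
}

tiles = layers['ground'].copy()
tiles.update(layers['util'])
tiles.update(layers['powerup'])
tiles.update(layers['wall'])
print(tiles)  # {(0, 0): 'brick', (1, 0): 'bomb_up'}
```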
| 26.029412 | 58 | 0.632768 | 111 | 885 | 4.900901 | 0.297297 | 0.141544 | 0.095588 | 0.139706 | 0.180147 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.239548 | 885 | 33 | 59 | 26.818182 | 0.808321 | 0 | 0 | 0 | 0 | 0 | 0.028313 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.041667 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
739ba1a424b3444916622cc94f3e8ea065012ebc | 13,648 | py | Python | perma_web/perma/forms.py | leppert/perma | adb0cec29679c3d161d72330e19114f89f8c42ac | [
"MIT",
"Unlicense"
] | null | null | null | perma_web/perma/forms.py | leppert/perma | adb0cec29679c3d161d72330e19114f89f8c42ac | [
"MIT",
"Unlicense"
] | null | null | null | perma_web/perma/forms.py | leppert/perma | adb0cec29679c3d161d72330e19114f89f8c42ac | [
"MIT",
"Unlicense"
] | null | null | null | import logging
from django import forms
from django.forms import ModelForm
from django.forms.widgets import flatatt
from django.utils.html import mark_safe
from perma.models import Registrar, Organization, LinkUser
logger = logging.getLogger(__name__)
class RegistrarForm(ModelForm):
class Meta:
model = Registrar
fields = ['name', 'email', 'website']
class OrganizationWithRegistrarForm(ModelForm):
registrar = forms.ModelChoiceField(queryset=Registrar.objects.all().order_by('name'), empty_label=None)
class Meta:
model = Organization
fields = ['name', 'registrar']
class OrganizationForm(ModelForm):
class Meta:
model = Organization
fields = ['name']
class CreateUserForm(forms.ModelForm):
"""
stripped down user reg form
This is mostly a django.contrib.auth.forms.UserCreationForm
"""
class Meta:
model = LinkUser
fields = ["first_name", "last_name", "email"]
error_messages = {
'duplicate_email': "A user with that email address already exists.",
}
email = forms.EmailField()
def clean_email(self):
# Since User.email is unique, this check is redundant,
# but it sets a nicer error message than the ORM.
email = self.cleaned_data["email"]
try:
LinkUser.objects.get(email=email)
except LinkUser.DoesNotExist:
return email
raise forms.ValidationError(self.error_messages['duplicate_email'])
class CreateUserFormWithRegistrar(CreateUserForm):
"""
add registrar to the create user form
"""
registrar = forms.ModelChoiceField(queryset=Registrar.objects.all().order_by('name'), empty_label=None)
class Meta:
model = LinkUser
fields = ["first_name", "last_name", "email", "registrar"]
def clean_registrar(self):
registrar = self.cleaned_data["registrar"]
return registrar
class CreateUserFormWithCourt(CreateUserForm):
"""
add court to the create user form
"""
requested_account_note = forms.CharField(required=True)
class Meta:
model = LinkUser
fields = ["first_name", "last_name", "email", "requested_account_note"]
def __init__(self, *args, **kwargs):
super(CreateUserFormWithCourt, self).__init__(*args, **kwargs)
self.fields['requested_account_note'].label = "Your court"
self.fields['first_name'].label = "Your first name"
self.fields['last_name'].label = "Your last name"
self.fields['email'].label = "Your email"
class CreateUserFormWithUniversity(CreateUserForm):
"""
add court to the create user form
"""
requested_account_note = forms.CharField(required=True)
class Meta:
model = LinkUser
fields = ["first_name", "last_name", "email", "requested_account_note"]
def __init__(self, *args, **kwargs):
super(CreateUserFormWithUniversity, self).__init__(*args, **kwargs)
self.fields['requested_account_note'].label = "Your university"
class CustomSelectSingleAsList(forms.SelectMultiple):
# Thank you, http://stackoverflow.com/a/14971139
def render(self, name, value, attrs=None, choices=()):
if value is None: value = []
final_attrs = self.build_attrs(attrs, name=name)
output = [u'<select %s>' % flatatt(final_attrs)] # NOTE removed the multiple attribute
options = self.render_options(choices, value)
if options:
output.append(options)
output.append('</select>')
return mark_safe(u'\n'.join(output))
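The override above renders a multi-select's options inside a plain `<select>` with no `multiple` attribute. The markup shape can be sketched without Django — `flatatt` is approximated here by a simple attribute join, and the option HTML is a hypothetical pre-rendered string:

```python
# Approximate Django's flatatt with a plain attribute join and wrap
# pre-rendered options in a single (non-multiple) select element.
def render_select(name, attrs, options_html):
    attrs = dict(attrs, name=name)
    flat = ''.join(' {}="{}"'.format(k, v) for k, v in attrs.items())
    return '<select{}>\n{}\n</select>'.format(flat, options_html)

html = render_select('orgs', {'id': 'id_orgs'}, '<option value="1">A</option>')
print(html)
```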
class CreateUserFormWithOrganization(CreateUserForm):
"""
add organization selection to the create user form
"""
def __init__(self, *args, **kwargs):
registrar_id = False
org_member_id = False
if 'registrar_id' in kwargs:
registrar_id = kwargs.pop('registrar_id')
if 'org_member_id' in kwargs:
org_member_id = kwargs.pop('org_member_id')
super(CreateUserFormWithOrganization, self).__init__(*args, **kwargs)
if registrar_id:
self.fields['organizations'].queryset = Organization.objects.filter(registrar_id=registrar_id).order_by('name')
elif org_member_id:
user = LinkUser.objects.get(id=org_member_id)
self.fields['organizations'].queryset = user.organizations.all()
else:
self.fields['organizations'].queryset = Organization.objects.all().order_by('name')
class Meta:
model = LinkUser
fields = ["first_name", "last_name", "email", "organizations"]
organizations = forms.ModelMultipleChoiceField(queryset=Organization.objects.all().order_by('name'),label="Organization", widget=CustomSelectSingleAsList)
def clean_organization(self):
organizations = self.cleaned_data["organizations"]
return organizations
class UserFormEdit(forms.ModelForm):
"""
stripped down user reg form
This is mostly a django.contrib.auth.forms.UserCreationForm
This is the edit form, so we strip it down even more
"""
error_messages = {
}
email = forms.EmailField()
class Meta:
model = LinkUser
fields = ["first_name", "last_name", "email"]
class RegistrarMemberFormEdit(UserFormEdit):
"""
stripped down user reg form
This is mostly a django.contrib.auth.forms.UserCreationForm
This is the edit form, so we strip it down even more
"""
registrar = forms.ModelChoiceField(queryset=Registrar.objects.all().order_by('name'), empty_label=None)
class Meta:
model = LinkUser
fields = ["first_name", "last_name", "email", "registrar"]
class OrganizationMemberWithOrganizationFormEdit(forms.ModelForm):
"""
stripped down user reg form
This is mostly a django.contrib.auth.forms.UserCreationForm
This is stripped down even further to match our editing needs
"""
def __init__(self, *args, **kwargs):
registrar_id = False
if 'registrar_id' in kwargs:
registrar_id = kwargs.pop('registrar_id')
super(OrganizationMemberWithOrganizationFormEdit, self).__init__(*args, **kwargs)
if registrar_id:
self.fields['organizations'].queryset = Organization.objects.filter(registrar_id=registrar_id).order_by('name')
class Meta:
model = LinkUser
fields = ["organizations"]
org = forms.ModelMultipleChoiceField(queryset=Organization.objects.all().order_by('name'),label="Organization", required=False,)
class OrganizationMemberWithOrganizationOrgAsOrganizationMemberFormEdit(forms.ModelForm):
"""
TODO: this form has a gross name. rename it.
"""
def __init__(self, *args, **kwargs):
organization_user_id = False
if 'organization_user_id' in kwargs:
organization_user_id = kwargs.pop('organization_user_id')
super(OrganizationMemberWithOrganizationOrgAsOrganizationMemberFormEdit, self).__init__(*args, **kwargs)
if organization_user_id:
editing_user = LinkUser.objects.get(pk=organization_user_id)
self.fields['organizations'].queryset = editing_user.organizations.all().order_by('name')
class Meta:
model = LinkUser
fields = ["organizations"]
org = forms.ModelMultipleChoiceField(queryset=Organization.objects.all().order_by('name'),label="Organization", required=False,)
class OrganizationMemberWithGroupFormEdit(UserFormEdit):
"""
stripped down user reg form
This is mostly a django.contrib.auth.forms.UserCreationForm
This is stripped down even further to match our editing needs
"""
def __init__(self, *args, **kwargs):
registrar_id = False
if 'registrar_id' in kwargs:
registrar_id = kwargs.pop('registrar_id')
super(OrganizationMemberWithGroupFormEdit, self).__init__(*args, **kwargs)
if registrar_id:
self.fields['organizations'].queryset = Organization.objects.filter(registrar_id=registrar_id).order_by('name')
class Meta:
model = LinkUser
fields = ("first_name", "last_name", "email", "organizations",)
org = forms.ModelChoiceField(queryset=Organization.objects.all().order_by('name'), empty_label=None, label="Organization", required=False,)
class UserAddRegistrarForm(forms.ModelForm):
"""
stripped down user reg form
This is mostly a django.contrib.auth.forms.UserCreationForm
This is stripped down even further to match our editing needs
"""
class Meta:
model = LinkUser
fields = ("registrar",)
registrar = forms.ModelChoiceField(queryset=Registrar.objects.all().order_by('name'), empty_label=None)
class UserAddOrganizationForm(forms.ModelForm):
"""
add an org when a regular user is promoted to an org user
"""
def __init__(self, *args, **kwargs):
registrar_id = False
org_member_id = False
target_user_id = False
if 'registrar_id' in kwargs:
registrar_id = kwargs.pop('registrar_id')
if 'org_member_id' in kwargs:
org_member_id = kwargs.pop('org_member_id')
if 'target_user_id' in kwargs:
target_user_id = kwargs.pop('target_user_id')
super(UserAddOrganizationForm, self).__init__(*args, **kwargs)
target_user = LinkUser.objects.get(pk=target_user_id)
# Registrars can only edit their own organization members
if registrar_id:
# Get the orgs the logged in user admins. Exclude the ones
# the target user is already in
orgs = Organization.objects.filter(registrar_id=registrar_id).exclude(pk__in=target_user.organizations.all())
elif org_member_id:
# Get the orgs the logged in user admins. Exclude the ones
# the target user is already in
org_member = LinkUser.objects.get(pk=org_member_id)
orgs = org_member.organizations.all().exclude(pk__in=target_user.organizations.all())
else:
# Must be registry member.
orgs = Organization.objects.all().exclude(pk__in=target_user.organizations.all())
self.fields['organizations'] = forms.ModelMultipleChoiceField(queryset=orgs.order_by('name'), label="Organization", widget=CustomSelectSingleAsList)
class Meta:
model = LinkUser
fields = ("organizations",)
def save(self, commit=True):
user = super(UserAddOrganizationForm, self).save(commit=False)
if commit:
user.save()
return user
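Each branch in `UserAddOrganizationForm.__init__` above reduces to the same operation: "candidate organizations, minus the organizations the target user already belongs to." The queryset `exclude(pk__in=...)` is a set difference; a plain-Python view of it, with invented org names:

```python
# Set-difference view of Organization.objects.filter(...).exclude(pk__in=...):
# the candidate pool depends on who is editing, the exclusion never changes.
registrar_orgs = {'lib-a', 'lib-b', 'lib-c'}   # orgs the editor administers
target_user_orgs = {'lib-b'}                   # orgs the target user already has

selectable = sorted(registrar_orgs - target_user_orgs)
print(selectable)  # ['lib-a', 'lib-c']
```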
class UserRegForm(forms.ModelForm):
"""
stripped down user reg form
This is mostly a django.contrib.auth.forms.UserCreationForm
"""
error_messages = {
'duplicate_email': "A user with that email address already exists.",
}
email = forms.EmailField()
#password = forms.CharField(label="Password", widget=forms.PasswordInput)
class Meta:
model = LinkUser
fields = ("email", "first_name", "last_name")
def clean_email(self):
# Since User.email is unique, this check is redundant,
# but it sets a nicer error message than the ORM.
email = self.cleaned_data["email"]
try:
LinkUser.objects.get(email=email)
except LinkUser.DoesNotExist:
return email
raise forms.ValidationError(self.error_messages['duplicate_email'])
class UserFormSelfEdit(forms.ModelForm):
"""
stripped down user reg form
This is mostly a django.contrib.auth.forms.UserCreationForm
This is stripped down even further to match our editing needs
"""
class Meta:
model = LinkUser
fields = ("first_name", "last_name", "email")
email = forms.EmailField()
class SetPasswordForm(forms.Form):
"""
A form that lets a user change set his/her password without entering the
old password
"""
error_messages = {
'password_mismatch': "The two password fields didn't match.",
}
new_password1 = forms.CharField(label="New password",
widget=forms.PasswordInput)
new_password2 = forms.CharField(label="New password confirmation",
widget=forms.PasswordInput)
def __init__(self, user, *args, **kwargs):
self.user = user
super(SetPasswordForm, self).__init__(*args, **kwargs)
def clean_new_password2(self):
password1 = self.cleaned_data.get('new_password1')
password2 = self.cleaned_data.get('new_password2')
if password1 and password2:
if password1 != password2:
logger.debug('mismatch')
raise forms.ValidationError(self.error_messages['password_mismatch'])
return password2
def save(self, commit=True):
self.user.set_password(self.cleaned_data['new_password1'])
if commit:
self.user.save()
return self.user
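The mismatch check in `clean_new_password2` above, isolated from Django (the error text is copied from the form's `error_messages`; `ValueError` stands in for `forms.ValidationError`):

```python
# Standalone version of clean_new_password2: if both fields are filled
# and differ, raise with the form's mismatch message.
def check_passwords(password1, password2):
    if password1 and password2 and password1 != password2:
        raise ValueError("The two password fields didn't match.")
    return password2

print(check_passwords('s3cret', 's3cret'))  # s3cret
```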
class UploadFileForm(forms.Form):
title = forms.CharField(required=True)
url = forms.URLField(required=True)
file = forms.FileField(required=True)
class ContactForm(forms.Form):
"""
The form we use on the contact page. Just an email (optional)
and a message
"""
email = forms.EmailField()
message = forms.CharField(widget=forms.Textarea)
| 31.81352 | 158 | 0.656726 | 1,502 | 13,648 | 5.803595 | 0.150466 | 0.035333 | 0.027303 | 0.035333 | 0.605369 | 0.561661 | 0.542847 | 0.513135 | 0.499943 | 0.499254 | 0 | 0.002029 | 0.241574 | 13,648 | 428 | 159 | 31.88785 | 0.840112 | 0.156433 | 0 | 0.50655 | 0 | 0 | 0.118261 | 0.007854 | 0 | 0 | 0 | 0.002336 | 0 | 1 | 0.069869 | false | 0.065502 | 0.026201 | 0 | 0.406114 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
739baac2ff5ef50ecd5e6693fbb6afb0bb494d6a | 5,403 | py | Python | samples/sample-2.py | shoriwe/LVaED | 68ca38eed2b4c2b1b7a6a8304c8effbcf2f977f7 | [
"MIT"
] | null | null | null | samples/sample-2.py | shoriwe/LVaED | 68ca38eed2b4c2b1b7a6a8304c8effbcf2f977f7 | [
"MIT"
] | 19 | 2021-02-08T22:14:16.000Z | 2021-03-03T15:13:07.000Z | samples/sample-2.py | shoriwe/LVaED | 68ca38eed2b4c2b1b7a6a8304c8effbcf2f977f7 | [
"MIT"
] | 3 | 2021-08-30T01:06:32.000Z | 2022-02-21T03:22:28.000Z | import io
import os
import re
import zipfile
import flask
import markdown
import blueprints.example
import blueprints.home
import blueprints.presentation
import blueprints.transformations
class Zipper(object):
def __init__(self):
self._content = None
self._content_handler = io.BytesIO()
def append(self, filename: str, content: bytes):
zip_file = zipfile.ZipFile(self._content_handler, "a", zipfile.ZIP_DEFLATED, False)
zip_file.writestr(filename, content)
for file in zip_file.filelist:
file.create_system = 0
zip_file.close()
self._content_handler.seek(0)
self._content = self._content_handler.read()
def append_directory(self, path: str):
for directory_path, directories, files in os.walk(path):
for file in files:
file_path = os.path.join(directory_path, file)
with open(file_path, "rb") as file_object:
self.append(file_path, file_object.read())
self._content_handler.seek(0)
self._content = self._content_handler.read()
def content(self) -> bytes:
return self._content
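`Zipper` builds its archive in memory by reopening the same `BytesIO` in append mode for every file. The core of that pattern, standalone with throwaway filenames:

```python
import io
import zipfile

# Append files to an in-memory archive the way Zipper.append does:
# reopen the same BytesIO in 'a' mode once per file. ZipFile does not
# close a file object it was handed, so the buffer stays usable.
buffer = io.BytesIO()
for filename, payload in [('a.txt', b'alpha'), ('b.txt', b'beta')]:
    with zipfile.ZipFile(buffer, 'a', zipfile.ZIP_DEFLATED, False) as zf:
        zf.writestr(filename, payload)

buffer.seek(0)
with zipfile.ZipFile(buffer) as zf:
    names = zf.namelist()
print(names)  # ['a.txt', 'b.txt']
```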
def pygmentize(raw_markdown: str) -> str:
languages = re.findall(re.compile("(?<=^```)\\w+$", re.M), raw_markdown)
last_index = 0
for language in languages:
list_markdown = raw_markdown.split("\n")
code_block_start_index = list_markdown.index(f"```{language}", last_index)
code_block_end_index = list_markdown.index("```", code_block_start_index)
for index in range(code_block_start_index + 1, code_block_end_index):
list_markdown[index] = f"\t{list_markdown[index]}"
list_markdown[code_block_start_index] = "\t" + list_markdown[code_block_start_index].replace("```", ":::")
list_markdown[code_block_end_index] = "\n"
raw_markdown = "\n".join(list_markdown)
last_index = code_block_end_index
return raw_markdown
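The rewrite above turns fenced code blocks into codehilite's tab-indented `:::` syntax. A compact standalone run of the same transformation on a toy document (single pass, so the `last_index` bookkeeping is omitted):

```python
import re

# Same transformation as pygmentize above on a toy document:
# a fenced python block becomes a tab-indented :::python block.
FENCE = chr(96) * 3  # the literal backtick fence, kept out of the source text
raw = "intro\n" + FENCE + "python\nprint('hi')\n" + FENCE + "\nafter"
for language in re.findall(re.compile("(?<=^" + FENCE + r")\w+$", re.M), raw):
    lines = raw.split("\n")
    start = lines.index(FENCE + language)
    end = lines.index(FENCE, start)
    for i in range(start + 1, end):
        lines[i] = "\t" + lines[i]
    lines[start] = "\t:::" + language
    lines[end] = ""
    raw = "\n".join(lines)
print(raw)
```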
def render_article(article_path: str) -> str:
with open(article_path) as file:
content = file.read()
html = markdown.markdown(pygmentize(content), extensions=["codehilite"])
return html
def zip_library(library_directory: str) -> Zipper:
z = Zipper()
z.append_directory(library_directory)
return z
def load_articles(app: flask.Flask):
app.config["articles"] = {
"list": render_article("markdown/articles/list.md"),
"stack": render_article("markdown/articles/stack.md"),
"queue": render_article("markdown/articles/queue.md")
}
def load_libraries(app: flask.Flask):
app.config["libraries"] = {
"all": zip_library("DataTypes").content(),
"c": zip_library("DataTypes/C").content(),
"java": zip_library("DataTypes/Java").content(),
"python": zip_library("DataTypes/Python").content()
}
def load_examples(app: flask.Flask):
app.config["examples"] = {}
app.config["examples"]["c"] = {
"simple_list": render_article("markdown/examples/c/simple_list.md"),
"double_list": render_article("markdown/examples/c/double_list.md"),
"circular_simple_list": render_article("markdown/examples/c/circular_simple_list.md"),
"circular_double_list": render_article("markdown/examples/c/circular_double_list.md"),
"array_stack": render_article("markdown/examples/c/array_stack.md"),
"list_stack": render_article("markdown/examples/c/list_stack.md"),
"array_queue": render_article("markdown/examples/c/array_queue.md"),
"list_queue": render_article("markdown/examples/c/list_queue.md"),
"priority_queue": render_article("markdown/examples/c/priority_queue.md")
}
app.config["examples"]["java"] = {
"simple_list": render_article("markdown/examples/java/simple_list.md"),
"double_list": render_article("markdown/examples/java/double_list.md"),
"circular_simple_list": render_article("markdown/examples/java/circular_simple_list.md"),
"circular_double_list": render_article("markdown/examples/java/circular_double_list.md"),
"array_stack": render_article("markdown/examples/java/array_stack.md"),
"list_stack": render_article("markdown/examples/java/list_stack.md"),
"array_queue": render_article("markdown/examples/java/array_queue.md"),
"list_queue": render_article("markdown/examples/java/list_queue.md"),
"priority_queue": render_article("markdown/examples/java/priority_queue.md")
}
app.config["examples"]["python"] = {
"simple_list": render_article("markdown/examples/python/simple_list.md"),
"double_list": render_article("markdown/examples/python/double_list.md"),
"circular_simple_list": render_article("markdown/examples/python/circular_simple_list.md"),
"circular_double_list": render_article("markdown/examples/python/circular_double_list.md"),
"array_stack": render_article("markdown/examples/python/array_stack.md"),
"list_stack": render_article("markdown/examples/python/list_stack.md"),
"array_queue": render_article("markdown/examples/python/array_queue.md"),
"list_queue": render_article("markdown/examples/python/list_queue.md"),
"priority_queue": render_article("markdown/examples/python/priority_queue.md")
}
def setup() -> flask.Flask:
app = flask.Flask(__name__, template_folder="templates")
app.register_blueprint(blueprints.home.home_blueprint)
app.register_blueprint(blueprints.presentation.presentation_blueprint)
app.register_blueprint(blueprints.example.example_blueprint)
app.register_blueprint(blueprints.transformations.transformations_blueprint)
load_articles(app)
load_libraries(app)
load_examples(app)
return app
def main():
app = setup()
app.run("127.0.0.1", 5000, debug=False)
if __name__ == '__main__':
main()
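For context, a minimal, self-contained sketch of the in-memory ZIP technique the `Zipper` class above relies on, using only the standard library (the file name and payload here are illustrative, not taken from the record):

```python
import io
import zipfile

# Build a ZIP archive entirely in memory, as Zipper.append does.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "a", zipfile.ZIP_DEFLATED, False) as zf:
    zf.writestr("hello.txt", b"hello world")

# Rewind and grab the raw bytes of the finished archive.
buffer.seek(0)
content = buffer.read()

# The bytes round-trip as a readable ZIP archive.
with zipfile.ZipFile(io.BytesIO(content)) as zf:
    print(zf.read("hello.txt"))  # b'hello world'
```

Closing the `ZipFile` (here via the `with` block) is what writes the central directory, so the bytes are only a valid archive after that point.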
739ceade8d1851b8f8c7cabe7fe9035c80fe7143 | 9,388 | py | Python | django-openstack/django_openstack/syspanel/views/instances.py | tylesmit/openstack-dashboard | 8199011a98aa8bc5672e977db014f61eccc4668c | ["Apache-2.0"] | 2 | 2015-05-18T13:50:23.000Z | 2015-05-18T14:47:08.000Z | django-openstack/django_openstack/syspanel/views/instances.py | tylesmit/openstack-dashboard | 8199011a98aa8bc5672e977db014f61eccc4668c | ["Apache-2.0"] | null | null | null | django-openstack/django_openstack/syspanel/views/instances.py | tylesmit/openstack-dashboard | 8199011a98aa8bc5672e977db014f61eccc4668c | ["Apache-2.0"] | null | null | null

# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2011 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Copyright 2011 Fourth Paradigm Development, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from django import template
from django import http
from django.conf import settings
from django.contrib.auth.decorators import login_required
from django.shortcuts import render_to_response
from django.utils.translation import ugettext as _

import datetime
import logging

from django.contrib import messages

from django_openstack import api
from django_openstack import forms
from django_openstack.dash.views import instances as dash_instances
from openstackx.api import exceptions as api_exceptions


TerminateInstance = dash_instances.TerminateInstance
RebootInstance = dash_instances.RebootInstance

LOG = logging.getLogger('django_openstack.syspanel.views.instances')


def _next_month(date_start):
    y = date_start.year + (date_start.month + 1)/13
    m = ((date_start.month + 1)%13)
    if m == 0:
        m = 1
    return datetime.date(y, m, 1)


def _current_month():
    today = datetime.date.today()
    return datetime.date(today.year, today.month, 1)


def _get_start_and_end_date(request):
    try:
        date_start = datetime.date(int(request.GET['date_year']), int(request.GET['date_month']), 1)
    except:
        today = datetime.date.today()
        date_start = datetime.date(today.year, today.month, 1)
    date_end = _next_month(date_start)
    datetime_start = datetime.datetime.combine(date_start, datetime.time())
    datetime_end = datetime.datetime.combine(date_end, datetime.time())
    if date_end > datetime.date.today():
        datetime_end = datetime.datetime.utcnow()
    return (date_start, date_end, datetime_start, datetime_end)


@login_required
def usage(request):
    (date_start, date_end, datetime_start, datetime_end) = _get_start_and_end_date(request)

    service_list = []
    usage_list = []
    max_vcpus = max_gigabytes = 0
    total_ram = 0

    if date_start > _current_month():
        messages.error(request, 'No data for the selected period')
        date_end = date_start
        datetime_end = datetime_start
    else:
        try:
            service_list = api.service_list(request)
        except api_exceptions.ApiException, e:
            LOG.error('ApiException fetching service list in instance usage',
                      exc_info=True)
            messages.error(request,
                           'Unable to get service info: %s' % e.message)

        for service in service_list:
            if service.type == 'nova-compute':
                max_vcpus += service.stats['max_vcpus']
                max_gigabytes += service.stats['max_gigabytes']
                total_ram += settings.COMPUTE_HOST_RAM_GB

        try:
            usage_list = api.usage_list(request, datetime_start, datetime_end)
        except api_exceptions.ApiException, e:
            LOG.error('ApiException fetching usage list in instance usage'
                      ' on date range "%s to %s"' % (datetime_start,
                                                     datetime_end),
                      exc_info=True)
            messages.error(request, 'Unable to get usage info: %s' % e.message)

    dateform = forms.DateForm()
    dateform['date'].field.initial = date_start

    global_summary = {'max_vcpus': max_vcpus, 'max_gigabytes': max_gigabytes,
                      'total_active_disk_size': 0, 'total_active_vcpus': 0,
                      'total_active_ram_size': 0}

    for usage in usage_list:
        # FIXME: api needs a simpler dict interface (with iteration) - anthony
        # NOTE(mgius): Changed this on the api end. Not too much neater, but
        # at least its not going into private member data of an external
        # class anymore
        # usage = usage._info
        for k in usage._attrs:
            v = usage.__getattr__(k)
            if type(v) in [float, int]:
                if not k in global_summary:
                    global_summary[k] = 0
                global_summary[k] += v

    max_disk_tb = used_disk_tb = available_disk_tb = 0

    max_disk_tb = global_summary['max_gigabytes'] / float(1000)
    used_disk_tb = global_summary['total_active_disk_size'] / float(1000)
    available_disk_tb = (global_summary['max_gigabytes'] / float(1000) - \
                         global_summary['total_active_disk_size'] / float(1000))

    used_ram = global_summary['total_active_ram_size'] / float(1024)
    avail_ram = total_ram - used_ram

    ram_unit = "GB"
    if total_ram > 999:
        ram_unit = "TB"
        total_ram /= float(1024)
        used_ram /= float(1024)
        avail_ram /= float(1024)

    return render_to_response(
        'syspanel_usage.html', {
            'dateform': dateform,
            'usage_list': usage_list,
            'global_summary': global_summary,
            'available_cores': global_summary['max_vcpus'] - global_summary['total_active_vcpus'],
            'available_disk': global_summary['max_gigabytes'] - global_summary['total_active_disk_size'],
            'max_disk_tb': max_disk_tb,
            'used_disk_tb': used_disk_tb,
            'available_disk_tb': available_disk_tb,
            'total_ram': total_ram,
            'used_ram': used_ram,
            'avail_ram': avail_ram,
            'ram_unit': ram_unit,
            'external_links': settings.EXTERNAL_MONITORING,
        }, context_instance=template.RequestContext(request))


@login_required
def tenant_usage(request, tenant_id):
    (date_start, date_end, datetime_start, datetime_end) = _get_start_and_end_date(request)
    if date_start > _current_month():
        messages.error(request, 'No data for the selected period')
        date_end = date_start
        datetime_end = datetime_start

    dateform = forms.DateForm()
    dateform['date'].field.initial = date_start

    usage = {}
    try:
        usage = api.usage_get(request, tenant_id, datetime_start, datetime_end)
    except api_exceptions.ApiException, e:
        LOG.error('ApiException getting usage info for tenant "%s"'
                  ' on date range "%s to %s"' % (tenant_id,
                                                 datetime_start,
                                                 datetime_end))
        messages.error(request, 'Unable to get usage info: %s' % e.message)

    running_instances = []
    terminated_instances = []
    if hasattr(usage, 'instances'):
        now = datetime.datetime.now()
        for i in usage.instances:
            # this is just a way to phrase uptime in a way that is compatible
            # with the 'timesince' filter. Use of local time intentional
            i['uptime_at'] = now - datetime.timedelta(seconds=i['uptime'])
            if i['ended_at']:
                terminated_instances.append(i)
            else:
                running_instances.append(i)

    return render_to_response('syspanel_tenant_usage.html', {
        'dateform': dateform,
        'usage': usage,
        'instances': running_instances + terminated_instances,
        'tenant_id': tenant_id,
    }, context_instance=template.RequestContext(request))


@login_required
def index(request):
    for f in (TerminateInstance, RebootInstance):
        _, handled = f.maybe_handle(request)
        if handled:
            return handled

    instances = []
    try:
        instances = api.server_list(request)
    except Exception as e:
        LOG.error('Unspecified error in instance index', exc_info=True)
        messages.error(request, 'Unable to get instance list: %s' % e.message)

    # We don't have any way of showing errors for these, so don't bother
    # trying to reuse the forms from above
    terminate_form = TerminateInstance()
    reboot_form = RebootInstance()

    return render_to_response('syspanel_instances.html', {
        'instances': instances,
        'terminate_form': terminate_form,
        'reboot_form': reboot_form,
    }, context_instance=template.RequestContext(request))


@login_required
def refresh(request):
    for f in (TerminateInstance, RebootInstance):
        _, handled = f.maybe_handle(request)
        if handled:
            return handled

    instances = []
    try:
        instances = api.server_list(request)
    except Exception as e:
        messages.error(request, 'Unable to get instance list: %s' % e.message)

    # We don't have any way of showing errors for these, so don't bother
    # trying to reuse the forms from above
    terminate_form = TerminateInstance()
    reboot_form = RebootInstance()

    return render_to_response('_syspanel_instance_list.html', {
        'instances': instances,
        'terminate_form': terminate_form,
        'reboot_form': reboot_form,
    }, context_instance=template.RequestContext(request))
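As an aside on the record above: the `/13` and `%13` arithmetic in `_next_month` is a compact month-rollover trick. A minimal Python 3 sketch of the same idea (the function name is ours, and integer division `//` replaces the Python 2 `/` the original relies on):

```python
import datetime

def next_month(date_start: datetime.date) -> datetime.date:
    # month + 1 == 13 only in December: then // 13 bumps the year
    # and % 13 yields 0, which is mapped back to January.
    year = date_start.year + (date_start.month + 1) // 13
    month = (date_start.month + 1) % 13 or 1
    return datetime.date(year, month, 1)

print(next_month(datetime.date(2011, 11, 5)))  # 2011-12-01
print(next_month(datetime.date(2011, 12, 5)))  # 2012-01-01
```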
739eb239f78d72920cbdfea243f1d357367bd4a8 | 2,187 | py | Python | ddcz/migrations/0010_creativepage_creativepageconcept_creativepagesection.py | Nathaka/graveyard | dcc5ba2fa1679318e65c0078f734cbfeeb287c32 | ["MIT"] | 6 | 2018-06-10T09:47:50.000Z | 2022-02-13T12:22:07.000Z | ddcz/migrations/0010_creativepage_creativepageconcept_creativepagesection.py | Nathaka/graveyard | dcc5ba2fa1679318e65c0078f734cbfeeb287c32 | ["MIT"] | 268 | 2018-05-30T21:54:50.000Z | 2022-01-08T21:00:03.000Z | ddcz/migrations/0010_creativepage_creativepageconcept_creativepagesection.py | jimmeak/graveyard | 4c0f9d5e8b6c965171d9dc228c765b662f5b7ab4 | ["MIT"] | 4 | 2018-09-14T03:50:08.000Z | 2021-04-19T19:36:23.000Z

# Generated by Django 2.0.2 on 2018-06-13 22:10
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ("ddcz", "0009_auto_20180610_2246"),
    ]

    operations = [
        migrations.CreateModel(
            name="CreativePage",
            fields=[
                (
                    "id",
                    models.AutoField(
                        auto_created=True,
                        primary_key=True,
                        serialize=False,
                        verbose_name="ID",
                    ),
                ),
                ("name", models.CharField(max_length=30)),
                ("slug", models.SlugField(max_length=30)),
                ("model_class", models.CharField(max_length=50)),
            ],
        ),
        migrations.CreateModel(
            name="CreativePageConcept",
            fields=[
                (
                    "id",
                    models.AutoField(
                        auto_created=True,
                        primary_key=True,
                        serialize=False,
                        verbose_name="ID",
                    ),
                ),
                ("text", models.TextField()),
                (
                    "page",
                    models.OneToOneField(
                        on_delete=django.db.models.deletion.CASCADE,
                        to="ddcz.CreativePage",
                    ),
                ),
            ],
        ),
        migrations.CreateModel(
            name="CreativePageSection",
            fields=[
                (
                    "id",
                    models.AutoField(
                        auto_created=True,
                        primary_key=True,
                        serialize=False,
                        verbose_name="ID",
                    ),
                ),
                ("name", models.CharField(max_length=30)),
                ("slug", models.SlugField(max_length=30)),
            ],
        ),
    ]
73a022545603af3f26c0bf2eec8dadb8c4ffd178 | 2,693 | py | Python | glue/viewers/matplotlib/qt/toolbar.py | tiagopereira/glue | 85bf7ce2d252d7bc405e8160b56fc83d46b9cbe4 | ["BSD-3-Clause"] | 1 | 2019-12-17T07:58:35.000Z | 2019-12-17T07:58:35.000Z | glue/viewers/matplotlib/qt/toolbar.py | scalet98/glue | ff949ad52e205c20561f48c05f870b2abb39e0b0 | ["BSD-3-Clause"] | null | null | null | glue/viewers/matplotlib/qt/toolbar.py | scalet98/glue | ff949ad52e205c20561f48c05f870b2abb39e0b0 | ["BSD-3-Clause"] | 1 | 2019-08-04T14:10:12.000Z | 2019-08-04T14:10:12.000Z

from __future__ import absolute_import, division, print_function
from matplotlib.backends.backend_qt5 import NavigationToolbar2QT

from glue.config import viewer_tool
from glue.viewers.common.tool import CheckableTool, Tool


__all__ = ['MatplotlibTool', 'MatplotlibCheckableTool', 'HomeTool', 'SaveTool',
           'PanTool', 'ZoomTool']


def _ensure_mpl_nav(viewer):
    # Set up virtual Matplotlib navigation toolbar (don't show it)
    if not hasattr(viewer, '_mpl_nav'):
        viewer._mpl_nav = NavigationToolbar2QT(viewer.central_widget.canvas, viewer)
        viewer._mpl_nav.hide()


def _cleanup_mpl_nav(viewer):
    if getattr(viewer, '_mpl_nav', None) is not None:
        viewer._mpl_nav.setParent(None)
        viewer._mpl_nav.parent = None


class MatplotlibTool(Tool):

    def __init__(self, viewer=None):
        super(MatplotlibTool, self).__init__(viewer=viewer)
        _ensure_mpl_nav(viewer)

    def close(self):
        _cleanup_mpl_nav(self.viewer)
        super(MatplotlibTool, self).close()


class MatplotlibCheckableTool(CheckableTool):

    def __init__(self, viewer=None):
        super(MatplotlibCheckableTool, self).__init__(viewer=viewer)
        _ensure_mpl_nav(viewer)

    def close(self):
        _cleanup_mpl_nav(self.viewer)
        super(MatplotlibCheckableTool, self).close()


@viewer_tool
class HomeTool(MatplotlibTool):

    tool_id = 'mpl:home'
    icon = 'glue_home'
    action_text = 'Home'
    tool_tip = 'Reset original zoom'
    shortcut = 'H'

    def activate(self):
        if hasattr(self.viewer, 'state') and hasattr(self.viewer.state, 'reset_limits'):
            self.viewer.state.reset_limits()
        else:
            self.viewer._mpl_nav.home()


@viewer_tool
class SaveTool(MatplotlibTool):

    tool_id = 'mpl:save'
    icon = 'glue_filesave'
    action_text = 'Save plot to file'
    tool_tip = 'Save the figure'

    def activate(self):
        self.viewer._mpl_nav.save_figure()


@viewer_tool
class PanTool(MatplotlibCheckableTool):

    tool_id = 'mpl:pan'
    icon = 'glue_move'
    action_text = 'Pan'
    tool_tip = 'Pan axes with left mouse, zoom with right'
    shortcut = 'M'

    def activate(self):
        self.viewer._mpl_nav.pan()

    def deactivate(self):
        if hasattr(self.viewer, '_mpl_nav'):
            self.viewer._mpl_nav.pan()


@viewer_tool
class ZoomTool(MatplotlibCheckableTool):

    tool_id = 'mpl:zoom'
    icon = 'glue_zoom_to_rect'
    action_text = 'Zoom'
    tool_tip = 'Zoom to rectangle'
    shortcut = 'Z'

    def activate(self):
        self.viewer._mpl_nav.zoom()

    def deactivate(self):
        if hasattr(self.viewer, '_mpl_nav'):
            self.viewer._mpl_nav.zoom()
73a9012563f8e544e446267b12c23f24456df159 | 1,563 | py | Python | peeldb/migrations/0033_auto_20171018_1423.py | ashwin31/opensource-job-portal | 2885ea52f8660e893fe0531c986e3bee33d986a2 | ["MIT"] | 1 | 2021-09-27T05:01:39.000Z | 2021-09-27T05:01:39.000Z | peeldb/migrations/0033_auto_20171018_1423.py | kiran1415/opensource-job-portal | 2885ea52f8660e893fe0531c986e3bee33d986a2 | ["MIT"] | null | null | null | peeldb/migrations/0033_auto_20171018_1423.py | kiran1415/opensource-job-portal | 2885ea52f8660e893fe0531c986e3bee33d986a2 | ["MIT"] | 1 | 2022-01-05T09:02:32.000Z | 2022-01-05T09:02:32.000Z

# -*- coding: utf-8 -*-
# Generated by Django 1.11.2 on 2017-10-18 14:23
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('peeldb', '0032_skill_skill_type'),
    ]

    operations = [
        migrations.AlterField(
            model_name='jobpost',
            name='job_type',
            field=models.CharField(choices=[('full-time', 'Full Time'),
                                            ('internship', 'Internship'),
                                            ('walk-in', 'Walk-in'),
                                            ('government', 'Government'),
                                            ('Fresher', 'Fresher')], max_length=50),
        ),
        migrations.AlterField(
            model_name='searchresult',
            name='job_type',
            field=models.CharField(blank=True, choices=[('full-time', 'Full Time'),
                                                        ('internship', 'Internship'),
                                                        ('walk-in', 'Walk-in'),
                                                        ('government', 'Government'),
                                                        ('Fresher', 'Fresher')], max_length=20, null=True),
        ),
        migrations.AlterField(
            model_name='skill',
            name='skill_type',
            field=models.CharField(choices=[('it', 'IT'), ('non-it', 'Non-IT'), ('other', 'Other')], default='it', max_length=20),
        ),
    ]
73aaee020a07b3d8d2a092fd658dc4eb59eaed84 | 878 | py | Python | setup.py | harsh020/synthetic_metric | acecba0150a37c58613a477918ad407373c4cd5c | ["MIT"] | 1 | 2021-11-08T09:19:02.000Z | 2021-11-08T09:19:02.000Z | setup.py | harsh020/synthetic_metric | acecba0150a37c58613a477918ad407373c4cd5c | ["MIT"] | 2 | 2021-10-14T11:30:21.000Z | 2021-10-14T11:55:50.000Z | setup.py | harsh020/synthetic_metric | acecba0150a37c58613a477918ad407373c4cd5c | ["MIT"] | null | null | null

import setuptools
setuptools.setup(
    name="synmetric",
    version="0.2.dev1",
    license='MIT',
    author="Harsh Soni",
    author_email="author@example.com",
    description="Metric to evaluate data quality for synthetic data.",
    url="https://github.com/harsh020/synthetic_metric",
    download_url='https://github.com/harsh020/synthetic_metric/archive/v_02dev1.tar.gz',
    project_urls={
        "Bug Tracker": "https://github.com/harsh020/synthetic_metric/issues",
    },
    classifiers=[
        "Development Status :: 3 - Alpha",
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
    packages=setuptools.find_packages(),
    python_requires=">=3.6",
    install_requires=[
        'numpy',
        'pandas',
        'scikit-learn',
        'scipy'
    ]
)
73ac5bc20db43b168b228169be2bbfd420f16a64 | 2,184 | py | Python | notario/tests/validators/test_hybrid.py | alfredodeza/notario | 036bdc8435778c6f20f059d3789c8eb8242cff92 | ["MIT"] | 4 | 2015-08-20T20:14:55.000Z | 2018-06-01T14:39:29.000Z | notario/tests/validators/test_hybrid.py | alfredodeza/notario | 036bdc8435778c6f20f059d3789c8eb8242cff92 | ["MIT"] | 9 | 2016-02-04T21:46:12.000Z | 2018-11-14T04:43:10.000Z | notario/tests/validators/test_hybrid.py | alfredodeza/notario | 036bdc8435778c6f20f059d3789c8eb8242cff92 | ["MIT"] | 4 | 2015-04-29T20:40:12.000Z | 2018-11-14T04:08:20.000Z

from pytest import raises
from notario.validators import Hybrid
from notario.exceptions import Invalid
from notario.decorators import optional
from notario import validate


def validator(x):
    assert x, 'fail'


class TestHybrid(object):

    def test_use_validator_passes(self):
        schema = ()
        hybrid = Hybrid(validator, schema)
        assert hybrid(1) is None

    def test_use_validator_fails(self):
        schema = ()
        hybrid = Hybrid(validator, schema)
        with raises(Invalid) as exc:
            hybrid(0)
        error = exc.value.args[0]
        assert '0 did not pass validation against callable' in error

    def test_use_schema_passes(self):
        schema = ('a', 1)
        hybrid = Hybrid(validator, schema)
        hybrid({0: ('a', 1)})

    def test_use_schema_fails(self):
        schema = ('a', 2)
        hybrid = Hybrid(validator, schema)
        with raises(Invalid) as exc:
            hybrid({0: ('a', 1)})
        error = exc.value.args[0]
        assert 'a -> 1 did not match 2' in error


class TestFunctional(object):

    def test_passes_single_value(self):
        sschema = (1, 2)
        schema = ('a', Hybrid(validator, sschema))
        data = {'a': 2}
        assert validate(data, schema) is None

    def test_passes_object(self):
        sschema = (1, 2)
        schema = ('a', Hybrid(validator, sschema))
        data = {'a': {1: 2}}
        assert validate(data, schema) is None

    def test_fail_object(self):
        sschema = (1, 1)
        schema = ('a', Hybrid(validator, sschema))
        data = {'a': {1: 2}}
        with raises(Invalid) as exc:
            validate(data, schema)
        error = exc.value.args[0]
        assert '1 -> 2 did not match 1' in error
        assert error.startswith('-> a -> 1')

    def test_extra_unexpected_items(self):
        optional_schema = (optional(1), 1)
        schema = ('a', Hybrid(validator, optional_schema))
        data = {'a': {'foo': 'bar'}}
        with raises(Invalid) as exc:
            validate(data, schema)
        error = exc.value.args[0]
        assert '-> a did not match {}' in error
        assert exc.value.reason == 'unexpected extra items'
73ad356948f61ca0a0905878d21b428c799f6aa2 | 380 | py | Python | watch/migrations/0014_auto_20201101_2304.py | msyoki/Neighborhood | d7eb55ba7772388850d8bcf04a867aba3fa81665 | [
"Unlicense"
] | null | null | null | watch/migrations/0014_auto_20201101_2304.py | msyoki/Neighborhood | d7eb55ba7772388850d8bcf04a867aba3fa81665 | [
"Unlicense"
] | null | null | null | watch/migrations/0014_auto_20201101_2304.py | msyoki/Neighborhood | d7eb55ba7772388850d8bcf04a867aba3fa81665 | [
"Unlicense"
] | 1 | 2021-02-08T10:27:06.000Z | 2021-02-08T10:27:06.000Z | # Generated by Django 2.0.2 on 2020-11-01 20:04
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('watch', '0013_alert'),
]
operations = [
migrations.AlterField(
model_name='alert',
name='news',
field=models.CharField(max_length=300, null=True),
),
]
73b2b67943acda046ca7c7f56efd2e03603a7e68 | 4,140 | py | Python | tests/test_client.py | KazkiMatz/py-googletrans | c1d6d5d27c7386c2a1aa6c78dfe376dbb910f7a5 | ["MIT"] | null | null | null | tests/test_client.py | KazkiMatz/py-googletrans | c1d6d5d27c7386c2a1aa6c78dfe376dbb910f7a5 | ["MIT"] | 1 | 2020-11-28T18:53:18.000Z | 2020-11-28T18:53:18.000Z | tests/test_client.py | TashinAhmed/googletrans | 9c0014cdcdc22e1f146624279f8dd69c3c62e385 | ["MIT"] | null | null | null

from httpcore import TimeoutException
from httpcore._exceptions import ConnectError
from httpx import Timeout, Client, ConnectTimeout
from unittest.mock import patch
from pytest import raises

from googletrans import Translator


def test_bind_multiple_service_urls():
    service_urls = [
        'translate.google.com',
        'translate.google.co.kr',
    ]

    translator = Translator(service_urls=service_urls)
    assert translator.service_urls == service_urls

    assert translator.translate('test', dest='ko')
    assert translator.detect('Hello')


def test_api_service_urls():
    service_urls = ['translate.googleapis.com']

    translator = Translator(service_urls=service_urls)
    assert translator.service_urls == service_urls

    assert translator.translate('test', dest='ko')
    assert translator.detect('Hello')


def test_source_language(translator):
    result = translator.translate('안녕하세요.')
    assert result.src == 'ko'


def test_pronunciation(translator):
    result = translator.translate('안녕하세요.', dest='ja')
    assert result.pronunciation == 'Kon\'nichiwa.'


def test_pronunciation_issue_175(translator):
    result = translator.translate('Hello', src='en', dest='ru')
    assert result.pronunciation is not None


def test_latin_to_english(translator):
    result = translator.translate('veritas lux mea', src='la', dest='en')
    assert result.text == 'The truth is my light'


def test_unicode(translator):
    result = translator.translate(u'안녕하세요.', src='ko', dest='ja')
    assert result.text == u'こんにちは。'


def test_emoji(translator):
    result = translator.translate('😀')
    assert result.text == u'😀'


def test_language_name(translator):
    result = translator.translate(u'Hello', src='ENGLISH', dest='iRiSh')
    assert result.text == u'Dia dhuit'


def test_language_name_with_space(translator):
    result = translator.translate(
        u'Hello', src='en', dest='chinese (simplified)')
    assert result.dest == 'zh-cn'


def test_language_rfc1766(translator):
    result = translator.translate(u'luna', src='it_ch@euro', dest='en')
    assert result.text == u'moon'


def test_special_chars(translator):
    text = u"©×《》"
    result = translator.translate(text, src='en', dest='en')
    assert result.text == text


def test_translate_list(translator):
    args = (['test', 'exam'], 'ko', 'en')
    translations = translator.translate(*args)

    assert translations[0].text == u'테스트'
    assert translations[1].text == u'시험'


def test_detect_language(translator):
    ko = translator.detect(u'한국어')
    en = translator.detect('English')
    rubg = translator.detect('тест')

    assert ko.lang == 'ko'
    assert en.lang == 'en'
    assert rubg.lang == ['ru', 'bg']


def test_detect_list(translator):
    items = [u'한국어', ' English', 'тест']

    result = translator.detect(items)

    assert result[0].lang == 'ko'
    assert result[1].lang == 'en'
    assert result[2].lang == ['ru', 'bg']


def test_src_in_special_cases(translator):
    args = ('Tere', 'en', 'ee')

    result = translator.translate(*args)

    assert result.text in ('Hello', 'Hi,')


def test_src_not_in_supported_languages(translator):
    args = ('Hello', 'en', 'zzz')

    with raises(ValueError):
        translator.translate(*args)


def test_dest_in_special_cases(translator):
    args = ('hello', 'ee', 'en')

    result = translator.translate(*args)

    assert result.text == 'Tere'


def test_dest_not_in_supported_languages(translator):
    args = ('Hello', 'zzz', 'en')

    with raises(ValueError):
        translator.translate(*args)


def test_timeout():
    # httpx will raise ConnectError in some conditions
    with raises((TimeoutException, ConnectError, ConnectTimeout)):
        translator = Translator(timeout=Timeout(0.0001))
        translator.translate('안녕하세요.')


class MockResponse:
    def __init__(self, status_code):
        self.status_code = status_code
        self.text = 'tkk:\'translation\''


@patch.object(Client, 'get', return_value=MockResponse('403'))
def test_403_error(session_mock):
    translator = Translator()
    assert translator.translate('test', dest='ko')
73b325d3f7c7dfbcd48251ddfe6b8d3299767cb6 | 540 | py | Python | src/python/pants/backend/codegen/avro/avro_subsystem.py | danxmoran/pants | 7fafd7d789747c9e6a266847a0ccce92c3fa0754 | [
"Apache-2.0"
] | null | null | null | src/python/pants/backend/codegen/avro/avro_subsystem.py | danxmoran/pants | 7fafd7d789747c9e6a266847a0ccce92c3fa0754 | [
"Apache-2.0"
] | 22 | 2022-01-27T09:59:50.000Z | 2022-03-30T07:06:49.000Z | src/python/pants/backend/codegen/avro/avro_subsystem.py | danxmoran/pants | 7fafd7d789747c9e6a266847a0ccce92c3fa0754 | [
"Apache-2.0"
] | null | null | null | # Copyright 2022 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
from __future__ import annotations
from pants.option.option_types import BoolOption
from pants.option.subsystem import Subsystem
class AvroSubsystem(Subsystem):
options_scope = "avro"
help = "General Avro codegen settings."
tailor = BoolOption(
"--tailor",
default=True,
help="If true, add `avro_sources` targets with the `tailor` goal.",
advanced=True,
)
| 27 | 75 | 0.709259 | 64 | 540 | 5.875 | 0.65625 | 0.047872 | 0.079787 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013953 | 0.203704 | 540 | 19 | 76 | 28.421053 | 0.860465 | 0.233333 | 0 | 0 | 0 | 0 | 0.245742 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.